diff --git a/.git-blame-ignore-revs b/.git-blame-ignore-revs new file mode 100644 index 0000000000..f5f6a8475c --- /dev/null +++ b/.git-blame-ignore-revs @@ -0,0 +1,9 @@ +### NOTE +# Run `git config blame.ignoreRevsFile .git-blame-ignore-revs` +# from the repository's root to tell `git blame` to ignore +# the commits below. + +# 2025 +######################################## +# Format with fourmolu +ef19b2301d4155497a5a69d9d7a710413d92f2c7 diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 5aecdf392c..90ffcc31fb 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -203,19 +203,19 @@ and improvements are always welcome. ## Formatting the code -We use `stylish-haskell` 0.14.6.0 for Haskell code formatting. +We use `fourmolu` for Haskell code formatting. See [tools.nix](./nix/tools.nix) for the `fourmolu` version used in CI. Either enable editor integration or call the script used by CI itself: ```bash -./scripts/ci/run-stylish.sh +./scripts/ci/run-fourmolu.sh ``` When using Nix, you can use the following command, which will build and use -the right version of `stylish-haskell`. +the right version of `fourmolu`. ```bash -nix develop -c ./scripts/ci/run-stylish.sh +nix develop -c ./scripts/ci/run-fourmolu.sh ``` # Generating documentation and setting up hoogle diff --git a/docs/website/contents/for-developers/StyleGuide.md b/docs/website/contents/for-developers/StyleGuide.md index 9cc7336806..a0e4b25a86 100644 --- a/docs/website/contents/for-developers/StyleGuide.md +++ b/docs/website/contents/for-developers/StyleGuide.md @@ -37,8 +37,8 @@ along with its context -- eg whatever is available in the GitHub PR interface. As long that rendering doesn't show that the PR spoils something like intentional alignment for example, then the PR has no style problems. -We run `stylish-haskell` as a requirement for merging. The specific -configuration can be found [here][stylish-config]. +We run `fourmolu` as a requirement for merging. 
The specific +configuration can be found [here][fourmolu-config]. ## Guiding principles @@ -59,20 +59,6 @@ We value the following principles in the consensus team: similar steps, documenting why it does this and that, ..., are all worth the extra effort. - The layout and formatting of the code can help convey the meaning and the - essence of the code. Alignment can help emphasise the similarities *and* the - differences between cases. - - This is why we do not believe in automatic code formatters: they have no idea - of the story the code is trying to tell. While the convenience, automation, - and uniformity are big advantages, in our opinion, they come at the cost of - code clarity. - - The style is not optimised for "diff-ability" (code review is important, - but happens less frequently than reading code). All things being equal, - writing code in a way that minimises unnecessary noise in a `diff` is good, - but not at the cost of clarity. - * __Consistency__: inconsistent style, especially within a single module, looks sloppy, inspires little confidence in the quality of the code, and distracts. Consistency is also a helpful guiding factor when deciding @@ -82,177 +68,22 @@ We value the following principles in the consensus team: ## Formatting -We now list the formatting rules we have converged on. As these have grown -organically, not all code follows these rules. When touching some existing code, -we in general recommend sticking to the existing style, but when it differs from -the rules below, it is good practice to update the code's style to match them. +We now list the formatting rules we have converged on. 1. __Indentation__: we indent by 2 spaces. *Why:* to avoid wasting horizontal screen space. - Some consequences of this rule: - - a. The `where` clause of a function body is indented 2 spaces from the left - margin, and the function body is indented 2 spaces from the `where`: - - ```haskell - foo x y z = - .. - where - a = .. 
- ``` - - The `where` keyword acts as a separator between the body and the bindings. - Keeping them at the same indentation level would make it hard to see where - the body ends. - - We stick with this indentation even if the `where` clause is not present, - just to avoid unnecessary changes when a `where` clause is added. - - b. We indent record `data` and `newtype` definitions as follows: - - ```haskell - data Foo = Foo { - fooBar :: Int - , fooArgument :: Bool - } - deriving (Show, Eq) - - newtype Foo = Foo { - unFoo :: Int - } - deriving (Show, Eq) - ``` - - The `deriving` is indented from the left margin, and the constructors - are indented from the `deriving` clause. This provides a consistent - style for datatypes with multiple constructors (see below). - - Multiple deriving clauses using `DerivingStrategies` are aligned: - - ```haskell - data Foo = Foo { - fooBar :: Int - , fooArgument :: Bool - } - deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks, NFData) - - newtype Foo = Foo { - unFoo :: Int - } - deriving stock (Show) - deriving newtype (Eq) - deriving NoThunks via InspectHeapNamed "Foo" Foo - ``` - - Parentheses around a singleton list of classes are optional. - - Records that fit onto a single line can be formatted like this: - - ```haskell - data Foo = X {foo :: A, bar :: B} | Y - ``` - - c. We indent `data` definitions with multiple constructors as follows: - - ```haskell - data Foo = - Bar Int Int - | Baz - Int - Int - (Maybe Bool) - [Foo] - ``` - - Note the arguments of `Baz` being indented by two spaces. - - d. Both of the following are fine - - ```haskell - let fooBarBaz = fooBar - baz - - let fooBarBaz = - fooBar baz - ``` - - whichever is more natural. - - In the rare case that you want a `where` clause on a `let` binding, indent - by 4 spaces, like described in (a): - - ```haskell - let fooBarBaz = - .. - where - aux = .. - ``` - - e. `do` is placed after the `=`: - - ```haskell - foo .. 
= do - bar - baz - ``` - - Function calls can be placed on the same line as the `do`, unless this - would make the line too long: - - ```haskell - foo .. = atomically $ do - bar - baz - - -- If the first line too long: - foo .. = - atomically $ do - bar - baz - ``` - - The `where` block can be indented by 2 spaces, as described in (a). - - In a `case`, use hanging `do`: - - ```haskell - case foo of - X -> do - .. - Y -> do - .. - ``` - - f. While it is technically possible to add a `where` clause to a pattern - match case, use a `let` instead, to emphasise that the binding is local: - - ```haskell - case x of y - A x -> A_body - B y -> - let bl = bl_body y - in B_body - ``` - - Note that we align `B_body` with `bl` in the `let` block. At the moment we - are not being very consistent with this. - - Using a `where` clause for a `case` can be okay, but tends to make the - scope a bit confusing, so we try to avoid it. - -2. __Line length__: we limit the number of characters per line to 80. +2. __Line length__: we limit the number of characters per line to 100. *Why:* long lines are less readable (there is a reason why books and newspapers limit their line length). It's also practical: even with (ultra-)wide monitors, most people tend to have many windows side by side. - If you are going beyond 80 characters, wrap the line, introduce local + If you are going beyond 100 characters, wrap the line, introduce local bindings, etc. - Comments and docstrings should also be wrapped at 80 characters. + Comments and docstrings should also be wrapped at 100 characters. There are a few exceptions: @@ -261,253 +92,9 @@ the rules below, it is good practice to update the code's style to match them. alignment (emphasising differences and similarities) can outweigh the line length limit. - * Diagrams, examples, or long URLs in the comments can be wider than 80 + * Diagrams, examples, or long URLs in the comments can be wider than 100 characters. 
- For certain constructs we have concrete recommendations on how to wrap them - in case their length exceeds 80 characters: - - a. Type signatures: if a type signature doesn't fit on one line, wrap it like - this: - - ```haskell - fooBar :: - a - -> .. - -> .. - ``` - - *Why:* Keeping the `::` on the first line is consistent with the rest of - the style (compare to `module Foo where` for example), and has a practical - benefit: it makes it much easier to grep for the definition of `fooBar`. - The `->` is indented 2 spaces from the left margin, as usual; the first - argument is aligned with the rest (and happens to therefore be indented 5 - spaces). - - When there are constraints: - - ```haskell - fooBar :: - (Eq a, ..) - => a - -> .. - -> .. - ``` - - When there is an explicit `forall`: - - ```haskell - fooBar :: - forall a .. z. (Eq a, ..) - => a - -> .. - -> .. - ``` - - If the `forall` line gets too long, wrap it after the `.`: - - ```haskell - fooBar :: - forall a .. z. - (Eq a, ..) - => a - -> .. - -> .. - ``` - - Note that the `.` after the `forall` stays on the same line and that there - is no space before it. - - If the constraints don't fit on one line: - - ```haskell - fooBar :: - forall a .. z. ( - Eq a - , .. - ) - => a - -> .. - -> .. - ``` - - If there is a large function argument in the type signature: - - ```haskell - fooBar :: - => a - -> ( forall c. Eq c - => c - -> .. - -> .. - ) - -> .. - ``` - - Note that the first arrow in the function argument is indented one space - relative to the opening parenthesis. The above line wrapping rules apply - to the nested function type as well. - - b. 
Function calls: when not all arguments to a function call fit on a single - line, either introduce clear local bindings for the arguments or put each - argument on a separate line, indented 2 spaces from the function call: - - ```haskell - fooBar - x - (baz + 1) - bar - (foo (bar x)) - ``` - - *Why*: multiple lines of multiple arguments are hard to read; for example, - in - - ```haskell - fooBar - x (baz + 1) - bar (foo (bar x)) - ``` - - it might look like `bar` is applied to `(foo (bar x))`, whereas in fact - of course they are both just two arguments to `fooBar`. So, either - everything on the same line as the function call, or else a line - per argument. - - When writing a function call in the applicative style that does not fit on - a single line, indent it as follows: - - ```haskell - fooBar - <$> x - <*> baz + 1 - <*> bar - <*> foo (bar x) - ``` - - c. Argument lists: put the formal arguments of a function on a single line - when possible: - - ```haskell - foo a b (SomeRecord {field = x}) = - .. - ``` - - Bracketing a pattern match on a record is optional, but we feel it aids - clarity. - - When that does not fit on a single line, move any pattern matches to a - `where` block: - - ```haskell - foo a b c = - .. - where - SomeRecord {field = x} = c - ``` - - When that is still not enough, then the function has so many arguments - that *naming* them is not only useful for alignment, it also helps to - clarify call sites: introduce a record. - - ```haskell - foo args = - .. - where - Args { - argA = a - , argB = b - , argC = SomeRecord {field = x} - } = args - ``` - - d. Class or instance contexts: when a class or instance declaration doesn't - fit onto a single line because of the super-class context, wrap the line - before the `=>` and align the class name with the first character in the - context: - - ```haskell - class (Eq a, ..) - => C a where - - instance (Eq a, ..) 
- => C a where - ``` - - When the context doesn't fit onto a single line, wrap as follows: - - ```haskell - class ( Eq a - , .. - ) => C a where - - instance ( Eq a - , .. - ) => C a where - ``` - - e. Tuples in type signatures: - - ```haskell - foo :: - a - -> ( .. - , .. - ) - -> ( .. - , .. - ) - ``` - - f. Datatypes: - - ```haskell - data Foo = - Bar - Arg1 - Arg2 - .. - ArgN - | Baz - - data Foo = Foo { - , longFieldName :: - HasCallStack - => Int - -> .. - } - ``` - - g. Type synonyms: - - ```haskell - type Foo a b = - AVeryLongTypeHereAndItKeepsGoing - Arg1 - (Maybe b) - Arg3 - - type Cts a = ( - Eq a - , .. - , .. - ) - ``` - - h. Function composition: - - ```haskell - foo = - h - . g - . f - ``` - - *Why*: The alignment of the `.`s and the function names makes the - structure easy to see at a glance. - - This generalises to other binary operators, e.g., `+`, `*`, etc. - 3. __Parentheses__: avoid redundant parentheses, except when they help with the order of operations. Use your judgement, and aim for clarity. Redundant parentheses sometimes help the reader, but sometimes confuse as they @@ -554,38 +141,7 @@ the rules below, it is good practice to update the code's style to match them. Choose between using parenthesis, `$` and `.` in whichever way you think results in the most readable code. -6. __Opening braces__: we don't start a new line for opening braces: - - ```haskell - data Foo = Foo { - .. - , .. - } - - mkFoo x = Foo { - .. - , .. - } - - modifyFoo foo = foo { - .. - , .. - } - - bar foo = .. - where - Foo { - .. - , .. - } = foo - ``` - - There don't seem to be any really good arguments for putting the bracket - either on the same line or on the next, other than the fact that the opening - bracket indicates that there is more to follow. It is also consistent with - the `where` in `instance .. where` and `class .. where`. - -7. __Blank lines__: we use *exactly one blank line* between different +6. 
__Blank lines__: we use *exactly one blank line* between different declarations: export lists, import lists, declarations, etc. *Why:* a blank line helps with readability. Always using a single one is @@ -631,7 +187,7 @@ the rules below, it is good practice to update the code's style to match them. [posix-line]: https://stackoverflow.com/questions/729692/why-should-text-files-end-with-a-newline#answer-729795 -8. __Sections__: we group related definitions in sections that start with a +7. __Sections__: we group related definitions in sections that start with a section title. The same grouping can be replicated in the export list. ```haskell @@ -668,10 +224,10 @@ the rules below, it is good practice to update the code's style to match them. which is separated from the first line by one blank line. The section header has a single blank line above and below it. -9. __Comment style__: in general we tend to use `--` instead of `{- .. -}`. We +8. __Comment style__: in general we tend to use `--` instead of `{- .. -}`. We sometimes make exceptions for big non-Haddock comments. -10. __Haddock formatting__: we use [Haddock formatting][haddock-formatting] in +9. __Haddock formatting__: we use [Haddock formatting][haddock-formatting] in docstrings. We also do this in comments for consistency. ```haskell @@ -743,106 +299,6 @@ the rules below, it is good practice to update the code's style to match them. [haddock-formatting]: https://www.haskell.org/haddock/doc/html/ch03s08.html -11. __Alignment__: we align things when it helps with readability. - - Alignment makes it clear which things are the *same* and which things are - *different*, compare the following code block - - ```haskell - foo (Quux a b c) = bar a b c - foo (Bar b c) = bar [] b c - foo (FooBar a c) = bar a [] c - ``` - - with the aligned version: - - ```haskell - foo (Quux a b c) = bar a b c - foo (Bar b c) = bar [] b c - foo (FooBar a c) = bar a [] c - ``` - - Alignment makes it easier to spot errors. 
For example, compare the two code - blocks, where the `c` argument is forgotten on the second line: - - ```haskell - foo (Quux a b c) = bar a b c - foo (Bar b c) = bar [] b c - foo (FooBar a c) = bar a [] - ``` - - ```haskell - foo (Quux a b c) = bar a b c - foo (Bar b c) = bar [] b c - foo (FooBar a c) = bar a [] - ``` - - It is immediately obvious in the aligned code, but not in the unaligned - code. - -12. __Pattern guard alignment__: - - This is one area in which we have not yet converged on a single style, - and there are two styles in use: - - ```haskell - foo x y z - | x == y - = .. - | Just z' <- z - , z' == x - , let x' = .. - = .. x' - | otherwise - = .. - where - .. - ``` - - versus - - ```haskell - foo x y z - | x == y = - .. - | otherwise = - .. - where - .. - ``` - - Similarly for `case`: - - ```haskell - case mX of - Just x - | x > 100 - -> .. - | x > 0 - -> .. - _otherwise - -> .. - ``` - - versus - - ```haskell - case mX of - Just x - | x > 100 -> - .. - | x > 0 -> - .. - _otherwise -> - .. - ``` - - Choose whichever style you prefer. The latter style is more suitable for - hanging `do`. - - In either style, use of `_otherwise` instead of `_`, as the latter is - easy to miss. - 13. __case vs function with multiple clauses__: The choice between using a `case` and having multiple clauses of the @@ -855,24 +311,8 @@ the rules below, it is good practice to update the code's style to match them. x:xs -> .. ``` -14. __if-then-else__: - - When using `if-then-else` in combination with `do`, follow the following - style: - - ```haskell - if foo then do - bar - baz - else do - quux - bar - ``` - - *Why:* to avoid wasting horizontal screen space. - -15. __Import lists__: we use `stylish-haskell` to automatically format import - lists. See the [`.stylish-haskell.yaml` config][stylish-config]. +15. __Import lists__: we use `fourmolu` to automatically format import + lists. See the [`fourmolu.yaml` config][fourmolu-config]. 
When importing modules from consensus and in particular modules from the same package, an import list and a qualifier can be omitted. For example, @@ -882,12 +322,13 @@ the rules below, it is good practice to update the code's style to match them. When importing from other packages, we prefer to use either an import list or a qualifier. -16. __Export lists__: we use `stylish-haskell` to automatically format export - lists. See the [`.stylish-haskell.yaml` config][stylish-config]. We format +16. __Export lists__: we use `fourmolu` to automatically format export + lists. See the [`fourmolu.yaml` config][fourmolu-config]. We format export lists in the following way: ```haskell - module X ( + module X + ( .. , .. ) where @@ -896,8 +337,8 @@ the rules below, it is good practice to update the code's style to match them. We sometimes use Haddock headings: ```haskell - module X ( - -- * Foo + module X + ( -- * Foo .. -- ** Foo Bar , .. @@ -911,21 +352,21 @@ the rules below, it is good practice to update the code's style to match them. (note the space): ```haskell - module X ( - Foo (..) + module X + ( Foo (..) , Bar (MkBar) ) where ``` - *Why:* this is consistent with how `stylish-haskell` formats it when + *Why:* this is consistent with how `fourmolu` formats it when importing it. When intentionally hiding the constructor of a datatype or newtype, we add a `-- opaque` comment after it in the export list to be explicit about this: ```haskell - module X ( - Foo -- opaque + module X + ( Foo -- opaque ) where ``` @@ -978,14 +419,14 @@ the rules below, it is good practice to update the code's style to match them. 18. __Records__: We purposefully discourage the use of `RecordWildCards`. - + For records we often use `NamedFieldPuns` to make it convenient to extract fields from the record. We use the following convention when naming fields to avoid duplicate record fields (we do not use `DuplicateRecordFields`): ```haskell - data SomeRecord = SomeRecord { - someRecordA :: .. 
+ data SomeRecord = SomeRecord + { someRecordA :: .. , someRecordB :: .. } ``` @@ -1123,10 +564,10 @@ the rules below, it is good practice to update the code's style to match them. import Foo.C (fooC, ...) ``` - + *Why:* this leads to more changes to the export list, but makes it absolutely clear where each identifier comes from. - + ## Guidelines There are more general guidelines on how we write and structure code. @@ -1204,4 +645,4 @@ There are more general guidelines on how we write and structure code. use that shared code in a test suite defined in another package. To avoid this problem, we avoid sharing source directories in `cabal` files. -[stylish-config]: https://github.com/IntersectMBO/ouroboros-consensus/blob/master/.stylish-haskell.yaml +[fourmolu-config]: https://github.com/IntersectMBO/ouroboros-consensus/blob/master/fourmolu.yaml diff --git a/fourmolu.yaml b/fourmolu.yaml new file mode 100644 index 0000000000..eb2c97d140 --- /dev/null +++ b/fourmolu.yaml @@ -0,0 +1,6 @@ +indentation: 2 +column-limit: 100 +import-export-style: leading +haddock-style: single-line +single-constraint-parens: never +single-deriving-parens: never diff --git a/nix/formatting-linting.nix b/nix/formatting-linting.nix index eca3092278..65a12eee66 100644 --- a/nix/formatting-linting.nix +++ b/nix/formatting-linting.nix @@ -25,7 +25,7 @@ let fi ''; formattingLinting = { - stylish = checkFormatting pkgs.stylish-haskell ../scripts/ci/run-stylish.sh; + fourmolu = checkFormatting pkgs.fourmolu ../scripts/ci/run-fourmolu.sh; cabal-gild = checkFormatting pkgs.cabal-gild ../scripts/ci/run-cabal-gild.sh; nixpkgs-fmt = checkFormatting pkgs.nixpkgs-fmt ../scripts/ci/run-nixpkgs-fmt.sh; dos2unix = checkFormatting pkgs.dos2unix ../scripts/ci/run-dos2unix.sh; diff --git a/nix/shell.nix b/nix/shell.nix index 01550952ff..f2294171dd 100644 --- a/nix/shell.nix +++ b/nix/shell.nix @@ -9,13 +9,13 @@ hsPkgs.shellFor { pkgs.cabal-docspec pkgs.fd pkgs.nixpkgs-fmt - pkgs.stylish-haskell pkgs.dos2unix 
pkgs.cabal-gild pkgs.hlint pkgs.cabal-hoogle pkgs.ghcid pkgs.xrefcheck + pkgs.fourmolu # release management pkgs.scriv diff --git a/nix/tools.nix b/nix/tools.nix index fcc1b06631..368fe9a202 100644 --- a/nix/tools.nix +++ b/nix/tools.nix @@ -31,14 +31,16 @@ in }; }; - stylish-haskell = tool "stylish-haskell" "0.14.6.0" { }; - cabal-gild = tool "cabal-gild" "1.5.0.1" { }; hlint = tool "hlint" "3.8" { }; xrefcheck = tool "xrefcheck" "0.3.1" { }; + fourmolu = tool "fourmolu" "0.18.0.0" { + compiler-nix-name = "ghc98"; + }; + haskellBuildUtils = prev.haskellBuildUtils.override { inherit (final.hsPkgs.args) compiler-nix-name; index-state = tool-index-state; diff --git a/ouroboros-consensus-cardano/app/DBAnalyser/Parsers.hs b/ouroboros-consensus-cardano/app/DBAnalyser/Parsers.hs index b93543492c..ba77dd8465 100644 --- a/ouroboros-consensus-cardano/app/DBAnalyser/Parsers.hs +++ b/ouroboros-consensus-cardano/app/DBAnalyser/Parsers.hs @@ -1,23 +1,23 @@ {-# LANGUAGE ApplicativeDo #-} {-# LANGUAGE LambdaCase #-} -module DBAnalyser.Parsers ( - BlockType (..) +module DBAnalyser.Parsers + ( BlockType (..) 
, blockTypeParser , parseCmdLine ) where -import Cardano.Crypto (RequiresNetworkMagic (..)) -import Cardano.Tools.DBAnalyser.Analysis -import Cardano.Tools.DBAnalyser.Block.Byron -import Cardano.Tools.DBAnalyser.Block.Cardano -import Cardano.Tools.DBAnalyser.Block.Shelley -import Cardano.Tools.DBAnalyser.Types -import qualified Data.Foldable as Foldable -import Options.Applicative -import Ouroboros.Consensus.Block (SlotNo (..), WithOrigin (..)) -import Ouroboros.Consensus.Byron.Node (PBftSignatureThreshold (..)) -import Ouroboros.Consensus.Shelley.Node (Nonce (..)) +import Cardano.Crypto (RequiresNetworkMagic (..)) +import Cardano.Tools.DBAnalyser.Analysis +import Cardano.Tools.DBAnalyser.Block.Byron +import Cardano.Tools.DBAnalyser.Block.Cardano +import Cardano.Tools.DBAnalyser.Block.Shelley +import Cardano.Tools.DBAnalyser.Types +import Data.Foldable qualified as Foldable +import Options.Applicative +import Ouroboros.Consensus.Block (SlotNo (..), WithOrigin (..)) +import Ouroboros.Consensus.Byron.Node (PBftSignatureThreshold (..)) +import Ouroboros.Consensus.Shelley.Node (Nonce (..)) {------------------------------------------------------------------------------- Parsing @@ -27,102 +27,129 @@ parseCmdLine :: Parser (DBAnalyserConfig, BlockType) parseCmdLine = (,) <$> parseDBAnalyserConfig <*> blockTypeParser parseDBAnalyserConfig :: Parser DBAnalyserConfig -parseDBAnalyserConfig = DBAnalyserConfig - <$> strOption (mconcat [ - long "db" +parseDBAnalyserConfig = + DBAnalyserConfig + <$> strOption + ( mconcat + [ long "db" , help "Path to the Chain DB" , metavar "PATH" - ]) - <*> switch (mconcat [ - long "verbose" + ] + ) + <*> switch + ( mconcat + [ long "verbose" , help "Enable verbose logging" - ]) + ] + ) <*> parseSelectDB <*> parseValidationPolicy <*> parseAnalysis <*> parseLimit - <*> Foldable.asum [ - flag' V1InMem $ mconcat [ - long "v1-in-mem" - , help "use v1 in-memory backing store" - ] - , flag' V1LMDB $ mconcat [ - long "lmdb" - , help "use v1 LMDB 
backing store" - ] - , flag' V2InMem $ mconcat [ - long "v2-in-mem" - , help "use v2 in-memory backend" - ] - ] + <*> Foldable.asum + [ flag' V1InMem $ + mconcat + [ long "v1-in-mem" + , help "use v1 in-memory backing store" + ] + , flag' V1LMDB $ + mconcat + [ long "lmdb" + , help "use v1 LMDB backing store" + ] + , flag' V2InMem $ + mconcat + [ long "v2-in-mem" + , help "use v2 in-memory backend" + ] + ] parseSelectDB :: Parser SelectDB parseSelectDB = - SelectImmutableDB <$> analyseFrom - where - analyseFrom :: Parser (WithOrigin SlotNo) - analyseFrom = fmap (maybe Origin (NotOrigin . SlotNo)) $ optional $ option auto - ( long "analyse-from" - <> metavar "SLOT_NUMBER" - <> help "Start analysis from ledger state stored at specific slot number" ) - + SelectImmutableDB <$> analyseFrom + where + analyseFrom :: Parser (WithOrigin SlotNo) + analyseFrom = + fmap (maybe Origin (NotOrigin . SlotNo)) $ + optional $ + option + auto + ( long "analyse-from" + <> metavar "SLOT_NUMBER" + <> help "Start analysis from ledger state stored at specific slot number" + ) parseValidationPolicy :: Parser (Maybe ValidateBlocks) parseValidationPolicy = - optional $ option reader $ mconcat [ - long "db-validation" - , help $ "The extent of the ChainDB on-disk files validation. This is " - <> "completely unrelated to validation of the ledger rules. " - <> "Possible values: validate-all-blocks, minimum-block-validation." - ] - where - reader = maybeReader $ \case - "validate-all-blocks" -> Just ValidateAllBlocks - "minimum-block-validation" -> Just MinimumBlockValidation - _ -> Nothing + optional $ + option reader $ + mconcat + [ long "db-validation" + , help $ + "The extent of the ChainDB on-disk files validation. This is " + <> "completely unrelated to validation of the ledger rules. " + <> "Possible values: validate-all-blocks, minimum-block-validation." 
+ ] + where + reader = maybeReader $ \case + "validate-all-blocks" -> Just ValidateAllBlocks + "minimum-block-validation" -> Just MinimumBlockValidation + _ -> Nothing parseAnalysis :: Parser AnalysisName -parseAnalysis = Foldable.asum [ - flag' ShowSlotBlockNo $ mconcat [ - long "show-slot-block-no" - , help "Show slot and block number and hash of all blocks" - ] - , flag' CountTxOutputs $ mconcat [ - long "count-tx-outputs" - , help "Show number of transaction outputs per block" - ] - , flag' ShowBlockHeaderSize $ mconcat [ - long "show-block-header-size" - , help "Show the header sizes of all blocks" - ] - , flag' ShowBlockTxsSize $ mconcat [ - long "show-block-txs-size" - , help "Show the total transaction sizes per block" - ] - , flag' ShowEBBs $ mconcat [ - long "show-ebbs" - , help "Show all EBBs and their predecessors" - ] +parseAnalysis = + Foldable.asum + [ flag' ShowSlotBlockNo $ + mconcat + [ long "show-slot-block-no" + , help "Show slot and block number and hash of all blocks" + ] + , flag' CountTxOutputs $ + mconcat + [ long "count-tx-outputs" + , help "Show number of transaction outputs per block" + ] + , flag' ShowBlockHeaderSize $ + mconcat + [ long "show-block-header-size" + , help "Show the header sizes of all blocks" + ] + , flag' ShowBlockTxsSize $ + mconcat + [ long "show-block-txs-size" + , help "Show the total transaction sizes per block" + ] + , flag' ShowEBBs $ + mconcat + [ long "show-ebbs" + , help "Show all EBBs and their predecessors" + ] , storeLedgerParser - , flag' CountBlocks $ mconcat [ - long "count-blocks" - , help "Count number of blocks processed" - ] + , flag' CountBlocks $ + mconcat + [ long "count-blocks" + , help "Count number of blocks processed" + ] , checkNoThunksParser - , flag' TraceLedgerProcessing $ mconcat [ - long "trace-ledger" - , help $ "Maintain ledger state and trace ledger phases in the GHC event" + , flag' TraceLedgerProcessing $ + mconcat + [ long "trace-ledger" + , help $ + "Maintain ledger state and 
trace ledger phases in the GHC event" <> " log. The db-analyser tool performs era-specific analysis" <> " of the ledger state and inserts markers for 'significant'" <> " events, such as for example epoch transitions." - ] - , fmap ReproMempoolAndForge $ option auto $ mconcat [ - long "repro-mempool-and-forge" - , help $ "Maintain ledger state and mempool trafficking the" - <> " transactions of each block. The integer is how many" - <> "blocks to put in the mempool at once." - , metavar "INT" - ] + ] + , fmap ReproMempoolAndForge $ + option auto $ + mconcat + [ long "repro-mempool-and-forge" + , help $ + "Maintain ledger state and mempool trafficking the" + <> " transactions of each block. The integer is how many" + <> " blocks to put in the mempool at once." + , metavar "INT" + ] , benchmarkLedgerOpsParser , getBlockApplicationMetrics , pure OnlyValidation @@ -130,98 +157,127 @@ parseAnalysis = Foldable.asum [ storeLedgerParser :: Parser AnalysisName storeLedgerParser = do - slot <- SlotNo <$> option auto - ( long "store-ledger" - <> metavar "SLOT_NUMBER" - <> help "Store ledger state at specific slot number" ) - ledgerValidation <- flag LedgerReapply LedgerApply - ( long "full-ledger-validation" - <> help ( "Use full block application while applying blocks to ledger states, " - <> "also validating signatures and scripts. " - <> "This is much slower than block reapplication (the default)." + slot <- + SlotNo + <$> option + auto + ( long "store-ledger" + <> metavar "SLOT_NUMBER" + <> help "Store ledger state at specific slot number" + ) + ledgerValidation <- + flag + LedgerReapply + LedgerApply + ( long "full-ledger-validation" + <> help + ( "Use full block application while applying blocks to ledger states, " + <> "also validating signatures and scripts. " + <> "This is much slower than block reapplication (the default)."
) - ) + ) pure $ StoreLedgerStateAt slot ledgerValidation checkNoThunksParser :: Parser AnalysisName -checkNoThunksParser = CheckNoThunksEvery <$> option auto - ( long "checkThunks" - <> metavar "BLOCK_COUNT" - <> help "Check the ledger state for thunks every n blocks" ) +checkNoThunksParser = + CheckNoThunksEvery + <$> option + auto + ( long "checkThunks" + <> metavar "BLOCK_COUNT" + <> help "Check the ledger state for thunks every n blocks" + ) parseLimit :: Parser Limit -parseLimit = Foldable.asum [ - Limit <$> option auto (mconcat [ - long "num-blocks-to-process" - , help "Maximum number of blocks we want to process" - , metavar "INT" - ]) - , pure Unlimited - ] +parseLimit = + Foldable.asum + [ Limit + <$> option + auto + ( mconcat + [ long "num-blocks-to-process" + , help "Maximum number of blocks we want to process" + , metavar "INT" + ] + ) + , pure Unlimited + ] benchmarkLedgerOpsParser :: Parser AnalysisName benchmarkLedgerOpsParser = - benchmarkLedgerOpsFlagParser + benchmarkLedgerOpsFlagParser *> (BenchmarkLedgerOps <$> pMaybeOutputFile <*> pApplyMode) - where - benchmarkLedgerOpsFlagParser = - flag' BenchmarkLedgerOps $ mconcat [ - long "benchmark-ledger-ops" - , help $ "Maintain ledger state and benchmark the main ledger calculations for each block." - <> " Prints one line of stats per block to the given output file " - <> " (defaults to stdout)." - ] + where + benchmarkLedgerOpsFlagParser = + flag' BenchmarkLedgerOps $ + mconcat + [ long "benchmark-ledger-ops" + , help $ + "Maintain ledger state and benchmark the main ledger calculations for each block." + <> " Prints one line of stats per block to the given output file " + <> "(defaults to stdout)." + ] - pApplyMode = - flag LedgerApply LedgerReapply $ mconcat [ - long "reapply" - , help $ "Measure header/block *re*application instead of full application."
- ] + pApplyMode = + flag LedgerApply LedgerReapply $ + mconcat + [ long "reapply" + , help $ "Measure header/block *re*application instead of full application." + ] getBlockApplicationMetrics :: Parser AnalysisName -getBlockApplicationMetrics = do +getBlockApplicationMetrics = do fGetBlockApplicationMetrics <- partialGetBlockApplicationMetricsParser - mOutputFile <- pMaybeOutputFile + mOutputFile <- pMaybeOutputFile pure $ fGetBlockApplicationMetrics mOutputFile - where - partialGetBlockApplicationMetricsParser = - GetBlockApplicationMetrics . NumberOfBlocks - <$> option auto (mconcat [ long "get-block-application-metrics" , metavar "NUM" , help $ "Compute block application metrics every 'NUM' blocks (it currently supports slot and block numbers and UTxO size). " <> "Stores the result to the given output file " <> " (defaults to stdout)." ] - ) + where + partialGetBlockApplicationMetricsParser = + GetBlockApplicationMetrics . NumberOfBlocks + <$> option + auto + ( mconcat + [ long "get-block-application-metrics" + , metavar "NUM" + , help $ + "Compute block application metrics every 'NUM' blocks (it currently supports slot and block numbers and UTxO size). " + <> "Stores the result to the given output file " + <> "(defaults to stdout)." + ] + ) pMaybeOutputFile :: Parser (Maybe FilePath) pMaybeOutputFile = optional $ strOption - ( long "out-file" - <> metavar "FILE" - <> help "Optional output file. Default is to write to stdout." - <> completer (bashCompleter "file") + ( long "out-file" + <> metavar "FILE" + <> help "Optional output file. Default is to write to stdout."
+ <> completer (bashCompleter "file") ) {------------------------------------------------------------------------------- Parse BlockType-specific arguments -------------------------------------------------------------------------------} -data BlockType = - ByronBlock ByronBlockArgs +data BlockType + = ByronBlock ByronBlockArgs | ShelleyBlock ShelleyBlockArgs | CardanoBlock CardanoBlockArgs blockTypeParser :: Parser BlockType -blockTypeParser = subparser $ mconcat - [ command "byron" - (info (parseByronType <**> helper) (progDesc "Analyse a Byron-only DB")) - , command "shelley" - (info (parseShelleyType <**> helper) (progDesc "Analyse a Shelley-only DB")) - , command "cardano" - (info (parseCardanoType <**> helper) (progDesc "Analyse a Cardano DB")) - ] +blockTypeParser = + subparser $ + mconcat + [ command + "byron" + (info (parseByronType <**> helper) (progDesc "Analyse a Byron-only DB")) + , command + "shelley" + (info (parseShelleyType <**> helper) (progDesc "Analyse a Shelley-only DB")) + , command + "cardano" + (info (parseCardanoType <**> helper) (progDesc "Analyse a Cardano DB")) + ] parseByronType :: Parser BlockType parseByronType = ByronBlock <$> parseByronArgs @@ -233,50 +289,75 @@ parseCardanoType :: Parser BlockType parseCardanoType = CardanoBlock <$> parseCardanoArgs parseCardanoArgs :: Parser CardanoBlockArgs -parseCardanoArgs = CardanoBlockArgs +parseCardanoArgs = + CardanoBlockArgs <$> parseConfigFile <*> parsePBftSignatureThreshold parseShelleyArgs :: Parser ShelleyBlockArgs -parseShelleyArgs = ShelleyBlockArgs - <$> strOption (mconcat [ - long "configShelley" +parseShelleyArgs = + ShelleyBlockArgs + <$> strOption + ( mconcat + [ long "configShelley" , help "Path to config file" , metavar "PATH" - ]) - <*> Foldable.asum [ Nonce <$> parseNonce - , pure NeutralNonce] - where - parseNonce = strOption (mconcat [ - long "nonce" + ] + ) + <*> Foldable.asum + [ Nonce <$> parseNonce + , pure NeutralNonce + ] + where + parseNonce = + strOption + ( mconcat 
+ [ long "nonce" , help "Initial nonce, i.e., hash of the genesis config file" , metavar "NONCE" - ]) + ] + ) parseConfigFile :: Parser FilePath -parseConfigFile = strOption $ mconcat [ - long "config" - , help "Path to config file" - , metavar "PATH" - ] +parseConfigFile = + strOption $ + mconcat + [ long "config" + , help "Path to config file" + , metavar "PATH" + ] parsePBftSignatureThreshold :: Parser (Maybe PBftSignatureThreshold) -parsePBftSignatureThreshold = optional $ fmap PBftSignatureThreshold $ option auto $ mconcat [ - long "threshold" - , help "PBftSignatureThreshold" - , metavar "THRESHOLD" - ] +parsePBftSignatureThreshold = + optional $ + fmap PBftSignatureThreshold $ + option auto $ + mconcat + [ long "threshold" + , help "PBftSignatureThreshold" + , metavar "THRESHOLD" + ] parseByronArgs :: Parser ByronBlockArgs -parseByronArgs = ByronBlockArgs +parseByronArgs = + ByronBlockArgs <$> parseConfigFile - <*> flag RequiresNoMagic RequiresMagic (mconcat [ - long "requires-magic" + <*> flag + RequiresNoMagic + RequiresMagic + ( mconcat + [ long "requires-magic" , help "The DB contains blocks from a testnet, requiring network magic, rather than mainnet" - ]) - <*> optional (option auto (mconcat [ - long "genesisHash" - , help "Expected genesis hash" - , metavar "HASH" - ])) + ] + ) + <*> optional + ( option + auto + ( mconcat + [ long "genesisHash" + , help "Expected genesis hash" + , metavar "HASH" + ] + ) + ) <*> parsePBftSignatureThreshold diff --git a/ouroboros-consensus-cardano/app/DBSynthesizer/Parsers.hs b/ouroboros-consensus-cardano/app/DBSynthesizer/Parsers.hs index 51509e0ea7..a4cb4ab76a 100644 --- a/ouroboros-consensus-cardano/app/DBSynthesizer/Parsers.hs +++ b/ouroboros-consensus-cardano/app/DBSynthesizer/Parsers.hs @@ -1,17 +1,16 @@ module DBSynthesizer.Parsers (parseCommandLine) where -import Cardano.Tools.DBSynthesizer.Types -import Data.Word (Word64) -import Options.Applicative as Opt -import Ouroboros.Consensus.Block.Abstract (SlotNo 
(..)) - +import Cardano.Tools.DBSynthesizer.Types +import Data.Word (Word64) +import Options.Applicative as Opt +import Ouroboros.Consensus.Block.Abstract (SlotNo (..)) parseCommandLine :: IO (NodeFilePaths, NodeCredentials, DBSynthesizerOptions) parseCommandLine = - Opt.customExecParser p opts - where - p = Opt.prefs Opt.showHelpOnEmpty - opts = Opt.info parserCommandLine mempty + Opt.customExecParser p opts + where + p = Opt.prefs Opt.showHelpOnEmpty + opts = Opt.info parserCommandLine mempty parserCommandLine :: Parser (NodeFilePaths, NodeCredentials, DBSynthesizerOptions) parserCommandLine = @@ -42,9 +41,12 @@ parseDBSynthesizerOptions = parseForgeOptions :: Parser ForgeLimit parseForgeOptions = - ForgeLimitSlot <$> parseSlotLimit - <|> ForgeLimitBlock <$> parseBlockLimit - <|> ForgeLimitEpoch <$> parseEpochLimit + ForgeLimitSlot + <$> parseSlotLimit + <|> ForgeLimitBlock + <$> parseBlockLimit + <|> ForgeLimitEpoch + <$> parseEpochLimit parseChainDBFilePath :: Parser FilePath parseChainDBFilePath = @@ -102,47 +104,51 @@ parseBulkFilePath = parseSlotLimit :: Parser SlotNo parseSlotLimit = - SlotNo <$> option auto - ( short 's' - <> long "slots" - <> metavar "NUMBER" - <> help "Amount of slots to process" - ) + SlotNo + <$> option + auto + ( short 's' + <> long "slots" + <> metavar "NUMBER" + <> help "Amount of slots to process" + ) parseBlockLimit :: Parser Word64 parseBlockLimit = - option auto - ( short 'b' - <> long "blocks" - <> metavar "NUMBER" - <> help "Amount of blocks to forge" + option + auto + ( short 'b' + <> long "blocks" + <> metavar "NUMBER" + <> help "Amount of blocks to forge" ) parseEpochLimit :: Parser Word64 parseEpochLimit = - option auto - ( short 'e' - <> long "epochs" - <> metavar "NUMBER" - <> help "Amount of epochs to process" + option + auto + ( short 'e' + <> long "epochs" + <> metavar "NUMBER" + <> help "Amount of epochs to process" ) parseForce :: Parser Bool parseForce = switch - ( short 'f' - <> help "Force overwrite an existing 
Chain DB" + ( short 'f' + <> help "Force overwrite an existing Chain DB" ) parseAppend :: Parser Bool parseAppend = switch - ( short 'a' - <> help "Append to an existing Chain DB" + ( short 'a' + <> help "Append to an existing Chain DB" ) parseOpenMode :: Parser DBSynthesizerOpenMode parseOpenMode = - (parseForce *> pure OpenCreateForce) - <|> (parseAppend *> pure OpenAppend) - <|> pure OpenCreate + (parseForce *> pure OpenCreateForce) + <|> (parseAppend *> pure OpenAppend) + <|> pure OpenCreate diff --git a/ouroboros-consensus-cardano/app/DBTruncater/Parsers.hs b/ouroboros-consensus-cardano/app/DBTruncater/Parsers.hs index fc97e34a44..135492a0c5 100644 --- a/ouroboros-consensus-cardano/app/DBTruncater/Parsers.hs +++ b/ouroboros-consensus-cardano/app/DBTruncater/Parsers.hs @@ -1,35 +1,38 @@ module DBTruncater.Parsers (commandLineParser) where -import Cardano.Tools.DBTruncater.Types -import DBAnalyser.Parsers -import Options.Applicative -import Ouroboros.Consensus.Block.Abstract +import Cardano.Tools.DBTruncater.Types +import DBAnalyser.Parsers +import Options.Applicative +import Ouroboros.Consensus.Block.Abstract commandLineParser :: Parser (DBTruncaterConfig, BlockType) commandLineParser = (,) <$> parseDBTruncaterConfig <*> blockTypeParser parseDBTruncaterConfig :: Parser DBTruncaterConfig -parseDBTruncaterConfig = DBTruncaterConfig +parseDBTruncaterConfig = + DBTruncaterConfig <$> parseChainDBPath <*> parseTruncateAfter <*> parseVerbose - where - parseChainDBPath = strOption $ + where + parseChainDBPath = + strOption $ mconcat [ long "db" , help "Path of the chain DB" , metavar "PATH" ] - parseVerbose = switch (long "verbose" <> help "Enable verbose logging") + parseVerbose = switch (long "verbose" <> help "Enable verbose logging") parseTruncateAfter :: Parser TruncateAfter parseTruncateAfter = fmap TruncateAfterSlot slotNoOption <|> fmap TruncateAfterBlock blockNoOption slotNoOption :: Parser SlotNo slotNoOption = - SlotNo <$> option auto mods - where - mods = 
mconcat + SlotNo <$> option auto mods + where + mods = + mconcat [ long "truncate-after-slot" , metavar "SLOT_NUMBER" , help "Remove all blocks with a higher slot number" @@ -37,9 +40,10 @@ slotNoOption = blockNoOption :: Parser BlockNo blockNoOption = - BlockNo <$> option auto mods - where - mods = mconcat + BlockNo <$> option auto mods + where + mods = + mconcat [ long "truncate-after-block" , metavar "BLOCK_NUMBER" , help "The block number of the intended new tip of the chain after truncation" diff --git a/ouroboros-consensus-cardano/app/GenHeader/Parsers.hs b/ouroboros-consensus-cardano/app/GenHeader/Parsers.hs index 34058a8bef..c23deab88a 100644 --- a/ouroboros-consensus-cardano/app/GenHeader/Parsers.hs +++ b/ouroboros-consensus-cardano/app/GenHeader/Parsers.hs @@ -1,46 +1,74 @@ module GenHeader.Parsers (parseOptions) where -import Cardano.Tools.Headers (Options (..)) -import Data.Version (showVersion) -import Options.Applicative (Parser, ParserInfo, auto, command, - execParser, help, helper, hsubparser, info, long, metavar, - option, progDesc, short, (<**>)) -import Paths_ouroboros_consensus_cardano (version) +import Cardano.Tools.Headers (Options (..)) +import Data.Version (showVersion) +import Options.Applicative + ( Parser + , ParserInfo + , auto + , command + , execParser + , help + , helper + , hsubparser + , info + , long + , metavar + , option + , progDesc + , short + , (<**>) + ) +import Paths_ouroboros_consensus_cardano (version) parseOptions :: IO Options parseOptions = execParser argsParser argsParser :: ParserInfo Options argsParser = - info - (optionsParser <**> helper) - ( progDesc $ - unlines - [ "gen-header - A utility to generate valid and invalid Praos headers for testing purpose" - , "version: " <> showVersion version - ] - ) + info + (optionsParser <**> helper) + ( progDesc $ + unlines + [ "gen-header - A utility to generate valid and invalid Praos headers for testing purpose" + , "version: " <> showVersion version + ] + ) optionsParser :: 
Parser Options optionsParser = - hsubparser - ( command "generate" (info generateOptionsParser (progDesc "Generate Praos headers context and valid/invalid headers. Writes JSON formatted context to stdout and headers to stdout.")) - <> command "validate" (info validateOptionsParser (progDesc "Validate a sample of Praos headers within a context. Reads JSON formatted sample from stdin.")) + hsubparser + ( command + "generate" + ( info + generateOptionsParser + ( progDesc + "Generate Praos headers context and valid/invalid headers. Writes JSON formatted context to stdout and headers to stdout." + ) ) + <> command + "validate" + ( info + validateOptionsParser + ( progDesc + "Validate a sample of Praos headers within a context. Reads JSON formatted sample from stdin." + ) + ) + ) validateOptionsParser :: Parser Options validateOptionsParser = pure Validate generateOptionsParser :: Parser Options generateOptionsParser = - Generate <$> countParser + Generate <$> countParser countParser :: Parser Int countParser = - option - auto - ( long "count" - <> short 'c' - <> metavar "INT" - <> help "Number of headers to generate" - ) + option + auto + ( long "count" + <> short 'c' + <> metavar "INT" + <> help "Number of headers to generate" + ) diff --git a/ouroboros-consensus-cardano/app/db-analyser.hs b/ouroboros-consensus-cardano/app/db-analyser.hs index 4c20471191..67366c71af 100644 --- a/ouroboros-consensus-cardano/app/db-analyser.hs +++ b/ouroboros-consensus-cardano/app/db-analyser.hs @@ -17,32 +17,42 @@ -- [--num-blocks-to-process INT] COMMAND module Main (main) where -import Cardano.Crypto.Init (cryptoInit) -import Cardano.Tools.DBAnalyser.Run -import Cardano.Tools.DBAnalyser.Types -import Cardano.Tools.GitRev (gitRev) -import Control.Monad (void) -import qualified Data.Text as T -import DBAnalyser.Parsers -import Main.Utf8 (withStdTerminalHandles) -import Options.Applicative (execParser, footer, fullDesc, helper, - info, progDesc, (<**>)) - +import Cardano.Crypto.Init 
(cryptoInit) +import Cardano.Tools.DBAnalyser.Run +import Cardano.Tools.DBAnalyser.Types +import Cardano.Tools.GitRev (gitRev) +import Control.Monad (void) +import DBAnalyser.Parsers +import Data.Text qualified as T +import Main.Utf8 (withStdTerminalHandles) +import Options.Applicative + ( execParser + , footer + , fullDesc + , helper + , info + , progDesc + , (<**>) + ) main :: IO () main = withStdTerminalHandles $ do - cryptoInit - (conf, blocktype) <- getCmdLine - void $ case blocktype of - ByronBlock args -> analyse conf args - ShelleyBlock args -> analyse conf args - CardanoBlock args -> analyse conf args + cryptoInit + (conf, blocktype) <- getCmdLine + void $ case blocktype of + ByronBlock args -> analyse conf args + ShelleyBlock args -> analyse conf args + CardanoBlock args -> analyse conf args getCmdLine :: IO (DBAnalyserConfig, BlockType) getCmdLine = execParser opts - where - opts = info (parseCmdLine <**> helper) (mconcat [ - fullDesc - , progDesc "Simple framework used to analyse a Chain DB" - , footer $ "ouroboros-consensus commit: " <> T.unpack gitRev - ]) + where + opts = + info + (parseCmdLine <**> helper) + ( mconcat + [ fullDesc + , progDesc "Simple framework used to analyse a Chain DB" + , footer $ "ouroboros-consensus commit: " <> T.unpack gitRev + ] + ) diff --git a/ouroboros-consensus-cardano/app/db-immutaliser.hs b/ouroboros-consensus-cardano/app/db-immutaliser.hs index 162ce4e164..6c5fccd839 100644 --- a/ouroboros-consensus-cardano/app/db-immutaliser.hs +++ b/ouroboros-consensus-cardano/app/db-immutaliser.hs @@ -3,50 +3,64 @@ module Main (main) where -import Cardano.Crypto.Init (cryptoInit) -import Cardano.Tools.DBImmutaliser.Run (DBDirs (..), Opts (..)) -import qualified Cardano.Tools.DBImmutaliser.Run as DBImmutaliser -import Main.Utf8 (withStdTerminalHandles) -import Options.Applicative +import Cardano.Crypto.Init (cryptoInit) +import Cardano.Tools.DBImmutaliser.Run (DBDirs (..), Opts (..)) +import Cardano.Tools.DBImmutaliser.Run 
qualified as DBImmutaliser +import Main.Utf8 (withStdTerminalHandles) +import Options.Applicative + main :: IO () main = withStdTerminalHandles $ do - cryptoInit - DBImmutaliser.run =<< execParser optsParser + cryptoInit + DBImmutaliser.run =<< execParser optsParser optsParser :: ParserInfo Opts optsParser = - info (helper <*> parse) $ fullDesc <> progDesc desc - where - desc = "Copy a specific chain out of a VolatileDB into an ImmutableDB" + info (helper <*> parse) $ fullDesc <> progDesc desc + where + desc = "Copy a specific chain out of a VolatileDB into an ImmutableDB" - parse = do - dbDirs <- do - immDBDir <- strOption $ mconcat - [ long "immutable-db" - , help "Path to the ImmutableDB" + parse = do + dbDirs <- do + immDBDir <- + strOption $ + mconcat + [ long "immutable-db" + , help "Path to the ImmutableDB" + , metavar "PATH" + ] + volDBDir <- + strOption $ + mconcat + [ long "volatile-db" + , help "Path to the VolatileDB" + , metavar "PATH" + ] + pure DBDirs{immDBDir, volDBDir} + configFile <- + strOption $ + mconcat + [ long "config" + , help "Path to config file, in the same format as for the node or db-analyser" , metavar "PATH" ] - volDBDir <- strOption $ mconcat - [ long "volatile-db" - , help "Path to the VolatileDB" - , metavar "PATH" + verbose <- + switch $ + mconcat + [ long "verbose" + , help "Enable verbose logging" + ] + dotOut <- + optional $ + strOption $ + mconcat + [ long "dot-out" + , help "Write the volatile block tree to a file in DOT format" + ] + dryRun <- + switch $ + mconcat + [ long "dry-run" + , help "Do not actually append anything to the ImmutableDB" ] - pure DBDirs {immDBDir, volDBDir} - configFile <- strOption $ mconcat - [ long "config" - , help "Path to config file, in the same format as for the node or db-analyser" - , metavar "PATH" - ] - verbose <- switch $ mconcat - [ long "verbose" - , help "Enable verbose logging" - ] - dotOut <- optional $ strOption $ mconcat - [ long "dot-out" - , help "Write the volatile block tree to 
a file in DOT format" - ] - dryRun <- switch $ mconcat - [ long "dry-run" - , help "Do not actually append anything to the ImmutableDB" - ] - pure Opts {dbDirs, configFile, verbose, dotOut, dryRun} + pure Opts{dbDirs, configFile, verbose, dotOut, dryRun} diff --git a/ouroboros-consensus-cardano/app/db-synthesizer.hs b/ouroboros-consensus-cardano/app/db-synthesizer.hs index 43c6ccae59..45a8cb0a9f 100644 --- a/ouroboros-consensus-cardano/app/db-synthesizer.hs +++ b/ouroboros-consensus-cardano/app/db-synthesizer.hs @@ -23,18 +23,17 @@ -- -a Append to an existing Chain DB module Main (main) where -import Cardano.Crypto.Init (cryptoInit) -import Cardano.Tools.DBSynthesizer.Run -import DBSynthesizer.Parsers -import Main.Utf8 (withStdTerminalHandles) -import System.Exit - +import Cardano.Crypto.Init (cryptoInit) +import Cardano.Tools.DBSynthesizer.Run +import DBSynthesizer.Parsers +import Main.Utf8 (withStdTerminalHandles) +import System.Exit main :: IO () main = withStdTerminalHandles $ do - cryptoInit - (paths, creds, forgeOpts) <- parseCommandLine - let - genTxs _ _ _ _ = pure [] - result <- initialize paths creds forgeOpts >>= either die (uncurry (synthesize genTxs)) - putStrLn $ "--> done; result: " ++ show result + cryptoInit + (paths, creds, forgeOpts) <- parseCommandLine + let + genTxs _ _ _ _ = pure [] + result <- initialize paths creds forgeOpts >>= either die (uncurry (synthesize genTxs)) + putStrLn $ "--> done; result: " ++ show result diff --git a/ouroboros-consensus-cardano/app/db-truncater.hs b/ouroboros-consensus-cardano/app/db-truncater.hs index 7f43a94aa1..6183ce3668 100644 --- a/ouroboros-consensus-cardano/app/db-truncater.hs +++ b/ouroboros-consensus-cardano/app/db-truncater.hs @@ -1,27 +1,35 @@ module Main (main) where -import Cardano.Crypto.Init (cryptoInit) -import Cardano.Tools.DBTruncater.Run -import Cardano.Tools.DBTruncater.Types -import DBAnalyser.Parsers (BlockType (..)) -import qualified DBTruncater.Parsers as DBTruncater -import Main.Utf8 
(withStdTerminalHandles) -import Options.Applicative (execParser, fullDesc, helper, info, - progDesc, (<**>)) -import Ouroboros.Consensus.Storage.ImmutableDB.Impl () -import Prelude hiding (truncate) +import Cardano.Crypto.Init (cryptoInit) +import Cardano.Tools.DBTruncater.Run +import Cardano.Tools.DBTruncater.Types +import DBAnalyser.Parsers (BlockType (..)) +import DBTruncater.Parsers qualified as DBTruncater +import Main.Utf8 (withStdTerminalHandles) +import Options.Applicative + ( execParser + , fullDesc + , helper + , info + , progDesc + , (<**>) + ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl () +import Prelude hiding (truncate) main :: IO () main = withStdTerminalHandles $ do - cryptoInit - (conf, blocktype) <- getCommandLineConfig - case blocktype of - ByronBlock args -> truncate conf args - ShelleyBlock args -> truncate conf args - CardanoBlock args -> truncate conf args + cryptoInit + (conf, blocktype) <- getCommandLineConfig + case blocktype of + ByronBlock args -> truncate conf args + ShelleyBlock args -> truncate conf args + CardanoBlock args -> truncate conf args getCommandLineConfig :: IO (DBTruncaterConfig, BlockType) getCommandLineConfig = execParser opts - where - opts = info (DBTruncater.commandLineParser <**> helper) + where + opts = + info + (DBTruncater.commandLineParser <**> helper) (fullDesc <> progDesc "Utility for truncating an ImmutableDB") diff --git a/ouroboros-consensus-cardano/app/gen-header.hs b/ouroboros-consensus-cardano/app/gen-header.hs index 2a2ea76cae..60189f4b5b 100644 --- a/ouroboros-consensus-cardano/app/gen-header.hs +++ b/ouroboros-consensus-cardano/app/gen-header.hs @@ -1,12 +1,12 @@ -- | This tool generates valid and invalid Cardano headers. 
module Main (main) where -import Cardano.Crypto.Init (cryptoInit) -import Cardano.Tools.Headers (run) -import GenHeader.Parsers (parseOptions) -import Main.Utf8 (withUtf8) +import Cardano.Crypto.Init (cryptoInit) +import Cardano.Tools.Headers (run) +import GenHeader.Parsers (parseOptions) +import Main.Utf8 (withUtf8) main :: IO () main = withUtf8 $ do - cryptoInit - parseOptions >>= run + cryptoInit + parseOptions >>= run diff --git a/ouroboros-consensus-cardano/app/immdb-server.hs b/ouroboros-consensus-cardano/app/immdb-server.hs index 5e118362ef..6e26ac4db8 100644 --- a/ouroboros-consensus-cardano/app/immdb-server.hs +++ b/ouroboros-consensus-cardano/app/immdb-server.hs @@ -3,55 +3,61 @@ module Main (main) where -import Cardano.Crypto.Init (cryptoInit) -import qualified Cardano.Tools.DBAnalyser.Block.Cardano as Cardano -import Cardano.Tools.DBAnalyser.HasAnalysis (mkProtocolInfo) -import qualified Cardano.Tools.ImmDBServer.Diffusion as ImmDBServer -import Data.Void -import Main.Utf8 (withStdTerminalHandles) -import qualified Network.Socket as Socket -import Options.Applicative -import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) +import Cardano.Crypto.Init (cryptoInit) +import Cardano.Tools.DBAnalyser.Block.Cardano qualified as Cardano +import Cardano.Tools.DBAnalyser.HasAnalysis (mkProtocolInfo) +import Cardano.Tools.ImmDBServer.Diffusion qualified as ImmDBServer +import Data.Void +import Main.Utf8 (withStdTerminalHandles) +import Network.Socket qualified as Socket +import Options.Applicative +import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) main :: IO () main = withStdTerminalHandles $ do - cryptoInit - Opts {immDBDir, port, configFile} <- execParser optsParser - let sockAddr = Socket.SockAddrInet port hostAddr - where - -- could also be passed in - hostAddr = Socket.tupleToHostAddress (127, 0, 0, 1) - args = Cardano.CardanoBlockArgs configFile Nothing - ProtocolInfo{pInfoConfig} <- mkProtocolInfo args - absurd <$> ImmDBServer.run 
immDBDir sockAddr pInfoConfig + cryptoInit + Opts{immDBDir, port, configFile} <- execParser optsParser + let sockAddr = Socket.SockAddrInet port hostAddr + where + -- could also be passed in + hostAddr = Socket.tupleToHostAddress (127, 0, 0, 1) + args = Cardano.CardanoBlockArgs configFile Nothing + ProtocolInfo{pInfoConfig} <- mkProtocolInfo args + absurd <$> ImmDBServer.run immDBDir sockAddr pInfoConfig -data Opts = Opts { - immDBDir :: FilePath - , port :: Socket.PortNumber +data Opts = Opts + { immDBDir :: FilePath + , port :: Socket.PortNumber , configFile :: FilePath } optsParser :: ParserInfo Opts optsParser = - info (helper <*> parse) $ fullDesc <> progDesc desc - where - desc = "Serve an ImmutableDB via ChainSync and BlockFetch" + info (helper <*> parse) $ fullDesc <> progDesc desc + where + desc = "Serve an ImmutableDB via ChainSync and BlockFetch" - parse = do - immDBDir <- strOption $ mconcat - [ long "db" - , help "Path to the ImmutableDB" - , metavar "PATH" - ] - port <- option auto $ mconcat - [ long "port" - , help "Port to serve on" - , value 3001 - , showDefault - ] - configFile <- strOption $ mconcat - [ long "config" - , help "Path to config file, in the same format as for the node or db-analyser" - , metavar "PATH" - ] - pure Opts {immDBDir, port, configFile} + parse = do + immDBDir <- + strOption $ + mconcat + [ long "db" + , help "Path to the ImmutableDB" + , metavar "PATH" + ] + port <- + option auto $ + mconcat + [ long "port" + , help "Port to serve on" + , value 3001 + , showDefault + ] + configFile <- + strOption $ + mconcat + [ long "config" + , help "Path to config file, in the same format as for the node or db-analyser" + , metavar "PATH" + ] + pure Opts{immDBDir, port, configFile} diff --git a/ouroboros-consensus-cardano/app/snapshot-converter.hs b/ouroboros-consensus-cardano/app/snapshot-converter.hs index 6a4b163dda..9a13c99679 100644 --- a/ouroboros-consensus-cardano/app/snapshot-converter.hs +++ 
b/ouroboros-consensus-cardano/app/snapshot-converter.hs @@ -8,186 +8,193 @@ module Main (main) where -import Cardano.Crypto.Init (cryptoInit) -import Cardano.Tools.DBAnalyser.HasAnalysis (mkProtocolInfo) -import Codec.Serialise -import qualified Control.Monad as Monad -import Control.Monad.Except -import qualified Control.Monad.Trans as Trans (lift) -import Control.ResourceRegistry (ResourceRegistry) -import qualified Control.ResourceRegistry as RR -import Control.Tracer (nullTracer) -import Data.Bifunctor -import qualified Data.ByteString.Builder as BS -import qualified Data.SOP.Dict as Dict -import DBAnalyser.Parsers -import Main.Utf8 -import Options.Applicative -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.Args as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.Lock as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.Snapshots as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory as V2 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq as V2 -import Ouroboros.Consensus.Util.CRC -import Ouroboros.Consensus.Util.IOLike -import System.FilePath (splitFileName) -import System.FS.API -import System.FS.API.Lazy -import System.FS.CRC -import System.FS.IO -import System.IO.Temp +import Cardano.Crypto.Init (cryptoInit) +import Cardano.Tools.DBAnalyser.HasAnalysis 
(mkProtocolInfo) +import Codec.Serialise +import Control.Monad qualified as Monad +import Control.Monad.Except +import Control.Monad.Trans qualified as Trans (lift) +import Control.ResourceRegistry (ResourceRegistry) +import Control.ResourceRegistry qualified as RR +import Control.Tracer (nullTracer) +import DBAnalyser.Parsers +import Data.Bifunctor +import Data.ByteString.Builder qualified as BS +import Data.SOP.Dict qualified as Dict +import Main.Utf8 +import Options.Applicative +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.LedgerDB.V1.Args qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.Lock qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.Snapshots qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory qualified as V2 +import Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq qualified as V2 +import Ouroboros.Consensus.Util.CRC +import Ouroboros.Consensus.Util.IOLike +import System.FS.API +import System.FS.API.Lazy +import System.FS.CRC +import System.FS.IO +import System.FilePath (splitFileName) +import System.IO.Temp data Format - = Legacy - | Mem - | LMDB - deriving (Show, Read) + = Legacy + | Mem + | LMDB + deriving (Show, Read) data Config = Config - { from :: Format - -- ^ Which format the input snapshot is in - , inpath :: FilePath - -- ^ Path to the input snapshot - , to :: Format - -- ^ 
Which format the output snapshot must be in - , outpath :: FilePath - -- ^ Path to the output snapshot - } + { from :: Format + -- ^ Which format the input snapshot is in + , inpath :: FilePath + -- ^ Path to the input snapshot + , to :: Format + -- ^ Which format the output snapshot must be in + , outpath :: FilePath + -- ^ Path to the output snapshot + } getCommandLineConfig :: IO (Config, BlockType) getCommandLineConfig = - execParser $ - info - ((,) <$> parseConfig <*> blockTypeParser <**> helper) - (fullDesc <> progDesc "Utility for converting snapshots to and from UTxO-HD") + execParser $ + info + ((,) <$> parseConfig <*> blockTypeParser <**> helper) + (fullDesc <> progDesc "Utility for converting snapshots to and from UTxO-HD") parseConfig :: Parser Config parseConfig = - Config - <$> argument - auto - ( mconcat - [ help "From format (Legacy, Mem or LMDB)" - , metavar "FORMAT-IN" - ] - ) - <*> strArgument - ( mconcat - [ help "Input dir/file. Use relative paths like ./100007913" - , metavar "PATH-IN" - ] - ) - - <*> argument - auto - ( mconcat - [ help "To format (Legacy, Mem or LMDB)" - , metavar "FORMAT-OUT" - ] - ) - <*> strArgument - ( mconcat - [ help "Output dir/file Use relative paths like ./100007913" - , metavar "PATH-OUT" - ] - ) + Config + <$> argument + auto + ( mconcat + [ help "From format (Legacy, Mem or LMDB)" + , metavar "FORMAT-IN" + ] + ) + <*> strArgument + ( mconcat + [ help "Input dir/file. 
Use relative paths like ./100007913"
+          , metavar "PATH-IN"
+          ]
+      )
+    <*> argument
+      auto
+      ( mconcat
+          [ help "To format (Legacy, Mem or LMDB)"
+          , metavar "FORMAT-OUT"
+          ]
+      )
+    <*> strArgument
+      ( mconcat
+          [ help "Output dir/file Use relative paths like ./100007913"
+          , metavar "PATH-OUT"
+          ]
+      )

 -- Helpers

 pathToDiskSnapshot :: FilePath -> Maybe (SomeHasFS IO, FsPath, DiskSnapshot)
-pathToDiskSnapshot path = (SomeHasFS $ ioHasFS $ MountPoint dir, mkFsPath [file],) <$> snapshotFromPath file
-  where
-    (dir, file) = splitFileName path
+pathToDiskSnapshot path = (SomeHasFS $ ioHasFS $ MountPoint dir, mkFsPath [file],) <$> snapshotFromPath file
+ where
+  (dir, file) = splitFileName path

 defaultLMDBLimits :: V1.LMDBLimits
 defaultLMDBLimits =
-  V1.LMDBLimits
-    { V1.lmdbMapSize = 16 * 1024 * 1024 * 1024
-    , V1.lmdbMaxDatabases = 10
-    , V1.lmdbMaxReaders = 16
-    }
+  V1.LMDBLimits
+    { V1.lmdbMapSize = 16 * 1024 * 1024 * 1024
+    , V1.lmdbMaxDatabases = 10
+    , V1.lmdbMaxReaders = 16
+    }

 data Error blk
-  = SnapshotError (SnapshotFailure blk)
-  | TablesCantDeserializeError DeserialiseFailure
-  | TablesTrailingBytes
-  | SnapshotFormatMismatch Format String
-  | ReadSnapshotCRCError FsPath CRCError
-  deriving Exception
+  = SnapshotError (SnapshotFailure blk)
+  | TablesCantDeserializeError DeserialiseFailure
+  | TablesTrailingBytes
+  | SnapshotFormatMismatch Format String
+  | ReadSnapshotCRCError FsPath CRCError
+  deriving Exception

 instance StandardHash blk => Show (Error blk) where
-  show (SnapshotError err) = "Couldn't deserialize the snapshot. Are you running the same node version that created the snapshot? " <> show err
-  show (TablesCantDeserializeError err) = "Couldn't deserialize the tables: " <> show err
-  show TablesTrailingBytes = "Malformed tables, there are trailing bytes!"
- show (SnapshotFormatMismatch expected err) = "The input snapshot does not seem to correspond to the input format:\n\t" <> show expected <> "\n\tThe provided path " <> err - show (ReadSnapshotCRCError fp err) = "An error occurred while reading the snapshot checksum at " <> show fp <> ": \n\t" <> show err + show (SnapshotError err) = + "Couldn't deserialize the snapshot. Are you running the same node version that created the snapshot? " + <> show err + show (TablesCantDeserializeError err) = "Couldn't deserialize the tables: " <> show err + show TablesTrailingBytes = "Malformed tables, there are trailing bytes!" + show (SnapshotFormatMismatch expected err) = + "The input snapshot does not seem to correspond to the input format:\n\t" + <> show expected + <> "\n\tThe provided path " + <> err + show (ReadSnapshotCRCError fp err) = "An error occurred while reading the snapshot checksum at " <> show fp <> ": \n\t" <> show err checkSnapshotFileStructure :: Format -> FsPath -> SomeHasFS IO -> ExceptT (Error blk) IO () checkSnapshotFileStructure m p (SomeHasFS fs) = case m of - Legacy -> want (doesFileExist fs) p "is NOT a file" - Mem -> newFormatCheck "tvar" - LMDB -> newFormatCheck "data.mdb" - where - want :: (FsPath -> IO Bool) -> FsPath -> String -> ExceptT (Error blk) IO () - want fileType path err = do - exists <- Trans.lift $ fileType path - Monad.unless exists $ throwError $ SnapshotFormatMismatch m err + Legacy -> want (doesFileExist fs) p "is NOT a file" + Mem -> newFormatCheck "tvar" + LMDB -> newFormatCheck "data.mdb" + where + want :: (FsPath -> IO Bool) -> FsPath -> String -> ExceptT (Error blk) IO () + want fileType path err = do + exists <- Trans.lift $ fileType path + Monad.unless exists $ throwError $ SnapshotFormatMismatch m err - isDir = (doesDirectoryExist, [], "is NOT a directory") - hasTablesDir = (doesDirectoryExist, ["tables"], "DOES NOT contain a \"tables\" directory") - hasState = (doesFileExist, ["state"], "DOES NOT contain a \"state\" file") - 
hasTables tb = (doesFileExist, ["tables", tb], "DOES NOT contain a \"tables/" <> tb <> "\" file") + isDir = (doesDirectoryExist, [], "is NOT a directory") + hasTablesDir = (doesDirectoryExist, ["tables"], "DOES NOT contain a \"tables\" directory") + hasState = (doesFileExist, ["state"], "DOES NOT contain a \"state\" file") + hasTables tb = (doesFileExist, ["tables", tb], "DOES NOT contain a \"tables/" <> tb <> "\" file") - newFormatCheck tb = - mapM_ - (\(doCheck, extra, err) -> want (doCheck fs) (p mkFsPath extra) err) - [ isDir - , hasTablesDir - , hasState - , hasTables tb - ] + newFormatCheck tb = + mapM_ + (\(doCheck, extra, err) -> want (doCheck fs) (p mkFsPath extra) err) + [ isDir + , hasTablesDir + , hasState + , hasTables tb + ] load :: - forall blk. - ( LedgerDbSerialiseConstraints blk - , CanStowLedgerTables (LedgerState blk) - , LedgerSupportsProtocol blk - , LedgerSupportsLedgerDB blk - ) - => Config - -> ResourceRegistry IO - -> CodecConfig blk - -> FilePath - -> ExceptT (Error blk) IO (ExtLedgerState blk EmptyMK, LedgerTables (ExtLedgerState blk) ValuesMK) + forall blk. + ( LedgerDbSerialiseConstraints blk + , CanStowLedgerTables (LedgerState blk) + , LedgerSupportsProtocol blk + , LedgerSupportsLedgerDB blk + ) => + Config -> + ResourceRegistry IO -> + CodecConfig blk -> + FilePath -> + ExceptT (Error blk) IO (ExtLedgerState blk EmptyMK, LedgerTables (ExtLedgerState blk) ValuesMK) load config@Config{inpath = pathToDiskSnapshot -> Just (fs@(SomeHasFS hasFS), path, ds)} rr ccfg tempFP = case from config of Legacy -> do checkSnapshotFileStructure Legacy path fs (st, checksumAsRead) <- - first unstowLedgerTables - <$> withExceptT - (SnapshotError . InitFailureRead . ReadSnapshotFailed) - (readExtLedgerState fs (decodeDiskExtLedgerState ccfg) decode path) + first unstowLedgerTables + <$> withExceptT + (SnapshotError . InitFailureRead . 
ReadSnapshotFailed) + (readExtLedgerState fs (decodeDiskExtLedgerState ccfg) decode path) let crcPath = path <.> "checksum" crcFileExists <- Trans.lift $ doesFileExist hasFS crcPath Monad.when crcFileExists $ do snapshotCRC <- - withExceptT (ReadSnapshotCRCError crcPath) $ - readCRC hasFS crcPath + withExceptT (ReadSnapshotCRCError crcPath) $ + readCRC hasFS crcPath Monad.when (checksumAsRead /= snapshotCRC) $ - throwError $ SnapshotError $ InitFailureRead ReadSnapshotDataCorruption + throwError $ + SnapshotError $ + InitFailureRead ReadSnapshotDataCorruption pure (forgetLedgerTables st, projectLedgerTables st) Mem -> do checkSnapshotFileStructure Mem path fs @@ -199,29 +206,34 @@ load config@Config{inpath = pathToDiskSnapshot -> Just (fs@(SomeHasFS hasFS), pa ((dbch, bstore), _) <- withExceptT SnapshotError $ V1.loadSnapshot - nullTracer - (V1.LMDBBackingStoreArgs tempFP defaultLMDBLimits Dict.Dict) - ccfg - (V1.SnapshotsFS fs) - ds + nullTracer + (V1.LMDBBackingStoreArgs tempFP defaultLMDBLimits Dict.Dict) + ccfg + (V1.SnapshotsFS fs) + ds (V1.current dbch,) <$> Trans.lift (V1.bsReadAll bstore (V1.changelogLastFlushedState dbch)) load _ _ _ _ = error "Malformed input path!" 
store :: - ( LedgerDbSerialiseConstraints blk - , CanStowLedgerTables (LedgerState blk) - , LedgerSupportsProtocol blk - , LedgerSupportsLedgerDB blk - ) - => Config - -> CodecConfig blk - -> (ExtLedgerState blk EmptyMK, LedgerTables (ExtLedgerState blk) ValuesMK) - -> SomeHasFS IO - -> IO () + ( LedgerDbSerialiseConstraints blk + , CanStowLedgerTables (LedgerState blk) + , LedgerSupportsProtocol blk + , LedgerSupportsLedgerDB blk + ) => + Config -> + CodecConfig blk -> + (ExtLedgerState blk EmptyMK, LedgerTables (ExtLedgerState blk) ValuesMK) -> + SomeHasFS IO -> + IO () store config@Config{outpath = pathToDiskSnapshot -> Just (fs@(SomeHasFS hasFS), path, DiskSnapshot _ suffix)} ccfg (state, tbs) tempFS = - case to config of + case to config of Legacy -> do - crc <- writeExtLedgerState fs (encodeDiskExtLedgerState ccfg) path (stowLedgerTables $ state `withLedgerTables` tbs) + crc <- + writeExtLedgerState + fs + (encodeDiskExtLedgerState ccfg) + path + (stowLedgerTables $ state `withLedgerTables` tbs) withFile hasFS (path <.> "checksum") (WriteMode MustBeNew) $ \h -> Monad.void $ hPutAll hasFS h . BS.toLazyByteString . BS.word32HexFixed $ getCRC crc Mem -> do @@ -231,28 +243,34 @@ store config@Config{outpath = pathToDiskSnapshot -> Just (fs@(SomeHasFS hasFS), LMDB -> do chlog <- newTVarIO (V1.empty state) lock <- V1.mkLedgerDBLock - bs <- V1.newLMDBBackingStore nullTracer defaultLMDBLimits (V1.LiveLMDBFS tempFS) (V1.SnapshotsFS fs) (V1.InitFromValues (pointSlot $ getTip state) state tbs) + bs <- + V1.newLMDBBackingStore + nullTracer + defaultLMDBLimits + (V1.LiveLMDBFS tempFS) + (V1.SnapshotsFS fs) + (V1.InitFromValues (pointSlot $ getTip state) state tbs) Monad.void $ V1.withReadLock lock $ do V1.takeSnapshot chlog ccfg nullTracer (V1.SnapshotsFS fs) bs suffix store _ _ _ _ = error "Malformed output path!" 
main :: IO () main = withStdTerminalHandles $ do - cryptoInit - (conf, blocktype) <- getCommandLineConfig - case blocktype of - ByronBlock args -> run conf args - ShelleyBlock args -> run conf args - CardanoBlock args -> run conf args - where - run conf args = do - ccfg <- configCodec . pInfoConfig <$> mkProtocolInfo args - withSystemTempDirectory "lmdb" $ \dir -> do - let tempFS = SomeHasFS $ ioHasFS $ MountPoint dir - RR.withRegistry $ \rr -> do - putStrLn "Loading snapshot..." - state <- either throwIO pure =<< runExceptT (load conf rr ccfg dir) - putStrLn "Loaded snapshot" - putStrLn "Writing snapshot..." - store conf ccfg state tempFS - putStrLn "Written snapshot" + cryptoInit + (conf, blocktype) <- getCommandLineConfig + case blocktype of + ByronBlock args -> run conf args + ShelleyBlock args -> run conf args + CardanoBlock args -> run conf args + where + run conf args = do + ccfg <- configCodec . pInfoConfig <$> mkProtocolInfo args + withSystemTempDirectory "lmdb" $ \dir -> do + let tempFS = SomeHasFS $ ioHasFS $ MountPoint dir + RR.withRegistry $ \rr -> do + putStrLn "Loading snapshot..." + state <- either throwIO pure =<< runExceptT (load conf rr ccfg dir) + putStrLn "Loaded snapshot" + putStrLn "Writing snapshot..." 
+ store conf ccfg state tempFS + putStrLn "Written snapshot" diff --git a/ouroboros-consensus-cardano/ouroboros-consensus-cardano.cabal b/ouroboros-consensus-cardano/ouroboros-consensus-cardano.cabal index 86fa8bfaf0..4cfe9cfe98 100644 --- a/ouroboros-consensus-cardano/ouroboros-consensus-cardano.cabal +++ b/ouroboros-consensus-cardano/ouroboros-consensus-cardano.cabal @@ -34,6 +34,7 @@ flag asserts common common-lib default-language: Haskell2010 + default-extensions: ImportQualifiedPost ghc-options: -Wall -Wcompat @@ -45,6 +46,7 @@ common common-lib -Wmissing-export-lists -Wunused-packages -Wno-unticked-promoted-constructors + -Wprepositive-qualified-module if flag(asserts) ghc-options: -fno-ignore-asserts diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/ByronHFC.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/ByronHFC.hs index 6761c762ea..7666c802af 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/ByronHFC.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/ByronHFC.hs @@ -11,43 +11,42 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Byron.ByronHFC ( - ByronBlockHFC +module Ouroboros.Consensus.Byron.ByronHFC + ( ByronBlockHFC , ByronPartialLedgerConfig (..) 
) where -import Cardano.Binary -import qualified Cardano.Chain.Common as CC -import qualified Cardano.Chain.Genesis as CC.Genesis -import qualified Cardano.Chain.Update as CC.Update -import Control.Monad -import qualified Data.Map.Strict as Map -import Data.Maybe (listToMaybe, mapMaybe) -import Data.MemPack -import Data.SOP.Index (Index (..)) -import Data.Void (Void, absurd) -import Data.Word -import GHC.Generics -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger -import qualified Ouroboros.Consensus.Byron.Ledger.Inspect as Byron.Inspect -import Ouroboros.Consensus.Byron.Node () -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Degenerate -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common -import Ouroboros.Consensus.HardFork.Simple -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Protocol.PBFT (PBft, PBftCrypto) -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.IndexedMemPack +import Cardano.Binary +import Cardano.Chain.Common qualified as CC +import Cardano.Chain.Genesis qualified as CC.Genesis +import Cardano.Chain.Update qualified as CC.Update +import Control.Monad +import Data.Map.Strict qualified as Map +import Data.Maybe (listToMaybe, mapMaybe) +import Data.MemPack +import Data.SOP.Index (Index (..)) +import Data.Void (Void, absurd) +import Data.Word +import GHC.Generics +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Ledger.Inspect qualified as Byron.Inspect +import Ouroboros.Consensus.Byron.Node () +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator +import 
Ouroboros.Consensus.HardFork.Combinator.Degenerate +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common +import Ouroboros.Consensus.HardFork.Simple +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Protocol.PBFT (PBft, PBftCrypto) +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.IndexedMemPack {------------------------------------------------------------------------------- Synonym for convenience @@ -62,11 +61,12 @@ type ByronBlockHFC = HardForkBlock '[ByronBlock] instance ImmutableEraParams ByronBlock where immutableEraParams cfg = - byronEraParamsNeverHardForks (byronGenesisConfig (configBlock cfg)) + byronEraParamsNeverHardForks (byronGenesisConfig (configBlock cfg)) instance NoHardForks ByronBlock where - toPartialLedgerConfig _ cfg = ByronPartialLedgerConfig { - byronLedgerConfig = cfg + toPartialLedgerConfig _ cfg = + ByronPartialLedgerConfig + { byronLedgerConfig = cfg , byronTriggerHardFork = TriggerHardForkNotDuringThisExecution } @@ -79,11 +79,11 @@ instance NoHardForks ByronBlock where -- 'ByronBlock'. instance SupportedNetworkProtocolVersion ByronBlockHFC where supportedNodeToNodeVersions _ = - Map.map HardForkNodeToNodeDisabled $ + Map.map HardForkNodeToNodeDisabled $ supportedNodeToNodeVersions (Proxy @ByronBlock) supportedNodeToClientVersions _ = - Map.map HardForkNodeToClientDisabled $ + Map.map HardForkNodeToClientDisabled $ supportedNodeToClientVersions (Proxy @ByronBlock) latestReleasedNodeVersion = latestReleasedNodeVersionDefault @@ -99,16 +99,16 @@ instance SerialiseConstraintsHFC ByronBlock -- existing Byron blocks. 
instance SerialiseHFC '[ByronBlock] where encodeDiskHfcBlock (DegenCodecConfig ccfg) (DegenBlock b) = - encodeDisk ccfg b + encodeDisk ccfg b decodeDiskHfcBlock (DegenCodecConfig ccfg) = - fmap DegenBlock <$> decodeDisk ccfg + fmap DegenBlock <$> decodeDisk ccfg reconstructHfcPrefixLen _ = - reconstructPrefixLen (Proxy @(Header ByronBlock)) + reconstructPrefixLen (Proxy @(Header ByronBlock)) reconstructHfcNestedCtxt _ prefix blockSize = - mapSomeNestedCtxt NCZ $ - reconstructNestedCtxt (Proxy @(Header ByronBlock)) prefix blockSize + mapSomeNestedCtxt NCZ $ + reconstructNestedCtxt (Proxy @(Header ByronBlock)) prefix blockSize getHfcBinaryBlockInfo (DegenBlock b) = - getBinaryBlockInfo b + getBinaryBlockInfo b {------------------------------------------------------------------------------- Figure out the transition point for Byron @@ -156,77 +156,79 @@ instance SerialiseHFC '[ByronBlock] where a Shelley ledger at that point. -------------------------------------------------------------------------------} -byronTransition :: PartialLedgerConfig ByronBlock - -> Word16 -- ^ Shelley major protocol version - -> LedgerState ByronBlock mk - -> Maybe EpochNo +byronTransition :: + PartialLedgerConfig ByronBlock -> + -- | Shelley major protocol version + Word16 -> + LedgerState ByronBlock mk -> + Maybe EpochNo byronTransition partialConfig shelleyMajorVersion state = - takeAny + takeAny . mapMaybe isTransitionToShelley . 
Byron.Inspect.protocolUpdates lConfig $ state - where - ByronPartialLedgerConfig lConfig _ = partialConfig - ByronTransitionInfo transitionInfo = byronLedgerTransition state - - k = CC.Genesis.gdK $ CC.Genesis.configGenesisData lConfig - - isTransitionToShelley :: Byron.Inspect.ProtocolUpdate -> Maybe EpochNo - isTransitionToShelley update = do - guard $ CC.Update.pvMajor version == shelleyMajorVersion - case Byron.Inspect.protocolUpdateState update of - Byron.Inspect.UpdateCandidate _becameCandidateSlotNo adoptedIn -> do - becameCandidateBlockNo <- Map.lookup version transitionInfo - guard $ isReallyStable becameCandidateBlockNo - return adoptedIn - Byron.Inspect.UpdateStableCandidate adoptedIn -> - -- If the Byron ledger thinks it's stable, it's _definitely_ stable - return adoptedIn - _otherwise -> - -- The proposal isn't yet a candidate, never mind a stable one - mzero - where - version :: CC.Update.ProtocolVersion - version = Byron.Inspect.protocolUpdateVersion update - - -- Normally, stability in the ledger is defined in terms of slots, not - -- blocks. Byron considers the proposal to be stable after the slot is more - -- than @2k@ old. That is not wrong: after @2k@, the block indeed is stable. - -- - -- Unfortunately, this means that the /conclusion about stability itself/ - -- is /not/ stable: if we were to switch to a denser fork, we might change - -- our mind (on the sparse chain we thought the block was already stable, - -- but on the dense chain we conclude it is it not yet stable). - -- - -- It is unclear at the moment if this presents a problem; the HFC assumes - -- monotonicity of timing info, in the sense that that any slot/time - -- conversions are either unknown or else not subject to rollback. - -- The problem sketched above might mean that we can go from "conversion - -- known" to "conversion unknown", but then when we go back again to - -- "conversion known", we /are/ guaranteed that we'd get the same answer. 
- -- - -- Rather than trying to analyse this subtle problem, we instead base - -- stability on block numbers; after the block is `k` deep, we know for sure - -- that it is stable, and moreover, no matter which chain we switch to, that - -- will remain to be the case. - -- - -- The Byron 'UpdateState' records the 'SlotNo' of the block in which the - -- proposal became a candidate (i.e., when the last required endorsement - -- came in). That doesn't tell us very much, we need to know the block - -- number; that's precisely what the 'ByronTransition' part of the Byron - -- state tells us. - isReallyStable :: BlockNo -> Bool - isReallyStable (BlockNo bno) = distance >= CC.unBlockCount k - where - distance :: Word64 - distance = case byronLedgerTipBlockNo state of - Origin -> bno + 1 - NotOrigin (BlockNo tip) -> tip - bno - - -- We only expect a single proposal that updates to Shelley, but in case - -- there are multiple, any one will do - takeAny :: [a] -> Maybe a - takeAny = listToMaybe + where + ByronPartialLedgerConfig lConfig _ = partialConfig + ByronTransitionInfo transitionInfo = byronLedgerTransition state + + k = CC.Genesis.gdK $ CC.Genesis.configGenesisData lConfig + + isTransitionToShelley :: Byron.Inspect.ProtocolUpdate -> Maybe EpochNo + isTransitionToShelley update = do + guard $ CC.Update.pvMajor version == shelleyMajorVersion + case Byron.Inspect.protocolUpdateState update of + Byron.Inspect.UpdateCandidate _becameCandidateSlotNo adoptedIn -> do + becameCandidateBlockNo <- Map.lookup version transitionInfo + guard $ isReallyStable becameCandidateBlockNo + return adoptedIn + Byron.Inspect.UpdateStableCandidate adoptedIn -> + -- If the Byron ledger thinks it's stable, it's _definitely_ stable + return adoptedIn + _otherwise -> + -- The proposal isn't yet a candidate, never mind a stable one + mzero + where + version :: CC.Update.ProtocolVersion + version = Byron.Inspect.protocolUpdateVersion update + + -- Normally, stability in the ledger is defined in 
terms of slots, not
+ -- blocks. Byron considers the proposal to be stable after the slot is more
+ -- than @2k@ old. That is not wrong: after @2k@, the block indeed is stable.
+ --
+ -- Unfortunately, this means that the /conclusion about stability itself/
+ -- is /not/ stable: if we were to switch to a denser fork, we might change
+ -- our mind (on the sparse chain we thought the block was already stable,
+ -- but on the dense chain we conclude it is not yet stable).
+ --
+ -- It is unclear at the moment if this presents a problem; the HFC assumes
+ -- monotonicity of timing info, in the sense that any slot/time
+ -- conversions are either unknown or else not subject to rollback.
+ -- The problem sketched above might mean that we can go from "conversion
+ -- known" to "conversion unknown", but then when we go back again to
+ -- "conversion known", we /are/ guaranteed that we'd get the same answer.
+ --
+ -- Rather than trying to analyse this subtle problem, we instead base
+ -- stability on block numbers; after the block is `k` deep, we know for sure
+ -- that it is stable, and moreover, no matter which chain we switch to, that
+ -- will remain to be the case.
+ --
+ -- The Byron 'UpdateState' records the 'SlotNo' of the block in which the
+ -- proposal became a candidate (i.e., when the last required endorsement
+ -- came in). That doesn't tell us very much, we need to know the block
+ -- number; that's precisely what the 'ByronTransition' part of the Byron
+ -- state tells us. 
+ isReallyStable :: BlockNo -> Bool + isReallyStable (BlockNo bno) = distance >= CC.unBlockCount k + where + distance :: Word64 + distance = case byronLedgerTipBlockNo state of + Origin -> bno + 1 + NotOrigin (BlockNo tip) -> tip - bno + + -- We only expect a single proposal that updates to Shelley, but in case + -- there are multiple, any one will do + takeAny :: [a] -> Maybe a + takeAny = listToMaybe {------------------------------------------------------------------------------- SingleEraBlock Byron @@ -234,46 +236,47 @@ byronTransition partialConfig shelleyMajorVersion state = instance SingleEraBlock ByronBlock where singleEraTransition pcfg _eraParams _eraStart ledgerState = - case byronTriggerHardFork pcfg of - TriggerHardForkNotDuringThisExecution -> Nothing - TriggerHardForkAtEpoch epoch -> Just epoch - TriggerHardForkAtVersion shelleyMajorVersion -> - byronTransition - pcfg - shelleyMajorVersion - ledgerState - - singleEraInfo _ = SingleEraInfo { - singleEraName = "Byron" - } + case byronTriggerHardFork pcfg of + TriggerHardForkNotDuringThisExecution -> Nothing + TriggerHardForkAtEpoch epoch -> Just epoch + TriggerHardForkAtVersion shelleyMajorVersion -> + byronTransition + pcfg + shelleyMajorVersion + ledgerState + + singleEraInfo _ = + SingleEraInfo + { singleEraName = "Byron" + } instance PBftCrypto bc => HasPartialConsensusConfig (PBft bc) - -- Use defaults + +-- Use defaults -- | When Byron is part of the hard-fork combinator, we use the partial ledger -- config. Standalone Byron uses the regular ledger config. This means that -- the partial ledger config is the perfect place to store the trigger -- condition for the hard fork to Shelley, as we don't have to modify the -- ledger config for standalone Byron. 
-data ByronPartialLedgerConfig = ByronPartialLedgerConfig {
- byronLedgerConfig :: !(LedgerConfig ByronBlock)
- , byronTriggerHardFork :: !TriggerHardFork
- }
+data ByronPartialLedgerConfig = ByronPartialLedgerConfig
+ { byronLedgerConfig :: !(LedgerConfig ByronBlock)
+ , byronTriggerHardFork :: !TriggerHardFork
+ }
 deriving (Show, Generic, NoThunks)

 instance HasPartialLedgerConfig ByronBlock where
- type PartialLedgerConfig ByronBlock = ByronPartialLedgerConfig
+ type PartialLedgerConfig ByronBlock = ByronPartialLedgerConfig
 completeLedgerConfig _ _ = byronLedgerConfig

 instance SerialiseNodeToClient ByronBlock ByronPartialLedgerConfig where
- encodeNodeToClient ccfg version (ByronPartialLedgerConfig lconfig triggerhf)
- = mconcat [
- encodeListLen 2
- , toCBOR @(LedgerConfig ByronBlock) lconfig
- , encodeNodeToClient ccfg version triggerhf
- ]
+ encodeNodeToClient ccfg version (ByronPartialLedgerConfig lconfig triggerhf) =
+ mconcat
+ [ encodeListLen 2
+ , toCBOR @(LedgerConfig ByronBlock) lconfig
+ , encodeNodeToClient ccfg version triggerhf
+ ]
 decodeNodeToClient ccfg version = do
 enforceSize "ByronPartialLedgerConfig" 2
 ByronPartialLedgerConfig
@@ -285,36 +288,40 @@ instance SerialiseNodeToClient ByronBlock ByronPartialLedgerConfig where
 -------------------------------------------------------------------------------}

 instance HasCanonicalTxIn '[ByronBlock] where
- newtype instance CanonicalTxIn '[ByronBlock] = ByronHFCTxIn {
- getByronHFCTxIn :: Void
+ newtype CanonicalTxIn '[ByronBlock] = ByronHFCTxIn
+ { getByronHFCTxIn :: Void
 }
 deriving stock (Show, Eq, Ord)
 deriving newtype (NoThunks, MemPack)

- injectCanonicalTxIn IZ key = absurd key
+ injectCanonicalTxIn IZ key = absurd key
 injectCanonicalTxIn (IS idx') _ = case idx' of {}

 ejectCanonicalTxIn _ key = absurd $ getByronHFCTxIn key

 instance HasHardForkTxOut '[ByronBlock] where
- type instance HardForkTxOut '[ByronBlock] = Void
- injectHardForkTxOut IZ txout = absurd txout
- injectHardForkTxOut (IS idx') _ = case idx' of {}
- ejectHardForkTxOut IZ txout = absurd txout
- 
ejectHardForkTxOut (IS idx') _ = case idx' of {} + type HardForkTxOut '[ByronBlock] = Void + injectHardForkTxOut IZ txout = absurd txout + injectHardForkTxOut (IS idx') _ = case idx' of {} + ejectHardForkTxOut IZ txout = absurd txout + ejectHardForkTxOut (IS idx') _ = case idx' of {} -deriving via Void - instance IndexedMemPack (LedgerState (HardForkBlock '[ByronBlock]) EmptyMK) Void +deriving via + Void + instance + IndexedMemPack (LedgerState (HardForkBlock '[ByronBlock]) EmptyMK) Void instance BlockSupportsHFLedgerQuery '[ByronBlock] where - answerBlockQueryHFLookup IZ _cfg (q :: BlockQuery ByronBlock QFLookupTables result) _dlv = case q of {} + answerBlockQueryHFLookup IZ _cfg (q :: BlockQuery ByronBlock QFLookupTables result) _dlv = case q of {} answerBlockQueryHFLookup (IS is) _cfg _q _dlv = case is of {} - answerBlockQueryHFTraverse IZ _cfg (q :: BlockQuery ByronBlock QFTraverseTables result) _dlv = case q of {} + answerBlockQueryHFTraverse IZ _cfg (q :: BlockQuery ByronBlock QFTraverseTables result) _dlv = case q of {} answerBlockQueryHFTraverse (IS is) _cfg _q _dlv = case is of {} queryLedgerGetTraversingFilter IZ (q :: BlockQuery ByronBlock QFTraverseTables result) = case q of {} queryLedgerGetTraversingFilter (IS is) _q = case is of {} -deriving via TrivialLedgerTables (LedgerState (HardForkBlock '[ByronBlock])) - instance SerializeTablesWithHint (LedgerState (HardForkBlock '[ByronBlock])) +deriving via + TrivialLedgerTables (LedgerState (HardForkBlock '[ByronBlock])) + instance + SerializeTablesWithHint (LedgerState (HardForkBlock '[ByronBlock])) diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Crypto/DSIGN.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Crypto/DSIGN.hs index bb8fe671dd..69e24be448 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Crypto/DSIGN.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Crypto/DSIGN.hs @@ -11,33 +11,40 @@ {-# 
LANGUAGE UndecidableInstances #-} -- | Byron digital signatures. -module Ouroboros.Consensus.Byron.Crypto.DSIGN ( - ByronDSIGN +module Ouroboros.Consensus.Byron.Crypto.DSIGN + ( ByronDSIGN , HasSignTag (..) , SigDSIGN (..) , SignKeyDSIGN (..) , VerKeyDSIGN (..) ) where - -import qualified Cardano.Chain.Block as CC.Block -import qualified Cardano.Chain.UTxO as CC.UTxO -import Cardano.Crypto (ProtocolMagicId, SignTag (..), Signature (..), - SigningKey (..), VerificationKey (..), deterministicKeyGen, - signRaw, toVerification, verifySignatureRaw) -import Cardano.Crypto.DSIGN.Class -import Cardano.Crypto.Seed (SeedBytesExhausted (..), getBytesFromSeed) -import qualified Cardano.Crypto.Signing as Crypto -import qualified Cardano.Crypto.Wallet as CC -import Cardano.Ledger.Binary -import Control.Exception (throw) -import Data.ByteString (ByteString) -import Data.Coerce (coerce) -import Data.Proxy (Proxy (..)) -import GHC.Generics (Generic) -import NoThunks.Class (InspectHeapNamed (..), NoThunks) -import Ouroboros.Consensus.Util (eitherToMaybe) -import Ouroboros.Consensus.Util.Condense +import Cardano.Chain.Block qualified as CC.Block +import Cardano.Chain.UTxO qualified as CC.UTxO +import Cardano.Crypto + ( ProtocolMagicId + , SignTag (..) + , Signature (..) + , SigningKey (..) + , VerificationKey (..) 
+ , deterministicKeyGen + , signRaw + , toVerification + , verifySignatureRaw + ) +import Cardano.Crypto.DSIGN.Class +import Cardano.Crypto.Seed (SeedBytesExhausted (..), getBytesFromSeed) +import Cardano.Crypto.Signing qualified as Crypto +import Cardano.Crypto.Wallet qualified as CC +import Cardano.Ledger.Binary +import Control.Exception (throw) +import Data.ByteString (ByteString) +import Data.Coerce (coerce) +import Data.Proxy (Proxy (..)) +import GHC.Generics (Generic) +import NoThunks.Class (InspectHeapNamed (..), NoThunks) +import Ouroboros.Consensus.Util (eitherToMaybe) +import Ouroboros.Consensus.Util.Condense class (HasSignTag a, Decoded a) => ByronSignable a instance (HasSignTag a, Decoded a) => ByronSignable a @@ -45,8 +52,10 @@ instance (HasSignTag a, Decoded a) => ByronSignable a class HasSignTag a where signTag :: VerKeyDSIGN ByronDSIGN -> proxy a -> SignTag -signTagFor :: forall a. HasSignTag a - => VerKeyDSIGN ByronDSIGN -> a -> SignTag +signTagFor :: + forall a. + HasSignTag a => + VerKeyDSIGN ByronDSIGN -> a -> SignTag signTagFor genKey _ = signTag genKey (Proxy @a) instance HasSignTag CC.UTxO.TxSigData where @@ -58,64 +67,63 @@ instance HasSignTag (Annotated CC.Block.ToSign ByteString) where data ByronDSIGN instance DSIGNAlgorithm ByronDSIGN where - - type SeedSizeDSIGN ByronDSIGN = 32 - type SizeVerKeyDSIGN ByronDSIGN = 64 - type SizeSignKeyDSIGN ByronDSIGN = 128 - type SizeSigDSIGN ByronDSIGN = 64 - - algorithmNameDSIGN _ = "ByronDSIGN" - - -- Context required for Byron digital signatures - -- - -- We require the the protocol magic as well as the verification key of the - -- genesis stakeholder of which the signing node is a delegate, which is - -- required for signing blocks. 
- type ContextDSIGN ByronDSIGN = (ProtocolMagicId, VerKeyDSIGN ByronDSIGN) - - newtype VerKeyDSIGN ByronDSIGN = VerKeyByronDSIGN VerificationKey - deriving (Show, Eq, Generic) - deriving NoThunks via InspectHeapNamed "VerKeyDSIGN ByronDSIGN" (VerKeyDSIGN ByronDSIGN) - - newtype SignKeyDSIGN ByronDSIGN = SignKeyByronDSIGN SigningKey - deriving (Show, Generic) - deriving NoThunks via InspectHeapNamed "SignKeyDSIGN ByronDSIGN" (SignKeyDSIGN ByronDSIGN) - - newtype SigDSIGN ByronDSIGN = SigByronDSIGN (Signature CC.Block.ToSign) - deriving (Show, Eq, Generic) - deriving NoThunks via InspectHeapNamed "SigDSIGN ByronDSIGN" (SigDSIGN ByronDSIGN) - - type Signable ByronDSIGN = ByronSignable - - genKeyDSIGN seed = - SignKeyByronDSIGN . snd $ deterministicKeyGen seedBytes - where - seedBytes = case getBytesFromSeed 32 seed of - Just (x,_) -> x - Nothing -> throw $ SeedBytesExhausted (-1) (-1) -- TODO We can't get the seed size! - - deriveVerKeyDSIGN (SignKeyByronDSIGN sk) = VerKeyByronDSIGN $ toVerification sk - - signDSIGN (magic, genKey) a (SignKeyByronDSIGN sk) = - SigByronDSIGN - . coerce - $ signRaw magic (Just $ signTagFor genKey a) sk (recoverBytes a) - - verifyDSIGN (magic, genKey) (VerKeyByronDSIGN vk) a (SigByronDSIGN sig) = - if verifySignatureRaw vk (Crypto.signTag magic (signTagFor genKey a) <> recoverBytes a) $ coerce sig - then Right () - else Left "Verification failed" - - rawSerialiseVerKeyDSIGN (VerKeyByronDSIGN (VerificationKey vk)) = CC.unXPub vk - rawSerialiseSignKeyDSIGN (SignKeyByronDSIGN (SigningKey sk)) = CC.unXPrv sk - rawSerialiseSigDSIGN (SigByronDSIGN (Signature sig)) = CC.unXSignature sig - - rawDeserialiseVerKeyDSIGN bs = - VerKeyByronDSIGN . VerificationKey <$> (eitherToMaybe $ CC.xpub bs) - rawDeserialiseSignKeyDSIGN bs = - SignKeyByronDSIGN . SigningKey <$> (eitherToMaybe $ CC.xprv bs) - rawDeserialiseSigDSIGN bs = - SigByronDSIGN . 
Signature <$> (eitherToMaybe $ CC.xsignature bs)
+ type SeedSizeDSIGN ByronDSIGN = 32
+ type SizeVerKeyDSIGN ByronDSIGN = 64
+ type SizeSignKeyDSIGN ByronDSIGN = 128
+ type SizeSigDSIGN ByronDSIGN = 64
+
+ algorithmNameDSIGN _ = "ByronDSIGN"
+
+ -- Context required for Byron digital signatures
+ --
+ -- We require the protocol magic as well as the verification key of the
+ -- genesis stakeholder of which the signing node is a delegate, which is
+ -- required for signing blocks.
+ type ContextDSIGN ByronDSIGN = (ProtocolMagicId, VerKeyDSIGN ByronDSIGN)
+
+ newtype VerKeyDSIGN ByronDSIGN = VerKeyByronDSIGN VerificationKey
+ deriving (Show, Eq, Generic)
+ deriving NoThunks via InspectHeapNamed "VerKeyDSIGN ByronDSIGN" (VerKeyDSIGN ByronDSIGN)
+
+ newtype SignKeyDSIGN ByronDSIGN = SignKeyByronDSIGN SigningKey
+ deriving (Show, Generic)
+ deriving NoThunks via InspectHeapNamed "SignKeyDSIGN ByronDSIGN" (SignKeyDSIGN ByronDSIGN)
+
+ newtype SigDSIGN ByronDSIGN = SigByronDSIGN (Signature CC.Block.ToSign)
+ deriving (Show, Eq, Generic)
+ deriving NoThunks via InspectHeapNamed "SigDSIGN ByronDSIGN" (SigDSIGN ByronDSIGN)
+
+ type Signable ByronDSIGN = ByronSignable
+
+ genKeyDSIGN seed =
+ SignKeyByronDSIGN . snd $ deterministicKeyGen seedBytes
+ where
+ seedBytes = case getBytesFromSeed 32 seed of
+ Just (x, _) -> x
+ Nothing -> throw $ SeedBytesExhausted (-1) (-1) -- TODO We can't get the seed size!
+
+ deriveVerKeyDSIGN (SignKeyByronDSIGN sk) = VerKeyByronDSIGN $ toVerification sk
+
+ signDSIGN (magic, genKey) a (SignKeyByronDSIGN sk) =
+ SigByronDSIGN
+ . 
coerce + $ signRaw magic (Just $ signTagFor genKey a) sk (recoverBytes a) + + verifyDSIGN (magic, genKey) (VerKeyByronDSIGN vk) a (SigByronDSIGN sig) = + if verifySignatureRaw vk (Crypto.signTag magic (signTagFor genKey a) <> recoverBytes a) $ coerce sig + then Right () + else Left "Verification failed" + + rawSerialiseVerKeyDSIGN (VerKeyByronDSIGN (VerificationKey vk)) = CC.unXPub vk + rawSerialiseSignKeyDSIGN (SignKeyByronDSIGN (SigningKey sk)) = CC.unXPrv sk + rawSerialiseSigDSIGN (SigByronDSIGN (Signature sig)) = CC.unXSignature sig + + rawDeserialiseVerKeyDSIGN bs = + VerKeyByronDSIGN . VerificationKey <$> (eitherToMaybe $ CC.xpub bs) + rawDeserialiseSignKeyDSIGN bs = + SignKeyByronDSIGN . SigningKey <$> (eitherToMaybe $ CC.xprv bs) + rawDeserialiseSigDSIGN bs = + SigByronDSIGN . Signature <$> (eitherToMaybe $ CC.xsignature bs) instance Condense (SigDSIGN ByronDSIGN) where - condense (SigByronDSIGN s) = show s + condense (SigByronDSIGN s) = show s diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/EBBs.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/EBBs.hs index 54be220e54..be5709559b 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/EBBs.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/EBBs.hs @@ -1,340 +1,1304 @@ module Ouroboros.Consensus.Byron.EBBs (knownEBBs) where -import Cardano.Chain.Block (HeaderHash) +import Cardano.Chain.Block (HeaderHash) h :: String -> HeaderHash h = read knownEBBs :: [(HeaderHash, Maybe HeaderHash)] -knownEBBs = [ - -- Mainnet - (h "89d9b5a5b8ddc8d7e5a6795e9774d97faf1efea59b2caf7eaf9f8c5b32059df4", Nothing) - , (h "1941d944df546dea699791c318aeb9cc63b94e4cdb133d79856cda35bf7ecbb1", Just $ h "3bd04916b6bc2ad849d519cfae4ffe3b1a1660c098dbcd3e884073dd54bc8911") - , (h "d46adef46f0760f7b89bd03926a0a631aded69b30379ca0404a0f68ebd2936dc", Just $ h "e9684707f89e93723906ca9b06fd32e8fd297e14ee3c3cb5d79fe0100e244c5c") - , (h 
"7dd0c73651016e0706ee5f18e0b35bc6579e683b25db5e596e09eca9346f0065", Just $ h "7cc07fd783d9cf2d43b7deb1f5169d63e5a35b9bc3bcb97de06c9d6608c37524") - , (h "8360a8d832a4e59c91efb21576a0aa5f3cf185e067da729cdc15331204128685", Just $ h "c793ac68d109921e5d04e9648e8ef112a29aa173f3ac0d40aa27c1aa824f37f1") - , (h "9021389e9fe671b9fd47ea15ff1e17ab37870e30a3053de8f5e60649f6ef0ec8", Just $ h "9b744703908858efdc285f40e67c16bbf9851a5a071028871066d9dad458158a") - , (h "6ca7fd6d27b48e980c9c8f6499c8f783317ae9686f3b8a458125825c7ce32e54", Just $ h "0281ffb72c852242ac921686aa17317743d8b5897c58678a99e171fad6cbc5c0") - , (h "8313ccac635027eb2150c27f417c00541371f0a40cbc5fc714dc6c3b0e93ac19", Just $ h "edaeab3fa50abc2f10d76629dbc72088beab7aec8de77f37cb8439216facfb5d") - , (h "e96a7fa23688a0d0f68bfdacddd1a1976f1c26724c484a6e35099d77d1acf186", Just $ h "0ea08bc5ce983cdf63d04f3f87b02f3505d52dc59222dcaaf89dd7629ade71df") - , (h "b86032b4c1a3b9e5569fccf1573016d6f35efd06a4f1002a4bb97058941dc103", Just $ h "2864f2051e0fdcf9b9aed4f5466ac7481cdcf8b360e3bebd7881b363d63643e7") - , (h "af82845f28b968c49eeb5b9cdb902af357435fe2a2c45dfb9ac7196adf9c1470", Just $ h "ff3f37d545b0a933f7556abdbcc91c096147ebf5332b5e512083906726b2bf19") - , (h "0f27259d20d9c8cc17780b672ec1efeb224acb25ca84d17ec4689325b06d804a", Just $ h "2a5fd14ba10583ba912ec75e03f24179d187ab7f37dcf769558952c954af2b22") - , (h "477691c837f0b7d450f084faa04b22459dac1e643f886c3a8a761d3eab9e7e65", Just $ h "c9da059f0022ae82d9e186a3ea97ee222a7aea27832519576d410789431e91cf") - , (h "93cb7b9cadd44104da662aa17e94169361108fb630ad6bb5bf56d48789f7854c", Just $ h "d734a3305f5b3323b2e664e7beb50c0f26532bd242df239c69b3b934f06522cd") - , (h "39d89a1e837e968ba35370be47cdfcbfd193cd992fdeed557b77c49b77ee59cf", Just $ h "dd5f2d406ea6c62b6e370f89cc39d80e40152d5b1537c41387359e220ddf5f92") - , (h "9df8408f56c0c1d6e7cae2a8f3390ce0fe6a46c5219cbc29ca4c9b9411c0f5e7", Just $ h "180a01126e2ddae64abfdb5366f3335ed0cf4d10273f24c8b681cee38ed98409") - , (h 
"93c52837302a61b9a4975b057b68a4c83ba9bd9a03854d4de8c58a46c8b3d0c5", Just $ h "91e7a18ee35b2e569567b4f1310b82d2bf302ba99890b8ce0b304d648c33a6db") - , (h "c4bf68e443df0abd985d72a5f9a4dfb50d5cca28e9c817812f9a666d357ea165", Just $ h "cbfbb1ab086ff34ef52c66e819c9ed897bc35724018f651b889611fd1777dc67") - , (h "2498d9e98a5aa7e3c70e6aa0f7b5ba2e62d7e3a062a5d5717547a50b784a0c57", Just $ h "45c2185719fb3f94bd112e69bde2c006e593e6d8e06f6175fc351141034a83f7") - , (h "fee8162888525d62f824fdde17e40974cfb69ca01de668aac1e3526a53aab921", Just $ h "faf8656965ea437abeab35829c7fec08d89ae1dcc8aa1ed966cdc1afa3cc0d36") - , (h "6c1825661e4f567ccd62ee774456971941565b66012ba093b0655516a15300a3", Just $ h "5bbc9b954377636cd594af87b8afea5b51837b4fb32b94bd3df065b789a0a1ee") - , (h "513c60b42fad56df2862cff73d8966d9cdfac267f7fed0622dca9cd7af728aaa", Just $ h "7d0a99accb4be05906c8e20253cddda11f5aa6983de853923aad1b7b2233259d") - , (h "49eacedd439e3bc9cffd1c1be7477543c90997d4e50403cedae259eb321f7c50", Just $ h "9e24241dd3d73963b1fc963c515477c9a7ec1e5e79d93ee5b85198117bfcfae3") - , (h "bda92f619fa9ff2c486c493bfa19c061a40f66a1901db30c8651478d5982ab27", Just $ h "e4b9633f586b7c01cd7fd74320c36e653fefbefa824effff0e1fee6d7fc5bc69") - , (h "471ee59bee5cadc22ec85c4519acd4ee6f843eb30b34793db6ba1a9eb0426afb", Just $ h "312ecd199635560c66f55acf3eef0ede3d83ae718150adac5dbcfb81289ad0e0") - , (h "e8945c01028c3811916a50f8a6508daf333377ea5697cda33fbcd13d60b321ad", Just $ h "8b1f645cc74adc4ad5c756e514c9ae0a25d693547ac1f3b82cb8bfa451c740cd") - , (h "d736ce41636ceed47bc7852bf64a1c44a39b727231dbe1a778de96b68e3d1e20", Just $ h "0f7a47cea9c8aae114e2ea39a93eab74a3812aa8db6cd2f5b22209ca33f8b762") - , (h "41d18923991ed04f15abf23ae8c08318c552288da81b499e9397081b353dac2d", Just $ h "254851a6b39c906ec02e9a8a7f06305c0c6700d7f2cb6b33005156bdd5c3ebcf") - , (h "6fd5e5f45485f36b9e70270865f3e8d3360ea38f25338a53725054e53b3b4908", Just $ h "1c26656e7333a07f223e7e1f2350ddc1e4bac0b5245692b95ff1c127bfa5a52f") - , (h 
"b55046da4190b68a18725dae62732c5679d26a2ff53510e38744e5262f6d22f6", Just $ h "535496ca4b7e2ee172f16e9501c353466c30bce792788aa2448ee774e6e49f6e") - , (h "71d7068e981a233cbd1c65d73a04a399e42c78aa44c17b2d971566f5c17b680f", Just $ h "6a644bf448570069374805dcd40187bd99a324830ec086da0d024f732fd2ed60") - , (h "644d7069580dc3ddeded68000199b3606b19e7e04f87a50db530956ff8b98ea6", Just $ h "b6b7971aadd2a16cfaf177cc7b0a2f4650fba667d2a89cc75d0f972893ae75f9") - , (h "b6bb091acdaafe5c2d98b3e3f6e85dcb676bb1866c1b86ce67a4c035f497f3a3", Just $ h "ca737e6f276c57ed2e026091b912914aba12e8a917269f36a4c9261aedcc2a2e") - , (h "2e7d5662ef08e6290b72c29788accf8ddb2365dce7a95698cf0280eb769e3cd7", Just $ h "7984bc3c8b96be0826c5034135f316428212060631706d7c3263854be3dc529f") - , (h "cdd944cde86881147dced0f4d6b7234798bcd93c3e6b1bd34a198c35c8c7948f", Just $ h "5fd0c2cc69431b519a68dbd2e7c70cbcbf2ec7fe1d68cc62980b4ff8940462cc") - , (h "4229695b833c7d3f52521929c96965235b3d9081a2cad3603a76df883f586ad0", Just $ h "77eb7496ff763b4501f97e853ccd886d98e188eb19832a9a2251229517255577") - , (h "9d0290acdfd6dd68ba152f35bd56cb9d4ba89a79f8b1d638d09a4a00c8326dd0", Just $ h "5b64541bd2e23974b95c453e2dff67151fc0f28a73c3b40adba1ae1ee3bab391") - , (h "07fc2b83a768a6f958b7c72454a7bdf84336fc4e8dd6d9c0b71db5a5f004f7cb", Just $ h "07312ef90499f668791cb7c97c9a16c9c3d2d4893a58c48c39f88e796ada7a29") - , (h "409e8e2a08b77a21e65635eee20822d6e801b65acbe10f99f7935ef6a02ddedf", Just $ h "5f8e6368bf04f9d16a7adf5b0e37e8a095df2df848e18cc48984e097af598ecb") - , (h "66ab026f017a4d062297b150e3c330c8431888b274d0ce408f61676e38022ab3", Just $ h "636dcaf68b4a5e4dedca8031d159d5e3bcedb3abd2e434bd838a1d30a6b0fcf1") - , (h "31732323544f560dbeffa0258a0484302035ceb27f806f03a5127db303084900", Just $ h "55503f559d930e17344e395c964f71c33402778c874daf62ba8b97de1e5c3902") - , (h "fbb266aa8f59d5c34429268c396a7b7354ace711ed0e8a54425e4983a826a3e6", Just $ h "0e859fe991e743c30e9870e9135240e98073d96e3282c48c9032cdcf94817edd") - , (h 
"ab49d2db8a22b8de09456272b8e58c8dc2d2b6d17501233fc7c454f2932d5478", Just $ h "06efacff01810b44bd51cb8fd589feca7ddff5c8f222d8c65a9f443871e72e45") - , (h "ce2f8ad0887e647f703b6792c503765ae7a38a35919e4688d58555c02d028d7b", Just $ h "9211bb7008251c1bc2d7863dd06d73de9b6f4a19a5fb40a354613cbaa1c6b5ce") - , (h "f2d1692f23cad89254bc72ad8a069feb1372b41de3d3051f0e97c64fb39f5a0d", Just $ h "209ad054b0d93b5f2db7ecb7a85ab976a4e3fc03ea7b91a7b6dcc8e3740c8ed6") - , (h "3196a0edf8dbc83ae8409f991e74d1b8a0e322d718751c9a448e8d27dc69cf8c", Just $ h "86f0e6f10e4286e7f1997a788468c9bd941bb2fc647d1ffbbce6dc019d264c6b") - , (h "d67759e7149ea5fcc878ee94efa0e706a32a57c3b4694a5f2402430daf41d263", Just $ h "90ffcabccd5fb19e290a65db309a2f7e3200ef31f249350974806439212c34e2") - , (h "3e95d9cf920170e8fc35a87505839af22a6a23e9bc4e3707c9b0894ecf97895b", Just $ h "6f0226bb8a5b53bcf8f22af8fb847c7ccc5859fd299398cf9145538c7e8552bb") - , (h "d9e9ef3277180fd7ad404e3b7d66e049a7fff9da1595fe464be68943dca79510", Just $ h "b05742df56b52c2fbdb9daba0a8d93bd9d1a8f8ff0185922f544b7234460bba6") - , (h "30d20d5423e501de0461846d9f7f5afb09efb013fbb0bf95184d78485b04ff37", Just $ h "f570e79e366464a864a0e0c4c8048da9976a7c1987b4fe49bdd6c697fa926be6") - , (h "3683b42bc7d7f7d0c38eb46763c2c2c4987d7270c1955bc712a379b53de1e835", Just $ h "bea5b0356fd54dad2c7400dc811f67b3ed5956f25d33d04abebe86aacd6e5eb1") - , (h "60d0f38034af88016244a9a5ed52e4ff053867e157537fa27b9a7d6b33c68475", Just $ h "89042f9f40defba0c26b73cbbf9ad10296bf942ef876a3f28d158421579481c6") - , (h "09be08c5c7f495fb23291602cdea3529418aead4f476ec759b8068914615e0a0", Just $ h "fc5ebbf8a921a4bc9f45538196434c4c6fc8f2c1505765b3bfd98c04262327aa") - , (h "ae8e27654b426832e3bde56b0bdabe0d5597000913effb4cb240a0186a327e3f", Just $ h "01774b45045812f5edd64bad41d3d478c2371fd7826690ca031f9d2b7a5a10a5") - , (h "fb58cb86921561b7b862c1bed586421e708e4569548c2c4cfe6bfc4d4aabf5f9", Just $ h "29b91b18aae36dfe13bbc4a08395d1729fd1adf2b9dcb2479a06afa4392b6f59") - , (h 
"716373239cfb7b64488ecbb2c0419bf22de8c187f0d480ffd909e5be9aee4ecf", Just $ h "15c1e7fb3d5a9acf56d6290a94b0c47a9812f8389a8afac02a7b96114d81a965") - , (h "b0c530824f88a5ceb168bdafe9c24661e2098557d60b95788e97c386cfe69ffb", Just $ h "20d715a8f8b2c36f9343a04e6ce8e4bb8c6cc06ed1fe0d6ee48727f47e2b8d97") - , (h "4cd15ce51ff41b3cac040683ce40d73c60a50c6d06c9dbe08bfc351aaf174f0a", Just $ h "51c6774c1704d922050d4730bd8beb1e9083ad60f604a9549d505894774a2cf9") - , (h "e824ee206c29a4c43b907d39146fa339158fd93fd83a5a71605fe278d75d07b7", Just $ h "834d37b04d85ae60f97d231c6041118898b29bb10b4efe832638015f2e4ec040") - , (h "b307ddd09daed4882ad2d94a018958d2f5ead4f52c7b04621109186ca840d59c", Just $ h "37788b6021138903f10c4c9f0627dc0b338c883556506d0a8263f9f5233a409c") - , (h "efad133526785f07025f029b6a51aef45d8e2bd9171cebb71dc5113f7a7927b3", Just $ h "6a5b2a6bb5434e1827f78c6bc943504c69968f1734a721088810f193ef610ce2") - , (h "8c53e50a3c233deff3d15f87394a9f25b23aaad398250420999fb30b73c5d8d5", Just $ h "ec99f7df858441ed05e947bcf88834704686d4892cadee8a0c77eb854fbc1e7d") - , (h "c4ac22a380af80de94cf18348bcc4efae3b8cc4805486ca2fd152a0d4e027b79", Just $ h "445c510e03e5fa070d71b0e64343eb033dea91a0737cc32a552b1a664ed73b71") - , (h "87d5a36a4219d11d4381650559b769e833847149e58b1c543929b30ca7af9bf8", Just $ h "5a713c9596bdb1935f835506d8161ab75e358f658c8bb8f656c0fb13f884672e") - , (h "60b2fec0207fde0f9891f8479e6dc6c6741b513186c84f76bea9f612f6339900", Just $ h "2f0db218ff8c7ddd5fd11c5724e305e069f58c5bc9445c5486004f20ad5c3afa") - , (h "9b03f8143a705b8fb44064e5fb9e8ec2ea8a7a645ef87df8f45f75bc8963c00d", Just $ h "1c7aa475b21b20690b60d318b38ea69bd63f66a3e8949650eba50de31a990060") - , (h "5601474a97a4ab955b894ab25cafdd219042bae83addee8f4c1299b282b8a5e2", Just $ h "e02c4688b977d6655d51e948fd161073d72759b878751137dcab6c42a396f967") - , (h "7a317fda6860b998ab34e6df1d1d8b21ba60d9a2396e992a8ee73bfdcae6d820", Just $ h "6e5da1bf4b2c203438408505fdd6f8970fbf4703570af678a9435f5f44ac089f") - , (h 
"4672dea0a5275519b1ed4a564740e3f85dc0cf98adaeaffac245eb648a06b3f0", Just $ h "fe379bb300c1b57170725554029eac5d802445ea8f326dbf866036d647c6c742") - , (h "fd509e014462437d1786934ec6b622a705aab62318c87107a0f245b4cb404a83", Just $ h "63fdabdebfb527f13dcda8d3b83dec19fd6e5022371032fa9474ada47e830adc") - , (h "7a88baf29982e95f601f90a1e099a373b03362b2d356e91104d9ba5e95d36056", Just $ h "d10ce7039bff2974e6dff683bcd761a5a8134432c95bf46ee07b38ea674e6613") - , (h "34a45aaedae4665afbab1fe2d5a1732ceb3e6d8ae9af0c1083d5eddc7038bb91", Just $ h "86bb7c9f912741e67769e3a97adf091288219056b346ac0459745c31864b6374") - , (h "e325be55e9c30b5ca89357ac6684d9cf8bdae2ecda603f6febb676fdb87741dc", Just $ h "ce496ec6ba3e1c278c633ac7d3a77b8f3018bbc5f0073a364075c11977e6e03d") - , (h "8eaf019d681a364c596f3ef7648d64d05480f11ab24da6add90ddb51e79c5c4b", Just $ h "ec5348295760043ff95eca74d57b2194b41e432c6d7f3fe28c8c6dccf6d13aad") - , (h "e9b830abdbc86d20ed0c6812f2c5b29f730a5e4cf48a2b9d0d99b979b65dcd2b", Just $ h "1b44ce89f19161725de142dacda3d92cc85d9788f9ef3d196597974fd818e4f7") - , (h "5815f7890f407e75ea0c40784aeeb3104a24972495d72c56f8e2bd259b5f6109", Just $ h "d2c25669f9c8ed3eeac855fc34ecbd822f6592634f7efd7ab2d8eb1b8006c622") - , (h "13c26f05091afaf1d2f207f88764255442d861536b256fedf2ee4a4665b2309e", Just $ h "2c759b62c71cea1a8ef6a463c26c1a847b452fdd6bb7ebd675c97cd70284a0f3") - , (h "af80b889bc1b7574d5bd33659727d6ff15977abb0836fc92e28043ec981e7e7c", Just $ h "656b7e2f655e513da98e9edfc496d93e950d1d006abb25349938a8df9fe5b1bf") - , (h "1d0647c60dfc6fcfeb59ea5b4ad3106e8944d5ffec50edcd88f731f88bec0df4", Just $ h "5434c003cc072e45167eb267f0d2f7962aec78360d4008ed1ca7555c62406020") - , (h "e2f5652f71af8b91224390a7133c5220cbc5ec65c2ca0101e9ea8ef49a893474", Just $ h "d96d0e24fc07e8f3f26a8c987687a8773286545b533d7f5d942ca30fb699a187") - , (h "3cfa76d00cd6f6d8e44ee0671ac3914dbe79dd054cf784b523d760c769e80bd4", Just $ h "834b20982356a5a8ba71f462540012eb00b3f7218c700adff749721c111ad44a") - , (h 
"ad4dc9022caf9c96d2ee0bae1f9cd1d871eac1db69129f0a957e61b269b4e6fb", Just $ h "8f94acecd83eb4a2c68c80cf01b37ab3d6c02ace3537eaa71c8142aa5c9e1c5c") - , (h "cb829d6055ef80e7cf94c6b450c1fe96a300697945806b0654e51f0d403c9f5a", Just $ h "1816fefcf5d342b6f7bf34d5264bd8aa25f307174c4a7d78db5e41dfd387b5bf") - , (h "34727a37ef91074928ae386c1c5d2fb76b1b77e1db06d987ed9d5e5675b5aaeb", Just $ h "bfb3e75a8d8ccad1512858ae2bcf02f61cb088d9628a655074f7406eab099064") - , (h "8284fc4fe9f2f5d055b74d09bceb3bef40b8e8641096d52b224e655f03c3db28", Just $ h "b19e4920dabd9668427d930f94ee866554b4f63e37dde2dae6bc05e343366bb2") - , (h "c0d9bf97f9f2fed3265487bd855daf8f726019f4641d5eaa7a97b1af2df93f83", Just $ h "a0c47e9af3902f7107ffd14487d02f888565585d789e2ba30137c1fa11b7771e") - , (h "ced2cc28ab6335a3f970050957124c24359068b2ec21c2362e1a7381b6bef44a", Just $ h "b9937c338033d1ab210e72e0850af45b64eeec6f654fe537fce710a2dab1ac88") - , (h "290b6b1aa44ad40fb49f56e22d06a686fda1c12eed7561ea678a5d51e6aca39c", Just $ h "bab8424e827362187f702f7c963aabc1d79cd0d06e0bbd5b5a516828c2a7dacf") - , (h "a6522cfd3724279d9f90513ea7493ee218bc4700f4171c44e92798dd600ab61d", Just $ h "945708db11e2d3daaf2508de6386c0d1864a5e7bc45a777f7180da4f3947624f") - , (h "0d76d3846921fd1802712128cc5f29eebfe59e7cface023c40a51fb5cadf6628", Just $ h "6713bc69f90f9bf758592c10a8696fb06e70ee3df237edc867940a4a59dd4626") - , (h "9792e26bed159e3e3cb188e8a4a5d415d2c9ee4323b285a9b25089dea39ffe5f", Just $ h "1653f6e3b4537405acc13c6f58f3d64ce349db3274481fba84a9fd1ff0a19359") - , (h "04f5c31b5b81c6699ba258223e1690f0d82f25408e1a0e54a3aabe4149b28b9c", Just $ h "2e52e4f23fcf6e2e8f9ae00e62077a102fbacae1887d062c9b04334c35743c33") - , (h "a69b52507e9c5a43972792609e4db550af9514085685def88bc1d132050928c9", Just $ h "96d0c74399a410e2af5d399cd47513d7827c044741ac0957f8f00b522f99c334") - , (h "f9811b6ee30113e366bea79c02c944b88f89a67d13db1fd88effe991ba27e0c2", Just $ h "f1561094d75db8240e62008179882f126d68519b79db7bdaaffbbe2174c519ea") - , (h 
"7f7eee72c19a742a448a80ae1ba8972ba462babc23ce7375d4b566de8684074e", Just $ h "b012f6a716a6d835042312836b0f7b30eb4045f65d853a3fabd3631e8a88c4b2") - , (h "a0484d825526bba7e1e61d317a8152914fdbd6ee88af67239bdb1541333f1b6f", Just $ h "1d36b7698e116116172b3e7a903ed36838e2477f32e11784b25fb34b49d2cabb") - , (h "206445ee4177ca5fd564d6a290ce7fee683d674070adc9b31e83e4ea5c16d334", Just $ h "468bdda388fbab5ade442ce37aa10df8c39a239b9d0ad82dc6dc526e59adb7eb") - , (h "3677e40c8c7aa5f773863e22903c2bfd44180730c4a53ceac75edc4f28c60395", Just $ h "12c26270457ac7fb00f70f0843e4c939910678683a1193a90750ef6d49baad92") - , (h "e622209e261795a0dff6deba4432123828aa4d9a6a959749493215e524497957", Just $ h "07d1830daa506f9891c0e9ce901de1e215cc1feaed8d1a8c7d4dcdb264070651") - , (h "cf51d1440fd2fede1e2163bbeef4bd0941b65d7f47719b761ff896df34477ebe", Just $ h "bada79e7f8cec82c37de5ebf7362ac2ad455b88b288b05f696ff92a6d3a00933") - , (h "46d133e7f2c90ce1117e653e8e7b6734f4624b76b746d1435e5b58fb8407955c", Just $ h "6be4621eab180be3b860609aa75f26a673c8c1fe536297ba9ce57b26cf355123") - , (h "f0b0e932344ec1f5410a0b63b88d16e62df6430de57a5aac15667befe02da083", Just $ h "a4d6940442710caedf25acd27776d052c3187db4e080f63770ef2a940d73c57d") - , (h "bdf2eb6d3fa919c0eac3303dab3c513432a878d0f85b38c532c40d4b01203752", Just $ h "0810307af92dadab84c616662cdb38c87087b0cb6f367a09aaeba18c9a3c663f") - , (h "100ea79bf0160b2cbe9bc8af95924cab0ed78ae5fe2aeb3c57e92c122b972997", Just $ h "01147a161d6882846c48b9f332898e46c2eaa81c03ee3733ae1baf3ef4d7f8d9") - , (h "30e9260f82620f0e225b54ef08e136054a9506f8841d0bebcf6e160411b58785", Just $ h "29d71d14b3fd25df4ae7579c182bea49c78d314771cc7fee0e1f5cee275c6a96") - , (h "ef64ca27b9fa462a07a4437935ed1cf914c1d95783521b36ac70a966fad6552a", Just $ h "8d05bfff3b8d8e599eaa1be9690914fcbfc01a978e942740a2ac31da9d5b1b73") - , (h "0e0c0dd54ddc902d59ae85b1d11b85ef14ef54b5d43db73150320c47093160fa", Just $ h "cfc65b1bf0eea9bc0b7da17f57e262c59a4eff39febc69f67bcbfc540ad828c1") - , (h 
"bfc7eac3114296ffd8216b9ef814260e15430c47ca0190c6a5b19946b747f7ed", Just $ h "0871ac8c54e197ade0df4edec839489dde4aba6e05e5d8ac10603d6695686cc6") - , (h "acf1f08d72ac98052315d1511de578ac87a752a20e4f9546d0e4220fc166ecf1", Just $ h "7cc75e07fa3cf4ebfd1788ffee236a4fcb3211df281f7ad5049359a5890c220c") - , (h "72cd7645f80130f94c1f3bbee3f026a45009d248608015307f17fbed844a3887", Just $ h "03301fe5d844163acfa03496aa1d6f704bae08fa6390136cffc2d875fb5a9383") - , (h "2f9706c57cebce99b4189d46bbba370ebbfe1c47a2d87a96b7ab7a2cb5983a36", Just $ h "c1d42341683580d405314b1d496050ff4a2559b0d9d772454e612856df20d7e8") - , (h "5d9da0f05635babf7a6fa28d6701bda5d39337790599973f77e3436a433bfe9b", Just $ h "4345c0e61b1677c0f8846c32b72c3f38b24742599509123e42d66a9ee09669c9") - , (h "e784ccd334df759d4a0d74495fa1c0c4742da6e75027810a14da607900d7707f", Just $ h "4d5d931f0d3e6d8077c227a5834a4a0979e2f2fa0cb8150a593cda986d81c533") - , (h "d26b88917ae60b258bab408474dcf201cc4fdbbf0ccfbbb658806c40b82c4ce6", Just $ h "07d08c29e39384f3feb0cdeacfe615653723b3c0fa0ae5fd143bb625df718e47") - , (h "4ff19f0a3e3b1264d2a5e1db00ed01423dff27b497557d5fba4149801ec13ada", Just $ h "70ca5276f4ff81e377fcce1fb2d01ac3d3dfe2fc5d59d3ba2edf986ddf959f38") - , (h "c0169ac56fc41be75bb63c8896571d1b433eda34f2ca471047f588d6284f5e3b", Just $ h "590ddaaa8871865bfeba64ef38088ac7ada72f7169f2952c79175cbd37fdcec6") - , (h "8942e79228eacbde1875a5403c449022071a59cb3e07567406fb4a4292fcac90", Just $ h "5c1a5e22986f19b6a8b52081b86e1c05faa7189743e733e18b31f44ff87f1c1f") - , (h "1da8b10ab6c2e5a4c6483fb272c92b0d82ca7b3b6d47b701b560adc9ccbbdfee", Just $ h "2fe2e77e9515766e14a7cfac199c04f1df50e37158126df4f65fea5435a43d2a") - , (h "67c524707040581ce0f4166a856a9b0355b12e4e43e2554e82e6971e01562aa0", Just $ h "8afa3370d50b23e9451cddc2e231c370d354cb0c040ad102afa72c8f0e8bee3a") - , (h "0acc462eea6fc10a893c7c4fac396f5e2b6a2c11f4e6c11d73a6a15617e264d9", Just $ h "f9a927bd9b03774a78a6cb7286b9e07bc5c1a9c5a33a915a183d5c5b1ec400f0") - , (h 
"f90b0cbf9120adc96371769f26f9134b5acbd676e0911b89e7b55ade07a135ef", Just $ h "828b194e19958c02672e496c8c958daa4bbf8ab538c56d24c54dc63c43ab100b") - , (h "9e443b0d6dce0f8e90bbc53ddfc103d4881659d00c9883edd6ac54d4e50d54c1", Just $ h "cffd286bdc69eb90c5ced3637bd7018b7e3e577fb129d09921e8ae85bc775240") - , (h "a4f83dfcc4673a94a3cf11c7db9e259bb26e03e260ed27b3b82287752fd99a5c", Just $ h "12eaab84c3d41a4a13ea9e2d1e7224c5169a3d78a3f005cedc47e292303a5623") - , (h "a69b684e7b9661340ff2a37e801cb9e2bafe016da2579fa34492ad089ae7e175", Just $ h "e4a473e6e52594d97a4d5a457f7f67a0fceb89a5f1d34e0a65668f369c432478") - , (h "7e122358be2b160454a8eee16176a0679ef25943ec1cb76b5a672d1cf26ea189", Just $ h "d57b3bec2d4e6ce3024b668ae0497b5246cf594fbab773f70c6d08a481ea24bc") - , (h "ddbf15fea2ff40ad657007888fa835fb79238b1c3a3f3cc5c5791ab68b75162f", Just $ h "3cfaa36fac18a07b6e04e40d3f8036626d43261ad153710d35696704b88132ce") - , (h "022a13af10636682435db05e6618e3e56887f01edf78f464082a370044b16fe5", Just $ h "b3220cf64999d0072ee8e56b3042d14bedc3f0e487ce38675f441204efcc5fde") - , (h "d5b4f4f539fb287d5faaf4e2261455fc5ecc4674e1a5ace6d3504825d7df93fe", Just $ h "4317837b835bcd6fc1bf86fe38d8f27bffd06b9e6f604d36e94b48562ce71144") - , (h "3fbef86cfb64ec7ef7071c4e98f9e5f2f8cbad73f23f33f48059bcd5dc70ff9d", Just $ h "3f1d33d70b69c0a6a410e18cba5c75a97bac0e72173b32ce3a2d0b401ec95441") - , (h "dc50b335c68aca2118a027bd6967d117029218533273ad2710fd2dc0aceb005a", Just $ h "669ea3d79b520932a0836aeffb538835043601d3679a4d9871053ad0e344d2f1") - , (h "b9b9805053b73700ab9be9c9ef1ed34f96a1bc892b211d5bb157e8477ef52bb6", Just $ h "76d2307e8bfe87ad8faa78f83b26977e15ff1718a2bfb369bda8ed37a1d8e651") - , (h "ffdd4477a43ff7914fff6362f3367a7a2fa1697a66073f77c49314114c9892d7", Just $ h "0b59c81a757daace192199dc621fa76278bdc8cd1587e17eb163031cb63a1773") - , (h "5f911e995aa975b1186f306ebfd55c0d66cb6a9ff0d497ae62327dee8e004621", Just $ h "d0b5c57adbdf91309578cb4624b51ac92118d79313d671cba4254343f1855e9f") - , (h 
"e39ce1fb0897eeac1c312e71fd4a10d7eedf09c9e7b3445eb14a4f6ebd380a84", Just $ h "12d6357bc0ed1ebaf224134865ec7216feb54036bc0f15a6d548be2966c50037") - , (h "22208f9a3db080c75643f20f25a95d33e278fbd79b8517f25ca1234eb814b1ba", Just $ h "41705445e27e524818f17353898b6915da994e67fb1ff4a1b90d998f39a7b7bf") - , (h "09dc20c171fdda75522271915713290adc456b4c0992f6d4b2711dc68a4b5afb", Just $ h "756716bc3259dc515f86f0959cac508f7d81f1297ce7383836d599e2a88d0369") - , (h "d59c8986421b04e07a5503a3394ab8191fd2efd7b428d42a3f054b609fef903c", Just $ h "61519b889637e95380e7b57f9e83ad492bd99351b55fb75892fc52cf5e9dd9bd") - , (h "fd90801b3dd4a8ec0cb9ac62e0b289ad597a33d7ff9fba3f4ec3ad85ae0f0191", Just $ h "af14c3e90442489ee3a32e7ce39bb673718236c157763f9cbf5270a23e59ead1") - , (h "8c2f5512941fd1d310da85e54accf96c48fe0addc6153249cc64adbc4bd8b8e1", Just $ h "75b229641d6ce402f1a48734fab9ab4ac947f7ebcfc70df9bbe69214918f31fa") - , (h "1f29ce7874e99b3389fc95fa4f192d6e24d2412df00033dd45506d46fbb71578", Just $ h "459472e4577a0cfb5e610b59684ac8c219281eaaf54af3807123e6572964b6f8") - , (h "233ec3032480e7bebb138a49f638d2a4d4dba88b2db9c3452b5560022a418d70", Just $ h "e0dda9637957f95a42427b2abf3f1dc5de3313bb0779bfa186b755416d8c7174") - , (h "0d78e559cd3de80cb0d0dd62ba22ccb49296fa86ad42bd8c5f75a022ba8c3987", Just $ h "936b8adb761c3d0cd66a72441cdef6e9dbe9dba0a7c08a71ccbd66ce3550ee73") - , (h "22d30e5884d849a930a3878050a45b87a57ca4a23d981d93a7fda4433e489b1a", Just $ h "732074e796263ba7b82d58b176f687dd1fe9f6ae85361fba974df34233296738") - , (h "13ab0adb03e3e84431dd6ab3e64878227d198868eb0f232837aedfbcb2d20c20", Just $ h "f4e092225001c836b439d1140b202f3c504e77b238bab6f95780b46d2191e7ef") - , (h "bfd99efd110f4db6df5153475d3a02f118547b728f4bcd986d8d56252f29cf4b", Just $ h "6c3dd068b3b95d89adc032bcdfa2704a26c0049517122ca56e5d2f6b3d240352") - , (h "7f5d7d233f44d92533586b0396641433923a921422123bfe1785d7b07f840fea", Just $ h "73ee0b2ecb4049f7c3c9d1a876687967c519c3ca90a6d3e616b234a9ca2e0758") - , (h 
"c6149fec551c36d5b137c36805a677b94f8d52a794cb2369f3a13d062c84e0e2", Just $ h "54db91da066dd62d9fdb7e8379dc02191844d8be55130499465311f1fca4636e") - , (h "5ddda1e552f3324b1b467921f6045da8b26e2bbd1c66ad114742fa5110217f91", Just $ h "0eae59c2f15e6327e02714198a9f7efa6d20dfa8f7c4f01cda026331094d3e63") - , (h "7a80f2fb64cd9857a0836ef2c2173f39f070458acbaf084b0d317c298176cbc5", Just $ h "5341a8aff296211379cf07c4d09feafa1929c3a79f3a7578d27d588859026257") - , (h "36870d4f2a83229a714945e10ca0407dd1830e95d4069be1293b97929504d9db", Just $ h "d02c98048182d4b336dcc0dfbd3ec1e9f0f5ee593538fa6ec1109f2c93afa59a") - , (h "7db251abe473d7847b5f2057252537f1d6c85a303f6eac33a0c2eea80e4c562f", Just $ h "c99ebc63492b42820dd1fcdaf63cc7104b33885111f78d769d8ba7f512060356") - , (h "21c82d7e43e481f6aff1e245eac58b1af2dd7cd3c917919adeb6ab7ca5724c27", Just $ h "6ed93328c37e1927d9605060ba070a474191b9cea439c040e7952a4f4a3afcc4") - , (h "5c1fb745b04d28e4e55aa421c6fa9ae6e26d1062b0ec42520c58e790fd8b0d09", Just $ h "7ba55bc718d5e8eeed0043eeeeea42868ab0fe74ab02491c407cdab1fc86731f") - , (h "4a7c836c4e298dd90a59c90fa0f9461a308c9b7285281eda820e5ece39b8e665", Just $ h "c02b1f61f202e68f5c7c71d6fc77c84ef82d28555f1e3ab34f92e6c44c42791e") - , (h "1a91cdd2a9a30163d108c71d470e1f9b47f85df800418574cacdea31163cb37c", Just $ h "5b2f223aa9a64d4c192a55370c484261d16ffceed341f193316db69144a30175") - , (h "b72d945d51a11aff38114d908675093a39e45d9c728d9d076579ab1ce06836cd", Just $ h "8cf3bcff0fd76d9b41826260a242f1221c3b9145244ac4f1ae0ca2619c4da7a6") - , (h "98a74447dd0341620c3442727cc52e0644b9749ad0e09f072f5d6ac562a32ce2", Just $ h "09bb26d2a40e59e2885d131205db71e448f1373a45f9a9164109c4559a620b14") - , (h "7942acb9a0b44be82fa2409dbefb308a05986cec86bca1609b252131caf3a5eb", Just $ h "8e752347dbd996346ec7cfd14ced0be1761d0cb15db1de5a3ce1346476710f6a") - , (h "4fbf10f94c03225cac5b933b1673b6d56db565a01da9b4fdeeef723ac0fd1f47", Just $ h "dd5737d832a456dc56d8494e49ad536395b8d75a3a17ac907e99c8d091e18cd6") - , (h 
"b7e2046c9aac95c3adbdd0fdd99ce6a482faf4a368956d7fa6989700fa1671cf", Just $ h "adf0e5a0b5d14dea6a3954135227f863e8892dd8603f2b0aefcaa4a53f25a5b6") - , (h "2d9600413006002280572bc90599896fdfac44ceca6c9e6f841b4f4323d2a9b9", Just $ h "aa31acff8756b9abf038ff144954d28a1241a122ee9dd8c443ca4800092d57d8") - , (h "fc2c8cde65c334da3fb60ce44ac8cf750044e4e40e8393d98dcde1a4c90237fd", Just $ h "42a0763a9fb9e886d8f7a4296dfb7ef2002e65ff66c840a5afd3e3b5a36faa4d") - , (h "788e389dce6e966733b89886ade6cdf6db91117706a85b633f1b94ac8819d2dc", Just $ h "0ed768118416cb3b960ecd668e257dc24a8135ccada18b9b693049596b44f99c") - , (h "c026cb90d1411e59bb6f94d004565590f732984d216effb2391ce068bdf1633b", Just $ h "26eb78523cdce41b8c46283d50e72d272b2ce40c1585b846c9c6b461de5ce172") - , (h "222513fb41fe11e5fee62fecc505be781e0f12521290a4ade539e7301a59159c", Just $ h "36295893d7718e0515c8082b39b8ccff988e19f79a18876ef82d17c94d8c54c3") - , (h "1140db2d521e942a1fb60f611f4f0140f7f5c136af89a617a22ebef4118a090e", Just $ h "ed044f04b6febead1feedd95e9897e0e669fb5f33c6d6e54bbd137d807773fae") - , (h "14601edbd3de2f9fa4e02ca17b7651c61990a4a41cb6b269dd2c0be065b28fd2", Just $ h "4daa82d5f92116f24ac5b79abf6402e1444ed732272c99bd3fa0a45970200834") - , (h "18ce09aff53f9bea9fe87be77a37a719157f1c214dc3c2553df73b42c2d80700", Just $ h "eafed5ccb8e22081c8ac545c3ddadc37a4fc5ee9f4025fc881336c37d46be046") - , (h "5a85825f9928a2b121e02f13ac43be0976f5f3987a09329a650511884e77a536", Just $ h "9cb71054b8fcf74332006508a5ef359be091b2da987acfd10675b10f108ab128") - , (h "d20ada1eeb577fe69fcc180c7539bf6a14206c1d400b91e4c5373de13c8c3ca4", Just $ h "9856162a65e34a5ce4e75d9f0430da017070f99f497d9c7dda623579e2f86bbe") - , (h "5a5a92b2793ec6a47256ed752438c0df18854506938f48516e2ee3f15cc59451", Just $ h "2bf6d8cb81394318c613e890b6e696ab46900d3a776cdb45fd72a511ad52e5d4") - , (h "734a1049ba606152e6abb426ce408477642b059cb0c1e8407aafca39302f14a1", Just $ h "175a39d233624971bbb40cb0e56eba78d8035e88de24737ccdc4804e487572b2") - , (h 
"575f2326bdf87ee565d9fbdfdac7d724bc9a1b0ca0faa497fe5d4b35acfb3371", Just $ h "2aa512c62ccee0164e9075bf916dc9250857363417f9b289606cc17e0100b3e3") - , (h "33584f3b54f617c9c186140a0238c21d2505bf7b95a79e3b584ca0602976ca7e", Just $ h "005ab76f5613bb71aa341f9a1717c50129d3785c8bc66d66ba451290e9665a14") - , (h "bb46fd3c3e7981dbf994a9dceee0ce8cbca9b90cec4ea01d6c21863beadce7f3", Just $ h "008df176de0153baef13113b1bba48fd6bc77d3c8c7f50952f6053d72915a304") - , (h "a5bd93cca61f17f90d94746cacaef15c6073a335a4db4b34ac6a18e4dc781fd8", Just $ h "98e8159d0037e2393b00c452a4a0c8485480524b0309d86917d801f0095a4a51") - - -- Staging - , (h "b365f1be6863b453f12b93e1810909b10c79a95ee44bf53414888513fe172c90", Nothing) - , (h "9fb147114d56e5e81e7817d37da4a87040664d0e031d89464ba94d55620b96b9", Just $ h "76a6cf6a934456d1c86a5706365da9086ff65b6844c84f1c2d50c851e02c9c8f") - , (h "863ec31fa56ee8c77fa5101f1c7ce9821ae77afeefeb980e89e8ca986ff1226b", Just $ h "fb62f9d4b276911aea3bb7bf80c8b860bd128e39744c373f0cb7d84737e0b12c") - , (h "e0ec1c7c5cfe2358d5bd59fbd4619f9eb25cbc7d84f87f4c66edc75343002714", Just $ h "c617f7e5fdabdc5c69ee3501e90e7f2cb0d59fe7ede06080444dbd3d3ae8144c") - , (h "fcccb905dac43709c987d9b15de62de89bbee2770a0f21ff1f0b57b84a6012b8", Just $ h "cbd00a3ed002c1ae23e4f4b24e1f2c23a5da2668f559f569146cc9e132bcbda5") - , (h "8b6be07fbb909f4110451b56517d56d6c5eaf9c43c9e703566c67173bd3f281e", Just $ h "c9942ae544781c8b3c590d5cdd689530978b49e153ae3bfe5aa9cba71ee7e300") - , (h "d1e2f4320a3fa84d948d7d15384369118be5ea114ca1f358eef666f1abe3d874", Just $ h "567fa18d7c624e93fb3094e77b075ed377e3aea2742ee200f539f23ca32f3bdb") - , (h "226b77f53f09752fe52543993a083adc6bae0d2f675c86b73baae0c8db96358a", Just $ h "078d093d567ff1613b72c30d8309e0ec012381ea192f7a93e963262aad66f6f6") - , (h "b29a2f91d314f997d3c3ad93327133af0ad2c79d5199527975fe489df1950e33", Just $ h "916c8ad0e1e297ca13a619ac4737dbb1176906b4a99ac3ae067e8431acc4480b") - , (h "85d5cffe3da5a35fdbec0fa1b4eb57bd8c92b85561b0b3fc8872951ab6a55d6c", Just $ h 
"ed3578a1d14f171fe7c6fc7ed53d1eea475b6d730730324ad2f4464977f7d5a9") - , (h "55d57e16949716e7f5fc63a8ef354d1ef97d1a3d2bec129eadd04857f4590b40", Just $ h "f7e438ea989e621ebba3aa76c9e348ffb874c2dd0e186737d28375d29a422070") - , (h "bed261244c16861c1f659fa6d227197c1a46020b49ce91b5c1ca03cbb481ede5", Just $ h "aab83e789a01fe6e43d1829c5f83028857c8dbef236d4fcdda419d0dc0daabcd") - , (h "e9960a386180eac79aec104b19bc6771b3e13ecd3f5b575abe95a61fdbceb4bc", Just $ h "f2eb6981bd2175d6c3075ba7a16b42ec26a1b45ad7ec3e680982dd3d49a9c589") - , (h "40b10f1e768a277a9904ef37062a7c252540fe0769d40b6198d436fe531b20d7", Just $ h "5c25203b3bd4c53085838bd9c0d0b54e875418607b2eb2aed827fde0a37f514a") - , (h "3b62e7a3de1ff1e65cdd8a3e25a9a6f67ac6dd7c307e8ab849abc081b7525305", Just $ h "71a9e0106a0f9ac1d1ddd1030e4ae0c759db4cb89a2b6a8d09d55305babb9a3f") - , (h "a2d4fcf3114a14421217ac60fa7017bf672bd07305990890f61229658df06a43", Just $ h "d0c9d5955d3bf653e18d6bbaaadb270dbd1361d07e6357d900f71c6dc5462495") - , (h "0754e2d638a84f21d75ddcb420a130daf65d5c77a01d09170db20890c7ed77cc", Just $ h "c79af79e8a1241e307ef0e2fa1ba26550c74bc88a4d1021edba7cfde6a05404f") - , (h "792657ad951643532f7f54e6959cc411c275d9d0da58ebb7806bffab317ab380", Just $ h "c39c5214cb175a84b781a5410d448b45e2bae83b709818c104092b840027bfc9") - , (h "bb2dc8c2dbabbde0760c5ebbdeed79ae51bf5ddea928aabf09f3e90897f766b4", Just $ h "dee11ca6e07fd05ae2a88b7b456aeee6f7f7c3aed12da3cfebd608c7592ffe18") - , (h "d433e4cf72f52c0dd6b4b21667f832666939c1bff6e490c3d49c59a6f9b74fa5", Just $ h "5356cb86c098f60ab215bc27f6ed117a6a8230396eede511dd316a60988e5cdd") - , (h "207009b3864a7eb56a104849124f62d951681c3f2734fd7d18e32b925f67edf7", Just $ h "36dd51672e0f493d652dce00db972612ce71bb0ceb94bafa0aabe03c3790f523") - , (h "d88a5f04c09db96a119935933498e3bd3755d6afb8f7d4950681eaf48cc5af90", Just $ h "f81c52e438dbca30a4c5dfccee4d6a52132bc76fa359b1eead7a1fa4a8034c33") - , (h "c14a2559161e3220474547c076c695e47b016969f311c1fe261543559434791e", Just $ h 
"4f0c09a6010a81bbc339f9c14e54363e6acf474f34a8efea95058a317a264f56") - , (h "9b7ab13e4914b18d367f43fddac9176ad76b7636fe2d4801296350f33c3f48c3", Just $ h "e02e5bcef8d77dad828ac34944e26efdaca87e0173224ecd9b8da6cc18c7f983") - , (h "5747199a27c43af57f046ad2889618ebd25b9f4852975d76c24bbf470c89684b", Just $ h "61ed79c3a3db7dd49f4ad7d29e80660990229e234772f102b60ec1150131323f") - , (h "b5540e531b01e046ab31a08fcc2b12caef7cdcc26827f74d83c01d519515367c", Just $ h "763b2b2d0efe984eb2c78914c59d97e5265f449e499c033b0a5631552d5fe5d0") - , (h "b356bb25b250897264e323ad092a55dec818c83a6ca58c09b914a13d42dd7872", Just $ h "7b6dc7151ba3cf9098ac423893117ecd3ca86827a4c8131c057d7185b24a6291") - , (h "25a93ffe30baac803d533a1903392aab7b16e55d9fadef030a4546e19bf4d304", Just $ h "55235c896c9260701c1510f801db5b09097a699ae90118c2e9cc12ba6c938546") - , (h "2923acec9b597ea7e05ab7b3fb8b7a7a64e9e6645f51abe6ff711690b9a5525f", Just $ h "b82e7519e74cda942d07d491f0e6e5e0d9ae39e654642bfa761f14d2e4f6de0a") - , (h "99ec13567ffdd621264c3c032631794bb775141dadc13552589df196a8b4c317", Just $ h "b0bdd31bb69f7c3574a1f00c4fb5dc1ddb29b33e546fbc21d3ce55097d118a4f") - , (h "d7772a21237c973646e6a48d6834ed37a2ee34dc8571ca7f49ec643c25fe4262", Just $ h "4a348a0041d68f21bf84f68e55b871bb633e83c7dbf5f196e07b260a19a5b35b") - , (h "dc48b4f42362874ba9bc4e49161642723d77f68adafc4ee83a3730e8e0d31232", Just $ h "36a95d256387bb294ab76a3fb111c52866d78a33c1ef86e41177c5561919bd33") - , (h "f37544332ae5b8b16d4a8be5d9cf1ca03db0a60a304b5a99d61ccd2a819839d8", Just $ h "5f8f6101e0a8d5dea8270c538d1f9de85c4c97567606930da0b64644bca0cfcf") - , (h "5555742d42a434a29e302871b1272f4c3a3b7fa3f173dbd9952bd69765cc5818", Just $ h "b56ce8699fa67f89c9f80f12a7ccf4756dd7cfcd4707831721bdd726839311e1") - , (h "facf8917f8edd18936ee6d2fb9537a829f56e855418f2ec0bfeb105fba046bad", Just $ h "83bb2c6e3bfb0c19bde21c4d87448761f2c4baa6a7fe85d3c16927728970ad1d") - , (h "187f2edc0ea75bb4f1606791846f4102b4ea629022f65a84cc78b5d248697004", Just $ h 
"8530bfeb6fc046301679b0cb00e51696822e32144e9274fa28678c7d8191990a") - , (h "7ac0a18d8c7bd87debca0a722eaf7d8284492f3bfdfb33b97166b2b47b6ee84a", Just $ h "006861faf0f2e1d7a6c2bb90293af0f4fca6b500c98588b1a54126cd9cbdff7a") - , (h "b47b034660bb6c0a8ecab804b3e7aaf79fd708d2fd5d9d4fb48c7a4e8b9b8c7a", Just $ h "525042f960d14a139176a7e077ee9364d57856ed522601698aedee91fd521f3e") - , (h "05d5e7742083340c28c4967c790ffe2600975020289591fe09859685f97a989c", Just $ h "c0bc9485b93d0d70002e3b4d64a097a49dc8271912b24c8fdc11269e58d91634") - , (h "3adc9af909705709208e34dc8e6872afac9c6e6086d68807fb7ded2c0eb8d817", Just $ h "1b0b21a95bc96a4f3149906392f0167c8d2169b73f001f88fb032ddd90dad0ba") - , (h "07a58c4270ddf0dc077d0d5b5894e5176a5340694d04bd5bdcb770bc56ed7c9d", Just $ h "5bc16e5b7896a4f6649a83684a38c5344cf74d2ce90a4701020b19a5ab7e6bdd") - , (h "e5227e8fad58796dafd27be9df740b515460c96ec4a5f9b70c39956bc60bcc63", Just $ h "5201d00f4aafcbe3033a56e1eb8ef829975b643f628b836072fa82655706a771") - , (h "ec86a8264dd7b22a05a3508400299dc26e82fb94cbcdbca85b1c5fdfc7d67f19", Just $ h "3f53d350dc3fdca3527d392c893cea179fdb1f69e177fe4ab6aa0663ea69e4a1") - , (h "8d9aacd65fa8e3c56ec1631f129db0b58de064070c5955e4579ebc63f31fd41c", Just $ h "cf72f434d3acc82b46da3b40dcbcc02873d5e3a834d2aa5bf99dee8d5d0cb781") - , (h "64c00f9a1f86bd41f31a6c35e14ad0afcf6b63e87dd1ace04bacba8b247045a0", Just $ h "f02ba2f5c39aef4d6a09c4fad61ed73f67a3bdc0bb7d4f0c4dbd3d81053f45be") - , (h "48e47a6ab40080f67fd6fc704b9747691820b074d96e8a26cf2f903cf9c1fb9e", Just $ h "e4d5b224b34cdb9a61a14ef0ff57967f4cac4fd9b25b7a31bb7555008300bbfe") - , (h "e2cc39789cbd0628417bf61255b4c3d35e154ed66830454eeb3be74cdfbd635b", Just $ h "d388530f1dfe26bdd0f36783fade4c425934c35fc6e0eca877d8e6eebfa3ffa7") - , (h "e1096958905bea03b7f141ce79828fade135d9fa21b13aa83dd6068dbd1f2940", Just $ h "810253d5874a92d6ab5d2d037ab631083608d8a74e281269ffc11497a142d272") - , (h "482b3b0cc0eb9431bafc865f50949c37ad801f8d26cd013f1a41ef23096c8a87", Just $ h 
"d40fd0570f503f42878cce67326b70efe249a32a7a48a4152eff8210ad958a4d") - , (h "6ba5279f7a5f9ffb1d6a879d99cdfc6896c937f6e1968c7c87eec3f3867fafcb", Just $ h "3216e810dab2835d5a80edc56f80c811a4cfa99cc3d35235c20c4b2f860b138d") - , (h "26113f1597adf1a4b213b8d9ebd590a4c8888928468a401427ae914679b165b8", Just $ h "ff290e11c60c275eb7c319b1ae3fb1775cc637dae60306c338b819811aa046a1") - , (h "6f9c2e144e9ab7f32930f764000bf328ce44387b36073c63ac56aac9a325cd17", Just $ h "b1e29becfd744ed9d0bd2aca90f6077601f71752fa40496fc8c6dde019033428") - , (h "03c2f50d3a628c5f858bb433d9ed06cc305fa1dece8090378e7d4966431da5d4", Just $ h "f1f35d6f034e99115c0c0b9109176b0a13fe333cc4b7a89a1c6eb9cd067ef384") - , (h "673da8e423e9c07d622719f7321c96384b153ec10cb21f6031aae5b829bbe9fa", Just $ h "c9145eb22ffd0b2719196e48e270bb8119dd69f4edb5ebb486310f9964903009") - , (h "73240b5b5f4a8b79674f112fb0bb8941d9b1312d3e2506385335cad0f11efd1d", Just $ h "ea36d3790f71ccca3acd0d0c7d47d86f7c225deee47b2b164ae12e86deee59fc") - , (h "1e477b27bf2465f2a41ea34be26c36083c6bd32ff397a7f273eb3527d448a327", Just $ h "898abd947bc7b3d593e6f233410f1c7cd743ef7fb1fd7186d0106200fb7471d2") - , (h "18c2b02f23c4f7715d81d465febce159fd95463689cbce2a590c639a5376def2", Just $ h "a279863086a8b052776bfa72fcdf05b278c3ec750d7b281553de5022013a6da5") - , (h "369f6b1c16f709824957b20a5c2366b388f89e8065f7cf5367a8a71446c503fa", Just $ h "04b905e4e0580c4b1f7d32e119725d0234ea08dc3c63a69cbd231278a074c0f8") - , (h "b63cb49967eb4d7dd0e778b6848740568010c677f8485b4037abbce9677acdec", Just $ h "f78e675552c21ff81474eb923e4897326dcb09d4907dcc3813bec4ebf7e9fd11") - , (h "67154ff1e4448fcdc96941f2def69239dba8b485670b55081bb7c4f59c14a833", Just $ h "cc72df817bf62c6c79e2b7e0ab93731624f59ef4e741d6c62bf771bf81358fb9") - , (h "4decd75f7921c6e4a4f25dfd4493e4f8abe82043d9a93ae149122427997001c3", Just $ h "174633dbd275f3341a0a6529ff9eae2541600031c15de792e32519f43ada6c75") - , (h "2f3cf11ae8bb88f1524485ec164d98a46fbedcb63c1666dcf1091c531c175c83", Just $ h 
"6b0337fbf3f3b273a2b907add3eded9c7cba0b6a3b3512e19fbb678f3af15b94") - , (h "cad6f3f6c536fecd7d501d6a7c5794022e9d9c5740c477af2d44d263b2736be9", Just $ h "7e53c08bc401ffcf5e99a3a9149ffd8bbd7cd8c2882305d6b5200d47d899455f") - , (h "43895ed2778175b1778bf3e78184e69ae5cd2afafc85828a5f1324dfc7d5a15d", Just $ h "de6b802c703173097e09919a3703a1959ac139647d3f0b22489c536c844a4c48") - , (h "caf4ce3994826a99ce47cbeb8df9dc483717e40c5f5e66a2e9225561a42b9f28", Just $ h "3b7e51e0c4d91f166d98c3e286bb33cd1518fd4d4f33f61b2e9a4f3193254ad3") - , (h "94a3ec764a370c495b51dc199db15b74c2e90da6eb63c582752f2d79f5be052b", Just $ h "b708e9100b948ea1148149ae31e4db0e9d3870878332c52231609fe159c5a91b") - , (h "dc332995587aca3b3f22ae14dace171ceb20049e72531a3290f499dac0aeb4f1", Just $ h "8e4ee6d86018cbb73400cca5d4cc9ba6e729774aab0c39e9dfa4e19c8be79a60") - , (h "199c9516752d309418b201df06f80c005beb8b5d127e25f2642aa1ffb47a1682", Just $ h "2f72b2c88abd4ff8bca65247ac0f24ec0ed6d8a3735a209031b97be9a5f473b6") - , (h "97719dd81f307d22046f476774a8de51dbdacf98991a14c90a9d9f9b26f91ade", Just $ h "a89008680e3211064698e5657a40c52d00ebb09cd5754ae8eb84e74ed1026399") - , (h "6d30a738d80edd83a2a847f5e3de30f194aa4f00c79f379c22eac5b71b3e77bb", Just $ h "ebf807ae25a56aae3fc0e615a3e039d84d0d3a476f3d7315f951b9dec9d3e09c") - , (h "446ed4e16254f71bfa18aeaf6e3d1d48aba798f303eaf67b9b24a09d07125bdd", Just $ h "864b699fb74933dfb4e60a9780b3969b34d3eefee44b7a9483674fb570905f28") - , (h "b9de18a83aa05fedfc7c6441360e53434495f8f85258b7d76d0dc1d0f3527340", Just $ h "2aaa2260a118885d90c92dc19504f8e336f4b0943a210f1d76444ad6c4263328") - , (h "0e7a47d467abe88a3050da3094974105319268aa1ff5dfbed5bdab01be257ebe", Just $ h "ef8e130ff5384b5ae5497ab282baee365851c21eb00a9fbcc481b1e79781a5ed") - , (h "37d301bbc3909f77584e1f94f429712f86a853df559fc1a6de2accdca2b47929", Just $ h "7f4dced6f315ccf5ff06f887e603559a92f20fa734c58e54f678bf05ca2a37f7") - , (h "d961d70a6d8d7604a0e0a8ee02daff83fa57b0dfea0d7c83e313cb296c82a8ce", Just $ h 
"d2e4e8404e634b7587cbd66f379a548095cdee1c6bdb8363ccac27e0769fc43a") - , (h "76c28320b320e49ff71eafb042bafb2669e8528cd67fc54105463ba205f3a30d", Just $ h "d9e4c3f40e51413b102eb91a30626d7015ec9d631ca9e9cfe7cd8e7f6448796b") - , (h "4b18397df6fd250937612de9d056977b4cb258b665bd76e8edefbe63139f51dc", Just $ h "31f59669edad855be2f7d3f5a4fd770276facb93860ad37b1501b7f86927f325") - , (h "ff80679821298091b612fa015f03534d4bd0dd12762b7bc6a5fe9a04e17d0376", Just $ h "38191afa9a4e72882254d2ffd7b955cd2960e826fa8547f4e752691e187fefda") - , (h "ec4ddde4c65766999511202a132f4bb84413fdd7d650cdec9602931ce247f605", Just $ h "8be94d17f43b39d05deb20c8f0ba119f85a521ccff43d5e1ae53a0d291f685ad") - , (h "58ce92d3b4fb36c9eed7b9927f86c97ee09992e626fd3f4b27843a47e628f389", Just $ h "c2baa5c43f6239ae300adf51510b49bd3c6a1a500dd928004d613f5994eb9d98") - , (h "7550f6bb6370cc5ff0e1a8047f2848a6fc128429d77181c9c001c51bba300a4b", Just $ h "7f1cd4a027d5ca159210abe5b0999c1d3f704478cdb049a3ac4d2d35feda86b0") - , (h "971a199590b2947d4ea321417d94ecdd1ae916de5fbac4177434c6207f27f724", Just $ h "a1af9729a611009dfa2a03570eb7f49125c9b639e25b92bb65732c8ef52ded20") - , (h "fd048d1b7c04d4483b0c2d4087414943ddf20a3687f1d6850d7d54c77df1b666", Just $ h "178837a717272adfffc62743b7ba293f2adea3972993fdb37092ddce31ec0262") - , (h "cf61ba379f5e73a8c984755f7a453c004ff97d441746ee965c7cc15011bd23af", Just $ h "7a40b9f79b6be9bd7531fb2e7e4f7e9043d1c929dc42be44fd8a7d6e9dc8fe43") - , (h "e526594c7c6f8d6ada1057dc039efff7608beb51c6361dfbdfd0fe4f0d9d96bc", Just $ h "faf5c3e88eecd5803e035a7662d3ed80bfc0169b222c1604780777c65696e91e") - , (h "ce60936eec63fc67d936ecd7557356fec45a428ea84f08fe7d069074adc55f9f", Just $ h "e7d19c622ae632140a2caba0c7e8251b603537f7c7834e5d11e22bc14f979526") - , (h "8653c01a8b27f3fcd0ea9f7733abbc26a282891358f618f473edbbc63c75cfc0", Just $ h "a3beb278b7812468d4a24f4c2c1fefe29c029730279667c6d8b3f916d2796d71") - , (h "759d59906aadaaca303542a840276c2f84ead4feaf20b907edd5bc4488e80caf", Just $ h 
"47dd8601702436c07d97d412c4a2a5e7c3e297d70d4b2050a7dd20950cfdb3b7") - , (h "d81b0817b79c0cce58dbce7ebf6bb5e97c2276ae2700e91ab7cc6638e9a20974", Just $ h "a51ea8a8d3bcbbe9cd7d47d7fc2999fdc78ef9979866064b12c329c9bdcfab4a") - , (h "362ab16f8f6dae57a5fdd2eb2162a06e9a458f12b6498a1c76730d142a8311d2", Just $ h "505eddf8ae35eaa8bb9891b28e2f128ed55e2f3dd546095f6e9a74ef43a6d0e7") - , (h "c79dd844ca2ecf7ae0360df71d94db4d31817b3ee6ec42cd9504dddfbbc512c9", Just $ h "613a52acab27fcc676c8453fd44c43455b8cd6894c3c7a61ddd434add5b1aa21") - , (h "69e1cd9a7089ddca1439ad69a57bb8e7da1a52832fef4dbecac7df21af536713", Just $ h "ec9608754cdb1642072a72c8fb9df4d9dd628fd80a3b3e814a8d0a389c4e80e3") - , (h "78f016becc4acfb3587f7be6e919ea9d3522e24afe3d4d93cd538f08782d5553", Just $ h "f3fd482d51fbe62a8a1270a6b8fb29e5e8290167ea0cfd99112a53dc9786d027") - , (h "ec9d536c9fd710ededf76289d8775b8032fab756b0b4bc3885683bb4e9fc6b89", Just $ h "23d8a42ea749c6037b08ae71f367ba3adc8981c28d79ce02c7bd83ac86f24a6e") - , (h "9ea8cd211c4a04e479158b586761a980e2d487d3614911ac637703f9eac33a96", Just $ h "0cc0cf36e51b149974352b4edd018c4a31f428f265388b8b077008790b1ffc27") - , (h "0ca5fbac3900b8a405547a55f8ed37cc41b1a146eb1cc39a88c75baafc3d01ed", Just $ h "20271b28986d0441619b385ecbd77057fdc6224d25d3f652ea908527cefe1fcf") - , (h "32d01fc39fa5d11f3bb2c9c40e33b1c7948939b9574d09e10f62357eece32562", Just $ h "56fa7fcc4787c5234287dfaeb68468b2f9b02787c6050e6b9ec3374860d46110") - , (h "758be3bbd36ebe7663a1e17d988b33a06bb9d6925e6d7b42126cb79e614ae602", Just $ h "cd6d9337cfbbf727e7874ba46a3904a3a1534bcedf2e8555587ab553d53cb9e4") - , (h "463df09ece8751f82bf46317f2d6044ac917132f4739c9afad4988cb38ed9acc", Just $ h "e458fd60b169646bc61ca537f895f6a8c11b5bf87f41a11562ff0329697ee423") - , (h "af619a1328be0e3f46563a6541d51306c51c93aa72638aead2cfd2b29e085008", Just $ h "180ae9f818497a774761b9d79a06298b9bad8d41f543b19fe72a4a72bde82689") - , (h "34d6373c6b67efe58bcabec91f98efd536f95f5b5c4736b3662ce8d74ef2a599", Just $ h 
"e25732563acf2df25d1b577026de9790ed51d66c69e0cc853758b19c76dcddaf") - , (h "b66d2e0e41875b15f806a77ee7e2be2be3a9059d50c052fbafbd6d954e696bd3", Just $ h "322f5106f64de8c4358b8f5a7f98eccc92f8016e5a3a65e56a213339081e6b7a") - , (h "19ee0fcef4702c95925a8f796608daa2caf71a34ed0f365c2935572dc7f64b23", Just $ h "82c995d9fa0ac0e21de43464c4e94ddb0e628dd9fe338a0130ee0f4cdae21a6c") - , (h "3fa6c68522b82e954c49a72dc6e62b622b30d8145ab9f62fc1c0f3c11eeb0480", Just $ h "0a6f6d759a0303ed98a138c13cf5397b0e920b9c27c9de26823caf7c462b7624") - , (h "280d91c2be8ff5855f423591ebc9ea16a8a739461ead6587cd49b2d3f60a5b67", Just $ h "0c144dc99389a9fb9b4113dbc5460bb40605d8718ea02fbcf8ef7af3770f11a4") - , (h "0e92f4745fccfa48aeeeb4360c86dcd3168a741c246ef4d83a56d6fc98aa6973", Just $ h "7e788a59ebb36b51a15a10e13cfbcce90466bd96d12b71fcfca7e38f60e3958c") - , (h "0334b68ac99cba39dcde31867fe40c8b117bf9b5440a2d52d7f877a067ef4a06", Just $ h "444167eca9aeff7271c3c395b7781d2695b3590753db450d5a33c2abe4d4ffd9") - , (h "433f13cd01fbd9525d926bcbfc6956a2b962f036efb4c23d5900e0e4ac89b0fe", Just $ h "0e551405b2c33fa22b1dde0e28951b84b8a983ce50bfeab5b93cc6433c5df8dc") - , (h "c2d15eaa13298253d71bac49e913ee6bb608f6959bf50e7e87ba9ec505fb7175", Just $ h "d9137fa21e5aaf9399f402b4102c9e028561cbdf150a9bea912c124a2d6cb0bf") - , (h "e56aab0d4f3e36a2f2a5f6f1b8b5c552a488b226932297f14f19a1d6f3a99ead", Just $ h "764a271e1a586dcb317cc28c4bd13937e5b41a61ff265bdcea752b990dc2eb6d") - , (h "23c31e4b33753ce531609a5c79f1d2f9f0bea276100436d5dd6d14a7634b26da", Just $ h "6d62f399ec0c7b303fd49b868f22cf208f9cb9fc55a037985ee57b6a1fd7e5c8") - , (h "c792cb6e823de8963136d2c0044ef19caccb7e91e5db302d992979be39b16687", Just $ h "3ce57b662d5d504bfcafb1a80167981227a24d6e8f2e95b8e6b3a015a5e3bd0e") - , (h "5d5f88749ed877385d76e7fc027eac83cec3853718ba6d808b9521c074a2f7d3", Just $ h "2b625a20a1fa9c3a29be6deb1f8b9a72ac0b00e7cbf922c9e36be8e1978b5b4c") - , (h "ea119b5a0301f702e0fdd1db758762bbcdc7b143693c7d6c7049e110ea3bef6b", Just $ h 
"ca462e294d7592f3eb1f1814d4e5de2e090848138feed1eafd98c83164e9c464") - , (h "b7e8ec89e90f9da42ced8ce4f088ee412f7dc66db00aecfb526dab8319b1f630", Just $ h "c6cb7ab881b1b9637db8015c2b519dafa6172368cc74ceb1cc492d7fce0e8010") - , (h "df22b8d3c84762c4af4bd600be1ec864c15a6fab3a100243161e6ba65576655a", Just $ h "c08998295321c382dc822d51ee237249b55f1ab5bbd5263ace25e71dfba816b6") - , (h "cbbe0c2b0f4f3a8a252a1a05810135bc2e39ea96fe5688f7f599b37f7147f2b6", Just $ h "efd2fbd1081844191a7baf3b0ea5c3cc8b76298cad59e77b18ff35388abf40e2") - , (h "2bb80f9f63f0fc38c9544ad816af62e1c4528614b7a37673f23ad67a82a78e3d", Just $ h "df61e92b54cb9fefea5728b58967b699332ce1cca62afa9f12bc84b1287e7cbc") - , (h "ecb3faa5ef60d3b4d52750892eab69929820e2d8e6f15e42115c797e67b84b6d", Just $ h "f6fdca06421b5b2cfc266996226ce9cf8c4ad5186c447b06f0f67ee00e5096e2") - - -- Testnet - , (h "8f8602837f7c6f8b8867dd1cbc1842cf51a27eaed2c70ef48325d00f8efb320f", Nothing) - , (h "7e8b2df7730261d8831fe0206591570734d353c15d5266b7fe77097090d33cbd", Just $ h "507534faacb02cc212f121065531ac406c64a05a77b1a126b73e0342c4fc429e") - , (h "2e5f7988afe31f5abf2010904f38243dcf8f8b9a08cb040a2d346aed0166bb83", Just $ h "f28025573fdfaeee5cbbe0f25a67d204fbf3ca72851ca9981ed7609843323901") - , (h "e8ab8ad7890733532363514eda520471ecc4a35d33aef0cded877ad85337bbac", Just $ h "c3fd1619452b46ae7b5139d49e5044e70e720c5b221f62cab6eb0fe62de881b7") - , (h "af45205bdd89afca48497b9585e75e28049c9f681c119df90abd39d44eba8567", Just $ h "998ca7f3b4463a7db4c046d7a3c489565135727f695b6bd2ef57334e681ad1e9") - , (h "faedf3b89b231a7a5b0042e04ae937ba97c8cb74308ea635ae33ff79c7ae45ff", Just $ h "c8a8a33302a41bc4a3f28c607044282ed218d1dbe6544c132455b1c4041b45bc") - , (h "b4b749ffbb304488f031795ee94fe4d99a4853665291790acd4cb48c2d628076", Just $ h "3257a0a7f019f5540ac298bc89be33823426f173fe9a72e907af9b553d63f8ce") - , (h "a1c1b18067e46f554b53502227a3f1e308ee4c6b976be7f28a7b1d37bc72a277", Just $ h "972d15301b7b2f9c065751168cf6bf56085a2189119a7b9f623c9ce92eddf5b9") - , (h 
"f0d4ec6221bb405f31e41e7e8a130682c435db402f1dcf1f91274297eaded764", Just $ h "a738b386194cb533d910abe6b68eac2ff0c262bb660e3ba84348d683d71db6f0") - , (h "acd499973d23e3718f5075436bbdc51b2b9c4bfbd01b697f3345a2fa8c9c3216", Just $ h "f8d42727b12f620989738fd06dc7f91b7fa2749337830cd5024ac2d8acc018f3") - , (h "dbbd7ae6572286ad98789fb72aa232dd3f5bfc33327d1b38281489a60c6e0e5f", Just $ h "a77c7d1673ba6ee36b56c421abae3b0fc9f1a56a9b4757be2af691c8032824e6") - , (h "a0ed09fe9d804422475ad412be47ff0695ba95e5ad2b2d6509b7246f40ea3d75", Just $ h "c046263bae1772e52fda25ad77502b1aa8d241c74dfa7ad66fd09b9a769fca48") - , (h "e2d8bebeabe86fd5f48d9152b876a6a926df7e53e7f6f765b1241e3322e505d8", Just $ h "6711d1718717a4c441c7011367a1cff2622321220204c10e6673a42b3393339c") - , (h "ce191f524f3a049f7a340b2441e4c4c300b2ce3251ff7cbc049d22a3c208f419", Just $ h "aab3374d14f985c2021289fa425438040a8fcdb89c68dfbeafb8bfcccbcf2989") - , (h "e3925c9473284dde61d4e046eeb6f6cd0cce16082e9c90c35032e074cb0b6bfb", Just $ h "3cb50a1267998b686a7b1dc63df4e2209cf377cf3426bcb7110d27710e2da8b8") - , (h "8475954d831504481c01c362cf5773f08a3215df80f59817ec2eb63d163c8c56", Just $ h "2d22a450ba28cd5ca5af9cd835ed3c8f8d9d5e2647ecdc5eb61f755ae3a070c3") - , (h "31423c21fac504bd0178ad746c0a7eab80628a77da373ad8e4ce3d3a2f354cd1", Just $ h "6a833674f2a5a3048662256025bd5a51694304056a8e0e2035b13135cf17db4c") - , (h "75b8f4f227d744e86f6de2155ee52c7be4775a8a406034867c354b08b44e057e", Just $ h "d56c17678711ea6ba864b542a44f2c45126556a2e690f9b1c7e9315bb76cfbf3") - , (h "fa5a76bfea0ae6729cd02198560b4f62fad559b5bf446d71eef59ecb02326f95", Just $ h "b1e5ac20522ab4fda7688216614df13437da795622b4c2640fa06a9644dc91e6") - , (h "afc6ebc214a88520b577d13e95c8fefee6b9937188ef691945f5f19b5c9b80ed", Just $ h "fd0be5ccc586904915349966fdff0b41888d6b0d2fc13cb57a9bbb60b8f006c6") - , (h "74cd9c480fc4e3038752cc8e164d980e4c4deb1f5362149925bef517d75f612e", Just $ h "f9da8e956ee835896dbcc645028aa092185b1ce384a28ebbb7e6fd4d5ef2a573") - , (h 
"a9c8247eaaaab364a0acac4cb51624250170ea84b2c35a6a30f53c56ed301748", Just $ h "8394f0b596615f1e450fad484462457dfcf3c26ffb066549c0c97ee1acb7020f") - , (h "d71da4e4e9be2ffc5d99a83dc258b736f0026e403e9230dacf7e1b61bcd22cde", Just $ h "289deca47d063dd530ec05a0d3d1432300b69dcbcd0968a7f889df4d5ff6a1ef") - , (h "9884291f1a329a3ff4b71b2d347500aa73f2f0be4803c1634a2107b4b184c596", Just $ h "9d94ed32f19091284bdadc97305e5b34d34d75c06233aceab051671e54f1ac59") - , (h "0f2a0ed4ca5d15301d3e7a43f8a1db22b6099892caca4d365e8cede0ddb499c4", Just $ h "73c0faf6d7876e18f872cfec410b64824891951b248fda0cdab931da0540816b") - , (h "1dc4282ffc9b24a7d7de1cc48038e5cecf9ff06a2d21a7cedac0fe2858aa6273", Just $ h "b4d24bfbda6fd0526768824a9353b2b8eea488d0f07e8510a1fa8250961513fb") - , (h "f6d414c97ea19971f9d3a4ef3ed319996bacbb10bf4401cdbd0e55a958be42d4", Just $ h "e3a6ed573d36e53be220d100227f8be4b4464265eb018e3a97ffa1ac40ec8630") - , (h "caefd5e28f31c47d1c2589c0a7a42ed3fbf8ae56ae578ca9685be05b717cda8a", Just $ h "e687bb1244ad0fc1dc30ee0f3524820496d4af38d32e56dbe22d60b41ed37d82") - , (h "7f9105e3763eabbbe240385608ced7082cc64e02cbdffe0a956fbccbce018636", Just $ h "1ba487a2555a91def90d7ba7f33404ca7f06b654913a409d2ad81949edb34e9d") - , (h "b6f8f472df2aa5368e112969989ffef3a4340de37d71db01319aacac4b984b9a", Just $ h "6e03aef8d2c944c6bc9e686e240768c0ef2b451f47617143a2a32e53ad9964ac") - ] +knownEBBs = + [ -- Mainnet + (h "89d9b5a5b8ddc8d7e5a6795e9774d97faf1efea59b2caf7eaf9f8c5b32059df4", Nothing) + , + ( h "1941d944df546dea699791c318aeb9cc63b94e4cdb133d79856cda35bf7ecbb1" + , Just $ h "3bd04916b6bc2ad849d519cfae4ffe3b1a1660c098dbcd3e884073dd54bc8911" + ) + , + ( h "d46adef46f0760f7b89bd03926a0a631aded69b30379ca0404a0f68ebd2936dc" + , Just $ h "e9684707f89e93723906ca9b06fd32e8fd297e14ee3c3cb5d79fe0100e244c5c" + ) + , + ( h "7dd0c73651016e0706ee5f18e0b35bc6579e683b25db5e596e09eca9346f0065" + , Just $ h "7cc07fd783d9cf2d43b7deb1f5169d63e5a35b9bc3bcb97de06c9d6608c37524" + ) + , + ( h 
"8360a8d832a4e59c91efb21576a0aa5f3cf185e067da729cdc15331204128685" + , Just $ h "c793ac68d109921e5d04e9648e8ef112a29aa173f3ac0d40aa27c1aa824f37f1" + ) + , + ( h "9021389e9fe671b9fd47ea15ff1e17ab37870e30a3053de8f5e60649f6ef0ec8" + , Just $ h "9b744703908858efdc285f40e67c16bbf9851a5a071028871066d9dad458158a" + ) + , + ( h "6ca7fd6d27b48e980c9c8f6499c8f783317ae9686f3b8a458125825c7ce32e54" + , Just $ h "0281ffb72c852242ac921686aa17317743d8b5897c58678a99e171fad6cbc5c0" + ) + , + ( h "8313ccac635027eb2150c27f417c00541371f0a40cbc5fc714dc6c3b0e93ac19" + , Just $ h "edaeab3fa50abc2f10d76629dbc72088beab7aec8de77f37cb8439216facfb5d" + ) + , + ( h "e96a7fa23688a0d0f68bfdacddd1a1976f1c26724c484a6e35099d77d1acf186" + , Just $ h "0ea08bc5ce983cdf63d04f3f87b02f3505d52dc59222dcaaf89dd7629ade71df" + ) + , + ( h "b86032b4c1a3b9e5569fccf1573016d6f35efd06a4f1002a4bb97058941dc103" + , Just $ h "2864f2051e0fdcf9b9aed4f5466ac7481cdcf8b360e3bebd7881b363d63643e7" + ) + , + ( h "af82845f28b968c49eeb5b9cdb902af357435fe2a2c45dfb9ac7196adf9c1470" + , Just $ h "ff3f37d545b0a933f7556abdbcc91c096147ebf5332b5e512083906726b2bf19" + ) + , + ( h "0f27259d20d9c8cc17780b672ec1efeb224acb25ca84d17ec4689325b06d804a" + , Just $ h "2a5fd14ba10583ba912ec75e03f24179d187ab7f37dcf769558952c954af2b22" + ) + , + ( h "477691c837f0b7d450f084faa04b22459dac1e643f886c3a8a761d3eab9e7e65" + , Just $ h "c9da059f0022ae82d9e186a3ea97ee222a7aea27832519576d410789431e91cf" + ) + , + ( h "93cb7b9cadd44104da662aa17e94169361108fb630ad6bb5bf56d48789f7854c" + , Just $ h "d734a3305f5b3323b2e664e7beb50c0f26532bd242df239c69b3b934f06522cd" + ) + , + ( h "39d89a1e837e968ba35370be47cdfcbfd193cd992fdeed557b77c49b77ee59cf" + , Just $ h "dd5f2d406ea6c62b6e370f89cc39d80e40152d5b1537c41387359e220ddf5f92" + ) + , + ( h "9df8408f56c0c1d6e7cae2a8f3390ce0fe6a46c5219cbc29ca4c9b9411c0f5e7" + , Just $ h "180a01126e2ddae64abfdb5366f3335ed0cf4d10273f24c8b681cee38ed98409" + ) + , + ( h "93c52837302a61b9a4975b057b68a4c83ba9bd9a03854d4de8c58a46c8b3d0c5" 
+ , Just $ h "91e7a18ee35b2e569567b4f1310b82d2bf302ba99890b8ce0b304d648c33a6db" + ) + , + ( h "c4bf68e443df0abd985d72a5f9a4dfb50d5cca28e9c817812f9a666d357ea165" + , Just $ h "cbfbb1ab086ff34ef52c66e819c9ed897bc35724018f651b889611fd1777dc67" + ) + , + ( h "2498d9e98a5aa7e3c70e6aa0f7b5ba2e62d7e3a062a5d5717547a50b784a0c57" + , Just $ h "45c2185719fb3f94bd112e69bde2c006e593e6d8e06f6175fc351141034a83f7" + ) + , + ( h "fee8162888525d62f824fdde17e40974cfb69ca01de668aac1e3526a53aab921" + , Just $ h "faf8656965ea437abeab35829c7fec08d89ae1dcc8aa1ed966cdc1afa3cc0d36" + ) + , + ( h "6c1825661e4f567ccd62ee774456971941565b66012ba093b0655516a15300a3" + , Just $ h "5bbc9b954377636cd594af87b8afea5b51837b4fb32b94bd3df065b789a0a1ee" + ) + , + ( h "513c60b42fad56df2862cff73d8966d9cdfac267f7fed0622dca9cd7af728aaa" + , Just $ h "7d0a99accb4be05906c8e20253cddda11f5aa6983de853923aad1b7b2233259d" + ) + , + ( h "49eacedd439e3bc9cffd1c1be7477543c90997d4e50403cedae259eb321f7c50" + , Just $ h "9e24241dd3d73963b1fc963c515477c9a7ec1e5e79d93ee5b85198117bfcfae3" + ) + , + ( h "bda92f619fa9ff2c486c493bfa19c061a40f66a1901db30c8651478d5982ab27" + , Just $ h "e4b9633f586b7c01cd7fd74320c36e653fefbefa824effff0e1fee6d7fc5bc69" + ) + , + ( h "471ee59bee5cadc22ec85c4519acd4ee6f843eb30b34793db6ba1a9eb0426afb" + , Just $ h "312ecd199635560c66f55acf3eef0ede3d83ae718150adac5dbcfb81289ad0e0" + ) + , + ( h "e8945c01028c3811916a50f8a6508daf333377ea5697cda33fbcd13d60b321ad" + , Just $ h "8b1f645cc74adc4ad5c756e514c9ae0a25d693547ac1f3b82cb8bfa451c740cd" + ) + , + ( h "d736ce41636ceed47bc7852bf64a1c44a39b727231dbe1a778de96b68e3d1e20" + , Just $ h "0f7a47cea9c8aae114e2ea39a93eab74a3812aa8db6cd2f5b22209ca33f8b762" + ) + , + ( h "41d18923991ed04f15abf23ae8c08318c552288da81b499e9397081b353dac2d" + , Just $ h "254851a6b39c906ec02e9a8a7f06305c0c6700d7f2cb6b33005156bdd5c3ebcf" + ) + , + ( h "6fd5e5f45485f36b9e70270865f3e8d3360ea38f25338a53725054e53b3b4908" + , Just $ h 
"1c26656e7333a07f223e7e1f2350ddc1e4bac0b5245692b95ff1c127bfa5a52f" + ) + , + ( h "b55046da4190b68a18725dae62732c5679d26a2ff53510e38744e5262f6d22f6" + , Just $ h "535496ca4b7e2ee172f16e9501c353466c30bce792788aa2448ee774e6e49f6e" + ) + , + ( h "71d7068e981a233cbd1c65d73a04a399e42c78aa44c17b2d971566f5c17b680f" + , Just $ h "6a644bf448570069374805dcd40187bd99a324830ec086da0d024f732fd2ed60" + ) + , + ( h "644d7069580dc3ddeded68000199b3606b19e7e04f87a50db530956ff8b98ea6" + , Just $ h "b6b7971aadd2a16cfaf177cc7b0a2f4650fba667d2a89cc75d0f972893ae75f9" + ) + , + ( h "b6bb091acdaafe5c2d98b3e3f6e85dcb676bb1866c1b86ce67a4c035f497f3a3" + , Just $ h "ca737e6f276c57ed2e026091b912914aba12e8a917269f36a4c9261aedcc2a2e" + ) + , + ( h "2e7d5662ef08e6290b72c29788accf8ddb2365dce7a95698cf0280eb769e3cd7" + , Just $ h "7984bc3c8b96be0826c5034135f316428212060631706d7c3263854be3dc529f" + ) + , + ( h "cdd944cde86881147dced0f4d6b7234798bcd93c3e6b1bd34a198c35c8c7948f" + , Just $ h "5fd0c2cc69431b519a68dbd2e7c70cbcbf2ec7fe1d68cc62980b4ff8940462cc" + ) + , + ( h "4229695b833c7d3f52521929c96965235b3d9081a2cad3603a76df883f586ad0" + , Just $ h "77eb7496ff763b4501f97e853ccd886d98e188eb19832a9a2251229517255577" + ) + , + ( h "9d0290acdfd6dd68ba152f35bd56cb9d4ba89a79f8b1d638d09a4a00c8326dd0" + , Just $ h "5b64541bd2e23974b95c453e2dff67151fc0f28a73c3b40adba1ae1ee3bab391" + ) + , + ( h "07fc2b83a768a6f958b7c72454a7bdf84336fc4e8dd6d9c0b71db5a5f004f7cb" + , Just $ h "07312ef90499f668791cb7c97c9a16c9c3d2d4893a58c48c39f88e796ada7a29" + ) + , + ( h "409e8e2a08b77a21e65635eee20822d6e801b65acbe10f99f7935ef6a02ddedf" + , Just $ h "5f8e6368bf04f9d16a7adf5b0e37e8a095df2df848e18cc48984e097af598ecb" + ) + , + ( h "66ab026f017a4d062297b150e3c330c8431888b274d0ce408f61676e38022ab3" + , Just $ h "636dcaf68b4a5e4dedca8031d159d5e3bcedb3abd2e434bd838a1d30a6b0fcf1" + ) + , + ( h "31732323544f560dbeffa0258a0484302035ceb27f806f03a5127db303084900" + , Just $ h "55503f559d930e17344e395c964f71c33402778c874daf62ba8b97de1e5c3902" 
+ ) + , + ( h "fbb266aa8f59d5c34429268c396a7b7354ace711ed0e8a54425e4983a826a3e6" + , Just $ h "0e859fe991e743c30e9870e9135240e98073d96e3282c48c9032cdcf94817edd" + ) + , + ( h "ab49d2db8a22b8de09456272b8e58c8dc2d2b6d17501233fc7c454f2932d5478" + , Just $ h "06efacff01810b44bd51cb8fd589feca7ddff5c8f222d8c65a9f443871e72e45" + ) + , + ( h "ce2f8ad0887e647f703b6792c503765ae7a38a35919e4688d58555c02d028d7b" + , Just $ h "9211bb7008251c1bc2d7863dd06d73de9b6f4a19a5fb40a354613cbaa1c6b5ce" + ) + , + ( h "f2d1692f23cad89254bc72ad8a069feb1372b41de3d3051f0e97c64fb39f5a0d" + , Just $ h "209ad054b0d93b5f2db7ecb7a85ab976a4e3fc03ea7b91a7b6dcc8e3740c8ed6" + ) + , + ( h "3196a0edf8dbc83ae8409f991e74d1b8a0e322d718751c9a448e8d27dc69cf8c" + , Just $ h "86f0e6f10e4286e7f1997a788468c9bd941bb2fc647d1ffbbce6dc019d264c6b" + ) + , + ( h "d67759e7149ea5fcc878ee94efa0e706a32a57c3b4694a5f2402430daf41d263" + , Just $ h "90ffcabccd5fb19e290a65db309a2f7e3200ef31f249350974806439212c34e2" + ) + , + ( h "3e95d9cf920170e8fc35a87505839af22a6a23e9bc4e3707c9b0894ecf97895b" + , Just $ h "6f0226bb8a5b53bcf8f22af8fb847c7ccc5859fd299398cf9145538c7e8552bb" + ) + , + ( h "d9e9ef3277180fd7ad404e3b7d66e049a7fff9da1595fe464be68943dca79510" + , Just $ h "b05742df56b52c2fbdb9daba0a8d93bd9d1a8f8ff0185922f544b7234460bba6" + ) + , + ( h "30d20d5423e501de0461846d9f7f5afb09efb013fbb0bf95184d78485b04ff37" + , Just $ h "f570e79e366464a864a0e0c4c8048da9976a7c1987b4fe49bdd6c697fa926be6" + ) + , + ( h "3683b42bc7d7f7d0c38eb46763c2c2c4987d7270c1955bc712a379b53de1e835" + , Just $ h "bea5b0356fd54dad2c7400dc811f67b3ed5956f25d33d04abebe86aacd6e5eb1" + ) + , + ( h "60d0f38034af88016244a9a5ed52e4ff053867e157537fa27b9a7d6b33c68475" + , Just $ h "89042f9f40defba0c26b73cbbf9ad10296bf942ef876a3f28d158421579481c6" + ) + , + ( h "09be08c5c7f495fb23291602cdea3529418aead4f476ec759b8068914615e0a0" + , Just $ h "fc5ebbf8a921a4bc9f45538196434c4c6fc8f2c1505765b3bfd98c04262327aa" + ) + , + ( h 
"ae8e27654b426832e3bde56b0bdabe0d5597000913effb4cb240a0186a327e3f" + , Just $ h "01774b45045812f5edd64bad41d3d478c2371fd7826690ca031f9d2b7a5a10a5" + ) + , + ( h "fb58cb86921561b7b862c1bed586421e708e4569548c2c4cfe6bfc4d4aabf5f9" + , Just $ h "29b91b18aae36dfe13bbc4a08395d1729fd1adf2b9dcb2479a06afa4392b6f59" + ) + , + ( h "716373239cfb7b64488ecbb2c0419bf22de8c187f0d480ffd909e5be9aee4ecf" + , Just $ h "15c1e7fb3d5a9acf56d6290a94b0c47a9812f8389a8afac02a7b96114d81a965" + ) + , + ( h "b0c530824f88a5ceb168bdafe9c24661e2098557d60b95788e97c386cfe69ffb" + , Just $ h "20d715a8f8b2c36f9343a04e6ce8e4bb8c6cc06ed1fe0d6ee48727f47e2b8d97" + ) + , + ( h "4cd15ce51ff41b3cac040683ce40d73c60a50c6d06c9dbe08bfc351aaf174f0a" + , Just $ h "51c6774c1704d922050d4730bd8beb1e9083ad60f604a9549d505894774a2cf9" + ) + , + ( h "e824ee206c29a4c43b907d39146fa339158fd93fd83a5a71605fe278d75d07b7" + , Just $ h "834d37b04d85ae60f97d231c6041118898b29bb10b4efe832638015f2e4ec040" + ) + , + ( h "b307ddd09daed4882ad2d94a018958d2f5ead4f52c7b04621109186ca840d59c" + , Just $ h "37788b6021138903f10c4c9f0627dc0b338c883556506d0a8263f9f5233a409c" + ) + , + ( h "efad133526785f07025f029b6a51aef45d8e2bd9171cebb71dc5113f7a7927b3" + , Just $ h "6a5b2a6bb5434e1827f78c6bc943504c69968f1734a721088810f193ef610ce2" + ) + , + ( h "8c53e50a3c233deff3d15f87394a9f25b23aaad398250420999fb30b73c5d8d5" + , Just $ h "ec99f7df858441ed05e947bcf88834704686d4892cadee8a0c77eb854fbc1e7d" + ) + , + ( h "c4ac22a380af80de94cf18348bcc4efae3b8cc4805486ca2fd152a0d4e027b79" + , Just $ h "445c510e03e5fa070d71b0e64343eb033dea91a0737cc32a552b1a664ed73b71" + ) + , + ( h "87d5a36a4219d11d4381650559b769e833847149e58b1c543929b30ca7af9bf8" + , Just $ h "5a713c9596bdb1935f835506d8161ab75e358f658c8bb8f656c0fb13f884672e" + ) + , + ( h "60b2fec0207fde0f9891f8479e6dc6c6741b513186c84f76bea9f612f6339900" + , Just $ h "2f0db218ff8c7ddd5fd11c5724e305e069f58c5bc9445c5486004f20ad5c3afa" + ) + , + ( h "9b03f8143a705b8fb44064e5fb9e8ec2ea8a7a645ef87df8f45f75bc8963c00d" 
+ , Just $ h "1c7aa475b21b20690b60d318b38ea69bd63f66a3e8949650eba50de31a990060" + ) + , + ( h "5601474a97a4ab955b894ab25cafdd219042bae83addee8f4c1299b282b8a5e2" + , Just $ h "e02c4688b977d6655d51e948fd161073d72759b878751137dcab6c42a396f967" + ) + , + ( h "7a317fda6860b998ab34e6df1d1d8b21ba60d9a2396e992a8ee73bfdcae6d820" + , Just $ h "6e5da1bf4b2c203438408505fdd6f8970fbf4703570af678a9435f5f44ac089f" + ) + , + ( h "4672dea0a5275519b1ed4a564740e3f85dc0cf98adaeaffac245eb648a06b3f0" + , Just $ h "fe379bb300c1b57170725554029eac5d802445ea8f326dbf866036d647c6c742" + ) + , + ( h "fd509e014462437d1786934ec6b622a705aab62318c87107a0f245b4cb404a83" + , Just $ h "63fdabdebfb527f13dcda8d3b83dec19fd6e5022371032fa9474ada47e830adc" + ) + , + ( h "7a88baf29982e95f601f90a1e099a373b03362b2d356e91104d9ba5e95d36056" + , Just $ h "d10ce7039bff2974e6dff683bcd761a5a8134432c95bf46ee07b38ea674e6613" + ) + , + ( h "34a45aaedae4665afbab1fe2d5a1732ceb3e6d8ae9af0c1083d5eddc7038bb91" + , Just $ h "86bb7c9f912741e67769e3a97adf091288219056b346ac0459745c31864b6374" + ) + , + ( h "e325be55e9c30b5ca89357ac6684d9cf8bdae2ecda603f6febb676fdb87741dc" + , Just $ h "ce496ec6ba3e1c278c633ac7d3a77b8f3018bbc5f0073a364075c11977e6e03d" + ) + , + ( h "8eaf019d681a364c596f3ef7648d64d05480f11ab24da6add90ddb51e79c5c4b" + , Just $ h "ec5348295760043ff95eca74d57b2194b41e432c6d7f3fe28c8c6dccf6d13aad" + ) + , + ( h "e9b830abdbc86d20ed0c6812f2c5b29f730a5e4cf48a2b9d0d99b979b65dcd2b" + , Just $ h "1b44ce89f19161725de142dacda3d92cc85d9788f9ef3d196597974fd818e4f7" + ) + , + ( h "5815f7890f407e75ea0c40784aeeb3104a24972495d72c56f8e2bd259b5f6109" + , Just $ h "d2c25669f9c8ed3eeac855fc34ecbd822f6592634f7efd7ab2d8eb1b8006c622" + ) + , + ( h "13c26f05091afaf1d2f207f88764255442d861536b256fedf2ee4a4665b2309e" + , Just $ h "2c759b62c71cea1a8ef6a463c26c1a847b452fdd6bb7ebd675c97cd70284a0f3" + ) + , + ( h "af80b889bc1b7574d5bd33659727d6ff15977abb0836fc92e28043ec981e7e7c" + , Just $ h 
"656b7e2f655e513da98e9edfc496d93e950d1d006abb25349938a8df9fe5b1bf" + ) + , + ( h "1d0647c60dfc6fcfeb59ea5b4ad3106e8944d5ffec50edcd88f731f88bec0df4" + , Just $ h "5434c003cc072e45167eb267f0d2f7962aec78360d4008ed1ca7555c62406020" + ) + , + ( h "e2f5652f71af8b91224390a7133c5220cbc5ec65c2ca0101e9ea8ef49a893474" + , Just $ h "d96d0e24fc07e8f3f26a8c987687a8773286545b533d7f5d942ca30fb699a187" + ) + , + ( h "3cfa76d00cd6f6d8e44ee0671ac3914dbe79dd054cf784b523d760c769e80bd4" + , Just $ h "834b20982356a5a8ba71f462540012eb00b3f7218c700adff749721c111ad44a" + ) + , + ( h "ad4dc9022caf9c96d2ee0bae1f9cd1d871eac1db69129f0a957e61b269b4e6fb" + , Just $ h "8f94acecd83eb4a2c68c80cf01b37ab3d6c02ace3537eaa71c8142aa5c9e1c5c" + ) + , + ( h "cb829d6055ef80e7cf94c6b450c1fe96a300697945806b0654e51f0d403c9f5a" + , Just $ h "1816fefcf5d342b6f7bf34d5264bd8aa25f307174c4a7d78db5e41dfd387b5bf" + ) + , + ( h "34727a37ef91074928ae386c1c5d2fb76b1b77e1db06d987ed9d5e5675b5aaeb" + , Just $ h "bfb3e75a8d8ccad1512858ae2bcf02f61cb088d9628a655074f7406eab099064" + ) + , + ( h "8284fc4fe9f2f5d055b74d09bceb3bef40b8e8641096d52b224e655f03c3db28" + , Just $ h "b19e4920dabd9668427d930f94ee866554b4f63e37dde2dae6bc05e343366bb2" + ) + , + ( h "c0d9bf97f9f2fed3265487bd855daf8f726019f4641d5eaa7a97b1af2df93f83" + , Just $ h "a0c47e9af3902f7107ffd14487d02f888565585d789e2ba30137c1fa11b7771e" + ) + , + ( h "ced2cc28ab6335a3f970050957124c24359068b2ec21c2362e1a7381b6bef44a" + , Just $ h "b9937c338033d1ab210e72e0850af45b64eeec6f654fe537fce710a2dab1ac88" + ) + , + ( h "290b6b1aa44ad40fb49f56e22d06a686fda1c12eed7561ea678a5d51e6aca39c" + , Just $ h "bab8424e827362187f702f7c963aabc1d79cd0d06e0bbd5b5a516828c2a7dacf" + ) + , + ( h "a6522cfd3724279d9f90513ea7493ee218bc4700f4171c44e92798dd600ab61d" + , Just $ h "945708db11e2d3daaf2508de6386c0d1864a5e7bc45a777f7180da4f3947624f" + ) + , + ( h "0d76d3846921fd1802712128cc5f29eebfe59e7cface023c40a51fb5cadf6628" + , Just $ h "6713bc69f90f9bf758592c10a8696fb06e70ee3df237edc867940a4a59dd4626" 
+ ) + , + ( h "9792e26bed159e3e3cb188e8a4a5d415d2c9ee4323b285a9b25089dea39ffe5f" + , Just $ h "1653f6e3b4537405acc13c6f58f3d64ce349db3274481fba84a9fd1ff0a19359" + ) + , + ( h "04f5c31b5b81c6699ba258223e1690f0d82f25408e1a0e54a3aabe4149b28b9c" + , Just $ h "2e52e4f23fcf6e2e8f9ae00e62077a102fbacae1887d062c9b04334c35743c33" + ) + , + ( h "a69b52507e9c5a43972792609e4db550af9514085685def88bc1d132050928c9" + , Just $ h "96d0c74399a410e2af5d399cd47513d7827c044741ac0957f8f00b522f99c334" + ) + , + ( h "f9811b6ee30113e366bea79c02c944b88f89a67d13db1fd88effe991ba27e0c2" + , Just $ h "f1561094d75db8240e62008179882f126d68519b79db7bdaaffbbe2174c519ea" + ) + , + ( h "7f7eee72c19a742a448a80ae1ba8972ba462babc23ce7375d4b566de8684074e" + , Just $ h "b012f6a716a6d835042312836b0f7b30eb4045f65d853a3fabd3631e8a88c4b2" + ) + , + ( h "a0484d825526bba7e1e61d317a8152914fdbd6ee88af67239bdb1541333f1b6f" + , Just $ h "1d36b7698e116116172b3e7a903ed36838e2477f32e11784b25fb34b49d2cabb" + ) + , + ( h "206445ee4177ca5fd564d6a290ce7fee683d674070adc9b31e83e4ea5c16d334" + , Just $ h "468bdda388fbab5ade442ce37aa10df8c39a239b9d0ad82dc6dc526e59adb7eb" + ) + , + ( h "3677e40c8c7aa5f773863e22903c2bfd44180730c4a53ceac75edc4f28c60395" + , Just $ h "12c26270457ac7fb00f70f0843e4c939910678683a1193a90750ef6d49baad92" + ) + , + ( h "e622209e261795a0dff6deba4432123828aa4d9a6a959749493215e524497957" + , Just $ h "07d1830daa506f9891c0e9ce901de1e215cc1feaed8d1a8c7d4dcdb264070651" + ) + , + ( h "cf51d1440fd2fede1e2163bbeef4bd0941b65d7f47719b761ff896df34477ebe" + , Just $ h "bada79e7f8cec82c37de5ebf7362ac2ad455b88b288b05f696ff92a6d3a00933" + ) + , + ( h "46d133e7f2c90ce1117e653e8e7b6734f4624b76b746d1435e5b58fb8407955c" + , Just $ h "6be4621eab180be3b860609aa75f26a673c8c1fe536297ba9ce57b26cf355123" + ) + , + ( h "f0b0e932344ec1f5410a0b63b88d16e62df6430de57a5aac15667befe02da083" + , Just $ h "a4d6940442710caedf25acd27776d052c3187db4e080f63770ef2a940d73c57d" + ) + , + ( h 
"bdf2eb6d3fa919c0eac3303dab3c513432a878d0f85b38c532c40d4b01203752" + , Just $ h "0810307af92dadab84c616662cdb38c87087b0cb6f367a09aaeba18c9a3c663f" + ) + , + ( h "100ea79bf0160b2cbe9bc8af95924cab0ed78ae5fe2aeb3c57e92c122b972997" + , Just $ h "01147a161d6882846c48b9f332898e46c2eaa81c03ee3733ae1baf3ef4d7f8d9" + ) + , + ( h "30e9260f82620f0e225b54ef08e136054a9506f8841d0bebcf6e160411b58785" + , Just $ h "29d71d14b3fd25df4ae7579c182bea49c78d314771cc7fee0e1f5cee275c6a96" + ) + , + ( h "ef64ca27b9fa462a07a4437935ed1cf914c1d95783521b36ac70a966fad6552a" + , Just $ h "8d05bfff3b8d8e599eaa1be9690914fcbfc01a978e942740a2ac31da9d5b1b73" + ) + , + ( h "0e0c0dd54ddc902d59ae85b1d11b85ef14ef54b5d43db73150320c47093160fa" + , Just $ h "cfc65b1bf0eea9bc0b7da17f57e262c59a4eff39febc69f67bcbfc540ad828c1" + ) + , + ( h "bfc7eac3114296ffd8216b9ef814260e15430c47ca0190c6a5b19946b747f7ed" + , Just $ h "0871ac8c54e197ade0df4edec839489dde4aba6e05e5d8ac10603d6695686cc6" + ) + , + ( h "acf1f08d72ac98052315d1511de578ac87a752a20e4f9546d0e4220fc166ecf1" + , Just $ h "7cc75e07fa3cf4ebfd1788ffee236a4fcb3211df281f7ad5049359a5890c220c" + ) + , + ( h "72cd7645f80130f94c1f3bbee3f026a45009d248608015307f17fbed844a3887" + , Just $ h "03301fe5d844163acfa03496aa1d6f704bae08fa6390136cffc2d875fb5a9383" + ) + , + ( h "2f9706c57cebce99b4189d46bbba370ebbfe1c47a2d87a96b7ab7a2cb5983a36" + , Just $ h "c1d42341683580d405314b1d496050ff4a2559b0d9d772454e612856df20d7e8" + ) + , + ( h "5d9da0f05635babf7a6fa28d6701bda5d39337790599973f77e3436a433bfe9b" + , Just $ h "4345c0e61b1677c0f8846c32b72c3f38b24742599509123e42d66a9ee09669c9" + ) + , + ( h "e784ccd334df759d4a0d74495fa1c0c4742da6e75027810a14da607900d7707f" + , Just $ h "4d5d931f0d3e6d8077c227a5834a4a0979e2f2fa0cb8150a593cda986d81c533" + ) + , + ( h "d26b88917ae60b258bab408474dcf201cc4fdbbf0ccfbbb658806c40b82c4ce6" + , Just $ h "07d08c29e39384f3feb0cdeacfe615653723b3c0fa0ae5fd143bb625df718e47" + ) + , + ( h "4ff19f0a3e3b1264d2a5e1db00ed01423dff27b497557d5fba4149801ec13ada" 
+ , Just $ h "70ca5276f4ff81e377fcce1fb2d01ac3d3dfe2fc5d59d3ba2edf986ddf959f38" + ) + , + ( h "c0169ac56fc41be75bb63c8896571d1b433eda34f2ca471047f588d6284f5e3b" + , Just $ h "590ddaaa8871865bfeba64ef38088ac7ada72f7169f2952c79175cbd37fdcec6" + ) + , + ( h "8942e79228eacbde1875a5403c449022071a59cb3e07567406fb4a4292fcac90" + , Just $ h "5c1a5e22986f19b6a8b52081b86e1c05faa7189743e733e18b31f44ff87f1c1f" + ) + , + ( h "1da8b10ab6c2e5a4c6483fb272c92b0d82ca7b3b6d47b701b560adc9ccbbdfee" + , Just $ h "2fe2e77e9515766e14a7cfac199c04f1df50e37158126df4f65fea5435a43d2a" + ) + , + ( h "67c524707040581ce0f4166a856a9b0355b12e4e43e2554e82e6971e01562aa0" + , Just $ h "8afa3370d50b23e9451cddc2e231c370d354cb0c040ad102afa72c8f0e8bee3a" + ) + , + ( h "0acc462eea6fc10a893c7c4fac396f5e2b6a2c11f4e6c11d73a6a15617e264d9" + , Just $ h "f9a927bd9b03774a78a6cb7286b9e07bc5c1a9c5a33a915a183d5c5b1ec400f0" + ) + , + ( h "f90b0cbf9120adc96371769f26f9134b5acbd676e0911b89e7b55ade07a135ef" + , Just $ h "828b194e19958c02672e496c8c958daa4bbf8ab538c56d24c54dc63c43ab100b" + ) + , + ( h "9e443b0d6dce0f8e90bbc53ddfc103d4881659d00c9883edd6ac54d4e50d54c1" + , Just $ h "cffd286bdc69eb90c5ced3637bd7018b7e3e577fb129d09921e8ae85bc775240" + ) + , + ( h "a4f83dfcc4673a94a3cf11c7db9e259bb26e03e260ed27b3b82287752fd99a5c" + , Just $ h "12eaab84c3d41a4a13ea9e2d1e7224c5169a3d78a3f005cedc47e292303a5623" + ) + , + ( h "a69b684e7b9661340ff2a37e801cb9e2bafe016da2579fa34492ad089ae7e175" + , Just $ h "e4a473e6e52594d97a4d5a457f7f67a0fceb89a5f1d34e0a65668f369c432478" + ) + , + ( h "7e122358be2b160454a8eee16176a0679ef25943ec1cb76b5a672d1cf26ea189" + , Just $ h "d57b3bec2d4e6ce3024b668ae0497b5246cf594fbab773f70c6d08a481ea24bc" + ) + , + ( h "ddbf15fea2ff40ad657007888fa835fb79238b1c3a3f3cc5c5791ab68b75162f" + , Just $ h "3cfaa36fac18a07b6e04e40d3f8036626d43261ad153710d35696704b88132ce" + ) + , + ( h "022a13af10636682435db05e6618e3e56887f01edf78f464082a370044b16fe5" + , Just $ h 
"b3220cf64999d0072ee8e56b3042d14bedc3f0e487ce38675f441204efcc5fde" + ) + , + ( h "d5b4f4f539fb287d5faaf4e2261455fc5ecc4674e1a5ace6d3504825d7df93fe" + , Just $ h "4317837b835bcd6fc1bf86fe38d8f27bffd06b9e6f604d36e94b48562ce71144" + ) + , + ( h "3fbef86cfb64ec7ef7071c4e98f9e5f2f8cbad73f23f33f48059bcd5dc70ff9d" + , Just $ h "3f1d33d70b69c0a6a410e18cba5c75a97bac0e72173b32ce3a2d0b401ec95441" + ) + , + ( h "dc50b335c68aca2118a027bd6967d117029218533273ad2710fd2dc0aceb005a" + , Just $ h "669ea3d79b520932a0836aeffb538835043601d3679a4d9871053ad0e344d2f1" + ) + , + ( h "b9b9805053b73700ab9be9c9ef1ed34f96a1bc892b211d5bb157e8477ef52bb6" + , Just $ h "76d2307e8bfe87ad8faa78f83b26977e15ff1718a2bfb369bda8ed37a1d8e651" + ) + , + ( h "ffdd4477a43ff7914fff6362f3367a7a2fa1697a66073f77c49314114c9892d7" + , Just $ h "0b59c81a757daace192199dc621fa76278bdc8cd1587e17eb163031cb63a1773" + ) + , + ( h "5f911e995aa975b1186f306ebfd55c0d66cb6a9ff0d497ae62327dee8e004621" + , Just $ h "d0b5c57adbdf91309578cb4624b51ac92118d79313d671cba4254343f1855e9f" + ) + , + ( h "e39ce1fb0897eeac1c312e71fd4a10d7eedf09c9e7b3445eb14a4f6ebd380a84" + , Just $ h "12d6357bc0ed1ebaf224134865ec7216feb54036bc0f15a6d548be2966c50037" + ) + , + ( h "22208f9a3db080c75643f20f25a95d33e278fbd79b8517f25ca1234eb814b1ba" + , Just $ h "41705445e27e524818f17353898b6915da994e67fb1ff4a1b90d998f39a7b7bf" + ) + , + ( h "09dc20c171fdda75522271915713290adc456b4c0992f6d4b2711dc68a4b5afb" + , Just $ h "756716bc3259dc515f86f0959cac508f7d81f1297ce7383836d599e2a88d0369" + ) + , + ( h "d59c8986421b04e07a5503a3394ab8191fd2efd7b428d42a3f054b609fef903c" + , Just $ h "61519b889637e95380e7b57f9e83ad492bd99351b55fb75892fc52cf5e9dd9bd" + ) + , + ( h "fd90801b3dd4a8ec0cb9ac62e0b289ad597a33d7ff9fba3f4ec3ad85ae0f0191" + , Just $ h "af14c3e90442489ee3a32e7ce39bb673718236c157763f9cbf5270a23e59ead1" + ) + , + ( h "8c2f5512941fd1d310da85e54accf96c48fe0addc6153249cc64adbc4bd8b8e1" + , Just $ h "75b229641d6ce402f1a48734fab9ab4ac947f7ebcfc70df9bbe69214918f31fa" 
+ ) + , + ( h "1f29ce7874e99b3389fc95fa4f192d6e24d2412df00033dd45506d46fbb71578" + , Just $ h "459472e4577a0cfb5e610b59684ac8c219281eaaf54af3807123e6572964b6f8" + ) + , + ( h "233ec3032480e7bebb138a49f638d2a4d4dba88b2db9c3452b5560022a418d70" + , Just $ h "e0dda9637957f95a42427b2abf3f1dc5de3313bb0779bfa186b755416d8c7174" + ) + , + ( h "0d78e559cd3de80cb0d0dd62ba22ccb49296fa86ad42bd8c5f75a022ba8c3987" + , Just $ h "936b8adb761c3d0cd66a72441cdef6e9dbe9dba0a7c08a71ccbd66ce3550ee73" + ) + , + ( h "22d30e5884d849a930a3878050a45b87a57ca4a23d981d93a7fda4433e489b1a" + , Just $ h "732074e796263ba7b82d58b176f687dd1fe9f6ae85361fba974df34233296738" + ) + , + ( h "13ab0adb03e3e84431dd6ab3e64878227d198868eb0f232837aedfbcb2d20c20" + , Just $ h "f4e092225001c836b439d1140b202f3c504e77b238bab6f95780b46d2191e7ef" + ) + , + ( h "bfd99efd110f4db6df5153475d3a02f118547b728f4bcd986d8d56252f29cf4b" + , Just $ h "6c3dd068b3b95d89adc032bcdfa2704a26c0049517122ca56e5d2f6b3d240352" + ) + , + ( h "7f5d7d233f44d92533586b0396641433923a921422123bfe1785d7b07f840fea" + , Just $ h "73ee0b2ecb4049f7c3c9d1a876687967c519c3ca90a6d3e616b234a9ca2e0758" + ) + , + ( h "c6149fec551c36d5b137c36805a677b94f8d52a794cb2369f3a13d062c84e0e2" + , Just $ h "54db91da066dd62d9fdb7e8379dc02191844d8be55130499465311f1fca4636e" + ) + , + ( h "5ddda1e552f3324b1b467921f6045da8b26e2bbd1c66ad114742fa5110217f91" + , Just $ h "0eae59c2f15e6327e02714198a9f7efa6d20dfa8f7c4f01cda026331094d3e63" + ) + , + ( h "7a80f2fb64cd9857a0836ef2c2173f39f070458acbaf084b0d317c298176cbc5" + , Just $ h "5341a8aff296211379cf07c4d09feafa1929c3a79f3a7578d27d588859026257" + ) + , + ( h "36870d4f2a83229a714945e10ca0407dd1830e95d4069be1293b97929504d9db" + , Just $ h "d02c98048182d4b336dcc0dfbd3ec1e9f0f5ee593538fa6ec1109f2c93afa59a" + ) + , + ( h "7db251abe473d7847b5f2057252537f1d6c85a303f6eac33a0c2eea80e4c562f" + , Just $ h "c99ebc63492b42820dd1fcdaf63cc7104b33885111f78d769d8ba7f512060356" + ) + , + ( h 
"21c82d7e43e481f6aff1e245eac58b1af2dd7cd3c917919adeb6ab7ca5724c27" + , Just $ h "6ed93328c37e1927d9605060ba070a474191b9cea439c040e7952a4f4a3afcc4" + ) + , + ( h "5c1fb745b04d28e4e55aa421c6fa9ae6e26d1062b0ec42520c58e790fd8b0d09" + , Just $ h "7ba55bc718d5e8eeed0043eeeeea42868ab0fe74ab02491c407cdab1fc86731f" + ) + , + ( h "4a7c836c4e298dd90a59c90fa0f9461a308c9b7285281eda820e5ece39b8e665" + , Just $ h "c02b1f61f202e68f5c7c71d6fc77c84ef82d28555f1e3ab34f92e6c44c42791e" + ) + , + ( h "1a91cdd2a9a30163d108c71d470e1f9b47f85df800418574cacdea31163cb37c" + , Just $ h "5b2f223aa9a64d4c192a55370c484261d16ffceed341f193316db69144a30175" + ) + , + ( h "b72d945d51a11aff38114d908675093a39e45d9c728d9d076579ab1ce06836cd" + , Just $ h "8cf3bcff0fd76d9b41826260a242f1221c3b9145244ac4f1ae0ca2619c4da7a6" + ) + , + ( h "98a74447dd0341620c3442727cc52e0644b9749ad0e09f072f5d6ac562a32ce2" + , Just $ h "09bb26d2a40e59e2885d131205db71e448f1373a45f9a9164109c4559a620b14" + ) + , + ( h "7942acb9a0b44be82fa2409dbefb308a05986cec86bca1609b252131caf3a5eb" + , Just $ h "8e752347dbd996346ec7cfd14ced0be1761d0cb15db1de5a3ce1346476710f6a" + ) + , + ( h "4fbf10f94c03225cac5b933b1673b6d56db565a01da9b4fdeeef723ac0fd1f47" + , Just $ h "dd5737d832a456dc56d8494e49ad536395b8d75a3a17ac907e99c8d091e18cd6" + ) + , + ( h "b7e2046c9aac95c3adbdd0fdd99ce6a482faf4a368956d7fa6989700fa1671cf" + , Just $ h "adf0e5a0b5d14dea6a3954135227f863e8892dd8603f2b0aefcaa4a53f25a5b6" + ) + , + ( h "2d9600413006002280572bc90599896fdfac44ceca6c9e6f841b4f4323d2a9b9" + , Just $ h "aa31acff8756b9abf038ff144954d28a1241a122ee9dd8c443ca4800092d57d8" + ) + , + ( h "fc2c8cde65c334da3fb60ce44ac8cf750044e4e40e8393d98dcde1a4c90237fd" + , Just $ h "42a0763a9fb9e886d8f7a4296dfb7ef2002e65ff66c840a5afd3e3b5a36faa4d" + ) + , + ( h "788e389dce6e966733b89886ade6cdf6db91117706a85b633f1b94ac8819d2dc" + , Just $ h "0ed768118416cb3b960ecd668e257dc24a8135ccada18b9b693049596b44f99c" + ) + , + ( h "c026cb90d1411e59bb6f94d004565590f732984d216effb2391ce068bdf1633b" 
+ , Just $ h "26eb78523cdce41b8c46283d50e72d272b2ce40c1585b846c9c6b461de5ce172" + ) + , + ( h "222513fb41fe11e5fee62fecc505be781e0f12521290a4ade539e7301a59159c" + , Just $ h "36295893d7718e0515c8082b39b8ccff988e19f79a18876ef82d17c94d8c54c3" + ) + , + ( h "1140db2d521e942a1fb60f611f4f0140f7f5c136af89a617a22ebef4118a090e" + , Just $ h "ed044f04b6febead1feedd95e9897e0e669fb5f33c6d6e54bbd137d807773fae" + ) + , + ( h "14601edbd3de2f9fa4e02ca17b7651c61990a4a41cb6b269dd2c0be065b28fd2" + , Just $ h "4daa82d5f92116f24ac5b79abf6402e1444ed732272c99bd3fa0a45970200834" + ) + , + ( h "18ce09aff53f9bea9fe87be77a37a719157f1c214dc3c2553df73b42c2d80700" + , Just $ h "eafed5ccb8e22081c8ac545c3ddadc37a4fc5ee9f4025fc881336c37d46be046" + ) + , + ( h "5a85825f9928a2b121e02f13ac43be0976f5f3987a09329a650511884e77a536" + , Just $ h "9cb71054b8fcf74332006508a5ef359be091b2da987acfd10675b10f108ab128" + ) + , + ( h "d20ada1eeb577fe69fcc180c7539bf6a14206c1d400b91e4c5373de13c8c3ca4" + , Just $ h "9856162a65e34a5ce4e75d9f0430da017070f99f497d9c7dda623579e2f86bbe" + ) + , + ( h "5a5a92b2793ec6a47256ed752438c0df18854506938f48516e2ee3f15cc59451" + , Just $ h "2bf6d8cb81394318c613e890b6e696ab46900d3a776cdb45fd72a511ad52e5d4" + ) + , + ( h "734a1049ba606152e6abb426ce408477642b059cb0c1e8407aafca39302f14a1" + , Just $ h "175a39d233624971bbb40cb0e56eba78d8035e88de24737ccdc4804e487572b2" + ) + , + ( h "575f2326bdf87ee565d9fbdfdac7d724bc9a1b0ca0faa497fe5d4b35acfb3371" + , Just $ h "2aa512c62ccee0164e9075bf916dc9250857363417f9b289606cc17e0100b3e3" + ) + , + ( h "33584f3b54f617c9c186140a0238c21d2505bf7b95a79e3b584ca0602976ca7e" + , Just $ h "005ab76f5613bb71aa341f9a1717c50129d3785c8bc66d66ba451290e9665a14" + ) + , + ( h "bb46fd3c3e7981dbf994a9dceee0ce8cbca9b90cec4ea01d6c21863beadce7f3" + , Just $ h "008df176de0153baef13113b1bba48fd6bc77d3c8c7f50952f6053d72915a304" + ) + , + ( h "a5bd93cca61f17f90d94746cacaef15c6073a335a4db4b34ac6a18e4dc781fd8" + , Just $ h 
"98e8159d0037e2393b00c452a4a0c8485480524b0309d86917d801f0095a4a51" + ) + , -- Staging + (h "b365f1be6863b453f12b93e1810909b10c79a95ee44bf53414888513fe172c90", Nothing) + , + ( h "9fb147114d56e5e81e7817d37da4a87040664d0e031d89464ba94d55620b96b9" + , Just $ h "76a6cf6a934456d1c86a5706365da9086ff65b6844c84f1c2d50c851e02c9c8f" + ) + , + ( h "863ec31fa56ee8c77fa5101f1c7ce9821ae77afeefeb980e89e8ca986ff1226b" + , Just $ h "fb62f9d4b276911aea3bb7bf80c8b860bd128e39744c373f0cb7d84737e0b12c" + ) + , + ( h "e0ec1c7c5cfe2358d5bd59fbd4619f9eb25cbc7d84f87f4c66edc75343002714" + , Just $ h "c617f7e5fdabdc5c69ee3501e90e7f2cb0d59fe7ede06080444dbd3d3ae8144c" + ) + , + ( h "fcccb905dac43709c987d9b15de62de89bbee2770a0f21ff1f0b57b84a6012b8" + , Just $ h "cbd00a3ed002c1ae23e4f4b24e1f2c23a5da2668f559f569146cc9e132bcbda5" + ) + , + ( h "8b6be07fbb909f4110451b56517d56d6c5eaf9c43c9e703566c67173bd3f281e" + , Just $ h "c9942ae544781c8b3c590d5cdd689530978b49e153ae3bfe5aa9cba71ee7e300" + ) + , + ( h "d1e2f4320a3fa84d948d7d15384369118be5ea114ca1f358eef666f1abe3d874" + , Just $ h "567fa18d7c624e93fb3094e77b075ed377e3aea2742ee200f539f23ca32f3bdb" + ) + , + ( h "226b77f53f09752fe52543993a083adc6bae0d2f675c86b73baae0c8db96358a" + , Just $ h "078d093d567ff1613b72c30d8309e0ec012381ea192f7a93e963262aad66f6f6" + ) + , + ( h "b29a2f91d314f997d3c3ad93327133af0ad2c79d5199527975fe489df1950e33" + , Just $ h "916c8ad0e1e297ca13a619ac4737dbb1176906b4a99ac3ae067e8431acc4480b" + ) + , + ( h "85d5cffe3da5a35fdbec0fa1b4eb57bd8c92b85561b0b3fc8872951ab6a55d6c" + , Just $ h "ed3578a1d14f171fe7c6fc7ed53d1eea475b6d730730324ad2f4464977f7d5a9" + ) + , + ( h "55d57e16949716e7f5fc63a8ef354d1ef97d1a3d2bec129eadd04857f4590b40" + , Just $ h "f7e438ea989e621ebba3aa76c9e348ffb874c2dd0e186737d28375d29a422070" + ) + , + ( h "bed261244c16861c1f659fa6d227197c1a46020b49ce91b5c1ca03cbb481ede5" + , Just $ h "aab83e789a01fe6e43d1829c5f83028857c8dbef236d4fcdda419d0dc0daabcd" + ) + , + ( h 
"e9960a386180eac79aec104b19bc6771b3e13ecd3f5b575abe95a61fdbceb4bc" + , Just $ h "f2eb6981bd2175d6c3075ba7a16b42ec26a1b45ad7ec3e680982dd3d49a9c589" + ) + , + ( h "40b10f1e768a277a9904ef37062a7c252540fe0769d40b6198d436fe531b20d7" + , Just $ h "5c25203b3bd4c53085838bd9c0d0b54e875418607b2eb2aed827fde0a37f514a" + ) + , + ( h "3b62e7a3de1ff1e65cdd8a3e25a9a6f67ac6dd7c307e8ab849abc081b7525305" + , Just $ h "71a9e0106a0f9ac1d1ddd1030e4ae0c759db4cb89a2b6a8d09d55305babb9a3f" + ) + , + ( h "a2d4fcf3114a14421217ac60fa7017bf672bd07305990890f61229658df06a43" + , Just $ h "d0c9d5955d3bf653e18d6bbaaadb270dbd1361d07e6357d900f71c6dc5462495" + ) + , + ( h "0754e2d638a84f21d75ddcb420a130daf65d5c77a01d09170db20890c7ed77cc" + , Just $ h "c79af79e8a1241e307ef0e2fa1ba26550c74bc88a4d1021edba7cfde6a05404f" + ) + , + ( h "792657ad951643532f7f54e6959cc411c275d9d0da58ebb7806bffab317ab380" + , Just $ h "c39c5214cb175a84b781a5410d448b45e2bae83b709818c104092b840027bfc9" + ) + , + ( h "bb2dc8c2dbabbde0760c5ebbdeed79ae51bf5ddea928aabf09f3e90897f766b4" + , Just $ h "dee11ca6e07fd05ae2a88b7b456aeee6f7f7c3aed12da3cfebd608c7592ffe18" + ) + , + ( h "d433e4cf72f52c0dd6b4b21667f832666939c1bff6e490c3d49c59a6f9b74fa5" + , Just $ h "5356cb86c098f60ab215bc27f6ed117a6a8230396eede511dd316a60988e5cdd" + ) + , + ( h "207009b3864a7eb56a104849124f62d951681c3f2734fd7d18e32b925f67edf7" + , Just $ h "36dd51672e0f493d652dce00db972612ce71bb0ceb94bafa0aabe03c3790f523" + ) + , + ( h "d88a5f04c09db96a119935933498e3bd3755d6afb8f7d4950681eaf48cc5af90" + , Just $ h "f81c52e438dbca30a4c5dfccee4d6a52132bc76fa359b1eead7a1fa4a8034c33" + ) + , + ( h "c14a2559161e3220474547c076c695e47b016969f311c1fe261543559434791e" + , Just $ h "4f0c09a6010a81bbc339f9c14e54363e6acf474f34a8efea95058a317a264f56" + ) + , + ( h "9b7ab13e4914b18d367f43fddac9176ad76b7636fe2d4801296350f33c3f48c3" + , Just $ h "e02e5bcef8d77dad828ac34944e26efdaca87e0173224ecd9b8da6cc18c7f983" + ) + , + ( h "5747199a27c43af57f046ad2889618ebd25b9f4852975d76c24bbf470c89684b" 
+ , Just $ h "61ed79c3a3db7dd49f4ad7d29e80660990229e234772f102b60ec1150131323f" + ) + , + ( h "b5540e531b01e046ab31a08fcc2b12caef7cdcc26827f74d83c01d519515367c" + , Just $ h "763b2b2d0efe984eb2c78914c59d97e5265f449e499c033b0a5631552d5fe5d0" + ) + , + ( h "b356bb25b250897264e323ad092a55dec818c83a6ca58c09b914a13d42dd7872" + , Just $ h "7b6dc7151ba3cf9098ac423893117ecd3ca86827a4c8131c057d7185b24a6291" + ) + , + ( h "25a93ffe30baac803d533a1903392aab7b16e55d9fadef030a4546e19bf4d304" + , Just $ h "55235c896c9260701c1510f801db5b09097a699ae90118c2e9cc12ba6c938546" + ) + , + ( h "2923acec9b597ea7e05ab7b3fb8b7a7a64e9e6645f51abe6ff711690b9a5525f" + , Just $ h "b82e7519e74cda942d07d491f0e6e5e0d9ae39e654642bfa761f14d2e4f6de0a" + ) + , + ( h "99ec13567ffdd621264c3c032631794bb775141dadc13552589df196a8b4c317" + , Just $ h "b0bdd31bb69f7c3574a1f00c4fb5dc1ddb29b33e546fbc21d3ce55097d118a4f" + ) + , + ( h "d7772a21237c973646e6a48d6834ed37a2ee34dc8571ca7f49ec643c25fe4262" + , Just $ h "4a348a0041d68f21bf84f68e55b871bb633e83c7dbf5f196e07b260a19a5b35b" + ) + , + ( h "dc48b4f42362874ba9bc4e49161642723d77f68adafc4ee83a3730e8e0d31232" + , Just $ h "36a95d256387bb294ab76a3fb111c52866d78a33c1ef86e41177c5561919bd33" + ) + , + ( h "f37544332ae5b8b16d4a8be5d9cf1ca03db0a60a304b5a99d61ccd2a819839d8" + , Just $ h "5f8f6101e0a8d5dea8270c538d1f9de85c4c97567606930da0b64644bca0cfcf" + ) + , + ( h "5555742d42a434a29e302871b1272f4c3a3b7fa3f173dbd9952bd69765cc5818" + , Just $ h "b56ce8699fa67f89c9f80f12a7ccf4756dd7cfcd4707831721bdd726839311e1" + ) + , + ( h "facf8917f8edd18936ee6d2fb9537a829f56e855418f2ec0bfeb105fba046bad" + , Just $ h "83bb2c6e3bfb0c19bde21c4d87448761f2c4baa6a7fe85d3c16927728970ad1d" + ) + , + ( h "187f2edc0ea75bb4f1606791846f4102b4ea629022f65a84cc78b5d248697004" + , Just $ h "8530bfeb6fc046301679b0cb00e51696822e32144e9274fa28678c7d8191990a" + ) + , + ( h "7ac0a18d8c7bd87debca0a722eaf7d8284492f3bfdfb33b97166b2b47b6ee84a" + , Just $ h 
"006861faf0f2e1d7a6c2bb90293af0f4fca6b500c98588b1a54126cd9cbdff7a" + ) + , + ( h "b47b034660bb6c0a8ecab804b3e7aaf79fd708d2fd5d9d4fb48c7a4e8b9b8c7a" + , Just $ h "525042f960d14a139176a7e077ee9364d57856ed522601698aedee91fd521f3e" + ) + , + ( h "05d5e7742083340c28c4967c790ffe2600975020289591fe09859685f97a989c" + , Just $ h "c0bc9485b93d0d70002e3b4d64a097a49dc8271912b24c8fdc11269e58d91634" + ) + , + ( h "3adc9af909705709208e34dc8e6872afac9c6e6086d68807fb7ded2c0eb8d817" + , Just $ h "1b0b21a95bc96a4f3149906392f0167c8d2169b73f001f88fb032ddd90dad0ba" + ) + , + ( h "07a58c4270ddf0dc077d0d5b5894e5176a5340694d04bd5bdcb770bc56ed7c9d" + , Just $ h "5bc16e5b7896a4f6649a83684a38c5344cf74d2ce90a4701020b19a5ab7e6bdd" + ) + , + ( h "e5227e8fad58796dafd27be9df740b515460c96ec4a5f9b70c39956bc60bcc63" + , Just $ h "5201d00f4aafcbe3033a56e1eb8ef829975b643f628b836072fa82655706a771" + ) + , + ( h "ec86a8264dd7b22a05a3508400299dc26e82fb94cbcdbca85b1c5fdfc7d67f19" + , Just $ h "3f53d350dc3fdca3527d392c893cea179fdb1f69e177fe4ab6aa0663ea69e4a1" + ) + , + ( h "8d9aacd65fa8e3c56ec1631f129db0b58de064070c5955e4579ebc63f31fd41c" + , Just $ h "cf72f434d3acc82b46da3b40dcbcc02873d5e3a834d2aa5bf99dee8d5d0cb781" + ) + , + ( h "64c00f9a1f86bd41f31a6c35e14ad0afcf6b63e87dd1ace04bacba8b247045a0" + , Just $ h "f02ba2f5c39aef4d6a09c4fad61ed73f67a3bdc0bb7d4f0c4dbd3d81053f45be" + ) + , + ( h "48e47a6ab40080f67fd6fc704b9747691820b074d96e8a26cf2f903cf9c1fb9e" + , Just $ h "e4d5b224b34cdb9a61a14ef0ff57967f4cac4fd9b25b7a31bb7555008300bbfe" + ) + , + ( h "e2cc39789cbd0628417bf61255b4c3d35e154ed66830454eeb3be74cdfbd635b" + , Just $ h "d388530f1dfe26bdd0f36783fade4c425934c35fc6e0eca877d8e6eebfa3ffa7" + ) + , + ( h "e1096958905bea03b7f141ce79828fade135d9fa21b13aa83dd6068dbd1f2940" + , Just $ h "810253d5874a92d6ab5d2d037ab631083608d8a74e281269ffc11497a142d272" + ) + , + ( h "482b3b0cc0eb9431bafc865f50949c37ad801f8d26cd013f1a41ef23096c8a87" + , Just $ h "d40fd0570f503f42878cce67326b70efe249a32a7a48a4152eff8210ad958a4d" 
+ ) + , + ( h "6ba5279f7a5f9ffb1d6a879d99cdfc6896c937f6e1968c7c87eec3f3867fafcb" + , Just $ h "3216e810dab2835d5a80edc56f80c811a4cfa99cc3d35235c20c4b2f860b138d" + ) + , + ( h "26113f1597adf1a4b213b8d9ebd590a4c8888928468a401427ae914679b165b8" + , Just $ h "ff290e11c60c275eb7c319b1ae3fb1775cc637dae60306c338b819811aa046a1" + ) + , + ( h "6f9c2e144e9ab7f32930f764000bf328ce44387b36073c63ac56aac9a325cd17" + , Just $ h "b1e29becfd744ed9d0bd2aca90f6077601f71752fa40496fc8c6dde019033428" + ) + , + ( h "03c2f50d3a628c5f858bb433d9ed06cc305fa1dece8090378e7d4966431da5d4" + , Just $ h "f1f35d6f034e99115c0c0b9109176b0a13fe333cc4b7a89a1c6eb9cd067ef384" + ) + , + ( h "673da8e423e9c07d622719f7321c96384b153ec10cb21f6031aae5b829bbe9fa" + , Just $ h "c9145eb22ffd0b2719196e48e270bb8119dd69f4edb5ebb486310f9964903009" + ) + , + ( h "73240b5b5f4a8b79674f112fb0bb8941d9b1312d3e2506385335cad0f11efd1d" + , Just $ h "ea36d3790f71ccca3acd0d0c7d47d86f7c225deee47b2b164ae12e86deee59fc" + ) + , + ( h "1e477b27bf2465f2a41ea34be26c36083c6bd32ff397a7f273eb3527d448a327" + , Just $ h "898abd947bc7b3d593e6f233410f1c7cd743ef7fb1fd7186d0106200fb7471d2" + ) + , + ( h "18c2b02f23c4f7715d81d465febce159fd95463689cbce2a590c639a5376def2" + , Just $ h "a279863086a8b052776bfa72fcdf05b278c3ec750d7b281553de5022013a6da5" + ) + , + ( h "369f6b1c16f709824957b20a5c2366b388f89e8065f7cf5367a8a71446c503fa" + , Just $ h "04b905e4e0580c4b1f7d32e119725d0234ea08dc3c63a69cbd231278a074c0f8" + ) + , + ( h "b63cb49967eb4d7dd0e778b6848740568010c677f8485b4037abbce9677acdec" + , Just $ h "f78e675552c21ff81474eb923e4897326dcb09d4907dcc3813bec4ebf7e9fd11" + ) + , + ( h "67154ff1e4448fcdc96941f2def69239dba8b485670b55081bb7c4f59c14a833" + , Just $ h "cc72df817bf62c6c79e2b7e0ab93731624f59ef4e741d6c62bf771bf81358fb9" + ) + , + ( h "4decd75f7921c6e4a4f25dfd4493e4f8abe82043d9a93ae149122427997001c3" + , Just $ h "174633dbd275f3341a0a6529ff9eae2541600031c15de792e32519f43ada6c75" + ) + , + ( h 
"2f3cf11ae8bb88f1524485ec164d98a46fbedcb63c1666dcf1091c531c175c83" + , Just $ h "6b0337fbf3f3b273a2b907add3eded9c7cba0b6a3b3512e19fbb678f3af15b94" + ) + , + ( h "cad6f3f6c536fecd7d501d6a7c5794022e9d9c5740c477af2d44d263b2736be9" + , Just $ h "7e53c08bc401ffcf5e99a3a9149ffd8bbd7cd8c2882305d6b5200d47d899455f" + ) + , + ( h "43895ed2778175b1778bf3e78184e69ae5cd2afafc85828a5f1324dfc7d5a15d" + , Just $ h "de6b802c703173097e09919a3703a1959ac139647d3f0b22489c536c844a4c48" + ) + , + ( h "caf4ce3994826a99ce47cbeb8df9dc483717e40c5f5e66a2e9225561a42b9f28" + , Just $ h "3b7e51e0c4d91f166d98c3e286bb33cd1518fd4d4f33f61b2e9a4f3193254ad3" + ) + , + ( h "94a3ec764a370c495b51dc199db15b74c2e90da6eb63c582752f2d79f5be052b" + , Just $ h "b708e9100b948ea1148149ae31e4db0e9d3870878332c52231609fe159c5a91b" + ) + , + ( h "dc332995587aca3b3f22ae14dace171ceb20049e72531a3290f499dac0aeb4f1" + , Just $ h "8e4ee6d86018cbb73400cca5d4cc9ba6e729774aab0c39e9dfa4e19c8be79a60" + ) + , + ( h "199c9516752d309418b201df06f80c005beb8b5d127e25f2642aa1ffb47a1682" + , Just $ h "2f72b2c88abd4ff8bca65247ac0f24ec0ed6d8a3735a209031b97be9a5f473b6" + ) + , + ( h "97719dd81f307d22046f476774a8de51dbdacf98991a14c90a9d9f9b26f91ade" + , Just $ h "a89008680e3211064698e5657a40c52d00ebb09cd5754ae8eb84e74ed1026399" + ) + , + ( h "6d30a738d80edd83a2a847f5e3de30f194aa4f00c79f379c22eac5b71b3e77bb" + , Just $ h "ebf807ae25a56aae3fc0e615a3e039d84d0d3a476f3d7315f951b9dec9d3e09c" + ) + , + ( h "446ed4e16254f71bfa18aeaf6e3d1d48aba798f303eaf67b9b24a09d07125bdd" + , Just $ h "864b699fb74933dfb4e60a9780b3969b34d3eefee44b7a9483674fb570905f28" + ) + , + ( h "b9de18a83aa05fedfc7c6441360e53434495f8f85258b7d76d0dc1d0f3527340" + , Just $ h "2aaa2260a118885d90c92dc19504f8e336f4b0943a210f1d76444ad6c4263328" + ) + , + ( h "0e7a47d467abe88a3050da3094974105319268aa1ff5dfbed5bdab01be257ebe" + , Just $ h "ef8e130ff5384b5ae5497ab282baee365851c21eb00a9fbcc481b1e79781a5ed" + ) + , + ( h "37d301bbc3909f77584e1f94f429712f86a853df559fc1a6de2accdca2b47929" 
+ , Just $ h "7f4dced6f315ccf5ff06f887e603559a92f20fa734c58e54f678bf05ca2a37f7" + ) + , + ( h "d961d70a6d8d7604a0e0a8ee02daff83fa57b0dfea0d7c83e313cb296c82a8ce" + , Just $ h "d2e4e8404e634b7587cbd66f379a548095cdee1c6bdb8363ccac27e0769fc43a" + ) + , + ( h "76c28320b320e49ff71eafb042bafb2669e8528cd67fc54105463ba205f3a30d" + , Just $ h "d9e4c3f40e51413b102eb91a30626d7015ec9d631ca9e9cfe7cd8e7f6448796b" + ) + , + ( h "4b18397df6fd250937612de9d056977b4cb258b665bd76e8edefbe63139f51dc" + , Just $ h "31f59669edad855be2f7d3f5a4fd770276facb93860ad37b1501b7f86927f325" + ) + , + ( h "ff80679821298091b612fa015f03534d4bd0dd12762b7bc6a5fe9a04e17d0376" + , Just $ h "38191afa9a4e72882254d2ffd7b955cd2960e826fa8547f4e752691e187fefda" + ) + , + ( h "ec4ddde4c65766999511202a132f4bb84413fdd7d650cdec9602931ce247f605" + , Just $ h "8be94d17f43b39d05deb20c8f0ba119f85a521ccff43d5e1ae53a0d291f685ad" + ) + , + ( h "58ce92d3b4fb36c9eed7b9927f86c97ee09992e626fd3f4b27843a47e628f389" + , Just $ h "c2baa5c43f6239ae300adf51510b49bd3c6a1a500dd928004d613f5994eb9d98" + ) + , + ( h "7550f6bb6370cc5ff0e1a8047f2848a6fc128429d77181c9c001c51bba300a4b" + , Just $ h "7f1cd4a027d5ca159210abe5b0999c1d3f704478cdb049a3ac4d2d35feda86b0" + ) + , + ( h "971a199590b2947d4ea321417d94ecdd1ae916de5fbac4177434c6207f27f724" + , Just $ h "a1af9729a611009dfa2a03570eb7f49125c9b639e25b92bb65732c8ef52ded20" + ) + , + ( h "fd048d1b7c04d4483b0c2d4087414943ddf20a3687f1d6850d7d54c77df1b666" + , Just $ h "178837a717272adfffc62743b7ba293f2adea3972993fdb37092ddce31ec0262" + ) + , + ( h "cf61ba379f5e73a8c984755f7a453c004ff97d441746ee965c7cc15011bd23af" + , Just $ h "7a40b9f79b6be9bd7531fb2e7e4f7e9043d1c929dc42be44fd8a7d6e9dc8fe43" + ) + , + ( h "e526594c7c6f8d6ada1057dc039efff7608beb51c6361dfbdfd0fe4f0d9d96bc" + , Just $ h "faf5c3e88eecd5803e035a7662d3ed80bfc0169b222c1604780777c65696e91e" + ) + , + ( h "ce60936eec63fc67d936ecd7557356fec45a428ea84f08fe7d069074adc55f9f" + , Just $ h 
"e7d19c622ae632140a2caba0c7e8251b603537f7c7834e5d11e22bc14f979526" + ) + , + ( h "8653c01a8b27f3fcd0ea9f7733abbc26a282891358f618f473edbbc63c75cfc0" + , Just $ h "a3beb278b7812468d4a24f4c2c1fefe29c029730279667c6d8b3f916d2796d71" + ) + , + ( h "759d59906aadaaca303542a840276c2f84ead4feaf20b907edd5bc4488e80caf" + , Just $ h "47dd8601702436c07d97d412c4a2a5e7c3e297d70d4b2050a7dd20950cfdb3b7" + ) + , + ( h "d81b0817b79c0cce58dbce7ebf6bb5e97c2276ae2700e91ab7cc6638e9a20974" + , Just $ h "a51ea8a8d3bcbbe9cd7d47d7fc2999fdc78ef9979866064b12c329c9bdcfab4a" + ) + , + ( h "362ab16f8f6dae57a5fdd2eb2162a06e9a458f12b6498a1c76730d142a8311d2" + , Just $ h "505eddf8ae35eaa8bb9891b28e2f128ed55e2f3dd546095f6e9a74ef43a6d0e7" + ) + , + ( h "c79dd844ca2ecf7ae0360df71d94db4d31817b3ee6ec42cd9504dddfbbc512c9" + , Just $ h "613a52acab27fcc676c8453fd44c43455b8cd6894c3c7a61ddd434add5b1aa21" + ) + , + ( h "69e1cd9a7089ddca1439ad69a57bb8e7da1a52832fef4dbecac7df21af536713" + , Just $ h "ec9608754cdb1642072a72c8fb9df4d9dd628fd80a3b3e814a8d0a389c4e80e3" + ) + , + ( h "78f016becc4acfb3587f7be6e919ea9d3522e24afe3d4d93cd538f08782d5553" + , Just $ h "f3fd482d51fbe62a8a1270a6b8fb29e5e8290167ea0cfd99112a53dc9786d027" + ) + , + ( h "ec9d536c9fd710ededf76289d8775b8032fab756b0b4bc3885683bb4e9fc6b89" + , Just $ h "23d8a42ea749c6037b08ae71f367ba3adc8981c28d79ce02c7bd83ac86f24a6e" + ) + , + ( h "9ea8cd211c4a04e479158b586761a980e2d487d3614911ac637703f9eac33a96" + , Just $ h "0cc0cf36e51b149974352b4edd018c4a31f428f265388b8b077008790b1ffc27" + ) + , + ( h "0ca5fbac3900b8a405547a55f8ed37cc41b1a146eb1cc39a88c75baafc3d01ed" + , Just $ h "20271b28986d0441619b385ecbd77057fdc6224d25d3f652ea908527cefe1fcf" + ) + , + ( h "32d01fc39fa5d11f3bb2c9c40e33b1c7948939b9574d09e10f62357eece32562" + , Just $ h "56fa7fcc4787c5234287dfaeb68468b2f9b02787c6050e6b9ec3374860d46110" + ) + , + ( h "758be3bbd36ebe7663a1e17d988b33a06bb9d6925e6d7b42126cb79e614ae602" + , Just $ h "cd6d9337cfbbf727e7874ba46a3904a3a1534bcedf2e8555587ab553d53cb9e4" 
+ ) + , + ( h "463df09ece8751f82bf46317f2d6044ac917132f4739c9afad4988cb38ed9acc" + , Just $ h "e458fd60b169646bc61ca537f895f6a8c11b5bf87f41a11562ff0329697ee423" + ) + , + ( h "af619a1328be0e3f46563a6541d51306c51c93aa72638aead2cfd2b29e085008" + , Just $ h "180ae9f818497a774761b9d79a06298b9bad8d41f543b19fe72a4a72bde82689" + ) + , + ( h "34d6373c6b67efe58bcabec91f98efd536f95f5b5c4736b3662ce8d74ef2a599" + , Just $ h "e25732563acf2df25d1b577026de9790ed51d66c69e0cc853758b19c76dcddaf" + ) + , + ( h "b66d2e0e41875b15f806a77ee7e2be2be3a9059d50c052fbafbd6d954e696bd3" + , Just $ h "322f5106f64de8c4358b8f5a7f98eccc92f8016e5a3a65e56a213339081e6b7a" + ) + , + ( h "19ee0fcef4702c95925a8f796608daa2caf71a34ed0f365c2935572dc7f64b23" + , Just $ h "82c995d9fa0ac0e21de43464c4e94ddb0e628dd9fe338a0130ee0f4cdae21a6c" + ) + , + ( h "3fa6c68522b82e954c49a72dc6e62b622b30d8145ab9f62fc1c0f3c11eeb0480" + , Just $ h "0a6f6d759a0303ed98a138c13cf5397b0e920b9c27c9de26823caf7c462b7624" + ) + , + ( h "280d91c2be8ff5855f423591ebc9ea16a8a739461ead6587cd49b2d3f60a5b67" + , Just $ h "0c144dc99389a9fb9b4113dbc5460bb40605d8718ea02fbcf8ef7af3770f11a4" + ) + , + ( h "0e92f4745fccfa48aeeeb4360c86dcd3168a741c246ef4d83a56d6fc98aa6973" + , Just $ h "7e788a59ebb36b51a15a10e13cfbcce90466bd96d12b71fcfca7e38f60e3958c" + ) + , + ( h "0334b68ac99cba39dcde31867fe40c8b117bf9b5440a2d52d7f877a067ef4a06" + , Just $ h "444167eca9aeff7271c3c395b7781d2695b3590753db450d5a33c2abe4d4ffd9" + ) + , + ( h "433f13cd01fbd9525d926bcbfc6956a2b962f036efb4c23d5900e0e4ac89b0fe" + , Just $ h "0e551405b2c33fa22b1dde0e28951b84b8a983ce50bfeab5b93cc6433c5df8dc" + ) + , + ( h "c2d15eaa13298253d71bac49e913ee6bb608f6959bf50e7e87ba9ec505fb7175" + , Just $ h "d9137fa21e5aaf9399f402b4102c9e028561cbdf150a9bea912c124a2d6cb0bf" + ) + , + ( h "e56aab0d4f3e36a2f2a5f6f1b8b5c552a488b226932297f14f19a1d6f3a99ead" + , Just $ h "764a271e1a586dcb317cc28c4bd13937e5b41a61ff265bdcea752b990dc2eb6d" + ) + , + ( h 
"23c31e4b33753ce531609a5c79f1d2f9f0bea276100436d5dd6d14a7634b26da" + , Just $ h "6d62f399ec0c7b303fd49b868f22cf208f9cb9fc55a037985ee57b6a1fd7e5c8" + ) + , + ( h "c792cb6e823de8963136d2c0044ef19caccb7e91e5db302d992979be39b16687" + , Just $ h "3ce57b662d5d504bfcafb1a80167981227a24d6e8f2e95b8e6b3a015a5e3bd0e" + ) + , + ( h "5d5f88749ed877385d76e7fc027eac83cec3853718ba6d808b9521c074a2f7d3" + , Just $ h "2b625a20a1fa9c3a29be6deb1f8b9a72ac0b00e7cbf922c9e36be8e1978b5b4c" + ) + , + ( h "ea119b5a0301f702e0fdd1db758762bbcdc7b143693c7d6c7049e110ea3bef6b" + , Just $ h "ca462e294d7592f3eb1f1814d4e5de2e090848138feed1eafd98c83164e9c464" + ) + , + ( h "b7e8ec89e90f9da42ced8ce4f088ee412f7dc66db00aecfb526dab8319b1f630" + , Just $ h "c6cb7ab881b1b9637db8015c2b519dafa6172368cc74ceb1cc492d7fce0e8010" + ) + , + ( h "df22b8d3c84762c4af4bd600be1ec864c15a6fab3a100243161e6ba65576655a" + , Just $ h "c08998295321c382dc822d51ee237249b55f1ab5bbd5263ace25e71dfba816b6" + ) + , + ( h "cbbe0c2b0f4f3a8a252a1a05810135bc2e39ea96fe5688f7f599b37f7147f2b6" + , Just $ h "efd2fbd1081844191a7baf3b0ea5c3cc8b76298cad59e77b18ff35388abf40e2" + ) + , + ( h "2bb80f9f63f0fc38c9544ad816af62e1c4528614b7a37673f23ad67a82a78e3d" + , Just $ h "df61e92b54cb9fefea5728b58967b699332ce1cca62afa9f12bc84b1287e7cbc" + ) + , + ( h "ecb3faa5ef60d3b4d52750892eab69929820e2d8e6f15e42115c797e67b84b6d" + , Just $ h "f6fdca06421b5b2cfc266996226ce9cf8c4ad5186c447b06f0f67ee00e5096e2" + ) + , -- Testnet + (h "8f8602837f7c6f8b8867dd1cbc1842cf51a27eaed2c70ef48325d00f8efb320f", Nothing) + , + ( h "7e8b2df7730261d8831fe0206591570734d353c15d5266b7fe77097090d33cbd" + , Just $ h "507534faacb02cc212f121065531ac406c64a05a77b1a126b73e0342c4fc429e" + ) + , + ( h "2e5f7988afe31f5abf2010904f38243dcf8f8b9a08cb040a2d346aed0166bb83" + , Just $ h "f28025573fdfaeee5cbbe0f25a67d204fbf3ca72851ca9981ed7609843323901" + ) + , + ( h "e8ab8ad7890733532363514eda520471ecc4a35d33aef0cded877ad85337bbac" + , Just $ h 
"c3fd1619452b46ae7b5139d49e5044e70e720c5b221f62cab6eb0fe62de881b7" + ) + , + ( h "af45205bdd89afca48497b9585e75e28049c9f681c119df90abd39d44eba8567" + , Just $ h "998ca7f3b4463a7db4c046d7a3c489565135727f695b6bd2ef57334e681ad1e9" + ) + , + ( h "faedf3b89b231a7a5b0042e04ae937ba97c8cb74308ea635ae33ff79c7ae45ff" + , Just $ h "c8a8a33302a41bc4a3f28c607044282ed218d1dbe6544c132455b1c4041b45bc" + ) + , + ( h "b4b749ffbb304488f031795ee94fe4d99a4853665291790acd4cb48c2d628076" + , Just $ h "3257a0a7f019f5540ac298bc89be33823426f173fe9a72e907af9b553d63f8ce" + ) + , + ( h "a1c1b18067e46f554b53502227a3f1e308ee4c6b976be7f28a7b1d37bc72a277" + , Just $ h "972d15301b7b2f9c065751168cf6bf56085a2189119a7b9f623c9ce92eddf5b9" + ) + , + ( h "f0d4ec6221bb405f31e41e7e8a130682c435db402f1dcf1f91274297eaded764" + , Just $ h "a738b386194cb533d910abe6b68eac2ff0c262bb660e3ba84348d683d71db6f0" + ) + , + ( h "acd499973d23e3718f5075436bbdc51b2b9c4bfbd01b697f3345a2fa8c9c3216" + , Just $ h "f8d42727b12f620989738fd06dc7f91b7fa2749337830cd5024ac2d8acc018f3" + ) + , + ( h "dbbd7ae6572286ad98789fb72aa232dd3f5bfc33327d1b38281489a60c6e0e5f" + , Just $ h "a77c7d1673ba6ee36b56c421abae3b0fc9f1a56a9b4757be2af691c8032824e6" + ) + , + ( h "a0ed09fe9d804422475ad412be47ff0695ba95e5ad2b2d6509b7246f40ea3d75" + , Just $ h "c046263bae1772e52fda25ad77502b1aa8d241c74dfa7ad66fd09b9a769fca48" + ) + , + ( h "e2d8bebeabe86fd5f48d9152b876a6a926df7e53e7f6f765b1241e3322e505d8" + , Just $ h "6711d1718717a4c441c7011367a1cff2622321220204c10e6673a42b3393339c" + ) + , + ( h "ce191f524f3a049f7a340b2441e4c4c300b2ce3251ff7cbc049d22a3c208f419" + , Just $ h "aab3374d14f985c2021289fa425438040a8fcdb89c68dfbeafb8bfcccbcf2989" + ) + , + ( h "e3925c9473284dde61d4e046eeb6f6cd0cce16082e9c90c35032e074cb0b6bfb" + , Just $ h "3cb50a1267998b686a7b1dc63df4e2209cf377cf3426bcb7110d27710e2da8b8" + ) + , + ( h "8475954d831504481c01c362cf5773f08a3215df80f59817ec2eb63d163c8c56" + , Just $ h "2d22a450ba28cd5ca5af9cd835ed3c8f8d9d5e2647ecdc5eb61f755ae3a070c3" 
+ ) + , + ( h "31423c21fac504bd0178ad746c0a7eab80628a77da373ad8e4ce3d3a2f354cd1" + , Just $ h "6a833674f2a5a3048662256025bd5a51694304056a8e0e2035b13135cf17db4c" + ) + , + ( h "75b8f4f227d744e86f6de2155ee52c7be4775a8a406034867c354b08b44e057e" + , Just $ h "d56c17678711ea6ba864b542a44f2c45126556a2e690f9b1c7e9315bb76cfbf3" + ) + , + ( h "fa5a76bfea0ae6729cd02198560b4f62fad559b5bf446d71eef59ecb02326f95" + , Just $ h "b1e5ac20522ab4fda7688216614df13437da795622b4c2640fa06a9644dc91e6" + ) + , + ( h "afc6ebc214a88520b577d13e95c8fefee6b9937188ef691945f5f19b5c9b80ed" + , Just $ h "fd0be5ccc586904915349966fdff0b41888d6b0d2fc13cb57a9bbb60b8f006c6" + ) + , + ( h "74cd9c480fc4e3038752cc8e164d980e4c4deb1f5362149925bef517d75f612e" + , Just $ h "f9da8e956ee835896dbcc645028aa092185b1ce384a28ebbb7e6fd4d5ef2a573" + ) + , + ( h "a9c8247eaaaab364a0acac4cb51624250170ea84b2c35a6a30f53c56ed301748" + , Just $ h "8394f0b596615f1e450fad484462457dfcf3c26ffb066549c0c97ee1acb7020f" + ) + , + ( h "d71da4e4e9be2ffc5d99a83dc258b736f0026e403e9230dacf7e1b61bcd22cde" + , Just $ h "289deca47d063dd530ec05a0d3d1432300b69dcbcd0968a7f889df4d5ff6a1ef" + ) + , + ( h "9884291f1a329a3ff4b71b2d347500aa73f2f0be4803c1634a2107b4b184c596" + , Just $ h "9d94ed32f19091284bdadc97305e5b34d34d75c06233aceab051671e54f1ac59" + ) + , + ( h "0f2a0ed4ca5d15301d3e7a43f8a1db22b6099892caca4d365e8cede0ddb499c4" + , Just $ h "73c0faf6d7876e18f872cfec410b64824891951b248fda0cdab931da0540816b" + ) + , + ( h "1dc4282ffc9b24a7d7de1cc48038e5cecf9ff06a2d21a7cedac0fe2858aa6273" + , Just $ h "b4d24bfbda6fd0526768824a9353b2b8eea488d0f07e8510a1fa8250961513fb" + ) + , + ( h "f6d414c97ea19971f9d3a4ef3ed319996bacbb10bf4401cdbd0e55a958be42d4" + , Just $ h "e3a6ed573d36e53be220d100227f8be4b4464265eb018e3a97ffa1ac40ec8630" + ) + , + ( h "caefd5e28f31c47d1c2589c0a7a42ed3fbf8ae56ae578ca9685be05b717cda8a" + , Just $ h "e687bb1244ad0fc1dc30ee0f3524820496d4af38d32e56dbe22d60b41ed37d82" + ) + , + ( h 
"7f9105e3763eabbbe240385608ced7082cc64e02cbdffe0a956fbccbce018636" + , Just $ h "1ba487a2555a91def90d7ba7f33404ca7f06b654913a409d2ad81949edb34e9d" + ) + , + ( h "b6f8f472df2aa5368e112969989ffef3a4340de37d71db01319aacac4b984b9a" + , Just $ h "6e03aef8d2c944c6bc9e686e240768c0ef2b451f47617143a2a32e53ad9964ac" + ) + ] diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger.hs index 7be453bcc5..edcc41efc7 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger.hs @@ -3,13 +3,13 @@ module Ouroboros.Consensus.Byron.Ledger (module X) where -- Modules Aux, Conversions and Orphans are not re-exported, as they deal with -- wrapping cardano-ledger-byron; this should not be needed elsewhere in consensus. -import Ouroboros.Consensus.Byron.Ledger.Block as X -import Ouroboros.Consensus.Byron.Ledger.Config as X -import Ouroboros.Consensus.Byron.Ledger.Forge as X -import Ouroboros.Consensus.Byron.Ledger.HeaderValidation as X -import Ouroboros.Consensus.Byron.Ledger.Integrity as X -import Ouroboros.Consensus.Byron.Ledger.Ledger as X -import Ouroboros.Consensus.Byron.Ledger.Mempool as X -import Ouroboros.Consensus.Byron.Ledger.NetworkProtocolVersion as X -import Ouroboros.Consensus.Byron.Ledger.PBFT as X -import Ouroboros.Consensus.Byron.Ledger.Serialisation as X +import Ouroboros.Consensus.Byron.Ledger.Block as X +import Ouroboros.Consensus.Byron.Ledger.Config as X +import Ouroboros.Consensus.Byron.Ledger.Forge as X +import Ouroboros.Consensus.Byron.Ledger.HeaderValidation as X +import Ouroboros.Consensus.Byron.Ledger.Integrity as X +import Ouroboros.Consensus.Byron.Ledger.Ledger as X +import Ouroboros.Consensus.Byron.Ledger.Mempool as X +import Ouroboros.Consensus.Byron.Ledger.NetworkProtocolVersion as X +import Ouroboros.Consensus.Byron.Ledger.PBFT as X +import 
Ouroboros.Consensus.Byron.Ledger.Serialisation as X
diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Block.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Block.hs
index cc8a966959..5b37fa7e9c 100644
--- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Block.hs
+++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Block.hs
@@ -8,23 +8,27 @@
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE TypeFamilies #-}
-module Ouroboros.Consensus.Byron.Ledger.Block (
- -- * Hash
+module Ouroboros.Consensus.Byron.Ledger.Block
+ ( -- * Hash
ByronHash (..)
, mkByronHash
+
-- * Block
, ByronBlock (..)
, annotateByronBlock
, mkByronBlock
+
-- * Header
, Header (..)
, mkBoundaryByronHeader
, mkByronHeader
, mkRegularByronHeader
+
-- * Dealing with EBBs
, byronBlockIsEBB
, byronHeaderIsEBB
, knownEBBs
+
-- * Low-level API
, UnsizedHeader (..)
, joinSizeHint
@@ -32,45 +36,47 @@ module Ouroboros.Consensus.Byron.Ledger.Block (
, splitSizeHint
) where
-import qualified Cardano.Chain.Block as CC
-import qualified Cardano.Chain.Byron.API as CC
-import qualified Cardano.Chain.Slotting as CC
-import qualified Cardano.Crypto.Hashing as CC
-import Cardano.Ledger.Binary
-import qualified Crypto.Hash as Crypto
-import Data.ByteString (ByteString)
-import qualified Data.ByteString as Strict
-import Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import Data.Proxy
-import Data.Typeable
-import GHC.Generics (Generic)
-import NoThunks.Class (NoThunks (..))
-import Ouroboros.Consensus.Block
-import qualified Ouroboros.Consensus.Byron.EBBs as EBBs
-import Ouroboros.Consensus.Byron.Ledger.Conversions
-import Ouroboros.Consensus.Byron.Ledger.Orphans ()
-import Ouroboros.Consensus.Util (ShowProxy (..))
-import Ouroboros.Consensus.Util.Condense
-import Ouroboros.Network.SizeInBytes (SizeInBytes)
+import Cardano.Chain.Block qualified as CC
+import Cardano.Chain.Byron.API qualified as CC
+import Cardano.Chain.Slotting qualified as CC
+import Cardano.Crypto.Hashing qualified as CC
+import Cardano.Ledger.Binary
+import Crypto.Hash qualified as Crypto
+import Data.ByteString (ByteString)
+import Data.ByteString qualified as Strict
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Proxy
+import Data.Typeable
+import GHC.Generics (Generic)
+import NoThunks.Class (NoThunks (..))
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Byron.EBBs qualified as EBBs
+import Ouroboros.Consensus.Byron.Ledger.Conversions
+import Ouroboros.Consensus.Byron.Ledger.Orphans ()
+import Ouroboros.Consensus.Util (ShowProxy (..))
+import Ouroboros.Consensus.Util.Condense
+import Ouroboros.Network.SizeInBytes (SizeInBytes)
{-------------------------------------------------------------------------------
Header hash
-------------------------------------------------------------------------------}
-newtype ByronHash = ByronHash { unByronHash :: CC.HeaderHash }
- deriving stock (Eq, Ord, Show, Generic)
+newtype ByronHash = ByronHash {unByronHash :: CC.HeaderHash}
+ deriving stock (Eq, Ord, Show, Generic)
deriving newtype (EncCBOR, DecCBOR, Condense)
- deriving anyclass (NoThunks)
+ deriving anyclass NoThunks
mkByronHash :: CC.ABlockOrBoundaryHdr ByteString -> ByronHash
mkByronHash = ByronHash . CC.abobHdrHash
instance ConvertRawHash ByronBlock where
- toShortRawHash _ = CC.abstractHashToShort . unByronHash
+ toShortRawHash _ = CC.abstractHashToShort . unByronHash
fromShortRawHash _ = ByronHash . CC.unsafeAbstractHashFromShort
- hashSize _ = fromIntegral $ Crypto.hashDigestSize
- (error "proxy" :: Crypto.Blake2b_256)
+ hashSize _ =
+ fromIntegral $
+ Crypto.hashDigestSize
+ (error "proxy" :: Crypto.Blake2b_256)
{-------------------------------------------------------------------------------
Block
@@ -83,26 +89,27 @@ instance ConvertRawHash ByronBlock where
-- * We cache the slot number as this is not readily available for EBBs.
-- Having it cached allows us to e.g. give a 'HasHeader' instance.
-- * We cache the hash as this is expensive to compute and we need it often.
-data ByronBlock = ByronBlock {
- byronBlockRaw :: !(CC.ABlockOrBoundary ByteString)
- , byronBlockSlotNo :: !SlotNo
- , byronBlockHash :: !ByronHash
- }
+data ByronBlock = ByronBlock
+ { byronBlockRaw :: !(CC.ABlockOrBoundary ByteString)
+ , byronBlockSlotNo :: !SlotNo
+ , byronBlockHash :: !ByronHash
+ }
deriving (Eq, Show)
instance Condense ByronBlock where
condense = condense . byronBlockRaw
-instance ShowProxy ByronBlock where
+instance ShowProxy ByronBlock
mkByronBlock :: CC.EpochSlots -> CC.ABlockOrBoundary ByteString -> ByronBlock
-mkByronBlock epochSlots blk = ByronBlock {
- byronBlockRaw = blk
+mkByronBlock epochSlots blk =
+ ByronBlock
+ { byronBlockRaw = blk
, byronBlockSlotNo = fromByronSlotNo $ CC.abobHdrSlotNo epochSlots hdr
- , byronBlockHash = mkByronHash hdr
+ , byronBlockHash = mkByronHash hdr
}
- where
- hdr = CC.abobHdrFromBlock blk
+ where
+ hdr = CC.abobHdrFromBlock blk
-- | Construct Byron block from unannotated 'CC.Block'
--
@@ -118,78 +125,83 @@ annotateByronBlock es = mkByronBlock es . CC.ABOBBlock . CC.reAnnotateBlock es
-- | Byron header
--
-- See 'ByronBlock' for comments on why we cache certain values.
-data instance Header ByronBlock = ByronHeader {
- byronHeaderRaw :: !(CC.ABlockOrBoundaryHdr ByteString)
- , byronHeaderSlotNo :: !SlotNo
- , byronHeaderHash :: !ByronHash
-
- -- | Hint about the block size
- --
- -- This is used only for the block fetch client. When this value is
- -- wrong, block fetch might make suboptimal decisions, but it shouldn't
- -- /break/ anything
, byronHeaderBlockSizeHint :: !SizeInBytes
- }
+data instance Header ByronBlock = ByronHeader
+ { byronHeaderRaw :: !(CC.ABlockOrBoundaryHdr ByteString)
+ , byronHeaderSlotNo :: !SlotNo
+ , byronHeaderHash :: !ByronHash
+ , byronHeaderBlockSizeHint :: !SizeInBytes
+ -- ^ Hint about the block size
+ --
+ -- This is used only for the block fetch client. When this value is
+ -- wrong, block fetch might make suboptimal decisions, but it shouldn't
+ -- /break/ anything
+ }
deriving (Eq, Show, Generic)
instance GetHeader ByronBlock where
- getHeader ByronBlock{..} = ByronHeader {
- byronHeaderRaw = CC.abobHdrFromBlock byronBlockRaw
- , byronHeaderSlotNo = byronBlockSlotNo
- , byronHeaderHash = byronBlockHash
+ getHeader ByronBlock{..} =
+ ByronHeader
+ { byronHeaderRaw = CC.abobHdrFromBlock byronBlockRaw
+ , byronHeaderSlotNo = byronBlockSlotNo
+ , byronHeaderHash = byronBlockHash
, byronHeaderBlockSizeHint = (+ overhead) . fromIntegral . Strict.length $
-- For some reason regular blocks lack a 'Decoded' instance
case byronBlockRaw of
- CC.ABOBBlock blk -> CC.blockAnnotation blk
- CC.ABOBBoundary blk -> recoverBytes blk
+ CC.ABOBBlock blk -> CC.blockAnnotation blk
+ CC.ABOBBoundary blk -> recoverBytes blk
}
- where
- -- The maximum block size is 65536, the CBOR-in-CBOR tag for this block
- -- is:
- --
- -- > D8 18 # tag(24)
- -- > 1A 00010000 # bytes(65536)
- --
- -- Which is 7 bytes, enough for up to 4294967295 bytes.
- overhead = 7 {- CBOR-in-CBOR -} + 2 {- EBB tag -}
+ where
+ -- The maximum block size is 65536, the CBOR-in-CBOR tag for this block
+ -- is:
+ --
+ -- > D8 18 # tag(24)
+ -- > 1A 00010000 # bytes(65536)
+ --
+ -- Which is 7 bytes, enough for up to 4294967295 bytes.
+ overhead = 7 {- CBOR-in-CBOR -} + 2 {- EBB tag -}
-- Check if a block matches its header
--
-- Note that we cannot check this for an EBB, as the EBB header doesn't
-- store a hash of the EBB body.
blockMatchesHeader hdr blk =
- CC.abobMatchesBody (byronHeaderRaw hdr) (byronBlockRaw blk)
+ CC.abobMatchesBody (byronHeaderRaw hdr) (byronBlockRaw blk)
headerIsEBB hdr = case byronHeaderRaw hdr of
- CC.ABOBBlockHdr _ -> Nothing
- CC.ABOBBoundaryHdr bhdr -> Just
- . EpochNo
- . CC.boundaryEpoch
- $ bhdr
+ CC.ABOBBlockHdr _ -> Nothing
+ CC.ABOBBoundaryHdr bhdr ->
+ Just
+ . EpochNo
+ . CC.boundaryEpoch
+ $ bhdr
instance Condense (Header ByronBlock) where
condense = CC.aBlockOrBoundaryHdr condense condense . byronHeaderRaw
-instance ShowProxy (Header ByronBlock) where
+instance ShowProxy (Header ByronBlock)
instance NoThunks (Header ByronBlock) where
showTypeOf _ = show $ typeRep (Proxy @(Header ByronBlock))
-mkByronHeader :: CC.EpochSlots
- -> CC.ABlockOrBoundaryHdr ByteString
- -> SizeInBytes -- ^ Block size hint
- -> Header ByronBlock
+mkByronHeader ::
+ CC.EpochSlots ->
+ CC.ABlockOrBoundaryHdr ByteString ->
+ -- | Block size hint
+ SizeInBytes ->
+ Header ByronBlock
mkByronHeader epochSlots = joinSizeHint . mkUnsizedHeader epochSlots
-mkRegularByronHeader :: CC.AHeader ByteString
- -> SizeInBytes
- -> Header ByronBlock
+mkRegularByronHeader ::
+ CC.AHeader ByteString ->
+ SizeInBytes ->
+ Header ByronBlock
mkRegularByronHeader = joinSizeHint . mkRegularUnsizedHeader
-mkBoundaryByronHeader :: SlotNo
- -> CC.ABoundaryHeader ByteString
- -> SizeInBytes
- -> Header ByronBlock
+mkBoundaryByronHeader ::
+ SlotNo ->
+ CC.ABoundaryHeader ByteString ->
+ SizeInBytes ->
+ Header ByronBlock
mkBoundaryByronHeader slotNo = joinSizeHint .
mkBoundaryUnsizedHeader slotNo
{-------------------------------------------------------------------------------
@@ -205,9 +217,10 @@ instance HasHeader ByronBlock where
getHeaderFields = getBlockHeaderFields
instance HasHeader (Header ByronBlock) where
- getHeaderFields hdr = HeaderFields {
- headerFieldHash = byronHeaderHash hdr
- , headerFieldSlot = byronHeaderSlotNo hdr
+ getHeaderFields hdr =
+ HeaderFields
+ { headerFieldHash = byronHeaderHash hdr
+ , headerFieldSlot = byronHeaderSlotNo hdr
, headerFieldBlockNo = fromByronBlockNo . CC.abobHdrChainDifficulty $ byronHeaderRaw hdr
}
@@ -216,8 +229,8 @@ instance GetPrevHash ByronBlock where
fromByronPrevHash :: Maybe CC.HeaderHash -> ChainHash ByronBlock
fromByronPrevHash = \case
- Nothing -> GenesisHash
- Just h -> BlockHash (ByronHash h)
+ Nothing -> GenesisHash
+ Just h -> BlockHash (ByronHash h)
{-------------------------------------------------------------------------------
Dealing with EBBs
@@ -225,21 +238,22 @@ fromByronPrevHash = \case
byronHeaderIsEBB :: Header ByronBlock -> IsEBB
byronHeaderIsEBB = go . byronHeaderRaw
- where
- go :: CC.ABlockOrBoundaryHdr a -> IsEBB
- go (CC.ABOBBlockHdr _) = IsNotEBB
- go (CC.ABOBBoundaryHdr _) = IsEBB
+ where
+ go :: CC.ABlockOrBoundaryHdr a -> IsEBB
+ go (CC.ABOBBlockHdr _) = IsNotEBB
+ go (CC.ABOBBoundaryHdr _) = IsEBB
byronBlockIsEBB :: ByronBlock -> IsEBB
byronBlockIsEBB = byronHeaderIsEBB . getHeader
knownEBBs :: Map (HeaderHash ByronBlock) (ChainHash ByronBlock)
knownEBBs = Map.fromList $ map aux EBBs.knownEBBs
- where
- aux :: (CC.HeaderHash, Maybe CC.HeaderHash)
- -> (ByronHash, ChainHash ByronBlock)
- aux (ebb, Nothing) = (ByronHash ebb, GenesisHash)
- aux (ebb, Just prev) = (ByronHash ebb, BlockHash (ByronHash prev))
+ where
+ aux ::
+ (CC.HeaderHash, Maybe CC.HeaderHash) ->
+ (ByronHash, ChainHash ByronBlock)
+ aux (ebb, Nothing) = (ByronHash ebb, GenesisHash)
+ aux (ebb, Just prev) = (ByronHash ebb, BlockHash (ByronHash prev))
{-------------------------------------------------------------------------------
Unsized header
@@ -248,59 +262,65 @@ knownEBBs = Map.fromList $ map aux EBBs.knownEBBs
-- | Header without a size hint
--
-- Defined in order to support backwards compatible binary encodings.
-data UnsizedHeader = UnsizedHeader {
- unsizedHeaderRaw :: !(CC.ABlockOrBoundaryHdr ByteString)
- , unsizedHeaderSlotNo :: !SlotNo
- , unsizedHeaderHash :: !ByronHash
- }
-
-mkUnsizedHeader :: CC.EpochSlots
- -> CC.ABlockOrBoundaryHdr ByteString
- -> UnsizedHeader
+data UnsizedHeader = UnsizedHeader
+ { unsizedHeaderRaw :: !(CC.ABlockOrBoundaryHdr ByteString)
+ , unsizedHeaderSlotNo :: !SlotNo
+ , unsizedHeaderHash :: !ByronHash
+ }
+
+mkUnsizedHeader ::
+ CC.EpochSlots ->
+ CC.ABlockOrBoundaryHdr ByteString ->
+ UnsizedHeader
mkUnsizedHeader epochSlots = \case
- CC.ABOBBlockHdr hdr -> mkRegularUnsizedHeader hdr
- CC.ABOBBoundaryHdr hdr -> mkBoundaryUnsizedHeader slotNo hdr
- where
- slotNo = fromByronSlotNo $
- CC.boundaryBlockSlot epochSlots (CC.boundaryEpoch hdr)
+ CC.ABOBBlockHdr hdr -> mkRegularUnsizedHeader hdr
+ CC.ABOBBoundaryHdr hdr -> mkBoundaryUnsizedHeader slotNo hdr
+ where
+ slotNo =
+ fromByronSlotNo $
+ CC.boundaryBlockSlot epochSlots (CC.boundaryEpoch hdr)
mkRegularUnsizedHeader :: CC.AHeader ByteString -> UnsizedHeader
-mkRegularUnsizedHeader hdr = UnsizedHeader {
- unsizedHeaderRaw = hdr'
+mkRegularUnsizedHeader hdr =
+ UnsizedHeader
+ { unsizedHeaderRaw = hdr'
, unsizedHeaderSlotNo = fromByronSlotNo $ CC.headerSlot hdr
- , unsizedHeaderHash = mkByronHash hdr'
+ , unsizedHeaderHash = mkByronHash hdr'
}
- where
- hdr' :: CC.ABlockOrBoundaryHdr ByteString
- hdr' = CC.ABOBBlockHdr hdr
+ where
+ hdr' :: CC.ABlockOrBoundaryHdr ByteString
+ hdr' = CC.ABOBBlockHdr hdr
-- | For a boundary header, we must be told the slot
-mkBoundaryUnsizedHeader :: SlotNo
- -> CC.ABoundaryHeader ByteString
- -> UnsizedHeader
-mkBoundaryUnsizedHeader slotNo hdr = UnsizedHeader {
- unsizedHeaderRaw = hdr'
+mkBoundaryUnsizedHeader ::
+ SlotNo ->
+ CC.ABoundaryHeader ByteString ->
+ UnsizedHeader
+mkBoundaryUnsizedHeader slotNo hdr =
+ UnsizedHeader
+ { unsizedHeaderRaw = hdr'
, unsizedHeaderSlotNo = slotNo
- , unsizedHeaderHash = mkByronHash hdr'
+ , unsizedHeaderHash = mkByronHash hdr'
}
- where
- hdr' :: CC.ABlockOrBoundaryHdr ByteString
- hdr' = CC.ABOBBoundaryHdr hdr
+ where
+ hdr' :: CC.ABlockOrBoundaryHdr ByteString
+ hdr' = CC.ABOBBoundaryHdr hdr
splitSizeHint :: Header ByronBlock -> (UnsizedHeader, SizeInBytes)
-splitSizeHint ByronHeader{..} = (
- UnsizedHeader {
- unsizedHeaderRaw = byronHeaderRaw
- , unsizedHeaderSlotNo = byronHeaderSlotNo
- , unsizedHeaderHash = byronHeaderHash
- }
- , byronHeaderBlockSizeHint
- )
+splitSizeHint ByronHeader{..} =
+ ( UnsizedHeader
+ { unsizedHeaderRaw = byronHeaderRaw
+ , unsizedHeaderSlotNo = byronHeaderSlotNo
+ , unsizedHeaderHash = byronHeaderHash
+ }
+ , byronHeaderBlockSizeHint
+ )
joinSizeHint :: UnsizedHeader -> SizeInBytes -> Header ByronBlock
-joinSizeHint UnsizedHeader{..} size = ByronHeader {
- byronHeaderRaw = unsizedHeaderRaw
- , byronHeaderSlotNo = unsizedHeaderSlotNo
- , byronHeaderHash = unsizedHeaderHash
+joinSizeHint UnsizedHeader{..} size =
+ ByronHeader
+ { byronHeaderRaw = unsizedHeaderRaw
+ , byronHeaderSlotNo = unsizedHeaderSlotNo
+ , byronHeaderHash = unsizedHeaderHash
, byronHeaderBlockSizeHint = size
}
diff --git
a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Config.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Config.hs
index 3a26f1337c..68e7048c1d 100644
--- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Config.hs
+++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Config.hs
@@ -2,56 +2,56 @@
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeFamilies #-}
-
{-# OPTIONS_GHC -Wno-orphans #-}
-module Ouroboros.Consensus.Byron.Ledger.Config (
- -- * Block config
+module Ouroboros.Consensus.Byron.Ledger.Config
+ ( -- * Block config
BlockConfig (..)
, byronEpochSlots
, byronGenesisHash
, byronProtocolMagic
, byronProtocolMagicId
+
-- * Codec config
, CodecConfig (..)
, mkByronCodecConfig
+
-- * Storage config
, StorageConfig (..)
+
-- * Compact genesis config
, compactGenesisConfig
) where
-import qualified Cardano.Chain.Genesis as CC.Genesis
-import qualified Cardano.Chain.Slotting as CC.Slot
-import qualified Cardano.Chain.Update as CC.Update
-import qualified Cardano.Crypto as Crypto
-import qualified Data.Map.Strict as Map
-import GHC.Generics (Generic)
-import NoThunks.Class (NoThunks (..))
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.Byron.Ledger.Block
+import Cardano.Chain.Genesis qualified as CC.Genesis
+import Cardano.Chain.Slotting qualified as CC.Slot
+import Cardano.Chain.Update qualified as CC.Update
+import Cardano.Crypto qualified as Crypto
+import Data.Map.Strict qualified as Map
+import GHC.Generics (Generic)
+import NoThunks.Class (NoThunks (..))
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Byron.Ledger.Block
{-------------------------------------------------------------------------------
Block config
-------------------------------------------------------------------------------}
-- | Extended configuration we need for Byron
-data instance BlockConfig ByronBlock = ByronConfig {
- -- | Genesis configuration
- byronGenesisConfig :: !CC.Genesis.Config
-
- -- | Node protocol version
- --
- -- NOTE: This is /static/ for the node, and may not correspond to what's
- -- on the chain. It's the protocol supported by /this/ node; to change it,
- -- you'd have to change the software.
- , byronProtocolVersion :: !CC.Update.ProtocolVersion
-
- -- | Node software version
- --
- -- Like 'byronProtocolVersion', this is independent from the chain.
- , byronSoftwareVersion :: !CC.Update.SoftwareVersion
- }
+data instance BlockConfig ByronBlock = ByronConfig
+ { byronGenesisConfig :: !CC.Genesis.Config
+ -- ^ Genesis configuration
, byronProtocolVersion :: !CC.Update.ProtocolVersion
+ -- ^ Node protocol version
+ --
+ -- NOTE: This is /static/ for the node, and may not correspond to what's
+ -- on the chain. It's the protocol supported by /this/ node; to change it,
+ -- you'd have to change the software.
, byronSoftwareVersion :: !CC.Update.SoftwareVersion
+ -- ^ Node software version
+ --
+ -- Like 'byronProtocolVersion', this is independent from the chain.
+ }
deriving (Generic, NoThunks)
byronGenesisHash :: BlockConfig ByronBlock -> CC.Genesis.GenesisHash
@@ -70,25 +70,26 @@ byronEpochSlots = CC.Genesis.configEpochSlots . byronGenesisConfig
Codec config
-------------------------------------------------------------------------------}
-newtype instance CodecConfig ByronBlock = ByronCodecConfig {
- getByronEpochSlots :: CC.Slot.EpochSlots
- }
+newtype instance CodecConfig ByronBlock = ByronCodecConfig
+ { getByronEpochSlots :: CC.Slot.EpochSlots
+ }
deriving (Generic, NoThunks)
mkByronCodecConfig :: CC.Genesis.Config -> CodecConfig ByronBlock
-mkByronCodecConfig cfg = ByronCodecConfig {
- getByronEpochSlots = CC.Genesis.configEpochSlots cfg
+mkByronCodecConfig cfg =
+ ByronCodecConfig
+ { getByronEpochSlots = CC.Genesis.configEpochSlots cfg
}
{-------------------------------------------------------------------------------
Storage config
-------------------------------------------------------------------------------}
-newtype instance StorageConfig ByronBlock = ByronStorageConfig {
- -- | We need the 'BlockConfig' to be able to forge an EBB in
- -- 'nodeInitChainDB'.
- getByronBlockConfig :: BlockConfig ByronBlock
- }
+newtype instance StorageConfig ByronBlock = ByronStorageConfig
+ { getByronBlockConfig :: BlockConfig ByronBlock
+ -- ^ We need the 'BlockConfig' to be able to forge an EBB in
+ -- 'nodeInitChainDB'.
+ }
deriving (Generic, NoThunks)
{-------------------------------------------------------------------------------
@@ -103,8 +104,10 @@ newtype instance StorageConfig ByronBlock = ByronStorageConfig {
-- keep Byron's genesis config in memory (even in later eras), this can save us
-- a bit of memory.
compactGenesisConfig :: CC.Genesis.Config -> CC.Genesis.Config
-compactGenesisConfig cfg = cfg {
- CC.Genesis.configGenesisData = (CC.Genesis.configGenesisData cfg) {
- CC.Genesis.gdAvvmDistr = CC.Genesis.GenesisAvvmBalances Map.empty
- }
+compactGenesisConfig cfg =
+ cfg
+ { CC.Genesis.configGenesisData =
+ (CC.Genesis.configGenesisData cfg)
+ { CC.Genesis.gdAvvmDistr = CC.Genesis.GenesisAvvmBalances Map.empty
+ }
}
diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Conversions.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Conversions.hs
index e357968864..559c6bf61b 100644
--- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Conversions.hs
+++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Conversions.hs
@@ -1,33 +1,35 @@
-module Ouroboros.Consensus.Byron.Ledger.Conversions (
- -- * From @cardano-ledger-byron@ to @ouroboros-consensus@
+module Ouroboros.Consensus.Byron.Ledger.Conversions
+ ( -- * From @cardano-ledger-byron@ to @ouroboros-consensus@
fromByronBlockCount
, fromByronBlockNo
, fromByronEpochSlots
, fromByronSlotLength
, fromByronSlotNo
+
-- * From @ouroboros-consensus@ to @cardano-ledger-byron@
, toByronBlockCount
, toByronSlotLength
, toByronSlotNo
+
-- * Extract info from the genesis config
, genesisNumCoreNodes
, genesisSecurityParam
, genesisSlotLength
) where
-import qualified Cardano.Chain.Common as CC
-import qualified Cardano.Chain.Genesis as Genesis
-import qualified Cardano.Chain.Slotting as CC
-import qualified Cardano.Chain.Update as CC
-import Cardano.Ledger.BaseTypes (nonZeroOr, unNonZero)
-import Data.Coerce
-import qualified Data.Set as Set
-import Numeric.Natural (Natural)
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.BlockchainTime
-import Ouroboros.Consensus.Byron.Ledger.Orphans ()
-import Ouroboros.Consensus.Config.SecurityParam
-import Ouroboros.Consensus.Node.ProtocolInfo
+import Cardano.Chain.Common
qualified as CC
+import Cardano.Chain.Genesis qualified as Genesis
+import Cardano.Chain.Slotting qualified as CC
+import Cardano.Chain.Update qualified as CC
+import Cardano.Ledger.BaseTypes (nonZeroOr, unNonZero)
+import Data.Coerce
+import Data.Set qualified as Set
+import Numeric.Natural (Natural)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.BlockchainTime
+import Ouroboros.Consensus.Byron.Ledger.Orphans ()
+import Ouroboros.Consensus.Config.SecurityParam
+import Ouroboros.Consensus.Node.ProtocolInfo
{-------------------------------------------------------------------------------
From @cardano-ledger-byron@ to @ouroboros-consensus@
@@ -46,8 +48,9 @@ fromByronEpochSlots :: CC.EpochSlots -> EpochSize
fromByronEpochSlots (CC.EpochSlots n) = EpochSize n
fromByronSlotLength :: Natural -> SlotLength
-fromByronSlotLength = slotLengthFromMillisec
- . (fromIntegral :: Natural -> Integer)
+fromByronSlotLength =
+ slotLengthFromMillisec
+ . (fromIntegral :: Natural -> Integer)
{-------------------------------------------------------------------------------
From @ouroboros-consensus@ to @cardano-ledger-byron@
@@ -60,8 +63,9 @@ toByronBlockCount :: SecurityParam -> CC.BlockCount
toByronBlockCount (SecurityParam k) = CC.BlockCount $ unNonZero k
toByronSlotLength :: SlotLength -> Natural
-toByronSlotLength = (fromIntegral :: Integer -> Natural)
- . slotLengthToMillisec
+toByronSlotLength =
+ (fromIntegral :: Integer -> Natural)
+ . slotLengthToMillisec
{-------------------------------------------------------------------------------
Extract info from genesis
@@ -69,13 +73,13 @@ toByronSlotLength = (fromIntegral :: Integer -> Natural)
genesisSecurityParam :: Genesis.Config -> SecurityParam
genesisSecurityParam =
- fromByronBlockCount
+ fromByronBlockCount
. Genesis.gdK
. Genesis.configGenesisData
genesisNumCoreNodes :: Genesis.Config -> NumCoreNodes
genesisNumCoreNodes =
- NumCoreNodes
+ NumCoreNodes
. fromIntegral
. Set.size
. Genesis.unGenesisKeyHashes
@@ -84,5 +88,5 @@ genesisNumCoreNodes =
genesisSlotLength :: Genesis.Config -> Natural
genesisSlotLength =
- CC.ppSlotDuration
+ CC.ppSlotDuration
. Genesis.configProtocolParameters
diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Forge.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Forge.hs
index 67078f8975..00340e21e6 100644
--- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Forge.hs
+++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Forge.hs
@@ -2,95 +2,109 @@
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeFamilies #-}
-
{-# OPTIONS_GHC -Wno-orphans #-}
-module Ouroboros.Consensus.Byron.Ledger.Forge (
- forgeByronBlock
+module Ouroboros.Consensus.Byron.Ledger.Forge
+ ( forgeByronBlock
, forgeRegularBlock
+
-- * For testing purposes
, forgeEBB
) where
-import qualified Cardano.Chain.Block as CC.Block
-import qualified Cardano.Chain.Byron.API as CC
-import qualified Cardano.Chain.Common as CC.Common
-import qualified Cardano.Chain.Delegation as CC.Delegation
-import qualified Cardano.Chain.Genesis as CC.Genesis
-import qualified Cardano.Chain.Slotting as CC.Slot
-import qualified Cardano.Chain.Ssc as CC.Ssc
-import qualified Cardano.Chain.Update as CC.Update
-import qualified Cardano.Chain.UTxO as CC.UTxO
-import qualified Cardano.Crypto as Crypto
-import Cardano.Crypto.DSIGN
-import Cardano.Ledger.Binary (Annotated (..), byronProtVer,
- reAnnotate)
-import Control.Monad (void)
-import Data.ByteString (ByteString)
-import Data.Coerce (coerce)
-import GHC.Stack
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.Byron.Crypto.DSIGN
-import Ouroboros.Consensus.Byron.Ledger.Block
-import Ouroboros.Consensus.Byron.Ledger.Config
-import Ouroboros.Consensus.Byron.Ledger.Mempool
-import Ouroboros.Consensus.Byron.Ledger.PBFT
-import Ouroboros.Consensus.Byron.Protocol
-import
Ouroboros.Consensus.Config
-import Ouroboros.Consensus.Ledger.Abstract
-import Ouroboros.Consensus.Ledger.SupportsMempool
- (LedgerSupportsMempool (..), txForgetValidated)
-import Ouroboros.Consensus.Protocol.PBFT
+import Cardano.Chain.Block qualified as CC.Block
+import Cardano.Chain.Byron.API qualified as CC
+import Cardano.Chain.Common qualified as CC.Common
+import Cardano.Chain.Delegation qualified as CC.Delegation
+import Cardano.Chain.Genesis qualified as CC.Genesis
+import Cardano.Chain.Slotting qualified as CC.Slot
+import Cardano.Chain.Ssc qualified as CC.Ssc
+import Cardano.Chain.UTxO qualified as CC.UTxO
+import Cardano.Chain.Update qualified as CC.Update
+import Cardano.Crypto qualified as Crypto
+import Cardano.Crypto.DSIGN
+import Cardano.Ledger.Binary
+ ( Annotated (..)
+ , byronProtVer
+ , reAnnotate
+ )
+import Control.Monad (void)
+import Data.ByteString (ByteString)
+import Data.Coerce (coerce)
+import GHC.Stack
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Byron.Crypto.DSIGN
+import Ouroboros.Consensus.Byron.Ledger.Block
+import Ouroboros.Consensus.Byron.Ledger.Config
+import Ouroboros.Consensus.Byron.Ledger.Mempool
+import Ouroboros.Consensus.Byron.Ledger.PBFT
+import Ouroboros.Consensus.Byron.Protocol
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.SupportsMempool
+ ( LedgerSupportsMempool (..)
+ , txForgetValidated
+ )
+import Ouroboros.Consensus.Protocol.PBFT
forgeByronBlock ::
- HasCallStack
- => TopLevelConfig ByronBlock
- -> BlockNo -- ^ Current block number
- -> SlotNo -- ^ Current slot number
- -> TickedLedgerState ByronBlock mk -- ^ Current ledger
- -> [Validated (GenTx ByronBlock)] -- ^ Txs to include
- -> PBftIsLeader PBftByronCrypto -- ^ Leader proof ('IsLeader')
- -> ByronBlock
+ HasCallStack =>
+ TopLevelConfig ByronBlock ->
+ -- | Current block number
+ BlockNo ->
+ -- | Current slot number
+ SlotNo ->
+ -- | Current ledger
+ TickedLedgerState ByronBlock mk ->
+ -- | Txs to include
+ [Validated (GenTx ByronBlock)] ->
+ -- | Leader proof ('IsLeader')
+ PBftIsLeader PBftByronCrypto ->
+ ByronBlock
forgeByronBlock cfg = forgeRegularBlock (configBlock cfg)
forgeEBB ::
- BlockConfig ByronBlock
- -> SlotNo -- ^ Current slot
- -> BlockNo -- ^ Current block number
- -> ChainHash ByronBlock -- ^ Previous hash
- -> ByronBlock
+ BlockConfig ByronBlock ->
+ -- | Current slot
+ SlotNo ->
+ -- | Current block number
+ BlockNo ->
+ -- | Previous hash
+ ChainHash ByronBlock ->
+ ByronBlock
forgeEBB cfg curSlot curNo prevHash =
- mkByronBlock epochSlots
- . CC.Block.ABOBBoundary
- . CC.reAnnotateBoundary (byronProtocolMagicId cfg)
- $ boundaryBlock
- where
- epochSlots :: CC.Slot.EpochSlots
- epochSlots = byronEpochSlots cfg
-
- prevHeaderHash :: Either CC.Genesis.GenesisHash CC.Block.HeaderHash
- prevHeaderHash = case prevHash of
- GenesisHash -> Left (byronGenesisHash cfg)
- BlockHash (ByronHash h) -> Right h
-
- boundaryBlock :: CC.Block.ABoundaryBlock ()
- boundaryBlock =
- CC.Block.ABoundaryBlock {
- CC.Block.boundaryBlockLength = 0 -- Used only in testing anyway
+ mkByronBlock epochSlots
+ . CC.Block.ABOBBoundary
+ . CC.reAnnotateBoundary (byronProtocolMagicId cfg)
+ $ boundaryBlock
+ where
+ epochSlots :: CC.Slot.EpochSlots
+ epochSlots = byronEpochSlots cfg
+
+ prevHeaderHash :: Either CC.Genesis.GenesisHash CC.Block.HeaderHash
+ prevHeaderHash = case prevHash of
+ GenesisHash -> Left (byronGenesisHash cfg)
+ BlockHash (ByronHash h) -> Right h
+
+ boundaryBlock :: CC.Block.ABoundaryBlock ()
+ boundaryBlock =
+ CC.Block.ABoundaryBlock
+ { CC.Block.boundaryBlockLength = 0 -- Used only in testing anyway
, CC.Block.boundaryHeader
- , CC.Block.boundaryBody = CC.Block.ABoundaryBody ()
- , CC.Block.boundaryAnnotation = ()
+ , CC.Block.boundaryBody = CC.Block.ABoundaryBody ()
+ , CC.Block.boundaryAnnotation = ()
}
- boundaryHeader :: CC.Block.ABoundaryHeader ()
- boundaryHeader = CC.Block.mkABoundaryHeader
+ boundaryHeader :: CC.Block.ABoundaryHeader ()
+ boundaryHeader =
+ CC.Block.mkABoundaryHeader
prevHeaderHash
epoch
(coerce curNo)
()
- where
- CC.Slot.EpochNumber epoch =
- CC.Slot.slotNumberEpoch epochSlots (coerce curSlot)
+ where
+ CC.Slot.EpochNumber epoch =
+ CC.Slot.slotNumberEpoch epochSlots (coerce curSlot)
-- | Internal helper data type for 'forgeRegularBlock' used to accumulate the
-- different kinds of block payloads that can be found in a given collection
@@ -98,145 +112,160 @@ forgeEBB cfg curSlot curNo prevHash =
--
-- n.b. This data type is not to be exposed from this module.
data BlockPayloads = BlockPayloads
- { bpTxs :: ![CC.UTxO.TxAux]
- , bpDlgCerts :: ![CC.Delegation.Certificate]
- , bpUpVotes :: ![CC.Update.Vote]
+ { bpTxs :: ![CC.UTxO.TxAux]
+ , bpDlgCerts :: ![CC.Delegation.Certificate]
+ , bpUpVotes :: ![CC.Update.Vote]
, bpUpProposal :: !(Maybe CC.Update.Proposal)
- -- ^ 'Just' if there is at least one 'CC.Update.Proposal' in a list of
- -- Byron 'GenTx's and 'Nothing' if there are none. It is worth noting that
- -- if we encounter multiple 'CC.Update.Proposal's in a collection of
- -- 'GenTx's, this value will be that of the last 'CC.Update.Proposal'
- -- encountered.
+ -- ^ 'Just' if there is at least one 'CC.Update.Proposal' in a list of
+ -- Byron 'GenTx's and 'Nothing' if there are none. It is worth noting that
+ -- if we encounter multiple 'CC.Update.Proposal's in a collection of
+ -- 'GenTx's, this value will be that of the last 'CC.Update.Proposal'
+ -- encountered.
}
initBlockPayloads :: BlockPayloads
-initBlockPayloads = BlockPayloads
- { bpTxs = []
- , bpDlgCerts = []
- , bpUpVotes = []
- , bpUpProposal = Nothing
- }
+initBlockPayloads =
+ BlockPayloads
+ { bpTxs = []
+ , bpDlgCerts = []
+ , bpUpVotes = []
+ , bpUpProposal = Nothing
+ }
forgeRegularBlock ::
- HasCallStack
- => BlockConfig ByronBlock
- -> BlockNo -- ^ Current block number
- -> SlotNo -- ^ Current slot number
- -> TickedLedgerState ByronBlock mk -- ^ Current ledger
- -> [Validated (GenTx ByronBlock)] -- ^ Txs to include
- -> PBftIsLeader PBftByronCrypto -- ^ Leader proof ('IsLeader')
- -> ByronBlock
+ HasCallStack =>
+ BlockConfig ByronBlock ->
+ -- | Current block number
+ BlockNo ->
+ -- | Current slot number
+ SlotNo ->
+ -- | Current ledger
+ TickedLedgerState ByronBlock mk ->
+ -- | Txs to include
+ [Validated (GenTx ByronBlock)] ->
+ -- | Leader proof ('IsLeader')
+ PBftIsLeader PBftByronCrypto ->
+ ByronBlock
forgeRegularBlock cfg bno sno st txs isLeader =
- forge $
- forgePBftFields
- (mkByronContextDSIGN cfg)
- isLeader
- (reAnnotate byronProtVer $ Annotated toSign ())
- where
- epochSlots :: CC.Slot.EpochSlots
- epochSlots = byronEpochSlots cfg
-
- blockPayloads :: BlockPayloads
- blockPayloads =
- foldr
- extendBlockPayloads
- initBlockPayloads
- txs
-
- txPayload :: CC.UTxO.TxPayload
- txPayload = CC.UTxO.mkTxPayload (bpTxs blockPayloads)
-
- dlgPayload :: CC.Delegation.Payload
- dlgPayload = CC.Delegation.unsafePayload (bpDlgCerts blockPayloads)
-
- updatePayload :: CC.Update.Payload
- updatePayload = CC.Update.payload (bpUpProposal blockPayloads)
- (bpUpVotes blockPayloads)
-
- extendBlockPayloads :: Validated (GenTx ByronBlock)
- -> BlockPayloads
- -> BlockPayloads
- extendBlockPayloads validatedGenTx bp@BlockPayloads{bpTxs, bpDlgCerts, bpUpVotes} =
- -- TODO: We should try to use 'recoverProof' (and other variants of
- -- 'recoverBytes') here as opposed to throwing away the serializations
- -- (the 'ByteString' annotations) with 'void' as we're currently doing.
- case txForgetValidated validatedGenTx of
- ByronTx _ tx -> bp { bpTxs = void tx : bpTxs }
- ByronDlg _ cert -> bp { bpDlgCerts = void cert : bpDlgCerts }
- -- TODO: We should throw an error if we encounter multiple
- -- 'ByronUpdateProposal's (i.e. if 'bpUpProposal' 'isJust').
- -- This is because we should only be provided with a maximum of one
- -- 'ByronUpdateProposal' to include in a block payload.
- ByronUpdateProposal _ prop -> bp { bpUpProposal = Just (void prop) }
- ByronUpdateVote _ vote -> bp { bpUpVotes = void vote : bpUpVotes }
-
- body :: CC.Block.Body
- body = CC.Block.ABody {
- CC.Block.bodyTxPayload = txPayload
- , CC.Block.bodySscPayload = CC.Ssc.SscPayload
- , CC.Block.bodyDlgPayload = dlgPayload
- , CC.Block.bodyUpdatePayload = updatePayload
- }
+ forge $
+ forgePBftFields
+ (mkByronContextDSIGN cfg)
+ isLeader
+ (reAnnotate byronProtVer $ Annotated toSign ())
+ where
+ epochSlots :: CC.Slot.EpochSlots
+ epochSlots = byronEpochSlots cfg
+
+ blockPayloads :: BlockPayloads
+ blockPayloads =
+ foldr
+ extendBlockPayloads
+ initBlockPayloads
+ txs
+
+ txPayload :: CC.UTxO.TxPayload
+ txPayload = CC.UTxO.mkTxPayload (bpTxs blockPayloads)
+
+ dlgPayload :: CC.Delegation.Payload
+ dlgPayload = CC.Delegation.unsafePayload (bpDlgCerts blockPayloads)
+
+ updatePayload :: CC.Update.Payload
+ updatePayload =
+ CC.Update.payload
+ (bpUpProposal blockPayloads)
+ (bpUpVotes blockPayloads)
+
+ extendBlockPayloads ::
+ Validated (GenTx
ByronBlock) -> + BlockPayloads -> + BlockPayloads + extendBlockPayloads validatedGenTx bp@BlockPayloads{bpTxs, bpDlgCerts, bpUpVotes} = + -- TODO: We should try to use 'recoverProof' (and other variants of + -- 'recoverBytes') here as opposed to throwing away the serializations + -- (the 'ByteString' annotations) with 'void' as we're currently doing. + case txForgetValidated validatedGenTx of + ByronTx _ tx -> bp{bpTxs = void tx : bpTxs} + ByronDlg _ cert -> bp{bpDlgCerts = void cert : bpDlgCerts} + -- TODO: We should throw an error if we encounter multiple + -- 'ByronUpdateProposal's (i.e. if 'bpUpProposal' 'isJust'). + -- This is because we should only be provided with a maximum of one + -- 'ByronUpdateProposal' to include in a block payload. + ByronUpdateProposal _ prop -> bp{bpUpProposal = Just (void prop)} + ByronUpdateVote _ vote -> bp{bpUpVotes = void vote : bpUpVotes} - proof :: CC.Block.Proof - proof = CC.Block.mkProof body + body :: CC.Block.Body + body = + CC.Block.ABody + { CC.Block.bodyTxPayload = txPayload + , CC.Block.bodySscPayload = CC.Ssc.SscPayload + , CC.Block.bodyDlgPayload = dlgPayload + , CC.Block.bodyUpdatePayload = updatePayload + } + + proof :: CC.Block.Proof + proof = CC.Block.mkProof body - prevHeaderHash :: CC.Block.HeaderHash - prevHeaderHash = case getTipHash st of - GenesisHash -> error + prevHeaderHash :: CC.Block.HeaderHash + prevHeaderHash = case getTipHash st of + GenesisHash -> + error "the first block on the Byron chain must be an EBB" - BlockHash (ByronHash h) -> h - - epochAndSlotCount :: CC.Slot.EpochAndSlotCount - epochAndSlotCount = CC.Slot.fromSlotNumber epochSlots (coerce sno) - - toSign :: CC.Block.ToSign - toSign = CC.Block.ToSign { - CC.Block.tsHeaderHash = prevHeaderHash - , CC.Block.tsSlot = epochAndSlotCount - , CC.Block.tsDifficulty = coerce bno - , CC.Block.tsBodyProof = proof - , CC.Block.tsProtocolVersion = byronProtocolVersion cfg - , CC.Block.tsSoftwareVersion = byronSoftwareVersion cfg + BlockHash (ByronHash 
h) -> h + + epochAndSlotCount :: CC.Slot.EpochAndSlotCount + epochAndSlotCount = CC.Slot.fromSlotNumber epochSlots (coerce sno) + + toSign :: CC.Block.ToSign + toSign = + CC.Block.ToSign + { CC.Block.tsHeaderHash = prevHeaderHash + , CC.Block.tsSlot = epochAndSlotCount + , CC.Block.tsDifficulty = coerce bno + , CC.Block.tsBodyProof = proof + , CC.Block.tsProtocolVersion = byronProtocolVersion cfg + , CC.Block.tsSoftwareVersion = byronSoftwareVersion cfg + } + + dlgCertificate :: CC.Delegation.Certificate + dlgCertificate = pbftIsLeaderDlgCert isLeader + + headerGenesisKey :: Crypto.VerificationKey + VerKeyByronDSIGN headerGenesisKey = dlgCertGenVerKey dlgCertificate + + forge :: + PBftFields PBftByronCrypto (Annotated CC.Block.ToSign ByteString) -> + ByronBlock + forge ouroborosPayload = annotateByronBlock epochSlots block + where + block :: CC.Block.Block + block = + CC.Block.ABlock + { CC.Block.blockHeader = header + , CC.Block.blockBody = body + , CC.Block.blockAnnotation = () + } + + headerSignature :: CC.Block.BlockSignature + headerSignature = CC.Block.ABlockSignature dlgCertificate (coerce sig) + where + sig :: Crypto.Signature CC.Block.ToSign + SignedDSIGN (SigByronDSIGN sig) = pbftSignature ouroborosPayload + + header :: CC.Block.Header + header = + CC.Block.AHeader + { CC.Block.aHeaderProtocolMagicId = ann (Crypto.getProtocolMagicId (byronProtocolMagic cfg)) + , CC.Block.aHeaderPrevHash = ann prevHeaderHash + , CC.Block.aHeaderSlot = ann (coerce sno) + , CC.Block.aHeaderDifficulty = ann (coerce bno) + , CC.Block.headerProtocolVersion = byronProtocolVersion cfg + , CC.Block.headerSoftwareVersion = byronSoftwareVersion cfg + , CC.Block.aHeaderProof = ann proof + , CC.Block.headerGenesisKey = headerGenesisKey + , CC.Block.headerSignature = headerSignature + , CC.Block.headerAnnotation = () + , CC.Block.headerExtraAnnotation = () } - dlgCertificate :: CC.Delegation.Certificate - dlgCertificate = pbftIsLeaderDlgCert isLeader - - headerGenesisKey :: 
Crypto.VerificationKey - VerKeyByronDSIGN headerGenesisKey = dlgCertGenVerKey dlgCertificate - - forge :: PBftFields PBftByronCrypto (Annotated CC.Block.ToSign ByteString) - -> ByronBlock - forge ouroborosPayload = annotateByronBlock epochSlots block - where - block :: CC.Block.Block - block = CC.Block.ABlock { - CC.Block.blockHeader = header - , CC.Block.blockBody = body - , CC.Block.blockAnnotation = () - } - - headerSignature :: CC.Block.BlockSignature - headerSignature = CC.Block.ABlockSignature dlgCertificate (coerce sig) - where - sig :: Crypto.Signature CC.Block.ToSign - SignedDSIGN (SigByronDSIGN sig) = pbftSignature ouroborosPayload - - header :: CC.Block.Header - header = CC.Block.AHeader { - CC.Block.aHeaderProtocolMagicId = ann (Crypto.getProtocolMagicId (byronProtocolMagic cfg)) - , CC.Block.aHeaderPrevHash = ann prevHeaderHash - , CC.Block.aHeaderSlot = ann (coerce sno) - , CC.Block.aHeaderDifficulty = ann (coerce bno) - , CC.Block.headerProtocolVersion = byronProtocolVersion cfg - , CC.Block.headerSoftwareVersion = byronSoftwareVersion cfg - , CC.Block.aHeaderProof = ann proof - , CC.Block.headerGenesisKey = headerGenesisKey - , CC.Block.headerSignature = headerSignature - , CC.Block.headerAnnotation = () - , CC.Block.headerExtraAnnotation = () - } - - ann :: b -> Annotated b () - ann b = Annotated b () + ann :: b -> Annotated b () + ann b = Annotated b () diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/HeaderValidation.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/HeaderValidation.hs index af8272b3e0..7c44773897 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/HeaderValidation.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/HeaderValidation.hs @@ -2,27 +2,26 @@ {-# LANGUAGE DeriveGeneric #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module 
Ouroboros.Consensus.Byron.Ledger.HeaderValidation ( - ByronOtherHeaderEnvelopeError (..) +module Ouroboros.Consensus.Byron.Ledger.HeaderValidation + ( ByronOtherHeaderEnvelopeError (..) , TipInfoIsEBB (..) ) where -import qualified Cardano.Chain.Slotting as CC -import Control.Monad (when) -import Control.Monad.Except (throwError) -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger.Block -import Ouroboros.Consensus.Byron.Ledger.Config -import Ouroboros.Consensus.Byron.Ledger.Orphans () -import Ouroboros.Consensus.Byron.Ledger.PBFT () -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HeaderValidation +import Cardano.Chain.Slotting qualified as CC +import Control.Monad (when) +import Control.Monad.Except (throwError) +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger.Block +import Ouroboros.Consensus.Byron.Ledger.Config +import Ouroboros.Consensus.Byron.Ledger.Orphans () +import Ouroboros.Consensus.Byron.Ledger.PBFT () +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HeaderValidation {------------------------------------------------------------------------------- Envelope @@ -33,45 +32,46 @@ instance HasAnnTip ByronBlock where tipInfoHash _ (TipInfoIsEBB h _) = h getTipInfo b = TipInfoIsEBB (blockHash b) (byronHeaderIsEBB b) -data ByronOtherHeaderEnvelopeError = - UnexpectedEBBInSlot !SlotNo +data ByronOtherHeaderEnvelopeError + = UnexpectedEBBInSlot !SlotNo deriving (Eq, Show, Generic, NoThunks) instance BasicEnvelopeValidation ByronBlock where - expectedFirstBlockNo _ = BlockNo 0 + expectedFirstBlockNo _ = BlockNo 0 minimumPossibleSlotNo _ = SlotNo 0 -- EBB shares its block number with its predecessor expectedNextBlockNo _ (TipInfoIsEBB _ prevIsEBB) (TipInfoIsEBB _ curIsEBB) b = - case (prevIsEBB, curIsEBB) of - (IsNotEBB, IsEBB) 
-> b - _otherwise -> succ b + case (prevIsEBB, curIsEBB) of + (IsNotEBB, IsEBB) -> b + _otherwise -> succ b -- EBB shares its slot number with its successor minimumNextSlotNo _ (TipInfoIsEBB _ prevIsEBB) (TipInfoIsEBB _ curIsEBB) s = - case (prevIsEBB, curIsEBB) of - (IsEBB, IsNotEBB) -> s - _otherwise -> succ s + case (prevIsEBB, curIsEBB) of + (IsEBB, IsNotEBB) -> s + _otherwise -> succ s instance ValidateEnvelope ByronBlock where type OtherHeaderEnvelopeError ByronBlock = ByronOtherHeaderEnvelopeError additionalEnvelopeChecks cfg _ledgerView hdr = - when (fromIsEBB newIsEBB && not (canBeEBB actualSlotNo)) $ - throwError $ UnexpectedEBBInSlot actualSlotNo - where - actualSlotNo :: SlotNo - actualSlotNo = blockSlot hdr + when (fromIsEBB newIsEBB && not (canBeEBB actualSlotNo)) $ + throwError $ + UnexpectedEBBInSlot actualSlotNo + where + actualSlotNo :: SlotNo + actualSlotNo = blockSlot hdr - newIsEBB :: IsEBB - newIsEBB = byronHeaderIsEBB hdr + newIsEBB :: IsEBB + newIsEBB = byronHeaderIsEBB hdr - canBeEBB :: SlotNo -> Bool - canBeEBB (SlotNo s) = s `mod` epochSlots == 0 + canBeEBB :: SlotNo -> Bool + canBeEBB (SlotNo s) = s `mod` epochSlots == 0 - epochSlots :: Word64 - epochSlots = - CC.unEpochSlots + epochSlots :: Word64 + epochSlots = + CC.unEpochSlots . byronEpochSlots . configBlock $ cfg diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Inspect.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Inspect.hs index 6df86bab9c..4f05e5c319 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Inspect.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Inspect.hs @@ -1,41 +1,41 @@ {-# LANGUAGE MultiWayIf #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Byron.Ledger.Inspect ( - ByronLedgerUpdate (..) +module Ouroboros.Consensus.Byron.Ledger.Inspect + ( ByronLedgerUpdate (..) 
+ -- * Layer around the Byron protocol update interface , ProtocolUpdate (..) , UpdateState (..) , protocolUpdates ) where -import qualified Cardano.Chain.Block as CC -import qualified Cardano.Chain.Common as CC -import qualified Cardano.Chain.Genesis as CC.Genesis -import qualified Cardano.Chain.ProtocolConstants as CC -import qualified Cardano.Chain.Slotting as CC -import qualified Cardano.Chain.Update as U -import qualified Cardano.Chain.Update.Validation.Endorsement as U.E -import qualified Cardano.Chain.Update.Validation.Interface as U.I -import qualified Cardano.Chain.Update.Validation.Registration as U.R -import Control.Monad -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Void -import Data.Word -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger.Block -import Ouroboros.Consensus.Byron.Ledger.Conversions -import Ouroboros.Consensus.Byron.Ledger.Ledger -import Ouroboros.Consensus.Config -import qualified Ouroboros.Consensus.HardFork.History.Util as History -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Util.Condense +import Cardano.Chain.Block qualified as CC +import Cardano.Chain.Common qualified as CC +import Cardano.Chain.Genesis qualified as CC.Genesis +import Cardano.Chain.ProtocolConstants qualified as CC +import Cardano.Chain.Slotting qualified as CC +import Cardano.Chain.Update qualified as U +import Cardano.Chain.Update.Validation.Endorsement qualified as U.E +import Cardano.Chain.Update.Validation.Interface qualified as U.I +import Cardano.Chain.Update.Validation.Registration qualified as U.R +import Control.Monad +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Void +import Data.Word +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger.Block +import 
Ouroboros.Consensus.Byron.Ledger.Conversions +import Ouroboros.Consensus.Byron.Ledger.Ledger +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.History.Util qualified as History +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- Protocol update @@ -47,43 +47,39 @@ import Ouroboros.Consensus.Util.Condense -- don't really need to track them, and adding them would add a lot of output -- to the 'Show' instance. We could easily add them however if that would be -- useful. -data ProtocolUpdate = ProtocolUpdate { - protocolUpdateVersion :: U.ProtocolVersion - , protocolUpdateState :: UpdateState - } +data ProtocolUpdate = ProtocolUpdate + { protocolUpdateVersion :: U.ProtocolVersion + , protocolUpdateState :: UpdateState + } deriving (Show, Eq) -- | The various states a protocol update goes through -- -- Listed in chronological order. -data UpdateState = - -- | The update was registered, but does not yet have any votes +data UpdateState + = -- | The update was registered, but does not yet have any votes -- -- We record the 'SlotNo' of the slot in which the update was registered. -- After registration, nodes must vote on it. UpdateRegistered SlotNo - - -- | The update is accumulating votes + | -- | The update is accumulating votes -- -- We record which nodes have voted for the proposal. The proposal must -- accumulate a sufficient number of votes before it can be confirmed. - | UpdateActive (Set CC.KeyHash) - - -- | The update has amassed a sufficient number of votes + UpdateActive (Set CC.KeyHash) + | -- | The update has amassed a sufficient number of votes -- -- We record the 'SlotNo' of the slot in which the required threshold of -- votes was met. At this point @2k@ slots need to pass before the update -- can be endorsed. - | UpdateConfirmed SlotNo - - -- | The votes are stable. 
We can start to accumulate endorsements. + UpdateConfirmed SlotNo + | -- | The votes are stable. We can start to accumulate endorsements. -- -- We record which nodes have endorsed the proposal. The proposal must -- accumulate a sufficient number of endorsements before it is nominated -- and becomes a candidate. - | UpdateStablyConfirmed (Set CC.KeyHash) - - -- | The update has amassed a sufficient number of endorsements + UpdateStablyConfirmed (Set CC.KeyHash) + | -- | The update has amassed a sufficient number of endorsements -- -- We record the 'SlotNo' of the slot in which the required threshold of -- endorsement was met. At this point a further @2k@ slots need to pass @@ -91,159 +87,163 @@ data UpdateState = -- -- We additionally record the 'EpochNo' in which the candidate will be -- adopted, /if/ it becomes stable. - | UpdateCandidate SlotNo EpochNo - - -- | The endorsements are stable. The update will be accepted. + UpdateCandidate SlotNo EpochNo + | -- | The endorsements are stable. The update will be accepted. -- -- We record the 'EpochNo' of the epoch in which it will become active. - | UpdateStableCandidate EpochNo + UpdateStableCandidate EpochNo deriving (Show, Eq) -- | All proposal updates, from new to old protocolUpdates :: - LedgerConfig ByronBlock - -> LedgerState ByronBlock mk - -> [ProtocolUpdate] -protocolUpdates genesis st = concat [ - map fromCandidate candidates - - -- Don't record an update both as a proposal and a candidate - , map fromRegistered . Map.toList $ + LedgerConfig ByronBlock -> + LedgerState ByronBlock mk -> + [ProtocolUpdate] +protocolUpdates genesis st = + concat + [ map fromCandidate candidates + , -- Don't record an update both as a proposal and a candidate + map fromRegistered . Map.toList $ Map.filter (not . hasCandidate . 
U.R.pupProtocolVersion) registered ] - where - -- Configuration - - k :: CC.BlockCount - epochSize :: CC.EpochSlots - stableAfter :: Word64 - takesEffectAfter :: Word64 - - k = CC.Genesis.gdK $ CC.Genesis.configGenesisData genesis - epochSize = CC.Genesis.configEpochSlots genesis - stableAfter = CC.unSlotCount $ CC.kSlotSecurityParam k - takesEffectAfter = CC.unSlotCount $ CC.kUpdateStabilityParam k - - -- The impossible cases are impossible because these slots refer to - -- the slots of blocks on the chain. - isStable :: SlotNo -> Bool - isStable slotNo = depth >= stableAfter - where - depth :: Word64 - depth = case ledgerTipSlot st of - Origin -> error "isStable: impossible" - NotOrigin s -> if s < slotNo - then error "isStable: impossible" - else History.countSlots s slotNo - - -- Extract relevant bits from the update state - - updState :: U.I.State - registered :: U.R.ProtocolUpdateProposals - registeredAt :: Map U.UpId CC.SlotNumber - confirmed :: Map U.UpId CC.SlotNumber - votes :: Map U.UpId (Set CC.KeyHash) - candidates :: [U.E.CandidateProtocolUpdate] - endorsements :: Map U.ProtocolVersion (Set CC.KeyHash) - - updState = CC.cvsUpdateState $ byronLedgerState st - registered = U.I.registeredProtocolUpdateProposals updState - registeredAt = U.I.proposalRegistrationSlot updState - confirmed = U.I.confirmedProposals updState - votes = U.I.proposalVotes updState - candidates = U.I.candidateProtocolUpdates updState - endorsements = Map.fromListWith Set.union - . map (\e -> ( U.E.endorsementProtocolVersion e - , Set.singleton (U.E.endorsementKeyHash e) - )) - . 
Set.toList - $ U.I.registeredEndorsements updState - - -- From registered proposals - - fromRegistered :: (U.UpId, U.R.ProtocolUpdateProposal) -> ProtocolUpdate - fromRegistered (upId, proposal) = ProtocolUpdate { - protocolUpdateVersion = version - , protocolUpdateState = - -- We do the checks in reverse chronological order - if | not (Set.null updEndorsed) -> - UpdateStablyConfirmed updEndorsed - - | Just confirmedInSlot <- updConfirmed -> - if isStable confirmedInSlot - then UpdateStablyConfirmed Set.empty - else UpdateConfirmed confirmedInSlot - - | not (Set.null updVotes) -> - UpdateActive updVotes - - | otherwise -> - UpdateRegistered updSlot - } - where - version :: U.ProtocolVersion - version = U.R.pupProtocolVersion proposal - - updVotes :: Set CC.KeyHash - updConfirmed :: Maybe SlotNo - updEndorsed :: Set CC.KeyHash - updSlot :: SlotNo - - updVotes = Map.findWithDefault Set.empty upId votes - updConfirmed = fromByronSlotNo <$> Map.lookup upId confirmed - updEndorsed = Map.findWithDefault Set.empty version endorsements - updSlot = case Map.lookup upId registeredAt of - Nothing -> error "updSlot: invalid Byron state" - Just slot -> fromByronSlotNo slot - - -- From candidate proposals - - fromCandidate :: U.E.CandidateProtocolUpdate -> ProtocolUpdate - fromCandidate candidate = ProtocolUpdate { - protocolUpdateVersion = version - , protocolUpdateState = - if not (isStable slot) - then UpdateCandidate slot (cpuEpoch slot) - else UpdateStableCandidate (cpuEpoch slot) - } - where - slot :: SlotNo - version :: U.ProtocolVersion - - slot = fromByronSlotNo $ U.E.cpuSlot candidate - version = U.E.cpuProtocolVersion candidate - - -- Is there a candidate for this version? - hasCandidate :: U.ProtocolVersion -> Bool - hasCandidate v = any ((== v) . U.E.cpuProtocolVersion) candidates - - -- Given the 'SlotNo' of a candidate, compute in which 'Epoch' it will - -- become active. - -- - -- This follows the same structure as the computation in the A/B test. 
Let - -- @s@ be the slot the update proposal was endorsed (gathered enough - -- endorsements). Note that the very first slot in which the transition - -- /could/ occur is @s + 1@; adding the required stability, the first slot - -- in which the transition could occur is @s + 4k + 1@. This means that the - -- last slot which /must/ be in /this/ era is @s + 4k@. Hence the last - -- /epoch/ that must be in this era is @epoch (s + 4k)@, and the first epoch - -- of the /next/ era is @succ (epoch (s + 4k))@. - cpuEpoch :: SlotNo -> EpochNo - cpuEpoch = succ . slotToEpoch . History.addSlots takesEffectAfter - - -- Slot conversion - -- - -- This is valid for slots in the Byron era only; just like the Byron - -- ledger itself, it assumes the Byron era is the /first/ era. - slotToEpoch :: SlotNo -> EpochNo - slotToEpoch (SlotNo s) = EpochNo (s `div` CC.unEpochSlots epochSize) + where + -- Configuration + + k :: CC.BlockCount + epochSize :: CC.EpochSlots + stableAfter :: Word64 + takesEffectAfter :: Word64 + + k = CC.Genesis.gdK $ CC.Genesis.configGenesisData genesis + epochSize = CC.Genesis.configEpochSlots genesis + stableAfter = CC.unSlotCount $ CC.kSlotSecurityParam k + takesEffectAfter = CC.unSlotCount $ CC.kUpdateStabilityParam k + + -- The impossible cases are impossible because these slots refer to + -- the slots of blocks on the chain. 
+ isStable :: SlotNo -> Bool + isStable slotNo = depth >= stableAfter + where + depth :: Word64 + depth = case ledgerTipSlot st of + Origin -> error "isStable: impossible" + NotOrigin s -> + if s < slotNo + then error "isStable: impossible" + else History.countSlots s slotNo + + -- Extract relevant bits from the update state + + updState :: U.I.State + registered :: U.R.ProtocolUpdateProposals + registeredAt :: Map U.UpId CC.SlotNumber + confirmed :: Map U.UpId CC.SlotNumber + votes :: Map U.UpId (Set CC.KeyHash) + candidates :: [U.E.CandidateProtocolUpdate] + endorsements :: Map U.ProtocolVersion (Set CC.KeyHash) + + updState = CC.cvsUpdateState $ byronLedgerState st + registered = U.I.registeredProtocolUpdateProposals updState + registeredAt = U.I.proposalRegistrationSlot updState + confirmed = U.I.confirmedProposals updState + votes = U.I.proposalVotes updState + candidates = U.I.candidateProtocolUpdates updState + endorsements = + Map.fromListWith Set.union + . map + ( \e -> + ( U.E.endorsementProtocolVersion e + , Set.singleton (U.E.endorsementKeyHash e) + ) + ) + . 
Set.toList + $ U.I.registeredEndorsements updState + + -- From registered proposals + + fromRegistered :: (U.UpId, U.R.ProtocolUpdateProposal) -> ProtocolUpdate + fromRegistered (upId, proposal) = + ProtocolUpdate + { protocolUpdateVersion = version + , protocolUpdateState = + -- We do the checks in reverse chronological order + if + | not (Set.null updEndorsed) -> + UpdateStablyConfirmed updEndorsed + | Just confirmedInSlot <- updConfirmed -> + if isStable confirmedInSlot + then UpdateStablyConfirmed Set.empty + else UpdateConfirmed confirmedInSlot + | not (Set.null updVotes) -> + UpdateActive updVotes + | otherwise -> + UpdateRegistered updSlot + } + where + version :: U.ProtocolVersion + version = U.R.pupProtocolVersion proposal + + updVotes :: Set CC.KeyHash + updConfirmed :: Maybe SlotNo + updEndorsed :: Set CC.KeyHash + updSlot :: SlotNo + + updVotes = Map.findWithDefault Set.empty upId votes + updConfirmed = fromByronSlotNo <$> Map.lookup upId confirmed + updEndorsed = Map.findWithDefault Set.empty version endorsements + updSlot = case Map.lookup upId registeredAt of + Nothing -> error "updSlot: invalid Byron state" + Just slot -> fromByronSlotNo slot + + -- From candidate proposals + + fromCandidate :: U.E.CandidateProtocolUpdate -> ProtocolUpdate + fromCandidate candidate = + ProtocolUpdate + { protocolUpdateVersion = version + , protocolUpdateState = + if not (isStable slot) + then UpdateCandidate slot (cpuEpoch slot) + else UpdateStableCandidate (cpuEpoch slot) + } + where + slot :: SlotNo + version :: U.ProtocolVersion + + slot = fromByronSlotNo $ U.E.cpuSlot candidate + version = U.E.cpuProtocolVersion candidate + + -- Is there a candidate for this version? + hasCandidate :: U.ProtocolVersion -> Bool + hasCandidate v = any ((== v) . U.E.cpuProtocolVersion) candidates + + -- Given the 'SlotNo' of a candidate, compute in which 'Epoch' it will + -- become active. + -- + -- This follows the same structure as the computation in the A/B test. 
Let + -- @s@ be the slot the update proposal was endorsed (gathered enough + -- endorsements). Note that the very first slot in which the transition + -- /could/ occur is @s + 1@; adding the required stability, the first slot + -- in which the transition could occur is @s + 4k + 1@. This means that the + -- last slot which /must/ be in /this/ era is @s + 4k@. Hence the last + -- /epoch/ that must be in this era is @epoch (s + 4k)@, and the first epoch + -- of the /next/ era is @succ (epoch (s + 4k))@. + cpuEpoch :: SlotNo -> EpochNo + cpuEpoch = succ . slotToEpoch . History.addSlots takesEffectAfter + + -- Slot conversion + -- + -- This is valid for slots in the Byron era only; just like the Byron + -- ledger itself, it assumes the Byron era is the /first/ era. + slotToEpoch :: SlotNo -> EpochNo + slotToEpoch (SlotNo s) = EpochNo (s `div` CC.unEpochSlots epochSize) {------------------------------------------------------------------------------- Inspection -------------------------------------------------------------------------------} -data ByronLedgerUpdate = - ByronUpdatedProtocolUpdates [ProtocolUpdate] +data ByronLedgerUpdate + = ByronUpdatedProtocolUpdates [ProtocolUpdate] deriving (Show, Eq) instance Condense ByronLedgerUpdate where @@ -251,12 +251,12 @@ instance Condense ByronLedgerUpdate where instance InspectLedger ByronBlock where type LedgerWarning ByronBlock = Void - type LedgerUpdate ByronBlock = ByronLedgerUpdate + type LedgerUpdate ByronBlock = ByronLedgerUpdate inspectLedger tlc before after = do - guard $ updatesBefore /= updatesAfter - return $ LedgerUpdate $ ByronUpdatedProtocolUpdates updatesAfter - where - updatesBefore, updatesAfter :: [ProtocolUpdate] - updatesBefore = protocolUpdates (configLedger tlc) before - updatesAfter = protocolUpdates (configLedger tlc) after + guard $ updatesBefore /= updatesAfter + return $ LedgerUpdate $ ByronUpdatedProtocolUpdates updatesAfter + where + updatesBefore, updatesAfter :: [ProtocolUpdate] + 
updatesBefore = protocolUpdates (configLedger tlc) before + updatesAfter = protocolUpdates (configLedger tlc) after diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Integrity.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Integrity.hs index 174a21ea79..a92d8ae43a 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Integrity.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Integrity.hs @@ -1,19 +1,19 @@ {-# LANGUAGE NamedFieldPuns #-} -module Ouroboros.Consensus.Byron.Ledger.Integrity ( - verifyBlockIntegrity +module Ouroboros.Consensus.Byron.Ledger.Integrity + ( verifyBlockIntegrity , verifyHeaderIntegrity , verifyHeaderSignature ) where -import qualified Cardano.Chain.Block as CC -import qualified Cardano.Crypto.DSIGN.Class as CC.Crypto -import Data.Either (isRight) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger.Block -import Ouroboros.Consensus.Byron.Ledger.Config -import Ouroboros.Consensus.Byron.Ledger.PBFT () -import Ouroboros.Consensus.Protocol.PBFT +import Cardano.Chain.Block qualified as CC +import Cardano.Crypto.DSIGN.Class qualified as CC.Crypto +import Data.Either (isRight) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger.Block +import Ouroboros.Consensus.Byron.Ledger.Config +import Ouroboros.Consensus.Byron.Ledger.PBFT () +import Ouroboros.Consensus.Protocol.PBFT -- | Verify whether a header matches its signature. -- @@ -21,17 +21,18 @@ import Ouroboros.Consensus.Protocol.PBFT -- This function will always return 'True' for an EBB. 
verifyHeaderSignature :: BlockConfig ByronBlock -> Header ByronBlock -> Bool verifyHeaderSignature cfg hdr = - case validateView cfg hdr of - PBftValidateBoundary{} -> - -- EBB, no signature to check - True - PBftValidateRegular fields signed contextDSIGN -> - let PBftFields { pbftIssuer, pbftSignature } = fields - in isRight $ CC.Crypto.verifySignedDSIGN - contextDSIGN - pbftIssuer - signed - pbftSignature + case validateView cfg hdr of + PBftValidateBoundary{} -> + -- EBB, no signature to check + True + PBftValidateRegular fields signed contextDSIGN -> + let PBftFields{pbftIssuer, pbftSignature} = fields + in isRight $ + CC.Crypto.verifySignedDSIGN + contextDSIGN + pbftIssuer + signed + pbftSignature -- | Verify whether a header is not corrupted. -- @@ -43,15 +44,16 @@ verifyHeaderSignature cfg hdr = -- This function will always return 'True' for an EBB. verifyHeaderIntegrity :: BlockConfig ByronBlock -> Header ByronBlock -> Bool verifyHeaderIntegrity cfg hdr = - verifyHeaderSignature cfg hdr && + verifyHeaderSignature cfg hdr + && -- @CC.headerProtocolMagicId@ is the only field of a regular header that -- is not signed, so check it manually. case byronHeaderRaw hdr of - CC.ABOBBlockHdr h -> CC.headerProtocolMagicId h == protocolMagicId - -- EBB, we can't check it - CC.ABOBBoundaryHdr _ -> True - where - protocolMagicId = byronProtocolMagicId cfg + CC.ABOBBlockHdr h -> CC.headerProtocolMagicId h == protocolMagicId + -- EBB, we can't check it + CC.ABOBBoundaryHdr _ -> True + where + protocolMagicId = byronProtocolMagicId cfg -- | Verifies whether the block is not corrupted by checking its signature and -- witnesses. @@ -60,7 +62,7 @@ verifyHeaderIntegrity cfg hdr = -- anything for an EBB. 
verifyBlockIntegrity :: BlockConfig ByronBlock -> ByronBlock -> Bool verifyBlockIntegrity cfg blk = - verifyHeaderIntegrity cfg hdr && - blockMatchesHeader hdr blk - where - hdr = getHeader blk + verifyHeaderIntegrity cfg hdr + && blockMatchesHeader hdr blk + where + hdr = getHeader blk diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Ledger.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Ledger.hs index 1f96bf9baa..8ef1fb88c2 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Ledger.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Ledger.hs @@ -14,16 +14,17 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Instances requires for consensus/ledger integration -module Ouroboros.Consensus.Byron.Ledger.Ledger ( - ByronTransition (..) +module Ouroboros.Consensus.Byron.Ledger.Ledger + ( ByronTransition (..) + -- * Ledger integration , byronEraParams , byronEraParamsNeverHardForks , initByronLedgerState + -- * Serialisation , decodeByronAnnTip , decodeByronLedgerState @@ -35,77 +36,79 @@ module Ouroboros.Consensus.Byron.Ledger.Ledger ( , encodeByronLedgerState , encodeByronQuery , encodeByronResult + -- * Type family instances , BlockQuery (..) , LedgerState (..) , LedgerTables (..) , Ticked (..) 
+ -- * Auxiliary , validationErrorImpossible ) where -import qualified Cardano.Chain.Block as CC -import qualified Cardano.Chain.Byron.API as CC -import qualified Cardano.Chain.Common as Gen -import qualified Cardano.Chain.Genesis as Gen -import qualified Cardano.Chain.Update as Update -import qualified Cardano.Chain.Update.Validation.Endorsement as UPE -import qualified Cardano.Chain.Update.Validation.Interface as UPI -import qualified Cardano.Chain.UTxO as CC -import qualified Cardano.Chain.ValidationMode as CC -import Cardano.Ledger.BaseTypes (unNonZero) -import Cardano.Ledger.Binary (fromByronCBOR, toByronCBOR) -import Cardano.Ledger.Binary.Plain (encodeListLen, enforceSize) -import Codec.CBOR.Decoding (Decoder) -import qualified Codec.CBOR.Decoding as CBOR -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (decode, encode) -import Control.Monad (replicateM) -import Control.Monad.Except (Except, runExcept, throwError) -import qualified Control.State.Transition.Extended as STS -import Data.ByteString (ByteString) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Void (Void) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger.Block -import Ouroboros.Consensus.Byron.Ledger.Conversions -import Ouroboros.Consensus.Byron.Ledger.HeaderValidation () -import Ouroboros.Consensus.Byron.Ledger.PBFT -import Ouroboros.Consensus.Byron.Ledger.Serialisation -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.HardFork.Abstract -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.CommonProtocolParams -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query -import 
Ouroboros.Consensus.Ledger.SupportsPeerSelection -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Util (ShowProxy (..)) -import Ouroboros.Consensus.Util.IndexedMemPack +import Cardano.Chain.Block qualified as CC +import Cardano.Chain.Byron.API qualified as CC +import Cardano.Chain.Common qualified as Gen +import Cardano.Chain.Genesis qualified as Gen +import Cardano.Chain.UTxO qualified as CC +import Cardano.Chain.Update qualified as Update +import Cardano.Chain.Update.Validation.Endorsement qualified as UPE +import Cardano.Chain.Update.Validation.Interface qualified as UPI +import Cardano.Chain.ValidationMode qualified as CC +import Cardano.Ledger.BaseTypes (unNonZero) +import Cardano.Ledger.Binary (fromByronCBOR, toByronCBOR) +import Cardano.Ledger.Binary.Plain (encodeListLen, enforceSize) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (decode, encode) +import Control.Monad (replicateM) +import Control.Monad.Except (Except, runExcept, throwError) +import Control.State.Transition.Extended qualified as STS +import Data.ByteString (ByteString) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Void (Void) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger.Block +import Ouroboros.Consensus.Byron.Ledger.Conversions +import Ouroboros.Consensus.Byron.Ledger.HeaderValidation () +import Ouroboros.Consensus.Byron.Ledger.PBFT +import Ouroboros.Consensus.Byron.Ledger.Serialisation +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import 
Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsPeerSelection +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Util (ShowProxy (..)) +import Ouroboros.Consensus.Util.IndexedMemPack {------------------------------------------------------------------------------- LedgerState -------------------------------------------------------------------------------} -data instance LedgerState ByronBlock mk = ByronLedgerState { - byronLedgerTipBlockNo :: !(WithOrigin BlockNo) - , byronLedgerState :: !CC.ChainValidationState - , byronLedgerTransition :: !ByronTransition - } +data instance LedgerState ByronBlock mk = ByronLedgerState + { byronLedgerTipBlockNo :: !(WithOrigin BlockNo) + , byronLedgerState :: !CC.ChainValidationState + , byronLedgerTransition :: !ByronTransition + } deriving (Eq, Show, Generic, NoThunks) -- | Information required to determine the transition from Byron to Shelley -data ByronTransition = - -- | Per candidate proposal, the 'BlockNo' in which it became a candidate +data ByronTransition + = -- | Per candidate proposal, the 'BlockNo' in which it became a candidate -- -- The HFC needs to know when a candidate proposal becomes stable. 
We cannot -- reliably do this using 'SlotNo': doing so would mean that if we were to @@ -124,25 +127,31 @@ instance UpdateLedger ByronBlock type instance LedgerCfg (LedgerState ByronBlock) = Gen.Config -initByronLedgerState :: Gen.Config - -> Maybe CC.UTxO -- ^ Optionally override UTxO - -> LedgerState ByronBlock mk -initByronLedgerState genesis mUtxo = ByronLedgerState { - byronLedgerState = override mUtxo initState +initByronLedgerState :: + Gen.Config -> + -- | Optionally override UTxO + Maybe CC.UTxO -> + LedgerState ByronBlock mk +initByronLedgerState genesis mUtxo = + ByronLedgerState + { byronLedgerState = override mUtxo initState , byronLedgerTipBlockNo = Origin , byronLedgerTransition = ByronTransitionInfo Map.empty } - where - initState :: CC.ChainValidationState - initState = case runExcept $ CC.initialChainValidationState genesis of - Right st -> st - Left e -> error $ + where + initState :: CC.ChainValidationState + initState = case runExcept $ CC.initialChainValidationState genesis of + Right st -> st + Left e -> + error $ "could not create initial ChainValidationState: " <> show e - override :: Maybe CC.UTxO - -> CC.ChainValidationState -> CC.ChainValidationState - override Nothing st = st - override (Just utxo) st = st { CC.cvsUtxo = utxo } + override :: + Maybe CC.UTxO -> + CC.ChainValidationState -> + CC.ChainValidationState + override Nothing st = st + override (Just utxo) st = st{CC.cvsUtxo = utxo} {------------------------------------------------------------------------------- GetTip @@ -156,40 +165,42 @@ instance GetTip (Ticked (LedgerState ByronBlock)) where getByronTip :: CC.ChainValidationState -> Point ByronBlock getByronTip state = - case CC.cvsPreviousHash state of - -- In this case there are no blocks in the ledger state. The genesis - -- block does not occupy a slot, so its point is Origin. 
- Left _genHash -> GenesisPoint - Right hdrHash -> BlockPoint slot (ByronHash hdrHash) - where - slot = fromByronSlotNo (CC.cvsLastSlot state) + case CC.cvsPreviousHash state of + -- In this case there are no blocks in the ledger state. The genesis + -- block does not occupy a slot, so its point is Origin. + Left _genHash -> GenesisPoint + Right hdrHash -> BlockPoint slot (ByronHash hdrHash) + where + slot = fromByronSlotNo (CC.cvsLastSlot state) {------------------------------------------------------------------------------- Ticked ledger state -------------------------------------------------------------------------------} -- | The ticked Byron ledger state -data instance Ticked (LedgerState ByronBlock) mk = TickedByronLedgerState { - tickedByronLedgerState :: !CC.ChainValidationState - , untickedByronLedgerTransition :: !ByronTransition - } +data instance Ticked (LedgerState ByronBlock) mk = TickedByronLedgerState + { tickedByronLedgerState :: !CC.ChainValidationState + , untickedByronLedgerTransition :: !ByronTransition + } deriving (Generic, NoThunks) instance IsLedger (LedgerState ByronBlock) where type LedgerErr (LedgerState ByronBlock) = CC.ChainValidationError - type AuxLedgerEvent (LedgerState ByronBlock) = - VoidLedgerEvent (LedgerState ByronBlock) + type + AuxLedgerEvent (LedgerState ByronBlock) = + VoidLedgerEvent (LedgerState ByronBlock) - applyChainTickLedgerResult _ cfg slotNo ByronLedgerState{..} = pureLedgerResult $ - TickedByronLedgerState { - tickedByronLedgerState = + applyChainTickLedgerResult _ cfg slotNo ByronLedgerState{..} = + pureLedgerResult $ + TickedByronLedgerState + { tickedByronLedgerState = CC.applyChainTick cfg (toByronSlotNo slotNo) byronLedgerState , untickedByronLedgerTransition = byronLedgerTransition } -type instance TxIn (LedgerState ByronBlock) = Void +type instance TxIn (LedgerState ByronBlock) = Void type instance TxOut (LedgerState ByronBlock) = Void instance LedgerTablesAreTrivial (LedgerState ByronBlock) where @@ 
-197,17 +208,27 @@ instance LedgerTablesAreTrivial (LedgerState ByronBlock) where instance LedgerTablesAreTrivial (Ticked (LedgerState ByronBlock)) where convertMapKind (TickedByronLedgerState x y) = TickedByronLedgerState x y -deriving via Void - instance IndexedMemPack (LedgerState ByronBlock EmptyMK) Void - -deriving via TrivialLedgerTables (LedgerState ByronBlock) - instance HasLedgerTables (LedgerState ByronBlock) -deriving via TrivialLedgerTables (Ticked (LedgerState ByronBlock)) - instance HasLedgerTables (Ticked (LedgerState ByronBlock)) -deriving via TrivialLedgerTables (LedgerState ByronBlock) - instance CanStowLedgerTables (LedgerState ByronBlock) -deriving via TrivialLedgerTables (LedgerState ByronBlock) - instance SerializeTablesWithHint (LedgerState ByronBlock) +deriving via + Void + instance + IndexedMemPack (LedgerState ByronBlock EmptyMK) Void + +deriving via + TrivialLedgerTables (LedgerState ByronBlock) + instance + HasLedgerTables (LedgerState ByronBlock) +deriving via + TrivialLedgerTables (Ticked (LedgerState ByronBlock)) + instance + HasLedgerTables (Ticked (LedgerState ByronBlock)) +deriving via + TrivialLedgerTables (LedgerState ByronBlock) + instance + CanStowLedgerTables (LedgerState ByronBlock) +deriving via + TrivialLedgerTables (LedgerState ByronBlock) + instance + SerializeTablesWithHint (LedgerState ByronBlock) {------------------------------------------------------------------------------- Supporting the various consensus interfaces @@ -226,9 +247,9 @@ data instance BlockQuery ByronBlock fp result where instance BlockSupportsLedgerQuery ByronBlock where answerPureBlockQuery _cfg GetUpdateInterfaceState dlv = - CC.cvsUpdateState (byronLedgerState ledgerState) - where - ExtLedgerState { ledgerState } = dlv + CC.cvsUpdateState (byronLedgerState ledgerState) + where + ExtLedgerState{ledgerState} = dlv answerBlockQueryLookup _cfg q _dlv = case q of {} answerBlockQueryTraverse _cfg q _dlv = case q of {} blockQueryIsSupportedOnVersion 
GetUpdateInterfaceState = const True @@ -242,25 +263,25 @@ deriving instance Show (BlockQuery ByronBlock fp result) instance ShowQuery (BlockQuery ByronBlock fp) where showResult GetUpdateInterfaceState = show -instance ShowProxy (BlockQuery ByronBlock) where +instance ShowProxy (BlockQuery ByronBlock) instance LedgerSupportsPeerSelection ByronBlock where getPeers = const [] instance CommonProtocolParams ByronBlock where maxHeaderSize = fromIntegral . Update.ppMaxHeaderSize . getProtocolParameters - maxTxSize = fromIntegral . Update.ppMaxTxSize . getProtocolParameters + maxTxSize = fromIntegral . Update.ppMaxTxSize . getProtocolParameters -- | Return the protocol parameters adopted by the given ledger. getProtocolParameters :: LedgerState ByronBlock mk -> Update.ProtocolParameters getProtocolParameters = - CC.adoptedProtocolParameters + CC.adoptedProtocolParameters . CC.cvsUpdateState . byronLedgerState instance LedgerSupportsProtocol ByronBlock where protocolLedgerView _cfg = - toPBftLedgerView + toPBftLedgerView . CC.getDelegationMap . tickedByronLedgerState @@ -280,45 +301,49 @@ instance LedgerSupportsProtocol ByronBlock where -- To create a forecast, take the delegation state from the given ledger -- state, and apply the updates that should be applied by the given slot. 
ledgerViewForecastAt cfg (ByronLedgerState _tipBlkNo st _) = Forecast at $ \for -> - toPBftLedgerView <$> if + toPBftLedgerView + <$> if | for == lastSlot -> - return $ CC.getDelegationMap st + return $ CC.getDelegationMap st | for < maxFor -> - return $ CC.previewDelegationMap (toByronSlotNo for) st + return $ CC.previewDelegationMap (toByronSlotNo for) st | otherwise -> - throwError $ OutsideForecastRange { - outsideForecastAt = at - , outsideForecastMaxFor = maxFor - , outsideForecastFor = for - } - where - k = unNonZero $ maxRollbacks $ genesisSecurityParam cfg - lastSlot = fromByronSlotNo $ CC.cvsLastSlot st - at = NotOrigin lastSlot - - -- The upper bound is exclusive - maxFor :: SlotNo - maxFor = case at of - Origin -> SlotNo $ 2 * k - NotOrigin s -> SlotNo $ unSlotNo s + 1 + (2 * k) + throwError $ + OutsideForecastRange + { outsideForecastAt = at + , outsideForecastMaxFor = maxFor + , outsideForecastFor = for + } + where + k = unNonZero $ maxRollbacks $ genesisSecurityParam cfg + lastSlot = fromByronSlotNo $ CC.cvsLastSlot st + at = NotOrigin lastSlot + + -- The upper bound is exclusive + maxFor :: SlotNo + maxFor = case at of + Origin -> SlotNo $ 2 * k + NotOrigin s -> SlotNo $ unSlotNo s + 1 + (2 * k) -- | To be used for a Byron-to-X (where X is typically Shelley) chain. 
byronEraParams :: Gen.Config -> HardFork.EraParams -byronEraParams genesis = HardFork.EraParams { - eraEpochSize = fromByronEpochSlots $ Gen.configEpochSlots genesis +byronEraParams genesis = + HardFork.EraParams + { eraEpochSize = fromByronEpochSlots $ Gen.configEpochSlots genesis , eraSlotLength = fromByronSlotLength $ genesisSlotLength genesis - , eraSafeZone = HardFork.StandardSafeZone (2 * k) + , eraSafeZone = HardFork.StandardSafeZone (2 * k) , eraGenesisWin = GenesisWindow (2 * k) } - where - k = unNonZero $ maxRollbacks $ genesisSecurityParam genesis + where + k = unNonZero $ maxRollbacks $ genesisSecurityParam genesis -- | Separate variant of 'byronEraParams' to be used for a Byron-only chain. byronEraParamsNeverHardForks :: Gen.Config -> HardFork.EraParams -byronEraParamsNeverHardForks genesis = HardFork.EraParams { - eraEpochSize = fromByronEpochSlots $ Gen.configEpochSlots genesis +byronEraParamsNeverHardForks genesis = + HardFork.EraParams + { eraEpochSize = fromByronEpochSlots $ Gen.configEpochSlots genesis , eraSlotLength = fromByronSlotLength $ genesisSlotLength genesis - , eraSafeZone = HardFork.UnsafeIndefiniteSafeZone + , eraSafeZone = HardFork.UnsafeIndefiniteSafeZone , eraGenesisWin = GenesisWindow (2 * Gen.unBlockCount (Gen.configK genesis)) } @@ -348,82 +373,89 @@ validationErrorImpossible _ = error "validationErrorImpossible: unexpected error the right arguments, and maintain the snapshots. 
-------------------------------------------------------------------------------} -applyByronBlock :: STS.ValidationPolicy - -> ComputeLedgerEvents - -> LedgerConfig ByronBlock - -> ByronBlock - -> TickedLedgerState ByronBlock mk1 - -> Except (LedgerError ByronBlock) (LedgerState ByronBlock mk2) -applyByronBlock doValidation - _doEvents - cfg - blk@(ByronBlock raw _ (ByronHash blkHash)) - ls = +applyByronBlock :: + STS.ValidationPolicy -> + ComputeLedgerEvents -> + LedgerConfig ByronBlock -> + ByronBlock -> + TickedLedgerState ByronBlock mk1 -> + Except (LedgerError ByronBlock) (LedgerState ByronBlock mk2) +applyByronBlock + doValidation + _doEvents + cfg + blk@(ByronBlock raw _ (ByronHash blkHash)) + ls = case raw of - CC.ABOBBlock raw' -> applyABlock byronOpts cfg raw' blkHash blkNo ls - CC.ABOBBoundary raw' -> applyABoundaryBlock cfg raw' blkNo ls - where + CC.ABOBBlock raw' -> applyABlock byronOpts cfg raw' blkHash blkNo ls + CC.ABOBBoundary raw' -> applyABoundaryBlock cfg raw' blkNo ls + where blkNo :: BlockNo blkNo = blockNo blk byronOpts = CC.fromBlockValidationMode $ case doValidation of - STS.ValidateAll -> CC.BlockValidation - STS.ValidateNone -> CC.NoBlockValidation + STS.ValidateAll -> CC.BlockValidation + STS.ValidateNone -> CC.NoBlockValidation STS.ValidateSuchThat _ -> CC.BlockValidation -applyABlock :: CC.ValidationMode - -> Gen.Config - -> CC.ABlock ByteString - -> CC.HeaderHash - -> BlockNo - -> TickedLedgerState ByronBlock mk1 - -> Except (LedgerError ByronBlock) (LedgerState ByronBlock mk2) +applyABlock :: + CC.ValidationMode -> + Gen.Config -> + CC.ABlock ByteString -> + CC.HeaderHash -> + BlockNo -> + TickedLedgerState ByronBlock mk1 -> + Except (LedgerError ByronBlock) (LedgerState ByronBlock mk2) applyABlock validationMode cfg blk blkHash blkNo TickedByronLedgerState{..} = do - st' <- CC.validateBlock cfg validationMode blk blkHash tickedByronLedgerState - - let updState :: UPI.State - updState = CC.cvsUpdateState st' - - -- Transition info 
as it would look like if all entries were new - ifNew :: Map Update.ProtocolVersion BlockNo - ifNew = Map.fromList $ map aux (UPI.candidateProtocolUpdates updState) - where - aux :: UPE.CandidateProtocolUpdate - -> (Update.ProtocolVersion, BlockNo) - aux candidate = (UPE.cpuProtocolVersion candidate, blkNo) - - transition' :: ByronTransition - transition' = - case untickedByronLedgerTransition of - ByronTransitionInfo oldEntries -> ByronTransitionInfo $ - -- Candidates that have /just/ become candidates - let newEntries :: Map Update.ProtocolVersion BlockNo - newEntries = ifNew `Map.difference` oldEntries - - -- Remove any entries that aren't candidates anymore - in (oldEntries `Map.intersection` ifNew) `Map.union` newEntries - - return ByronLedgerState { - byronLedgerTipBlockNo = NotOrigin blkNo - , byronLedgerState = st' - , byronLedgerTransition = transition' - } + st' <- CC.validateBlock cfg validationMode blk blkHash tickedByronLedgerState + + let updState :: UPI.State + updState = CC.cvsUpdateState st' + + -- Transition info as it would look like if all entries were new + ifNew :: Map Update.ProtocolVersion BlockNo + ifNew = Map.fromList $ map aux (UPI.candidateProtocolUpdates updState) + where + aux :: + UPE.CandidateProtocolUpdate -> + (Update.ProtocolVersion, BlockNo) + aux candidate = (UPE.cpuProtocolVersion candidate, blkNo) + + transition' :: ByronTransition + transition' = + case untickedByronLedgerTransition of + ByronTransitionInfo oldEntries -> + ByronTransitionInfo $ + -- Candidates that have /just/ become candidates + let newEntries :: Map Update.ProtocolVersion BlockNo + newEntries = ifNew `Map.difference` oldEntries + in -- Remove any entries that aren't candidates anymore + (oldEntries `Map.intersection` ifNew) `Map.union` newEntries + + return + ByronLedgerState + { byronLedgerTipBlockNo = NotOrigin blkNo + , byronLedgerState = st' + , byronLedgerTransition = transition' + } -- | Apply boundary block -- -- Since boundary blocks don't modify 
the delegation state, they also don't -- modify the delegation history. -applyABoundaryBlock :: Gen.Config - -> CC.ABoundaryBlock ByteString - -> BlockNo - -> TickedLedgerState ByronBlock mk1 - -> Except (LedgerError ByronBlock) (LedgerState ByronBlock mk2) +applyABoundaryBlock :: + Gen.Config -> + CC.ABoundaryBlock ByteString -> + BlockNo -> + TickedLedgerState ByronBlock mk1 -> + Except (LedgerError ByronBlock) (LedgerState ByronBlock mk2) applyABoundaryBlock cfg blk blkNo TickedByronLedgerState{..} = do - st' <- CC.validateBoundary cfg blk tickedByronLedgerState - return ByronLedgerState { - byronLedgerTipBlockNo = NotOrigin blkNo - , byronLedgerState = st' + st' <- CC.validateBoundary cfg blk tickedByronLedgerState + return + ByronLedgerState + { byronLedgerTipBlockNo = NotOrigin blkNo + , byronLedgerState = st' , byronLedgerTransition = untickedByronLedgerTransition } @@ -438,13 +470,15 @@ decodeByronAnnTip :: Decoder s (AnnTip ByronBlock) decodeByronAnnTip = decodeAnnTipIsEBB decodeByronHeaderHash encodeByronExtLedgerState :: ExtLedgerState ByronBlock mk -> Encoding -encodeByronExtLedgerState = encodeExtLedgerState +encodeByronExtLedgerState = + encodeExtLedgerState encodeByronLedgerState encodeByronChainDepState encodeByronAnnTip encodeByronHeaderState :: HeaderState ByronBlock -> Encoding -encodeByronHeaderState = encodeHeaderState +encodeByronHeaderState = + encodeHeaderState encodeByronChainDepState encodeByronAnnTip @@ -461,49 +495,51 @@ encodeByronHeaderState = encodeHeaderState encodeByronTransition :: ByronTransition -> Encoding encodeByronTransition (ByronTransitionInfo bNos) | Map.null bNos = CBOR.encodeWord8 0 - | otherwise = - CBOR.encodeListLen (fromIntegral (Map.size bNos)) - <> mconcat (map aux (Map.toAscList bNos)) - where - aux :: (Update.ProtocolVersion, BlockNo) -> Encoding - aux (Update.ProtocolVersion { pvMajor, pvMinor, pvAlt }, bno) = mconcat [ - CBOR.encodeListLen 4 - , encode pvMajor - , encode pvMinor - , encode pvAlt - , encode bno 
- ] + | otherwise = + CBOR.encodeListLen (fromIntegral (Map.size bNos)) + <> mconcat (map aux (Map.toAscList bNos)) + where + aux :: (Update.ProtocolVersion, BlockNo) -> Encoding + aux (Update.ProtocolVersion{pvMajor, pvMinor, pvAlt}, bno) = + mconcat + [ CBOR.encodeListLen 4 + , encode pvMajor + , encode pvMinor + , encode pvAlt + , encode bno + ] -- | Decode Byron transition info -- -- See comments for 'encodeByronTransition'. decodeByronTransition :: Decoder s ByronTransition decodeByronTransition = do - ttype <- CBOR.peekTokenType - fmap ByronTransitionInfo $ case ttype of - CBOR.TypeUInt -> do - tag <- CBOR.decodeWord8 - case tag of - 0 -> return $ Map.empty - _otherwise -> fail "decodeByronTransition: unexpected tag" - CBOR.TypeListLen -> do - size <- CBOR.decodeListLen - Map.fromAscList <$> replicateM size aux - _otherwise -> - fail "decodeByronTransition: unexpected token type" - where - aux :: Decoder s (Update.ProtocolVersion, BlockNo) - aux = do - enforceSize "decodeByronTransition.aux" 4 - pvMajor <- decode - pvMinor <- decode - pvAlt <- decode - bno <- decode - return (Update.ProtocolVersion { pvMajor, pvMinor, pvAlt }, bno) + ttype <- CBOR.peekTokenType + fmap ByronTransitionInfo $ case ttype of + CBOR.TypeUInt -> do + tag <- CBOR.decodeWord8 + case tag of + 0 -> return $ Map.empty + _otherwise -> fail "decodeByronTransition: unexpected tag" + CBOR.TypeListLen -> do + size <- CBOR.decodeListLen + Map.fromAscList <$> replicateM size aux + _otherwise -> + fail "decodeByronTransition: unexpected token type" + where + aux :: Decoder s (Update.ProtocolVersion, BlockNo) + aux = do + enforceSize "decodeByronTransition.aux" 4 + pvMajor <- decode + pvMinor <- decode + pvAlt <- decode + bno <- decode + return (Update.ProtocolVersion{pvMajor, pvMinor, pvAlt}, bno) encodeByronLedgerState :: LedgerState ByronBlock mk -> Encoding -encodeByronLedgerState ByronLedgerState{..} = mconcat [ - encodeListLen 3 +encodeByronLedgerState ByronLedgerState{..} = + mconcat + [ 
encodeListLen 3 , encode byronLedgerTipBlockNo , encode byronLedgerState , encodeByronTransition byronLedgerTransition @@ -511,31 +547,33 @@ encodeByronLedgerState ByronLedgerState{..} = mconcat [ decodeByronLedgerState :: Decoder s (LedgerState ByronBlock mk) decodeByronLedgerState = do - enforceSize "ByronLedgerState" 3 - ByronLedgerState - <$> decode - <*> decode - <*> decodeByronTransition + enforceSize "ByronLedgerState" 3 + ByronLedgerState + <$> decode + <*> decode + <*> decodeByronTransition encodeByronQuery :: BlockQuery ByronBlock fp result -> Encoding encodeByronQuery query = case query of - GetUpdateInterfaceState -> CBOR.encodeWord8 0 + GetUpdateInterfaceState -> CBOR.encodeWord8 0 decodeByronQuery :: Decoder s (SomeBlockQuery (BlockQuery ByronBlock)) decodeByronQuery = do - tag <- CBOR.decodeWord8 - case tag of - 0 -> return $ SomeBlockQuery GetUpdateInterfaceState - _ -> fail $ "decodeByronQuery: invalid tag " <> show tag + tag <- CBOR.decodeWord8 + case tag of + 0 -> return $ SomeBlockQuery GetUpdateInterfaceState + _ -> fail $ "decodeByronQuery: invalid tag " <> show tag encodeByronResult :: BlockQuery ByronBlock fp result -> result -> Encoding encodeByronResult query = case query of - GetUpdateInterfaceState -> toByronCBOR + GetUpdateInterfaceState -> toByronCBOR -decodeByronResult :: BlockQuery ByronBlock fp result - -> forall s. Decoder s result +decodeByronResult :: + BlockQuery ByronBlock fp result -> + forall s. 
+ Decoder s result decodeByronResult query = case query of - GetUpdateInterfaceState -> fromByronCBOR + GetUpdateInterfaceState -> fromByronCBOR instance CanUpgradeLedgerTables (LedgerState ByronBlock) where upgradeTables _ _ = id diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Mempool.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Mempool.hs index 1a323e58e6..0f01c1fba9 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Mempool.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Mempool.hs @@ -7,20 +7,21 @@ {-# LANGUAGE LambdaCase #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Byron mempool integration -module Ouroboros.Consensus.Byron.Ledger.Mempool ( - -- * Mempool integration +module Ouroboros.Consensus.Byron.Ledger.Mempool + ( -- * Mempool integration GenTx (..) , TxId (..) , Validated (..) + -- * Transaction IDs , byronIdDlg , byronIdProp , byronIdTx , byronIdVote + -- * Serialisation , decodeByronApplyTxError , decodeByronGenTx @@ -28,52 +29,62 @@ module Ouroboros.Consensus.Byron.Ledger.Mempool ( , encodeByronApplyTxError , encodeByronGenTx , encodeByronGenTxId + -- * Low-level API (primarily for testing) , fromMempoolPayload , toMempoolPayload + -- * Auxiliary functions , countByronGenTxs ) where -import qualified Cardano.Chain.Block as CC -import qualified Cardano.Chain.Byron.API as CC -import qualified Cardano.Chain.Delegation as Delegation -import qualified Cardano.Chain.MempoolPayload as CC -import qualified Cardano.Chain.Update as Update -import qualified Cardano.Chain.UTxO as Utxo -import qualified Cardano.Chain.ValidationMode as CC -import Cardano.Crypto (hashDecoded) -import qualified Cardano.Crypto as CC -import Cardano.Ledger.Binary (ByteSpan, DecoderError (..), - byronProtVer, fromByronCBOR, serialize, slice, toByronCBOR, - unsafeDeserialize) -import 
Cardano.Ledger.Binary.Plain (enforceSize) -import Cardano.Prelude (Natural, cborError) -import Codec.CBOR.Decoding (Decoder) -import qualified Codec.CBOR.Decoding as CBOR -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.CBOR.Encoding as CBOR -import Control.Monad (void) -import Control.Monad.Except (Except, throwError) -import Data.ByteString (ByteString) -import qualified Data.ByteString as Strict -import qualified Data.ByteString.Lazy as Lazy -import Data.Maybe (maybeToList) -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (InspectHeapNamed (..), NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger.Block -import Ouroboros.Consensus.Byron.Ledger.Conversions (toByronSlotNo) -import Ouroboros.Consensus.Byron.Ledger.Ledger -import Ouroboros.Consensus.Byron.Ledger.Orphans () -import Ouroboros.Consensus.Byron.Ledger.Serialisation - (byronBlockEncodingOverhead) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Util (ShowProxy (..)) -import Ouroboros.Consensus.Util.Condense +import Cardano.Chain.Block qualified as CC +import Cardano.Chain.Byron.API qualified as CC +import Cardano.Chain.Delegation qualified as Delegation +import Cardano.Chain.MempoolPayload qualified as CC +import Cardano.Chain.UTxO qualified as Utxo +import Cardano.Chain.Update qualified as Update +import Cardano.Chain.ValidationMode qualified as CC +import Cardano.Crypto (hashDecoded) +import Cardano.Crypto qualified as CC +import Cardano.Ledger.Binary + ( ByteSpan + , DecoderError (..) 
+ , byronProtVer + , fromByronCBOR + , serialize + , slice + , toByronCBOR + , unsafeDeserialize + ) +import Cardano.Ledger.Binary.Plain (enforceSize) +import Cardano.Prelude (Natural, cborError) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Encoding qualified as CBOR +import Control.Monad (void) +import Control.Monad.Except (Except, throwError) +import Data.ByteString (ByteString) +import Data.ByteString qualified as Strict +import Data.ByteString.Lazy qualified as Lazy +import Data.Maybe (maybeToList) +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (InspectHeapNamed (..), NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger.Block +import Ouroboros.Consensus.Byron.Ledger.Conversions (toByronSlotNo) +import Ouroboros.Consensus.Byron.Ledger.Ledger +import Ouroboros.Consensus.Byron.Ledger.Orphans () +import Ouroboros.Consensus.Byron.Ledger.Serialisation + ( byronBlockEncodingOverhead + ) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Util (ShowProxy (..)) +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- Transactions @@ -84,44 +95,44 @@ import Ouroboros.Consensus.Util.Condense -- This is effectively the same as 'CC.AMempoolPayload' but we cache the -- transaction ID (a hash). 
data instance GenTx ByronBlock - = ByronTx !Utxo.TxId !(Utxo.ATxAux ByteString) - | ByronDlg !Delegation.CertificateId !(Delegation.ACertificate ByteString) - | ByronUpdateProposal !Update.UpId !(Update.AProposal ByteString) - | ByronUpdateVote !Update.VoteId !(Update.AVote ByteString) + = ByronTx !Utxo.TxId !(Utxo.ATxAux ByteString) + | ByronDlg !Delegation.CertificateId !(Delegation.ACertificate ByteString) + | ByronUpdateProposal !Update.UpId !(Update.AProposal ByteString) + | ByronUpdateVote !Update.VoteId !(Update.AVote ByteString) deriving (Eq, Generic) deriving NoThunks via InspectHeapNamed "GenTx ByronBlock" (GenTx ByronBlock) -instance ShowProxy (GenTx ByronBlock) where +instance ShowProxy (GenTx ByronBlock) -newtype instance Validated (GenTx ByronBlock) = ValidatedByronTx { - forgetValidatedByronTx :: GenTx ByronBlock - } +newtype instance Validated (GenTx ByronBlock) = ValidatedByronTx + { forgetValidatedByronTx :: GenTx ByronBlock + } deriving (Eq, Generic) - deriving anyclass (NoThunks) + deriving anyclass NoThunks type instance ApplyTxErr ByronBlock = CC.ApplyMempoolPayloadErr -- orphaned instance -instance ShowProxy CC.ApplyMempoolPayloadErr where +instance ShowProxy CC.ApplyMempoolPayloadErr instance LedgerSupportsMempool ByronBlock where -- Check that the annotation is the canonical encoding. This is currently -- enforced by 'decodeByronGenTx', see its docstring for more context. 
txInvariant tx = - CC.mempoolPayloadRecoverBytes tx' == CC.mempoolPayloadReencode tx' - where - tx' = toMempoolPayload tx + CC.mempoolPayloadRecoverBytes tx' == CC.mempoolPayloadReencode tx' + where + tx' = toMempoolPayload tx applyTx cfg _wti slot tx st = - (\st' -> (st', ValidatedByronTx tx)) + (\st' -> (st', ValidatedByronTx tx)) <$> applyByronGenTx validationMode cfg slot tx st - where - validationMode = CC.ValidationMode CC.BlockValidation Utxo.TxValidation + where + validationMode = CC.ValidationMode CC.BlockValidation Utxo.TxValidation reapplyTx _ cfg slot vtx st = - applyByronGenTx validationMode cfg slot (forgetValidatedByronTx vtx) st - where - validationMode = CC.ValidationMode CC.NoBlockValidation Utxo.TxValidationNoCrypto + applyByronGenTx validationMode cfg slot (forgetValidatedByronTx vtx) st + where + validationMode = CC.ValidationMode CC.NoBlockValidation Utxo.TxValidationNoCrypto txForgetValidated = forgetValidatedByronTx @@ -131,69 +142,72 @@ instance TxLimits ByronBlock where type TxMeasure ByronBlock = IgnoringOverflow ByteSize32 blockCapacityTxMeasure _cfg st = - IgnoringOverflow - $ ByteSize32 - $ CC.getMaxBlockSize cvs - byronBlockEncodingOverhead - where - cvs = tickedByronLedgerState st + IgnoringOverflow $ + ByteSize32 $ + CC.getMaxBlockSize cvs - byronBlockEncodingOverhead + where + cvs = tickedByronLedgerState st txMeasure _cfg st tx = - if txszNat > maxTxSize then throwError err else - pure $ IgnoringOverflow $ ByteSize32 $ fromIntegral txsz - where - maxTxSize = - Update.ppMaxTxSize - $ CC.adoptedProtocolParameters - $ CC.cvsUpdateState - $ tickedByronLedgerState st - - txszNat = fromIntegral txsz :: Natural - - txsz = - Strict.length - $ CC.mempoolPayloadRecoverBytes - $ toMempoolPayload tx - - err = - CC.MempoolTxErr - $ Utxo.UTxOValidationTxValidationError - $ Utxo.TxValidationTxTooLarge txszNat maxTxSize + if txszNat > maxTxSize + then throwError err + else + pure $ IgnoringOverflow $ ByteSize32 $ fromIntegral txsz + where + 
maxTxSize = + Update.ppMaxTxSize $ + CC.adoptedProtocolParameters $ + CC.cvsUpdateState $ + tickedByronLedgerState st + + txszNat = fromIntegral txsz :: Natural + + txsz = + Strict.length $ + CC.mempoolPayloadRecoverBytes $ + toMempoolPayload tx + + err = + CC.MempoolTxErr $ + Utxo.UTxOValidationTxValidationError $ + Utxo.TxValidationTxTooLarge txszNat maxTxSize data instance TxId (GenTx ByronBlock) - = ByronTxId !Utxo.TxId - | ByronDlgId !Delegation.CertificateId + = ByronTxId !Utxo.TxId + | ByronDlgId !Delegation.CertificateId | ByronUpdateProposalId !Update.UpId - | ByronUpdateVoteId !Update.VoteId + | ByronUpdateVoteId !Update.VoteId deriving (Eq, Ord) deriving NoThunks via InspectHeapNamed "TxId (GenTx ByronBlock)" (TxId (GenTx ByronBlock)) -instance ShowProxy (TxId (GenTx ByronBlock)) where +instance ShowProxy (TxId (GenTx ByronBlock)) instance HasTxId (GenTx ByronBlock) where - txId (ByronTx i _) = ByronTxId i - txId (ByronDlg i _) = ByronDlgId i + txId (ByronTx i _) = ByronTxId i + txId (ByronDlg i _) = ByronDlgId i txId (ByronUpdateProposal i _) = ByronUpdateProposalId i - txId (ByronUpdateVote i _) = ByronUpdateVoteId i + txId (ByronUpdateVote i _) = ByronUpdateVoteId i instance ConvertRawTxId (GenTx ByronBlock) where - toRawTxIdHash (ByronTxId i) = CC.abstractHashToShort i - toRawTxIdHash (ByronDlgId i) = CC.abstractHashToShort i + toRawTxIdHash (ByronTxId i) = CC.abstractHashToShort i + toRawTxIdHash (ByronDlgId i) = CC.abstractHashToShort i toRawTxIdHash (ByronUpdateProposalId i) = CC.abstractHashToShort i - toRawTxIdHash (ByronUpdateVoteId i) = CC.abstractHashToShort i + toRawTxIdHash (ByronUpdateVoteId i) = CC.abstractHashToShort i instance HasTxs ByronBlock where extractTxs blk = case byronBlockRaw blk of -- EBBs don't contain transactions - CC.ABOBBoundary _ebb -> [] - CC.ABOBBlock regularBlk -> fromMempoolPayload <$> - maybeToList proposal <> votes <> dlgs <> txs - where - body = CC.blockBody regularBlk - - txs = CC.MempoolTx <$> Utxo.aUnTxPayload 
(CC.bodyTxPayload body) - proposal = CC.MempoolUpdateProposal <$> Update.payloadProposal (CC.bodyUpdatePayload body) - votes = CC.MempoolUpdateVote <$> Update.payloadVotes (CC.bodyUpdatePayload body) - dlgs = CC.MempoolDlg <$> Delegation.getPayload (CC.bodyDlgPayload body) + CC.ABOBBoundary _ebb -> [] + CC.ABOBBlock regularBlk -> + fromMempoolPayload + <$> maybeToList proposal <> votes <> dlgs <> txs + where + body = CC.blockBody regularBlk + + txs = CC.MempoolTx <$> Utxo.aUnTxPayload (CC.bodyTxPayload body) + proposal = CC.MempoolUpdateProposal <$> Update.payloadProposal (CC.bodyUpdatePayload body) + votes = CC.MempoolUpdateVote <$> Update.payloadVotes (CC.bodyUpdatePayload body) + dlgs = CC.MempoolDlg <$> Delegation.getPayload (CC.bodyDlgPayload body) {------------------------------------------------------------------------------- Conversion to and from 'AMempoolPayload' @@ -201,23 +215,23 @@ instance HasTxs ByronBlock where toMempoolPayload :: GenTx ByronBlock -> CC.AMempoolPayload ByteString toMempoolPayload = go - where - -- Just extract the payload @p@ - go :: GenTx ByronBlock -> CC.AMempoolPayload ByteString - go (ByronTx _ p) = CC.MempoolTx p - go (ByronDlg _ p) = CC.MempoolDlg p - go (ByronUpdateProposal _ p) = CC.MempoolUpdateProposal p - go (ByronUpdateVote _ p) = CC.MempoolUpdateVote p + where + -- Just extract the payload @p@ + go :: GenTx ByronBlock -> CC.AMempoolPayload ByteString + go (ByronTx _ p) = CC.MempoolTx p + go (ByronDlg _ p) = CC.MempoolDlg p + go (ByronUpdateProposal _ p) = CC.MempoolUpdateProposal p + go (ByronUpdateVote _ p) = CC.MempoolUpdateVote p fromMempoolPayload :: CC.AMempoolPayload ByteString -> GenTx ByronBlock fromMempoolPayload = go - where - -- Bundle the payload @p@ with its ID - go :: CC.AMempoolPayload ByteString -> GenTx ByronBlock - go (CC.MempoolTx p) = ByronTx (byronIdTx p) p - go (CC.MempoolDlg p) = ByronDlg (byronIdDlg p) p - go (CC.MempoolUpdateProposal p) = ByronUpdateProposal (byronIdProp p) p - go 
(CC.MempoolUpdateVote p) = ByronUpdateVote (byronIdVote p) p + where + -- Bundle the payload @p@ with its ID + go :: CC.AMempoolPayload ByteString -> GenTx ByronBlock + go (CC.MempoolTx p) = ByronTx (byronIdTx p) p + go (CC.MempoolDlg p) = ByronDlg (byronIdDlg p) p + go (CC.MempoolUpdateProposal p) = ByronUpdateProposal (byronIdProp p) p + go (CC.MempoolUpdateVote p) = ByronUpdateVote (byronIdVote p) p {------------------------------------------------------------------------------- Auxiliary: transaction IDs @@ -244,10 +258,10 @@ instance Condense (GenTx ByronBlock) where condense = condense . toMempoolPayload instance Condense (GenTxId ByronBlock) where - condense (ByronTxId i) = condense i - condense (ByronDlgId i) = condense i + condense (ByronTxId i) = condense i + condense (ByronDlgId i) = condense i condense (ByronUpdateProposalId i) = condense i - condense (ByronUpdateVoteId i) = condense i + condense (ByronUpdateVoteId i) = condense i instance Show (GenTx ByronBlock) where show = condense @@ -262,20 +276,21 @@ instance Show (GenTxId ByronBlock) where Applying transactions -------------------------------------------------------------------------------} -applyByronGenTx :: CC.ValidationMode - -> LedgerConfig ByronBlock - -> SlotNo - -> GenTx ByronBlock - -> TickedLedgerState ByronBlock mk1 - -> Except (ApplyTxErr ByronBlock) (TickedLedgerState ByronBlock mk2) +applyByronGenTx :: + CC.ValidationMode -> + LedgerConfig ByronBlock -> + SlotNo -> + GenTx ByronBlock -> + TickedLedgerState ByronBlock mk1 -> + Except (ApplyTxErr ByronBlock) (TickedLedgerState ByronBlock mk2) applyByronGenTx validationMode cfg slot genTx st = - (\state -> st {tickedByronLedgerState = state}) <$> - CC.applyMempoolPayload - validationMode - cfg - (toByronSlotNo slot) - (toMempoolPayload genTx) - (tickedByronLedgerState st) + (\state -> st{tickedByronLedgerState = state}) + <$> CC.applyMempoolPayload + validationMode + cfg + (toByronSlotNo slot) + (toMempoolPayload genTx) + 
(tickedByronLedgerState st) {------------------------------------------------------------------------------- Serialisation @@ -299,41 +314,43 @@ encodeByronGenTx genTx = toByronCBOR (toMempoolPayload genTx) -- /canonical/ encoding, not the /original, possibly non-canonical/ encoding. decodeByronGenTx :: Decoder s (GenTx ByronBlock) decodeByronGenTx = fromMempoolPayload . canonicalise <$> fromByronCBOR - where - -- Fill in the 'ByteString' annotation with a canonical encoding of the - -- 'GenTx'. We must reserialise the deserialised 'GenTx' to be sure we - -- have the canonical one. We don't have access to the original - -- 'ByteString' anyway, so having to reserialise here gives us a - -- 'ByteString' we can use. - canonicalise :: CC.AMempoolPayload ByteSpan - -> CC.AMempoolPayload ByteString - canonicalise mp = Lazy.toStrict . slice canonicalBytes <$> mp' - where - canonicalBytes = serialize byronProtVer (void mp) - -- 'unsafeDeserialize' cannot fail, since we just 'serialize'd it. - -- Note that we cannot reuse @mp@, as its 'ByteSpan' might differ from - -- the canonical encoding's 'ByteSpan'. - mp' = unsafeDeserialize byronProtVer canonicalBytes + where + -- Fill in the 'ByteString' annotation with a canonical encoding of the + -- 'GenTx'. We must reserialise the deserialised 'GenTx' to be sure we + -- have the canonical one. We don't have access to the original + -- 'ByteString' anyway, so having to reserialise here gives us a + -- 'ByteString' we can use. + canonicalise :: + CC.AMempoolPayload ByteSpan -> + CC.AMempoolPayload ByteString + canonicalise mp = Lazy.toStrict . slice canonicalBytes <$> mp' + where + canonicalBytes = serialize byronProtVer (void mp) + -- 'unsafeDeserialize' cannot fail, since we just 'serialize'd it. + -- Note that we cannot reuse @mp@, as its 'ByteSpan' might differ from + -- the canonical encoding's 'ByteSpan'. 
+ mp' = unsafeDeserialize byronProtVer canonicalBytes encodeByronGenTxId :: GenTxId ByronBlock -> Encoding -encodeByronGenTxId genTxId = mconcat [ - CBOR.encodeListLen 2 +encodeByronGenTxId genTxId = + mconcat + [ CBOR.encodeListLen 2 , case genTxId of - ByronTxId i -> toByronCBOR (0 :: Word8) <> toByronCBOR i - ByronDlgId i -> toByronCBOR (1 :: Word8) <> toByronCBOR i + ByronTxId i -> toByronCBOR (0 :: Word8) <> toByronCBOR i + ByronDlgId i -> toByronCBOR (1 :: Word8) <> toByronCBOR i ByronUpdateProposalId i -> toByronCBOR (2 :: Word8) <> toByronCBOR i - ByronUpdateVoteId i -> toByronCBOR (3 :: Word8) <> toByronCBOR i + ByronUpdateVoteId i -> toByronCBOR (3 :: Word8) <> toByronCBOR i ] decodeByronGenTxId :: Decoder s (GenTxId ByronBlock) decodeByronGenTxId = do - enforceSize "GenTxId (ByronBlock cfg)" 2 - CBOR.decodeWord8 >>= \case - 0 -> ByronTxId <$> fromByronCBOR - 1 -> ByronDlgId <$> fromByronCBOR - 2 -> ByronUpdateProposalId <$> fromByronCBOR - 3 -> ByronUpdateVoteId <$> fromByronCBOR - tag -> cborError $ DecoderErrorUnknownTag "GenTxId (ByronBlock cfg)" tag + enforceSize "GenTxId (ByronBlock cfg)" 2 + CBOR.decodeWord8 >>= \case + 0 -> ByronTxId <$> fromByronCBOR + 1 -> ByronDlgId <$> fromByronCBOR + 2 -> ByronUpdateProposalId <$> fromByronCBOR + 3 -> ByronUpdateVoteId <$> fromByronCBOR + tag -> cborError $ DecoderErrorUnknownTag "GenTxId (ByronBlock cfg)" tag encodeByronApplyTxError :: ApplyTxErr ByronBlock -> Encoding encodeByronApplyTxError = toByronCBOR diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/NetworkProtocolVersion.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/NetworkProtocolVersion.hs index 3955d93f56..70daecf51d 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/NetworkProtocolVersion.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/NetworkProtocolVersion.hs @@ -1,41 +1,41 @@ {-# LANGUAGE TypeFamilies #-} - {-# 
OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Byron.Ledger.NetworkProtocolVersion ( - ByronNodeToClientVersion (..) +module Ouroboros.Consensus.Byron.Ledger.NetworkProtocolVersion + ( ByronNodeToClientVersion (..) , ByronNodeToNodeVersion (..) ) where -import qualified Data.Map.Strict as Map -import Ouroboros.Consensus.Byron.Ledger.Block -import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Data.Map.Strict qualified as Map +import Ouroboros.Consensus.Byron.Ledger.Block +import Ouroboros.Consensus.Node.NetworkProtocolVersion -data ByronNodeToNodeVersion = - -- | We send headers without a size hint +data ByronNodeToNodeVersion + = -- | We send headers without a size hint ByronNodeToNodeVersion1 - - -- | We send headers /with/ a size hint - | ByronNodeToNodeVersion2 + | -- | We send headers /with/ a size hint + ByronNodeToNodeVersion2 deriving (Show, Eq, Ord, Enum, Bounded) -data ByronNodeToClientVersion = - ByronNodeToClientVersion1 +data ByronNodeToClientVersion + = ByronNodeToClientVersion1 deriving (Show, Eq, Ord, Enum, Bounded) instance HasNetworkProtocolVersion ByronBlock where - type BlockNodeToNodeVersion ByronBlock = ByronNodeToNodeVersion + type BlockNodeToNodeVersion ByronBlock = ByronNodeToNodeVersion type BlockNodeToClientVersion ByronBlock = ByronNodeToClientVersion -- | This instance isn't used apart from tests; we therefore make our life easy -- below. instance SupportedNetworkProtocolVersion ByronBlock where - supportedNodeToNodeVersions _ = Map.fromList + supportedNodeToNodeVersions _ = + Map.fromList [ (v, ByronNodeToNodeVersion2) | v <- [minBound .. maxBound] ] - supportedNodeToClientVersions _ = Map.fromList + supportedNodeToClientVersions _ = + Map.fromList [ (v, ByronNodeToClientVersion1) | v <- [minBound .. 
maxBound] ] diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Orphans.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Orphans.hs index 53b4ae7ce2..09df6c04b8 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Orphans.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Orphans.hs @@ -3,29 +3,31 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE StandaloneDeriving #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.Byron.Ledger.Orphans () where -import qualified Cardano.Chain.Block as CC -import qualified Cardano.Chain.Common as CC -import qualified Cardano.Chain.Delegation as CC -import qualified Cardano.Chain.MempoolPayload as CC -import qualified Cardano.Chain.Update as CC -import qualified Cardano.Chain.UTxO as CC -import Cardano.Crypto (shortHashF) -import qualified Cardano.Crypto -import Cardano.Ledger.Binary (Annotated (unAnnotated), fromByronCBOR, - toByronCBOR) -import Codec.Serialise (Serialise, decode, encode) -import Control.Monad (void) -import Data.ByteString (ByteString) -import Data.Coerce -import Data.Text (unpack) -import Formatting -import NoThunks.Class (InspectHeap (..), NoThunks) -import Ouroboros.Consensus.Util.Condense +import Cardano.Chain.Block qualified as CC +import Cardano.Chain.Common qualified as CC +import Cardano.Chain.Delegation qualified as CC +import Cardano.Chain.MempoolPayload qualified as CC +import Cardano.Chain.UTxO qualified as CC +import Cardano.Chain.Update qualified as CC +import Cardano.Crypto (shortHashF) +import Cardano.Crypto qualified +import Cardano.Ledger.Binary + ( Annotated (unAnnotated) + , fromByronCBOR + , toByronCBOR + ) +import Codec.Serialise (Serialise, decode, encode) +import Control.Monad (void) +import Data.ByteString (ByteString) +import Data.Coerce +import Data.Text (unpack) +import Formatting +import NoThunks.Class (InspectHeap 
(..), NoThunks) +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- Serialise @@ -47,62 +49,67 @@ instance Condense CC.HeaderHash where condense = formatToString CC.headerHashF instance Condense (CC.ABlock ByteString) where - condense = unpack - . sformat build - . CC.txpTxs - . CC.bodyTxPayload - . CC.blockBody + condense = + unpack + . sformat build + . CC.txpTxs + . CC.bodyTxPayload + . CC.blockBody instance Condense (CC.AHeader ByteString) where - condense hdr = mconcat [ - "( hash: " <> unpack condensedHash + condense hdr = + mconcat + [ "( hash: " <> unpack condensedHash , ", previousHash: " <> unpack condensedPrevHash - , ", slot: " <> unpack condensedSlot - , ", issuer: " <> condense issuer - , ", delegate: " <> condense delegate + , ", slot: " <> unpack condensedSlot + , ", issuer: " <> condense issuer + , ", delegate: " <> condense delegate , ")" ] - where - psigCert = CC.delegationCertificate $ CC.headerSignature hdr - issuer = CC.issuerVK psigCert - delegate = CC.delegateVK psigCert - hdrHash = CC.headerHashAnnotated hdr + where + psigCert = CC.delegationCertificate $ CC.headerSignature hdr + issuer = CC.issuerVK psigCert + delegate = CC.delegateVK psigCert + hdrHash = CC.headerHashAnnotated hdr - condensedHash = sformat CC.headerHashF $ hdrHash - condensedPrevHash = sformat CC.headerHashF $ CC.headerPrevHash hdr - condensedSlot = sformat build $ unAnnotated (CC.aHeaderSlot hdr) + condensedHash = sformat CC.headerHashF $ hdrHash + condensedPrevHash = sformat CC.headerHashF $ CC.headerPrevHash hdr + condensedSlot = sformat build $ unAnnotated (CC.aHeaderSlot hdr) instance Condense (CC.ABoundaryBlock ByteString) where condense = condense . 
CC.boundaryHeader instance Condense (CC.ABlockOrBoundary ByteString) where - condense (CC.ABOBBlock blk) = mconcat [ - "( header: " <> condense (CC.blockHeader blk) - , ", body: " <> condense blk + condense (CC.ABOBBlock blk) = + mconcat + [ "( header: " <> condense (CC.blockHeader blk) + , ", body: " <> condense blk , ")" ] condense (CC.ABOBBoundary ebb) = - condense ebb + condense ebb instance Condense (CC.ABoundaryHeader ByteString) where - condense hdr = mconcat [ - "( ebb: " <> condense (CC.boundaryEpoch hdr) - , ", hash: " <> condensedHash + condense hdr = + mconcat + [ "( ebb: " <> condense (CC.boundaryEpoch hdr) + , ", hash: " <> condensedHash , ", previousHash: " <> condensedPrevHash , ")" ] - where - condensedHash = - unpack - . sformat CC.headerHashF - . coerce - . Cardano.Crypto.hashDecoded . fmap CC.wrapBoundaryBytes - $ hdr - - condensedPrevHash = - unpack $ case CC.boundaryPrevHash hdr of - Left _ -> "Genesis" - Right h -> sformat CC.headerHashF h + where + condensedHash = + unpack + . sformat CC.headerHashF + . coerce + . Cardano.Crypto.hashDecoded + . 
fmap CC.wrapBoundaryBytes + $ hdr + + condensedPrevHash = + unpack $ case CC.boundaryPrevHash hdr of + Left _ -> "Genesis" + Right h -> sformat CC.headerHashF h instance Condense CC.TxId where condense hash = "txid:" <> unpack (sformat shortHashF hash) @@ -117,14 +124,14 @@ instance Condense CC.VoteId where condense hash = "voteid: " <> unpack (sformat shortHashF hash) instance Condense (CC.AMempoolPayload a) where - condense (CC.MempoolTx tx) = - "tx: " <> unpack (sformat build (void tx)) - condense (CC.MempoolDlg cert) = - "dlg: " <> unpack (sformat build (void cert)) - condense (CC.MempoolUpdateProposal p) = - "updateproposal: " <> unpack (sformat build (void p)) - condense (CC.MempoolUpdateVote vote) = - "updatevote: " <> unpack (sformat build (void vote)) + condense (CC.MempoolTx tx) = + "tx: " <> unpack (sformat build (void tx)) + condense (CC.MempoolDlg cert) = + "dlg: " <> unpack (sformat build (void cert)) + condense (CC.MempoolUpdateProposal p) = + "updateproposal: " <> unpack (sformat build (void p)) + condense (CC.MempoolUpdateVote vote) = + "updatevote: " <> unpack (sformat build (void vote)) instance Condense Cardano.Crypto.VerificationKey where condense = unpack . sformat build @@ -138,5 +145,7 @@ instance Condense Cardano.Crypto.VerificationKey where -- Cardano.Chain.Delegation.Validation.Registration.TooLarge is not exported, -- but occurs somewhere in CC.ChainValidationError, so we use -- 'InspectHeap' instead of deriving one using Generics. 
-deriving via InspectHeap CC.ChainValidationError - instance NoThunks CC.ChainValidationError +deriving via + InspectHeap CC.ChainValidationError + instance + NoThunks CC.ChainValidationError diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/PBFT.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/PBFT.hs index 2f45fd7d9d..cbde6f5fb7 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/PBFT.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/PBFT.hs @@ -2,72 +2,77 @@ {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Instances required to support PBFT -module Ouroboros.Consensus.Byron.Ledger.PBFT ( - decodeByronChainDepState +module Ouroboros.Consensus.Byron.Ledger.PBFT + ( decodeByronChainDepState , encodeByronChainDepState , fromPBftLedgerView , mkByronContextDSIGN , toPBftLedgerView ) where -import qualified Cardano.Chain.Block as CC -import qualified Cardano.Chain.Delegation as Delegation -import Cardano.Crypto.DSIGN -import Cardano.Ledger.Binary (Annotated) -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding) -import Data.ByteString (ByteString) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Crypto.DSIGN -import Ouroboros.Consensus.Byron.Ledger.Block -import Ouroboros.Consensus.Byron.Ledger.Config -import Ouroboros.Consensus.Byron.Ledger.Serialisation () -import Ouroboros.Consensus.Byron.Protocol -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.PBFT -import qualified Ouroboros.Consensus.Protocol.PBFT.State as S +import Cardano.Chain.Block qualified as CC +import Cardano.Chain.Delegation qualified as Delegation +import Cardano.Crypto.DSIGN +import Cardano.Ledger.Binary (Annotated) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding) +import Data.ByteString 
(ByteString) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Crypto.DSIGN +import Ouroboros.Consensus.Byron.Ledger.Block +import Ouroboros.Consensus.Byron.Ledger.Config +import Ouroboros.Consensus.Byron.Ledger.Serialisation () +import Ouroboros.Consensus.Byron.Protocol +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Protocol.PBFT.State qualified as S type instance BlockProtocol ByronBlock = PBft PBftByronCrypto -- | Construct DSIGN required for Byron crypto -mkByronContextDSIGN :: BlockConfig ByronBlock - -> VerKeyDSIGN ByronDSIGN - -> ContextDSIGN ByronDSIGN +mkByronContextDSIGN :: + BlockConfig ByronBlock -> + VerKeyDSIGN ByronDSIGN -> + ContextDSIGN ByronDSIGN mkByronContextDSIGN = (,) . byronProtocolMagicId instance BlockSupportsProtocol ByronBlock where validateView cfg hdr@ByronHeader{..} = - case byronHeaderRaw of - CC.ABOBBoundaryHdr _ -> pbftValidateBoundary hdr - CC.ABOBBlockHdr regular -> - let pbftFields :: PBftFields PBftByronCrypto - (Annotated CC.ToSign ByteString) - pbftFields = PBftFields { - pbftIssuer = VerKeyByronDSIGN - . Delegation.delegateVK - . CC.delegationCertificate - . CC.headerSignature - $ regular - , pbftGenKey = VerKeyByronDSIGN - . CC.headerGenesisKey - $ regular - , pbftSignature = SignedDSIGN - . SigByronDSIGN - . CC.signature - . CC.headerSignature - $ regular + case byronHeaderRaw of + CC.ABOBBoundaryHdr _ -> pbftValidateBoundary hdr + CC.ABOBBlockHdr regular -> + let pbftFields :: + PBftFields + PBftByronCrypto + (Annotated CC.ToSign ByteString) + pbftFields = + PBftFields + { pbftIssuer = + VerKeyByronDSIGN + . Delegation.delegateVK + . CC.delegationCertificate + . CC.headerSignature + $ regular + , pbftGenKey = + VerKeyByronDSIGN + . CC.headerGenesisKey + $ regular + , pbftSignature = + SignedDSIGN + . SigByronDSIGN + . CC.signature + . 
CC.headerSignature + $ regular } - - in PBftValidateRegular - pbftFields - (CC.recoverSignedBytes epochSlots regular) - (mkByronContextDSIGN cfg (pbftGenKey pbftFields)) - where - epochSlots = byronEpochSlots cfg + in PBftValidateRegular + pbftFields + (CC.recoverSignedBytes epochSlots regular) + (mkByronContextDSIGN cfg (pbftGenKey pbftFields)) + where + epochSlots = byronEpochSlots cfg selectView _ = mkPBftSelectView @@ -78,10 +83,10 @@ fromPBftLedgerView :: PBftLedgerView PBftByronCrypto -> Delegation.Map fromPBftLedgerView = Delegation.Map . pbftDelegates encodeByronChainDepState :: - ChainDepState (BlockProtocol ByronBlock) - -> Encoding + ChainDepState (BlockProtocol ByronBlock) -> + Encoding encodeByronChainDepState = S.encodePBftState decodeByronChainDepState :: - Decoder s (ChainDepState (BlockProtocol ByronBlock)) + Decoder s (ChainDepState (BlockProtocol ByronBlock)) decodeByronChainDepState = S.decodePBftState diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Serialisation.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Serialisation.hs index 43a9a42dfc..aee7ce1508 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Serialisation.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Ledger/Serialisation.hs @@ -6,14 +6,14 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Byron.Ledger.Serialisation ( - -- * Data family instances +module Ouroboros.Consensus.Byron.Ledger.Serialisation + ( -- * Data family instances NestedCtxt_ (..) 
, RawBoundaryHeader , RawHeader + -- * Serialisation , byronBlockEncodingOverhead , decodeByronBlock @@ -26,8 +26,10 @@ module Ouroboros.Consensus.Byron.Ledger.Serialisation ( , encodeByronBoundaryHeader , encodeByronHeaderHash , encodeByronRegularHeader + -- * Support for on-disk format , byronBinaryBlockInfo + -- * Unsized header , addV1Envelope , decodeUnsizedHeader @@ -36,23 +38,30 @@ module Ouroboros.Consensus.Byron.Ledger.Serialisation ( , fakeByronBlockSizeHint ) where -import qualified Cardano.Chain.Block as CC -import qualified Cardano.Chain.Slotting as CC -import Cardano.Ledger.Binary (ByteSpan, annotationBytes, byronProtVer, - fromByronCBOR, slice, toByronCBOR, toPlainDecoder) -import Cardano.Ledger.Binary.Plain (Decoder, Encoding) -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (Serialise (..)) -import Control.Monad (guard) -import Control.Monad.Except (Except, throwError) -import qualified Data.ByteString as Strict -import qualified Data.ByteString.Lazy as Lazy -import Data.Word (Word32) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger.Block -import Ouroboros.Consensus.Byron.Ledger.Orphans () -import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) -import Ouroboros.Network.SizeInBytes (SizeInBytes) +import Cardano.Chain.Block qualified as CC +import Cardano.Chain.Slotting qualified as CC +import Cardano.Ledger.Binary + ( ByteSpan + , annotationBytes + , byronProtVer + , fromByronCBOR + , slice + , toByronCBOR + , toPlainDecoder + ) +import Cardano.Ledger.Binary.Plain (Decoder, Encoding) +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (Serialise (..)) +import Control.Monad (guard) +import Control.Monad.Except (Except, throwError) +import Data.ByteString qualified as Strict +import Data.ByteString.Lazy qualified as Lazy +import Data.Word (Word32) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger.Block +import Ouroboros.Consensus.Byron.Ledger.Orphans 
() +import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) +import Ouroboros.Network.SizeInBytes (SizeInBytes) {------------------------------------------------------------------------------- Serialise instances @@ -69,7 +78,7 @@ instance Serialise ByronHash where -------------------------------------------------------------------------------} type RawBoundaryHeader = CC.ABoundaryHeader Strict.ByteString -type RawHeader = CC.AHeader Strict.ByteString +type RawHeader = CC.AHeader Strict.ByteString {------------------------------------------------------------------------------- Nested contents @@ -79,42 +88,41 @@ type RawHeader = CC.AHeader Strict.ByteString -- nested type instead. data instance NestedCtxt_ ByronBlock f a where CtxtByronRegular :: - !SizeInBytes - -> NestedCtxt_ ByronBlock Header RawHeader - + !SizeInBytes -> + NestedCtxt_ ByronBlock Header RawHeader -- | In order to reconstruct 'Header ByronBlock' we need the 'SlotNo' -- -- We could compute that using 'EpochSlots', but we don't have that available -- here. 
CtxtByronBoundary :: - !SizeInBytes - -> NestedCtxt_ ByronBlock Header (SlotNo, RawBoundaryHeader) + !SizeInBytes -> + NestedCtxt_ ByronBlock Header (SlotNo, RawBoundaryHeader) deriving instance Show (NestedCtxt_ ByronBlock f a) instance SameDepIndex (NestedCtxt_ ByronBlock f) where sameDepIndex (CtxtByronRegular size) (CtxtByronRegular size') = do - guard (size == size') - return Refl + guard (size == size') + return Refl sameDepIndex (CtxtByronBoundary size) (CtxtByronBoundary size') = do - guard (size == size') - return Refl + guard (size == size') + return Refl sameDepIndex _ _ = - Nothing + Nothing instance HasNestedContent Header ByronBlock where unnest hdr = case byronHeaderRaw hdr of - CC.ABOBBoundaryHdr h -> DepPair (NestedCtxt (CtxtByronBoundary blockSize)) (slotNo, h) - CC.ABOBBlockHdr h -> DepPair (NestedCtxt (CtxtByronRegular blockSize)) h - where - blockSize = byronHeaderBlockSizeHint hdr - slotNo = byronHeaderSlotNo hdr + CC.ABOBBoundaryHdr h -> DepPair (NestedCtxt (CtxtByronBoundary blockSize)) (slotNo, h) + CC.ABOBBlockHdr h -> DepPair (NestedCtxt (CtxtByronRegular blockSize)) h + where + blockSize = byronHeaderBlockSizeHint hdr + slotNo = byronHeaderSlotNo hdr nest = \case - DepPair (NestedCtxt (CtxtByronBoundary blockSize)) (slotNo, h) -> - mkBoundaryByronHeader slotNo h blockSize - DepPair (NestedCtxt (CtxtByronRegular blockSize)) h -> - mkRegularByronHeader h blockSize + DepPair (NestedCtxt (CtxtByronBoundary blockSize)) (slotNo, h) -> + mkBoundaryByronHeader slotNo h blockSize + DepPair (NestedCtxt (CtxtByronRegular blockSize)) h -> + mkRegularByronHeader h blockSize {------------------------------------------------------------------------------- Serialisation @@ -128,24 +136,25 @@ instance HasNestedContent Header ByronBlock where -- proposals). byronBlockEncodingOverhead :: Word32 byronBlockEncodingOverhead = - blockHeaderOverhead + blockBodyOverhead + safetyMargin - where - -- The maximum block header size. 
- blockHeaderOverhead = 650 - - -- The block body overhead excluding the actual generalized transactions. - blockBodyOverhead = 1 {- ABody: encodeListLen 4 -} - + 2 {- TxPayload: list -} - + 1 {- SscPayload: encodeListLen 2 -} - + 1 {- SscPayload: Word8 -} - + 1 {- SscPayload: mempty :: Set () -} - + 2 {- Delegation.Payload: list -} - + 1 {- Update.Payload: encodeListLen 2 -} - + 1 {- Update.Payload: Maybe AProposal -} - + 2 {- Update.Payload: list of AVote -} - - -- Just for safety. - safetyMargin = 1024 + blockHeaderOverhead + blockBodyOverhead + safetyMargin + where + -- The maximum block header size. + blockHeaderOverhead = 650 + + -- The block body overhead excluding the actual generalized transactions. + blockBodyOverhead = + 1 {- ABody: encodeListLen 4 -} + + 2 {- TxPayload: list -} + + 1 {- SscPayload: encodeListLen 2 -} + + 1 {- SscPayload: Word8 -} + + 1 {- SscPayload: mempty :: Set () -} + + 2 {- Delegation.Payload: list -} + + 1 {- Update.Payload: encodeListLen 2 -} + + 1 {- Update.Payload: Maybe AProposal -} + + 2 {- Update.Payload: list of AVote -} + + -- Just for safety. + safetyMargin = 1024 encodeByronHeaderHash :: HeaderHash ByronBlock -> Encoding encodeByronHeaderHash = toByronCBOR @@ -162,26 +171,32 @@ decodeByronHeaderHash = fromByronCBOR -- binary compatible with 'CC.encCBORABlockOrBoundary', but does not use it and -- instead takes advantage of the annotations (using 'encodePreEncoded'). 
encodeByronBlock :: ByronBlock -> CBOR.Encoding -encodeByronBlock blk = mconcat [ - CBOR.encodeListLen 2 +encodeByronBlock blk = + mconcat + [ CBOR.encodeListLen 2 , case byronBlockRaw blk of - CC.ABOBBoundary b -> mconcat [ - CBOR.encodeWord 0 - , CBOR.encodePreEncoded $ CC.boundaryAnnotation b - ] - CC.ABOBBlock b -> mconcat [ - CBOR.encodeWord 1 - , CBOR.encodePreEncoded $ CC.blockAnnotation b - ] + CC.ABOBBoundary b -> + mconcat + [ CBOR.encodeWord 0 + , CBOR.encodePreEncoded $ CC.boundaryAnnotation b + ] + CC.ABOBBlock b -> + mconcat + [ CBOR.encodeWord 1 + , CBOR.encodePreEncoded $ CC.blockAnnotation b + ] ] -- | Inverse of 'encodeByronBlock' decodeByronBlock :: CC.EpochSlots -> Decoder s (Lazy.ByteString -> ByronBlock) decodeByronBlock epochSlots = - toPlainDecoder Nothing byronProtVer $ - flip (\bs -> mkByronBlock epochSlots - . annotationBytes bs) - <$> CC.decCBORABlockOrBoundary epochSlots + toPlainDecoder Nothing byronProtVer $ + flip + ( \bs -> + mkByronBlock epochSlots + . annotationBytes bs + ) + <$> CC.decCBORABlockOrBoundary epochSlots -- | Decoder for a regular (non-EBB) Byron block. -- @@ -192,14 +207,18 @@ decodeByronBlock epochSlots = -- -- Use 'decodeByronBlock' when you can, this function is provided for use by -- the hard-fork combinator. -decodeByronRegularBlock :: CC.EpochSlots - -> Decoder s (Lazy.ByteString -> ByronBlock) +decodeByronRegularBlock :: + CC.EpochSlots -> + Decoder s (Lazy.ByteString -> ByronBlock) decodeByronRegularBlock epochSlots = - toPlainDecoder Nothing byronProtVer $ - flip (\bs -> mkByronBlock epochSlots - . annotationBytes bs - . CC.ABOBBlock) - <$> CC.decCBORABlock epochSlots + toPlainDecoder Nothing byronProtVer $ + flip + ( \bs -> + mkByronBlock epochSlots + . annotationBytes bs + . CC.ABOBBlock + ) + <$> CC.decCBORABlock epochSlots -- | Decoder for a boundary Byron block. 
-- @@ -210,14 +229,18 @@ decodeByronRegularBlock epochSlots = -- -- Use 'decodeByronBlock' when you can, this function is provided for use by -- the hard-fork combinator. -decodeByronBoundaryBlock :: CC.EpochSlots - -> Decoder s (Lazy.ByteString -> ByronBlock) +decodeByronBoundaryBlock :: + CC.EpochSlots -> + Decoder s (Lazy.ByteString -> ByronBlock) decodeByronBoundaryBlock epochSlots = - toPlainDecoder Nothing byronProtVer $ - flip (\bs -> mkByronBlock epochSlots - . annotationBytes bs - . CC.ABOBBoundary) - <$> CC.decCBORABoundaryBlock + toPlainDecoder Nothing byronProtVer $ + flip + ( \bs -> + mkByronBlock epochSlots + . annotationBytes bs + . CC.ABOBBoundary + ) + <$> CC.decCBORABoundaryBlock -- | Encodes a raw Byron header /without/ a tag indicating whether it's a -- regular header or an EBB header. @@ -228,10 +251,10 @@ encodeByronRegularHeader = toByronCBOR . CBOR.encodePreEncoded . CC.headerAnnota -- | Inverse of 'encodeByronRegularHeader' decodeByronRegularHeader :: - CC.EpochSlots - -> Decoder s (Lazy.ByteString -> RawHeader) + CC.EpochSlots -> + Decoder s (Lazy.ByteString -> RawHeader) decodeByronRegularHeader epochSlots = - toPlainDecoder Nothing byronProtVer $ + toPlainDecoder Nothing byronProtVer $ flip annotationBytes <$> CC.decCBORAHeader epochSlots -- | Encodes a raw Byron EBB header /without/ a tag indicating whether it's a @@ -244,7 +267,7 @@ encodeByronBoundaryHeader = toByronCBOR . CBOR.encodePreEncoded . CC.boundaryHea -- | Inverse of 'encodeByronBoundaryHeader' decodeByronBoundaryHeader :: Decoder s (Lazy.ByteString -> RawBoundaryHeader) decodeByronBoundaryHeader = - toPlainDecoder Nothing byronProtVer $ + toPlainDecoder Nothing byronProtVer $ flip annotationBytes <$> CC.decCBORABoundaryHeader -- | The 'BinaryBlockInfo' of the given 'ByronBlock'. @@ -256,14 +279,16 @@ decodeByronBoundaryHeader = -- added to the sliced bytestring before it can be deserialised using -- 'decodeByronHeader'. 
byronBinaryBlockInfo :: ByronBlock -> BinaryBlockInfo -byronBinaryBlockInfo blk = BinaryBlockInfo - { headerOffset = 1 {- 'encodeListLen' of the outer 'Either' envelope -} - + 1 {- the tag -} - + 1 {- 'encodeListLen' of the block: header + body + ... -} - -- Compute the length of the annotated header - , headerSize = fromIntegral $ Strict.length $ case byronBlockRaw blk of +byronBinaryBlockInfo blk = + BinaryBlockInfo + { headerOffset = + 1 {- 'encodeListLen' of the outer 'Either' envelope -} + + 1 {- the tag -} + + 1 {- 'encodeListLen' of the block: header + body + ... -} + -- Compute the length of the annotated header + , headerSize = fromIntegral $ Strict.length $ case byronBlockRaw blk of CC.ABOBBoundary b -> CC.boundaryHeaderAnnotation $ CC.boundaryHeader b - CC.ABOBBlock b -> CC.headerAnnotation $ CC.blockHeader b + CC.ABOBBlock b -> CC.headerAnnotation $ CC.blockHeader b } {------------------------------------------------------------------------------- @@ -278,35 +303,37 @@ byronBinaryBlockInfo blk = BinaryBlockInfo -- 2-tuple and the 'Word' indicating whether its an EBB or regular block. isEbbEnvelope :: IsEBB -> Lazy.ByteString isEbbEnvelope = \case - IsEBB -> "\130\NUL" + IsEBB -> "\130\NUL" IsNotEBB -> "\130\SOH" addV1Envelope :: - (SomeSecond (NestedCtxt Header) ByronBlock, Lazy.ByteString) - -> Lazy.ByteString + (SomeSecond (NestedCtxt Header) ByronBlock, Lazy.ByteString) -> + Lazy.ByteString addV1Envelope (SomeSecond (NestedCtxt ctxt), bs) = isEbbTag <> bs - where - isEbbTag = case ctxt of - CtxtByronBoundary {} -> isEbbEnvelope IsEBB - CtxtByronRegular {} -> isEbbEnvelope IsNotEBB + where + isEbbTag = case ctxt of + CtxtByronBoundary{} -> isEbbEnvelope IsEBB + CtxtByronRegular{} -> isEbbEnvelope IsNotEBB -- | Drop the V1 EBB-or-regular-header envelope and reconstruct the context. -- Since we don't know the block size, use 'fakeByronBlockSizeHint'. 
dropV1Envelope :: - Lazy.ByteString - -> Except String (SomeSecond (NestedCtxt Header) ByronBlock, Lazy.ByteString) + Lazy.ByteString -> + Except String (SomeSecond (NestedCtxt Header) ByronBlock, Lazy.ByteString) dropV1Envelope bs = case Lazy.splitAt 2 bs of - (prefix, suffix) - | prefix == isEbbEnvelope IsEBB - -> return ( SomeSecond . NestedCtxt $ CtxtByronBoundary fakeByronBlockSizeHint - , suffix - ) - | prefix == isEbbEnvelope IsNotEBB - -> return ( SomeSecond . NestedCtxt $ CtxtByronRegular fakeByronBlockSizeHint - , suffix - ) - | otherwise - -> throwError "decodeUnsized: invalid prefix" + (prefix, suffix) + | prefix == isEbbEnvelope IsEBB -> + return + ( SomeSecond . NestedCtxt $ CtxtByronBoundary fakeByronBlockSizeHint + , suffix + ) + | prefix == isEbbEnvelope IsNotEBB -> + return + ( SomeSecond . NestedCtxt $ CtxtByronRegular fakeByronBlockSizeHint + , suffix + ) + | otherwise -> + throwError "decodeUnsized: invalid prefix" -- | Fake size (used in compatibility mode) fakeByronBlockSizeHint :: SizeInBytes @@ -322,14 +349,17 @@ encodeUnsizedHeader :: UnsizedHeader -> Encoding encodeUnsizedHeader (UnsizedHeader raw _ _) = toByronCBOR $ CC.encCBORABlockOrBoundaryHdr raw -- | Inverse of 'encodeSizedHeader' -decodeUnsizedHeader :: CC.EpochSlots - -> Decoder s (Lazy.ByteString -> UnsizedHeader) +decodeUnsizedHeader :: + CC.EpochSlots -> + Decoder s (Lazy.ByteString -> UnsizedHeader) decodeUnsizedHeader epochSlots = - toPlainDecoder Nothing byronProtVer $ + toPlainDecoder Nothing byronProtVer $ fillInByteString <$> CC.decCBORABlockOrBoundaryHdr epochSlots - where - fillInByteString :: CC.ABlockOrBoundaryHdr ByteSpan - -> Lazy.ByteString - -> UnsizedHeader - fillInByteString it theBytes = mkUnsizedHeader epochSlots $ + where + fillInByteString :: + CC.ABlockOrBoundaryHdr ByteSpan -> + Lazy.ByteString -> + UnsizedHeader + fillInByteString it theBytes = + mkUnsizedHeader epochSlots $ Lazy.toStrict . 
slice theBytes <$> it diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Node.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Node.hs index 9fb24f382d..d4859ceb74 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Node.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Node.hs @@ -5,18 +5,19 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Byron.Node ( - PBftSignatureThreshold (..) +module Ouroboros.Consensus.Byron.Node + ( PBftSignatureThreshold (..) , blockForgingByron , byronBlockForging + -- * Secrets , ByronLeaderCredentials (..) , ByronLeaderCredentialsError , mkByronLeaderCredentials , mkPBftCanBeLeader + -- * ProtocolInfo , ProtocolParamsByron (..) , defaultPBftSignatureThreshold @@ -25,94 +26,95 @@ module Ouroboros.Consensus.Byron.Node ( , protocolInfoByron ) where -import qualified Cardano.Chain.Delegation as Delegation -import qualified Cardano.Chain.Genesis as Genesis -import Cardano.Chain.ProtocolConstants (kEpochSlots) -import Cardano.Chain.Slotting (EpochSlots (..)) -import qualified Cardano.Chain.Update as Update -import qualified Cardano.Crypto as Crypto -import Control.Monad (guard) -import Data.Coerce (coerce) -import Data.Maybe -import Data.Text (Text) -import Data.Void (Void) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime (SystemStart (..)) -import Ouroboros.Consensus.Byron.Crypto.DSIGN -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Ledger.Conversions -import Ouroboros.Consensus.Byron.Ledger.Inspect () -import Ouroboros.Consensus.Byron.Node.Serialisation () -import Ouroboros.Consensus.Byron.Protocol -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import 
Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.NodeId (CoreNodeId) -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.PBFT -import qualified Ouroboros.Consensus.Protocol.PBFT.State as S -import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB (..)) -import Ouroboros.Consensus.Storage.ImmutableDB (simpleChunkInfo) -import Ouroboros.Consensus.Util ((....:)) -import Ouroboros.Network.Magic (NetworkMagic (..)) +import Cardano.Chain.Delegation qualified as Delegation +import Cardano.Chain.Genesis qualified as Genesis +import Cardano.Chain.ProtocolConstants (kEpochSlots) +import Cardano.Chain.Slotting (EpochSlots (..)) +import Cardano.Chain.Update qualified as Update +import Cardano.Crypto qualified as Crypto +import Control.Monad (guard) +import Data.Coerce (coerce) +import Data.Maybe +import Data.Text (Text) +import Data.Void (Void) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime (SystemStart (..)) +import Ouroboros.Consensus.Byron.Crypto.DSIGN +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Ledger.Conversions +import Ouroboros.Consensus.Byron.Ledger.Inspect () +import Ouroboros.Consensus.Byron.Node.Serialisation () +import Ouroboros.Consensus.Byron.Protocol +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.NodeId (CoreNodeId) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Protocol.PBFT.State qualified as S +import 
Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB (..)) +import Ouroboros.Consensus.Storage.ImmutableDB (simpleChunkInfo) +import Ouroboros.Consensus.Util ((....:)) +import Ouroboros.Network.Magic (NetworkMagic (..)) {------------------------------------------------------------------------------- Credentials -------------------------------------------------------------------------------} -- | Credentials needed to produce blocks in the Byron era. -data ByronLeaderCredentials = ByronLeaderCredentials { - blcSignKey :: Crypto.SigningKey - , blcDlgCert :: Delegation.Certificate - -- | Only core nodes can produce blocks. The 'CoreNodeId' is used to - -- determine the order (round-robin) in which core nodes produce blocks. - , blcCoreNodeId :: CoreNodeId - -- | Identifier for this set of credentials. - -- - -- Useful when the node is running with multiple sets of credentials. - , blcLabel :: Text - } - deriving (Show) +data ByronLeaderCredentials = ByronLeaderCredentials + { blcSignKey :: Crypto.SigningKey + , blcDlgCert :: Delegation.Certificate + , blcCoreNodeId :: CoreNodeId + -- ^ Only core nodes can produce blocks. The 'CoreNodeId' is used to + -- determine the order (round-robin) in which core nodes produce blocks. + , blcLabel :: Text + -- ^ Identifier for this set of credentials. + -- + -- Useful when the node is running with multiple sets of credentials. + } + deriving Show -- | Make the 'ByronLeaderCredentials', with a couple sanity checks: -- -- * That the block signing key and the delegation certificate match. -- * That the delegation certificate does correspond to one of the genesis -- keys from the genesis file. 
--- mkByronLeaderCredentials :: - Genesis.Config - -> Crypto.SigningKey - -> Delegation.Certificate - -> Text - -> Either ByronLeaderCredentialsError ByronLeaderCredentials + Genesis.Config -> + Crypto.SigningKey -> + Delegation.Certificate -> + Text -> + Either ByronLeaderCredentialsError ByronLeaderCredentials mkByronLeaderCredentials gc sk cert lbl = do - guard (Delegation.delegateVK cert == Crypto.toVerification sk) - ?! NodeSigningKeyDoesNotMatchDelegationCertificate - - let vkGenesis = Delegation.issuerVK cert - nid <- genesisKeyCoreNodeId gc (VerKeyByronDSIGN vkGenesis) - ?! DelegationCertificateNotFromGenesisKey - - return ByronLeaderCredentials { - blcSignKey = sk - , blcDlgCert = cert - , blcCoreNodeId = nid - , blcLabel = lbl - } - where - (?!) :: Maybe a -> e -> Either e a - Just x ?! _ = Right x - Nothing ?! e = Left e - -data ByronLeaderCredentialsError = - NodeSigningKeyDoesNotMatchDelegationCertificate - | DelegationCertificateNotFromGenesisKey + guard (Delegation.delegateVK cert == Crypto.toVerification sk) + ?! NodeSigningKeyDoesNotMatchDelegationCertificate + + let vkGenesis = Delegation.issuerVK cert + nid <- + genesisKeyCoreNodeId gc (VerKeyByronDSIGN vkGenesis) + ?! DelegationCertificateNotFromGenesisKey + + return + ByronLeaderCredentials + { blcSignKey = sk + , blcDlgCert = cert + , blcCoreNodeId = nid + , blcLabel = lbl + } + where + (?!) :: Maybe a -> e -> Either e a + Just x ?! _ = Right x + Nothing ?! 
e = Left e + +data ByronLeaderCredentialsError + = NodeSigningKeyDoesNotMatchDelegationCertificate + | DelegationCertificateNotFromGenesisKey deriving (Eq, Show) {------------------------------------------------------------------------------- @@ -126,38 +128,43 @@ type instance ForgeStateInfo ByronBlock = () type instance ForgeStateUpdateError ByronBlock = Void byronBlockForging :: - Monad m - => ByronLeaderCredentials - -> BlockForging m ByronBlock -byronBlockForging creds = BlockForging { - forgeLabel = blcLabel creds + Monad m => + ByronLeaderCredentials -> + BlockForging m ByronBlock +byronBlockForging creds = + BlockForging + { forgeLabel = blcLabel creds , canBeLeader , updateForgeState = \_ _ _ -> return $ ForgeStateUpdated () - , checkCanForge = \cfg slot tickedPBftState _isLeader () -> - pbftCheckCanForge - (configConsensus cfg) - canBeLeader - slot - tickedPBftState - , forgeBlock = \cfg -> return ....: forgeByronBlock cfg + , checkCanForge = \cfg slot tickedPBftState _isLeader () -> + pbftCheckCanForge + (configConsensus cfg) + canBeLeader + slot + tickedPBftState + , forgeBlock = \cfg -> return ....: forgeByronBlock cfg } - where - canBeLeader = mkPBftCanBeLeader creds + where + canBeLeader = mkPBftCanBeLeader creds mkPBftCanBeLeader :: ByronLeaderCredentials -> CanBeLeader (PBft PBftByronCrypto) -mkPBftCanBeLeader (ByronLeaderCredentials sk cert nid _) = PBftCanBeLeader { - pbftCanBeLeaderCoreNodeId = nid - , pbftCanBeLeaderSignKey = SignKeyByronDSIGN sk - , pbftCanBeLeaderDlgCert = cert +mkPBftCanBeLeader (ByronLeaderCredentials sk cert nid _) = + PBftCanBeLeader + { pbftCanBeLeaderCoreNodeId = nid + , pbftCanBeLeaderSignKey = SignKeyByronDSIGN sk + , pbftCanBeLeaderDlgCert = cert } -blockForgingByron :: Monad m - => ProtocolParamsByron - -> [BlockForging m ByronBlock] -blockForgingByron ProtocolParamsByron { byronLeaderCredentials = mLeaderCreds - } = - byronBlockForging - <$> maybeToList mLeaderCreds +blockForgingByron :: + Monad m => + 
ProtocolParamsByron -> + [BlockForging m ByronBlock] +blockForgingByron + ProtocolParamsByron + { byronLeaderCredentials = mLeaderCreds + } = + byronBlockForging + <$> maybeToList mLeaderCreds {------------------------------------------------------------------------------- ProtocolInfo @@ -169,67 +176,76 @@ defaultPBftSignatureThreshold :: PBftSignatureThreshold defaultPBftSignatureThreshold = PBftSignatureThreshold 0.22 -- | Parameters needed to run Byron -data ProtocolParamsByron = ProtocolParamsByron { - byronGenesis :: Genesis.Config - , byronPbftSignatureThreshold :: Maybe PBftSignatureThreshold - , byronProtocolVersion :: Update.ProtocolVersion - , byronSoftwareVersion :: Update.SoftwareVersion - , byronLeaderCredentials :: Maybe ByronLeaderCredentials - } - -protocolInfoByron :: ProtocolParamsByron - -> ProtocolInfo ByronBlock -protocolInfoByron ProtocolParamsByron { - byronGenesis = genesisConfig - , byronPbftSignatureThreshold = mSigThresh - , byronProtocolVersion = pVer - , byronSoftwareVersion = sVer - } = - ProtocolInfo { - pInfoConfig = TopLevelConfig { - topLevelConfigProtocol = PBftConfig { - pbftParams = byronPBftParams compactedGenesisConfig mSigThresh - } - , topLevelConfigLedger = compactedGenesisConfig - , topLevelConfigBlock = blockConfig - , topLevelConfigCodec = mkByronCodecConfig compactedGenesisConfig - , topLevelConfigStorage = ByronStorageConfig blockConfig - , topLevelConfigCheckpoints = emptyCheckpointsMap - } - , pInfoInitLedger = ExtLedgerState { - -- Important: don't pass the compacted genesis config to - -- 'initByronLedgerState', it needs the full one, including the AVVM - -- balances. 
- ledgerState = initByronLedgerState genesisConfig Nothing - , headerState = genesisHeaderState S.empty - } +data ProtocolParamsByron = ProtocolParamsByron + { byronGenesis :: Genesis.Config + , byronPbftSignatureThreshold :: Maybe PBftSignatureThreshold + , byronProtocolVersion :: Update.ProtocolVersion + , byronSoftwareVersion :: Update.SoftwareVersion + , byronLeaderCredentials :: Maybe ByronLeaderCredentials + } + +protocolInfoByron :: + ProtocolParamsByron -> + ProtocolInfo ByronBlock +protocolInfoByron + ProtocolParamsByron + { byronGenesis = genesisConfig + , byronPbftSignatureThreshold = mSigThresh + , byronProtocolVersion = pVer + , byronSoftwareVersion = sVer + } = + ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = + PBftConfig + { pbftParams = byronPBftParams compactedGenesisConfig mSigThresh + } + , topLevelConfigLedger = compactedGenesisConfig + , topLevelConfigBlock = blockConfig + , topLevelConfigCodec = mkByronCodecConfig compactedGenesisConfig + , topLevelConfigStorage = ByronStorageConfig blockConfig + , topLevelConfigCheckpoints = emptyCheckpointsMap + } + , pInfoInitLedger = + ExtLedgerState + { -- Important: don't pass the compacted genesis config to + -- 'initByronLedgerState', it needs the full one, including the AVVM + -- balances. 
+ ledgerState = initByronLedgerState genesisConfig Nothing + , headerState = genesisHeaderState S.empty + } } - where + where compactedGenesisConfig = compactGenesisConfig genesisConfig blockConfig = mkByronConfig compactedGenesisConfig pVer sVer protocolClientInfoByron :: EpochSlots -> ProtocolClientInfo ByronBlock protocolClientInfoByron epochSlots = - ProtocolClientInfo { - pClientInfoCodecConfig = ByronCodecConfig { - getByronEpochSlots = epochSlots - } + ProtocolClientInfo + { pClientInfoCodecConfig = + ByronCodecConfig + { getByronEpochSlots = epochSlots + } } byronPBftParams :: Genesis.Config -> Maybe PBftSignatureThreshold -> PBftParams -byronPBftParams cfg threshold = PBftParams { - pbftSecurityParam = genesisSecurityParam cfg - , pbftNumNodes = genesisNumCoreNodes cfg +byronPBftParams cfg threshold = + PBftParams + { pbftSecurityParam = genesisSecurityParam cfg + , pbftNumNodes = genesisNumCoreNodes cfg , pbftSignatureThreshold = fromMaybe defaultPBftSignatureThreshold threshold } -mkByronConfig :: Genesis.Config - -> Update.ProtocolVersion - -> Update.SoftwareVersion - -> BlockConfig ByronBlock -mkByronConfig genesisConfig pVer sVer = ByronConfig { - byronGenesisConfig = genesisConfig +mkByronConfig :: + Genesis.Config -> + Update.ProtocolVersion -> + Update.SoftwareVersion -> + BlockConfig ByronBlock +mkByronConfig genesisConfig pVer sVer = + ByronConfig + { byronGenesisConfig = genesisConfig , byronProtocolVersion = pVer , byronSoftwareVersion = sVer } @@ -240,15 +256,15 @@ mkByronConfig genesisConfig pVer sVer = ByronConfig { instance ConfigSupportsNode ByronBlock where getSystemStart = - SystemStart - . Genesis.gdStartTime - . extractGenesisData + SystemStart + . Genesis.gdStartTime + . extractGenesisData getNetworkMagic = - NetworkMagic - . Crypto.unProtocolMagicId - . Genesis.gdProtocolMagicId - . extractGenesisData + NetworkMagic + . Crypto.unProtocolMagicId + . Genesis.gdProtocolMagicId + . 
extractGenesisData extractGenesisData :: BlockConfig ByronBlock -> Genesis.GenesisData extractGenesisData = Genesis.configGenesisData . byronGenesisConfig @@ -261,7 +277,7 @@ instance NodeInitStorage ByronBlock where -- The epoch size is fixed and can be derived from @k@ by the ledger -- ('kEpochSlots'). nodeImmutableDbChunkInfo = - simpleChunkInfo + simpleChunkInfo . (coerce :: EpochSlots -> EpochSize) . kEpochSlots . Genesis.gdK @@ -270,14 +286,14 @@ instance NodeInitStorage ByronBlock where -- If the current chain is empty, produce a genesis EBB and add it to the -- ChainDB. Only an EBB can have Genesis (= empty chain) as its predecessor. - nodeInitChainDB cfg InitChainDB { getCurrentLedger, addBlock } = do - tip <- ledgerTipPoint <$> getCurrentLedger - case tip of - BlockPoint {} -> return () - GenesisPoint -> addBlock genesisEBB - where - genesisEBB = - forgeEBB (getByronBlockConfig cfg) (SlotNo 0) (BlockNo 0) GenesisHash + nodeInitChainDB cfg InitChainDB{getCurrentLedger, addBlock} = do + tip <- ledgerTipPoint <$> getCurrentLedger + case tip of + BlockPoint{} -> return () + GenesisPoint -> addBlock genesisEBB + where + genesisEBB = + forgeEBB (getByronBlockConfig cfg) (SlotNo 0) (BlockNo 0) GenesisHash nodeCheckIntegrity = verifyBlockIntegrity . getByronBlockConfig @@ -292,7 +308,9 @@ instance BlockSupportsSanityCheck ByronBlock where configAllSecurityParams = pure . pbftSecurityParam . pbftParams . 
topLevelConfigProtocol -deriving via SelectViewDiffusionPipelining ByronBlock - instance BlockSupportsDiffusionPipelining ByronBlock +deriving via + SelectViewDiffusionPipelining ByronBlock + instance + BlockSupportsDiffusionPipelining ByronBlock instance RunNode ByronBlock diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Node/Serialisation.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Node/Serialisation.hs index d926a380c0..8818c6e351 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Node/Serialisation.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Node/Serialisation.hs @@ -5,36 +5,38 @@ {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.Byron.Node.Serialisation () where -import qualified Cardano.Chain.Block as CC -import qualified Cardano.Chain.Byron.API as CC -import Cardano.Chain.Genesis -import Cardano.Ledger.Binary (fromByronCBOR, toByronCBOR) -import Cardano.Ledger.Binary.Plain -import qualified Codec.CBOR.Decoding as CBOR -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (decode, encode) -import Control.Monad.Except -import qualified Data.ByteString.Lazy as Lazy -import qualified Data.ByteString.Short as Short -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Ledger.Conversions -import Ouroboros.Consensus.Byron.Protocol -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Network.Block (Serialised (..), unwrapCBORinCBOR, - wrapCBORinCBOR) -import 
Ouroboros.Network.SizeInBytes (SizeInBytes (..)) +import Cardano.Chain.Block qualified as CC +import Cardano.Chain.Byron.API qualified as CC +import Cardano.Chain.Genesis +import Cardano.Ledger.Binary (fromByronCBOR, toByronCBOR) +import Cardano.Ledger.Binary.Plain +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (decode, encode) +import Control.Monad.Except +import Data.ByteString.Lazy qualified as Lazy +import Data.ByteString.Short qualified as Short +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Ledger.Conversions +import Ouroboros.Consensus.Byron.Protocol +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Network.Block + ( Serialised (..) + , unwrapCBORinCBOR + , wrapCBORinCBOR + ) +import Ouroboros.Network.SizeInBytes (SizeInBytes (..)) {------------------------------------------------------------------------------- EncodeDisk & DecodeDisk @@ -58,6 +60,7 @@ instance DecodeDisk ByronBlock (LedgerState ByronBlock mk) where -- | @'ChainDepState' ('BlockProtocol' 'ByronBlock')@ instance EncodeDisk ByronBlock (PBftState PBftByronCrypto) where encodeDisk _ = encodeByronChainDepState + -- | @'ChainDepState' ('BlockProtocol' 'ByronBlock')@ instance DecodeDisk ByronBlock (PBftState PBftByronCrypto) where decodeDisk _ = decodeByronChainDepState @@ -77,60 +80,62 @@ instance SerialiseNodeToNodeConstraints ByronBlock where -- | CBOR-in-CBOR for the annotation. This also makes it compatible with the -- wrapped ('Serialised') variant. 
instance SerialiseNodeToNode ByronBlock ByronBlock where - encodeNodeToNode _ _ = wrapCBORinCBOR encodeByronBlock + encodeNodeToNode _ _ = wrapCBORinCBOR encodeByronBlock decodeNodeToNode ccfg _ = unwrapCBORinCBOR (decodeByronBlock epochSlots) - where - epochSlots = getByronEpochSlots ccfg + where + epochSlots = getByronEpochSlots ccfg instance SerialiseNodeToNode ByronBlock (Header ByronBlock) where encodeNodeToNode ccfg = \case - ByronNodeToNodeVersion1 -> - wrapCBORinCBOR $ - encodeUnsizedHeader . fst . splitSizeHint - ByronNodeToNodeVersion2 -> - encodeDisk ccfg . unnest + ByronNodeToNodeVersion1 -> + wrapCBORinCBOR $ + encodeUnsizedHeader . fst . splitSizeHint + ByronNodeToNodeVersion2 -> + encodeDisk ccfg . unnest decodeNodeToNode ccfg = \case - ByronNodeToNodeVersion1 -> - unwrapCBORinCBOR $ - (flip joinSizeHint fakeByronBlockSizeHint .) + ByronNodeToNodeVersion1 -> + unwrapCBORinCBOR $ + (flip joinSizeHint fakeByronBlockSizeHint .) <$> decodeUnsizedHeader epochSlots - ByronNodeToNodeVersion2 -> - nest <$> decodeDisk ccfg - where - epochSlots = getByronEpochSlots ccfg + ByronNodeToNodeVersion2 -> + nest <$> decodeDisk ccfg + where + epochSlots = getByronEpochSlots ccfg -- | 'Serialised' uses CBOR-in-CBOR by default. instance SerialiseNodeToNode ByronBlock (Serialised ByronBlock) - -- Default instance + +-- Default instance instance SerialiseNodeToNode ByronBlock (SerialisedHeader ByronBlock) where encodeNodeToNode ccfg version = case version of - -- Drop the context and add the tag, encode that using CBOR-in-CBOR - ByronNodeToNodeVersion1 -> - encode - . Serialised - . addV1Envelope - . aux - . serialisedHeaderToDepPair - where - aux :: GenDepPair Serialised (f blk) - -> (SomeSecond f blk, Lazy.ByteString) - aux (GenDepPair ix (Serialised bytes)) = (SomeSecond ix, bytes) - - ByronNodeToNodeVersion2 -> encodeDisk ccfg + -- Drop the context and add the tag, encode that using CBOR-in-CBOR + ByronNodeToNodeVersion1 -> + encode + . Serialised + . 
addV1Envelope + . aux + . serialisedHeaderToDepPair + where + aux :: + GenDepPair Serialised (f blk) -> + (SomeSecond f blk, Lazy.ByteString) + aux (GenDepPair ix (Serialised bytes)) = (SomeSecond ix, bytes) + ByronNodeToNodeVersion2 -> encodeDisk ccfg decodeNodeToNode ccfg version = case version of - ByronNodeToNodeVersion1 -> do - bs <- unSerialised <$> decode - either fail (return . SerialisedHeaderFromDepPair) $ - runExcept $ aux <$> dropV1Envelope bs - where - aux :: (SomeSecond f blk, Lazy.ByteString) - -> GenDepPair Serialised (f blk) - aux (SomeSecond ix, bytes) = GenDepPair ix (Serialised bytes) - - ByronNodeToNodeVersion2 -> decodeDisk ccfg + ByronNodeToNodeVersion1 -> do + bs <- unSerialised <$> decode + either fail (return . SerialisedHeaderFromDepPair) $ + runExcept $ + aux <$> dropV1Envelope bs + where + aux :: + (SomeSecond f blk, Lazy.ByteString) -> + GenDepPair Serialised (f blk) + aux (SomeSecond ix, bytes) = GenDepPair ix (Serialised bytes) + ByronNodeToNodeVersion2 -> decodeDisk ccfg -- | No CBOR-in-CBOR, because we check for canonical encodings, which means we -- can use the recomputed encoding for the annotation. @@ -155,14 +160,15 @@ instance SerialiseNodeToClient ByronBlock Config where -- | CBOR-in-CBOR for the annotation. This also makes it compatible with the -- wrapped ('Serialised') variant. instance SerialiseNodeToClient ByronBlock ByronBlock where - encodeNodeToClient _ _ = wrapCBORinCBOR encodeByronBlock + encodeNodeToClient _ _ = wrapCBORinCBOR encodeByronBlock decodeNodeToClient ccfg _ = unwrapCBORinCBOR (decodeByronBlock epochSlots) - where - epochSlots = getByronEpochSlots ccfg + where + epochSlots = getByronEpochSlots ccfg -- | 'Serialised' uses CBOR-in-CBOR by default. instance SerialiseNodeToClient ByronBlock (Serialised ByronBlock) - -- Default instance + +-- Default instance -- | No CBOR-in-CBOR, because we check for canonical encodings, which means we -- can use the recomputed encoding for the annotation. 
@@ -185,7 +191,7 @@ instance SerialiseNodeToClient ByronBlock CC.ApplyMempoolPayloadErr where instance SerialiseNodeToClient ByronBlock (SomeBlockQuery (BlockQuery ByronBlock)) where encodeNodeToClient _ _ (SomeBlockQuery q) = encodeByronQuery q - decodeNodeToClient _ _ = decodeByronQuery + decodeNodeToClient _ _ = decodeByronQuery instance SerialiseBlockQueryResult ByronBlock BlockQuery where encodeBlockQueryResult _ _ = encodeByronResult @@ -198,62 +204,67 @@ instance SerialiseBlockQueryResult ByronBlock BlockQuery where instance ReconstructNestedCtxt Header ByronBlock where reconstructPrefixLen _ = PrefixLen 2 reconstructNestedCtxt _proxy prefix size = - -- The first byte is @encodeListLen 2@, the second (index 1) is 0 for - -- EBB, 1 for regular block - case Short.index prefix 1 of - 0 -> SomeSecond $ NestedCtxt (CtxtByronBoundary size) - 1 -> SomeSecond $ NestedCtxt (CtxtByronRegular size) - _ -> error $ "invalid ByronBlock with prefix: " <> show prefix + -- The first byte is @encodeListLen 2@, the second (index 1) is 0 for + -- EBB, 1 for regular block + case Short.index prefix 1 of + 0 -> SomeSecond $ NestedCtxt (CtxtByronBoundary size) + 1 -> SomeSecond $ NestedCtxt (CtxtByronRegular size) + _ -> error $ "invalid ByronBlock with prefix: " <> show prefix instance EncodeDiskDepIx (NestedCtxt Header) ByronBlock where - encodeDiskDepIx _ccfg (SomeSecond (NestedCtxt ctxt)) = mconcat [ - CBOR.encodeListLen 2 + encodeDiskDepIx _ccfg (SomeSecond (NestedCtxt ctxt)) = + mconcat + [ CBOR.encodeListLen 2 , case ctxt of - CtxtByronBoundary size -> mconcat [ - CBOR.encodeWord8 0 - , CBOR.encodeWord32 (getSizeInBytes size) - ] - CtxtByronRegular size -> mconcat [ - CBOR.encodeWord8 1 - , CBOR.encodeWord32 (getSizeInBytes size) - ] + CtxtByronBoundary size -> + mconcat + [ CBOR.encodeWord8 0 + , CBOR.encodeWord32 (getSizeInBytes size) + ] + CtxtByronRegular size -> + mconcat + [ CBOR.encodeWord8 1 + , CBOR.encodeWord32 (getSizeInBytes size) + ] ] instance EncodeDiskDep 
(NestedCtxt Header) ByronBlock where encodeDiskDep _ccfg (NestedCtxt ctxt) h = - case ctxt of - CtxtByronRegular _size -> - encodeByronRegularHeader h - CtxtByronBoundary _size -> - -- We don't encode the 'SlotNo' - -- This is important, because this encoder/decoder must be compatible - -- with the raw bytes as stored on disk as part of a Byron block. - encodeByronBoundaryHeader (snd h) + case ctxt of + CtxtByronRegular _size -> + encodeByronRegularHeader h + CtxtByronBoundary _size -> + -- We don't encode the 'SlotNo' + -- This is important, because this encoder/decoder must be compatible + -- with the raw bytes as stored on disk as part of a Byron block. + encodeByronBoundaryHeader (snd h) instance DecodeDiskDepIx (NestedCtxt Header) ByronBlock where decodeDiskDepIx _ccfg = do - enforceSize "decodeDiskDepIx ByronBlock" 2 - CBOR.decodeWord8 >>= \case - 0 -> SomeSecond . NestedCtxt . CtxtByronBoundary . SizeInBytes <$> CBOR.decodeWord32 - 1 -> SomeSecond . NestedCtxt . CtxtByronRegular . SizeInBytes <$> CBOR.decodeWord32 - t -> cborError $ DecoderErrorUnknownTag "decodeDiskDepIx ByronBlock" t + enforceSize "decodeDiskDepIx ByronBlock" 2 + CBOR.decodeWord8 >>= \case + 0 -> SomeSecond . NestedCtxt . CtxtByronBoundary . SizeInBytes <$> CBOR.decodeWord32 + 1 -> SomeSecond . NestedCtxt . CtxtByronRegular . 
SizeInBytes <$> CBOR.decodeWord32 + t -> cborError $ DecoderErrorUnknownTag "decodeDiskDepIx ByronBlock" t instance DecodeDiskDep (NestedCtxt Header) ByronBlock where decodeDiskDep ByronCodecConfig{..} (NestedCtxt ctxt) = - case ctxt of - CtxtByronRegular _size -> - decodeByronRegularHeader getByronEpochSlots - CtxtByronBoundary _size -> - auxBoundary <$> decodeByronBoundaryHeader - where - auxBoundary :: (Lazy.ByteString -> RawBoundaryHeader) - -> (Lazy.ByteString -> (SlotNo, RawBoundaryHeader)) - auxBoundary f bs = - (slotNo, hdr) - where - hdr :: RawBoundaryHeader - hdr = f bs - - slotNo :: SlotNo - slotNo = fromByronSlotNo $ - CC.boundaryBlockSlot getByronEpochSlots (CC.boundaryEpoch hdr) + case ctxt of + CtxtByronRegular _size -> + decodeByronRegularHeader getByronEpochSlots + CtxtByronBoundary _size -> + auxBoundary <$> decodeByronBoundaryHeader + where + auxBoundary :: + (Lazy.ByteString -> RawBoundaryHeader) -> + (Lazy.ByteString -> (SlotNo, RawBoundaryHeader)) + auxBoundary f bs = + (slotNo, hdr) + where + hdr :: RawBoundaryHeader + hdr = f bs + + slotNo :: SlotNo + slotNo = + fromByronSlotNo $ + CC.boundaryBlockSlot getByronEpochSlots (CC.boundaryEpoch hdr) diff --git a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Protocol.hs b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Protocol.hs index 01d6a82bbd..ede0babcca 100644 --- a/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Protocol.hs +++ b/ouroboros-consensus-cardano/src/byron/Ouroboros/Consensus/Byron/Protocol.hs @@ -1,22 +1,22 @@ {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Byron.Protocol ( - PBftByronCrypto +module Ouroboros.Consensus.Byron.Protocol + ( PBftByronCrypto , genesisKeyCoreNodeId , nodeIdToGenesisKey ) where -import qualified Cardano.Chain.Common as CC.Common -import qualified Cardano.Chain.Delegation as CC.Delegation -import qualified Cardano.Chain.Genesis as CC.Genesis -import 
Control.Monad (guard) -import Data.Set (Set) -import qualified Data.Set as Set -import Ouroboros.Consensus.Byron.Crypto.DSIGN -import Ouroboros.Consensus.Byron.Ledger.Orphans () -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.PBFT +import Cardano.Chain.Common qualified as CC.Common +import Cardano.Chain.Delegation qualified as CC.Delegation +import Cardano.Chain.Genesis qualified as CC.Genesis +import Control.Monad (guard) +import Data.Set (Set) +import Data.Set qualified as Set +import Ouroboros.Consensus.Byron.Crypto.DSIGN +import Ouroboros.Consensus.Byron.Ledger.Orphans () +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.PBFT {------------------------------------------------------------------------------- Crypto @@ -25,9 +25,9 @@ import Ouroboros.Consensus.Protocol.PBFT data PBftByronCrypto instance PBftCrypto PBftByronCrypto where - type PBftDSIGN PBftByronCrypto = ByronDSIGN + type PBftDSIGN PBftByronCrypto = ByronDSIGN type PBftDelegationCert PBftByronCrypto = CC.Delegation.Certificate - type PBftVerKeyHash PBftByronCrypto = CC.Common.KeyHash + type PBftVerKeyHash PBftByronCrypto = CC.Common.KeyHash dlgCertGenVerKey = VerKeyByronDSIGN . CC.Delegation.issuerVK dlgCertDlgVerKey = VerKeyByronDSIGN . CC.Delegation.delegateVK @@ -42,22 +42,25 @@ instance PBftCrypto PBftByronCrypto where -- -- In PBFT, the 'CoreNodeId' index is determined by the 0-based position in -- the sort order of the genesis key hashes. -genesisKeyCoreNodeId :: CC.Genesis.Config - -> VerKeyDSIGN ByronDSIGN - -- ^ The genesis verification key - -> Maybe CoreNodeId +genesisKeyCoreNodeId :: + CC.Genesis.Config -> + -- | The genesis verification key + VerKeyDSIGN ByronDSIGN -> + Maybe CoreNodeId genesisKeyCoreNodeId gc vkey = - CoreNodeId . fromIntegral <$> - Set.lookupIndex (hashVerKey vkey) (genesisKeyHashes gc) + CoreNodeId . 
fromIntegral + <$> Set.lookupIndex (hashVerKey vkey) (genesisKeyHashes gc) -- | Inverse of 'genesisKeyCoreNodeId' -nodeIdToGenesisKey :: CC.Genesis.Config - -> CoreNodeId - -> Maybe CC.Common.KeyHash +nodeIdToGenesisKey :: + CC.Genesis.Config -> + CoreNodeId -> + Maybe CC.Common.KeyHash nodeIdToGenesisKey gc (CoreNodeId nid) = do - guard $ nid < fromIntegral (Set.size (genesisKeyHashes gc)) - return $ Set.elemAt (fromIntegral nid) (genesisKeyHashes gc) + guard $ nid < fromIntegral (Set.size (genesisKeyHashes gc)) + return $ Set.elemAt (fromIntegral nid) (genesisKeyHashes gc) genesisKeyHashes :: CC.Genesis.Config -> Set CC.Common.KeyHash -genesisKeyHashes = CC.Genesis.unGenesisKeyHashes - . CC.Genesis.configGenesisKeyHashes +genesisKeyHashes = + CC.Genesis.unGenesisKeyHashes + . CC.Genesis.configGenesisKeyHashes diff --git a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano.hs b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano.hs index 70ec410d00..f3202df2f6 100644 --- a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano.hs +++ b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano.hs @@ -1,28 +1,30 @@ {-# LANGUAGE DataKinds #-} -module Ouroboros.Consensus.Cardano ( - -- * The block type of the Cardano block chain +module Ouroboros.Consensus.Cardano + ( -- * The block type of the Cardano block chain CardanoBlock + -- * Supported protocols , ProtocolByron , ProtocolCardano , ProtocolShelley + -- * Abstract over the various protocols , CardanoHardForkTrigger (..) , CardanoHardForkTriggers (..) 
, module X ) where -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Node as X -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.Node -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.Protocol.Praos (Praos) -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Node as X -import Ouroboros.Consensus.Shelley.ShelleyHFC +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Node as X +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.Node +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.Protocol.Praos (Praos) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Node as X +import Ouroboros.Consensus.Shelley.ShelleyHFC {------------------------------------------------------------------------------- Supported protocols @@ -33,12 +35,14 @@ import Ouroboros.Consensus.Shelley.ShelleyHFC breaking any assumptions made in @cardano-node@. 
-------------------------------------------------------------------------------} -type ProtocolByron = HardForkProtocol '[ ByronBlock ] -type ProtocolCardano = HardForkProtocol '[ ByronBlock - , ShelleyBlock (TPraos StandardCrypto) ShelleyEra - , ShelleyBlock (TPraos StandardCrypto) AllegraEra - , ShelleyBlock (TPraos StandardCrypto) MaryEra - , ShelleyBlock (TPraos StandardCrypto) AlonzoEra - , ShelleyBlock (Praos StandardCrypto) BabbageEra - , ShelleyBlock (Praos StandardCrypto) ConwayEra - ] +type ProtocolByron = HardForkProtocol '[ByronBlock] +type ProtocolCardano = + HardForkProtocol + '[ ByronBlock + , ShelleyBlock (TPraos StandardCrypto) ShelleyEra + , ShelleyBlock (TPraos StandardCrypto) AllegraEra + , ShelleyBlock (TPraos StandardCrypto) MaryEra + , ShelleyBlock (TPraos StandardCrypto) AlonzoEra + , ShelleyBlock (Praos StandardCrypto) BabbageEra + , ShelleyBlock (Praos StandardCrypto) ConwayEra + ] diff --git a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Block.hs b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Block.hs index 8db3c12acc..ee296d591b 100644 --- a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Block.hs +++ b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Block.hs @@ -8,90 +8,200 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE ViewPatterns #-} -module Ouroboros.Consensus.Cardano.Block ( - -- * Eras +module Ouroboros.Consensus.Cardano.Block + ( -- * Eras CardanoEras , CardanoShelleyEras , module Ouroboros.Consensus.Shelley.Eras , ShelleyBasedLedgerEras + -- * Block , CardanoBlock - -- Note: by exporting the pattern synonyms as part of the matching data - -- type (instead of as separate patterns), we get better exhaustiveness - -- checks from GHC. 
But GHC expects a data type, not a type family, that's - -- why we sometimes mention the data type of the instance in these exports - -- instead of the abstract type family. - , HardForkBlock (BlockAllegra, BlockAlonzo, BlockByron, BlockMary, BlockShelley, BlockBabbage, BlockConway) + -- Note: by exporting the pattern synonyms as part of the matching data + -- type (instead of as separate patterns), we get better exhaustiveness + -- checks from GHC. But GHC expects a data type, not a type family, that's + -- why we sometimes mention the data type of the instance in these exports + -- instead of the abstract type family. + , HardForkBlock + ( BlockAllegra + , BlockAlonzo + , BlockByron + , BlockMary + , BlockShelley + , BlockBabbage + , BlockConway + ) + -- * Headers , CardanoHeader - , Header (HeaderAllegra, HeaderAlonzo, HeaderByron, HeaderMary, HeaderShelley, HeaderBabbage, HeaderConway) + , Header + ( HeaderAllegra + , HeaderAlonzo + , HeaderByron + , HeaderMary + , HeaderShelley + , HeaderBabbage + , HeaderConway + ) + -- * Generalised transactions , CardanoApplyTxErr , CardanoGenTx , CardanoGenTxId , GenTx (GenTxAllegra, GenTxAlonzo, GenTxByron, GenTxMary, GenTxShelley, GenTxBabbage, GenTxConway) - , HardForkApplyTxErr (ApplyTxErrAllegra, ApplyTxErrAlonzo, ApplyTxErrByron, ApplyTxErrMary, ApplyTxErrShelley, ApplyTxErrWrongEra, ApplyTxErrBabbage, ApplyTxErrConway) - , TxId (GenTxIdAllegra, GenTxIdAlonzo, GenTxIdByron, GenTxIdMary, GenTxIdShelley, GenTxIdBabbage, GenTxIdConway) + , HardForkApplyTxErr + ( ApplyTxErrAllegra + , ApplyTxErrAlonzo + , ApplyTxErrByron + , ApplyTxErrMary + , ApplyTxErrShelley + , ApplyTxErrWrongEra + , ApplyTxErrBabbage + , ApplyTxErrConway + ) + , TxId + ( GenTxIdAllegra + , GenTxIdAlonzo + , GenTxIdByron + , GenTxIdMary + , GenTxIdShelley + , GenTxIdBabbage + , GenTxIdConway + ) + -- * LedgerError , CardanoLedgerError - , HardForkLedgerError (LedgerErrorAllegra, LedgerErrorAlonzo, LedgerErrorByron, LedgerErrorMary, 
LedgerErrorShelley, LedgerErrorWrongEra, LedgerErrorBabbage, LedgerErrorConway) + , HardForkLedgerError + ( LedgerErrorAllegra + , LedgerErrorAlonzo + , LedgerErrorByron + , LedgerErrorMary + , LedgerErrorShelley + , LedgerErrorWrongEra + , LedgerErrorBabbage + , LedgerErrorConway + ) + -- * OtherEnvelopeError , CardanoOtherHeaderEnvelopeError - , HardForkEnvelopeErr (OtherHeaderEnvelopeErrorAllegra, OtherHeaderEnvelopeErrorBabbage, OtherHeaderEnvelopeErrorConway, OtherHeaderEnvelopeErrorAlonzo, OtherHeaderEnvelopeErrorByron, OtherHeaderEnvelopeErrorMary, OtherHeaderEnvelopeErrorShelley, OtherHeaderEnvelopeErrorWrongEra) + , HardForkEnvelopeErr + ( OtherHeaderEnvelopeErrorAllegra + , OtherHeaderEnvelopeErrorBabbage + , OtherHeaderEnvelopeErrorConway + , OtherHeaderEnvelopeErrorAlonzo + , OtherHeaderEnvelopeErrorByron + , OtherHeaderEnvelopeErrorMary + , OtherHeaderEnvelopeErrorShelley + , OtherHeaderEnvelopeErrorWrongEra + ) + -- * TipInfo , CardanoTipInfo - , OneEraTipInfo (TipInfoAllegra, TipInfoAlonzo, TipInfoByron, TipInfoBabbage, TipInfoConway, TipInfoMary, TipInfoShelley) + , OneEraTipInfo + ( TipInfoAllegra + , TipInfoAlonzo + , TipInfoByron + , TipInfoBabbage + , TipInfoConway + , TipInfoMary + , TipInfoShelley + ) + -- * Query - , BlockQuery (QueryAnytimeAllegra, QueryAnytimeAlonzo, QueryAnytimeBabbage, QueryAnytimeConway, QueryAnytimeByron, QueryAnytimeMary, QueryAnytimeShelley, QueryHardFork, QueryIfCurrentAllegra, QueryIfCurrentAlonzo, QueryIfCurrentBabbage, QueryIfCurrentConway, QueryIfCurrentByron, QueryIfCurrentMary, QueryIfCurrentShelley) + , BlockQuery + ( QueryAnytimeAllegra + , QueryAnytimeAlonzo + , QueryAnytimeBabbage + , QueryAnytimeConway + , QueryAnytimeByron + , QueryAnytimeMary + , QueryAnytimeShelley + , QueryHardFork + , QueryIfCurrentAllegra + , QueryIfCurrentAlonzo + , QueryIfCurrentBabbage + , QueryIfCurrentConway + , QueryIfCurrentByron + , QueryIfCurrentMary + , QueryIfCurrentShelley + ) , CardanoQuery , CardanoQueryResult , Either 
(QueryResultSuccess, QueryResultEraMismatch) + -- * CodecConfig , CardanoCodecConfig , CodecConfig (CardanoCodecConfig) + -- * BlockConfig , BlockConfig (CardanoBlockConfig) , CardanoBlockConfig + -- * StorageConfig , CardanoStorageConfig , StorageConfig (CardanoStorageConfig) + -- * ConsensusConfig , CardanoConsensusConfig , ConsensusConfig (CardanoConsensusConfig) + -- * LedgerConfig , CardanoLedgerConfig , HardForkLedgerConfig (CardanoLedgerConfig) + -- * LedgerState , CardanoLedgerState - , LedgerState (LedgerStateAllegra, LedgerStateAlonzo, LedgerStateBabbage, LedgerStateConway, LedgerStateByron, LedgerStateMary, LedgerStateShelley) + , LedgerState + ( LedgerStateAllegra + , LedgerStateAlonzo + , LedgerStateBabbage + , LedgerStateConway + , LedgerStateByron + , LedgerStateMary + , LedgerStateShelley + ) + -- * ChainDepState , CardanoChainDepState - , HardForkState (ChainDepStateAllegra, ChainDepStateAlonzo, ChainDepStateBabbage, ChainDepStateConway, ChainDepStateByron, ChainDepStateMary, ChainDepStateShelley) + , HardForkState + ( ChainDepStateAllegra + , ChainDepStateAlonzo + , ChainDepStateBabbage + , ChainDepStateConway + , ChainDepStateByron + , ChainDepStateMary + , ChainDepStateShelley + ) + -- * EraMismatch , EraMismatch (..) 
) where -import Data.Kind -import Data.SOP.BasicFunctors -import Data.SOP.Functors -import Data.SOP.Strict -import Ouroboros.Consensus.Block (BlockProtocol) -import Ouroboros.Consensus.Byron.Ledger.Block (ByronBlock) -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.HeaderValidation (OtherHeaderEnvelopeError, - TipInfo) -import Ouroboros.Consensus.Ledger.Abstract (LedgerError) -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool (ApplyTxErr, - GenTxId) -import Ouroboros.Consensus.Protocol.Abstract (ChainDepState) -import Ouroboros.Consensus.Protocol.Praos (Praos) -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock) -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.Kind +import Data.SOP.BasicFunctors +import Data.SOP.Functors +import Data.SOP.Strict +import Ouroboros.Consensus.Block (BlockProtocol) +import Ouroboros.Consensus.Byron.Ledger.Block (ByronBlock) +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.HeaderValidation + ( OtherHeaderEnvelopeError + , TipInfo + ) +import Ouroboros.Consensus.Ledger.Abstract (LedgerError) +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool + ( ApplyTxErr + , GenTxId + ) +import Ouroboros.Consensus.Protocol.Abstract (ChainDepState) +import Ouroboros.Consensus.Protocol.Praos (Praos) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock) +import Ouroboros.Consensus.TypeFamilyWrappers 
{------------------------------------------------------------------------------- The eras of the Cardano blockchain @@ -111,8 +221,8 @@ type CardanoShelleyEras c = , ShelleyBlock (TPraos c) AllegraEra , ShelleyBlock (TPraos c) MaryEra , ShelleyBlock (TPraos c) AlonzoEra - , ShelleyBlock (Praos c) BabbageEra - , ShelleyBlock (Praos c) ConwayEra + , ShelleyBlock (Praos c) BabbageEra + , ShelleyBlock (Praos c) ConwayEra ] type ShelleyBasedLedgerEras :: Type -> [Type] @@ -132,86 +242,93 @@ type ShelleyBasedLedgerEras c = -- Here we use layout and adjacency to make it obvious that we haven't -- miscounted. -pattern TagByron :: f ByronBlock -> NS f (CardanoEras c) +pattern TagByron :: f ByronBlock -> NS f (CardanoEras c) pattern TagShelley :: f (ShelleyBlock (TPraos c) ShelleyEra) -> NS f (CardanoEras c) pattern TagAllegra :: f (ShelleyBlock (TPraos c) AllegraEra) -> NS f (CardanoEras c) -pattern TagMary :: f (ShelleyBlock (TPraos c) MaryEra ) -> NS f (CardanoEras c) -pattern TagAlonzo :: f (ShelleyBlock (TPraos c) AlonzoEra ) -> NS f (CardanoEras c) -pattern TagBabbage :: f (ShelleyBlock (Praos c) BabbageEra) -> NS f (CardanoEras c) -pattern TagConway :: f (ShelleyBlock (Praos c) ConwayEra ) -> NS f (CardanoEras c) - -pattern TagByron x = Z x -pattern TagShelley x = S (Z x) -pattern TagAllegra x = S (S (Z x)) -pattern TagMary x = S (S (S (Z x))) -pattern TagAlonzo x = S (S (S (S (Z x)))) -pattern TagBabbage x = S (S (S (S (S (Z x))))) -pattern TagConway x = S (S (S (S (S (S (Z x)))))) +pattern TagMary :: f (ShelleyBlock (TPraos c) MaryEra) -> NS f (CardanoEras c) +pattern TagAlonzo :: f (ShelleyBlock (TPraos c) AlonzoEra) -> NS f (CardanoEras c) +pattern TagBabbage :: f (ShelleyBlock (Praos c) BabbageEra) -> NS f (CardanoEras c) +pattern TagConway :: f (ShelleyBlock (Praos c) ConwayEra) -> NS f (CardanoEras c) + +pattern TagByron x = Z x +pattern TagShelley x = S (Z x) +pattern TagAllegra x = S (S (Z x)) +pattern TagMary x = S (S (S (Z x))) +pattern TagAlonzo x = S (S 
(S (S (Z x)))) +pattern TagBabbage x = S (S (S (S (S (Z x))))) +pattern TagConway x = S (S (S (S (S (S (Z x)))))) {------------------------------------------------------------------------------- INTERNAL A telescope function for each era -------------------------------------------------------------------------------} -pattern TeleByron :: - f ByronBlock - -> Telescope g f (CardanoEras c) +pattern TeleByron :: + f ByronBlock -> + Telescope g f (CardanoEras c) pattern TeleShelley :: - g ByronBlock - -> f (ShelleyBlock (TPraos c) ShelleyEra) - -> Telescope g f (CardanoEras c) + g ByronBlock -> + f (ShelleyBlock (TPraos c) ShelleyEra) -> + Telescope g f (CardanoEras c) pattern TeleAllegra :: - g ByronBlock - -> g (ShelleyBlock (TPraos c) ShelleyEra) - -> f (ShelleyBlock (TPraos c) AllegraEra) - -> Telescope g f (CardanoEras c) - -pattern TeleMary :: - g ByronBlock - -> g (ShelleyBlock (TPraos c) ShelleyEra) - -> g (ShelleyBlock (TPraos c) AllegraEra) - -> f (ShelleyBlock (TPraos c) MaryEra) - -> Telescope g f (CardanoEras c) - -pattern TeleAlonzo :: - g ByronBlock - -> g (ShelleyBlock (TPraos c) ShelleyEra) - -> g (ShelleyBlock (TPraos c) AllegraEra) - -> g (ShelleyBlock (TPraos c) MaryEra) - -> f (ShelleyBlock (TPraos c) AlonzoEra) - -> Telescope g f (CardanoEras c) - -pattern TeleBabbage :: - g ByronBlock - -> g (ShelleyBlock (TPraos c) ShelleyEra) - -> g (ShelleyBlock (TPraos c) AllegraEra) - -> g (ShelleyBlock (TPraos c) MaryEra) - -> g (ShelleyBlock (TPraos c) AlonzoEra) - -> f (ShelleyBlock (Praos c) BabbageEra) - -> Telescope g f (CardanoEras c) - -pattern TeleConway :: - g ByronBlock - -> g (ShelleyBlock (TPraos c) ShelleyEra) - -> g (ShelleyBlock (TPraos c) AllegraEra) - -> g (ShelleyBlock (TPraos c) MaryEra) - -> g (ShelleyBlock (TPraos c) AlonzoEra) - -> g (ShelleyBlock (Praos c) BabbageEra) - -> f (ShelleyBlock (Praos c) ConwayEra) - -> Telescope g f (CardanoEras c) + g ByronBlock -> + g (ShelleyBlock (TPraos c) ShelleyEra) -> + f (ShelleyBlock (TPraos c) 
AllegraEra) -> + Telescope g f (CardanoEras c) + +pattern TeleMary :: + g ByronBlock -> + g (ShelleyBlock (TPraos c) ShelleyEra) -> + g (ShelleyBlock (TPraos c) AllegraEra) -> + f (ShelleyBlock (TPraos c) MaryEra) -> + Telescope g f (CardanoEras c) + +pattern TeleAlonzo :: + g ByronBlock -> + g (ShelleyBlock (TPraos c) ShelleyEra) -> + g (ShelleyBlock (TPraos c) AllegraEra) -> + g (ShelleyBlock (TPraos c) MaryEra) -> + f (ShelleyBlock (TPraos c) AlonzoEra) -> + Telescope g f (CardanoEras c) + +pattern TeleBabbage :: + g ByronBlock -> + g (ShelleyBlock (TPraos c) ShelleyEra) -> + g (ShelleyBlock (TPraos c) AllegraEra) -> + g (ShelleyBlock (TPraos c) MaryEra) -> + g (ShelleyBlock (TPraos c) AlonzoEra) -> + f (ShelleyBlock (Praos c) BabbageEra) -> + Telescope g f (CardanoEras c) + +pattern TeleConway :: + g ByronBlock -> + g (ShelleyBlock (TPraos c) ShelleyEra) -> + g (ShelleyBlock (TPraos c) AllegraEra) -> + g (ShelleyBlock (TPraos c) MaryEra) -> + g (ShelleyBlock (TPraos c) AlonzoEra) -> + g (ShelleyBlock (Praos c) BabbageEra) -> + f (ShelleyBlock (Praos c) ConwayEra) -> + Telescope g f (CardanoEras c) -- Here we use layout and adjacency to make it obvious that we haven't -- miscounted. 
-pattern TeleByron x = TZ x -pattern TeleShelley byron x = TS byron (TZ x) -pattern TeleAllegra byron shelley x = TS byron (TS shelley (TZ x)) -pattern TeleMary byron shelley allegra x = TS byron (TS shelley (TS allegra (TZ x))) -pattern TeleAlonzo byron shelley allegra mary x = TS byron (TS shelley (TS allegra (TS mary (TZ x)))) -pattern TeleBabbage byron shelley allegra mary alonzo x = TS byron (TS shelley (TS allegra (TS mary (TS alonzo (TZ x))))) -pattern TeleConway byron shelley allegra mary alonzo babbage x = TS byron (TS shelley (TS allegra (TS mary (TS alonzo (TS babbage (TZ x)))))) +pattern TeleByron x = + TZ x +pattern TeleShelley byron x = + TS byron (TZ x) +pattern TeleAllegra byron shelley x = + TS byron (TS shelley (TZ x)) +pattern TeleMary byron shelley allegra x = + TS byron (TS shelley (TS allegra (TZ x))) +pattern TeleAlonzo byron shelley allegra mary x = + TS byron (TS shelley (TS allegra (TS mary (TZ x)))) +pattern TeleBabbage byron shelley allegra mary alonzo x = + TS byron (TS shelley (TS allegra (TS mary (TS alonzo (TZ x))))) +pattern TeleConway byron shelley allegra mary alonzo babbage x = + TS byron (TS shelley (TS allegra (TS mary (TS alonzo (TS babbage (TZ x)))))) {------------------------------------------------------------------------------- The block type of the Cardano block chain @@ -228,7 +345,6 @@ pattern TeleConway byron shelley allegra mary alonzo babbage x = TS byron (TS s -- > f (BlockAllegra a) = _ -- > f (BlockMary m) = _ -- > f (BlockAlonzo m) = _ --- type CardanoBlock c = HardForkBlock (CardanoEras c) pattern BlockByron :: ByronBlock -> CardanoBlock c @@ -253,7 +369,7 @@ pattern BlockConway :: ShelleyBlock (Praos c) ConwayEra -> CardanoBlock c pattern BlockConway b = HardForkBlock (OneEraBlock (TagConway (I b))) {-# COMPLETE - BlockByron + BlockByron , BlockShelley , BlockAllegra , BlockMary @@ -262,7 +378,6 @@ pattern BlockConway b = HardForkBlock (OneEraBlock (TagConway (I b))) , BlockConway #-} - 
{------------------------------------------------------------------------------- Headers -------------------------------------------------------------------------------} @@ -274,42 +389,43 @@ pattern HeaderByron :: Header ByronBlock -> CardanoHeader c pattern HeaderByron h = HardForkHeader (OneEraHeader (TagByron h)) pattern HeaderShelley :: - Header (ShelleyBlock (TPraos c) ShelleyEra) - -> CardanoHeader c + Header (ShelleyBlock (TPraos c) ShelleyEra) -> + CardanoHeader c pattern HeaderShelley h = HardForkHeader (OneEraHeader (TagShelley h)) pattern HeaderAllegra :: - Header (ShelleyBlock (TPraos c) AllegraEra) - -> CardanoHeader c + Header (ShelleyBlock (TPraos c) AllegraEra) -> + CardanoHeader c pattern HeaderAllegra h = HardForkHeader (OneEraHeader (TagAllegra h)) pattern HeaderMary :: - Header (ShelleyBlock (TPraos c) MaryEra) - -> CardanoHeader c + Header (ShelleyBlock (TPraos c) MaryEra) -> + CardanoHeader c pattern HeaderMary h = HardForkHeader (OneEraHeader (TagMary h)) pattern HeaderAlonzo :: - Header (ShelleyBlock (TPraos c) AlonzoEra) - -> CardanoHeader c + Header (ShelleyBlock (TPraos c) AlonzoEra) -> + CardanoHeader c pattern HeaderAlonzo h = HardForkHeader (OneEraHeader (TagAlonzo h)) pattern HeaderBabbage :: - Header (ShelleyBlock (Praos c) BabbageEra) - -> CardanoHeader c + Header (ShelleyBlock (Praos c) BabbageEra) -> + CardanoHeader c pattern HeaderBabbage h = HardForkHeader (OneEraHeader (TagBabbage h)) pattern HeaderConway :: - Header (ShelleyBlock (Praos c) ConwayEra) - -> CardanoHeader c + Header (ShelleyBlock (Praos c) ConwayEra) -> + CardanoHeader c pattern HeaderConway h = HardForkHeader (OneEraHeader (TagConway h)) -{-# COMPLETE HeaderByron - , HeaderShelley - , HeaderAllegra - , HeaderMary - , HeaderAlonzo - , HeaderBabbage - , HeaderConway +{-# COMPLETE + HeaderByron + , HeaderShelley + , HeaderAllegra + , HeaderMary + , HeaderAlonzo + , HeaderBabbage + , HeaderConway #-} 
{------------------------------------------------------------------------------- @@ -341,7 +457,7 @@ pattern GenTxConway :: GenTx (ShelleyBlock (Praos c) ConwayEra) -> CardanoGenTx pattern GenTxConway tx = HardForkGenTx (OneEraGenTx (TagConway tx)) {-# COMPLETE - GenTxByron + GenTxByron , GenTxShelley , GenTxAllegra , GenTxMary @@ -355,51 +471,52 @@ type CardanoGenTxId c = GenTxId (CardanoBlock c) pattern GenTxIdByron :: GenTxId ByronBlock -> CardanoGenTxId c pattern GenTxIdByron txid = - HardForkGenTxId (OneEraGenTxId (TagByron (WrapGenTxId txid))) + HardForkGenTxId (OneEraGenTxId (TagByron (WrapGenTxId txid))) pattern GenTxIdShelley :: - GenTxId (ShelleyBlock (TPraos c) ShelleyEra) - -> CardanoGenTxId c + GenTxId (ShelleyBlock (TPraos c) ShelleyEra) -> + CardanoGenTxId c pattern GenTxIdShelley txid = - HardForkGenTxId (OneEraGenTxId (TagShelley (WrapGenTxId txid))) + HardForkGenTxId (OneEraGenTxId (TagShelley (WrapGenTxId txid))) pattern GenTxIdAllegra :: - GenTxId (ShelleyBlock (TPraos c) AllegraEra) - -> CardanoGenTxId c + GenTxId (ShelleyBlock (TPraos c) AllegraEra) -> + CardanoGenTxId c pattern GenTxIdAllegra txid = - HardForkGenTxId (OneEraGenTxId (TagAllegra (WrapGenTxId txid))) + HardForkGenTxId (OneEraGenTxId (TagAllegra (WrapGenTxId txid))) pattern GenTxIdMary :: - GenTxId (ShelleyBlock (TPraos c) MaryEra) - -> CardanoGenTxId c + GenTxId (ShelleyBlock (TPraos c) MaryEra) -> + CardanoGenTxId c pattern GenTxIdMary txid = - HardForkGenTxId (OneEraGenTxId (TagMary (WrapGenTxId txid))) + HardForkGenTxId (OneEraGenTxId (TagMary (WrapGenTxId txid))) pattern GenTxIdAlonzo :: - GenTxId (ShelleyBlock (TPraos c) AlonzoEra) - -> CardanoGenTxId c + GenTxId (ShelleyBlock (TPraos c) AlonzoEra) -> + CardanoGenTxId c pattern GenTxIdAlonzo txid = - HardForkGenTxId (OneEraGenTxId (TagAlonzo (WrapGenTxId txid))) + HardForkGenTxId (OneEraGenTxId (TagAlonzo (WrapGenTxId txid))) pattern GenTxIdBabbage :: - GenTxId (ShelleyBlock (Praos c) BabbageEra) - -> CardanoGenTxId c + 
GenTxId (ShelleyBlock (Praos c) BabbageEra) -> + CardanoGenTxId c pattern GenTxIdBabbage txid = - HardForkGenTxId (OneEraGenTxId (TagBabbage (WrapGenTxId txid))) + HardForkGenTxId (OneEraGenTxId (TagBabbage (WrapGenTxId txid))) pattern GenTxIdConway :: - GenTxId (ShelleyBlock (Praos c) ConwayEra) - -> CardanoGenTxId c + GenTxId (ShelleyBlock (Praos c) ConwayEra) -> + CardanoGenTxId c pattern GenTxIdConway txid = - HardForkGenTxId (OneEraGenTxId (TagConway (WrapGenTxId txid))) - -{-# COMPLETE GenTxIdByron - , GenTxIdShelley - , GenTxIdAllegra - , GenTxIdMary - , GenTxIdAlonzo - , GenTxIdBabbage - , GenTxIdConway + HardForkGenTxId (OneEraGenTxId (TagConway (WrapGenTxId txid))) + +{-# COMPLETE + GenTxIdByron + , GenTxIdShelley + , GenTxIdAllegra + , GenTxIdMary + , GenTxIdAlonzo + , GenTxIdBabbage + , GenTxIdConway #-} -- | An error resulting from applying a 'CardanoGenTx' to the ledger. @@ -417,61 +534,61 @@ pattern GenTxIdConway txid = -- > "Transaction from the " <> otherEraName eraMismatch <> -- > " era applied to a ledger from the " <> -- > ledgerEraName eraMismatch <> " era" --- type CardanoApplyTxErr c = HardForkApplyTxErr (CardanoEras c) pattern ApplyTxErrByron :: ApplyTxErr ByronBlock -> CardanoApplyTxErr c pattern ApplyTxErrByron err = - HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagByron (WrapApplyTxErr err))) + HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagByron (WrapApplyTxErr err))) pattern ApplyTxErrShelley :: - ApplyTxErr (ShelleyBlock (TPraos c) ShelleyEra) - -> CardanoApplyTxErr c + ApplyTxErr (ShelleyBlock (TPraos c) ShelleyEra) -> + CardanoApplyTxErr c pattern ApplyTxErrShelley err = - HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagShelley (WrapApplyTxErr err))) + HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagShelley (WrapApplyTxErr err))) pattern ApplyTxErrAllegra :: - ApplyTxErr (ShelleyBlock (TPraos c) AllegraEra) - -> CardanoApplyTxErr c + ApplyTxErr (ShelleyBlock (TPraos c) AllegraEra) -> + CardanoApplyTxErr c pattern ApplyTxErrAllegra 
err =
-    HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagAllegra (WrapApplyTxErr err)))
+  HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagAllegra (WrapApplyTxErr err)))
 
 pattern ApplyTxErrMary ::
-     ApplyTxErr (ShelleyBlock (TPraos c) MaryEra)
-  -> CardanoApplyTxErr c
+  ApplyTxErr (ShelleyBlock (TPraos c) MaryEra) ->
+  CardanoApplyTxErr c
 pattern ApplyTxErrMary err =
-    HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagMary (WrapApplyTxErr err)))
+  HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagMary (WrapApplyTxErr err)))
 
 pattern ApplyTxErrAlonzo ::
-     ApplyTxErr (ShelleyBlock (TPraos c) AlonzoEra)
-  -> CardanoApplyTxErr c
+  ApplyTxErr (ShelleyBlock (TPraos c) AlonzoEra) ->
+  CardanoApplyTxErr c
 pattern ApplyTxErrAlonzo err =
-    HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagAlonzo (WrapApplyTxErr err)))
+  HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagAlonzo (WrapApplyTxErr err)))
 
 pattern ApplyTxErrBabbage ::
-     ApplyTxErr (ShelleyBlock (Praos c) BabbageEra)
-  -> CardanoApplyTxErr c
+  ApplyTxErr (ShelleyBlock (Praos c) BabbageEra) ->
+  CardanoApplyTxErr c
 pattern ApplyTxErrBabbage err =
-    HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagBabbage (WrapApplyTxErr err)))
+  HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagBabbage (WrapApplyTxErr err)))
 
 pattern ApplyTxErrConway ::
-     ApplyTxErr (ShelleyBlock (Praos c) ConwayEra)
-  -> CardanoApplyTxErr c
+  ApplyTxErr (ShelleyBlock (Praos c) ConwayEra) ->
+  CardanoApplyTxErr c
 pattern ApplyTxErrConway err =
-    HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagConway (WrapApplyTxErr err)))
+  HardForkApplyTxErrFromEra (OneEraApplyTxErr (TagConway (WrapApplyTxErr err)))
 
 pattern ApplyTxErrWrongEra :: EraMismatch -> CardanoApplyTxErr c
 pattern ApplyTxErrWrongEra eraMismatch <-
-    HardForkApplyTxErrWrongEra (mkEraMismatch -> eraMismatch)
-
-{-# COMPLETE ApplyTxErrByron
-  , ApplyTxErrShelley
-  , ApplyTxErrAllegra
-  , ApplyTxErrMary
-  , ApplyTxErrAlonzo
-  , ApplyTxErrBabbage
-  , ApplyTxErrConway
-  , ApplyTxErrWrongEra
+  HardForkApplyTxErrWrongEra (mkEraMismatch -> eraMismatch)
+
+{-# COMPLETE
+  ApplyTxErrByron
+  , ApplyTxErrShelley
+  , ApplyTxErrAllegra
+  , ApplyTxErrMary
+  , ApplyTxErrAlonzo
+  , ApplyTxErrBabbage
+  , ApplyTxErrConway
+  , ApplyTxErrWrongEra
   #-}
 
 {-------------------------------------------------------------------------------
@@ -493,67 +610,67 @@ pattern ApplyTxErrWrongEra eraMismatch <-
 -- > "Block from the " <> otherEraName eraMismatch <>
 -- > " era applied to a ledger from the " <>
 -- > ledgerEraName eraMismatch <> " era"
---
 type CardanoLedgerError c = HardForkLedgerError (CardanoEras c)
 
 pattern LedgerErrorByron :: LedgerError ByronBlock -> CardanoLedgerError c
 pattern LedgerErrorByron err =
-    HardForkLedgerErrorFromEra (OneEraLedgerError (TagByron (WrapLedgerErr err)))
+  HardForkLedgerErrorFromEra (OneEraLedgerError (TagByron (WrapLedgerErr err)))
 
 pattern LedgerErrorShelley ::
-     LedgerError (ShelleyBlock (TPraos c) ShelleyEra)
-  -> CardanoLedgerError c
+  LedgerError (ShelleyBlock (TPraos c) ShelleyEra) ->
+  CardanoLedgerError c
 pattern LedgerErrorShelley err =
-    HardForkLedgerErrorFromEra
-      (OneEraLedgerError (TagShelley (WrapLedgerErr err)))
+  HardForkLedgerErrorFromEra
+    (OneEraLedgerError (TagShelley (WrapLedgerErr err)))
 
 pattern LedgerErrorAllegra ::
-     LedgerError (ShelleyBlock (TPraos c) AllegraEra)
-  -> CardanoLedgerError c
+  LedgerError (ShelleyBlock (TPraos c) AllegraEra) ->
+  CardanoLedgerError c
 pattern LedgerErrorAllegra err =
-    HardForkLedgerErrorFromEra
-      (OneEraLedgerError (TagAllegra (WrapLedgerErr err)))
+  HardForkLedgerErrorFromEra
+    (OneEraLedgerError (TagAllegra (WrapLedgerErr err)))
 
 pattern LedgerErrorMary ::
-     LedgerError (ShelleyBlock (TPraos c) MaryEra)
-  -> CardanoLedgerError c
+  LedgerError (ShelleyBlock (TPraos c) MaryEra) ->
+  CardanoLedgerError c
 pattern LedgerErrorMary err =
-    HardForkLedgerErrorFromEra
-      (OneEraLedgerError (TagMary (WrapLedgerErr err)))
+  HardForkLedgerErrorFromEra
+    (OneEraLedgerError (TagMary (WrapLedgerErr err)))
 
 pattern LedgerErrorAlonzo ::
-     LedgerError (ShelleyBlock (TPraos c) AlonzoEra)
-  -> CardanoLedgerError c
+  LedgerError (ShelleyBlock (TPraos c) AlonzoEra) ->
+  CardanoLedgerError c
 pattern LedgerErrorAlonzo err =
-    HardForkLedgerErrorFromEra
-      (OneEraLedgerError (TagAlonzo (WrapLedgerErr err)))
+  HardForkLedgerErrorFromEra
+    (OneEraLedgerError (TagAlonzo (WrapLedgerErr err)))
 
 pattern LedgerErrorBabbage ::
-     LedgerError (ShelleyBlock (Praos c) BabbageEra)
-  -> CardanoLedgerError c
+  LedgerError (ShelleyBlock (Praos c) BabbageEra) ->
+  CardanoLedgerError c
 pattern LedgerErrorBabbage err =
-    HardForkLedgerErrorFromEra
-      (OneEraLedgerError (TagBabbage (WrapLedgerErr err)))
+  HardForkLedgerErrorFromEra
+    (OneEraLedgerError (TagBabbage (WrapLedgerErr err)))
 
 pattern LedgerErrorConway ::
-     LedgerError (ShelleyBlock (Praos c) ConwayEra)
-  -> CardanoLedgerError c
+  LedgerError (ShelleyBlock (Praos c) ConwayEra) ->
+  CardanoLedgerError c
 pattern LedgerErrorConway err =
-    HardForkLedgerErrorFromEra
-      (OneEraLedgerError (TagConway (WrapLedgerErr err)))
+  HardForkLedgerErrorFromEra
+    (OneEraLedgerError (TagConway (WrapLedgerErr err)))
 
 pattern LedgerErrorWrongEra :: EraMismatch -> CardanoLedgerError c
 pattern LedgerErrorWrongEra eraMismatch <-
-    HardForkLedgerErrorWrongEra (mkEraMismatch -> eraMismatch)
-
-{-# COMPLETE LedgerErrorByron
-  , LedgerErrorShelley
-  , LedgerErrorAllegra
-  , LedgerErrorMary
-  , LedgerErrorAlonzo
-  , LedgerErrorBabbage
-  , LedgerErrorConway
-  , LedgerErrorWrongEra
+  HardForkLedgerErrorWrongEra (mkEraMismatch -> eraMismatch)
+
+{-# COMPLETE
+  LedgerErrorByron
+  , LedgerErrorShelley
+  , LedgerErrorAllegra
+  , LedgerErrorMary
+  , LedgerErrorAlonzo
+  , LedgerErrorBabbage
+  , LedgerErrorConway
+  , LedgerErrorWrongEra
   #-}
 
 {-------------------------------------------------------------------------------
@@ -563,63 +680,64 @@ pattern LedgerErrorWrongEra eraMismatch <-
 -- | An error resulting from validating a 'CardanoHeader'.
 type CardanoOtherHeaderEnvelopeError c = HardForkEnvelopeErr (CardanoEras c)
 
-pattern OtherHeaderEnvelopeErrorByron
-  :: OtherHeaderEnvelopeError ByronBlock
-  -> CardanoOtherHeaderEnvelopeError c
+pattern OtherHeaderEnvelopeErrorByron ::
+  OtherHeaderEnvelopeError ByronBlock ->
+  CardanoOtherHeaderEnvelopeError c
 pattern OtherHeaderEnvelopeErrorByron err =
-    HardForkEnvelopeErrFromEra
-      (OneEraEnvelopeErr (TagByron (WrapEnvelopeErr err)))
+  HardForkEnvelopeErrFromEra
+    (OneEraEnvelopeErr (TagByron (WrapEnvelopeErr err)))
 
-pattern OtherHeaderEnvelopeErrorShelley
-  :: OtherHeaderEnvelopeError (ShelleyBlock (TPraos c) ShelleyEra)
-  -> CardanoOtherHeaderEnvelopeError c
+pattern OtherHeaderEnvelopeErrorShelley ::
+  OtherHeaderEnvelopeError (ShelleyBlock (TPraos c) ShelleyEra) ->
+  CardanoOtherHeaderEnvelopeError c
 pattern OtherHeaderEnvelopeErrorShelley err =
-    HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagShelley (WrapEnvelopeErr err)))
+  HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagShelley (WrapEnvelopeErr err)))
 
-pattern OtherHeaderEnvelopeErrorAllegra
-  :: OtherHeaderEnvelopeError (ShelleyBlock (TPraos c) AllegraEra)
-  -> CardanoOtherHeaderEnvelopeError c
+pattern OtherHeaderEnvelopeErrorAllegra ::
+  OtherHeaderEnvelopeError (ShelleyBlock (TPraos c) AllegraEra) ->
+  CardanoOtherHeaderEnvelopeError c
 pattern OtherHeaderEnvelopeErrorAllegra err =
-    HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagAllegra (WrapEnvelopeErr err)))
+  HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagAllegra (WrapEnvelopeErr err)))
 
-pattern OtherHeaderEnvelopeErrorMary
-  :: OtherHeaderEnvelopeError (ShelleyBlock (TPraos c) MaryEra)
-  -> CardanoOtherHeaderEnvelopeError c
+pattern OtherHeaderEnvelopeErrorMary ::
+  OtherHeaderEnvelopeError (ShelleyBlock (TPraos c) MaryEra) ->
+  CardanoOtherHeaderEnvelopeError c
 pattern OtherHeaderEnvelopeErrorMary err =
-    HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagMary (WrapEnvelopeErr err)))
+  HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagMary (WrapEnvelopeErr err)))
 
-pattern OtherHeaderEnvelopeErrorAlonzo
-  :: OtherHeaderEnvelopeError (ShelleyBlock (TPraos c) AlonzoEra)
-  -> CardanoOtherHeaderEnvelopeError c
+pattern OtherHeaderEnvelopeErrorAlonzo ::
+  OtherHeaderEnvelopeError (ShelleyBlock (TPraos c) AlonzoEra) ->
+  CardanoOtherHeaderEnvelopeError c
 pattern OtherHeaderEnvelopeErrorAlonzo err =
-    HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagAlonzo (WrapEnvelopeErr err)))
+  HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagAlonzo (WrapEnvelopeErr err)))
 
-pattern OtherHeaderEnvelopeErrorBabbage
-  :: OtherHeaderEnvelopeError (ShelleyBlock (Praos c) BabbageEra)
-  -> CardanoOtherHeaderEnvelopeError c
+pattern OtherHeaderEnvelopeErrorBabbage ::
  OtherHeaderEnvelopeError (ShelleyBlock (Praos c) BabbageEra) ->
+  CardanoOtherHeaderEnvelopeError c
 pattern OtherHeaderEnvelopeErrorBabbage err =
-    HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagBabbage (WrapEnvelopeErr err)))
+  HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagBabbage (WrapEnvelopeErr err)))
 
-pattern OtherHeaderEnvelopeErrorConway
-  :: OtherHeaderEnvelopeError (ShelleyBlock (Praos c) ConwayEra)
-  -> CardanoOtherHeaderEnvelopeError c
+pattern OtherHeaderEnvelopeErrorConway ::
+  OtherHeaderEnvelopeError (ShelleyBlock (Praos c) ConwayEra) ->
+  CardanoOtherHeaderEnvelopeError c
 pattern OtherHeaderEnvelopeErrorConway err =
-    HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagConway (WrapEnvelopeErr err)))
+  HardForkEnvelopeErrFromEra (OneEraEnvelopeErr (TagConway (WrapEnvelopeErr err)))
 
-pattern OtherHeaderEnvelopeErrorWrongEra
-  :: EraMismatch
-  -> CardanoOtherHeaderEnvelopeError c
+pattern OtherHeaderEnvelopeErrorWrongEra ::
+  EraMismatch ->
+  CardanoOtherHeaderEnvelopeError c
 pattern OtherHeaderEnvelopeErrorWrongEra eraMismatch <-
-    HardForkEnvelopeErrWrongEra (mkEraMismatch -> eraMismatch)
-
-{-# COMPLETE OtherHeaderEnvelopeErrorByron
-  , OtherHeaderEnvelopeErrorShelley
-  , OtherHeaderEnvelopeErrorAllegra
-  , OtherHeaderEnvelopeErrorMary
-  , OtherHeaderEnvelopeErrorAlonzo
-  , OtherHeaderEnvelopeErrorBabbage
-  , OtherHeaderEnvelopeErrorConway
-  , OtherHeaderEnvelopeErrorWrongEra
+  HardForkEnvelopeErrWrongEra (mkEraMismatch -> eraMismatch)
+
+{-# COMPLETE
+  OtherHeaderEnvelopeErrorByron
+  , OtherHeaderEnvelopeErrorShelley
+  , OtherHeaderEnvelopeErrorAllegra
+  , OtherHeaderEnvelopeErrorMary
+  , OtherHeaderEnvelopeErrorAlonzo
+  , OtherHeaderEnvelopeErrorBabbage
+  , OtherHeaderEnvelopeErrorConway
+  , OtherHeaderEnvelopeErrorWrongEra
   #-}
 
 {-------------------------------------------------------------------------------
@@ -633,42 +751,43 @@ pattern TipInfoByron :: TipInfo ByronBlock -> CardanoTipInfo c
 pattern TipInfoByron ti = OneEraTipInfo (TagByron (WrapTipInfo ti))
 
 pattern TipInfoShelley ::
-     TipInfo (ShelleyBlock (TPraos c) ShelleyEra)
-  -> CardanoTipInfo c
+  TipInfo (ShelleyBlock (TPraos c) ShelleyEra) ->
+  CardanoTipInfo c
 pattern TipInfoShelley ti = OneEraTipInfo (TagShelley (WrapTipInfo ti))
 
 pattern TipInfoAllegra ::
-     TipInfo (ShelleyBlock (TPraos c) AllegraEra)
-  -> CardanoTipInfo c
+  TipInfo (ShelleyBlock (TPraos c) AllegraEra) ->
+  CardanoTipInfo c
 pattern TipInfoAllegra ti = OneEraTipInfo (TagAllegra (WrapTipInfo ti))
 
 pattern TipInfoMary ::
-     TipInfo (ShelleyBlock (TPraos c) MaryEra)
-  -> CardanoTipInfo c
+  TipInfo (ShelleyBlock (TPraos c) MaryEra) ->
+  CardanoTipInfo c
 pattern TipInfoMary ti = OneEraTipInfo (TagMary (WrapTipInfo ti))
 
 pattern TipInfoAlonzo ::
-     TipInfo (ShelleyBlock (TPraos c) AlonzoEra)
-  -> CardanoTipInfo c
+  TipInfo (ShelleyBlock (TPraos c) AlonzoEra) ->
+  CardanoTipInfo c
 pattern TipInfoAlonzo ti = OneEraTipInfo (TagAlonzo (WrapTipInfo ti))
 
 pattern TipInfoBabbage ::
-     TipInfo (ShelleyBlock (Praos c) BabbageEra)
-  -> CardanoTipInfo c
+  TipInfo (ShelleyBlock (Praos c) BabbageEra) ->
+  CardanoTipInfo c
 pattern TipInfoBabbage ti = OneEraTipInfo (TagBabbage (WrapTipInfo ti))
 
 pattern TipInfoConway ::
-     TipInfo (ShelleyBlock (Praos c) ConwayEra)
-  -> CardanoTipInfo c
+  TipInfo (ShelleyBlock (Praos c) ConwayEra) ->
+  CardanoTipInfo c
 pattern TipInfoConway ti = OneEraTipInfo (TagConway (WrapTipInfo ti))
 
-{-# COMPLETE TipInfoByron
-  , TipInfoShelley
-  , TipInfoAllegra
-  , TipInfoMary
-  , TipInfoAlonzo
-  , TipInfoBabbage
-  , TipInfoConway
+{-# COMPLETE
+  TipInfoByron
+  , TipInfoShelley
+  , TipInfoAllegra
+  , TipInfoMary
+  , TipInfoAlonzo
+  , TipInfoBabbage
+  , TipInfoConway
   #-}
 
 {-------------------------------------------------------------------------------
@@ -680,70 +799,70 @@ type CardanoQuery c = BlockQuery (CardanoBlock c)
 
 -- | Byron-specific query that can only be answered when the ledger is in the
 -- Byron era.
-pattern QueryIfCurrentByron
-  :: ()
-  => CardanoQueryResult c result ~ a
-  => BlockQuery ByronBlock fp result
-  -> CardanoQuery c fp a
+pattern QueryIfCurrentByron ::
+  () =>
+  CardanoQueryResult c result ~ a =>
+  BlockQuery ByronBlock fp result ->
+  CardanoQuery c fp a
 
 -- | Shelley-specific query that can only be answered when the ledger is in the
 -- Shelley era.
-pattern QueryIfCurrentShelley
-  :: ()
-  => CardanoQueryResult c result ~ a
-  => BlockQuery (ShelleyBlock (TPraos c) ShelleyEra) fp result
-  -> CardanoQuery c fp a
+pattern QueryIfCurrentShelley ::
+  () =>
+  CardanoQueryResult c result ~ a =>
+  BlockQuery (ShelleyBlock (TPraos c) ShelleyEra) fp result ->
+  CardanoQuery c fp a
 
 -- | Allegra-specific query that can only be answered when the ledger is in the
 -- Allegra era.
-pattern QueryIfCurrentAllegra
-  :: ()
-  => CardanoQueryResult c result ~ a
-  => BlockQuery (ShelleyBlock (TPraos c) AllegraEra) fp result
-  -> CardanoQuery c fp a
+pattern QueryIfCurrentAllegra ::
  () =>
+  CardanoQueryResult c result ~ a =>
+  BlockQuery (ShelleyBlock (TPraos c) AllegraEra) fp result ->
+  CardanoQuery c fp a
 
 -- | Mary-specific query that can only be answered when the ledger is in the
 -- Mary era.
-pattern QueryIfCurrentMary
-  :: ()
-  => CardanoQueryResult c result ~ a
-  => BlockQuery (ShelleyBlock (TPraos c) MaryEra) fp result
-  -> CardanoQuery c fp a
+pattern QueryIfCurrentMary ::
+  () =>
+  CardanoQueryResult c result ~ a =>
+  BlockQuery (ShelleyBlock (TPraos c) MaryEra) fp result ->
+  CardanoQuery c fp a
 
 -- | Alonzo-specific query that can only be answered when the ledger is in the
 -- Alonzo era.
-pattern QueryIfCurrentAlonzo
-  :: ()
-  => CardanoQueryResult c result ~ a
-  => BlockQuery (ShelleyBlock (TPraos c) AlonzoEra) fp result
-  -> CardanoQuery c fp a
+pattern QueryIfCurrentAlonzo ::
+  () =>
+  CardanoQueryResult c result ~ a =>
+  BlockQuery (ShelleyBlock (TPraos c) AlonzoEra) fp result ->
+  CardanoQuery c fp a
 
 -- | Babbage-specific query that can only be answered when the ledger is in the
 -- Babbage era.
-pattern QueryIfCurrentBabbage
-  :: ()
-  => CardanoQueryResult c result ~ a
-  => BlockQuery (ShelleyBlock (Praos c) BabbageEra) fp result
-  -> CardanoQuery c fp a
+pattern QueryIfCurrentBabbage ::
+  () =>
+  CardanoQueryResult c result ~ a =>
+  BlockQuery (ShelleyBlock (Praos c) BabbageEra) fp result ->
+  CardanoQuery c fp a
 
 -- | Conway-specific query that can only be answered when the ledger is in the
 -- Conway era.
-pattern QueryIfCurrentConway
-  :: ()
-  => CardanoQueryResult c result ~ a
-  => BlockQuery (ShelleyBlock (Praos c) ConwayEra) fp result
-  -> CardanoQuery c fp a
+pattern QueryIfCurrentConway ::
+  () =>
+  CardanoQueryResult c result ~ a =>
+  BlockQuery (ShelleyBlock (Praos c) ConwayEra) fp result ->
+  CardanoQuery c fp a
 
 -- Here we use layout and adjacency to make it obvious that we haven't
 -- miscounted.
 
-pattern QueryIfCurrentByron   q = QueryIfCurrent (QZ q)
-pattern QueryIfCurrentShelley q = QueryIfCurrent (QS (QZ q))
-pattern QueryIfCurrentAllegra q = QueryIfCurrent (QS (QS (QZ q)))
-pattern QueryIfCurrentMary    q = QueryIfCurrent (QS (QS (QS (QZ q))))
-pattern QueryIfCurrentAlonzo  q = QueryIfCurrent (QS (QS (QS (QS (QZ q)))))
-pattern QueryIfCurrentBabbage q = QueryIfCurrent (QS (QS (QS (QS (QS (QZ q))))))
-pattern QueryIfCurrentConway  q = QueryIfCurrent (QS (QS (QS (QS (QS (QS (QZ q)))))))
+pattern QueryIfCurrentByron q = QueryIfCurrent (QZ q)
+pattern QueryIfCurrentShelley q = QueryIfCurrent (QS (QZ q))
+pattern QueryIfCurrentAllegra q = QueryIfCurrent (QS (QS (QZ q)))
+pattern QueryIfCurrentMary q = QueryIfCurrent (QS (QS (QS (QZ q))))
+pattern QueryIfCurrentAlonzo q = QueryIfCurrent (QS (QS (QS (QS (QZ q)))))
+pattern QueryIfCurrentBabbage q = QueryIfCurrent (QS (QS (QS (QS (QS (QZ q))))))
+pattern QueryIfCurrentConway q = QueryIfCurrent (QS (QS (QS (QS (QS (QS (QZ q)))))))
 
 -- | Query about the Byron era that can be answered anytime, i.e.,
 -- independent from where the tip of the ledger is.
@@ -752,10 +871,9 @@ pattern QueryIfCurrentConway q = QueryIfCurrent (QS (QS (QS (QS (QS (QS (QZ q))
 -- the ledger is in the Byron, Shelley, ... era), use:
 --
 -- > QueryAnytimeByron EraStart
---
-pattern QueryAnytimeByron
-  :: QueryAnytime result
-  -> CardanoQuery c QFNoTables result
+pattern QueryAnytimeByron ::
+  QueryAnytime result ->
+  CardanoQuery c QFNoTables result
 pattern QueryAnytimeByron q = QueryAnytime q (EraIndex (TagByron (K ())))
 
 -- | Query about the Shelley era that can be answered anytime, i.e.,
@@ -765,10 +883,9 @@ pattern QueryAnytimeByron q = QueryAnytime q (EraIndex (TagByron (K ())))
 -- ledger is in the Byron, Shelley, ... era), use:
 --
 -- > QueryAnytimeShelley EraStart
---
-pattern QueryAnytimeShelley
-  :: QueryAnytime result
-  -> CardanoQuery c QFNoTables result
+pattern QueryAnytimeShelley ::
+  QueryAnytime result ->
+  CardanoQuery c QFNoTables result
 pattern QueryAnytimeShelley q = QueryAnytime q (EraIndex (TagShelley (K ())))
 
 -- | Query about the Allegra era that can be answered anytime, i.e.,
@@ -778,10 +895,9 @@ pattern QueryAnytimeShelley q = QueryAnytime q (EraIndex (TagShelley (K ())))
 -- ledger is in the Byron, Shelley, ... era), use:
 --
 -- > QueryAnytimeAllegra EraStart
---
-pattern QueryAnytimeAllegra
  :: QueryAnytime result
-  -> CardanoQuery c QFNoTables result
+pattern QueryAnytimeAllegra ::
+  QueryAnytime result ->
+  CardanoQuery c QFNoTables result
 pattern QueryAnytimeAllegra q = QueryAnytime q (EraIndex (TagAllegra (K ())))
 
 -- | Query about the Mary era that can be answered anytime, i.e.,
@@ -791,10 +907,9 @@ pattern QueryAnytimeAllegra q = QueryAnytime q (EraIndex (TagAllegra (K ())))
 -- ledger is in the Byron, Shelley, ... era), use:
 --
 -- > QueryAnytimeMary EraStart
---
-pattern QueryAnytimeMary
-  :: QueryAnytime result
-  -> CardanoQuery c QFNoTables result
+pattern QueryAnytimeMary ::
+  QueryAnytime result ->
+  CardanoQuery c QFNoTables result
 pattern QueryAnytimeMary q = QueryAnytime q (EraIndex (TagMary (K ())))
 
 -- | Query about the Alonzo era that can be answered anytime, i.e., independent
@@ -804,10 +919,9 @@ pattern QueryAnytimeMary q = QueryAnytime q (EraIndex (TagMary (K ())))
 -- ledger is in the Byron, Shelley, ... era), use:
 --
 -- > QueryAnytimeAlonzo EraStart
---
-pattern QueryAnytimeAlonzo
-  :: QueryAnytime result
-  -> CardanoQuery c QFNoTables result
+pattern QueryAnytimeAlonzo ::
+  QueryAnytime result ->
+  CardanoQuery c QFNoTables result
 pattern QueryAnytimeAlonzo q = QueryAnytime q (EraIndex (TagAlonzo (K ())))
 
 -- | Query about the Babbage era that can be answered anytime, i.e., independent
@@ -817,10 +931,9 @@ pattern QueryAnytimeAlonzo q = QueryAnytime q (EraIndex (TagAlonzo (K ())))
 -- ledger is in the Byron, Shelley, ... era), use:
 --
 -- > QueryAnytimeBabbage EraStart
---
-pattern QueryAnytimeBabbage
-  :: QueryAnytime result
-  -> CardanoQuery c QFNoTables result
+pattern QueryAnytimeBabbage ::
+  QueryAnytime result ->
+  CardanoQuery c QFNoTables result
 pattern QueryAnytimeBabbage q = QueryAnytime q (EraIndex (TagBabbage (K ())))
 
 -- | Query about the Conway era that can be answered anytime, i.e., independent
@@ -830,26 +943,26 @@ pattern QueryAnytimeBabbage q = QueryAnytime q (EraIndex (TagBabbage (K ())))
 -- ledger is in the Byron, Shelley, ... era), use:
 --
 -- > QueryAnytimeConway EraStart
---
-pattern QueryAnytimeConway
-  :: QueryAnytime result
-  -> CardanoQuery c QFNoTables result
+pattern QueryAnytimeConway ::
+  QueryAnytime result ->
+  CardanoQuery c QFNoTables result
 pattern QueryAnytimeConway q = QueryAnytime q (EraIndex (TagConway (K ())))
 
-{-# COMPLETE QueryIfCurrentByron
-  , QueryIfCurrentShelley
-  , QueryIfCurrentAllegra
-  , QueryIfCurrentMary
-  , QueryIfCurrentAlonzo
-  , QueryIfCurrentBabbage
-  , QueryAnytimeByron
-  , QueryAnytimeShelley
-  , QueryAnytimeAllegra
-  , QueryAnytimeMary
-  , QueryAnytimeAlonzo
-  , QueryAnytimeBabbage
-  , QueryAnytimeConway
-  , QueryHardFork
+{-# COMPLETE
+  QueryIfCurrentByron
+  , QueryIfCurrentShelley
+  , QueryIfCurrentAllegra
+  , QueryIfCurrentMary
+  , QueryIfCurrentAlonzo
+  , QueryIfCurrentBabbage
+  , QueryAnytimeByron
+  , QueryAnytimeShelley
+  , QueryAnytimeAllegra
+  , QueryAnytimeMary
+  , QueryAnytimeAlonzo
+  , QueryAnytimeBabbage
+  , QueryAnytimeConway
+  , QueryHardFork
   #-}
 
 -- | The result of a 'CardanoQuery'
@@ -877,28 +990,29 @@ pattern QueryResultEraMismatch eraMismatch <- Left (mkEraMismatch -> eraMismatch
 -- the Byron, Shelley, ... 'CodecConfig's.
 type CardanoCodecConfig c = CodecConfig (CardanoBlock c)
 
-pattern CardanoCodecConfig
-  :: CodecConfig ByronBlock
-  -> CodecConfig (ShelleyBlock (TPraos c) ShelleyEra)
-  -> CodecConfig (ShelleyBlock (TPraos c) AllegraEra)
-  -> CodecConfig (ShelleyBlock (TPraos c) MaryEra)
-  -> CodecConfig (ShelleyBlock (TPraos c) AlonzoEra)
-  -> CodecConfig (ShelleyBlock (Praos c) BabbageEra)
-  -> CodecConfig (ShelleyBlock (Praos c) ConwayEra)
-  -> CardanoCodecConfig c
+pattern CardanoCodecConfig ::
+  CodecConfig ByronBlock ->
+  CodecConfig (ShelleyBlock (TPraos c) ShelleyEra) ->
+  CodecConfig (ShelleyBlock (TPraos c) AllegraEra) ->
+  CodecConfig (ShelleyBlock (TPraos c) MaryEra) ->
+  CodecConfig (ShelleyBlock (TPraos c) AlonzoEra) ->
+  CodecConfig (ShelleyBlock (Praos c) BabbageEra) ->
+  CodecConfig (ShelleyBlock (Praos c) ConwayEra) ->
+  CardanoCodecConfig c
 pattern CardanoCodecConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cfgBabbage cfgConway =
-  HardForkCodecConfig {
-    hardForkCodecConfigPerEra = PerEraCodecConfig
-      ( cfgByron
-      :* cfgShelley
-      :* cfgAllegra
-      :* cfgMary
-      :* cfgAlonzo
-      :* cfgBabbage
-      :* cfgConway
-      :* Nil
+  HardForkCodecConfig
+    { hardForkCodecConfigPerEra =
+        PerEraCodecConfig
+          ( cfgByron
+              :* cfgShelley
+              :* cfgAllegra
+              :* cfgMary
+              :* cfgAlonzo
+              :* cfgBabbage
+              :* cfgConway
+              :* Nil
           )
-    }
+    }
 
 {-# COMPLETE CardanoCodecConfig #-}
 
@@ -912,28 +1026,29 @@ pattern CardanoCodecConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cfgB
 -- the Byron, Shelley, ... 'BlockConfig's.
 type CardanoBlockConfig c = BlockConfig (CardanoBlock c)
 
-pattern CardanoBlockConfig
-  :: BlockConfig ByronBlock
-  -> BlockConfig (ShelleyBlock (TPraos c) ShelleyEra)
-  -> BlockConfig (ShelleyBlock (TPraos c) AllegraEra)
-  -> BlockConfig (ShelleyBlock (TPraos c) MaryEra)
-  -> BlockConfig (ShelleyBlock (TPraos c) AlonzoEra)
-  -> BlockConfig (ShelleyBlock (Praos c) BabbageEra)
-  -> BlockConfig (ShelleyBlock (Praos c) ConwayEra)
-  -> CardanoBlockConfig c
+pattern CardanoBlockConfig ::
+  BlockConfig ByronBlock ->
+  BlockConfig (ShelleyBlock (TPraos c) ShelleyEra) ->
+  BlockConfig (ShelleyBlock (TPraos c) AllegraEra) ->
+  BlockConfig (ShelleyBlock (TPraos c) MaryEra) ->
+  BlockConfig (ShelleyBlock (TPraos c) AlonzoEra) ->
+  BlockConfig (ShelleyBlock (Praos c) BabbageEra) ->
+  BlockConfig (ShelleyBlock (Praos c) ConwayEra) ->
+  CardanoBlockConfig c
 pattern CardanoBlockConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cfgBabbage cfgConway =
-  HardForkBlockConfig {
-    hardForkBlockConfigPerEra = PerEraBlockConfig
-      ( cfgByron
-      :* cfgShelley
-      :* cfgAllegra
-      :* cfgMary
-      :* cfgAlonzo
-      :* cfgBabbage
-      :* cfgConway
-      :* Nil
+  HardForkBlockConfig
+    { hardForkBlockConfigPerEra =
+        PerEraBlockConfig
+          ( cfgByron
+              :* cfgShelley
+              :* cfgAllegra
+              :* cfgMary
+              :* cfgAlonzo
+              :* cfgBabbage
+              :* cfgConway
+              :* Nil
          )
-    }
+    }
 
 {-# COMPLETE CardanoBlockConfig #-}
 
@@ -947,28 +1062,29 @@ pattern CardanoBlockConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cfgB
 -- the Byron, Shelley, ... 'StorageConfig's.
 type CardanoStorageConfig c = StorageConfig (CardanoBlock c)
 
-pattern CardanoStorageConfig
-  :: StorageConfig ByronBlock
-  -> StorageConfig (ShelleyBlock (TPraos c) ShelleyEra)
-  -> StorageConfig (ShelleyBlock (TPraos c) AllegraEra)
-  -> StorageConfig (ShelleyBlock (TPraos c) MaryEra)
-  -> StorageConfig (ShelleyBlock (TPraos c) AlonzoEra)
-  -> StorageConfig (ShelleyBlock (Praos c) BabbageEra)
-  -> StorageConfig (ShelleyBlock (Praos c) ConwayEra)
-  -> CardanoStorageConfig c
+pattern CardanoStorageConfig ::
+  StorageConfig ByronBlock ->
+  StorageConfig (ShelleyBlock (TPraos c) ShelleyEra) ->
+  StorageConfig (ShelleyBlock (TPraos c) AllegraEra) ->
+  StorageConfig (ShelleyBlock (TPraos c) MaryEra) ->
+  StorageConfig (ShelleyBlock (TPraos c) AlonzoEra) ->
+  StorageConfig (ShelleyBlock (Praos c) BabbageEra) ->
+  StorageConfig (ShelleyBlock (Praos c) ConwayEra) ->
+  CardanoStorageConfig c
 pattern CardanoStorageConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cfgBabbage cfgConway =
-  HardForkStorageConfig {
-    hardForkStorageConfigPerEra = PerEraStorageConfig
-      ( cfgByron
-      :* cfgShelley
-      :* cfgAllegra
-      :* cfgMary
-      :* cfgAlonzo
-      :* cfgBabbage
-      :* cfgConway
-      :* Nil
+  HardForkStorageConfig
+    { hardForkStorageConfigPerEra =
+        PerEraStorageConfig
+          ( cfgByron
+              :* cfgShelley
+              :* cfgAllegra
+              :* cfgMary
+              :* cfgAlonzo
+              :* cfgBabbage
+              :* cfgConway
+              :* Nil
          )
-    }
+    }
 
 {-# COMPLETE CardanoStorageConfig #-}
 
@@ -985,28 +1101,29 @@ pattern CardanoStorageConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cf
 type CardanoConsensusConfig c =
  ConsensusConfig (HardForkProtocol (CardanoEras c))
 
-pattern CardanoConsensusConfig
-  :: PartialConsensusConfig (BlockProtocol ByronBlock)
-  -> PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) ShelleyEra))
-  -> PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) AllegraEra))
-  -> PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) MaryEra))
-  -> PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) AlonzoEra))
-  -> PartialConsensusConfig (BlockProtocol (ShelleyBlock (Praos c) BabbageEra))
-  -> PartialConsensusConfig (BlockProtocol (ShelleyBlock (Praos c) ConwayEra))
-  -> CardanoConsensusConfig c
+pattern CardanoConsensusConfig ::
+  PartialConsensusConfig (BlockProtocol ByronBlock) ->
+  PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) ShelleyEra)) ->
+  PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) AllegraEra)) ->
+  PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) MaryEra)) ->
+  PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) AlonzoEra)) ->
+  PartialConsensusConfig (BlockProtocol (ShelleyBlock (Praos c) BabbageEra)) ->
+  PartialConsensusConfig (BlockProtocol (ShelleyBlock (Praos c) ConwayEra)) ->
+  CardanoConsensusConfig c
 pattern CardanoConsensusConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cfgBabbage cfgConway <-
-  HardForkConsensusConfig {
-    hardForkConsensusConfigPerEra = PerEraConsensusConfig
-      ( WrapPartialConsensusConfig cfgByron
-      :* WrapPartialConsensusConfig cfgShelley
-      :* WrapPartialConsensusConfig cfgAllegra
-      :* WrapPartialConsensusConfig cfgMary
-      :* WrapPartialConsensusConfig cfgAlonzo
-      :* WrapPartialConsensusConfig cfgBabbage
-      :* WrapPartialConsensusConfig cfgConway
-      :* Nil
+  HardForkConsensusConfig
+    { hardForkConsensusConfigPerEra =
+        PerEraConsensusConfig
+          ( WrapPartialConsensusConfig cfgByron
+              :* WrapPartialConsensusConfig cfgShelley
+              :* WrapPartialConsensusConfig cfgAllegra
+              :* WrapPartialConsensusConfig cfgMary
+              :* WrapPartialConsensusConfig cfgAlonzo
+              :* WrapPartialConsensusConfig cfgBabbage
+              :* WrapPartialConsensusConfig cfgConway
+              :* Nil
          )
-    }
+    }
 
 {-# COMPLETE CardanoConsensusConfig #-}
 
@@ -1022,28 +1139,29 @@ pattern CardanoConsensusConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo
 -- NOTE: not 'LedgerConfig', but 'PartialLedgerConfig'.
 type CardanoLedgerConfig c = HardForkLedgerConfig (CardanoEras c)
 
-pattern CardanoLedgerConfig
-  :: PartialLedgerConfig ByronBlock
-  -> PartialLedgerConfig (ShelleyBlock (TPraos c) ShelleyEra)
-  -> PartialLedgerConfig (ShelleyBlock (TPraos c) AllegraEra)
-  -> PartialLedgerConfig (ShelleyBlock (TPraos c) MaryEra)
-  -> PartialLedgerConfig (ShelleyBlock (TPraos c) AlonzoEra)
-  -> PartialLedgerConfig (ShelleyBlock (Praos c) BabbageEra)
-  -> PartialLedgerConfig (ShelleyBlock (Praos c) ConwayEra)
-  -> CardanoLedgerConfig c
+pattern CardanoLedgerConfig ::
+  PartialLedgerConfig ByronBlock ->
+  PartialLedgerConfig (ShelleyBlock (TPraos c) ShelleyEra) ->
+  PartialLedgerConfig (ShelleyBlock (TPraos c) AllegraEra) ->
+  PartialLedgerConfig (ShelleyBlock (TPraos c) MaryEra) ->
+  PartialLedgerConfig (ShelleyBlock (TPraos c) AlonzoEra) ->
+  PartialLedgerConfig (ShelleyBlock (Praos c) BabbageEra) ->
+  PartialLedgerConfig (ShelleyBlock (Praos c) ConwayEra) ->
+  CardanoLedgerConfig c
 pattern CardanoLedgerConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cfgBabbage cfgConway <-
-  HardForkLedgerConfig {
-    hardForkLedgerConfigPerEra = PerEraLedgerConfig
-      ( WrapPartialLedgerConfig cfgByron
-      :* WrapPartialLedgerConfig cfgShelley
-      :* WrapPartialLedgerConfig cfgAllegra
-      :* WrapPartialLedgerConfig cfgMary
-      :* WrapPartialLedgerConfig cfgAlonzo
-      :* WrapPartialLedgerConfig cfgBabbage
-      :* WrapPartialLedgerConfig cfgConway
-      :* Nil
+  HardForkLedgerConfig
+    { hardForkLedgerConfigPerEra =
+        PerEraLedgerConfig
+          ( WrapPartialLedgerConfig cfgByron
+              :* WrapPartialLedgerConfig cfgShelley
+              :* WrapPartialLedgerConfig cfgAllegra
+              :* WrapPartialLedgerConfig cfgMary
+              :* WrapPartialLedgerConfig cfgAlonzo
+              :* WrapPartialLedgerConfig cfgBabbage
+              :* WrapPartialLedgerConfig cfgConway
+              :* Nil
          )
-    }
+    }
 
 {-# COMPLETE CardanoLedgerConfig #-}
 
@@ -1059,69 +1177,77 @@ pattern CardanoLedgerConfig cfgByron cfgShelley cfgAllegra cfgMary cfgAlonzo cfg
 -- bidirectional.
 type CardanoLedgerState c mk = LedgerState (CardanoBlock c) mk
 
-pattern LedgerStateByron
-  :: LedgerState ByronBlock mk
-  -> CardanoLedgerState c mk
+pattern LedgerStateByron ::
+  LedgerState ByronBlock mk ->
+  CardanoLedgerState c mk
 pattern LedgerStateByron st <-
-  HardForkLedgerState
-    (State.HardForkState
-      (TeleByron (State.Current { currentState = Flip st })))
-
-pattern LedgerStateShelley
-  :: LedgerState (ShelleyBlock (TPraos c) ShelleyEra) mk
-  -> CardanoLedgerState c mk
+  HardForkLedgerState
+    ( State.HardForkState
+        (TeleByron (State.Current{currentState = Flip st}))
+    )
+
+pattern LedgerStateShelley ::
+  LedgerState (ShelleyBlock (TPraos c) ShelleyEra) mk ->
+  CardanoLedgerState c mk
 pattern LedgerStateShelley st <-
-  HardForkLedgerState
-    (State.HardForkState
-      (TeleShelley _ (State.Current { currentState = Flip st })))
-
-pattern LedgerStateAllegra
-  :: LedgerState (ShelleyBlock (TPraos c) AllegraEra) mk
-  -> CardanoLedgerState c mk
+  HardForkLedgerState
+    ( State.HardForkState
+        (TeleShelley _ (State.Current{currentState = Flip st}))
+    )
+
+pattern LedgerStateAllegra ::
+  LedgerState (ShelleyBlock (TPraos c) AllegraEra) mk ->
+  CardanoLedgerState c mk
 pattern LedgerStateAllegra st <-
-  HardForkLedgerState
-    (State.HardForkState
-      (TeleAllegra _ _ (State.Current { currentState = Flip st })))
-
-pattern LedgerStateMary
-  :: LedgerState (ShelleyBlock (TPraos c) MaryEra) mk
-  -> CardanoLedgerState c mk
+  HardForkLedgerState
+    ( State.HardForkState
+        (TeleAllegra _ _ (State.Current{currentState = Flip st}))
+    )
+
+pattern LedgerStateMary ::
+  LedgerState (ShelleyBlock (TPraos c) MaryEra) mk ->
+  CardanoLedgerState c mk
 pattern LedgerStateMary st <-
-  HardForkLedgerState
-    (State.HardForkState
-      (TeleMary _ _ _ (State.Current { currentState = Flip st })))
-
-pattern LedgerStateAlonzo
-  :: LedgerState (ShelleyBlock (TPraos c) AlonzoEra) mk
-  -> CardanoLedgerState c mk
+  HardForkLedgerState
+    ( State.HardForkState
+        (TeleMary _ _ _ (State.Current{currentState = Flip st}))
+    )
+
+pattern LedgerStateAlonzo ::
+  LedgerState (ShelleyBlock (TPraos c) AlonzoEra) mk ->
+  CardanoLedgerState c mk
 pattern LedgerStateAlonzo st <-
-  HardForkLedgerState
-    (State.HardForkState
-      (TeleAlonzo _ _ _ _ (State.Current { currentState = Flip st })))
-
-pattern LedgerStateBabbage
-  :: LedgerState (ShelleyBlock (Praos c) BabbageEra) mk
-  -> CardanoLedgerState c mk
+  HardForkLedgerState
+    ( State.HardForkState
+        (TeleAlonzo _ _ _ _ (State.Current{currentState = Flip st}))
+    )
+
+pattern LedgerStateBabbage ::
+  LedgerState (ShelleyBlock (Praos c) BabbageEra) mk ->
+  CardanoLedgerState c mk
 pattern LedgerStateBabbage st <-
-  HardForkLedgerState
-    (State.HardForkState
-      (TeleBabbage _ _ _ _ _ (State.Current { currentState = Flip st })))
-
-pattern LedgerStateConway
-  :: LedgerState (ShelleyBlock (Praos c) ConwayEra) mk
-  -> CardanoLedgerState c mk
+  HardForkLedgerState
+    ( State.HardForkState
+        (TeleBabbage _ _ _ _ _ (State.Current{currentState = Flip st}))
+    )
+
+pattern LedgerStateConway ::
+  LedgerState (ShelleyBlock (Praos c) ConwayEra) mk ->
+  CardanoLedgerState c mk
 pattern LedgerStateConway st <-
-  HardForkLedgerState
-    (State.HardForkState
-      (TeleConway _ _ _ _ _ _ (State.Current { currentState = Flip st })))
-
-{-# COMPLETE LedgerStateByron
-  , LedgerStateShelley
-  , LedgerStateAllegra
-  , LedgerStateMary
-  , LedgerStateAlonzo
-  , LedgerStateBabbage
-  , LedgerStateConway
+  HardForkLedgerState
+    ( State.HardForkState
+        (TeleConway _ _ _ _ _ _ (State.Current{currentState = Flip st}))
+    )
+
+{-# COMPLETE
+  LedgerStateByron
+  , LedgerStateShelley
+  , LedgerStateAllegra
+  , LedgerStateMary
+  , LedgerStateAlonzo
+  , LedgerStateBabbage
+  , LedgerStateConway
   #-}
 
 {-------------------------------------------------------------------------------
@@ -1136,60 +1262,61 @@ pattern LedgerStateConway st <-
 -- bidirectional.
 type CardanoChainDepState c = HardForkChainDepState (CardanoEras c)
 
-pattern ChainDepStateByron
-  :: ChainDepState (BlockProtocol ByronBlock)
-  -> CardanoChainDepState c
+pattern ChainDepStateByron ::
+  ChainDepState (BlockProtocol ByronBlock) ->
+  CardanoChainDepState c
 pattern ChainDepStateByron st <-
-  State.HardForkState
-    (TeleByron (State.Current { currentState = WrapChainDepState st }))
+  State.HardForkState
+    (TeleByron (State.Current{currentState = WrapChainDepState st}))
 
-pattern ChainDepStateShelley
-  :: ChainDepState (BlockProtocol (ShelleyBlock (TPraos c) ShelleyEra))
-  -> CardanoChainDepState c
+pattern ChainDepStateShelley ::
+  ChainDepState (BlockProtocol (ShelleyBlock (TPraos c) ShelleyEra)) ->
+  CardanoChainDepState c
 pattern ChainDepStateShelley st <-
-  State.HardForkState
-    (TeleShelley _ (State.Current { currentState = WrapChainDepState st }))
+  State.HardForkState
+    (TeleShelley _ (State.Current{currentState = WrapChainDepState st}))
 
-pattern ChainDepStateAllegra
-  :: ChainDepState (BlockProtocol (ShelleyBlock (TPraos c) AllegraEra))
-  -> CardanoChainDepState c
+pattern ChainDepStateAllegra ::
+  ChainDepState (BlockProtocol (ShelleyBlock (TPraos c) AllegraEra)) ->
+  CardanoChainDepState c
 pattern ChainDepStateAllegra st <-
-  State.HardForkState
-    (TeleAllegra _ _ (State.Current { currentState = WrapChainDepState st }))
+  State.HardForkState
+    (TeleAllegra _ _ (State.Current{currentState = WrapChainDepState st}))
 
-pattern ChainDepStateMary
-  :: ChainDepState (BlockProtocol (ShelleyBlock (TPraos c) MaryEra))
-  -> CardanoChainDepState c
+pattern ChainDepStateMary ::
+  ChainDepState (BlockProtocol (ShelleyBlock (TPraos c) MaryEra)) ->
+  CardanoChainDepState c
 pattern ChainDepStateMary st <-
-  State.HardForkState
-    (TeleMary _ _ _ (State.Current { currentState = WrapChainDepState st }))
+  State.HardForkState
+    (TeleMary _ _ _ (State.Current{currentState = WrapChainDepState st}))
 
-pattern ChainDepStateAlonzo
-  :: ChainDepState (BlockProtocol (ShelleyBlock (TPraos c) AlonzoEra))
-  -> CardanoChainDepState c
+pattern ChainDepStateAlonzo ::
+  ChainDepState (BlockProtocol (ShelleyBlock (TPraos c) AlonzoEra)) ->
+  CardanoChainDepState c
 pattern ChainDepStateAlonzo st <-
-  State.HardForkState
-    (TeleAlonzo _ _ _ _ (State.Current { currentState = WrapChainDepState st }))
+  State.HardForkState
+    (TeleAlonzo _ _ _ _ (State.Current{currentState = WrapChainDepState st}))
 
-pattern ChainDepStateBabbage
-  :: ChainDepState (BlockProtocol (ShelleyBlock (Praos c) BabbageEra))
-  -> CardanoChainDepState c
+pattern ChainDepStateBabbage ::
+  ChainDepState (BlockProtocol (ShelleyBlock (Praos c) BabbageEra)) ->
+  CardanoChainDepState c
 pattern ChainDepStateBabbage st <-
-  State.HardForkState
-    (TeleBabbage _ _ _ _ _ (State.Current { currentState = WrapChainDepState st }))
+  State.HardForkState
+    (TeleBabbage _ _ _ _ _ (State.Current{currentState = WrapChainDepState st}))
 
-pattern ChainDepStateConway
-  :: ChainDepState (BlockProtocol (ShelleyBlock (Praos c) ConwayEra))
-  -> CardanoChainDepState c
+pattern ChainDepStateConway ::
+  ChainDepState (BlockProtocol (ShelleyBlock (Praos c) ConwayEra)) ->
+  CardanoChainDepState c
 pattern ChainDepStateConway st <-
-  State.HardForkState
-    (TeleConway _ _ _ _ _ _ (State.Current { currentState = WrapChainDepState st }))
-
-{-# COMPLETE ChainDepStateByron
-  , ChainDepStateShelley
-  , ChainDepStateAllegra
-  , ChainDepStateMary
-  , ChainDepStateAlonzo
-  , ChainDepStateBabbage
-  , ChainDepStateConway
+  State.HardForkState
+    (TeleConway _ _ _ _ _ _ (State.Current{currentState = WrapChainDepState st}))
+
+{-# COMPLETE
+  ChainDepStateByron
+  , ChainDepStateShelley
+  , ChainDepStateAllegra
+  , ChainDepStateMary
+  , ChainDepStateAlonzo
+  , ChainDepStateBabbage
+  , ChainDepStateConway
   #-}
diff --git a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/CanHardFork.hs 
b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/CanHardFork.hs index 5e30589a63..259fd42315 100644 --- a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/CanHardFork.hs +++ b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/CanHardFork.hs @@ -12,76 +12,85 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Cardano.CanHardFork ( - CardanoHardForkConstraints +module Ouroboros.Consensus.Cardano.CanHardFork + ( CardanoHardForkConstraints , TriggerHardFork (..) + -- * Re-exports of Shelley code , ShelleyPartialLedgerConfig (..) , crossEraForecastAcrossShelley , translateChainDepStateAcrossShelley + -- * Exposed for testing , getConwayTranslationContext ) where - -import Cardano.Ledger.Allegra.Translation - (shelleyToAllegraAVVMsToDelete) -import qualified Cardano.Ledger.BaseTypes as SL -import qualified Cardano.Ledger.Core as SL -import qualified Cardano.Ledger.Genesis as SL -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Ledger.Shelley.Translation - (toFromByronTranslationContext) -import qualified Cardano.Protocol.TPraos.API as SL -import qualified Cardano.Protocol.TPraos.Rules.Prtcl as SL -import qualified Cardano.Protocol.TPraos.Rules.Tickn as SL -import Control.Monad.Except (runExcept, throwError) -import Data.Coerce (coerce) -import qualified Data.Map.Strict as Map -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Functors (Flip (..)) -import Data.SOP.InPairs (RequiringBoth (..), ignoringBoth) -import qualified Data.SOP.Strict as SOP -import Data.SOP.Tails (Tails (..)) -import qualified Data.SOP.Tails as Tails -import Data.Void -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.ByronHFC () -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Node () -import 
Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Ouroboros.Consensus.HardFork.History (Bound (boundSlot), - addSlots) -import Ouroboros.Consensus.HardFork.Simple -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool (ByteSize32, - IgnoringOverflow, TxMeasure) -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Protocol.Abstract hiding - (translateChainDepState) -import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) -import qualified Ouroboros.Consensus.Protocol.PBFT.State as PBftState -import Ouroboros.Consensus.Protocol.Praos (Praos) -import qualified Ouroboros.Consensus.Protocol.Praos as Praos -import Ouroboros.Consensus.Protocol.TPraos -import qualified Ouroboros.Consensus.Protocol.TPraos as TPraos -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Node () -import Ouroboros.Consensus.Shelley.Protocol.Praos () -import Ouroboros.Consensus.Shelley.ShelleyHFC -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (eitherToMaybe) +import Cardano.Ledger.Allegra.Translation + ( shelleyToAllegraAVVMsToDelete + ) +import Cardano.Ledger.BaseTypes qualified as SL +import Cardano.Ledger.Core qualified as SL +import Cardano.Ledger.Genesis qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Translation + ( toFromByronTranslationContext + ) +import Cardano.Protocol.TPraos.API qualified as SL +import Cardano.Protocol.TPraos.Rules.Prtcl qualified as SL +import Cardano.Protocol.TPraos.Rules.Tickn qualified as SL +import Control.Monad.Except (runExcept, throwError) +import Data.Coerce (coerce) 
+import Data.Map.Strict qualified as Map +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Functors (Flip (..)) +import Data.SOP.InPairs (RequiringBoth (..), ignoringBoth) +import Data.SOP.Strict qualified as SOP +import Data.SOP.Tails (Tails (..)) +import Data.SOP.Tails qualified as Tails +import Data.Void +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.ByronHFC () +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Node () +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.HardFork.History + ( Bound (boundSlot) + , addSlots + ) +import Ouroboros.Consensus.HardFork.Simple +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool + ( ByteSize32 + , IgnoringOverflow + , TxMeasure + ) +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Protocol.Abstract hiding + ( translateChainDepState + ) +import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) +import Ouroboros.Consensus.Protocol.PBFT.State qualified as PBftState +import Ouroboros.Consensus.Protocol.Praos (Praos) +import Ouroboros.Consensus.Protocol.Praos qualified as Praos +import Ouroboros.Consensus.Protocol.TPraos +import Ouroboros.Consensus.Protocol.TPraos qualified as TPraos +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Node () +import Ouroboros.Consensus.Shelley.Protocol.Praos () +import Ouroboros.Consensus.Shelley.ShelleyHFC +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (eitherToMaybe) {------------------------------------------------------------------------------- 
CanHardFork @@ -118,278 +127,285 @@ type CardanoHardForkConstraints c = instance CardanoHardForkConstraints c => CanHardFork (CardanoEras c) where type HardForkTxMeasure (CardanoEras c) = ConwayMeasure - hardForkEraTranslation = EraTranslation { - translateLedgerState = - PCons translateLedgerStateByronToShelleyWrapper - $ PCons translateLedgerStateShelleyToAllegraWrapper - $ PCons translateLedgerStateAllegraToMaryWrapper - $ PCons translateLedgerStateMaryToAlonzoWrapper - $ PCons translateLedgerStateAlonzoToBabbageWrapper - $ PCons translateLedgerStateBabbageToConwayWrapper - $ PNil - , translateLedgerTables = - PCons translateLedgerTablesByronToShelleyWrapper - $ PCons translateLedgerTablesShelleyToAllegraWrapper - $ PCons translateLedgerTablesAllegraToMaryWrapper - $ PCons translateLedgerTablesMaryToAlonzoWrapper - $ PCons translateLedgerTablesAlonzoToBabbageWrapper - $ PCons translateLedgerTablesBabbageToConwayWrapper - $ PNil - , translateChainDepState = - PCons translateChainDepStateByronToShelleyWrapper - $ PCons translateChainDepStateAcrossShelley - $ PCons translateChainDepStateAcrossShelley - $ PCons translateChainDepStateAcrossShelley - $ PCons translateChainDepStateAcrossShelley - $ PCons translateChainDepStateAcrossShelley - $ PNil - , crossEraForecast = - PCons crossEraForecastByronToShelleyWrapper - $ PCons crossEraForecastAcrossShelley - $ PCons crossEraForecastAcrossShelley - $ PCons crossEraForecastAcrossShelley - $ PCons crossEraForecastAcrossShelley - $ PCons crossEraForecastAcrossShelley - $ PNil - } + hardForkEraTranslation = + EraTranslation + { translateLedgerState = + PCons translateLedgerStateByronToShelleyWrapper $ + PCons translateLedgerStateShelleyToAllegraWrapper $ + PCons translateLedgerStateAllegraToMaryWrapper $ + PCons translateLedgerStateMaryToAlonzoWrapper $ + PCons translateLedgerStateAlonzoToBabbageWrapper $ + PCons translateLedgerStateBabbageToConwayWrapper $ + PNil + , translateLedgerTables = + PCons 
translateLedgerTablesByronToShelleyWrapper $ + PCons translateLedgerTablesShelleyToAllegraWrapper $ + PCons translateLedgerTablesAllegraToMaryWrapper $ + PCons translateLedgerTablesMaryToAlonzoWrapper $ + PCons translateLedgerTablesAlonzoToBabbageWrapper $ + PCons translateLedgerTablesBabbageToConwayWrapper $ + PNil + , translateChainDepState = + PCons translateChainDepStateByronToShelleyWrapper $ + PCons translateChainDepStateAcrossShelley $ + PCons translateChainDepStateAcrossShelley $ + PCons translateChainDepStateAcrossShelley $ + PCons translateChainDepStateAcrossShelley $ + PCons translateChainDepStateAcrossShelley $ + PNil + , crossEraForecast = + PCons crossEraForecastByronToShelleyWrapper $ + PCons crossEraForecastAcrossShelley $ + PCons crossEraForecastAcrossShelley $ + PCons crossEraForecastAcrossShelley $ + PCons crossEraForecastAcrossShelley $ + PCons crossEraForecastAcrossShelley $ + PNil + } hardForkChainSel = - -- Byron <-> Shelley, ... - TCons (SOP.hpure CompareBlockNo) - -- Inter-Shelley-based - $ Tails.hcpure (Proxy @(HasPraosSelectView c)) CompareSameSelectView + -- Byron <-> Shelley, ... 
+ TCons (SOP.hpure CompareBlockNo) + -- Inter-Shelley-based + $ + Tails.hcpure (Proxy @(HasPraosSelectView c)) CompareSameSelectView hardForkInjectTxs = - PCons (ignoringBoth $ Pair2 cannotInjectTx cannotInjectValidatedTx) - $ PCons ( ignoringBoth - $ Pair2 - translateTxShelleyToAllegraWrapper - translateValidatedTxShelleyToAllegraWrapper - ) - $ PCons ( ignoringBoth - $ Pair2 - translateTxAllegraToMaryWrapper - translateValidatedTxAllegraToMaryWrapper - ) - $ PCons (RequireBoth $ \_cfgMary cfgAlonzo -> - let ctxt = getAlonzoTranslationContext cfgAlonzo - in - Pair2 - (translateTxMaryToAlonzoWrapper ctxt) + PCons (ignoringBoth $ Pair2 cannotInjectTx cannotInjectValidatedTx) + $ PCons + ( ignoringBoth $ + Pair2 + translateTxShelleyToAllegraWrapper + translateValidatedTxShelleyToAllegraWrapper + ) + $ PCons + ( ignoringBoth $ + Pair2 + translateTxAllegraToMaryWrapper + translateValidatedTxAllegraToMaryWrapper + ) + $ PCons + ( RequireBoth $ \_cfgMary cfgAlonzo -> + let ctxt = getAlonzoTranslationContext cfgAlonzo + in Pair2 + (translateTxMaryToAlonzoWrapper ctxt) (translateValidatedTxMaryToAlonzoWrapper ctxt) - ) - $ PCons (RequireBoth $ \_cfgAlonzo _cfgBabbage -> - let ctxt = SL.NoGenesis - in - Pair2 - (translateTxAlonzoToBabbageWrapper ctxt) + ) + $ PCons + ( RequireBoth $ \_cfgAlonzo _cfgBabbage -> + let ctxt = SL.NoGenesis + in Pair2 + (translateTxAlonzoToBabbageWrapper ctxt) (translateValidatedTxAlonzoToBabbageWrapper ctxt) - ) - $ PCons (RequireBoth $ \_cfgBabbage cfgConway -> - let ctxt = getConwayTranslationContext cfgConway - in - Pair2 - (translateTxBabbageToConwayWrapper ctxt) + ) + $ PCons + ( RequireBoth $ \_cfgBabbage cfgConway -> + let ctxt = getConwayTranslationContext cfgConway + in Pair2 + (translateTxBabbageToConwayWrapper ctxt) (translateValidatedTxBabbageToConwayWrapper ctxt) - ) + ) $ PNil hardForkInjTxMeasure = - fromByteSize `o` - fromByteSize `o` - fromByteSize `o` - fromByteSize `o` - fromAlonzo `o` - fromConway `o` - fromConway `o` - nil - 
where - nil :: SOP.NS f '[] -> a - nil = \case {} - - infixr `o` - o :: - (TxMeasure x -> a) - -> (SOP.NS WrapTxMeasure xs -> a) - -> SOP.NS WrapTxMeasure (x : xs) - -> a - o f g = \case - SOP.Z (WrapTxMeasure x) -> f x - SOP.S y -> g y - - fromByteSize :: IgnoringOverflow ByteSize32 -> ConwayMeasure - fromByteSize x = fromAlonzo $ AlonzoMeasure x mempty - fromAlonzo x = fromConway $ ConwayMeasure x mempty - fromConway x = x - -class (SelectView (BlockProtocol blk) ~ PraosChainSelectView c) => HasPraosSelectView c blk -instance (SelectView (BlockProtocol blk) ~ PraosChainSelectView c) => HasPraosSelectView c blk + fromByteSize + `o` fromByteSize + `o` fromByteSize + `o` fromByteSize + `o` fromAlonzo + `o` fromConway + `o` fromConway + `o` nil + where + nil :: SOP.NS f '[] -> a + nil = \case {} + + infixr 9 `o` + o :: + (TxMeasure x -> a) -> + (SOP.NS WrapTxMeasure xs -> a) -> + SOP.NS WrapTxMeasure (x : xs) -> + a + o f g = \case + SOP.Z (WrapTxMeasure x) -> f x + SOP.S y -> g y + + fromByteSize :: IgnoringOverflow ByteSize32 -> ConwayMeasure + fromByteSize x = fromAlonzo $ AlonzoMeasure x mempty + fromAlonzo x = fromConway $ ConwayMeasure x mempty + fromConway x = x + +class SelectView (BlockProtocol blk) ~ PraosChainSelectView c => HasPraosSelectView c blk +instance SelectView (BlockProtocol blk) ~ PraosChainSelectView c => HasPraosSelectView c blk {------------------------------------------------------------------------------- Translation from Byron to Shelley -------------------------------------------------------------------------------} translateHeaderHashByronToShelley :: - forall c. - ShelleyCompatible (TPraos c) ShelleyEra - => Proxy c - -> HeaderHash ByronBlock - -> HeaderHash (ShelleyBlock (TPraos c) ShelleyEra) + forall c. 
+ ShelleyCompatible (TPraos c) ShelleyEra => + Proxy c -> + HeaderHash ByronBlock -> + HeaderHash (ShelleyBlock (TPraos c) ShelleyEra) translateHeaderHashByronToShelley _ = - fromShortRawHash (Proxy @(ShelleyBlock (TPraos c) ShelleyEra)) - . toShortRawHash (Proxy @ByronBlock) + fromShortRawHash (Proxy @(ShelleyBlock (TPraos c) ShelleyEra)) + . toShortRawHash (Proxy @ByronBlock) translatePointByronToShelley :: - forall c. - ( ShelleyCompatible (TPraos c) ShelleyEra - ) - => Point ByronBlock - -> WithOrigin BlockNo - -> WithOrigin (ShelleyTip (TPraos c) ShelleyEra) + forall c. + ShelleyCompatible (TPraos c) ShelleyEra => + Point ByronBlock -> + WithOrigin BlockNo -> + WithOrigin (ShelleyTip (TPraos c) ShelleyEra) translatePointByronToShelley point bNo = - case (point, bNo) of - (GenesisPoint, Origin) -> - Origin - (BlockPoint s h, NotOrigin n) -> NotOrigin ShelleyTip { - shelleyTipSlotNo = s - , shelleyTipBlockNo = n - , shelleyTipHash = translateHeaderHashByronToShelley (Proxy @c) h - } - _otherwise -> - error "translatePointByronToShelley: invalid Byron state" + case (point, bNo) of + (GenesisPoint, Origin) -> + Origin + (BlockPoint s h, NotOrigin n) -> + NotOrigin + ShelleyTip + { shelleyTipSlotNo = s + , shelleyTipBlockNo = n + , shelleyTipHash = translateHeaderHashByronToShelley (Proxy @c) h + } + _otherwise -> + error "translatePointByronToShelley: invalid Byron state" translateLedgerStateByronToShelleyWrapper :: - ( ShelleyCompatible (TPraos c) ShelleyEra - ) - => RequiringBoth - WrapLedgerConfig - TranslateLedgerState - ByronBlock - (ShelleyBlock (TPraos c) ShelleyEra) + ShelleyCompatible (TPraos c) ShelleyEra => + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + ByronBlock + (ShelleyBlock (TPraos c) ShelleyEra) translateLedgerStateByronToShelleyWrapper = - RequireBoth - $ \_ (WrapLedgerConfig cfgShelley) -> - TranslateLedgerState { - translateLedgerStateWith = \epochNo ledgerByron -> - valuesAsDiffs + RequireBoth $ + \_ (WrapLedgerConfig cfgShelley) 
-> + TranslateLedgerState + { translateLedgerStateWith = \epochNo ledgerByron -> + valuesAsDiffs . unstowLedgerTables - $ ShelleyLedgerState { - shelleyLedgerTip = - translatePointByronToShelley + $ ShelleyLedgerState + { shelleyLedgerTip = + translatePointByronToShelley (ledgerTipPoint ledgerByron) (byronLedgerTipBlockNo ledgerByron) - , shelleyLedgerState = - SL.translateToShelleyLedgerState - (toFromByronTranslationContext (shelleyLedgerGenesis cfgShelley)) - epochNo - (byronLedgerState ledgerByron) - , shelleyLedgerTransition = - ShelleyTransitionInfo{shelleyAfterVoting = 0} - , shelleyLedgerTables = emptyLedgerTables - } - } + , shelleyLedgerState = + SL.translateToShelleyLedgerState + (toFromByronTranslationContext (shelleyLedgerGenesis cfgShelley)) + epochNo + (byronLedgerState ledgerByron) + , shelleyLedgerTransition = + ShelleyTransitionInfo{shelleyAfterVoting = 0} + , shelleyLedgerTables = emptyLedgerTables + } + } translateLedgerTablesByronToShelleyWrapper :: - TranslateLedgerTables ByronBlock (ShelleyBlock (TPraos c) ShelleyEra) -translateLedgerTablesByronToShelleyWrapper = TranslateLedgerTables { - translateTxInWith = absurd + TranslateLedgerTables ByronBlock (ShelleyBlock (TPraos c) ShelleyEra) +translateLedgerTablesByronToShelleyWrapper = + TranslateLedgerTables + { translateTxInWith = absurd , translateTxOutWith = absurd } translateChainDepStateByronToShelleyWrapper :: - RequiringBoth - WrapConsensusConfig - (Translate WrapChainDepState) - ByronBlock - (ShelleyBlock (TPraos c) ShelleyEra) + RequiringBoth + WrapConsensusConfig + (Translate WrapChainDepState) + ByronBlock + (ShelleyBlock (TPraos c) ShelleyEra) translateChainDepStateByronToShelleyWrapper = - RequireBoth $ \_ (WrapConsensusConfig cfgShelley) -> - Translate $ \_ (WrapChainDepState pbftState) -> - WrapChainDepState $ - translateChainDepStateByronToShelley cfgShelley pbftState + RequireBoth $ \_ (WrapConsensusConfig cfgShelley) -> + Translate $ \_ (WrapChainDepState pbftState) -> + 
WrapChainDepState $ + translateChainDepStateByronToShelley cfgShelley pbftState translateChainDepStateByronToShelley :: - forall bc c. - ConsensusConfig (TPraos c) - -> PBftState bc - -> TPraosState -translateChainDepStateByronToShelley TPraosConfig { tpraosParams } pbftState = - -- Note that the 'PBftState' doesn't know about EBBs. So if the last slot of - -- the Byron era were occupied by an EBB (and no regular block in that same - -- slot), we would pick the wrong slot here, i.e., the slot of the regular - -- block before the EBB. - -- - -- Fortunately, this is impossible for two reasons: - -- - -- 1. On mainnet we stopped producing EBBs a while before the transition. - -- 2. The transition happens at the start of an epoch, so if the last slot - -- were occupied by an EBB, it must have been the EBB at the start of the - -- previous epoch. This means the previous epoch must have been empty, - -- which is a violation of the "@k@ blocks per @2k@ slots" property. - TPraosState (PBftState.lastSignedSlot pbftState) $ - SL.ChainDepState - { SL.csProtocol = SL.PrtclState Map.empty nonce nonce - , SL.csTickn = SL.TicknState { - SL.ticknStateEpochNonce = nonce + forall bc c. + ConsensusConfig (TPraos c) -> + PBftState bc -> + TPraosState +translateChainDepStateByronToShelley TPraosConfig{tpraosParams} pbftState = + -- Note that the 'PBftState' doesn't know about EBBs. So if the last slot of + -- the Byron era were occupied by an EBB (and no regular block in that same + -- slot), we would pick the wrong slot here, i.e., the slot of the regular + -- block before the EBB. + -- + -- Fortunately, this is impossible for two reasons: + -- + -- 1. On mainnet we stopped producing EBBs a while before the transition. + -- 2. The transition happens at the start of an epoch, so if the last slot + -- were occupied by an EBB, it must have been the EBB at the start of the + -- previous epoch. 
This means the previous epoch must have been empty, + -- which is a violation of the "@k@ blocks per @2k@ slots" property. + TPraosState (PBftState.lastSignedSlot pbftState) $ + SL.ChainDepState + { SL.csProtocol = SL.PrtclState Map.empty nonce nonce + , SL.csTickn = + SL.TicknState + { SL.ticknStateEpochNonce = nonce , SL.ticknStatePrevHashNonce = SL.NeutralNonce } - -- Overridden before used - , SL.csLabNonce = SL.NeutralNonce - } - where - nonce = tpraosInitialNonce tpraosParams + , -- Overridden before used + SL.csLabNonce = SL.NeutralNonce + } + where + nonce = tpraosInitialNonce tpraosParams crossEraForecastByronToShelleyWrapper :: - forall c. - RequiringBoth - WrapLedgerConfig - (CrossEraForecaster LedgerState WrapLedgerView) - ByronBlock - (ShelleyBlock (TPraos c) ShelleyEra) + forall c. + RequiringBoth + WrapLedgerConfig + (CrossEraForecaster LedgerState WrapLedgerView) + ByronBlock + (ShelleyBlock (TPraos c) ShelleyEra) crossEraForecastByronToShelleyWrapper = - RequireBoth $ \_ (WrapLedgerConfig cfgShelley) -> - CrossEraForecaster (forecast cfgShelley) - where - -- We ignore the Byron ledger view and create a new Shelley. - -- - -- The full Shelley forecast range (stability window) starts from the first - -- slot of the Shelley era, no matter how many slots there are between the - -- Byron ledger and the first Shelley slot. Note that this number of slots - -- is still guaranteed to be less than the forecast range of the HFC in the - -- Byron era. 
- forecast :: - ShelleyLedgerConfig ShelleyEra - -> Bound - -> SlotNo - -> LedgerState ByronBlock mk - -> Except - OutsideForecastRange - (WrapLedgerView (ShelleyBlock (TPraos c) ShelleyEra)) - forecast cfgShelley bound forecastFor currentByronState - | forecastFor < maxFor - = return $ - WrapLedgerView $ - SL.mkInitialShelleyLedgerView - (toFromByronTranslationContext (shelleyLedgerGenesis cfgShelley)) - | otherwise - = throwError $ OutsideForecastRange { - outsideForecastAt = ledgerTipSlot currentByronState + RequireBoth $ \_ (WrapLedgerConfig cfgShelley) -> + CrossEraForecaster (forecast cfgShelley) + where + -- We ignore the Byron ledger view and create a new Shelley. + -- + -- The full Shelley forecast range (stability window) starts from the first + -- slot of the Shelley era, no matter how many slots there are between the + -- Byron ledger and the first Shelley slot. Note that this number of slots + -- is still guaranteed to be less than the forecast range of the HFC in the + -- Byron era. + forecast :: + ShelleyLedgerConfig ShelleyEra -> + Bound -> + SlotNo -> + LedgerState ByronBlock mk -> + Except + OutsideForecastRange + (WrapLedgerView (ShelleyBlock (TPraos c) ShelleyEra)) + forecast cfgShelley bound forecastFor currentByronState + | forecastFor < maxFor = + return $ + WrapLedgerView $ + SL.mkInitialShelleyLedgerView + (toFromByronTranslationContext (shelleyLedgerGenesis cfgShelley)) + | otherwise = + throwError $ + OutsideForecastRange + { outsideForecastAt = ledgerTipSlot currentByronState , outsideForecastMaxFor = maxFor - , outsideForecastFor = forecastFor + , outsideForecastFor = forecastFor } - where - globals = shelleyLedgerGlobals cfgShelley - swindow = SL.stabilityWindow globals - - -- This is the exclusive upper bound of the forecast range - -- - -- If Shelley's stability window is 0, it means we can't forecast /at - -- all/ in the Shelley era. Not even to the first slot in the Shelley - -- era! 
Remember that forecasting to slot @S@ means forecasting the - -- ledger view obtained from the ledger state /after/ applying the block - -- with slot @S@. If the stability window is 0, we can't even forecast - -- after the very first "virtual" Shelley block, meaning we can't - -- forecast into the Shelley era when still in the Byron era. - maxFor :: SlotNo - maxFor = addSlots swindow (boundSlot bound) + where + globals = shelleyLedgerGlobals cfgShelley + swindow = SL.stabilityWindow globals + + -- This is the exclusive upper bound of the forecast range + -- + -- If Shelley's stability window is 0, it means we can't forecast /at + -- all/ in the Shelley era. Not even to the first slot in the Shelley + -- era! Remember that forecasting to slot @S@ means forecasting the + -- ledger view obtained from the ledger state /after/ applying the block + -- with slot @S@. If the stability window is 0, we can't even forecast + -- after the very first "virtual" Shelley block, meaning we can't + -- forecast into the Shelley era when still in the Byron era. + maxFor :: SlotNo + maxFor = addSlots swindow (boundSlot bound) {------------------------------------------------------------------------------- Translation from Shelley to Allegra @@ -397,67 +413,73 @@ crossEraForecastByronToShelleyWrapper = translateLedgerStateShelleyToAllegraWrapper :: RequiringBoth - WrapLedgerConfig - TranslateLedgerState - (ShelleyBlock (TPraos c) ShelleyEra) - (ShelleyBlock (TPraos c) AllegraEra) + WrapLedgerConfig + TranslateLedgerState + (ShelleyBlock (TPraos c) ShelleyEra) + (ShelleyBlock (TPraos c) AllegraEra) translateLedgerStateShelleyToAllegraWrapper = - ignoringBoth $ - TranslateLedgerState { - translateLedgerStateWith = \_epochNo ls -> - -- In the Shelley to Allegra transition, the AVVM addresses have - -- to be deleted, and their balance has to be moved to the - -- reserves. 
For this matter, the Ledger keeps track of these - -- small set of entries since the Byron to Shelley transition and - -- provides them to us through 'shelleyToAllegraAVVMsToDelete'. + ignoringBoth $ + TranslateLedgerState + { translateLedgerStateWith = \_epochNo ls -> + -- In the Shelley to Allegra transition, the AVVM addresses have + -- to be deleted, and their balance has to be moved to the + -- reserves. For this matter, the Ledger keeps track of this + -- small set of entries since the Byron to Shelley transition and + -- provides them to us through 'shelleyToAllegraAVVMsToDelete'. + -- + -- In the long run, the ledger will already use ledger states + -- parametrized by the map kind and therefore will already provide + -- the differences in this translation. + let avvms = + SL.unUTxO $ + shelleyToAllegraAVVMsToDelete $ + shelleyLedgerState ls + + -- While technically we can diff the LedgerTables, it becomes + -- complex doing so, as we cannot perform operations with + -- 'LedgerTables l1 mk' and 'LedgerTables l2 mk'. Because of + -- this, for now we choose to generate the differences out of + -- thin air as we know that in this era translation these are + -- the only differences produced. -- - -- In the long run, the ledger will already use ledger states - -- parametrized by the map kind and therefore will already provide - -- the differences in this translation. - let avvms = SL.unUTxO - $ shelleyToAllegraAVVMsToDelete - $ shelleyLedgerState ls - - -- While techically we can diff the LedgerTables, it becomes - -- complex doing so, as we cannot perform operations with - -- 'LedgerTables l1 mk' and 'LedgerTables l2 mk'. Because of - -- this, for now we choose to generate the differences out of - -- thin air as we know that in this era translation these are - -- the only differences produced.
- avvmsAsDeletions = LedgerTables - . DiffMK - . Diff.fromMapDeletes - . Map.map SL.upgradeTxOut - $ avvms - - -- This 'stowLedgerTables' + 'withLedgerTables' injects the - -- values provided by the Ledger so that the translation - -- operation finds those entries in the UTxO and destroys - -- them, modifying the reserves accordingly. - stowedState = stowLedgerTables - . withLedgerTables ls - . LedgerTables - . ValuesMK - $ avvms - - resultingState = unFlip . unComp - . SL.translateEra' SL.NoGenesis - . Comp . Flip - $ stowedState - - in resultingState `withLedgerTables` avvmsAsDeletions - } + -- When adding more tables, this decision might need to be + -- revisited, as there might be other diffs produced in the + -- translation. + avvmsAsDeletions = + LedgerTables + . DiffMK + . Diff.fromMapDeletes + . Map.map SL.upgradeTxOut + $ avvms + + -- This 'stowLedgerTables' + 'withLedgerTables' injects the + -- values provided by the Ledger so that the translation + -- operation finds those entries in the UTxO and destroys + -- them, modifying the reserves accordingly. + stowedState = + stowLedgerTables + . withLedgerTables ls + . LedgerTables + . ValuesMK + $ avvms + + resultingState = + unFlip + . unComp + . SL.translateEra' SL.NoGenesis + . Comp + . Flip + $ stowedState + in resultingState `withLedgerTables` avvmsAsDeletions + } translateLedgerTablesShelleyToAllegraWrapper :: TranslateLedgerTables (ShelleyBlock (TPraos c) ShelleyEra) (ShelleyBlock (TPraos c) AllegraEra) -translateLedgerTablesShelleyToAllegraWrapper = TranslateLedgerTables { - translateTxInWith = coerce +translateLedgerTablesShelleyToAllegraWrapper = + TranslateLedgerTables + { translateTxInWith = coerce , translateTxOutWith = SL.upgradeTxOut } @@ -465,14 +487,16 @@ translateTxShelleyToAllegraWrapper :: InjectTx (ShelleyBlock (TPraos c) ShelleyEra) (ShelleyBlock (TPraos c) AllegraEra) -translateTxShelleyToAllegraWrapper = InjectTx $ +translateTxShelleyToAllegraWrapper = + InjectTx $ fmap unComp . 
eitherToMaybe . runExcept . SL.translateEra SL.NoGenesis . Comp translateValidatedTxShelleyToAllegraWrapper :: InjectValidatedTx (ShelleyBlock (TPraos c) ShelleyEra) (ShelleyBlock (TPraos c) AllegraEra) -translateValidatedTxShelleyToAllegraWrapper = InjectValidatedTx $ +translateValidatedTxShelleyToAllegraWrapper = + InjectValidatedTx $ fmap unComp . eitherToMaybe . runExcept . SL.translateEra SL.NoGenesis . Comp {------------------------------------------------------------------------------- @@ -486,23 +510,24 @@ translateLedgerStateAllegraToMaryWrapper :: (ShelleyBlock (TPraos c) AllegraEra) (ShelleyBlock (TPraos c) MaryEra) translateLedgerStateAllegraToMaryWrapper = - ignoringBoth $ - TranslateLedgerState { - translateLedgerStateWith = \_epochNo -> - noNewTickingDiffs - . unFlip - . unComp - . SL.translateEra' SL.NoGenesis - . Comp - . Flip - } + ignoringBoth $ + TranslateLedgerState + { translateLedgerStateWith = \_epochNo -> + noNewTickingDiffs + . unFlip + . unComp + . SL.translateEra' SL.NoGenesis + . Comp + . Flip + } translateLedgerTablesAllegraToMaryWrapper :: TranslateLedgerTables (ShelleyBlock (TPraos c) AllegraEra) (ShelleyBlock (TPraos c) MaryEra) -translateLedgerTablesAllegraToMaryWrapper = TranslateLedgerTables { - translateTxInWith = coerce +translateLedgerTablesAllegraToMaryWrapper = + TranslateLedgerTables + { translateTxInWith = coerce , translateTxOutWith = SL.upgradeTxOut } @@ -510,14 +535,16 @@ translateTxAllegraToMaryWrapper :: InjectTx (ShelleyBlock (TPraos c) AllegraEra) (ShelleyBlock (TPraos c) MaryEra) -translateTxAllegraToMaryWrapper = InjectTx $ +translateTxAllegraToMaryWrapper = + InjectTx $ fmap unComp . eitherToMaybe . runExcept . SL.translateEra SL.NoGenesis . 
Comp translateValidatedTxAllegraToMaryWrapper :: InjectValidatedTx (ShelleyBlock (TPraos c) AllegraEra) (ShelleyBlock (TPraos c) MaryEra) -translateValidatedTxAllegraToMaryWrapper = InjectValidatedTx $ +translateValidatedTxAllegraToMaryWrapper = + InjectValidatedTx $ fmap unComp . eitherToMaybe . runExcept . SL.translateEra SL.NoGenesis . Comp {------------------------------------------------------------------------------- @@ -531,47 +558,50 @@ translateLedgerStateMaryToAlonzoWrapper :: (ShelleyBlock (TPraos c) MaryEra) (ShelleyBlock (TPraos c) AlonzoEra) translateLedgerStateMaryToAlonzoWrapper = - RequireBoth $ \_cfgMary cfgAlonzo -> - TranslateLedgerState { - translateLedgerStateWith = \_epochNo -> - noNewTickingDiffs - . unFlip - . unComp - . SL.translateEra' (getAlonzoTranslationContext cfgAlonzo) - . Comp - . Flip - } + RequireBoth $ \_cfgMary cfgAlonzo -> + TranslateLedgerState + { translateLedgerStateWith = \_epochNo -> + noNewTickingDiffs + . unFlip + . unComp + . SL.translateEra' (getAlonzoTranslationContext cfgAlonzo) + . Comp + . Flip + } translateLedgerTablesMaryToAlonzoWrapper :: TranslateLedgerTables (ShelleyBlock (TPraos c) MaryEra) (ShelleyBlock (TPraos c) AlonzoEra) -translateLedgerTablesMaryToAlonzoWrapper = TranslateLedgerTables { - translateTxInWith = coerce +translateLedgerTablesMaryToAlonzoWrapper = + TranslateLedgerTables + { translateTxInWith = coerce , translateTxOutWith = SL.upgradeTxOut } getAlonzoTranslationContext :: - WrapLedgerConfig (ShelleyBlock (TPraos c) AlonzoEra) - -> SL.TranslationContext AlonzoEra + WrapLedgerConfig (ShelleyBlock (TPraos c) AlonzoEra) -> + SL.TranslationContext AlonzoEra getAlonzoTranslationContext = - shelleyLedgerTranslationContext . unwrapLedgerConfig + shelleyLedgerTranslationContext . 
unwrapLedgerConfig translateTxMaryToAlonzoWrapper :: - SL.TranslationContext AlonzoEra - -> InjectTx - (ShelleyBlock (TPraos c) MaryEra) - (ShelleyBlock (TPraos c) AlonzoEra) -translateTxMaryToAlonzoWrapper ctxt = InjectTx $ + SL.TranslationContext AlonzoEra -> + InjectTx + (ShelleyBlock (TPraos c) MaryEra) + (ShelleyBlock (TPraos c) AlonzoEra) +translateTxMaryToAlonzoWrapper ctxt = + InjectTx $ fmap unComp . eitherToMaybe . runExcept . SL.translateEra ctxt . Comp translateValidatedTxMaryToAlonzoWrapper :: - forall c. - SL.TranslationContext AlonzoEra - -> InjectValidatedTx - (ShelleyBlock (TPraos c) MaryEra) - (ShelleyBlock (TPraos c) AlonzoEra) -translateValidatedTxMaryToAlonzoWrapper ctxt = InjectValidatedTx $ + forall c. + SL.TranslationContext AlonzoEra -> + InjectValidatedTx + (ShelleyBlock (TPraos c) MaryEra) + (ShelleyBlock (TPraos c) AlonzoEra) +translateValidatedTxMaryToAlonzoWrapper ctxt = + InjectValidatedTx $ fmap unComp . eitherToMaybe . runExcept . SL.translateEra ctxt . Comp {------------------------------------------------------------------------------- @@ -586,70 +616,74 @@ translateLedgerStateAlonzoToBabbageWrapper :: (ShelleyBlock (Praos c) BabbageEra) translateLedgerStateAlonzoToBabbageWrapper = RequireBoth $ \_cfgAlonzo _cfgBabbage -> - TranslateLedgerState { - translateLedgerStateWith = \_epochNo -> - noNewTickingDiffs - . unFlip - . unComp - . SL.translateEra' SL.NoGenesis - . Comp - . Flip - . transPraosLS - } - where - transPraosLS :: - LedgerState (ShelleyBlock (TPraos c) AlonzoEra) mk -> - LedgerState (ShelleyBlock (Praos c) AlonzoEra) mk - transPraosLS (ShelleyLedgerState wo nes st tb) = - ShelleyLedgerState - { shelleyLedgerTip = fmap castShelleyTip wo - , shelleyLedgerState = nes - , shelleyLedgerTransition = st - , shelleyLedgerTables = coerce tb - } + TranslateLedgerState + { translateLedgerStateWith = \_epochNo -> + noNewTickingDiffs + . unFlip + . unComp + . SL.translateEra' SL.NoGenesis + . Comp + . Flip + . 
transPraosLS + } + where + transPraosLS :: + LedgerState (ShelleyBlock (TPraos c) AlonzoEra) mk -> + LedgerState (ShelleyBlock (Praos c) AlonzoEra) mk + transPraosLS (ShelleyLedgerState wo nes st tb) = + ShelleyLedgerState + { shelleyLedgerTip = fmap castShelleyTip wo + , shelleyLedgerState = nes + , shelleyLedgerTransition = st + , shelleyLedgerTables = coerce tb + } translateLedgerTablesAlonzoToBabbageWrapper :: TranslateLedgerTables (ShelleyBlock (TPraos c) AlonzoEra) (ShelleyBlock (Praos c) BabbageEra) -translateLedgerTablesAlonzoToBabbageWrapper = TranslateLedgerTables { - translateTxInWith = coerce +translateLedgerTablesAlonzoToBabbageWrapper = + TranslateLedgerTables + { translateTxInWith = coerce , translateTxOutWith = SL.upgradeTxOut } translateTxAlonzoToBabbageWrapper :: - SL.TranslationContext BabbageEra - -> InjectTx - (ShelleyBlock (TPraos c) AlonzoEra) - (ShelleyBlock (Praos c) BabbageEra) -translateTxAlonzoToBabbageWrapper ctxt = InjectTx $ + SL.TranslationContext BabbageEra -> + InjectTx + (ShelleyBlock (TPraos c) AlonzoEra) + (ShelleyBlock (Praos c) BabbageEra) +translateTxAlonzoToBabbageWrapper ctxt = + InjectTx $ fmap unComp . eitherToMaybe . runExcept . SL.translateEra ctxt . Comp . transPraosTx - where - transPraosTx - :: GenTx (ShelleyBlock (TPraos c) AlonzoEra) - -> GenTx (ShelleyBlock (Praos c) AlonzoEra) - transPraosTx (ShelleyTx ti tx) = ShelleyTx ti (coerce tx) + where + transPraosTx :: + GenTx (ShelleyBlock (TPraos c) AlonzoEra) -> + GenTx (ShelleyBlock (Praos c) AlonzoEra) + transPraosTx (ShelleyTx ti tx) = ShelleyTx ti (coerce tx) translateValidatedTxAlonzoToBabbageWrapper :: - forall c. - SL.TranslationContext BabbageEra - -> InjectValidatedTx - (ShelleyBlock (TPraos c) AlonzoEra) - (ShelleyBlock (Praos c) BabbageEra) -translateValidatedTxAlonzoToBabbageWrapper ctxt = InjectValidatedTx $ - fmap unComp - . eitherToMaybe - . runExcept - . SL.translateEra ctxt - . Comp - . transPraosValidatedTx + forall c. 
+ SL.TranslationContext BabbageEra -> + InjectValidatedTx + (ShelleyBlock (TPraos c) AlonzoEra) + (ShelleyBlock (Praos c) BabbageEra) +translateValidatedTxAlonzoToBabbageWrapper ctxt = + InjectValidatedTx $ + fmap unComp + . eitherToMaybe + . runExcept + . SL.translateEra ctxt + . Comp + . transPraosValidatedTx where - transPraosValidatedTx - :: WrapValidatedGenTx (ShelleyBlock (TPraos c) AlonzoEra) - -> WrapValidatedGenTx (ShelleyBlock (Praos c) AlonzoEra) + transPraosValidatedTx :: + WrapValidatedGenTx (ShelleyBlock (TPraos c) AlonzoEra) -> + WrapValidatedGenTx (ShelleyBlock (Praos c) AlonzoEra) transPraosValidatedTx (WrapValidatedGenTx x) = case x of - ShelleyValidatedTx txid vtx -> WrapValidatedGenTx $ - ShelleyValidatedTx txid (SL.coerceValidated vtx) + ShelleyValidatedTx txid vtx -> + WrapValidatedGenTx $ + ShelleyValidatedTx txid (SL.coerceValidated vtx) {------------------------------------------------------------------------------- Translation from Babbage to Conway @@ -663,44 +697,47 @@ translateLedgerStateBabbageToConwayWrapper :: (ShelleyBlock (Praos c) ConwayEra) translateLedgerStateBabbageToConwayWrapper = RequireBoth $ \_cfgBabbage cfgConway -> - TranslateLedgerState { - translateLedgerStateWith = \_epochNo -> - noNewTickingDiffs - . unFlip - . unComp - . SL.translateEra' (getConwayTranslationContext cfgConway) - . Comp - . Flip - } + TranslateLedgerState + { translateLedgerStateWith = \_epochNo -> + noNewTickingDiffs + . unFlip + . unComp + . SL.translateEra' (getConwayTranslationContext cfgConway) + . Comp + . 
Flip + } translateLedgerTablesBabbageToConwayWrapper :: TranslateLedgerTables (ShelleyBlock (Praos c) BabbageEra) (ShelleyBlock (Praos c) ConwayEra) -translateLedgerTablesBabbageToConwayWrapper = TranslateLedgerTables { - translateTxInWith = coerce +translateLedgerTablesBabbageToConwayWrapper = + TranslateLedgerTables + { translateTxInWith = coerce , translateTxOutWith = SL.upgradeTxOut } getConwayTranslationContext :: - WrapLedgerConfig (ShelleyBlock (Praos c) ConwayEra) - -> SL.TranslationContext ConwayEra + WrapLedgerConfig (ShelleyBlock (Praos c) ConwayEra) -> + SL.TranslationContext ConwayEra getConwayTranslationContext = - shelleyLedgerTranslationContext . unwrapLedgerConfig + shelleyLedgerTranslationContext . unwrapLedgerConfig translateTxBabbageToConwayWrapper :: - SL.TranslationContext ConwayEra - -> InjectTx - (ShelleyBlock (Praos c) BabbageEra) - (ShelleyBlock (Praos c) ConwayEra) -translateTxBabbageToConwayWrapper ctxt = InjectTx $ + SL.TranslationContext ConwayEra -> + InjectTx + (ShelleyBlock (Praos c) BabbageEra) + (ShelleyBlock (Praos c) ConwayEra) +translateTxBabbageToConwayWrapper ctxt = + InjectTx $ fmap unComp . eitherToMaybe . runExcept . SL.translateEra ctxt . Comp translateValidatedTxBabbageToConwayWrapper :: - forall c. - SL.TranslationContext ConwayEra - -> InjectValidatedTx - (ShelleyBlock (Praos c) BabbageEra) - (ShelleyBlock (Praos c) ConwayEra) -translateValidatedTxBabbageToConwayWrapper ctxt = InjectValidatedTx $ + forall c. + SL.TranslationContext ConwayEra -> + InjectValidatedTx + (ShelleyBlock (Praos c) BabbageEra) + (ShelleyBlock (Praos c) ConwayEra) +translateValidatedTxBabbageToConwayWrapper ctxt = + InjectValidatedTx $ fmap unComp . eitherToMaybe . runExcept . SL.translateEra ctxt . 
Comp diff --git a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Condense.hs b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Condense.hs index c5d365a4c8..b19e664579 100644 --- a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Condense.hs +++ b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Condense.hs @@ -1,16 +1,15 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.Cardano.Condense () where -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.CanHardFork -import Ouroboros.Consensus.HardFork.Combinator.Condense -import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.CanHardFork +import Ouroboros.Consensus.HardFork.Combinator.Condense +import Ouroboros.Consensus.Shelley.Ledger {------------------------------------------------------------------------------- Condense diff --git a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Ledger.hs b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Ledger.hs index 94effcc0fd..888168144f 100644 --- a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Ledger.hs +++ b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Ledger.hs @@ -16,143 +16,154 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - -- TODO: can we un-orphan this module? {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Cardano.Ledger ( - CardanoTxOut (..) 
+module Ouroboros.Consensus.Cardano.Ledger + ( CardanoTxOut (..) , eliminateCardanoTxOut ) where -import Cardano.Ledger.Binary.Decoding hiding (Decoder) -import Cardano.Ledger.Binary.Encoding hiding (Encoding) -import Cardano.Ledger.Core (Era, eraDecoder, eraProtVerLow) -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Ledger.Shelley.LedgerState as SL (dsUnifiedL, - esLStateL, lsCertStateL, nesEsL) -import qualified Cardano.Ledger.UMap as SL -import Codec.CBOR.Decoding -import Codec.CBOR.Encoding -import qualified Data.Map as Map -import Data.MemPack -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Functors -import Data.SOP.Index -import Data.SOP.Strict -import qualified Data.SOP.Tails as Tails -import qualified Data.SOP.Telescope as Telescope -import Data.Void -import GHC.Generics (Generic) -import Lens.Micro -import NoThunks.Class -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.CanHardFork -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Ouroboros.Consensus.Ledger.Tables -import Ouroboros.Consensus.Protocol.Praos (Praos) -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Ledger (IsShelleyBlock, - ShelleyBlock, ShelleyCompatible, shelleyLedgerState) -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util.IndexedMemPack - -instance CardanoHardForkConstraints c - => HasCanonicalTxIn (CardanoEras c) where - newtype instance CanonicalTxIn (CardanoEras c) = CardanoTxIn { - getCardanoTxIn :: SL.TxIn +import Cardano.Ledger.Binary.Decoding hiding (Decoder) +import Cardano.Ledger.Binary.Encoding hiding (Encoding) +import Cardano.Ledger.Core (Era, eraDecoder, eraProtVerLow) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.LedgerState as SL + ( dsUnifiedL + , esLStateL + , lsCertStateL + , nesEsL + ) +import Cardano.Ledger.UMap qualified as SL +import 
Codec.CBOR.Decoding +import Codec.CBOR.Encoding +import Data.Map qualified as Map +import Data.MemPack +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Functors +import Data.SOP.Index +import Data.SOP.Strict +import Data.SOP.Tails qualified as Tails +import Data.SOP.Telescope qualified as Telescope +import Data.Void +import GHC.Generics (Generic) +import Lens.Micro +import NoThunks.Class +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.CanHardFork +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Protocol.Praos (Praos) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Ledger + ( IsShelleyBlock + , ShelleyBlock + , ShelleyCompatible + , shelleyLedgerState + ) +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util.IndexedMemPack + +instance + CardanoHardForkConstraints c => + HasCanonicalTxIn (CardanoEras c) + where + newtype CanonicalTxIn (CardanoEras c) = CardanoTxIn + { getCardanoTxIn :: SL.TxIn } deriving stock (Show, Eq, Ord) deriving newtype NoThunks - injectCanonicalTxIn IZ byronTxIn = absurd byronTxIn + injectCanonicalTxIn IZ byronTxIn = absurd byronTxIn injectCanonicalTxIn (IS idx) shelleyTxIn = case idx of - IZ -> CardanoTxIn shelleyTxIn - IS IZ -> CardanoTxIn shelleyTxIn - IS (IS IZ) -> CardanoTxIn shelleyTxIn - IS (IS (IS IZ)) -> CardanoTxIn shelleyTxIn - IS (IS (IS (IS IZ))) -> CardanoTxIn shelleyTxIn - IS (IS (IS (IS (IS IZ)))) -> CardanoTxIn shelleyTxIn - IS (IS (IS (IS (IS (IS idx'))))) -> case idx' of {} - - ejectCanonicalTxIn IZ _ = - error "ejectCanonicalTxIn: Byron has no TxIns" + IZ -> CardanoTxIn shelleyTxIn + IS IZ -> CardanoTxIn shelleyTxIn + IS (IS IZ) -> CardanoTxIn shelleyTxIn + IS (IS (IS IZ)) -> CardanoTxIn shelleyTxIn + IS (IS (IS (IS IZ))) -> CardanoTxIn shelleyTxIn + IS (IS (IS (IS (IS IZ)))) -> 
CardanoTxIn shelleyTxIn + IS (IS (IS (IS (IS (IS idx'))))) -> case idx' of {} + + ejectCanonicalTxIn IZ _ = + error "ejectCanonicalTxIn: Byron has no TxIns" ejectCanonicalTxIn (IS idx) cardanoTxIn = case idx of - IZ -> getCardanoTxIn cardanoTxIn - IS IZ -> getCardanoTxIn cardanoTxIn - IS (IS IZ) -> getCardanoTxIn cardanoTxIn - IS (IS (IS IZ)) -> getCardanoTxIn cardanoTxIn - IS (IS (IS (IS IZ))) -> getCardanoTxIn cardanoTxIn - IS (IS (IS (IS (IS IZ)))) -> getCardanoTxIn cardanoTxIn - IS (IS (IS (IS (IS (IS idx'))))) -> case idx' of {} + IZ -> getCardanoTxIn cardanoTxIn + IS IZ -> getCardanoTxIn cardanoTxIn + IS (IS IZ) -> getCardanoTxIn cardanoTxIn + IS (IS (IS IZ)) -> getCardanoTxIn cardanoTxIn + IS (IS (IS (IS IZ))) -> getCardanoTxIn cardanoTxIn + IS (IS (IS (IS (IS IZ)))) -> getCardanoTxIn cardanoTxIn + IS (IS (IS (IS (IS (IS idx'))))) -> case idx' of {} instance CardanoHardForkConstraints c => MemPack (CanonicalTxIn (CardanoEras c)) where packM = packM . getCardanoTxIn packedByteCount = packedByteCount . 
getCardanoTxIn unpackM = CardanoTxIn <$> unpackM -data CardanoTxOut c = - ShelleyTxOut {-# UNPACK #-} !(TxOut (LedgerState (ShelleyBlock (TPraos c) ShelleyEra))) +data CardanoTxOut c + = ShelleyTxOut {-# UNPACK #-} !(TxOut (LedgerState (ShelleyBlock (TPraos c) ShelleyEra))) | AllegraTxOut {-# UNPACK #-} !(TxOut (LedgerState (ShelleyBlock (TPraos c) AllegraEra))) - | MaryTxOut {-# UNPACK #-} !(TxOut (LedgerState (ShelleyBlock (TPraos c) MaryEra))) - | AlonzoTxOut !(TxOut (LedgerState (ShelleyBlock (TPraos c) AlonzoEra))) + | MaryTxOut {-# UNPACK #-} !(TxOut (LedgerState (ShelleyBlock (TPraos c) MaryEra))) + | AlonzoTxOut !(TxOut (LedgerState (ShelleyBlock (TPraos c) AlonzoEra))) | BabbageTxOut !(TxOut (LedgerState (ShelleyBlock (Praos c) BabbageEra))) - | ConwayTxOut !(TxOut (LedgerState (ShelleyBlock (Praos c) ConwayEra))) + | ConwayTxOut !(TxOut (LedgerState (ShelleyBlock (Praos c) ConwayEra))) deriving stock (Show, Eq, Generic) deriving anyclass NoThunks -- | Eliminate the wrapping of CardanoTxOut with the provided function. Similar -- to 'hcimap' on an 'NS'. eliminateCardanoTxOut :: - forall r c. CardanoHardForkConstraints c - => (forall x. - -- TODO ProtoCrypto constraint should be in IsShelleyBlock - ( IsShelleyBlock x - ) - => Index (CardanoEras c) x - -> TxOut (LedgerState x) - -> r - ) - -> CardanoTxOut c -> r + forall r c. + CardanoHardForkConstraints c => + ( forall x. 
+ -- TODO ProtoCrypto constraint should be in IsShelleyBlock + IsShelleyBlock x => + Index (CardanoEras c) x -> + TxOut (LedgerState x) -> + r + ) -> + CardanoTxOut c -> + r eliminateCardanoTxOut f = \case ShelleyTxOut txout -> f (IS IZ) txout AllegraTxOut txout -> f (IS (IS IZ)) txout - MaryTxOut txout -> f (IS (IS (IS IZ))) txout - AlonzoTxOut txout -> f (IS (IS (IS (IS IZ)))) txout + MaryTxOut txout -> f (IS (IS (IS IZ))) txout + AlonzoTxOut txout -> f (IS (IS (IS (IS IZ)))) txout BabbageTxOut txout -> f (IS (IS (IS (IS (IS IZ))))) txout - ConwayTxOut txout -> f (IS (IS (IS (IS (IS (IS IZ)))))) txout + ConwayTxOut txout -> f (IS (IS (IS (IS (IS (IS IZ)))))) txout instance CardanoHardForkConstraints c => HasHardForkTxOut (CardanoEras c) where - - type instance HardForkTxOut (CardanoEras c) = CardanoTxOut c + type HardForkTxOut (CardanoEras c) = CardanoTxOut c injectHardForkTxOut idx !txOut = case idx of - IS IZ -> ShelleyTxOut txOut - IS (IS IZ) -> AllegraTxOut txOut - IS (IS (IS IZ)) -> MaryTxOut txOut - IS (IS (IS (IS IZ))) -> AlonzoTxOut txOut - IS (IS (IS (IS (IS IZ)))) -> BabbageTxOut txOut - IS (IS (IS (IS (IS (IS IZ))))) -> ConwayTxOut txOut + IS IZ -> ShelleyTxOut txOut + IS (IS IZ) -> AllegraTxOut txOut + IS (IS (IS IZ)) -> MaryTxOut txOut + IS (IS (IS (IS IZ))) -> AlonzoTxOut txOut + IS (IS (IS (IS (IS IZ)))) -> BabbageTxOut txOut + IS (IS (IS (IS (IS (IS IZ))))) -> ConwayTxOut txOut IS (IS (IS (IS (IS (IS (IS idx')))))) -> case idx' of {} ejectHardForkTxOut :: - forall y. - Index (CardanoEras c) y - -> HardForkTxOut (CardanoEras c) - -> TxOut (LedgerState y) + forall y. + Index (CardanoEras c) y -> + HardForkTxOut (CardanoEras c) -> + TxOut (LedgerState y) ejectHardForkTxOut targetIdx = eliminateCardanoTxOut - (\origIdx -> - unwrapTxOut - . maybe (error "anachrony") id - . Tails.extendWithTails origIdx targetIdx txOutTranslations - . WrapTxOut + ( \origIdx -> + unwrapTxOut + . maybe (error "anachrony") id + . 
Tails.extendWithTails origIdx targetIdx txOutTranslations + . WrapTxOut ) -instance CardanoHardForkConstraints c - => IndexedMemPack (LedgerState (HardForkBlock (CardanoEras c)) EmptyMK) (CardanoTxOut c) where +instance + CardanoHardForkConstraints c => + IndexedMemPack (LedgerState (HardForkBlock (CardanoEras c)) EmptyMK) (CardanoTxOut c) + where indexedTypeName _ = "CardanoTxOut" indexedPackM _ = eliminateCardanoTxOut (const packM) indexedPackedByteCount _ = eliminateCardanoTxOut (const packedByteCount) @@ -160,64 +171,82 @@ instance CardanoHardForkConstraints c let -- These could be made into a CAF to avoid recomputing it, but -- it is only used in serialization so it is not critical. - np = ( (Fn $ const $ error "unpacking a byron txout") - :* (Fn $ const $ Comp $ K . ShelleyTxOut <$> unpackM) - :* (Fn $ const $ Comp $ K . AllegraTxOut <$> unpackM) - :* (Fn $ const $ Comp $ K . MaryTxOut <$> unpackM) - :* (Fn $ const $ Comp $ K . AlonzoTxOut <$> unpackM) - :* (Fn $ const $ Comp $ K . BabbageTxOut <$> unpackM) - :* (Fn $ const $ Comp $ K . ConwayTxOut <$> unpackM) - :* Nil - ) + np = + ( (Fn $ const $ error "unpacking a byron txout") + :* (Fn $ const $ Comp $ K . ShelleyTxOut <$> unpackM) + :* (Fn $ const $ Comp $ K . AllegraTxOut <$> unpackM) + :* (Fn $ const $ Comp $ K . MaryTxOut <$> unpackM) + :* (Fn $ const $ Comp $ K . AlonzoTxOut <$> unpackM) + :* (Fn $ const $ Comp $ K . BabbageTxOut <$> unpackM) + :* (Fn $ const $ Comp $ K . 
ConwayTxOut <$> unpackM) + :* Nil + ) hcollapse <$> (hsequence' $ hap np $ Telescope.tip idx) -instance CardanoHardForkConstraints c => SerializeTablesWithHint (LedgerState (HardForkBlock (CardanoEras c))) where +instance + CardanoHardForkConstraints c => + SerializeTablesWithHint (LedgerState (HardForkBlock (CardanoEras c))) + where encodeTablesWithHint (HardForkLedgerState (HardForkState idx)) (LedgerTables (ValuesMK tbs)) = let -- These could be made into a CAF to avoid recomputing it, but -- it is only used in serialization so it is not critical. - np = (Fn $ const $ K $ Codec.CBOR.Encoding.encodeMapLen 0) - :* (Fn $ const $ K $ encOne (Proxy @ShelleyEra)) - :* (Fn $ const $ K $ encOne (Proxy @AllegraEra)) - :* (Fn $ const $ K $ encOne (Proxy @MaryEra)) - :* (Fn $ const $ K $ encOne (Proxy @AlonzoEra)) - :* (Fn $ const $ K $ encOne (Proxy @BabbageEra)) - :* (Fn $ const $ K $ encOne (Proxy @ConwayEra)) - :* Nil - in hcollapse $ hap np $ Telescope.tip idx + np = + (Fn $ const $ K $ Codec.CBOR.Encoding.encodeMapLen 0) + :* (Fn $ const $ K $ encOne (Proxy @ShelleyEra)) + :* (Fn $ const $ K $ encOne (Proxy @AllegraEra)) + :* (Fn $ const $ K $ encOne (Proxy @MaryEra)) + :* (Fn $ const $ K $ encOne (Proxy @AlonzoEra)) + :* (Fn $ const $ K $ encOne (Proxy @BabbageEra)) + :* (Fn $ const $ K $ encOne (Proxy @ConwayEra)) + :* Nil + in + hcollapse $ hap np $ Telescope.tip idx where - encOne :: forall era. Era era => Proxy era -> Encoding - encOne _ = toPlainEncoding (eraProtVerLow @era) $ encodeMap encodeMemPack (eliminateCardanoTxOut (const encodeMemPack)) tbs - - decodeTablesWithHint :: forall s. LedgerState (HardForkBlock (CardanoEras c)) EmptyMK - -> Decoder s (LedgerTables (LedgerState (HardForkBlock (CardanoEras c))) ValuesMK) + encOne :: forall era. Era era => Proxy era -> Encoding + encOne _ = + toPlainEncoding (eraProtVerLow @era) $ + encodeMap encodeMemPack (eliminateCardanoTxOut (const encodeMemPack)) tbs + + decodeTablesWithHint :: + forall s. 
+ LedgerState (HardForkBlock (CardanoEras c)) EmptyMK -> + Decoder s (LedgerTables (LedgerState (HardForkBlock (CardanoEras c))) ValuesMK) decodeTablesWithHint (HardForkLedgerState (HardForkState idx)) = let -- These could be made into a CAF to avoid recomputing it, but -- it is only used in serialization so it is not critical. - np = (Fn $ const $ Comp $ K . LedgerTables @(LedgerState (HardForkBlock (CardanoEras c))) . ValuesMK <$> pure Map.empty) - :* (Fn $ Comp . fmap K . getOne ShelleyTxOut . unFlip . currentState) - :* (Fn $ Comp . fmap K . getOne AllegraTxOut . unFlip . currentState) - :* (Fn $ Comp . fmap K . getOne MaryTxOut . unFlip . currentState) - :* (Fn $ Comp . fmap K . getOne AlonzoTxOut . unFlip . currentState) - :* (Fn $ Comp . fmap K . getOne BabbageTxOut . unFlip . currentState) - :* (Fn $ Comp . fmap K . getOne ConwayTxOut . unFlip . currentState) - :* Nil - in hcollapse <$> (hsequence' $ hap np $ Telescope.tip idx) + np = + ( Fn $ + const $ + Comp $ + K . LedgerTables @(LedgerState (HardForkBlock (CardanoEras c))) . ValuesMK <$> pure Map.empty + ) + :* (Fn $ Comp . fmap K . getOne ShelleyTxOut . unFlip . currentState) + :* (Fn $ Comp . fmap K . getOne AllegraTxOut . unFlip . currentState) + :* (Fn $ Comp . fmap K . getOne MaryTxOut . unFlip . currentState) + :* (Fn $ Comp . fmap K . getOne AlonzoTxOut . unFlip . currentState) + :* (Fn $ Comp . fmap K . getOne BabbageTxOut . unFlip . currentState) + :* (Fn $ Comp . fmap K . getOne ConwayTxOut . unFlip . currentState) + :* Nil + in + hcollapse <$> (hsequence' $ hap np $ Telescope.tip idx) where - getOne :: forall proto era. - ShelleyCompatible proto era - => (TxOut (LedgerState (ShelleyBlock proto era)) -> CardanoTxOut c) - -> LedgerState (ShelleyBlock proto era) EmptyMK - -> Decoder s (LedgerTables (LedgerState (HardForkBlock (CardanoEras c))) ValuesMK) - getOne toCardanoTxOut st = - let certInterns = - internsFromMap - $ shelleyLedgerState st - ^. SL.nesEsL - . SL.esLStateL - . 
SL.lsCertStateL - . SL.certDStateL - . SL.dsUnifiedL - . SL.umElemsL - in LedgerTables . ValuesMK <$> eraDecoder @era (decodeMap decodeMemPack (toCardanoTxOut <$> decShareCBOR certInterns)) + getOne :: + forall proto era. + ShelleyCompatible proto era => + (TxOut (LedgerState (ShelleyBlock proto era)) -> CardanoTxOut c) -> + LedgerState (ShelleyBlock proto era) EmptyMK -> + Decoder s (LedgerTables (LedgerState (HardForkBlock (CardanoEras c))) ValuesMK) + getOne toCardanoTxOut st = + let certInterns = + internsFromMap $ + shelleyLedgerState st + ^. SL.nesEsL + . SL.esLStateL + . SL.lsCertStateL + . SL.certDStateL + . SL.dsUnifiedL + . SL.umElemsL + in LedgerTables . ValuesMK + <$> eraDecoder @era (decodeMap decodeMemPack (toCardanoTxOut <$> decShareCBOR certInterns)) diff --git a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Node.hs b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Node.hs index 290c764da5..99bcf2f432 100644 --- a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Node.hs +++ b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/Node.hs @@ -14,18 +14,27 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Cardano.Node ( - CardanoHardForkConstraints +module Ouroboros.Consensus.Cardano.Node + ( CardanoHardForkConstraints , CardanoHardForkTrigger (..) - , CardanoHardForkTriggers (.., CardanoHardForkTriggers', triggerHardForkShelley, triggerHardForkAllegra, triggerHardForkMary, triggerHardForkAlonzo, triggerHardForkBabbage, triggerHardForkConway) + , CardanoHardForkTriggers + ( .. 
+ , CardanoHardForkTriggers' + , triggerHardForkShelley + , triggerHardForkAllegra + , triggerHardForkMary + , triggerHardForkAlonzo + , triggerHardForkBabbage + , triggerHardForkConway + ) , CardanoProtocolParams (..) , MaxMajorProtVer (..) , TriggerHardFork (..) , protocolClientInfoCardano , protocolInfoCardano + -- * SupportedNetworkProtocolVersion , pattern CardanoNodeToClientVersion12 , pattern CardanoNodeToClientVersion13 @@ -36,74 +45,81 @@ module Ouroboros.Consensus.Cardano.Node ( , pattern CardanoNodeToNodeVersion2 ) where -import Cardano.Binary (DecoderError (..), enforceSize) -import Cardano.Chain.Slotting (EpochSlots) -import qualified Cardano.Ledger.Api.Era as L -import qualified Cardano.Ledger.Api.Transition as L -import qualified Cardano.Ledger.BaseTypes as SL -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Prelude (cborError) -import qualified Cardano.Protocol.TPraos.OCert as Absolute (KESPeriod (..), - ocertKESPeriod) -import qualified Codec.CBOR.Decoding as CBOR -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.CBOR.Encoding as CBOR -import Control.Exception (assert) -import qualified Data.ByteString.Short as Short -import Data.Functor.These (These1 (..)) -import qualified Data.Map.Strict as Map -import Data.SOP.BasicFunctors -import Data.SOP.Counting -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Index -import Data.SOP.OptNP (NonEmptyOptNP, OptNP (OptSkip)) -import qualified Data.SOP.OptNP as OptNP -import Data.SOP.Strict -import Data.Word (Word16, Word64) -import Lens.Micro ((^.)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.ByronHFC -import Ouroboros.Consensus.Byron.Ledger (ByronBlock) -import qualified Ouroboros.Consensus.Byron.Ledger as Byron -import qualified Ouroboros.Consensus.Byron.Ledger.Conversions as Byron -import Ouroboros.Consensus.Byron.Ledger.NetworkProtocolVersion -import Ouroboros.Consensus.Byron.Node -import Ouroboros.Consensus.Cardano.Block -import 
Ouroboros.Consensus.Cardano.CanHardFork -import Ouroboros.Consensus.Cardano.QueryHF () -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Embed.Nary -import Ouroboros.Consensus.HardFork.Combinator.Serialisation -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Tables -import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Node.Run -import qualified Ouroboros.Consensus.Protocol.Ledger.HotKey as HotKey -import Ouroboros.Consensus.Protocol.Praos (Praos, PraosParams (..)) -import Ouroboros.Consensus.Protocol.Praos.Common - (praosCanBeLeaderOpCert) -import Ouroboros.Consensus.Protocol.TPraos (TPraos, TPraosParams (..)) -import qualified Ouroboros.Consensus.Protocol.TPraos as Shelley -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock) -import qualified Ouroboros.Consensus.Shelley.Ledger as Shelley -import Ouroboros.Consensus.Shelley.Ledger.Block (IsShelleyBlock, - ShelleyBlockLedgerEra) -import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion -import Ouroboros.Consensus.Shelley.Node -import Ouroboros.Consensus.Shelley.Node.Common (ShelleyEraWithCrypto, - shelleyBlockIssuerVKey) -import qualified Ouroboros.Consensus.Shelley.Node.Praos as Praos -import qualified Ouroboros.Consensus.Shelley.Node.TPraos as TPraos -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util.Assert -import Ouroboros.Consensus.Util.IOLike +import Cardano.Binary (DecoderError (..), enforceSize) +import Cardano.Chain.Slotting (EpochSlots) +import Cardano.Ledger.Api.Era qualified as L +import 
Cardano.Ledger.Api.Transition qualified as L +import Cardano.Ledger.BaseTypes qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Prelude (cborError) +import Cardano.Protocol.TPraos.OCert qualified as Absolute + ( KESPeriod (..) + , ocertKESPeriod + ) +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Encoding qualified as CBOR +import Control.Exception (assert) +import Data.ByteString.Short qualified as Short +import Data.Functor.These (These1 (..)) +import Data.Map.Strict qualified as Map +import Data.SOP.BasicFunctors +import Data.SOP.Counting +import Data.SOP.Functors (Flip (..)) +import Data.SOP.Index +import Data.SOP.OptNP (NonEmptyOptNP, OptNP (OptSkip)) +import Data.SOP.OptNP qualified as OptNP +import Data.SOP.Strict +import Data.Word (Word16, Word64) +import Lens.Micro ((^.)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.ByronHFC +import Ouroboros.Consensus.Byron.Ledger (ByronBlock) +import Ouroboros.Consensus.Byron.Ledger qualified as Byron +import Ouroboros.Consensus.Byron.Ledger.Conversions qualified as Byron +import Ouroboros.Consensus.Byron.Ledger.NetworkProtocolVersion +import Ouroboros.Consensus.Byron.Node +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.CanHardFork +import Ouroboros.Consensus.Cardano.QueryHF () +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.Embed.Nary +import Ouroboros.Consensus.HardFork.Combinator.Serialisation +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Node.Run 
+import Ouroboros.Consensus.Protocol.Ledger.HotKey qualified as HotKey +import Ouroboros.Consensus.Protocol.Praos (Praos, PraosParams (..)) +import Ouroboros.Consensus.Protocol.Praos.Common + ( praosCanBeLeaderOpCert + ) +import Ouroboros.Consensus.Protocol.TPraos (TPraos, TPraosParams (..)) +import Ouroboros.Consensus.Protocol.TPraos qualified as Shelley +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock) +import Ouroboros.Consensus.Shelley.Ledger qualified as Shelley +import Ouroboros.Consensus.Shelley.Ledger.Block + ( IsShelleyBlock + , ShelleyBlockLedgerEra + ) +import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion +import Ouroboros.Consensus.Shelley.Node +import Ouroboros.Consensus.Shelley.Node.Common + ( ShelleyEraWithCrypto + , shelleyBlockIssuerVKey + ) +import Ouroboros.Consensus.Shelley.Node.Praos qualified as Praos +import Ouroboros.Consensus.Shelley.Node.TPraos qualified as TPraos +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util.Assert +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- SerialiseHFC @@ -138,86 +154,88 @@ import Ouroboros.Consensus.Util.IOLike -- instance CardanoHardForkConstraints c => SerialiseHFC (CardanoEras c) where encodeDiskHfcBlock (CardanoCodecConfig ccfgByron ccfgShelley ccfgAllegra ccfgMary ccfgAlonzo ccfgBabbage ccfgConway) = \case - -- We are backwards compatible with Byron and thus use the exact same - -- encoding. - BlockByron blockByron -> encodeDisk ccfgByron blockByron - -- For Shelley and later eras, we need to prepend the hard fork envelope. 
- BlockShelley blockShelley -> prependTag 2 $ encodeDisk ccfgShelley blockShelley - BlockAllegra blockAllegra -> prependTag 3 $ encodeDisk ccfgAllegra blockAllegra - BlockMary blockMary -> prependTag 4 $ encodeDisk ccfgMary blockMary - BlockAlonzo blockAlonzo -> prependTag 5 $ encodeDisk ccfgAlonzo blockAlonzo - BlockBabbage blockBabbage -> prependTag 6 $ encodeDisk ccfgBabbage blockBabbage - BlockConway blockConway -> prependTag 7 $ encodeDisk ccfgConway blockConway + -- We are backwards compatible with Byron and thus use the exact same + -- encoding. + BlockByron blockByron -> encodeDisk ccfgByron blockByron + -- For Shelley and later eras, we need to prepend the hard fork envelope. + BlockShelley blockShelley -> prependTag 2 $ encodeDisk ccfgShelley blockShelley + BlockAllegra blockAllegra -> prependTag 3 $ encodeDisk ccfgAllegra blockAllegra + BlockMary blockMary -> prependTag 4 $ encodeDisk ccfgMary blockMary + BlockAlonzo blockAlonzo -> prependTag 5 $ encodeDisk ccfgAlonzo blockAlonzo + BlockBabbage blockBabbage -> prependTag 6 $ encodeDisk ccfgBabbage blockBabbage + BlockConway blockConway -> prependTag 7 $ encodeDisk ccfgConway blockConway decodeDiskHfcBlock (CardanoCodecConfig ccfgByron ccfgShelley ccfgAllegra ccfgMary ccfgAlonzo ccfgBabbage ccfgConway) = do - enforceSize "CardanoBlock" 2 - CBOR.decodeWord >>= \case - 0 -> fmap BlockByron <$> Byron.decodeByronBoundaryBlock epochSlots - 1 -> fmap BlockByron <$> Byron.decodeByronRegularBlock epochSlots - -- We don't have to drop the first two bytes from the 'ByteString' - -- passed to the decoder as slicing already takes care of this. 
- 2 -> fmap BlockShelley <$> decodeDisk ccfgShelley - 3 -> fmap BlockAllegra <$> decodeDisk ccfgAllegra - 4 -> fmap BlockMary <$> decodeDisk ccfgMary - 5 -> fmap BlockAlonzo <$> decodeDisk ccfgAlonzo - 6 -> fmap BlockBabbage <$> decodeDisk ccfgBabbage - 7 -> fmap BlockConway <$> decodeDisk ccfgConway - t -> cborError $ DecoderErrorUnknownTag "CardanoBlock" (fromIntegral t) - where - epochSlots = Byron.getByronEpochSlots ccfgByron + enforceSize "CardanoBlock" 2 + CBOR.decodeWord >>= \case + 0 -> fmap BlockByron <$> Byron.decodeByronBoundaryBlock epochSlots + 1 -> fmap BlockByron <$> Byron.decodeByronRegularBlock epochSlots + -- We don't have to drop the first two bytes from the 'ByteString' + -- passed to the decoder as slicing already takes care of this. + 2 -> fmap BlockShelley <$> decodeDisk ccfgShelley + 3 -> fmap BlockAllegra <$> decodeDisk ccfgAllegra + 4 -> fmap BlockMary <$> decodeDisk ccfgMary + 5 -> fmap BlockAlonzo <$> decodeDisk ccfgAlonzo + 6 -> fmap BlockBabbage <$> decodeDisk ccfgBabbage + 7 -> fmap BlockConway <$> decodeDisk ccfgConway + t -> cborError $ DecoderErrorUnknownTag "CardanoBlock" (fromIntegral t) + where + epochSlots = Byron.getByronEpochSlots ccfgByron reconstructHfcPrefixLen _ = PrefixLen 2 reconstructHfcNestedCtxt _ prefix blockSize = - case Short.index prefix 1 of - 0 -> SomeSecond $ NestedCtxt (NCZ (Byron.CtxtByronBoundary blockSize)) - 1 -> SomeSecond $ NestedCtxt (NCZ (Byron.CtxtByronRegular blockSize)) - 2 -> SomeSecond $ NestedCtxt (NCS (NCZ Shelley.CtxtShelley)) - 3 -> SomeSecond $ NestedCtxt (NCS (NCS (NCZ Shelley.CtxtShelley))) - 4 -> SomeSecond $ NestedCtxt (NCS (NCS (NCS (NCZ Shelley.CtxtShelley)))) - 5 -> SomeSecond $ NestedCtxt (NCS (NCS (NCS (NCS (NCZ Shelley.CtxtShelley))))) - 6 -> SomeSecond $ NestedCtxt (NCS (NCS (NCS (NCS (NCS (NCZ Shelley.CtxtShelley)))))) - 7 -> SomeSecond $ NestedCtxt (NCS (NCS (NCS (NCS (NCS (NCS (NCZ Shelley.CtxtShelley))))))) - _ -> error $ "CardanoBlock: invalid prefix " <> show prefix + case 
Short.index prefix 1 of + 0 -> SomeSecond $ NestedCtxt (NCZ (Byron.CtxtByronBoundary blockSize)) + 1 -> SomeSecond $ NestedCtxt (NCZ (Byron.CtxtByronRegular blockSize)) + 2 -> SomeSecond $ NestedCtxt (NCS (NCZ Shelley.CtxtShelley)) + 3 -> SomeSecond $ NestedCtxt (NCS (NCS (NCZ Shelley.CtxtShelley))) + 4 -> SomeSecond $ NestedCtxt (NCS (NCS (NCS (NCZ Shelley.CtxtShelley)))) + 5 -> SomeSecond $ NestedCtxt (NCS (NCS (NCS (NCS (NCZ Shelley.CtxtShelley))))) + 6 -> SomeSecond $ NestedCtxt (NCS (NCS (NCS (NCS (NCS (NCZ Shelley.CtxtShelley)))))) + 7 -> SomeSecond $ NestedCtxt (NCS (NCS (NCS (NCS (NCS (NCS (NCZ Shelley.CtxtShelley))))))) + _ -> error $ "CardanoBlock: invalid prefix " <> show prefix getHfcBinaryBlockInfo = \case - BlockByron blockByron -> - getBinaryBlockInfo blockByron - -- For Shelley and the later eras, we need to account for the two extra - -- bytes of the envelope. - BlockShelley blockShelley -> - shiftHeaderOffset 2 $ getBinaryBlockInfo blockShelley - BlockAllegra blockAllegra -> - shiftHeaderOffset 2 $ getBinaryBlockInfo blockAllegra - BlockMary blockMary -> - shiftHeaderOffset 2 $ getBinaryBlockInfo blockMary - BlockAlonzo blockAlonzo -> - shiftHeaderOffset 2 $ getBinaryBlockInfo blockAlonzo - BlockBabbage blockBabbage -> - shiftHeaderOffset 2 $ getBinaryBlockInfo blockBabbage - BlockConway blockConway -> - shiftHeaderOffset 2 $ getBinaryBlockInfo blockConway - where - shiftHeaderOffset :: Word16 -> BinaryBlockInfo -> BinaryBlockInfo - shiftHeaderOffset shift binfo = binfo { - headerOffset = headerOffset binfo + shift - } + BlockByron blockByron -> + getBinaryBlockInfo blockByron + -- For Shelley and the later eras, we need to account for the two extra + -- bytes of the envelope. 
+ BlockShelley blockShelley -> + shiftHeaderOffset 2 $ getBinaryBlockInfo blockShelley + BlockAllegra blockAllegra -> + shiftHeaderOffset 2 $ getBinaryBlockInfo blockAllegra + BlockMary blockMary -> + shiftHeaderOffset 2 $ getBinaryBlockInfo blockMary + BlockAlonzo blockAlonzo -> + shiftHeaderOffset 2 $ getBinaryBlockInfo blockAlonzo + BlockBabbage blockBabbage -> + shiftHeaderOffset 2 $ getBinaryBlockInfo blockBabbage + BlockConway blockConway -> + shiftHeaderOffset 2 $ getBinaryBlockInfo blockConway + where + shiftHeaderOffset :: Word16 -> BinaryBlockInfo -> BinaryBlockInfo + shiftHeaderOffset shift binfo = + binfo + { headerOffset = headerOffset binfo + shift + } estimateHfcBlockSize = \case - HeaderByron headerByron -> estimateBlockSize headerByron - -- For Shelley and later eras, we add two extra bytes, see the - -- 'SerialiseHFC' instance. - HeaderShelley headerShelley -> estimateBlockSize headerShelley + 2 - HeaderAllegra headerAllegra -> estimateBlockSize headerAllegra + 2 - HeaderMary headerMary -> estimateBlockSize headerMary + 2 - HeaderAlonzo headerAlonzo -> estimateBlockSize headerAlonzo + 2 - HeaderBabbage headerBabbage -> estimateBlockSize headerBabbage + 2 - HeaderConway headerConway -> estimateBlockSize headerConway + 2 + HeaderByron headerByron -> estimateBlockSize headerByron + -- For Shelley and later eras, we add two extra bytes, see the + -- 'SerialiseHFC' instance. + HeaderShelley headerShelley -> estimateBlockSize headerShelley + 2 + HeaderAllegra headerAllegra -> estimateBlockSize headerAllegra + 2 + HeaderMary headerMary -> estimateBlockSize headerMary + 2 + HeaderAlonzo headerAlonzo -> estimateBlockSize headerAlonzo + 2 + HeaderBabbage headerBabbage -> estimateBlockSize headerBabbage + 2 + HeaderConway headerConway -> estimateBlockSize headerConway + 2 -- | Prepend the given tag by creating a CBOR 2-tuple with the tag as the -- first element and the given 'Encoding' as the second. 
prependTag :: Word -> Encoding -> Encoding -prependTag tag payload = mconcat [ - CBOR.encodeListLen 2 +prependTag tag payload = + mconcat + [ CBOR.encodeListLen 2 , CBOR.encodeWord tag , payload ] @@ -233,22 +251,22 @@ prependTag tag payload = mconcat [ -- versions have been released before the hard fork pattern CardanoNodeToNodeVersion1 :: BlockNodeToNodeVersion (CardanoBlock c) pattern CardanoNodeToNodeVersion1 = - HardForkNodeToNodeDisabled ByronNodeToNodeVersion1 + HardForkNodeToNodeDisabled ByronNodeToNodeVersion1 -- | The hard fork enabled using the latest version of Byron and Shelley for -- each Byron and Shelley era. pattern CardanoNodeToNodeVersion2 :: BlockNodeToNodeVersion (CardanoBlock c) pattern CardanoNodeToNodeVersion2 = - HardForkNodeToNodeEnabled - HardForkSpecificNodeToNodeVersion1 - ( WrapNodeToNodeVersion ByronNodeToNodeVersion2 - :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 - :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 - :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 - :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 - :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 - :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 - :* Nil + HardForkNodeToNodeEnabled + HardForkSpecificNodeToNodeVersion1 + ( WrapNodeToNodeVersion ByronNodeToNodeVersion2 + :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 + :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 + :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 + :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 + :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 + :* WrapNodeToNodeVersion ShelleyNodeToNodeVersion1 + :* Nil ) -- | The hard fork enabled, and the Shelley, Allegra, Mary, Alonzo and Babbage @@ -256,16 +274,16 @@ pattern CardanoNodeToNodeVersion2 = -- Shelley-based eras. 
pattern CardanoNodeToClientVersion12 :: BlockNodeToClientVersion (CardanoBlock c) pattern CardanoNodeToClientVersion12 = - HardForkNodeToClientEnabled - HardForkSpecificNodeToClientVersion3 - ( EraNodeToClientEnabled ByronNodeToClientVersion1 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 - :* Nil + HardForkNodeToClientEnabled + HardForkSpecificNodeToClientVersion3 + ( EraNodeToClientEnabled ByronNodeToClientVersion1 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion8 + :* Nil ) -- | The hard fork enabled, and the Shelley, Allegra, Mary, Alonzo and Babbage @@ -273,16 +291,16 @@ pattern CardanoNodeToClientVersion12 = -- Shelley-based eras. 
pattern CardanoNodeToClientVersion13 :: BlockNodeToClientVersion (CardanoBlock c) pattern CardanoNodeToClientVersion13 = - HardForkNodeToClientEnabled - HardForkSpecificNodeToClientVersion3 - ( EraNodeToClientEnabled ByronNodeToClientVersion1 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 - :* Nil + HardForkNodeToClientEnabled + HardForkSpecificNodeToClientVersion3 + ( EraNodeToClientEnabled ByronNodeToClientVersion1 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion9 + :* Nil ) -- | The hard fork enabled, and the Shelley, Allegra, Mary, Alonzo and Babbage @@ -290,16 +308,16 @@ pattern CardanoNodeToClientVersion13 = -- Shelley-based eras. 
pattern CardanoNodeToClientVersion14 :: BlockNodeToClientVersion (CardanoBlock c) pattern CardanoNodeToClientVersion14 = - HardForkNodeToClientEnabled - HardForkSpecificNodeToClientVersion3 - ( EraNodeToClientEnabled ByronNodeToClientVersion1 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 - :* Nil + HardForkNodeToClientEnabled + HardForkSpecificNodeToClientVersion3 + ( EraNodeToClientEnabled ByronNodeToClientVersion1 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion10 + :* Nil ) -- | The hard fork enabled, and the Shelley, Allegra, Mary, Alonzo and Babbage @@ -307,16 +325,16 @@ pattern CardanoNodeToClientVersion14 = -- Shelley-based eras. 
pattern CardanoNodeToClientVersion15 :: BlockNodeToClientVersion (CardanoBlock c) pattern CardanoNodeToClientVersion15 = - HardForkNodeToClientEnabled - HardForkSpecificNodeToClientVersion3 - ( EraNodeToClientEnabled ByronNodeToClientVersion1 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 - :* Nil + HardForkNodeToClientEnabled + HardForkSpecificNodeToClientVersion3 + ( EraNodeToClientEnabled ByronNodeToClientVersion1 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion11 + :* Nil ) -- | The hard fork enabled, and the Shelley, Allegra, Mary, Alonzo and Babbage @@ -324,25 +342,29 @@ pattern CardanoNodeToClientVersion15 = -- Shelley-based eras. 
pattern CardanoNodeToClientVersion16 :: BlockNodeToClientVersion (CardanoBlock c) pattern CardanoNodeToClientVersion16 = - HardForkNodeToClientEnabled - HardForkSpecificNodeToClientVersion3 - ( EraNodeToClientEnabled ByronNodeToClientVersion1 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 - :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 - :* Nil + HardForkNodeToClientEnabled + HardForkSpecificNodeToClientVersion3 + ( EraNodeToClientEnabled ByronNodeToClientVersion1 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 + :* EraNodeToClientEnabled ShelleyNodeToClientVersion12 + :* Nil ) -instance CardanoHardForkConstraints c - => SupportedNetworkProtocolVersion (CardanoBlock c) where - supportedNodeToNodeVersions _ = Map.fromList $ +instance + CardanoHardForkConstraints c => + SupportedNetworkProtocolVersion (CardanoBlock c) + where + supportedNodeToNodeVersions _ = + Map.fromList $ [ (NodeToNodeV_14, CardanoNodeToNodeVersion2) ] - supportedNodeToClientVersions _ = Map.fromList $ + supportedNodeToClientVersions _ = + Map.fromList $ [ (NodeToClientV_16, CardanoNodeToClientVersion12) , (NodeToClientV_17, CardanoNodeToClientVersion13) , (NodeToClientV_18, CardanoNodeToClientVersion14) @@ -357,58 +379,58 @@ instance CardanoHardForkConstraints c -------------------------------------------------------------------------------} -- | When to trigger a hard fork to a Cardano era. 
-data CardanoHardForkTrigger blk = - -- | Trigger the hard fork when the ledger protocol version is updated to +data CardanoHardForkTrigger blk + = -- | Trigger the hard fork when the ledger protocol version is updated to -- the default for that era (@'L.eraProtVerLow' \@('ShelleyBlockLedgerEra' -- blk)@). Also see 'TriggerHardForkAtVersion'. CardanoTriggerHardForkAtDefaultVersion - | - -- | Trigger the hard fork at the given epoch. For testing only. Also see + | -- | Trigger the hard fork at the given epoch. For testing only. Also see -- 'TriggerHardForkAtEpoch'. CardanoTriggerHardForkAtEpoch EpochNo - deriving stock (Show) + deriving stock Show toTriggerHardFork :: - forall blk. L.Era (ShelleyBlockLedgerEra blk) - => CardanoHardForkTrigger blk - -> TriggerHardFork + forall blk. + L.Era (ShelleyBlockLedgerEra blk) => + CardanoHardForkTrigger blk -> + TriggerHardFork toTriggerHardFork = \case - CardanoTriggerHardForkAtDefaultVersion -> - TriggerHardForkAtVersion $ - SL.getVersion (L.eraProtVerLow @(ShelleyBlockLedgerEra blk)) - CardanoTriggerHardForkAtEpoch epochNo -> - TriggerHardForkAtEpoch epochNo - -newtype CardanoHardForkTriggers = CardanoHardForkTriggers { - getCardanoHardForkTriggers :: - NP CardanoHardForkTrigger (CardanoShelleyEras StandardCrypto) + CardanoTriggerHardForkAtDefaultVersion -> + TriggerHardForkAtVersion $ + SL.getVersion (L.eraProtVerLow @(ShelleyBlockLedgerEra blk)) + CardanoTriggerHardForkAtEpoch epochNo -> + TriggerHardForkAtEpoch epochNo + +newtype CardanoHardForkTriggers = CardanoHardForkTriggers + { getCardanoHardForkTriggers :: + NP CardanoHardForkTrigger (CardanoShelleyEras StandardCrypto) } pattern CardanoHardForkTriggers' :: - (c ~ StandardCrypto) - => CardanoHardForkTrigger (ShelleyBlock (TPraos c) ShelleyEra) - -> CardanoHardForkTrigger (ShelleyBlock (TPraos c) AllegraEra) - -> CardanoHardForkTrigger (ShelleyBlock (TPraos c) MaryEra) - -> CardanoHardForkTrigger (ShelleyBlock (TPraos c) AlonzoEra) - -> CardanoHardForkTrigger 
(ShelleyBlock (Praos c) BabbageEra) - -> CardanoHardForkTrigger (ShelleyBlock (Praos c) ConwayEra) - -> CardanoHardForkTriggers -pattern CardanoHardForkTriggers' { - triggerHardForkShelley - , triggerHardForkAllegra - , triggerHardForkMary - , triggerHardForkAlonzo - , triggerHardForkBabbage - , triggerHardForkConway - } = - CardanoHardForkTriggers - ( triggerHardForkShelley - :* triggerHardForkAllegra - :* triggerHardForkMary - :* triggerHardForkAlonzo - :* triggerHardForkBabbage - :* triggerHardForkConway - :* Nil + c ~ StandardCrypto => + CardanoHardForkTrigger (ShelleyBlock (TPraos c) ShelleyEra) -> + CardanoHardForkTrigger (ShelleyBlock (TPraos c) AllegraEra) -> + CardanoHardForkTrigger (ShelleyBlock (TPraos c) MaryEra) -> + CardanoHardForkTrigger (ShelleyBlock (TPraos c) AlonzoEra) -> + CardanoHardForkTrigger (ShelleyBlock (Praos c) BabbageEra) -> + CardanoHardForkTrigger (ShelleyBlock (Praos c) ConwayEra) -> + CardanoHardForkTriggers +pattern CardanoHardForkTriggers' + { triggerHardForkShelley + , triggerHardForkAllegra + , triggerHardForkMary + , triggerHardForkAlonzo + , triggerHardForkBabbage + , triggerHardForkConway + } = + CardanoHardForkTriggers + ( triggerHardForkShelley + :* triggerHardForkAllegra + :* triggerHardForkMary + :* triggerHardForkAlonzo + :* triggerHardForkBabbage + :* triggerHardForkConway + :* Nil ) {-# COMPLETE CardanoHardForkTriggers' #-} @@ -434,30 +456,28 @@ pattern CardanoHardForkTriggers' { -- @ProtVer (SL.natVersion @10) 0@, this indicates that the node is -- ready to perform an intra-era hardfork (from version @9@ to version -- @10@). 
--- -data CardanoProtocolParams c = CardanoProtocolParams { - byronProtocolParams :: ProtocolParamsByron - , shelleyBasedProtocolParams :: ProtocolParamsShelleyBased c - , cardanoHardForkTriggers :: CardanoHardForkTriggers +data CardanoProtocolParams c = CardanoProtocolParams + { byronProtocolParams :: ProtocolParamsByron + , shelleyBasedProtocolParams :: ProtocolParamsShelleyBased c + , cardanoHardForkTriggers :: CardanoHardForkTriggers , cardanoLedgerTransitionConfig :: L.TransitionConfig L.LatestKnownEra - , cardanoCheckpoints :: CheckpointsMap (CardanoBlock c) - -- | The greatest protocol version that this node's software and config - -- files declare to handle correctly. - -- - -- This parameter has two consequences. First, the blocks minted - -- will include the protocol version in their header, but - -- essentially only for public signaling (eg measuring the - -- percentage of adoption of software updates). - -- - -- Second, and more importantly, it's passed to the protocol logic. In - -- particular, the node's envelope check will begin rejecting all blocks - -- (actually, their headers) if the chain moves to a greater protocol - -- version. This should never happen in a node that is using up-to-date - -- software and config files. Note that the missing software update is - -- not necessarily a 'HardForkBlock' era transition: it might be an - -- /intra-era hard fork/ (ie conditionals in the ledger rules). - -- - , cardanoProtocolVersion :: ProtVer + , cardanoCheckpoints :: CheckpointsMap (CardanoBlock c) + , cardanoProtocolVersion :: ProtVer + -- ^ The greatest protocol version that this node's software and config + -- files declare to handle correctly. + -- + -- This parameter has two consequences. First, the blocks minted + -- will include the protocol version in their header, but + -- essentially only for public signaling (eg measuring the + -- percentage of adoption of software updates). 
+ -- + -- Second, and more importantly, it's passed to the protocol logic. In + -- particular, the node's envelope check will begin rejecting all blocks + -- (actually, their headers) if the chain moves to a greater protocol + -- version. This should never happen in a node that is using up-to-date + -- software and config files. Note that the missing software update is + -- not necessarily a 'HardForkBlock' era transition: it might be an + -- /intra-era hard fork/ (ie conditionals in the ledger rules). } -- | Create a 'ProtocolInfo' for 'CardanoBlock' @@ -472,106 +492,112 @@ data CardanoProtocolParams c = CardanoProtocolParams { -- PRECONDITION: only a single set of Shelley credentials is allowed when used -- for mainnet (check against @'SL.gNetworkId' == 'SL.Mainnet'@). protocolInfoCardano :: - forall c m. (IOLike m, CardanoHardForkConstraints c) - => CardanoProtocolParams c - -> ( ProtocolInfo (CardanoBlock c) - , m [BlockForging m (CardanoBlock c)] - ) + forall c m. + (IOLike m, CardanoHardForkConstraints c) => + CardanoProtocolParams c -> + ( ProtocolInfo (CardanoBlock c) + , m [BlockForging m (CardanoBlock c)] + ) protocolInfoCardano paramsCardano | SL.Mainnet <- SL.sgNetworkId genesisShelley - , length credssShelleyBased > 1 - = error "Multiple Shelley-based credentials not allowed for mainnet" - | otherwise - = assertWithMsg (validateGenesis genesisShelley) - ( ProtocolInfo { - pInfoConfig = cfg - , pInfoInitLedger = initExtLedgerStateCardano - } - , blockForging - ) - where - CardanoProtocolParams { - byronProtocolParams - , shelleyBasedProtocolParams - , cardanoHardForkTriggers = CardanoHardForkTriggers' { - triggerHardForkShelley + , length credssShelleyBased > 1 = + error "Multiple Shelley-based credentials not allowed for mainnet" + | otherwise = + assertWithMsg + (validateGenesis genesisShelley) + ( ProtocolInfo + { pInfoConfig = cfg + , pInfoInitLedger = initExtLedgerStateCardano + } + , blockForging + ) + where + CardanoProtocolParams + { 
byronProtocolParams + , shelleyBasedProtocolParams + , cardanoHardForkTriggers = + CardanoHardForkTriggers' + { triggerHardForkShelley , triggerHardForkAllegra , triggerHardForkMary , triggerHardForkAlonzo , triggerHardForkBabbage , triggerHardForkConway } - , cardanoLedgerTransitionConfig - , cardanoCheckpoints - , cardanoProtocolVersion - } = paramsCardano - - genesisShelley = cardanoLedgerTransitionConfig ^. L.tcShelleyGenesisL - - ProtocolParamsByron { - byronGenesis = genesisByron - , byronLeaderCredentials = mCredsByron - } = byronProtocolParams - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = initialNonceShelley - , shelleyBasedLeaderCredentials = credssShelleyBased - } = shelleyBasedProtocolParams - - transitionConfigShelley = transitionConfigAllegra ^. L.tcPreviousEraConfigL - transitionConfigAllegra = transitionConfigMary ^. L.tcPreviousEraConfigL - transitionConfigMary = transitionConfigAlonzo ^. L.tcPreviousEraConfigL - transitionConfigAlonzo = transitionConfigBabbage ^. L.tcPreviousEraConfigL - transitionConfigBabbage = transitionConfigConway ^. L.tcPreviousEraConfigL - transitionConfigConway = cardanoLedgerTransitionConfig - - -- The major protocol version of the last era is the maximum major protocol - -- version we support. 
- -- - maxMajorProtVer :: MaxMajorProtVer - maxMajorProtVer = MaxMajorProtVer $ pvMajor cardanoProtocolVersion - - -- Byron - - ProtocolInfo { - pInfoConfig = topLevelConfigByron@TopLevelConfig { - topLevelConfigProtocol = consensusConfigByron - , topLevelConfigLedger = ledgerConfigByron - , topLevelConfigBlock = blockConfigByron - } - , pInfoInitLedger = initExtLedgerStateByron - } = protocolInfoByron byronProtocolParams - - partialConsensusConfigByron :: PartialConsensusConfig (BlockProtocol ByronBlock) - partialConsensusConfigByron = consensusConfigByron - - partialLedgerConfigByron :: PartialLedgerConfig ByronBlock - partialLedgerConfigByron = ByronPartialLedgerConfig { - byronLedgerConfig = ledgerConfigByron - , byronTriggerHardFork = toTriggerHardFork triggerHardForkShelley - } + , cardanoLedgerTransitionConfig + , cardanoCheckpoints + , cardanoProtocolVersion + } = paramsCardano + + genesisShelley = cardanoLedgerTransitionConfig ^. L.tcShelleyGenesisL + + ProtocolParamsByron + { byronGenesis = genesisByron + , byronLeaderCredentials = mCredsByron + } = byronProtocolParams + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = initialNonceShelley + , shelleyBasedLeaderCredentials = credssShelleyBased + } = shelleyBasedProtocolParams + + transitionConfigShelley = transitionConfigAllegra ^. L.tcPreviousEraConfigL + transitionConfigAllegra = transitionConfigMary ^. L.tcPreviousEraConfigL + transitionConfigMary = transitionConfigAlonzo ^. L.tcPreviousEraConfigL + transitionConfigAlonzo = transitionConfigBabbage ^. L.tcPreviousEraConfigL + transitionConfigBabbage = transitionConfigConway ^. L.tcPreviousEraConfigL + transitionConfigConway = cardanoLedgerTransitionConfig + + -- The major protocol version of the last era is the maximum major protocol + -- version we support. 
+ -- + maxMajorProtVer :: MaxMajorProtVer + maxMajorProtVer = MaxMajorProtVer $ pvMajor cardanoProtocolVersion + + -- Byron - kByron :: SecurityParam - kByron = Byron.genesisSecurityParam genesisByron - - -- Shelley + ProtocolInfo + { pInfoConfig = + topLevelConfigByron@TopLevelConfig + { topLevelConfigProtocol = consensusConfigByron + , topLevelConfigLedger = ledgerConfigByron + , topLevelConfigBlock = blockConfigByron + } + , pInfoInitLedger = initExtLedgerStateByron + } = protocolInfoByron byronProtocolParams - tpraosParams :: TPraosParams - tpraosParams = - Shelley.mkTPraosParams - maxMajorProtVer - initialNonceShelley - genesisShelley + partialConsensusConfigByron :: PartialConsensusConfig (BlockProtocol ByronBlock) + partialConsensusConfigByron = consensusConfigByron - TPraosParams { tpraosSlotsPerKESPeriod, tpraosMaxKESEvo } = tpraosParams + partialLedgerConfigByron :: PartialLedgerConfig ByronBlock + partialLedgerConfigByron = + ByronPartialLedgerConfig + { byronLedgerConfig = ledgerConfigByron + , byronTriggerHardFork = toTriggerHardFork triggerHardForkShelley + } - praosParams :: PraosParams - praosParams = PraosParams - { praosSlotsPerKESPeriod = SL.sgSlotsPerKESPeriod genesisShelley, - praosLeaderF = SL.mkActiveSlotCoeff $ SL.sgActiveSlotsCoeff genesisShelley, - praosSecurityParam = SecurityParam $ SL.sgSecurityParam genesisShelley, - praosMaxKESEvo = SL.sgMaxKESEvolutions genesisShelley, - praosMaxMajorPV = maxMajorProtVer, - praosRandomnessStabilisationWindow = + kByron :: SecurityParam + kByron = Byron.genesisSecurityParam genesisByron + + -- Shelley + + tpraosParams :: TPraosParams + tpraosParams = + Shelley.mkTPraosParams + maxMajorProtVer + initialNonceShelley + genesisShelley + + TPraosParams{tpraosSlotsPerKESPeriod, tpraosMaxKESEvo} = tpraosParams + + praosParams :: PraosParams + praosParams = + PraosParams + { praosSlotsPerKESPeriod = SL.sgSlotsPerKESPeriod genesisShelley + , praosLeaderF = SL.mkActiveSlotCoeff $ SL.sgActiveSlotsCoeff 
genesisShelley + , praosSecurityParam = SecurityParam $ SL.sgSecurityParam genesisShelley + , praosMaxKESEvo = SL.sgMaxKESEvolutions genesisShelley + , praosMaxMajorPV = maxMajorProtVer + , praosRandomnessStabilisationWindow = -- This value is used for all Praos eras /except/ Babbage, see -- 'partialConsensusConfigBabbage'. SL.computeRandomnessStabilisationWindow @@ -579,178 +605,185 @@ protocolInfoCardano paramsCardano (SL.mkActiveSlotCoeff $ SL.sgActiveSlotsCoeff genesisShelley) } - PraosParams { praosSlotsPerKESPeriod, praosMaxKESEvo } = praosParams - - blockConfigShelley :: BlockConfig (ShelleyBlock (TPraos c) ShelleyEra) - blockConfigShelley = - Shelley.mkShelleyBlockConfig - cardanoProtocolVersion - genesisShelley - (shelleyBlockIssuerVKey <$> credssShelleyBased) - - partialConsensusConfigShelley :: - PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) ShelleyEra)) - partialConsensusConfigShelley = tpraosParams - - partialLedgerConfigShelley :: PartialLedgerConfig (ShelleyBlock (TPraos c) ShelleyEra) - partialLedgerConfigShelley = - mkPartialLedgerConfigShelley - transitionConfigShelley - (toTriggerHardFork triggerHardForkAllegra) - - kShelley :: SecurityParam - kShelley = SecurityParam $ sgSecurityParam genesisShelley - - -- Allegra - - blockConfigAllegra :: BlockConfig (ShelleyBlock (TPraos c) AllegraEra) - blockConfigAllegra = - Shelley.mkShelleyBlockConfig - cardanoProtocolVersion - genesisShelley - (shelleyBlockIssuerVKey <$> credssShelleyBased) - - partialConsensusConfigAllegra :: - PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) AllegraEra)) - partialConsensusConfigAllegra = tpraosParams - - partialLedgerConfigAllegra :: PartialLedgerConfig (ShelleyBlock (TPraos c) AllegraEra) - partialLedgerConfigAllegra = - mkPartialLedgerConfigShelley - transitionConfigAllegra - (toTriggerHardFork triggerHardForkMary) - - -- Mary - - blockConfigMary :: BlockConfig (ShelleyBlock (TPraos c) MaryEra) - blockConfigMary = - 
Shelley.mkShelleyBlockConfig - cardanoProtocolVersion - genesisShelley - (shelleyBlockIssuerVKey <$> credssShelleyBased) - - partialConsensusConfigMary :: - PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) MaryEra)) - partialConsensusConfigMary = tpraosParams - - partialLedgerConfigMary :: PartialLedgerConfig (ShelleyBlock (TPraos c) MaryEra) - partialLedgerConfigMary = - mkPartialLedgerConfigShelley - transitionConfigMary - (toTriggerHardFork triggerHardForkAlonzo) - - -- Alonzo - - blockConfigAlonzo :: BlockConfig (ShelleyBlock (TPraos c) AlonzoEra) - blockConfigAlonzo = - Shelley.mkShelleyBlockConfig - cardanoProtocolVersion - genesisShelley - (shelleyBlockIssuerVKey <$> credssShelleyBased) - - partialConsensusConfigAlonzo :: - PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) AlonzoEra)) - partialConsensusConfigAlonzo = tpraosParams - - partialLedgerConfigAlonzo :: PartialLedgerConfig (ShelleyBlock (TPraos c) AlonzoEra) - partialLedgerConfigAlonzo = - mkPartialLedgerConfigShelley - transitionConfigAlonzo - (toTriggerHardFork triggerHardForkBabbage) - - -- Babbage - - blockConfigBabbage :: BlockConfig (ShelleyBlock (Praos c) BabbageEra) - blockConfigBabbage = - Shelley.mkShelleyBlockConfig - cardanoProtocolVersion - genesisShelley - (shelleyBlockIssuerVKey <$> credssShelleyBased) - - partialConsensusConfigBabbage :: - PartialConsensusConfig (BlockProtocol (ShelleyBlock (Praos c) BabbageEra)) - partialConsensusConfigBabbage = praosParams { - -- For Praos in Babbage (just as in all TPraos eras) we use the - -- smaller (3k/f vs 4k/f slots) stability window here for - -- backwards-compatibility. See erratum 17.3 in the Shelley ledger - -- specs for context. 
- praosRandomnessStabilisationWindow = - SL.computeStabilityWindow - (SL.unNonZero $ SL.sgSecurityParam genesisShelley) - (SL.mkActiveSlotCoeff $ SL.sgActiveSlotsCoeff genesisShelley) - } - - - partialLedgerConfigBabbage :: PartialLedgerConfig (ShelleyBlock (Praos c) BabbageEra) - partialLedgerConfigBabbage = - mkPartialLedgerConfigShelley - transitionConfigBabbage - (toTriggerHardFork triggerHardForkConway) - - -- Conway - - blockConfigConway :: BlockConfig (ShelleyBlock (Praos c) ConwayEra) - blockConfigConway = - Shelley.mkShelleyBlockConfig - cardanoProtocolVersion - genesisShelley - (shelleyBlockIssuerVKey <$> credssShelleyBased) - - partialConsensusConfigConway :: - PartialConsensusConfig (BlockProtocol (ShelleyBlock (Praos c) ConwayEra)) - partialConsensusConfigConway = praosParams - - partialLedgerConfigConway :: PartialLedgerConfig (ShelleyBlock (Praos c) ConwayEra) - partialLedgerConfigConway = - mkPartialLedgerConfigShelley - transitionConfigConway - TriggerHardForkNotDuringThisExecution - - -- Cardano - - k :: SecurityParam - k = assert (kByron == kShelley) kByron - - shape :: History.Shape (CardanoEras c) - shape = History.Shape $ Exactly $ - K (Byron.byronEraParams genesisByron) - :* K (Shelley.shelleyEraParams genesisShelley) - :* K (Shelley.shelleyEraParams genesisShelley) - :* K (Shelley.shelleyEraParams genesisShelley) - :* K (Shelley.shelleyEraParams genesisShelley) - :* K (Shelley.shelleyEraParams genesisShelley) - :* K (Shelley.shelleyEraParams genesisShelley) - :* Nil + PraosParams{praosSlotsPerKESPeriod, praosMaxKESEvo} = praosParams + + blockConfigShelley :: BlockConfig (ShelleyBlock (TPraos c) ShelleyEra) + blockConfigShelley = + Shelley.mkShelleyBlockConfig + cardanoProtocolVersion + genesisShelley + (shelleyBlockIssuerVKey <$> credssShelleyBased) + + partialConsensusConfigShelley :: + PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) ShelleyEra)) + partialConsensusConfigShelley = tpraosParams + + partialLedgerConfigShelley :: 
PartialLedgerConfig (ShelleyBlock (TPraos c) ShelleyEra) + partialLedgerConfigShelley = + mkPartialLedgerConfigShelley + transitionConfigShelley + (toTriggerHardFork triggerHardForkAllegra) + + kShelley :: SecurityParam + kShelley = SecurityParam $ sgSecurityParam genesisShelley + + -- Allegra + + blockConfigAllegra :: BlockConfig (ShelleyBlock (TPraos c) AllegraEra) + blockConfigAllegra = + Shelley.mkShelleyBlockConfig + cardanoProtocolVersion + genesisShelley + (shelleyBlockIssuerVKey <$> credssShelleyBased) + + partialConsensusConfigAllegra :: + PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) AllegraEra)) + partialConsensusConfigAllegra = tpraosParams + + partialLedgerConfigAllegra :: PartialLedgerConfig (ShelleyBlock (TPraos c) AllegraEra) + partialLedgerConfigAllegra = + mkPartialLedgerConfigShelley + transitionConfigAllegra + (toTriggerHardFork triggerHardForkMary) + + -- Mary + + blockConfigMary :: BlockConfig (ShelleyBlock (TPraos c) MaryEra) + blockConfigMary = + Shelley.mkShelleyBlockConfig + cardanoProtocolVersion + genesisShelley + (shelleyBlockIssuerVKey <$> credssShelleyBased) + + partialConsensusConfigMary :: + PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) MaryEra)) + partialConsensusConfigMary = tpraosParams + + partialLedgerConfigMary :: PartialLedgerConfig (ShelleyBlock (TPraos c) MaryEra) + partialLedgerConfigMary = + mkPartialLedgerConfigShelley + transitionConfigMary + (toTriggerHardFork triggerHardForkAlonzo) + + -- Alonzo + + blockConfigAlonzo :: BlockConfig (ShelleyBlock (TPraos c) AlonzoEra) + blockConfigAlonzo = + Shelley.mkShelleyBlockConfig + cardanoProtocolVersion + genesisShelley + (shelleyBlockIssuerVKey <$> credssShelleyBased) + + partialConsensusConfigAlonzo :: + PartialConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) AlonzoEra)) + partialConsensusConfigAlonzo = tpraosParams + + partialLedgerConfigAlonzo :: PartialLedgerConfig (ShelleyBlock (TPraos c) AlonzoEra) + partialLedgerConfigAlonzo = + 
mkPartialLedgerConfigShelley + transitionConfigAlonzo + (toTriggerHardFork triggerHardForkBabbage) + + -- Babbage + + blockConfigBabbage :: BlockConfig (ShelleyBlock (Praos c) BabbageEra) + blockConfigBabbage = + Shelley.mkShelleyBlockConfig + cardanoProtocolVersion + genesisShelley + (shelleyBlockIssuerVKey <$> credssShelleyBased) + + partialConsensusConfigBabbage :: + PartialConsensusConfig (BlockProtocol (ShelleyBlock (Praos c) BabbageEra)) + partialConsensusConfigBabbage = + praosParams + { -- For Praos in Babbage (just as in all TPraos eras) we use the + -- smaller (3k/f vs 4k/f slots) stability window here for + -- backwards-compatibility. See erratum 17.3 in the Shelley ledger + -- specs for context. + praosRandomnessStabilisationWindow = + SL.computeStabilityWindow + (SL.unNonZero $ SL.sgSecurityParam genesisShelley) + (SL.mkActiveSlotCoeff $ SL.sgActiveSlotsCoeff genesisShelley) + } - cfg :: TopLevelConfig (CardanoBlock c) - cfg = TopLevelConfig { - topLevelConfigProtocol = HardForkConsensusConfig { - hardForkConsensusConfigK = k - , hardForkConsensusConfigShape = shape - , hardForkConsensusConfigPerEra = PerEraConsensusConfig - ( WrapPartialConsensusConfig partialConsensusConfigByron - :* WrapPartialConsensusConfig partialConsensusConfigShelley - :* WrapPartialConsensusConfig partialConsensusConfigAllegra - :* WrapPartialConsensusConfig partialConsensusConfigMary - :* WrapPartialConsensusConfig partialConsensusConfigAlonzo - :* WrapPartialConsensusConfig partialConsensusConfigBabbage - :* WrapPartialConsensusConfig partialConsensusConfigConway - :* Nil - ) - } - , topLevelConfigLedger = HardForkLedgerConfig { - hardForkLedgerConfigShape = shape - , hardForkLedgerConfigPerEra = PerEraLedgerConfig - ( WrapPartialLedgerConfig partialLedgerConfigByron - :* WrapPartialLedgerConfig partialLedgerConfigShelley - :* WrapPartialLedgerConfig partialLedgerConfigAllegra - :* WrapPartialLedgerConfig partialLedgerConfigMary - :* WrapPartialLedgerConfig 
partialLedgerConfigAlonzo - :* WrapPartialLedgerConfig partialLedgerConfigBabbage - :* WrapPartialLedgerConfig partialLedgerConfigConway - :* Nil - ) - } + partialLedgerConfigBabbage :: PartialLedgerConfig (ShelleyBlock (Praos c) BabbageEra) + partialLedgerConfigBabbage = + mkPartialLedgerConfigShelley + transitionConfigBabbage + (toTriggerHardFork triggerHardForkConway) + + -- Conway + + blockConfigConway :: BlockConfig (ShelleyBlock (Praos c) ConwayEra) + blockConfigConway = + Shelley.mkShelleyBlockConfig + cardanoProtocolVersion + genesisShelley + (shelleyBlockIssuerVKey <$> credssShelleyBased) + + partialConsensusConfigConway :: + PartialConsensusConfig (BlockProtocol (ShelleyBlock (Praos c) ConwayEra)) + partialConsensusConfigConway = praosParams + + partialLedgerConfigConway :: PartialLedgerConfig (ShelleyBlock (Praos c) ConwayEra) + partialLedgerConfigConway = + mkPartialLedgerConfigShelley + transitionConfigConway + TriggerHardForkNotDuringThisExecution + + -- Cardano + + k :: SecurityParam + k = assert (kByron == kShelley) kByron + + shape :: History.Shape (CardanoEras c) + shape = + History.Shape $ + Exactly $ + K (Byron.byronEraParams genesisByron) + :* K (Shelley.shelleyEraParams genesisShelley) + :* K (Shelley.shelleyEraParams genesisShelley) + :* K (Shelley.shelleyEraParams genesisShelley) + :* K (Shelley.shelleyEraParams genesisShelley) + :* K (Shelley.shelleyEraParams genesisShelley) + :* K (Shelley.shelleyEraParams genesisShelley) + :* Nil + + cfg :: TopLevelConfig (CardanoBlock c) + cfg = + TopLevelConfig + { topLevelConfigProtocol = + HardForkConsensusConfig + { hardForkConsensusConfigK = k + , hardForkConsensusConfigShape = shape + , hardForkConsensusConfigPerEra = + PerEraConsensusConfig + ( WrapPartialConsensusConfig partialConsensusConfigByron + :* WrapPartialConsensusConfig partialConsensusConfigShelley + :* WrapPartialConsensusConfig partialConsensusConfigAllegra + :* WrapPartialConsensusConfig partialConsensusConfigMary + :* 
WrapPartialConsensusConfig partialConsensusConfigAlonzo + :* WrapPartialConsensusConfig partialConsensusConfigBabbage + :* WrapPartialConsensusConfig partialConsensusConfigConway + :* Nil + ) + } + , topLevelConfigLedger = + HardForkLedgerConfig + { hardForkLedgerConfigShape = shape + , hardForkLedgerConfigPerEra = + PerEraLedgerConfig + ( WrapPartialLedgerConfig partialLedgerConfigByron + :* WrapPartialLedgerConfig partialLedgerConfigShelley + :* WrapPartialLedgerConfig partialLedgerConfigAllegra + :* WrapPartialLedgerConfig partialLedgerConfigMary + :* WrapPartialLedgerConfig partialLedgerConfigAlonzo + :* WrapPartialLedgerConfig partialLedgerConfigBabbage + :* WrapPartialLedgerConfig partialLedgerConfigConway + :* Nil + ) + } , topLevelConfigBlock = CardanoBlockConfig blockConfigByron @@ -781,138 +814,150 @@ protocolInfoCardano paramsCardano , topLevelConfigCheckpoints = cardanoCheckpoints } - -- When the initial ledger state is not in the Byron era, register various - -- data from the genesis config (if provided) in the ledger state. For - -- example, this includes initial staking and initial funds (useful for - -- testing/benchmarking). 
- initExtLedgerStateCardano :: ExtLedgerState (CardanoBlock c) ValuesMK - initExtLedgerStateCardano = ExtLedgerState { - headerState = initHeaderState - , ledgerState = overShelleyBasedLedgerState initLedgerState - } - where - overShelleyBasedLedgerState (HardForkLedgerState st) = - HardForkLedgerState $ hap (fn id :* registerAny) st - - initHeaderState :: HeaderState (CardanoBlock c) - initLedgerState :: LedgerState (CardanoBlock c) ValuesMK - ExtLedgerState initLedgerState initHeaderState = - injectInitialExtLedgerState cfg - $ initExtLedgerStateByron - - registerAny :: NP (Flip LedgerState ValuesMK -.-> Flip LedgerState ValuesMK) (CardanoShelleyEras c) - registerAny = - hcmap (Proxy @IsShelleyBlock) injectIntoTestState $ - WrapTransitionConfig transitionConfigShelley - :* WrapTransitionConfig transitionConfigAllegra - :* WrapTransitionConfig transitionConfigMary - :* WrapTransitionConfig transitionConfigAlonzo - :* WrapTransitionConfig transitionConfigBabbage - :* WrapTransitionConfig transitionConfigConway - :* Nil - - injectIntoTestState :: - ShelleyBasedEra era - => WrapTransitionConfig (ShelleyBlock proto era) - -> (Flip LedgerState ValuesMK -.-> Flip LedgerState ValuesMK) (ShelleyBlock proto era) - injectIntoTestState (WrapTransitionConfig tcfg) = fn $ \(Flip st) -> - -- We need to unstow the injected values - Flip $ unstowLedgerTables $ forgetLedgerTables $ st { - Shelley.shelleyLedgerState = L.injectIntoTestState tcfg - (Shelley.shelleyLedgerState $ stowLedgerTables st) - } - - -- | For each element in the list, a block forging thread will be started. - -- - -- When no credentials are passed, there will be no threads. - -- - -- Typically, there will only be a single set of credentials for Shelley. - -- - -- In case there are multiple credentials for Shelley, which is only done - -- for testing/benchmarking purposes, we'll have a separate thread for each - -- of them. 
- -- - -- If Byron credentials are passed, we merge them with the Shelley - -- credentials if possible, so that we only have a single thread running in - -- the case we have Byron credentials and a single set of Shelley - -- credentials. If there are multiple Shelley credentials, we merge the - -- Byron credentials with the first Shelley one but still have separate - -- threads for the remaining Shelley ones. - blockForging :: m [BlockForging m (CardanoBlock c)] - blockForging = do - shelleyBased <- traverse blockForgingShelleyBased credssShelleyBased - let blockForgings :: [NonEmptyOptNP (BlockForging m) (CardanoEras c)] - blockForgings = case (mBlockForgingByron, shelleyBased) of - (Nothing, shelleys) -> shelleys - (Just byron, []) -> [byron] - (Just byron, shelley:shelleys) -> - OptNP.zipWith merge byron shelley : shelleys - where - -- When merging Byron with Shelley-based eras, we should never - -- merge two from the same era. - merge (These1 _ _) = error "forgings of the same era" - merge (This1 x) = x - merge (That1 y) = y - - return $ hardForkBlockForging "Cardano" <$> blockForgings - - mBlockForgingByron :: Maybe (NonEmptyOptNP (BlockForging m) (CardanoEras c)) - mBlockForgingByron = do - creds <- mCredsByron - return $ byronBlockForging creds `OptNP.at` IZ - - blockForgingShelleyBased :: - ShelleyLeaderCredentials c - -> m (NonEmptyOptNP (BlockForging m) (CardanoEras c)) - blockForgingShelleyBased credentials = do - let ShelleyLeaderCredentials - { shelleyLeaderCredentialsInitSignKey = initSignKey - , shelleyLeaderCredentialsCanBeLeader = canBeLeader - } = credentials - - hotKey <- do - let maxKESEvo :: Word64 - maxKESEvo = assert (tpraosMaxKESEvo == praosMaxKESEvo) praosMaxKESEvo - - startPeriod :: Absolute.KESPeriod - startPeriod = Absolute.ocertKESPeriod $ praosCanBeLeaderOpCert canBeLeader - - HotKey.mkHotKey @m @c initSignKey startPeriod maxKESEvo - - let slotToPeriod :: SlotNo -> Absolute.KESPeriod - slotToPeriod (SlotNo slot) = assert 
(tpraosSlotsPerKESPeriod == praosSlotsPerKESPeriod) $ - Absolute.KESPeriod $ fromIntegral $ slot `div` praosSlotsPerKESPeriod - - let tpraos :: forall era. - ShelleyEraWithCrypto c (TPraos c) era - => BlockForging m (ShelleyBlock (TPraos c) era) - tpraos = - TPraos.shelleySharedBlockForging hotKey slotToPeriod credentials - - let praos :: forall era. - ShelleyEraWithCrypto c (Praos c) era - => BlockForging m (ShelleyBlock (Praos c) era) - praos = - Praos.praosSharedBlockForging hotKey slotToPeriod credentials - - pure - $ OptSkip -- Byron - $ OptNP.fromNonEmptyNP $ - tpraos :* - tpraos :* - tpraos :* - tpraos :* - praos :* - praos :* - Nil + -- When the initial ledger state is not in the Byron era, register various + -- data from the genesis config (if provided) in the ledger state. For + -- example, this includes initial staking and initial funds (useful for + -- testing/benchmarking). + initExtLedgerStateCardano :: ExtLedgerState (CardanoBlock c) ValuesMK + initExtLedgerStateCardano = + ExtLedgerState + { headerState = initHeaderState + , ledgerState = overShelleyBasedLedgerState initLedgerState + } + where + overShelleyBasedLedgerState (HardForkLedgerState st) = + HardForkLedgerState $ hap (fn id :* registerAny) st + + initHeaderState :: HeaderState (CardanoBlock c) + initLedgerState :: LedgerState (CardanoBlock c) ValuesMK + ExtLedgerState initLedgerState initHeaderState = + injectInitialExtLedgerState cfg $ + initExtLedgerStateByron + + registerAny :: NP (Flip LedgerState ValuesMK -.-> Flip LedgerState ValuesMK) (CardanoShelleyEras c) + registerAny = + hcmap (Proxy @IsShelleyBlock) injectIntoTestState $ + WrapTransitionConfig transitionConfigShelley + :* WrapTransitionConfig transitionConfigAllegra + :* WrapTransitionConfig transitionConfigMary + :* WrapTransitionConfig transitionConfigAlonzo + :* WrapTransitionConfig transitionConfigBabbage + :* WrapTransitionConfig transitionConfigConway + :* Nil + + injectIntoTestState :: + ShelleyBasedEra era => + 
WrapTransitionConfig (ShelleyBlock proto era) -> + (Flip LedgerState ValuesMK -.-> Flip LedgerState ValuesMK) (ShelleyBlock proto era) + injectIntoTestState (WrapTransitionConfig tcfg) = fn $ \(Flip st) -> + -- We need to unstow the injected values + Flip $ + unstowLedgerTables $ + forgetLedgerTables $ + st + { Shelley.shelleyLedgerState = + L.injectIntoTestState + tcfg + (Shelley.shelleyLedgerState $ stowLedgerTables st) + } + + -- \| For each element in the list, a block forging thread will be started. + -- + -- When no credentials are passed, there will be no threads. + -- + -- Typically, there will only be a single set of credentials for Shelley. + -- + -- In case there are multiple credentials for Shelley, which is only done + -- for testing/benchmarking purposes, we'll have a separate thread for each + -- of them. + -- + -- If Byron credentials are passed, we merge them with the Shelley + -- credentials if possible, so that we only have a single thread running in + -- the case we have Byron credentials and a single set of Shelley + -- credentials. If there are multiple Shelley credentials, we merge the + -- Byron credentials with the first Shelley one but still have separate + -- threads for the remaining Shelley ones. + blockForging :: m [BlockForging m (CardanoBlock c)] + blockForging = do + shelleyBased <- traverse blockForgingShelleyBased credssShelleyBased + let blockForgings :: [NonEmptyOptNP (BlockForging m) (CardanoEras c)] + blockForgings = case (mBlockForgingByron, shelleyBased) of + (Nothing, shelleys) -> shelleys + (Just byron, []) -> [byron] + (Just byron, shelley : shelleys) -> + OptNP.zipWith merge byron shelley : shelleys + where + -- When merging Byron with Shelley-based eras, we should never + -- merge two from the same era. 
+ merge (These1 _ _) = error "forgings of the same era" + merge (This1 x) = x + merge (That1 y) = y + + return $ hardForkBlockForging "Cardano" <$> blockForgings + + mBlockForgingByron :: Maybe (NonEmptyOptNP (BlockForging m) (CardanoEras c)) + mBlockForgingByron = do + creds <- mCredsByron + return $ byronBlockForging creds `OptNP.at` IZ + + blockForgingShelleyBased :: + ShelleyLeaderCredentials c -> + m (NonEmptyOptNP (BlockForging m) (CardanoEras c)) + blockForgingShelleyBased credentials = do + let ShelleyLeaderCredentials + { shelleyLeaderCredentialsInitSignKey = initSignKey + , shelleyLeaderCredentialsCanBeLeader = canBeLeader + } = credentials + + hotKey <- do + let maxKESEvo :: Word64 + maxKESEvo = assert (tpraosMaxKESEvo == praosMaxKESEvo) praosMaxKESEvo + + startPeriod :: Absolute.KESPeriod + startPeriod = Absolute.ocertKESPeriod $ praosCanBeLeaderOpCert canBeLeader + + HotKey.mkHotKey @m @c initSignKey startPeriod maxKESEvo + + let slotToPeriod :: SlotNo -> Absolute.KESPeriod + slotToPeriod (SlotNo slot) = + assert (tpraosSlotsPerKESPeriod == praosSlotsPerKESPeriod) $ + Absolute.KESPeriod $ + fromIntegral $ + slot `div` praosSlotsPerKESPeriod + + let tpraos :: + forall era. + ShelleyEraWithCrypto c (TPraos c) era => + BlockForging m (ShelleyBlock (TPraos c) era) + tpraos = + TPraos.shelleySharedBlockForging hotKey slotToPeriod credentials + + let praos :: + forall era. + ShelleyEraWithCrypto c (Praos c) era => + BlockForging m (ShelleyBlock (Praos c) era) + praos = + Praos.praosSharedBlockForging hotKey slotToPeriod credentials + + pure $ + OptSkip $ -- Byron + OptNP.fromNonEmptyNP $ + tpraos + :* tpraos + :* tpraos + :* tpraos + :* praos + :* praos + :* Nil protocolClientInfoCardano :: - forall c. - -- Byron - EpochSlots - -> ProtocolClientInfo (CardanoBlock c) -protocolClientInfoCardano epochSlots = ProtocolClientInfo { - pClientInfoCodecConfig = + forall c. 
+ -- Byron + EpochSlots -> + ProtocolClientInfo (CardanoBlock c) +protocolClientInfoCardano epochSlots = + ProtocolClientInfo + { pClientInfoCodecConfig = CardanoCodecConfig (pClientInfoCodecConfig (protocolClientInfoByron epochSlots)) (pClientInfoCodecConfig protocolClientInfoShelley) @@ -928,22 +973,22 @@ protocolClientInfoCardano epochSlots = ProtocolClientInfo { -------------------------------------------------------------------------------} mkPartialLedgerConfigShelley :: - L.EraTransition era - => L.TransitionConfig era - -> TriggerHardFork - -> PartialLedgerConfig (ShelleyBlock proto era) + L.EraTransition era => + L.TransitionConfig era -> + TriggerHardFork -> + PartialLedgerConfig (ShelleyBlock proto era) mkPartialLedgerConfigShelley transitionConfig shelleyTriggerHardFork = - ShelleyPartialLedgerConfig { - shelleyLedgerConfig = - Shelley.mkShelleyLedgerConfig - (transitionConfig ^. L.tcShelleyGenesisL) - (transitionConfig ^. L.tcTranslationContextL) - -- 'completeLedgerConfig' will replace the 'History.dummyEpochInfo' - -- in the partial ledger config with the correct one. - History.dummyEpochInfo - , shelleyTriggerHardFork = shelleyTriggerHardFork - } + ShelleyPartialLedgerConfig + { shelleyLedgerConfig = + Shelley.mkShelleyLedgerConfig + (transitionConfig ^. L.tcShelleyGenesisL) + (transitionConfig ^. L.tcTranslationContextL) + -- 'completeLedgerConfig' will replace the 'History.dummyEpochInfo' + -- in the partial ledger config with the correct one. + History.dummyEpochInfo + , shelleyTriggerHardFork = shelleyTriggerHardFork + } -- | We need this wrapper to partially apply a 'TransitionConfig' in an NP. 
-newtype WrapTransitionConfig blk = - WrapTransitionConfig (L.TransitionConfig (ShelleyBlockLedgerEra blk)) +newtype WrapTransitionConfig blk + = WrapTransitionConfig (L.TransitionConfig (ShelleyBlockLedgerEra blk)) diff --git a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/QueryHF.hs b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/QueryHF.hs index 34e6617220..889ad87061 100644 --- a/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/QueryHF.hs +++ b/ouroboros-consensus-cardano/src/ouroboros-consensus-cardano/Ouroboros/Consensus/Cardano/QueryHF.hs @@ -15,112 +15,114 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.Cardano.QueryHF () where -import Data.Functor.Product -import Data.Singletons -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Index -import Data.SOP.Strict -import NoThunks.Class -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Node () -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.CanHardFork -import Ouroboros.Consensus.Cardano.Ledger -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.Tables -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Node () -import Ouroboros.Consensus.Shelley.Protocol.Praos () -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.Functor.Product +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Index +import Data.SOP.Strict +import Data.Singletons +import NoThunks.Class +import Ouroboros.Consensus.Byron.Ledger +import 
Ouroboros.Consensus.Byron.Node () +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.CanHardFork +import Ouroboros.Consensus.Cardano.Ledger +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Node () +import Ouroboros.Consensus.Shelley.Protocol.Praos () +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.TypeFamilyWrappers -- | Just to have the @x@ as the last type variable -newtype FlipBlockQuery footprint result x = - FlipBlockQuery (BlockQuery x footprint result) +newtype FlipBlockQuery footprint result x + = FlipBlockQuery (BlockQuery x footprint result) answerCardanoQueryHF :: - forall x xs c footprint result m. - ( xs ~ CardanoEras c - , CardanoHardForkConstraints c - , All (Compose NoThunks WrapTxOut) xs - , SingI footprint - ) - => ( forall blk. - IsShelleyBlock blk - => Index xs blk - -> ExtLedgerCfg blk - -> BlockQuery blk footprint result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m result - ) - -> Index xs x - -> ExtLedgerCfg x - -> BlockQuery x footprint result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m result + forall x xs c footprint result m. + ( xs ~ CardanoEras c + , CardanoHardForkConstraints c + , All (Compose NoThunks WrapTxOut) xs + , SingI footprint + ) => + ( forall blk. 
+ IsShelleyBlock blk => + Index xs blk -> + ExtLedgerCfg blk -> + BlockQuery blk footprint result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m result + ) -> + Index xs x -> + ExtLedgerCfg x -> + BlockQuery x footprint result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m result answerCardanoQueryHF f idx cfg q dlv = case sing :: Sing footprint of SQFNoTables -> error "answerCardanoQueryHF: unreachable, this was called with a QFNoTables query" - _ -> hcollapse $ - hap - ( (Fn $ \(Pair _ (FlipBlockQuery q')) -> case q' of {}) - :* hcmap - (Proxy @(IsShelleyBlock)) - (\idx' -> Fn $ \(Pair cfg' (FlipBlockQuery q')) -> K $ f (IS idx') cfg' q' dlv) - indices - ) - (injectNS idx (Pair cfg (FlipBlockQuery q))) + _ -> + hcollapse $ + hap + ( (Fn $ \(Pair _ (FlipBlockQuery q')) -> case q' of {}) + :* hcmap + (Proxy @(IsShelleyBlock)) + (\idx' -> Fn $ \(Pair cfg' (FlipBlockQuery q')) -> K $ f (IS idx') cfg' q' dlv) + indices + ) + (injectNS idx (Pair cfg (FlipBlockQuery q))) shelleyCardanoFilter :: - forall proto era c result. - ( CardanoHardForkConstraints c - , ShelleyCompatible proto era - ) - => BlockQuery (ShelleyBlock proto era) QFTraverseTables result - -> TxOut (LedgerState (HardForkBlock (CardanoEras c))) - -> Bool + forall proto era c result. 
+ ( CardanoHardForkConstraints c + , ShelleyCompatible proto era + ) => + BlockQuery (ShelleyBlock proto era) QFTraverseTables result -> + TxOut (LedgerState (HardForkBlock (CardanoEras c))) -> + Bool shelleyCardanoFilter q = eliminateCardanoTxOut (\_ -> shelleyQFTraverseTablesPredicate q) instance CardanoHardForkConstraints c => BlockSupportsHFLedgerQuery (CardanoEras c) where - answerBlockQueryHFLookup = + answerBlockQueryHFLookup = answerCardanoQueryHF - (\idx -> answerShelleyLookupQueries - (injectLedgerTables idx) - (ejectHardForkTxOut idx) - (ejectCanonicalTxIn idx) + ( \idx -> + answerShelleyLookupQueries + (injectLedgerTables idx) + (ejectHardForkTxOut idx) + (ejectCanonicalTxIn idx) ) answerBlockQueryHFTraverse = answerCardanoQueryHF - (\idx -> answerShelleyTraversingQueries - (ejectHardForkTxOut idx) - (ejectCanonicalTxIn idx) - (queryLedgerGetTraversingFilter idx) + ( \idx -> + answerShelleyTraversingQueries + (ejectHardForkTxOut idx) + (ejectCanonicalTxIn idx) + (queryLedgerGetTraversingFilter idx) ) queryLedgerGetTraversingFilter idx q = case idx of -- Byron - IZ -> byronCardanoFilter q + IZ -> byronCardanoFilter q -- Shelley based - IS IZ -> shelleyCardanoFilter q - IS (IS IZ) -> shelleyCardanoFilter q - IS (IS (IS IZ)) -> shelleyCardanoFilter q - IS (IS (IS (IS IZ))) -> shelleyCardanoFilter q - IS (IS (IS (IS (IS IZ)))) -> shelleyCardanoFilter q - IS (IS (IS (IS (IS (IS IZ))))) -> shelleyCardanoFilter q + IS IZ -> shelleyCardanoFilter q + IS (IS IZ) -> shelleyCardanoFilter q + IS (IS (IS IZ)) -> shelleyCardanoFilter q + IS (IS (IS (IS IZ))) -> shelleyCardanoFilter q + IS (IS (IS (IS (IS IZ)))) -> shelleyCardanoFilter q + IS (IS (IS (IS (IS (IS IZ))))) -> shelleyCardanoFilter q IS (IS (IS (IS (IS (IS (IS idx')))))) -> case idx' of {} byronCardanoFilter :: - BlockQuery ByronBlock QFTraverseTables result - -> TxOut (LedgerState (HardForkBlock (CardanoEras c))) - -> Bool + BlockQuery ByronBlock QFTraverseTables result -> + TxOut (LedgerState 
(HardForkBlock (CardanoEras c))) -> + Bool byronCardanoFilter = \case {} diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Crypto.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Crypto.hs index 963a43b135..5c1b58ab41 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Crypto.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Crypto.hs @@ -4,4 +4,4 @@ -- | Module defining the crypto primitives used throughout Shelley based eras. module Ouroboros.Consensus.Shelley.Crypto (StandardCrypto) where -import Cardano.Protocol.Crypto (StandardCrypto) +import Cardano.Protocol.Crypto (StandardCrypto) diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Eras.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Eras.hs index 32a6ba6892..771ba2f63c 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Eras.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Eras.hs @@ -11,17 +11,17 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Shelley.Eras ( - -- * Eras based on the Shelley ledger +module Ouroboros.Consensus.Shelley.Eras + ( -- * Eras based on the Shelley ledger AllegraEra , AlonzoEra , BabbageEra , ConwayEra , MaryEra , ShelleyEra + -- * Eras instantiated with standard crypto , StandardAllegra , StandardAlonzo @@ -29,53 +29,58 @@ module Ouroboros.Consensus.Shelley.Eras ( , StandardConway , StandardMary , StandardShelley + -- * Shelley-based era , ConwayEraGovDict (..) , ShelleyBasedEra (..) , WrapTx (..) 
+ -- * Convenience functions , isBeforeConway + -- * Re-exports , StandardCrypto ) where -import Cardano.Binary -import Cardano.Ledger.Allegra (AllegraEra) -import Cardano.Ledger.Allegra.Translation () -import Cardano.Ledger.Alonzo (AlonzoEra) -import qualified Cardano.Ledger.Alonzo.Rules as Alonzo -import qualified Cardano.Ledger.Alonzo.Translation as Alonzo -import qualified Cardano.Ledger.Alonzo.Tx as Alonzo -import qualified Cardano.Ledger.Api.Era as L -import Cardano.Ledger.Babbage (BabbageEra) -import qualified Cardano.Ledger.Babbage.Rules as Babbage -import qualified Cardano.Ledger.Babbage.Translation as Babbage -import Cardano.Ledger.BaseTypes -import Cardano.Ledger.Binary (DecCBOR, EncCBOR) -import Cardano.Ledger.Conway (ConwayEra) -import qualified Cardano.Ledger.Conway.Governance as CG -import qualified Cardano.Ledger.Conway.Rules as Conway -import qualified Cardano.Ledger.Conway.Rules as SL - (ConwayLedgerPredFailure (..)) -import qualified Cardano.Ledger.Conway.Translation as Conway -import Cardano.Ledger.Core as Core -import Cardano.Ledger.Mary (MaryEra) -import Cardano.Ledger.Mary.Translation () -import Cardano.Ledger.Shelley (ShelleyEra) -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Ledger.Shelley.Core as Core -import qualified Cardano.Ledger.Shelley.LedgerState as SL -import qualified Cardano.Ledger.Shelley.Rules as SL -import qualified Cardano.Ledger.Shelley.Transition as SL -import qualified Cardano.Protocol.TPraos.API as SL -import Control.Monad.Except -import Control.State.Transition (PredicateFailure) -import Data.Data (Proxy (Proxy)) -import Data.List.NonEmpty (NonEmpty ((:|))) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Ledger.SupportsMempool - (WhetherToIntervene (..)) -import Ouroboros.Consensus.Protocol.TPraos (StandardCrypto) +import Cardano.Binary +import Cardano.Ledger.Allegra (AllegraEra) +import Cardano.Ledger.Allegra.Translation () +import Cardano.Ledger.Alonzo (AlonzoEra) +import 
Cardano.Ledger.Alonzo.Rules qualified as Alonzo +import Cardano.Ledger.Alonzo.Translation qualified as Alonzo +import Cardano.Ledger.Alonzo.Tx qualified as Alonzo +import Cardano.Ledger.Api.Era qualified as L +import Cardano.Ledger.Babbage (BabbageEra) +import Cardano.Ledger.Babbage.Rules qualified as Babbage +import Cardano.Ledger.Babbage.Translation qualified as Babbage +import Cardano.Ledger.BaseTypes +import Cardano.Ledger.Binary (DecCBOR, EncCBOR) +import Cardano.Ledger.Conway (ConwayEra) +import Cardano.Ledger.Conway.Governance qualified as CG +import Cardano.Ledger.Conway.Rules qualified as Conway +import Cardano.Ledger.Conway.Rules qualified as SL + ( ConwayLedgerPredFailure (..) + ) +import Cardano.Ledger.Conway.Translation qualified as Conway +import Cardano.Ledger.Core as Core +import Cardano.Ledger.Mary (MaryEra) +import Cardano.Ledger.Mary.Translation () +import Cardano.Ledger.Shelley (ShelleyEra) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Core as Core +import Cardano.Ledger.Shelley.LedgerState qualified as SL +import Cardano.Ledger.Shelley.Rules qualified as SL +import Cardano.Ledger.Shelley.Transition qualified as SL +import Cardano.Protocol.TPraos.API qualified as SL +import Control.Monad.Except +import Control.State.Transition (PredicateFailure) +import Data.Data (Proxy (Proxy)) +import Data.List.NonEmpty (NonEmpty ((:|))) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Ledger.SupportsMempool + ( WhetherToIntervene (..) 
+ ) +import Ouroboros.Consensus.Protocol.TPraos (StandardCrypto) {------------------------------------------------------------------------------- Eras instantiated with standard crypto @@ -83,26 +88,32 @@ import Ouroboros.Consensus.Protocol.TPraos (StandardCrypto) -- | The Shelley era with standard crypto type StandardShelley = ShelleyEra + {-# DEPRECATED StandardShelley "In favor of `ShelleyEra`" #-} -- | The Allegra era with standard crypto type StandardAllegra = AllegraEra + {-# DEPRECATED StandardAllegra "In favor of `AllegraEra`" #-} -- | The Mary era with standard crypto type StandardMary = MaryEra + {-# DEPRECATED StandardMary "In favor of `MaryEra`" #-} -- | The Alonzo era with standard crypto type StandardAlonzo = AlonzoEra + {-# DEPRECATED StandardAlonzo "In favor of `AlonzoEra`" #-} -- | The Babbage era with standard crypto type StandardBabbage = BabbageEra + {-# DEPRECATED StandardBabbage "In favor of `BabbageEra`" #-} -- | The Conway era with standard crypto type StandardConway = ConwayEra + {-# DEPRECATED StandardConway "In favor of `ConwayEra`" #-} {------------------------------------------------------------------------------- @@ -125,73 +136,71 @@ type StandardConway = ConwayEra -- needed to determine the hard fork point. In the future this should be -- replaced with an appropriate API - see -- https://github.com/IntersectMBO/ouroboros-network/issues/2890 -class ( Core.EraSegWits era - , Core.EraGov era - , SL.ApplyTx era - , SL.ApplyBlock era - , SL.EraTransition era - - -- TODO This constraint is quite tight, since it fixes things to the - -- original TPraos ledger view. We would like to ultimately remove it. 
- , SL.GetLedgerView era - - , NoThunks (SL.StashedAVVMAddresses era) - , EncCBOR (SL.StashedAVVMAddresses era) - , DecCBOR (SL.StashedAVVMAddresses era) - , Show (SL.StashedAVVMAddresses era) - , Eq (SL.StashedAVVMAddresses era) - - , DecCBOR (PredicateFailure (EraRule "LEDGER" era)) - , EncCBOR (PredicateFailure (EraRule "LEDGER" era)) - , DecCBOR (PredicateFailure (EraRule "UTXOW" era)) - , EncCBOR (PredicateFailure (EraRule "UTXOW" era)) - , Eq (PredicateFailure (EraRule "BBODY" era)) - , Show (PredicateFailure (EraRule "BBODY" era)) - , NoThunks (PredicateFailure (EraRule "BBODY" era)) - , NoThunks (Core.TranslationContext era) - - , ToCBOR (Core.TranslationContext era) - , FromCBOR (Core.TranslationContext era) - ) => ShelleyBasedEra era where - +class + ( Core.EraSegWits era + , Core.EraGov era + , SL.ApplyTx era + , SL.ApplyBlock era + , SL.EraTransition era + , -- TODO This constraint is quite tight, since it fixes things to the + -- original TPraos ledger view. We would like to ultimately remove it. 
+ SL.GetLedgerView era + , NoThunks (SL.StashedAVVMAddresses era) + , EncCBOR (SL.StashedAVVMAddresses era) + , DecCBOR (SL.StashedAVVMAddresses era) + , Show (SL.StashedAVVMAddresses era) + , Eq (SL.StashedAVVMAddresses era) + , DecCBOR (PredicateFailure (EraRule "LEDGER" era)) + , EncCBOR (PredicateFailure (EraRule "LEDGER" era)) + , DecCBOR (PredicateFailure (EraRule "UTXOW" era)) + , EncCBOR (PredicateFailure (EraRule "UTXOW" era)) + , Eq (PredicateFailure (EraRule "BBODY" era)) + , Show (PredicateFailure (EraRule "BBODY" era)) + , NoThunks (PredicateFailure (EraRule "BBODY" era)) + , NoThunks (Core.TranslationContext era) + , ToCBOR (Core.TranslationContext era) + , FromCBOR (Core.TranslationContext era) + ) => + ShelleyBasedEra era + where applyShelleyBasedTx :: - SL.Globals - -> SL.LedgerEnv era - -> SL.LedgerState era - -> WhetherToIntervene - -> Core.Tx era - -> Except - (SL.ApplyTxError era) - ( SL.LedgerState era - , SL.Validated (Core.Tx era) - ) + SL.Globals -> + SL.LedgerEnv era -> + SL.LedgerState era -> + WhetherToIntervene -> + Core.Tx era -> + Except + (SL.ApplyTxError era) + ( SL.LedgerState era + , SL.Validated (Core.Tx era) + ) -- | Whether the era has an instance of 'CG.ConwayEraGov' getConwayEraGovDict :: proxy era -> Maybe (ConwayEraGovDict era) data ConwayEraGovDict era where - ConwayEraGovDict :: CG.ConwayEraGov era => ConwayEraGovDict era + ConwayEraGovDict :: CG.ConwayEraGov era => ConwayEraGovDict era isBeforeConway :: forall era. 
L.Era era => Proxy era -> Bool isBeforeConway _ = - L.eraProtVerLow @era < L.eraProtVerLow @L.ConwayEra + L.eraProtVerLow @era < L.eraProtVerLow @L.ConwayEra -- | The default implementation of 'applyShelleyBasedTx', a thin wrapper around -- 'SL.applyTx' defaultApplyShelleyBasedTx :: - ShelleyBasedEra era - => SL.Globals - -> SL.LedgerEnv era - -> SL.LedgerState era - -> WhetherToIntervene - -> Core.Tx era - -> Except - (SL.ApplyTxError era) - ( SL.LedgerState era - , SL.Validated (Core.Tx era) - ) + ShelleyBasedEra era => + SL.Globals -> + SL.LedgerEnv era -> + SL.LedgerState era -> + WhetherToIntervene -> + Core.Tx era -> + Except + (SL.ApplyTxError era) + ( SL.LedgerState era + , SL.Validated (Core.Tx era) + ) defaultApplyShelleyBasedTx globals ledgerEnv mempoolState _wti tx = - liftEither $ + liftEither $ SL.applyTx globals ledgerEnv @@ -231,10 +240,11 @@ instance ShelleyBasedEra ConwayEra where getConwayEraGovDict _ = Just ConwayEraGovDict -applyAlonzoBasedTx :: forall era. - ( ShelleyBasedEra era, - SupportsTwoPhaseValidation era, - Core.Tx era ~ Alonzo.AlonzoTx era +applyAlonzoBasedTx :: + forall era. + ( ShelleyBasedEra era + , SupportsTwoPhaseValidation era + , Core.Tx era ~ Alonzo.AlonzoTx era ) => Globals -> SL.LedgerEnv era -> @@ -243,45 +253,45 @@ applyAlonzoBasedTx :: forall era. 
Alonzo.AlonzoTx era -> Except (SL.ApplyTxError era) - ( SL.LedgerState era, - SL.Validated (Alonzo.AlonzoTx era) + ( SL.LedgerState era + , SL.Validated (Alonzo.AlonzoTx era) ) applyAlonzoBasedTx globals ledgerEnv mempoolState wti tx = do - (mempoolState', vtx) <- - (`catchError` handler) - $ defaultApplyShelleyBasedTx + (mempoolState', vtx) <- + (`catchError` handler) $ + defaultApplyShelleyBasedTx + globals + ledgerEnv + mempoolState + wti + intervenedTx + pure (mempoolState', vtx) + where + intervenedTx = case wti of + DoNotIntervene -> tx{Alonzo.isValid = Alonzo.IsValid True} + Intervene -> tx + + handler e = case (wti, e) of + (DoNotIntervene, SL.ApplyTxError (err :| [])) + | isIncorrectClaimedFlag (Proxy @era) err -> + -- rectify the flag and include the transaction + -- + -- This either lets the ledger punish the script author for sending + -- a bad script or else prevents our peer's buggy script validator + -- from preventing inclusion of a valid script. + -- + -- TODO 'applyTx' et al needs to include a return value indicating + -- whether we took this branch; it's a reason to disconnect from + -- the peer who sent us the incorrect flag (ie Issue #3276) + defaultApplyShelleyBasedTx globals ledgerEnv mempoolState wti - intervenedTx - pure (mempoolState', vtx) - where - intervenedTx = case wti of - DoNotIntervene -> tx { Alonzo.isValid = Alonzo.IsValid True } - Intervene -> tx - - handler e = case (wti, e) of - (DoNotIntervene, SL.ApplyTxError (err :| [])) - | isIncorrectClaimedFlag (Proxy @era) err - -> - -- rectify the flag and include the transaction - -- - -- This either lets the ledger punish the script author for sending - -- a bad script or else prevents our peer's buggy script validator - -- from preventing inclusion of a valid script. 
- -- - -- TODO 'applyTx' et al needs to include a return value indicating - -- whether we took this branch; it's a reason to disconnect from - -- the peer who sent us the incorrect flag (ie Issue #3276) - defaultApplyShelleyBasedTx - globals - ledgerEnv - mempoolState - wti - tx{Alonzo.isValid = Alonzo.IsValid False} - _ -> throwError e - -- reject the transaction, protecting the local wallet + tx{Alonzo.isValid = Alonzo.IsValid False} + _ -> throwError e + +-- reject the transaction, protecting the local wallet class SupportsTwoPhaseValidation era where -- NOTE: this class won't be needed once https://github.com/IntersectMBO/cardano-ledger/issues/4167 is implemented. @@ -327,23 +337,23 @@ instance SupportsTwoPhaseValidation BabbageEra where ( Alonzo.ValidationTagMismatch (Alonzo.IsValid _claimedFlag) _validationErrs - ) - ) - ) - ) -> True + ) + ) + ) + ) -> True _ -> False instance SupportsTwoPhaseValidation ConwayEra where isIncorrectClaimedFlag _ = \case SL.ConwayUtxowFailure ( Conway.UtxoFailure - ( Conway.UtxosFailure - ( Conway.ValidationTagMismatch - (Alonzo.IsValid _claimedFlag) - _validationErrs - ) - ) - ) -> True + ( Conway.UtxosFailure + ( Conway.ValidationTagMismatch + (Alonzo.IsValid _claimedFlag) + _validationErrs + ) + ) + ) -> True _ -> False {------------------------------------------------------------------------------- @@ -373,20 +383,23 @@ instance Core.TranslateEra MaryEra WrapTx where instance Core.TranslateEra AlonzoEra WrapTx where type TranslationError AlonzoEra WrapTx = Core.TranslationError AlonzoEra Alonzo.Tx translateEra ctxt = - fmap (WrapTx . Alonzo.unTx) + fmap (WrapTx . Alonzo.unTx) . Core.translateEra @AlonzoEra ctxt - . Alonzo.Tx . unwrapTx + . Alonzo.Tx + . unwrapTx instance Core.TranslateEra BabbageEra WrapTx where type TranslationError BabbageEra WrapTx = Core.TranslationError BabbageEra Babbage.Tx translateEra ctxt = - fmap (WrapTx . Babbage.unTx) + fmap (WrapTx . Babbage.unTx) . Core.translateEra @BabbageEra ctxt - . 
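Review note (not part of the diff): the `handler` above follows a general "catch one specific failure, retry once with a corrected input" shape. A minimal standalone sketch of that control flow using `Control.Monad.Except`, with hypothetical names (`Flag`, `validate`, `applyWithIntervention`) and a `String` error standing in for `ApplyTxError`:

```haskell
import Control.Monad.Except (Except, catchError, runExcept, throwError)

newtype Flag = Claimed Bool deriving (Eq, Show)

-- Hypothetical validator: fails when the claimed flag disagrees with the
-- actual validation outcome, like ValidationTagMismatch.
validate :: Bool -> Flag -> Except String Flag
validate actual f@(Claimed claimed)
  | claimed == actual = pure f
  | otherwise         = throwError "IncorrectClaimedFlag"

-- Mirror of the DoNotIntervene branch: on exactly this one error, rectify
-- the flag and retry once; any other error is re-thrown unchanged.
applyWithIntervention :: Bool -> Flag -> Except String Flag
applyWithIntervention actual f =
  validate actual f `catchError` \e ->
    case e of
      "IncorrectClaimedFlag" -> validate actual (Claimed actual)
      _                      -> throwError e
```

Here `runExcept (applyWithIntervention False (Claimed True))` yields the rectified `Right (Claimed False)`: rather than rejecting outright, the wrong claimed flag is flipped and the transaction reapplied, which is the same shape as rectifying `Alonzo.isValid` above.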
Babbage.Tx . unwrapTx + . Babbage.Tx + . unwrapTx instance Core.TranslateEra ConwayEra WrapTx where type TranslationError ConwayEra WrapTx = Core.TranslationError ConwayEra Conway.Tx translateEra ctxt = - fmap (WrapTx . Conway.unTx) + fmap (WrapTx . Conway.unTx) . Core.translateEra @ConwayEra ctxt - . Conway.Tx . unwrapTx + . Conway.Tx + . unwrapTx diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/HFEras.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/HFEras.hs index ec7e09e1bb..939d9a8b65 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/HFEras.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/HFEras.hs @@ -1,7 +1,6 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Hard fork eras. @@ -9,8 +8,8 @@ -- Compare this to 'Ouroboros.Consensus.Shelley.Eras', which defines ledger -- eras. This module defines hard fork eras, which are a combination of a -- ledger era and a protocol. 
-module Ouroboros.Consensus.Shelley.HFEras ( - StandardAllegraBlock +module Ouroboros.Consensus.Shelley.HFEras + ( StandardAllegraBlock , StandardAlonzoBlock , StandardBabbageBlock , StandardConwayBlock @@ -18,18 +17,26 @@ module Ouroboros.Consensus.Shelley.HFEras ( , StandardShelleyBlock ) where -import Ouroboros.Consensus.Protocol.Praos (Praos) -import qualified Ouroboros.Consensus.Protocol.Praos as Praos -import Ouroboros.Consensus.Protocol.TPraos (StandardCrypto, TPraos) -import qualified Ouroboros.Consensus.Protocol.TPraos as TPraos -import Ouroboros.Consensus.Shelley.Eras (AllegraEra, AlonzoEra, - BabbageEra, ConwayEra, MaryEra, ShelleyEra) -import Ouroboros.Consensus.Shelley.Ledger.Block (ShelleyBlock, - ShelleyCompatible) -import Ouroboros.Consensus.Shelley.Ledger.Protocol () -import Ouroboros.Consensus.Shelley.Protocol.Praos () -import Ouroboros.Consensus.Shelley.Protocol.TPraos () -import Ouroboros.Consensus.Shelley.ShelleyHFC () +import Ouroboros.Consensus.Protocol.Praos (Praos) +import Ouroboros.Consensus.Protocol.Praos qualified as Praos +import Ouroboros.Consensus.Protocol.TPraos (StandardCrypto, TPraos) +import Ouroboros.Consensus.Protocol.TPraos qualified as TPraos +import Ouroboros.Consensus.Shelley.Eras + ( AllegraEra + , AlonzoEra + , BabbageEra + , ConwayEra + , MaryEra + , ShelleyEra + ) +import Ouroboros.Consensus.Shelley.Ledger.Block + ( ShelleyBlock + , ShelleyCompatible + ) +import Ouroboros.Consensus.Shelley.Ledger.Protocol () +import Ouroboros.Consensus.Shelley.Protocol.Praos () +import Ouroboros.Consensus.Shelley.Protocol.TPraos () +import Ouroboros.Consensus.Shelley.ShelleyHFC () {------------------------------------------------------------------------------- Hard fork eras @@ -74,8 +81,7 @@ instance (Praos.PraosCrypto c, TPraos.PraosCrypto c) => ShelleyCompatible (TPraos c) BabbageEra -instance - (Praos.PraosCrypto c) => ShelleyCompatible (Praos c) BabbageEra +instance Praos.PraosCrypto c => ShelleyCompatible (Praos c) BabbageEra -- 
This instance is required since the ledger view forecast function for -- Praos/Conway still goes through the forecast for TPraos. Once this is @@ -84,5 +90,4 @@ instance (Praos.PraosCrypto c, TPraos.PraosCrypto c) => ShelleyCompatible (TPraos c) ConwayEra -instance - (Praos.PraosCrypto c) => ShelleyCompatible (Praos c) ConwayEra +instance Praos.PraosCrypto c => ShelleyCompatible (Praos c) ConwayEra diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger.hs index 29eb132980..b33e9426a7 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger.hs @@ -1,11 +1,11 @@ module Ouroboros.Consensus.Shelley.Ledger (module X) where -import Ouroboros.Consensus.Shelley.Ledger.Block as X -import Ouroboros.Consensus.Shelley.Ledger.Config as X -import Ouroboros.Consensus.Shelley.Ledger.Forge as X -import Ouroboros.Consensus.Shelley.Ledger.Integrity as X -import Ouroboros.Consensus.Shelley.Ledger.Ledger as X -import Ouroboros.Consensus.Shelley.Ledger.Mempool as X -import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion as X -import Ouroboros.Consensus.Shelley.Ledger.PeerSelection as X () -import Ouroboros.Consensus.Shelley.Ledger.Query as X +import Ouroboros.Consensus.Shelley.Ledger.Block as X +import Ouroboros.Consensus.Shelley.Ledger.Config as X +import Ouroboros.Consensus.Shelley.Ledger.Forge as X +import Ouroboros.Consensus.Shelley.Ledger.Integrity as X +import Ouroboros.Consensus.Shelley.Ledger.Ledger as X +import Ouroboros.Consensus.Shelley.Ledger.Mempool as X +import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion as X +import Ouroboros.Consensus.Shelley.Ledger.PeerSelection as X () +import Ouroboros.Consensus.Shelley.Ledger.Query as X diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Block.hs 
b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Block.hs index d65b42d9bc..ea887ad88e 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Block.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Block.hs @@ -14,8 +14,8 @@ {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} -module Ouroboros.Consensus.Shelley.Ledger.Block ( - GetHeader (..) +module Ouroboros.Consensus.Shelley.Ledger.Block + ( GetHeader (..) , Header (..) , IsShelleyBlock , NestedCtxt_ (..) @@ -23,58 +23,81 @@ module Ouroboros.Consensus.Shelley.Ledger.Block ( , ShelleyBlock (..) , ShelleyBlockLedgerEra , ShelleyHash (..) + -- * Shelley Compatibility , ShelleyCompatible , mkShelleyBlock , mkShelleyHeader + -- * Serialisation , decodeShelleyBlock , decodeShelleyHeader , encodeShelleyBlock , encodeShelleyHeader , shelleyBinaryBlockInfo + -- * Conversion , fromShelleyPrevHash , toShelleyPrevHash ) where -import qualified Cardano.Crypto.Hash as Crypto -import Cardano.Ledger.Binary (Annotator (..), DecCBOR (..), - EncCBOR (..), FullByteString (..), serialize) -import qualified Cardano.Ledger.Binary.Plain as Plain -import Cardano.Ledger.Core as SL (eraDecoder, eraProtVerLow, - toEraCBOR) -import qualified Cardano.Ledger.Core as SL (TranslationContext, hashTxSeq) -import Cardano.Ledger.Hashes (HASH) -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Protocol.Crypto (Crypto) -import qualified Cardano.Protocol.TPraos.BHeader as SL -import qualified Data.ByteString.Lazy as Lazy -import Data.Coerce (coerce) -import Data.Typeable (Typeable) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator - (HasPartialConsensusConfig) -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Protocol.Abstract (ChainDepState, - SelectView) -import 
Ouroboros.Consensus.Protocol.Praos.Common - (PraosChainSelectView) -import Ouroboros.Consensus.Protocol.Signed (SignedHeader) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto, - ProtocolHeaderSupportsEnvelope (pHeaderPrevHash), - ProtocolHeaderSupportsProtocol (CannotForgeError), - ShelleyHash (ShelleyHash, unShelleyHash), ShelleyProtocol, - ShelleyProtocolHeader, pHeaderBlock, pHeaderBodyHash, - pHeaderHash, pHeaderSlot) -import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) -import Ouroboros.Consensus.Storage.Serialisation (DecodeDisk, - EncodeDisk) -import Ouroboros.Consensus.Util (ShowProxy (..), hashFromBytesShortE) -import Ouroboros.Consensus.Util.Condense +import Cardano.Crypto.Hash qualified as Crypto +import Cardano.Ledger.Binary + ( Annotator (..) + , DecCBOR (..) + , EncCBOR (..) + , FullByteString (..) + , serialize + ) +import Cardano.Ledger.Binary.Plain qualified as Plain +import Cardano.Ledger.Core as SL + ( eraDecoder + , eraProtVerLow + , toEraCBOR + ) +import Cardano.Ledger.Core qualified as SL (TranslationContext, hashTxSeq) +import Cardano.Ledger.Hashes (HASH) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Protocol.Crypto (Crypto) +import Cardano.Protocol.TPraos.BHeader qualified as SL +import Data.ByteString.Lazy qualified as Lazy +import Data.Coerce (coerce) +import Data.Typeable (Typeable) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator + ( HasPartialConsensusConfig + ) +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Protocol.Abstract + ( ChainDepState + , SelectView + ) +import Ouroboros.Consensus.Protocol.Praos.Common + ( PraosChainSelectView + ) +import Ouroboros.Consensus.Protocol.Signed (SignedHeader) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Protocol.Abstract + ( ProtoCrypto + , 
ProtocolHeaderSupportsEnvelope (pHeaderPrevHash) + , ProtocolHeaderSupportsProtocol (CannotForgeError) + , ShelleyHash (ShelleyHash, unShelleyHash) + , ShelleyProtocol + , ShelleyProtocolHeader + , pHeaderBlock + , pHeaderBodyHash + , pHeaderHash + , pHeaderSlot + ) +import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) +import Ouroboros.Consensus.Storage.Serialisation + ( DecodeDisk + , EncodeDisk + ) +import Ouroboros.Consensus.Util (ShowProxy (..), hashFromBytesShortE) +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- ShelleyCompatible @@ -82,32 +105,33 @@ import Ouroboros.Consensus.Util.Condense class ( ShelleyBasedEra era , ShelleyProtocol proto - -- Header constraints - , Eq (ShelleyProtocolHeader proto) + , -- Header constraints + Eq (ShelleyProtocolHeader proto) , Show (ShelleyProtocolHeader proto) , NoThunks (ShelleyProtocolHeader proto) , EncCBOR (ShelleyProtocolHeader proto) , DecCBOR (Annotator (ShelleyProtocolHeader proto)) , Show (CannotForgeError proto) , Show (SL.TranslationContext era) - -- Currently the chain select view is identical + , -- Currently the chain select view is identical -- Era and proto crypto must coincide - , SelectView proto ~ PraosChainSelectView (ProtoCrypto proto) - -- Need to be able to sign the protocol header - , SignedHeader (ShelleyProtocolHeader proto) - -- ChainDepState needs to be serialisable - , DecodeDisk (ShelleyBlock proto era) (ChainDepState proto) + SelectView proto ~ PraosChainSelectView (ProtoCrypto proto) + , -- Need to be able to sign the protocol header + SignedHeader (ShelleyProtocolHeader proto) + , -- ChainDepState needs to be serialisable + DecodeDisk (ShelleyBlock proto era) (ChainDepState proto) , EncodeDisk (ShelleyBlock proto era) (ChainDepState proto) - -- Hard-fork related constraints - , HasPartialConsensusConfig proto + , -- Hard-fork related constraints + HasPartialConsensusConfig proto , DecCBOR (SL.PState era) 
, Crypto (ProtoCrypto proto) - ) => ShelleyCompatible proto era + ) => + ShelleyCompatible proto era instance ShelleyCompatible proto era => ConvertRawHash (ShelleyBlock proto era) where - toShortRawHash _ = Crypto.hashToBytesShort . unShelleyHash + toShortRawHash _ = Crypto.hashToBytesShort . unShelleyHash fromShortRawHash _ = ShelleyHash . hashFromBytesShortE - hashSize _ = fromIntegral $ Crypto.sizeHash (Proxy @HASH) + hashSize _ = fromIntegral $ Crypto.sizeHash (Proxy @HASH) {------------------------------------------------------------------------------- Shelley blocks and headers @@ -116,93 +140,102 @@ instance ShelleyCompatible proto era => ConvertRawHash (ShelleyBlock proto era) -- | Shelley-based block type. -- -- This block is parametrised over both the (ledger) era and the protocol. -data ShelleyBlock proto era = ShelleyBlock { - shelleyBlockRaw :: !(SL.Block (ShelleyProtocolHeader proto) era) - , shelleyBlockHeaderHash :: !ShelleyHash - } +data ShelleyBlock proto era = ShelleyBlock + { shelleyBlockRaw :: !(SL.Block (ShelleyProtocolHeader proto) era) + , shelleyBlockHeaderHash :: !ShelleyHash + } deriving instance ShelleyCompatible proto era => Show (ShelleyBlock proto era) -deriving instance ShelleyCompatible proto era => Eq (ShelleyBlock proto era) +deriving instance ShelleyCompatible proto era => Eq (ShelleyBlock proto era) -instance (Typeable era, Typeable proto) - => ShowProxy (ShelleyBlock proto era) where +instance + (Typeable era, Typeable proto) => + ShowProxy (ShelleyBlock proto era) type instance HeaderHash (ShelleyBlock proto era) = ShelleyHash mkShelleyBlock :: - ShelleyCompatible proto era - => SL.Block (ShelleyProtocolHeader proto) era - -> ShelleyBlock proto era -mkShelleyBlock raw = ShelleyBlock { - shelleyBlockRaw = raw + ShelleyCompatible proto era => + SL.Block (ShelleyProtocolHeader proto) era -> + ShelleyBlock proto era +mkShelleyBlock raw = + ShelleyBlock + { shelleyBlockRaw = raw , shelleyBlockHeaderHash = pHeaderHash $ SL.bheader 
raw } class ( ShelleyCompatible (BlockProtocol blk) (ShelleyBlockLedgerEra blk) , blk ~ ShelleyBlock (BlockProtocol blk) (ShelleyBlockLedgerEra blk) - ) => IsShelleyBlock blk + ) => + IsShelleyBlock blk -instance ( proto ~ BlockProtocol (ShelleyBlock proto era) - , ShelleyCompatible proto era - ) => IsShelleyBlock (ShelleyBlock proto era) +instance + ( proto ~ BlockProtocol (ShelleyBlock proto era) + , ShelleyCompatible proto era + ) => + IsShelleyBlock (ShelleyBlock proto era) type family ShelleyBlockLedgerEra blk where ShelleyBlockLedgerEra (ShelleyBlock proto era) = era -data instance Header (ShelleyBlock proto era) = ShelleyHeader { - shelleyHeaderRaw :: !(ShelleyProtocolHeader proto) - , shelleyHeaderHash :: !ShelleyHash - } - deriving (Generic) +data instance Header (ShelleyBlock proto era) = ShelleyHeader + { shelleyHeaderRaw :: !(ShelleyProtocolHeader proto) + , shelleyHeaderHash :: !ShelleyHash + } + deriving Generic -deriving instance ShelleyCompatible proto era => Show (Header (ShelleyBlock proto era)) -deriving instance ShelleyCompatible proto era => Eq (Header (ShelleyBlock proto era)) +deriving instance ShelleyCompatible proto era => Show (Header (ShelleyBlock proto era)) +deriving instance ShelleyCompatible proto era => Eq (Header (ShelleyBlock proto era)) deriving instance ShelleyCompatible proto era => NoThunks (Header (ShelleyBlock proto era)) -instance (Typeable era, Typeable proto) - => ShowProxy (Header (ShelleyBlock proto era)) where +instance + (Typeable era, Typeable proto) => + ShowProxy (Header (ShelleyBlock proto era)) instance ShelleyCompatible proto era => GetHeader (ShelleyBlock proto era) where - getHeader (ShelleyBlock rawBlk hdrHash) = ShelleyHeader { - shelleyHeaderRaw = SL.bheader rawBlk - , shelleyHeaderHash = hdrHash - } + getHeader (ShelleyBlock rawBlk hdrHash) = + ShelleyHeader + { shelleyHeaderRaw = SL.bheader rawBlk + , shelleyHeaderHash = hdrHash + } blockMatchesHeader hdr blk = - -- Compute the hash the body of the block 
(the transactions) and compare - -- that against the hash of the body stored in the header. - SL.hashTxSeq @era txs == pHeaderBodyHash shelleyHdr - where - ShelleyHeader { shelleyHeaderRaw = shelleyHdr } = hdr - ShelleyBlock { shelleyBlockRaw = SL.Block _ txs } = blk + -- Compute the hash the body of the block (the transactions) and compare + -- that against the hash of the body stored in the header. + SL.hashTxSeq @era txs == pHeaderBodyHash shelleyHdr + where + ShelleyHeader{shelleyHeaderRaw = shelleyHdr} = hdr + ShelleyBlock{shelleyBlockRaw = SL.Block _ txs} = blk headerIsEBB = const Nothing mkShelleyHeader :: - ShelleyCompatible proto era - => ShelleyProtocolHeader proto - -> Header (ShelleyBlock proto era) -mkShelleyHeader raw = ShelleyHeader { - shelleyHeaderRaw = raw + ShelleyCompatible proto era => + ShelleyProtocolHeader proto -> + Header (ShelleyBlock proto era) +mkShelleyHeader raw = + ShelleyHeader + { shelleyHeaderRaw = raw , shelleyHeaderHash = pHeaderHash raw } -instance ShelleyCompatible proto era => HasHeader (ShelleyBlock proto era) where +instance ShelleyCompatible proto era => HasHeader (ShelleyBlock proto era) where getHeaderFields = getBlockHeaderFields instance ShelleyCompatible proto era => HasHeader (Header (ShelleyBlock proto era)) where - getHeaderFields hdr = HeaderFields { - headerFieldHash = pHeaderHash . shelleyHeaderRaw $ hdr - , headerFieldSlot = pHeaderSlot . shelleyHeaderRaw $ hdr - , headerFieldBlockNo = coerce . pHeaderBlock . shelleyHeaderRaw $ hdr - } + getHeaderFields hdr = + HeaderFields + { headerFieldHash = pHeaderHash . shelleyHeaderRaw $ hdr + , headerFieldSlot = pHeaderSlot . shelleyHeaderRaw $ hdr + , headerFieldBlockNo = coerce . pHeaderBlock . shelleyHeaderRaw $ hdr + } instance ShelleyCompatible proto era => GetPrevHash (ShelleyBlock proto era) where headerPrevHash = - fromShelleyPrevHash - . pHeaderPrevHash - . shelleyHeaderRaw + fromShelleyPrevHash + . pHeaderPrevHash + . 
shelleyHeaderRaw instance ShelleyCompatible proto era => StandardHash (ShelleyBlock proto era) @@ -219,13 +252,13 @@ instance ShelleyCompatible proto era => HasAnnTip (ShelleyBlock proto era) -- | From @cardano-ledger-specs@ to @ouroboros-consensus@ fromShelleyPrevHash :: SL.PrevHash -> ChainHash (ShelleyBlock proto era) -fromShelleyPrevHash SL.GenesisHash = GenesisHash +fromShelleyPrevHash SL.GenesisHash = GenesisHash fromShelleyPrevHash (SL.BlockHash h) = BlockHash (ShelleyHash $ SL.unHashHeader h) -- | From @ouroboros-consensus@ to @cardano-ledger-specs@ toShelleyPrevHash :: ChainHash (Header (ShelleyBlock proto era)) -> SL.PrevHash -toShelleyPrevHash GenesisHash = SL.GenesisHash +toShelleyPrevHash GenesisHash = SL.GenesisHash toShelleyPrevHash (BlockHash (ShelleyHash h)) = SL.BlockHash $ SL.HashHeader h {------------------------------------------------------------------------------- @@ -264,32 +297,40 @@ instance ShelleyCompatible proto era => DecCBOR (Annotator (Header (ShelleyBlock decCBOR = fmap mkShelleyHeader <$> decCBOR encodeShelleyBlock :: - forall proto era. ShelleyCompatible proto era - => ShelleyBlock proto era -> Plain.Encoding + forall proto era. + ShelleyCompatible proto era => + ShelleyBlock proto era -> Plain.Encoding encodeShelleyBlock = toEraCBOR @era decodeShelleyBlock :: - forall proto era. ShelleyCompatible proto era - => forall s. Plain.Decoder s (Lazy.ByteString -> ShelleyBlock proto era) + forall proto era. + ShelleyCompatible proto era => + forall s. + Plain.Decoder s (Lazy.ByteString -> ShelleyBlock proto era) decodeShelleyBlock = eraDecoder @era $ (. Full) . runAnnotator <$> decCBOR -shelleyBinaryBlockInfo :: forall proto era. ShelleyCompatible proto era => ShelleyBlock proto era -> BinaryBlockInfo -shelleyBinaryBlockInfo blk = BinaryBlockInfo { - -- Drop the 'encodeListLen' that precedes the header and the body (= tx +shelleyBinaryBlockInfo :: + forall proto era. 
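Review note (not part of the diff): `fromShelleyPrevHash` and `toShelleyPrevHash` above form a pair of total inverses between the ledger's and consensus's previous-hash representations, with the genesis case handled explicitly on both sides. A toy sketch of the same round-trip shape, using hypothetical types and an `Int` standing in for the header hash:

```haskell
-- Toy stand-ins for SL.PrevHash and ChainHash (illustrative only).
data PrevHash  = PrevGenesis | PrevBlock Int deriving (Eq, Show)
data ChainHash = Genesis     | AtBlock   Int deriving (Eq, Show)

-- From the ledger representation to the consensus one.
fromPrevHash :: PrevHash -> ChainHash
fromPrevHash PrevGenesis   = Genesis
fromPrevHash (PrevBlock h) = AtBlock h

-- And back; together the two are inverses in both directions.
toPrevHash :: ChainHash -> PrevHash
toPrevHash Genesis     = PrevGenesis
toPrevHash (AtBlock h) = PrevBlock h
```

Because both functions pattern-match exhaustively on two constructors, `toPrevHash . fromPrevHash` and `fromPrevHash . toPrevHash` are both the identity, which is the property the real pair relies on when converting at the consensus/ledger boundary.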
ShelleyCompatible proto era => ShelleyBlock proto era -> BinaryBlockInfo +shelleyBinaryBlockInfo blk = + BinaryBlockInfo + { -- Drop the 'encodeListLen' that precedes the header and the body (= tx -- seq) headerOffset = 1 - -- The Shelley decoders use annotations, so this is cheap - , headerSize = fromIntegral $ Lazy.length (serialize (SL.eraProtVerLow @era) (getHeader blk)) + , -- The Shelley decoders use annotations, so this is cheap + headerSize = fromIntegral $ Lazy.length (serialize (SL.eraProtVerLow @era) (getHeader blk)) } encodeShelleyHeader :: - forall proto era. ShelleyCompatible proto era - => Header (ShelleyBlock proto era) -> Plain.Encoding + forall proto era. + ShelleyCompatible proto era => + Header (ShelleyBlock proto era) -> Plain.Encoding encodeShelleyHeader = toEraCBOR @era decodeShelleyHeader :: - forall proto era. ShelleyCompatible proto era - => forall s. Plain.Decoder s (Lazy.ByteString -> Header (ShelleyBlock proto era)) + forall proto era. + ShelleyCompatible proto era => + forall s. + Plain.Decoder s (Lazy.ByteString -> Header (ShelleyBlock proto era)) decodeShelleyHeader = eraDecoder @era $ (. Full) . runAnnotator <$> decCBOR {------------------------------------------------------------------------------- diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Config.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Config.hs index 5df1f847cf..69641e97af 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Config.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Config.hs @@ -9,82 +9,89 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Shelley.Ledger.Config ( - BlockConfig (..) +module Ouroboros.Consensus.Shelley.Ledger.Config + ( BlockConfig (..) , CodecConfig (..) , StorageConfig (..) 
, compactGenesis , getCompactGenesis , mkShelleyBlockConfig + -- * opaque , CompactGenesis ) where -import Cardano.Ledger.Binary (FromCBOR, ToCBOR) -import qualified Cardano.Ledger.Shelley.API as SL -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Word (Word64) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Protocol.Praos.Common - (VRFTiebreakerFlavor (..)) -import Ouroboros.Consensus.Shelley.Eras (isBeforeConway) -import Ouroboros.Consensus.Shelley.Ledger.Block -import Ouroboros.Network.Magic (NetworkMagic (..)) +import Cardano.Ledger.Binary (FromCBOR, ToCBOR) +import Cardano.Ledger.Shelley.API qualified as SL +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Word (Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Protocol.Praos.Common + ( VRFTiebreakerFlavor (..) + ) +import Ouroboros.Consensus.Shelley.Eras (isBeforeConway) +import Ouroboros.Consensus.Shelley.Ledger.Block +import Ouroboros.Network.Magic (NetworkMagic (..)) {------------------------------------------------------------------------------- Additional node configuration -------------------------------------------------------------------------------} -data instance BlockConfig (ShelleyBlock proto era) = ShelleyConfig { - -- | The highest protocol version this node supports. It will be stored - -- the headers of produced blocks. - shelleyProtocolVersion :: !SL.ProtVer - , shelleySystemStart :: !SystemStart - , shelleyNetworkMagic :: !NetworkMagic - -- | For nodes that can produce blocks, this should be set to the - -- verification key(s) corresponding to the node's signing key(s). 
For non - -- block producing nodes, this can be set to the empty map. - , shelleyBlockIssuerVKeys :: !(Map (SL.KeyHash 'SL.BlockIssuer) - (SL.VKey 'SL.BlockIssuer)) - , shelleyVRFTiebreakerFlavor :: !VRFTiebreakerFlavor - } - deriving stock (Generic) +data instance BlockConfig (ShelleyBlock proto era) = ShelleyConfig + { shelleyProtocolVersion :: !SL.ProtVer + -- ^ The highest protocol version this node supports. It will be stored + -- the headers of produced blocks. + , shelleySystemStart :: !SystemStart + , shelleyNetworkMagic :: !NetworkMagic + , shelleyBlockIssuerVKeys :: + !( Map + (SL.KeyHash 'SL.BlockIssuer) + (SL.VKey 'SL.BlockIssuer) + ) + -- ^ For nodes that can produce blocks, this should be set to the + -- verification key(s) corresponding to the node's signing key(s). For non + -- block producing nodes, this can be set to the empty map. + , shelleyVRFTiebreakerFlavor :: !VRFTiebreakerFlavor + } + deriving stock Generic -deriving instance ShelleyBasedEra era => Show (BlockConfig (ShelleyBlock proto era)) +deriving instance ShelleyBasedEra era => Show (BlockConfig (ShelleyBlock proto era)) deriving instance ShelleyBasedEra era => NoThunks (BlockConfig (ShelleyBlock proto era)) mkShelleyBlockConfig :: - forall proto era. ShelleyBasedEra era - => SL.ProtVer - -> SL.ShelleyGenesis - -> [SL.VKey 'SL.BlockIssuer] - -> BlockConfig (ShelleyBlock proto era) -mkShelleyBlockConfig protVer genesis blockIssuerVKeys = ShelleyConfig { - shelleyProtocolVersion = protVer - , shelleySystemStart = SystemStart $ SL.sgSystemStart genesis - , shelleyNetworkMagic = NetworkMagic $ SL.sgNetworkMagic genesis - , shelleyBlockIssuerVKeys = Map.fromList - [ (SL.hashKey k, k) - | k <- blockIssuerVKeys - ] + forall proto era. 
+ ShelleyBasedEra era => + SL.ProtVer -> + SL.ShelleyGenesis -> + [SL.VKey 'SL.BlockIssuer] -> + BlockConfig (ShelleyBlock proto era) +mkShelleyBlockConfig protVer genesis blockIssuerVKeys = + ShelleyConfig + { shelleyProtocolVersion = protVer + , shelleySystemStart = SystemStart $ SL.sgSystemStart genesis + , shelleyNetworkMagic = NetworkMagic $ SL.sgNetworkMagic genesis + , shelleyBlockIssuerVKeys = + Map.fromList + [ (SL.hashKey k, k) + | k <- blockIssuerVKeys + ] , shelleyVRFTiebreakerFlavor } - where - shelleyVRFTiebreakerFlavor - | isBeforeConway (Proxy @era) - = UnrestrictedVRFTiebreaker - | otherwise - -- See 'RestrictedVRFTiebreaker' for context. 5 slots is the "usual" value - -- we consider when talking about the maximum propagation delay. - = RestrictedVRFTiebreaker 5 + where + shelleyVRFTiebreakerFlavor + | isBeforeConway (Proxy @era) = + UnrestrictedVRFTiebreaker + | otherwise = + -- See 'RestrictedVRFTiebreaker' for context. 5 slots is the "usual" value + -- we consider when talking about the maximum propagation delay. 
+ RestrictedVRFTiebreaker 5 {------------------------------------------------------------------------------- Codec config @@ -98,12 +105,12 @@ data instance CodecConfig (ShelleyBlock proto era) = ShelleyCodecConfig Storage config -------------------------------------------------------------------------------} -data instance StorageConfig (ShelleyBlock proto era) = ShelleyStorageConfig { - -- | Needed for 'nodeCheckIntegrity' - shelleyStorageConfigSlotsPerKESPeriod :: !Word64 - -- | Needed for 'nodeImmutableDbChunkInfo' - , shelleyStorageConfigSecurityParam :: !SecurityParam - } +data instance StorageConfig (ShelleyBlock proto era) = ShelleyStorageConfig + { shelleyStorageConfigSlotsPerKESPeriod :: !Word64 + -- ^ Needed for 'nodeCheckIntegrity' + , shelleyStorageConfigSecurityParam :: !SecurityParam + -- ^ Needed for 'nodeImmutableDbChunkInfo' + } deriving (Generic, NoThunks) {------------------------------------------------------------------------------- @@ -120,7 +127,7 @@ data instance StorageConfig (ShelleyBlock proto era) = ShelleyStorageConfig { -- -- * The 'sgStaking' field is erased. It is only used to register initial stake -- pools in tests and benchmarks. -newtype CompactGenesis = CompactGenesis { getCompactGenesis :: SL.ShelleyGenesis } +newtype CompactGenesis = CompactGenesis {getCompactGenesis :: SL.ShelleyGenesis} deriving stock (Eq, Show, Generic) deriving newtype (ToCBOR, FromCBOR) @@ -128,8 +135,9 @@ deriving anyclass instance NoThunks CompactGenesis -- | Compacts the given 'SL.ShelleyGenesis'. 
compactGenesis :: SL.ShelleyGenesis -> CompactGenesis -compactGenesis genesis = CompactGenesis $ - genesis { - SL.sgInitialFunds = mempty - , SL.sgStaking = SL.emptyGenesisStaking +compactGenesis genesis = + CompactGenesis $ + genesis + { SL.sgInitialFunds = mempty + , SL.sgStaking = SL.emptyGenesisStaking } diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Forge.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Forge.hs index 838d60a5e2..46d439c838 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Forge.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Forge.hs @@ -6,44 +6,51 @@ module Ouroboros.Consensus.Shelley.Ledger.Forge (forgeShelleyBlock) where -import qualified Cardano.Ledger.Core as Core (Tx) -import qualified Cardano.Ledger.Core as SL (hashTxSeq, toTxSeq) -import qualified Cardano.Ledger.Shelley.API as SL (Block (..), extractTx) -import qualified Cardano.Ledger.Shelley.BlockChain as SL (bBodySize) -import qualified Cardano.Protocol.TPraos.BHeader as SL -import Control.Exception -import qualified Data.Sequence.Strict as Seq -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Protocol.Abstract (CanBeLeader, IsLeader) -import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) -import Ouroboros.Consensus.Shelley.Ledger.Block -import Ouroboros.Consensus.Shelley.Ledger.Config - (shelleyProtocolVersion) -import Ouroboros.Consensus.Shelley.Ledger.Integrity -import Ouroboros.Consensus.Shelley.Ledger.Mempool -import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto, - ProtocolHeaderSupportsKES (configSlotsPerKESPeriod), - mkHeader) +import Cardano.Ledger.Core qualified as Core (Tx) +import Cardano.Ledger.Core qualified as SL (hashTxSeq, toTxSeq) +import Cardano.Ledger.Shelley.API 
qualified as SL (Block (..), extractTx) +import Cardano.Ledger.Shelley.BlockChain qualified as SL (bBodySize) +import Cardano.Protocol.TPraos.BHeader qualified as SL +import Control.Exception +import Data.Sequence.Strict qualified as Seq +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Protocol.Abstract (CanBeLeader, IsLeader) +import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) +import Ouroboros.Consensus.Shelley.Ledger.Block +import Ouroboros.Consensus.Shelley.Ledger.Config + ( shelleyProtocolVersion + ) +import Ouroboros.Consensus.Shelley.Ledger.Integrity +import Ouroboros.Consensus.Shelley.Ledger.Mempool +import Ouroboros.Consensus.Shelley.Protocol.Abstract + ( ProtoCrypto + , ProtocolHeaderSupportsKES (configSlotsPerKESPeriod) + , mkHeader + ) {------------------------------------------------------------------------------- Forging -------------------------------------------------------------------------------} forgeShelleyBlock :: - forall m era proto mk. - (ShelleyCompatible proto era, Monad m) - => HotKey (ProtoCrypto proto) m - -> CanBeLeader proto - -> TopLevelConfig (ShelleyBlock proto era) - -> BlockNo -- ^ Current block number - -> SlotNo -- ^ Current slot number - -> TickedLedgerState (ShelleyBlock proto era) mk -- ^ Current ledger - -> [Validated (GenTx (ShelleyBlock proto era))] -- ^ Txs to include - -> IsLeader proto - -> m (ShelleyBlock proto era) + forall m era proto mk. 
+ (ShelleyCompatible proto era, Monad m) => + HotKey (ProtoCrypto proto) m -> + CanBeLeader proto -> + TopLevelConfig (ShelleyBlock proto era) -> + -- | Current block number + BlockNo -> + -- | Current slot number + SlotNo -> + -- | Current ledger + TickedLedgerState (ShelleyBlock proto era) mk -> + -- | Txs to include + [Validated (GenTx (ShelleyBlock proto era))] -> + IsLeader proto -> + m (ShelleyBlock proto era) forgeShelleyBlock hotKey cbl @@ -53,19 +60,28 @@ forgeShelleyBlock tickedLedger txs isLeader = do - hdr <- mkHeader @_ @(ProtoCrypto proto) hotKey cbl isLeader - curSlot curNo prevHash (SL.hashTxSeq @era body) actualBodySize protocolVersion + hdr <- + mkHeader @_ @(ProtoCrypto proto) + hotKey + cbl + isLeader + curSlot + curNo + prevHash + (SL.hashTxSeq @era body) + actualBodySize + protocolVersion let blk = mkShelleyBlock $ SL.Block hdr body return $ assert (verifyBlockIntegrity (configSlotsPerKESPeriod $ configConsensus cfg) blk) $ - blk - where + blk + where protocolVersion = shelleyProtocolVersion $ configBlock cfg body = - SL.toTxSeq @era - $ Seq.fromList - $ fmap extractTx txs + SL.toTxSeq @era $ + Seq.fromList $ + fmap extractTx txs actualBodySize = SL.bBodySize protocolVersion body @@ -74,7 +90,7 @@ forgeShelleyBlock prevHash :: SL.PrevHash prevHash = - toShelleyPrevHash @proto - . castHash - . getTipHash - $ tickedLedger + toShelleyPrevHash @proto + . castHash + . 
getTipHash + $ tickedLedger diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Inspect.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Inspect.hs index df2cc9635e..a81d2a07bf 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Inspect.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Inspect.hs @@ -4,33 +4,32 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Shelley.Ledger.Inspect ( - ShelleyLedgerUpdate (..) +module Ouroboros.Consensus.Shelley.Ledger.Inspect + ( ShelleyLedgerUpdate (..) , pparamsUpdate ) where -import Cardano.Ledger.BaseTypes (StrictMaybe (..)) -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.Core as Core -import qualified Cardano.Ledger.Shelley.Governance as SL -import qualified Cardano.Ledger.Shelley.LedgerState as SL -import Control.Monad -import Data.Void -import Lens.Micro ((^.)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Shelley.Ledger.Block -import Ouroboros.Consensus.Shelley.Ledger.Ledger -import Ouroboros.Consensus.Util.Condense - -data ShelleyLedgerUpdate era = - ShelleyUpdatedPParams - !(StrictMaybe (Core.PParams era)) - !EpochNo +import Cardano.Ledger.BaseTypes (StrictMaybe (..)) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Core qualified as Core +import Cardano.Ledger.Shelley.Governance qualified as SL +import Cardano.Ledger.Shelley.LedgerState qualified as SL +import Control.Monad +import Data.Void +import Lens.Micro ((^.)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Shelley.Ledger.Block 
+import Ouroboros.Consensus.Shelley.Ledger.Ledger +import Ouroboros.Consensus.Util.Condense + +data ShelleyLedgerUpdate era + = ShelleyUpdatedPParams + !(StrictMaybe (Core.PParams era)) + !EpochNo deriving instance Eq (Core.PParams era) => Eq (ShelleyLedgerUpdate era) deriving instance Show (Core.PParams era) => Show (ShelleyLedgerUpdate era) @@ -40,23 +39,23 @@ instance Show (Core.PParams era) => Condense (ShelleyLedgerUpdate era) where instance ShelleyBasedEra era => InspectLedger (ShelleyBlock proto era) where type LedgerWarning (ShelleyBlock proto era) = Void - type LedgerUpdate (ShelleyBlock proto era) = ShelleyLedgerUpdate era + type LedgerUpdate (ShelleyBlock proto era) = ShelleyLedgerUpdate era inspectLedger _tlc before after = do - guard $ updatesBefore /= updatesAfter - return $ LedgerUpdate updatesAfter - where - - updatesBefore, updatesAfter :: ShelleyLedgerUpdate era - updatesBefore = pparamsUpdate before - updatesAfter = pparamsUpdate after + guard $ updatesBefore /= updatesAfter + return $ LedgerUpdate updatesAfter + where + updatesBefore, updatesAfter :: ShelleyLedgerUpdate era + updatesBefore = pparamsUpdate before + updatesAfter = pparamsUpdate after pparamsUpdate :: - forall era proto mk. ShelleyBasedEra era - => LedgerState (ShelleyBlock proto era) mk - -> ShelleyLedgerUpdate era + forall era proto mk. + ShelleyBasedEra era => + LedgerState (ShelleyBlock proto era) mk -> + ShelleyLedgerUpdate era pparamsUpdate st = - let nes = shelleyLedgerState st - in ShelleyUpdatedPParams - (SL.nextEpochUpdatedPParams (nes ^. SL.newEpochStateGovStateL)) - (succ (SL.nesEL nes)) + let nes = shelleyLedgerState st + in ShelleyUpdatedPParams + (SL.nextEpochUpdatedPParams (nes ^. 
SL.newEpochStateGovStateL)) + (succ (SL.nesEL nes)) diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Integrity.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Integrity.hs index 9449c55e0d..44a119731f 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Integrity.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Integrity.hs @@ -1,20 +1,22 @@ -module Ouroboros.Consensus.Shelley.Ledger.Integrity ( - verifyBlockIntegrity +module Ouroboros.Consensus.Shelley.Ledger.Integrity + ( verifyBlockIntegrity , verifyHeaderIntegrity ) where -import Data.Word (Word64) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Shelley.Ledger.Block -import Ouroboros.Consensus.Shelley.Protocol.Abstract - (verifyHeaderIntegrity) +import Data.Word (Word64) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Shelley.Ledger.Block +import Ouroboros.Consensus.Shelley.Protocol.Abstract + ( verifyHeaderIntegrity + ) -- | Verifies whether the block is not corrupted by checking its signature and -- witnesses. 
verifyBlockIntegrity :: - ShelleyCompatible proto era - => Word64 - -> ShelleyBlock proto era -> Bool + ShelleyCompatible proto era => + Word64 -> + ShelleyBlock proto era -> + Bool verifyBlockIntegrity spkp blk = - verifyHeaderIntegrity spkp (shelleyHeaderRaw $ getHeader blk) && - blockMatchesHeader (getHeader blk) blk + verifyHeaderIntegrity spkp (shelleyHeaderRaw $ getHeader blk) + && blockMatchesHeader (getHeader blk) blk diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Ledger.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Ledger.hs index 33e9ef6277..c9aab3df81 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Ledger.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Ledger.hs @@ -19,11 +19,10 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Shelley.Ledger.Ledger ( - LedgerState (..) +module Ouroboros.Consensus.Shelley.Ledger.Ledger + ( LedgerState (..) , LedgerTables (..) , ShelleyBasedEra , ShelleyTip (..) @@ -32,6 +31,7 @@ module Ouroboros.Consensus.Shelley.Ledger.Ledger ( , castShelleyTip , shelleyLedgerTipPoint , shelleyTipToPoint + -- * Ledger config , ShelleyLedgerConfig (..) , ShelleyPartialLedgerConfig (..) @@ -39,97 +39,120 @@ module Ouroboros.Consensus.Shelley.Ledger.Ledger ( , shelleyEraParams , shelleyEraParamsNeverHardForks , shelleyLedgerGenesis + -- * Auxiliary , ShelleyLedgerEvent (..) , ShelleyReapplyException (..) 
, getPParams + -- * Serialisation , decodeShelleyAnnTip , decodeShelleyLedgerState , encodeShelleyAnnTip , encodeShelleyHeaderState , encodeShelleyLedgerState + -- * Low-level UTxO manipulations , slUtxoL ) where -import qualified Cardano.Ledger.BaseTypes as SL (epochInfoPure) -import Cardano.Ledger.BaseTypes.NonZero (unNonZero) -import qualified Cardano.Ledger.BHeaderView as SL (BHeaderView) -import Cardano.Ledger.Binary.Decoding (decShareCBOR, decodeMap, - decodeMemPack, internsFromMap) -import Cardano.Ledger.Binary.Encoding (encodeMap, encodeMemPack, - toPlainEncoding) -import Cardano.Ledger.Binary.Plain (FromCBOR (..), ToCBOR (..), - enforceSize) -import qualified Cardano.Ledger.Block as Core -import Cardano.Ledger.Core (Era, eraDecoder, ppMaxBHSizeL, - ppMaxTxSizeL) -import qualified Cardano.Ledger.Core as Core -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.Governance as SL -import qualified Cardano.Ledger.Shelley.LedgerState as SL -import qualified Cardano.Ledger.UMap as SL -import Cardano.Slotting.EpochInfo -import Codec.CBOR.Decoding (Decoder) -import qualified Codec.CBOR.Decoding as CBOR -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (decode, encode) -import Control.Arrow (left, second) -import qualified Control.Exception as Exception -import Control.Monad.Except -import qualified Control.State.Transition.Extended as STS -import Data.Coerce (coerce) -import Data.Functor.Identity -import Data.MemPack -import qualified Data.Text as T -import qualified Data.Text as Text -import Data.Word -import GHC.Generics (Generic) -import Lens.Micro -import Lens.Micro.Extras (view) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime.WallClock.Types -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import 
qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.HardFork.History.Util -import Ouroboros.Consensus.HardFork.Simple -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.CommonProtocolParams -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Protocol.Ledger.Util (isNewEpoch) -import Ouroboros.Consensus.Shelley.Ledger.Block -import Ouroboros.Consensus.Shelley.Ledger.Config -import Ouroboros.Consensus.Shelley.Ledger.Protocol () -import Ouroboros.Consensus.Shelley.Protocol.Abstract - (EnvelopeCheckError, envelopeChecks, mkHeaderView) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Util.CBOR (decodeWithOrigin, - encodeWithOrigin) -import Ouroboros.Consensus.Util.IndexedMemPack -import Ouroboros.Consensus.Util.Versioned +import Cardano.Ledger.BHeaderView qualified as SL (BHeaderView) +import Cardano.Ledger.BaseTypes qualified as SL (epochInfoPure) +import Cardano.Ledger.BaseTypes.NonZero (unNonZero) +import Cardano.Ledger.Binary.Decoding + ( decShareCBOR + , decodeMap + , decodeMemPack + , internsFromMap + ) +import Cardano.Ledger.Binary.Encoding + ( encodeMap + , encodeMemPack + , toPlainEncoding + ) +import Cardano.Ledger.Binary.Plain + ( FromCBOR (..) + , ToCBOR (..) 
+ , enforceSize + ) +import Cardano.Ledger.Block qualified as Core +import Cardano.Ledger.Core + ( Era + , eraDecoder + , ppMaxBHSizeL + , ppMaxTxSizeL + ) +import Cardano.Ledger.Core qualified as Core +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Governance qualified as SL +import Cardano.Ledger.Shelley.LedgerState qualified as SL +import Cardano.Ledger.UMap qualified as SL +import Cardano.Slotting.EpochInfo +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (decode, encode) +import Control.Arrow (left, second) +import Control.Exception qualified as Exception +import Control.Monad.Except +import Control.State.Transition.Extended qualified as STS +import Data.Coerce (coerce) +import Data.Functor.Identity +import Data.MemPack +import Data.Text qualified as T +import Data.Text qualified as Text +import Data.Word +import GHC.Generics (Generic) +import Lens.Micro +import Lens.Micro.Extras (view) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime.WallClock.Types +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.HardFork.History.Util +import Ouroboros.Consensus.HardFork.Simple +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Protocol.Ledger.Util (isNewEpoch) +import Ouroboros.Consensus.Shelley.Ledger.Block +import Ouroboros.Consensus.Shelley.Ledger.Config +import Ouroboros.Consensus.Shelley.Ledger.Protocol () +import 
Ouroboros.Consensus.Shelley.Protocol.Abstract
+  ( EnvelopeCheckError
+  , envelopeChecks
+  , mkHeaderView
+  )
+import Ouroboros.Consensus.Storage.LedgerDB
+import Ouroboros.Consensus.Util.CBOR
+  ( decodeWithOrigin
+  , encodeWithOrigin
+  )
+import Ouroboros.Consensus.Util.IndexedMemPack
+import Ouroboros.Consensus.Util.Versioned
 
 {-------------------------------------------------------------------------------
   Config
 -------------------------------------------------------------------------------}
 
-data ShelleyLedgerConfig era = ShelleyLedgerConfig {
-      shelleyLedgerCompactGenesis :: !CompactGenesis
-      -- | Derived from 'shelleyLedgerGenesis' but we store a cached version
-      -- because it used very often.
-    , shelleyLedgerGlobals :: !SL.Globals
-    , shelleyLedgerTranslationContext :: !(Core.TranslationContext era)
-    }
-  deriving (Generic)
+data ShelleyLedgerConfig era = ShelleyLedgerConfig
+  { shelleyLedgerCompactGenesis :: !CompactGenesis
+  , shelleyLedgerGlobals :: !SL.Globals
+  -- ^ Derived from 'shelleyLedgerGenesis', but we store a cached version
+  -- because it is used very often.
+  , shelleyLedgerTranslationContext :: !(Core.TranslationContext era)
+  }
+  deriving Generic
 
-deriving instance (NoThunks (Core.TranslationContext era), Era era) =>
-  NoThunks (ShelleyLedgerConfig era)
+deriving instance
+  (NoThunks (Core.TranslationContext era), Era era) =>
+  NoThunks (ShelleyLedgerConfig era)
 
 deriving instance Show (Core.TranslationContext era) => Show (ShelleyLedgerConfig era)
 
@@ -137,173 +160,186 @@ shelleyLedgerGenesis :: ShelleyLedgerConfig era -> SL.ShelleyGenesis
 shelleyLedgerGenesis = getCompactGenesis . 
shelleyLedgerCompactGenesis shelleyEraParams :: - SL.ShelleyGenesis - -> HardFork.EraParams -shelleyEraParams genesis = HardFork.EraParams { - eraEpochSize = SL.sgEpochLength genesis + SL.ShelleyGenesis -> + HardFork.EraParams +shelleyEraParams genesis = + HardFork.EraParams + { eraEpochSize = SL.sgEpochLength genesis , eraSlotLength = mkSlotLength $ SL.fromNominalDiffTimeMicro $ SL.sgSlotLength genesis - , eraSafeZone = HardFork.StandardSafeZone stabilityWindow + , eraSafeZone = HardFork.StandardSafeZone stabilityWindow , eraGenesisWin = GenesisWindow stabilityWindow } - where - stabilityWindow = - SL.computeStabilityWindow - (unNonZero $ SL.sgSecurityParam genesis) - (SL.sgActiveSlotCoeff genesis) + where + stabilityWindow = + SL.computeStabilityWindow + (unNonZero $ SL.sgSecurityParam genesis) + (SL.sgActiveSlotCoeff genesis) -- | Separate variant of 'shelleyEraParams' to be used for a Shelley-only chain. shelleyEraParamsNeverHardForks :: SL.ShelleyGenesis -> HardFork.EraParams -shelleyEraParamsNeverHardForks genesis = HardFork.EraParams { - eraEpochSize = SL.sgEpochLength genesis +shelleyEraParamsNeverHardForks genesis = + HardFork.EraParams + { eraEpochSize = SL.sgEpochLength genesis , eraSlotLength = mkSlotLength $ SL.fromNominalDiffTimeMicro $ SL.sgSlotLength genesis - , eraSafeZone = HardFork.UnsafeIndefiniteSafeZone + , eraSafeZone = HardFork.UnsafeIndefiniteSafeZone , eraGenesisWin = GenesisWindow stabilityWindow } - where - stabilityWindow = - SL.computeStabilityWindow - (unNonZero $ SL.sgSecurityParam genesis) - (SL.sgActiveSlotCoeff genesis) + where + stabilityWindow = + SL.computeStabilityWindow + (unNonZero $ SL.sgSecurityParam genesis) + (SL.sgActiveSlotCoeff genesis) mkShelleyLedgerConfig :: - SL.ShelleyGenesis - -> Core.TranslationContext era - -> EpochInfo (Except HardFork.PastHorizonException) - -> ShelleyLedgerConfig era + SL.ShelleyGenesis -> + Core.TranslationContext era -> + EpochInfo (Except HardFork.PastHorizonException) -> + 
ShelleyLedgerConfig era mkShelleyLedgerConfig genesis transCtxt epochInfo = - ShelleyLedgerConfig { - shelleyLedgerCompactGenesis = compactGenesis genesis - , shelleyLedgerGlobals = - SL.mkShelleyGlobals - genesis - (hoistEpochInfo (left (Text.pack . show) . runExcept) epochInfo) - , shelleyLedgerTranslationContext = transCtxt - } + ShelleyLedgerConfig + { shelleyLedgerCompactGenesis = compactGenesis genesis + , shelleyLedgerGlobals = + SL.mkShelleyGlobals + genesis + (hoistEpochInfo (left (Text.pack . show) . runExcept) epochInfo) + , shelleyLedgerTranslationContext = transCtxt + } type instance LedgerCfg (LedgerState (ShelleyBlock proto era)) = ShelleyLedgerConfig era -data ShelleyPartialLedgerConfig era = ShelleyPartialLedgerConfig { - -- | We cache the non-partial ledger config containing a dummy - -- 'EpochInfo' that needs to be replaced with the correct one. - -- - -- We do this to avoid recomputing the ledger config each time - -- 'completeLedgerConfig' is called, as 'mkShelleyLedgerConfig' does - -- some rather expensive computations that shouldn't be repeated too - -- often (e.g., 'sgActiveSlotCoeff'). - shelleyLedgerConfig :: !(ShelleyLedgerConfig era) - , shelleyTriggerHardFork :: !TriggerHardFork - } - deriving (Generic) +data ShelleyPartialLedgerConfig era = ShelleyPartialLedgerConfig + { shelleyLedgerConfig :: !(ShelleyLedgerConfig era) + -- ^ We cache the non-partial ledger config containing a dummy + -- 'EpochInfo' that needs to be replaced with the correct one. + -- + -- We do this to avoid recomputing the ledger config each time + -- 'completeLedgerConfig' is called, as 'mkShelleyLedgerConfig' does + -- some rather expensive computations that shouldn't be repeated too + -- often (e.g., 'sgActiveSlotCoeff'). 
+ , shelleyTriggerHardFork :: !TriggerHardFork + } + deriving Generic deriving instance Show (ShelleyLedgerConfig era) => Show (ShelleyPartialLedgerConfig era) -deriving instance (NoThunks (Core.TranslationContext era), Core.Era era) => - NoThunks (ShelleyPartialLedgerConfig era) +deriving instance + (NoThunks (Core.TranslationContext era), Core.Era era) => + NoThunks (ShelleyPartialLedgerConfig era) instance ShelleyCompatible proto era => HasPartialLedgerConfig (ShelleyBlock proto era) where type PartialLedgerConfig (ShelleyBlock proto era) = ShelleyPartialLedgerConfig era -- Replace the dummy 'EpochInfo' with the real one completeLedgerConfig _ epochInfo (ShelleyPartialLedgerConfig cfg _) = - cfg { - shelleyLedgerGlobals = (shelleyLedgerGlobals cfg) { - SL.epochInfo = - hoistEpochInfo - (runExcept . withExceptT (T.pack . show)) - epochInfo + cfg + { shelleyLedgerGlobals = + (shelleyLedgerGlobals cfg) + { SL.epochInfo = + hoistEpochInfo + (runExcept . withExceptT (T.pack . show)) + epochInfo } - } + } {------------------------------------------------------------------------------- LedgerState -------------------------------------------------------------------------------} -data ShelleyTip proto era = ShelleyTip { - shelleyTipSlotNo :: !SlotNo - , shelleyTipBlockNo :: !BlockNo - , shelleyTipHash :: !(HeaderHash (ShelleyBlock proto era)) - } +data ShelleyTip proto era = ShelleyTip + { shelleyTipSlotNo :: !SlotNo + , shelleyTipBlockNo :: !BlockNo + , shelleyTipHash :: !(HeaderHash (ShelleyBlock proto era)) + } deriving (Eq, Show, Generic, NoThunks) shelleyTipToPoint :: WithOrigin (ShelleyTip proto era) -> Point (ShelleyBlock proto era) -shelleyTipToPoint Origin = GenesisPoint -shelleyTipToPoint (NotOrigin tip) = BlockPoint (shelleyTipSlotNo tip) - (shelleyTipHash tip) +shelleyTipToPoint Origin = GenesisPoint +shelleyTipToPoint (NotOrigin tip) = + BlockPoint + (shelleyTipSlotNo tip) + (shelleyTipHash tip) castShelleyTip :: ShelleyTip proto era -> ShelleyTip proto' 
era' -castShelleyTip (ShelleyTip sn bn hh) = ShelleyTip { - shelleyTipSlotNo = sn +castShelleyTip (ShelleyTip sn bn hh) = + ShelleyTip + { shelleyTipSlotNo = sn , shelleyTipBlockNo = bn - , shelleyTipHash = coerce hh + , shelleyTipHash = coerce hh } -data instance LedgerState (ShelleyBlock proto era) mk = ShelleyLedgerState { - shelleyLedgerTip :: !(WithOrigin (ShelleyTip proto era)) - , shelleyLedgerState :: !(SL.NewEpochState era) - , shelleyLedgerTransition :: !ShelleyTransition - , shelleyLedgerTables :: !(LedgerTables (LedgerState (ShelleyBlock proto era)) mk) - } - deriving (Generic) - -deriving instance (ShelleyBasedEra era, EqMK mk) - => Eq (LedgerState (ShelleyBlock proto era) mk) -deriving instance (ShelleyBasedEra era, NoThunksMK mk) - => NoThunks (LedgerState (ShelleyBlock proto era) mk) -deriving instance (ShelleyBasedEra era, ShowMK mk) - => Show (LedgerState (ShelleyBlock proto era) mk) +data instance LedgerState (ShelleyBlock proto era) mk = ShelleyLedgerState + { shelleyLedgerTip :: !(WithOrigin (ShelleyTip proto era)) + , shelleyLedgerState :: !(SL.NewEpochState era) + , shelleyLedgerTransition :: !ShelleyTransition + , shelleyLedgerTables :: !(LedgerTables (LedgerState (ShelleyBlock proto era)) mk) + } + deriving Generic + +deriving instance + (ShelleyBasedEra era, EqMK mk) => + Eq (LedgerState (ShelleyBlock proto era) mk) +deriving instance + (ShelleyBasedEra era, NoThunksMK mk) => + NoThunks (LedgerState (ShelleyBlock proto era) mk) +deriving instance + (ShelleyBasedEra era, ShowMK mk) => + Show (LedgerState (ShelleyBlock proto era) mk) -- | Information required to determine the hard fork point from Shelley to the -- next ledger -newtype ShelleyTransition = ShelleyTransitionInfo { - -- | The number of blocks in this epoch past the voting deadline - -- - -- We record this to make sure that we can tell the HFC about hard forks - -- if and only if we are certain: - -- - -- 1. 
Blocks that came in within an epoch after the 4k/f voting deadline - -- are not relevant (10k/f - 2 * 3k/f). - -- 2. Since there are slots between blocks, we are probably only sure that - -- there will be no more relevant block when we have seen the first - -- block after the deadline. - -- 3. If we count how many blocks we have seen post deadline, and we have - -- reached k of them, we know that that last pre-deadline block won't - -- be rolled back anymore. - -- 4. At this point we can look at the ledger state and see if there is - -- a new protocol version update scheduled on the next epoch boundary, - -- and notify the HFC that we need to transition into a new era at that - -- point. - shelleyAfterVoting :: Word32 - } - deriving stock (Eq, Show, Generic) - deriving newtype (NoThunks) +newtype ShelleyTransition = ShelleyTransitionInfo + { shelleyAfterVoting :: Word32 + -- ^ The number of blocks in this epoch past the voting deadline + -- + -- We record this to make sure that we can tell the HFC about hard forks + -- if and only if we are certain: + -- + -- 1. Blocks that came in within an epoch after the 4k/f voting deadline + -- are not relevant (10k/f - 2 * 3k/f). + -- 2. Since there are slots between blocks, we are probably only sure that + -- there will be no more relevant block when we have seen the first + -- block after the deadline. + -- 3. If we count how many blocks we have seen post deadline, and we have + -- reached k of them, we know that that last pre-deadline block won't + -- be rolled back anymore. + -- 4. At this point we can look at the ledger state and see if there is + -- a new protocol version update scheduled on the next epoch boundary, + -- and notify the HFC that we need to transition into a new era at that + -- point. 
+ } + deriving stock (Eq, Show, Generic) + deriving newtype NoThunks shelleyLedgerTipPoint :: - LedgerState (ShelleyBlock proto era) mk - -> Point (ShelleyBlock proto era) + LedgerState (ShelleyBlock proto era) mk -> + Point (ShelleyBlock proto era) shelleyLedgerTipPoint = shelleyTipToPoint . shelleyLedgerTip instance ShelleyCompatible proto era => UpdateLedger (ShelleyBlock proto era) -type instance TxIn (LedgerState (ShelleyBlock proto era)) = SL.TxIn +type instance TxIn (LedgerState (ShelleyBlock proto era)) = SL.TxIn type instance TxOut (LedgerState (ShelleyBlock proto era)) = Core.TxOut era -instance (txout ~ Core.TxOut era, MemPack txout) - => IndexedMemPack (LedgerState (ShelleyBlock proto era) EmptyMK) txout where +instance + (txout ~ Core.TxOut era, MemPack txout) => + IndexedMemPack (LedgerState (ShelleyBlock proto era) EmptyMK) txout + where indexedTypeName _ = typeName @txout indexedPackedByteCount _ = packedByteCount indexedPackM _ = packM indexedUnpackM _ = unpackM - -instance ShelleyCompatible proto era - => SerializeTablesWithHint (LedgerState (ShelleyBlock proto era)) where +instance + ShelleyCompatible proto era => + SerializeTablesWithHint (LedgerState (ShelleyBlock proto era)) + where encodeTablesWithHint _ (LedgerTables (ValuesMK tbs)) = toPlainEncoding (Core.eraProtVerLow @era) $ encodeMap encodeMemPack encodeMemPack tbs decodeTablesWithHint st = - let certInterns = - internsFromMap - $ shelleyLedgerState st - ^. SL.nesEsL + let certInterns = + internsFromMap $ + shelleyLedgerState st + ^. SL.nesEsL . SL.esLStateL . SL.lsCertStateL . SL.certDStateL @@ -311,114 +347,122 @@ instance ShelleyCompatible proto era . SL.umElemsL in LedgerTables . 
ValuesMK <$> (eraDecoder @era $ decodeMap decodeMemPack (decShareCBOR certInterns)) -instance ShelleyBasedEra era - => HasLedgerTables (LedgerState (ShelleyBlock proto era)) where - projectLedgerTables = shelleyLedgerTables +instance + ShelleyBasedEra era => + HasLedgerTables (LedgerState (ShelleyBlock proto era)) + where + projectLedgerTables = shelleyLedgerTables withLedgerTables st tables = - ShelleyLedgerState { - shelleyLedgerTip - , shelleyLedgerState - , shelleyLedgerTransition - , shelleyLedgerTables = tables - } - where - ShelleyLedgerState { - shelleyLedgerTip - , shelleyLedgerState - , shelleyLedgerTransition - } = st + ShelleyLedgerState + { shelleyLedgerTip + , shelleyLedgerState + , shelleyLedgerTransition + , shelleyLedgerTables = tables + } + where + ShelleyLedgerState + { shelleyLedgerTip + , shelleyLedgerState + , shelleyLedgerTransition + } = st -instance ShelleyBasedEra era - => HasLedgerTables (Ticked (LedgerState (ShelleyBlock proto era))) where - projectLedgerTables = castLedgerTables . tickedShelleyLedgerTables +instance + ShelleyBasedEra era => + HasLedgerTables (Ticked (LedgerState (ShelleyBlock proto era))) + where + projectLedgerTables = castLedgerTables . 
tickedShelleyLedgerTables withLedgerTables st tables = - TickedShelleyLedgerState { - untickedShelleyLedgerTip - , tickedShelleyLedgerTransition - , tickedShelleyLedgerState - , tickedShelleyLedgerTables = castLedgerTables tables - } - where - TickedShelleyLedgerState { - untickedShelleyLedgerTip - , tickedShelleyLedgerTransition - , tickedShelleyLedgerState - } = st + TickedShelleyLedgerState + { untickedShelleyLedgerTip + , tickedShelleyLedgerTransition + , tickedShelleyLedgerState + , tickedShelleyLedgerTables = castLedgerTables tables + } + where + TickedShelleyLedgerState + { untickedShelleyLedgerTip + , tickedShelleyLedgerTransition + , tickedShelleyLedgerState + } = st -instance ShelleyBasedEra era - => CanStowLedgerTables (LedgerState (ShelleyBlock proto era)) where +instance + ShelleyBasedEra era => + CanStowLedgerTables (LedgerState (ShelleyBlock proto era)) + where stowLedgerTables st = - ShelleyLedgerState { - shelleyLedgerTip = shelleyLedgerTip - , shelleyLedgerState = shelleyLedgerState' - , shelleyLedgerTransition = shelleyLedgerTransition - , shelleyLedgerTables = emptyLedgerTables - } - where - (_, shelleyLedgerState') = shelleyLedgerState `slUtxoL` SL.UTxO m - ShelleyLedgerState { - shelleyLedgerTip - , shelleyLedgerState - , shelleyLedgerTransition - , shelleyLedgerTables = LedgerTables (ValuesMK m) - } = st + ShelleyLedgerState + { shelleyLedgerTip = shelleyLedgerTip + , shelleyLedgerState = shelleyLedgerState' + , shelleyLedgerTransition = shelleyLedgerTransition + , shelleyLedgerTables = emptyLedgerTables + } + where + (_, shelleyLedgerState') = shelleyLedgerState `slUtxoL` SL.UTxO m + ShelleyLedgerState + { shelleyLedgerTip + , shelleyLedgerState + , shelleyLedgerTransition + , shelleyLedgerTables = LedgerTables (ValuesMK m) + } = st unstowLedgerTables st = - ShelleyLedgerState { - shelleyLedgerTip = shelleyLedgerTip - , shelleyLedgerState = shelleyLedgerState' - , shelleyLedgerTransition = shelleyLedgerTransition - , shelleyLedgerTables = 
LedgerTables (ValuesMK (SL.unUTxO tbs)) - } - where - (tbs, shelleyLedgerState') = shelleyLedgerState `slUtxoL` mempty - ShelleyLedgerState { - shelleyLedgerTip - , shelleyLedgerState - , shelleyLedgerTransition - } = st + ShelleyLedgerState + { shelleyLedgerTip = shelleyLedgerTip + , shelleyLedgerState = shelleyLedgerState' + , shelleyLedgerTransition = shelleyLedgerTransition + , shelleyLedgerTables = LedgerTables (ValuesMK (SL.unUTxO tbs)) + } + where + (tbs, shelleyLedgerState') = shelleyLedgerState `slUtxoL` mempty + ShelleyLedgerState + { shelleyLedgerTip + , shelleyLedgerState + , shelleyLedgerTransition + } = st -instance ShelleyBasedEra era - => CanStowLedgerTables (Ticked (LedgerState (ShelleyBlock proto era))) where +instance + ShelleyBasedEra era => + CanStowLedgerTables (Ticked (LedgerState (ShelleyBlock proto era))) + where stowLedgerTables st = - TickedShelleyLedgerState { - untickedShelleyLedgerTip = untickedShelleyLedgerTip - , tickedShelleyLedgerTransition = tickedShelleyLedgerTransition - , tickedShelleyLedgerState = tickedShelleyLedgerState' - , tickedShelleyLedgerTables = emptyLedgerTables - } - where - (_, tickedShelleyLedgerState') = - tickedShelleyLedgerState `slUtxoL` SL.UTxO tbs - TickedShelleyLedgerState { - untickedShelleyLedgerTip - , tickedShelleyLedgerTransition - , tickedShelleyLedgerState - , tickedShelleyLedgerTables = LedgerTables (ValuesMK tbs) + TickedShelleyLedgerState + { untickedShelleyLedgerTip = untickedShelleyLedgerTip + , tickedShelleyLedgerTransition = tickedShelleyLedgerTransition + , tickedShelleyLedgerState = tickedShelleyLedgerState' + , tickedShelleyLedgerTables = emptyLedgerTables + } + where + (_, tickedShelleyLedgerState') = + tickedShelleyLedgerState `slUtxoL` SL.UTxO tbs + TickedShelleyLedgerState + { untickedShelleyLedgerTip + , tickedShelleyLedgerTransition + , tickedShelleyLedgerState + , tickedShelleyLedgerTables = LedgerTables (ValuesMK tbs) } = st unstowLedgerTables st = - TickedShelleyLedgerState { - 
untickedShelleyLedgerTip = untickedShelleyLedgerTip - , tickedShelleyLedgerTransition = tickedShelleyLedgerTransition - , tickedShelleyLedgerState = tickedShelleyLedgerState' - , tickedShelleyLedgerTables = LedgerTables (ValuesMK (SL.unUTxO tbs)) - } - where - (tbs, tickedShelleyLedgerState') = tickedShelleyLedgerState `slUtxoL` mempty - TickedShelleyLedgerState { - untickedShelleyLedgerTip - , tickedShelleyLedgerTransition - , tickedShelleyLedgerState + TickedShelleyLedgerState + { untickedShelleyLedgerTip = untickedShelleyLedgerTip + , tickedShelleyLedgerTransition = tickedShelleyLedgerTransition + , tickedShelleyLedgerState = tickedShelleyLedgerState' + , tickedShelleyLedgerTables = LedgerTables (ValuesMK (SL.unUTxO tbs)) + } + where + (tbs, tickedShelleyLedgerState') = tickedShelleyLedgerState `slUtxoL` mempty + TickedShelleyLedgerState + { untickedShelleyLedgerTip + , tickedShelleyLedgerTransition + , tickedShelleyLedgerState } = st slUtxoL :: SL.NewEpochState era -> SL.UTxO era -> (SL.UTxO era, SL.NewEpochState era) slUtxoL st vals = st - & SL.nesEsL - . SL.esLStateL - . SL.lsUTxOStateL - . SL.utxoL - <<.~ vals + & SL.nesEsL + . SL.esLStateL + . SL.lsUTxOStateL + . SL.utxoL + <<.~ vals {------------------------------------------------------------------------------- GetTip @@ -435,23 +479,23 @@ instance GetTip (Ticked (LedgerState (ShelleyBlock proto era))) where -------------------------------------------------------------------------------} -- | Ticking only affects the state itself -data instance Ticked (LedgerState (ShelleyBlock proto era)) mk = TickedShelleyLedgerState { - untickedShelleyLedgerTip :: !(WithOrigin (ShelleyTip proto era)) - -- | We are counting blocks within an epoch, this means: - -- - -- 1. We are only incrementing this when /applying/ a block, not when ticking. - -- 2. However, we count within an epoch, which is slot-based. So the count - -- must be reset when /ticking/, not when applying a block. 
- , tickedShelleyLedgerTransition :: !ShelleyTransition - , tickedShelleyLedgerState :: !(SL.NewEpochState era) - , tickedShelleyLedgerTables :: - !(LedgerTables (LedgerState (ShelleyBlock proto era)) mk) - } - deriving (Generic) +data instance Ticked (LedgerState (ShelleyBlock proto era)) mk = TickedShelleyLedgerState + { untickedShelleyLedgerTip :: !(WithOrigin (ShelleyTip proto era)) + , tickedShelleyLedgerTransition :: !ShelleyTransition + -- ^ We are counting blocks within an epoch, this means: + -- + -- 1. We are only incrementing this when /applying/ a block, not when ticking. + -- 2. However, we count within an epoch, which is slot-based. So the count + -- must be reset when /ticking/, not when applying a block. + , tickedShelleyLedgerState :: !(SL.NewEpochState era) + , tickedShelleyLedgerTables :: + !(LedgerTables (LedgerState (ShelleyBlock proto era)) mk) + } + deriving Generic untickedShelleyLedgerTipPoint :: - TickedLedgerState (ShelleyBlock proto era) mk - -> Point (ShelleyBlock proto era) + TickedLedgerState (ShelleyBlock proto era) mk -> + Point (ShelleyBlock proto era) untickedShelleyLedgerTipPoint = shelleyTipToPoint . 
untickedShelleyLedgerTip instance ShelleyBasedEra era => IsLedger (LedgerState (ShelleyBlock proto era)) where @@ -459,26 +503,31 @@ instance ShelleyBasedEra era => IsLedger (LedgerState (ShelleyBlock proto era)) type AuxLedgerEvent (LedgerState (ShelleyBlock proto era)) = ShelleyLedgerEvent era - applyChainTickLedgerResult evs cfg slotNo ShelleyLedgerState{ - shelleyLedgerTip - , shelleyLedgerState - , shelleyLedgerTransition - } = - appTick globals shelleyLedgerState slotNo <&> \l' -> - TickedShelleyLedgerState { - untickedShelleyLedgerTip = shelleyLedgerTip - , tickedShelleyLedgerTransition = - -- The voting resets each epoch - if isNewEpoch ei (shelleyTipSlotNo <$> shelleyLedgerTip) slotNo then - ShelleyTransitionInfo { shelleyAfterVoting = 0 } - else - shelleyLedgerTransition - , tickedShelleyLedgerState = l' - -- The UTxO set is only mutated by block/transaction execution and - -- era translations, that is why we put empty tables here. - , tickedShelleyLedgerTables = emptyLedgerTables - } - where + applyChainTickLedgerResult + evs + cfg + slotNo + ShelleyLedgerState + { shelleyLedgerTip + , shelleyLedgerState + , shelleyLedgerTransition + } = + appTick globals shelleyLedgerState slotNo <&> \l' -> + TickedShelleyLedgerState + { untickedShelleyLedgerTip = shelleyLedgerTip + , tickedShelleyLedgerTransition = + -- The voting resets each epoch + if isNewEpoch ei (shelleyTipSlotNo <$> shelleyLedgerTip) slotNo + then + ShelleyTransitionInfo{shelleyAfterVoting = 0} + else + shelleyLedgerTransition + , tickedShelleyLedgerState = l' + , -- The UTxO set is only mutated by block/transaction execution and + -- era translations, that is why we put empty tables here. 
+ tickedShelleyLedgerTables = emptyLedgerTables + } + where globals = shelleyLedgerGlobals cfg ei :: EpochInfo Identity @@ -487,21 +536,22 @@ instance ShelleyBasedEra era => IsLedger (LedgerState (ShelleyBlock proto era)) appTick = uncurry (flip LedgerResult) ..: case evs of ComputeLedgerEvents -> - second (map ShelleyLedgerEventTICK) ..: - SL.applyTick STS.EPReturn + second (map ShelleyLedgerEventTICK) + ..: SL.applyTick STS.EPReturn OmitLedgerEvents -> (,[]) ..: SL.applyTickNoEvents - -- | All events emitted by the Shelley ledger API -data ShelleyLedgerEvent era = - -- | An event emitted when (re)applying a block +data ShelleyLedgerEvent era + = -- | An event emitted when (re)applying a block ShelleyLedgerEventBBODY (STS.Event (Core.EraRule "BBODY" era)) - -- | An event emitted during the chain tick - | ShelleyLedgerEventTICK (STS.Event (Core.EraRule "TICK" era)) + | -- | An event emitted during the chain tick + ShelleyLedgerEventTICK (STS.Event (Core.EraRule "TICK" era)) -instance ShelleyCompatible proto era - => ApplyBlock (LedgerState (ShelleyBlock proto era)) (ShelleyBlock proto era) where +instance + ShelleyCompatible proto era => + ApplyBlock (LedgerState (ShelleyBlock proto era)) (ShelleyBlock proto era) + where -- Note: in the Shelley ledger, the @CHAIN@ rule is used to apply a whole -- block. 
In consensus, we split up the application of a block to the ledger -- into separate steps that are performed together by 'applyExtLedgerState': @@ -513,18 +563,17 @@ instance ShelleyCompatible proto era -- + 'applyBlockLedgerResult': executes the @BBODY@ transition -- applyBlockLedgerResultWithValidation doValidate evs = - liftEither ..: applyHelper appBlk - where - -- Apply the BBODY transition using the ticked state - appBlk = - fmap (uncurry (flip LedgerResult)) ..: case evs of - ComputeLedgerEvents -> - fmap (second (map ShelleyLedgerEventBBODY)) ..: - SL.applyBlockEither STS.EPReturn doValidate - OmitLedgerEvents -> - fmap (,[]) ..: - SL.applyBlockEitherNoEvents doValidate - + liftEither ..: applyHelper appBlk + where + -- Apply the BBODY transition using the ticked state + appBlk = + fmap (uncurry (flip LedgerResult)) ..: case evs of + ComputeLedgerEvents -> + fmap (second (map ShelleyLedgerEventBBODY)) + ..: SL.applyBlockEither STS.EPReturn doValidate + OmitLedgerEvents -> + fmap (,[]) + ..: SL.applyBlockEitherNoEvents doValidate applyBlockLedgerResult = defaultApplyBlockLedgerResult @@ -532,124 +581,137 @@ instance ShelleyCompatible proto era defaultReapplyBlockLedgerResult (\err -> Exception.throw $! ShelleyReapplyException @era err) getBlockKeySets = - LedgerTables + LedgerTables . KeysMK . Core.neededTxInsForBlock . shelleyBlockRaw -data ShelleyReapplyException = - forall era. Show (SL.BlockTransitionError era) - => ShelleyReapplyException (SL.BlockTransitionError era) +data ShelleyReapplyException + = forall era. + Show (SL.BlockTransitionError era) => + ShelleyReapplyException (SL.BlockTransitionError era) instance Show ShelleyReapplyException where show (ShelleyReapplyException err) = "(ShelleyReapplyException " <> show err <> ")" -instance Exception.Exception ShelleyReapplyException where +instance Exception.Exception ShelleyReapplyException applyHelper :: - forall proto era. 
ShelleyCompatible proto era - => ( SL.Globals - -> SL.NewEpochState era - -> SL.Block SL.BHeaderView era - -> Either - (SL.BlockTransitionError era) - (LedgerResult - (LedgerState (ShelleyBlock proto era)) - (SL.NewEpochState era) - ) - ) - -> LedgerConfig (ShelleyBlock proto era) - -> ShelleyBlock proto era - -> Ticked (LedgerState (ShelleyBlock proto era)) ValuesMK - -> Either - (SL.BlockTransitionError era) - (LedgerResult + forall proto era. + ShelleyCompatible proto era => + ( SL.Globals -> + SL.NewEpochState era -> + SL.Block SL.BHeaderView era -> + Either + (SL.BlockTransitionError era) + ( LedgerResult (LedgerState (ShelleyBlock proto era)) - (LedgerState (ShelleyBlock proto era) DiffMK)) + (SL.NewEpochState era) + ) + ) -> + LedgerConfig (ShelleyBlock proto era) -> + ShelleyBlock proto era -> + Ticked (LedgerState (ShelleyBlock proto era)) ValuesMK -> + Either + (SL.BlockTransitionError era) + ( LedgerResult + (LedgerState (ShelleyBlock proto era)) + (LedgerState (ShelleyBlock proto era) DiffMK) + ) applyHelper f cfg blk stBefore = do - let TickedShelleyLedgerState{ - tickedShelleyLedgerTransition - , tickedShelleyLedgerState - } = stowLedgerTables stBefore - - ledgerResult <- - f - globals - tickedShelleyLedgerState - ( let b = shelleyBlockRaw blk - h' = mkHeaderView (SL.bheader b) - -- Jared Corduan explains that the " Unsafe " here ultimately only - -- means the value must not be serialized. We're only passing it to - -- 'STS.applyBlockOpts', which does not serialize it. So this is a - -- safe use. 
- in SL.UnsafeUnserialisedBlock h' (SL.bbody b) - ) - - let track :: - LedgerState (ShelleyBlock proto era) ValuesMK - -> LedgerState (ShelleyBlock proto era) TrackingMK - track = calculateDifference stBefore - - - return $ ledgerResult <&> \newNewEpochState -> - trackingToDiffs $ track $ unstowLedgerTables $ - ShelleyLedgerState { - shelleyLedgerTip = NotOrigin ShelleyTip { - shelleyTipBlockNo = blockNo blk - , shelleyTipSlotNo = blockSlot blk - , shelleyTipHash = blockHash blk - } - , shelleyLedgerState = - newNewEpochState - , shelleyLedgerTransition = ShelleyTransitionInfo { - shelleyAfterVoting = - -- We count the number of blocks that have been applied after the - -- voting deadline has passed. - (if blockSlot blk >= votingDeadline then succ else id) $ - shelleyAfterVoting tickedShelleyLedgerTransition - } - , shelleyLedgerTables = emptyLedgerTables - } - where - globals = shelleyLedgerGlobals cfg - swindow = SL.stabilityWindow globals - - ei :: EpochInfo Identity - ei = SL.epochInfoPure globals - - -- The start of the next epoch is within the safe zone, always. - startOfNextEpoch :: SlotNo - startOfNextEpoch = runIdentity $ do - blockEpoch <- epochInfoEpoch ei (blockSlot blk) - let nextEpoch = succ blockEpoch - epochInfoFirst ei nextEpoch - - -- The block must come in strictly before the voting deadline - -- See Fig 13, "Protocol Parameter Update Inference Rules", of the - -- Shelley specification. - votingDeadline :: SlotNo - votingDeadline = subSlots (2 * swindow) startOfNextEpoch + let TickedShelleyLedgerState + { tickedShelleyLedgerTransition + , tickedShelleyLedgerState + } = stowLedgerTables stBefore + + ledgerResult <- + f + globals + tickedShelleyLedgerState + ( let b = shelleyBlockRaw blk + h' = mkHeaderView (SL.bheader b) + in -- Jared Corduan explains that the " Unsafe " here ultimately only + -- means the value must not be serialized. We're only passing it to + -- 'STS.applyBlockOpts', which does not serialize it. So this is a + -- safe use. 
+ SL.UnsafeUnserialisedBlock h' (SL.bbody b) + ) + + let track :: + LedgerState (ShelleyBlock proto era) ValuesMK -> + LedgerState (ShelleyBlock proto era) TrackingMK + track = calculateDifference stBefore + + return $ + ledgerResult <&> \newNewEpochState -> + trackingToDiffs $ + track $ + unstowLedgerTables $ + ShelleyLedgerState + { shelleyLedgerTip = + NotOrigin + ShelleyTip + { shelleyTipBlockNo = blockNo blk + , shelleyTipSlotNo = blockSlot blk + , shelleyTipHash = blockHash blk + } + , shelleyLedgerState = + newNewEpochState + , shelleyLedgerTransition = + ShelleyTransitionInfo + { shelleyAfterVoting = + -- We count the number of blocks that have been applied after the + -- voting deadline has passed. + (if blockSlot blk >= votingDeadline then succ else id) $ + shelleyAfterVoting tickedShelleyLedgerTransition + } + , shelleyLedgerTables = emptyLedgerTables + } + where + globals = shelleyLedgerGlobals cfg + swindow = SL.stabilityWindow globals + + ei :: EpochInfo Identity + ei = SL.epochInfoPure globals + + -- The start of the next epoch is within the safe zone, always. + startOfNextEpoch :: SlotNo + startOfNextEpoch = runIdentity $ do + blockEpoch <- epochInfoEpoch ei (blockSlot blk) + let nextEpoch = succ blockEpoch + epochInfoFirst ei nextEpoch + + -- The block must come in strictly before the voting deadline + -- See Fig 13, "Protocol Parameter Update Inference Rules", of the + -- Shelley specification. + votingDeadline :: SlotNo + votingDeadline = subSlots (2 * swindow) startOfNextEpoch instance HasHardForkHistory (ShelleyBlock proto era) where type HardForkIndices (ShelleyBlock proto era) = '[ShelleyBlock proto era] - hardForkSummary = neverForksHardForkSummary $ + hardForkSummary = + neverForksHardForkSummary $ shelleyEraParamsNeverHardForks . 
shelleyLedgerGenesis -instance ShelleyCompatible proto era - => CommonProtocolParams (ShelleyBlock proto era) where +instance + ShelleyCompatible proto era => + CommonProtocolParams (ShelleyBlock proto era) + where maxHeaderSize = fromIntegral . view ppMaxBHSizeL . getPParams . shelleyLedgerState - maxTxSize = view ppMaxTxSizeL . getPParams . shelleyLedgerState + maxTxSize = view ppMaxTxSizeL . getPParams . shelleyLedgerState {------------------------------------------------------------------------------- ValidateEnvelope -------------------------------------------------------------------------------} -instance ShelleyCompatible proto era => BasicEnvelopeValidation (ShelleyBlock proto era) where - -- defaults all OK +instance ShelleyCompatible proto era => BasicEnvelopeValidation (ShelleyBlock proto era) + +-- defaults all OK instance ShelleyCompatible proto era => ValidateEnvelope (ShelleyBlock proto era) where - type OtherHeaderEnvelopeError (ShelleyBlock proto era) = - EnvelopeCheckError proto + type + OtherHeaderEnvelopeError (ShelleyBlock proto era) = + EnvelopeCheckError proto additionalEnvelopeChecks cfg lv hdr = envelopeChecks (configConsensus cfg) lv (shelleyHeaderRaw hdr) @@ -683,78 +745,88 @@ decodeShelleyAnnTip :: Decoder s (AnnTip (ShelleyBlock proto era)) decodeShelleyAnnTip = defaultDecodeAnnTip fromCBOR encodeShelleyHeaderState :: - ShelleyCompatible proto era - => HeaderState (ShelleyBlock proto era) - -> Encoding -encodeShelleyHeaderState = encodeHeaderState + ShelleyCompatible proto era => + HeaderState (ShelleyBlock proto era) -> + Encoding +encodeShelleyHeaderState = + encodeHeaderState encode encodeShelleyAnnTip encodeShelleyTip :: ShelleyTip proto era -> Encoding -encodeShelleyTip ShelleyTip { - shelleyTipSlotNo - , shelleyTipBlockNo - , shelleyTipHash - } = mconcat [ - CBOR.encodeListLen 3 - , encode shelleyTipSlotNo - , encode shelleyTipBlockNo - , encode shelleyTipHash - ] +encodeShelleyTip + ShelleyTip + { shelleyTipSlotNo + , 
shelleyTipBlockNo + , shelleyTipHash + } = + mconcat + [ CBOR.encodeListLen 3 + , encode shelleyTipSlotNo + , encode shelleyTipBlockNo + , encode shelleyTipHash + ] decodeShelleyTip :: Decoder s (ShelleyTip proto era) decodeShelleyTip = do - enforceSize "ShelleyTip" 3 - shelleyTipSlotNo <- decode - shelleyTipBlockNo <- decode - shelleyTipHash <- decode - return ShelleyTip { - shelleyTipSlotNo + enforceSize "ShelleyTip" 3 + shelleyTipSlotNo <- decode + shelleyTipBlockNo <- decode + shelleyTipHash <- decode + return + ShelleyTip + { shelleyTipSlotNo , shelleyTipBlockNo , shelleyTipHash } encodeShelleyTransition :: ShelleyTransition -> Encoding -encodeShelleyTransition ShelleyTransitionInfo{shelleyAfterVoting} = mconcat [ - CBOR.encodeWord32 shelleyAfterVoting +encodeShelleyTransition ShelleyTransitionInfo{shelleyAfterVoting} = + mconcat + [ CBOR.encodeWord32 shelleyAfterVoting ] decodeShelleyTransition :: Decoder s ShelleyTransition decodeShelleyTransition = do - shelleyAfterVoting <- CBOR.decodeWord32 - return ShelleyTransitionInfo{shelleyAfterVoting} + shelleyAfterVoting <- CBOR.decodeWord32 + return ShelleyTransitionInfo{shelleyAfterVoting} encodeShelleyLedgerState :: - ShelleyCompatible proto era - => LedgerState (ShelleyBlock proto era) EmptyMK - -> Encoding + ShelleyCompatible proto era => + LedgerState (ShelleyBlock proto era) EmptyMK -> + Encoding encodeShelleyLedgerState - ShelleyLedgerState { shelleyLedgerTip - , shelleyLedgerState - , shelleyLedgerTransition - } = - encodeVersion serialisationFormatVersion2 $ mconcat [ - CBOR.encodeListLen 3 - , encodeWithOrigin encodeShelleyTip shelleyLedgerTip - , toCBOR shelleyLedgerState - , encodeShelleyTransition shelleyLedgerTransition - ] + ShelleyLedgerState + { shelleyLedgerTip + , shelleyLedgerState + , shelleyLedgerTransition + } = + encodeVersion serialisationFormatVersion2 $ + mconcat + [ CBOR.encodeListLen 3 + , encodeWithOrigin encodeShelleyTip shelleyLedgerTip + , toCBOR shelleyLedgerState + , 
encodeShelleyTransition shelleyLedgerTransition + ] decodeShelleyLedgerState :: - forall era proto s. ShelleyCompatible proto era - => Decoder s (LedgerState (ShelleyBlock proto era) EmptyMK) -decodeShelleyLedgerState = decodeVersion [ - (serialisationFormatVersion2, Decode decodeShelleyLedgerState2) + forall era proto s. + ShelleyCompatible proto era => + Decoder s (LedgerState (ShelleyBlock proto era) EmptyMK) +decodeShelleyLedgerState = + decodeVersion + [ (serialisationFormatVersion2, Decode decodeShelleyLedgerState2) ] - where - decodeShelleyLedgerState2 :: Decoder s' (LedgerState (ShelleyBlock proto era) EmptyMK) - decodeShelleyLedgerState2 = do - enforceSize "LedgerState ShelleyBlock" 3 - shelleyLedgerTip <- decodeWithOrigin decodeShelleyTip - shelleyLedgerState <- fromCBOR - shelleyLedgerTransition <- decodeShelleyTransition - return ShelleyLedgerState { - shelleyLedgerTip + where + decodeShelleyLedgerState2 :: Decoder s' (LedgerState (ShelleyBlock proto era) EmptyMK) + decodeShelleyLedgerState2 = do + enforceSize "LedgerState ShelleyBlock" 3 + shelleyLedgerTip <- decodeWithOrigin decodeShelleyTip + shelleyLedgerState <- fromCBOR + shelleyLedgerTransition <- decodeShelleyTransition + return + ShelleyLedgerState + { shelleyLedgerTip , shelleyLedgerState , shelleyLedgerTransition , shelleyLedgerTables = emptyLedgerTables diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Mempool.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Mempool.hs index 0adcb65b5d..f460e701c9 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Mempool.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Mempool.hs @@ -14,15 +14,14 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Shelley mempool integration -- -- TODO nearly all of the logic in this module 
belongs in cardano-ledger, not -- ouroboros-consensus; ouroboros-consensus-cardano should just be "glue code". -module Ouroboros.Consensus.Shelley.Ledger.Mempool ( - GenTx (..) +module Ouroboros.Consensus.Shelley.Ledger.Mempool + ( GenTx (..) , SL.ApplyTxError (..) , TxId (..) , Validated (..) @@ -30,77 +29,97 @@ module Ouroboros.Consensus.Shelley.Ledger.Mempool ( , mkShelleyTx , mkShelleyValidatedTx , perTxOverhead + -- * Exported for tests , AlonzoMeasure (..) , ConwayMeasure (..) , fromExUnits ) where -import qualified Cardano.Crypto.Hash as Hash -import qualified Cardano.Ledger.Allegra.Rules as AllegraEra -import Cardano.Ledger.Alonzo.Core (Tx, TxSeq, bodyTxL, eraDecoder, - fromTxSeq, ppMaxBBSizeL, ppMaxBlockExUnitsL, sizeTxF) -import qualified Cardano.Ledger.Alonzo.Rules as AlonzoEra -import Cardano.Ledger.Alonzo.Scripts (ExUnits, ExUnits' (..), - pointWiseExUnits, unWrapExUnits) -import Cardano.Ledger.Alonzo.Tx (totExUnits) -import qualified Cardano.Ledger.Api as L -import qualified Cardano.Ledger.Babbage.Rules as BabbageEra -import qualified Cardano.Ledger.BaseTypes as L -import Cardano.Ledger.Binary (Annotator (..), DecCBOR (..), - EncCBOR (..), FromCBOR (..), FullByteString (..), - ToCBOR (..)) -import qualified Cardano.Ledger.Conway.Rules as ConwayEra -import qualified Cardano.Ledger.Conway.Rules as SL -import qualified Cardano.Ledger.Conway.UTxO as SL -import qualified Cardano.Ledger.Core as SL (allInputsTxBodyF, txIdTxBody) -import qualified Cardano.Ledger.Hashes as SL -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.Rules as ShelleyEra -import Cardano.Protocol.Crypto (Crypto) -import Control.Arrow ((+++)) -import Control.Monad (guard) -import Control.Monad.Except (Except, liftEither) -import Control.Monad.Identity (Identity (..)) -import Data.DerivingVia (InstantiatedAt (..)) -import Data.Foldable (toList) -import Data.Measure (Measure) -import Data.Typeable (Typeable) -import qualified Data.Validation as V 
-import GHC.Generics (Generic) -import GHC.Natural (Natural) -import Lens.Micro ((^.)) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger.Block -import Ouroboros.Consensus.Shelley.Ledger.Ledger - (ShelleyLedgerConfig (shelleyLedgerGlobals), - Ticked (TickedShelleyLedgerState, tickedShelleyLedgerState), - getPParams) -import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto) -import Ouroboros.Consensus.Util (ShowProxy (..)) -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Network.Block (unwrapCBORinCBOR, wrapCBORinCBOR) +import Cardano.Crypto.Hash qualified as Hash +import Cardano.Ledger.Allegra.Rules qualified as AllegraEra +import Cardano.Ledger.Alonzo.Core + ( Tx + , TxSeq + , bodyTxL + , eraDecoder + , fromTxSeq + , ppMaxBBSizeL + , ppMaxBlockExUnitsL + , sizeTxF + ) +import Cardano.Ledger.Alonzo.Rules qualified as AlonzoEra +import Cardano.Ledger.Alonzo.Scripts + ( ExUnits + , ExUnits' (..) + , pointWiseExUnits + , unWrapExUnits + ) +import Cardano.Ledger.Alonzo.Tx (totExUnits) +import Cardano.Ledger.Api qualified as L +import Cardano.Ledger.Babbage.Rules qualified as BabbageEra +import Cardano.Ledger.BaseTypes qualified as L +import Cardano.Ledger.Binary + ( Annotator (..) + , DecCBOR (..) + , EncCBOR (..) + , FromCBOR (..) + , FullByteString (..) + , ToCBOR (..) 
+ ) +import Cardano.Ledger.Conway.Rules qualified as ConwayEra +import Cardano.Ledger.Conway.Rules qualified as SL +import Cardano.Ledger.Conway.UTxO qualified as SL +import Cardano.Ledger.Core qualified as SL (allInputsTxBodyF, txIdTxBody) +import Cardano.Ledger.Hashes qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Rules qualified as ShelleyEra +import Cardano.Protocol.Crypto (Crypto) +import Control.Arrow ((+++)) +import Control.Monad (guard) +import Control.Monad.Except (Except, liftEither) +import Control.Monad.Identity (Identity (..)) +import Data.DerivingVia (InstantiatedAt (..)) +import Data.Foldable (toList) +import Data.Measure (Measure) +import Data.Typeable (Typeable) +import Data.Validation qualified as V +import GHC.Generics (Generic) +import GHC.Natural (Natural) +import Lens.Micro ((^.)) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger.Block +import Ouroboros.Consensus.Shelley.Ledger.Ledger + ( ShelleyLedgerConfig (shelleyLedgerGlobals) + , Ticked (TickedShelleyLedgerState, tickedShelleyLedgerState) + , getPParams + ) +import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto) +import Ouroboros.Consensus.Util (ShowProxy (..)) +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Network.Block (unwrapCBORinCBOR, wrapCBORinCBOR) data instance GenTx (ShelleyBlock proto era) = ShelleyTx !SL.TxId !(Tx era) - deriving stock (Generic) + deriving stock Generic deriving instance ShelleyBasedEra era => NoThunks (GenTx (ShelleyBlock proto era)) deriving instance ShelleyBasedEra era => Eq (GenTx (ShelleyBlock proto era)) -instance (Typeable era, Typeable proto) - => ShowProxy (GenTx (ShelleyBlock proto era)) where +instance + (Typeable era, Typeable 
proto) => + ShowProxy (GenTx (ShelleyBlock proto era)) -data instance Validated (GenTx (ShelleyBlock proto era)) = - ShelleyValidatedTx +data instance Validated (GenTx (ShelleyBlock proto era)) + = ShelleyValidatedTx !SL.TxId !(SL.Validated (Tx era)) - deriving stock (Generic) + deriving stock Generic deriving instance ShelleyBasedEra era => NoThunks (Validated (GenTx (ShelleyBlock proto era))) @@ -108,32 +127,32 @@ deriving instance ShelleyBasedEra era => Eq (Validated (GenTx (ShelleyBlock prot deriving instance ShelleyBasedEra era => Show (Validated (GenTx (ShelleyBlock proto era))) -instance (Typeable era, Typeable proto) - => ShowProxy (Validated (GenTx (ShelleyBlock proto era))) where +instance + (Typeable era, Typeable proto) => + ShowProxy (Validated (GenTx (ShelleyBlock proto era))) type instance ApplyTxErr (ShelleyBlock proto era) = SL.ApplyTxError era -- orphaned instance -instance Typeable era => ShowProxy (SL.ApplyTxError era) where - - --- |'txInBlockSize' is used to estimate how many transactions we can grab from --- the Mempool to put into the block we are going to forge without exceeding --- the maximum block body size according to the ledger. If we exceed that --- limit, we will have forged a block that is invalid according to the ledger. --- We ourselves won't even adopt it, causing us to lose our slot, something we --- must try to avoid. +instance Typeable era => ShowProxy (SL.ApplyTxError era) + +-- | 'txInBlockSize' is used to estimate how many transactions we can grab from +-- the Mempool to put into the block we are going to forge without exceeding +-- the maximum block body size according to the ledger. If we exceed that +-- limit, we will have forged a block that is invalid according to the ledger. +-- We ourselves won't even adopt it, causing us to lose our slot, something we +-- must try to avoid. -- --- For this reason it is better to overestimate the size of a transaction than --- to underestimate. 
The only downside is that we maybe could have put one (or --- more?) transactions extra in that block. +-- For this reason it is better to overestimate the size of a transaction than +-- to underestimate. The only downside is that we maybe could have put one (or +-- more?) transactions extra in that block. -- --- As the sum of the serialised transaction sizes is not equal to the size of --- the serialised block body ('TxSeq') consisting of those transactions --- (see cardano-node#1545 for an example), we account for some extra overhead --- per transaction as a safety margin. +-- As the sum of the serialised transaction sizes is not equal to the size of +-- the serialised block body ('TxSeq') consisting of those transactions +-- (see cardano-node#1545 for an example), we account for some extra overhead +-- per transaction as a safety margin. -- --- Also see 'perTxOverhead'. +-- Also see 'perTxOverhead'. fixedBlockBodyOverhead :: Num a => a fixedBlockBodyOverhead = 1024 @@ -141,8 +160,10 @@ fixedBlockBodyOverhead = 1024 perTxOverhead :: Num a => a perTxOverhead = 4 -instance (ShelleyCompatible proto era, TxLimits (ShelleyBlock proto era)) - => LedgerSupportsMempool (ShelleyBlock proto era) where +instance + (ShelleyCompatible proto era, TxLimits (ShelleyBlock proto era)) => + LedgerSupportsMempool (ShelleyBlock proto era) + where txInvariant = const True applyTx = applyShelleyTx @@ -152,48 +173,52 @@ instance (ShelleyCompatible proto era, TxLimits (ShelleyBlock proto era)) txForgetValidated (ShelleyValidatedTx txid vtx) = ShelleyTx txid (SL.extractTx vtx) getTransactionKeySets (ShelleyTx _ tx) = - LedgerTables - $ KeysMK + LedgerTables $ + KeysMK (tx ^. (bodyTxL . SL.allInputsTxBodyF)) mkShelleyTx :: forall era proto. ShelleyBasedEra era => Tx era -> GenTx (ShelleyBlock proto era) mkShelleyTx tx = ShelleyTx (SL.txIdTxBody @era (tx ^. bodyTxL)) tx -mkShelleyValidatedTx :: forall era proto. 
- ShelleyBasedEra era - => SL.Validated (Tx era) - -> Validated (GenTx (ShelleyBlock proto era)) +mkShelleyValidatedTx :: + forall era proto. + ShelleyBasedEra era => + SL.Validated (Tx era) -> + Validated (GenTx (ShelleyBlock proto era)) mkShelleyValidatedTx vtx = ShelleyValidatedTx txid vtx - where - txid = SL.txIdTxBody @era (SL.extractTx vtx ^. bodyTxL) + where + txid = SL.txIdTxBody @era (SL.extractTx vtx ^. bodyTxL) newtype instance TxId (GenTx (ShelleyBlock proto era)) = ShelleyTxId SL.TxId deriving newtype (Eq, Ord, NoThunks) -deriving newtype instance (Typeable era, Typeable proto, Crypto (ProtoCrypto proto)) - => EncCBOR (TxId (GenTx (ShelleyBlock proto era))) -deriving newtype instance (Typeable era, Typeable proto, Crypto (ProtoCrypto proto)) - => DecCBOR (TxId (GenTx (ShelleyBlock proto era))) +deriving newtype instance + (Typeable era, Typeable proto, Crypto (ProtoCrypto proto)) => + EncCBOR (TxId (GenTx (ShelleyBlock proto era))) +deriving newtype instance + (Typeable era, Typeable proto, Crypto (ProtoCrypto proto)) => + DecCBOR (TxId (GenTx (ShelleyBlock proto era))) -instance (Typeable era, Typeable proto) - => ShowProxy (TxId (GenTx (ShelleyBlock proto era))) where +instance + (Typeable era, Typeable proto) => + ShowProxy (TxId (GenTx (ShelleyBlock proto era))) instance ShelleyBasedEra era => HasTxId (GenTx (ShelleyBlock proto era)) where txId (ShelleyTx i _) = ShelleyTxId i instance ShelleyBasedEra era => ConvertRawTxId (GenTx (ShelleyBlock proto era)) where toRawTxIdHash (ShelleyTxId i) = - Hash.hashToBytesShort . SL.extractHash . SL.unTxId $ i + Hash.hashToBytesShort . SL.extractHash . SL.unTxId $ i instance ShelleyBasedEra era => HasTxs (ShelleyBlock proto era) where extractTxs = - map mkShelleyTx + map mkShelleyTx . txSeqToList . SL.bbody . shelleyBlockRaw - where - txSeqToList :: TxSeq era -> [Tx era] - txSeqToList = toList . fromTxSeq @era + where + txSeqToList :: TxSeq era -> [Tx era] + txSeqToList = toList . 
fromTxSeq @era {------------------------------------------------------------------------------- Serialisation @@ -205,15 +230,18 @@ instance ShelleyCompatible proto era => ToCBOR (GenTx (ShelleyBlock proto era)) toCBOR (ShelleyTx _txid tx) = wrapCBORinCBOR toCBOR tx instance ShelleyCompatible proto era => FromCBOR (GenTx (ShelleyBlock proto era)) where - fromCBOR = fmap mkShelleyTx $ unwrapCBORinCBOR - $ eraDecoder @era $ (. Full) . runAnnotator <$> decCBOR + fromCBOR = + fmap mkShelleyTx $ + unwrapCBORinCBOR $ + eraDecoder @era $ + (. Full) . runAnnotator <$> decCBOR {------------------------------------------------------------------------------- Pretty-printing -------------------------------------------------------------------------------} instance ShelleyBasedEra era => Condense (GenTx (ShelleyBlock proto era)) where - condense (ShelleyTx _ tx ) = show tx + condense (ShelleyTx _ tx) = show tx instance Condense (GenTxId (ShelleyBlock proto era)) where condense (ShelleyTxId i) = "txid: " <> show i @@ -228,84 +256,89 @@ instance Show (GenTxId (ShelleyBlock proto era)) where Applying transactions -------------------------------------------------------------------------------} -applyShelleyTx :: forall era proto. - ShelleyBasedEra era - => LedgerConfig (ShelleyBlock proto era) - -> WhetherToIntervene - -> SlotNo - -> GenTx (ShelleyBlock proto era) - -> TickedLedgerState (ShelleyBlock proto era) ValuesMK - -> Except (ApplyTxErr (ShelleyBlock proto era)) - ( TickedLedgerState (ShelleyBlock proto era) DiffMK - , Validated (GenTx (ShelleyBlock proto era)) - ) +applyShelleyTx :: + forall era proto. 
+ ShelleyBasedEra era => + LedgerConfig (ShelleyBlock proto era) -> + WhetherToIntervene -> + SlotNo -> + GenTx (ShelleyBlock proto era) -> + TickedLedgerState (ShelleyBlock proto era) ValuesMK -> + Except + (ApplyTxErr (ShelleyBlock proto era)) + ( TickedLedgerState (ShelleyBlock proto era) DiffMK + , Validated (GenTx (ShelleyBlock proto era)) + ) applyShelleyTx cfg wti slot (ShelleyTx _ tx) st0 = do - let st1 :: TickedLedgerState (ShelleyBlock proto era) EmptyMK - st1 = stowLedgerTables st0 + let st1 :: TickedLedgerState (ShelleyBlock proto era) EmptyMK + st1 = stowLedgerTables st0 - innerSt :: SL.NewEpochState era - innerSt = tickedShelleyLedgerState st1 + innerSt :: SL.NewEpochState era + innerSt = tickedShelleyLedgerState st1 - (mempoolState', vtx) <- - applyShelleyBasedTx - (shelleyLedgerGlobals cfg) - (SL.mkMempoolEnv innerSt slot) - (SL.mkMempoolState innerSt) - wti - tx + (mempoolState', vtx) <- + applyShelleyBasedTx + (shelleyLedgerGlobals cfg) + (SL.mkMempoolEnv innerSt slot) + (SL.mkMempoolState innerSt) + wti + tx - let st' :: TickedLedgerState (ShelleyBlock proto era) DiffMK - st' = trackingToDiffs - $ calculateDifference st0 - $ unstowLedgerTables - $ set theLedgerLens mempoolState' st1 + let st' :: TickedLedgerState (ShelleyBlock proto era) DiffMK + st' = + trackingToDiffs $ + calculateDifference st0 $ + unstowLedgerTables $ + set theLedgerLens mempoolState' st1 - pure (st', mkShelleyValidatedTx vtx) + pure (st', mkShelleyValidatedTx vtx) reapplyShelleyTx :: - ShelleyBasedEra era - => ComputeDiffs - -> LedgerConfig (ShelleyBlock proto era) - -> SlotNo - -> Validated (GenTx (ShelleyBlock proto era)) - -> TickedLedgerState (ShelleyBlock proto era) ValuesMK - -> Except (ApplyTxErr (ShelleyBlock proto era)) (TickedLedgerState (ShelleyBlock proto era) TrackingMK) + ShelleyBasedEra era => + ComputeDiffs -> + LedgerConfig (ShelleyBlock proto era) -> + SlotNo -> + Validated (GenTx (ShelleyBlock proto era)) -> + TickedLedgerState (ShelleyBlock proto era) 
ValuesMK -> + Except (ApplyTxErr (ShelleyBlock proto era)) (TickedLedgerState (ShelleyBlock proto era) TrackingMK) reapplyShelleyTx doDiffs cfg slot vgtx st0 = do - let st1 = stowLedgerTables st0 - innerSt = tickedShelleyLedgerState st1 - - mempoolState' <- - liftEither $ - SL.reapplyTx - (shelleyLedgerGlobals cfg) - (SL.mkMempoolEnv innerSt slot) - (SL.mkMempoolState innerSt) - vtx - - pure $ (case doDiffs of - ComputeDiffs -> calculateDifference st0 - IgnoreDiffs -> attachEmptyDiffs - ) - $ unstowLedgerTables - $ set theLedgerLens mempoolState' st1 - - where - ShelleyValidatedTx _txid vtx = vgtx + let st1 = stowLedgerTables st0 + innerSt = tickedShelleyLedgerState st1 + + mempoolState' <- + liftEither $ + SL.reapplyTx + (shelleyLedgerGlobals cfg) + (SL.mkMempoolEnv innerSt slot) + (SL.mkMempoolState innerSt) + vtx + + pure + $ ( case doDiffs of + ComputeDiffs -> calculateDifference st0 + IgnoreDiffs -> attachEmptyDiffs + ) + $ unstowLedgerTables + $ set theLedgerLens mempoolState' st1 + where + ShelleyValidatedTx _txid vtx = vgtx -- | The lens combinator set :: - (forall f. Applicative f => (a -> f b) -> s -> f t) - -> b -> s -> t + (forall f. 
Applicative f => (a -> f b) -> s -> f t) -> + b -> + s -> + t set lens inner outer = - runIdentity $ lens (\_ -> Identity inner) outer + runIdentity $ lens (\_ -> Identity inner) outer theLedgerLens :: - Functor f - => (SL.LedgerState era -> f (SL.LedgerState era)) - -> TickedLedgerState (ShelleyBlock proto era) mk - -> f (TickedLedgerState (ShelleyBlock proto era) mk) + Functor f => + (SL.LedgerState era -> f (SL.LedgerState era)) -> + TickedLedgerState (ShelleyBlock proto era) mk -> + f (TickedLedgerState (ShelleyBlock proto era) mk) theLedgerLens f x = - (\y -> x{tickedShelleyLedgerState = y}) + (\y -> x{tickedShelleyLedgerState = y}) <$> SL.overNewEpochState f (tickedShelleyLedgerState x) {------------------------------------------------------------------------------- @@ -313,142 +346,156 @@ theLedgerLens f x = -------------------------------------------------------------------------------} -- | A non-exported newtype wrapper just to give a 'Semigroup' instance -newtype TxErrorSG era = TxErrorSG { unTxErrorSG :: SL.ApplyTxError era } +newtype TxErrorSG era = TxErrorSG {unTxErrorSG :: SL.ApplyTxError era} instance Semigroup (TxErrorSG era) where TxErrorSG (SL.ApplyTxError x) <> TxErrorSG (SL.ApplyTxError y) = TxErrorSG (SL.ApplyTxError (x <> y)) validateMaybe :: - SL.ApplyTxError era - -> Maybe a - -> V.Validation (TxErrorSG era) a + SL.ApplyTxError era -> + Maybe a -> + V.Validation (TxErrorSG era) a validateMaybe err mb = V.validate (TxErrorSG err) id mb runValidation :: - V.Validation (TxErrorSG era) a - -> Except (SL.ApplyTxError era) a + V.Validation (TxErrorSG era) a -> + Except (SL.ApplyTxError era) a runValidation = liftEither . (unTxErrorSG +++ id) . 
V.toEither ----- txsMaxBytes :: - ShelleyCompatible proto era - => TickedLedgerState (ShelleyBlock proto era) mk - -> IgnoringOverflow ByteSize32 -txsMaxBytes TickedShelleyLedgerState { tickedShelleyLedgerState } = - -- `maxBlockBodySize` is expected to be bigger than `fixedBlockBodyOverhead` - IgnoringOverflow - $ ByteSize32 - $ maxBlockBodySize - fixedBlockBodyOverhead - where - maxBlockBodySize = getPParams tickedShelleyLedgerState ^. ppMaxBBSizeL + ShelleyCompatible proto era => + TickedLedgerState (ShelleyBlock proto era) mk -> + IgnoringOverflow ByteSize32 +txsMaxBytes TickedShelleyLedgerState{tickedShelleyLedgerState} = + -- `maxBlockBodySize` is expected to be bigger than `fixedBlockBodyOverhead` + IgnoringOverflow $ + ByteSize32 $ + maxBlockBodySize - fixedBlockBodyOverhead + where + maxBlockBodySize = getPParams tickedShelleyLedgerState ^. ppMaxBBSizeL txInBlockSize :: - (ShelleyCompatible proto era, MaxTxSizeUTxO era) - => TickedLedgerState (ShelleyBlock proto era) mk - -> GenTx (ShelleyBlock proto era) - -> V.Validation (TxErrorSG era) (IgnoringOverflow ByteSize32) + (ShelleyCompatible proto era, MaxTxSizeUTxO era) => + TickedLedgerState (ShelleyBlock proto era) mk -> + GenTx (ShelleyBlock proto era) -> + V.Validation (TxErrorSG era) (IgnoringOverflow ByteSize32) txInBlockSize st (ShelleyTx _txid tx') = - validateMaybe (maxTxSizeUTxO txsz limit) $ do - guard $ txsz <= limit - Just $ IgnoringOverflow $ ByteSize32 $ fromIntegral txsz + perTxOverhead - where - txsz = tx' ^. sizeTxF + validateMaybe (maxTxSizeUTxO txsz limit) $ do + guard $ txsz <= limit + Just $ IgnoringOverflow $ ByteSize32 $ fromIntegral txsz + perTxOverhead + where + txsz = tx' ^. sizeTxF - pparams = getPParams $ tickedShelleyLedgerState st - limit = fromIntegral (pparams ^. L.ppMaxTxSizeL) :: Integer + pparams = getPParams $ tickedShelleyLedgerState st + limit = fromIntegral (pparams ^. 
L.ppMaxTxSizeL) :: Integer class MaxTxSizeUTxO era where maxTxSizeUTxO :: - Integer - -- ^ Actual transaction size - -> Integer - -- ^ Maximum transaction size - -> SL.ApplyTxError era + -- | Actual transaction size + Integer -> + -- | Maximum transaction size + Integer -> + SL.ApplyTxError era instance MaxTxSizeUTxO ShelleyEra where maxTxSizeUTxO txSize txSizeLimit = - SL.ApplyTxError . pure - $ ShelleyEra.UtxowFailure - $ ShelleyEra.UtxoFailure - $ ShelleyEra.MaxTxSizeUTxO - $ L.Mismatch { mismatchSupplied = txSize - , mismatchExpected = txSizeLimit } + SL.ApplyTxError . pure $ + ShelleyEra.UtxowFailure $ + ShelleyEra.UtxoFailure $ + ShelleyEra.MaxTxSizeUTxO $ + L.Mismatch + { mismatchSupplied = txSize + , mismatchExpected = txSizeLimit + } instance MaxTxSizeUTxO AllegraEra where maxTxSizeUTxO txSize txSizeLimit = - SL.ApplyTxError . pure - $ ShelleyEra.UtxowFailure - $ ShelleyEra.UtxoFailure - $ AllegraEra.MaxTxSizeUTxO - $ L.Mismatch { mismatchSupplied = txSize - , mismatchExpected = txSizeLimit } + SL.ApplyTxError . pure $ + ShelleyEra.UtxowFailure $ + ShelleyEra.UtxoFailure $ + AllegraEra.MaxTxSizeUTxO $ + L.Mismatch + { mismatchSupplied = txSize + , mismatchExpected = txSizeLimit + } instance MaxTxSizeUTxO MaryEra where maxTxSizeUTxO txSize txSizeLimit = - SL.ApplyTxError . pure - $ ShelleyEra.UtxowFailure - $ ShelleyEra.UtxoFailure - $ AllegraEra.MaxTxSizeUTxO - $ L.Mismatch { mismatchSupplied = txSize - , mismatchExpected = txSizeLimit } + SL.ApplyTxError . pure $ + ShelleyEra.UtxowFailure $ + ShelleyEra.UtxoFailure $ + AllegraEra.MaxTxSizeUTxO $ + L.Mismatch + { mismatchSupplied = txSize + , mismatchExpected = txSizeLimit + } instance MaxTxSizeUTxO AlonzoEra where maxTxSizeUTxO txSize txSizeLimit = - SL.ApplyTxError . pure - $ ShelleyEra.UtxowFailure - $ AlonzoEra.ShelleyInAlonzoUtxowPredFailure - $ ShelleyEra.UtxoFailure - $ AlonzoEra.MaxTxSizeUTxO - $ L.Mismatch { mismatchSupplied = txSize - , mismatchExpected = txSizeLimit } + SL.ApplyTxError . 
pure $ + ShelleyEra.UtxowFailure $ + AlonzoEra.ShelleyInAlonzoUtxowPredFailure $ + ShelleyEra.UtxoFailure $ + AlonzoEra.MaxTxSizeUTxO $ + L.Mismatch + { mismatchSupplied = txSize + , mismatchExpected = txSizeLimit + } instance MaxTxSizeUTxO BabbageEra where maxTxSizeUTxO txSize txSizeLimit = - SL.ApplyTxError . pure - $ ShelleyEra.UtxowFailure - $ BabbageEra.UtxoFailure - $ BabbageEra.AlonzoInBabbageUtxoPredFailure - $ AlonzoEra.MaxTxSizeUTxO - $ L.Mismatch { mismatchSupplied = txSize - , mismatchExpected = txSizeLimit } + SL.ApplyTxError . pure $ + ShelleyEra.UtxowFailure $ + BabbageEra.UtxoFailure $ + BabbageEra.AlonzoInBabbageUtxoPredFailure $ + AlonzoEra.MaxTxSizeUTxO $ + L.Mismatch + { mismatchSupplied = txSize + , mismatchExpected = txSizeLimit + } instance MaxTxSizeUTxO ConwayEra where maxTxSizeUTxO txSize txSizeLimit = - SL.ApplyTxError . pure - $ ConwayEra.ConwayUtxowFailure - $ ConwayEra.UtxoFailure - $ ConwayEra.MaxTxSizeUTxO - $ L.Mismatch { mismatchSupplied = txSize - , mismatchExpected = txSizeLimit } + SL.ApplyTxError . 
pure $ + ConwayEra.ConwayUtxowFailure $ + ConwayEra.UtxoFailure $ + ConwayEra.MaxTxSizeUTxO $ + L.Mismatch + { mismatchSupplied = txSize + , mismatchExpected = txSizeLimit + } ----- instance ShelleyCompatible p ShelleyEra => TxLimits (ShelleyBlock p ShelleyEra) where type TxMeasure (ShelleyBlock p ShelleyEra) = IgnoringOverflow ByteSize32 - txMeasure _cfg st tx = runValidation $ txInBlockSize st tx - blockCapacityTxMeasure _cfg = txsMaxBytes + txMeasure _cfg st tx = runValidation $ txInBlockSize st tx + blockCapacityTxMeasure _cfg = txsMaxBytes instance ShelleyCompatible p AllegraEra => TxLimits (ShelleyBlock p AllegraEra) where type TxMeasure (ShelleyBlock p AllegraEra) = IgnoringOverflow ByteSize32 - txMeasure _cfg st tx = runValidation $ txInBlockSize st tx - blockCapacityTxMeasure _cfg = txsMaxBytes + txMeasure _cfg st tx = runValidation $ txInBlockSize st tx + blockCapacityTxMeasure _cfg = txsMaxBytes instance ShelleyCompatible p MaryEra => TxLimits (ShelleyBlock p MaryEra) where type TxMeasure (ShelleyBlock p MaryEra) = IgnoringOverflow ByteSize32 - txMeasure _cfg st tx = runValidation $ txInBlockSize st tx - blockCapacityTxMeasure _cfg = txsMaxBytes + txMeasure _cfg st tx = runValidation $ txInBlockSize st tx + blockCapacityTxMeasure _cfg = txsMaxBytes ----- -data AlonzoMeasure = AlonzoMeasure { - byteSize :: !(IgnoringOverflow ByteSize32) - , exUnits :: !(ExUnits' Natural) - } deriving stock (Eq, Generic, Show) - deriving anyclass (NoThunks) - deriving (Measure) - via (InstantiatedAt Generic AlonzoMeasure) +data AlonzoMeasure = AlonzoMeasure + { byteSize :: !(IgnoringOverflow ByteSize32) + , exUnits :: !(ExUnits' Natural) + } + deriving stock (Eq, Generic, Show) + deriving anyclass NoThunks + deriving + Measure + via (InstantiatedAt Generic AlonzoMeasure) instance HasByteSize AlonzoMeasure where txMeasureByteSize = unIgnoringOverflow . 
byteSize @@ -471,94 +518,103 @@ fromExUnits :: ExUnits -> ExUnits' Natural fromExUnits = unWrapExUnits blockCapacityAlonzoMeasure :: - forall proto era mk. - (ShelleyCompatible proto era, L.AlonzoEraPParams era) - => TickedLedgerState (ShelleyBlock proto era) mk - -> AlonzoMeasure + forall proto era mk. + (ShelleyCompatible proto era, L.AlonzoEraPParams era) => + TickedLedgerState (ShelleyBlock proto era) mk -> + AlonzoMeasure blockCapacityAlonzoMeasure ledgerState = - AlonzoMeasure { - byteSize = txsMaxBytes ledgerState - , exUnits = fromExUnits $ pparams ^. ppMaxBlockExUnitsL - } - where - pparams = getPParams $ tickedShelleyLedgerState ledgerState + AlonzoMeasure + { byteSize = txsMaxBytes ledgerState + , exUnits = fromExUnits $ pparams ^. ppMaxBlockExUnitsL + } + where + pparams = getPParams $ tickedShelleyLedgerState ledgerState txMeasureAlonzo :: - forall proto era. - ( ShelleyCompatible proto era - , L.AlonzoEraPParams era - , L.AlonzoEraTxWits era - , ExUnitsTooBigUTxO era - , MaxTxSizeUTxO era - ) - => TickedLedgerState (ShelleyBlock proto era) ValuesMK - -> GenTx (ShelleyBlock proto era) - -> V.Validation (TxErrorSG era) AlonzoMeasure + forall proto era. + ( ShelleyCompatible proto era + , L.AlonzoEraPParams era + , L.AlonzoEraTxWits era + , ExUnitsTooBigUTxO era + , MaxTxSizeUTxO era + ) => + TickedLedgerState (ShelleyBlock proto era) ValuesMK -> + GenTx (ShelleyBlock proto era) -> + V.Validation (TxErrorSG era) AlonzoMeasure txMeasureAlonzo st tx@(ShelleyTx _txid tx') = - AlonzoMeasure <$> txInBlockSize st tx <*> exunits - where - txsz = totExUnits tx' + AlonzoMeasure <$> txInBlockSize st tx <*> exunits + where + txsz = totExUnits tx' - pparams = getPParams $ tickedShelleyLedgerState st - limit = pparams ^. L.ppMaxTxExUnitsL + pparams = getPParams $ tickedShelleyLedgerState st + limit = pparams ^. 
L.ppMaxTxExUnitsL - exunits = - validateMaybe (exUnitsTooBigUTxO txsz limit) $ do - guard $ pointWiseExUnits (<=) txsz limit - Just $ fromExUnits txsz + exunits = + validateMaybe (exUnitsTooBigUTxO txsz limit) $ do + guard $ pointWiseExUnits (<=) txsz limit + Just $ fromExUnits txsz class ExUnitsTooBigUTxO era where exUnitsTooBigUTxO :: ExUnits -> ExUnits -> SL.ApplyTxError era instance ExUnitsTooBigUTxO AlonzoEra where exUnitsTooBigUTxO txsz limit = - SL.ApplyTxError . pure - $ ShelleyEra.UtxowFailure - $ AlonzoEra.ShelleyInAlonzoUtxowPredFailure - $ ShelleyEra.UtxoFailure - $ AlonzoEra.ExUnitsTooBigUTxO - $ L.Mismatch { mismatchSupplied = txsz - , mismatchExpected = limit } + SL.ApplyTxError . pure $ + ShelleyEra.UtxowFailure $ + AlonzoEra.ShelleyInAlonzoUtxowPredFailure $ + ShelleyEra.UtxoFailure $ + AlonzoEra.ExUnitsTooBigUTxO $ + L.Mismatch + { mismatchSupplied = txsz + , mismatchExpected = limit + } instance ExUnitsTooBigUTxO BabbageEra where exUnitsTooBigUTxO txsz limit = - SL.ApplyTxError . pure - $ ShelleyEra.UtxowFailure - $ BabbageEra.AlonzoInBabbageUtxowPredFailure - $ AlonzoEra.ShelleyInAlonzoUtxowPredFailure - $ ShelleyEra.UtxoFailure - $ BabbageEra.AlonzoInBabbageUtxoPredFailure - $ AlonzoEra.ExUnitsTooBigUTxO - $ L.Mismatch { mismatchSupplied = txsz - , mismatchExpected = limit } + SL.ApplyTxError . pure $ + ShelleyEra.UtxowFailure $ + BabbageEra.AlonzoInBabbageUtxowPredFailure $ + AlonzoEra.ShelleyInAlonzoUtxowPredFailure $ + ShelleyEra.UtxoFailure $ + BabbageEra.AlonzoInBabbageUtxoPredFailure $ + AlonzoEra.ExUnitsTooBigUTxO $ + L.Mismatch + { mismatchSupplied = txsz + , mismatchExpected = limit + } instance ExUnitsTooBigUTxO ConwayEra where exUnitsTooBigUTxO txsz limit = - SL.ApplyTxError . pure - $ ConwayEra.ConwayUtxowFailure - $ ConwayEra.UtxoFailure - $ ConwayEra.ExUnitsTooBigUTxO - $ L.Mismatch { mismatchSupplied = txsz - , mismatchExpected = limit } + SL.ApplyTxError . 
pure $ + ConwayEra.ConwayUtxowFailure $ + ConwayEra.UtxoFailure $ + ConwayEra.ExUnitsTooBigUTxO $ + L.Mismatch + { mismatchSupplied = txsz + , mismatchExpected = limit + } ----- -instance ( ShelleyCompatible p AlonzoEra - ) => TxLimits (ShelleyBlock p AlonzoEra) where - +instance + ShelleyCompatible p AlonzoEra => + TxLimits (ShelleyBlock p AlonzoEra) + where type TxMeasure (ShelleyBlock p AlonzoEra) = AlonzoMeasure - txMeasure _cfg st tx = runValidation $ txMeasureAlonzo st tx - blockCapacityTxMeasure _cfg = blockCapacityAlonzoMeasure + txMeasure _cfg st tx = runValidation $ txMeasureAlonzo st tx + blockCapacityTxMeasure _cfg = blockCapacityAlonzoMeasure ----- -data ConwayMeasure = ConwayMeasure { - alonzoMeasure :: !AlonzoMeasure +data ConwayMeasure = ConwayMeasure + { alonzoMeasure :: !AlonzoMeasure , refScriptsSize :: !(IgnoringOverflow ByteSize32) - } deriving stock (Eq, Generic, Show) - deriving anyclass (NoThunks) - deriving (Measure) - via (InstantiatedAt Generic ConwayMeasure) + } + deriving stock (Eq, Generic, Show) + deriving anyclass NoThunks + deriving + Measure + via (InstantiatedAt Generic ConwayMeasure) instance Semigroup ConwayMeasure where ConwayMeasure a1 r1 <> ConwayMeasure a2 r2 = @@ -579,94 +635,101 @@ instance TxMeasureMetrics ConwayMeasure where unIgnoringOverflow . refScriptsSize blockCapacityConwayMeasure :: - forall proto era mk. - ( ShelleyCompatible proto era - , L.AlonzoEraPParams era - ) - => TickedLedgerState (ShelleyBlock proto era) mk - -> ConwayMeasure + forall proto era mk. + ( ShelleyCompatible proto era + , L.AlonzoEraPParams era + ) => + TickedLedgerState (ShelleyBlock proto era) mk -> + ConwayMeasure blockCapacityConwayMeasure st = - ConwayMeasure { - alonzoMeasure = blockCapacityAlonzoMeasure st - , refScriptsSize = IgnoringOverflow $ ByteSize32 $ fromIntegral $ - -- For post-Conway eras, this will become a protocol parameter. 
- SL.maxRefScriptSizePerBlock - } + ConwayMeasure + { alonzoMeasure = blockCapacityAlonzoMeasure st + , refScriptsSize = + IgnoringOverflow $ + ByteSize32 $ + fromIntegral $ + -- For post-Conway eras, this will become a protocol parameter. + SL.maxRefScriptSizePerBlock + } txMeasureConway :: - forall proto era. - ( ShelleyCompatible proto era - , L.AlonzoEraTxWits era - , L.BabbageEraTxBody era - , ExUnitsTooBigUTxO era - , MaxTxSizeUTxO era - , TxRefScriptsSizeTooBig era - ) - => TickedLedgerState (ShelleyBlock proto era) ValuesMK - -> GenTx (ShelleyBlock proto era) - -> V.Validation (TxErrorSG era) ConwayMeasure + forall proto era. + ( ShelleyCompatible proto era + , L.AlonzoEraTxWits era + , L.BabbageEraTxBody era + , ExUnitsTooBigUTxO era + , MaxTxSizeUTxO era + , TxRefScriptsSizeTooBig era + ) => + TickedLedgerState (ShelleyBlock proto era) ValuesMK -> + GenTx (ShelleyBlock proto era) -> + V.Validation (TxErrorSG era) ConwayMeasure txMeasureConway st tx@(ShelleyTx _txid tx') = - ConwayMeasure <$> txMeasureAlonzo st tx <*> refScriptBytes - where - utxo = SL.getUTxO . tickedShelleyLedgerState $ st - txsz = SL.txNonDistinctRefScriptsSize utxo tx' :: Int + ConwayMeasure <$> txMeasureAlonzo st tx <*> refScriptBytes + where + utxo = SL.getUTxO . tickedShelleyLedgerState $ st + txsz = SL.txNonDistinctRefScriptsSize utxo tx' :: Int - -- For post-Conway eras, this will become a protocol parameter. - limit = SL.maxRefScriptSizePerTx + -- For post-Conway eras, this will become a protocol parameter. 
+ limit = SL.maxRefScriptSizePerTx - refScriptBytes = - validateMaybe (txRefScriptsSizeTooBig txsz limit) $ do - guard $ txsz <= limit - Just $ IgnoringOverflow $ ByteSize32 $ fromIntegral txsz + refScriptBytes = + validateMaybe (txRefScriptsSizeTooBig txsz limit) $ do + guard $ txsz <= limit + Just $ IgnoringOverflow $ ByteSize32 $ fromIntegral txsz class TxRefScriptsSizeTooBig era where txRefScriptsSizeTooBig :: Int -> Int -> SL.ApplyTxError era instance TxRefScriptsSizeTooBig ConwayEra where txRefScriptsSizeTooBig txsz limit = - SL.ApplyTxError . pure - $ ConwayEra.ConwayTxRefScriptsSizeTooBig - $ L.Mismatch { mismatchSupplied = txsz - , mismatchExpected = limit } + SL.ApplyTxError . pure $ + ConwayEra.ConwayTxRefScriptsSizeTooBig $ + L.Mismatch + { mismatchSupplied = txsz + , mismatchExpected = limit + } ----- txMeasureBabbage :: - forall proto era. - ( ShelleyCompatible proto era - , L.AlonzoEraTxWits era - , L.BabbageEraTxBody era - , ExUnitsTooBigUTxO era - , MaxTxSizeUTxO era - ) - => TickedLedgerState (ShelleyBlock proto era) ValuesMK - -> GenTx (ShelleyBlock proto era) - -> V.Validation (TxErrorSG era) ConwayMeasure + forall proto era. + ( ShelleyCompatible proto era + , L.AlonzoEraTxWits era + , L.BabbageEraTxBody era + , ExUnitsTooBigUTxO era + , MaxTxSizeUTxO era + ) => + TickedLedgerState (ShelleyBlock proto era) ValuesMK -> + GenTx (ShelleyBlock proto era) -> + V.Validation (TxErrorSG era) ConwayMeasure txMeasureBabbage st tx@(ShelleyTx _txid tx') = - (\x -> ConwayMeasure x refScriptBytes) <$> txMeasureAlonzo st tx - where - utxo = SL.getUTxO $ tickedShelleyLedgerState st - - -- The Babbage rules should have checked this ref script size against a - -- limit, but they did not. Now that Cardano @mainnet@ is in Conway, that - -- omission is no longer an attack vector. Any other chain intending to - -- ever use Babbage as its current era ought to patch this. 
- refScriptBytes = - IgnoringOverflow - $ ByteSize32 - $ fromIntegral (SL.txNonDistinctRefScriptsSize utxo tx' :: Int) + (\x -> ConwayMeasure x refScriptBytes) <$> txMeasureAlonzo st tx + where + utxo = SL.getUTxO $ tickedShelleyLedgerState st + + -- The Babbage rules should have checked this ref script size against a + -- limit, but they did not. Now that Cardano @mainnet@ is in Conway, that + -- omission is no longer an attack vector. Any other chain intending to + -- ever use Babbage as its current era ought to patch this. + refScriptBytes = + IgnoringOverflow $ + ByteSize32 $ + fromIntegral (SL.txNonDistinctRefScriptsSize utxo tx' :: Int) -- | We anachronistically use 'ConwayMeasure' in Babbage. -instance ( ShelleyCompatible p BabbageEra - ) => TxLimits (ShelleyBlock p BabbageEra) where - +instance + ShelleyCompatible p BabbageEra => + TxLimits (ShelleyBlock p BabbageEra) + where type TxMeasure (ShelleyBlock p BabbageEra) = ConwayMeasure - txMeasure _cfg st tx = runValidation $ txMeasureBabbage st tx - blockCapacityTxMeasure _cfg = blockCapacityConwayMeasure - -instance ( ShelleyCompatible p ConwayEra - ) => TxLimits (ShelleyBlock p ConwayEra) where + txMeasure _cfg st tx = runValidation $ txMeasureBabbage st tx + blockCapacityTxMeasure _cfg = blockCapacityConwayMeasure +instance + ShelleyCompatible p ConwayEra => + TxLimits (ShelleyBlock p ConwayEra) + where type TxMeasure (ShelleyBlock p ConwayEra) = ConwayMeasure - txMeasure _cfg st tx = runValidation $ txMeasureConway st tx - blockCapacityTxMeasure _cfg = blockCapacityConwayMeasure + txMeasure _cfg st tx = runValidation $ txMeasureConway st tx + blockCapacityTxMeasure _cfg = blockCapacityConwayMeasure diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/NetworkProtocolVersion.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/NetworkProtocolVersion.hs index 6c3d38e2a0..578468c2f7 100644 --- 
a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/NetworkProtocolVersion.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/NetworkProtocolVersion.hs
@@ -1,48 +1,45 @@
 {-# LANGUAGE TypeFamilies #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}
 
-module Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion (
-    ShelleyNodeToClientVersion (..)
+module Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion
+  ( ShelleyNodeToClientVersion (..)
   , ShelleyNodeToNodeVersion (..)
   ) where
 
-import qualified Data.Map.Strict as Map
-import           Ouroboros.Consensus.Node.NetworkProtocolVersion
-import           Ouroboros.Consensus.Shelley.Ledger.Block
+import Data.Map.Strict qualified as Map
+import Ouroboros.Consensus.Node.NetworkProtocolVersion
+import Ouroboros.Consensus.Shelley.Ledger.Block
 
 data ShelleyNodeToNodeVersion = ShelleyNodeToNodeVersion1
   deriving (Show, Eq, Ord, Enum, Bounded)
 
-data ShelleyNodeToClientVersion =
-    -- | New queries introduced: GetConstitutionHash, GetFilteredVoteDelegatees
+data ShelleyNodeToClientVersion
+  = -- | New queries introduced: GetConstitutionHash, GetFilteredVoteDelegatees
     ShelleyNodeToClientVersion8
-
-    -- | New queries introduced: GetProposals, GetRatifyState
-  | ShelleyNodeToClientVersion9
-
-    -- | New queries introduced: GetFuturePParams
-  | ShelleyNodeToClientVersion10
-
-    -- | New queries introduced: GetBigLedgerPeerSnapshot
-  | ShelleyNodeToClientVersion11
-
-    -- | New queries introduced: QueryStakePoolDefaultVote
+  | -- | New queries introduced: GetProposals, GetRatifyState
+    ShelleyNodeToClientVersion9
+  | -- | New queries introduced: GetFuturePParams
    ShelleyNodeToClientVersion10
+  | -- | New queries introduced: GetBigLedgerPeerSnapshot
+    ShelleyNodeToClientVersion11
+  | -- | New queries introduced: QueryStakePoolDefaultVote
     -- Queries deprecated: GetProposedPParamsUpdates
-  | ShelleyNodeToClientVersion12
+    ShelleyNodeToClientVersion12
   deriving (Show, Eq, Ord, Enum, Bounded)
 
 instance HasNetworkProtocolVersion (ShelleyBlock proto era) where
-  type BlockNodeToNodeVersion   (ShelleyBlock proto era) = ShelleyNodeToNodeVersion
+  type BlockNodeToNodeVersion (ShelleyBlock proto era) = ShelleyNodeToNodeVersion
   type BlockNodeToClientVersion (ShelleyBlock proto era) = ShelleyNodeToClientVersion
 
 -- TODO #2668 make this era-specific
 instance SupportedNetworkProtocolVersion (ShelleyBlock proto era) where
-  supportedNodeToNodeVersions _ = Map.fromList [
-      (NodeToNodeV_14, ShelleyNodeToNodeVersion1)
+  supportedNodeToNodeVersions _ =
+    Map.fromList
+      [ (NodeToNodeV_14, ShelleyNodeToNodeVersion1)
       ]
-  supportedNodeToClientVersions _ = Map.fromList [
-      (NodeToClientV_16, ShelleyNodeToClientVersion8)
+  supportedNodeToClientVersions _ =
+    Map.fromList
+      [ (NodeToClientV_16, ShelleyNodeToClientVersion8)
       , (NodeToClientV_17, ShelleyNodeToClientVersion9)
      , (NodeToClientV_18, ShelleyNodeToClientVersion10)
       , (NodeToClientV_19, ShelleyNodeToClientVersion11)
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/PeerSelection.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/PeerSelection.hs
index 9ee7d48da8..0861efbba3 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/PeerSelection.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/PeerSelection.hs
@@ -3,94 +3,98 @@
 {-# LANGUAGE ScopedTypeVariables #-}
 {-# LANGUAGE TupleSections #-}
 {-# LANGUAGE TypeFamilies #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}
 
 module Ouroboros.Consensus.Shelley.Ledger.PeerSelection () where
 
-import           Cardano.Ledger.BaseTypes
-import qualified Cardano.Ledger.Keys as SL
-import qualified Cardano.Ledger.Shelley.API as SL
-import           Control.DeepSeq (force)
-import           Data.Bifunctor (second)
-import           Data.Foldable (toList)
-import           Data.List (sortOn)
-import           Data.List.NonEmpty (NonEmpty)
-import qualified Data.List.NonEmpty as NE
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           Data.Maybe (catMaybes, mapMaybe)
-import           Data.Ord (Down (..))
-import           Data.Text.Encoding (encodeUtf8)
-import           Lens.Micro.Extras (view)
-import           Ouroboros.Consensus.Ledger.SupportsPeerSelection
-import           Ouroboros.Consensus.Shelley.Ledger.Block
-import           Ouroboros.Consensus.Shelley.Ledger.Ledger
+import Cardano.Ledger.BaseTypes
+import Cardano.Ledger.Keys qualified as SL
+import Cardano.Ledger.Shelley.API qualified as SL
+import Control.DeepSeq (force)
+import Data.Bifunctor (second)
+import Data.Foldable (toList)
+import Data.List (sortOn)
+import Data.List.NonEmpty (NonEmpty)
+import Data.List.NonEmpty qualified as NE
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Maybe (catMaybes, mapMaybe)
+import Data.Ord (Down (..))
+import Data.Text.Encoding (encodeUtf8)
+import Lens.Micro.Extras (view)
+import Ouroboros.Consensus.Ledger.SupportsPeerSelection
+import Ouroboros.Consensus.Shelley.Ledger.Block
+import Ouroboros.Consensus.Shelley.Ledger.Ledger
 
 instance SL.EraCertState era => LedgerSupportsPeerSelection (ShelleyBlock proto era) where
-  getPeers ShelleyLedgerState { shelleyLedgerState } = catMaybes
+  getPeers ShelleyLedgerState{shelleyLedgerState} =
+    catMaybes
       [ (poolStake,) <$> Map.lookup stakePool poolRelayAccessPoints
       | (stakePool, poolStake) <- orderByStake poolDistr
       ]
-    where
-      poolDistr :: SL.PoolDistr
-      poolDistr = SL.nesPd shelleyLedgerState
+   where
+    poolDistr :: SL.PoolDistr
+    poolDistr = SL.nesPd shelleyLedgerState
 
-      -- | Sort stake pools by descending stake
-      orderByStake ::
-           SL.PoolDistr
-        -> [(SL.KeyHash 'SL.StakePool, PoolStake)]
-      orderByStake =
-          sortOn (Down . snd)
-        . map (second (PoolStake . SL.individualPoolStake))
-        . Map.toList
-        . SL.unPoolDistr
+    -- \| Sort stake pools by descending stake
+    orderByStake ::
+      SL.PoolDistr ->
+      [(SL.KeyHash 'SL.StakePool, PoolStake)]
+    orderByStake =
+      sortOn (Down . snd)
+        . map (second (PoolStake . SL.individualPoolStake))
+        . Map.toList
+        . SL.unPoolDistr
 
-      futurePoolParams, poolParams ::
-        Map (SL.KeyHash 'SL.StakePool) SL.PoolParams
-      (futurePoolParams, poolParams) =
-        (SL.psFutureStakePoolParams pstate, SL.psStakePoolParams pstate)
-        where
-          pstate :: SL.PState era
-          pstate =
-              view SL.certPStateL
-            . SL.lsCertState
-            . SL.esLState
-            . SL.nesEs
-            $ shelleyLedgerState
+    futurePoolParams
+    , poolParams ::
+      Map (SL.KeyHash 'SL.StakePool) SL.PoolParams
+    (futurePoolParams, poolParams) =
+      (SL.psFutureStakePoolParams pstate, SL.psStakePoolParams pstate)
+     where
+      pstate :: SL.PState era
+      pstate =
+        view SL.certPStateL
+          . SL.lsCertState
+          . SL.esLState
+          . SL.nesEs
+          $ shelleyLedgerState
 
-      relayToRelayAccessPoint :: SL.StakePoolRelay -> Maybe RelayAccessPoint
-      relayToRelayAccessPoint (SL.SingleHostAddr (SJust (Port port)) (SJust ipv4) _) =
-          Just $ RelayAccessAddress (IPv4 ipv4) (fromIntegral port)
-      relayToRelayAccessPoint (SL.SingleHostAddr (SJust (Port port))
-                                                 SNothing
-                                                 (SJust ipv6)) =
-          Just $ RelayAccessAddress (IPv6 ipv6) (fromIntegral port)
-      relayToRelayAccessPoint (SL.SingleHostName (SJust (Port port)) dnsName) =
-          Just $ RelayAccessDomain (encodeUtf8 $ dnsToText dnsName) (fromIntegral port)
-      relayToRelayAccessPoint _ =
-          -- This could be an unsupported relay (SRV records) or an unusable
-          -- relay such as a relay with an IP address but without a port number.
-          Nothing
+    relayToRelayAccessPoint :: SL.StakePoolRelay -> Maybe RelayAccessPoint
+    relayToRelayAccessPoint (SL.SingleHostAddr (SJust (Port port)) (SJust ipv4) _) =
+      Just $ RelayAccessAddress (IPv4 ipv4) (fromIntegral port)
+    relayToRelayAccessPoint
+      ( SL.SingleHostAddr
+          (SJust (Port port))
+          SNothing
+          (SJust ipv6)
+        ) =
+        Just $ RelayAccessAddress (IPv6 ipv6) (fromIntegral port)
+    relayToRelayAccessPoint (SL.SingleHostName (SJust (Port port)) dnsName) =
+      Just $ RelayAccessDomain (encodeUtf8 $ dnsToText dnsName) (fromIntegral port)
+    relayToRelayAccessPoint _ =
+      -- This could be an unsupported relay (SRV records) or an unusable
+      -- relay such as a relay with an IP address but without a port number.
+      Nothing
 
-      -- | Note that a stake pool can have multiple registered relays
-      pparamsRelayAccessPoints ::
-           (RelayAccessPoint -> StakePoolRelay)
-        -> SL.PoolParams
-        -> Maybe (NonEmpty StakePoolRelay)
-      pparamsRelayAccessPoints injStakePoolRelay =
-          NE.nonEmpty
-        . force
-        . mapMaybe (fmap injStakePoolRelay . relayToRelayAccessPoint)
-        . toList
-        . SL.ppRelays
+    -- \| Note that a stake pool can have multiple registered relays
+    pparamsRelayAccessPoints ::
+      (RelayAccessPoint -> StakePoolRelay) ->
+      SL.PoolParams ->
+      Maybe (NonEmpty StakePoolRelay)
+    pparamsRelayAccessPoints injStakePoolRelay =
+      NE.nonEmpty
+        . force
+        . mapMaybe (fmap injStakePoolRelay . relayToRelayAccessPoint)
+        . toList
+        . SL.ppRelays
 
-      -- | Combine the stake pools registered in the future and the current pool
-      -- parameters, and remove duplicates.
-      poolRelayAccessPoints ::
-        Map (SL.KeyHash 'SL.StakePool) (NonEmpty StakePoolRelay)
-      poolRelayAccessPoints =
-          Map.unionWith
-            (\futureRelays currentRelays -> NE.nub (futureRelays <> currentRelays))
-            (Map.mapMaybe (pparamsRelayAccessPoints FutureRelay) futurePoolParams)
-            (Map.mapMaybe (pparamsRelayAccessPoints CurrentRelay) poolParams)
+    -- \| Combine the stake pools registered in the future and the current pool
+    -- parameters, and remove duplicates.
+    poolRelayAccessPoints ::
+      Map (SL.KeyHash 'SL.StakePool) (NonEmpty StakePoolRelay)
+    poolRelayAccessPoints =
+      Map.unionWith
+        (\futureRelays currentRelays -> NE.nub (futureRelays <> currentRelays))
+        (Map.mapMaybe (pparamsRelayAccessPoints FutureRelay) futurePoolParams)
+        (Map.mapMaybe (pparamsRelayAccessPoints CurrentRelay) poolParams)
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Protocol.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Protocol.hs
index 9f7ddd9852..9d5cbd2c87 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Protocol.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Protocol.hs
@@ -5,20 +5,23 @@
 {-# LANGUAGE TypeApplications #-}
 {-# LANGUAGE TypeFamilies #-}
 {-# LANGUAGE UndecidableInstances #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}
 
 module Ouroboros.Consensus.Shelley.Ledger.Protocol () where
 
-import qualified Cardano.Ledger.Shelley.API as SL
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Protocol.Signed
-import           Ouroboros.Consensus.Protocol.TPraos
-import           Ouroboros.Consensus.Shelley.Ledger.Block
-import           Ouroboros.Consensus.Shelley.Ledger.Config (BlockConfig (..))
-import           Ouroboros.Consensus.Shelley.Protocol.Abstract
-                     (ShelleyProtocolHeader, pHeaderIssueNo, pHeaderIssuer,
-                     pTieBreakVRFValue, protocolHeaderView)
+import Cardano.Ledger.Shelley.API qualified as SL
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Protocol.Signed
+import Ouroboros.Consensus.Protocol.TPraos
+import Ouroboros.Consensus.Shelley.Ledger.Block
+import Ouroboros.Consensus.Shelley.Ledger.Config (BlockConfig (..))
+import Ouroboros.Consensus.Shelley.Protocol.Abstract
+  ( ShelleyProtocolHeader
+  , pHeaderIssueNo
+  , pHeaderIssuer
+  , pTieBreakVRFValue
+  , protocolHeaderView
+  )
 
 {-------------------------------------------------------------------------------
   Support for Transitional Praos consensus algorithm
@@ -29,24 +32,27 @@
 type instance BlockProtocol (ShelleyBlock proto era) = proto
 
 instance ShelleyCompatible proto era => BlockSupportsProtocol (ShelleyBlock proto era) where
   validateView _cfg = protocolHeaderView @proto . shelleyHeaderRaw
 
-  selectView _ hdr@(ShelleyHeader shdr _) = PraosChainSelectView {
-      csvChainLength = blockNo hdr
-    , csvSlotNo      = blockSlot hdr
-    , csvIssuer      = hdrIssuer
-    , csvIssueNo     = pHeaderIssueNo shdr
+  selectView _ hdr@(ShelleyHeader shdr _) =
+    PraosChainSelectView
+      { csvChainLength = blockNo hdr
+      , csvSlotNo = blockSlot hdr
+      , csvIssuer = hdrIssuer
+      , csvIssueNo = pHeaderIssueNo shdr
       , csvTieBreakVRF = pTieBreakVRFValue shdr
       }
-    where
-      hdrIssuer :: SL.VKey 'SL.BlockIssuer
-      hdrIssuer = pHeaderIssuer shdr
+   where
+    hdrIssuer :: SL.VKey 'SL.BlockIssuer
+    hdrIssuer = pHeaderIssuer shdr
 
   projectChainOrderConfig = shelleyVRFTiebreakerFlavor
 
 -- TODO correct place for these two?
-type instance Signed (Header (ShelleyBlock proto era)) =
-  Signed (ShelleyProtocolHeader proto)
+type instance
+  Signed (Header (ShelleyBlock proto era)) =
+    Signed (ShelleyProtocolHeader proto)
 
-instance SignedHeader (ShelleyProtocolHeader proto) =>
+instance
+  SignedHeader (ShelleyProtocolHeader proto) =>
   SignedHeader (Header (ShelleyBlock proto era)) where
   headerSigned = headerSigned . shelleyHeaderRaw
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Query.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Query.hs
index 5f8859de9c..6d967b72af 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Query.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Query.hs
@@ -18,103 +18,120 @@
 {-# LANGUAGE TypeFamilies #-}
 {-# LANGUAGE TypeOperators #-}
 {-# LANGUAGE UndecidableInstances #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}
 
-module Ouroboros.Consensus.Shelley.Ledger.Query (
-    BlockQuery (..)
+module Ouroboros.Consensus.Shelley.Ledger.Query
+  ( BlockQuery (..)
   , NonMyopicMemberRewards (..)
   , StakeSnapshot (..)
   , StakeSnapshots (..)
+
     -- * Serialisation
   , decodeShelleyQuery
   , decodeShelleyResult
   , encodeShelleyQuery
   , encodeShelleyResult
+
     -- * BlockSupportsHFLedgerQuery instances
   , answerShelleyLookupQueries
   , answerShelleyTraversingQueries
   , shelleyQFTraverseTablesPredicate
   ) where
 
-import           Cardano.Binary (FromCBOR (..), ToCBOR (..), encodeListLen,
-                     enforceSize)
-import           Cardano.Ledger.Address
-import qualified Cardano.Ledger.Api.State.Query as SL
-import           Cardano.Ledger.CertState (lookupDepositDState)
-import qualified Cardano.Ledger.CertState as SL
-import           Cardano.Ledger.Coin (Coin)
-import           Cardano.Ledger.Compactible (Compactible (fromCompact))
-import qualified Cardano.Ledger.Conway.Governance as CG
-import qualified Cardano.Ledger.Core as SL
-import           Cardano.Ledger.Credential (StakeCredential)
-import           Cardano.Ledger.Keys (KeyHash, KeyRole (..))
-import qualified Cardano.Ledger.Shelley.API as SL
-import qualified Cardano.Ledger.Shelley.Core as LC
-import           Cardano.Ledger.Shelley.LedgerState (AccountState)
-import qualified Cardano.Ledger.Shelley.RewardProvenance as SL
-                     (RewardProvenance)
-import qualified Cardano.Ledger.State as SL
-import           Cardano.Ledger.UMap (UMap (..), rdReward, umElemDRep,
-                     umElemRDPair, umElemSPool)
-import           Cardano.Protocol.Crypto (Crypto)
-import           Codec.CBOR.Decoding (Decoder)
-import qualified Codec.CBOR.Decoding as CBOR
-import           Codec.CBOR.Encoding (Encoding)
-import qualified Codec.CBOR.Encoding as CBOR
-import           Codec.Serialise (decode, encode)
-import           Control.DeepSeq (NFData)
-import           Data.Bifunctor (second)
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           Data.MemPack
-import           Data.Sequence (Seq (..))
-import           Data.Set (Set)
-import qualified Data.Set as Set
-import           Data.Typeable (Typeable)
-import qualified Data.VMap as VMap
-import           GHC.Generics (Generic)
-import           Lens.Micro
-import           Lens.Micro.Extras (view)
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Config
-import           Ouroboros.Consensus.HardFork.Combinator.Basics
-import           Ouroboros.Consensus.HardFork.Combinator.Ledger.Query
-import           Ouroboros.Consensus.HeaderValidation
-import           Ouroboros.Consensus.Ledger.Basics
-import           Ouroboros.Consensus.Ledger.Extended
-import           Ouroboros.Consensus.Ledger.Query
-import           Ouroboros.Consensus.Ledger.SupportsPeerSelection
-import           Ouroboros.Consensus.Ledger.SupportsProtocol
-import           Ouroboros.Consensus.Protocol.Abstract (ChainDepState)
-import qualified Ouroboros.Consensus.Shelley.Eras as SE
-import           Ouroboros.Consensus.Shelley.Ledger.Block
-import           Ouroboros.Consensus.Shelley.Ledger.Config
-import           Ouroboros.Consensus.Shelley.Ledger.Ledger
-import           Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion
-                     (ShelleyNodeToClientVersion (..))
-import           Ouroboros.Consensus.Shelley.Ledger.PeerSelection ()
-import           Ouroboros.Consensus.Shelley.Ledger.Query.Types
-import           Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto)
-import           Ouroboros.Consensus.Storage.LedgerDB
-import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB
-import           Ouroboros.Consensus.Util (ShowProxy (..))
-import           Ouroboros.Consensus.Util.IndexedMemPack
-import           Ouroboros.Network.Block (Serialised (..), decodePoint,
-                     encodePoint, mkSerialised)
-import           Ouroboros.Network.PeerSelection.LedgerPeers.Type
-import           Ouroboros.Network.PeerSelection.LedgerPeers.Utils
+import Cardano.Binary
+  ( FromCBOR (..)
+  , ToCBOR (..)
+  , encodeListLen
+  , enforceSize
+  )
+import Cardano.Ledger.Address
+import Cardano.Ledger.Api.State.Query qualified as SL
+import Cardano.Ledger.CertState (lookupDepositDState)
+import Cardano.Ledger.CertState qualified as SL
+import Cardano.Ledger.Coin (Coin)
+import Cardano.Ledger.Compactible (Compactible (fromCompact))
+import Cardano.Ledger.Conway.Governance qualified as CG
+import Cardano.Ledger.Core qualified as SL
+import Cardano.Ledger.Credential (StakeCredential)
+import Cardano.Ledger.Keys (KeyHash, KeyRole (..))
+import Cardano.Ledger.Shelley.API qualified as SL
+import Cardano.Ledger.Shelley.Core qualified as LC
+import Cardano.Ledger.Shelley.LedgerState (AccountState)
+import Cardano.Ledger.Shelley.RewardProvenance qualified as SL
+  ( RewardProvenance
+  )
+import Cardano.Ledger.State qualified as SL
+import Cardano.Ledger.UMap
+  ( UMap (..)
+  , rdReward
+  , umElemDRep
+  , umElemRDPair
+  , umElemSPool
+  )
+import Cardano.Protocol.Crypto (Crypto)
+import Codec.CBOR.Decoding (Decoder)
+import Codec.CBOR.Decoding qualified as CBOR
+import Codec.CBOR.Encoding (Encoding)
+import Codec.CBOR.Encoding qualified as CBOR
+import Codec.Serialise (decode, encode)
+import Control.DeepSeq (NFData)
+import Data.Bifunctor (second)
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.MemPack
+import Data.Sequence (Seq (..))
+import Data.Set (Set)
+import Data.Set qualified as Set
+import Data.Typeable (Typeable)
+import Data.VMap qualified as VMap
+import GHC.Generics (Generic)
+import Lens.Micro
+import Lens.Micro.Extras (view)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.HardFork.Combinator.Basics
+import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query
+import Ouroboros.Consensus.HeaderValidation
+import Ouroboros.Consensus.Ledger.Basics
+import Ouroboros.Consensus.Ledger.Extended
+import Ouroboros.Consensus.Ledger.Query
+import Ouroboros.Consensus.Ledger.SupportsPeerSelection
+import Ouroboros.Consensus.Ledger.SupportsProtocol
+import Ouroboros.Consensus.Protocol.Abstract (ChainDepState)
+import Ouroboros.Consensus.Shelley.Eras qualified as SE
+import Ouroboros.Consensus.Shelley.Ledger.Block
+import Ouroboros.Consensus.Shelley.Ledger.Config
+import Ouroboros.Consensus.Shelley.Ledger.Ledger
+import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion
+  ( ShelleyNodeToClientVersion (..)
+  )
+import Ouroboros.Consensus.Shelley.Ledger.PeerSelection ()
+import Ouroboros.Consensus.Shelley.Ledger.Query.Types
+import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto)
+import Ouroboros.Consensus.Storage.LedgerDB
+import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB
+import Ouroboros.Consensus.Util (ShowProxy (..))
+import Ouroboros.Consensus.Util.IndexedMemPack
+import Ouroboros.Network.Block
+  ( Serialised (..)
+  , decodePoint
+  , encodePoint
+  , mkSerialised
+  )
+import Ouroboros.Network.PeerSelection.LedgerPeers.Type
+import Ouroboros.Network.PeerSelection.LedgerPeers.Utils
 
 {-------------------------------------------------------------------------------
   BlockSupportsLedgerQuery
-------------------------------------------------------------------------------}
 
-newtype NonMyopicMemberRewards = NonMyopicMemberRewards {
-    unNonMyopicMemberRewards ::
-      Map (Either SL.Coin (SL.Credential 'SL.Staking))
-          (Map (SL.KeyHash 'SL.StakePool) SL.Coin)
-  }
-  deriving stock (Show)
+newtype NonMyopicMemberRewards = NonMyopicMemberRewards
+  { unNonMyopicMemberRewards ::
+      Map
+        (Either SL.Coin (SL.Credential 'SL.Staking))
+        (Map (SL.KeyHash 'SL.StakePool) SL.Coin)
+  }
+  deriving stock Show
   deriving newtype (Eq, ToCBOR, FromCBOR)
 
 type Delegations = Map (SL.Credential 'SL.Staking) (SL.KeyHash 'SL.StakePool)
 
@@ -128,43 +145,37 @@ data instance BlockQuery (ShelleyBlock proto era) fp result where
   GetEpochNo :: BlockQuery (ShelleyBlock proto era) QFNoTables EpochNo
 
   -- | Calculate the Non-Myopic Pool Member Rewards for a set of
   -- credentials. See 'SL.getNonMyopicMemberRewards'
-  GetNonMyopicMemberRewards
-    :: Set (Either SL.Coin (SL.Credential 'SL.Staking))
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables NonMyopicMemberRewards
-  GetCurrentPParams
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables (LC.PParams era)
-  GetProposedPParamsUpdates
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables (SL.ProposedPPUpdates era)
+  GetNonMyopicMemberRewards ::
+    Set (Either SL.Coin (SL.Credential 'SL.Staking)) ->
+    BlockQuery (ShelleyBlock proto era) QFNoTables NonMyopicMemberRewards
+  GetCurrentPParams ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables (LC.PParams era)
+  GetProposedPParamsUpdates ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables (SL.ProposedPPUpdates era)
 
   -- | This gets the stake distribution, but not in terms of _active_ stake
   -- (which we need for the leader schedule), but rather in terms of _total_
   -- stake, which is relevant for rewards. It is used by the wallet to show
   -- saturation levels to the end user. We should consider refactoring this, to
   -- an endpoint that provides all the information that the wallet wants about
   -- pools, in an extensible fashion.
-  GetStakeDistribution
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables (PoolDistr (ProtoCrypto proto))
-
+  GetStakeDistribution ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables (PoolDistr (ProtoCrypto proto))
   -- | Get a subset of the UTxO, filtered by address. Although this will
   -- typically return a lot less data than 'GetUTxOWhole', it requires a linear
   -- search over the UTxO and so cost O(n) time.
   --
   -- Only 'GetUTxOByTxIn' is efficient in time and space.
-  --
-  GetUTxOByAddress
-    :: Set SL.Addr
-    -> BlockQuery (ShelleyBlock proto era) QFTraverseTables (SL.UTxO era)
-
+  GetUTxOByAddress ::
+    Set SL.Addr ->
+    BlockQuery (ShelleyBlock proto era) QFTraverseTables (SL.UTxO era)
   -- | Get the /entire/ UTxO. This is only suitable for debug/testing purposes
   -- because otherwise it is far too much data.
-  --
-  GetUTxOWhole
-    :: BlockQuery (ShelleyBlock proto era) QFTraverseTables (SL.UTxO era)
-
+  GetUTxOWhole ::
+    BlockQuery (ShelleyBlock proto era) QFTraverseTables (SL.UTxO era)
   -- | Only for debugging purposes, we make no effort to ensure binary
   -- compatibility (cf the comment on 'GetCBOR'). Moreover, it is huge.
-  DebugEpochState
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables (SL.EpochState era)
-
+  DebugEpochState ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables (SL.EpochState era)
   -- | Wrap the result of the query using CBOR-in-CBOR.
   --
   -- For example, when a client is running a different version than the server
@@ -178,97 +189,98 @@ data instance BlockQuery (ShelleyBlock proto era) fp result where
   -- result, it has access to the deserialised epoch state. When it fails to
   -- decode it, the client can fall back to pretty printing the actual CBOR,
   -- which is better than no output at all.
-  GetCBOR
-    :: BlockQuery (ShelleyBlock proto era) fp result
-    -> BlockQuery (ShelleyBlock proto era) fp (Serialised result)
-
-  GetFilteredDelegationsAndRewardAccounts
-    :: Set (SL.Credential 'SL.Staking)
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables
-                  (Delegations, Map (SL.Credential 'Staking) Coin)
-
-  GetGenesisConfig
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables CompactGenesis
-
+  GetCBOR ::
+    BlockQuery (ShelleyBlock proto era) fp result ->
+    BlockQuery (ShelleyBlock proto era) fp (Serialised result)
+  GetFilteredDelegationsAndRewardAccounts ::
+    Set (SL.Credential 'SL.Staking) ->
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      (Delegations, Map (SL.Credential 'Staking) Coin)
+  GetGenesisConfig ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables CompactGenesis
   -- | Only for debugging purposes, we make no effort to ensure binary
   -- compatibility (cf the comment on 'GetCBOR'). Moreover, it is huge.
-  DebugNewEpochState
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables (SL.NewEpochState era)
-
+  DebugNewEpochState ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables (SL.NewEpochState era)
   -- | Only for debugging purposes, we make no effort to ensure binary
   -- compatibility (cf the comment on 'GetCBOR').
-  DebugChainDepState
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables (ChainDepState proto)
-
-  GetRewardProvenance
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables SL.RewardProvenance
-
+  DebugChainDepState ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables (ChainDepState proto)
+  GetRewardProvenance ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables SL.RewardProvenance
   -- | Get a subset of the UTxO, filtered by transaction input. This is
   -- efficient and costs only O(m * log n) for m inputs and a UTxO of size n.
-  --
-  GetUTxOByTxIn
-    :: Set SL.TxIn
-    -> BlockQuery (ShelleyBlock proto era) QFLookupTables (SL.UTxO era)
-
-  GetStakePools
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables
-                  (Set (SL.KeyHash 'SL.StakePool))
-
-  GetStakePoolParams
-    :: Set (SL.KeyHash 'SL.StakePool)
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables
-                  (Map (SL.KeyHash 'SL.StakePool) SL.PoolParams)
-
-  GetRewardInfoPools
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables
-                  (SL.RewardParams,
-                   Map (SL.KeyHash 'SL.StakePool)
-                       (SL.RewardInfoPool))
-
-  GetPoolState
-    :: Maybe (Set (SL.KeyHash 'SL.StakePool))
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables
-                  (SL.PState era)
-
-  GetStakeSnapshots
-    :: Maybe (Set (SL.KeyHash 'SL.StakePool))
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables
-                  StakeSnapshots
-
-  GetPoolDistr
-    :: Maybe (Set (SL.KeyHash 'SL.StakePool))
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables
-                  (PoolDistr (ProtoCrypto proto))
-
-  GetStakeDelegDeposits
-    :: Set StakeCredential
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables
-                  (Map StakeCredential Coin)
-
+  GetUTxOByTxIn ::
+    Set SL.TxIn ->
+    BlockQuery (ShelleyBlock proto era) QFLookupTables (SL.UTxO era)
+  GetStakePools ::
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      (Set (SL.KeyHash 'SL.StakePool))
+  GetStakePoolParams ::
+    Set (SL.KeyHash 'SL.StakePool) ->
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      (Map (SL.KeyHash 'SL.StakePool) SL.PoolParams)
+  GetRewardInfoPools ::
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      ( SL.RewardParams
+      , Map
+          (SL.KeyHash 'SL.StakePool)
+          (SL.RewardInfoPool)
+      )
+  GetPoolState ::
+    Maybe (Set (SL.KeyHash 'SL.StakePool)) ->
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      (SL.PState era)
+  GetStakeSnapshots ::
+    Maybe (Set (SL.KeyHash 'SL.StakePool)) ->
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      StakeSnapshots
+  GetPoolDistr ::
+    Maybe (Set (SL.KeyHash 'SL.StakePool)) ->
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      (PoolDistr (ProtoCrypto proto))
+  GetStakeDelegDeposits ::
+    Set StakeCredential ->
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      (Map StakeCredential Coin)
   -- | Not supported in eras before Conway
-  GetConstitution
-    :: CG.ConwayEraGov era
-    => BlockQuery (ShelleyBlock proto era) QFNoTables (CG.Constitution era)
-
+  GetConstitution ::
+    CG.ConwayEraGov era =>
+    BlockQuery (ShelleyBlock proto era) QFNoTables (CG.Constitution era)
   -- | Although this query was introduced as part of Conway, it is general and
   -- so has non-degenerate semantics for eras before Conway.
-  GetGovState
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables (LC.GovState era)
-
+  GetGovState ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables (LC.GovState era)
   -- | The argument specifies the credential of each 'DRep' whose state should
   -- be returned. When it's empty, the state of every 'DRep' is returned.
   --
   -- Not supported in eras before Conway.
-  GetDRepState
-    :: CG.ConwayEraGov era
-    => Set (SL.Credential 'DRepRole)
-    -> BlockQuery (ShelleyBlock proto era)
-                  QFNoTables
-                  (Map
-                    (SL.Credential 'DRepRole)
-                    SL.DRepState
-                  )
-
+  GetDRepState ::
+    CG.ConwayEraGov era =>
+    Set (SL.Credential 'DRepRole) ->
+    BlockQuery
+      (ShelleyBlock proto era)
+      QFNoTables
+      ( Map
+          (SL.Credential 'DRepRole)
+          SL.DRepState
+      )
   -- | Query the 'DRep' stake distribution. Note that this can be an expensive
   -- query because there is a chance that the latest snapshot's distribution
   -- has not yet been fully computed.
@@ -277,418 +289,414 @@ data instance BlockQuery (ShelleyBlock proto era) fp result where
   -- the stake of every 'DRep's is returned.
   --
   -- Not supported in eras before Conway.
-  GetDRepStakeDistr
-    :: CG.ConwayEraGov era
-    => Set SL.DRep
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables (Map SL.DRep Coin)
-
+  GetDRepStakeDistr ::
+    CG.ConwayEraGov era =>
+    Set SL.DRep ->
+    BlockQuery (ShelleyBlock proto era) QFNoTables (Map SL.DRep Coin)
   -- | Query committee members
   --
   -- Not supported in eras before Conway.
-  GetCommitteeMembersState
-    :: CG.ConwayEraGov era
-    => Set (SL.Credential 'ColdCommitteeRole)
-    -> Set (SL.Credential 'HotCommitteeRole)
-    -> Set SL.MemberStatus
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables SL.CommitteeMembersState
-
+  GetCommitteeMembersState ::
+    CG.ConwayEraGov era =>
+    Set (SL.Credential 'ColdCommitteeRole) ->
+    Set (SL.Credential 'HotCommitteeRole) ->
+    Set SL.MemberStatus ->
+    BlockQuery (ShelleyBlock proto era) QFNoTables SL.CommitteeMembersState
   -- | Not supported in eras before Conway.
-  GetFilteredVoteDelegatees
-    :: CG.ConwayEraGov era
-    => Set (SL.Credential 'SL.Staking)
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables VoteDelegatees
-
-  GetAccountState
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables AccountState
-
+  GetFilteredVoteDelegatees ::
+    CG.ConwayEraGov era =>
+    Set (SL.Credential 'SL.Staking) ->
+    BlockQuery (ShelleyBlock proto era) QFNoTables VoteDelegatees
+  GetAccountState ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables AccountState
   -- | Query the SPO voting stake distribution.
   -- This stake distribution is different from the one used in leader election.
   --
   -- See: https://github.com/IntersectMBO/cardano-ledger/issues/4342
   --
   -- Not supported in eras before Conway.
-  GetSPOStakeDistr
-    :: CG.ConwayEraGov era
-    => Set (KeyHash 'StakePool)
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables (Map (KeyHash 'StakePool) Coin)
-
-  GetProposals
-    :: CG.ConwayEraGov era
-    => Set CG.GovActionId
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables (Seq (CG.GovActionState era))
-
-  GetRatifyState
-    :: CG.ConwayEraGov era
-    => BlockQuery (ShelleyBlock proto era) QFNoTables (CG.RatifyState era)
-
-  GetFuturePParams
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables (Maybe (LC.PParams era))
-
+  GetSPOStakeDistr ::
+    CG.ConwayEraGov era =>
+    Set (KeyHash 'StakePool) ->
+    BlockQuery (ShelleyBlock proto era) QFNoTables (Map (KeyHash 'StakePool) Coin)
+  GetProposals ::
+    CG.ConwayEraGov era =>
+    Set CG.GovActionId ->
+    BlockQuery (ShelleyBlock proto era) QFNoTables (Seq (CG.GovActionState era))
+  GetRatifyState ::
+    CG.ConwayEraGov era =>
+    BlockQuery (ShelleyBlock proto era) QFNoTables (CG.RatifyState era)
+  GetFuturePParams ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables (Maybe (LC.PParams era))
   -- | Obtain a snapshot of big ledger peers. CLI can serialize these,
   -- and if made available to the node by topology configuration,
   -- the diffusion layer can use these peers when syncing up from scratch
   -- or stale ledger state - especially useful for Genesis mode
-  GetBigLedgerPeerSnapshot
-    :: BlockQuery (ShelleyBlock proto era) QFNoTables LedgerPeerSnapshot
-
-  QueryStakePoolDefaultVote
-    :: CG.ConwayEraGov era
-    => KeyHash 'StakePool
-    -> BlockQuery (ShelleyBlock proto era) QFNoTables CG.DefaultVote
-
-  -- WARNING: please add new queries to the end of the list and stick to this
-  -- order in all other pattern matches on queries. This helps in particular
-  -- with the en/decoders, as we want the CBOR tags to be ordered.
-  --
-  -- WARNING: when adding a new query, a new @ShelleyNodeToClientVersionX@ must
-  -- be added. See #2830 for a template on how to do this.
-  --
-  -- WARNING: never modify an existing query that has been incorporated in a
-  -- release of the node, as it will break compatibility with deployed nodes.
-  -- Instead, add a new query. To remove the old query, first to stop supporting
-  -- it by modifying 'querySupportedVersion' (@< X@) and when the version is no
-  -- longer used (because mainnet has hard-forked to a newer version), it can be
-  -- removed.
+  GetBigLedgerPeerSnapshot ::
+    BlockQuery (ShelleyBlock proto era) QFNoTables LedgerPeerSnapshot
+  QueryStakePoolDefaultVote ::
+    CG.ConwayEraGov era =>
+    KeyHash 'StakePool ->
+    BlockQuery (ShelleyBlock proto era) QFNoTables CG.DefaultVote
+
+-- WARNING: please add new queries to the end of the list and stick to this
+-- order in all other pattern matches on queries. This helps in particular
+-- with the en/decoders, as we want the CBOR tags to be ordered.
+--
+-- WARNING: when adding a new query, a new @ShelleyNodeToClientVersionX@ must
+-- be added. See #2830 for a template on how to do this.
+--
+-- WARNING: never modify an existing query that has been incorporated in a
+-- release of the node, as it will break compatibility with deployed nodes.
+-- Instead, add a new query. To remove the old query, first to stop supporting
+-- it by modifying 'querySupportedVersion' (@< X@) and when the version is no
+-- longer used (because mainnet has hard-forked to a newer version), it can be
+-- removed.
 
-instance (Typeable era, Typeable proto)
-  => ShowProxy (BlockQuery (ShelleyBlock proto era)) where
+instance
+  (Typeable era, Typeable proto) =>
+  ShowProxy (BlockQuery (ShelleyBlock proto era))
 
-instance ( ShelleyCompatible proto era
-         , LedgerSupportsProtocol (ShelleyBlock proto era)
-         , ProtoCrypto proto ~ crypto
-         , Crypto crypto
-         )
-  => BlockSupportsLedgerQuery (ShelleyBlock proto era) where
+instance
+  ( ShelleyCompatible proto era
+  , LedgerSupportsProtocol (ShelleyBlock proto era)
+  , ProtoCrypto proto ~ crypto
+  , Crypto crypto
+  ) =>
+  BlockSupportsLedgerQuery (ShelleyBlock proto era)
+  where
   answerPureBlockQuery cfg query ext =
-      case query of
-        GetLedgerTip ->
-          shelleyLedgerTipPoint lst
-        GetEpochNo ->
-          SL.nesEL st
-        GetNonMyopicMemberRewards creds ->
-          NonMyopicMemberRewards $
-            SL.getNonMyopicMemberRewards globals st creds
-        GetCurrentPParams ->
-          getPParams st
-        GetProposedPParamsUpdates ->
-          SL.ProposedPPUpdates Map.empty
-        GetStakeDistribution ->
-          fromLedgerPoolDistr $ SL.poolsByTotalStakeFraction globals st
-        DebugEpochState ->
-          getEpochState st
-        GetCBOR query' ->
-          -- We encode using the latest (@maxBound@) ShelleyNodeToClientVersion,
-          -- as the @GetCBOR@ query already is about opportunistically assuming
-          -- both client and server are running the same version; cf. the
-          -- @GetCBOR@ Haddocks.
-          mkSerialised (encodeShelleyResult maxBound query') $
-            answerPureBlockQuery cfg query' ext
-        GetFilteredDelegationsAndRewardAccounts creds ->
-          getFilteredDelegationsAndRewardAccounts st creds
-        GetGenesisConfig ->
-          shelleyLedgerCompactGenesis lcfg
-        DebugNewEpochState ->
-          st
-        DebugChainDepState ->
-          headerStateChainDep hst
-        GetRewardProvenance ->
-          snd $ SL.getRewardProvenance globals st
-        GetStakePools ->
-          SL.getPools st
-        GetStakePoolParams poolids ->
-          SL.getPoolParameters st poolids
-        GetRewardInfoPools ->
-          SL.getRewardInfoPools globals st
-        GetPoolState mPoolIds ->
-          let certPState = view SL.certPStateL . SL.lsCertState . SL.esLState . SL.nesEs $ st in
-          case mPoolIds of
-            Just poolIds ->
-              SL.PState
-                { SL.psStakePoolParams =
-                    Map.restrictKeys (SL.psStakePoolParams certPState) poolIds
-                , SL.psFutureStakePoolParams =
-                    Map.restrictKeys (SL.psFutureStakePoolParams certPState) poolIds
-                , SL.psRetiring = Map.restrictKeys (SL.psRetiring certPState) poolIds
-                , SL.psDeposits = Map.restrictKeys (SL.psDeposits certPState) poolIds
-                }
-            Nothing -> certPState
-        GetStakeSnapshots mPoolIds ->
-          let SL.SnapShots
-                { SL.ssStakeMark
-                , SL.ssStakeSet
-                , SL.ssStakeGo
-                } = SL.esSnapshots . SL.nesEs $ st
-
-              totalMarkByPoolId :: Map (KeyHash 'StakePool) Coin
-              totalMarkByPoolId = SL.sumStakePerPool (SL.ssDelegations ssStakeMark) (SL.ssStake ssStakeMark)
-
-              totalSetByPoolId :: Map (KeyHash 'StakePool) Coin
-              totalSetByPoolId = SL.sumStakePerPool (SL.ssDelegations ssStakeSet) (SL.ssStake ssStakeSet)
-
-              totalGoByPoolId :: Map (KeyHash 'StakePool) Coin
-              totalGoByPoolId = SL.sumStakePerPool (SL.ssDelegations ssStakeGo) (SL.ssStake ssStakeGo)
-
-              getPoolStakes :: Set (KeyHash 'StakePool) -> Map (KeyHash 'StakePool) StakeSnapshot
-              getPoolStakes poolIds = Map.fromSet mkStakeSnapshot poolIds
-                where mkStakeSnapshot poolId = StakeSnapshot
-                        { ssMarkPool = Map.findWithDefault mempty poolId totalMarkByPoolId
-                        , ssSetPool = Map.findWithDefault mempty poolId totalSetByPoolId
-                        , ssGoPool = Map.findWithDefault mempty poolId totalGoByPoolId
-                        }
-
-              getAllStake :: SL.SnapShot -> SL.Coin
-              getAllStake (SL.SnapShot stake _ _) = VMap.foldMap fromCompact (SL.unStake stake)
-          in
-
-          case mPoolIds of
-            Nothing ->
-              let poolIds = Set.fromList $ mconcat
-                    [ VMap.elems (SL.ssDelegations ssStakeMark)
-                    , VMap.elems (SL.ssDelegations ssStakeSet)
-                    , VMap.elems (SL.ssDelegations ssStakeGo)
-                    ]
-              in
-              StakeSnapshots
-                { ssStakeSnapshots = getPoolStakes poolIds
-                , ssMarkTotal = getAllStake ssStakeMark
-                , ssSetTotal = getAllStake ssStakeSet
-                , ssGoTotal = getAllStake ssStakeGo
-                }
-            Just poolIds ->
-              StakeSnapshots
-                { ssStakeSnapshots = getPoolStakes poolIds
-                , ssMarkTotal = getAllStake ssStakeMark
-                , ssSetTotal = getAllStake ssStakeSet
-                , ssGoTotal = getAllStake ssStakeGo
-                }
-
-        GetPoolDistr mPoolIds ->
-          let stakeSet = SL.ssStakeSet . SL.esSnapshots $ getEpochState st in
-          fromLedgerPoolDistr $
-            SL.calculatePoolDistr' (maybe (const True) (flip Set.member) mPoolIds) stakeSet
-        GetStakeDelegDeposits stakeCreds ->
-          let lookupDeposit =
-                lookupDepositDState (view SL.certDStateL $ SL.lsCertState $ SL.esLState $ SL.nesEs st)
-              lookupInsert acc cred =
-                case lookupDeposit cred of
-                  Nothing -> acc
-                  Just deposit -> Map.insert cred deposit acc
-          in Set.foldl' lookupInsert Map.empty stakeCreds
-        GetConstitution ->
-          SL.queryConstitution st
-        GetGovState ->
-          SL.queryGovState st
-        GetDRepState drepCreds ->
-          SL.queryDRepState st drepCreds
-        GetDRepStakeDistr dreps ->
-          SL.queryDRepStakeDistr st dreps
-        GetCommitteeMembersState coldCreds hotCreds statuses ->
-          SL.queryCommitteeMembersState coldCreds hotCreds statuses st
-        GetFilteredVoteDelegatees stakeCreds ->
-          getFilteredVoteDelegatees st stakeCreds
-        GetAccountState ->
-          SL.queryAccountState st
-        GetSPOStakeDistr keys ->
-          SL.querySPOStakeDistr st keys
-        GetProposals gids ->
-          SL.queryProposals st gids
-        GetRatifyState ->
-          SL.queryRatifyState st
-        GetFuturePParams ->
-          SL.queryFuturePParams st
-        GetBigLedgerPeerSnapshot ->
-          let slot = getTipSlot lst
-              ledgerPeers = second (fmap stakePoolRelayAccessPoint) <$> getPeers lst
-              bigLedgerPeers = accumulateBigLedgerStake ledgerPeers
-          in LedgerPeerSnapshot (slot, bigLedgerPeers)
-        QueryStakePoolDefaultVote stakePool ->
-          SL.queryStakePoolDefaultVote st stakePool
-    where
-      lcfg = configLedger $ getExtLedgerCfg cfg
-      globals = shelleyLedgerGlobals lcfg
-      -- NOTE: we are not pattern matching on @ext@ but using the accessors
-      -- here. The reason for that is that that pattern match blows up the
-      -- compile time (in particular the time spent desugaring, which is when
-      -- the compiler looks at pattern matches) to 2m30s!
We don't really - -- understand why, but our guess is that it has to do with the combination - -- of the strictness of 'ExtLedgerState', the fact that @LedgerState@ is a - -- data family, and the 'ShelleyBasedEra' constraint. - lst = ledgerState ext - hst = headerState ext - st = shelleyLedgerState lst + case query of + GetLedgerTip -> + shelleyLedgerTipPoint lst + GetEpochNo -> + SL.nesEL st + GetNonMyopicMemberRewards creds -> + NonMyopicMemberRewards $ + SL.getNonMyopicMemberRewards globals st creds + GetCurrentPParams -> + getPParams st + GetProposedPParamsUpdates -> + SL.ProposedPPUpdates Map.empty + GetStakeDistribution -> + fromLedgerPoolDistr $ SL.poolsByTotalStakeFraction globals st + DebugEpochState -> + getEpochState st + GetCBOR query' -> + -- We encode using the latest (@maxBound@) ShelleyNodeToClientVersion, + -- as the @GetCBOR@ query already is about opportunistically assuming + -- both client and server are running the same version; cf. the + -- @GetCBOR@ Haddocks. + mkSerialised (encodeShelleyResult maxBound query') $ + answerPureBlockQuery cfg query' ext + GetFilteredDelegationsAndRewardAccounts creds -> + getFilteredDelegationsAndRewardAccounts st creds + GetGenesisConfig -> + shelleyLedgerCompactGenesis lcfg + DebugNewEpochState -> + st + DebugChainDepState -> + headerStateChainDep hst + GetRewardProvenance -> + snd $ SL.getRewardProvenance globals st + GetStakePools -> + SL.getPools st + GetStakePoolParams poolids -> + SL.getPoolParameters st poolids + GetRewardInfoPools -> + SL.getRewardInfoPools globals st + GetPoolState mPoolIds -> + let certPState = view SL.certPStateL . SL.lsCertState . SL.esLState . 
SL.nesEs $ st + in case mPoolIds of + Just poolIds -> + SL.PState + { SL.psStakePoolParams = + Map.restrictKeys (SL.psStakePoolParams certPState) poolIds + , SL.psFutureStakePoolParams = + Map.restrictKeys (SL.psFutureStakePoolParams certPState) poolIds + , SL.psRetiring = Map.restrictKeys (SL.psRetiring certPState) poolIds + , SL.psDeposits = Map.restrictKeys (SL.psDeposits certPState) poolIds + } + Nothing -> certPState + GetStakeSnapshots mPoolIds -> + let SL.SnapShots + { SL.ssStakeMark + , SL.ssStakeSet + , SL.ssStakeGo + } = SL.esSnapshots . SL.nesEs $ st + + totalMarkByPoolId :: Map (KeyHash 'StakePool) Coin + totalMarkByPoolId = SL.sumStakePerPool (SL.ssDelegations ssStakeMark) (SL.ssStake ssStakeMark) + + totalSetByPoolId :: Map (KeyHash 'StakePool) Coin + totalSetByPoolId = SL.sumStakePerPool (SL.ssDelegations ssStakeSet) (SL.ssStake ssStakeSet) + + totalGoByPoolId :: Map (KeyHash 'StakePool) Coin + totalGoByPoolId = SL.sumStakePerPool (SL.ssDelegations ssStakeGo) (SL.ssStake ssStakeGo) + + getPoolStakes :: Set (KeyHash 'StakePool) -> Map (KeyHash 'StakePool) StakeSnapshot + getPoolStakes poolIds = Map.fromSet mkStakeSnapshot poolIds + where + mkStakeSnapshot poolId = + StakeSnapshot + { ssMarkPool = Map.findWithDefault mempty poolId totalMarkByPoolId + , ssSetPool = Map.findWithDefault mempty poolId totalSetByPoolId + , ssGoPool = Map.findWithDefault mempty poolId totalGoByPoolId + } + + getAllStake :: SL.SnapShot -> SL.Coin + getAllStake (SL.SnapShot stake _ _) = VMap.foldMap fromCompact (SL.unStake stake) + in case mPoolIds of + Nothing -> + let poolIds = + Set.fromList $ + mconcat + [ VMap.elems (SL.ssDelegations ssStakeMark) + , VMap.elems (SL.ssDelegations ssStakeSet) + , VMap.elems (SL.ssDelegations ssStakeGo) + ] + in StakeSnapshots + { ssStakeSnapshots = getPoolStakes poolIds + , ssMarkTotal = getAllStake ssStakeMark + , ssSetTotal = getAllStake ssStakeSet + , ssGoTotal = getAllStake ssStakeGo + } + Just poolIds -> + StakeSnapshots + { 
ssStakeSnapshots = getPoolStakes poolIds + , ssMarkTotal = getAllStake ssStakeMark + , ssSetTotal = getAllStake ssStakeSet + , ssGoTotal = getAllStake ssStakeGo + } + GetPoolDistr mPoolIds -> + let stakeSet = SL.ssStakeSet . SL.esSnapshots $ getEpochState st + in fromLedgerPoolDistr $ + SL.calculatePoolDistr' (maybe (const True) (flip Set.member) mPoolIds) stakeSet + GetStakeDelegDeposits stakeCreds -> + let lookupDeposit = + lookupDepositDState (view SL.certDStateL $ SL.lsCertState $ SL.esLState $ SL.nesEs st) + lookupInsert acc cred = + case lookupDeposit cred of + Nothing -> acc + Just deposit -> Map.insert cred deposit acc + in Set.foldl' lookupInsert Map.empty stakeCreds + GetConstitution -> + SL.queryConstitution st + GetGovState -> + SL.queryGovState st + GetDRepState drepCreds -> + SL.queryDRepState st drepCreds + GetDRepStakeDistr dreps -> + SL.queryDRepStakeDistr st dreps + GetCommitteeMembersState coldCreds hotCreds statuses -> + SL.queryCommitteeMembersState coldCreds hotCreds statuses st + GetFilteredVoteDelegatees stakeCreds -> + getFilteredVoteDelegatees st stakeCreds + GetAccountState -> + SL.queryAccountState st + GetSPOStakeDistr keys -> + SL.querySPOStakeDistr st keys + GetProposals gids -> + SL.queryProposals st gids + GetRatifyState -> + SL.queryRatifyState st + GetFuturePParams -> + SL.queryFuturePParams st + GetBigLedgerPeerSnapshot -> + let slot = getTipSlot lst + ledgerPeers = second (fmap stakePoolRelayAccessPoint) <$> getPeers lst + bigLedgerPeers = accumulateBigLedgerStake ledgerPeers + in LedgerPeerSnapshot (slot, bigLedgerPeers) + QueryStakePoolDefaultVote stakePool -> + SL.queryStakePoolDefaultVote st stakePool + where + lcfg = configLedger $ getExtLedgerCfg cfg + globals = shelleyLedgerGlobals lcfg + -- NOTE: we are not pattern matching on @ext@ but using the accessors + -- here. 
The reason for that is that that pattern match blows up the + -- compile time (in particular the time spent desugaring, which is when + -- the compiler looks at pattern matches) to 2m30s! We don't really + -- understand why, but our guess is that it has to do with the combination + -- of the strictness of 'ExtLedgerState', the fact that @LedgerState@ is a + -- data family, and the 'ShelleyBasedEra' constraint. + lst = ledgerState ext + hst = headerState ext + st = shelleyLedgerState lst answerBlockQueryLookup = answerShelleyLookupQueries id id id answerBlockQueryTraverse = answerShelleyTraversingQueries id id shelleyQFTraverseTablesPredicate - -- | Is the given query supported by the given 'ShelleyNodeToClientVersion'? + -- \| Is the given query supported by the given 'ShelleyNodeToClientVersion'? blockQueryIsSupportedOnVersion = \case - GetLedgerTip -> const True - GetEpochNo -> const True - GetNonMyopicMemberRewards {} -> const True - GetCurrentPParams -> const True - GetProposedPParamsUpdates -> (< v12) - GetStakeDistribution -> const True - GetUTxOByAddress {} -> const True - GetUTxOWhole -> const True - DebugEpochState -> const True - GetCBOR q -> blockQueryIsSupportedOnVersion q - GetFilteredDelegationsAndRewardAccounts {} -> const True - GetGenesisConfig -> const True - DebugNewEpochState -> const True - DebugChainDepState -> const True - GetRewardProvenance -> const True - GetUTxOByTxIn {} -> const True - GetStakePools -> const True - GetStakePoolParams {} -> const True - GetRewardInfoPools -> const True - GetPoolState {} -> const True - GetStakeSnapshots {} -> const True - GetPoolDistr {} -> const True - GetStakeDelegDeposits {} -> const True - GetConstitution -> (>= v8) - GetGovState -> (>= v8) - GetDRepState {} -> (>= v8) - GetDRepStakeDistr {} -> (>= v8) - GetCommitteeMembersState {} -> (>= v8) - GetFilteredVoteDelegatees {} -> (>= v8) - GetAccountState {} -> (>= v8) - GetSPOStakeDistr {} -> (>= v8) - GetProposals {} -> (>= v9) - GetRatifyState {} -> 
(>= v9) - GetFuturePParams {} -> (>= v10) - GetBigLedgerPeerSnapshot -> (>= v11) - QueryStakePoolDefaultVote {} -> (>= v12) + GetLedgerTip -> const True + GetEpochNo -> const True + GetNonMyopicMemberRewards{} -> const True + GetCurrentPParams -> const True + GetProposedPParamsUpdates -> (< v12) + GetStakeDistribution -> const True + GetUTxOByAddress{} -> const True + GetUTxOWhole -> const True + DebugEpochState -> const True + GetCBOR q -> blockQueryIsSupportedOnVersion q + GetFilteredDelegationsAndRewardAccounts{} -> const True + GetGenesisConfig -> const True + DebugNewEpochState -> const True + DebugChainDepState -> const True + GetRewardProvenance -> const True + GetUTxOByTxIn{} -> const True + GetStakePools -> const True + GetStakePoolParams{} -> const True + GetRewardInfoPools -> const True + GetPoolState{} -> const True + GetStakeSnapshots{} -> const True + GetPoolDistr{} -> const True + GetStakeDelegDeposits{} -> const True + GetConstitution -> (>= v8) + GetGovState -> (>= v8) + GetDRepState{} -> (>= v8) + GetDRepStakeDistr{} -> (>= v8) + GetCommitteeMembersState{} -> (>= v8) + GetFilteredVoteDelegatees{} -> (>= v8) + GetAccountState{} -> (>= v8) + GetSPOStakeDistr{} -> (>= v8) + GetProposals{} -> (>= v9) + GetRatifyState{} -> (>= v9) + GetFuturePParams{} -> (>= v10) + GetBigLedgerPeerSnapshot -> (>= v11) + QueryStakePoolDefaultVote{} -> (>= v12) + where -- WARNING: when adding a new query, a new @ShelleyNodeToClientVersionX@ -- must be added. See #2830 for a template on how to do this. 
- where - v8 = ShelleyNodeToClientVersion8 - v9 = ShelleyNodeToClientVersion9 + + v8 = ShelleyNodeToClientVersion8 + v9 = ShelleyNodeToClientVersion9 v10 = ShelleyNodeToClientVersion10 v11 = ShelleyNodeToClientVersion11 v12 = ShelleyNodeToClientVersion12 instance SameDepIndex2 (BlockQuery (ShelleyBlock proto era)) where - sameDepIndex2 GetLedgerTip GetLedgerTip - = Just Refl - sameDepIndex2 GetLedgerTip _ - = Nothing - sameDepIndex2 GetEpochNo GetEpochNo - = Just Refl - sameDepIndex2 GetEpochNo _ - = Nothing + sameDepIndex2 GetLedgerTip GetLedgerTip = + Just Refl + sameDepIndex2 GetLedgerTip _ = + Nothing + sameDepIndex2 GetEpochNo GetEpochNo = + Just Refl + sameDepIndex2 GetEpochNo _ = + Nothing sameDepIndex2 (GetNonMyopicMemberRewards creds) (GetNonMyopicMemberRewards creds') - | creds == creds' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetNonMyopicMemberRewards _) _ - = Nothing - sameDepIndex2 GetCurrentPParams GetCurrentPParams - = Just Refl - sameDepIndex2 GetCurrentPParams _ - = Nothing - sameDepIndex2 GetProposedPParamsUpdates GetProposedPParamsUpdates - = Just Refl - sameDepIndex2 GetProposedPParamsUpdates _ - = Nothing - sameDepIndex2 GetStakeDistribution GetStakeDistribution - = Just Refl - sameDepIndex2 GetStakeDistribution _ - = Nothing + | creds == creds' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 (GetNonMyopicMemberRewards _) _ = + Nothing + sameDepIndex2 GetCurrentPParams GetCurrentPParams = + Just Refl + sameDepIndex2 GetCurrentPParams _ = + Nothing + sameDepIndex2 GetProposedPParamsUpdates GetProposedPParamsUpdates = + Just Refl + sameDepIndex2 GetProposedPParamsUpdates _ = + Nothing + sameDepIndex2 GetStakeDistribution GetStakeDistribution = + Just Refl + sameDepIndex2 GetStakeDistribution _ = + Nothing sameDepIndex2 (GetUTxOByAddress addrs) (GetUTxOByAddress addrs') - | addrs == addrs' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetUTxOByAddress _) _ - = Nothing - sameDepIndex2 GetUTxOWhole GetUTxOWhole - 
= Just Refl - sameDepIndex2 GetUTxOWhole _ - = Nothing - sameDepIndex2 DebugEpochState DebugEpochState - = Just Refl - sameDepIndex2 DebugEpochState _ - = Nothing - sameDepIndex2 (GetCBOR q) (GetCBOR q') - = (\Refl -> Refl) <$> sameDepIndex2 q q' - sameDepIndex2 (GetCBOR _) _ - = Nothing - sameDepIndex2 (GetFilteredDelegationsAndRewardAccounts creds) - (GetFilteredDelegationsAndRewardAccounts creds') - | creds == creds' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetFilteredDelegationsAndRewardAccounts _) _ - = Nothing - sameDepIndex2 GetGenesisConfig GetGenesisConfig - = Just Refl - sameDepIndex2 GetGenesisConfig _ - = Nothing - sameDepIndex2 DebugNewEpochState DebugNewEpochState - = Just Refl - sameDepIndex2 DebugNewEpochState _ - = Nothing - sameDepIndex2 DebugChainDepState DebugChainDepState - = Just Refl - sameDepIndex2 DebugChainDepState _ - = Nothing - sameDepIndex2 GetRewardProvenance GetRewardProvenance - = Just Refl - sameDepIndex2 GetRewardProvenance _ - = Nothing + | addrs == addrs' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 (GetUTxOByAddress _) _ = + Nothing + sameDepIndex2 GetUTxOWhole GetUTxOWhole = + Just Refl + sameDepIndex2 GetUTxOWhole _ = + Nothing + sameDepIndex2 DebugEpochState DebugEpochState = + Just Refl + sameDepIndex2 DebugEpochState _ = + Nothing + sameDepIndex2 (GetCBOR q) (GetCBOR q') = + (\Refl -> Refl) <$> sameDepIndex2 q q' + sameDepIndex2 (GetCBOR _) _ = + Nothing + sameDepIndex2 + (GetFilteredDelegationsAndRewardAccounts creds) + (GetFilteredDelegationsAndRewardAccounts creds') + | creds == creds' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 (GetFilteredDelegationsAndRewardAccounts _) _ = + Nothing + sameDepIndex2 GetGenesisConfig GetGenesisConfig = + Just Refl + sameDepIndex2 GetGenesisConfig _ = + Nothing + sameDepIndex2 DebugNewEpochState DebugNewEpochState = + Just Refl + sameDepIndex2 DebugNewEpochState _ = + Nothing + sameDepIndex2 DebugChainDepState DebugChainDepState = + Just Refl + 
sameDepIndex2 DebugChainDepState _ = + Nothing + sameDepIndex2 GetRewardProvenance GetRewardProvenance = + Just Refl + sameDepIndex2 GetRewardProvenance _ = + Nothing sameDepIndex2 (GetUTxOByTxIn addrs) (GetUTxOByTxIn addrs') - | addrs == addrs' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetUTxOByTxIn _) _ - = Nothing - sameDepIndex2 GetStakePools GetStakePools - = Just Refl - sameDepIndex2 GetStakePools _ - = Nothing + | addrs == addrs' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 (GetUTxOByTxIn _) _ = + Nothing + sameDepIndex2 GetStakePools GetStakePools = + Just Refl + sameDepIndex2 GetStakePools _ = + Nothing sameDepIndex2 (GetStakePoolParams poolids) (GetStakePoolParams poolids') - | poolids == poolids' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetStakePoolParams _) _ - = Nothing - sameDepIndex2 GetRewardInfoPools GetRewardInfoPools - = Just Refl - sameDepIndex2 GetRewardInfoPools _ - = Nothing + | poolids == poolids' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 (GetStakePoolParams _) _ = + Nothing + sameDepIndex2 GetRewardInfoPools GetRewardInfoPools = + Just Refl + sameDepIndex2 GetRewardInfoPools _ = + Nothing sameDepIndex2 (GetPoolState poolids) (GetPoolState poolids') - | poolids == poolids' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetPoolState _) _ - = Nothing + | poolids == poolids' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 (GetPoolState _) _ = + Nothing sameDepIndex2 (GetStakeSnapshots poolid) (GetStakeSnapshots poolid') - | poolid == poolid' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetStakeSnapshots _) _ - = Nothing + | poolid == poolid' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 (GetStakeSnapshots _) _ = + Nothing sameDepIndex2 (GetPoolDistr poolids) (GetPoolDistr poolids') - | poolids == poolids' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetPoolDistr _) _ - = Nothing + | poolids == poolids' = + Just Refl + | otherwise = + 
Nothing + sameDepIndex2 (GetPoolDistr _) _ = + Nothing sameDepIndex2 (GetStakeDelegDeposits stakeCreds) (GetStakeDelegDeposits stakeCreds') - | stakeCreds == stakeCreds' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 (GetStakeDelegDeposits _) _ - = Nothing + | stakeCreds == stakeCreds' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 (GetStakeDelegDeposits _) _ = + Nothing sameDepIndex2 GetConstitution GetConstitution = Just Refl sameDepIndex2 GetConstitution _ = Nothing sameDepIndex2 GetGovState GetGovState = Just Refl @@ -700,13 +708,13 @@ instance SameDepIndex2 (BlockQuery (ShelleyBlock proto era)) where sameDepIndex2 GetCommitteeMembersState{} GetCommitteeMembersState{} = Just Refl sameDepIndex2 GetCommitteeMembersState{} _ = Nothing sameDepIndex2 (GetFilteredVoteDelegatees stakeCreds) (GetFilteredVoteDelegatees stakeCreds') - | stakeCreds == stakeCreds' - = Just Refl - | otherwise - = Nothing - sameDepIndex2 GetFilteredVoteDelegatees {} _ = Nothing - sameDepIndex2 GetAccountState {} GetAccountState {} = Just Refl - sameDepIndex2 GetAccountState {} _ = Nothing + | stakeCreds == stakeCreds' = + Just Refl + | otherwise = + Nothing + sameDepIndex2 GetFilteredVoteDelegatees{} _ = Nothing + sameDepIndex2 GetAccountState{} GetAccountState{} = Just Refl + sameDepIndex2 GetAccountState{} _ = Nothing sameDepIndex2 GetSPOStakeDistr{} GetSPOStakeDistr{} = Just Refl sameDepIndex2 GetSPOStakeDistr{} _ = Nothing sameDepIndex2 GetProposals{} GetProposals{} = Just Refl @@ -720,47 +728,47 @@ instance SameDepIndex2 (BlockQuery (ShelleyBlock proto era)) where sameDepIndex2 QueryStakePoolDefaultVote{} QueryStakePoolDefaultVote{} = Just Refl sameDepIndex2 QueryStakePoolDefaultVote{} _ = Nothing -deriving instance Eq (BlockQuery (ShelleyBlock proto era) fp result) +deriving instance Eq (BlockQuery (ShelleyBlock proto era) fp result) deriving instance Show (BlockQuery (ShelleyBlock proto era) fp result) instance ShelleyCompatible proto era => ShowQuery (BlockQuery 
(ShelleyBlock proto era) fp) where showResult = \case - GetLedgerTip -> show - GetEpochNo -> show - GetNonMyopicMemberRewards {} -> show - GetCurrentPParams -> show - GetProposedPParamsUpdates -> show - GetStakeDistribution -> show - GetUTxOByAddress {} -> show - GetUTxOWhole -> show - DebugEpochState -> show - GetCBOR {} -> show - GetFilteredDelegationsAndRewardAccounts {} -> show - GetGenesisConfig -> show - DebugNewEpochState -> show - DebugChainDepState -> show - GetRewardProvenance -> show - GetUTxOByTxIn {} -> show - GetStakePools -> show - GetStakePoolParams {} -> show - GetRewardInfoPools -> show - GetPoolState {} -> show - GetStakeSnapshots {} -> show - GetPoolDistr {} -> show - GetStakeDelegDeposits {} -> show - GetConstitution -> show - GetGovState -> show - GetDRepState {} -> show - GetDRepStakeDistr {} -> show - GetCommitteeMembersState {} -> show - GetFilteredVoteDelegatees {} -> show - GetAccountState {} -> show - GetSPOStakeDistr {} -> show - GetProposals {} -> show - GetRatifyState {} -> show - GetFuturePParams {} -> show - GetBigLedgerPeerSnapshot -> show - QueryStakePoolDefaultVote {} -> show + GetLedgerTip -> show + GetEpochNo -> show + GetNonMyopicMemberRewards{} -> show + GetCurrentPParams -> show + GetProposedPParamsUpdates -> show + GetStakeDistribution -> show + GetUTxOByAddress{} -> show + GetUTxOWhole -> show + DebugEpochState -> show + GetCBOR{} -> show + GetFilteredDelegationsAndRewardAccounts{} -> show + GetGenesisConfig -> show + DebugNewEpochState -> show + DebugChainDepState -> show + GetRewardProvenance -> show + GetUTxOByTxIn{} -> show + GetStakePools -> show + GetStakePoolParams{} -> show + GetRewardInfoPools -> show + GetPoolState{} -> show + GetStakeSnapshots{} -> show + GetPoolDistr{} -> show + GetStakeDelegDeposits{} -> show + GetConstitution -> show + GetGovState -> show + GetDRepState{} -> show + GetDRepStakeDistr{} -> show + GetCommitteeMembersState{} -> show + GetFilteredVoteDelegatees{} -> show + GetAccountState{} -> 
show + GetSPOStakeDistr{} -> show + GetProposals{} -> show + GetRatifyState{} -> show + GetFuturePParams{} -> show + GetBigLedgerPeerSnapshot -> show + QueryStakePoolDefaultVote{} -> show {------------------------------------------------------------------------------- Auxiliary @@ -774,259 +782,277 @@ getDState :: SL.EraCertState era => SL.NewEpochState era -> SL.DState era getDState = view SL.certDStateL . SL.lsCertState . SL.esLState . SL.nesEs getFilteredDelegationsAndRewardAccounts :: - SL.EraCertState era - => SL.NewEpochState era - -> Set (SL.Credential 'SL.Staking) - -> (Delegations, Map (SL.Credential 'Staking) Coin) + SL.EraCertState era => + SL.NewEpochState era -> + Set (SL.Credential 'SL.Staking) -> + (Delegations, Map (SL.Credential 'Staking) Coin) getFilteredDelegationsAndRewardAccounts ss creds = - (filteredDelegations, filteredRwdAcnts) - where - UMap umElems _ = SL.dsUnified $ getDState ss - umElemsRestricted = Map.restrictKeys umElems creds + (filteredDelegations, filteredRwdAcnts) + where + UMap umElems _ = SL.dsUnified $ getDState ss + umElemsRestricted = Map.restrictKeys umElems creds - filteredDelegations = Map.mapMaybe umElemSPool umElemsRestricted - filteredRwdAcnts = - Map.mapMaybe (fmap (fromCompact . rdReward) . umElemRDPair) umElemsRestricted + filteredDelegations = Map.mapMaybe umElemSPool umElemsRestricted + filteredRwdAcnts = + Map.mapMaybe (fmap (fromCompact . rdReward) . 
umElemRDPair) umElemsRestricted getFilteredVoteDelegatees :: - SL.EraCertState era - => SL.NewEpochState era - -> Set (SL.Credential 'SL.Staking) - -> VoteDelegatees + SL.EraCertState era => + SL.NewEpochState era -> + Set (SL.Credential 'SL.Staking) -> + VoteDelegatees getFilteredVoteDelegatees ss creds = Map.mapMaybe umElemDRep umElemsRestricted - where - UMap umElems _ = SL.dsUnified $ getDState ss - umElemsRestricted = Map.restrictKeys umElems creds + where + UMap umElems _ = SL.dsUnified $ getDState ss + umElemsRestricted = Map.restrictKeys umElems creds {------------------------------------------------------------------------------- Serialisation -------------------------------------------------------------------------------} encodeShelleyQuery :: - forall era proto fp result. ShelleyBasedEra era - => BlockQuery (ShelleyBlock proto era) fp result -> Encoding + forall era proto fp result. + ShelleyBasedEra era => + BlockQuery (ShelleyBlock proto era) fp result -> Encoding encodeShelleyQuery query = case query of - GetLedgerTip -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 0 - GetEpochNo -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 1 - GetNonMyopicMemberRewards creds -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 2 <> toCBOR creds - GetCurrentPParams -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 3 - GetProposedPParamsUpdates -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 4 - GetStakeDistribution -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 5 - GetUTxOByAddress addrs -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 6 <> LC.toEraCBOR @era addrs - GetUTxOWhole -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 7 - DebugEpochState -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 8 - GetCBOR query' -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 9 <> encodeShelleyQuery query' - GetFilteredDelegationsAndRewardAccounts creds -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 10 <> LC.toEraCBOR @era creds - GetGenesisConfig -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 11 - 
DebugNewEpochState -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 12 - DebugChainDepState -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 13 - GetRewardProvenance -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 14 - GetUTxOByTxIn txins -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 15 <> LC.toEraCBOR @era txins - GetStakePools -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 16 - GetStakePoolParams poolids -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 17 <> toCBOR poolids - GetRewardInfoPools -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 18 - GetPoolState poolids -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 19 <> toCBOR poolids - GetStakeSnapshots poolId -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 20 <> toCBOR poolId - GetPoolDistr poolids -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 21 <> toCBOR poolids - GetStakeDelegDeposits stakeCreds -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 22 <> toCBOR stakeCreds - GetConstitution -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 23 - GetGovState -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 24 - GetDRepState drepCreds -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 25 <> toCBOR drepCreds - GetDRepStakeDistr dreps -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 26 <> LC.toEraCBOR @era dreps - GetCommitteeMembersState coldCreds hotCreds statuses -> - CBOR.encodeListLen 4 <> CBOR.encodeWord8 27 <> toCBOR coldCreds <> toCBOR hotCreds <> LC.toEraCBOR @era statuses - GetFilteredVoteDelegatees stakeCreds -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 28 <> LC.toEraCBOR @era stakeCreds - GetAccountState -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 29 - GetSPOStakeDistr keys -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 30 <> LC.toEraCBOR @era keys - GetProposals gids -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 31 <> LC.toEraCBOR @era gids - GetRatifyState -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 32 - GetFuturePParams -> - CBOR.encodeListLen 1 <> CBOR.encodeWord8 33 - GetBigLedgerPeerSnapshot -> - CBOR.encodeListLen 1 <> 
CBOR.encodeWord8 34 - QueryStakePoolDefaultVote stakePoolKey -> - CBOR.encodeListLen 2 <> CBOR.encodeWord8 35 <> LC.toEraCBOR @era stakePoolKey + GetLedgerTip -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 0 + GetEpochNo -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 1 + GetNonMyopicMemberRewards creds -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 2 <> toCBOR creds + GetCurrentPParams -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 3 + GetProposedPParamsUpdates -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 4 + GetStakeDistribution -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 5 + GetUTxOByAddress addrs -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 6 <> LC.toEraCBOR @era addrs + GetUTxOWhole -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 7 + DebugEpochState -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 8 + GetCBOR query' -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 9 <> encodeShelleyQuery query' + GetFilteredDelegationsAndRewardAccounts creds -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 10 <> LC.toEraCBOR @era creds + GetGenesisConfig -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 11 + DebugNewEpochState -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 12 + DebugChainDepState -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 13 + GetRewardProvenance -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 14 + GetUTxOByTxIn txins -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 15 <> LC.toEraCBOR @era txins + GetStakePools -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 16 + GetStakePoolParams poolids -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 17 <> toCBOR poolids + GetRewardInfoPools -> + CBOR.encodeListLen 1 <> CBOR.encodeWord8 18 + GetPoolState poolids -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 19 <> toCBOR poolids + GetStakeSnapshots poolId -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 20 <> toCBOR poolId + GetPoolDistr poolids -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 21 <> toCBOR poolids + GetStakeDelegDeposits stakeCreds -> + CBOR.encodeListLen 2 <> CBOR.encodeWord8 22 <> 
toCBOR stakeCreds
+  GetConstitution ->
+    CBOR.encodeListLen 1 <> CBOR.encodeWord8 23
+  GetGovState ->
+    CBOR.encodeListLen 1 <> CBOR.encodeWord8 24
+  GetDRepState drepCreds ->
+    CBOR.encodeListLen 2 <> CBOR.encodeWord8 25 <> toCBOR drepCreds
+  GetDRepStakeDistr dreps ->
+    CBOR.encodeListLen 2 <> CBOR.encodeWord8 26 <> LC.toEraCBOR @era dreps
+  GetCommitteeMembersState coldCreds hotCreds statuses ->
+    CBOR.encodeListLen 4
+      <> CBOR.encodeWord8 27
+      <> toCBOR coldCreds
+      <> toCBOR hotCreds
+      <> LC.toEraCBOR @era statuses
+  GetFilteredVoteDelegatees stakeCreds ->
+    CBOR.encodeListLen 2 <> CBOR.encodeWord8 28 <> LC.toEraCBOR @era stakeCreds
+  GetAccountState ->
+    CBOR.encodeListLen 1 <> CBOR.encodeWord8 29
+  GetSPOStakeDistr keys ->
+    CBOR.encodeListLen 2 <> CBOR.encodeWord8 30 <> LC.toEraCBOR @era keys
+  GetProposals gids ->
+    CBOR.encodeListLen 2 <> CBOR.encodeWord8 31 <> LC.toEraCBOR @era gids
+  GetRatifyState ->
+    CBOR.encodeListLen 1 <> CBOR.encodeWord8 32
+  GetFuturePParams ->
+    CBOR.encodeListLen 1 <> CBOR.encodeWord8 33
+  GetBigLedgerPeerSnapshot ->
+    CBOR.encodeListLen 1 <> CBOR.encodeWord8 34
+  QueryStakePoolDefaultVote stakePoolKey ->
+    CBOR.encodeListLen 2 <> CBOR.encodeWord8 35 <> LC.toEraCBOR @era stakePoolKey

 decodeShelleyQuery ::
-     forall era proto. ShelleyBasedEra era
-  => forall s. Decoder s (SomeBlockQuery (BlockQuery (ShelleyBlock proto era)))
+  forall era proto.
+  ShelleyBasedEra era =>
+  forall s.
+  Decoder s (SomeBlockQuery (BlockQuery (ShelleyBlock proto era)))
 decodeShelleyQuery = do
-    len <- CBOR.decodeListLen
-    tag <- CBOR.decodeWord8
-
-    let failmsg :: forall s ans. String -> Decoder s ans
-        failmsg msg = fail $
-          "decodeShelleyQuery: " <> msg <> " (len, tag) = (" <>
-          show len <> ", " <> show tag <> ")"
-
-        requireCG ::
-             forall s ans.
-             (CG.ConwayEraGov era => Decoder s ans)
-          -> Decoder s ans
-        requireCG k = case SE.getConwayEraGovDict (Proxy @era) of
-          Just SE.ConwayEraGovDict -> k
-          Nothing -> failmsg "that query is not supported before Conway,"
-
-    case (len, tag) of
-      (1, 0) -> return $ SomeBlockQuery GetLedgerTip
-      (1, 1) -> return $ SomeBlockQuery GetEpochNo
-      (2, 2) -> SomeBlockQuery . GetNonMyopicMemberRewards <$> fromCBOR
-      (1, 3) -> return $ SomeBlockQuery GetCurrentPParams
-      (1, 4) -> return $ SomeBlockQuery GetProposedPParamsUpdates
-      (1, 5) -> return $ SomeBlockQuery GetStakeDistribution
-      (2, 6) -> SomeBlockQuery . GetUTxOByAddress <$> LC.fromEraCBOR @era
-      (1, 7) -> return $ SomeBlockQuery GetUTxOWhole
-      (1, 8) -> return $ SomeBlockQuery DebugEpochState
-      (2, 9) -> (\(SomeBlockQuery q) -> SomeBlockQuery (GetCBOR q)) <$> decodeShelleyQuery
-      (2, 10) -> SomeBlockQuery . GetFilteredDelegationsAndRewardAccounts <$> LC.fromEraCBOR @era
-      (1, 11) -> return $ SomeBlockQuery GetGenesisConfig
-      (1, 12) -> return $ SomeBlockQuery DebugNewEpochState
-      (1, 13) -> return $ SomeBlockQuery DebugChainDepState
-      (1, 14) -> return $ SomeBlockQuery GetRewardProvenance
-      (2, 15) -> SomeBlockQuery . GetUTxOByTxIn <$> LC.fromEraCBOR @era
-      (1, 16) -> return $ SomeBlockQuery GetStakePools
-      (2, 17) -> SomeBlockQuery . GetStakePoolParams <$> fromCBOR
-      (1, 18) -> return $ SomeBlockQuery GetRewardInfoPools
-      (2, 19) -> SomeBlockQuery . GetPoolState <$> fromCBOR
-      (2, 20) -> SomeBlockQuery . GetStakeSnapshots <$> fromCBOR
-      (2, 21) -> SomeBlockQuery . GetPoolDistr <$> fromCBOR
-      (2, 22) -> SomeBlockQuery . GetStakeDelegDeposits <$> fromCBOR
-      (1, 23) -> requireCG $ return $ SomeBlockQuery GetConstitution
-      (1, 24) -> return $ SomeBlockQuery GetGovState
-      (2, 25) -> requireCG $ SomeBlockQuery . GetDRepState <$> fromCBOR
-      (2, 26) -> requireCG $ SomeBlockQuery . GetDRepStakeDistr <$> LC.fromEraCBOR @era
-      (4, 27) -> requireCG $ do
-        coldCreds <- fromCBOR
-        hotCreds <- fromCBOR
-        statuses <- LC.fromEraCBOR @era
-        return $ SomeBlockQuery $ GetCommitteeMembersState coldCreds hotCreds statuses
-      (2, 28) -> requireCG $ do
-        SomeBlockQuery . GetFilteredVoteDelegatees <$> LC.fromEraCBOR @era
-      (1, 29) -> return $ SomeBlockQuery GetAccountState
-      (2, 30) -> requireCG $ SomeBlockQuery . GetSPOStakeDistr <$> LC.fromEraCBOR @era
-      (2, 31) -> requireCG $ SomeBlockQuery . GetProposals <$> LC.fromEraCBOR @era
-      (1, 32) -> requireCG $ return $ SomeBlockQuery GetRatifyState
-      (1, 33) -> requireCG $ return $ SomeBlockQuery GetFuturePParams
-      (1, 34) -> return $ SomeBlockQuery GetBigLedgerPeerSnapshot
-      (2, 35) -> requireCG $ SomeBlockQuery . QueryStakePoolDefaultVote <$> LC.fromEraCBOR @era
-      _ -> failmsg "invalid"
+  len <- CBOR.decodeListLen
+  tag <- CBOR.decodeWord8
+
+  let failmsg :: forall s ans. String -> Decoder s ans
+      failmsg msg =
+        fail $
+          "decodeShelleyQuery: "
+            <> msg
+            <> " (len, tag) = ("
+            <> show len
+            <> ", "
+            <> show tag
+            <> ")"
+
+      requireCG ::
+        forall s ans.
+        (CG.ConwayEraGov era => Decoder s ans) ->
+        Decoder s ans
+      requireCG k = case SE.getConwayEraGovDict (Proxy @era) of
+        Just SE.ConwayEraGovDict -> k
+        Nothing -> failmsg "that query is not supported before Conway,"
+
+  case (len, tag) of
+    (1, 0) -> return $ SomeBlockQuery GetLedgerTip
+    (1, 1) -> return $ SomeBlockQuery GetEpochNo
+    (2, 2) -> SomeBlockQuery . GetNonMyopicMemberRewards <$> fromCBOR
+    (1, 3) -> return $ SomeBlockQuery GetCurrentPParams
+    (1, 4) -> return $ SomeBlockQuery GetProposedPParamsUpdates
+    (1, 5) -> return $ SomeBlockQuery GetStakeDistribution
+    (2, 6) -> SomeBlockQuery . GetUTxOByAddress <$> LC.fromEraCBOR @era
+    (1, 7) -> return $ SomeBlockQuery GetUTxOWhole
+    (1, 8) -> return $ SomeBlockQuery DebugEpochState
+    (2, 9) -> (\(SomeBlockQuery q) -> SomeBlockQuery (GetCBOR q)) <$> decodeShelleyQuery
+    (2, 10) -> SomeBlockQuery . GetFilteredDelegationsAndRewardAccounts <$> LC.fromEraCBOR @era
+    (1, 11) -> return $ SomeBlockQuery GetGenesisConfig
+    (1, 12) -> return $ SomeBlockQuery DebugNewEpochState
+    (1, 13) -> return $ SomeBlockQuery DebugChainDepState
+    (1, 14) -> return $ SomeBlockQuery GetRewardProvenance
+    (2, 15) -> SomeBlockQuery . GetUTxOByTxIn <$> LC.fromEraCBOR @era
+    (1, 16) -> return $ SomeBlockQuery GetStakePools
+    (2, 17) -> SomeBlockQuery . GetStakePoolParams <$> fromCBOR
+    (1, 18) -> return $ SomeBlockQuery GetRewardInfoPools
+    (2, 19) -> SomeBlockQuery . GetPoolState <$> fromCBOR
+    (2, 20) -> SomeBlockQuery . GetStakeSnapshots <$> fromCBOR
+    (2, 21) -> SomeBlockQuery . GetPoolDistr <$> fromCBOR
+    (2, 22) -> SomeBlockQuery . GetStakeDelegDeposits <$> fromCBOR
+    (1, 23) -> requireCG $ return $ SomeBlockQuery GetConstitution
+    (1, 24) -> return $ SomeBlockQuery GetGovState
+    (2, 25) -> requireCG $ SomeBlockQuery . GetDRepState <$> fromCBOR
+    (2, 26) -> requireCG $ SomeBlockQuery . GetDRepStakeDistr <$> LC.fromEraCBOR @era
+    (4, 27) -> requireCG $ do
+      coldCreds <- fromCBOR
+      hotCreds <- fromCBOR
+      statuses <- LC.fromEraCBOR @era
+      return $ SomeBlockQuery $ GetCommitteeMembersState coldCreds hotCreds statuses
+    (2, 28) -> requireCG $ do
+      SomeBlockQuery . GetFilteredVoteDelegatees <$> LC.fromEraCBOR @era
+    (1, 29) -> return $ SomeBlockQuery GetAccountState
+    (2, 30) -> requireCG $ SomeBlockQuery . GetSPOStakeDistr <$> LC.fromEraCBOR @era
+    (2, 31) -> requireCG $ SomeBlockQuery . GetProposals <$> LC.fromEraCBOR @era
+    (1, 32) -> requireCG $ return $ SomeBlockQuery GetRatifyState
+    (1, 33) -> requireCG $ return $ SomeBlockQuery GetFuturePParams
+    (1, 34) -> return $ SomeBlockQuery GetBigLedgerPeerSnapshot
+    (2, 35) -> requireCG $ SomeBlockQuery . QueryStakePoolDefaultVote <$> LC.fromEraCBOR @era
+    _ -> failmsg "invalid"

 encodeShelleyResult ::
-     forall proto era fp result. ShelleyCompatible proto era
-  => ShelleyNodeToClientVersion
-  -> BlockQuery (ShelleyBlock proto era) fp result -> result -> Encoding
+  forall proto era fp result.
+  ShelleyCompatible proto era =>
+  ShelleyNodeToClientVersion ->
+  BlockQuery (ShelleyBlock proto era) fp result ->
+  result ->
+  Encoding
 encodeShelleyResult _v query = case query of
-    GetLedgerTip -> encodePoint encode
-    GetEpochNo -> toCBOR
-    GetNonMyopicMemberRewards {} -> toCBOR
-    GetCurrentPParams -> toCBOR
-    GetProposedPParamsUpdates -> toCBOR
-    GetStakeDistribution -> LC.toEraCBOR @era
-    GetUTxOByAddress {} -> toCBOR
-    GetUTxOWhole -> toCBOR
-    DebugEpochState -> toCBOR
-    GetCBOR {} -> encode
-    GetFilteredDelegationsAndRewardAccounts {} -> LC.toEraCBOR @era
-    GetGenesisConfig -> toCBOR
-    DebugNewEpochState -> toCBOR
-    DebugChainDepState -> encode
-    GetRewardProvenance -> LC.toEraCBOR @era
-    GetUTxOByTxIn {} -> toCBOR
-    GetStakePools -> toCBOR
-    GetStakePoolParams {} -> LC.toEraCBOR @era
-    GetRewardInfoPools -> LC.toEraCBOR @era
-    GetPoolState {} -> LC.toEraCBOR @era
-    GetStakeSnapshots {} -> toCBOR
-    GetPoolDistr {} -> LC.toEraCBOR @era
-    GetStakeDelegDeposits {} -> LC.toEraCBOR @era
-    GetConstitution -> toCBOR
-    GetGovState -> toCBOR
-    GetDRepState {} -> LC.toEraCBOR @era
-    GetDRepStakeDistr {} -> LC.toEraCBOR @era
-    GetCommitteeMembersState {} -> LC.toEraCBOR @era
-    GetFilteredVoteDelegatees {} -> LC.toEraCBOR @era
-    GetAccountState {} -> LC.toEraCBOR @era
-    GetSPOStakeDistr {} -> LC.toEraCBOR @era
-    GetProposals {} -> LC.toEraCBOR @era
-    GetRatifyState {} -> LC.toEraCBOR @era
-    GetFuturePParams {} -> LC.toEraCBOR @era
-    GetBigLedgerPeerSnapshot -> toCBOR
-    QueryStakePoolDefaultVote {} -> toCBOR
+  GetLedgerTip -> encodePoint encode
+  GetEpochNo -> toCBOR
+  GetNonMyopicMemberRewards{} -> toCBOR
+  GetCurrentPParams -> toCBOR
+  GetProposedPParamsUpdates -> toCBOR
+  GetStakeDistribution -> LC.toEraCBOR @era
+  GetUTxOByAddress{} -> toCBOR
+  GetUTxOWhole -> toCBOR
+  DebugEpochState -> toCBOR
+  GetCBOR{} -> encode
+  GetFilteredDelegationsAndRewardAccounts{} -> LC.toEraCBOR @era
+  GetGenesisConfig -> toCBOR
+  DebugNewEpochState -> toCBOR
+  DebugChainDepState -> encode
+  GetRewardProvenance -> LC.toEraCBOR @era
+  GetUTxOByTxIn{} -> toCBOR
+  GetStakePools -> toCBOR
+  GetStakePoolParams{} -> LC.toEraCBOR @era
+  GetRewardInfoPools -> LC.toEraCBOR @era
+  GetPoolState{} -> LC.toEraCBOR @era
+  GetStakeSnapshots{} -> toCBOR
+  GetPoolDistr{} -> LC.toEraCBOR @era
+  GetStakeDelegDeposits{} -> LC.toEraCBOR @era
+  GetConstitution -> toCBOR
+  GetGovState -> toCBOR
+  GetDRepState{} -> LC.toEraCBOR @era
+  GetDRepStakeDistr{} -> LC.toEraCBOR @era
+  GetCommitteeMembersState{} -> LC.toEraCBOR @era
+  GetFilteredVoteDelegatees{} -> LC.toEraCBOR @era
+  GetAccountState{} -> LC.toEraCBOR @era
+  GetSPOStakeDistr{} -> LC.toEraCBOR @era
+  GetProposals{} -> LC.toEraCBOR @era
+  GetRatifyState{} -> LC.toEraCBOR @era
+  GetFuturePParams{} -> LC.toEraCBOR @era
+  GetBigLedgerPeerSnapshot -> toCBOR
+  QueryStakePoolDefaultVote{} -> toCBOR

 decodeShelleyResult ::
-     forall proto era fp result. ShelleyCompatible proto era
-  => ShelleyNodeToClientVersion
-  -> BlockQuery (ShelleyBlock proto era) fp result
-  -> forall s. Decoder s result
+  forall proto era fp result.
+  ShelleyCompatible proto era =>
+  ShelleyNodeToClientVersion ->
+  BlockQuery (ShelleyBlock proto era) fp result ->
+  forall s.
+  Decoder s result
 decodeShelleyResult _v query = case query of
-    GetLedgerTip -> decodePoint decode
-    GetEpochNo -> fromCBOR
-    GetNonMyopicMemberRewards {} -> fromCBOR
-    GetCurrentPParams -> fromCBOR
-    GetProposedPParamsUpdates -> fromCBOR
-    GetStakeDistribution -> LC.fromEraCBOR @era
-    GetUTxOByAddress {} -> fromCBOR
-    GetUTxOWhole -> fromCBOR
-    DebugEpochState -> fromCBOR
-    GetCBOR {} -> decode
-    GetFilteredDelegationsAndRewardAccounts {} -> LC.fromEraCBOR @era
-    GetGenesisConfig -> fromCBOR
-    DebugNewEpochState -> fromCBOR
-    DebugChainDepState -> decode
-    GetRewardProvenance -> LC.fromEraCBOR @era
-    GetUTxOByTxIn {} -> fromCBOR
-    GetStakePools -> fromCBOR
-    GetStakePoolParams {} -> LC.fromEraCBOR @era
-    GetRewardInfoPools -> LC.fromEraCBOR @era
-    GetPoolState {} -> LC.fromEraCBOR @era
-    GetStakeSnapshots {} -> fromCBOR
-    GetPoolDistr {} -> LC.fromEraCBOR @era
-    GetStakeDelegDeposits {} -> LC.fromEraCBOR @era
-    GetConstitution -> fromCBOR
-    GetGovState -> fromCBOR
-    GetDRepState {} -> LC.fromEraCBOR @era
-    GetDRepStakeDistr {} -> LC.fromEraCBOR @era
-    GetCommitteeMembersState {} -> LC.fromEraCBOR @era
-    GetFilteredVoteDelegatees {} -> LC.fromEraCBOR @era
-    GetAccountState {} -> LC.fromEraCBOR @era
-    GetSPOStakeDistr {} -> LC.fromEraCBOR @era
-    GetProposals {} -> LC.fromEraCBOR @era
-    GetRatifyState {} -> LC.fromEraCBOR @era
-    GetFuturePParams {} -> LC.fromEraCBOR @era
-    GetBigLedgerPeerSnapshot -> fromCBOR
-    QueryStakePoolDefaultVote {} -> fromCBOR
+  GetLedgerTip -> decodePoint decode
+  GetEpochNo -> fromCBOR
+  GetNonMyopicMemberRewards{} -> fromCBOR
+  GetCurrentPParams -> fromCBOR
+  GetProposedPParamsUpdates -> fromCBOR
+  GetStakeDistribution -> LC.fromEraCBOR @era
+  GetUTxOByAddress{} -> fromCBOR
+  GetUTxOWhole -> fromCBOR
+  DebugEpochState -> fromCBOR
+  GetCBOR{} -> decode
+  GetFilteredDelegationsAndRewardAccounts{} -> LC.fromEraCBOR @era
+  GetGenesisConfig -> fromCBOR
+  DebugNewEpochState -> fromCBOR
+  DebugChainDepState -> decode
+  GetRewardProvenance -> LC.fromEraCBOR @era
+  GetUTxOByTxIn{} -> fromCBOR
+  GetStakePools -> fromCBOR
+  GetStakePoolParams{} -> LC.fromEraCBOR @era
+  GetRewardInfoPools -> LC.fromEraCBOR @era
+  GetPoolState{} -> LC.fromEraCBOR @era
+  GetStakeSnapshots{} -> fromCBOR
+  GetPoolDistr{} -> LC.fromEraCBOR @era
+  GetStakeDelegDeposits{} -> LC.fromEraCBOR @era
+  GetConstitution -> fromCBOR
+  GetGovState -> fromCBOR
+  GetDRepState{} -> LC.fromEraCBOR @era
+  GetDRepStakeDistr{} -> LC.fromEraCBOR @era
+  GetCommitteeMembersState{} -> LC.fromEraCBOR @era
+  GetFilteredVoteDelegatees{} -> LC.fromEraCBOR @era
+  GetAccountState{} -> LC.fromEraCBOR @era
+  GetSPOStakeDistr{} -> LC.fromEraCBOR @era
+  GetProposals{} -> LC.fromEraCBOR @era
+  GetRatifyState{} -> LC.fromEraCBOR @era
+  GetFuturePParams{} -> LC.fromEraCBOR @era
+  GetBigLedgerPeerSnapshot -> fromCBOR
+  QueryStakePoolDefaultVote{} -> fromCBOR

 -- | The stake snapshot returns information about the mark, set, go ledger snapshots for a pool,
 -- plus the total active stake for each snapshot that can be used in a 'sigma' calculation.
@@ -1036,28 +1062,26 @@ decodeShelleyResult _v query = case query of
 -- before the start of the current epoch.
 data StakeSnapshot = StakeSnapshot
   { ssMarkPool :: !SL.Coin
-  , ssSetPool  :: !SL.Coin
-  , ssGoPool   :: !SL.Coin
-  } deriving (Eq, Show, Generic)
+  , ssSetPool :: !SL.Coin
+  , ssGoPool :: !SL.Coin
+  }
+  deriving (Eq, Show, Generic)

 instance NFData StakeSnapshot

-instance
-  ToCBOR StakeSnapshot
-  where
+instance ToCBOR StakeSnapshot where
   toCBOR StakeSnapshot
-    { ssMarkPool
-    , ssSetPool
-    , ssGoPool
-    } = encodeListLen 3
-      <> toCBOR ssMarkPool
-      <> toCBOR ssSetPool
-      <> toCBOR ssGoPool
-
-instance
-  FromCBOR StakeSnapshot
-  where
+    { ssMarkPool
+    , ssSetPool
+    , ssGoPool
+    } =
+      encodeListLen 3
+        <> toCBOR ssMarkPool
+        <> toCBOR ssSetPool
+        <> toCBOR ssGoPool
+
+instance FromCBOR StakeSnapshot where
   fromCBOR = do
     enforceSize "StakeSnapshot" 3
     StakeSnapshot
@@ -1067,31 +1091,29 @@ instance
 data StakeSnapshots = StakeSnapshots
   { ssStakeSnapshots :: !(Map (SL.KeyHash 'SL.StakePool) StakeSnapshot)
-  , ssMarkTotal      :: !SL.Coin
-  , ssSetTotal       :: !SL.Coin
-  , ssGoTotal        :: !SL.Coin
-  } deriving (Eq, Show, Generic)
+  , ssMarkTotal :: !SL.Coin
+  , ssSetTotal :: !SL.Coin
+  , ssGoTotal :: !SL.Coin
+  }
+  deriving (Eq, Show, Generic)

 instance NFData StakeSnapshots

-instance
-  ToCBOR StakeSnapshots
-  where
+instance ToCBOR StakeSnapshots where
   toCBOR StakeSnapshots
-    { ssStakeSnapshots
-    , ssMarkTotal
-    , ssSetTotal
-    , ssGoTotal
-    } = encodeListLen 4
-      <> toCBOR ssStakeSnapshots
-      <> toCBOR ssMarkTotal
-      <> toCBOR ssSetTotal
-      <> toCBOR ssGoTotal
-
-instance
-  FromCBOR StakeSnapshots
-  where
+    { ssStakeSnapshots
+    , ssMarkTotal
+    , ssSetTotal
+    , ssGoTotal
+    } =
+      encodeListLen 4
+        <> toCBOR ssStakeSnapshots
+        <> toCBOR ssMarkTotal
+        <> toCBOR ssSetTotal
+        <> toCBOR ssGoTotal
+
+instance FromCBOR StakeSnapshots where
   fromCBOR = do
     enforceSize "StakeSnapshots" 4
     StakeSnapshots
@@ -1105,136 +1127,141 @@ instance
 -------------------------------------------------------------------------------}

 answerShelleyLookupQueries ::
-     forall proto era m result blk.
-     ( Monad m
-     , ShelleyCompatible proto era
-     )
-  => (    LedgerTables (LedgerState (ShelleyBlock proto era)) KeysMK
-       -> LedgerTables (LedgerState blk) KeysMK
-     )
-     -- ^ Inject ledger tables
-  -> (TxOut (LedgerState blk) -> LC.TxOut era)
-     -- ^ Eject TxOut
-  -> (TxIn (LedgerState blk) -> SL.TxIn)
-     -- ^ Eject TxIn
-  -> ExtLedgerCfg (ShelleyBlock proto era)
-  -> BlockQuery (ShelleyBlock proto era) QFLookupTables result
-  -> ReadOnlyForker' m blk
-  -> m result
+  forall proto era m result blk.
+  ( Monad m
+  , ShelleyCompatible proto era
+  ) =>
+  -- | Inject ledger tables
+  ( LedgerTables (LedgerState (ShelleyBlock proto era)) KeysMK ->
+    LedgerTables (LedgerState blk) KeysMK
+  ) ->
+  -- | Eject TxOut
+  (TxOut (LedgerState blk) -> LC.TxOut era) ->
+  -- | Eject TxIn
+  (TxIn (LedgerState blk) -> SL.TxIn) ->
+  ExtLedgerCfg (ShelleyBlock proto era) ->
+  BlockQuery (ShelleyBlock proto era) QFLookupTables result ->
+  ReadOnlyForker' m blk ->
+  m result
 answerShelleyLookupQueries injTables ejTxOut ejTxIn cfg q forker =
-    case q of
-      GetUTxOByTxIn txins ->
-        answerGetUtxOByTxIn txins
-      GetCBOR q' ->
-        -- We encode using the latest (@maxBound@) ShelleyNodeToClientVersion,
-        -- as the @GetCBOR@ query already is about opportunistically assuming
-        -- both client and server are running the same version; cf. the
-        -- @GetCBOR@ Haddocks.
-        mkSerialised (encodeShelleyResult maxBound q')
+  case q of
+    GetUTxOByTxIn txins ->
+      answerGetUtxOByTxIn txins
+    GetCBOR q' ->
+      -- We encode using the latest (@maxBound@) ShelleyNodeToClientVersion,
+      -- as the @GetCBOR@ query already is about opportunistically assuming
+      -- both client and server are running the same version; cf. the
+      -- @GetCBOR@ Haddocks.
+      mkSerialised (encodeShelleyResult maxBound q')
        <$> answerShelleyLookupQueries injTables ejTxOut ejTxIn cfg q' forker
-  where
-    answerGetUtxOByTxIn ::
-         Set.Set SL.TxIn
-      -> m (SL.UTxO era)
-    answerGetUtxOByTxIn txins = do
-      LedgerTables (ValuesMK values) <-
-        LedgerDB.roforkerReadTables
-          forker
-          (castLedgerTables $ injTables (LedgerTables $ KeysMK txins))
-      pure
-        $ SL.UTxO
-        $ Map.mapKeys ejTxIn
-        $ Map.mapMaybeWithKey
-            (\k v ->
-              if ejTxIn k `Set.member` txins
-              then Just $ ejTxOut v
-              else Nothing)
+ where
+  answerGetUtxOByTxIn ::
+    Set.Set SL.TxIn ->
+    m (SL.UTxO era)
+  answerGetUtxOByTxIn txins = do
+    LedgerTables (ValuesMK values) <-
+      LedgerDB.roforkerReadTables
+        forker
+        (castLedgerTables $ injTables (LedgerTables $ KeysMK txins))
+    pure $
+      SL.UTxO $
+        Map.mapKeys ejTxIn $
+          Map.mapMaybeWithKey
+            ( \k v ->
+                if ejTxIn k `Set.member` txins
+                  then Just $ ejTxOut v
+                  else Nothing
+            )
            values

 shelleyQFTraverseTablesPredicate ::
-     forall proto era proto' era' result.
-     (ShelleyBasedEra era, ShelleyBasedEra era')
-  => BlockQuery (ShelleyBlock proto era) QFTraverseTables result
-  -> TxOut (LedgerState (ShelleyBlock proto' era'))
-  -> Bool
+  forall proto era proto' era' result.
+  (ShelleyBasedEra era, ShelleyBasedEra era') =>
+  BlockQuery (ShelleyBlock proto era) QFTraverseTables result ->
+  TxOut (LedgerState (ShelleyBlock proto' era')) ->
+  Bool
 shelleyQFTraverseTablesPredicate q = case q of
-    GetUTxOByAddress addr -> filterGetUTxOByAddressOne addr
-    GetUTxOWhole -> const True
-    GetCBOR q' -> shelleyQFTraverseTablesPredicate q'
-  where
-    filterGetUTxOByAddressOne ::
-         Set Addr
-      -> LC.TxOut era'
-      -> Bool
-    filterGetUTxOByAddressOne addrs =
-      let
-        compactAddrSet = Set.map compactAddr addrs
-        checkAddr out =
-          case out ^. SL.addrEitherTxOutL of
-            Left addr -> addr `Set.member` addrs
-            Right cAddr -> cAddr `Set.member` compactAddrSet
-      in
-        checkAddr
+  GetUTxOByAddress addr -> filterGetUTxOByAddressOne addr
+  GetUTxOWhole -> const True
+  GetCBOR q' -> shelleyQFTraverseTablesPredicate q'
+ where
+  filterGetUTxOByAddressOne ::
+    Set Addr ->
+    LC.TxOut era' ->
+    Bool
+  filterGetUTxOByAddressOne addrs =
+    let
+      compactAddrSet = Set.map compactAddr addrs
+      checkAddr out =
+        case out ^. SL.addrEitherTxOutL of
+          Left addr -> addr `Set.member` addrs
+          Right cAddr -> cAddr `Set.member` compactAddrSet
+    in
+      checkAddr

 answerShelleyTraversingQueries ::
-     forall proto era m result blk.
-     ( ShelleyCompatible proto era
-     , Ord (TxIn (LedgerState blk))
-     , Eq (TxOut (LedgerState blk))
-     , MemPack (TxIn (LedgerState blk))
-     , IndexedMemPack (LedgerState blk EmptyMK) (TxOut (LedgerState blk))
-     )
-  => Monad m
-  => (TxOut (LedgerState blk) -> LC.TxOut era)
-     -- ^ Eject TxOut
-  -> (TxIn (LedgerState blk) -> SL.TxIn)
-     -- ^ Eject TxIn
-  -> (forall result'.
-         BlockQuery (ShelleyBlock proto era) QFTraverseTables result'
-      -> TxOut (LedgerState blk)
-      -> Bool)
-     -- ^ Get filter by query
-  -> ExtLedgerCfg (ShelleyBlock proto era)
-  -> BlockQuery (ShelleyBlock proto era) QFTraverseTables result
-  -> ReadOnlyForker' m blk
-  -> m result
+  forall proto era m result blk.
+  ( ShelleyCompatible proto era
+  , Ord (TxIn (LedgerState blk))
+  , Eq (TxOut (LedgerState blk))
+  , MemPack (TxIn (LedgerState blk))
+  , IndexedMemPack (LedgerState blk EmptyMK) (TxOut (LedgerState blk))
+  ) =>
+  Monad m =>
+  -- | Eject TxOut
+  (TxOut (LedgerState blk) -> LC.TxOut era) ->
+  -- | Eject TxIn
+  (TxIn (LedgerState blk) -> SL.TxIn) ->
+  -- | Get filter by query
+  ( forall result'.
+    BlockQuery (ShelleyBlock proto era) QFTraverseTables result' ->
+    TxOut (LedgerState blk) ->
+    Bool
+  ) ->
+  ExtLedgerCfg (ShelleyBlock proto era) ->
+  BlockQuery (ShelleyBlock proto era) QFTraverseTables result ->
+  ReadOnlyForker' m blk ->
+  m result
 answerShelleyTraversingQueries ejTxOut ejTxIn filt cfg q forker = case q of
-    GetUTxOByAddress{} -> loop (filt q) NoPreviousQuery emptyUtxo
-    GetUTxOWhole -> loop (filt q) NoPreviousQuery emptyUtxo
-    GetCBOR q' ->
-      -- We encode using the latest (@maxBound@) ShelleyNodeToClientVersion,
-      -- as the @GetCBOR@ query already is about opportunistically assuming
-      -- both client and server are running the same version; cf. the
-      -- @GetCBOR@ Haddocks.
-      mkSerialised (encodeShelleyResult maxBound q') <$>
-        answerShelleyTraversingQueries ejTxOut ejTxIn filt cfg q' forker
-  where
-    emptyUtxo = SL.UTxO Map.empty
-
-    combUtxo (SL.UTxO l) vs = SL.UTxO $ Map.union l vs
-
-    partial ::
-         (TxOut (LedgerState blk) -> Bool)
-      -> LedgerTables (ExtLedgerState blk) ValuesMK
-      -> Map SL.TxIn (LC.TxOut era)
-    partial queryPredicate (LedgerTables (ValuesMK vs)) =
-        Map.mapKeys ejTxIn
-      $ Map.mapMaybeWithKey
-          (\_k v ->
-            if queryPredicate v
+  GetUTxOByAddress{} -> loop (filt q) NoPreviousQuery emptyUtxo
+  GetUTxOWhole -> loop (filt q) NoPreviousQuery emptyUtxo
+  GetCBOR q' ->
+    -- We encode using the latest (@maxBound@) ShelleyNodeToClientVersion,
+    -- as the @GetCBOR@ query already is about opportunistically assuming
+    -- both client and server are running the same version; cf. the
+    -- @GetCBOR@ Haddocks.
+    mkSerialised (encodeShelleyResult maxBound q')
+      <$> answerShelleyTraversingQueries ejTxOut ejTxIn filt cfg q' forker
+ where
+  emptyUtxo = SL.UTxO Map.empty
+
+  combUtxo (SL.UTxO l) vs = SL.UTxO $ Map.union l vs
+
+  partial ::
+    (TxOut (LedgerState blk) -> Bool) ->
+    LedgerTables (ExtLedgerState blk) ValuesMK ->
+    Map SL.TxIn (LC.TxOut era)
+  partial queryPredicate (LedgerTables (ValuesMK vs)) =
+    Map.mapKeys ejTxIn $
+      Map.mapMaybeWithKey
+        ( \_k v ->
+            if queryPredicate v
              then Just $ ejTxOut v
-            else Nothing)
-          vs
-
-    vnull :: ValuesMK k v -> Bool
-    vnull (ValuesMK vs) = Map.null vs
-
-    toMaxKey (LedgerTables (ValuesMK vs)) = fst $ Map.findMax vs
-
-    loop queryPredicate !prev !acc = do
-      extValues <- LedgerDB.roforkerRangeReadTables forker prev
-      if ltcollapse $ ltmap (K2 . vnull) extValues
-      then pure acc
-      else loop queryPredicate
-             (PreviousQueryWasUpTo $ toMaxKey extValues)
-             (combUtxo acc $ partial queryPredicate extValues)
+              else Nothing
+        )
+        vs
+
+  vnull :: ValuesMK k v -> Bool
+  vnull (ValuesMK vs) = Map.null vs
+
+  toMaxKey (LedgerTables (ValuesMK vs)) = fst $ Map.findMax vs
+
+  loop queryPredicate !prev !acc = do
+    extValues <- LedgerDB.roforkerRangeReadTables forker prev
+    if ltcollapse $ ltmap (K2 . vnull) extValues
+      then pure acc
+      else
+        loop
+          queryPredicate
+          (PreviousQueryWasUpTo $ toMaxKey extValues)
+          (combUtxo acc $ partial queryPredicate extValues)
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Query/Types.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Query/Types.hs
index b790ad5c31..6158161bfa 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Query/Types.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/Query/Types.hs
@@ -9,38 +9,43 @@
 -- to retain backwards-compatibility. Eventually, types likes this should be
 -- defined in Ledger instead of here, see
 -- .
-module Ouroboros.Consensus.Shelley.Ledger.Query.Types (
-    IndividualPoolStake (..)
+module Ouroboros.Consensus.Shelley.Ledger.Query.Types
+  ( IndividualPoolStake (..)
   , PoolDistr (..)
   , fromLedgerIndividualPoolStake
   , fromLedgerPoolDistr
   ) where

-import qualified Cardano.Crypto.Hash as Hash
-import qualified Cardano.Crypto.VRF as VRF
-import           Cardano.Ledger.Binary (DecCBOR (..), EncCBOR (..),
-                     decodeRecordNamed, encodeListLen)
-import           Cardano.Ledger.Hashes (HASH)
-import qualified Cardano.Ledger.Keys as SL
-import qualified Cardano.Ledger.State as SL
-import           Cardano.Protocol.Crypto (Crypto, VRF)
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           GHC.Generics (Generic)
-import           NoThunks.Class
+import Cardano.Crypto.Hash qualified as Hash
+import Cardano.Crypto.VRF qualified as VRF
+import Cardano.Ledger.Binary
+  ( DecCBOR (..)
+  , EncCBOR (..)
+  , decodeRecordNamed
+  , encodeListLen
+  )
+import Cardano.Ledger.Hashes (HASH)
+import Cardano.Ledger.Keys qualified as SL
+import Cardano.Ledger.State qualified as SL
+import Cardano.Protocol.Crypto (Crypto, VRF)
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import GHC.Generics (Generic)
+import NoThunks.Class

 -- | Copy of 'SL.IndividualPoolStake' before
 -- .
-data IndividualPoolStake c = IndividualPoolStake {
-    individualPoolStake    :: !Rational
+data IndividualPoolStake c = IndividualPoolStake
+  { individualPoolStake :: !Rational
   , individualPoolStakeVrf :: !(Hash.Hash HASH (VRF.VerKeyVRF (VRF c)))
   }
   deriving stock (Show, Eq, Generic)
-  deriving anyclass (NoThunks)
+  deriving anyclass NoThunks

 fromLedgerIndividualPoolStake :: SL.IndividualPoolStake -> IndividualPoolStake c
-fromLedgerIndividualPoolStake ips = IndividualPoolStake {
-    individualPoolStake = SL.individualPoolStake ips
+fromLedgerIndividualPoolStake ips =
+  IndividualPoolStake
+    { individualPoolStake = SL.individualPoolStake ips
    , individualPoolStakeVrf = SL.fromVRFVerKeyHash $ SL.individualPoolStakeVrf ips
    }
@@ -61,13 +66,14 @@ instance Crypto c => DecCBOR (IndividualPoolStake c) where

 -- | Copy of 'SL.PoolDistr' before
 -- .
-newtype PoolDistr c = PoolDistr {
-    unPoolDistr :: Map (SL.KeyHash SL.StakePool) (IndividualPoolStake c)
+newtype PoolDistr c = PoolDistr
+  { unPoolDistr :: Map (SL.KeyHash SL.StakePool) (IndividualPoolStake c)
   }
   deriving stock (Show, Eq, Generic)
   deriving newtype (EncCBOR, DecCBOR)

 fromLedgerPoolDistr :: SL.PoolDistr -> PoolDistr c
-fromLedgerPoolDistr pd = PoolDistr {
-    unPoolDistr = Map.map fromLedgerIndividualPoolStake $ SL.unPoolDistr pd
+fromLedgerPoolDistr pd =
+  PoolDistr
+    { unPoolDistr = Map.map fromLedgerIndividualPoolStake $ SL.unPoolDistr pd
    }
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/SupportsProtocol.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/SupportsProtocol.hs
index 7f5d742382..19d42e8882 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/SupportsProtocol.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Ledger/SupportsProtocol.hs
@@ -13,7 +13,6 @@
 {-# LANGUAGE TypeApplications #-}
 {-# LANGUAGE TypeFamilies #-}
 {-# LANGUAGE UndecidableInstances #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 -- | This module contains 'SupportsProtocol' instances tying the ledger and
@@ -22,32 +21,33 @@
 -- instance for 'ShelleyBlock'.
 module Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () where

-import qualified Cardano.Ledger.Core as LedgerCore
-import qualified Cardano.Ledger.Shelley.API as SL
-import qualified Cardano.Protocol.TPraos.API as SL
-import           Control.Monad.Except (MonadError (throwError))
-import           Data.Coerce (coerce)
-import qualified Lens.Micro
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Forecast
-import           Ouroboros.Consensus.HardFork.History.Util
-import           Ouroboros.Consensus.Ledger.Abstract
-import           Ouroboros.Consensus.Ledger.SupportsProtocol
-                     (LedgerSupportsProtocol (..))
-import           Ouroboros.Consensus.Ledger.Tables.Utils
-import           Ouroboros.Consensus.Protocol.Abstract (translateLedgerView)
-import           Ouroboros.Consensus.Protocol.Praos (Praos)
-import qualified Ouroboros.Consensus.Protocol.Praos.Views as Praos
-import           Ouroboros.Consensus.Protocol.TPraos (TPraos)
-import           Ouroboros.Consensus.Shelley.Ledger.Block
-import           Ouroboros.Consensus.Shelley.Ledger.Ledger
-import           Ouroboros.Consensus.Shelley.Ledger.Protocol ()
-import           Ouroboros.Consensus.Shelley.Protocol.Abstract ()
-import           Ouroboros.Consensus.Shelley.Protocol.Praos ()
-import           Ouroboros.Consensus.Shelley.Protocol.TPraos ()
+import Cardano.Ledger.Core qualified as LedgerCore
+import Cardano.Ledger.Shelley.API qualified as SL
+import Cardano.Protocol.TPraos.API qualified as SL
+import Control.Monad.Except (MonadError (throwError))
+import Data.Coerce (coerce)
+import Lens.Micro qualified
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Forecast
+import Ouroboros.Consensus.HardFork.History.Util
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.SupportsProtocol
+  ( LedgerSupportsProtocol (..)
+  )
+import Ouroboros.Consensus.Ledger.Tables.Utils
+import Ouroboros.Consensus.Protocol.Abstract (translateLedgerView)
+import Ouroboros.Consensus.Protocol.Praos (Praos)
+import Ouroboros.Consensus.Protocol.Praos.Views qualified as Praos
+import Ouroboros.Consensus.Protocol.TPraos (TPraos)
+import Ouroboros.Consensus.Shelley.Ledger.Block
+import Ouroboros.Consensus.Shelley.Ledger.Ledger
+import Ouroboros.Consensus.Shelley.Ledger.Protocol ()
+import Ouroboros.Consensus.Shelley.Protocol.Abstract ()
+import Ouroboros.Consensus.Shelley.Protocol.Praos ()
+import Ouroboros.Consensus.Shelley.Protocol.TPraos ()

 instance
-  (ShelleyCompatible (TPraos crypto) era) =>
+  ShelleyCompatible (TPraos crypto) era =>
   LedgerSupportsProtocol (ShelleyBlock (TPraos crypto) era)
   where
   protocolLedgerView _cfg = SL.currentLedgerView . tickedShelleyLedgerState
@@ -56,69 +56,68 @@ instance
 -- https://github.com/IntersectMBO/ouroboros-consensus/blob/main/docs/website/contents/for-developers/HardWonWisdom.md#why-doesnt-ledger-code-ever-return-pasthorizonexception
   ledgerViewForecastAt cfg ledgerState = Forecast at $ \for ->
     if
-        | NotOrigin for == at ->
+      | NotOrigin for == at ->
           return $ SL.currentLedgerView shelleyLedgerState
-        | for < maxFor ->
+      | for < maxFor ->
           return $ futureLedgerView for
-        | otherwise ->
+      | otherwise ->
           throwError $
             OutsideForecastRange
-              { outsideForecastAt = at,
-                outsideForecastMaxFor = maxFor,
-                outsideForecastFor = for
+              { outsideForecastAt = at
+              , outsideForecastMaxFor = maxFor
+              , outsideForecastFor = for
               }
-    where
-      ShelleyLedgerState {shelleyLedgerState} = ledgerState
-      globals = shelleyLedgerGlobals cfg
-      swindow = SL.stabilityWindow globals
-      at = ledgerTipSlot ledgerState
+   where
+    ShelleyLedgerState{shelleyLedgerState} = ledgerState
+    globals = shelleyLedgerGlobals cfg
+    swindow = SL.stabilityWindow globals
+    at = ledgerTipSlot ledgerState

-      futureLedgerView :: SlotNo -> SL.LedgerView
-      futureLedgerView =
-        either
-          (\e -> error ("futureLedgerView failed: " <> show e))
-          id
-          . SL.futureLedgerView globals shelleyLedgerState
+    futureLedgerView :: SlotNo -> SL.LedgerView
+    futureLedgerView =
+      either
+        (\e -> error ("futureLedgerView failed: " <> show e))
+        id
+        . SL.futureLedgerView globals shelleyLedgerState

-      -- Exclusive upper bound
-      maxFor :: SlotNo
-      maxFor = addSlots swindow $ succWithOrigin at
+    -- Exclusive upper bound
+    maxFor :: SlotNo
+    maxFor = addSlots swindow $ succWithOrigin at

 instance
-  ( ShelleyCompatible (Praos crypto) era,
-    ShelleyCompatible (TPraos crypto) era
+  ( ShelleyCompatible (Praos crypto) era
+  , ShelleyCompatible (TPraos crypto) era
   ) =>
   LedgerSupportsProtocol (ShelleyBlock (Praos crypto) era)
   where
   protocolLedgerView _cfg st =
     let nes = tickedShelleyLedgerState st
-        SL.NewEpochState {nesPd} = nes
+        SL.NewEpochState{nesPd} = nes

         pparam :: forall a. Lens.Micro.Lens' (LedgerCore.PParams era) a -> a
         pparam lens = getPParams nes Lens.Micro.^. lens
-     in Praos.LedgerView
-          { Praos.lvPoolDistr = nesPd,
-            Praos.lvMaxBodySize = pparam LedgerCore.ppMaxBBSizeL,
-            Praos.lvMaxHeaderSize = pparam LedgerCore.ppMaxBHSizeL,
-            Praos.lvProtocolVersion = pparam LedgerCore.ppProtocolVersionL
+     in Praos.LedgerView
+          { Praos.lvPoolDistr = nesPd
+          , Praos.lvMaxBodySize = pparam LedgerCore.ppMaxBBSizeL
+          , Praos.lvMaxHeaderSize = pparam LedgerCore.ppMaxBHSizeL
+          , Praos.lvProtocolVersion = pparam LedgerCore.ppProtocolVersionL
           }

-  -- | Currently the Shelley+ ledger is hard-coded to produce a TPraos ledger
+  -- \| Currently the Shelley+ ledger is hard-coded to produce a TPraos ledger
   -- view. Since we can convert them, we piggy-back on this to get a Praos
   -- ledger view. Ultimately, we will want to liberalise the ledger code
   -- slightly.
   ledgerViewForecastAt cfg st =
     mapForecast (translateLedgerView (Proxy @(TPraos crypto, Praos crypto))) $
       ledgerViewForecastAt @(ShelleyBlock (TPraos crypto) era) cfg st'
-    where
-      st' :: LedgerState (ShelleyBlock (TPraos crypto) era) EmptyMK
-      st' =
-        ShelleyLedgerState
-          { shelleyLedgerTip = coerceTip <$> shelleyLedgerTip st,
-            shelleyLedgerState = shelleyLedgerState st,
-            shelleyLedgerTransition = shelleyLedgerTransition st,
-            shelleyLedgerTables = emptyLedgerTables
-          }
-      coerceTip (ShelleyTip slot block hash) = ShelleyTip slot block (coerce hash)
+   where
+    st' :: LedgerState (ShelleyBlock (TPraos crypto) era) EmptyMK
+    st' =
+      ShelleyLedgerState
+        { shelleyLedgerTip = coerceTip <$> shelleyLedgerTip st
+        , shelleyLedgerState = shelleyLedgerState st
+        , shelleyLedgerTransition = shelleyLedgerTransition st
+        , shelleyLedgerTables = emptyLedgerTables
+        }
+    coerceTip (ShelleyTip slot block hash) = ShelleyTip slot block (coerce hash)
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node.hs
index c25f98440d..7f7b25e96f 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node.hs
@@ -9,11 +9,10 @@
 {-# LANGUAGE TypeFamilies #-}
 {-# LANGUAGE UndecidableInstances #-}
 {-# LANGUAGE UndecidableSuperClasses #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

-module Ouroboros.Consensus.Shelley.Node (
-    MaxMajorProtVer (..)
+module Ouroboros.Consensus.Shelley.Node
+  ( MaxMajorProtVer (..)
   , ProtocolParamsShelleyBased (..)
   , SL.Nonce (..)
   , SL.ProtVer (..)
@@ -27,27 +26,30 @@ module Ouroboros.Consensus.Shelley.Node (
   , validateGenesis
   ) where

-import qualified Cardano.Ledger.Shelley.API as SL
-import           Cardano.Protocol.Crypto (Crypto)
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Config
-import           Ouroboros.Consensus.Ledger.SupportsMempool (TxLimits)
-import           Ouroboros.Consensus.Ledger.SupportsProtocol
-                     (LedgerSupportsProtocol)
-import           Ouroboros.Consensus.Node.ProtocolInfo
-import           Ouroboros.Consensus.Node.Run
-import           Ouroboros.Consensus.Protocol.Abstract
-import           Ouroboros.Consensus.Protocol.TPraos
-import           Ouroboros.Consensus.Shelley.Ledger
-import           Ouroboros.Consensus.Shelley.Ledger.Inspect ()
-import           Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion ()
-import           Ouroboros.Consensus.Shelley.Node.DiffusionPipelining ()
-import           Ouroboros.Consensus.Shelley.Node.Serialisation ()
-import           Ouroboros.Consensus.Shelley.Node.TPraos
-import           Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto,
-                     pHeaderIssuer)
+import Cardano.Ledger.Shelley.API qualified as SL
+import Cardano.Protocol.Crypto (Crypto)
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.Ledger.SupportsMempool (TxLimits)
+import Ouroboros.Consensus.Ledger.SupportsProtocol
+  ( LedgerSupportsProtocol
+  )
+import Ouroboros.Consensus.Node.ProtocolInfo
+import Ouroboros.Consensus.Node.Run
+import Ouroboros.Consensus.Protocol.Abstract
+import Ouroboros.Consensus.Protocol.TPraos
+import Ouroboros.Consensus.Shelley.Ledger
+import Ouroboros.Consensus.Shelley.Ledger.Inspect ()
+import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion ()
+import Ouroboros.Consensus.Shelley.Node.DiffusionPipelining ()
+import Ouroboros.Consensus.Shelley.Node.Serialisation ()
+import Ouroboros.Consensus.Shelley.Node.TPraos
+import Ouroboros.Consensus.Shelley.Protocol.Abstract
+  ( ProtoCrypto
+  , pHeaderIssuer
+  )

 {-------------------------------------------------------------------------------
   ProtocolInfo
@@ -55,8 +57,8 @@ import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto,

 protocolClientInfoShelley :: ProtocolClientInfo (ShelleyBlock proto era)
 protocolClientInfoShelley =
-  ProtocolClientInfo {
-      -- No particular codec configuration is needed for Shelley
+  ProtocolClientInfo
+    { -- No particular codec configuration is needed for Shelley
       pClientInfoCodecConfig = ShelleyCodecConfig
     }

@@ -65,13 +67,13 @@ protocolClientInfoShelley =
 -------------------------------------------------------------------------------}

 instance ShelleyCompatible proto era => BlockSupportsMetrics (ShelleyBlock proto era) where
-  -- | Premature optimisation: we assume everywhere that metrics are
+  -- \| Premature optimisation: we assume everywhere that metrics are
   -- cheap, so micro-optimise checking whether the issuer vkey is one of our
   -- own vkeys.
   --
-  -- * Equality of vkeys takes roughly 40ns
-  -- * Hashing a vkey takes roughly 850ns
-  -- * Equality of hashes takes roughly 10ns
+  -- \* Equality of vkeys takes roughly 40ns
+  -- \* Hashing a vkey takes roughly 850ns
+  -- \* Equality of hashes takes roughly 10ns
   --
   -- We want to avoid the hashing of a vkey as it is more expensive than
   -- simply doing a linear search, comparing vkeys for equality. Only when
@@ -84,36 +86,40 @@ instance ShelleyCompatible proto era => BlockSupportsMetrics (ShelleyBlock proto
   -- we keep it (relatively) simple and optimise for the common case: 0 or 1
   -- key.
   isSelfIssued cfg (ShelleyHeader shdr _) = case Map.size issuerVKeys of
-      -- The most common case: a non-block producing node
-      0 -> IsNotSelfIssued
-      -- A block producing node with a single set of credentials: just do an
-      -- equality check of the single VKey, skipping the more expensive
-      -- computation of the hash.
-      1 | pHeaderIssuer shdr `elem` issuerVKeys
-        -> IsSelfIssued
-        | otherwise
-        -> IsNotSelfIssued
-      -- When we are running with multiple sets of credentials, which should
-      -- only happen when benchmarking, do a hash lookup, as the number of
-      -- keys can grow to 100-250.
-      _ | SL.hashKey (pHeaderIssuer shdr) `Map.member` issuerVKeys
-        -> IsSelfIssued
-        | otherwise
-        -> IsNotSelfIssued
-    where
-
-      issuerVKeys :: Map (SL.KeyHash 'SL.BlockIssuer)
-                         (SL.VKey 'SL.BlockIssuer)
-      issuerVKeys = shelleyBlockIssuerVKeys cfg
+    -- The most common case: a non-block producing node
+    0 -> IsNotSelfIssued
+    -- A block producing node with a single set of credentials: just do an
+    -- equality check of the single VKey, skipping the more expensive
+    -- computation of the hash.
+    1
+      | pHeaderIssuer shdr `elem` issuerVKeys ->
+          IsSelfIssued
+      | otherwise ->
+          IsNotSelfIssued
+    -- When we are running with multiple sets of credentials, which should
+    -- only happen when benchmarking, do a hash lookup, as the number of
+    -- keys can grow to 100-250.
+    _
+      | SL.hashKey (pHeaderIssuer shdr) `Map.member` issuerVKeys ->
+          IsSelfIssued
+      | otherwise ->
+          IsNotSelfIssued
+   where
+    issuerVKeys ::
+      Map
+        (SL.KeyHash 'SL.BlockIssuer)
+        (SL.VKey 'SL.BlockIssuer)
+    issuerVKeys = shelleyBlockIssuerVKeys cfg

 instance ConsensusProtocol proto => BlockSupportsSanityCheck (ShelleyBlock proto era) where
   configAllSecurityParams = pure . protocolSecurityParam .
topLevelConfigProtocol -instance ( ShelleyCompatible proto era - , LedgerSupportsProtocol (ShelleyBlock proto era) - , BlockSupportsSanityCheck (ShelleyBlock proto era) - , TxLimits (ShelleyBlock proto era) - , SerialiseNodeToClientConstraints (ShelleyBlock proto era) - , Crypto (ProtoCrypto proto) - ) - => RunNode (ShelleyBlock proto era) +instance + ( ShelleyCompatible proto era + , LedgerSupportsProtocol (ShelleyBlock proto era) + , BlockSupportsSanityCheck (ShelleyBlock proto era) + , TxLimits (ShelleyBlock proto era) + , SerialiseNodeToClientConstraints (ShelleyBlock proto era) + , Crypto (ProtoCrypto proto) + ) => + RunNode (ShelleyBlock proto era) diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Common.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Common.hs index 6982e3ce00..965276012e 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Common.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Common.hs @@ -8,58 +8,67 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Node configuration common to all (era, protocol) combinations deriving from -- Shelley. -module Ouroboros.Consensus.Shelley.Node.Common ( - ProtocolParamsShelleyBased (..) +module Ouroboros.Consensus.Shelley.Node.Common + ( ProtocolParamsShelleyBased (..) , ShelleyEraWithCrypto , ShelleyLeaderCredentials (..) 
, shelleyBlockIssuerVKey ) where -import Cardano.Crypto.KES (UnsoundPureSignKeyKES) -import Cardano.Ledger.BaseTypes (unNonZero) -import qualified Cardano.Ledger.Keys as SL -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Ledger.Slot -import Cardano.Protocol.Crypto -import Data.Text (Text) -import Ouroboros.Consensus.Block (CannotForge, ForgeStateInfo, - ForgeStateUpdateError) -import Ouroboros.Consensus.Config (maxRollbacks) -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.Ledger.SupportsMempool (TxLimits) -import Ouroboros.Consensus.Node.InitStorage -import qualified Ouroboros.Consensus.Protocol.Ledger.HotKey as HotKey -import Ouroboros.Consensus.Protocol.Praos.Common - (PraosCanBeLeader (praosCanBeLeaderColdVerKey)) -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock, - ShelleyCompatible, shelleyNetworkMagic, - shelleyStorageConfigSecurityParam, - shelleyStorageConfigSlotsPerKESPeriod, shelleySystemStart, - verifyBlockIntegrity) -import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto, - ProtocolHeaderSupportsProtocol (CannotForgeError)) -import Ouroboros.Consensus.Storage.ImmutableDB +import Cardano.Crypto.KES (UnsoundPureSignKeyKES) +import Cardano.Ledger.BaseTypes (unNonZero) +import Cardano.Ledger.Keys qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Slot +import Cardano.Protocol.Crypto +import Data.Text (Text) +import Ouroboros.Consensus.Block + ( CannotForge + , ForgeStateInfo + , ForgeStateUpdateError + ) +import Ouroboros.Consensus.Config (maxRollbacks) +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.Ledger.SupportsMempool (TxLimits) +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Protocol.Ledger.HotKey qualified as HotKey +import Ouroboros.Consensus.Protocol.Praos.Common + ( PraosCanBeLeader (praosCanBeLeaderColdVerKey) + ) +import Ouroboros.Consensus.Shelley.Ledger + ( ShelleyBlock + , 
ShelleyCompatible + , shelleyNetworkMagic + , shelleyStorageConfigSecurityParam + , shelleyStorageConfigSlotsPerKESPeriod + , shelleySystemStart + , verifyBlockIntegrity + ) +import Ouroboros.Consensus.Shelley.Protocol.Abstract + ( ProtoCrypto + , ProtocolHeaderSupportsProtocol (CannotForgeError) + ) +import Ouroboros.Consensus.Storage.ImmutableDB {------------------------------------------------------------------------------- Credentials -------------------------------------------------------------------------------} data ShelleyLeaderCredentials c = ShelleyLeaderCredentials - { -- | The unevolved signing KES key (at evolution 0). - -- - -- Note that this is not inside 'ShelleyCanBeLeader' since it gets evolved - -- automatically, whereas 'ShelleyCanBeLeader' does not change. - shelleyLeaderCredentialsInitSignKey :: UnsoundPureSignKeyKES (KES c), - shelleyLeaderCredentialsCanBeLeader :: PraosCanBeLeader c, - -- | Identifier for this set of credentials. - -- - -- Useful when the node is running with multiple sets of credentials. - shelleyLeaderCredentialsLabel :: Text + { shelleyLeaderCredentialsInitSignKey :: UnsoundPureSignKeyKES (KES c) + -- ^ The unevolved signing KES key (at evolution 0). + -- + -- Note that this is not inside 'ShelleyCanBeLeader' since it gets evolved + -- automatically, whereas 'ShelleyCanBeLeader' does not change. + , shelleyLeaderCredentialsCanBeLeader :: PraosCanBeLeader c + , shelleyLeaderCredentialsLabel :: Text + -- ^ Identifier for this set of credentials. + -- + -- Useful when the node is running with multiple sets of credentials. } shelleyBlockIssuerVKey :: @@ -124,11 +133,11 @@ instance ShelleyCompatible proto era => NodeInitStorage (ShelleyBlock proto era) -- per-era protocol parameters, one value of 'ProtocolParamsShelleyBased' will -- be needed, which is shared among all Shelley-based eras. 
data ProtocolParamsShelleyBased c = ProtocolParamsShelleyBased - { -- | The initial nonce, typically derived from the hash of Genesis - -- config JSON file. - -- - -- WARNING: chains using different values of this parameter will be - -- mutually incompatible. - shelleyBasedInitialNonce :: SL.Nonce, - shelleyBasedLeaderCredentials :: [ShelleyLeaderCredentials c] + { shelleyBasedInitialNonce :: SL.Nonce + -- ^ The initial nonce, typically derived from the hash of Genesis + -- config JSON file. + -- + -- WARNING: chains using different values of this parameter will be + -- mutually incompatible. + , shelleyBasedLeaderCredentials :: [ShelleyLeaderCredentials c] } diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/DiffusionPipelining.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/DiffusionPipelining.hs index 03cb4a22a8..1209b042cf 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/DiffusionPipelining.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/DiffusionPipelining.hs @@ -7,44 +7,43 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Shelley.Node.DiffusionPipelining ( - HotIdentity (..) +module Ouroboros.Consensus.Shelley.Node.DiffusionPipelining + ( HotIdentity (..) , ShelleyTentativeHeaderState (..) , ShelleyTentativeHeaderView (..) 
) where -import qualified Cardano.Ledger.Shelley.API as SL -import Control.Monad (guard) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Shelley.Ledger.Block -import Ouroboros.Consensus.Shelley.Ledger.Protocol () -import Ouroboros.Consensus.Shelley.Protocol.Abstract +import Cardano.Ledger.Shelley.API qualified as SL +import Control.Monad (guard) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Shelley.Ledger.Block +import Ouroboros.Consensus.Shelley.Ledger.Protocol () +import Ouroboros.Consensus.Shelley.Protocol.Abstract -- | Hot block issuer identity for the purpose of Shelley block diffusion -- pipelining. -data HotIdentity c = HotIdentity { - -- | Hash of the cold key. - hiIssuer :: !(SL.KeyHash SL.BlockIssuer) - , -- | The issue number/opcert counter. Even if the opcert was compromised and - -- hence an attacker forges blocks with a specific cold identity, the owner - -- of the cold key can issue a new opcert with an incremented counter, and - -- their minted blocks will be pipelined. - hiIssueNo :: !Word64 +data HotIdentity c = HotIdentity + { hiIssuer :: !(SL.KeyHash SL.BlockIssuer) + -- ^ Hash of the cold key. + , hiIssueNo :: !Word64 + -- ^ The issue number/opcert counter. Even if the opcert was compromised and + -- hence an attacker forges blocks with a specific cold identity, the owner + -- of the cold key can issue a new opcert with an incremented counter, and + -- their minted blocks will be pipelined. 
} deriving stock (Show, Eq, Ord, Generic) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -data ShelleyTentativeHeaderState proto = - ShelleyTentativeHeaderState +data ShelleyTentativeHeaderState proto + = ShelleyTentativeHeaderState -- | The block number of the last trap tentative header. !(WithOrigin BlockNo) -- | The set of all hot identies of those who issued trap tentative @@ -60,13 +59,13 @@ data ShelleyTentativeHeaderState proto = -- record the identities of trap headers they sent. !(Set (HotIdentity (ProtoCrypto proto))) deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -data ShelleyTentativeHeaderView proto = - ShelleyTentativeHeaderView BlockNo (HotIdentity (ProtoCrypto proto)) +data ShelleyTentativeHeaderView proto + = ShelleyTentativeHeaderView BlockNo (HotIdentity (ProtoCrypto proto)) deriving stock instance ConsensusProtocol proto => Show (ShelleyTentativeHeaderView proto) -deriving stock instance ConsensusProtocol proto => Eq (ShelleyTentativeHeaderView proto) +deriving stock instance ConsensusProtocol proto => Eq (ShelleyTentativeHeaderView proto) -- | A header can be pipelined iff no trap header with the same block number and -- by the same issuer was pipelined before. 
See 'HotIdentity' for what exactly @@ -74,33 +73,42 @@ deriving stock instance ConsensusProtocol proto => Eq (ShelleyTentativeHeaderV instance ( ShelleyCompatible proto era , BlockSupportsProtocol (ShelleyBlock proto era) - ) => BlockSupportsDiffusionPipelining (ShelleyBlock proto era) where - type TentativeHeaderState (ShelleyBlock proto era) = - ShelleyTentativeHeaderState proto + ) => + BlockSupportsDiffusionPipelining (ShelleyBlock proto era) + where + type + TentativeHeaderState (ShelleyBlock proto era) = + ShelleyTentativeHeaderState proto - type TentativeHeaderView (ShelleyBlock proto era) = - ShelleyTentativeHeaderView proto + type + TentativeHeaderView (ShelleyBlock proto era) = + ShelleyTentativeHeaderView proto initialTentativeHeaderState _ = - ShelleyTentativeHeaderState Origin Set.empty + ShelleyTentativeHeaderState Origin Set.empty tentativeHeaderView _bcfg hdr@(ShelleyHeader sph _) = - ShelleyTentativeHeaderView (blockNo hdr) HotIdentity { - hiIssuer = SL.hashKey $ pHeaderIssuer sph + ShelleyTentativeHeaderView + (blockNo hdr) + HotIdentity + { hiIssuer = SL.hashKey $ pHeaderIssuer sph , hiIssueNo = pHeaderIssueNo sph } - applyTentativeHeaderView _ + applyTentativeHeaderView + _ (ShelleyTentativeHeaderView bno hdrIdentity) - (ShelleyTentativeHeaderState lastBlockNo badIdentities) - = case compare (NotOrigin bno) lastBlockNo of + (ShelleyTentativeHeaderState lastBlockNo badIdentities) = + case compare (NotOrigin bno) lastBlockNo of LT -> Nothing EQ -> do guard $ hdrIdentity `Set.notMember` badIdentities - Just $ ShelleyTentativeHeaderState - lastBlockNo - (Set.insert hdrIdentity badIdentities) + Just $ + ShelleyTentativeHeaderState + lastBlockNo + (Set.insert hdrIdentity badIdentities) GT -> - Just $ ShelleyTentativeHeaderState - (NotOrigin bno) - (Set.singleton hdrIdentity) + Just $ + ShelleyTentativeHeaderState + (NotOrigin bno) + (Set.singleton hdrIdentity) diff --git 
a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Praos.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Praos.hs index 87599b91c7..be222f7fd4 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Praos.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Praos.hs @@ -8,33 +8,41 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} +{-# OPTIONS_GHC -Wno-orphans #-} -{-# OPTIONS_GHC -Wno-orphans #-} - -module Ouroboros.Consensus.Shelley.Node.Praos ( - -- * BlockForging +module Ouroboros.Consensus.Shelley.Node.Praos + ( -- * BlockForging praosBlockForging , praosSharedBlockForging ) where -import qualified Cardano.Ledger.Api.Era as L -import qualified Cardano.Protocol.TPraos.OCert as Absolute -import qualified Cardano.Protocol.TPraos.OCert as SL -import qualified Data.Text as T -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config (configConsensus) -import qualified Ouroboros.Consensus.Ledger.SupportsMempool as Mempool -import qualified Ouroboros.Consensus.Protocol.Ledger.HotKey as HotKey -import Ouroboros.Consensus.Protocol.Praos (Praos, PraosParams (..), - praosCheckCanForge) -import Ouroboros.Consensus.Protocol.Praos.Common - (PraosCanBeLeader (praosCanBeLeaderOpCert)) -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock, - ShelleyCompatible, forgeShelleyBlock) -import Ouroboros.Consensus.Shelley.Node.Common (ShelleyEraWithCrypto, - ShelleyLeaderCredentials (..)) -import Ouroboros.Consensus.Shelley.Protocol.Praos () -import Ouroboros.Consensus.Util.IOLike (IOLike) +import Cardano.Ledger.Api.Era qualified as L +import Cardano.Protocol.TPraos.OCert qualified as Absolute +import Cardano.Protocol.TPraos.OCert qualified as SL +import Data.Text qualified as T +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config (configConsensus) +import 
Ouroboros.Consensus.Ledger.SupportsMempool qualified as Mempool +import Ouroboros.Consensus.Protocol.Ledger.HotKey qualified as HotKey +import Ouroboros.Consensus.Protocol.Praos + ( Praos + , PraosParams (..) + , praosCheckCanForge + ) +import Ouroboros.Consensus.Protocol.Praos.Common + ( PraosCanBeLeader (praosCanBeLeaderOpCert) + ) +import Ouroboros.Consensus.Shelley.Ledger + ( ShelleyBlock + , ShelleyCompatible + , forgeShelleyBlock + ) +import Ouroboros.Consensus.Shelley.Node.Common + ( ShelleyEraWithCrypto + , ShelleyLeaderCredentials (..) + ) +import Ouroboros.Consensus.Shelley.Protocol.Praos () +import Ouroboros.Consensus.Util.IOLike (IOLike) {------------------------------------------------------------------------------- BlockForging @@ -42,63 +50,63 @@ import Ouroboros.Consensus.Util.IOLike (IOLike) -- | Create a 'BlockForging' record for a single era. praosBlockForging :: - forall m era c. - ( ShelleyCompatible (Praos c) era - , Mempool.TxLimits (ShelleyBlock (Praos c) era) - , IOLike m - ) - => PraosParams - -> ShelleyLeaderCredentials c - -> m (BlockForging m (ShelleyBlock (Praos c) era)) + forall m era c. 
+ ( ShelleyCompatible (Praos c) era + , Mempool.TxLimits (ShelleyBlock (Praos c) era) + , IOLike m + ) => + PraosParams -> + ShelleyLeaderCredentials c -> + m (BlockForging m (ShelleyBlock (Praos c) era)) praosBlockForging praosParams credentials = do - hotKey <- HotKey.mkHotKey @m @c initSignKey startPeriod praosMaxKESEvo - pure $ praosSharedBlockForging hotKey slotToPeriod credentials - where - PraosParams {praosMaxKESEvo, praosSlotsPerKESPeriod} = praosParams + hotKey <- HotKey.mkHotKey @m @c initSignKey startPeriod praosMaxKESEvo + pure $ praosSharedBlockForging hotKey slotToPeriod credentials + where + PraosParams{praosMaxKESEvo, praosSlotsPerKESPeriod} = praosParams - ShelleyLeaderCredentials { - shelleyLeaderCredentialsInitSignKey = initSignKey - , shelleyLeaderCredentialsCanBeLeader = canBeLeader - } = credentials + ShelleyLeaderCredentials + { shelleyLeaderCredentialsInitSignKey = initSignKey + , shelleyLeaderCredentialsCanBeLeader = canBeLeader + } = credentials - startPeriod :: Absolute.KESPeriod - startPeriod = SL.ocertKESPeriod $ praosCanBeLeaderOpCert canBeLeader + startPeriod :: Absolute.KESPeriod + startPeriod = SL.ocertKESPeriod $ praosCanBeLeaderOpCert canBeLeader - slotToPeriod :: SlotNo -> Absolute.KESPeriod - slotToPeriod (SlotNo slot) = - SL.KESPeriod $ fromIntegral $ slot `div` praosSlotsPerKESPeriod + slotToPeriod :: SlotNo -> Absolute.KESPeriod + slotToPeriod (SlotNo slot) = + SL.KESPeriod $ fromIntegral $ slot `div` praosSlotsPerKESPeriod -- | Create a 'BlockForging' record safely using the given 'Hotkey'. -- -- The name of the era (separated by a @_@) will be appended to each -- 'forgeLabel'. praosSharedBlockForging :: - forall m c era. - ( ShelleyEraWithCrypto c (Praos c) era - , IOLike m - ) - => HotKey.HotKey c m - -> (SlotNo -> Absolute.KESPeriod) - -> ShelleyLeaderCredentials c - -> BlockForging m (ShelleyBlock (Praos c) era) + forall m c era. 
+ ( ShelleyEraWithCrypto c (Praos c) era + , IOLike m + ) => + HotKey.HotKey c m -> + (SlotNo -> Absolute.KESPeriod) -> + ShelleyLeaderCredentials c -> + BlockForging m (ShelleyBlock (Praos c) era) praosSharedBlockForging hotKey slotToPeriod - ShelleyLeaderCredentials { - shelleyLeaderCredentialsCanBeLeader = canBeLeader + ShelleyLeaderCredentials + { shelleyLeaderCredentialsCanBeLeader = canBeLeader , shelleyLeaderCredentialsLabel = label } = do BlockForging - { forgeLabel = label <> "_" <> T.pack (L.eraName @era), - canBeLeader = canBeLeader, - updateForgeState = \_ curSlot _ -> + { forgeLabel = label <> "_" <> T.pack (L.eraName @era) + , canBeLeader = canBeLeader + , updateForgeState = \_ curSlot _ -> forgeStateUpdateInfoFromUpdateInfo - <$> HotKey.evolve hotKey (slotToPeriod curSlot), - checkCanForge = \cfg curSlot _tickedChainDepState _isLeader -> + <$> HotKey.evolve hotKey (slotToPeriod curSlot) + , checkCanForge = \cfg curSlot _tickedChainDepState _isLeader -> praosCheckCanForge (configConsensus cfg) - curSlot, - forgeBlock = \cfg -> + curSlot + , forgeBlock = \cfg -> forgeShelleyBlock hotKey canBeLeader diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Serialisation.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Serialisation.hs index e0c1eff319..cdfbbd0959 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Serialisation.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/Serialisation.hs @@ -7,48 +7,55 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.Shelley.Node.Serialisation () where -import Cardano.Binary -import Cardano.Ledger.BaseTypes -import Cardano.Ledger.Core (fromEraCBOR, toEraCBOR) -import qualified Cardano.Ledger.Core as SL -import qualified Cardano.Ledger.Shelley.API as SL -import 
Cardano.Slotting.EpochInfo (epochInfoSize, - epochInfoSlotToRelativeTime, fixedEpochInfo, - hoistEpochInfo) -import Cardano.Slotting.Time -import Codec.Serialise (decode, encode) -import Control.Exception (Exception, throw) -import qualified Data.ByteString.Lazy as Lazy -import Data.Functor.Identity -import Data.Typeable (Typeable) -import Data.Word -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract.NoHardForks -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.History.EpochInfo -import Ouroboros.Consensus.HardFork.Simple -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables (EmptyMK) -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Protocol.Praos (PraosState) -import Ouroboros.Consensus.Protocol.TPraos -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion () -import Ouroboros.Consensus.Shelley.Protocol.Abstract - (pHeaderBlockSize, pHeaderSize) -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Network.Block (Serialised, unwrapCBORinCBOR, - wrapCBORinCBOR) +import Cardano.Binary +import Cardano.Ledger.BaseTypes +import Cardano.Ledger.Core (fromEraCBOR, toEraCBOR) +import Cardano.Ledger.Core qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Slotting.EpochInfo + ( epochInfoSize + , epochInfoSlotToRelativeTime + , fixedEpochInfo + , hoistEpochInfo + ) +import Cardano.Slotting.Time +import Codec.Serialise (decode, encode) +import Control.Exception (Exception, throw) +import Data.ByteString.Lazy qualified as Lazy +import Data.Functor.Identity +import Data.Typeable (Typeable) +import Data.Word 
+import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract.NoHardForks +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.History.EpochInfo +import Ouroboros.Consensus.HardFork.Simple +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables (EmptyMK) +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Protocol.Praos (PraosState) +import Ouroboros.Consensus.Protocol.TPraos +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion () +import Ouroboros.Consensus.Shelley.Protocol.Abstract + ( pHeaderBlockSize + , pHeaderSize + ) +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Network.Block + ( Serialised + , unwrapCBORinCBOR + , wrapCBORinCBOR + ) {------------------------------------------------------------------------------- EncodeDisk & DecodeDisk @@ -61,72 +68,101 @@ instance ShelleyCompatible proto era => SerialiseDiskConstraints (ShelleyBlock p instance ShelleyCompatible proto era => EncodeDisk (ShelleyBlock proto era) (ShelleyBlock proto era) where encodeDisk _ = encodeShelleyBlock -instance ShelleyCompatible proto era => DecodeDisk (ShelleyBlock proto era) (Lazy.ByteString -> ShelleyBlock proto era) where +instance + ShelleyCompatible proto era => + DecodeDisk (ShelleyBlock proto era) (Lazy.ByteString -> ShelleyBlock proto era) + where decodeDisk _ = decodeShelleyBlock -instance ShelleyCompatible proto era => EncodeDisk (ShelleyBlock proto era) (Header (ShelleyBlock proto era)) where +instance + ShelleyCompatible proto era => + EncodeDisk (ShelleyBlock proto era) (Header (ShelleyBlock proto era)) + where encodeDisk _ = 
encodeShelleyHeader -instance ShelleyCompatible proto era => DecodeDisk (ShelleyBlock proto era) (Lazy.ByteString -> Header (ShelleyBlock proto era)) where +instance + ShelleyCompatible proto era => + DecodeDisk (ShelleyBlock proto era) (Lazy.ByteString -> Header (ShelleyBlock proto era)) + where decodeDisk _ = decodeShelleyHeader -instance ShelleyCompatible proto era => EncodeDisk (ShelleyBlock proto era) (LedgerState (ShelleyBlock proto era) EmptyMK) where +instance + ShelleyCompatible proto era => + EncodeDisk (ShelleyBlock proto era) (LedgerState (ShelleyBlock proto era) EmptyMK) + where encodeDisk _ = encodeShelleyLedgerState -instance ShelleyCompatible proto era => DecodeDisk (ShelleyBlock proto era) (LedgerState (ShelleyBlock proto era) EmptyMK) where +instance + ShelleyCompatible proto era => + DecodeDisk (ShelleyBlock proto era) (LedgerState (ShelleyBlock proto era) EmptyMK) + where decodeDisk _ = decodeShelleyLedgerState -- | @'ChainDepState' ('BlockProtocol' ('ShelleyBlock' era))@ instance ShelleyCompatible proto era => EncodeDisk (ShelleyBlock proto era) TPraosState where encodeDisk _ = encode + -- | @'ChainDepState' ('BlockProtocol' ('ShelleyBlock' era))@ instance ShelleyCompatible proto era => DecodeDisk (ShelleyBlock proto era) TPraosState where decodeDisk _ = decode instance ShelleyCompatible proto era => EncodeDisk (ShelleyBlock proto era) PraosState where encodeDisk _ = encode + -- | @'ChainDepState' ('BlockProtocol' ('ShelleyBlock' era))@ instance ShelleyCompatible proto era => DecodeDisk (ShelleyBlock proto era) PraosState where decodeDisk _ = decode -instance ShelleyCompatible proto era - => EncodeDisk (ShelleyBlock proto era) (AnnTip (ShelleyBlock proto era)) where + +instance + ShelleyCompatible proto era => + EncodeDisk (ShelleyBlock proto era) (AnnTip (ShelleyBlock proto era)) + where encodeDisk _ = encodeShelleyAnnTip -instance ShelleyCompatible proto era - => DecodeDisk (ShelleyBlock proto era) (AnnTip (ShelleyBlock proto era)) where 
+instance + ShelleyCompatible proto era => + DecodeDisk (ShelleyBlock proto era) (AnnTip (ShelleyBlock proto era)) + where decodeDisk _ = decodeShelleyAnnTip {------------------------------------------------------------------------------- SerialiseNodeToNode -------------------------------------------------------------------------------} -instance (ShelleyCompatible proto era) - => SerialiseNodeToNodeConstraints (ShelleyBlock proto era) where +instance + ShelleyCompatible proto era => + SerialiseNodeToNodeConstraints (ShelleyBlock proto era) + where estimateBlockSize hdr = overhead + hdrSize + bodySize - where - -- The maximum block size is 65536, the CBOR-in-CBOR tag for this block - -- is: - -- - -- > D8 18 # tag(24) - -- > 1A 00010000 # bytes(65536) - -- - -- Which is 7 bytes, enough for up to 4294967295 bytes. - overhead = 7 {- CBOR-in-CBOR -} + 1 {- encodeListLen -} - bodySize = fromIntegral . pHeaderBlockSize . shelleyHeaderRaw $ hdr - hdrSize = fromIntegral . pHeaderSize . shelleyHeaderRaw $ hdr + where + -- The maximum block size is 65536, the CBOR-in-CBOR tag for this block + -- is: + -- + -- > D8 18 # tag(24) + -- > 1A 00010000 # bytes(65536) + -- + -- Which is 7 bytes, enough for up to 4294967295 bytes. + overhead = 7 {- CBOR-in-CBOR -} + 1 {- encodeListLen -} + bodySize = fromIntegral . pHeaderBlockSize . shelleyHeaderRaw $ hdr + hdrSize = fromIntegral . pHeaderSize . shelleyHeaderRaw $ hdr -- | CBOR-in-CBOR for the annotation. This also makes it compatible with the -- wrapped ('Serialised') variant. 
-instance ShelleyCompatible proto era - => SerialiseNodeToNode (ShelleyBlock proto era) (ShelleyBlock proto era) where - encodeNodeToNode _ _ = wrapCBORinCBOR encodeShelleyBlock +instance + ShelleyCompatible proto era => + SerialiseNodeToNode (ShelleyBlock proto era) (ShelleyBlock proto era) + where + encodeNodeToNode _ _ = wrapCBORinCBOR encodeShelleyBlock decodeNodeToNode _ _ = unwrapCBORinCBOR decodeShelleyBlock -- | 'Serialised' uses CBOR-in-CBOR by default. instance SerialiseNodeToNode (ShelleyBlock proto era) (Serialised (ShelleyBlock proto era)) - -- Default instance + +-- Default instance -- | CBOR-in-CBOR to be compatible with the wrapped ('Serialised') variant. -instance ShelleyCompatible proto era - => SerialiseNodeToNode (ShelleyBlock proto era) (Header (ShelleyBlock proto era)) where - encodeNodeToNode _ _ = wrapCBORinCBOR encodeShelleyHeader +instance + ShelleyCompatible proto era => + SerialiseNodeToNode (ShelleyBlock proto era) (Header (ShelleyBlock proto era)) + where + encodeNodeToNode _ _ = wrapCBORinCBOR encodeShelleyHeader decodeNodeToNode _ _ = unwrapCBORinCBOR decodeShelleyHeader -- | We use CBOR-in-CBOR @@ -136,13 +172,17 @@ instance SerialiseNodeToNode (ShelleyBlock proto era) (SerialisedHeader (Shelley -- | The @To/FromCBOR@ instances defined in @cardano-ledger@ use -- CBOR-in-CBOR to get the annotation. 
-instance ShelleyCompatible proto era - => SerialiseNodeToNode (ShelleyBlock proto era) (GenTx (ShelleyBlock proto era)) where +instance + ShelleyCompatible proto era => + SerialiseNodeToNode (ShelleyBlock proto era) (GenTx (ShelleyBlock proto era)) + where encodeNodeToNode _ _ = toCBOR decodeNodeToNode _ _ = fromCBOR -instance (ShelleyCompatible proto era) - => SerialiseNodeToNode (ShelleyBlock proto era) (GenTxId (ShelleyBlock proto era)) where +instance + ShelleyCompatible proto era => + SerialiseNodeToNode (ShelleyBlock proto era) (GenTxId (ShelleyBlock proto era)) + where encodeNodeToNode _ _ = toEraCBOR @era decodeNodeToNode _ _ = fromEraCBOR @era @@ -151,25 +191,29 @@ instance (ShelleyCompatible proto era) -------------------------------------------------------------------------------} -- | Exception thrown in the encoders -data ShelleyEncoderException era proto = - -- | A query was submitted that is not supported by the given +data ShelleyEncoderException era proto + = -- | A query was submitted that is not supported by the given -- 'ShelleyNodeToClientVersion'. ShelleyEncoderUnsupportedQuery - (SomeBlockQuery (BlockQuery (ShelleyBlock proto era))) - ShelleyNodeToClientVersion - deriving (Show) + (SomeBlockQuery (BlockQuery (ShelleyBlock proto era))) + ShelleyNodeToClientVersion + deriving Show -instance (Typeable era, Typeable proto) - => Exception (ShelleyEncoderException era proto) +instance + (Typeable era, Typeable proto) => + Exception (ShelleyEncoderException era proto) -instance (NoHardForks (ShelleyBlock proto era), ShelleyCompatible proto era) - => SerialiseNodeToClientConstraints (ShelleyBlock proto era) +instance + (NoHardForks (ShelleyBlock proto era), ShelleyCompatible proto era) => + SerialiseNodeToClientConstraints (ShelleyBlock proto era) -- | CBOR-in-CBOR for the annotation. This also makes it compatible with the -- wrapped ('Serialised') variant. 
-instance ShelleyCompatible proto era - => SerialiseNodeToClient (ShelleyBlock proto era) (ShelleyBlock proto era) where - encodeNodeToClient _ _ = wrapCBORinCBOR encodeShelleyBlock +instance + ShelleyCompatible proto era => + SerialiseNodeToClient (ShelleyBlock proto era) (ShelleyBlock proto era) + where + encodeNodeToClient _ _ = wrapCBORinCBOR encodeShelleyBlock decodeNodeToClient _ _ = unwrapCBORinCBOR decodeShelleyBlock -- | This instance uses the invariant that the 'EpochInfo' in a @@ -177,90 +221,100 @@ instance ShelleyCompatible proto era -- 'SlotLength'. This is not true in the case of the HFC in a -- 'ShelleyPartialLedgerConfig', but that is handled correctly in the respective -- 'SerialiseNodeToClient' instance for 'ShelleyPartialLedgerConfig'. -instance (NoHardForks (ShelleyBlock proto era), ShelleyCompatible proto era) - => SerialiseNodeToClient (ShelleyBlock proto era) (ShelleyLedgerConfig era) where +instance + (NoHardForks (ShelleyBlock proto era), ShelleyCompatible proto era) => + SerialiseNodeToClient (ShelleyBlock proto era) (ShelleyLedgerConfig era) + where decodeNodeToClient ccfg version = do enforceSize "ShelleyLedgerConfig" 3 - partialConfig <- decodeNodeToClient - @_ - @(ShelleyPartialLedgerConfig era) - ccfg - version - epochSize <- fromCBOR @EpochSize - slotLength <- decode @SlotLength - return $ completeLedgerConfig - (Proxy @(ShelleyBlock proto era)) - (fixedEpochInfo epochSize slotLength) - partialConfig - - encodeNodeToClient ccfg version ledgerConfig = mconcat [ - encodeListLen 3 - , encodeNodeToClient + partialConfig <- + decodeNodeToClient @_ @(ShelleyPartialLedgerConfig era) ccfg version - (toPartialLedgerConfig (Proxy @(ShelleyBlock proto era)) ledgerConfig) - , toCBOR @EpochSize epochSize - , encode @SlotLength slotLength - ] - where - unwrap = either - (error "ShelleyLedgerConfig contains a non-fixed EpochInfo") - id - ei = epochInfo (shelleyLedgerGlobals ledgerConfig) - epochSize = unwrap $ epochInfoSize ei (EpochNo 0) - 
RelativeTime t1 = unwrap $ epochInfoSlotToRelativeTime ei 1 - slotLength = mkSlotLength t1 + epochSize <- fromCBOR @EpochSize + slotLength <- decode @SlotLength + return $ + completeLedgerConfig + (Proxy @(ShelleyBlock proto era)) + (fixedEpochInfo epochSize slotLength) + partialConfig + + encodeNodeToClient ccfg version ledgerConfig = + mconcat + [ encodeListLen 3 + , encodeNodeToClient + @_ + @(ShelleyPartialLedgerConfig era) + ccfg + version + (toPartialLedgerConfig (Proxy @(ShelleyBlock proto era)) ledgerConfig) + , toCBOR @EpochSize epochSize + , encode @SlotLength slotLength + ] + where + unwrap = + either + (error "ShelleyLedgerConfig contains a non-fixed EpochInfo") + id + ei = epochInfo (shelleyLedgerGlobals ledgerConfig) + epochSize = unwrap $ epochInfoSize ei (EpochNo 0) + RelativeTime t1 = unwrap $ epochInfoSlotToRelativeTime ei 1 + slotLength = mkSlotLength t1 -- | This instance uses the invariant that the 'EpochInfo' in a -- 'ShelleyPartialLedgerConfig' is always just a dummy value. -instance ShelleyBasedEra era - => SerialiseNodeToClient (ShelleyBlock proto era) (ShelleyPartialLedgerConfig era) where +instance + ShelleyBasedEra era => + SerialiseNodeToClient (ShelleyBlock proto era) (ShelleyPartialLedgerConfig era) + where decodeNodeToClient ccfg version = do enforceSize "ShelleyPartialLedgerConfig era" 13 ShelleyPartialLedgerConfig <$> ( ShelleyLedgerConfig - <$> fromCBOR @CompactGenesis - <*> (SL.Globals - (hoistEpochInfo (Right . runIdentity) $ toPureEpochInfo dummyEpochInfo) - <$> fromCBOR @Word64 - <*> fromCBOR @Word64 - <*> fromCBOR @Word64 - <*> fromCBOR @(NonZero Word64) - <*> fromCBOR @Word64 - <*> fromCBOR @Word64 - <*> fromCBOR @Word64 - <*> fromCBOR @ActiveSlotCoeff - <*> fromCBOR @SL.Network - <*> fromCBOR @SystemStart - ) - <*> fromCBOR @(SL.TranslationContext era) - ) + <$> fromCBOR @CompactGenesis + <*> ( SL.Globals + (hoistEpochInfo (Right . 
runIdentity) $ toPureEpochInfo dummyEpochInfo) + <$> fromCBOR @Word64 + <*> fromCBOR @Word64 + <*> fromCBOR @Word64 + <*> fromCBOR @(NonZero Word64) + <*> fromCBOR @Word64 + <*> fromCBOR @Word64 + <*> fromCBOR @Word64 + <*> fromCBOR @ActiveSlotCoeff + <*> fromCBOR @SL.Network + <*> fromCBOR @SystemStart + ) + <*> fromCBOR @(SL.TranslationContext era) + ) <*> decodeNodeToClient @(ShelleyBlock proto era) @TriggerHardFork ccfg version - encodeNodeToClient ccfg version - (ShelleyPartialLedgerConfig - (ShelleyLedgerConfig - myCompactGenesis - (SL.Globals - _epochInfo - slotsPerKESPeriod' - stabilityWindow' - randomnessStabilisationWindow' - securityParameter' - maxKESEvo' - quorum' - maxLovelaceSupply' - activeSlotCoeff' - networkId' - systemStart' - ) - translationContext - ) - triggerHardFork - ) - = encodeListLen 13 + encodeNodeToClient + ccfg + version + ( ShelleyPartialLedgerConfig + ( ShelleyLedgerConfig + myCompactGenesis + ( SL.Globals + _epochInfo + slotsPerKESPeriod' + stabilityWindow' + randomnessStabilisationWindow' + securityParameter' + maxKESEvo' + quorum' + maxLovelaceSupply' + activeSlotCoeff' + networkId' + systemStart' + ) + translationContext + ) + triggerHardFork + ) = + encodeListLen 13 <> toCBOR @CompactGenesis myCompactGenesis <> toCBOR @Word64 slotsPerKESPeriod' <> toCBOR @Word64 stabilityWindow' @@ -277,16 +331,21 @@ instance ShelleyBasedEra era -- | 'Serialised' uses CBOR-in-CBOR by default. instance SerialiseNodeToClient (ShelleyBlock proto era) (Serialised (ShelleyBlock proto era)) - -- Default instance + +-- Default instance -- | Uses CBOR-in-CBOR in the @To/FromCBOR@ instances to get the annotation. 
-instance ShelleyCompatible proto era - => SerialiseNodeToClient (ShelleyBlock proto era) (GenTx (ShelleyBlock proto era)) where +instance + ShelleyCompatible proto era => + SerialiseNodeToClient (ShelleyBlock proto era) (GenTx (ShelleyBlock proto era)) + where encodeNodeToClient _ _ = toCBOR decodeNodeToClient _ _ = fromCBOR -instance (ShelleyCompatible proto era) - => SerialiseNodeToClient (ShelleyBlock proto era) (GenTxId (ShelleyBlock proto era)) where +instance + ShelleyCompatible proto era => + SerialiseNodeToClient (ShelleyBlock proto era) (GenTxId (ShelleyBlock proto era)) + where encodeNodeToClient _ _ = toEraCBOR @era decodeNodeToClient _ _ = fromEraCBOR @era @@ -295,20 +354,24 @@ instance ShelleyBasedEra era => SerialiseNodeToClient (ShelleyBlock proto era) ( encodeNodeToClient _ _ = toEraCBOR @era decodeNodeToClient _ _ = fromEraCBOR @era -instance (ShelleyCompatible proto era, LedgerSupportsProtocol (ShelleyBlock proto era)) - => SerialiseNodeToClient (ShelleyBlock proto era) (SomeBlockQuery (BlockQuery (ShelleyBlock proto era))) where +instance + (ShelleyCompatible proto era, LedgerSupportsProtocol (ShelleyBlock proto era)) => + SerialiseNodeToClient + (ShelleyBlock proto era) + (SomeBlockQuery (BlockQuery (ShelleyBlock proto era))) + where encodeNodeToClient _ version (SomeBlockQuery q) - | blockQueryIsSupportedOnVersion q version - = encodeShelleyQuery q - | otherwise - = throw $ ShelleyEncoderUnsupportedQuery (SomeBlockQuery q) version + | blockQueryIsSupportedOnVersion q version = + encodeShelleyQuery q + | otherwise = + throw $ ShelleyEncoderUnsupportedQuery (SomeBlockQuery q) version decodeNodeToClient _ _ = decodeShelleyQuery instance ShelleyCompatible proto era => SerialiseBlockQueryResult (ShelleyBlock proto era) BlockQuery where encodeBlockQueryResult _ = encodeShelleyResult decodeBlockQueryResult _ = decodeShelleyResult -instance ShelleyCompatible proto era => SerialiseNodeToClient (ShelleyBlock proto era) SlotNo where +instance 
ShelleyCompatible proto era => SerialiseNodeToClient (ShelleyBlock proto era) SlotNo where encodeNodeToClient _ _ = toCBOR decodeNodeToClient _ _ = fromCBOR @@ -320,6 +383,6 @@ instance ShelleyCompatible proto era => SerialiseNodeToClient (ShelleyBlock pro instance ShelleyBasedEra era => ReconstructNestedCtxt Header (ShelleyBlock proto era) instance ShelleyBasedEra era => EncodeDiskDepIx (NestedCtxt Header) (ShelleyBlock proto era) -instance ShelleyCompatible proto era => EncodeDiskDep (NestedCtxt Header) (ShelleyBlock proto era) +instance ShelleyCompatible proto era => EncodeDiskDep (NestedCtxt Header) (ShelleyBlock proto era) instance ShelleyBasedEra era => DecodeDiskDepIx (NestedCtxt Header) (ShelleyBlock proto era) -instance ShelleyCompatible proto era => DecodeDiskDep (NestedCtxt Header) (ShelleyBlock proto era) +instance ShelleyCompatible proto era => DecodeDiskDep (NestedCtxt Header) (ShelleyBlock proto era) diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/TPraos.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/TPraos.hs index 5f3449c9a9..72526eda3d 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/TPraos.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Node/TPraos.hs @@ -12,11 +12,10 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Shelley.Node.TPraos ( - MaxMajorProtVer (..) +module Ouroboros.Consensus.Shelley.Node.TPraos + ( MaxMajorProtVer (..) , ProtocolParamsShelleyBased (..) , SL.Nonce (..) , SL.ProtVer (..) 
@@ -31,48 +30,51 @@ module Ouroboros.Consensus.Shelley.Node.TPraos ( , validateGenesis ) where -import Cardano.Crypto.Hash (Hash) -import qualified Cardano.Crypto.VRF as VRF -import qualified Cardano.Ledger.Api.Era as L -import qualified Cardano.Ledger.Api.Transition as L -import Cardano.Ledger.Hashes (HASH) -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Protocol.Crypto (VRF) -import qualified Cardano.Protocol.TPraos.API as SL -import qualified Cardano.Protocol.TPraos.OCert as Absolute (KESPeriod (..)) -import qualified Cardano.Protocol.TPraos.OCert as SL -import Cardano.Slotting.EpochInfo -import Cardano.Slotting.Time (mkSlotLength) -import Control.Monad.Except (Except) -import Data.Bifunctor (first) -import qualified Data.Text as T -import qualified Data.Text as Text -import Lens.Micro ((^.)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool (TxLimits) -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) -import qualified Ouroboros.Consensus.Protocol.Ledger.HotKey as HotKey -import Ouroboros.Consensus.Protocol.Praos.Common -import Ouroboros.Consensus.Protocol.TPraos -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Ledger.Inspect () -import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion () -import Ouroboros.Consensus.Shelley.Node.Common - (ProtocolParamsShelleyBased (..), ShelleyEraWithCrypto, - ShelleyLeaderCredentials (..), shelleyBlockIssuerVKey) -import Ouroboros.Consensus.Shelley.Node.Serialisation () -import 
Ouroboros.Consensus.Shelley.Protocol.TPraos () -import Ouroboros.Consensus.Util.Assert -import Ouroboros.Consensus.Util.IOLike +import Cardano.Crypto.Hash (Hash) +import Cardano.Crypto.VRF qualified as VRF +import Cardano.Ledger.Api.Era qualified as L +import Cardano.Ledger.Api.Transition qualified as L +import Cardano.Ledger.Hashes (HASH) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Protocol.Crypto (VRF) +import Cardano.Protocol.TPraos.API qualified as SL +import Cardano.Protocol.TPraos.OCert qualified as Absolute (KESPeriod (..)) +import Cardano.Protocol.TPraos.OCert qualified as SL +import Cardano.Slotting.EpochInfo +import Cardano.Slotting.Time (mkSlotLength) +import Control.Monad.Except (Except) +import Data.Bifunctor (first) +import Data.Text qualified as T +import Data.Text qualified as Text +import Lens.Micro ((^.)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool (TxLimits) +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) +import Ouroboros.Consensus.Protocol.Ledger.HotKey qualified as HotKey +import Ouroboros.Consensus.Protocol.Praos.Common +import Ouroboros.Consensus.Protocol.TPraos +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Ledger.Inspect () +import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion () +import Ouroboros.Consensus.Shelley.Node.Common + ( ProtocolParamsShelleyBased (..) + , ShelleyEraWithCrypto + , ShelleyLeaderCredentials (..) 
+ , shelleyBlockIssuerVKey + ) +import Ouroboros.Consensus.Shelley.Node.Serialisation () +import Ouroboros.Consensus.Shelley.Protocol.TPraos () +import Ouroboros.Consensus.Util.Assert +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- BlockForging @@ -83,75 +85,75 @@ import Ouroboros.Consensus.Util.IOLike -- In case the same credentials should be shared across multiple Shelley-based -- eras, use 'shelleySharedBlockForging'. shelleyBlockForging :: - forall m era c. - ( ShelleyCompatible (TPraos c) era - , TxLimits (ShelleyBlock (TPraos c) era) - , IOLike m - ) - => TPraosParams - -> ShelleyLeaderCredentials c - -> m (BlockForging m (ShelleyBlock (TPraos c) era)) + forall m era c. + ( ShelleyCompatible (TPraos c) era + , TxLimits (ShelleyBlock (TPraos c) era) + , IOLike m + ) => + TPraosParams -> + ShelleyLeaderCredentials c -> + m (BlockForging m (ShelleyBlock (TPraos c) era)) shelleyBlockForging tpraosParams credentials = do - hotKey <- HotKey.mkHotKey @m @c initSignKey startPeriod tpraosMaxKESEvo - pure $ shelleySharedBlockForging hotKey slotToPeriod credentials - where - TPraosParams {tpraosMaxKESEvo, tpraosSlotsPerKESPeriod} = tpraosParams + hotKey <- HotKey.mkHotKey @m @c initSignKey startPeriod tpraosMaxKESEvo + pure $ shelleySharedBlockForging hotKey slotToPeriod credentials + where + TPraosParams{tpraosMaxKESEvo, tpraosSlotsPerKESPeriod} = tpraosParams - ShelleyLeaderCredentials { - shelleyLeaderCredentialsInitSignKey = initSignKey - , shelleyLeaderCredentialsCanBeLeader = canBeLeader - } = credentials + ShelleyLeaderCredentials + { shelleyLeaderCredentialsInitSignKey = initSignKey + , shelleyLeaderCredentialsCanBeLeader = canBeLeader + } = credentials - startPeriod :: Absolute.KESPeriod - startPeriod = SL.ocertKESPeriod $ praosCanBeLeaderOpCert canBeLeader + startPeriod :: Absolute.KESPeriod + startPeriod = SL.ocertKESPeriod $ praosCanBeLeaderOpCert canBeLeader - slotToPeriod :: SlotNo -> 
Absolute.KESPeriod - slotToPeriod (SlotNo slot) = - SL.KESPeriod $ fromIntegral $ slot `div` tpraosSlotsPerKESPeriod + slotToPeriod :: SlotNo -> Absolute.KESPeriod + slotToPeriod (SlotNo slot) = + SL.KESPeriod $ fromIntegral $ slot `div` tpraosSlotsPerKESPeriod -- | Create a 'BlockForging' record safely using a given 'Hotkey'. -- -- The name of the era (separated by a @_@) will be appended to each -- 'forgeLabel'. shelleySharedBlockForging :: - forall m c era. - ( ShelleyEraWithCrypto c (TPraos c) era - , IOLike m - ) - => HotKey c m - -> (SlotNo -> Absolute.KESPeriod) - -> ShelleyLeaderCredentials c - -> BlockForging m (ShelleyBlock (TPraos c) era) + forall m c era. + ( ShelleyEraWithCrypto c (TPraos c) era + , IOLike m + ) => + HotKey c m -> + (SlotNo -> Absolute.KESPeriod) -> + ShelleyLeaderCredentials c -> + BlockForging m (ShelleyBlock (TPraos c) era) shelleySharedBlockForging hotKey slotToPeriod credentials = - BlockForging { - forgeLabel = label <> "_" <> T.pack (L.eraName @era) - , canBeLeader = canBeLeader - , updateForgeState = \_ curSlot _ -> - forgeStateUpdateInfoFromUpdateInfo <$> - HotKey.evolve hotKey (slotToPeriod curSlot) - , checkCanForge = \cfg curSlot _tickedChainDepState -> - tpraosCheckCanForge - (configConsensus cfg) - forgingVRFHash - curSlot - , forgeBlock = \cfg -> - forgeShelleyBlock - hotKey - canBeLeader - cfg - } - where - ShelleyLeaderCredentials { - shelleyLeaderCredentialsCanBeLeader = canBeLeader - , shelleyLeaderCredentialsLabel = label - } = credentials + BlockForging + { forgeLabel = label <> "_" <> T.pack (L.eraName @era) + , canBeLeader = canBeLeader + , updateForgeState = \_ curSlot _ -> + forgeStateUpdateInfoFromUpdateInfo + <$> HotKey.evolve hotKey (slotToPeriod curSlot) + , checkCanForge = \cfg curSlot _tickedChainDepState -> + tpraosCheckCanForge + (configConsensus cfg) + forgingVRFHash + curSlot + , forgeBlock = \cfg -> + forgeShelleyBlock + hotKey + canBeLeader + cfg + } + where + ShelleyLeaderCredentials + { 
shelleyLeaderCredentialsCanBeLeader = canBeLeader + , shelleyLeaderCredentialsLabel = label + } = credentials - forgingVRFHash :: Hash HASH (VRF.VerKeyVRF (VRF c)) - forgingVRFHash = - VRF.hashVerKeyVRF - . VRF.deriveVerKeyVRF - . praosCanBeLeaderSignKeyVRF - $ canBeLeader + forgingVRFHash :: Hash HASH (VRF.VerKeyVRF (VRF c)) + forgingVRFHash = + VRF.hashVerKeyVRF + . VRF.deriveVerKeyVRF + . praosCanBeLeaderSignKeyVRF + $ canBeLeader {------------------------------------------------------------------------------- ProtocolInfo @@ -161,61 +163,64 @@ shelleySharedBlockForging hotKey slotToPeriod credentials = -- 'assertWithMsg'. validateGenesis :: SL.ShelleyGenesis -> Either String () validateGenesis = first errsToString . SL.validateGenesis - where - errsToString :: [SL.ValidationErr] -> String - errsToString errs = - Text.unpack $ Text.unlines - ("Invalid genesis config:" : map SL.describeValidationErr errs) + where + errsToString :: [SL.ValidationErr] -> String + errsToString errs = + Text.unpack $ + Text.unlines + ("Invalid genesis config:" : map SL.describeValidationErr errs) protocolInfoShelley :: - forall m c. - ( IOLike m - , ShelleyCompatible (TPraos c) ShelleyEra - , TxLimits (ShelleyBlock (TPraos c) ShelleyEra) - ) - => SL.ShelleyGenesis - -> ProtocolParamsShelleyBased c - -> SL.ProtVer - -> ( ProtocolInfo (ShelleyBlock (TPraos c) ShelleyEra) - , m [BlockForging m (ShelleyBlock (TPraos c) ShelleyEra)] - ) -protocolInfoShelley shelleyGenesis - protocolParamsShelleyBased - protVer = + forall m c. 
+ ( IOLike m + , ShelleyCompatible (TPraos c) ShelleyEra + , TxLimits (ShelleyBlock (TPraos c) ShelleyEra) + ) => + SL.ShelleyGenesis -> + ProtocolParamsShelleyBased c -> + SL.ProtVer -> + ( ProtocolInfo (ShelleyBlock (TPraos c) ShelleyEra) + , m [BlockForging m (ShelleyBlock (TPraos c) ShelleyEra)] + ) +protocolInfoShelley + shelleyGenesis + protocolParamsShelleyBased + protVer = protocolInfoTPraosShelleyBased protocolParamsShelleyBased (L.mkShelleyTransitionConfig shelleyGenesis) protVer protocolInfoTPraosShelleyBased :: - forall m era c. - ( IOLike m - , ShelleyCompatible (TPraos c) era - , TxLimits (ShelleyBlock (TPraos c) era) - ) - => ProtocolParamsShelleyBased c - -> L.TransitionConfig era - -> SL.ProtVer - -- ^ see 'shelleyProtVer', mutatis mutandi - -> ( ProtocolInfo (ShelleyBlock (TPraos c) era) - , m [BlockForging m (ShelleyBlock (TPraos c) era)] - ) -protocolInfoTPraosShelleyBased ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = initialNonce - , shelleyBasedLeaderCredentials = credentialss - } - transitionCfg - protVer = + forall m era c. 
+ ( IOLike m + , ShelleyCompatible (TPraos c) era + , TxLimits (ShelleyBlock (TPraos c) era) + ) => + ProtocolParamsShelleyBased c -> + L.TransitionConfig era -> + -- | see 'shelleyProtVer', mutatis mutandi + SL.ProtVer -> + ( ProtocolInfo (ShelleyBlock (TPraos c) era) + , m [BlockForging m (ShelleyBlock (TPraos c) era)] + ) +protocolInfoTPraosShelleyBased + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = initialNonce + , shelleyBasedLeaderCredentials = credentialss + } + transitionCfg + protVer = assertWithMsg (validateGenesis genesis) $ - ( ProtocolInfo { - pInfoConfig = topLevelConfig - , pInfoInitLedger = initExtLedgerState - } - , traverse - (shelleyBlockForging tpraosParams) - credentialss - ) - where + ( ProtocolInfo + { pInfoConfig = topLevelConfig + , pInfoInitLedger = initExtLedgerState + } + , traverse + (shelleyBlockForging tpraosParams) + credentialss + ) + where genesis :: SL.ShelleyGenesis genesis = transitionCfg ^. L.tcShelleyGenesisL @@ -223,66 +228,73 @@ protocolInfoTPraosShelleyBased ProtocolParamsShelleyBased { maxMajorProtVer = MaxMajorProtVer $ SL.pvMajor protVer topLevelConfig :: TopLevelConfig (ShelleyBlock (TPraos c) era) - topLevelConfig = TopLevelConfig { - topLevelConfigProtocol = consensusConfig - , topLevelConfigLedger = ledgerConfig - , topLevelConfigBlock = blockConfig - , topLevelConfigCodec = ShelleyCodecConfig - , topLevelConfigStorage = storageConfig - , topLevelConfigCheckpoints = emptyCheckpointsMap - } + topLevelConfig = + TopLevelConfig + { topLevelConfigProtocol = consensusConfig + , topLevelConfigLedger = ledgerConfig + , topLevelConfigBlock = blockConfig + , topLevelConfigCodec = ShelleyCodecConfig + , topLevelConfigStorage = storageConfig + , topLevelConfigCheckpoints = emptyCheckpointsMap + } consensusConfig :: ConsensusConfig (BlockProtocol (ShelleyBlock (TPraos c) era)) - consensusConfig = TPraosConfig { - tpraosParams - , tpraosEpochInfo = epochInfo - } + consensusConfig = + TPraosConfig + { tpraosParams + , 
tpraosEpochInfo = epochInfo + } ledgerConfig :: LedgerConfig (ShelleyBlock (TPraos c) era) ledgerConfig = - mkShelleyLedgerConfig - genesis - (transitionCfg ^. L.tcTranslationContextL) - epochInfo + mkShelleyLedgerConfig + genesis + (transitionCfg ^. L.tcTranslationContextL) + epochInfo epochInfo :: EpochInfo (Except History.PastHorizonException) epochInfo = - fixedEpochInfo - (SL.sgEpochLength genesis) - (mkSlotLength $ SL.fromNominalDiffTimeMicro $ SL.sgSlotLength genesis) + fixedEpochInfo + (SL.sgEpochLength genesis) + (mkSlotLength $ SL.fromNominalDiffTimeMicro $ SL.sgSlotLength genesis) tpraosParams :: TPraosParams tpraosParams = mkTPraosParams maxMajorProtVer initialNonce genesis blockConfig :: BlockConfig (ShelleyBlock (TPraos c) era) blockConfig = - mkShelleyBlockConfig - protVer - genesis - (shelleyBlockIssuerVKey <$> credentialss) + mkShelleyBlockConfig + protVer + genesis + (shelleyBlockIssuerVKey <$> credentialss) storageConfig :: StorageConfig (ShelleyBlock (TPraos c) era) - storageConfig = ShelleyStorageConfig { - shelleyStorageConfigSlotsPerKESPeriod = tpraosSlotsPerKESPeriod tpraosParams - , shelleyStorageConfigSecurityParam = tpraosSecurityParam tpraosParams + storageConfig = + ShelleyStorageConfig + { shelleyStorageConfigSlotsPerKESPeriod = tpraosSlotsPerKESPeriod tpraosParams + , shelleyStorageConfigSecurityParam = tpraosSecurityParam tpraosParams } initLedgerState :: LedgerState (ShelleyBlock (TPraos c) era) ValuesMK - initLedgerState = unstowLedgerTables ShelleyLedgerState { - shelleyLedgerTip = Origin - , shelleyLedgerState = - L.injectIntoTestState transitionCfg - $ L.createInitialState transitionCfg - , shelleyLedgerTransition = ShelleyTransitionInfo {shelleyAfterVoting = 0} - , shelleyLedgerTables = emptyLedgerTables - } + initLedgerState = + unstowLedgerTables + ShelleyLedgerState + { shelleyLedgerTip = Origin + , shelleyLedgerState = + L.injectIntoTestState transitionCfg $ + L.createInitialState transitionCfg + , shelleyLedgerTransition = 
ShelleyTransitionInfo{shelleyAfterVoting = 0} + , shelleyLedgerTables = emptyLedgerTables + } initChainDepState :: TPraosState - initChainDepState = TPraosState Origin $ - SL.initialChainDepState initialNonce (SL.sgGenDelegs genesis) + initChainDepState = + TPraosState Origin $ + SL.initialChainDepState initialNonce (SL.sgGenDelegs genesis) initExtLedgerState :: ExtLedgerState (ShelleyBlock (TPraos c) era) ValuesMK - initExtLedgerState = ExtLedgerState { - ledgerState = initLedgerState - , headerState = genesisHeaderState initChainDepState - } + initExtLedgerState = + ExtLedgerState + { ledgerState = initLedgerState + , headerState = genesisHeaderState initChainDepState + } diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/Abstract.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/Abstract.hs index cf70fac574..a243fbdc20 100644 --- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/Abstract.hs +++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/Abstract.hs @@ -16,8 +16,8 @@ -- Everything in this module is indexed on the protocol (or the crypto), -- rather than on the block type. This allows it to be imported in -- @Ouroboros.Consensus.Shelley.Ledger.Block@. -module Ouroboros.Consensus.Shelley.Protocol.Abstract ( - ProtoCrypto +module Ouroboros.Consensus.Shelley.Protocol.Abstract + ( ProtoCrypto , ProtocolHeaderSupportsEnvelope (..) , ProtocolHeaderSupportsKES (..) , ProtocolHeaderSupportsLedger (..) 
@@ -27,32 +27,41 @@ module Ouroboros.Consensus.Shelley.Protocol.Abstract ( , ShelleyProtocolHeader ) where -import Cardano.Binary (FromCBOR (fromCBOR), ToCBOR (toCBOR)) -import qualified Cardano.Crypto.Hash as Hash -import Cardano.Crypto.VRF (OutputVRF) -import Cardano.Ledger.BaseTypes (ProtVer) -import Cardano.Ledger.BHeaderView (BHeaderView) -import Cardano.Ledger.Hashes (EraIndependentBlockBody, - EraIndependentBlockHeader, HASH) -import Cardano.Ledger.Keys (KeyRole (BlockIssuer), VKey) -import Cardano.Protocol.Crypto (Crypto, VRF) -import Cardano.Protocol.TPraos.BHeader (PrevHash) -import Cardano.Slotting.Block (BlockNo) -import Cardano.Slotting.Slot (SlotNo) -import Codec.Serialise (Serialise (..)) -import Control.Monad.Except (Except) -import Data.Kind (Type) -import Data.Typeable (Typeable) -import Data.Word (Word64) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Numeric.Natural (Natural) -import Ouroboros.Consensus.Protocol.Abstract (CanBeLeader, - ChainDepState, ConsensusConfig, ConsensusProtocol, - IsLeader, LedgerView, ValidateView) -import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) -import Ouroboros.Consensus.Protocol.Signed (SignedHeader) -import Ouroboros.Consensus.Util.Condense (Condense (..)) +import Cardano.Binary (FromCBOR (fromCBOR), ToCBOR (toCBOR)) +import Cardano.Crypto.Hash qualified as Hash +import Cardano.Crypto.VRF (OutputVRF) +import Cardano.Ledger.BHeaderView (BHeaderView) +import Cardano.Ledger.BaseTypes (ProtVer) +import Cardano.Ledger.Hashes + ( EraIndependentBlockBody + , EraIndependentBlockHeader + , HASH + ) +import Cardano.Ledger.Keys (KeyRole (BlockIssuer), VKey) +import Cardano.Protocol.Crypto (Crypto, VRF) +import Cardano.Protocol.TPraos.BHeader (PrevHash) +import Cardano.Slotting.Block (BlockNo) +import Cardano.Slotting.Slot (SlotNo) +import Codec.Serialise (Serialise (..)) +import Control.Monad.Except (Except) +import Data.Kind (Type) +import Data.Typeable (Typeable) +import Data.Word 
(Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Numeric.Natural (Natural) +import Ouroboros.Consensus.Protocol.Abstract + ( CanBeLeader + , ChainDepState + , ConsensusConfig + , ConsensusProtocol + , IsLeader + , LedgerView + , ValidateView + ) +import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) +import Ouroboros.Consensus.Protocol.Signed (SignedHeader) +import Ouroboros.Consensus.Util.Condense (Condense (..)) {------------------------------------------------------------------------------- Crypto @@ -68,19 +77,16 @@ newtype ShelleyHash = ShelleyHash { unShelleyHash :: Hash.Hash HASH EraIndependentBlockHeader } deriving stock (Eq, Ord, Show, Generic) - deriving anyclass (NoThunks) + deriving anyclass NoThunks deriving newtype instance ToCBOR ShelleyHash deriving newtype instance FromCBOR ShelleyHash -instance - Serialise ShelleyHash - where +instance Serialise ShelleyHash where encode = toCBOR decode = fromCBOR - instance Condense ShelleyHash where condense = show . unShelleyHash @@ -89,16 +95,15 @@ instance Condense ShelleyHash where -------------------------------------------------------------------------------} -- | Shelley header, determined by the associated protocol. --- type family ShelleyProtocolHeader proto = (sh :: Type) | sh -> proto -- | Indicates that the header (determined by the protocol) supports " Envelope -- " functionality. Envelope functionality refers to the minimal functionality -- required to construct a chain. class - ( Eq (EnvelopeCheckError proto), - NoThunks (EnvelopeCheckError proto), - Show (EnvelopeCheckError proto) + ( Eq (EnvelopeCheckError proto) + , NoThunks (EnvelopeCheckError proto) + , Show (EnvelopeCheckError proto) ) => ProtocolHeaderSupportsEnvelope proto where @@ -126,7 +131,6 @@ class -- header (made specific to KES-using protocols through the need to handle -- the hot key). 
 class ProtocolHeaderSupportsKES proto where
-
   -- | Extract the "slots per KES period" value from the protocol config.
   --
   -- Note that we do not require `ConsensusConfig` in 'verifyHeaderIntegrity'
@@ -163,7 +167,6 @@ class ProtocolHeaderSupportsKES proto where
 -- | ProtocolHeaderSupportsProtocol` provides support for the concrete
 -- block header to support the `ConsensusProtocol` itself.
 class ProtocolHeaderSupportsProtocol proto where
-
   type CannotForgeError proto :: Type

   protocolHeaderView ::
@@ -173,6 +176,7 @@ class ProtocolHeaderSupportsProtocol proto where
     ShelleyProtocolHeader proto -> VKey 'BlockIssuer
   pHeaderIssueNo :: ShelleyProtocolHeader proto -> Word64
+
   -- | A VRF value in the header, used to choose between otherwise equally
   -- preferable chains.
   pTieBreakVRFValue ::
@@ -184,19 +188,18 @@ class ProtocolHeaderSupportsProtocol proto where
 class ProtocolHeaderSupportsLedger proto where
   mkHeaderView :: ShelleyProtocolHeader proto -> BHeaderView

-
 {-------------------------------------------------------------------------------
   Key constraints
-------------------------------------------------------------------------------}

 class
-  ( ConsensusProtocol proto,
-    Typeable (ShelleyProtocolHeader proto),
-    ProtocolHeaderSupportsEnvelope proto,
-    ProtocolHeaderSupportsKES proto,
-    ProtocolHeaderSupportsProtocol proto,
-    ProtocolHeaderSupportsLedger proto,
-    Serialise (ChainDepState proto),
-    SignedHeader (ShelleyProtocolHeader proto)
+  ( ConsensusProtocol proto
+  , Typeable (ShelleyProtocolHeader proto)
+  , ProtocolHeaderSupportsEnvelope proto
+  , ProtocolHeaderSupportsKES proto
+  , ProtocolHeaderSupportsProtocol proto
+  , ProtocolHeaderSupportsLedger proto
+  , Serialise (ChainDepState proto)
+  , SignedHeader (ShelleyProtocolHeader proto)
   ) =>
   ShelleyProtocol proto
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/Praos.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/Praos.hs
index f49514d481..068ff27f13 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/Praos.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/Praos.hs
@@ -2,49 +2,55 @@
 {-# LANGUAGE NamedFieldPuns #-}
 {-# LANGUAGE TypeApplications #-}
 {-# LANGUAGE TypeFamilies #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 module Ouroboros.Consensus.Shelley.Protocol.Praos (PraosEnvelopeError (..)) where

-import qualified Cardano.Crypto.KES as KES
-import Cardano.Crypto.VRF (certifiedOutput)
-import Cardano.Ledger.BaseTypes (ProtVer (ProtVer), Version)
-import Cardano.Ledger.BHeaderView
-import Cardano.Ledger.Keys (hashKey)
-import Cardano.Ledger.Slot (SlotNo (unSlotNo))
-import Cardano.Protocol.TPraos.OCert
-    (OCert (ocertKESPeriod, ocertVkHot))
-import qualified Cardano.Protocol.TPraos.OCert as SL
-import Control.Monad (unless)
-import Control.Monad.Except (throwError)
-import Data.Either (isRight)
-import Data.Word (Word16, Word32)
-import GHC.Generics (Generic)
-import NoThunks.Class (NoThunks)
-import Ouroboros.Consensus.Protocol.Praos
-import Ouroboros.Consensus.Protocol.Praos.Common
-    (MaxMajorProtVer (MaxMajorProtVer))
-import Ouroboros.Consensus.Protocol.Praos.Header (Header (..),
-    HeaderBody (..), headerHash, headerSize)
-import Ouroboros.Consensus.Protocol.Praos.Views
-import Ouroboros.Consensus.Protocol.Signed
-import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto,
-    ProtocolHeaderSupportsEnvelope (..),
-    ProtocolHeaderSupportsKES (..),
-    ProtocolHeaderSupportsLedger (..),
-    ProtocolHeaderSupportsProtocol (..),
-    ShelleyHash (ShelleyHash), ShelleyProtocol,
-    ShelleyProtocolHeader)
-
+import Cardano.Crypto.KES qualified as KES
+import Cardano.Crypto.VRF (certifiedOutput)
+import Cardano.Ledger.BHeaderView
+import Cardano.Ledger.BaseTypes (ProtVer (ProtVer), Version)
+import Cardano.Ledger.Keys (hashKey)
+import Cardano.Ledger.Slot (SlotNo (unSlotNo))
+import Cardano.Protocol.TPraos.OCert
+  ( OCert (ocertKESPeriod, ocertVkHot)
+  )
+import Cardano.Protocol.TPraos.OCert qualified as SL
+import Control.Monad (unless)
+import Control.Monad.Except (throwError)
+import Data.Either (isRight)
+import Data.Word (Word16, Word32)
+import GHC.Generics (Generic)
+import NoThunks.Class (NoThunks)
+import Ouroboros.Consensus.Protocol.Praos
+import Ouroboros.Consensus.Protocol.Praos.Common
+  ( MaxMajorProtVer (MaxMajorProtVer)
+  )
+import Ouroboros.Consensus.Protocol.Praos.Header
+  ( Header (..)
+  , HeaderBody (..)
+  , headerHash
+  , headerSize
+  )
+import Ouroboros.Consensus.Protocol.Praos.Views
+import Ouroboros.Consensus.Protocol.Signed
+import Ouroboros.Consensus.Shelley.Protocol.Abstract
+  ( ProtoCrypto
+  , ProtocolHeaderSupportsEnvelope (..)
+  , ProtocolHeaderSupportsKES (..)
+  , ProtocolHeaderSupportsLedger (..)
+  , ProtocolHeaderSupportsProtocol (..)
+  , ShelleyHash (ShelleyHash)
+  , ShelleyProtocol
+  , ShelleyProtocolHeader
+  )

 type instance ProtoCrypto (Praos c) = c

 type instance ShelleyProtocolHeader (Praos c) = Header c

 data PraosEnvelopeError
-  = ObsoleteNode Version Version
-  -- ^ This is a subtle case.
+  = -- | This is a subtle case.
     --
     -- This node is explicitly rejecting the header, but the header isn't
     -- necessarily _directly_ at fault.
@@ -84,6 +90,7 @@ data PraosEnvelopeError
     -- block's non-header content either) where the header could be validated
     -- but its underlying block could not. See
     -- .
+    ObsoleteNode Version Version
   | HeaderSizeTooLarge Int Word16
   | BlockSizeTooLarge Word32 Word32
   deriving (Eq, Generic, Show)
@@ -109,70 +116,70 @@
     unless (bhviewBSize bhv <= maxBodySize) $
       throwError $
         BlockSizeTooLarge (bhviewBSize bhv) maxBodySize
-    where
-      pp = praosParams cfg
-      (MaxMajorProtVer maxpv) = praosMaxMajorPV pp
-      (ProtVer m _) = lvProtocolVersion lv
-      maxHeaderSize = lvMaxHeaderSize lv
-      maxBodySize = lvMaxBodySize lv
-      bhv = mkHeaderView hdr
+   where
+    pp = praosParams cfg
+    (MaxMajorProtVer maxpv) = praosMaxMajorPV pp
+    (ProtVer m _) = lvProtocolVersion lv
+    maxHeaderSize = lvMaxHeaderSize lv
+    maxBodySize = lvMaxBodySize lv
+    bhv = mkHeaderView hdr

 instance PraosCrypto c => ProtocolHeaderSupportsKES (Praos c) where
   configSlotsPerKESPeriod cfg = praosSlotsPerKESPeriod $ praosParams cfg
   verifyHeaderIntegrity slotsPerKESPeriod header =
     isRight $ KES.verifySignedKES () ocertVkHot t headerBody headerSig
-    where
-      Header {headerBody, headerSig} = header
-      SL.OCert
-        { ocertVkHot,
-          ocertKESPeriod = SL.KESPeriod startOfKesPeriod
-        } = hbOCert headerBody
-
-      currentKesPeriod =
-        fromIntegral $
-          unSlotNo (hbSlotNo headerBody) `div` slotsPerKESPeriod
-
-      t
-        | currentKesPeriod >= startOfKesPeriod =
-            currentKesPeriod - startOfKesPeriod
-        | otherwise =
-            0
+   where
+    Header{headerBody, headerSig} = header
+    SL.OCert
+      { ocertVkHot
+      , ocertKESPeriod = SL.KESPeriod startOfKesPeriod
+      } = hbOCert headerBody
+
+    currentKesPeriod =
+      fromIntegral $
+        unSlotNo (hbSlotNo headerBody) `div` slotsPerKESPeriod
+
+    t
+      | currentKesPeriod >= startOfKesPeriod =
+          currentKesPeriod - startOfKesPeriod
+      | otherwise =
+          0

   mkHeader hk cbl il slotNo blockNo prevHash bbHash sz protVer = do
-    PraosFields {praosSignature, praosToSign} <- forgePraosFields hk cbl il mkBhBodyBytes
+    PraosFields{praosSignature, praosToSign} <- forgePraosFields hk cbl il mkBhBodyBytes
     pure $ Header praosToSign praosSignature
-    where
-      mkBhBodyBytes
-        PraosToSign
-          { praosToSignIssuerVK,
-            praosToSignVrfVK,
-            praosToSignVrfRes,
-            praosToSignOCert
-          } =
-          HeaderBody
-            { hbBlockNo = blockNo,
-              hbSlotNo = slotNo,
-              hbPrev = prevHash,
-              hbVk = praosToSignIssuerVK,
-              hbVrfVk = praosToSignVrfVK,
-              hbVrfRes = praosToSignVrfRes,
-              hbBodySize = fromIntegral sz,
-              hbBodyHash = bbHash,
-              hbOCert = praosToSignOCert,
-              hbProtVer = protVer
-            }
+   where
+    mkBhBodyBytes
+      PraosToSign
+        { praosToSignIssuerVK
+        , praosToSignVrfVK
+        , praosToSignVrfRes
+        , praosToSignOCert
+        } =
+        HeaderBody
+          { hbBlockNo = blockNo
+          , hbSlotNo = slotNo
+          , hbPrev = prevHash
+          , hbVk = praosToSignIssuerVK
+          , hbVrfVk = praosToSignVrfVK
+          , hbVrfRes = praosToSignVrfRes
+          , hbBodySize = fromIntegral sz
+          , hbBodyHash = bbHash
+          , hbOCert = praosToSignOCert
+          , hbProtVer = protVer
+          }

 instance PraosCrypto c => ProtocolHeaderSupportsProtocol (Praos c) where
   type CannotForgeError (Praos c) = PraosCannotForge c
-  protocolHeaderView Header {headerBody, headerSig} =
+  protocolHeaderView Header{headerBody, headerSig} =
     HeaderView
-      { hvPrevHash = hbPrev headerBody,
-        hvVK = hbVk headerBody,
-        hvVrfVK = hbVrfVk headerBody,
-        hvVrfRes = hbVrfRes headerBody,
-        hvOCert = hbOCert headerBody,
-        hvSlotNo = hbSlotNo headerBody,
-        hvSigned = headerBody,
-        hvSignature = headerSig
+      { hvPrevHash = hbPrev headerBody
+      , hvVK = hbVk headerBody
+      , hvVrfVK = hbVrfVk headerBody
+      , hvVrfRes = hbVrfRes headerBody
+      , hvOCert = hbOCert headerBody
+      , hvSlotNo = hbSlotNo headerBody
+      , hvSigned = headerBody
+      , hvSignature = headerSig
       }
   pHeaderIssuer = hbVk . headerBody
   pHeaderIssueNo = SL.ocertN . hbOCert . headerBody
@@ -185,13 +192,13 @@ instance PraosCrypto c => ProtocolHeaderSupportsProtocol (Praos c) where
   pTieBreakVRFValue = certifiedOutput . hbVrfRes . headerBody

 instance PraosCrypto c => ProtocolHeaderSupportsLedger (Praos c) where
-  mkHeaderView hdr@Header {headerBody} =
+  mkHeaderView hdr@Header{headerBody} =
     BHeaderView
-      { bhviewID = hashKey $ hbVk headerBody,
-        bhviewBSize = hbBodySize headerBody,
-        bhviewHSize = headerSize hdr,
-        bhviewBHash = hbBodyHash headerBody,
-        bhviewSlot = hbSlotNo headerBody
+      { bhviewID = hashKey $ hbVk headerBody
+      , bhviewBSize = hbBodySize headerBody
+      , bhviewHSize = headerSize hdr
+      , bhviewBHash = hbBodyHash headerBody
+      , bhviewSlot = hbSlotNo headerBody
       }

 type instance Signed (Header c) = HeaderBody c
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/TPraos.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/TPraos.hs
index 7764bd9448..60ccab6919 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/TPraos.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/Protocol/TPraos.hs
@@ -2,38 +2,50 @@
 {-# LANGUAGE ScopedTypeVariables #-}
 {-# LANGUAGE TypeApplications #-}
 {-# LANGUAGE TypeFamilies #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 module Ouroboros.Consensus.Shelley.Protocol.TPraos () where

-import qualified Cardano.Crypto.KES as SL
-import Cardano.Crypto.VRF (certifiedOutput)
-import Cardano.Ledger.Chain (ChainPredicateFailure)
-import Cardano.Ledger.Hashes (originalBytesSize)
-import qualified Cardano.Ledger.Shelley.API as SL
-import Cardano.Protocol.TPraos.API (PraosCrypto)
-import qualified Cardano.Protocol.TPraos.API as SL
-import qualified Cardano.Protocol.TPraos.BHeader as SL
-import Cardano.Protocol.TPraos.OCert (ocertKESPeriod, ocertVkHot)
-import qualified Cardano.Protocol.TPraos.OCert as SL
-import Cardano.Slotting.Slot (unSlotNo)
-import Data.Either (isRight)
-import Data.Word (Word32)
-import Numeric.Natural (Natural)
-import Ouroboros.Consensus.Protocol.Signed (Signed,
-    SignedHeader (headerSigned))
-import Ouroboros.Consensus.Protocol.TPraos
-    (MaxMajorProtVer (MaxMajorProtVer), TPraos,
-    TPraosCannotForge, TPraosFields (..), TPraosToSign (..),
-    forgeTPraosFields, tpraosMaxMajorPV, tpraosParams,
-    tpraosSlotsPerKESPeriod)
-import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto,
-    ProtocolHeaderSupportsEnvelope (..),
-    ProtocolHeaderSupportsKES (..),
-    ProtocolHeaderSupportsLedger (..),
-    ProtocolHeaderSupportsProtocol (..), ShelleyHash (..),
-    ShelleyProtocol, ShelleyProtocolHeader, protocolHeaderView)
+import Cardano.Crypto.KES qualified as SL
+import Cardano.Crypto.VRF (certifiedOutput)
+import Cardano.Ledger.Chain (ChainPredicateFailure)
+import Cardano.Ledger.Hashes (originalBytesSize)
+import Cardano.Ledger.Shelley.API qualified as SL
+import Cardano.Protocol.TPraos.API (PraosCrypto)
+import Cardano.Protocol.TPraos.API qualified as SL
+import Cardano.Protocol.TPraos.BHeader qualified as SL
+import Cardano.Protocol.TPraos.OCert (ocertKESPeriod, ocertVkHot)
+import Cardano.Protocol.TPraos.OCert qualified as SL
+import Cardano.Slotting.Slot (unSlotNo)
+import Data.Either (isRight)
+import Data.Word (Word32)
+import Numeric.Natural (Natural)
+import Ouroboros.Consensus.Protocol.Signed
+  ( Signed
+  , SignedHeader (headerSigned)
+  )
+import Ouroboros.Consensus.Protocol.TPraos
+  ( MaxMajorProtVer (MaxMajorProtVer)
+  , TPraos
+  , TPraosCannotForge
+  , TPraosFields (..)
+  , TPraosToSign (..)
+  , forgeTPraosFields
+  , tpraosMaxMajorPV
+  , tpraosParams
+  , tpraosSlotsPerKESPeriod
+  )
+import Ouroboros.Consensus.Shelley.Protocol.Abstract
+  ( ProtoCrypto
+  , ProtocolHeaderSupportsEnvelope (..)
+  , ProtocolHeaderSupportsKES (..)
+  , ProtocolHeaderSupportsLedger (..)
+  , ProtocolHeaderSupportsProtocol (..)
+  , ShelleyHash (..)
+  , ShelleyProtocol
+  , ShelleyProtocolHeader
+  , protocolHeaderView
+  )

 type instance ProtoCrypto (TPraos c) = c

@@ -55,56 +67,56 @@ instance PraosCrypto c => ProtocolHeaderSupportsEnvelope (TPraos c) where
       maxPV
       (SL.lvChainChecks lv)
       (SL.makeHeaderView $ protocolHeaderView @(TPraos c) hdr)
-    where
-      MaxMajorProtVer maxPV = tpraosMaxMajorPV $ tpraosParams cfg
+   where
+    MaxMajorProtVer maxPV = tpraosMaxMajorPV $ tpraosParams cfg

 instance PraosCrypto c => ProtocolHeaderSupportsKES (TPraos c) where
   configSlotsPerKESPeriod cfg = tpraosSlotsPerKESPeriod $ tpraosParams cfg
   verifyHeaderIntegrity slotsPerKESPeriod hdr =
     isRight $ SL.verifySignedKES () ocertVkHot t hdrBody hdrSignature
-    where
-      SL.BHeader hdrBody hdrSignature = hdr
-      SL.OCert
-        { ocertVkHot,
-          ocertKESPeriod = SL.KESPeriod startOfKesPeriod
-        } = SL.bheaderOCert hdrBody
-
-      currentKesPeriod =
-        fromIntegral $
-          unSlotNo (SL.bheaderSlotNo $ SL.bhbody hdr) `div` slotsPerKESPeriod
-
-      t
-        | currentKesPeriod >= startOfKesPeriod =
+   where
+    SL.BHeader hdrBody hdrSignature = hdr
+    SL.OCert
+      { ocertVkHot
+      , ocertKESPeriod = SL.KESPeriod startOfKesPeriod
+      } = SL.bheaderOCert hdrBody
+
+    currentKesPeriod =
+      fromIntegral $
+        unSlotNo (SL.bheaderSlotNo $ SL.bhbody hdr) `div` slotsPerKESPeriod
+
+    t
+      | currentKesPeriod >= startOfKesPeriod =
          currentKesPeriod - startOfKesPeriod
-        | otherwise =
+      | otherwise =
          0

   mkHeader hotKey canBeLeader isLeader curSlot curNo prevHash bbHash actualBodySize protVer = do
-    TPraosFields {tpraosSignature, tpraosToSign} <-
+    TPraosFields{tpraosSignature, tpraosToSign} <-
       forgeTPraosFields hotKey canBeLeader isLeader mkBhBody
     pure $ SL.BHeader tpraosToSign tpraosSignature
-    where
-      mkBhBody toSign =
-        SL.BHBody
-          { SL.bheaderPrev = prevHash,
-            SL.bheaderVk = tpraosToSignIssuerVK,
-            SL.bheaderVrfVk = tpraosToSignVrfVK,
-            SL.bheaderSlotNo = curSlot,
-            SL.bheaderBlockNo = curNo,
-            SL.bheaderEta = tpraosToSignEta,
-            SL.bheaderL = tpraosToSignLeader,
-            SL.bsize = fromIntegral actualBodySize,
-            SL.bhash = bbHash,
-            SL.bheaderOCert = tpraosToSignOCert,
-            SL.bprotver = protVer
-          }
-        where
-          TPraosToSign
-            { tpraosToSignIssuerVK,
-              tpraosToSignVrfVK,
-              tpraosToSignEta,
-              tpraosToSignLeader,
-              tpraosToSignOCert
-            } = toSign
+   where
+    mkBhBody toSign =
+      SL.BHBody
+        { SL.bheaderPrev = prevHash
+        , SL.bheaderVk = tpraosToSignIssuerVK
+        , SL.bheaderVrfVk = tpraosToSignVrfVK
+        , SL.bheaderSlotNo = curSlot
+        , SL.bheaderBlockNo = curNo
+        , SL.bheaderEta = tpraosToSignEta
+        , SL.bheaderL = tpraosToSignLeader
+        , SL.bsize = fromIntegral actualBodySize
+        , SL.bhash = bbHash
+        , SL.bheaderOCert = tpraosToSignOCert
+        , SL.bprotver = protVer
+        }
+     where
+      TPraosToSign
+        { tpraosToSignIssuerVK
+        , tpraosToSignVrfVK
+        , tpraosToSignEta
+        , tpraosToSignLeader
+        , tpraosToSignOCert
+        } = toSign

 instance PraosCrypto c => ProtocolHeaderSupportsProtocol (TPraos c) where
   type CannotForgeError (TPraos c) = TPraosCannotForge c
diff --git a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/ShelleyHFC.hs b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/ShelleyHFC.hs
index 403564ee40..e1578ab89b 100644
--- a/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/ShelleyHFC.hs
+++ b/ouroboros-consensus-cardano/src/shelley/Ouroboros/Consensus/Shelley/ShelleyHFC.hs
@@ -15,12 +15,11 @@
 {-# LANGUAGE TypeFamilies #-}
 {-# LANGUAGE TypeOperators #-}
 {-# LANGUAGE UndecidableInstances #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 -- | This module is the Shelley Hard Fork Combinator
-module Ouroboros.Consensus.Shelley.ShelleyHFC (
-    ProtocolShelley
+module Ouroboros.Consensus.Shelley.ShelleyHFC
+  ( ProtocolShelley
   , ShelleyBlockHFC
   , ShelleyPartialLedgerConfig (..)
   , crossEraForecastAcrossShelley
@@ -28,63 +27,73 @@ module Ouroboros.Consensus.Shelley.ShelleyHFC (
   , translateChainDepStateAcrossShelley
   ) where

-import qualified Cardano.Ledger.Api.Era as L
-import qualified Cardano.Ledger.BaseTypes as SL (mkVersion, unNonZero)
-import Cardano.Ledger.Binary.Decoding (decShareCBOR, decodeMap,
-    decodeMemPack, internsFromMap)
-import Cardano.Ledger.Binary.Encoding (encodeMap, encodeMemPack,
-    toPlainEncoding)
-import qualified Cardano.Ledger.Core as SL
-import qualified Cardano.Ledger.Shelley.API as SL
-import qualified Cardano.Ledger.Shelley.LedgerState as SL
-import qualified Cardano.Ledger.UMap as SL
-import Cardano.Protocol.Crypto (Crypto)
-import qualified Cardano.Protocol.TPraos.API as SL
-import Codec.CBOR.Decoding
-import Codec.CBOR.Encoding
-import Control.Monad (guard)
-import Control.Monad.Except (runExcept, throwError)
-import Data.Coerce
-import qualified Data.Map.Strict as Map
-import Data.MemPack
-import Data.SOP.BasicFunctors
-import Data.SOP.Functors (Flip (..))
-import Data.SOP.Index (Index (..))
-import Data.SOP.InPairs (RequiringBoth (..), ignoringBoth)
-import Data.SOP.Strict
-import qualified Data.SOP.Tails as Tails
-import qualified Data.SOP.Telescope as Telescope
-import qualified Data.Text as T (pack)
-import Data.Typeable
-import Data.Void (Void)
-import Data.Word
-import Lens.Micro ((^.))
-import NoThunks.Class
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.Config
-import Ouroboros.Consensus.Forecast
-import qualified Ouroboros.Consensus.Forecast as Forecast
-import Ouroboros.Consensus.HardFork.Combinator hiding
-    (translateChainDepState)
-import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common
-import Ouroboros.Consensus.HardFork.Combinator.State.Types
-import Ouroboros.Consensus.HardFork.History (Bound (boundSlot))
-import Ouroboros.Consensus.HardFork.Simple
-import Ouroboros.Consensus.Ledger.Abstract
-import Ouroboros.Consensus.Ledger.SupportsMempool (TxLimits)
-import Ouroboros.Consensus.Ledger.SupportsProtocol
-    (LedgerSupportsProtocol, ledgerViewForecastAt)
-import Ouroboros.Consensus.Node.NetworkProtocolVersion
-import Ouroboros.Consensus.Protocol.Abstract
-import Ouroboros.Consensus.Protocol.Praos
-import Ouroboros.Consensus.Protocol.TPraos
-import Ouroboros.Consensus.Shelley.Eras
-import Ouroboros.Consensus.Shelley.Ledger
-import Ouroboros.Consensus.Shelley.Ledger.Inspect as Shelley.Inspect
-import Ouroboros.Consensus.Shelley.Node ()
-import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto)
-import Ouroboros.Consensus.TypeFamilyWrappers
-import Ouroboros.Consensus.Util.IndexedMemPack
+import Cardano.Ledger.Api.Era qualified as L
+import Cardano.Ledger.BaseTypes qualified as SL (mkVersion, unNonZero)
+import Cardano.Ledger.Binary.Decoding
+  ( decShareCBOR
+  , decodeMap
+  , decodeMemPack
+  , internsFromMap
+  )
+import Cardano.Ledger.Binary.Encoding
+  ( encodeMap
+  , encodeMemPack
+  , toPlainEncoding
+  )
+import Cardano.Ledger.Core qualified as SL
+import Cardano.Ledger.Shelley.API qualified as SL
+import Cardano.Ledger.Shelley.LedgerState qualified as SL
+import Cardano.Ledger.UMap qualified as SL
+import Cardano.Protocol.Crypto (Crypto)
+import Cardano.Protocol.TPraos.API qualified as SL
+import Codec.CBOR.Decoding
+import Codec.CBOR.Encoding
+import Control.Monad (guard)
+import Control.Monad.Except (runExcept, throwError)
+import Data.Coerce
+import Data.Map.Strict qualified as Map
+import Data.MemPack
+import Data.SOP.BasicFunctors
+import Data.SOP.Functors (Flip (..))
+import Data.SOP.InPairs (RequiringBoth (..), ignoringBoth)
+import Data.SOP.Index (Index (..))
+import Data.SOP.Strict
+import Data.SOP.Tails qualified as Tails
+import Data.SOP.Telescope qualified as Telescope
+import Data.Text qualified as T (pack)
+import Data.Typeable
+import Data.Void (Void)
+import Data.Word
+import Lens.Micro ((^.))
+import NoThunks.Class
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.Forecast
+import Ouroboros.Consensus.Forecast qualified as Forecast
+import Ouroboros.Consensus.HardFork.Combinator hiding
+  ( translateChainDepState
+  )
+import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common
+import Ouroboros.Consensus.HardFork.Combinator.State.Types
+import Ouroboros.Consensus.HardFork.History (Bound (boundSlot))
+import Ouroboros.Consensus.HardFork.Simple
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.SupportsMempool (TxLimits)
+import Ouroboros.Consensus.Ledger.SupportsProtocol
+  ( LedgerSupportsProtocol
+  , ledgerViewForecastAt
+  )
+import Ouroboros.Consensus.Node.NetworkProtocolVersion
+import Ouroboros.Consensus.Protocol.Abstract
+import Ouroboros.Consensus.Protocol.Praos
+import Ouroboros.Consensus.Protocol.TPraos
+import Ouroboros.Consensus.Shelley.Eras
+import Ouroboros.Consensus.Shelley.Ledger
+import Ouroboros.Consensus.Shelley.Ledger.Inspect as Shelley.Inspect
+import Ouroboros.Consensus.Shelley.Node ()
+import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto)
+import Ouroboros.Consensus.TypeFamilyWrappers
+import Ouroboros.Consensus.Util.IndexedMemPack

 {-------------------------------------------------------------------------------
   Synonym for convenience
@@ -97,21 +106,28 @@ type ShelleyBlockHFC proto era = HardForkBlock '[ShelleyBlock proto era]
   NoHardForks instance
-------------------------------------------------------------------------------}

-instance ( ShelleyCompatible proto era
-         , LedgerSupportsProtocol (ShelleyBlock proto era)
-         ) => ImmutableEraParams (ShelleyBlock proto era) where
+instance
+  ( ShelleyCompatible proto era
+  , LedgerSupportsProtocol (ShelleyBlock proto era)
+  ) =>
+  ImmutableEraParams (ShelleyBlock proto era)
+  where
   immutableEraParams =
-      shelleyEraParamsNeverHardForks
+    shelleyEraParamsNeverHardForks
       . shelleyLedgerGenesis
      . configLedger

-instance ( ShelleyCompatible proto era
-         , LedgerSupportsProtocol (ShelleyBlock proto era)
-         , TxLimits (ShelleyBlock proto era)
-         , Crypto (ProtoCrypto proto)
-         ) => NoHardForks (ShelleyBlock proto era) where
-  toPartialLedgerConfig _ cfg = ShelleyPartialLedgerConfig {
-        shelleyLedgerConfig = cfg
+instance
+  ( ShelleyCompatible proto era
+  , LedgerSupportsProtocol (ShelleyBlock proto era)
+  , TxLimits (ShelleyBlock proto era)
+  , Crypto (ProtoCrypto proto)
+  ) =>
+  NoHardForks (ShelleyBlock proto era)
+  where
+  toPartialLedgerConfig _ cfg =
+    ShelleyPartialLedgerConfig
+      { shelleyLedgerConfig = cfg
       , shelleyTriggerHardFork = TriggerHardForkNotDuringThisExecution
       }

@@ -122,17 +138,20 @@ instance ( ShelleyCompatible proto era
 -- | Forward to the ShelleyBlock instance. Only supports
 -- 'HardForkNodeToNodeDisabled', which is compatible with nodes running with
 -- 'ShelleyBlock'.
-instance ( ShelleyCompatible proto era
-         , LedgerSupportsProtocol (ShelleyBlock proto era)
-         , TxLimits (ShelleyBlock proto era)
-         , Crypto (ProtoCrypto proto)
-         ) => SupportedNetworkProtocolVersion (ShelleyBlockHFC proto era) where
+instance
+  ( ShelleyCompatible proto era
+  , LedgerSupportsProtocol (ShelleyBlock proto era)
+  , TxLimits (ShelleyBlock proto era)
+  , Crypto (ProtoCrypto proto)
+  ) =>
+  SupportedNetworkProtocolVersion (ShelleyBlockHFC proto era)
+  where
   supportedNodeToNodeVersions _ =
-      Map.map HardForkNodeToNodeDisabled $
+    Map.map HardForkNodeToNodeDisabled $
       supportedNodeToNodeVersions (Proxy @(ShelleyBlock proto era))

   supportedNodeToClientVersions _ =
-      Map.map HardForkNodeToClientDisabled $
+    Map.map HardForkNodeToClientDisabled $
       supportedNodeToClientVersions (Proxy @(ShelleyBlock proto era))

   latestReleasedNodeVersion = latestReleasedNodeVersionDefault

@@ -144,40 +163,48 @@ instance ( ShelleyCompatible proto era
 -- | Use the default implementations. This means the serialisation of blocks
 -- includes an era wrapper. Each block should do this from the start to be
 -- prepared for future hard forks without having to do any bit twiddling.
-instance ( ShelleyCompatible proto era
-         , LedgerSupportsProtocol (ShelleyBlock proto era)
-         , TxLimits (ShelleyBlock proto era)
-         , Crypto (ProtoCrypto proto)
-         ) => SerialiseHFC '[ShelleyBlock proto era]
-instance ( ShelleyCompatible proto era
-         , LedgerSupportsProtocol (ShelleyBlock proto era)
-         , TxLimits (ShelleyBlock proto era)
-         , Crypto (ProtoCrypto proto)
-         ) => SerialiseConstraintsHFC (ShelleyBlock proto era)
+instance
+  ( ShelleyCompatible proto era
+  , LedgerSupportsProtocol (ShelleyBlock proto era)
+  , TxLimits (ShelleyBlock proto era)
+  , Crypto (ProtoCrypto proto)
+  ) =>
+  SerialiseHFC '[ShelleyBlock proto era]
+
+instance
+  ( ShelleyCompatible proto era
+  , LedgerSupportsProtocol (ShelleyBlock proto era)
+  , TxLimits (ShelleyBlock proto era)
+  , Crypto (ProtoCrypto proto)
+  ) =>
+  SerialiseConstraintsHFC (ShelleyBlock proto era)

 {-------------------------------------------------------------------------------
   Protocol type definition
-------------------------------------------------------------------------------}

-type ProtocolShelley = HardForkProtocol '[ ShelleyBlock (TPraos StandardCrypto) ShelleyEra ]
+type ProtocolShelley = HardForkProtocol '[ShelleyBlock (TPraos StandardCrypto) ShelleyEra]

 {-------------------------------------------------------------------------------
   SingleEraBlock Shelley
-------------------------------------------------------------------------------}

 shelleyTransition ::
-     forall era proto mk. ShelleyCompatible proto era
-  => PartialLedgerConfig (ShelleyBlock proto era)
-  -> Word16 -- ^ Next era's initial major protocol version
-  -> LedgerState (ShelleyBlock proto era) mk
-  -> Maybe EpochNo
-shelleyTransition ShelleyPartialLedgerConfig{..}
-                  transitionMajorVersionRaw
-                  state =
-      isTransition
-    . Shelley.Inspect.pparamsUpdate
-    $ state
-  where
+  forall era proto mk.
+  ShelleyCompatible proto era =>
+  PartialLedgerConfig (ShelleyBlock proto era) ->
+  -- | Next era's initial major protocol version
+  Word16 ->
+  LedgerState (ShelleyBlock proto era) mk ->
+  Maybe EpochNo
+shelleyTransition
+  ShelleyPartialLedgerConfig{..}
+  transitionMajorVersionRaw
+  state =
+    isTransition
+      . Shelley.Inspect.pparamsUpdate
+      $ state
+   where
    ShelleyTransitionInfo{..} = shelleyLedgerTransition state

    -- 'shelleyLedgerConfig' contains a dummy 'EpochInfo' but this does not
@@ -190,196 +217,224 @@ shelleyTransition ShelleyPartialLedgerConfig{..}
    isTransition :: ShelleyLedgerUpdate era -> Maybe EpochNo
    isTransition (ShelleyUpdatedPParams maybePParams newPParamsEpochNo) = do
-      SL.SJust pp <- Just maybePParams
-      let protVer = pp ^. SL.ppProtocolVersionL
-      transitionMajorVersion <- SL.mkVersion transitionMajorVersionRaw
-      guard $ SL.pvMajor protVer == transitionMajorVersion
-      guard $ shelleyAfterVoting >= fromIntegral k
-      return newPParamsEpochNo
-
-instance ( ShelleyCompatible proto era
-         , LedgerSupportsProtocol (ShelleyBlock proto era)
-         , TxLimits (ShelleyBlock proto era)
-         , Crypto (ProtoCrypto proto)
-         ) => SingleEraBlock (ShelleyBlock proto era) where
+    SL.SJust pp <- Just maybePParams
+    let protVer = pp ^. SL.ppProtocolVersionL
+    transitionMajorVersion <- SL.mkVersion transitionMajorVersionRaw
+    guard $ SL.pvMajor protVer == transitionMajorVersion
+    guard $ shelleyAfterVoting >= fromIntegral k
+    return newPParamsEpochNo
+
+instance
+  ( ShelleyCompatible proto era
+  , LedgerSupportsProtocol (ShelleyBlock proto era)
+  , TxLimits (ShelleyBlock proto era)
+  , Crypto (ProtoCrypto proto)
+  ) =>
+  SingleEraBlock (ShelleyBlock proto era)
+  where
   singleEraTransition pcfg _eraParams _eraStart ledgerState =
-      -- TODO: We might be evaluating 'singleEraTransition' more than once when
-      -- replaying blocks. We should investigate if this is the case, and if so,
-      -- whether this is the desired behaviour. If it is not, then we need to
-      -- fix it.
-      --
-      -- For evidence of this behaviour, replace the cased-on expression by:
-      -- > @traceShowId $ shelleyTriggerHardFork pcf@
-      case shelleyTriggerHardFork pcfg of
-        TriggerHardForkNotDuringThisExecution -> Nothing
-        TriggerHardForkAtEpoch epoch -> Just epoch
-        TriggerHardForkAtVersion shelleyMajorVersion ->
-          shelleyTransition
-            pcfg
-            shelleyMajorVersion
-            ledgerState
-
-  singleEraInfo _ = SingleEraInfo {
-        singleEraName = T.pack (L.eraName @era)
-      }
+    -- TODO: We might be evaluating 'singleEraTransition' more than once when
+    -- replaying blocks. We should investigate if this is the case, and if so,
+    -- whether this is the desired behaviour. If it is not, then we need to
+    -- fix it.
+    --
+    -- For evidence of this behaviour, replace the cased-on expression by:
+    -- > @traceShowId $ shelleyTriggerHardFork pcf@
+    case shelleyTriggerHardFork pcfg of
+      TriggerHardForkNotDuringThisExecution -> Nothing
+      TriggerHardForkAtEpoch epoch -> Just epoch
+      TriggerHardForkAtVersion shelleyMajorVersion ->
+        shelleyTransition
+          pcfg
+          shelleyMajorVersion
+          ledgerState
+
+  singleEraInfo _ =
+    SingleEraInfo
+      { singleEraName = T.pack (L.eraName @era)
+      }

 instance Ouroboros.Consensus.Protocol.Praos.PraosCrypto c => HasPartialConsensusConfig (Praos c) where
   type PartialConsensusConfig (Praos c) = PraosParams

-  completeConsensusConfig _ praosEpochInfo praosParams = PraosConfig {..}
+  completeConsensusConfig _ praosEpochInfo praosParams = PraosConfig{..}

   toPartialConsensusConfig _ = praosParams

 instance SL.PraosCrypto c => HasPartialConsensusConfig (TPraos c) where
   type PartialConsensusConfig (TPraos c) = TPraosParams

-  completeConsensusConfig _ tpraosEpochInfo tpraosParams = TPraosConfig {..}
+  completeConsensusConfig _ tpraosEpochInfo tpraosParams = TPraosConfig{..}

   toPartialConsensusConfig _ = tpraosParams

 translateChainDepStateAcrossShelley ::
-     forall eraFrom eraTo protoFrom protoTo.
-     ( TranslateProto protoFrom protoTo
-     )
-  => RequiringBoth
-       WrapConsensusConfig
-       (Translate WrapChainDepState)
-       (ShelleyBlock protoFrom eraFrom)
-       (ShelleyBlock protoTo eraTo)
+  forall eraFrom eraTo protoFrom protoTo.
+  TranslateProto protoFrom protoTo =>
+  RequiringBoth
+    WrapConsensusConfig
+    (Translate WrapChainDepState)
+    (ShelleyBlock protoFrom eraFrom)
+    (ShelleyBlock protoTo eraTo)
 translateChainDepStateAcrossShelley =
-    ignoringBoth $
-      Translate $ \_epochNo (WrapChainDepState chainDepState) ->
-        -- Same protocol, same 'ChainDepState'. Note that we don't have to apply
-        -- any changes related to an epoch transition, this is already done when
-        -- ticking the state.
-        WrapChainDepState $ translateChainDepState (Proxy @(protoFrom, protoTo)) chainDepState
+  ignoringBoth $
+    Translate $ \_epochNo (WrapChainDepState chainDepState) ->
+      -- Same protocol, same 'ChainDepState'. Note that we don't have to apply
+      -- any changes related to an epoch transition, this is already done when
+      -- ticking the state.
+      WrapChainDepState $ translateChainDepState (Proxy @(protoFrom, protoTo)) chainDepState

 crossEraForecastAcrossShelley ::
-     forall eraFrom eraTo protoFrom protoTo.
-     ( TranslateProto protoFrom protoTo
-     , LedgerSupportsProtocol (ShelleyBlock protoFrom eraFrom)
-     )
-  => RequiringBoth
-       WrapLedgerConfig
-       (CrossEraForecaster LedgerState WrapLedgerView)
-       (ShelleyBlock protoFrom eraFrom)
-       (ShelleyBlock protoTo eraTo)
+  forall eraFrom eraTo protoFrom protoTo.
+  ( TranslateProto protoFrom protoTo
+  , LedgerSupportsProtocol (ShelleyBlock protoFrom eraFrom)
+  ) =>
+  RequiringBoth
+    WrapLedgerConfig
+    (CrossEraForecaster LedgerState WrapLedgerView)
+    (ShelleyBlock protoFrom eraFrom)
+    (ShelleyBlock protoTo eraTo)
 crossEraForecastAcrossShelley = coerce forecastAcrossShelley

 -- | Forecast from a Shelley-based era to the next Shelley-based era.
 forecastAcrossShelley ::
-     forall protoFrom protoTo eraFrom eraTo mk.
-     ( TranslateProto protoFrom protoTo
-     , LedgerSupportsProtocol (ShelleyBlock protoFrom eraFrom)
-     )
-  => ShelleyLedgerConfig eraFrom
-  -> ShelleyLedgerConfig eraTo
-  -> Bound -- ^ Transition between the two eras
-  -> SlotNo -- ^ Forecast for this slot
-  -> LedgerState (ShelleyBlock protoFrom eraFrom) mk
-  -> Except OutsideForecastRange (WrapLedgerView (ShelleyBlock protoTo eraTo))
+  forall protoFrom protoTo eraFrom eraTo mk.
+  ( TranslateProto protoFrom protoTo
+  , LedgerSupportsProtocol (ShelleyBlock protoFrom eraFrom)
+  ) =>
+  ShelleyLedgerConfig eraFrom ->
+  ShelleyLedgerConfig eraTo ->
+  -- | Transition between the two eras
+  Bound ->
+  -- | Forecast for this slot
+  SlotNo ->
+  LedgerState (ShelleyBlock protoFrom eraFrom) mk ->
+  Except OutsideForecastRange (WrapLedgerView (ShelleyBlock protoTo eraTo))
 forecastAcrossShelley cfgFrom cfgTo transition forecastFor ledgerStateFrom
-    | forecastFor < maxFor
-    = return $ futureLedgerView forecastFor
-    | otherwise
-    = throwError $ OutsideForecastRange {
-          outsideForecastAt = ledgerTipSlot ledgerStateFrom
-        , outsideForecastMaxFor = maxFor
-        , outsideForecastFor = forecastFor
-        }
-  where
-    -- | 'SL.futureLedgerView' imposes its own bounds. Those bounds could
-    -- /exceed/ the 'maxFor' we have computed, but should never be /less/.
-    futureLedgerView :: SlotNo -> WrapLedgerView (ShelleyBlock protoTo era)
-    futureLedgerView =
-        WrapLedgerView
-      . either
-          (\e -> error ("futureLedgerView failed: " <> show e))
-          (translateLedgerView (Proxy @(protoFrom, protoTo)))
-      . runExcept
-      . Forecast.forecastFor (ledgerViewForecastAt cfgFrom ledgerStateFrom)
-
-    -- Exclusive upper bound
-    maxFor :: SlotNo
-    maxFor = crossEraForecastBound
-        (ledgerTipSlot ledgerStateFrom)
-        (boundSlot transition)
-        (SL.stabilityWindow (shelleyLedgerGlobals cfgFrom))
-        (SL.stabilityWindow (shelleyLedgerGlobals cfgTo))
+  | forecastFor < maxFor =
+      return $ futureLedgerView forecastFor
+  | otherwise =
+      throwError $
+        OutsideForecastRange
+          { outsideForecastAt = ledgerTipSlot ledgerStateFrom
+          , outsideForecastMaxFor = maxFor
+          , outsideForecastFor = forecastFor
+          }
+ where
+  -- \| 'SL.futureLedgerView' imposes its own bounds. Those bounds could
+  -- /exceed/ the 'maxFor' we have computed, but should never be /less/.
+  futureLedgerView :: SlotNo -> WrapLedgerView (ShelleyBlock protoTo era)
+  futureLedgerView =
+    WrapLedgerView
+      . either
+        (\e -> error ("futureLedgerView failed: " <> show e))
+        (translateLedgerView (Proxy @(protoFrom, protoTo)))
+      . runExcept
+      . Forecast.forecastFor (ledgerViewForecastAt cfgFrom ledgerStateFrom)
+
+  -- Exclusive upper bound
+  maxFor :: SlotNo
+  maxFor =
+    crossEraForecastBound
+      (ledgerTipSlot ledgerStateFrom)
+      (boundSlot transition)
+      (SL.stabilityWindow (shelleyLedgerGlobals cfgFrom))
+      (SL.stabilityWindow (shelleyLedgerGlobals cfgTo))

 {-------------------------------------------------------------------------------
   Translation from one Shelley-based era to another Shelley-based era
-------------------------------------------------------------------------------}

-instance ( ShelleyBasedEra era
-         , ShelleyBasedEra (SL.PreviousEra era)
-         , SL.Era (SL.PreviousEra era)
-         ) => SL.TranslateEra era (ShelleyTip proto) where
+instance
+  ( ShelleyBasedEra era
+  , ShelleyBasedEra (SL.PreviousEra era)
+  , SL.Era (SL.PreviousEra era)
+  ) =>
+  SL.TranslateEra era (ShelleyTip proto)
+  where
   translateEra _ (ShelleyTip sno bno (ShelleyHash hash)) =
-      return $ ShelleyTip sno bno (ShelleyHash hash)
-
-instance ( ShelleyBasedEra era
-         , ShelleyBasedEra (SL.PreviousEra era)
-         , SL.TranslateEra era (ShelleyTip proto)
-         , SL.TranslateEra era SL.NewEpochState
-         , SL.TranslationError era SL.NewEpochState ~ Void
-         , CanMapMK mk
-         , CanMapKeysMK mk
-         ) => SL.TranslateEra era (Flip LedgerState mk :.: ShelleyBlock proto) where
+    return $ ShelleyTip sno bno (ShelleyHash hash)
+
+instance
+  ( ShelleyBasedEra era
+  , ShelleyBasedEra (SL.PreviousEra era)
+  , SL.TranslateEra era (ShelleyTip proto)
+  , SL.TranslateEra era SL.NewEpochState
+  , SL.TranslationError era SL.NewEpochState ~ Void
+  , CanMapMK mk
+  , CanMapKeysMK mk
+  ) =>
+  SL.TranslateEra era (Flip LedgerState mk :.: ShelleyBlock proto)
+  where
   translateEra ctxt (Comp (Flip (ShelleyLedgerState tip state _transition tables))) = do
-      tip' <- mapM (SL.translateEra ctxt) tip
-      state' <- SL.translateEra ctxt state
-      return $ Comp $ Flip $ ShelleyLedgerState {
-          shelleyLedgerTip = tip'
-        , shelleyLedgerState = state'
-        , shelleyLedgerTransition = ShelleyTransitionInfo 0
-        , shelleyLedgerTables = translateShelleyTables tables
-        }
+    tip' <- mapM (SL.translateEra ctxt) tip
+    state' <- SL.translateEra ctxt state
+    return $
+      Comp $
+        Flip $
+          ShelleyLedgerState
+            { shelleyLedgerTip = tip'
+            , shelleyLedgerState = state'
+            , shelleyLedgerTransition = ShelleyTransitionInfo 0
+            , shelleyLedgerTables = translateShelleyTables tables
+            }

 translateShelleyTables ::
-     ( CanMapMK mk
-     , CanMapKeysMK mk
-     , ShelleyBasedEra era
-     , ShelleyBasedEra (SL.PreviousEra era)
-     )
-  => LedgerTables (LedgerState (ShelleyBlock proto (SL.PreviousEra era))) mk
-  -> LedgerTables (LedgerState (ShelleyBlock proto era)) mk
+  ( CanMapMK mk
+  , CanMapKeysMK mk
+  , ShelleyBasedEra era
+  , ShelleyBasedEra (SL.PreviousEra era)
+  ) =>
+  LedgerTables (LedgerState (ShelleyBlock proto (SL.PreviousEra era))) mk ->
+  LedgerTables (LedgerState (ShelleyBlock proto era)) mk
 translateShelleyTables (LedgerTables utxoTable) =
-    LedgerTables $ mapKeysMK coerce $ mapMK SL.upgradeTxOut utxoTable
+  LedgerTables $ mapKeysMK coerce $ mapMK SL.upgradeTxOut utxoTable

-instance ( ShelleyBasedEra era
-         , SL.TranslateEra era WrapTx
-         ) => SL.TranslateEra era (GenTx :.: ShelleyBlock proto) where
+instance
+  ( ShelleyBasedEra era
+  , SL.TranslateEra era WrapTx
+  ) =>
+  SL.TranslateEra era (GenTx :.: ShelleyBlock proto)
+  where
   type TranslationError era (GenTx :.: ShelleyBlock proto) = SL.TranslationError era WrapTx
   translateEra ctxt (Comp (ShelleyTx _txId tx)) =
-      Comp . mkShelleyTx . unwrapTx @era
-        <$> SL.translateEra ctxt (WrapTx @(SL.PreviousEra era) tx)
-
-instance ( ShelleyBasedEra era
-         , SL.TranslateEra era WrapTx
-         ) => SL.TranslateEra era (WrapValidatedGenTx :.: ShelleyBlock proto) where
-  type TranslationError era (WrapValidatedGenTx :.: ShelleyBlock proto) = SL.TranslationError era WrapTx
+    Comp . mkShelleyTx . unwrapTx @era
+      <$> SL.translateEra ctxt (WrapTx @(SL.PreviousEra era) tx)
+
+instance
+  ( ShelleyBasedEra era
+  , SL.TranslateEra era WrapTx
+  ) =>
+  SL.TranslateEra era (WrapValidatedGenTx :.: ShelleyBlock proto)
+  where
+  type
+    TranslationError era (WrapValidatedGenTx :.: ShelleyBlock proto) =
+      SL.TranslationError era WrapTx
   translateEra ctxt (Comp (WrapValidatedGenTx (ShelleyValidatedTx _txId vtx))) =
-      Comp . WrapValidatedGenTx
-      . mkShelleyValidatedTx . SL.coerceValidated
-      <$> SL.translateValidated @era @WrapTx ctxt (SL.coerceValidated vtx)
+    Comp
+      . WrapValidatedGenTx
+      . mkShelleyValidatedTx
+      . SL.coerceValidated
+      <$> SL.translateValidated @era @WrapTx ctxt (SL.coerceValidated vtx)

 {-------------------------------------------------------------------------------
   Canonical TxIn
-------------------------------------------------------------------------------}

-instance (ShelleyCompatible proto era, ShelleyBasedEra era)
-      => HasCanonicalTxIn '[ShelleyBlock proto era] where
-  newtype instance CanonicalTxIn '[ShelleyBlock proto era] = ShelleyBlockHFCTxIn {
-      getShelleyBlockHFCTxIn :: SL.TxIn
+instance
+  (ShelleyCompatible proto era, ShelleyBasedEra era) =>
+  HasCanonicalTxIn '[ShelleyBlock proto era]
+  where
+  newtype CanonicalTxIn '[ShelleyBlock proto era] = ShelleyBlockHFCTxIn
+    { getShelleyBlockHFCTxIn :: SL.TxIn
     }
    deriving stock (Show, Eq, Ord)
-   deriving newtype (NoThunks)
+   deriving newtype NoThunks

-  injectCanonicalTxIn IZ txIn = ShelleyBlockHFCTxIn txIn
+  injectCanonicalTxIn IZ txIn = ShelleyBlockHFCTxIn txIn
   injectCanonicalTxIn (IS idx') _ = case idx' of {}

-  ejectCanonicalTxIn IZ txIn = getShelleyBlockHFCTxIn txIn
+  ejectCanonicalTxIn IZ txIn = getShelleyBlockHFCTxIn txIn
   ejectCanonicalTxIn (IS idx') _ = case idx' of {}

 deriving newtype instance MemPack (CanonicalTxIn '[ShelleyBlock proto era])
@@ -389,10 +444,10 @@ deriving newtype instance MemPack (CanonicalTxIn '[ShelleyBlock proto era])
-------------------------------------------------------------------------------}

 instance ShelleyCompatible proto era => HasHardForkTxOut '[ShelleyBlock proto era] where
-  type instance HardForkTxOut '[ShelleyBlock proto era] = SL.TxOut era
-  injectHardForkTxOut IZ txOut = txOut
+  type HardForkTxOut '[ShelleyBlock proto era] = SL.TxOut era
+  injectHardForkTxOut IZ txOut = txOut
   injectHardForkTxOut (IS idx') _ = case idx' of {}
-  ejectHardForkTxOut IZ txOut = txOut
+  ejectHardForkTxOut IZ txOut = txOut
   ejectHardForkTxOut (IS idx') _ = case idx' of {}

   txOutEjections = fn (unZ .
unK) :* Nil txOutTranslations = Tails.mk1 @@ -401,64 +456,77 @@ instance ShelleyCompatible proto era => HasHardForkTxOut '[ShelleyBlock proto er Queries -------------------------------------------------------------------------------} -instance ( ShelleyCompatible proto era - , ShelleyBasedEra era - , TxOut (LedgerState (ShelleyBlock proto era)) ~ SL.TxOut era - , HasHardForkTxOut '[ShelleyBlock proto era] - ) => BlockSupportsHFLedgerQuery '[ShelleyBlock proto era] where - +instance + ( ShelleyCompatible proto era + , ShelleyBasedEra era + , TxOut (LedgerState (ShelleyBlock proto era)) ~ SL.TxOut era + , HasHardForkTxOut '[ShelleyBlock proto era] + ) => + BlockSupportsHFLedgerQuery '[ShelleyBlock proto era] + where answerBlockQueryHFLookup = \case IZ -> answerShelleyLookupQueries (injectLedgerTables IZ) id (ejectCanonicalTxIn IZ) IS idx -> case idx of {} answerBlockQueryHFTraverse = \case - IZ -> answerShelleyTraversingQueries - id - (ejectCanonicalTxIn IZ) - (queryLedgerGetTraversingFilter @('[ShelleyBlock proto era]) IZ) + IZ -> + answerShelleyTraversingQueries + id + (ejectCanonicalTxIn IZ) + (queryLedgerGetTraversingFilter @('[ShelleyBlock proto era]) IZ) IS idx -> case idx of {} queryLedgerGetTraversingFilter = \case IZ -> shelleyQFTraverseTablesPredicate IS idx -> case idx of {} -instance (txout ~ SL.TxOut era, MemPack txout) - => IndexedMemPack (LedgerState (HardForkBlock '[ShelleyBlock proto era]) EmptyMK) txout where +instance + (txout ~ SL.TxOut era, MemPack txout) => + IndexedMemPack (LedgerState (HardForkBlock '[ShelleyBlock proto era]) EmptyMK) txout + where indexedTypeName _ = typeName @txout indexedPackedByteCount _ = packedByteCount indexedPackM _ = packM indexedUnpackM _ = unpackM -instance ShelleyCompatible proto era - => SerializeTablesWithHint (LedgerState (HardForkBlock '[ShelleyBlock proto era])) where - encodeTablesWithHint :: LedgerState (HardForkBlock '[ShelleyBlock proto era]) EmptyMK - -> LedgerTables (LedgerState (HardForkBlock 
'[ShelleyBlock proto era])) ValuesMK - -> Encoding +instance + ShelleyCompatible proto era => + SerializeTablesWithHint (LedgerState (HardForkBlock '[ShelleyBlock proto era])) + where + encodeTablesWithHint :: + LedgerState (HardForkBlock '[ShelleyBlock proto era]) EmptyMK -> + LedgerTables (LedgerState (HardForkBlock '[ShelleyBlock proto era])) ValuesMK -> + Encoding encodeTablesWithHint (HardForkLedgerState (HardForkState idx)) (LedgerTables (ValuesMK tbs)) = let np = (Fn $ const $ K encOne) :* Nil - in hcollapse $ hap np $ Telescope.tip idx + in + hcollapse $ hap np $ Telescope.tip idx where - encOne :: Encoding - encOne = toPlainEncoding (SL.eraProtVerLow @era) $ encodeMap encodeMemPack encodeMemPack tbs + encOne :: Encoding + encOne = toPlainEncoding (SL.eraProtVerLow @era) $ encodeMap encodeMemPack encodeMemPack tbs - decodeTablesWithHint :: forall s. LedgerState (HardForkBlock '[ShelleyBlock proto era]) EmptyMK - -> Decoder s (LedgerTables (LedgerState (HardForkBlock '[ShelleyBlock proto era])) ValuesMK) + decodeTablesWithHint :: + forall s. + LedgerState (HardForkBlock '[ShelleyBlock proto era]) EmptyMK -> + Decoder s (LedgerTables (LedgerState (HardForkBlock '[ShelleyBlock proto era])) ValuesMK) decodeTablesWithHint (HardForkLedgerState (HardForkState idx)) = let np = (Fn $ Comp . fmap K . getOne . unFlip . currentState) :* Nil - in hcollapse <$> (hsequence' $ hap np $ Telescope.tip idx) + in + hcollapse <$> (hsequence' $ hap np $ Telescope.tip idx) where - getOne :: LedgerState (ShelleyBlock proto era) EmptyMK - -> Decoder s (LedgerTables (LedgerState (HardForkBlock '[ShelleyBlock proto era])) ValuesMK) - getOne st = - let certInterns = - internsFromMap - $ shelleyLedgerState st - ^. SL.nesEsL - . SL.esLStateL - . SL.lsCertStateL - . SL.certDStateL - . SL.dsUnifiedL - . 
SL.umElemsL + getOne :: + LedgerState (ShelleyBlock proto era) EmptyMK -> + Decoder s (LedgerTables (LedgerState (HardForkBlock '[ShelleyBlock proto era])) ValuesMK) + getOne st = + let certInterns = + internsFromMap $ + shelleyLedgerState st + ^. SL.nesEsL + . SL.esLStateL + . SL.lsCertStateL + . SL.certDStateL + . SL.dsUnifiedL + . SL.umElemsL in LedgerTables . ValuesMK <$> SL.eraDecoder @era (decodeMap decodeMemPack (decShareCBOR certInterns)) diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Ledger.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Ledger.hs index 81e449725e..0c31da8cc9 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Ledger.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Ledger.hs @@ -6,13 +6,13 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.ByronDual.Ledger ( - -- * Shorthand +module Ouroboros.Consensus.ByronDual.Ledger + ( -- * Shorthand DualByronBlock , DualByronBridge + -- * Bridge , ByronSpecBridge (..) , SpecToImplIds (..) 
@@ -20,74 +20,80 @@ module Ouroboros.Consensus.ByronDual.Ledger ( , bridgeTransactionIds , initByronSpecBridge , specToImplTx + -- * Block forging , forgeDualByronBlock ) where -import qualified Byron.Spec.Ledger.Core as Spec -import qualified Byron.Spec.Ledger.UTxO as Spec -import qualified Cardano.Chain.UTxO as Impl -import Cardano.Crypto.DSIGN.Class -import Codec.Serialise -import Data.ByteString (ByteString) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import qualified Data.Set as Set -import GHC.Generics (Generic) -import GHC.Stack -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Crypto.DSIGN -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Protocol -import Ouroboros.Consensus.ByronSpec.Ledger -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Dual -import Ouroboros.Consensus.Protocol.PBFT -import qualified Test.Cardano.Chain.Elaboration.Block as Spec.Test -import qualified Test.Cardano.Chain.Elaboration.Keys as Spec.Test +import Byron.Spec.Ledger.Core qualified as Spec +import Byron.Spec.Ledger.UTxO qualified as Spec +import Cardano.Chain.UTxO qualified as Impl +import Cardano.Crypto.DSIGN.Class +import Codec.Serialise +import Data.ByteString (ByteString) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Set qualified as Set +import GHC.Generics (Generic) +import GHC.Stack +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Crypto.DSIGN +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Protocol +import Ouroboros.Consensus.ByronSpec.Ledger +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Dual +import Ouroboros.Consensus.Protocol.PBFT +import Test.Cardano.Chain.Elaboration.Block qualified as Spec.Test +import Test.Cardano.Chain.Elaboration.Keys qualified as Spec.Test 
{------------------------------------------------------------------------------- Shorthand -------------------------------------------------------------------------------} -type DualByronBlock = DualBlock ByronBlock ByronSpecBlock +type DualByronBlock = DualBlock ByronBlock ByronSpecBlock type DualByronBridge = BridgeLedger ByronBlock ByronSpecBlock {------------------------------------------------------------------------------- Map transaction Ids (part of the bridge) -------------------------------------------------------------------------------} -newtype SpecToImplIds = SpecToImplIds { - getSpecToImplIds :: Spec.Test.AbstractToConcreteIdMaps - } +newtype SpecToImplIds = SpecToImplIds + { getSpecToImplIds :: Spec.Test.AbstractToConcreteIdMaps + } deriving (Show, Eq, Generic, Serialise) instance Semigroup SpecToImplIds where SpecToImplIds a <> SpecToImplIds b = - SpecToImplIds $ Spec.Test.AbstractToConcreteIdMaps { - transactionIds = combine Spec.Test.transactionIds - , proposalIds = combine Spec.Test.proposalIds + SpecToImplIds $ + Spec.Test.AbstractToConcreteIdMaps + { transactionIds = combine Spec.Test.transactionIds + , proposalIds = combine Spec.Test.proposalIds } - where - combine :: Semigroup x => (Spec.Test.AbstractToConcreteIdMaps -> x) -> x - combine f = f a <> f b + where + combine :: Semigroup x => (Spec.Test.AbstractToConcreteIdMaps -> x) -> x + combine f = f a <> f b instance Monoid SpecToImplIds where - mempty = SpecToImplIds Spec.Test.AbstractToConcreteIdMaps { - transactionIds = mempty - , proposalIds = mempty - } + mempty = + SpecToImplIds + Spec.Test.AbstractToConcreteIdMaps + { transactionIds = mempty + , proposalIds = mempty + } -- | Construct singleton 'SpecToImplIds' for a transaction specToImplTx :: Spec.Tx -> Impl.ATxAux ByteString -> SpecToImplIds -specToImplTx spec impl = SpecToImplIds $ Spec.Test.AbstractToConcreteIdMaps { - transactionIds = Map.singleton (specTxId spec) (byronIdTx impl) - , proposalIds = Map.empty - } - where - 
specTxId :: Spec.Tx -> Spec.TxId - specTxId = Spec.txid . Spec.body +specToImplTx spec impl = + SpecToImplIds $ + Spec.Test.AbstractToConcreteIdMaps + { transactionIds = Map.singleton (specTxId spec) (byronIdTx impl) + , proposalIds = Map.empty + } + where + specTxId :: Spec.Tx -> Spec.TxId + specTxId = Spec.txid . Spec.body {------------------------------------------------------------------------------- Bridge @@ -105,72 +111,79 @@ specToImplTx spec impl = SpecToImplIds $ Spec.Test.AbstractToConcreteIdMaps { -- the implementation. In the consensus case, this state cannot be maintained -- like this, and so it has to become part of transactions, blocks, and the -- ledger state itself. -data ByronSpecBridge = ByronSpecBridge { - -- | Map between keys - -- - -- Some observations: - -- - -- * The abstract chain environment contains a set of allowed delegators - -- (of type @Set VKeyGenesis@), which gets translated to - -- 'gdGenesisKeyHashes' (of type @Set Common.KeyHash@) in the concrete - -- genesis config. - -- - -- * During the translation from abstract blocks to concrete blocks, the - -- 'VKey' of the block is translated to a concrete 'SigningKey' (as well - -- as a 'VerificationKey') in 'elaborateKeyPair'. - -- - -- * Although this translation is deterministic, it doesn't have an - -- easily definable inverse. For this reason, we maintain an opposite - -- mapping as part of the ledger state. - toSpecKeys :: Map (PBftVerKeyHash PBftByronCrypto) Spec.VKey - - -- | Mapping between abstract and concrete Ids - -- - -- We need to maintain this mapping so that we can use the abstract state - -- generators and then elaborate to concrete values. 
- , toImplIds :: SpecToImplIds - } +data ByronSpecBridge = ByronSpecBridge + { toSpecKeys :: Map (PBftVerKeyHash PBftByronCrypto) Spec.VKey + -- ^ Map between keys + -- + -- Some observations: + -- + -- * The abstract chain environment contains a set of allowed delegators + -- (of type @Set VKeyGenesis@), which gets translated to + -- 'gdGenesisKeyHashes' (of type @Set Common.KeyHash@) in the concrete + -- genesis config. + -- + -- * During the translation from abstract blocks to concrete blocks, the + -- 'VKey' of the block is translated to a concrete 'SigningKey' (as well + -- as a 'VerificationKey') in 'elaborateKeyPair'. + -- + -- * Although this translation is deterministic, it doesn't have an + -- easily definable inverse. For this reason, we maintain an opposite + -- mapping as part of the ledger state. + , toImplIds :: SpecToImplIds + -- ^ Mapping between abstract and concrete Ids + -- + -- We need to maintain this mapping so that we can use the abstract state + -- generators and then elaborate to concrete values. 
+ } deriving (Show, Eq, Generic, Serialise) instance Bridge ByronBlock ByronSpecBlock where type BridgeLedger ByronBlock ByronSpecBlock = ByronSpecBridge - type BridgeBlock ByronBlock ByronSpecBlock = SpecToImplIds - type BridgeTx ByronBlock ByronSpecBlock = SpecToImplIds + type BridgeBlock ByronBlock ByronSpecBlock = SpecToImplIds + type BridgeTx ByronBlock ByronSpecBlock = SpecToImplIds -- TODO: Once we generate delegation certificates, -- we should update 'toSpecKeys' also, - updateBridgeWithBlock block bridge = bridge { - toImplIds = toImplIds bridge <> dualBlockBridge block + updateBridgeWithBlock block bridge = + bridge + { toImplIds = toImplIds bridge <> dualBlockBridge block } - updateBridgeWithTx genTx bridge = bridge { - toImplIds = toImplIds bridge <> vDualGenTxBridge genTx + updateBridgeWithTx genTx bridge = + bridge + { toImplIds = toImplIds bridge <> vDualGenTxBridge genTx } {------------------------------------------------------------------------------- Bridge initialization -------------------------------------------------------------------------------} -initByronSpecBridge :: ByronSpecGenesis - -> Map Spec.TxId Impl.TxId - -- ^ Mapping for the transaction in the initial UTxO - -> ByronSpecBridge -initByronSpecBridge ByronSpecGenesis{..} txIdMap = ByronSpecBridge { - toSpecKeys = Map.fromList $ map mapKey $ - Set.toList byronSpecGenesisDelegators - , toImplIds = SpecToImplIds Spec.Test.AbstractToConcreteIdMaps { - transactionIds = txIdMap - , proposalIds = Map.empty - } +initByronSpecBridge :: + ByronSpecGenesis -> + -- | Mapping for the transaction in the initial UTxO + Map Spec.TxId Impl.TxId -> + ByronSpecBridge +initByronSpecBridge ByronSpecGenesis{..} txIdMap = + ByronSpecBridge + { toSpecKeys = + Map.fromList $ + map mapKey $ + Set.toList byronSpecGenesisDelegators + , toImplIds = + SpecToImplIds + Spec.Test.AbstractToConcreteIdMaps + { transactionIds = txIdMap + , proposalIds = Map.empty + } } - where - -- The abstract spec maps the allowed 
delegators to themselves initially - mapKey :: Spec.VKeyGenesis -> (PBftVerKeyHash PBftByronCrypto, Spec.VKey) - mapKey (Spec.VKeyGenesis vkey) = ( - hashVerKey $ VerKeyByronDSIGN (Spec.Test.elaborateVKey vkey) - , vkey - ) + where + -- The abstract spec maps the allowed delegators to themselves initially + mapKey :: Spec.VKeyGenesis -> (PBftVerKeyHash PBftByronCrypto, Spec.VKey) + mapKey (Spec.VKeyGenesis vkey) = + ( hashVerKey $ VerKeyByronDSIGN (Spec.Test.elaborateVKey vkey) + , vkey + ) {------------------------------------------------------------------------------- Using the bridge @@ -181,59 +194,70 @@ initByronSpecBridge ByronSpecGenesis{..} txIdMap = ByronSpecBridge { -- We get a proof from PBFT that we are the leader, including a signing key (of -- type 'SigningKey'). In order to produce the corresponding abstract block, we -- need a 'VKey'. -bridgeToSpecKey :: DualByronBridge - -> PBftVerKeyHash PBftByronCrypto -> Spec.VKey +bridgeToSpecKey :: + DualByronBridge -> + PBftVerKeyHash PBftByronCrypto -> + Spec.VKey bridgeToSpecKey ByronSpecBridge{..} keyHash = - case Map.lookup keyHash toSpecKeys of - Just vkey -> vkey - Nothing -> error $ "toSpecKey: unknown key " ++ show keyHash + case Map.lookup keyHash toSpecKeys of + Just vkey -> vkey + Nothing -> error $ "toSpecKey: unknown key " ++ show keyHash bridgeTransactionIds :: DualByronBridge -> Map Spec.TxId Impl.TxId -bridgeTransactionIds = Spec.Test.transactionIds - . getSpecToImplIds - . toImplIds +bridgeTransactionIds = + Spec.Test.transactionIds + . getSpecToImplIds + . 
toImplIds {------------------------------------------------------------------------------- Block forging -------------------------------------------------------------------------------} forgeDualByronBlock :: - HasCallStack - => TopLevelConfig DualByronBlock - -> BlockNo -- ^ Current block number - -> SlotNo -- ^ Current slot number - -> TickedLedgerState DualByronBlock mk -- ^ Ledger - -> [Validated (GenTx DualByronBlock)] -- ^ Txs to add in the block - -> PBftIsLeader PBftByronCrypto -- ^ Leader proof ('IsLeader') - -> DualByronBlock + HasCallStack => + TopLevelConfig DualByronBlock -> + -- | Current block number + BlockNo -> + -- | Current slot number + SlotNo -> + -- | Ledger + TickedLedgerState DualByronBlock mk -> + -- | Txs to add in the block + [Validated (GenTx DualByronBlock)] -> + -- | Leader proof ('IsLeader') + PBftIsLeader PBftByronCrypto -> + DualByronBlock forgeDualByronBlock cfg curBlockNo curSlotNo tickedLedger vtxs isLeader = - -- NOTE: We do not /elaborate/ the real Byron block from the spec one, but - -- instead we /forge/ it. This is important, because we want to test that - -- codepath. This does mean that we do not get any kind of "bridge" between - -- the two blocks (which we would have gotten if we would have elaborated - -- the block instead). Fortunately, this is okay, since the bridge for the - -- block can be computed from the bridge information of all of the txs. - DualBlock { - dualBlockMain = main - , dualBlockAux = Just aux - , dualBlockBridge = mconcat $ map vDualGenTxBridge vtxs - } - where - main :: ByronBlock - main = forgeByronBlock - (dualTopLevelConfigMain cfg) - curBlockNo - curSlotNo - (tickedDualLedgerStateMain tickedLedger) - (map vDualGenTxMain vtxs) - isLeader - - aux :: ByronSpecBlock - aux = forgeByronSpecBlock - curBlockNo - curSlotNo - (tickedDualLedgerStateAux tickedLedger) - (map vDualGenTxAux vtxs) - (bridgeToSpecKey - (tickedDualLedgerStateBridge tickedLedger) - (hashVerKey . deriveVerKeyDSIGN . 
pbftIsLeaderSignKey $ isLeader)) + -- NOTE: We do not /elaborate/ the real Byron block from the spec one, but + -- instead we /forge/ it. This is important, because we want to test that + -- codepath. This does mean that we do not get any kind of "bridge" between + -- the two blocks (which we would have gotten if we would have elaborated + -- the block instead). Fortunately, this is okay, since the bridge for the + -- block can be computed from the bridge information of all of the txs. + DualBlock + { dualBlockMain = main + , dualBlockAux = Just aux + , dualBlockBridge = mconcat $ map vDualGenTxBridge vtxs + } + where + main :: ByronBlock + main = + forgeByronBlock + (dualTopLevelConfigMain cfg) + curBlockNo + curSlotNo + (tickedDualLedgerStateMain tickedLedger) + (map vDualGenTxMain vtxs) + isLeader + + aux :: ByronSpecBlock + aux = + forgeByronSpecBlock + curBlockNo + curSlotNo + (tickedDualLedgerStateAux tickedLedger) + (map vDualGenTxAux vtxs) + ( bridgeToSpecKey + (tickedDualLedgerStateBridge tickedLedger) + (hashVerKey . deriveVerKeyDSIGN . 
pbftIsLeaderSignKey $ isLeader) + ) diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Node.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Node.hs index 884d17b8e4..0503c162fc 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Node.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Node.hs @@ -7,68 +7,68 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.ByronDual.Node (protocolInfoDualByron) where -import qualified Byron.Spec.Ledger.Core as Spec -import qualified Byron.Spec.Ledger.Delegation as Spec -import qualified Byron.Spec.Ledger.Update as Spec -import qualified Byron.Spec.Ledger.UTxO as Spec -import qualified Cardano.Chain.Block as Impl -import qualified Cardano.Chain.Genesis as Impl -import qualified Cardano.Chain.Update as Impl -import qualified Cardano.Chain.Update.Validation.Interface as Impl -import qualified Cardano.Chain.UTxO as Impl -import Data.Either (fromRight) -import Data.Map.Strict (Map) -import Data.Maybe (fromMaybe) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Node -import Ouroboros.Consensus.Byron.Protocol -import Ouroboros.Consensus.ByronDual.Ledger -import Ouroboros.Consensus.ByronDual.Node.Serialisation () -import Ouroboros.Consensus.ByronSpec.Ledger -import qualified Ouroboros.Consensus.ByronSpec.Ledger.Genesis as Genesis -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Dual -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Node.Run -import 
Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.PBFT -import qualified Ouroboros.Consensus.Protocol.PBFT.State as S -import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB (..)) -import Ouroboros.Consensus.Util ((.....:)) -import qualified Test.Cardano.Chain.Elaboration.Block as Spec.Test -import qualified Test.Cardano.Chain.Elaboration.Delegation as Spec.Test -import qualified Test.Cardano.Chain.Elaboration.Keys as Spec.Test -import qualified Test.Cardano.Chain.Elaboration.Update as Spec.Test -import qualified Test.Cardano.Chain.UTxO.Model as Spec.Test +import Byron.Spec.Ledger.Core qualified as Spec +import Byron.Spec.Ledger.Delegation qualified as Spec +import Byron.Spec.Ledger.UTxO qualified as Spec +import Byron.Spec.Ledger.Update qualified as Spec +import Cardano.Chain.Block qualified as Impl +import Cardano.Chain.Genesis qualified as Impl +import Cardano.Chain.UTxO qualified as Impl +import Cardano.Chain.Update qualified as Impl +import Cardano.Chain.Update.Validation.Interface qualified as Impl +import Data.Either (fromRight) +import Data.Map.Strict (Map) +import Data.Maybe (fromMaybe) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Node +import Ouroboros.Consensus.Byron.Protocol +import Ouroboros.Consensus.ByronDual.Ledger +import Ouroboros.Consensus.ByronDual.Node.Serialisation () +import Ouroboros.Consensus.ByronSpec.Ledger +import Ouroboros.Consensus.ByronSpec.Ledger.Genesis qualified as Genesis +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Dual +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Protocol.PBFT.State qualified as S +import 
Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB (..)) +import Ouroboros.Consensus.Util ((.....:)) +import Test.Cardano.Chain.Elaboration.Block qualified as Spec.Test +import Test.Cardano.Chain.Elaboration.Delegation qualified as Spec.Test +import Test.Cardano.Chain.Elaboration.Keys qualified as Spec.Test +import Test.Cardano.Chain.Elaboration.Update qualified as Spec.Test +import Test.Cardano.Chain.UTxO.Model qualified as Spec.Test {------------------------------------------------------------------------------- BlockForging -------------------------------------------------------------------------------} dualByronBlockForging :: - Monad m - => ByronLeaderCredentials - -> BlockForging m DualByronBlock -dualByronBlockForging creds = BlockForging { - forgeLabel = forgeLabel - , canBeLeader = canBeLeader + Monad m => + ByronLeaderCredentials -> + BlockForging m DualByronBlock +dualByronBlockForging creds = + BlockForging + { forgeLabel = forgeLabel + , canBeLeader = canBeLeader , updateForgeState = \cfg -> fmap castForgeStateUpdateInfo .: updateForgeState (dualTopLevelConfigMain cfg) - , checkCanForge = checkCanForge . dualTopLevelConfigMain - , forgeBlock = return .....: forgeDualByronBlock + , checkCanForge = checkCanForge . dualTopLevelConfigMain + , forgeBlock = return .....: forgeDualByronBlock } - where - BlockForging {..} = byronBlockForging creds + where + BlockForging{..} = byronBlockForging creds {------------------------------------------------------------------------------- ProtocolInfo @@ -76,137 +76,155 @@ dualByronBlockForging creds = BlockForging { Partly modelled after 'applyTrace' in "Test.Cardano.Chain.Block.Model". -------------------------------------------------------------------------------} -protocolInfoDualByron :: forall m. Monad m - => ByronSpecGenesis - -> PBftParams - -> [CoreNodeId] -- ^ Are we a core node? - -> ( ProtocolInfo DualByronBlock - , m [BlockForging m DualByronBlock] - ) +protocolInfoDualByron :: + forall m. 
+ Monad m => + ByronSpecGenesis -> + PBftParams -> + -- | Are we a core node? + [CoreNodeId] -> + ( ProtocolInfo DualByronBlock + , m [BlockForging m DualByronBlock] + ) protocolInfoDualByron abstractGenesis@ByronSpecGenesis{..} params credss = - ( ProtocolInfo { - pInfoConfig = TopLevelConfig { - topLevelConfigProtocol = PBftConfig { - pbftParams = params - } - , topLevelConfigLedger = DualLedgerConfig { - dualLedgerConfigMain = concreteGenesis - , dualLedgerConfigAux = abstractConfig - } - , topLevelConfigBlock = DualBlockConfig { - dualBlockConfigMain = concreteConfig - , dualBlockConfigAux = ByronSpecBlockConfig - } - , topLevelConfigCodec = DualCodecConfig { - dualCodecConfigMain = mkByronCodecConfig concreteGenesis - , dualCodecConfigAux = ByronSpecCodecConfig - } - , topLevelConfigStorage = DualStorageConfig { - dualStorageConfigMain = ByronStorageConfig concreteConfig - , dualStorageConfigAux = ByronSpecStorageConfig - } - , topLevelConfigCheckpoints = emptyCheckpointsMap - } - , pInfoInitLedger = ExtLedgerState { - ledgerState = DualLedgerState { - dualLedgerStateMain = initConcreteState - , dualLedgerStateAux = initAbstractState - , dualLedgerStateBridge = initBridge - } - , headerState = genesisHeaderState S.empty - } + ( ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = + PBftConfig + { pbftParams = params + } + , topLevelConfigLedger = + DualLedgerConfig + { dualLedgerConfigMain = concreteGenesis + , dualLedgerConfigAux = abstractConfig + } + , topLevelConfigBlock = + DualBlockConfig + { dualBlockConfigMain = concreteConfig + , dualBlockConfigAux = ByronSpecBlockConfig + } + , topLevelConfigCodec = + DualCodecConfig + { dualCodecConfigMain = mkByronCodecConfig concreteGenesis + , dualCodecConfigAux = ByronSpecCodecConfig + } + , topLevelConfigStorage = + DualStorageConfig + { dualStorageConfigMain = ByronStorageConfig concreteConfig + , dualStorageConfigAux = ByronSpecStorageConfig + } + , topLevelConfigCheckpoints = 
emptyCheckpointsMap + } + , pInfoInitLedger = + ExtLedgerState + { ledgerState = + DualLedgerState + { dualLedgerStateMain = initConcreteState + , dualLedgerStateAux = initAbstractState + , dualLedgerStateBridge = initBridge + } + , headerState = genesisHeaderState S.empty + } } - , return $ dualByronBlockForging . byronLeaderCredentials <$> credss - ) - where - initUtxo :: Impl.UTxO - txIdMap :: Map Spec.TxId Impl.TxId - (initUtxo, txIdMap) = Spec.Test.elaborateInitialUTxO byronSpecGenesisInitUtxo + , return $ dualByronBlockForging . byronLeaderCredentials <$> credss + ) + where + initUtxo :: Impl.UTxO + txIdMap :: Map Spec.TxId Impl.TxId + (initUtxo, txIdMap) = Spec.Test.elaborateInitialUTxO byronSpecGenesisInitUtxo - -- 'Spec.Test.abEnvToCfg' ignores the UTxO, because the Byron genesis - -- data doesn't contain a UTxO, but only a 'UTxOConfiguration'. - -- - -- It also ignores the slot length (the Byron spec does not talk about - -- slot lengths at all) so we have to set this ourselves. - concreteGenesis :: Impl.Config - concreteGenesis = translated { - Impl.configGenesisData = configGenesisData { - Impl.gdProtocolParameters = protocolParameters { - Impl.ppSlotDuration = byronSpecGenesisSlotLength - } + -- 'Spec.Test.abEnvToCfg' ignores the UTxO, because the Byron genesis + -- data doesn't contain a UTxO, but only a 'UTxOConfiguration'. + -- + -- It also ignores the slot length (the Byron spec does not talk about + -- slot lengths at all) so we have to set this ourselves. 
+ concreteGenesis :: Impl.Config + concreteGenesis = + translated + { Impl.configGenesisData = + configGenesisData + { Impl.gdProtocolParameters = + protocolParameters + { Impl.ppSlotDuration = byronSpecGenesisSlotLength + } } - } - where - translated = Spec.Test.abEnvToCfg $ Genesis.toChainEnv abstractGenesis - configGenesisData = Impl.configGenesisData translated - protocolParameters = Impl.gdProtocolParameters configGenesisData + } + where + translated = Spec.Test.abEnvToCfg $ Genesis.toChainEnv abstractGenesis + configGenesisData = Impl.configGenesisData translated + protocolParameters = Impl.gdProtocolParameters configGenesisData - initAbstractState :: LedgerState ByronSpecBlock ValuesMK - initConcreteState :: LedgerState ByronBlock ValuesMK + initAbstractState :: LedgerState ByronSpecBlock ValuesMK + initConcreteState :: LedgerState ByronBlock ValuesMK - initAbstractState = initByronSpecLedgerState abstractGenesis - initConcreteState = initByronLedgerState concreteGenesis (Just initUtxo) + initAbstractState = initByronSpecLedgerState abstractGenesis + initConcreteState = initByronLedgerState concreteGenesis (Just initUtxo) - abstractConfig :: LedgerConfig ByronSpecBlock - concreteConfig :: BlockConfig ByronBlock + abstractConfig :: LedgerConfig ByronSpecBlock + concreteConfig :: BlockConfig ByronBlock - abstractConfig = abstractGenesis - concreteConfig = mkByronConfig - concreteGenesis - protocolVersion - softwareVersion - where - -- TODO: Take (spec) protocol version and (spec) software version - -- as arguments instead, and then translate /those/ to Impl types. 
- -- - protocolVersion :: Impl.ProtocolVersion - protocolVersion = - Impl.adoptedProtocolVersion $ - Impl.cvsUpdateState (byronLedgerState initConcreteState) + abstractConfig = abstractGenesis + concreteConfig = + mkByronConfig + concreteGenesis + protocolVersion + softwareVersion + where + -- TODO: Take (spec) protocol version and (spec) software version + -- as arguments instead, and then translate /those/ to Impl types. + -- + protocolVersion :: Impl.ProtocolVersion + protocolVersion = + Impl.adoptedProtocolVersion $ + Impl.cvsUpdateState (byronLedgerState initConcreteState) - -- The spec has a TODO about this; we just copy what 'elaborate' does - -- (Test.Cardano.Chain.Elaboration.Block) - softwareVersion :: Impl.SoftwareVersion - softwareVersion = - Spec.Test.elaborateSoftwareVersion $ - Spec.SwVer (Spec.ApName "") (Spec.ApVer 0) + -- The spec has a TODO about this; we just copy what 'elaborate' does + -- (Test.Cardano.Chain.Elaboration.Block) + softwareVersion :: Impl.SoftwareVersion + softwareVersion = + Spec.Test.elaborateSoftwareVersion $ + Spec.SwVer (Spec.ApName "") (Spec.ApVer 0) - initBridge :: DualByronBridge - initBridge = initByronSpecBridge abstractGenesis txIdMap + initBridge :: DualByronBridge + initBridge = initByronSpecBridge abstractGenesis txIdMap - byronLeaderCredentials :: CoreNodeId -> ByronLeaderCredentials - byronLeaderCredentials nid = - fromRight (error "byronLeaderCredentials: failed to construct credentials") $ - mkByronLeaderCredentials - concreteGenesis - (Spec.Test.vKeyToSKey vkey) - (Spec.Test.elaborateDCert - (Impl.configProtocolMagicId concreteGenesis) - abstractDCert) - "byronLeaderCredentials" - where - -- PBFT constructs the core node ID by the implicit ordering of - -- the hashes of the verification keys in the genesis config. Here - -- we go the other way, looking up this hash, and then using our - -- translation map to find the corresponding abstract key. 
- -- - -- TODO: We should be able to use keys that are /not/ in genesis - -- (so that we can start the node with new delegated keys that aren't - -- present in the genesis config). - -- - keyHash :: PBftVerKeyHash PBftByronCrypto - keyHash = fromMaybe - (error $ "mkCredentials: invalid " ++ show nid) - (nodeIdToGenesisKey concreteGenesis nid) + byronLeaderCredentials :: CoreNodeId -> ByronLeaderCredentials + byronLeaderCredentials nid = + fromRight (error "byronLeaderCredentials: failed to construct credentials") $ + mkByronLeaderCredentials + concreteGenesis + (Spec.Test.vKeyToSKey vkey) + ( Spec.Test.elaborateDCert + (Impl.configProtocolMagicId concreteGenesis) + abstractDCert + ) + "byronLeaderCredentials" + where + -- PBFT constructs the core node ID by the implicit ordering of + -- the hashes of the verification keys in the genesis config. Here + -- we go the other way, looking up this hash, and then using our + -- translation map to find the corresponding abstract key. + -- + -- TODO: We should be able to use keys that are /not/ in genesis + -- (so that we can start the node with new delegated keys that aren't + -- present in the genesis config). 
+ -- + keyHash :: PBftVerKeyHash PBftByronCrypto + keyHash = + fromMaybe + (error $ "mkCredentials: invalid " ++ show nid) + (nodeIdToGenesisKey concreteGenesis nid) - vkey :: Spec.VKey - vkey = bridgeToSpecKey initBridge keyHash + vkey :: Spec.VKey + vkey = bridgeToSpecKey initBridge keyHash - abstractDCert :: Spec.DCert - abstractDCert = Spec.Test.rcDCert - vkey - byronSpecGenesisSecurityParam - (byronSpecLedgerState initAbstractState) + abstractDCert :: Spec.DCert + abstractDCert = + Spec.Test.rcDCert + vkey + byronSpecGenesisSecurityParam + (byronSpecLedgerState initAbstractState) {------------------------------------------------------------------------------- NodeInitStorage instance @@ -214,25 +232,27 @@ protocolInfoDualByron abstractGenesis@ByronSpecGenesis{..} params credss = instance NodeInitStorage DualByronBlock where -- Just like Byron, we need to start with an EBB - nodeInitChainDB cfg InitChainDB { getCurrentLedger, addBlock } = do - tip <- ledgerTipPoint <$> getCurrentLedger - case tip of - BlockPoint {} -> return () - GenesisPoint -> addBlock genesisEBB - where - genesisEBB :: DualByronBlock - genesisEBB = DualBlock { - dualBlockMain = byronEBB - , dualBlockAux = Nothing - , dualBlockBridge = mempty - } + nodeInitChainDB cfg InitChainDB{getCurrentLedger, addBlock} = do + tip <- ledgerTipPoint <$> getCurrentLedger + case tip of + BlockPoint{} -> return () + GenesisPoint -> addBlock genesisEBB + where + genesisEBB :: DualByronBlock + genesisEBB = + DualBlock + { dualBlockMain = byronEBB + , dualBlockAux = Nothing + , dualBlockBridge = mempty + } - byronEBB :: ByronBlock - byronEBB = forgeEBB - (getByronBlockConfig (dualStorageConfigMain cfg)) - (SlotNo 0) - (BlockNo 0) - GenesisHash + byronEBB :: ByronBlock + byronEBB = + forgeEBB + (getByronBlockConfig (dualStorageConfigMain cfg)) + (SlotNo 0) + (BlockNo 0) + GenesisHash -- Node config is a consensus concern, determined by the main block only nodeImmutableDbChunkInfo = nodeImmutableDbChunkInfo . 
dualStorageConfigMain @@ -252,7 +272,9 @@ instance BlockSupportsMetrics DualByronBlock where instance BlockSupportsSanityCheck DualByronBlock where configAllSecurityParams = pure . configSecurityParam -deriving via SelectViewDiffusionPipelining DualByronBlock - instance BlockSupportsDiffusionPipelining DualByronBlock +deriving via + SelectViewDiffusionPipelining DualByronBlock + instance + BlockSupportsDiffusionPipelining DualByronBlock instance RunNode DualByronBlock diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Node/Serialisation.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Node/Serialisation.hs index d8e85c2735..ebaa87de0d 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Node/Serialisation.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Ouroboros/Consensus/ByronDual/Node/Serialisation.hs @@ -4,33 +4,35 @@ {-# LANGUAGE LambdaCase #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.ByronDual.Node.Serialisation () where -import Cardano.Binary -import Cardano.Chain.Slotting (EpochSlots) -import qualified Data.ByteString.Lazy as Lazy -import Data.Proxy -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Node.Serialisation () -import Ouroboros.Consensus.Byron.Protocol -import Ouroboros.Consensus.ByronDual.Ledger -import Ouroboros.Consensus.ByronSpec.Ledger -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Dual -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) -import Ouroboros.Consensus.Ledger.Tables -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Protocol.PBFT.State 
(PBftState) -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Network.Block (Serialised, unwrapCBORinCBOR, - wrapCBORinCBOR) +import Cardano.Binary +import Cardano.Chain.Slotting (EpochSlots) +import Data.ByteString.Lazy qualified as Lazy +import Data.Proxy +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Node.Serialisation () +import Ouroboros.Consensus.Byron.Protocol +import Ouroboros.Consensus.ByronDual.Ledger +import Ouroboros.Consensus.ByronSpec.Ledger +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Dual +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Network.Block + ( Serialised + , unwrapCBORinCBOR + , wrapCBORinCBOR + ) {------------------------------------------------------------------------------- HasNetworkProtocolVersion @@ -40,12 +42,12 @@ pb :: Proxy ByronBlock pb = Proxy instance HasNetworkProtocolVersion DualByronBlock where - type BlockNodeToNodeVersion DualByronBlock = BlockNodeToNodeVersion ByronBlock + type BlockNodeToNodeVersion DualByronBlock = BlockNodeToNodeVersion ByronBlock type BlockNodeToClientVersion DualByronBlock = BlockNodeToClientVersion ByronBlock instance SupportedNetworkProtocolVersion DualByronBlock where - supportedNodeToNodeVersions _ = supportedNodeToNodeVersions pb - supportedNodeToClientVersions _ = supportedNodeToClientVersions pb + supportedNodeToNodeVersions _ = supportedNodeToNodeVersions pb + supportedNodeToClientVersions _ = supportedNodeToClientVersions pb latestReleasedNodeVersion = latestReleasedNodeVersionDefault @@ -59,12 +61,13 @@ instance 
EncodeDisk DualByronBlock DualByronBlock where encodeDisk _ = encodeDualBlock encodeByronBlock instance DecodeDisk DualByronBlock (Lazy.ByteString -> DualByronBlock) where decodeDisk ccfg = decodeDualBlock (decodeByronBlock epochSlots) - where - epochSlots = extractEpochSlots ccfg + where + epochSlots = extractEpochSlots ccfg instance DecodeDiskDep (NestedCtxt Header) DualByronBlock where - decodeDiskDep (DualCodecConfig ccfg ByronSpecCodecConfig) - (NestedCtxt (CtxtDual ctxt)) = + decodeDiskDep + (DualCodecConfig ccfg ByronSpecCodecConfig) + (NestedCtxt (CtxtDual ctxt)) = decodeDiskDep ccfg (NestedCtxt ctxt) instance EncodeDisk DualByronBlock (LedgerState DualByronBlock EmptyMK) where @@ -75,16 +78,19 @@ instance DecodeDisk DualByronBlock (LedgerState DualByronBlock EmptyMK) where -- | @'ChainDepState' ('BlockProtocol' 'DualByronBlock')@ instance EncodeDisk DualByronBlock (PBftState PBftByronCrypto) where encodeDisk _ = encodeByronChainDepState + -- | @'ChainDepState' ('BlockProtocol' 'DualByronBlock')@ instance DecodeDisk DualByronBlock (PBftState PBftByronCrypto) where decodeDisk _ = decodeByronChainDepState instance EncodeDisk DualByronBlock (AnnTip DualByronBlock) where - encodeDisk ccfg = encodeDisk (dualCodecConfigMain ccfg) - . (castAnnTip :: AnnTip DualByronBlock -> AnnTip ByronBlock) + encodeDisk ccfg = + encodeDisk (dualCodecConfigMain ccfg) + . (castAnnTip :: AnnTip DualByronBlock -> AnnTip ByronBlock) instance DecodeDisk DualByronBlock (AnnTip DualByronBlock) where - decodeDisk ccfg = (castAnnTip :: AnnTip ByronBlock -> AnnTip DualByronBlock) - <$> decodeDisk (dualCodecConfigMain ccfg) + decodeDisk ccfg = + (castAnnTip :: AnnTip ByronBlock -> AnnTip DualByronBlock) + <$> decodeDisk (dualCodecConfigMain ccfg) {------------------------------------------------------------------------------- SerialiseNodeToNode @@ -98,32 +104,33 @@ instance SerialiseNodeToNodeConstraints DualByronBlock where -- | CBOR-in-CBOR for the annotation. 
This also makes it compatible with the -- wrapped ('Serialised') variant. instance SerialiseNodeToNode DualByronBlock DualByronBlock where - encodeNodeToNode _ _ = wrapCBORinCBOR (encodeDualBlock encodeByronBlock) + encodeNodeToNode _ _ = wrapCBORinCBOR (encodeDualBlock encodeByronBlock) decodeNodeToNode ccfg _ = unwrapCBORinCBOR (decodeDualBlock (decodeByronBlock epochSlots)) - where - epochSlots = extractEpochSlots ccfg + where + epochSlots = extractEpochSlots ccfg -- | CBOR-in-CBOR for the annotation. This also makes it compatible with the -- wrapped ('Serialised') variant. instance SerialiseNodeToNode DualByronBlock (Serialised DualByronBlock) - -- Default instance + +-- Default instance -- | Forward to the Byron instance. instance SerialiseNodeToNode DualByronBlock (Header DualByronBlock) where encodeNodeToNode ccfg version = - encodeNodeToNode (dualCodecConfigMain ccfg) version + encodeNodeToNode (dualCodecConfigMain ccfg) version . dualHeaderMain decodeNodeToNode ccfg version = - DualHeader + DualHeader <$> decodeNodeToNode (dualCodecConfigMain ccfg) version -- | Forward to the Byron instance. instance SerialiseNodeToNode DualByronBlock (SerialisedHeader DualByronBlock) where encodeNodeToNode ccfg version = - encodeNodeToNode (dualCodecConfigMain ccfg) version + encodeNodeToNode (dualCodecConfigMain ccfg) version . dualWrappedMain decodeNodeToNode ccfg version = - rewrapMain + rewrapMain <$> decodeNodeToNode (dualCodecConfigMain ccfg) version instance SerialiseNodeToNode DualByronBlock (GenTx DualByronBlock) where @@ -147,15 +154,16 @@ instance SerialiseNodeToClient DualByronBlock (DualLedgerConfig ByronBlock Byron -- | CBOR-in-CBOR for the annotation. This also makes it compatible with the -- wrapped ('Serialised') variant. 
instance SerialiseNodeToClient DualByronBlock DualByronBlock where - encodeNodeToClient _ _ = wrapCBORinCBOR (encodeDualBlock encodeByronBlock) + encodeNodeToClient _ _ = wrapCBORinCBOR (encodeDualBlock encodeByronBlock) decodeNodeToClient ccfg _ = unwrapCBORinCBOR (decodeDualBlock (decodeByronBlock epochSlots)) - where - epochSlots = extractEpochSlots ccfg + where + epochSlots = extractEpochSlots ccfg -- | CBOR-in-CBOR for the annotation. This also makes it compatible with the -- wrapped ('Serialised') variant. instance SerialiseNodeToClient DualByronBlock (Serialised DualByronBlock) - -- Default instance + +-- Default instance instance SerialiseNodeToClient DualByronBlock (GenTx DualByronBlock) where encodeNodeToClient _ _ = encodeDualGenTx encodeByronGenTx @@ -189,11 +197,13 @@ extractEpochSlots = getByronEpochSlots . dualCodecConfigMain -- | The headers for 'DualByronBlock' and 'ByronBlock' are identical, so we -- can safely cast the serialised forms. -dualWrappedMain :: SerialisedHeader DualByronBlock - -> SerialisedHeader ByronBlock +dualWrappedMain :: + SerialisedHeader DualByronBlock -> + SerialisedHeader ByronBlock dualWrappedMain = castSerialisedHeader ctxtDualMain -- | Inverse of 'dualWrappedMain'. 
-rewrapMain :: SerialisedHeader ByronBlock - -> SerialisedHeader DualByronBlock +rewrapMain :: + SerialisedHeader ByronBlock -> + SerialisedHeader DualByronBlock rewrapMain = castSerialisedHeader CtxtDual diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/Consensus/Byron/Examples.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/Consensus/Byron/Examples.hs index 981536e699..b6514a9259 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/Consensus/Byron/Examples.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/Consensus/Byron/Examples.hs @@ -2,17 +2,17 @@ {-# LANGUAGE DisambiguateRecordFields #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TypeApplications #-} - {-# OPTIONS_GHC -Wno-incomplete-uni-patterns #-} -module Test.Consensus.Byron.Examples ( - -- * Setup +module Test.Consensus.Byron.Examples + ( -- * Setup cfg , codecConfig , leaderCredentials , ledgerConfig , secParam , windowSize + -- * Examples , exampleApplyTxErr , exampleChainDepState @@ -25,39 +25,43 @@ module Test.Consensus.Byron.Examples ( , examples ) where -import qualified Cardano.Chain.Block as CC.Block -import qualified Cardano.Chain.Byron.API as CC -import qualified Cardano.Chain.Common as CC -import qualified Cardano.Chain.Update.Validation.Interface as CC.UPI -import qualified Cardano.Chain.UTxO as CC -import Cardano.Ledger.BaseTypes (knownNonZeroBounded) -import Control.Monad.Except (runExcept) -import qualified Data.Map.Strict as Map -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Crypto.DSIGN (SignKeyDSIGN (..)) -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Node (ByronLeaderCredentials (..)) -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.Tables.Utils -import 
Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.PBFT -import qualified Ouroboros.Consensus.Protocol.PBFT.State as S -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Network.Block (Serialised (..)) -import qualified Test.Cardano.Chain.Common.Example as CC -import qualified Test.Cardano.Chain.Genesis.Dummy as CC -import qualified Test.Cardano.Chain.Update.Example as CC -import qualified Test.Cardano.Chain.UTxO.Example as CC -import Test.ThreadNet.Infra.Byron.ProtocolInfo (mkLeaderCredentials) -import qualified Test.Util.Serialisation.Examples as Examples -import Test.Util.Serialisation.Examples (Examples (Examples), - Labelled, labelled, unlabelled) -import Test.Util.Serialisation.SomeResult (SomeResult (..)) +import Cardano.Chain.Block qualified as CC.Block +import Cardano.Chain.Byron.API qualified as CC +import Cardano.Chain.Common qualified as CC +import Cardano.Chain.UTxO qualified as CC +import Cardano.Chain.Update.Validation.Interface qualified as CC.UPI +import Cardano.Ledger.BaseTypes (knownNonZeroBounded) +import Control.Monad.Except (runExcept) +import Data.Map.Strict qualified as Map +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Crypto.DSIGN (SignKeyDSIGN (..)) +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Node (ByronLeaderCredentials (..)) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Protocol.PBFT.State qualified as S +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Network.Block (Serialised (..)) +import Test.Cardano.Chain.Common.Example qualified as 
CC +import Test.Cardano.Chain.Genesis.Dummy qualified as CC +import Test.Cardano.Chain.UTxO.Example qualified as CC +import Test.Cardano.Chain.Update.Example qualified as CC +import Test.ThreadNet.Infra.Byron.ProtocolInfo (mkLeaderCredentials) +import Test.Util.Serialisation.Examples + ( Examples (Examples) + , Labelled + , labelled + , unlabelled + ) +import Test.Util.Serialisation.Examples qualified as Examples +import Test.Util.Serialisation.SomeResult (SomeResult (..)) {------------------------------------------------------------------------------- Setup @@ -73,8 +77,9 @@ windowSize :: S.WindowSize windowSize = S.WindowSize 2 cfg :: BlockConfig ByronBlock -cfg = ByronConfig { - byronGenesisConfig = CC.dummyConfig +cfg = + ByronConfig + { byronGenesisConfig = CC.dummyConfig , byronProtocolVersion = CC.exampleProtocolVersion , byronSoftwareVersion = CC.exampleSoftwareVersion } @@ -87,57 +92,59 @@ ledgerConfig = CC.dummyConfig leaderCredentials :: ByronLeaderCredentials leaderCredentials = - mkLeaderCredentials - CC.dummyConfig - CC.dummyGeneratedSecrets - (CoreNodeId 0) + mkLeaderCredentials + CC.dummyConfig + CC.dummyGeneratedSecrets + (CoreNodeId 0) {------------------------------------------------------------------------------- Examples -------------------------------------------------------------------------------} examples :: Examples ByronBlock -examples = Examples { - exampleBlock = regularAndEBB exampleBlock exampleEBB - , exampleSerialisedBlock = regularAndEBB exampleSerialisedBlock exampleSerialisedEBB - , exampleHeader = regularAndEBB exampleHeader exampleEBBHeader +examples = + Examples + { exampleBlock = regularAndEBB exampleBlock exampleEBB + , exampleSerialisedBlock = regularAndEBB exampleSerialisedBlock exampleSerialisedEBB + , exampleHeader = regularAndEBB exampleHeader exampleEBBHeader , exampleSerialisedHeader = regularAndEBB exampleSerialisedHeader exampleSerialisedEBBHeader - , exampleHeaderHash = unlabelled exampleHeaderHash - , exampleGenTx 
= unlabelled exampleGenTx - , exampleGenTxId = unlabelled exampleGenTxId - , exampleApplyTxErr = unlabelled exampleApplyTxErr - , exampleQuery = unlabelled exampleQuery - , exampleResult = unlabelled exampleResult - , exampleAnnTip = unlabelled exampleAnnTip - , exampleLedgerConfig = unlabelled ledgerConfig - , exampleLedgerState = unlabelled $ forgetLedgerTables exampleLedgerState - , exampleChainDepState = unlabelled exampleChainDepState - , exampleExtLedgerState = unlabelled $ forgetLedgerTables exampleExtLedgerState - , exampleSlotNo = unlabelled exampleSlotNo - , exampleLedgerTables = unlabelled emptyLedgerTables + , exampleHeaderHash = unlabelled exampleHeaderHash + , exampleGenTx = unlabelled exampleGenTx + , exampleGenTxId = unlabelled exampleGenTxId + , exampleApplyTxErr = unlabelled exampleApplyTxErr + , exampleQuery = unlabelled exampleQuery + , exampleResult = unlabelled exampleResult + , exampleAnnTip = unlabelled exampleAnnTip + , exampleLedgerConfig = unlabelled ledgerConfig + , exampleLedgerState = unlabelled $ forgetLedgerTables exampleLedgerState + , exampleChainDepState = unlabelled exampleChainDepState + , exampleExtLedgerState = unlabelled $ forgetLedgerTables exampleExtLedgerState + , exampleSlotNo = unlabelled exampleSlotNo + , exampleLedgerTables = unlabelled emptyLedgerTables } - where - regularAndEBB :: a -> a -> Labelled a - regularAndEBB regular ebb = labelled [("regular", regular), ("EBB", ebb)] + where + regularAndEBB :: a -> a -> Labelled a + regularAndEBB regular ebb = labelled [("regular", regular), ("EBB", ebb)] - exampleQuery = SomeBlockQuery GetUpdateInterfaceState - exampleResult = SomeResult GetUpdateInterfaceState exampleUPIState + exampleQuery = SomeBlockQuery GetUpdateInterfaceState + exampleResult = SomeResult GetUpdateInterfaceState exampleUPIState exampleBlock :: ByronBlock exampleBlock = - forgeRegularBlock - cfg - (BlockNo 1) - (SlotNo 1) - (applyChainTick OmitLedgerEvents ledgerConfig (SlotNo 1) (forgetLedgerTables 
ledgerStateAfterEBB)) - [ValidatedByronTx exampleGenTx] - (fakeMkIsLeader leaderCredentials) - where - -- | Normally, we'd have to use 'checkIsLeader' to produce this proof. - fakeMkIsLeader (ByronLeaderCredentials signKey dlgCert _ _) = PBftIsLeader { - pbftIsLeaderSignKey = SignKeyByronDSIGN signKey - , pbftIsLeaderDlgCert = dlgCert - } + forgeRegularBlock + cfg + (BlockNo 1) + (SlotNo 1) + (applyChainTick OmitLedgerEvents ledgerConfig (SlotNo 1) (forgetLedgerTables ledgerStateAfterEBB)) + [ValidatedByronTx exampleGenTx] + (fakeMkIsLeader leaderCredentials) + where + -- \| Normally, we'd have to use 'checkIsLeader' to produce this proof. + fakeMkIsLeader (ByronLeaderCredentials signKey dlgCert _ _) = + PBftIsLeader + { pbftIsLeaderSignKey = SignKeyByronDSIGN signKey + , pbftIsLeaderDlgCert = dlgCert + } exampleEBB :: ByronBlock exampleEBB = forgeEBB cfg (SlotNo 0) (BlockNo 0) GenesisHash @@ -155,39 +162,44 @@ exampleEBBHeader :: Header ByronBlock exampleEBBHeader = getHeader exampleEBB exampleSerialisedHeader :: SerialisedHeader ByronBlock -exampleSerialisedHeader = SerialisedHeaderFromDepPair $ +exampleSerialisedHeader = + SerialisedHeaderFromDepPair $ GenDepPair (NestedCtxt (CtxtByronRegular 100)) (Serialised "
") exampleSerialisedEBBHeader :: SerialisedHeader ByronBlock -exampleSerialisedEBBHeader = SerialisedHeaderFromDepPair $ +exampleSerialisedEBBHeader = + SerialisedHeaderFromDepPair $ GenDepPair (NestedCtxt (CtxtByronBoundary 100)) (Serialised "") exampleAnnTip :: AnnTip ByronBlock -exampleAnnTip = AnnTip { - annTipSlotNo = SlotNo 37 +exampleAnnTip = + AnnTip + { annTipSlotNo = SlotNo 37 , annTipBlockNo = BlockNo 23 - , annTipInfo = TipInfoIsEBB exampleHeaderHash IsNotEBB + , annTipInfo = TipInfoIsEBB exampleHeaderHash IsNotEBB } exampleChainDepState :: ChainDepState (BlockProtocol ByronBlock) exampleChainDepState = S.fromList signers - where - signers = map (`S.PBftSigner` CC.exampleKeyHash) [1..4] + where + signers = map (`S.PBftSigner` CC.exampleKeyHash) [1 .. 4] emptyLedgerState :: LedgerState ByronBlock ValuesMK -emptyLedgerState = ByronLedgerState { - byronLedgerTipBlockNo = Origin - , byronLedgerState = initState +emptyLedgerState = + ByronLedgerState + { byronLedgerTipBlockNo = Origin + , byronLedgerState = initState , byronLedgerTransition = ByronTransitionInfo Map.empty } - where - initState :: CC.Block.ChainValidationState - Right initState = runExcept $ + where + initState :: CC.Block.ChainValidationState + Right initState = + runExcept $ CC.Block.initialChainValidationState ledgerConfig ledgerStateAfterEBB :: LedgerState ByronBlock ValuesMK ledgerStateAfterEBB = - applyDiffs emptyLedgerState + applyDiffs emptyLedgerState . reapplyLedgerBlock OmitLedgerEvents ledgerConfig exampleEBB . applyDiffs emptyLedgerState . applyChainTick OmitLedgerEvents ledgerConfig (SlotNo 0) @@ -196,7 +208,7 @@ ledgerStateAfterEBB = exampleLedgerState :: LedgerState ByronBlock ValuesMK exampleLedgerState = - applyDiffs emptyLedgerState + applyDiffs emptyLedgerState . reapplyLedgerBlock OmitLedgerEvents ledgerConfig exampleBlock . applyDiffs ledgerStateAfterEBB . 
applyChainTick OmitLedgerEvents ledgerConfig (SlotNo 1) @@ -207,8 +219,9 @@ exampleHeaderState :: HeaderState ByronBlock exampleHeaderState = HeaderState (NotOrigin exampleAnnTip) exampleChainDepState exampleExtLedgerState :: ExtLedgerState ByronBlock ValuesMK -exampleExtLedgerState = ExtLedgerState { - ledgerState = exampleLedgerState +exampleExtLedgerState = + ExtLedgerState + { ledgerState = exampleLedgerState , headerState = exampleHeaderState } @@ -226,10 +239,10 @@ exampleUPIState = CC.UPI.initialState ledgerConfig exampleApplyTxErr :: CC.ApplyMempoolPayloadErr exampleApplyTxErr = - CC.MempoolTxErr - $ CC.UTxOValidationTxValidationError - $ CC.TxValidationLovelaceError "a" - $ CC.LovelaceOverflow 0 + CC.MempoolTxErr $ + CC.UTxOValidationTxValidationError $ + CC.TxValidationLovelaceError "a" $ + CC.LovelaceOverflow 0 exampleSlotNo :: SlotNo exampleSlotNo = SlotNo 42 diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/Consensus/Byron/Generators.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/Consensus/Byron/Generators.hs index 3bbd2123e1..a29d110409 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/Consensus/Byron/Generators.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/Consensus/Byron/Generators.hs @@ -3,11 +3,10 @@ {-# LANGUAGE GADTs #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE TypeApplications #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Test.Consensus.Byron.Generators ( - RegularBlock (..) +module Test.Consensus.Byron.Generators + ( RegularBlock (..) 
, epochSlots , genByronLedgerConfig , genByronLedgerState @@ -15,68 +14,83 @@ module Test.Consensus.Byron.Generators ( , protocolMagicId ) where -import Cardano.Chain.Block (ABlockOrBoundary (..), - ABlockOrBoundaryHdr (..), ChainValidationState (..), - cvsPreviousHash) -import qualified Cardano.Chain.Block as CC.Block -import qualified Cardano.Chain.Byron.API as API -import Cardano.Chain.Common (Address, BlockCount (..), CompactAddress, - KeyHash, Lovelace) -import qualified Cardano.Chain.Delegation as CC.Del -import qualified Cardano.Chain.Delegation.Validation.Activation as CC.Act -import qualified Cardano.Chain.Delegation.Validation.Interface as CC.DI -import qualified Cardano.Chain.Delegation.Validation.Scheduling as CC.Sched -import qualified Cardano.Chain.Genesis as Byron -import qualified Cardano.Chain.Genesis as CC.Genesis -import Cardano.Chain.Slotting (EpochNumber, EpochSlots (..), - SlotNumber) -import qualified Cardano.Chain.Update as CC.Update -import qualified Cardano.Chain.Update.Validation.Interface as CC.UPI -import qualified Cardano.Chain.Update.Validation.Registration as CC.Reg -import qualified Cardano.Chain.UTxO as CC.UTxO -import Cardano.Crypto (ProtocolMagicId (..), - RequiresNetworkMagic (..)) -import Cardano.Crypto.Hashing (Hash) -import Cardano.Crypto.Signing -import qualified Cardano.Crypto.Wallet as Wallet -import Cardano.Ledger.BaseTypes (knownNonZeroBounded) -import Cardano.Ledger.Binary (decCBOR, encCBOR) -import Control.Monad (replicateM) -import qualified Data.ByteString as BS -import qualified Data.ByteString.Base64 as B64 -import qualified Data.ByteString.Char8 as BSC8 -import Data.Coerce (coerce) -import qualified Data.Map.Strict as Map -import Data.String (IsString (fromString)) -import qualified Data.Text as T -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Protocol -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.HeaderValidation (AnnTip 
(..)) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) -import Ouroboros.Consensus.Ledger.Tables.Utils (emptyLedgerTables) -import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) -import qualified Ouroboros.Consensus.Protocol.PBFT.State as PBftState -import Ouroboros.Network.SizeInBytes -import qualified Test.Cardano.Chain.Block.Gen as CC -import qualified Test.Cardano.Chain.Common.Gen as CC -import qualified Test.Cardano.Chain.Delegation.Gen as CC -import qualified Test.Cardano.Chain.Genesis.Gen as CC -import qualified Test.Cardano.Chain.MempoolPayload.Gen as CC -import qualified Test.Cardano.Chain.Slotting.Gen as CC -import qualified Test.Cardano.Chain.Update.Gen as UG -import qualified Test.Cardano.Chain.UTxO.Gen as CC -import qualified Test.Cardano.Crypto.Gen as CC -import Test.Cardano.Ledger.Binary.Arbitrary () -import Test.QuickCheck hiding (Result) -import Test.QuickCheck.Hedgehog (hedgehog) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Serialisation.Roundtrip (Coherent (..), - WithVersion (..)) -import Test.Util.Serialisation.SomeResult (SomeResult (..)) +import Cardano.Chain.Block + ( ABlockOrBoundary (..) + , ABlockOrBoundaryHdr (..) + , ChainValidationState (..) + , cvsPreviousHash + ) +import Cardano.Chain.Block qualified as CC.Block +import Cardano.Chain.Byron.API qualified as API +import Cardano.Chain.Common + ( Address + , BlockCount (..) + , CompactAddress + , KeyHash + , Lovelace + ) +import Cardano.Chain.Delegation qualified as CC.Del +import Cardano.Chain.Delegation.Validation.Activation qualified as CC.Act +import Cardano.Chain.Delegation.Validation.Interface qualified as CC.DI +import Cardano.Chain.Delegation.Validation.Scheduling qualified as CC.Sched +import Cardano.Chain.Genesis qualified as Byron +import Cardano.Chain.Genesis qualified as CC.Genesis +import Cardano.Chain.Slotting + ( EpochNumber + , EpochSlots (..) 
+ , SlotNumber + ) +import Cardano.Chain.UTxO qualified as CC.UTxO +import Cardano.Chain.Update qualified as CC.Update +import Cardano.Chain.Update.Validation.Interface qualified as CC.UPI +import Cardano.Chain.Update.Validation.Registration qualified as CC.Reg +import Cardano.Crypto + ( ProtocolMagicId (..) + , RequiresNetworkMagic (..) + ) +import Cardano.Crypto.Hashing (Hash) +import Cardano.Crypto.Signing +import Cardano.Crypto.Wallet qualified as Wallet +import Cardano.Ledger.BaseTypes (knownNonZeroBounded) +import Cardano.Ledger.Binary (decCBOR, encCBOR) +import Control.Monad (replicateM) +import Data.ByteString qualified as BS +import Data.ByteString.Base64 qualified as B64 +import Data.ByteString.Char8 qualified as BSC8 +import Data.Coerce (coerce) +import Data.Map.Strict qualified as Map +import Data.String (IsString (fromString)) +import Data.Text qualified as T +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Protocol +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HeaderValidation (AnnTip (..)) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) +import Ouroboros.Consensus.Ledger.Tables.Utils (emptyLedgerTables) +import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) +import Ouroboros.Consensus.Protocol.PBFT.State qualified as PBftState +import Ouroboros.Network.SizeInBytes +import Test.Cardano.Chain.Block.Gen qualified as CC +import Test.Cardano.Chain.Common.Gen qualified as CC +import Test.Cardano.Chain.Delegation.Gen qualified as CC +import Test.Cardano.Chain.Genesis.Gen qualified as CC +import Test.Cardano.Chain.MempoolPayload.Gen qualified as CC +import Test.Cardano.Chain.Slotting.Gen qualified as CC +import Test.Cardano.Chain.UTxO.Gen qualified as CC +import Test.Cardano.Chain.Update.Gen qualified as UG +import Test.Cardano.Crypto.Gen qualified as CC +import 
Test.Cardano.Ledger.Binary.Arbitrary () +import Test.QuickCheck hiding (Result) +import Test.QuickCheck.Hedgehog (hedgehog) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Roundtrip + ( Coherent (..) + , WithVersion (..) + ) +import Test.Util.Serialisation.SomeResult (SomeResult (..)) {------------------------------------------------------------------------------- Generators @@ -94,36 +108,40 @@ protocolMagicId :: ProtocolMagicId protocolMagicId = ProtocolMagicId 100 instance Arbitrary CC.Genesis.Config where - arbitrary = CC.Genesis.Config - <$> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary + arbitrary = + CC.Genesis.Config + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary instance Arbitrary CC.Genesis.GenesisData where - arbitrary = CC.Genesis.GenesisData - <$> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary + arbitrary = + CC.Genesis.GenesisData + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary instance Arbitrary CC.Genesis.GenesisKeyHashes where arbitrary = CC.Genesis.GenesisKeyHashes <$> arbitrary instance Arbitrary CC.Genesis.GenesisDelegation where - arbitrary = (CC.Genesis.mkGenesisDelegation <$> arbitrary) - `suchThatMap` (either (const Nothing) Just) + arbitrary = + (CC.Genesis.mkGenesisDelegation <$> arbitrary) + `suchThatMap` (either (const Nothing) Just) instance Arbitrary (CC.Del.ACertificate ()) where - arbitrary = CC.Del.signCertificate - <$> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary + arbitrary = + CC.Del.signCertificate + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary instance Arbitrary SafeSigner where arbitrary = do @@ -133,11 +151,12 @@ instance Arbitrary SafeSigner where return $ SafeSigner (SigningKey xprv) (PassPhrase (fromString (BSC8.unpack passPhrase))) instance Arbitrary VerificationKey where - arbitrary = 
either (error . show) id . parseFullVerificationKey <$> - (T.pack . BSC8.unpack . B64.encode <$> arbitraryKey) - where - -- The key must be 64 bytes - arbitraryKey = BS.pack <$> sequence (replicate 64 arbitrary) + arbitrary = + either (error . show) id . parseFullVerificationKey + <$> (T.pack . BSC8.unpack . B64.encode <$> arbitraryKey) + where + -- The key must be 64 bytes + arbitraryKey = BS.pack <$> sequence (replicate 64 arbitrary) instance Arbitrary CC.Genesis.GenesisNonAvvmBalances where arbitrary = CC.Genesis.GenesisNonAvvmBalances <$> arbitrary @@ -170,53 +189,58 @@ instance Arbitrary CompactAddress where arbitrary = hedgehog CC.genCompactAddress -- | A 'ByronBlock' that is never an EBB. -newtype RegularBlock = RegularBlock { unRegularBlock :: ByronBlock } +newtype RegularBlock = RegularBlock {unRegularBlock :: ByronBlock} deriving (Eq, Show) instance Arbitrary RegularBlock where arbitrary = - RegularBlock .annotateByronBlock epochSlots <$> - hedgehog (CC.genBlock protocolMagicId epochSlots) + RegularBlock . annotateByronBlock epochSlots + <$> hedgehog (CC.genBlock protocolMagicId epochSlots) instance Arbitrary ByronBlock where arbitrary = getCoherent <$> arbitrary instance Arbitrary (Coherent ByronBlock) where - arbitrary = Coherent <$> frequency - [ (3, genBlock) - , (1, genBoundaryBlock) - ] - where - genBlock :: Gen ByronBlock - genBlock = unRegularBlock <$> arbitrary - genBoundaryBlock :: Gen ByronBlock - genBoundaryBlock = - mkByronBlock epochSlots . ABOBBoundary . API.reAnnotateBoundary protocolMagicId <$> - hedgehog (CC.genBoundaryBlock) + arbitrary = + Coherent + <$> frequency + [ (3, genBlock) + , (1, genBoundaryBlock) + ] + where + genBlock :: Gen ByronBlock + genBlock = unRegularBlock <$> arbitrary + genBoundaryBlock :: Gen ByronBlock + genBoundaryBlock = + mkByronBlock epochSlots . ABOBBoundary . 
API.reAnnotateBoundary protocolMagicId + <$> hedgehog (CC.genBoundaryBlock) instance Arbitrary (Header ByronBlock) where - arbitrary = frequency + arbitrary = + frequency [ (3, genHeader) , (1, genBoundaryHeader) ] - where - genHeader :: Gen (Header ByronBlock) - genHeader = do - blockSize <- SizeInBytes <$> arbitrary - flip (mkByronHeader epochSlots) blockSize . ABOBBlockHdr . - API.reAnnotateUsing - (CC.Block.encCBORHeader epochSlots) - (CC.Block.decCBORAHeader epochSlots) <$> - hedgehog (CC.genHeader protocolMagicId epochSlots) - - genBoundaryHeader :: Gen (Header ByronBlock) - genBoundaryHeader = do - blockSize <- SizeInBytes <$> arbitrary - flip (mkByronHeader epochSlots) blockSize . ABOBBoundaryHdr . - API.reAnnotateUsing - (CC.Block.encCBORABoundaryHeader protocolMagicId) - CC.Block.decCBORABoundaryHeader <$> - hedgehog CC.genBoundaryHeader + where + genHeader :: Gen (Header ByronBlock) + genHeader = do + blockSize <- SizeInBytes <$> arbitrary + flip (mkByronHeader epochSlots) blockSize + . ABOBBlockHdr + . API.reAnnotateUsing + (CC.Block.encCBORHeader epochSlots) + (CC.Block.decCBORAHeader epochSlots) + <$> hedgehog (CC.genHeader protocolMagicId epochSlots) + + genBoundaryHeader :: Gen (Header ByronBlock) + genBoundaryHeader = do + blockSize <- SizeInBytes <$> arbitrary + flip (mkByronHeader epochSlots) blockSize + . ABOBBoundaryHdr + . API.reAnnotateUsing + (CC.Block.encCBORABoundaryHeader protocolMagicId) + CC.Block.decCBORABoundaryHeader + <$> hedgehog CC.genBoundaryHeader instance Arbitrary (Hash a) where arbitrary = coerce <$> hedgehog CC.genTextHash @@ -229,30 +253,32 @@ instance Arbitrary KeyHash where instance Arbitrary (GenTx ByronBlock) where arbitrary = - fromMempoolPayload . API.reAnnotateUsing encCBOR decCBOR <$> - hedgehog (CC.genMempoolPayload protocolMagicId) + fromMempoolPayload . 
API.reAnnotateUsing encCBOR decCBOR + <$> hedgehog (CC.genMempoolPayload protocolMagicId) instance Arbitrary (GenTxId ByronBlock) where - arbitrary = oneof - [ ByronTxId <$> hedgehog CC.genTxId - , ByronDlgId <$> hedgehog genCertificateId + arbitrary = + oneof + [ ByronTxId <$> hedgehog CC.genTxId + , ByronDlgId <$> hedgehog genCertificateId , ByronUpdateProposalId <$> hedgehog (UG.genUpId protocolMagicId) - , ByronUpdateVoteId <$> hedgehog genUpdateVoteId + , ByronUpdateVoteId <$> hedgehog genUpdateVoteId ] - where - genCertificateId = CC.genAbstractHash (CC.genCertificate protocolMagicId) - genUpdateVoteId = CC.genAbstractHash (UG.genVote protocolMagicId) + where + genCertificateId = CC.genAbstractHash (CC.genCertificate protocolMagicId) + genUpdateVoteId = CC.genAbstractHash (UG.genVote protocolMagicId) instance Arbitrary API.ApplyMempoolPayloadErr where - arbitrary = oneof - [ API.MempoolTxErr <$> hedgehog CC.genUTxOValidationError - , API.MempoolDlgErr <$> hedgehog CC.genError - -- TODO there is no generator for - -- Cardano.Chain.Update.Validation.Interface.Error and we can't write one - -- either because the different Error types it wraps are not exported. - -- , MempoolUpdateProposalErr <$> arbitrary - -- , MempoolUpdateVoteErr <$> arbitrary - ] + arbitrary = + oneof + [ API.MempoolTxErr <$> hedgehog CC.genUTxOValidationError + , API.MempoolDlgErr <$> hedgehog CC.genError + -- TODO there is no generator for + -- Cardano.Chain.Update.Validation.Interface.Error and we can't write one + -- either because the different Error types it wraps are not exported. 
+ -- , MempoolUpdateProposalErr <$> arbitrary + -- , MempoolUpdateVoteErr <$> arbitrary + ] instance Arbitrary (SomeBlockQuery (BlockQuery ByronBlock)) where arbitrary = pure $ SomeBlockQuery GetUpdateInterfaceState @@ -282,77 +308,87 @@ instance Arbitrary CC.Update.SoftwareVersion where arbitrary = hedgehog UG.genSoftwareVersion instance Arbitrary CC.Reg.ProtocolUpdateProposal where - arbitrary = CC.Reg.ProtocolUpdateProposal - <$> arbitrary - <*> arbitrary + arbitrary = + CC.Reg.ProtocolUpdateProposal + <$> arbitrary + <*> arbitrary instance Arbitrary CC.Reg.SoftwareUpdateProposal where - arbitrary = CC.Reg.SoftwareUpdateProposal - <$> arbitrary - <*> arbitrary + arbitrary = + CC.Reg.SoftwareUpdateProposal + <$> arbitrary + <*> arbitrary instance Arbitrary CC.Reg.ApplicationVersion where - arbitrary = CC.Reg.ApplicationVersion - <$> arbitrary - <*> arbitrary - <*> arbitrary + arbitrary = + CC.Reg.ApplicationVersion + <$> arbitrary + <*> arbitrary + <*> arbitrary instance Arbitrary CC.UPI.State where - arbitrary = CC.UPI.State - <$> arbitrary - <*> arbitrary - <*> arbitrary - <*> pure mempty -- TODO CandidateProtocolUpdate's constructor is not exported - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> pure mempty -- TODO Endorsement is not exported - <*> arbitrary + arbitrary = + CC.UPI.State + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> pure mempty -- TODO CandidateProtocolUpdate's constructor is not exported + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> pure mempty -- TODO Endorsement is not exported + <*> arbitrary instance Arbitrary CC.Genesis.GenesisHash where arbitrary = CC.Genesis.GenesisHash <$> arbitrary instance Arbitrary CC.UTxO.UTxO where - arbitrary = oneof [ - hedgehog CC.genUTxO - -- We would sometimes like to run tests using a smaller (or even empty) - -- UTxO, but 'genUTxO' generates a UTxO without depending on the QC size - -- parameter. 
The probability of generating smaller (or empty) UTxOs is - -- therefore low. - , CC.UTxO.fromList <$> - listOf ((,) <$> hedgehog CC.genTxIn <*> hedgehog CC.genTxOut) - ] + arbitrary = + oneof + [ hedgehog CC.genUTxO + , -- We would sometimes like to run tests using a smaller (or even empty) + -- UTxO, but 'genUTxO' generates a UTxO without depending on the QC size + -- parameter. The probability of generating smaller (or empty) UTxOs is + -- therefore low. + CC.UTxO.fromList + <$> listOf ((,) <$> hedgehog CC.genTxIn <*> hedgehog CC.genTxOut) + ] instance Arbitrary CC.Act.State where - arbitrary = CC.Act.State - <$> arbitrary - <*> arbitrary + arbitrary = + CC.Act.State + <$> arbitrary + <*> arbitrary instance Arbitrary CC.Sched.ScheduledDelegation where - arbitrary = CC.Sched.ScheduledDelegation - <$> arbitrary - <*> arbitrary - <*> arbitrary + arbitrary = + CC.Sched.ScheduledDelegation + <$> arbitrary + <*> arbitrary + <*> arbitrary instance Arbitrary CC.Sched.State where - arbitrary = CC.Sched.State - <$> arbitrary - <*> arbitrary + arbitrary = + CC.Sched.State + <$> arbitrary + <*> arbitrary instance Arbitrary CC.DI.State where - arbitrary = CC.DI.State - <$> arbitrary - <*> arbitrary + arbitrary = + CC.DI.State + <$> arbitrary + <*> arbitrary instance Arbitrary CC.Block.ChainValidationState where - arbitrary = CC.Block.ChainValidationState - <$> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary + arbitrary = + CC.Block.ChainValidationState + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary instance Arbitrary ByronNodeToNodeVersion where arbitrary = arbitraryBoundedEnum @@ -377,16 +413,17 @@ genByronLedgerState = do chainValidationState <- arbitrary ledgerTransition <- arbitrary ledgerTipBlockNo <- genLedgerTipBlockNo chainValidationState - pure $ ByronLedgerState { - byronLedgerTipBlockNo = ledgerTipBlockNo - , byronLedgerState = chainValidationState - , byronLedgerTransition = ledgerTransition - } - where - 
genLedgerTipBlockNo ChainValidationState { cvsPreviousHash } = - case cvsPreviousHash of - Left _ -> pure Origin - Right _ -> NotOrigin <$> arbitrary + pure $ + ByronLedgerState + { byronLedgerTipBlockNo = ledgerTipBlockNo + , byronLedgerState = chainValidationState + , byronLedgerTransition = ledgerTransition + } + where + genLedgerTipBlockNo ChainValidationState{cvsPreviousHash} = + case cvsPreviousHash of + Left _ -> pure Origin + Right _ -> NotOrigin <$> arbitrary instance ZeroableMK mk => Arbitrary (LedgerTables (LedgerState ByronBlock) mk) where arbitrary = pure emptyLedgerTables @@ -398,20 +435,21 @@ instance Arbitrary (TipInfoIsEBB ByronBlock) where arbitrary = TipInfoIsEBB <$> arbitrary <*> elements [IsEBB, IsNotEBB] instance Arbitrary (AnnTip ByronBlock) where - arbitrary = AnnTip - <$> (SlotNo <$> arbitrary) - <*> (BlockNo <$> arbitrary) - <*> arbitrary + arbitrary = + AnnTip + <$> (SlotNo <$> arbitrary) + <*> (BlockNo <$> arbitrary) + <*> arbitrary instance Arbitrary (PBftState PBftByronCrypto) where arbitrary = do - slots <- choose (0, 10) - keys <- replicateM 3 arbitrary - let signers = zipWith PBftState.PBftSigner (map SlotNo [0..slots]) (cycle keys) - return $ PBftState.fromList signers + slots <- choose (0, 10) + keys <- replicateM 3 arbitrary + let signers = zipWith PBftState.PBftSigner (map SlotNo [0 .. 
slots]) (cycle keys) + return $ PBftState.fromList signers instance Arbitrary (SomeResult ByronBlock) where - arbitrary = SomeResult GetUpdateInterfaceState <$> arbitrary + arbitrary = SomeResult GetUpdateInterfaceState <$> arbitrary {------------------------------------------------------------------------------- Versioned generators for serialisation @@ -422,22 +460,23 @@ instance Arbitrary (SomeResult ByronBlock) where instance Arbitrary (WithVersion ByronNodeToNodeVersion (Header ByronBlock)) where arbitrary = do version <- arbitrary - hdr <- arbitrary + hdr <- arbitrary let hdr' = case version of ByronNodeToNodeVersion1 -> - hdr { byronHeaderBlockSizeHint = fakeByronBlockSizeHint } + hdr{byronHeaderBlockSizeHint = fakeByronBlockSizeHint} ByronNodeToNodeVersion2 -> hdr return (WithVersion version hdr') instance Arbitrary (WithVersion ByronNodeToNodeVersion (SomeSecond (NestedCtxt Header) ByronBlock)) where arbitrary = do - version <- arbitrary - size <- case version of - ByronNodeToNodeVersion1 -> return fakeByronBlockSizeHint - ByronNodeToNodeVersion2 -> SizeInBytes <$> arbitrary - ctxt <- elements [ - SomeSecond . NestedCtxt $ CtxtByronRegular size - , SomeSecond . NestedCtxt $ CtxtByronBoundary size - ] - return (WithVersion version ctxt) + version <- arbitrary + size <- case version of + ByronNodeToNodeVersion1 -> return fakeByronBlockSizeHint + ByronNodeToNodeVersion2 -> SizeInBytes <$> arbitrary + ctxt <- + elements + [ SomeSecond . NestedCtxt $ CtxtByronRegular size + , SomeSecond . 
NestedCtxt $ CtxtByronBoundary size + ] + return (WithVersion version ctxt) diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron.hs index 916dcd84cf..5bffed7109 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron.hs @@ -1,5 +1,5 @@ module Test.ThreadNet.Infra.Byron (module X) where -import Test.ThreadNet.Infra.Byron.Genesis as X -import Test.ThreadNet.Infra.Byron.ProtocolInfo as X -import Test.ThreadNet.Infra.Byron.TrackUpdates as X +import Test.ThreadNet.Infra.Byron.Genesis as X +import Test.ThreadNet.Infra.Byron.ProtocolInfo as X +import Test.ThreadNet.Infra.Byron.TrackUpdates as X diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/Genesis.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/Genesis.hs index 38a6d58400..38f5055406 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/Genesis.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/Genesis.hs @@ -1,72 +1,77 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} -module Test.ThreadNet.Infra.Byron.Genesis ( - byronPBftParams +module Test.ThreadNet.Infra.Byron.Genesis + ( byronPBftParams , generateGenesisConfig ) where -import qualified Cardano.Chain.Common as Common -import qualified Cardano.Chain.Genesis as Genesis -import qualified Cardano.Chain.Update as Update -import qualified Cardano.Crypto as Crypto -import Cardano.Ledger.BaseTypes (unNonZero) -import Control.Monad.Except (runExceptT) -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Byron.Ledger.Conversions -import Ouroboros.Consensus.Config.SecurityParam -import 
Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Protocol.PBFT -import qualified Test.Cardano.Chain.Genesis.Dummy as Dummy -import Test.Util.Time +import Cardano.Chain.Common qualified as Common +import Cardano.Chain.Genesis qualified as Genesis +import Cardano.Chain.Update qualified as Update +import Cardano.Crypto qualified as Crypto +import Cardano.Ledger.BaseTypes (unNonZero) +import Control.Monad.Except (runExceptT) +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Byron.Ledger.Conversions +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Protocol.PBFT +import Test.Cardano.Chain.Genesis.Dummy qualified as Dummy +import Test.Util.Time {------------------------------------------------------------------------------- Generating the genesis configuration -------------------------------------------------------------------------------} byronPBftParams :: SecurityParam -> NumCoreNodes -> PBftParams -byronPBftParams paramK numCoreNodes = PBftParams - { pbftNumNodes = numCoreNodes - , pbftSecurityParam = paramK - , pbftSignatureThreshold = PBftSignatureThreshold $ (1 / n) + (1 / k) + epsilon +byronPBftParams paramK numCoreNodes = + PBftParams + { pbftNumNodes = numCoreNodes + , pbftSecurityParam = paramK + , pbftSignatureThreshold = PBftSignatureThreshold $ (1 / n) + (1 / k) + epsilon -- crucially: @floor (k * t) >= ceil (k / n)@ - } - where - epsilon = 1/10000 -- avoid problematic floating point round-off + } + where + epsilon = 1 / 10000 -- avoid problematic floating point round-off + n :: Num a => a + n = fromIntegral x where NumCoreNodes x = numCoreNodes - n :: Num a => a - n = fromIntegral x where NumCoreNodes x = numCoreNodes - - k :: Num a => a - k = fromIntegral x where x = unNonZero $ maxRollbacks paramK + k :: Num a => a + k = fromIntegral x where x = unNonZero $ maxRollbacks paramK -- Instead of using 'Dummy.dummyConfig', which hard codes the number 
of rich -- men (= CoreNodes for us) to 4, we generate a dummy config with the given -- number of rich men. -generateGenesisConfig :: SlotLength - -> PBftParams - -> (Genesis.Config, Genesis.GeneratedSecrets) +generateGenesisConfig :: + SlotLength -> + PBftParams -> + (Genesis.Config, Genesis.GeneratedSecrets) generateGenesisConfig slotLen params = - either (error . show) id $ - Crypto.deterministic "this is fake entropy for testing" $ - runExceptT $ - Genesis.generateGenesisConfigWithEntropy dawnOfTime spec - where - PBftParams{pbftNumNodes, pbftSecurityParam} = params - NumCoreNodes numCoreNodes = pbftNumNodes + either (error . show) id $ + Crypto.deterministic "this is fake entropy for testing" $ + runExceptT $ + Genesis.generateGenesisConfigWithEntropy dawnOfTime spec + where + PBftParams{pbftNumNodes, pbftSecurityParam} = params + NumCoreNodes numCoreNodes = pbftNumNodes - spec :: Genesis.GenesisSpec - spec = Dummy.dummyGenesisSpec - { Genesis.gsInitializer = Dummy.dummyGenesisInitializer - { Genesis.giTestBalance = - (Genesis.giTestBalance Dummy.dummyGenesisInitializer) - -- The nodes are the richmen - { Genesis.tboRichmen = fromIntegral numCoreNodes } - } - , Genesis.gsK = Common.BlockCount $ unNonZero $ maxRollbacks pbftSecurityParam - , Genesis.gsProtocolParameters = gsProtocolParameters - { Update.ppSlotDuration = toByronSlotLength slotLen - } - } - where - gsProtocolParameters = Genesis.gsProtocolParameters Dummy.dummyGenesisSpec + spec :: Genesis.GenesisSpec + spec = + Dummy.dummyGenesisSpec + { Genesis.gsInitializer = + Dummy.dummyGenesisInitializer + { Genesis.giTestBalance = + (Genesis.giTestBalance Dummy.dummyGenesisInitializer) + { -- The nodes are the richmen + Genesis.tboRichmen = fromIntegral numCoreNodes + } + } + , Genesis.gsK = Common.BlockCount $ unNonZero $ maxRollbacks pbftSecurityParam + , Genesis.gsProtocolParameters = + gsProtocolParameters + { Update.ppSlotDuration = toByronSlotLength slotLen + } + } + where + gsProtocolParameters = 
Genesis.gsProtocolParameters Dummy.dummyGenesisSpec diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/ProtocolInfo.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/ProtocolInfo.hs index 54f96ca8c0..96ff99912b 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/ProtocolInfo.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/ProtocolInfo.hs @@ -2,97 +2,103 @@ {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.ThreadNet.Infra.Byron.ProtocolInfo ( - mkLeaderCredentials +module Test.ThreadNet.Infra.Byron.ProtocolInfo + ( mkLeaderCredentials , mkProtocolByron , theProposedProtocolVersion , theProposedSoftwareVersion ) where -import qualified Cardano.Chain.Common as Common -import qualified Cardano.Chain.Delegation as Delegation -import qualified Cardano.Chain.Genesis as Genesis -import qualified Cardano.Chain.Update as Update -import qualified Cardano.Crypto as Crypto -import Data.Foldable (find) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block.Forging (BlockForging) -import Ouroboros.Consensus.Byron.Crypto.DSIGN (ByronDSIGN, - SignKeyDSIGN (..)) -import Ouroboros.Consensus.Byron.Ledger (ByronBlock) -import Ouroboros.Consensus.Byron.Node -import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.PBFT +import Cardano.Chain.Common qualified as Common +import Cardano.Chain.Delegation qualified as Delegation +import Cardano.Chain.Genesis qualified as Genesis +import Cardano.Chain.Update qualified as Update +import Cardano.Crypto qualified as Crypto +import Data.Foldable (find) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe 
(fromMaybe) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block.Forging (BlockForging) +import Ouroboros.Consensus.Byron.Crypto.DSIGN + ( ByronDSIGN + , SignKeyDSIGN (..) + ) +import Ouroboros.Consensus.Byron.Ledger (ByronBlock) +import Ouroboros.Consensus.Byron.Node +import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.PBFT mkProtocolByron :: - forall m. (Monad m, HasCallStack) - => PBftParams - -> CoreNodeId - -> Genesis.Config - -> Genesis.GeneratedSecrets - -> (ProtocolInfo ByronBlock, [BlockForging m ByronBlock], SignKeyDSIGN ByronDSIGN) - -- ^ We return the signing key which is needed in some tests, because it - -- cannot easily be extracted from the 'ProtocolInfo'. + forall m. + (Monad m, HasCallStack) => + PBftParams -> + CoreNodeId -> + Genesis.Config -> + Genesis.GeneratedSecrets -> + -- | We return the signing key which is needed in some tests, because it + -- cannot easily be extracted from the 'ProtocolInfo'. 
+ (ProtocolInfo ByronBlock, [BlockForging m ByronBlock], SignKeyDSIGN ByronDSIGN) mkProtocolByron params coreNodeId genesisConfig genesisSecrets = - (protocolInfo, blockForging, signingKey) - where - leaderCredentials :: ByronLeaderCredentials - leaderCredentials = - mkLeaderCredentials - genesisConfig - genesisSecrets - coreNodeId + (protocolInfo, blockForging, signingKey) + where + leaderCredentials :: ByronLeaderCredentials + leaderCredentials = + mkLeaderCredentials + genesisConfig + genesisSecrets + coreNodeId - signingKey :: SignKeyDSIGN ByronDSIGN - signingKey = SignKeyByronDSIGN (blcSignKey leaderCredentials) + signingKey :: SignKeyDSIGN ByronDSIGN + signingKey = SignKeyByronDSIGN (blcSignKey leaderCredentials) - PBftParams { pbftSignatureThreshold } = params + PBftParams{pbftSignatureThreshold} = params - protocolInfo :: ProtocolInfo ByronBlock - protocolInfo = protocolInfoByron protocolParams + protocolInfo :: ProtocolInfo ByronBlock + protocolInfo = protocolInfoByron protocolParams - blockForging :: [BlockForging m ByronBlock] - blockForging = blockForgingByron protocolParams + blockForging :: [BlockForging m ByronBlock] + blockForging = blockForgingByron protocolParams - protocolParams :: ProtocolParamsByron - protocolParams = ProtocolParamsByron { - byronGenesis = genesisConfig - , byronPbftSignatureThreshold = Just $ pbftSignatureThreshold - , byronProtocolVersion = theProposedProtocolVersion - , byronSoftwareVersion = theProposedSoftwareVersion - , byronLeaderCredentials = Just leaderCredentials - } + protocolParams :: ProtocolParamsByron + protocolParams = + ProtocolParamsByron + { byronGenesis = genesisConfig + , byronPbftSignatureThreshold = Just $ pbftSignatureThreshold + , byronProtocolVersion = theProposedProtocolVersion + , byronSoftwareVersion = theProposedSoftwareVersion + , byronLeaderCredentials = Just leaderCredentials + } mkLeaderCredentials :: - HasCallStack - => Genesis.Config - -> Genesis.GeneratedSecrets - -> CoreNodeId - -> 
ByronLeaderCredentials + HasCallStack => + Genesis.Config -> + Genesis.GeneratedSecrets -> + CoreNodeId -> + ByronLeaderCredentials mkLeaderCredentials genesisConfig genesisSecrets (CoreNodeId i) = - either (error . show) id $ - mkByronLeaderCredentials - genesisConfig - dlgKey - dlgCert - "ThreadNet" - where - dlgKey :: Crypto.SigningKey - dlgKey = fromMaybe (error "dlgKey") $ - find (\sec -> Delegation.delegateVK dlgCert == Crypto.toVerification sec) - $ Genesis.gsRichSecrets genesisSecrets + either (error . show) id $ + mkByronLeaderCredentials + genesisConfig + dlgKey + dlgCert + "ThreadNet" + where + dlgKey :: Crypto.SigningKey + dlgKey = + fromMaybe (error "dlgKey") $ + find (\sec -> Delegation.delegateVK dlgCert == Crypto.toVerification sec) $ + Genesis.gsRichSecrets genesisSecrets - dlgCert :: Delegation.Certificate - dlgCert = snd $ Map.toAscList dlgMap !! (fromIntegral i) + dlgCert :: Delegation.Certificate + dlgCert = snd $ Map.toAscList dlgMap !! (fromIntegral i) - dlgMap :: Map Common.KeyHash Delegation.Certificate - dlgMap = Genesis.unGenesisDelegation - $ Genesis.gdHeavyDelegation - $ Genesis.configGenesisData genesisConfig + dlgMap :: Map Common.KeyHash Delegation.Certificate + dlgMap = + Genesis.unGenesisDelegation $ + Genesis.gdHeavyDelegation $ + Genesis.configGenesisData genesisConfig -- | The protocol version proposed as part of the hard-fork smoke test -- @@ -103,7 +109,6 @@ mkLeaderCredentials genesisConfig genesisSecrets (CoreNodeId i) = -- This value occurs in two crucial places: the proposal and also the -- 'Byron.byronProtocolVersion' field of the static node config. See the -- Haddock comment on 'mkProtocolByronAndHardForkTxs'. --- theProposedProtocolVersion :: Update.ProtocolVersion theProposedProtocolVersion = Update.ProtocolVersion 1 0 0 @@ -117,9 +122,9 @@ theProposedProtocolVersion = Update.ProtocolVersion 1 0 0 -- The initial Byron ledger state begins with no recorded software versions. 
-- For the addition of a new software version, the Byron ledger rules require -- that it starts at 0 or 1. --- theProposedSoftwareVersion :: Update.SoftwareVersion -theProposedSoftwareVersion = Update.SoftwareVersion - -- appnames must be ASCII and <= 12 characters - (Update.ApplicationName "Dummy") - 0 +theProposedSoftwareVersion = + Update.SoftwareVersion + -- appnames must be ASCII and <= 12 characters + (Update.ApplicationName "Dummy") + 0 diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/TrackUpdates.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/TrackUpdates.hs index 103b2769fc..f26c49237e 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/TrackUpdates.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/Infra/Byron/TrackUpdates.hs @@ -5,106 +5,111 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.ThreadNet.Infra.Byron.TrackUpdates ( - ProtocolVersionUpdateLabel (..) +module Test.ThreadNet.Infra.Byron.TrackUpdates + ( ProtocolVersionUpdateLabel (..) , SoftwareVersionUpdateLabel (..) 
, mkProtocolByronAndHardForkTxs , mkUpdateLabels ) where -import qualified Cardano.Chain.Block as Block -import qualified Cardano.Chain.Byron.API as ByronAPI -import qualified Cardano.Chain.Genesis as Genesis -import qualified Cardano.Chain.MempoolPayload as MempoolPayload -import Cardano.Chain.Slotting (EpochSlots (..), SlotNumber (..)) -import qualified Cardano.Chain.Update as Update -import Cardano.Chain.Update.Proposal (AProposal) -import qualified Cardano.Chain.Update.Proposal as Proposal -import qualified Cardano.Chain.Update.Validation.Interface as Update -import qualified Cardano.Chain.Update.Validation.Registration as Registration -import Cardano.Chain.Update.Vote (AVote) -import qualified Cardano.Chain.Update.Vote as Vote -import qualified Cardano.Crypto as Crypto -import Cardano.Ledger.BaseTypes (unNonZero) -import Cardano.Ledger.Binary (ByteSpan, DecCBOR (..), EncCBOR (..)) -import Control.Exception (assert) -import Control.Monad (guard) -import Data.ByteString (ByteString) -import Data.Coerce (coerce) -import Data.Functor.Identity -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word (Word64) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import qualified Ouroboros.Consensus.Byron.Crypto.DSIGN as Crypto -import Ouroboros.Consensus.Byron.Ledger (ByronBlock) -import qualified Ouroboros.Consensus.Byron.Ledger as Byron -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Tables (EmptyMK) -import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..), - ProtocolInfo (..)) -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.PBFT -import Test.ThreadNet.Infra.Byron.ProtocolInfo -import Test.ThreadNet.Network (TestNodeInitialization (..)) -import qualified Test.ThreadNet.Ref.PBFT as Ref -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeTopology -import Test.Util.Slots 
(NumSlots (..)) +import Cardano.Chain.Block qualified as Block +import Cardano.Chain.Byron.API qualified as ByronAPI +import Cardano.Chain.Genesis qualified as Genesis +import Cardano.Chain.MempoolPayload qualified as MempoolPayload +import Cardano.Chain.Slotting (EpochSlots (..), SlotNumber (..)) +import Cardano.Chain.Update qualified as Update +import Cardano.Chain.Update.Proposal (AProposal) +import Cardano.Chain.Update.Proposal qualified as Proposal +import Cardano.Chain.Update.Validation.Interface qualified as Update +import Cardano.Chain.Update.Validation.Registration qualified as Registration +import Cardano.Chain.Update.Vote (AVote) +import Cardano.Chain.Update.Vote qualified as Vote +import Cardano.Crypto qualified as Crypto +import Cardano.Ledger.BaseTypes (unNonZero) +import Cardano.Ledger.Binary (ByteSpan, DecCBOR (..), EncCBOR (..)) +import Control.Exception (assert) +import Control.Monad (guard) +import Data.ByteString (ByteString) +import Data.Coerce (coerce) +import Data.Functor.Identity +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Crypto.DSIGN qualified as Crypto +import Ouroboros.Consensus.Byron.Ledger (ByronBlock) +import Ouroboros.Consensus.Byron.Ledger qualified as Byron +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Tables (EmptyMK) +import Ouroboros.Consensus.Node.ProtocolInfo + ( NumCoreNodes (..) + , ProtocolInfo (..) 
+ ) +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.PBFT +import Test.ThreadNet.Infra.Byron.ProtocolInfo +import Test.ThreadNet.Network (TestNodeInitialization (..)) +import Test.ThreadNet.Ref.PBFT qualified as Ref +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeTopology +import Test.Util.Slots (NumSlots (..)) -- | The expectation and observation regarding whether the hard-fork proposal -- successfully updated the protocol version --- data ProtocolVersionUpdateLabel = ProtocolVersionUpdateLabel { pvuObserved :: Bool - -- ^ whether the proposed protocol version is adopted or not adopted by the - -- end of the test + -- ^ whether the proposed protocol version is adopted or not adopted by the + -- end of the test , pvuRequired :: Maybe Bool - -- ^ @Just b@ indicates whether the final chains must have adopted or must - -- have not adopted the proposed protocol version. @Nothing@ means there is - -- no requirement. + -- ^ @Just b@ indicates whether the final chains must have adopted or must + -- have not adopted the proposed protocol version. @Nothing@ means there is + -- no requirement. } - deriving (Show) + deriving Show -- | As 'ProtocolVersionUpdateLabel', but for software version updates -- -- Note that software version updates are adopted sooner than and perhaps -- independently of protocol version updates, even when they are introduced by -- the same proposal transaction. 
--- data SoftwareVersionUpdateLabel = SoftwareVersionUpdateLabel { svuObserved :: Bool , svuRequired :: Maybe Bool } - deriving (Show) + deriving Show -- | Classify the a @QuickCheck@ test's input and output with respect to -- whether the protocol\/software version should have been\/was updated --- mkUpdateLabels :: - PBftParams - -> NumSlots - -> Genesis.Config - -> NodeJoinPlan - -> NodeTopology - -> Ref.Result - -> Byron.LedgerState ByronBlock EmptyMK - -- ^ from 'nodeOutputFinalLedger' - -> (ProtocolVersionUpdateLabel, SoftwareVersionUpdateLabel) -mkUpdateLabels params numSlots genesisConfig nodeJoinPlan topology result + PBftParams -> + NumSlots -> + Genesis.Config -> + NodeJoinPlan -> + NodeTopology -> + Ref.Result -> + -- | from 'nodeOutputFinalLedger' + Byron.LedgerState ByronBlock EmptyMK -> + (ProtocolVersionUpdateLabel, SoftwareVersionUpdateLabel) +mkUpdateLabels + params + numSlots + genesisConfig + nodeJoinPlan + topology + result ldgr = (pvuLabel, svuLabel) - where + where PBftParams{pbftNumNodes, pbftSecurityParam} = params -- the slot immediately after the end of the simulation sentinel :: SlotNumber sentinel = SlotNumber t - where - NumSlots t = numSlots + where + NumSlots t = numSlots -- a block forged in slot @s@ becomes immutable/stable in slot @s + twoK@ -- according to the Byron Chain Density invariant @@ -125,9 +130,10 @@ mkUpdateLabels params numSlots genesisConfig nodeJoinPlan topology result quorum :: Word64 quorum = (\x -> assert (x > 0) x) $ - fromIntegral $ Update.upAdptThd (fromIntegral n) pp0 - where - NumCoreNodes n = pbftNumNodes + fromIntegral $ + Update.upAdptThd (fromIntegral n) pp0 + where + NumCoreNodes n = pbftNumNodes -- how many slots the proposal has to gain sufficient votes before it -- expires @@ -137,124 +143,132 @@ mkUpdateLabels params numSlots genesisConfig nodeJoinPlan topology result -- the first slot of the epoch after the epoch containing the given slot ebbSlotAfter :: SlotNo -> SlotNo ebbSlotAfter (SlotNo s) = - 
SlotNo (denom * div s denom) + epochSlots - where - SlotNo denom = epochSlots + SlotNo (denom * div s denom) + epochSlots + where + SlotNo denom = epochSlots finalState :: [Ref.Outcome] -> ProposalState finalState outcomes = go Proposing (SlotNo 0) outcomes -- compute the @Just@ case of 'pvuRequired' from the simulated outcomes - go - :: ProposalState - -- ^ the state before the next outcome - -> SlotNo - -- ^ the slot described by the next outcome - -> [Ref.Outcome] - -> ProposalState + go :: + ProposalState -> + -- \^ the state before the next outcome + SlotNo -> + -- \^ the slot described by the next outcome + [Ref.Outcome] -> + ProposalState go !st !s = \case - [] -> assert (coerce sentinel == s) st - o:os -> case o of - Ref.Absent -> continueWith st - Ref.Unable -> continueWith st - Ref.Wasted -> continueWith st - Ref.Nominal -> case st of - -- the proposal is in this slot - Proposing -> - let -- if this leader just joined, it will forge before the - -- proposal reaches its mempool, unless it's node 0 - lostRace = s == leaderJoinSlot && - leader /= CoreNodeId 0 - in - if lostRace then continueWith st else + [] -> assert (coerce sentinel == s) st + o : os -> case o of + Ref.Absent -> continueWith st + Ref.Unable -> continueWith st + Ref.Wasted -> continueWith st + Ref.Nominal -> case st of + -- the proposal is in this slot + Proposing -> + let + -- if this leader just joined, it will forge before the + -- proposal reaches its mempool, unless it's node 0 + lostRace = + s == leaderJoinSlot + && leader /= CoreNodeId 0 + in + if lostRace + then continueWith st + else -- votes can be valid immediately and at least one should -- also be in this block - go (Voting s Set.empty) s (o:os) - Voting proposalSlot votes -> - let votesInTheNewBlock = - -- an exception to the rule: the proposal and c0's - -- own vote always has time to reach its mempool - (if leader == c0 then Set.insert c0 else id) $ - -- if the leader is joining in this slot, then no - -- votes will reach 
its mempool before it forges: - -- other nodes' votes will be delayed via - -- communication and its own vote is not valid - -- because it will forge before its ledger/mempool - -- contains the proposal - if s == leaderJoinSlot then Set.empty else - -- only votes from nodes that joined prior to this - -- slot can reach the leader's mempool before it - -- forges - Map.keysSet $ Map.filter (< s) m - where - NodeJoinPlan m = nodeJoinPlan - c0 = CoreNodeId 0 + go (Voting s Set.empty) s (o : os) + Voting proposalSlot votes -> + let votesInTheNewBlock = + -- an exception to the rule: the proposal and c0's + -- own vote always has time to reach its mempool + (if leader == c0 then Set.insert c0 else id) $ + -- if the leader is joining in this slot, then no + -- votes will reach its mempool before it forges: + -- other nodes' votes will be delayed via + -- communication and its own vote is not valid + -- because it will forge before its ledger/mempool + -- contains the proposal + if s == leaderJoinSlot + then Set.empty + else + -- only votes from nodes that joined prior to this + -- slot can reach the leader's mempool before it + -- forges + Map.keysSet $ Map.filter (< s) m + where + NodeJoinPlan m = nodeJoinPlan + c0 = CoreNodeId 0 - votes' = Set.union votesInTheNewBlock votes - confirmed = fromIntegral (Set.size votes') >= quorum - expired = proposalSlot + ttl < s - in - if -- TODO cardano-ledger-byron checks for quorum before it checks - -- for expiry, so we do mimick that here. But is that - -- correct? 
- | confirmed -> continueWith $ Endorsing s Set.empty - -- c0 will re-propose the same proposal again at the next - -- opportunity - | expired -> continueWith $ Proposing - | otherwise -> continueWith $ Voting proposalSlot votes' - Endorsing finalVoteSlot ends -> - continueWith $ - if s < finalVoteSlot + twoK - then st -- ignore endorsements until final vote is stable - else - let ends' = Set.insert (Ref.mkLeaderOf params s) ends - in - if fromIntegral (Set.size ends) < quorum - then Endorsing finalVoteSlot ends' - else Adopting s -- enough endorsements - Adopting{} -> continueWith st - where - leader = Ref.mkLeaderOf params s - leaderJoinSlot = coreNodeIdJoinSlot nodeJoinPlan leader + votes' = Set.union votesInTheNewBlock votes + confirmed = fromIntegral (Set.size votes') >= quorum + expired = proposalSlot + ttl < s + in if + -- TODO cardano-ledger-byron checks for quorum before it checks + -- for expiry, so we do mimick that here. But is that + -- correct? + | confirmed -> continueWith $ Endorsing s Set.empty + -- c0 will re-propose the same proposal again at the next + -- opportunity + | expired -> continueWith $ Proposing + | otherwise -> continueWith $ Voting proposalSlot votes' + Endorsing finalVoteSlot ends -> + continueWith $ + if s < finalVoteSlot + twoK + then st -- ignore endorsements until final vote is stable + else + let ends' = Set.insert (Ref.mkLeaderOf params s) ends + in if fromIntegral (Set.size ends) < quorum + then Endorsing finalVoteSlot ends' + else Adopting s -- enough endorsements + Adopting{} -> continueWith st + where + leader = Ref.mkLeaderOf params s + leaderJoinSlot = coreNodeIdJoinSlot nodeJoinPlan leader - continueWith st' = go st' (succ s) os + continueWith st' = go st' (succ s) os - pvuLabel = ProtocolVersionUpdateLabel + pvuLabel = + ProtocolVersionUpdateLabel { pvuObserved = (== theProposedProtocolVersion) $ - Update.adoptedProtocolVersion $ - Block.cvsUpdateState $ - -- tick the chain over into the slot after the final simulated 
slot - ByronAPI.applyChainTick genesisConfig sentinel $ - Byron.byronLedgerState ldgr + Update.adoptedProtocolVersion $ + Block.cvsUpdateState $ + -- tick the chain over into the slot after the final simulated slot + ByronAPI.applyChainTick genesisConfig sentinel $ + Byron.byronLedgerState ldgr , pvuRequired = case result of -- 'Ref.Forked' means there's only 1-block chains, and that's not enough -- for a proposal to succeed - Ref.Forked{} -> Just False + Ref.Forked{} -> Just False -- we wouldn't necessarily be able to anticipate when the last -- endorsement happens, so give up Ref.Nondeterministic{} -> Nothing - Ref.Outcomes outcomes -> do - checkTopo params topology - Just $ case finalState outcomes of - Proposing{} -> False - Voting{} -> False - Endorsing{} -> False - Adopting finalEndorsementSlot -> - ebbSlotAfter (finalEndorsementSlot + twoK) <= s - where - s = coerce sentinel + Ref.Outcomes outcomes -> do + checkTopo params topology + Just $ case finalState outcomes of + Proposing{} -> False + Voting{} -> False + Endorsing{} -> False + Adopting finalEndorsementSlot -> + ebbSlotAfter (finalEndorsementSlot + twoK) <= s + where + s = coerce sentinel } - svuLabel = SoftwareVersionUpdateLabel + svuLabel = + SoftwareVersionUpdateLabel { svuObserved = fromMaybe False $ do let nm = Update.svAppName theProposedSoftwareVersion - (Registration.ApplicationVersion vn _slot _metadata) <- Map.lookup nm $ - Update.appVersions $ - Block.cvsUpdateState $ - -- unlike for protocol version updates, there is no need to tick - -- since the passage of time isn't a prerequisite - Byron.byronLedgerState ldgr + (Registration.ApplicationVersion vn _slot _metadata) <- + Map.lookup nm $ + Update.appVersions $ + Block.cvsUpdateState $ + -- unlike for protocol version updates, there is no need to tick + -- since the passage of time isn't a prerequisite + Byron.byronLedgerState ldgr pure $ vn == Update.svNumber theProposedSoftwareVersion , svuRequired = case result of -- 'Ref.Forked' means 
all blocks except perhaps the first were @@ -263,17 +277,17 @@ mkUpdateLabels params numSlots genesisConfig nodeJoinPlan topology result -- block forged by node c0 will have proposed and might have -- confirmed it (depending on quorum), but the other nodes will not -- have. This is very much a corner case, so we ignore it. - Ref.Forked{} -> Nothing + Ref.Forked{} -> Nothing -- We wouldn't necessarily be able to anticipate if the proposal is -- confirmed or even in all of the final chains, so we ignore it. Ref.Nondeterministic{} -> Nothing - Ref.Outcomes outcomes -> do - checkTopo params topology - Just $ case finalState outcomes of - Proposing{} -> False - Voting{} -> False - Endorsing{} -> True - Adopting{} -> True + Ref.Outcomes outcomes -> do + checkTopo params topology + Just $ case finalState outcomes of + Proposing{} -> False + Voting{} -> False + Endorsing{} -> True + Adopting{} -> True } -- if the topology is not mesh, then some assumptions in 'finalState' about @@ -289,30 +303,29 @@ mkUpdateLabels params numSlots genesisConfig nodeJoinPlan topology result -- leads). checkTopo :: PBftParams -> NodeTopology -> Maybe () checkTopo params topology = do - let PBftParams{pbftNumNodes} = params - guard $ topology == meshNodeTopology pbftNumNodes + let PBftParams{pbftNumNodes} = params + guard $ topology == meshNodeTopology pbftNumNodes -- | The state of a proposal within a linear timeline --- -data ProposalState = - Proposing - -- ^ submitting the proposal (possibly not for the first time, if it has +data ProposalState + = -- | submitting the proposal (possibly not for the first time, if it has -- previously expired) - | Voting !SlotNo !(Set CoreNodeId) - -- ^ accumulating sufficient votes + Proposing + | -- | accumulating sufficient votes -- -- The slot is when the proposal was submitted; it might expire during -- voting. The set is who has voted. 
- | Endorsing !SlotNo !(Set CoreNodeId) - -- ^ accumulating sufficient endorsements + Voting !SlotNo !(Set CoreNodeId) + | -- | accumulating sufficient endorsements -- -- The slot is when the first sufficient vote was submitted. The set is the -- endorsements seen so far. - | Adopting !SlotNo - -- ^ waiting for epoch transition + Endorsing !SlotNo !(Set CoreNodeId) + | -- | waiting for epoch transition -- -- The slot is when the first sufficient endorsement was submitted. - deriving (Show) + Adopting !SlotNo + deriving Show {------------------------------------------------------------------------------- ProtocolVersion update proposals @@ -362,24 +375,28 @@ data ProposalState = -- state of the final chains for the new protocol version when we detect no -- mitigating circumstances, such as the test not even being scheduled to -- reach the second epoch. --- mkProtocolByronAndHardForkTxs :: - forall m. (Monad m, HasCallStack) - => PBftParams - -> CoreNodeId - -> Genesis.Config - -> Genesis.GeneratedSecrets - -> Update.ProtocolVersion - -- ^ the protocol version that triggers the hard fork - -> TestNodeInitialization m ByronBlock + forall m. 
+ (Monad m, HasCallStack) => + PBftParams -> + CoreNodeId -> + Genesis.Config -> + Genesis.GeneratedSecrets -> + -- | the protocol version that triggers the hard fork + Update.ProtocolVersion -> + TestNodeInitialization m ByronBlock mkProtocolByronAndHardForkTxs - params cid genesisConfig genesisSecrets propPV = + params + cid + genesisConfig + genesisSecrets + propPV = TestNodeInitialization - { tniCrucialTxs = proposals ++ votes + { tniCrucialTxs = proposals ++ votes , tniProtocolInfo = pInfo , tniBlockForging = pure blockForging } - where + where ProtocolInfo{pInfoConfig} = pInfo bcfg = configBlock pInfoConfig @@ -387,85 +404,88 @@ mkProtocolByronAndHardForkTxs blockForging :: [BlockForging m ByronBlock] opKey :: Crypto.SigningKey (pInfo, blockForging, Crypto.SignKeyByronDSIGN opKey) = - mkProtocolByron params cid genesisConfig genesisSecrets + mkProtocolByron params cid genesisConfig genesisSecrets proposals :: [Byron.GenTx ByronBlock] proposals = - if cid /= CoreNodeId 0 then [] else - (:[]) $ - Byron.fromMempoolPayload $ - MempoolPayload.MempoolUpdateProposal proposal + if cid /= CoreNodeId 0 + then [] + else + (: []) $ + Byron.fromMempoolPayload $ + MempoolPayload.MempoolUpdateProposal proposal votes :: [Byron.GenTx ByronBlock] votes = - (:[]) $ + (: []) $ Byron.fromMempoolPayload $ - MempoolPayload.MempoolUpdateVote vote + MempoolPayload.MempoolUpdateVote vote vote :: AVote ByteString vote = - loopbackAnnotations $ + loopbackAnnotations $ -- signed by delegate SK Vote.signVote (Byron.byronProtocolMagicId bcfg) (Update.recoverUpId proposal) - True -- the serialization hardwires this value anyway + True -- the serialization hardwires this value anyway (Crypto.noPassSafeSigner opKey) proposal :: AProposal ByteString proposal = - loopbackAnnotations $ + loopbackAnnotations $ mkHardForkProposal params genesisConfig genesisSecrets propPV -- | A protocol parameter update proposal that doesn't actually change any -- parameter value but does propose 
'theProposedProtocolVersion' -- -- Without loss of generality, the proposal is signed by @'CoreNodeId' 0@. --- mkHardForkProposal :: - HasCallStack - => PBftParams - -> Genesis.Config - -> Genesis.GeneratedSecrets - -> Update.ProtocolVersion - -> AProposal () + HasCallStack => + PBftParams -> + Genesis.Config -> + Genesis.GeneratedSecrets -> + Update.ProtocolVersion -> + AProposal () mkHardForkProposal params genesisConfig genesisSecrets propPV = - -- signed by delegate SK - Proposal.signProposal - (Byron.byronProtocolMagicId bcfg) - propBody - (Crypto.noPassSafeSigner opKey) - where - pInfo :: ProtocolInfo ByronBlock - _blockForging :: [BlockForging Identity ByronBlock] - opKey :: Crypto.SigningKey - (pInfo, _blockForging, Crypto.SignKeyByronDSIGN opKey) = - mkProtocolByron params (CoreNodeId 0) genesisConfig genesisSecrets + -- signed by delegate SK + Proposal.signProposal + (Byron.byronProtocolMagicId bcfg) + propBody + (Crypto.noPassSafeSigner opKey) + where + pInfo :: ProtocolInfo ByronBlock + _blockForging :: [BlockForging Identity ByronBlock] + opKey :: Crypto.SigningKey + (pInfo, _blockForging, Crypto.SignKeyByronDSIGN opKey) = + mkProtocolByron params (CoreNodeId 0) genesisConfig genesisSecrets - ProtocolInfo{pInfoConfig} = pInfo - bcfg = configBlock pInfoConfig + ProtocolInfo{pInfoConfig} = pInfo + bcfg = configBlock pInfoConfig - propBody :: Proposal.ProposalBody - propBody = Proposal.ProposalBody - { Proposal.protocolVersion = propPV - , Proposal.protocolParametersUpdate = Update.ProtocolParametersUpdate - { Update.ppuScriptVersion = Nothing - , Update.ppuSlotDuration = Nothing - , Update.ppuMaxBlockSize = Nothing - , Update.ppuMaxHeaderSize = Nothing - , Update.ppuMaxTxSize = Nothing - , Update.ppuMaxProposalSize = Nothing - , Update.ppuMpcThd = Nothing - , Update.ppuHeavyDelThd = Nothing - , Update.ppuUpdateVoteThd = Nothing - , Update.ppuUpdateProposalThd = Nothing - , Update.ppuUpdateProposalTTL = Nothing - , Update.ppuSoftforkRule = Nothing - , 
Update.ppuTxFeePolicy = Nothing - , Update.ppuUnlockStakeEpoch = Nothing - } - , Proposal.softwareVersion = theProposedSoftwareVersion - , Proposal.metadata = Map.empty + propBody :: Proposal.ProposalBody + propBody = + Proposal.ProposalBody + { Proposal.protocolVersion = propPV + , Proposal.protocolParametersUpdate = + Update.ProtocolParametersUpdate + { Update.ppuScriptVersion = Nothing + , Update.ppuSlotDuration = Nothing + , Update.ppuMaxBlockSize = Nothing + , Update.ppuMaxHeaderSize = Nothing + , Update.ppuMaxTxSize = Nothing + , Update.ppuMaxProposalSize = Nothing + , Update.ppuMpcThd = Nothing + , Update.ppuHeavyDelThd = Nothing + , Update.ppuUpdateVoteThd = Nothing + , Update.ppuUpdateProposalThd = Nothing + , Update.ppuUpdateProposalTTL = Nothing + , Update.ppuSoftforkRule = Nothing + , Update.ppuTxFeePolicy = Nothing + , Update.ppuUnlockStakeEpoch = Nothing + } + , Proposal.softwareVersion = theProposedSoftwareVersion + , Proposal.metadata = Map.empty } -- | Add the bytestring annotations that would be present if we were to @@ -474,13 +494,12 @@ mkHardForkProposal params genesisConfig genesisSecrets propPV = -- The mempool payloads require the serialized bytes as annotations. It's -- tricky to get right, and this function lets use reuse the existing CBOR -- instances. 
--- loopbackAnnotations :: - ( DecCBOR (f ByteSpan) - , EncCBOR (f ()) - , Functor f - ) - => f () - -> f ByteString + ( DecCBOR (f ByteSpan) + , EncCBOR (f ()) + , Functor f + ) => + f () -> + f ByteString loopbackAnnotations = - ByronAPI.reAnnotateUsing encCBOR decCBOR + ByronAPI.reAnnotateUsing encCBOR decCBOR diff --git a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/TxGen/Byron.hs b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/TxGen/Byron.hs index aeb1901b08..a49fb97c4a 100644 --- a/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/TxGen/Byron.hs +++ b/ouroboros-consensus-cardano/src/unstable-byron-testlib/Test/ThreadNet/TxGen/Byron.hs @@ -2,8 +2,8 @@ module Test.ThreadNet.TxGen.Byron () where -import Ouroboros.Consensus.Byron.Ledger -import Test.ThreadNet.TxGen +import Ouroboros.Consensus.Byron.Ledger +import Test.ThreadNet.TxGen instance TxGen ByronBlock where -- We don't generate transactions for 'ByronBlock', but we do for diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger.hs index 036cc1e9a0..249f91b8b1 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger.hs @@ -6,12 +6,15 @@ module Ouroboros.Consensus.ByronSpec.Ledger (module X) where -- From Genesis and GenTx we only import the types, as these module are intended -- to be imported qualified. 
-import Ouroboros.Consensus.ByronSpec.Ledger.Block as X
-import Ouroboros.Consensus.ByronSpec.Ledger.Forge as X
-import Ouroboros.Consensus.ByronSpec.Ledger.Genesis as X
-    (ByronSpecGenesis (..))
-import Ouroboros.Consensus.ByronSpec.Ledger.GenTx as X
-    (ByronSpecGenTx (..), ByronSpecGenTxErr (..))
-import Ouroboros.Consensus.ByronSpec.Ledger.Ledger as X
-import Ouroboros.Consensus.ByronSpec.Ledger.Mempool as X
-import Ouroboros.Consensus.ByronSpec.Ledger.Orphans as X ()
+import Ouroboros.Consensus.ByronSpec.Ledger.Block as X
+import Ouroboros.Consensus.ByronSpec.Ledger.Forge as X
+import Ouroboros.Consensus.ByronSpec.Ledger.GenTx as X
+  ( ByronSpecGenTx (..)
+  , ByronSpecGenTxErr (..)
+  )
+import Ouroboros.Consensus.ByronSpec.Ledger.Genesis as X
+  ( ByronSpecGenesis (..)
+  )
+import Ouroboros.Consensus.ByronSpec.Ledger.Ledger as X
+import Ouroboros.Consensus.ByronSpec.Ledger.Mempool as X
+import Ouroboros.Consensus.ByronSpec.Ledger.Orphans as X ()
diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Accessors.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Accessors.hs
index a7f5ade567..09535d4269 100644
--- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Accessors.hs
+++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Accessors.hs
@@ -3,45 +3,51 @@
 {-# LANGUAGE TupleSections #-}
 
 -- | Working with the Byron spec chain state
-module Ouroboros.Consensus.ByronSpec.Ledger.Accessors (
-    -- * ChainState getters
+module Ouroboros.Consensus.ByronSpec.Ledger.Accessors
+  ( -- * ChainState getters
     GetChainState
   , getChainStateDIState
   , getChainStateHash
   , getChainStateSlot
   , getChainStateUPIState
   , getChainStateUtxoState
+
     -- * ChainState modifiers
   , ModChainState
   , modChainStateDIState
   , modChainStateSlot
   , modChainStateUPIState
   , modChainStateUtxoState
+
     -- * Auxiliary
   , getDIStateDSState
   , modDIStateDSState
   ) where
 
-import qualified Byron.Spec.Chain.STS.Rule.Chain as Spec
-import qualified Byron.Spec.Ledger.Core as Spec
-import qualified Byron.Spec.Ledger.Delegation as Spec
-import qualified Byron.Spec.Ledger.STS.UTXO as Spec
-import qualified Byron.Spec.Ledger.Update as Spec
-import qualified Control.State.Transition as Spec
+import Byron.Spec.Chain.STS.Rule.Chain qualified as Spec
+import Byron.Spec.Ledger.Core qualified as Spec
+import Byron.Spec.Ledger.Delegation qualified as Spec
+import Byron.Spec.Ledger.STS.UTXO qualified as Spec
+import Byron.Spec.Ledger.Update qualified as Spec
+import Control.State.Transition qualified as Spec
 
 {-------------------------------------------------------------------------------
   Accessors
-------------------------------------------------------------------------------}
 
-type GetChainState a = Spec.State Spec.CHAIN -> a
-type ModChainState a = forall m. Applicative m => (a -> m a)
-    -> Spec.State Spec.CHAIN -> m (Spec.State Spec.CHAIN)
+type GetChainState a = Spec.State Spec.CHAIN -> a
+type ModChainState a =
+  forall m.
+  Applicative m =>
+  (a -> m a) ->
+  Spec.State Spec.CHAIN ->
+  m (Spec.State Spec.CHAIN)
 
 getChainStateSlot :: GetChainState Spec.Slot
 getChainStateSlot (a, _, _, _, _, _) = a
 
 modChainStateSlot :: ModChainState Spec.Slot
-modChainStateSlot fn (a, b, c, d, e, f) = (, b, c, d, e, f) <$> fn a
+modChainStateSlot fn (a, b, c, d, e, f) = (,b,c,d,e,f) <$> fn a
 
 getChainStateHash :: GetChainState Spec.Hash
 getChainStateHash (_, _, c, _, _, _) = c
@@ -50,19 +56,19 @@ getChainStateUtxoState :: GetChainState Spec.UTxOState
 getChainStateUtxoState (_, _, _, d, _, _) = d
 
 modChainStateUtxoState :: ModChainState Spec.UTxOState
-modChainStateUtxoState fn (a, b, c, d, e, f) = (a, b, c, , e, f) <$> fn d
+modChainStateUtxoState fn (a, b, c, d, e, f) = (a,b,c,,e,f) <$> fn d
 
 getChainStateDIState :: GetChainState Spec.DIState
 getChainStateDIState (_, _, _, _, e, _) = e
 
 modChainStateDIState :: ModChainState Spec.DIState
-modChainStateDIState fn (a, b, c, d, e, f) = (a, b, c, d, , f) <$> fn e
+modChainStateDIState fn (a, b, c, d, e, f) = (a,b,c,d,,f) <$> fn e
 
 getChainStateUPIState :: GetChainState Spec.UPIState
 getChainStateUPIState (_, _, _, _, _, f) = f
 
 modChainStateUPIState :: ModChainState Spec.UPIState
-modChainStateUPIState fn (a, b, c, d, e, f) = (a, b, c, d, e, ) <$> fn f
+modChainStateUPIState fn (a, b, c, d, e, f) = (a,b,c,d,e,) <$> fn f
 
 {-------------------------------------------------------------------------------
   'Spec.DSState' is a sub-state of 'Spec.DIState'
@@ -73,23 +79,27 @@ modChainStateUPIState fn (a, b, c, d, e, f) = (a, b, c, d, e, ) <$> fn f
 
 -- | Extract 'Spec.DSState' from 'Spec.DIState'
 getDIStateDSState :: Spec.DIState -> Spec.DSState
-getDIStateDSState Spec.DIState{..} = Spec.DSState {
-    _dSStateScheduledDelegations = _dIStateScheduledDelegations
-  , _dSStateKeyEpochDelegations = _dIStateKeyEpochDelegations
+getDIStateDSState Spec.DIState{..} =
+  Spec.DSState
+    { _dSStateScheduledDelegations = _dIStateScheduledDelegations
+    , _dSStateKeyEpochDelegations = _dIStateKeyEpochDelegations
     }
 
 -- | Update 'Spec.DIState' from 'Spec.DSState'
-modDIStateDSState :: Applicative m
-    => (Spec.DSState -> m Spec.DSState)
-    -> Spec.DIState -> m Spec.DIState
+modDIStateDSState ::
+  Applicative m =>
+  (Spec.DSState -> m Spec.DSState) ->
+  Spec.DIState ->
+  m Spec.DIState
 modDIStateDSState f diState@Spec.DIState{..} =
-    update <$> f (getDIStateDSState diState)
-  where
-    update :: Spec.DSState -> Spec.DIState
-    update Spec.DSState{..} = Spec.DIState{
-        _dIStateScheduledDelegations = _dSStateScheduledDelegations
-      , _dIStateKeyEpochDelegations = _dSStateKeyEpochDelegations
-        -- The rest stays the same
-      , _dIStateDelegationMap = _dIStateDelegationMap
-      , _dIStateLastDelegation = _dIStateLastDelegation
-      }
+  update <$> f (getDIStateDSState diState)
+ where
+  update :: Spec.DSState -> Spec.DIState
+  update Spec.DSState{..} =
+    Spec.DIState
+      { _dIStateScheduledDelegations = _dSStateScheduledDelegations
+      , _dIStateKeyEpochDelegations = _dSStateKeyEpochDelegations
+      , -- The rest stays the same
        _dIStateDelegationMap = _dIStateDelegationMap
      , _dIStateLastDelegation = _dIStateLastDelegation
      }
diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Block.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Block.hs
index 9d2ee6fdc3..07836fd525 100644
--- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Block.hs
+++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Block.hs
@@ -4,24 +4,25 @@
 {-# LANGUAGE RecordWildCards #-}
 {-# LANGUAGE TypeFamilies #-}
 
-module Ouroboros.Consensus.ByronSpec.Ledger.Block (
-    BlockConfig (..)
+module Ouroboros.Consensus.ByronSpec.Ledger.Block
+  ( BlockConfig (..)
   , ByronSpecBlock (..)
   , CodecConfig (..)
   , Header (..)
   , StorageConfig (..)
+
     -- * type alias
   , ByronSpecHeader
   ) where
 
-import qualified Byron.Spec.Chain.STS.Block as Spec
-import qualified Byron.Spec.Ledger.Core as Spec
-import Codec.Serialise
-import GHC.Generics (Generic)
-import NoThunks.Class (NoThunks)
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.ByronSpec.Ledger.Conversions
-import Ouroboros.Consensus.ByronSpec.Ledger.Orphans ()
+import Byron.Spec.Chain.STS.Block qualified as Spec
+import Byron.Spec.Ledger.Core qualified as Spec
+import Codec.Serialise
+import GHC.Generics (Generic)
+import NoThunks.Class (NoThunks)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.ByronSpec.Ledger.Conversions
+import Ouroboros.Consensus.ByronSpec.Ledger.Orphans ()
 
 {-------------------------------------------------------------------------------
   Block
@@ -33,28 +34,29 @@ import Ouroboros.Consensus.ByronSpec.Ledger.Orphans ()
 -- also add the 'BlockNo', as this is entirely absent from the spec but we need
 -- it for the 'HasHeader' abstraction, which is ubiquitous in
 -- @ouroboros-consensus@ and @-network@.
-data ByronSpecBlock = ByronSpecBlock { - byronSpecBlock :: Spec.Block - , byronSpecBlockNo :: BlockNo - , byronSpecBlockHash :: Spec.Hash - } +data ByronSpecBlock = ByronSpecBlock + { byronSpecBlock :: Spec.Block + , byronSpecBlockNo :: BlockNo + , byronSpecBlockHash :: Spec.Hash + } deriving (Show, Eq, Generic, Serialise) {------------------------------------------------------------------------------- GetHeader -------------------------------------------------------------------------------} -data instance Header ByronSpecBlock = ByronSpecHeader { - byronSpecHeader :: Spec.BlockHeader - , byronSpecHeaderNo :: BlockNo - , byronSpecHeaderHash :: Spec.Hash - } +data instance Header ByronSpecBlock = ByronSpecHeader + { byronSpecHeader :: Spec.BlockHeader + , byronSpecHeaderNo :: BlockNo + , byronSpecHeaderHash :: Spec.Hash + } deriving (Show, Eq, Generic, Serialise) instance GetHeader ByronSpecBlock where - getHeader ByronSpecBlock{..} = ByronSpecHeader { - byronSpecHeader = Spec._bHeader byronSpecBlock - , byronSpecHeaderNo = byronSpecBlockNo + getHeader ByronSpecBlock{..} = + ByronSpecHeader + { byronSpecHeader = Spec._bHeader byronSpecBlock + , byronSpecHeaderNo = byronSpecBlockNo , byronSpecHeaderHash = byronSpecBlockHash } @@ -78,10 +80,11 @@ instance HasHeader ByronSpecBlock where getHeaderFields = getBlockHeaderFields instance HasHeader ByronSpecHeader where - getHeaderFields hdr = HeaderFields { - headerFieldHash = byronSpecHeaderHash hdr + getHeaderFields hdr = + HeaderFields + { headerFieldHash = byronSpecHeaderHash hdr , headerFieldBlockNo = byronSpecHeaderNo hdr - , headerFieldSlot = fromByronSpecSlotNo . Spec._bhSlot $ byronSpecHeader hdr + , headerFieldSlot = fromByronSpecSlotNo . 
Spec._bhSlot $ byronSpecHeader hdr } instance GetPrevHash ByronSpecBlock where diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Conversions.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Conversions.hs index 9fdfd8758b..0a1697d90d 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Conversions.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Conversions.hs @@ -1,27 +1,30 @@ -- | Conversions from ouroboros-consensus types to the Byron spec types -- -- Intended for unqualified import. -module Ouroboros.Consensus.ByronSpec.Ledger.Conversions ( - -- * Spec to consensus +module Ouroboros.Consensus.ByronSpec.Ledger.Conversions + ( -- * Spec to consensus fromByronSpecPrevHash , fromByronSpecSlotNo + -- * Consensus to spec , toByronSpecSlotNo ) where -import qualified Byron.Spec.Chain.STS.Block as Spec -import qualified Byron.Spec.Ledger.Core as Spec -import Ouroboros.Consensus.Block +import Byron.Spec.Chain.STS.Block qualified as Spec +import Byron.Spec.Ledger.Core qualified as Spec +import Ouroboros.Consensus.Block {------------------------------------------------------------------------------- Spec to consensus -------------------------------------------------------------------------------} -fromByronSpecPrevHash :: (Spec.Hash -> HeaderHash b) - -> Spec.Hash -> ChainHash b +fromByronSpecPrevHash :: + (Spec.Hash -> HeaderHash b) -> + Spec.Hash -> + ChainHash b fromByronSpecPrevHash f h | h == Spec.genesisHash = GenesisHash - | otherwise = BlockHash (f h) + | otherwise = BlockHash (f h) fromByronSpecSlotNo :: Spec.Slot -> SlotNo fromByronSpecSlotNo (Spec.Slot slot) = SlotNo slot diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Forge.hs 
b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Forge.hs index 90194a8d80..3b8b122ac1 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Forge.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Forge.hs @@ -1,50 +1,53 @@ module Ouroboros.Consensus.ByronSpec.Ledger.Forge (forgeByronSpecBlock) where -import qualified Byron.Spec.Chain.STS.Block as Spec -import qualified Byron.Spec.Ledger.Core as Spec -import qualified Byron.Spec.Ledger.Update as Spec -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.ByronSpec.Ledger.Accessors -import Ouroboros.Consensus.ByronSpec.Ledger.Block -import Ouroboros.Consensus.ByronSpec.Ledger.Conversions -import qualified Ouroboros.Consensus.ByronSpec.Ledger.GenTx as GenTx -import Ouroboros.Consensus.ByronSpec.Ledger.Ledger -import Ouroboros.Consensus.ByronSpec.Ledger.Mempool -import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () +import Byron.Spec.Chain.STS.Block qualified as Spec +import Byron.Spec.Ledger.Core qualified as Spec +import Byron.Spec.Ledger.Update qualified as Spec +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.ByronSpec.Ledger.Accessors +import Ouroboros.Consensus.ByronSpec.Ledger.Block +import Ouroboros.Consensus.ByronSpec.Ledger.Conversions +import Ouroboros.Consensus.ByronSpec.Ledger.GenTx qualified as GenTx +import Ouroboros.Consensus.ByronSpec.Ledger.Ledger +import Ouroboros.Consensus.ByronSpec.Ledger.Mempool +import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () {------------------------------------------------------------------------------- Forging -------------------------------------------------------------------------------} -forgeByronSpecBlock :: BlockNo - -> SlotNo - -> Ticked (LedgerState ByronSpecBlock) mk - -> [Validated (GenTx ByronSpecBlock)] - -> Spec.VKey - -> ByronSpecBlock +forgeByronSpecBlock :: + BlockNo -> + SlotNo -> + Ticked 
(LedgerState ByronSpecBlock) mk -> + [Validated (GenTx ByronSpecBlock)] -> + Spec.VKey -> + ByronSpecBlock forgeByronSpecBlock curBlockNo curSlotNo (TickedByronSpecLedgerState _ st) txs vkey = - ByronSpecBlock { - byronSpecBlock = block - , byronSpecBlockNo = curBlockNo - , byronSpecBlockHash = Spec.bhHash $ Spec._bHeader block - } - where - (ds, ts, us, vs) = - GenTx.partition - (map (unByronSpecGenTx . forgetValidatedByronSpecGenTx) txs) + ByronSpecBlock + { byronSpecBlock = block + , byronSpecBlockNo = curBlockNo + , byronSpecBlockHash = Spec.bhHash $ Spec._bHeader block + } + where + (ds, ts, us, vs) = + GenTx.partition + (map (unByronSpecGenTx . forgetValidatedByronSpecGenTx) txs) - -- TODO: Don't take protocol version from ledger state - -- - block :: Spec.Block - block = Spec.mkBlock - (getChainStateHash st) - (toByronSpecSlotNo curSlotNo) - vkey - (Spec.protocolVersion $ getChainStateUPIState st) - ds - (case us of - [] -> Nothing - [u] -> Just u - _ -> error "forgeByronSpecBlock: multiple update proposals") - vs - ts + -- TODO: Don't take protocol version from ledger state + -- + block :: Spec.Block + block = + Spec.mkBlock + (getChainStateHash st) + (toByronSpecSlotNo curSlotNo) + vkey + (Spec.protocolVersion $ getChainStateUPIState st) + ds + ( case us of + [] -> Nothing + [u] -> Just u + _ -> error "forgeByronSpecBlock: multiple update proposals" + ) + vs + ts diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/GenTx.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/GenTx.hs index efafe2bb09..02672ad872 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/GenTx.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/GenTx.hs @@ -7,26 +7,27 @@ -- -- > import Ouroboros.Consensus.ByronSpec.Ledger.GenTx (ByronSpecGenTx(..), ByronSpecGenTxErr(..)) -- > import qualified 
Ouroboros.Consensus.ByronSpec.Ledger.GenTx as GenTx -module Ouroboros.Consensus.ByronSpec.Ledger.GenTx ( - ByronSpecGenTx (..) +module Ouroboros.Consensus.ByronSpec.Ledger.GenTx + ( ByronSpecGenTx (..) , ByronSpecGenTxErr (..) , apply , partition ) where -import qualified Byron.Spec.Chain.STS.Rule.Chain as Spec -import qualified Byron.Spec.Ledger.Delegation as Spec -import qualified Byron.Spec.Ledger.Update as Spec -import qualified Byron.Spec.Ledger.UTxO as Spec -import Codec.Serialise -import Control.Monad.Trans.Except -import qualified Control.State.Transition as Spec -import Data.List.NonEmpty (NonEmpty) -import GHC.Generics (Generic) -import Ouroboros.Consensus.ByronSpec.Ledger.Genesis - (ByronSpecGenesis (..)) -import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () -import qualified Ouroboros.Consensus.ByronSpec.Ledger.Rules as Rules +import Byron.Spec.Chain.STS.Rule.Chain qualified as Spec +import Byron.Spec.Ledger.Delegation qualified as Spec +import Byron.Spec.Ledger.UTxO qualified as Spec +import Byron.Spec.Ledger.Update qualified as Spec +import Codec.Serialise +import Control.Monad.Trans.Except +import Control.State.Transition qualified as Spec +import Data.List.NonEmpty (NonEmpty) +import GHC.Generics (Generic) +import Ouroboros.Consensus.ByronSpec.Ledger.Genesis + ( ByronSpecGenesis (..) 
+ ) +import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () +import Ouroboros.Consensus.ByronSpec.Ledger.Rules qualified as Rules {------------------------------------------------------------------------------- Types @@ -36,48 +37,50 @@ import qualified Ouroboros.Consensus.ByronSpec.Ledger.Rules as Rules -- -- The spec doesn't have a type for this, instead splitting the block body -- into separate lists -data ByronSpecGenTx = - ByronSpecGenTxDCert Spec.DCert - | ByronSpecGenTxTx Spec.Tx +data ByronSpecGenTx + = ByronSpecGenTxDCert Spec.DCert + | ByronSpecGenTxTx Spec.Tx | ByronSpecGenTxUProp Spec.UProp - | ByronSpecGenTxVote Spec.Vote + | ByronSpecGenTxVote Spec.Vote deriving (Show, Generic, Serialise) -- | Transaction errors -- -- We don't distinguish these from any other kind of CHAIN failure. -newtype ByronSpecGenTxErr = ByronSpecGenTxErr { - unByronSpecGenTxErr :: (NonEmpty (Spec.PredicateFailure Spec.CHAIN)) - } +newtype ByronSpecGenTxErr = ByronSpecGenTxErr + { unByronSpecGenTxErr :: (NonEmpty (Spec.PredicateFailure Spec.CHAIN)) + } deriving (Show, Generic, Serialise) {------------------------------------------------------------------------------- Functions -------------------------------------------------------------------------------} -apply :: ByronSpecGenesis - -> ByronSpecGenTx - -> Spec.State Spec.CHAIN - -> Except ByronSpecGenTxErr (Spec.State Spec.CHAIN) +apply :: + ByronSpecGenesis -> + ByronSpecGenTx -> + Spec.State Spec.CHAIN -> + Except ByronSpecGenTxErr (Spec.State Spec.CHAIN) apply cfg = \genTx -> withExcept ByronSpecGenTxErr . 
go genTx - where - go (ByronSpecGenTxDCert dcert) = Rules.liftSDELEG cfg dcert - go (ByronSpecGenTxTx tx ) = Rules.liftUTXOW cfg tx - go (ByronSpecGenTxUProp prop ) = Rules.liftUPIREG cfg prop - go (ByronSpecGenTxVote vote ) = Rules.liftUPIVOTE cfg vote + where + go (ByronSpecGenTxDCert dcert) = Rules.liftSDELEG cfg dcert + go (ByronSpecGenTxTx tx) = Rules.liftUTXOW cfg tx + go (ByronSpecGenTxUProp prop) = Rules.liftUPIREG cfg prop + go (ByronSpecGenTxVote vote) = Rules.liftUPIVOTE cfg vote -partition :: [ByronSpecGenTx] - -> ( [Spec.DCert] - , [Spec.Tx] - , [Spec.UProp] - , [Spec.Vote] - ) +partition :: + [ByronSpecGenTx] -> + ( [Spec.DCert] + , [Spec.Tx] + , [Spec.UProp] + , [Spec.Vote] + ) partition = go ([], [], [], []) - where - go (ds, ts, us, vs) [] = (reverse ds, reverse ts, reverse us, reverse vs) - go (ds, ts, us, vs) (g:gs) = - case g of - ByronSpecGenTxDCert d -> go (d:ds, ts, us, vs) gs - ByronSpecGenTxTx t -> go ( ds, t:ts, us, vs) gs - ByronSpecGenTxUProp u -> go ( ds, ts, u:us, vs) gs - ByronSpecGenTxVote v -> go ( ds, ts, us, v:vs) gs + where + go (ds, ts, us, vs) [] = (reverse ds, reverse ts, reverse us, reverse vs) + go (ds, ts, us, vs) (g : gs) = + case g of + ByronSpecGenTxDCert d -> go (d : ds, ts, us, vs) gs + ByronSpecGenTxTx t -> go (ds, t : ts, us, vs) gs + ByronSpecGenTxUProp u -> go (ds, ts, u : us, vs) gs + ByronSpecGenTxVote v -> go (ds, ts, us, v : vs) gs diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Genesis.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Genesis.hs index 9f72517a0b..d6cd378e9c 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Genesis.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Genesis.hs @@ -11,63 +11,64 @@ -- -- > import Ouroboros.Consensus.ByronSpec.Ledger.Genesis (ByronSpecGenesis) -- > import qualified 
Ouroboros.Consensus.ByronSpec.Ledger.Genesis as Genesis -module Ouroboros.Consensus.ByronSpec.Ledger.Genesis ( - ByronSpecGenesis (..) +module Ouroboros.Consensus.ByronSpec.Ledger.Genesis + ( ByronSpecGenesis (..) , modFeeParams , modPBftThreshold , modPParams , modUtxo , modUtxoValues + -- * Conversions , fromChainEnv , toChainEnv ) where -import qualified Byron.Spec.Chain.STS.Rule.Chain as Spec -import qualified Byron.Spec.Ledger.Core as Spec -import qualified Byron.Spec.Ledger.Update as Spec -import qualified Byron.Spec.Ledger.UTxO as Spec -import Cardano.Binary -import Codec.Serialise (Serialise (..)) -import qualified Control.State.Transition as Spec -import Data.Coerce (coerce) -import Data.Set (Set) -import NoThunks.Class (AllowThunk (..), NoThunks) -import Numeric.Natural (Natural) -import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () -import Ouroboros.Consensus.Node.Serialisation +import Byron.Spec.Chain.STS.Rule.Chain qualified as Spec +import Byron.Spec.Ledger.Core qualified as Spec +import Byron.Spec.Ledger.UTxO qualified as Spec +import Byron.Spec.Ledger.Update qualified as Spec +import Cardano.Binary +import Codec.Serialise (Serialise (..)) +import Control.State.Transition qualified as Spec +import Data.Coerce (coerce) +import Data.Set (Set) +import NoThunks.Class (AllowThunk (..), NoThunks) +import Numeric.Natural (Natural) +import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () +import Ouroboros.Consensus.Node.Serialisation {------------------------------------------------------------------------------- Genesis config -------------------------------------------------------------------------------} -- | The equivalent of the genesis config for the abstract ledger -data ByronSpecGenesis = ByronSpecGenesis { - byronSpecGenesisDelegators :: Set Spec.VKeyGenesis - , byronSpecGenesisInitUtxo :: Spec.UTxO - , byronSpecGenesisInitPParams :: Spec.PParams - , byronSpecGenesisSecurityParam :: Spec.BlockCount - - -- | Slot length - -- - -- The Byron spec 
itself does not talk about slot length at all. Here we - -- record it primarily to support the relation between the spec and the - -- real implementation. For this reason we choose the same representation - -- as the real PBFT does ('ppSlotDuration' in 'ProtocolParameters'). - , byronSpecGenesisSlotLength :: Natural - } - deriving stock (Show) +data ByronSpecGenesis = ByronSpecGenesis + { byronSpecGenesisDelegators :: Set Spec.VKeyGenesis + , byronSpecGenesisInitUtxo :: Spec.UTxO + , byronSpecGenesisInitPParams :: Spec.PParams + , byronSpecGenesisSecurityParam :: Spec.BlockCount + , byronSpecGenesisSlotLength :: Natural + -- ^ Slot length + -- + -- The Byron spec itself does not talk about slot length at all. Here we + -- record it primarily to support the relation between the spec and the + -- real implementation. For this reason we choose the same representation + -- as the real PBFT does ('ppSlotDuration' in 'ProtocolParameters'). + } + deriving stock Show deriving NoThunks via AllowThunk ByronSpecGenesis instance SerialiseNodeToClient blk ByronSpecGenesis where - encodeNodeToClient _ _ (ByronSpecGenesis delegators utxo pparams k slotLength) = mconcat - [ encodeListLen 5 - , toCBOR delegators - , encode utxo - , encode pparams - , toCBOR k - , toCBOR slotLength - ] + encodeNodeToClient _ _ (ByronSpecGenesis delegators utxo pparams k slotLength) = + mconcat + [ encodeListLen 5 + , toCBOR delegators + , encode utxo + , encode pparams + , toCBOR k + , toCBOR slotLength + ] decodeNodeToClient _ _ = do enforceSize "ByronSpecGenesis" 5 ByronSpecGenesis @@ -77,13 +78,17 @@ instance SerialiseNodeToClient blk ByronSpecGenesis where <*> fromCBOR @Spec.BlockCount <*> fromCBOR @Natural -modPBftThreshold :: (Double -> Double) - -> ByronSpecGenesis -> ByronSpecGenesis +modPBftThreshold :: + (Double -> Double) -> + ByronSpecGenesis -> + ByronSpecGenesis modPBftThreshold = modPParams . 
modPParamsPBftThreshold -- | Modify the @a@ and @b@ fee parameters -modFeeParams :: ((Int, Int) -> (Int, Int)) - -> ByronSpecGenesis -> ByronSpecGenesis +modFeeParams :: + ((Int, Int) -> (Int, Int)) -> + ByronSpecGenesis -> + ByronSpecGenesis modFeeParams = modPParams . modPParamsFeeParams -- | Adjust all values in the initial UTxO equally @@ -91,37 +96,47 @@ modUtxoValues :: (Integer -> Integer) -> ByronSpecGenesis -> ByronSpecGenesis modUtxoValues = modUtxo . Spec.mapUTxOValues . coerce modUtxo :: (Spec.UTxO -> Spec.UTxO) -> ByronSpecGenesis -> ByronSpecGenesis -modUtxo f genesis = genesis { - byronSpecGenesisInitUtxo = f (byronSpecGenesisInitUtxo genesis) +modUtxo f genesis = + genesis + { byronSpecGenesisInitUtxo = f (byronSpecGenesisInitUtxo genesis) } -modPParams :: (Spec.PParams -> Spec.PParams) - -> ByronSpecGenesis -> ByronSpecGenesis -modPParams f genesis = genesis { - byronSpecGenesisInitPParams = f (byronSpecGenesisInitPParams genesis) +modPParams :: + (Spec.PParams -> Spec.PParams) -> + ByronSpecGenesis -> + ByronSpecGenesis +modPParams f genesis = + genesis + { byronSpecGenesisInitPParams = f (byronSpecGenesisInitPParams genesis) } {------------------------------------------------------------------------------- Internal: accessors for the protocol parameters -------------------------------------------------------------------------------} -modPParamsPBftThreshold :: (Double -> Double) - -> Spec.PParams -> Spec.PParams -modPParamsPBftThreshold f pparams = pparams { - Spec._bkSgnCntT = Spec.BkSgnCntT (f threshold) +modPParamsPBftThreshold :: + (Double -> Double) -> + Spec.PParams -> + Spec.PParams +modPParamsPBftThreshold f pparams = + pparams + { Spec._bkSgnCntT = Spec.BkSgnCntT (f threshold) } - where - Spec.BkSgnCntT threshold = Spec._bkSgnCntT pparams - -modPParamsFeeParams :: ((Int, Int) -> (Int, Int)) - -> Spec.PParams -> Spec.PParams -modPParamsFeeParams f pparams = pparams { - Spec._factorA = Spec.FactorA $ fst (f (a, b)) + where + Spec.BkSgnCntT 
threshold = Spec._bkSgnCntT pparams + +modPParamsFeeParams :: + ((Int, Int) -> (Int, Int)) -> + Spec.PParams -> + Spec.PParams +modPParamsFeeParams f pparams = + pparams + { Spec._factorA = Spec.FactorA $ fst (f (a, b)) , Spec._factorB = Spec.FactorB $ snd (f (a, b)) } - where - Spec.FactorA a = Spec._factorA pparams - Spec.FactorB b = Spec._factorB pparams + where + Spec.FactorA a = Spec._factorA pparams + Spec.FactorB b = Spec._factorB pparams {------------------------------------------------------------------------------- Conversions @@ -129,48 +144,51 @@ modPParamsFeeParams f pparams = pparams { -- | Derive CHAIN rule environment toChainEnv :: ByronSpecGenesis -> Spec.Environment Spec.CHAIN -toChainEnv ByronSpecGenesis{..} = disableConsensusChecks ( - Spec.Slot 0 -- current slot +toChainEnv ByronSpecGenesis{..} = + disableConsensusChecks + ( Spec.Slot 0 -- current slot , byronSpecGenesisInitUtxo , byronSpecGenesisDelegators , byronSpecGenesisInitPParams , byronSpecGenesisSecurityParam ) - where - -- We are only interested in updating the /ledger state/, not the /consensus - -- chain state/. Unfortunately, the Byron spec does not make that - -- distinction, and so when we call the CHAIN rule, we might get some errors - -- here that the implementation does not report (because it would only find - -- them when we update the chain state). There are at least two possible - -- proper solutions for this: - -- - -- 1. Modify the spec so that we /do/ have the separation. Note that if we - -- did, we would not use the chain state part of the spec, since the - -- chain state part of the dual ledger is determined entirely by the - -- concrete Byron block. - -- 2. Turn 'applyExtLedger' and related types into a type class of their - -- own, so that we can override it specifically for the dual ledger. - -- - -- Either way, we are only testing the /ledger/ part of the two blocks here, - -- not the consensus part. 
For now we just override some parameters in the - -- environment to work around the problem and make sure that none of the - -- consensus checks in the spec can fail. - disableConsensusChecks :: Spec.Environment Spec.CHAIN - -> Spec.Environment Spec.CHAIN - disableConsensusChecks ( _currentSlot - , utx0 - , delegators - , pparams - , k - ) = ( - -- Disable 'SlotInTheFuture' failure - Spec.Slot maxBound - , utx0 - , delegators - -- Disable 'TooManyIssuedBlocks' failure - , pparams { Spec._bkSgnCntT = Spec.BkSgnCntT 1 } - , k - ) + where + -- We are only interested in updating the /ledger state/, not the /consensus + -- chain state/. Unfortunately, the Byron spec does not make that + -- distinction, and so when we call the CHAIN rule, we might get some errors + -- here that the implementation does not report (because it would only find + -- them when we update the chain state). There are at least two possible + -- proper solutions for this: + -- + -- 1. Modify the spec so that we /do/ have the separation. Note that if we + -- did, we would not use the chain state part of the spec, since the + -- chain state part of the dual ledger is determined entirely by the + -- concrete Byron block. + -- 2. Turn 'applyExtLedger' and related types into a type class of their + -- own, so that we can override it specifically for the dual ledger. + -- + -- Either way, we are only testing the /ledger/ part of the two blocks here, + -- not the consensus part. For now we just override some parameters in the + -- environment to work around the problem and make sure that none of the + -- consensus checks in the spec can fail. 
+ disableConsensusChecks :: + Spec.Environment Spec.CHAIN -> + Spec.Environment Spec.CHAIN + disableConsensusChecks + ( _currentSlot + , utx0 + , delegators + , pparams + , k + ) = + ( -- Disable 'SlotInTheFuture' failure + Spec.Slot maxBound + , utx0 + , delegators + , -- Disable 'TooManyIssuedBlocks' failure + pparams{Spec._bkSgnCntT = Spec.BkSgnCntT 1} + , k + ) -- | Construct genesis config from CHAIN environment -- @@ -179,10 +197,11 @@ toChainEnv ByronSpecGenesis{..} = disableConsensusChecks ( -- that role. In order to be able to reuse the test generators, we therefore -- also define a translation in the opposite direction. fromChainEnv :: Natural -> Spec.Environment Spec.CHAIN -> ByronSpecGenesis -fromChainEnv byronSpecGenesisSlotLength - ( _currentSlot - , byronSpecGenesisInitUtxo - , byronSpecGenesisDelegators - , byronSpecGenesisInitPParams - , byronSpecGenesisSecurityParam - ) = ByronSpecGenesis{..} +fromChainEnv + byronSpecGenesisSlotLength + ( _currentSlot + , byronSpecGenesisInitUtxo + , byronSpecGenesisDelegators + , byronSpecGenesisInitPParams + , byronSpecGenesisSecurityParam + ) = ByronSpecGenesis{..} diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Ledger.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Ledger.hs index ff7a60a8d7..57ffd24796 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Ledger.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Ledger.hs @@ -9,57 +9,57 @@ {-# OPTIONS -Wno-orphans #-} -module Ouroboros.Consensus.ByronSpec.Ledger.Ledger ( - ByronSpecLedgerError (..) +module Ouroboros.Consensus.ByronSpec.Ledger.Ledger + ( ByronSpecLedgerError (..) , initByronSpecLedgerState + -- * Type family instances , LedgerState (..) , LedgerTables (..) , Ticked (..) 
) where -import qualified Byron.Spec.Chain.STS.Rule.Chain as Spec -import qualified Byron.Spec.Ledger.Update as Spec -import Codec.Serialise -import Control.Monad.Except -import qualified Control.State.Transition as Spec -import Data.List.NonEmpty (NonEmpty) -import Data.Void (Void) -import GHC.Generics (Generic) -import NoThunks.Class (AllowThunk (..), NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.ByronSpec.Ledger.Accessors -import Ouroboros.Consensus.ByronSpec.Ledger.Block -import Ouroboros.Consensus.ByronSpec.Ledger.Conversions -import Ouroboros.Consensus.ByronSpec.Ledger.Genesis (ByronSpecGenesis) -import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () -import qualified Ouroboros.Consensus.ByronSpec.Ledger.Rules as Rules -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.CommonProtocolParams -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util.IndexedMemPack +import Byron.Spec.Chain.STS.Rule.Chain qualified as Spec +import Byron.Spec.Ledger.Update qualified as Spec +import Codec.Serialise +import Control.Monad.Except +import Control.State.Transition qualified as Spec +import Data.List.NonEmpty (NonEmpty) +import Data.Void (Void) +import GHC.Generics (Generic) +import NoThunks.Class (AllowThunk (..), NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.ByronSpec.Ledger.Accessors +import Ouroboros.Consensus.ByronSpec.Ledger.Block +import Ouroboros.Consensus.ByronSpec.Ledger.Conversions +import Ouroboros.Consensus.ByronSpec.Ledger.Genesis (ByronSpecGenesis) +import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () +import Ouroboros.Consensus.ByronSpec.Ledger.Rules qualified as Rules +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util.IndexedMemPack 
{------------------------------------------------------------------------------- State -------------------------------------------------------------------------------} -data instance LedgerState ByronSpecBlock mk = ByronSpecLedgerState { - -- | Tip of the ledger (most recently applied block, if any) - -- - -- The spec state stores the last applied /hash/, but not the /slot/. - byronSpecLedgerTip :: Maybe SlotNo - - -- | The spec state proper - , byronSpecLedgerState :: Spec.State Spec.CHAIN - } +data instance LedgerState ByronSpecBlock mk = ByronSpecLedgerState + { byronSpecLedgerTip :: Maybe SlotNo + -- ^ Tip of the ledger (most recently applied block, if any) + -- + -- The spec state stores the last applied /hash/, but not the /slot/. + , byronSpecLedgerState :: Spec.State Spec.CHAIN + -- ^ The spec state proper + } deriving stock (Show, Eq, Generic) - deriving anyclass (Serialise) + deriving anyclass Serialise deriving NoThunks via AllowThunk (LedgerState ByronSpecBlock mk) -newtype ByronSpecLedgerError = ByronSpecLedgerError { - unByronSpecLedgerError :: NonEmpty (Spec.PredicateFailure Spec.CHAIN) - } +newtype ByronSpecLedgerError = ByronSpecLedgerError + { unByronSpecLedgerError :: NonEmpty (Spec.PredicateFailure Spec.CHAIN) + } deriving (Show, Eq) deriving NoThunks via AllowThunk ByronSpecLedgerError @@ -68,8 +68,9 @@ type instance LedgerCfg (LedgerState ByronSpecBlock) = ByronSpecGenesis instance UpdateLedger ByronSpecBlock initByronSpecLedgerState :: ByronSpecGenesis -> LedgerState ByronSpecBlock mk -initByronSpecLedgerState cfg = ByronSpecLedgerState { - byronSpecLedgerTip = Nothing +initByronSpecLedgerState cfg = + ByronSpecLedgerState + { byronSpecLedgerTip = Nothing , byronSpecLedgerState = Rules.initStateCHAIN cfg } @@ -78,66 +79,79 @@ initByronSpecLedgerState cfg = ByronSpecLedgerState { -------------------------------------------------------------------------------} instance GetTip (LedgerState ByronSpecBlock) where - getTip (ByronSpecLedgerState tip 
state) = castPoint $ + getTip (ByronSpecLedgerState tip state) = + castPoint $ getByronSpecTip tip state instance GetTip (Ticked (LedgerState ByronSpecBlock)) where - getTip (TickedByronSpecLedgerState tip state) = castPoint $ + getTip (TickedByronSpecLedgerState tip state) = + castPoint $ getByronSpecTip tip state getByronSpecTip :: Maybe SlotNo -> Spec.State Spec.CHAIN -> Point ByronSpecBlock -getByronSpecTip Nothing _ = GenesisPoint -getByronSpecTip (Just slot) state = BlockPoint - slot - (getChainStateHash state) +getByronSpecTip Nothing _ = GenesisPoint +getByronSpecTip (Just slot) state = + BlockPoint + slot + (getChainStateHash state) {------------------------------------------------------------------------------- Ticking -------------------------------------------------------------------------------} -data instance Ticked (LedgerState ByronSpecBlock) mk = TickedByronSpecLedgerState { - untickedByronSpecLedgerTip :: Maybe SlotNo - , tickedByronSpecLedgerState :: Spec.State Spec.CHAIN - } +data instance Ticked (LedgerState ByronSpecBlock) mk = TickedByronSpecLedgerState + { untickedByronSpecLedgerTip :: Maybe SlotNo + , tickedByronSpecLedgerState :: Spec.State Spec.CHAIN + } deriving stock (Show, Eq) deriving NoThunks via AllowThunk (Ticked (LedgerState ByronSpecBlock) mk) instance IsLedger (LedgerState ByronSpecBlock) where type LedgerErr (LedgerState ByronSpecBlock) = ByronSpecLedgerError - type AuxLedgerEvent (LedgerState ByronSpecBlock) = - VoidLedgerEvent (LedgerState ByronSpecBlock) + type + AuxLedgerEvent (LedgerState ByronSpecBlock) = + VoidLedgerEvent (LedgerState ByronSpecBlock) applyChainTickLedgerResult _evs cfg slot (ByronSpecLedgerState tip state) = - pureLedgerResult - $ TickedByronSpecLedgerState { - untickedByronSpecLedgerTip = tip - , tickedByronSpecLedgerState = Rules.applyChainTick - cfg - (toByronSpecSlotNo slot) - state - } + pureLedgerResult $ + TickedByronSpecLedgerState + { untickedByronSpecLedgerTip = tip + , 
tickedByronSpecLedgerState = + Rules.applyChainTick + cfg + (toByronSpecSlotNo slot) + state + } {------------------------------------------------------------------------------- Ledger Tables -------------------------------------------------------------------------------} -type instance TxIn (LedgerState ByronSpecBlock) = Void +type instance TxIn (LedgerState ByronSpecBlock) = Void type instance TxOut (LedgerState ByronSpecBlock) = Void instance LedgerTablesAreTrivial (LedgerState ByronSpecBlock) where convertMapKind (ByronSpecLedgerState x y) = - ByronSpecLedgerState x y + ByronSpecLedgerState x y instance LedgerTablesAreTrivial (Ticked (LedgerState ByronSpecBlock)) where convertMapKind (TickedByronSpecLedgerState x y) = - TickedByronSpecLedgerState x y -deriving via Void - instance IndexedMemPack (LedgerState ByronSpecBlock EmptyMK) Void -deriving via TrivialLedgerTables (LedgerState ByronSpecBlock) - instance HasLedgerTables (LedgerState ByronSpecBlock) -deriving via TrivialLedgerTables (Ticked (LedgerState ByronSpecBlock)) - instance HasLedgerTables (Ticked (LedgerState ByronSpecBlock)) -deriving via TrivialLedgerTables (LedgerState ByronSpecBlock) - instance CanStowLedgerTables (LedgerState ByronSpecBlock) + TickedByronSpecLedgerState x y +deriving via + Void + instance + IndexedMemPack (LedgerState ByronSpecBlock EmptyMK) Void +deriving via + TrivialLedgerTables (LedgerState ByronSpecBlock) + instance + HasLedgerTables (LedgerState ByronSpecBlock) +deriving via + TrivialLedgerTables (Ticked (LedgerState ByronSpecBlock)) + instance + HasLedgerTables (Ticked (LedgerState ByronSpecBlock)) +deriving via + TrivialLedgerTables (LedgerState ByronSpecBlock) + instance + CanStowLedgerTables (LedgerState ByronSpecBlock) {------------------------------------------------------------------------------- Applying blocks @@ -145,13 +159,12 @@ deriving via TrivialLedgerTables (LedgerState ByronSpecBlock) instance ApplyBlock (LedgerState ByronSpecBlock) ByronSpecBlock where 
applyBlockLedgerResultWithValidation _ _ cfg block (TickedByronSpecLedgerState _tip state) = - withExcept ByronSpecLedgerError - $ fmap (pureLedgerResult . ByronSpecLedgerState (Just (blockSlot block))) - $ -- Note that the CHAIN rule also applies the chain tick. So even - -- though the ledger we received has already been ticked with - -- 'applyChainTick', we do it again as part of CHAIN. This is safe, as - -- it is idempotent. If we wanted to avoid the repeated tick, we would - -- have to call the subtransitions of CHAIN (except for ticking). + withExcept ByronSpecLedgerError $ + fmap (pureLedgerResult . ByronSpecLedgerState (Just (blockSlot block))) $ -- Note that the CHAIN rule also applies the chain tick. So even + -- though the ledger we received has already been ticked with + -- 'applyChainTick', we do it again as part of CHAIN. This is safe, as + -- it is idempotent. If we wanted to avoid the repeated tick, we would + -- have to call the subtransitions of CHAIN (except for ticking). Rules.liftCHAIN cfg (byronSpecBlock block) @@ -169,10 +182,10 @@ instance ApplyBlock (LedgerState ByronSpecBlock) ByronSpecBlock where instance CommonProtocolParams ByronSpecBlock where maxHeaderSize = fromIntegral . Spec._maxHdrSz . getPParams - maxTxSize = fromIntegral . Spec._maxTxSz . getPParams + maxTxSize = fromIntegral . Spec._maxTxSz . getPParams getPParams :: LedgerState ByronSpecBlock mk -> Spec.PParams getPParams = - Spec.protocolParameters + Spec.protocolParameters . getChainStateUPIState . 
byronSpecLedgerState diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Mempool.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Mempool.hs index 339d697731..cd68d5bab9 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Mempool.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Mempool.hs @@ -2,37 +2,38 @@ {-# LANGUAGE DeriveGeneric #-} {-# LANGUAGE DerivingVia #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.ByronSpec.Ledger.Mempool ( - -- * Type family instances +module Ouroboros.Consensus.ByronSpec.Ledger.Mempool + ( -- * Type family instances GenTx (..) , Validated (..) ) where -import Codec.Serialise -import GHC.Generics (Generic) -import NoThunks.Class (AllowThunk (..), NoThunks) -import Ouroboros.Consensus.ByronSpec.Ledger.Block -import Ouroboros.Consensus.ByronSpec.Ledger.GenTx - (ByronSpecGenTx (..), ByronSpecGenTxErr (..)) -import qualified Ouroboros.Consensus.ByronSpec.Ledger.GenTx as GenTx -import Ouroboros.Consensus.ByronSpec.Ledger.Ledger -import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.Tables.Utils +import Codec.Serialise +import GHC.Generics (Generic) +import NoThunks.Class (AllowThunk (..), NoThunks) +import Ouroboros.Consensus.ByronSpec.Ledger.Block +import Ouroboros.Consensus.ByronSpec.Ledger.GenTx + ( ByronSpecGenTx (..) + , ByronSpecGenTxErr (..) 
+ ) +import Ouroboros.Consensus.ByronSpec.Ledger.GenTx qualified as GenTx +import Ouroboros.Consensus.ByronSpec.Ledger.Ledger +import Ouroboros.Consensus.ByronSpec.Ledger.Orphans () +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.Tables.Utils -newtype instance GenTx ByronSpecBlock = ByronSpecGenTx { - unByronSpecGenTx :: ByronSpecGenTx - } +newtype instance GenTx ByronSpecBlock = ByronSpecGenTx + { unByronSpecGenTx :: ByronSpecGenTx + } deriving stock (Show, Generic) - deriving anyclass (Serialise) + deriving anyclass Serialise deriving NoThunks via AllowThunk (GenTx ByronSpecBlock) -newtype instance Validated (GenTx ByronSpecBlock) = ValidatedByronSpecGenTx { - forgetValidatedByronSpecGenTx :: GenTx ByronSpecBlock - } +newtype instance Validated (GenTx ByronSpecBlock) = ValidatedByronSpecGenTx + { forgetValidatedByronSpecGenTx :: GenTx ByronSpecBlock + } deriving stock (Show, Generic) deriving anyclass NoThunks @@ -40,17 +41,18 @@ type instance ApplyTxErr ByronSpecBlock = ByronSpecGenTxErr instance LedgerSupportsMempool ByronSpecBlock where applyTx cfg _wti _slot tx (TickedByronSpecLedgerState tip st) = - fmap (\st' -> - ( TickedByronSpecLedgerState tip st' - , ValidatedByronSpecGenTx tx - ) - ) + fmap + ( \st' -> + ( TickedByronSpecLedgerState tip st' + , ValidatedByronSpecGenTx tx + ) + ) $ GenTx.apply cfg (unByronSpecGenTx tx) st -- Byron spec doesn't have multiple validation modes reapplyTx _ cfg slot vtx st = - attachEmptyDiffs . applyDiffs st . fst - <$> applyTx cfg DoNotIntervene slot (forgetValidatedByronSpecGenTx vtx) st + attachEmptyDiffs . applyDiffs st . 
fst + <$> applyTx cfg DoNotIntervene slot (forgetValidatedByronSpecGenTx vtx) st txForgetValidated = forgetValidatedByronSpecGenTx diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Orphans.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Orphans.hs index a3441eab02..fcf10ba8e2 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Orphans.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Orphans.hs @@ -4,35 +4,37 @@ {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE StandaloneDeriving #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Missing instances for standard type classes in the Byron spec module Ouroboros.Consensus.ByronSpec.Ledger.Orphans () where -import qualified Byron.Spec.Chain.STS.Block as Spec -import qualified Byron.Spec.Chain.STS.Rule.BBody as Spec -import qualified Byron.Spec.Chain.STS.Rule.Bupi as Spec -import qualified Byron.Spec.Chain.STS.Rule.Chain as Spec -import qualified Byron.Spec.Chain.STS.Rule.Epoch as Spec -import qualified Byron.Spec.Chain.STS.Rule.Pbft as Spec -import qualified Byron.Spec.Chain.STS.Rule.SigCnt as Spec -import qualified Byron.Spec.Ledger.Core as Spec -import qualified Byron.Spec.Ledger.Delegation as Spec -import qualified Byron.Spec.Ledger.STS.UTXO as Spec -import qualified Byron.Spec.Ledger.STS.UTXOW as Spec -import qualified Byron.Spec.Ledger.STS.UTXOWS as Spec -import qualified Byron.Spec.Ledger.Update as Spec -import qualified Byron.Spec.Ledger.UTxO as Spec -import Cardano.Ledger.Binary.Plain (FromCBOR (..), ToCBOR (..), - enforceSize) -import Codec.CBOR.Encoding (encodeListLen) -import Codec.Serialise -import qualified Control.State.Transition as Spec -import Data.Bimap (Bimap) -import qualified Data.Bimap as Bimap -import GHC.Generics (Generic) -import Test.Cardano.Chain.Elaboration.Block as Spec.Test 
+import Byron.Spec.Chain.STS.Block qualified as Spec +import Byron.Spec.Chain.STS.Rule.BBody qualified as Spec +import Byron.Spec.Chain.STS.Rule.Bupi qualified as Spec +import Byron.Spec.Chain.STS.Rule.Chain qualified as Spec +import Byron.Spec.Chain.STS.Rule.Epoch qualified as Spec +import Byron.Spec.Chain.STS.Rule.Pbft qualified as Spec +import Byron.Spec.Chain.STS.Rule.SigCnt qualified as Spec +import Byron.Spec.Ledger.Core qualified as Spec +import Byron.Spec.Ledger.Delegation qualified as Spec +import Byron.Spec.Ledger.STS.UTXO qualified as Spec +import Byron.Spec.Ledger.STS.UTXOW qualified as Spec +import Byron.Spec.Ledger.STS.UTXOWS qualified as Spec +import Byron.Spec.Ledger.UTxO qualified as Spec +import Byron.Spec.Ledger.Update qualified as Spec +import Cardano.Ledger.Binary.Plain + ( FromCBOR (..) + , ToCBOR (..) + , enforceSize + ) +import Codec.CBOR.Encoding (encodeListLen) +import Codec.Serialise +import Control.State.Transition qualified as Spec +import Data.Bimap (Bimap) +import Data.Bimap qualified as Bimap +import GHC.Generics (Generic) +import Test.Cardano.Chain.Elaboration.Block as Spec.Test {------------------------------------------------------------------------------- Serialise @@ -107,7 +109,7 @@ instance Serialise Spec.UtxoPredicateFailure instance Serialise Spec.UtxowPredicateFailure instance Serialise Spec.UtxowsPredicateFailure -instance Serialise a => Serialise (Spec.Sig a) +instance Serialise a => Serialise (Spec.Sig a) instance Serialise a => Serialise (Spec.Threshold a) {------------------------------------------------------------------------------- @@ -115,8 +117,9 @@ instance Serialise a => Serialise (Spec.Threshold a) -------------------------------------------------------------------------------} instance Serialise Spec.Test.AbstractToConcreteIdMaps where - encode AbstractToConcreteIdMaps{..} = mconcat [ - encodeListLen 2 + encode AbstractToConcreteIdMaps{..} = + mconcat + [ encodeListLen 2 , encode (ToFromCBOR <$> 
transactionIds) , encode (ToFromCBOR <$> proposalIds) ] @@ -124,7 +127,7 @@ instance Serialise Spec.Test.AbstractToConcreteIdMaps where decode = do enforceSize "AbstractToConcreteIdMaps" 2 transactionIds <- fmap unToFromCBOR <$> decode - proposalIds <- fmap unToFromCBOR <$> decode + proposalIds <- fmap unToFromCBOR <$> decode return $ AbstractToConcreteIdMaps{..} {------------------------------------------------------------------------------- @@ -158,9 +161,14 @@ deriving instance Generic Spec.SigcntPredicateFailure TODO: This should move someplace else. -------------------------------------------------------------------------------} -instance ( Ord k, Ord v - , Serialise k, Serialise v - ) => Serialise (Bimap k v) where +instance + ( Ord k + , Ord v + , Serialise k + , Serialise v + ) => + Serialise (Bimap k v) + where encode = encode . Bimap.toList decode = Bimap.fromList <$> decode @@ -168,7 +176,7 @@ instance ( Ord k, Ord v Auxiliary: Cardano.Binary.ToCBOR/FromCBOR to Serialise bridge -------------------------------------------------------------------------------} -newtype ToFromCBOR a = ToFromCBOR { unToFromCBOR :: a } +newtype ToFromCBOR a = ToFromCBOR {unToFromCBOR :: a} instance (ToCBOR a, FromCBOR a) => Serialise (ToFromCBOR a) where encode = toCBOR . 
unToFromCBOR diff --git a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Rules.hs b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Rules.hs index 090669ce05..13d86422a0 100644 --- a/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Rules.hs +++ b/ouroboros-consensus-cardano/src/unstable-byronspec/Ouroboros/Consensus/ByronSpec/Ledger/Rules.hs @@ -11,17 +11,20 @@ -- Intended for qualified import -- -- import qualified Ouroboros.Consensus.ByronSpec.Ledger.Rules as Rules -module Ouroboros.Consensus.ByronSpec.Ledger.Rules ( - -- * Ledger +module Ouroboros.Consensus.ByronSpec.Ledger.Rules + ( -- * Ledger applyChainTick + -- * Lift STS transition rules to the chain level , liftCHAIN , liftSDELEG , liftUPIREG , liftUPIVOTE , liftUTXOW + -- * STS initial rules , initStateCHAIN + -- * Rule context (exported for the benefit of the tests , RuleContext (..) , ctxtCHAIN @@ -33,27 +36,28 @@ module Ouroboros.Consensus.ByronSpec.Ledger.Rules ( , ctxtUTXOW ) where -import qualified Byron.Spec.Chain.STS.Rule.BBody as Spec -import qualified Byron.Spec.Chain.STS.Rule.Bupi as Spec -import qualified Byron.Spec.Chain.STS.Rule.Chain as Spec -import qualified Byron.Spec.Chain.STS.Rule.Epoch as Spec -import qualified Byron.Spec.Ledger.Core as Spec -import qualified Byron.Spec.Ledger.Delegation as Spec -import qualified Byron.Spec.Ledger.STS.UTXO as Spec -import qualified Byron.Spec.Ledger.STS.UTXOW as Spec -import qualified Byron.Spec.Ledger.STS.UTXOWS as Spec -import qualified Byron.Spec.Ledger.Update as Spec -import Control.Monad -import Control.Monad.Trans.Except -import qualified Control.State.Transition as Spec -import Data.Functor.Identity -import Data.List.NonEmpty (NonEmpty) -import Data.Proxy -import qualified Data.Set as Set -import Ouroboros.Consensus.ByronSpec.Ledger.Accessors -import Ouroboros.Consensus.ByronSpec.Ledger.Genesis - (ByronSpecGenesis (..)) -import 
qualified Ouroboros.Consensus.ByronSpec.Ledger.Genesis as Genesis +import Byron.Spec.Chain.STS.Rule.BBody qualified as Spec +import Byron.Spec.Chain.STS.Rule.Bupi qualified as Spec +import Byron.Spec.Chain.STS.Rule.Chain qualified as Spec +import Byron.Spec.Chain.STS.Rule.Epoch qualified as Spec +import Byron.Spec.Ledger.Core qualified as Spec +import Byron.Spec.Ledger.Delegation qualified as Spec +import Byron.Spec.Ledger.STS.UTXO qualified as Spec +import Byron.Spec.Ledger.STS.UTXOW qualified as Spec +import Byron.Spec.Ledger.STS.UTXOWS qualified as Spec +import Byron.Spec.Ledger.Update qualified as Spec +import Control.Monad +import Control.Monad.Trans.Except +import Control.State.Transition qualified as Spec +import Data.Functor.Identity +import Data.List.NonEmpty (NonEmpty) +import Data.Proxy +import Data.Set qualified as Set +import Ouroboros.Consensus.ByronSpec.Ledger.Accessors +import Ouroboros.Consensus.ByronSpec.Ledger.Genesis + ( ByronSpecGenesis (..) + ) +import Ouroboros.Consensus.ByronSpec.Ledger.Genesis qualified as Genesis {------------------------------------------------------------------------------- Chain tick @@ -70,21 +74,23 @@ import qualified Ouroboros.Consensus.ByronSpec.Ledger.Genesis as Genesis -- -- This matches quite closely what 'applyChainTick' in -- "Ouroboros.Consensus.Ledger.Byron.Auxiliary" does. 
-applyChainTick :: ByronSpecGenesis - -> Spec.Slot - -> Spec.State Spec.CHAIN - -> Spec.State Spec.CHAIN +applyChainTick :: + ByronSpecGenesis -> + Spec.Slot -> + Spec.State Spec.CHAIN -> + Spec.State Spec.CHAIN applyChainTick cfg slot st = - case runExcept (go st) of - Left _ -> error "applyChainTick: unexpected failure" - Right st' -> st' - where - go :: Spec.State Spec.CHAIN - -> Except (NonEmpty (Spec.PredicateFailure Spec.CHAIN)) (Spec.State Spec.CHAIN) - go = -- Apply EPOCH rule (deals with update proposals) - liftEPOCH cfg slot - - -- Apply scheduled delegations (empty list of new delegation certs) + case runExcept (go st) of + Left _ -> error "applyChainTick: unexpected failure" + Right st' -> st' + where + go :: + Spec.State Spec.CHAIN -> + Except (NonEmpty (Spec.PredicateFailure Spec.CHAIN)) (Spec.State Spec.CHAIN) + go = + -- Apply EPOCH rule (deals with update proposals) + liftEPOCH cfg slot + -- Apply scheduled delegations (empty list of new delegation certs) >=> liftDELEG cfg [] {------------------------------------------------------------------------------- @@ -135,53 +141,63 @@ liftDELEG = liftRule . ctxtDELEG -- genesis config and some information from the state; the reason is that -- although some values come from the state, as far as these rules are -- concerned, they are constants. -data RuleContext sts = RuleContext { - getRuleState :: GetChainState (Spec.State sts) - , modRuleState :: ModChainState (Spec.State sts) - , liftFailure :: Spec.PredicateFailure sts -> Spec.PredicateFailure Spec.CHAIN - , getRuleEnv :: Spec.State Spec.CHAIN -> Spec.Environment sts - } +data RuleContext sts = RuleContext + { getRuleState :: GetChainState (Spec.State sts) + , modRuleState :: ModChainState (Spec.State sts) + , liftFailure :: Spec.PredicateFailure sts -> Spec.PredicateFailure Spec.CHAIN + , getRuleEnv :: Spec.State Spec.CHAIN -> Spec.Environment sts + } -applySTS :: forall sts. 
(Spec.STS sts, Spec.BaseM sts ~ Identity) - => Proxy sts - -> Spec.Environment sts - -> Spec.Signal sts - -> Spec.State sts - -> Except (NonEmpty (Spec.PredicateFailure sts)) (Spec.State sts) -applySTS _ env signal state = except $ - Spec.applySTS @sts $ Spec.TRC (env, state, signal) +applySTS :: + forall sts. + (Spec.STS sts, Spec.BaseM sts ~ Identity) => + Proxy sts -> + Spec.Environment sts -> + Spec.Signal sts -> + Spec.State sts -> + Except (NonEmpty (Spec.PredicateFailure sts)) (Spec.State sts) +applySTS _ env signal state = + except $ + Spec.applySTS @sts $ + Spec.TRC (env, state, signal) -type LiftedRule sts = Spec.Signal sts - -> Spec.State Spec.CHAIN - -> Except (NonEmpty (Spec.PredicateFailure Spec.CHAIN)) - (Spec.State Spec.CHAIN) +type LiftedRule sts = + Spec.Signal sts -> + Spec.State Spec.CHAIN -> + Except + (NonEmpty (Spec.PredicateFailure Spec.CHAIN)) + (Spec.State Spec.CHAIN) -- | Lift sub-STS rule to top-level CHAIN -liftRule :: forall sts. (Spec.STS sts, Spec.BaseM sts ~ Identity) - => RuleContext sts -> LiftedRule sts +liftRule :: + forall sts. 
+ (Spec.STS sts, Spec.BaseM sts ~ Identity) => + RuleContext sts -> LiftedRule sts liftRule RuleContext{..} signal st = - withExcept (fmap liftFailure) $ - modRuleState (applySTS (Proxy @sts) (getRuleEnv st) signal) st + withExcept (fmap liftFailure) $ + modRuleState (applySTS (Proxy @sts) (getRuleEnv st) signal) st {------------------------------------------------------------------------------- Instances of 'RuleContext' -------------------------------------------------------------------------------} ctxtCHAIN :: ByronSpecGenesis -> RuleContext Spec.CHAIN -ctxtCHAIN cfg = RuleContext { - getRuleState = id +ctxtCHAIN cfg = + RuleContext + { getRuleState = id , modRuleState = id - , liftFailure = id - , getRuleEnv = \_st -> Genesis.toChainEnv cfg + , liftFailure = id + , getRuleEnv = \_st -> Genesis.toChainEnv cfg } ctxtEPOCH :: ByronSpecGenesis -> RuleContext Spec.EPOCH -ctxtEPOCH ByronSpecGenesis{..} = RuleContext { - getRuleState = getChainStateUPIState +ctxtEPOCH ByronSpecGenesis{..} = + RuleContext + { getRuleState = getChainStateUPIState , modRuleState = modChainStateUPIState - , liftFailure = Spec.EpochFailure - , getRuleEnv = \st -> ( - -- The _current_ epoch + , liftFailure = Spec.EpochFailure + , getRuleEnv = \st -> + ( -- The _current_ epoch -- This is needed to detect if the new slot introduces the next epoch Spec.sEpoch (getChainStateSlot st) byronSpecGenesisSecurityParam , byronSpecGenesisSecurityParam @@ -189,51 +205,61 @@ ctxtEPOCH ByronSpecGenesis{..} = RuleContext { } ctxtDELEG :: ByronSpecGenesis -> RuleContext Spec.DELEG -ctxtDELEG ByronSpecGenesis{..} = RuleContext { - getRuleState = getChainStateDIState +ctxtDELEG ByronSpecGenesis{..} = + RuleContext + { getRuleState = getChainStateDIState , modRuleState = modChainStateDIState - , liftFailure = Spec.LedgerDelegationFailure - , getRuleEnv = \st -> Spec.DSEnv { - _dSEnvAllowedDelegators = byronSpecGenesisDelegators - , _dSEnvEpoch = Spec.sEpoch - (getChainStateSlot st) - 
byronSpecGenesisSecurityParam - , _dSEnvSlot = getChainStateSlot st - , _dSEnvK = byronSpecGenesisSecurityParam - } + , liftFailure = Spec.LedgerDelegationFailure + , getRuleEnv = \st -> + Spec.DSEnv + { _dSEnvAllowedDelegators = byronSpecGenesisDelegators + , _dSEnvEpoch = + Spec.sEpoch + (getChainStateSlot st) + byronSpecGenesisSecurityParam + , _dSEnvSlot = getChainStateSlot st + , _dSEnvK = byronSpecGenesisSecurityParam + } } ctxtSDELEG :: ByronSpecGenesis -> RuleContext Spec.SDELEG -ctxtSDELEG cfg = RuleContext { - getRuleState = getDIStateDSState . getRuleState (ctxtDELEG cfg) +ctxtSDELEG cfg = + RuleContext + { getRuleState = getDIStateDSState . getRuleState (ctxtDELEG cfg) , modRuleState = modRuleState (ctxtDELEG cfg) . modDIStateDSState - , liftFailure = liftFailure (ctxtDELEG cfg) - . Spec.SDelegSFailure - . Spec.SDelegFailure - , getRuleEnv = getRuleEnv (ctxtDELEG cfg) + , liftFailure = + liftFailure (ctxtDELEG cfg) + . Spec.SDelegSFailure + . Spec.SDelegFailure + , getRuleEnv = getRuleEnv (ctxtDELEG cfg) } ctxtUTXOW :: ByronSpecGenesis -> RuleContext Spec.UTXOW -ctxtUTXOW ByronSpecGenesis{..} = RuleContext { - getRuleState = getChainStateUtxoState - , modRuleState = modChainStateUtxoState - , liftFailure = Spec.LedgerUTxOFailure - . Spec.UtxowFailure - , getRuleEnv = \st -> Spec.UTxOEnv { - utxo0 = byronSpecGenesisInitUtxo - , pps = Spec.protocolParameters (getChainStateUPIState st) - } - } +ctxtUTXOW ByronSpecGenesis{..} = + RuleContext + { getRuleState = getChainStateUtxoState + , modRuleState = modChainStateUtxoState + , liftFailure = + Spec.LedgerUTxOFailure + . 
Spec.UtxowFailure + , getRuleEnv = \st -> + Spec.UTxOEnv + { utxo0 = byronSpecGenesisInitUtxo + , pps = Spec.protocolParameters (getChainStateUPIState st) + } + } ctxtUPIREG :: ByronSpecGenesis -> RuleContext Spec.UPIREG -ctxtUPIREG ByronSpecGenesis{..} = RuleContext { - getRuleState = getChainStateUPIState +ctxtUPIREG ByronSpecGenesis{..} = + RuleContext + { getRuleState = getChainStateUPIState , modRuleState = modChainStateUPIState - , liftFailure = Spec.BBodyFailure - . Spec.BUPIFailure - . Spec.UPIREGFailure - , getRuleEnv = \st -> ( - getChainStateSlot st + , liftFailure = + Spec.BBodyFailure + . Spec.BUPIFailure + . Spec.UPIREGFailure + , getRuleEnv = \st -> + ( getChainStateSlot st , Spec._dIStateDelegationMap (getChainStateDIState st) , byronSpecGenesisSecurityParam , fromIntegral $ Set.size byronSpecGenesisDelegators @@ -241,15 +267,17 @@ ctxtUPIREG ByronSpecGenesis{..} = RuleContext { } ctxtUPIVOTE :: ByronSpecGenesis -> RuleContext Spec.UPIVOTE -ctxtUPIVOTE cfg = RuleContext { - getRuleState = getRuleState (ctxtUPIREG cfg) +ctxtUPIVOTE cfg = + RuleContext + { getRuleState = getRuleState (ctxtUPIREG cfg) , modRuleState = modRuleState (ctxtUPIREG cfg) - , getRuleEnv = getRuleEnv (ctxtUPIREG cfg) - , liftFailure = Spec.BBodyFailure - . Spec.BUPIFailure - . Spec.UPIVOTESFailure - . Spec.ApplyVotesFailure - . Spec.UpivoteFailure + , getRuleEnv = getRuleEnv (ctxtUPIREG cfg) + , liftFailure = + Spec.BBodyFailure + . Spec.BUPIFailure + . Spec.UPIVOTESFailure + . Spec.ApplyVotesFailure + . 
Spec.UpivoteFailure } {------------------------------------------------------------------------------- @@ -258,10 +286,10 @@ ctxtUPIVOTE cfg = RuleContext { initStateCHAIN :: ByronSpecGenesis -> Spec.State Spec.CHAIN initStateCHAIN cfg = - dontExpectError $ - Spec.applySTS @Spec.CHAIN $ - Spec.IRC (Genesis.toChainEnv cfg) - where - dontExpectError :: Either a b -> b - dontExpectError (Left _) = error "initStateCHAIN: unexpected error" - dontExpectError (Right b) = b + dontExpectError $ + Spec.applySTS @Spec.CHAIN $ + Spec.IRC (Genesis.toChainEnv cfg) + where + dontExpectError :: Either a b -> b + dontExpectError (Left _) = error "initStateCHAIN: unexpected error" + dontExpectError (Right b) = b diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/Examples.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/Examples.hs index 54d568c402..5a559adb0e 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/Examples.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/Examples.hs @@ -8,15 +8,15 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeOperators #-} - {-# OPTIONS_GHC -Wno-orphans #-} #if __GLASGOW_HASKELL__ >= 908 {-# OPTIONS_GHC -Wno-x-partial #-} #endif -module Test.Consensus.Cardano.Examples ( - -- * Setup +module Test.Consensus.Cardano.Examples + ( -- * Setup codecConfig + -- * Examples , exampleApplyTxErrWrongEraByron , exampleApplyTxErrWrongEraShelley @@ -31,49 +31,56 @@ module Test.Consensus.Cardano.Examples ( , examples ) where -import Data.Bifunctor (second) -import Data.Coerce (Coercible, coerce) -import Data.SOP.BasicFunctors -import Data.SOP.Counting (Exactly (..)) -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Index (Index (..), himap) -import Data.SOP.Strict -import qualified Data.Text as T -import Ouroboros.Consensus.Block -import 
Ouroboros.Consensus.Byron.ByronHFC -import Ouroboros.Consensus.Byron.Ledger (ByronBlock) -import qualified Ouroboros.Consensus.Byron.Ledger as Byron -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.CanHardFork -import Ouroboros.Consensus.Cardano.Ledger () -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Embed.Nary -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation (AnnTip) -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool (ApplyTxErr) -import Ouroboros.Consensus.Ledger.Tables (EmptyMK, ValuesMK, - castLedgerTables) -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock) -import qualified Ouroboros.Consensus.Shelley.Ledger as Shelley -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Network.Block (Serialised (..)) -import qualified Test.Consensus.Byron.Examples as Byron -import qualified Test.Consensus.Shelley.Examples as Shelley -import Test.Util.Serialisation.Examples (Examples (..), Labelled, - labelled, prefixExamples) -import Test.Util.Serialisation.SomeResult (SomeResult (..)) +import Data.Bifunctor (second) +import Data.Coerce (Coercible, coerce) +import Data.SOP.BasicFunctors +import Data.SOP.Counting (Exactly (..)) +import Data.SOP.Functors (Flip (..)) +import Data.SOP.Index (Index (..), himap) +import Data.SOP.Strict +import Data.Text qualified as T +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.ByronHFC +import Ouroboros.Consensus.Byron.Ledger (ByronBlock) +import Ouroboros.Consensus.Byron.Ledger qualified as Byron +import Ouroboros.Consensus.Cardano.Block +import 
Ouroboros.Consensus.Cardano.CanHardFork +import Ouroboros.Consensus.Cardano.Ledger () +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.Embed.Nary +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation (AnnTip) +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool (ApplyTxErr) +import Ouroboros.Consensus.Ledger.Tables + ( EmptyMK + , ValuesMK + , castLedgerTables + ) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock) +import Ouroboros.Consensus.Shelley.Ledger qualified as Shelley +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Network.Block (Serialised (..)) +import Test.Consensus.Byron.Examples qualified as Byron +import Test.Consensus.Shelley.Examples qualified as Shelley +import Test.Util.Serialisation.Examples + ( Examples (..) + , Labelled + , labelled + , prefixExamples + ) +import Test.Util.Serialisation.SomeResult (SomeResult (..)) type Crypto = StandardCrypto eraExamples :: NP Examples (CardanoEras Crypto) eraExamples = - Byron.examples + Byron.examples :* Shelley.examplesShelley :* Shelley.examplesAllegra :* Shelley.examplesMary @@ -85,88 +92,101 @@ eraExamples = -- | By using this function, we can't forget to update this test when adding a -- new era to 'CardanoEras'. 
combineEras :: - NP Examples (CardanoEras Crypto) - -> Examples (CardanoBlock Crypto) -combineEras perEraExamples = Examples { - exampleBlock = coerce $ viaInject @I (coerce exampleBlock) - , exampleSerialisedBlock = viaInject exampleSerialisedBlock - , exampleHeader = viaInject exampleHeader - , exampleSerialisedHeader = viaInject exampleSerialisedHeader - , exampleHeaderHash = coerce $ viaInject @WrapHeaderHash (coerce exampleHeaderHash) - , exampleGenTx = viaInject exampleGenTx - , exampleGenTxId = coerce $ viaInject @WrapGenTxId (coerce exampleGenTxId) - , exampleApplyTxErr = coerce $ viaInject @WrapApplyTxErr (coerce exampleApplyTxErr) - , exampleQuery = fmap (second unComp) $ viaInject (fmap (second Comp) . exampleQuery) - , exampleResult = viaInject exampleResult - , exampleAnnTip = viaInject exampleAnnTip - , exampleLedgerState = fmap (second unFlip) $ viaInject (fmap (second Flip) . exampleLedgerState) - , exampleChainDepState = coerce $ viaInject @WrapChainDepState (coerce exampleChainDepState) - , exampleExtLedgerState = fmap (second unFlip) $ viaInject (fmap (second Flip) . exampleExtLedgerState) - , exampleSlotNo = coerce $ viaInject @(K SlotNo) (coerce exampleSlotNo) - , exampleLedgerConfig = exampleLedgerConfigCardano - , exampleLedgerTables = exampleLedgerTablesCardano - } - where - viaInject :: - forall f. Inject f - => (forall blk. Examples blk -> Labelled (f blk)) - -> Labelled (f (CardanoBlock Crypto)) - viaInject getExamples = - mconcat - $ hcollapse - $ himap (\ix -> K . inj ix . getExamples) perEraExamplesPrefixed - where - inj :: forall blk. 
Index (CardanoEras Crypto) blk -> Labelled (f blk) -> Labelled (f (CardanoBlock Crypto)) - inj idx = fmap (fmap (inject $ oracularInjectionIndex exampleStartBounds idx)) - - perEraExamplesPrefixed :: NP Examples (CardanoEras Crypto) - perEraExamplesPrefixed = hcmap proxySingle prefixWithEraName perEraExamples - where - prefixWithEraName es = prefixExamples (T.unpack eraName) es - where - eraName = singleEraName $ singleEraInfo es - - exampleLedgerTablesCardano :: - Labelled (LedgerTables (LedgerState (HardForkBlock (CardanoEras Crypto))) ValuesMK) - exampleLedgerTablesCardano = - mconcat - $ hcollapse - $ himap (\ix -> K . map (second (injectLedgerTables ix)) . exampleLedgerTables) perEraExamplesPrefixed - - exampleLedgerConfigCardano :: - Labelled (HardForkLedgerConfig (CardanoEras Crypto)) - exampleLedgerConfigCardano = [ - ( Nothing - , HardForkLedgerConfig - cardanoShape - (PerEraLedgerConfig ( - WrapPartialLedgerConfig (ByronPartialLedgerConfig lcByron (TriggerHardForkAtEpoch shelleyTransitionEpoch)) - :* WrapPartialLedgerConfig (ShelleyPartialLedgerConfig lcShelley (TriggerHardForkAtEpoch (History.boundEpoch allegraStartBound))) - :* WrapPartialLedgerConfig (ShelleyPartialLedgerConfig lcAllegra (TriggerHardForkAtEpoch (History.boundEpoch maryStartBound))) - :* WrapPartialLedgerConfig (ShelleyPartialLedgerConfig lcMary (TriggerHardForkAtEpoch (History.boundEpoch alonzoStartBound))) - :* WrapPartialLedgerConfig (ShelleyPartialLedgerConfig lcAlonzo (TriggerHardForkAtEpoch (History.boundEpoch babbageStartBound))) - :* WrapPartialLedgerConfig (ShelleyPartialLedgerConfig lcBabbage (TriggerHardForkAtEpoch (History.boundEpoch conwayStartBound))) - :* WrapPartialLedgerConfig (ShelleyPartialLedgerConfig lcConway TriggerHardForkNotDuringThisExecution) - :* Nil)) - ) - | WrapLedgerConfig lcByron <- labelledLcByron - , WrapLedgerConfig lcShelley <- labelledLcShelley - , WrapLedgerConfig lcAllegra <- labelledLcAllegra - , WrapLedgerConfig lcMary <- labelledLcMary - , 
WrapLedgerConfig lcAlonzo <- labelledLcAlonzo - , WrapLedgerConfig lcBabbage <- labelledLcBabbage - , WrapLedgerConfig lcConway <- labelledLcConway - ] - where - ( Comp labelledLcByron - :* Comp labelledLcShelley - :* Comp labelledLcAllegra - :* Comp labelledLcMary - :* Comp labelledLcAlonzo - :* Comp labelledLcBabbage - :* Comp labelledLcConway - :* Nil - ) = hmap (Comp . fmap (WrapLedgerConfig . snd) . exampleLedgerConfig) perEraExamples + NP Examples (CardanoEras Crypto) -> + Examples (CardanoBlock Crypto) +combineEras perEraExamples = + Examples + { exampleBlock = coerce $ viaInject @I (coerce exampleBlock) + , exampleSerialisedBlock = viaInject exampleSerialisedBlock + , exampleHeader = viaInject exampleHeader + , exampleSerialisedHeader = viaInject exampleSerialisedHeader + , exampleHeaderHash = coerce $ viaInject @WrapHeaderHash (coerce exampleHeaderHash) + , exampleGenTx = viaInject exampleGenTx + , exampleGenTxId = coerce $ viaInject @WrapGenTxId (coerce exampleGenTxId) + , exampleApplyTxErr = coerce $ viaInject @WrapApplyTxErr (coerce exampleApplyTxErr) + , exampleQuery = fmap (second unComp) $ viaInject (fmap (second Comp) . exampleQuery) + , exampleResult = viaInject exampleResult + , exampleAnnTip = viaInject exampleAnnTip + , exampleLedgerState = + fmap (second unFlip) $ viaInject (fmap (second Flip) . exampleLedgerState) + , exampleChainDepState = coerce $ viaInject @WrapChainDepState (coerce exampleChainDepState) + , exampleExtLedgerState = + fmap (second unFlip) $ viaInject (fmap (second Flip) . exampleExtLedgerState) + , exampleSlotNo = coerce $ viaInject @(K SlotNo) (coerce exampleSlotNo) + , exampleLedgerConfig = exampleLedgerConfigCardano + , exampleLedgerTables = exampleLedgerTablesCardano + } + where + viaInject :: + forall f. + Inject f => + (forall blk. Examples blk -> Labelled (f blk)) -> + Labelled (f (CardanoBlock Crypto)) + viaInject getExamples = + mconcat $ + hcollapse $ + himap (\ix -> K . inj ix . 
getExamples) perEraExamplesPrefixed + where + inj :: + forall blk. Index (CardanoEras Crypto) blk -> Labelled (f blk) -> Labelled (f (CardanoBlock Crypto)) + inj idx = fmap (fmap (inject $ oracularInjectionIndex exampleStartBounds idx)) + + perEraExamplesPrefixed :: NP Examples (CardanoEras Crypto) + perEraExamplesPrefixed = hcmap proxySingle prefixWithEraName perEraExamples + where + prefixWithEraName es = prefixExamples (T.unpack eraName) es + where + eraName = singleEraName $ singleEraInfo es + + exampleLedgerTablesCardano :: + Labelled (LedgerTables (LedgerState (HardForkBlock (CardanoEras Crypto))) ValuesMK) + exampleLedgerTablesCardano = + mconcat $ + hcollapse $ + himap (\ix -> K . map (second (injectLedgerTables ix)) . exampleLedgerTables) perEraExamplesPrefixed + + exampleLedgerConfigCardano :: + Labelled (HardForkLedgerConfig (CardanoEras Crypto)) + exampleLedgerConfigCardano = + [ ( Nothing + , HardForkLedgerConfig + cardanoShape + ( PerEraLedgerConfig + ( WrapPartialLedgerConfig + (ByronPartialLedgerConfig lcByron (TriggerHardForkAtEpoch shelleyTransitionEpoch)) + :* WrapPartialLedgerConfig + (ShelleyPartialLedgerConfig lcShelley (TriggerHardForkAtEpoch (History.boundEpoch allegraStartBound))) + :* WrapPartialLedgerConfig + (ShelleyPartialLedgerConfig lcAllegra (TriggerHardForkAtEpoch (History.boundEpoch maryStartBound))) + :* WrapPartialLedgerConfig + (ShelleyPartialLedgerConfig lcMary (TriggerHardForkAtEpoch (History.boundEpoch alonzoStartBound))) + :* WrapPartialLedgerConfig + (ShelleyPartialLedgerConfig lcAlonzo (TriggerHardForkAtEpoch (History.boundEpoch babbageStartBound))) + :* WrapPartialLedgerConfig + (ShelleyPartialLedgerConfig lcBabbage (TriggerHardForkAtEpoch (History.boundEpoch conwayStartBound))) + :* WrapPartialLedgerConfig (ShelleyPartialLedgerConfig lcConway TriggerHardForkNotDuringThisExecution) + :* Nil + ) + ) + ) + | WrapLedgerConfig lcByron <- labelledLcByron + , WrapLedgerConfig lcShelley <- labelledLcShelley + , WrapLedgerConfig 
lcAllegra <- labelledLcAllegra + , WrapLedgerConfig lcMary <- labelledLcMary + , WrapLedgerConfig lcAlonzo <- labelledLcAlonzo + , WrapLedgerConfig lcBabbage <- labelledLcBabbage + , WrapLedgerConfig lcConway <- labelledLcConway + ] + where + ( Comp labelledLcByron + :* Comp labelledLcShelley + :* Comp labelledLcAllegra + :* Comp labelledLcMary + :* Comp labelledLcAlonzo + :* Comp labelledLcBabbage + :* Comp labelledLcConway + :* Nil + ) = hmap (Comp . fmap (WrapLedgerConfig . snd) . exampleLedgerConfig) perEraExamples {------------------------------------------------------------------------------- Inject instances @@ -179,48 +199,51 @@ instance Inject Serialised where instance Inject SomeResult where inject iidx (SomeResult q r) = - SomeResult - (QueryIfCurrent (injectQuery (forgetInjectionIndex iidx) q)) - (Right r) + SomeResult + (QueryIfCurrent (injectQuery (forgetInjectionIndex iidx) q)) + (Right r) instance Inject Examples where - inject (iidx :: InjectionIndex xs x) Examples {..} = Examples { - exampleBlock = inj (Proxy @I) exampleBlock - , exampleSerialisedBlock = inj (Proxy @Serialised) exampleSerialisedBlock - , exampleHeader = inj (Proxy @Header) exampleHeader - , exampleSerialisedHeader = inj (Proxy @SerialisedHeader) exampleSerialisedHeader - , exampleHeaderHash = inj (Proxy @WrapHeaderHash) exampleHeaderHash - , exampleGenTx = inj (Proxy @GenTx) exampleGenTx - , exampleGenTxId = inj (Proxy @WrapGenTxId) exampleGenTxId - , exampleApplyTxErr = inj (Proxy @WrapApplyTxErr) exampleApplyTxErr - , exampleQuery = inj (Proxy @(SomeBlockQuery :.: BlockQuery)) exampleQuery - , exampleResult = inj (Proxy @SomeResult) exampleResult - , exampleAnnTip = inj (Proxy @AnnTip) exampleAnnTip - , exampleLedgerState = inj (Proxy @(Flip LedgerState EmptyMK)) exampleLedgerState - , exampleChainDepState = inj (Proxy @WrapChainDepState) exampleChainDepState - , exampleExtLedgerState = inj (Proxy @(Flip ExtLedgerState EmptyMK)) exampleExtLedgerState - , exampleSlotNo = 
exampleSlotNo - , exampleLedgerTables = inj (Proxy @WrapLedgerTables) exampleLedgerTables - -- We cannot create a HF Ledger Config out of just one of the eras - , exampleLedgerConfig = mempty + inject (iidx :: InjectionIndex xs x) Examples{..} = + Examples + { exampleBlock = inj (Proxy @I) exampleBlock + , exampleSerialisedBlock = inj (Proxy @Serialised) exampleSerialisedBlock + , exampleHeader = inj (Proxy @Header) exampleHeader + , exampleSerialisedHeader = inj (Proxy @SerialisedHeader) exampleSerialisedHeader + , exampleHeaderHash = inj (Proxy @WrapHeaderHash) exampleHeaderHash + , exampleGenTx = inj (Proxy @GenTx) exampleGenTx + , exampleGenTxId = inj (Proxy @WrapGenTxId) exampleGenTxId + , exampleApplyTxErr = inj (Proxy @WrapApplyTxErr) exampleApplyTxErr + , exampleQuery = inj (Proxy @(SomeBlockQuery :.: BlockQuery)) exampleQuery + , exampleResult = inj (Proxy @SomeResult) exampleResult + , exampleAnnTip = inj (Proxy @AnnTip) exampleAnnTip + , exampleLedgerState = inj (Proxy @(Flip LedgerState EmptyMK)) exampleLedgerState + , exampleChainDepState = inj (Proxy @WrapChainDepState) exampleChainDepState + , exampleExtLedgerState = inj (Proxy @(Flip ExtLedgerState EmptyMK)) exampleExtLedgerState + , exampleSlotNo = exampleSlotNo + , exampleLedgerTables = inj (Proxy @WrapLedgerTables) exampleLedgerTables + , -- We cannot create a HF Ledger Config out of just one of the eras + exampleLedgerConfig = mempty } - where - inj :: - forall f a b. - ( Inject f - , Coercible a (f x) - , Coercible b (f (HardForkBlock xs)) - ) - => Proxy f -> Labelled a -> Labelled b - inj p = map (fmap (inject' p iidx)) + where + inj :: + forall f a b. + ( Inject f + , Coercible a (f x) + , Coercible b (f (HardForkBlock xs)) + ) => + Proxy f -> Labelled a -> Labelled b + inj p = map (fmap (inject' p iidx)) -- | This wrapper is used only in the 'Example' instance of 'Inject' so that we -- can use a type that matches the kind expected by 'inj'. 
-newtype WrapLedgerTables blk = WrapLedgerTables ( LedgerTables (ExtLedgerState blk) ValuesMK ) +newtype WrapLedgerTables blk = WrapLedgerTables (LedgerTables (ExtLedgerState blk) ValuesMK) instance Inject WrapLedgerTables where inject idx (WrapLedgerTables lt) = - WrapLedgerTables $ castLedgerTables $ injectLedgerTables (forgetInjectionIndex idx) (castLedgerTables lt) + WrapLedgerTables $ + castLedgerTables $ + injectLedgerTables (forgetInjectionIndex idx) (castLedgerTables lt) {------------------------------------------------------------------------------- Setup @@ -256,78 +279,81 @@ byronStartBound = History.initBound shelleyStartBound :: History.Bound shelleyStartBound = - History.mkUpperBound - byronEraParams - byronStartBound - shelleyTransitionEpoch + History.mkUpperBound + byronEraParams + byronStartBound + shelleyTransitionEpoch allegraStartBound :: History.Bound allegraStartBound = - History.mkUpperBound - shelleyEraParams - shelleyStartBound - 20 + History.mkUpperBound + shelleyEraParams + shelleyStartBound + 20 maryStartBound :: History.Bound maryStartBound = - History.mkUpperBound - allegraEraParams - allegraStartBound - 30 + History.mkUpperBound + allegraEraParams + allegraStartBound + 30 alonzoStartBound :: History.Bound alonzoStartBound = - History.mkUpperBound - maryEraParams - maryStartBound - 40 + History.mkUpperBound + maryEraParams + maryStartBound + 40 babbageStartBound :: History.Bound babbageStartBound = - History.mkUpperBound - alonzoEraParams - alonzoStartBound - 50 + History.mkUpperBound + alonzoEraParams + alonzoStartBound + 50 conwayStartBound :: History.Bound conwayStartBound = - History.mkUpperBound - alonzoEraParams - alonzoStartBound - 60 + History.mkUpperBound + alonzoEraParams + alonzoStartBound + 60 exampleStartBounds :: Exactly (CardanoEras Crypto) History.Bound -exampleStartBounds = Exactly $ - ( K byronStartBound - :* K shelleyStartBound - :* K allegraStartBound - :* K maryStartBound - :* K alonzoStartBound - :* K 
babbageStartBound - :* K conwayStartBound - :* Nil +exampleStartBounds = + Exactly $ + ( K byronStartBound + :* K shelleyStartBound + :* K allegraStartBound + :* K maryStartBound + :* K alonzoStartBound + :* K babbageStartBound + :* K conwayStartBound + :* Nil ) cardanoShape :: History.Shape (CardanoEras Crypto) -cardanoShape = History.Shape $ Exactly $ - ( K byronEraParams - :* K shelleyEraParams - :* K allegraEraParams - :* K maryEraParams - :* K alonzoEraParams - :* K babbageEraParams - :* K conwayEraParams - :* Nil - ) +cardanoShape = + History.Shape $ + Exactly $ + ( K byronEraParams + :* K shelleyEraParams + :* K allegraEraParams + :* K maryEraParams + :* K alonzoEraParams + :* K babbageEraParams + :* K conwayEraParams + :* Nil + ) summary :: History.Summary (CardanoEras Crypto) summary = - State.reconstructSummary - cardanoShape - (State.TransitionKnown shelleyTransitionEpoch) - (hardForkLedgerStatePerEra (ledgerStateByron byronLedger)) - where - (_, byronLedger) = head $ exampleLedgerState Byron.examples + State.reconstructSummary + cardanoShape + (State.TransitionKnown shelleyTransitionEpoch) + (hardForkLedgerStatePerEra (ledgerStateByron byronLedger)) + where + (_, byronLedger) = head $ exampleLedgerState Byron.examples eraInfoByron :: SingleEraInfo ByronBlock eraInfoByron = singleEraInfo (Proxy @ByronBlock) @@ -337,25 +363,26 @@ eraInfoShelley = singleEraInfo (Proxy @(ShelleyBlock (TPraos StandardCrypto) She codecConfig :: CardanoCodecConfig Crypto codecConfig = - CardanoCodecConfig - Byron.codecConfig - Shelley.ShelleyCodecConfig - Shelley.ShelleyCodecConfig - Shelley.ShelleyCodecConfig - Shelley.ShelleyCodecConfig - Shelley.ShelleyCodecConfig - Shelley.ShelleyCodecConfig + CardanoCodecConfig + Byron.codecConfig + Shelley.ShelleyCodecConfig + Shelley.ShelleyCodecConfig + Shelley.ShelleyCodecConfig + Shelley.ShelleyCodecConfig + Shelley.ShelleyCodecConfig + Shelley.ShelleyCodecConfig ledgerStateByron :: - LedgerState ByronBlock mk - -> LedgerState 
(CardanoBlock Crypto) mk + LedgerState ByronBlock mk -> + LedgerState (CardanoBlock Crypto) mk ledgerStateByron stByron = - HardForkLedgerState $ HardForkState $ TZ cur - where - cur = State.Current { - currentStart = History.initBound - , currentState = Flip stByron - } + HardForkLedgerState $ HardForkState $ TZ cur + where + cur = + State.Current + { currentStart = History.initBound + , currentState = Flip stByron + } {------------------------------------------------------------------------------- Examples @@ -363,23 +390,27 @@ ledgerStateByron stByron = -- | Multi-era examples, e.g., applying a transaction to the wrong era. multiEraExamples :: Examples (CardanoBlock Crypto) -multiEraExamples = mempty { - exampleApplyTxErr = labelled [ - ("WrongEraByron", exampleApplyTxErrWrongEraByron) - , ("WrongEraShelley", exampleApplyTxErrWrongEraShelley) - ] - , exampleQuery = labelled [ - ("AnytimeByron", exampleQueryAnytimeByron) - , ("AnytimeShelley", exampleQueryAnytimeShelley) - , ("HardFork", exampleQueryHardFork) - ] - , exampleResult = labelled [ - ("EraMismatchByron", exampleResultEraMismatchByron) - , ("EraMismatchShelley", exampleResultEraMismatchShelley) - , ("AnytimeByron", exampleResultAnytimeByron) - , ("AnytimeShelley", exampleResultAnytimeShelley) - , ("HardFork", exampleResultHardFork) - ] +multiEraExamples = + mempty + { exampleApplyTxErr = + labelled + [ ("WrongEraByron", exampleApplyTxErrWrongEraByron) + , ("WrongEraShelley", exampleApplyTxErrWrongEraShelley) + ] + , exampleQuery = + labelled + [ ("AnytimeByron", exampleQueryAnytimeByron) + , ("AnytimeShelley", exampleQueryAnytimeShelley) + , ("HardFork", exampleQueryHardFork) + ] + , exampleResult = + labelled + [ ("EraMismatchByron", exampleResultEraMismatchByron) + , ("EraMismatchShelley", exampleResultEraMismatchShelley) + , ("AnytimeByron", exampleResultAnytimeByron) + , ("AnytimeShelley", exampleResultAnytimeShelley) + , ("HardFork", exampleResultHardFork) + ] } -- | The examples: the examples 
from each individual era lifted in to @@ -390,61 +421,61 @@ examples = combineEras eraExamples <> multiEraExamples -- | Applying a Shelley thing to a Byron ledger exampleEraMismatchByron :: MismatchEraInfo (CardanoEras Crypto) exampleEraMismatchByron = - MismatchEraInfo $ MR (Z eraInfoShelley) (LedgerEraInfo eraInfoByron) + MismatchEraInfo $ MR (Z eraInfoShelley) (LedgerEraInfo eraInfoByron) -- | Applying a Byron thing to a Shelley ledger exampleEraMismatchShelley :: MismatchEraInfo (CardanoEras Crypto) exampleEraMismatchShelley = - MismatchEraInfo $ ML eraInfoByron (Z (LedgerEraInfo eraInfoShelley)) + MismatchEraInfo $ ML eraInfoByron (Z (LedgerEraInfo eraInfoShelley)) exampleApplyTxErrWrongEraByron :: ApplyTxErr (CardanoBlock Crypto) exampleApplyTxErrWrongEraByron = - HardForkApplyTxErrWrongEra exampleEraMismatchByron + HardForkApplyTxErrWrongEra exampleEraMismatchByron exampleApplyTxErrWrongEraShelley :: ApplyTxErr (CardanoBlock Crypto) exampleApplyTxErrWrongEraShelley = - HardForkApplyTxErrWrongEra exampleEraMismatchShelley + HardForkApplyTxErrWrongEra exampleEraMismatchShelley exampleQueryEraMismatchByron :: SomeBlockQuery (BlockQuery (CardanoBlock Crypto)) exampleQueryEraMismatchByron = - SomeBlockQuery (QueryIfCurrentShelley Shelley.GetLedgerTip) + SomeBlockQuery (QueryIfCurrentShelley Shelley.GetLedgerTip) exampleQueryEraMismatchShelley :: SomeBlockQuery (BlockQuery (CardanoBlock Crypto)) exampleQueryEraMismatchShelley = - SomeBlockQuery (QueryIfCurrentByron Byron.GetUpdateInterfaceState) + SomeBlockQuery (QueryIfCurrentByron Byron.GetUpdateInterfaceState) exampleQueryAnytimeByron :: SomeBlockQuery (BlockQuery (CardanoBlock Crypto)) exampleQueryAnytimeByron = - SomeBlockQuery (QueryAnytimeByron GetEraStart) + SomeBlockQuery (QueryAnytimeByron GetEraStart) exampleQueryAnytimeShelley :: SomeBlockQuery (BlockQuery (CardanoBlock Crypto)) exampleQueryAnytimeShelley = - SomeBlockQuery (QueryAnytimeShelley GetEraStart) + SomeBlockQuery (QueryAnytimeShelley 
GetEraStart) exampleQueryHardFork :: SomeBlockQuery (BlockQuery (CardanoBlock Crypto)) exampleQueryHardFork = - SomeBlockQuery (QueryHardFork GetInterpreter) + SomeBlockQuery (QueryHardFork GetInterpreter) exampleResultEraMismatchByron :: SomeResult (CardanoBlock Crypto) exampleResultEraMismatchByron = - SomeResult - (QueryIfCurrentShelley Shelley.GetLedgerTip) - (Left exampleEraMismatchByron) + SomeResult + (QueryIfCurrentShelley Shelley.GetLedgerTip) + (Left exampleEraMismatchByron) exampleResultEraMismatchShelley :: SomeResult (CardanoBlock Crypto) exampleResultEraMismatchShelley = - SomeResult - (QueryIfCurrentByron Byron.GetUpdateInterfaceState) - (Left exampleEraMismatchShelley) + SomeResult + (QueryIfCurrentByron Byron.GetUpdateInterfaceState) + (Left exampleEraMismatchShelley) exampleResultAnytimeByron :: SomeResult (CardanoBlock Crypto) exampleResultAnytimeByron = - SomeResult (QueryAnytimeByron GetEraStart) (Just byronStartBound) + SomeResult (QueryAnytimeByron GetEraStart) (Just byronStartBound) exampleResultAnytimeShelley :: SomeResult (CardanoBlock Crypto) exampleResultAnytimeShelley = - SomeResult (QueryAnytimeShelley GetEraStart) (Just shelleyStartBound) + SomeResult (QueryAnytimeShelley GetEraStart) (Just shelleyStartBound) exampleResultHardFork :: SomeResult (CardanoBlock Crypto) exampleResultHardFork = - SomeResult (QueryHardFork GetInterpreter) (History.mkInterpreter summary) + SomeResult (QueryHardFork GetInterpreter) (History.mkInterpreter summary) diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/Generators.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/Generators.hs index f5726d67fb..844b7f41d2 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/Generators.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/Generators.hs @@ -10,7 +10,6 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE 
TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | 'Arbitrary' instances intended for serialisation roundtrip tests for @@ -23,47 +22,49 @@ -- Cardano instances by picking randomly from one of the eras. module Test.Consensus.Cardano.Generators () where -import qualified Data.Map.Strict as Map -import Data.Maybe (catMaybes, fromMaybe) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Counting (Exactly (..)) -import Data.SOP.Index -import Data.SOP.NonEmpty -import Data.SOP.Sing -import Data.SOP.Strict -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.ByronHFC -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.CanHardFork -import Ouroboros.Consensus.Cardano.Node () -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Serialisation -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Serialisation (Some (..)) -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Ledger.Block () -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.TypeFamilyWrappers -import Test.Cardano.Ledger.Alonzo.Arbitrary () -import Test.Cardano.Ledger.Alonzo.Serialisation.Generators () -import Test.Cardano.Ledger.Conway.Arbitrary () -import Test.Consensus.Byron.Generators () -import Test.Consensus.Cardano.MockCrypto -import Test.Consensus.Protocol.Serialisation.Generators () -import Test.Consensus.Shelley.Generators -import Test.Consensus.Shelley.MockCrypto (CanMock) -import Test.QuickCheck -import Test.Util.Orphans.Arbitrary () -import 
Test.Util.Serialisation.Roundtrip (Coherent (..), - WithVersion (..)) +import Data.Map.Strict qualified as Map +import Data.Maybe (catMaybes, fromMaybe) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Counting (Exactly (..)) +import Data.SOP.Index +import Data.SOP.NonEmpty +import Data.SOP.Sing +import Data.SOP.Strict +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.ByronHFC +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.CanHardFork +import Ouroboros.Consensus.Cardano.Node () +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.Serialisation +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Serialisation (Some (..)) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Ledger.Block () +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.TypeFamilyWrappers +import Test.Cardano.Ledger.Alonzo.Arbitrary () +import Test.Cardano.Ledger.Alonzo.Serialisation.Generators () +import Test.Cardano.Ledger.Conway.Arbitrary () +import Test.Consensus.Byron.Generators () +import Test.Consensus.Cardano.MockCrypto +import Test.Consensus.Protocol.Serialisation.Generators () +import Test.Consensus.Shelley.Generators +import Test.Consensus.Shelley.MockCrypto (CanMock) +import Test.QuickCheck +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Roundtrip + ( Coherent (..) + , WithVersion (..) 
+ ) {------------------------------------------------------------------------------- Disk @@ -71,70 +72,78 @@ import Test.Util.Serialisation.Roundtrip (Coherent (..), instance Arbitrary (CardanoBlock MockCryptoCompatByron) where arbitrary = - oneof $ catMaybes $ hcollapse generators - where - generators :: - NP - (K (Maybe (Gen (CardanoBlock MockCryptoCompatByron)))) - (CardanoEras MockCryptoCompatByron) - generators = - mk BlockByron - :* mk BlockShelley - :* mk BlockAllegra - :* mk BlockMary - :* mk BlockAlonzo - :* mk BlockBabbage - :* mk BlockConway - :* Nil - - mk :: - forall a x. Arbitrary a - => (a -> CardanoBlock MockCryptoCompatByron) - -> K (Maybe (Gen (CardanoBlock MockCryptoCompatByron))) x - mk f = K $ Just $ f <$> arbitrary + oneof $ catMaybes $ hcollapse generators + where + generators :: + NP + (K (Maybe (Gen (CardanoBlock MockCryptoCompatByron)))) + (CardanoEras MockCryptoCompatByron) + generators = + mk BlockByron + :* mk BlockShelley + :* mk BlockAllegra + :* mk BlockMary + :* mk BlockAlonzo + :* mk BlockBabbage + :* mk BlockConway + :* Nil + + mk :: + forall a x. + Arbitrary a => + (a -> CardanoBlock MockCryptoCompatByron) -> + K (Maybe (Gen (CardanoBlock MockCryptoCompatByron))) x + mk f = K $ Just $ f <$> arbitrary instance Arbitrary (Coherent (CardanoBlock MockCryptoCompatByron)) where arbitrary = - fmap Coherent $ oneof $ catMaybes $ hcollapse generators - where - generators :: - NP - (K (Maybe (Gen (CardanoBlock MockCryptoCompatByron)))) - (CardanoEras MockCryptoCompatByron) - generators = - mk BlockByron - :* mk BlockShelley - :* mk BlockAllegra - :* mk BlockMary - :* mk BlockAlonzo - :* mk BlockBabbage - :* mk BlockConway - :* Nil - - mk :: - forall a x. Arbitrary (Coherent a) - => (a -> CardanoBlock MockCryptoCompatByron) - -> K (Maybe (Gen (CardanoBlock MockCryptoCompatByron))) x - mk f = K $ Just $ f . 
getCoherent <$> arbitrary + fmap Coherent $ oneof $ catMaybes $ hcollapse generators + where + generators :: + NP + (K (Maybe (Gen (CardanoBlock MockCryptoCompatByron)))) + (CardanoEras MockCryptoCompatByron) + generators = + mk BlockByron + :* mk BlockShelley + :* mk BlockAllegra + :* mk BlockMary + :* mk BlockAlonzo + :* mk BlockBabbage + :* mk BlockConway + :* Nil + + mk :: + forall a x. + Arbitrary (Coherent a) => + (a -> CardanoBlock MockCryptoCompatByron) -> + K (Maybe (Gen (CardanoBlock MockCryptoCompatByron))) x + mk f = K $ Just $ f . getCoherent <$> arbitrary instance Arbitrary (CardanoHeader MockCryptoCompatByron) where arbitrary = getHeader <$> arbitrary -instance (CanMock (TPraos c) ShelleyEra, CardanoHardForkConstraints c) - => Arbitrary (OneEraHash (CardanoEras c)) where +instance + (CanMock (TPraos c) ShelleyEra, CardanoHardForkConstraints c) => + Arbitrary (OneEraHash (CardanoEras c)) + where arbitrary = inj <$> arbitrary - where - inj :: NS WrapHeaderHash (CardanoEras c) -> OneEraHash (CardanoEras c) - inj = hcollapse . hcmap proxySingle aux - - aux :: - forall blk. SingleEraBlock blk - => WrapHeaderHash blk -> K (OneEraHash (CardanoEras c)) blk - aux = K . OneEraHash . toShortRawHash (Proxy @blk) . unwrapHeaderHash - -instance (c ~ MockCryptoCompatByron, ShelleyBasedEra ShelleyEra) - => Arbitrary (AnnTip (CardanoBlock c)) where - arbitrary = AnnTip + where + inj :: NS WrapHeaderHash (CardanoEras c) -> OneEraHash (CardanoEras c) + inj = hcollapse . hcmap proxySingle aux + + aux :: + forall blk. + SingleEraBlock blk => + WrapHeaderHash blk -> K (OneEraHash (CardanoEras c)) blk + aux = K . OneEraHash . toShortRawHash (Proxy @blk) . 
unwrapHeaderHash + +instance + (c ~ MockCryptoCompatByron, ShelleyBasedEra ShelleyEra) => + Arbitrary (AnnTip (CardanoBlock c)) + where + arbitrary = + AnnTip <$> (SlotNo <$> arbitrary) <*> arbitrary <*> (OneEraTipInfo <$> arbitrary) @@ -143,150 +152,315 @@ instance (c ~ MockCryptoCompatByron, ShelleyBasedEra ShelleyEra) NodeToNode -------------------------------------------------------------------------------} -instance CardanoHardForkConstraints c - => Arbitrary (HardForkNodeToNodeVersion (CardanoEras c)) where +instance + CardanoHardForkConstraints c => + Arbitrary (HardForkNodeToNodeVersion (CardanoEras c)) + where arbitrary = elements $ Map.elems $ supportedNodeToNodeVersions (Proxy @(CardanoBlock c)) -deriving instance Arbitrary (BlockNodeToNodeVersion blk) - => Arbitrary (WrapNodeToNodeVersion blk) +deriving instance + Arbitrary (BlockNodeToNodeVersion blk) => + Arbitrary (WrapNodeToNodeVersion blk) arbitraryNodeToNode :: - ( Arbitrary (WithVersion ByronNodeToNodeVersion byron) - , Arbitrary (WithVersion ShelleyNodeToNodeVersion shelley) - , Arbitrary (WithVersion ShelleyNodeToNodeVersion allegra) - , Arbitrary (WithVersion ShelleyNodeToNodeVersion mary) - , Arbitrary (WithVersion ShelleyNodeToNodeVersion alonzo) - , Arbitrary (WithVersion ShelleyNodeToNodeVersion babbage) - , Arbitrary (WithVersion ShelleyNodeToNodeVersion conway) - ) - => (byron -> cardano) - -> (shelley -> cardano) - -> (allegra -> cardano) - -> (mary -> cardano) - -> (alonzo -> cardano) - -> (babbage -> cardano) - -> (conway -> cardano) - -> Gen (WithVersion (HardForkNodeToNodeVersion (CardanoEras c)) cardano) -arbitraryNodeToNode injByron injShelley injAllegra injMary injAlonzo injBabbage injConway = oneof - [ - -- Byron before HFC - (\(WithVersion versionByron b) -> + ( Arbitrary (WithVersion ByronNodeToNodeVersion byron) + , Arbitrary (WithVersion ShelleyNodeToNodeVersion shelley) + , Arbitrary (WithVersion ShelleyNodeToNodeVersion allegra) + , Arbitrary (WithVersion 
ShelleyNodeToNodeVersion mary) + , Arbitrary (WithVersion ShelleyNodeToNodeVersion alonzo) + , Arbitrary (WithVersion ShelleyNodeToNodeVersion babbage) + , Arbitrary (WithVersion ShelleyNodeToNodeVersion conway) + ) => + (byron -> cardano) -> + (shelley -> cardano) -> + (allegra -> cardano) -> + (mary -> cardano) -> + (alonzo -> cardano) -> + (babbage -> cardano) -> + (conway -> cardano) -> + Gen (WithVersion (HardForkNodeToNodeVersion (CardanoEras c)) cardano) +arbitraryNodeToNode injByron injShelley injAllegra injMary injAlonzo injBabbage injConway = + oneof + [ -- Byron before HFC + ( \(WithVersion versionByron b) -> WithVersion (HardForkNodeToNodeDisabled versionByron) - (injByron b)) + (injByron b) + ) <$> arbitrary - -- Note that any value generated by the V1 Byron generator is also fine - -- when using Byron V2. - , (\ (WithVersion versionByron x) versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway -> - distrib versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway + , -- Note that any value generated by the V1 Byron generator is also fine + -- when using Byron V2. 
+ ( \(WithVersion versionByron x) versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway -> + distrib + versionByron + versionShelley + versionAllegra + versionMary + versionAlonzo + versionBabbage + versionConway (injByron x) ) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - , (\ versionByron (WithVersion versionShelley x) versionAllegra versionMary versionAlonzo versionBabbage versionConway -> - distrib versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , ( \versionByron (WithVersion versionShelley x) versionAllegra versionMary versionAlonzo versionBabbage versionConway -> + distrib + versionByron + versionShelley + versionAllegra + versionMary + versionAlonzo + versionBabbage + versionConway (injShelley x) ) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - , (\ versionByron versionShelley (WithVersion versionAllegra x) versionMary versionAlonzo versionBabbage versionConway -> - distrib versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , ( \versionByron versionShelley (WithVersion versionAllegra x) versionMary versionAlonzo versionBabbage versionConway -> + distrib + versionByron + versionShelley + versionAllegra + versionMary + versionAlonzo + versionBabbage + versionConway (injAllegra x) ) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - , (\ versionByron versionShelley versionAllegra (WithVersion versionMary x) versionAlonzo versionBabbage versionConway -> - distrib versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage 
versionConway + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , ( \versionByron versionShelley versionAllegra (WithVersion versionMary x) versionAlonzo versionBabbage versionConway -> + distrib + versionByron + versionShelley + versionAllegra + versionMary + versionAlonzo + versionBabbage + versionConway (injMary x) ) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - , (\ versionByron versionShelley versionAllegra versionMary (WithVersion versionAlonzo x) versionBabbage versionConway -> - distrib versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , ( \versionByron versionShelley versionAllegra versionMary (WithVersion versionAlonzo x) versionBabbage versionConway -> + distrib + versionByron + versionShelley + versionAllegra + versionMary + versionAlonzo + versionBabbage + versionConway (injAlonzo x) ) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - , (\ versionByron versionShelley versionAllegra versionMary versionAlonzo (WithVersion versionBabbage x) versionConway -> - distrib versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , ( \versionByron versionShelley versionAllegra versionMary versionAlonzo (WithVersion versionBabbage x) versionConway -> + distrib + versionByron + versionShelley + versionAllegra + versionMary + versionAlonzo + versionBabbage + versionConway (injBabbage x) ) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - , (\ versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage 
(WithVersion versionConway x) -> - distrib versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , ( \versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage (WithVersion versionConway x) -> + distrib + versionByron + versionShelley + versionAllegra + versionMary + versionAlonzo + versionBabbage + versionConway (injConway x) ) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary ] + where + distrib + versionByron + versionShelley + versionAllegra + versionMary + versionAlonzo + versionBabbage + versionConway + x = + WithVersion + ( HardForkNodeToNodeEnabled + maxBound + ( WrapNodeToNodeVersion versionByron + :* WrapNodeToNodeVersion versionShelley + :* WrapNodeToNodeVersion versionAllegra + :* WrapNodeToNodeVersion versionMary + :* WrapNodeToNodeVersion versionAlonzo + :* WrapNodeToNodeVersion versionBabbage + :* WrapNodeToNodeVersion versionConway + :* Nil + ) + ) + x + +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToNodeVersion (CardanoEras c)) + (SomeSecond (NestedCtxt Header) (CardanoBlock c)) + ) where - distrib versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway - x = - WithVersion - (HardForkNodeToNodeEnabled - maxBound - ( WrapNodeToNodeVersion versionByron - :* WrapNodeToNodeVersion versionShelley - :* WrapNodeToNodeVersion versionAllegra - :* WrapNodeToNodeVersion versionMary - :* WrapNodeToNodeVersion versionAlonzo - :* WrapNodeToNodeVersion versionBabbage - :* WrapNodeToNodeVersion versionConway - :* Nil - )) - x - - -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToNodeVersion (CardanoEras c)) - 
(SomeSecond (NestedCtxt Header) (CardanoBlock c))) where arbitrary = arbitraryNodeToNode injByron injShelley injAllegra injMary injAlonzo injBabbage injConway - where - injByron = mapSomeNestedCtxt NCZ - injShelley = mapSomeNestedCtxt (NCS . NCZ) - injAllegra = mapSomeNestedCtxt (NCS . NCS . NCZ) - injMary = mapSomeNestedCtxt (NCS . NCS . NCS . NCZ) - injAlonzo = mapSomeNestedCtxt (NCS . NCS . NCS . NCS . NCZ) - injBabbage = mapSomeNestedCtxt (NCS . NCS . NCS . NCS . NCS . NCZ) - injConway = mapSomeNestedCtxt (NCS . NCS . NCS . NCS . NCS . NCS . NCZ) - -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToNodeVersion (CardanoEras c)) - (CardanoBlock c)) where - arbitrary = arbitraryNodeToNode BlockByron BlockShelley BlockAllegra BlockMary BlockAlonzo BlockBabbage BlockConway - -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToNodeVersion (CardanoEras c)) - (CardanoHeader c)) where - arbitrary = arbitraryNodeToNode HeaderByron HeaderShelley HeaderAllegra HeaderMary HeaderAlonzo HeaderBabbage HeaderConway - -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToNodeVersion (CardanoEras c)) - (CardanoGenTx c)) where - arbitrary = arbitraryNodeToNode GenTxByron GenTxShelley GenTxAllegra GenTxMary GenTxAlonzo GenTxBabbage GenTxConway - -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToNodeVersion (CardanoEras c)) - (CardanoGenTxId c)) where - arbitrary = arbitraryNodeToNode GenTxIdByron GenTxIdShelley GenTxIdAllegra GenTxIdMary GenTxIdAlonzo GenTxIdBabbage GenTxIdConway + where + injByron = mapSomeNestedCtxt NCZ + injShelley = mapSomeNestedCtxt (NCS . NCZ) + injAllegra = mapSomeNestedCtxt (NCS . NCS . NCZ) + injMary = mapSomeNestedCtxt (NCS . NCS . NCS . NCZ) + injAlonzo = mapSomeNestedCtxt (NCS . NCS . NCS . NCS . NCZ) + injBabbage = mapSomeNestedCtxt (NCS . NCS . NCS . NCS . NCS . NCZ) + injConway = mapSomeNestedCtxt (NCS . NCS . NCS . NCS . NCS . NCS . 
NCZ) + +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToNodeVersion (CardanoEras c)) + (CardanoBlock c) + ) + where + arbitrary = + arbitraryNodeToNode + BlockByron + BlockShelley + BlockAllegra + BlockMary + BlockAlonzo + BlockBabbage + BlockConway + +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToNodeVersion (CardanoEras c)) + (CardanoHeader c) + ) + where + arbitrary = + arbitraryNodeToNode + HeaderByron + HeaderShelley + HeaderAllegra + HeaderMary + HeaderAlonzo + HeaderBabbage + HeaderConway + +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToNodeVersion (CardanoEras c)) + (CardanoGenTx c) + ) + where + arbitrary = + arbitraryNodeToNode + GenTxByron + GenTxShelley + GenTxAllegra + GenTxMary + GenTxAlonzo + GenTxBabbage + GenTxConway + +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToNodeVersion (CardanoEras c)) + (CardanoGenTxId c) + ) + where + arbitrary = + arbitraryNodeToNode + GenTxIdByron + GenTxIdShelley + GenTxIdAllegra + GenTxIdMary + GenTxIdAlonzo + GenTxIdBabbage + GenTxIdConway {------------------------------------------------------------------------------- NodeToClient -------------------------------------------------------------------------------} -instance CardanoHardForkConstraints c - => Arbitrary (HardForkNodeToClientVersion (CardanoEras c)) where +instance + CardanoHardForkConstraints c => + Arbitrary (HardForkNodeToClientVersion (CardanoEras c)) + where arbitrary = elements $ Map.elems $ supportedNodeToClientVersions (Proxy @(CardanoBlock c)) -newtype HardForkEnabledNodeToClientVersion c = HardForkEnabledNodeToClientVersion { - getHardForkEnabledNodeToClientVersion :: HardForkNodeToClientVersion (CardanoEras c) - } +newtype HardForkEnabledNodeToClientVersion c = HardForkEnabledNodeToClientVersion + { getHardForkEnabledNodeToClientVersion :: HardForkNodeToClientVersion (CardanoEras c) + } 
-deriving newtype instance CardanoHardForkConstraints c - => Eq (HardForkEnabledNodeToClientVersion c) -deriving newtype instance CardanoHardForkConstraints c - => Show (HardForkEnabledNodeToClientVersion c) +deriving newtype instance + CardanoHardForkConstraints c => + Eq (HardForkEnabledNodeToClientVersion c) +deriving newtype instance + CardanoHardForkConstraints c => + Show (HardForkEnabledNodeToClientVersion c) -instance CardanoHardForkConstraints c - => Arbitrary (HardForkEnabledNodeToClientVersion c) where +instance + CardanoHardForkConstraints c => + Arbitrary (HardForkEnabledNodeToClientVersion c) + where arbitrary = - elements + elements . map HardForkEnabledNodeToClientVersion . filter isHardForkNodeToClientEnabled . Map.elems @@ -299,288 +473,439 @@ instance CardanoHardForkConstraints c -- PRECONDITION: 'supportedNodeToClientVersions' must include a version that -- satisfies this condition. _genWithHardForkSpecificNodeToClientVersion :: - forall c. CardanoHardForkConstraints c - => (HardForkSpecificNodeToClientVersion -> Bool) - -> Gen (HardForkNodeToClientVersion (CardanoEras c)) + forall c. + CardanoHardForkConstraints c => + (HardForkSpecificNodeToClientVersion -> Bool) -> + Gen (HardForkNodeToClientVersion (CardanoEras c)) _genWithHardForkSpecificNodeToClientVersion p = - elements + elements . filter p' . Map.elems . 
supportedNodeToClientVersions $ Proxy @(CardanoBlock c) + where + p' :: HardForkNodeToClientVersion (CardanoEras c) -> Bool + p' (HardForkNodeToClientEnabled v _) = p v + p' (HardForkNodeToClientDisabled{}) = False + +instance + Arbitrary (BlockNodeToClientVersion blk) => + Arbitrary (EraNodeToClientVersion blk) where - p' :: HardForkNodeToClientVersion (CardanoEras c) -> Bool - p' (HardForkNodeToClientEnabled v _) = p v - p' (HardForkNodeToClientDisabled {}) = False - -instance Arbitrary (BlockNodeToClientVersion blk) - => Arbitrary (EraNodeToClientVersion blk) where - arbitrary = frequency - [ (1, pure EraNodeToClientDisabled) - , (9, EraNodeToClientEnabled <$> arbitrary) - ] + arbitrary = + frequency + [ (1, pure EraNodeToClientDisabled) + , (9, EraNodeToClientEnabled <$> arbitrary) + ] arbitraryNodeToClient :: - ( Arbitrary (WithVersion ByronNodeToClientVersion byron) - , Arbitrary (WithVersion ShelleyNodeToClientVersion shelley) - , Arbitrary (WithVersion ShelleyNodeToClientVersion allegra) - , Arbitrary (WithVersion ShelleyNodeToClientVersion mary) - , Arbitrary (WithVersion ShelleyNodeToClientVersion alonzo) - , Arbitrary (WithVersion ShelleyNodeToClientVersion babbage) - , Arbitrary (WithVersion ShelleyNodeToClientVersion conway) - ) - => (byron -> cardano) - -> (shelley -> cardano) - -> (allegra -> cardano) - -> (mary -> cardano) - -> (alonzo -> cardano) - -> (babbage -> cardano) - -> (conway -> cardano) - -> Gen (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) cardano) -arbitraryNodeToClient injByron injShelley injAllegra injMary injAlonzo injBabbage injConway = oneof + ( Arbitrary (WithVersion ByronNodeToClientVersion byron) + , Arbitrary (WithVersion ShelleyNodeToClientVersion shelley) + , Arbitrary (WithVersion ShelleyNodeToClientVersion allegra) + , Arbitrary (WithVersion ShelleyNodeToClientVersion mary) + , Arbitrary (WithVersion ShelleyNodeToClientVersion alonzo) + , Arbitrary (WithVersion ShelleyNodeToClientVersion babbage) + , Arbitrary 
(WithVersion ShelleyNodeToClientVersion conway) + ) => + (byron -> cardano) -> + (shelley -> cardano) -> + (allegra -> cardano) -> + (mary -> cardano) -> + (alonzo -> cardano) -> + (babbage -> cardano) -> + (conway -> cardano) -> + Gen (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) cardano) +arbitraryNodeToClient injByron injShelley injAllegra injMary injAlonzo injBabbage injConway = + oneof -- Byron + HardFork disabled - [ (\(WithVersion versionByron b) -> + [ ( \(WithVersion versionByron b) -> WithVersion (HardForkNodeToClientDisabled versionByron) - (injByron b)) + (injByron b) + ) <$> arbitrary - -- Byron + HardFork enabled. - , (\(WithVersion versionByron b) versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway -> + , -- Byron + HardFork enabled. + ( \(WithVersion versionByron b) versionShelley versionAllegra versionMary versionAlonzo versionBabbage versionConway -> WithVersion - (HardForkNodeToClientEnabled - maxBound - ( EraNodeToClientEnabled versionByron - :* EraNodeToClientEnabled versionShelley - :* versionAllegra - :* versionMary - :* versionAlonzo - :* versionBabbage - :* versionConway - :* Nil - )) - (injByron b)) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - -- Shelley + HardFork enabled - , (\versionByron (WithVersion versionShelley s) versionAllegra versionMary versionAlonzo versionBabbage versionConway -> + ( HardForkNodeToClientEnabled + maxBound + ( EraNodeToClientEnabled versionByron + :* EraNodeToClientEnabled versionShelley + :* versionAllegra + :* versionMary + :* versionAlonzo + :* versionBabbage + :* versionConway + :* Nil + ) + ) + (injByron b) + ) + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , -- Shelley + HardFork enabled + ( \versionByron (WithVersion versionShelley s) versionAllegra versionMary versionAlonzo versionBabbage versionConway -> WithVersion - 
(HardForkNodeToClientEnabled - maxBound - ( EraNodeToClientEnabled versionByron - :* EraNodeToClientEnabled versionShelley - :* versionAllegra - :* versionMary - :* versionAlonzo - :* versionBabbage - :* versionConway - :* Nil - )) - (injShelley s)) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - -- Allegra + HardFork enabled - , (\versionByron versionShelley (WithVersion versionAllegra a) versionMary versionAlonzo versionBabbage versionConway -> + ( HardForkNodeToClientEnabled + maxBound + ( EraNodeToClientEnabled versionByron + :* EraNodeToClientEnabled versionShelley + :* versionAllegra + :* versionMary + :* versionAlonzo + :* versionBabbage + :* versionConway + :* Nil + ) + ) + (injShelley s) + ) + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , -- Allegra + HardFork enabled + ( \versionByron versionShelley (WithVersion versionAllegra a) versionMary versionAlonzo versionBabbage versionConway -> WithVersion - (HardForkNodeToClientEnabled - maxBound - ( EraNodeToClientEnabled versionByron - :* EraNodeToClientEnabled versionShelley - :* EraNodeToClientEnabled versionAllegra - :* versionMary - :* versionAlonzo - :* versionBabbage - :* versionConway - :* Nil - )) - (injAllegra a)) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - -- Mary + HardFork enabled - , (\versionByron versionShelley versionAllegra (WithVersion versionMary m) versionAlonzo versionBabbage versionConway -> + ( HardForkNodeToClientEnabled + maxBound + ( EraNodeToClientEnabled versionByron + :* EraNodeToClientEnabled versionShelley + :* EraNodeToClientEnabled versionAllegra + :* versionMary + :* versionAlonzo + :* versionBabbage + :* versionConway + :* Nil + ) + ) + (injAllegra a) + ) + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , -- Mary + HardFork enabled + ( 
\versionByron versionShelley versionAllegra (WithVersion versionMary m) versionAlonzo versionBabbage versionConway -> WithVersion - (HardForkNodeToClientEnabled - maxBound - ( EraNodeToClientEnabled versionByron - :* EraNodeToClientEnabled versionShelley - :* EraNodeToClientEnabled versionAllegra - :* EraNodeToClientEnabled versionMary - :* versionAlonzo - :* versionBabbage - :* versionConway - :* Nil - )) - (injMary m)) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - -- Alonzo + HardFork enabled - , (\versionByron versionShelley versionAllegra versionMary (WithVersion versionAlonzo a) versionBabbage versionConway -> + ( HardForkNodeToClientEnabled + maxBound + ( EraNodeToClientEnabled versionByron + :* EraNodeToClientEnabled versionShelley + :* EraNodeToClientEnabled versionAllegra + :* EraNodeToClientEnabled versionMary + :* versionAlonzo + :* versionBabbage + :* versionConway + :* Nil + ) + ) + (injMary m) + ) + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , -- Alonzo + HardFork enabled + ( \versionByron versionShelley versionAllegra versionMary (WithVersion versionAlonzo a) versionBabbage versionConway -> WithVersion - (HardForkNodeToClientEnabled - maxBound - ( EraNodeToClientEnabled versionByron - :* EraNodeToClientEnabled versionShelley - :* EraNodeToClientEnabled versionAllegra - :* EraNodeToClientEnabled versionMary - :* EraNodeToClientEnabled versionAlonzo - :* versionBabbage - :* versionConway - :* Nil - )) - (injAlonzo a)) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - -- Babbage + HardFork enabled - , (\versionByron versionShelley versionAllegra versionMary versionAlonzo (WithVersion versionBabbage a) versionConway -> + ( HardForkNodeToClientEnabled + maxBound + ( EraNodeToClientEnabled versionByron + :* EraNodeToClientEnabled versionShelley + :* EraNodeToClientEnabled 
versionAllegra + :* EraNodeToClientEnabled versionMary + :* EraNodeToClientEnabled versionAlonzo + :* versionBabbage + :* versionConway + :* Nil + ) + ) + (injAlonzo a) + ) + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , -- Babbage + HardFork enabled + ( \versionByron versionShelley versionAllegra versionMary versionAlonzo (WithVersion versionBabbage a) versionConway -> WithVersion - (HardForkNodeToClientEnabled - maxBound - ( EraNodeToClientEnabled versionByron - :* EraNodeToClientEnabled versionShelley - :* EraNodeToClientEnabled versionAllegra - :* EraNodeToClientEnabled versionMary - :* EraNodeToClientEnabled versionAlonzo - :* EraNodeToClientEnabled versionBabbage - :* versionConway - :* Nil - )) - (injBabbage a)) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary - -- Conway + HardFork enabled - , (\versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage (WithVersion versionConway a) -> + ( HardForkNodeToClientEnabled + maxBound + ( EraNodeToClientEnabled versionByron + :* EraNodeToClientEnabled versionShelley + :* EraNodeToClientEnabled versionAllegra + :* EraNodeToClientEnabled versionMary + :* EraNodeToClientEnabled versionAlonzo + :* EraNodeToClientEnabled versionBabbage + :* versionConway + :* Nil + ) + ) + (injBabbage a) + ) + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + , -- Conway + HardFork enabled + ( \versionByron versionShelley versionAllegra versionMary versionAlonzo versionBabbage (WithVersion versionConway a) -> WithVersion - (HardForkNodeToClientEnabled - maxBound - ( EraNodeToClientEnabled versionByron - :* EraNodeToClientEnabled versionShelley - :* EraNodeToClientEnabled versionAllegra - :* EraNodeToClientEnabled versionMary - :* EraNodeToClientEnabled versionAlonzo - :* EraNodeToClientEnabled versionBabbage - :* 
EraNodeToClientEnabled versionConway - :* Nil - )) - (injConway a)) - <$> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary <*> arbitrary + ( HardForkNodeToClientEnabled + maxBound + ( EraNodeToClientEnabled versionByron + :* EraNodeToClientEnabled versionShelley + :* EraNodeToClientEnabled versionAllegra + :* EraNodeToClientEnabled versionMary + :* EraNodeToClientEnabled versionAlonzo + :* EraNodeToClientEnabled versionBabbage + :* EraNodeToClientEnabled versionConway + :* Nil + ) + ) + (injConway a) + ) + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary ] -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) - (CardanoBlock c)) where - arbitrary = arbitraryNodeToClient BlockByron BlockShelley BlockAllegra BlockMary BlockAlonzo BlockBabbage BlockConway - -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) - (CardanoGenTx c)) where - arbitrary = arbitraryNodeToClient GenTxByron GenTxShelley GenTxAllegra GenTxMary GenTxAlonzo GenTxBabbage GenTxConway - -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) - (CardanoApplyTxErr c)) where - arbitrary = frequency - [ (8, arbitraryNodeToClient ApplyTxErrByron ApplyTxErrShelley ApplyTxErrAllegra ApplyTxErrMary ApplyTxErrAlonzo ApplyTxErrBabbage ApplyTxErrConway) - , (2, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (HardForkApplyTxErrWrongEra <$> arbitrary)) +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToClientVersion (CardanoEras c)) + (CardanoBlock c) + ) + where + arbitrary = + arbitraryNodeToClient + BlockByron + BlockShelley + BlockAllegra + BlockMary + BlockAlonzo + BlockBabbage + BlockConway + +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + 
(HardForkNodeToClientVersion (CardanoEras c)) + (CardanoGenTx c) + ) + where + arbitrary = + arbitraryNodeToClient + GenTxByron + GenTxShelley + GenTxAllegra + GenTxMary + GenTxAlonzo + GenTxBabbage + GenTxConway + +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToClientVersion (CardanoEras c)) + (CardanoApplyTxErr c) + ) + where + arbitrary = + frequency + [ + ( 8 + , arbitraryNodeToClient + ApplyTxErrByron + ApplyTxErrShelley + ApplyTxErrAllegra + ApplyTxErrMary + ApplyTxErrAlonzo + ApplyTxErrBabbage + ApplyTxErrConway + ) + , + ( 2 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (HardForkApplyTxErrWrongEra <$> arbitrary) + ) ] shrink = traverse aux - where - aux :: CardanoApplyTxErr MockCryptoCompatByron - -> [CardanoApplyTxErr MockCryptoCompatByron] - aux (HardForkApplyTxErrFromEra (OneEraApplyTxErr x)) = - HardForkApplyTxErrFromEra . OneEraApplyTxErr <$> shrink x - aux (HardForkApplyTxErrWrongEra x) = - HardForkApplyTxErrWrongEra <$> shrink x + where + aux :: + CardanoApplyTxErr MockCryptoCompatByron -> + [CardanoApplyTxErr MockCryptoCompatByron] + aux (HardForkApplyTxErrFromEra (OneEraApplyTxErr x)) = + HardForkApplyTxErrFromEra . 
OneEraApplyTxErr <$> shrink x + aux (HardForkApplyTxErrWrongEra x) = + HardForkApplyTxErrWrongEra <$> shrink x instance Arbitrary (Some QueryAnytime) where arbitrary = return $ Some GetEraStart -instance CardanoHardForkConstraints c - => Arbitrary (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) - (Some (QueryHardFork (CardanoEras c)))) where - arbitrary = frequency - [ (1, do version <- getHardForkEnabledNodeToClientVersion <$> arbitrary - return $ WithVersion version (Some GetInterpreter)) - , (1, do version <- getHardForkEnabledNodeToClientVersion <$> arbitrary - return $ WithVersion version (Some GetCurrentEra)) +instance + CardanoHardForkConstraints c => + Arbitrary + ( WithVersion + (HardForkNodeToClientVersion (CardanoEras c)) + (Some (QueryHardFork (CardanoEras c))) + ) + where + arbitrary = + frequency + [ + ( 1 + , do + version <- getHardForkEnabledNodeToClientVersion <$> arbitrary + return $ WithVersion version (Some GetInterpreter) + ) + , + ( 1 + , do + version <- getHardForkEnabledNodeToClientVersion <$> arbitrary + return $ WithVersion version (Some GetCurrentEra) + ) ] -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) - (SomeBlockQuery (BlockQuery (CardanoBlock c)))) where - arbitrary = frequency +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToClientVersion (CardanoEras c)) + (SomeBlockQuery (BlockQuery (CardanoBlock c))) + ) + where + arbitrary = + frequency [ (1, arbitraryNodeToClient injByron injShelley injAllegra injMary injAlonzo injBabbage injConway) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (injAnytimeByron <$> arbitrary)) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (injAnytimeShelley <$> arbitrary)) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (injAnytimeAllegra <$> arbitrary)) - , (1, WithVersion - <$> 
(getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (injAnytimeMary <$> arbitrary)) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (injAnytimeAlonzo <$> arbitrary)) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (injAnytimeBabbage <$> arbitrary)) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (injAnytimeConway <$> arbitrary)) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (injAnytimeByron <$> arbitrary) + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (injAnytimeShelley <$> arbitrary) + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (injAnytimeAllegra <$> arbitrary) + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (injAnytimeMary <$> arbitrary) + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (injAnytimeAlonzo <$> arbitrary) + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (injAnytimeBabbage <$> arbitrary) + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (injAnytimeConway <$> arbitrary) + ) , (1, fmap injHardFork <$> arbitrary) ] - where - injByron (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentByron query) - injShelley (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentShelley query) - injAllegra (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentAllegra query) - injMary (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentMary query) - injAlonzo (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentAlonzo query) - injBabbage (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentBabbage query) - injConway (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentConway query) - injAnytimeByron (Some 
query) = SomeBlockQuery (QueryAnytimeByron query) - injAnytimeShelley (Some query) = SomeBlockQuery (QueryAnytimeShelley query) - injAnytimeAllegra (Some query) = SomeBlockQuery (QueryAnytimeAllegra query) - injAnytimeMary (Some query) = SomeBlockQuery (QueryAnytimeMary query) - injAnytimeAlonzo (Some query) = SomeBlockQuery (QueryAnytimeAlonzo query) - injAnytimeBabbage (Some query) = SomeBlockQuery (QueryAnytimeBabbage query) - injAnytimeConway (Some query) = SomeBlockQuery (QueryAnytimeConway query) - injHardFork (Some query) = SomeBlockQuery (QueryHardFork query) + where + injByron (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentByron query) + injShelley (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentShelley query) + injAllegra (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentAllegra query) + injMary (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentMary query) + injAlonzo (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentAlonzo query) + injBabbage (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentBabbage query) + injConway (SomeBlockQuery query) = SomeBlockQuery (QueryIfCurrentConway query) + injAnytimeByron (Some query) = SomeBlockQuery (QueryAnytimeByron query) + injAnytimeShelley (Some query) = SomeBlockQuery (QueryAnytimeShelley query) + injAnytimeAllegra (Some query) = SomeBlockQuery (QueryAnytimeAllegra query) + injAnytimeMary (Some query) = SomeBlockQuery (QueryAnytimeMary query) + injAnytimeAlonzo (Some query) = SomeBlockQuery (QueryAnytimeAlonzo query) + injAnytimeBabbage (Some query) = SomeBlockQuery (QueryAnytimeBabbage query) + injAnytimeConway (Some query) = SomeBlockQuery (QueryAnytimeConway query) + injHardFork (Some query) = SomeBlockQuery (QueryHardFork query) instance Arbitrary History.EraEnd where - arbitrary = oneof + arbitrary = + oneof [ History.EraEnd <$> arbitrary , return History.EraUnbounded ] instance Arbitrary History.EraSummary where - arbitrary = History.EraSummary + arbitrary = + 
History.EraSummary <$> arbitrary <*> arbitrary <*> arbitrary instance (Arbitrary a, SListI xs) => Arbitrary (NonEmpty xs a) where arbitrary = do - let nbXs = lengthSList (Proxy @xs) - len <- choose (1, nbXs) - xs <- vectorOf len arbitrary - return $ fromMaybe (error "nonEmptyFromList failed") $ nonEmptyFromList xs + let nbXs = lengthSList (Proxy @xs) + len <- choose (1, nbXs) + xs <- vectorOf len arbitrary + return $ fromMaybe (error "nonEmptyFromList failed") $ nonEmptyFromList xs instance Arbitrary (History.Interpreter (CardanoEras c)) where arbitrary = - History.mkInterpreter . History.Summary . enforceInvariant <$> arbitrary - where - -- Enforce the invariant that when the last era in the summary is the - -- final era, it is unbounded. The decoder relies on this. - enforceInvariant xs - | length (nonEmptyToList xs) == lengthSList (Proxy @(CardanoEras c)) - = fixEndBound xs - | otherwise - = xs - - fixEndBound :: - NonEmpty xs History.EraSummary - -> NonEmpty xs History.EraSummary - fixEndBound (NonEmptyCons e es) = NonEmptyCons e (fixEndBound es) - fixEndBound (NonEmptyOne e) = - NonEmptyOne e { History.eraEnd = History.EraUnbounded } + History.mkInterpreter . History.Summary . enforceInvariant <$> arbitrary + where + -- Enforce the invariant that when the last era in the summary is the + -- final era, it is unbounded. The decoder relies on this. 
+ enforceInvariant xs + | length (nonEmptyToList xs) == lengthSList (Proxy @(CardanoEras c)) = + fixEndBound xs + | otherwise = + xs + + fixEndBound :: + NonEmpty xs History.EraSummary -> + NonEmpty xs History.EraSummary + fixEndBound (NonEmptyCons e es) = NonEmptyCons e (fixEndBound es) + fixEndBound (NonEmptyOne e) = + NonEmptyOne e{History.eraEnd = History.EraUnbounded} instance Arbitrary (EraIndex (CardanoEras c)) where arbitrary = do @@ -590,147 +915,205 @@ instance Arbitrary (EraIndex (CardanoEras c)) where Nothing -> error $ "nsFromIndex failed for " <> show index Just ns -> return $ eraIndexFromNS ns -instance c ~ MockCryptoCompatByron - => Arbitrary (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) - (SomeResult (CardanoBlock c))) where - arbitrary = frequency +instance + c ~ MockCryptoCompatByron => + Arbitrary + ( WithVersion + (HardForkNodeToClientVersion (CardanoEras c)) + (SomeResult (CardanoBlock c)) + ) + where + arbitrary = + frequency [ (1, arbitraryNodeToClient injByron injShelley injAllegra injMary injAlonzo injBabbage injConway) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> genQueryIfCurrentResultEraMismatch) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> genQueryAnytimeResultByron) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> genQueryAnytimeResultShelley) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> genQueryAnytimeResultAllegra) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> genQueryAnytimeResultMary) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> genQueryAnytimeResultAlonzo) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> genQueryAnytimeResultBabbage) - , (1, WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> 
genQueryAnytimeResultConway) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> genQueryIfCurrentResultEraMismatch + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> genQueryAnytimeResultByron + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> genQueryAnytimeResultShelley + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> genQueryAnytimeResultAllegra + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> genQueryAnytimeResultMary + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> genQueryAnytimeResultAlonzo + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> genQueryAnytimeResultBabbage + ) + , + ( 1 + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> genQueryAnytimeResultConway + ) , (1, genQueryHardForkResult) ] - where - injByron (SomeResult q r) = SomeResult (QueryIfCurrentByron q) (QueryResultSuccess r) - injShelley (SomeResult q r) = SomeResult (QueryIfCurrentShelley q) (QueryResultSuccess r) - injAllegra (SomeResult q r) = SomeResult (QueryIfCurrentAllegra q) (QueryResultSuccess r) - injMary (SomeResult q r) = SomeResult (QueryIfCurrentMary q) (QueryResultSuccess r) - injAlonzo (SomeResult q r) = SomeResult (QueryIfCurrentAlonzo q) (QueryResultSuccess r) - injBabbage (SomeResult q r) = SomeResult (QueryIfCurrentBabbage q) (QueryResultSuccess r) - injConway (SomeResult q r) = SomeResult (QueryIfCurrentConway q) (QueryResultSuccess r) - - -- In practice, when sending a Byron query you'll never get a mismatch - -- saying that your query is from the Shelley era while the ledger is - -- from Byron. Only the inverse. 
We ignore that in this generator, as it - -- doesn't matter for serialisation purposes, we just generate a random - -- 'MismatchEraInfo'. - genQueryIfCurrentResultEraMismatch :: Gen (SomeResult (CardanoBlock c)) - genQueryIfCurrentResultEraMismatch = oneof - [ (\(SomeResult q (_ :: result)) mismatch -> - SomeResult (QueryIfCurrentByron q) (Left @_ @result mismatch)) - <$> arbitrary <*> arbitrary - , (\(SomeResult q (_ :: result)) mismatch -> - SomeResult (QueryIfCurrentShelley q) (Left @_ @result mismatch)) - <$> arbitrary <*> arbitrary - , (\(SomeResult q (_ :: result)) mismatch -> - SomeResult (QueryIfCurrentAllegra q) (Left @_ @result mismatch)) - <$> arbitrary <*> arbitrary - , (\(SomeResult q (_ :: result)) mismatch -> - SomeResult (QueryIfCurrentMary q) (Left @_ @result mismatch)) - <$> arbitrary <*> arbitrary - , (\(SomeResult q (_ :: result)) mismatch -> - SomeResult (QueryIfCurrentAlonzo q) (Left @_ @result mismatch)) - <$> arbitrary <*> arbitrary - , (\(SomeResult q (_ :: result)) mismatch -> - SomeResult (QueryIfCurrentBabbage q) (Left @_ @result mismatch)) - <$> arbitrary <*> arbitrary - , (\(SomeResult q (_ :: result)) mismatch -> - SomeResult (QueryIfCurrentConway q) (Left @_ @result mismatch)) - <$> arbitrary <*> arbitrary - ] - - genQueryAnytimeResultByron :: Gen (SomeResult (CardanoBlock c)) - genQueryAnytimeResultByron = - SomeResult (QueryAnytimeByron GetEraStart) <$> arbitrary - - genQueryAnytimeResultShelley :: Gen (SomeResult (CardanoBlock c)) - genQueryAnytimeResultShelley = - SomeResult (QueryAnytimeShelley GetEraStart) <$> arbitrary - - genQueryAnytimeResultAllegra :: Gen (SomeResult (CardanoBlock c)) - genQueryAnytimeResultAllegra = - SomeResult (QueryAnytimeAllegra GetEraStart) <$> arbitrary - - genQueryAnytimeResultMary :: Gen (SomeResult (CardanoBlock c)) - genQueryAnytimeResultMary = - SomeResult (QueryAnytimeMary GetEraStart) <$> arbitrary - - genQueryAnytimeResultAlonzo :: Gen (SomeResult (CardanoBlock c)) - 
genQueryAnytimeResultAlonzo = - SomeResult (QueryAnytimeAlonzo GetEraStart) <$> arbitrary - - genQueryAnytimeResultBabbage :: Gen (SomeResult (CardanoBlock c)) - genQueryAnytimeResultBabbage = - SomeResult (QueryAnytimeBabbage GetEraStart) <$> arbitrary - - genQueryAnytimeResultConway :: Gen (SomeResult (CardanoBlock c)) - genQueryAnytimeResultConway = - SomeResult (QueryAnytimeConway GetEraStart) <$> arbitrary - - genQueryHardForkResult :: - Gen (WithVersion (HardForkNodeToClientVersion (CardanoEras c)) - (SomeResult (CardanoBlock c))) - genQueryHardForkResult = oneof - [ WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (SomeResult (QueryHardFork GetInterpreter) <$> arbitrary) - , WithVersion - <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) - <*> (SomeResult (QueryHardFork GetCurrentEra) <$> arbitrary) - ] + where + injByron (SomeResult q r) = SomeResult (QueryIfCurrentByron q) (QueryResultSuccess r) + injShelley (SomeResult q r) = SomeResult (QueryIfCurrentShelley q) (QueryResultSuccess r) + injAllegra (SomeResult q r) = SomeResult (QueryIfCurrentAllegra q) (QueryResultSuccess r) + injMary (SomeResult q r) = SomeResult (QueryIfCurrentMary q) (QueryResultSuccess r) + injAlonzo (SomeResult q r) = SomeResult (QueryIfCurrentAlonzo q) (QueryResultSuccess r) + injBabbage (SomeResult q r) = SomeResult (QueryIfCurrentBabbage q) (QueryResultSuccess r) + injConway (SomeResult q r) = SomeResult (QueryIfCurrentConway q) (QueryResultSuccess r) + + -- In practice, when sending a Byron query you'll never get a mismatch + -- saying that your query is from the Shelley era while the ledger is + -- from Byron. Only the inverse. We ignore that in this generator, as it + -- doesn't matter for serialisation purposes, we just generate a random + -- 'MismatchEraInfo'. 
+ genQueryIfCurrentResultEraMismatch :: Gen (SomeResult (CardanoBlock c)) + genQueryIfCurrentResultEraMismatch = + oneof + [ ( \(SomeResult q (_ :: result)) mismatch -> + SomeResult (QueryIfCurrentByron q) (Left @_ @result mismatch) + ) + <$> arbitrary + <*> arbitrary + , ( \(SomeResult q (_ :: result)) mismatch -> + SomeResult (QueryIfCurrentShelley q) (Left @_ @result mismatch) + ) + <$> arbitrary + <*> arbitrary + , ( \(SomeResult q (_ :: result)) mismatch -> + SomeResult (QueryIfCurrentAllegra q) (Left @_ @result mismatch) + ) + <$> arbitrary + <*> arbitrary + , ( \(SomeResult q (_ :: result)) mismatch -> + SomeResult (QueryIfCurrentMary q) (Left @_ @result mismatch) + ) + <$> arbitrary + <*> arbitrary + , ( \(SomeResult q (_ :: result)) mismatch -> + SomeResult (QueryIfCurrentAlonzo q) (Left @_ @result mismatch) + ) + <$> arbitrary + <*> arbitrary + , ( \(SomeResult q (_ :: result)) mismatch -> + SomeResult (QueryIfCurrentBabbage q) (Left @_ @result mismatch) + ) + <$> arbitrary + <*> arbitrary + , ( \(SomeResult q (_ :: result)) mismatch -> + SomeResult (QueryIfCurrentConway q) (Left @_ @result mismatch) + ) + <$> arbitrary + <*> arbitrary + ] + + genQueryAnytimeResultByron :: Gen (SomeResult (CardanoBlock c)) + genQueryAnytimeResultByron = + SomeResult (QueryAnytimeByron GetEraStart) <$> arbitrary + + genQueryAnytimeResultShelley :: Gen (SomeResult (CardanoBlock c)) + genQueryAnytimeResultShelley = + SomeResult (QueryAnytimeShelley GetEraStart) <$> arbitrary + + genQueryAnytimeResultAllegra :: Gen (SomeResult (CardanoBlock c)) + genQueryAnytimeResultAllegra = + SomeResult (QueryAnytimeAllegra GetEraStart) <$> arbitrary + + genQueryAnytimeResultMary :: Gen (SomeResult (CardanoBlock c)) + genQueryAnytimeResultMary = + SomeResult (QueryAnytimeMary GetEraStart) <$> arbitrary + + genQueryAnytimeResultAlonzo :: Gen (SomeResult (CardanoBlock c)) + genQueryAnytimeResultAlonzo = + SomeResult (QueryAnytimeAlonzo GetEraStart) <$> arbitrary + + 
genQueryAnytimeResultBabbage :: Gen (SomeResult (CardanoBlock c)) + genQueryAnytimeResultBabbage = + SomeResult (QueryAnytimeBabbage GetEraStart) <$> arbitrary + + genQueryAnytimeResultConway :: Gen (SomeResult (CardanoBlock c)) + genQueryAnytimeResultConway = + SomeResult (QueryAnytimeConway GetEraStart) <$> arbitrary + + genQueryHardForkResult :: + Gen + ( WithVersion + (HardForkNodeToClientVersion (CardanoEras c)) + (SomeResult (CardanoBlock c)) + ) + genQueryHardForkResult = + oneof + [ WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (SomeResult (QueryHardFork GetInterpreter) <$> arbitrary) + , WithVersion + <$> (getHardForkEnabledNodeToClientVersion <$> arbitrary) + <*> (SomeResult (QueryHardFork GetCurrentEra) <$> arbitrary) + ] {------------------------------------------------------------------------------ Ledger Config ------------------------------------------------------------------------------} - -- | See 'encodeNodeToClientNP' and 'decodeNodeToClientNP'. -instance CardanoHardForkConstraints c - => Arbitrary (WithVersion - (HardForkNodeToClientVersion (CardanoEras c)) - (HardForkLedgerConfig (CardanoEras c)) - ) where - arbitrary = WithVersion +instance + CardanoHardForkConstraints c => + Arbitrary + ( WithVersion + (HardForkNodeToClientVersion (CardanoEras c)) + (HardForkLedgerConfig (CardanoEras c)) + ) + where + arbitrary = + WithVersion -- Use a version that enables all eras. We assume that all eras are -- enabled in the maximum supported version. 
          (snd $ fromMaybe err $ Map.lookupMax $ supportedNodeToClientVersions (Proxy @(CardanoBlock c)))
      <$> arbitrary
-    where
-      err = error "Expected at least 1 supported node-to-client version, but `supportedNodeToClientVersions` has none"
+   where
+    err =
+      error
+        "Expected at least 1 supported node-to-client version, but `supportedNodeToClientVersions` has none"
 
-instance CardanoHardForkConstraints c
-      => Arbitrary (HardForkLedgerConfig (CardanoEras c)) where
+instance
+  CardanoHardForkConstraints c =>
+  Arbitrary (HardForkLedgerConfig (CardanoEras c))
+  where
   arbitrary = HardForkLedgerConfig <$> arbitrary <*> arbitrary
 
 instance SListI xs => Arbitrary (History.Shape xs) where
   arbitrary = History.Shape . Exactly <$> hsequenceK (hpure (K arbitrary))
 
-instance (CardanoHardForkConstraints c)
-      => Arbitrary (PerEraLedgerConfig (CardanoEras c)) where
+instance
+  CardanoHardForkConstraints c =>
+  Arbitrary (PerEraLedgerConfig (CardanoEras c))
+  where
   arbitrary =
-    fmap PerEraLedgerConfig . hsequence'
-      $ hcpure (Proxy @(Compose Arbitrary WrapPartialLedgerConfig)) (Comp arbitrary)
+    fmap PerEraLedgerConfig .
hsequence' $ + hcpure (Proxy @(Compose Arbitrary WrapPartialLedgerConfig)) (Comp arbitrary) instance Arbitrary (PartialLedgerConfig blk) => Arbitrary (WrapPartialLedgerConfig blk) where arbitrary = WrapPartialLedgerConfig <$> arbitrary @@ -738,13 +1121,16 @@ instance Arbitrary (PartialLedgerConfig blk) => Arbitrary (WrapPartialLedgerConf instance Arbitrary ByronPartialLedgerConfig where arbitrary = ByronPartialLedgerConfig <$> arbitrary <*> arbitrary -instance Arbitrary (ShelleyLedgerConfig era) - => Arbitrary (ShelleyPartialLedgerConfig era) where +instance + Arbitrary (ShelleyLedgerConfig era) => + Arbitrary (ShelleyPartialLedgerConfig era) + where arbitrary = ShelleyPartialLedgerConfig <$> arbitrary <*> arbitrary instance Arbitrary TriggerHardFork where - arbitrary = oneof [ - TriggerHardForkAtVersion <$> arbitrary - , TriggerHardForkAtEpoch <$> arbitrary - , pure TriggerHardForkNotDuringThisExecution - ] + arbitrary = + oneof + [ TriggerHardForkAtVersion <$> arbitrary + , TriggerHardForkAtEpoch <$> arbitrary + , pure TriggerHardForkNotDuringThisExecution + ] diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/MockCrypto.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/MockCrypto.hs index 7685c21881..383396a4cf 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/MockCrypto.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/MockCrypto.hs @@ -1,16 +1,15 @@ {-# LANGUAGE DataKinds #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.Cardano.MockCrypto (MockCryptoCompatByron) where -import Cardano.Crypto.KES (MockKES) -import Cardano.Crypto.VRF (MockVRF) -import Cardano.Protocol.Crypto (Crypto (..)) -import qualified Ouroboros.Consensus.Protocol.Praos as Praos -import qualified Ouroboros.Consensus.Protocol.TPraos as TPraos +import 
Cardano.Crypto.KES (MockKES) +import Cardano.Crypto.VRF (MockVRF) +import Cardano.Protocol.Crypto (Crypto (..)) +import Ouroboros.Consensus.Protocol.Praos qualified as Praos +import Ouroboros.Consensus.Protocol.TPraos qualified as TPraos -- | A replacement for 'Test.Consensus.Shelley.MockCrypto' that is compatible -- with bootstrapping from Byron. @@ -33,8 +32,8 @@ import qualified Ouroboros.Consensus.Protocol.TPraos as TPraos data MockCryptoCompatByron instance Crypto MockCryptoCompatByron where - type KES MockCryptoCompatByron = MockKES 10 - type VRF MockCryptoCompatByron = MockVRF + type KES MockCryptoCompatByron = MockKES 10 + type VRF MockCryptoCompatByron = MockVRF instance TPraos.PraosCrypto MockCryptoCompatByron diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/ProtocolInfo.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/ProtocolInfo.hs index 4b2fcdbf98..5a2b4d8ab1 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/ProtocolInfo.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/Consensus/Cardano/ProtocolInfo.hs @@ -5,57 +5,73 @@ {-# LANGUAGE TypeApplications #-} -- | Utility functions to elaborate a Cardano 'ProtocolInfo' from certain parameters. -module Test.Consensus.Cardano.ProtocolInfo ( - -- * ProtocolInfo elaboration parameter types +module Test.Consensus.Cardano.ProtocolInfo + ( -- * ProtocolInfo elaboration parameter types ByronSlotLengthInSeconds (..) , NumCoreNodes (..) , ShelleySlotLengthInSeconds (..) + -- ** Hard-fork specification , Era (..) 
, hardForkInto , hardForkOnDefaultProtocolVersions + -- * ProtocolInfo elaboration , mkSimpleTestProtocolInfo , mkTestProtocolInfo , protocolVersionZero ) where -import qualified Cardano.Chain.Genesis as CC.Genesis -import qualified Cardano.Chain.Update as CC.Update -import qualified Cardano.Ledger.Api.Transition as L -import qualified Cardano.Ledger.BaseTypes as SL -import Cardano.Protocol.Crypto (StandardCrypto) -import qualified Cardano.Protocol.TPraos.OCert as SL -import qualified Cardano.Slotting.Time as Time -import Data.Proxy (Proxy (..)) -import Data.SOP.Strict -import Data.Word (Word64) -import Ouroboros.Consensus.Block.Forging (BlockForging) -import Ouroboros.Consensus.BlockchainTime (SlotLength) -import Ouroboros.Consensus.Byron.Node (ByronLeaderCredentials, - ProtocolParamsByron (..), byronGenesis, - byronPbftSignatureThreshold, byronSoftwareVersion) -import Ouroboros.Consensus.Cardano.Block (CardanoBlock) -import Ouroboros.Consensus.Cardano.Node (CardanoHardForkConstraints, - CardanoHardForkTrigger (..), CardanoHardForkTriggers (..), - CardanoProtocolParams (..), protocolInfoCardano) -import Ouroboros.Consensus.Config (emptyCheckpointsMap) -import Ouroboros.Consensus.Config.SecurityParam (SecurityParam (..)) -import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..), - ProtocolInfo) -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.PBFT (PBftParams, - PBftSignatureThreshold (..)) -import Ouroboros.Consensus.Shelley.Node - (ProtocolParamsShelleyBased (..), ShelleyGenesis, - ShelleyLeaderCredentials) -import Ouroboros.Consensus.Util.IOLike (IOLike) -import qualified Test.Cardano.Ledger.Alonzo.Examples.Consensus as SL -import qualified Test.Cardano.Ledger.Conway.Examples.Consensus as SL -import qualified Test.ThreadNet.Infra.Byron as Byron -import qualified Test.ThreadNet.Infra.Shelley as Shelley -import Test.ThreadNet.Util.Seed (Seed (Seed), runGen) -import Test.Util.Slots (NumSlots (..)) +import 
Cardano.Chain.Genesis qualified as CC.Genesis +import Cardano.Chain.Update qualified as CC.Update +import Cardano.Ledger.Api.Transition qualified as L +import Cardano.Ledger.BaseTypes qualified as SL +import Cardano.Protocol.Crypto (StandardCrypto) +import Cardano.Protocol.TPraos.OCert qualified as SL +import Cardano.Slotting.Time qualified as Time +import Data.Proxy (Proxy (..)) +import Data.SOP.Strict +import Data.Word (Word64) +import Ouroboros.Consensus.Block.Forging (BlockForging) +import Ouroboros.Consensus.BlockchainTime (SlotLength) +import Ouroboros.Consensus.Byron.Node + ( ByronLeaderCredentials + , ProtocolParamsByron (..) + , byronGenesis + , byronPbftSignatureThreshold + , byronSoftwareVersion + ) +import Ouroboros.Consensus.Cardano.Block (CardanoBlock) +import Ouroboros.Consensus.Cardano.Node + ( CardanoHardForkConstraints + , CardanoHardForkTrigger (..) + , CardanoHardForkTriggers (..) + , CardanoProtocolParams (..) + , protocolInfoCardano + ) +import Ouroboros.Consensus.Config (emptyCheckpointsMap) +import Ouroboros.Consensus.Config.SecurityParam (SecurityParam (..)) +import Ouroboros.Consensus.Node.ProtocolInfo + ( NumCoreNodes (..) + , ProtocolInfo + ) +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.PBFT + ( PBftParams + , PBftSignatureThreshold (..) + ) +import Ouroboros.Consensus.Shelley.Node + ( ProtocolParamsShelleyBased (..) 
+ , ShelleyGenesis + , ShelleyLeaderCredentials + ) +import Ouroboros.Consensus.Util.IOLike (IOLike) +import Test.Cardano.Ledger.Alonzo.Examples.Consensus qualified as SL +import Test.Cardano.Ledger.Conway.Examples.Consensus qualified as SL +import Test.ThreadNet.Infra.Byron qualified as Byron +import Test.ThreadNet.Infra.Shelley qualified as Shelley +import Test.ThreadNet.Util.Seed (Seed (Seed), runGen) +import Test.Util.Slots (NumSlots (..)) {------------------------------------------------------------------------------- ProtocolInfo elaboration parameter types @@ -74,46 +90,53 @@ instance ToSlotLength ByronSlotLengthInSeconds where instance ToSlotLength ShelleySlotLengthInSeconds where toSlotLength (ShelleySlotLengthInSeconds n) = Time.slotLengthFromSec $ fromIntegral n -data Era = Byron - | Shelley - | Allegra - | Mary - | Alonzo - | Babbage - | Conway +data Era + = Byron + | Shelley + | Allegra + | Mary + | Alonzo + | Babbage + | Conway deriving (Show, Eq, Ord, Enum) protocolVersionZero :: SL.ProtVer protocolVersionZero = SL.ProtVer versionZero 0 - where - versionZero :: SL.Version - versionZero = SL.natVersion @0 + where + versionZero :: SL.Version + versionZero = SL.natVersion @0 hardForkOnDefaultProtocolVersions :: CardanoHardForkTriggers hardForkOnDefaultProtocolVersions = - CardanoHardForkTriggers - $ hpure CardanoTriggerHardForkAtDefaultVersion + CardanoHardForkTriggers $ + hpure CardanoTriggerHardForkAtDefaultVersion hardForkInto :: Era -> CardanoHardForkTriggers -hardForkInto Byron = hardForkOnDefaultProtocolVersions +hardForkInto Byron = hardForkOnDefaultProtocolVersions hardForkInto Shelley = - hardForkOnDefaultProtocolVersions - { triggerHardForkShelley = CardanoTriggerHardForkAtEpoch 0 } + hardForkOnDefaultProtocolVersions + { triggerHardForkShelley = CardanoTriggerHardForkAtEpoch 0 + } hardForkInto Allegra = - (hardForkInto Shelley) - { triggerHardForkAllegra = CardanoTriggerHardForkAtEpoch 0 } -hardForkInto Mary = - (hardForkInto Allegra) - { 
triggerHardForkMary = CardanoTriggerHardForkAtEpoch 0 } -hardForkInto Alonzo = - (hardForkInto Mary) - { triggerHardForkAlonzo = CardanoTriggerHardForkAtEpoch 0 } + (hardForkInto Shelley) + { triggerHardForkAllegra = CardanoTriggerHardForkAtEpoch 0 + } +hardForkInto Mary = + (hardForkInto Allegra) + { triggerHardForkMary = CardanoTriggerHardForkAtEpoch 0 + } +hardForkInto Alonzo = + (hardForkInto Mary) + { triggerHardForkAlonzo = CardanoTriggerHardForkAtEpoch 0 + } hardForkInto Babbage = - (hardForkInto Alonzo) - { triggerHardForkBabbage = CardanoTriggerHardForkAtEpoch 0 } + (hardForkInto Alonzo) + { triggerHardForkBabbage = CardanoTriggerHardForkAtEpoch 0 + } hardForkInto Conway = - (hardForkInto Babbage) - { triggerHardForkConway = CardanoTriggerHardForkAtEpoch 0 } + (hardForkInto Babbage) + { triggerHardForkConway = CardanoTriggerHardForkAtEpoch 0 + } {------------------------------------------------------------------------------- ProtocolInfo elaboration @@ -140,121 +163,117 @@ hardForkInto Conway = -- 'CardanoHardForkTriggers' parameter will determine to which era this ledger -- state belongs. See 'hardForkInto' and 'hardForkOnDefaultProtocolVersions' for -- more details on how to specify a value of this type. --- mkSimpleTestProtocolInfo :: - forall c - . (CardanoHardForkConstraints c) - => Shelley.DecentralizationParam - -- ^ Network decentralization parameter. - -> SecurityParam - -> ByronSlotLengthInSeconds - -> ShelleySlotLengthInSeconds - -> SL.ProtVer - -> CardanoHardForkTriggers - -> ProtocolInfo (CardanoBlock c) + forall c. + CardanoHardForkConstraints c => + -- | Network decentralization parameter. 
+ Shelley.DecentralizationParam -> + SecurityParam -> + ByronSlotLengthInSeconds -> + ShelleySlotLengthInSeconds -> + SL.ProtVer -> + CardanoHardForkTriggers -> + ProtocolInfo (CardanoBlock c) mkSimpleTestProtocolInfo - decentralizationParam - securityParam - byronSlotLenghtInSeconds - shelleySlotLengthInSeconds - protocolVersion - hardForkTriggers - = fst - $ mkTestProtocolInfo @IO - (CoreNodeId 0, coreNodeShelley) - shelleyGenesis - aByronProtocolVersion - SL.NeutralNonce - genesisByron - generatedSecretsByron - (Just $ PBftSignatureThreshold 1) - protocolVersion - hardForkTriggers - where + decentralizationParam + securityParam + byronSlotLenghtInSeconds + shelleySlotLengthInSeconds + protocolVersion + hardForkTriggers = + fst $ + mkTestProtocolInfo @IO + (CoreNodeId 0, coreNodeShelley) + shelleyGenesis + aByronProtocolVersion + SL.NeutralNonce + genesisByron + generatedSecretsByron + (Just $ PBftSignatureThreshold 1) + protocolVersion + hardForkTriggers + where aByronProtocolVersion = - CC.Update.ProtocolVersion 0 0 0 + CC.Update.ProtocolVersion 0 0 0 coreNodeShelley :: Shelley.CoreNode c coreNodeShelley = runGen initSeed $ Shelley.genCoreNode initialKESPeriod - where - initSeed :: Seed - initSeed = Seed 0 + where + initSeed :: Seed + initSeed = Seed 0 - initialKESPeriod :: SL.KESPeriod - initialKESPeriod = SL.KESPeriod 0 + initialKESPeriod :: SL.KESPeriod + initialKESPeriod = SL.KESPeriod 0 pbftParams :: PBftParams pbftParams = Byron.byronPBftParams securityParam (NumCoreNodes 1) generatedSecretsByron :: CC.Genesis.GeneratedSecrets (genesisByron, generatedSecretsByron) = - Byron.generateGenesisConfig (toSlotLength byronSlotLenghtInSeconds) pbftParams + Byron.generateGenesisConfig (toSlotLength byronSlotLenghtInSeconds) pbftParams shelleyGenesis :: ShelleyGenesis shelleyGenesis = - Shelley.mkGenesisConfig - protocolVersion - securityParam - activeSlotCoeff - decentralizationParam - maxLovelaceSupply - (toSlotLength shelleySlotLengthInSeconds) - 
(Shelley.mkKesConfig (Proxy @StandardCrypto) numSlots) - [coreNodeShelley] - where - maxLovelaceSupply :: Word64 - maxLovelaceSupply = 45000000000000000 + Shelley.mkGenesisConfig + protocolVersion + securityParam + activeSlotCoeff + decentralizationParam + maxLovelaceSupply + (toSlotLength shelleySlotLengthInSeconds) + (Shelley.mkKesConfig (Proxy @StandardCrypto) numSlots) + [coreNodeShelley] + where + maxLovelaceSupply :: Word64 + maxLovelaceSupply = 45000000000000000 - activeSlotCoeff :: Rational - activeSlotCoeff = 0.2 -- c.f. mainnet is more conservative, using 0.05 - - numSlots = NumSlots 100 + activeSlotCoeff :: Rational + activeSlotCoeff = 0.2 -- c.f. mainnet is more conservative, using 0.05 + numSlots = NumSlots 100 -- | A more generalized version of 'mkSimpleTestProtocolInfo'. --- mkTestProtocolInfo :: - forall m c - . (CardanoHardForkConstraints c, IOLike m) - => (CoreNodeId, Shelley.CoreNode c) - -- ^ Id of the node for which the protocol info will be elaborated. - -> ShelleyGenesis - -- ^ These nodes will be part of the initial delegation mapping, and funds + forall m c. + (CardanoHardForkConstraints c, IOLike m) => + -- | Id of the node for which the protocol info will be elaborated. + (CoreNodeId, Shelley.CoreNode c) -> + -- | These nodes will be part of the initial delegation mapping, and funds -- will be allocated to these nodes. - -> CC.Update.ProtocolVersion - -- ^ Protocol version of the Byron era proposal. - -> SL.Nonce - -> CC.Genesis.Config - -> CC.Genesis.GeneratedSecrets - -> Maybe PBftSignatureThreshold - -> SL.ProtVer - -- ^ See 'protocolInfoCardano' for the details of what is the + ShelleyGenesis -> + -- | Protocol version of the Byron era proposal. 
+ CC.Update.ProtocolVersion -> + SL.Nonce -> + CC.Genesis.Config -> + CC.Genesis.GeneratedSecrets -> + Maybe PBftSignatureThreshold -> + -- | See 'protocolInfoCardano' for the details of what is the -- relation between this version and any 'TriggerHardForkAtVersion' -- that __might__ appear in the 'CardanoHardForkTriggers' parameter. - -> CardanoHardForkTriggers - -- ^ Specification of the era to which the initial state should hard-fork to. - -> (ProtocolInfo (CardanoBlock c), m [BlockForging m (CardanoBlock c)]) + SL.ProtVer -> + -- | Specification of the era to which the initial state should hard-fork to. + CardanoHardForkTriggers -> + (ProtocolInfo (CardanoBlock c), m [BlockForging m (CardanoBlock c)]) mkTestProtocolInfo - (coreNodeId, coreNode) - shelleyGenesis - aByronProtocolVersion - initialNonce - genesisByron - generatedSecretsByron - aByronPbftSignatureThreshold - protocolVersion - hardForkTriggers - = + (coreNodeId, coreNode) + shelleyGenesis + aByronProtocolVersion + initialNonce + genesisByron + generatedSecretsByron + aByronPbftSignatureThreshold + protocolVersion + hardForkTriggers = protocolInfoCardano - (CardanoProtocolParams - ProtocolParamsByron { - byronGenesis = genesisByron + ( CardanoProtocolParams + ProtocolParamsByron + { byronGenesis = genesisByron , byronPbftSignatureThreshold = aByronPbftSignatureThreshold - , byronProtocolVersion = aByronProtocolVersion - , byronSoftwareVersion = softVerByron - , byronLeaderCredentials = Just leaderCredentialsByron + , byronProtocolVersion = aByronProtocolVersion + , byronSoftwareVersion = softVerByron + , byronLeaderCredentials = Just leaderCredentialsByron } - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = initialNonce + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = initialNonce , shelleyBasedLeaderCredentials = [leaderCredentialsShelley] } hardForkTriggers @@ -272,15 +291,14 @@ mkTestProtocolInfo ) emptyCheckpointsMap protocolVersion - ) - - where + ) + where 
leaderCredentialsByron :: ByronLeaderCredentials leaderCredentialsByron = - Byron.mkLeaderCredentials - genesisByron - generatedSecretsByron - coreNodeId + Byron.mkLeaderCredentials + genesisByron + generatedSecretsByron + coreNodeId -- This sets a vestigial header field which is not actually used for anything. softVerByron :: CC.Update.SoftwareVersion diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/Infra/ShelleyBasedHardFork.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/Infra/ShelleyBasedHardFork.hs index 6c6f044823..afec79d931 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/Infra/ShelleyBasedHardFork.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/Infra/ShelleyBasedHardFork.hs @@ -18,80 +18,90 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE ViewPatterns #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Test infrastructure to test hard-forking from one Shelley-based era to -- another, e.g., Shelley to Allegra. 
-module Test.ThreadNet.Infra.ShelleyBasedHardFork ( - -- * Blocks +module Test.ThreadNet.Infra.ShelleyBasedHardFork + ( -- * Blocks ShelleyBasedHardForkBlock , ShelleyBasedHardForkEras + -- * Transactions , pattern GenTxShelley1 , pattern GenTxShelley2 + -- * Node , ShelleyBasedHardForkConstraints , protocolInfoShelleyBasedHardFork ) where -import qualified Cardano.Ledger.Api.Transition as L -import Cardano.Ledger.Binary.Decoding (decShareCBOR, decodeMap, - decodeMemPack, internsFromMap) -import Cardano.Ledger.Binary.Encoding (encodeMap, encodeMemPack, - toPlainEncoding) -import qualified Cardano.Ledger.Core as SL -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.LedgerState as SL -import qualified Cardano.Ledger.UMap as SL -import Codec.CBOR.Decoding -import Codec.CBOR.Encoding -import Control.Monad.Except (runExcept) -import Data.Coerce -import qualified Data.Map.Strict as Map -import Data.MemPack -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Index (Index (..), hcimap) -import qualified Data.SOP.InPairs as InPairs -import Data.SOP.Strict -import qualified Data.SOP.Tails as Tails -import qualified Data.SOP.Telescope as Telescope -import Data.Void (Void) -import Lens.Micro ((^.)) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block.Abstract (BlockProtocol) -import Ouroboros.Consensus.Block.Forging (BlockForging) -import Ouroboros.Consensus.Cardano.CanHardFork - (crossEraForecastAcrossShelley, - translateChainDepStateAcrossShelley) -import Ouroboros.Consensus.Cardano.Node (TriggerHardFork (..)) -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Embed.Binary -import Ouroboros.Consensus.HardFork.Combinator.Serialisation -import Ouroboros.Consensus.HardFork.Combinator.State.Types as HFC -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.Ledger.Abstract -import 
Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Node -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Protocol.TPraos -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Node -import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (eitherToMaybe) -import Ouroboros.Consensus.Util.IndexedMemPack -import Ouroboros.Consensus.Util.IOLike (IOLike) -import Test.ThreadNet.TxGen -import Test.ThreadNet.TxGen.Shelley () +import Cardano.Ledger.Api.Transition qualified as L +import Cardano.Ledger.Binary.Decoding + ( decShareCBOR + , decodeMap + , decodeMemPack + , internsFromMap + ) +import Cardano.Ledger.Binary.Encoding + ( encodeMap + , encodeMemPack + , toPlainEncoding + ) +import Cardano.Ledger.Core qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.LedgerState qualified as SL +import Cardano.Ledger.UMap qualified as SL +import Codec.CBOR.Decoding +import Codec.CBOR.Encoding +import Control.Monad.Except (runExcept) +import Data.Coerce +import Data.Map.Strict qualified as Map +import Data.MemPack +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Functors (Flip (..)) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Index (Index (..), hcimap) +import Data.SOP.Strict +import Data.SOP.Tails qualified as Tails +import Data.SOP.Telescope qualified as Telescope +import Data.Void (Void) +import Lens.Micro ((^.)) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block.Abstract (BlockProtocol) +import Ouroboros.Consensus.Block.Forging (BlockForging) +import 
Ouroboros.Consensus.Cardano.CanHardFork + ( crossEraForecastAcrossShelley + , translateChainDepStateAcrossShelley + ) +import Ouroboros.Consensus.Cardano.Node (TriggerHardFork (..)) +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.Embed.Binary +import Ouroboros.Consensus.HardFork.Combinator.Serialisation +import Ouroboros.Consensus.HardFork.Combinator.State.Types as HFC +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Node +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Protocol.TPraos +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Node +import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (eitherToMaybe) +import Ouroboros.Consensus.Util.IOLike (IOLike) +import Ouroboros.Consensus.Util.IndexedMemPack +import Test.ThreadNet.TxGen +import Test.ThreadNet.TxGen.Shelley () {------------------------------------------------------------------------------- Block type @@ -99,7 +109,7 @@ import Test.ThreadNet.TxGen.Shelley () -- | Two eras, both Shelley-based. 
type ShelleyBasedHardForkEras proto1 era1 proto2 era2 = - '[ShelleyBlock proto1 era1, ShelleyBlock proto2 era2] + '[ShelleyBlock proto1 era1, ShelleyBlock proto2 era2] type ShelleyBasedHardForkBlock proto1 era1 proto2 era2 = HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2) @@ -112,25 +122,25 @@ type ShelleyBasedHardForkGenTx proto1 era1 proto2 era2 = GenTx (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) pattern GenTxShelley1 :: - GenTx (ShelleyBlock proto1 era1) - -> ShelleyBasedHardForkGenTx proto1 era1 proto2 era2 + GenTx (ShelleyBlock proto1 era1) -> + ShelleyBasedHardForkGenTx proto1 era1 proto2 era2 pattern GenTxShelley1 tx = HardForkGenTx (OneEraGenTx (Z tx)) pattern GenTxShelley2 :: - GenTx (ShelleyBlock proto2 era2) - -> ShelleyBasedHardForkGenTx proto1 era1 proto2 era2 + GenTx (ShelleyBlock proto2 era2) -> + ShelleyBasedHardForkGenTx proto1 era1 proto2 era2 pattern GenTxShelley2 tx = HardForkGenTx (OneEraGenTx (S (Z tx))) {-# COMPLETE GenTxShelley1, GenTxShelley2 #-} pattern ShelleyBasedHardForkNodeToNodeVersionMax :: - BlockNodeToNodeVersion (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) + BlockNodeToNodeVersion (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) pattern ShelleyBasedHardForkNodeToNodeVersionMax = - HardForkNodeToNodeEnabled - HardForkSpecificNodeToNodeVersionMax - ( WrapNodeToNodeVersion ShelleyNodeToNodeVersionMax - :* WrapNodeToNodeVersion ShelleyNodeToNodeVersionMax - :* Nil + HardForkNodeToNodeEnabled + HardForkSpecificNodeToNodeVersionMax + ( WrapNodeToNodeVersion ShelleyNodeToNodeVersionMax + :* WrapNodeToNodeVersion ShelleyNodeToNodeVersionMax + :* Nil ) pattern HardForkSpecificNodeToNodeVersionMax :: HardForkSpecificNodeToNodeVersion @@ -144,13 +154,13 @@ pattern ShelleyNodeToNodeVersionMax <- ((== maxBound) -> True) ShelleyNodeToNodeVersionMax = maxBound pattern ShelleyBasedHardForkNodeToClientVersionMax :: - BlockNodeToClientVersion (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) + 
BlockNodeToClientVersion (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) pattern ShelleyBasedHardForkNodeToClientVersionMax = - HardForkNodeToClientEnabled - HardForkSpecificNodeToClientVersionMax - ( EraNodeToClientEnabled ShelleyNodeToClientVersionMax - :* EraNodeToClientEnabled ShelleyNodeToClientVersionMax - :* Nil + HardForkNodeToClientEnabled + HardForkSpecificNodeToClientVersionMax + ( EraNodeToClientEnabled ShelleyNodeToClientVersionMax + :* EraNodeToClientEnabled ShelleyNodeToClientVersionMax + :* Nil ) pattern HardForkSpecificNodeToClientVersionMax :: HardForkSpecificNodeToClientVersion @@ -178,18 +188,14 @@ type ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 = , BlockProtocol (ShelleyBlock proto2 era2) ~ proto2 , TranslateTxMeasure (TxMeasure (ShelleyBlock proto1 era1)) (TxMeasure (ShelleyBlock proto2 era2)) , SL.PreviousEra era2 ~ era1 - - , SL.TranslateEra era2 SL.NewEpochState - , SL.TranslateEra era2 WrapTx - - , SL.TranslationError era2 SL.NewEpochState ~ Void - - -- At the moment, fix the protocols together - , ProtoCrypto proto1 ~ ProtoCrypto proto2 + , SL.TranslateEra era2 SL.NewEpochState + , SL.TranslateEra era2 WrapTx + , SL.TranslationError era2 SL.NewEpochState ~ Void + , -- At the moment, fix the protocols together + ProtoCrypto proto1 ~ ProtoCrypto proto2 , PraosCrypto (ProtoCrypto proto1) , proto1 ~ TPraos (ProtoCrypto proto1) , proto1 ~ proto2 - , MemPack (TxOut (LedgerState (ShelleyBlock proto1 era1))) , MemPack (TxOut (LedgerState (ShelleyBlock proto2 era2))) ) @@ -216,91 +222,105 @@ instance TranslateTxMeasure AlonzoMeasure ConwayMeasure where instance TranslateTxMeasure ConwayMeasure ConwayMeasure where translateTxMeasure = id -instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => SerialiseHFC (ShelleyBasedHardForkEras proto1 era1 proto2 era2) - -- use defaults +instance + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + SerialiseHFC (ShelleyBasedHardForkEras proto1 era1 proto2 era2) + +-- use 
defaults -instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => CanHardFork (ShelleyBasedHardForkEras proto1 era1 proto2 era2) where - type HardForkTxMeasure (ShelleyBasedHardForkEras proto1 era1 proto2 era2) = +instance + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + CanHardFork (ShelleyBasedHardForkEras proto1 era1 proto2 era2) + where + type + HardForkTxMeasure (ShelleyBasedHardForkEras proto1 era1 proto2 era2) = TxMeasure (ShelleyBlock proto2 era2) - hardForkEraTranslation = EraTranslation { - translateLedgerState = PCons translateLedgerState PNil - , translateLedgerTables = PCons translateLedgerTables PNil + hardForkEraTranslation = + EraTranslation + { translateLedgerState = PCons translateLedgerState PNil + , translateLedgerTables = PCons translateLedgerTables PNil , translateChainDepState = PCons translateChainDepStateAcrossShelley PNil - , crossEraForecast = PCons crossEraForecastAcrossShelley PNil + , crossEraForecast = PCons crossEraForecastAcrossShelley PNil } - where - translateLedgerState :: - InPairs.RequiringBoth - WrapLedgerConfig - TranslateLedgerState - (ShelleyBlock proto1 era1) - (ShelleyBlock proto2 era2) - translateLedgerState = - InPairs.RequireBoth - $ \_cfg1 cfg2 -> - HFC.TranslateLedgerState { - translateLedgerStateWith = \_epochNo -> + where + translateLedgerState :: + InPairs.RequiringBoth + WrapLedgerConfig + TranslateLedgerState + (ShelleyBlock proto1 era1) + (ShelleyBlock proto2 era2) + translateLedgerState = + InPairs.RequireBoth $ + \_cfg1 cfg2 -> + HFC.TranslateLedgerState + { translateLedgerStateWith = \_epochNo -> noNewTickingDiffs - . unFlip - . unComp - . SL.translateEra' - (shelleyLedgerTranslationContext (unwrapLedgerConfig cfg2)) - . Comp - . Flip - } + . unFlip + . unComp + . SL.translateEra' + (shelleyLedgerTranslationContext (unwrapLedgerConfig cfg2)) + . Comp + . 
Flip + } - translateLedgerTables :: - TranslateLedgerTables - (ShelleyBlock proto1 era1) - (ShelleyBlock proto2 era2) - translateLedgerTables = HFC.TranslateLedgerTables { - translateTxInWith = coerce - , translateTxOutWith = SL.upgradeTxOut - } + translateLedgerTables :: + TranslateLedgerTables + (ShelleyBlock proto1 era1) + (ShelleyBlock proto2 era2) + translateLedgerTables = + HFC.TranslateLedgerTables + { translateTxInWith = coerce + , translateTxOutWith = SL.upgradeTxOut + } hardForkChainSel = Tails.mk2 CompareSameSelectView hardForkInjectTxs = - InPairs.mk2 - $ InPairs.RequireBoth $ \_cfg1 cfg2 -> + InPairs.mk2 $ + InPairs.RequireBoth $ \_cfg1 cfg2 -> let ctxt = shelleyLedgerTranslationContext (unwrapLedgerConfig cfg2) - in - Pair2 - (InjectTx (translateTx ctxt)) - (InjectValidatedTx (translateValidatedTx ctxt)) - where - translateTx :: - SL.TranslationContext era2 - -> GenTx (ShelleyBlock proto era1) - -> Maybe (GenTx (ShelleyBlock proto era2)) - translateTx transCtxt = - fmap unComp - . eitherToMaybe . runExcept . SL.translateEra transCtxt + in Pair2 + (InjectTx (translateTx ctxt)) + (InjectValidatedTx (translateValidatedTx ctxt)) + where + translateTx :: + SL.TranslationContext era2 -> + GenTx (ShelleyBlock proto era1) -> + Maybe (GenTx (ShelleyBlock proto era2)) + translateTx transCtxt = + fmap unComp + . eitherToMaybe + . runExcept + . SL.translateEra transCtxt . Comp - translateValidatedTx :: - SL.TranslationContext era2 - -> WrapValidatedGenTx (ShelleyBlock proto era1) - -> Maybe (WrapValidatedGenTx (ShelleyBlock proto era2)) - translateValidatedTx transCtxt = - fmap unComp - . eitherToMaybe . runExcept . SL.translateEra transCtxt - . Comp + translateValidatedTx :: + SL.TranslationContext era2 -> + WrapValidatedGenTx (ShelleyBlock proto era1) -> + Maybe (WrapValidatedGenTx (ShelleyBlock proto era2)) + translateValidatedTx transCtxt = + fmap unComp + . eitherToMaybe + . runExcept + . SL.translateEra transCtxt + . 
Comp hardForkInjTxMeasure = \case - ( Z (WrapTxMeasure x)) -> translateTxMeasure x - S (Z (WrapTxMeasure x)) -> x + (Z (WrapTxMeasure x)) -> translateTxMeasure x + S (Z (WrapTxMeasure x)) -> x -instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => SupportedNetworkProtocolVersion (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) where - supportedNodeToNodeVersions _ = Map.fromList $ +instance + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + SupportedNetworkProtocolVersion (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) + where + supportedNodeToNodeVersions _ = + Map.fromList $ [ (maxBound, ShelleyBasedHardForkNodeToNodeVersionMax) ] - supportedNodeToClientVersions _ = Map.fromList $ + supportedNodeToClientVersions _ = + Map.fromList $ [ (maxBound, ShelleyBasedHardForkNodeToClientVersionMax) ] @@ -313,41 +333,45 @@ instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 answerShelleyBasedQueryHF :: ( xs ~ '[ShelleyBlock proto1 era1, ShelleyBlock proto2 era2] , ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - ) - => ( forall blk. - IsShelleyBlock blk - => Index xs blk - -> ExtLedgerCfg blk - -> BlockQuery blk footprint result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m result - ) - -> Index xs x - -> ExtLedgerCfg x - -> BlockQuery x footprint result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m result + ) => + ( forall blk. 
+ IsShelleyBlock blk => + Index xs blk -> + ExtLedgerCfg blk -> + BlockQuery blk footprint result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m result + ) -> + Index xs x -> + ExtLedgerCfg x -> + BlockQuery x footprint result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m result answerShelleyBasedQueryHF f idx cfgs q forker = case idx of - IZ -> f idx cfgs q forker - IS IZ -> f idx cfgs q forker - IS (IS idx') -> case idx' of {} + IZ -> f idx cfgs q forker + IS IZ -> f idx cfgs q forker + IS (IS idx') -> case idx' of {} -instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => BlockSupportsHFLedgerQuery '[ShelleyBlock proto1 era1, ShelleyBlock proto2 era2] where +instance + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + BlockSupportsHFLedgerQuery '[ShelleyBlock proto1 era1, ShelleyBlock proto2 era2] + where answerBlockQueryHFLookup = answerShelleyBasedQueryHF - (\idx -> answerShelleyLookupQueries - (injectLedgerTables idx) - (ejectHardForkTxOutDefault idx) - (ejectCanonicalTxIn idx) + ( \idx -> + answerShelleyLookupQueries + (injectLedgerTables idx) + (ejectHardForkTxOutDefault idx) + (ejectCanonicalTxIn idx) ) answerBlockQueryHFTraverse = answerShelleyBasedQueryHF - (\idx -> answerShelleyTraversingQueries - (ejectHardForkTxOutDefault idx) - (ejectCanonicalTxIn idx) - (queryLedgerGetTraversingFilter @('[ShelleyBlock proto1 era1, ShelleyBlock proto2 era2]) idx) + ( \idx -> + answerShelleyTraversingQueries + (ejectHardForkTxOutDefault idx) + (ejectCanonicalTxIn idx) + (queryLedgerGetTraversingFilter @('[ShelleyBlock proto1 era1, ShelleyBlock proto2 era2]) idx) ) queryLedgerGetTraversingFilter IZ q = \case @@ -363,21 +387,22 @@ instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 -------------------------------------------------------------------------------} protocolInfoShelleyBasedHardFork :: - forall m proto1 era1 proto2 era2. 
- (IOLike m, ShelleyBasedHardForkConstraints proto1 era1 proto2 era2) - => ProtocolParamsShelleyBased (ProtoCrypto proto1) - -> SL.ProtVer - -> SL.ProtVer - -> L.TransitionConfig era2 - -> TriggerHardFork - -> ( ProtocolInfo (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) - , m [BlockForging m (ShelleyBasedHardForkBlock proto1 era1 proto2 era2)] - ) -protocolInfoShelleyBasedHardFork protocolParamsShelleyBased - protVer1 - protVer2 - transCfg2 - hardForkTrigger = + forall m proto1 era1 proto2 era2. + (IOLike m, ShelleyBasedHardForkConstraints proto1 era1 proto2 era2) => + ProtocolParamsShelleyBased (ProtoCrypto proto1) -> + SL.ProtVer -> + SL.ProtVer -> + L.TransitionConfig era2 -> + TriggerHardFork -> + ( ProtocolInfo (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) + , m [BlockForging m (ShelleyBasedHardForkBlock proto1 era1 proto2 era2)] + ) +protocolInfoShelleyBasedHardFork + protocolParamsShelleyBased + protVer1 + protVer2 + transCfg2 + hardForkTrigger = protocolInfoBinary -- Era 1 protocolInfo1 @@ -391,9 +416,9 @@ protocolInfoShelleyBasedHardFork protocolParamsShelleyBased eraParams2 tpraosParams toPartialLedgerConfig2 - where - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce + where + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce , shelleyBasedLeaderCredentials } = protocolParamsShelleyBased @@ -405,19 +430,20 @@ protocolInfoShelleyBasedHardFork protocolParamsShelleyBased protocolInfo1 :: ProtocolInfo (ShelleyBlock proto1 era1) blockForging1 :: m [BlockForging m (ShelleyBlock proto1 era1)] (protocolInfo1, blockForging1) = - protocolInfoTPraosShelleyBased - protocolParamsShelleyBased - (transCfg2 ^. L.tcPreviousEraConfigL) - protVer1 + protocolInfoTPraosShelleyBased + protocolParamsShelleyBased + (transCfg2 ^. 
L.tcPreviousEraConfigL) + protVer1 eraParams1 :: History.EraParams eraParams1 = shelleyEraParams genesis toPartialLedgerConfig1 :: - LedgerConfig (ShelleyBlock proto1 era1) - -> PartialLedgerConfig (ShelleyBlock proto1 era1) - toPartialLedgerConfig1 cfg = ShelleyPartialLedgerConfig { - shelleyLedgerConfig = cfg + LedgerConfig (ShelleyBlock proto1 era1) -> + PartialLedgerConfig (ShelleyBlock proto1 era1) + toPartialLedgerConfig1 cfg = + ShelleyPartialLedgerConfig + { shelleyLedgerConfig = cfg , shelleyTriggerHardFork = hardForkTrigger } @@ -426,22 +452,23 @@ protocolInfoShelleyBasedHardFork protocolParamsShelleyBased protocolInfo2 :: ProtocolInfo (ShelleyBlock proto2 era2) blockForging2 :: m [BlockForging m (ShelleyBlock proto2 era2)] (protocolInfo2, blockForging2) = - protocolInfoTPraosShelleyBased - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce - , shelleyBasedLeaderCredentials - } - transCfg2 - protVer2 + protocolInfoTPraosShelleyBased + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce + , shelleyBasedLeaderCredentials + } + transCfg2 + protVer2 eraParams2 :: History.EraParams eraParams2 = shelleyEraParams genesis toPartialLedgerConfig2 :: - LedgerConfig (ShelleyBlock proto2 era2) - -> PartialLedgerConfig (ShelleyBlock proto2 era2) - toPartialLedgerConfig2 cfg = ShelleyPartialLedgerConfig { - shelleyLedgerConfig = cfg + LedgerConfig (ShelleyBlock proto2 era2) -> + PartialLedgerConfig (ShelleyBlock proto2 era2) + toPartialLedgerConfig2 cfg = + ShelleyPartialLedgerConfig + { shelleyLedgerConfig = cfg , shelleyTriggerHardFork = TriggerHardForkNotDuringThisExecution } @@ -450,110 +477,155 @@ protocolInfoShelleyBasedHardFork protocolParamsShelleyBased -------------------------------------------------------------------------------} -- | Use a generic implementation for 'TxGen' -instance ( TxGen (ShelleyBlock proto1 era1) - , TxGen (ShelleyBlock proto2 era2) - , ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - ) => TxGen 
(ShelleyBasedHardForkBlock proto1 era1 proto2 era2) where - type TxGenExtra (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) = - NP WrapTxGenExtra (ShelleyBasedHardForkEras proto1 era1 proto2 era2) +instance + ( TxGen (ShelleyBlock proto1 era1) + , TxGen (ShelleyBlock proto2 era2) + , ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 + ) => + TxGen (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) + where + type + TxGenExtra (ShelleyBasedHardForkBlock proto1 era1 proto2 era2) = + NP WrapTxGenExtra (ShelleyBasedHardForkEras proto1 era1 proto2 era2) testGenTxs = testGenTxsHfc {------------------------------------------------------------------------------- Canonical TxIn -------------------------------------------------------------------------------} -instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => HasCanonicalTxIn (ShelleyBasedHardForkEras proto1 era1 proto2 era2) where - newtype instance CanonicalTxIn (ShelleyBasedHardForkEras proto1 era1 proto2 era2) = - ShelleyHFCTxIn { - getShelleyHFCTxIn :: SL.TxIn - } +instance + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + HasCanonicalTxIn (ShelleyBasedHardForkEras proto1 era1 proto2 era2) + where + newtype CanonicalTxIn (ShelleyBasedHardForkEras proto1 era1 proto2 era2) + = ShelleyHFCTxIn + { getShelleyHFCTxIn :: SL.TxIn + } deriving stock (Show, Eq, Ord) - deriving newtype (NoThunks) + deriving newtype NoThunks - injectCanonicalTxIn IZ txIn = ShelleyHFCTxIn txIn - injectCanonicalTxIn (IS IZ) txIn = ShelleyHFCTxIn (coerce txIn) - injectCanonicalTxIn (IS (IS idx')) _ = case idx' of {} + injectCanonicalTxIn IZ txIn = ShelleyHFCTxIn txIn + injectCanonicalTxIn (IS IZ) txIn = ShelleyHFCTxIn (coerce txIn) + injectCanonicalTxIn (IS (IS idx')) _ = case idx' of {} - ejectCanonicalTxIn IZ txIn = getShelleyHFCTxIn txIn - ejectCanonicalTxIn (IS IZ) txIn = coerce (getShelleyHFCTxIn txIn) - ejectCanonicalTxIn (IS (IS idx')) _ = case idx' of {} + ejectCanonicalTxIn IZ txIn = getShelleyHFCTxIn 
txIn + ejectCanonicalTxIn (IS IZ) txIn = coerce (getShelleyHFCTxIn txIn) + ejectCanonicalTxIn (IS (IS idx')) _ = case idx' of {} deriving newtype instance - ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => MemPack (CanonicalTxIn (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + MemPack (CanonicalTxIn (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) -instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => HasHardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2) where - type instance HardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2) = - DefaultHardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2) +instance + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + HasHardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2) + where + type + HardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2) = + DefaultHardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2) injectHardForkTxOut = injectHardForkTxOutDefault ejectHardForkTxOut = ejectHardForkTxOutDefault -instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => SerializeTablesWithHint (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) where - encodeTablesWithHint :: LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) EmptyMK - -> LedgerTables (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) ValuesMK - -> Encoding +instance + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + SerializeTablesWithHint + (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) + where + encodeTablesWithHint :: + LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) EmptyMK -> + LedgerTables + (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) + ValuesMK -> + Encoding encodeTablesWithHint 
(HardForkLedgerState (HardForkState idx)) (LedgerTables (ValuesMK tbs)) = let - np = (Fn $ const $ K $ encOne (Proxy @era1)) - :* (Fn $ const $ K $ encOne (Proxy @era2)) - :* Nil - in hcollapse $ hap np $ Telescope.tip idx + np = + (Fn $ const $ K $ encOne (Proxy @era1)) + :* (Fn $ const $ K $ encOne (Proxy @era2)) + :* Nil + in + hcollapse $ hap np $ Telescope.tip idx where - encOne :: forall era. SL.Era era => Proxy era -> Encoding - encOne _ = toPlainEncoding (SL.eraProtVerLow @era) - $ encodeMap (encodeMemPack . getShelleyHFCTxIn) (\case - Z txout -> encodeMemPack $ unwrapTxOut txout - S (Z txout) -> encodeMemPack $ unwrapTxOut txout - ) tbs - - decodeTablesWithHint :: forall s. LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) EmptyMK - -> Decoder s (LedgerTables (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) ValuesMK) + encOne :: forall era. SL.Era era => Proxy era -> Encoding + encOne _ = + toPlainEncoding (SL.eraProtVerLow @era) $ + encodeMap + (encodeMemPack . getShelleyHFCTxIn) + ( \case + Z txout -> encodeMemPack $ unwrapTxOut txout + S (Z txout) -> encodeMemPack $ unwrapTxOut txout + ) + tbs + + decodeTablesWithHint :: + forall s. + LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) EmptyMK -> + Decoder + s + ( LedgerTables + (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) + ValuesMK + ) decodeTablesWithHint (HardForkLedgerState (HardForkState idx)) = let - np = (Fn $ Comp . fmap K . getOne (Z . WrapTxOut) . unFlip . currentState) - :* (Fn $ Comp . fmap K . getOne (S . Z . WrapTxOut) . unFlip . currentState) - :* Nil - in hcollapse <$> (hsequence' $ hap np $ Telescope.tip idx) + np = + (Fn $ Comp . fmap K . getOne (Z . WrapTxOut) . unFlip . currentState) + :* (Fn $ Comp . fmap K . getOne (S . Z . WrapTxOut) . unFlip . 
currentState) + :* Nil + in + hcollapse <$> (hsequence' $ hap np $ Telescope.tip idx) where - getOne :: forall proto era. ShelleyCompatible proto era - => (TxOut (LedgerState (ShelleyBlock proto era)) -> TxOut (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2)))) - -> LedgerState (ShelleyBlock proto era) EmptyMK - -> Decoder s (LedgerTables (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) ValuesMK) - getOne toShelleyTxOut st = - let certInterns = - internsFromMap - $ shelleyLedgerState st - ^. SL.nesEsL - . SL.esLStateL - . SL.lsCertStateL - . SL.certDStateL - . SL.dsUnifiedL - . SL.umElemsL - in LedgerTables . ValuesMK <$> SL.eraDecoder @era (decodeMap (ShelleyHFCTxIn <$> decodeMemPack) (toShelleyTxOut <$> decShareCBOR certInterns)) - -instance ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 - => IndexedMemPack - (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) EmptyMK) - (DefaultHardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) where + getOne :: + forall proto era. + ShelleyCompatible proto era => + ( TxOut (LedgerState (ShelleyBlock proto era)) -> + TxOut (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) + ) -> + LedgerState (ShelleyBlock proto era) EmptyMK -> + Decoder + s + ( LedgerTables + (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2))) + ValuesMK + ) + getOne toShelleyTxOut st = + let certInterns = + internsFromMap $ + shelleyLedgerState st + ^. SL.nesEsL + . SL.esLStateL + . SL.lsCertStateL + . SL.certDStateL + . SL.dsUnifiedL + . SL.umElemsL + in LedgerTables . 
ValuesMK + <$> SL.eraDecoder @era + (decodeMap (ShelleyHFCTxIn <$> decodeMemPack) (toShelleyTxOut <$> decShareCBOR certInterns)) + +instance + ShelleyBasedHardForkConstraints proto1 era1 proto2 era2 => + IndexedMemPack + (LedgerState (HardForkBlock (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) EmptyMK) + (DefaultHardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) + where indexedTypeName _ = typeName @(DefaultHardForkTxOut (ShelleyBasedHardForkEras proto1 era1 proto2 era2)) indexedPackedByteCount _ txout = hcollapse $ - hcmap (Proxy @MemPackTxOut) - (K . packedByteCount . unwrapTxOut) - txout + hcmap + (Proxy @MemPackTxOut) + (K . packedByteCount . unwrapTxOut) + txout indexedPackM _ = - hcollapse . hcimap - (Proxy @MemPackTxOut) - (\_ (WrapTxOut txout) -> K $ do - packM txout - ) + hcollapse + . hcimap + (Proxy @MemPackTxOut) + ( \_ (WrapTxOut txout) -> K $ do + packM txout + ) indexedUnpackM (HardForkLedgerState (HardForkState idx)) = do hsequence' $ hcmap - (Proxy @MemPackTxOut) - (const $ Comp $ WrapTxOut <$> unpackM) - $ Telescope.tip idx + (Proxy @MemPackTxOut) + (const $ Comp $ WrapTxOut <$> unpackM) + $ Telescope.tip idx diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/Infra/TwoEras.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/Infra/TwoEras.hs index 38625a7bfc..454752806c 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/Infra/TwoEras.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/Infra/TwoEras.hs @@ -7,12 +7,13 @@ {-# LANGUAGE TypeOperators #-} -- | Definitions used in ThreadNet tests that involve two eras. -module Test.ThreadNet.Infra.TwoEras ( - -- * Generators +module Test.ThreadNet.Infra.TwoEras + ( -- * Generators Partition (..) , genNonce , genPartition , genTestConfig + -- * Era inspection , ReachesEra2 (..) 
, activeSlotCoeff @@ -23,6 +24,7 @@ module Test.ThreadNet.Infra.TwoEras ( , partitionExclusiveUpperBound , secondEraOverlaySlots , shelleyEpochSize + -- * Properties , label_ReachesEra2 , label_hadActiveNonOverlaySlots @@ -32,107 +34,116 @@ module Test.ThreadNet.Infra.TwoEras ( , tabulatePartitionPosition ) where -import qualified Cardano.Chain.Common as CC.Common -import Cardano.Chain.ProtocolConstants (kEpochSlots) -import Cardano.Chain.Slotting (unEpochSlots) -import Cardano.Ledger.BaseTypes (unNonZero) -import qualified Cardano.Ledger.BaseTypes as SL -import qualified Cardano.Protocol.TPraos.Rules.Overlay as SL -import Cardano.Slotting.EpochInfo -import Cardano.Slotting.Slot (EpochNo (..), EpochSize (..), - SlotNo (..)) -import Control.Exception (assert) -import Data.Functor ((<&>)) -import qualified Data.Map.Strict as Map -import Data.Maybe (isJust) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.SOP.Strict (NS (..)) -import Data.Word (Word64) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.HardFork.Combinator (HardForkBlock (..), - OneEraBlock (..)) -import qualified Ouroboros.Consensus.HardFork.History.Util as Util -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Test.QuickCheck -import Test.ThreadNet.General -import qualified Test.ThreadNet.Infra.Shelley as Shelley -import Test.ThreadNet.Network (CalcMessageDelay (..), NodeOutput (..)) -import Test.ThreadNet.Util.Expectations (NumBlocks (..)) -import qualified Test.ThreadNet.Util.NodeTopology as Topo -import qualified Test.Util.BoolProps as BoolProps -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) +import Cardano.Chain.Common qualified as CC.Common +import Cardano.Chain.ProtocolConstants (kEpochSlots) +import Cardano.Chain.Slotting (unEpochSlots) +import Cardano.Ledger.BaseTypes (unNonZero) +import Cardano.Ledger.BaseTypes qualified as SL +import 
Cardano.Protocol.TPraos.Rules.Overlay qualified as SL +import Cardano.Slotting.EpochInfo +import Cardano.Slotting.Slot + ( EpochNo (..) + , EpochSize (..) + , SlotNo (..) + ) +import Control.Exception (assert) +import Data.Functor ((<&>)) +import Data.Map.Strict qualified as Map +import Data.Maybe (isJust) +import Data.SOP.Strict (NS (..)) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.Combinator + ( HardForkBlock (..) + , OneEraBlock (..) + ) +import Ouroboros.Consensus.HardFork.History.Util qualified as Util +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Test.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Infra.Shelley qualified as Shelley +import Test.ThreadNet.Network (CalcMessageDelay (..), NodeOutput (..)) +import Test.ThreadNet.Util.Expectations (NumBlocks (..)) +import Test.ThreadNet.Util.NodeTopology qualified as Topo +import Test.Util.BoolProps qualified as BoolProps +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) -- | When and for how long the nodes are partitioned -- -- The nodes are divided via message delays into two sub-networks by the parity -- of their 'CoreNodeId'. 
-data Partition = Partition SlotNo NumSlots - -- ^ the scheduled start slot and duration (which includes the start slot) - deriving (Show) +data Partition + = -- | the scheduled start slot and duration (which includes the start slot) + Partition SlotNo NumSlots + deriving Show partitionExclusiveUpperBound :: Partition -> SlotNo partitionExclusiveUpperBound (Partition s (NumSlots dur)) = - Util.addSlots dur s + Util.addSlots dur s genNonce :: Gen SL.Nonce -genNonce = frequency - [ (1, pure SL.NeutralNonce) - , (9, SL.mkNonceFromNumber <$> arbitrary) - ] +genNonce = + frequency + [ (1, pure SL.NeutralNonce) + , (9, SL.mkNonceFromNumber <$> arbitrary) + ] -- | Generate a 'setupTestConfig' relevant to the case where the first era (eg -- Byron) lasts for one epoch and the second era (eg Shelley) lasts for an -- interesting number of slots. genTestConfig :: SecurityParam -> (EpochSize, EpochSize) -> Gen TestConfig genTestConfig k (EpochSize epochSize1, EpochSize epochSize2) = do - initSeed <- arbitrary + initSeed <- arbitrary - numSlots <- do - let wiggle = min epochSize1 (2 * unNonZero (maxRollbacks k)) + numSlots <- do + let wiggle = min epochSize1 (2 * unNonZero (maxRollbacks k)) - approachSecondEra = - choose (0, wiggle) <&> \t -> epochSize1 + t - wiggle - reachSecondEra = - choose (1, epochSize2) <&> \t -> epochSize1 + t - reachThirdEpochOfSecondEra = - choose (1, epochSize2) <&> \t -> epochSize1 + 2 * epochSize2 + t + approachSecondEra = + choose (0, wiggle) <&> \t -> epochSize1 + t - wiggle + reachSecondEra = + choose (1, epochSize2) <&> \t -> epochSize1 + t + reachThirdEpochOfSecondEra = + choose (1, epochSize2) <&> \t -> epochSize1 + 2 * epochSize2 + t - fmap NumSlots $ frequency $ + fmap NumSlots $ + frequency $ [ (05, approachSecondEra) , (64, reachSecondEra) , (31, reachThirdEpochOfSecondEra) ] - -- This test has more slots than most, so we de-emphasize the relatively - -- expensive n=5 case. 
For example: - -- - -- > 250 tests with 30% 2, 30% 3, 30% 4, and 10% 5 took 500s - -- > 250 tests with 100% 5 took 1000s - ncn <- frequency [(9, choose (2, 4)), (1, pure 5)] - - -- Ensure that each partition class is internally connected. - nodeTopology <- do - topo0 <- Topo.genNodeTopology $ NumCoreNodes ncn - oddTopo <- do - -- eg nodes 1 3 for ncn = 5 - topo <- Topo.genNodeTopology $ NumCoreNodes $ div ncn 2 - let rename (CoreNodeId i) = CoreNodeId (2 * i + 1) - pure $ Topo.mapNodeTopology rename topo - evenTopo <- do - -- eg nodes 0 2 4 for ncn = 5 - topo <- Topo.genNodeTopology $ NumCoreNodes $ ncn - div ncn 2 - let rename (CoreNodeId i) = CoreNodeId (2 * i) - pure $ Topo.mapNodeTopology rename topo - pure $ - Topo.unionNodeTopology evenTopo $ + -- This test has more slots than most, so we de-emphasize the relatively + -- expensive n=5 case. For example: + -- + -- > 250 tests with 30% 2, 30% 3, 30% 4, and 10% 5 took 500s + -- > 250 tests with 100% 5 took 1000s + ncn <- frequency [(9, choose (2, 4)), (1, pure 5)] + + -- Ensure that each partition class is internally connected. 
+ nodeTopology <- do + topo0 <- Topo.genNodeTopology $ NumCoreNodes ncn + oddTopo <- do + -- eg nodes 1 3 for ncn = 5 + topo <- Topo.genNodeTopology $ NumCoreNodes $ div ncn 2 + let rename (CoreNodeId i) = CoreNodeId (2 * i + 1) + pure $ Topo.mapNodeTopology rename topo + evenTopo <- do + -- eg nodes 0 2 4 for ncn = 5 + topo <- Topo.genNodeTopology $ NumCoreNodes $ ncn - div ncn 2 + let rename (CoreNodeId i) = CoreNodeId (2 * i) + pure $ Topo.mapNodeTopology rename topo + pure $ + Topo.unionNodeTopology evenTopo $ Topo.unionNodeTopology oddTopo $ - topo0 + topo0 - pure TestConfig + pure + TestConfig { initSeed , nodeTopology , numCoreNodes = NumCoreNodes ncn @@ -142,205 +153,215 @@ genTestConfig k (EpochSize epochSize1, EpochSize epochSize2) = do -- | Generate 'setupPartition' genPartition :: NumCoreNodes -> NumSlots -> SecurityParam -> Gen Partition genPartition (NumCoreNodes n) (NumSlots t) (SecurityParam k') = do - let ultimateSlot :: Word64 - ultimateSlot = assert (t > 0) $ t - 1 - - k = unNonZero k' - - crop :: Word64 -> Word64 - crop s = min ultimateSlot s - - -- Fundamental plan: all @MsgRollForward@ sent during the partition only - -- arrive at the onset of slot after it. The partition begins at the onset - -- of @firstSlotIn@ and ends at the onset of @firstSlotAfter@. - - -- FACT (A) The leader of @firstSlotAfter@ will forge before fully reacting - -- to the @MsgRollForward@s that just arrived, due to a race in the test - -- infrastructure that is consistently won by the forge. Thus the two - -- class's unique extensions consist of the blocks forged in the slots from - -- @firstSlotIn@ to @firstSlotAfter@ /inclusive/. 
- - -- @w@ is defined below this large comment to be how long the partition - -- should last (this may be 'crop'ped farther below if the partition ends - -- up near the end of the test) - -- - -- Because of FACT (A), we limit the partition duration so that at least - -- one of the two partition classes will forge /strictly/ /less/ /than/ @k@ - -- such blocks. While both classes would be able to rollback if we let them - -- forge @k@ such blocks, they might (TODO "might" or "will"?) not able to - -- /decide/ that they need to rollback, since the relevant ChainSync client - -- might (TODO will?) not be able to forecast a ledger view far-enough into - -- the future to validate the @k+1@st header from the other class after the - -- partition ends. It will remain unaware that a longer chain exists and - -- end up stuck on its own distinct chain. Hence it's a Common Prefix - -- violation. - -- - -- We therefore motivate an upper bound on the partition duration @w@ for a - -- net with @n@ nodes by considering the two variables' parities. - -- - -- o For @n = 2h@ nodes and @w = 2u@ slots, the classes respectively forge - -- @u@ and @u+1@ blocks. (The @+1@th block is forged in - -- @firstSlotAfter@.) The preceding paragraph requires @u < k@ requires - -- @u <= k-1@ requires @w <= 2k-2@. So @w <= 2k-2@ when @w@ is even. - -- - -- o For @n = 2h@ nodes and @w = 2u+1@ slots, the classes both forge @u+1@ - -- blocks. (The second @+1@th block is forged in @firstSlotAfter@.) The - -- preceding paragraph requires @u+1 < k@ requires @u <= k-2@ requires @w - -- <= 2k-3@. So @w <= 2k-3@ when @w@ is odd. Note that @w <= 2k-2@ is - -- equivalent since @w@ is assumed odd. - -- - -- o For @n = 2h+1@ nodes, the smaller class forges at most the number of - -- blocks considered in the above cases for even @n@, so this case - -- doesn't contribute anything novel to the upper bound. 
The smaller - -- class has less than half of the nodes and so will violate Chain Growth - -- /if/ /given/ /enough/ /time/ /alone/. But for Byron at least, the - -- logic for avoiding a Common Prefix violation already establishes a - -- sufficiently strict upper bound to preclude a Chain Growth violation, - -- since Chain Growth isn't technically violated until @2k@ slots have - -- passed (it's also relevant that the net forges at full round-robin - -- speed until the partition starts). (TODO does that cover Shelley too?) - -- - -- Thus @w@ can range from @0@ to @2k-2@ inclusive. - w <- assert (k > 0) $ choose (0, 2 * k - 2) - - -- Each of the the following milestones happens at the listed slot in the - -- absence of a partition. - -- - -- o In slots 0 and 1 the proposal and the votes confirm it - -- - -- o In slot 2k+1 the confirmation becomes stable - -- - -- o In slot 2k+1+quorum the nodes endorse it - -- - -- o In slot 4k+1+quorum the endorsement is stable - -- - -- o In slot 10k the update is adopted. - -- - -- We are most interested in what happens when the partition is near these - -- milestones. - mbFocus <- do - let quorum = div n 2 -- in the right ballpark - frequency $ - map (\(wgt, mbDur) -> (wgt, pure mbDur)) $ - filter (maybe True (< t) . snd) - [ (5, Nothing) - , (1, Just 0) - , (1, Just $ 2 * k + 1) - , (1, Just $ 2 * k + 1 + quorum) - , (1, Just $ 4 * k + 1) - , (1, Just $ 4 * k + 1 + quorum) - , (20, assert (numFirstEraEpochs == (1 :: Int)) $ - Just $ byronEpochSize (SecurityParam k')) + let ultimateSlot :: Word64 + ultimateSlot = assert (t > 0) $ t - 1 + + k = unNonZero k' + + crop :: Word64 -> Word64 + crop s = min ultimateSlot s + + -- Fundamental plan: all @MsgRollForward@ sent during the partition only + -- arrive at the onset of slot after it. The partition begins at the onset + -- of @firstSlotIn@ and ends at the onset of @firstSlotAfter@. 
+
+  -- FACT (A) The leader of @firstSlotAfter@ will forge before fully reacting
+  -- to the @MsgRollForward@s that just arrived, due to a race in the test
+  -- infrastructure that is consistently won by the forge. Thus the two
+  -- classes' unique extensions consist of the blocks forged in the slots from
+  -- @firstSlotIn@ to @firstSlotAfter@ /inclusive/.
+
+  -- @w@ is defined below this large comment to be how long the partition
+  -- should last (this may be 'crop'ped farther below if the partition ends
+  -- up near the end of the test)
+  --
+  -- Because of FACT (A), we limit the partition duration so that at least
+  -- one of the two partition classes will forge /strictly/ /less/ /than/ @k@
+  -- such blocks. While both classes would be able to rollback if we let them
+  -- forge @k@ such blocks, they might (TODO "might" or "will"?) not be able to
+  -- /decide/ that they need to rollback, since the relevant ChainSync client
+  -- might (TODO will?) not be able to forecast a ledger view far-enough into
+  -- the future to validate the @k+1@st header from the other class after the
+  -- partition ends. It will remain unaware that a longer chain exists and
+  -- end up stuck on its own distinct chain. Hence it's a Common Prefix
+  -- violation.
+  --
+  -- We therefore motivate an upper bound on the partition duration @w@ for a
+  -- net with @n@ nodes by considering the two variables' parities.
+  --
+  -- o For @n = 2h@ nodes and @w = 2u@ slots, the classes respectively forge
+  --   @u@ and @u+1@ blocks. (The @+1@th block is forged in
+  --   @firstSlotAfter@.) The preceding paragraph requires @u < k@ requires
+  --   @u <= k-1@ requires @w <= 2k-2@. So @w <= 2k-2@ when @w@ is even.
+  --
+  -- o For @n = 2h@ nodes and @w = 2u+1@ slots, the classes both forge @u+1@
+  --   blocks. (The second @+1@th block is forged in @firstSlotAfter@.) The
+  --   preceding paragraph requires @u+1 < k@ requires @u <= k-2@ requires @w
+  --   <= 2k-3@. So @w <= 2k-3@ when @w@ is odd. Note that @w <= 2k-2@ is
+  --   equivalent since @w@ is assumed odd.
+  --
+  -- o For @n = 2h+1@ nodes, the smaller class forges at most the number of
+  --   blocks considered in the above cases for even @n@, so this case
+  --   doesn't contribute anything novel to the upper bound. The smaller
+  --   class has less than half of the nodes and so will violate Chain Growth
+  --   /if/ /given/ /enough/ /time/ /alone/. But for Byron at least, the
+  --   logic for avoiding a Common Prefix violation already establishes a
+  --   sufficiently strict upper bound to preclude a Chain Growth violation,
+  --   since Chain Growth isn't technically violated until @2k@ slots have
+  --   passed (it's also relevant that the net forges at full round-robin
+  --   speed until the partition starts). (TODO does that cover Shelley too?)
+  --
+  -- Thus @w@ can range from @0@ to @2k-2@ inclusive.
+  w <- assert (k > 0) $ choose (0, 2 * k - 2)
+
+  -- Each of the following milestones happens at the listed slot in the
+  -- absence of a partition.
+  --
+  -- o In slots 0 and 1 the proposal and the votes confirm it
+  --
+  -- o In slot 2k+1 the confirmation becomes stable
+  --
+  -- o In slot 2k+1+quorum the nodes endorse it
+  --
+  -- o In slot 4k+1+quorum the endorsement is stable
+  --
+  -- o In slot 10k the update is adopted.
+  --
+  -- We are most interested in what happens when the partition is near these
+  -- milestones.
+  mbFocus <- do
+    let quorum = div n 2 -- in the right ballpark
+    frequency $
+      map (\(wgt, mbDur) -> (wgt, pure mbDur)) $
+        filter
+          (maybe True (< t) . snd)
+          [ (5, Nothing)
+          , (1, Just 0)
+          , (1, Just $ 2 * k + 1)
+          , (1, Just $ 2 * k + 1 + quorum)
+          , (1, Just $ 4 * k + 1)
+          , (1, Just $ 4 * k + 1 + quorum)
+          ,
+            ( 20
+            , assert (numFirstEraEpochs == (1 :: Int)) $
+                Just $
+                  byronEpochSize (SecurityParam k')
+            )
          ]
-    -- Position the partition so that it at least abuts the focus slot.
- firstSlotIn <- choose $ - case mbFocus of - Nothing -> (0, ultimateSlot ) - Just focus -> (crop $ focus `monus` (w + 1), crop $ focus + 1) + -- Position the partition so that it at least abuts the focus slot. + firstSlotIn <- choose $ + case mbFocus of + Nothing -> (0, ultimateSlot) + Just focus -> (crop $ focus `monus` (w + 1), crop $ focus + 1) - let -- Because of FACT (A), we require there to be at least one slot after - -- the partition. This doesn't ensure full consensus, because the block - -- forged in @firstSlotAfter@ may create a chain as long as that of the - -- other partition (first era) or even multiple such chains (second - -- era). But it does ensure that all final chains will be the same - -- length. - firstSlotAfter :: Word64 - firstSlotAfter = crop $ firstSlotIn + w + let + -- Because of FACT (A), we require there to be at least one slot after + -- the partition. This doesn't ensure full consensus, because the block + -- forged in @firstSlotAfter@ may create a chain as long as that of the + -- other partition (first era) or even multiple such chains (second + -- era). But it does ensure that all final chains will be the same + -- length. + firstSlotAfter :: Word64 + firstSlotAfter = crop $ firstSlotIn + w - dur :: Word64 - dur = Util.countSlots (SlotNo firstSlotAfter) (SlotNo firstSlotIn) + dur :: Word64 + dur = Util.countSlots (SlotNo firstSlotAfter) (SlotNo firstSlotIn) - pure $ Partition (SlotNo firstSlotIn) (NumSlots dur) + pure $ Partition (SlotNo firstSlotIn) (NumSlots dur) -- | Whether there was a block forged in a non-overlay slot in the second era. -- -- This event evidences that the stake pools were correctly created and -- delegated to. 
label_hadActiveNonOverlaySlots :: - TestOutput (HardForkBlock (era ': eras)) - -> Set SlotNo - -> String + TestOutput (HardForkBlock (era ': eras)) -> + Set SlotNo -> + String label_hadActiveNonOverlaySlots testOutput overlaySlots = - show $ or $ - [ Set.notMember slot overlaySlots - | (_nid, no) <- Map.toList testOutputNodes - , let NodeOutput{nodeOutputForges} = no - , (slot, blk) <- Map.toDescList nodeOutputForges - , not $ isFirstEraBlock blk - ] - where - TestOutput{testOutputNodes} = testOutput + show $ + or $ + [ Set.notMember slot overlaySlots + | (_nid, no) <- Map.toList testOutputNodes + , let NodeOutput{nodeOutputForges} = no + , (slot, blk) <- Map.toDescList nodeOutputForges + , not $ isFirstEraBlock blk + ] + where + TestOutput{testOutputNodes} = testOutput -- | All OBFT overlay slots in the second era. secondEraOverlaySlots :: - NumSlots - -> NumSlots - -> SL.UnitInterval - -> EpochSize - -> Set SlotNo + NumSlots -> + NumSlots -> + SL.UnitInterval -> + EpochSize -> + Set SlotNo secondEraOverlaySlots numSlots (NumSlots numFirstEraSlots) d secondEraEpochSize = - Set.filter (< SlotNo t) $ + Set.filter (< SlotNo t) $ Set.unions $ - takeWhile (isJust . Set.lookupLT (SlotNo t)) $ - map overlayOffsets [0..] - where - NumSlots t = numSlots - - -- The overlay slots in the ith epoch of the second era. - -- - -- TODO: This function conceptually should be simpler if we created a - -- sufficiently accurate 'EpochInfo' and so didn't need the shift. But - -- doing so (eg by constructing a HardFork @Summary@) is currently - -- significantly more involved than this workaround. - overlayOffsets :: Word64 -> Set SlotNo - overlayOffsets i = - -- Shift to account for the first era. - Set.mapMonotonic (Util.addSlots numFirstEraSlots) . 
- Set.fromList $ - -- Note: this is only correct if each epoch uses the same value for @d@ - SL.overlaySlots - -- Suitable only for this narrow context - (fixedEpochInfoFirst secondEraEpochSize (EpochNo i)) - -- notably contains setupD - d - -- NB 0 <-> the first epoch of the second era - secondEraEpochSize + takeWhile (isJust . Set.lookupLT (SlotNo t)) $ + map overlayOffsets [0 ..] + where + NumSlots t = numSlots + + -- The overlay slots in the ith epoch of the second era. + -- + -- TODO: This function conceptually should be simpler if we created a + -- sufficiently accurate 'EpochInfo' and so didn't need the shift. But + -- doing so (eg by constructing a HardFork @Summary@) is currently + -- significantly more involved than this workaround. + overlayOffsets :: Word64 -> Set SlotNo + overlayOffsets i = + -- Shift to account for the first era. + Set.mapMonotonic (Util.addSlots numFirstEraSlots) + . Set.fromList + $ + -- Note: this is only correct if each epoch uses the same value for @d@ + SL.overlaySlots + -- Suitable only for this narrow context + (fixedEpochInfoFirst secondEraEpochSize (EpochNo i)) + -- notably contains setupD + d + -- NB 0 <-> the first epoch of the second era + secondEraEpochSize tabulatePartitionPosition :: - NumSlots -> Partition -> Bool -> Property -> Property + NumSlots -> Partition -> Bool -> Property -> Property tabulatePartitionPosition (NumSlots numFirstEraSlots) part transitions = - tabulate "partition in or abuts era (First era, Second era)" - [ show (inclFirstEra, inclSecondEra) ] - where - Partition (SlotNo firstSlotIn) (NumSlots dur) = part - firstSlotAfter = firstSlotIn + dur - - inclFirstEra = - dur > 0 && (not transitions || firstSlotIn <= numFirstEraSlots) - inclSecondEra = - dur > 0 && (transitions && firstSlotAfter >= numFirstEraSlots) + tabulate + "partition in or abuts era (First era, Second era)" + [show (inclFirstEra, inclSecondEra)] + where + Partition (SlotNo firstSlotIn) (NumSlots dur) = part + firstSlotAfter = 
firstSlotIn + dur + + inclFirstEra = + dur > 0 && (not transitions || firstSlotIn <= numFirstEraSlots) + inclSecondEra = + dur > 0 && (transitions && firstSlotAfter >= numFirstEraSlots) tabulateFinalIntersectionDepth :: SecurityParam -> NumBlocks -> String -> Property -> Property tabulateFinalIntersectionDepth k (NumBlocks finalIntersectionDepth) finalBlockEra = - tabul "count" (show finalIntersectionDepth) . - tabul "k frac" (approxFracK k finalIntersectionDepth) . - tabul "k diff" (diffK k finalIntersectionDepth) - where - lbl = "final intersection depth" - tabul s x = tabulate - (lbl <> ", " <> finalBlockEra <> ", " <> s) - [x <> " blocks"] + tabul "count" (show finalIntersectionDepth) + . tabul "k frac" (approxFracK k finalIntersectionDepth) + . tabul "k diff" (diffK k finalIntersectionDepth) + where + lbl = "final intersection depth" + tabul s x = + tabulate + (lbl <> ", " <> finalBlockEra <> ", " <> s) + [x <> " blocks"] tabulatePartitionDuration :: SecurityParam -> Partition -> Property -> Property tabulatePartitionDuration k part = - tabul "count" (show dur) . - tabul "k frac" (approxFracK k dur) - where - tabul s x = - tabulate ("partition duration, " <> s) [x <> " slots"] + tabul "count" (show dur) + . tabul "k frac" (approxFracK k dur) + where + tabul s x = + tabulate ("partition duration, " <> s) [x <> " slots"] - Partition _ (NumSlots dur) = part + Partition _ (NumSlots dur) = part {------------------------------------------------------------------------------- Constants @@ -353,7 +374,7 @@ tabulatePartitionDuration k part = -- length of any scheduled network partitions need to be balanced so that Common -- Prefix violations (in particular, wedges) are extremely unlikely. activeSlotCoeff :: Rational -activeSlotCoeff = 0.2 -- c.f. mainnet is more conservative, using 0.05 +activeSlotCoeff = 0.2 -- c.f. 
mainnet is more conservative, using 0.05 -- | The number of epochs in the first era in this test -- @@ -373,14 +394,14 @@ numFirstEraEpochs = 1 -- Note these fields are ordered alphabetically not semantically; see -- 'label_ReachesEra2'. data ReachesEra2 = ReachesEra2 - { rsEra1Slots :: BoolProps.Prereq - -- ^ enough slots in the first era to enable a block in the second era - , rsPV :: BoolProps.Prereq - -- ^ sufficient protocol version to enable a block in the second era + { rsEra1Slots :: BoolProps.Prereq + -- ^ enough slots in the first era to enable a block in the second era + , rsPV :: BoolProps.Prereq + -- ^ sufficient protocol version to enable a block in the second era , rsEra2Blocks :: Bool - -- ^ blocks from the second era included in final chains - , rsEra2Slots :: BoolProps.Requirement - -- ^ enough slots in the second era to necessitate a block in the second era + -- ^ blocks from the second era included in final chains + , rsEra2Slots :: BoolProps.Requirement + -- ^ enough slots in the second era to necessitate a block in the second era } deriving (Generic, Show) @@ -389,23 +410,23 @@ instance BoolProps.CollectReqs ReachesEra2 -- | List the (pre)reqs in semantic order, followed by the observation label_ReachesEra2 :: ReachesEra2 -> String label_ReachesEra2 reachesEra2 = - prepend "pv" rsPV $ + prepend "pv" rsPV $ prepend "slots1" rsEra1Slots $ - prepend "slots2" rsEra2Slots $ - "blocks2 " <> show rsEra2Blocks - where - -- incur a GHC warning if the number of fields changes - ReachesEra2 _ _ _ _dummy = reachesEra2 - -- this pattern should bind each field by name - ReachesEra2 - { rsEra1Slots - , rsPV - , rsEra2Blocks - , rsEra2Slots - } = reachesEra2 - - prepend :: Show a => String -> a -> String -> String - prepend s req x = s <> " " <> show req <> ", " <> x + prepend "slots2" rsEra2Slots $ + "blocks2 " <> show rsEra2Blocks + where + -- incur a GHC warning if the number of fields changes + ReachesEra2 _ _ _ _dummy = reachesEra2 + -- this pattern 
should bind each field by name + ReachesEra2 + { rsEra1Slots + , rsPV + , rsEra2Blocks + , rsEra2Slots + } = reachesEra2 + + prepend :: Show a => String -> a -> String -> String + prepend s req x = s <> " " <> show req <> ", " <> x -- | Is the update proposal adopted? ledgerReachesEra2 :: ReachesEra2 -> Bool @@ -414,18 +435,19 @@ ledgerReachesEra2 rs = BoolProps.checkReqs rs /= Just False -- | Checks if the observation satisfies the (pre)reqs prop_ReachesEra2 :: ReachesEra2 -> Property prop_ReachesEra2 rs = case BoolProps.checkReqs rs of - Nothing -> property True - Just req -> - counterexample (show rs) $ - counterexample (msg req) $ + Nothing -> property True + Just req -> + counterexample (show rs) $ + counterexample (msg req) $ rsEra2Blocks == req - where - ReachesEra2{rsEra2Blocks} = rs + where + ReachesEra2{rsEra2Blocks} = rs - msg :: Bool -> String - msg req = if req - then "the final chains should include at least one second era block" - else "the final chains should not include any second era blocks" + msg :: Bool -> String + msg req = + if req + then "the final chains should include at least one second era block" + else "the final chains should not include any second era blocks" {------------------------------------------------------------------------------- A short even-odd partition @@ -436,14 +458,16 @@ prop_ReachesEra2 rs = case BoolProps.checkReqs rs of -- Calculates the delays that implement 'setupPartition'. 
mkMessageDelay :: Partition -> CalcMessageDelay blk mkMessageDelay part = CalcMessageDelay $ - \(CoreNodeId i, CoreNodeId j) curSlot _hdr -> NumSlots $ if - | curSlot < firstSlotIn -> 0 - | curSlot >= firstSlotAfter -> 0 - | mod i 2 == mod j 2 -> 0 - | otherwise -> unSlotNo $ firstSlotAfter - curSlot - where - Partition firstSlotIn _ = part - firstSlotAfter = partitionExclusiveUpperBound part + \(CoreNodeId i, CoreNodeId j) curSlot _hdr -> + NumSlots $ + if + | curSlot < firstSlotIn -> 0 + | curSlot >= firstSlotAfter -> 0 + | mod i 2 == mod j 2 -> 0 + | otherwise -> unSlotNo $ firstSlotAfter - curSlot + where + Partition firstSlotIn _ = part + firstSlotAfter = partitionExclusiveUpperBound part {------------------------------------------------------------------------------- Miscellany @@ -451,31 +475,31 @@ mkMessageDelay part = CalcMessageDelay $ byronEpochSize :: SecurityParam -> Word64 byronEpochSize (SecurityParam k) = - unEpochSlots $ kEpochSlots $ CC.Common.BlockCount $ unNonZero k + unEpochSlots $ kEpochSlots $ CC.Common.BlockCount $ unNonZero k shelleyEpochSize :: SecurityParam -> Word64 shelleyEpochSize k = unEpochSize $ Shelley.mkEpochSize k activeSlotCoeff isFirstEraBlock :: HardForkBlock (era ': eras) -> Bool isFirstEraBlock = \case - HardForkBlock (OneEraBlock Z{}) -> True - _ -> False + HardForkBlock (OneEraBlock Z{}) -> True + _ -> False -- | Render a number as a positive difference from @k@ -- -- PREREQUISITE: The number must not be greater than @k@. 
diffK :: SecurityParam -> Word64 -> String diffK (SecurityParam k) v = - assert (unNonZero k >= v) $ + assert (unNonZero k >= v) $ "k - " <> show (unNonZero k - v) -- | Render a number as the nearest tenths of @k@ approxFracK :: SecurityParam -> Word64 -> String approxFracK (SecurityParam k) v = - "k * " <> show (fromIntegral tenths / 10 :: Double) - where - ratio = toRational v / toRational (unNonZero k) - tenths = round (ratio * 10) :: Int + "k * " <> show (fromIntegral tenths / 10 :: Double) + where + ratio = toRational v / toRational (unNonZero k) + tenths = round (ratio * 10) :: Int -- | monus :: (Num a, Ord a) => a -> a -> a diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Allegra.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Allegra.hs index a6abc00c2f..f810d33336 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Allegra.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Allegra.hs @@ -1,19 +1,17 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.ThreadNet.TxGen.Allegra () where -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger -import Test.ThreadNet.TxGen (TxGen (..)) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger +import Test.ThreadNet.TxGen (TxGen (..)) -- | Dummy generator until CAD-2119 is done, i.e., the transaction generator in -- the ledger has been generalised over the eras. 
instance TxGen (ShelleyBlock (TPraos c) AllegraEra) where - type TxGenExtra _ = () testGenTxs _ _ _ _ _ _ = pure [] diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Alonzo.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Alonzo.hs index b3398a66ae..65ee99b344 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Alonzo.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Alonzo.hs @@ -1,19 +1,17 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.ThreadNet.TxGen.Alonzo () where -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger -import Test.ThreadNet.TxGen (TxGen (..)) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger +import Test.ThreadNet.TxGen (TxGen (..)) -- | Dummy generator until CAD-2119 is done, i.e., the transaction generator in -- the ledger has been generalised over the eras. 
instance TxGen (ShelleyBlock (TPraos c) AlonzoEra) where - type TxGenExtra _ = () testGenTxs _ _ _ _ _ _ = pure [] diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Babbage.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Babbage.hs index f4b8bd7655..0a23e71016 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Babbage.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Babbage.hs @@ -1,19 +1,17 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.ThreadNet.TxGen.Babbage () where -import Ouroboros.Consensus.Protocol.Praos (Praos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger -import Test.ThreadNet.TxGen (TxGen (..)) +import Ouroboros.Consensus.Protocol.Praos (Praos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger +import Test.ThreadNet.TxGen (TxGen (..)) -- | Dummy generator until CAD-2119 is done, i.e., the transaction generator in -- the ledger has been generalised over the eras. 
instance TxGen (ShelleyBlock (Praos c) BabbageEra) where - type TxGenExtra _ = () testGenTxs _ _ _ _ _ _ = pure [] diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Cardano.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Cardano.hs index 296d3abd7f..796aa4aa7d 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Cardano.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Cardano.hs @@ -6,86 +6,99 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.ThreadNet.TxGen.Cardano (CardanoTxGenExtra (..)) where -import qualified Cardano.Chain.Common as Byron -import Cardano.Chain.Genesis (GeneratedSecrets (..)) -import Cardano.Crypto (toVerification) -import qualified Cardano.Crypto.DSIGN as DSIGN -import qualified Cardano.Crypto.Signing as Byron -import qualified Cardano.Crypto.VRF as VRF -import qualified Cardano.Ledger.Address as SL (BootstrapAddress (..)) -import qualified Cardano.Ledger.Hashes as SL -import Cardano.Ledger.Keys (DSIGN) -import qualified Cardano.Ledger.Keys.Bootstrap as SL (makeBootstrapWitness) -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.Core as SL -import Cardano.Ledger.Val ((<->)) -import Cardano.Protocol.Crypto (VRF) -import Control.Exception (assert) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (maybeToList) -import qualified Data.Sequence.Strict as StrictSeq -import qualified Data.Set as Set -import Data.SOP.Strict -import Data.SOP.Telescope as Tele -import Lens.Micro -import Ouroboros.Consensus.Block (SlotNo (..)) -import Ouroboros.Consensus.Cardano -import Ouroboros.Consensus.Cardano.Block (CardanoEras, GenTx (..), - ShelleyEra) -import Ouroboros.Consensus.Cardano.Node (CardanoHardForkConstraints) -import 
Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator.Ledger - (getFlipTickedLedgerState, tickedHardForkLedgerStatePerEra) -import Ouroboros.Consensus.HardFork.Combinator.State.Types - (currentState, getHardForkState) -import Ouroboros.Consensus.Ledger.Basics (ComputeLedgerEvents (..), - LedgerConfig, LedgerState, TickedLedgerState, - applyChainTick) -import Ouroboros.Consensus.Ledger.Tables (ValuesMK) -import Ouroboros.Consensus.Ledger.Tables.Utils (applyDiffs, - forgetLedgerTables) -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock, mkShelleyTx) -import Ouroboros.Consensus.Shelley.Ledger.Ledger - (tickedShelleyLedgerState) -import qualified Test.Cardano.Ledger.Core.KeyPair as TL (mkWitnessVKey) -import qualified Test.ThreadNet.Infra.Shelley as Shelley -import Test.ThreadNet.TxGen +import Cardano.Chain.Common qualified as Byron +import Cardano.Chain.Genesis (GeneratedSecrets (..)) +import Cardano.Crypto (toVerification) +import Cardano.Crypto.DSIGN qualified as DSIGN +import Cardano.Crypto.Signing qualified as Byron +import Cardano.Crypto.VRF qualified as VRF +import Cardano.Ledger.Address qualified as SL (BootstrapAddress (..)) +import Cardano.Ledger.Hashes qualified as SL +import Cardano.Ledger.Keys (DSIGN) +import Cardano.Ledger.Keys.Bootstrap qualified as SL (makeBootstrapWitness) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Core qualified as SL +import Cardano.Ledger.Val ((<->)) +import Cardano.Protocol.Crypto (VRF) +import Control.Exception (assert) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (maybeToList) +import Data.SOP.Strict +import Data.SOP.Telescope as Tele +import Data.Sequence.Strict qualified as StrictSeq +import Data.Set qualified as Set +import Lens.Micro +import Ouroboros.Consensus.Block (SlotNo (..)) +import 
Ouroboros.Consensus.Cardano +import Ouroboros.Consensus.Cardano.Block + ( CardanoEras + , GenTx (..) + , ShelleyEra + ) +import Ouroboros.Consensus.Cardano.Node (CardanoHardForkConstraints) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator.Ledger + ( getFlipTickedLedgerState + , tickedHardForkLedgerStatePerEra + ) +import Ouroboros.Consensus.HardFork.Combinator.State.Types + ( currentState + , getHardForkState + ) +import Ouroboros.Consensus.Ledger.Basics + ( ComputeLedgerEvents (..) + , LedgerConfig + , LedgerState + , TickedLedgerState + , applyChainTick + ) +import Ouroboros.Consensus.Ledger.Tables (ValuesMK) +import Ouroboros.Consensus.Ledger.Tables.Utils + ( applyDiffs + , forgetLedgerTables + ) +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock, mkShelleyTx) +import Ouroboros.Consensus.Shelley.Ledger.Ledger + ( tickedShelleyLedgerState + ) +import Test.Cardano.Ledger.Core.KeyPair qualified as TL (mkWitnessVKey) +import Test.ThreadNet.Infra.Shelley qualified as Shelley +import Test.ThreadNet.TxGen data CardanoTxGenExtra c = CardanoTxGenExtra { ctgeByronGenesisKeys :: GeneratedSecrets - , ctgeNetworkMagic :: Byron.NetworkMagic + , ctgeNetworkMagic :: Byron.NetworkMagic , ctgeShelleyCoreNodes :: [Shelley.CoreNode c] } instance CardanoHardForkConstraints c => TxGen (CardanoBlock c) where - type TxGenExtra (CardanoBlock c) = CardanoTxGenExtra c -- TODO also generate " typical " Byron and Shelley transactions testGenTxs (CoreNodeId i) _ncn curSlot cfg extra ls = - pure $ maybeToList $ migrateUTxO migrationInfo curSlot lcfg ls - where - lcfg = topLevelConfigLedger cfg - - CardanoTxGenExtra - { ctgeByronGenesisKeys - , ctgeNetworkMagic - , ctgeShelleyCoreNodes - } = extra - - GeneratedSecrets - { gsRichSecrets - } = ctgeByronGenesisKeys - - migrationInfo = MigrationInfo + pure $ maybeToList $ migrateUTxO migrationInfo 
curSlot lcfg ls + where + lcfg = topLevelConfigLedger cfg + + CardanoTxGenExtra + { ctgeByronGenesisKeys + , ctgeNetworkMagic + , ctgeShelleyCoreNodes + } = extra + + GeneratedSecrets + { gsRichSecrets + } = ctgeByronGenesisKeys + + migrationInfo = + MigrationInfo { byronMagic = ctgeNetworkMagic , byronSK , paymentSK @@ -94,31 +107,31 @@ instance CardanoHardForkConstraints c => TxGen (CardanoBlock c) where , vrfSK } - byronSK :: Byron.SigningKey - byronSK = gsRichSecrets !! fromIntegral i + byronSK :: Byron.SigningKey + byronSK = gsRichSecrets !! fromIntegral i - Shelley.CoreNode - { Shelley.cnDelegateKey = paymentSK - , Shelley.cnStakingKey = stakingSK - , Shelley.cnVRF = vrfSK - } = ctgeShelleyCoreNodes !! fromIntegral i + Shelley.CoreNode + { Shelley.cnDelegateKey = paymentSK + , Shelley.cnStakingKey = stakingSK + , Shelley.cnVRF = vrfSK + } = ctgeShelleyCoreNodes !! fromIntegral i - -- Reuse the payment key as the pool key, since it's an individual - -- stake pool and the namespaces are separate. - poolSK :: DSIGN.SignKeyDSIGN DSIGN - poolSK = paymentSK + -- Reuse the payment key as the pool key, since it's an individual + -- stake pool and the namespaces are separate. + poolSK :: DSIGN.SignKeyDSIGN DSIGN + poolSK = paymentSK -- | See 'migrateUTxO' data MigrationInfo c = MigrationInfo { byronMagic :: Byron.NetworkMagic - -- ^ Needed for creating a Byron address. - , byronSK :: Byron.SigningKey - -- ^ The core node's Byron secret. - , paymentSK :: DSIGN.SignKeyDSIGN DSIGN - , poolSK :: DSIGN.SignKeyDSIGN DSIGN - , stakingSK :: DSIGN.SignKeyDSIGN DSIGN - , vrfSK :: VRF.SignKeyVRF (VRF c) - -- ^ To be re-used by the individual pool. + -- ^ Needed for creating a Byron address. + , byronSK :: Byron.SigningKey + -- ^ The core node's Byron secret. + , paymentSK :: DSIGN.SignKeyDSIGN DSIGN + , poolSK :: DSIGN.SignKeyDSIGN DSIGN + , stakingSK :: DSIGN.SignKeyDSIGN DSIGN + , vrfSK :: VRF.SignKeyVRF (VRF c) + -- ^ To be re-used by the individual pool. 
} -- | Convert a core node's utxo from Byron to an active Shelley stake pool. @@ -131,161 +144,164 @@ data MigrationInfo c = MigrationInfo -- It returns 'Nothing' if the core node does not have any utxo in its -- 'byronAddr' (eg if this transaction has already been applied). migrateUTxO :: - forall c. - ( CardanoHardForkConstraints c - ) - => MigrationInfo c - -> SlotNo - -> LedgerConfig (CardanoBlock c) - -> LedgerState (CardanoBlock c) ValuesMK - -> Maybe (GenTx (CardanoBlock c)) + forall c. + CardanoHardForkConstraints c => + MigrationInfo c -> + SlotNo -> + LedgerConfig (CardanoBlock c) -> + LedgerState (CardanoBlock c) ValuesMK -> + Maybe (GenTx (CardanoBlock c)) migrateUTxO migrationInfo curSlot lcfg lst - | Just utxo <- mbUTxO = - - let picked :: Map SL.TxIn (SL.TxOut ShelleyEra) - picked = Map.filter pick $ SL.unUTxO utxo - where + | Just utxo <- mbUTxO = + let picked :: Map SL.TxIn (SL.TxOut ShelleyEra) + picked = Map.filter pick $ SL.unUTxO utxo + where pick (SL.ShelleyTxOut addr _) = - addr == SL.AddrBootstrap (SL.BootstrapAddress byronAddr) - - -- Total held by 'byronAddr' - pickedCoin :: SL.Coin - pickedCoin = foldMap (\(SL.ShelleyTxOut _ coin) -> coin) picked - - -- NOTE: The Cardano ThreadNet tests use the - -- ouroboros-consensus-shelley-test infra's genesis config, which sets - -- relevant protocol params to 0. - fee, deposits, spentCoin :: SL.Coin - fee = SL.Coin 0 - deposits = SL.Coin 0 - spentCoin = deposits <> fee - - unspentCoin :: SL.Coin - unspentCoin = + addr == SL.AddrBootstrap (SL.BootstrapAddress byronAddr) + + -- Total held by 'byronAddr' + pickedCoin :: SL.Coin + pickedCoin = foldMap (\(SL.ShelleyTxOut _ coin) -> coin) picked + + -- NOTE: The Cardano ThreadNet tests use the + -- ouroboros-consensus-shelley-test infra's genesis config, which sets + -- relevant protocol params to 0. 
+ fee, deposits, spentCoin :: SL.Coin + fee = SL.Coin 0 + deposits = SL.Coin 0 + spentCoin = deposits <> fee + + unspentCoin :: SL.Coin + unspentCoin = assert (pickedCoin > spentCoin) $ - pickedCoin <-> spentCoin - - body :: SL.TxBody ShelleyEra - body = SL.mkBasicTxBody - & SL.certsTxBodyL .~ StrictSeq.fromList - [ SL.RegTxCert $ Shelley.mkCredential stakingSK - , SL.RegPoolTxCert $ poolParams unspentCoin - , SL.DelegStakeTxCert - (Shelley.mkCredential stakingSK) (Shelley.mkKeyHash poolSK) - ] - & SL.inputsTxBodyL .~ Map.keysSet picked - & SL.outputsTxBodyL .~ - StrictSeq.singleton (SL.ShelleyTxOut shelleyAddr unspentCoin) - & SL.ttlTxBodyL .~ SlotNo maxBound - & SL.feeTxBodyL .~ fee - - bodyHash :: SL.SafeHash SL.EraIndependentTxBody - bodyHash = SL.hashAnnotated body - - -- Witness the use of bootstrap address's utxo. - byronWit :: SL.BootstrapWitness - byronWit = + pickedCoin <-> spentCoin + + body :: SL.TxBody ShelleyEra + body = + SL.mkBasicTxBody + & SL.certsTxBodyL + .~ StrictSeq.fromList + [ SL.RegTxCert $ Shelley.mkCredential stakingSK + , SL.RegPoolTxCert $ poolParams unspentCoin + , SL.DelegStakeTxCert + (Shelley.mkCredential stakingSK) + (Shelley.mkKeyHash poolSK) + ] + & SL.inputsTxBodyL .~ Map.keysSet picked + & SL.outputsTxBodyL + .~ StrictSeq.singleton (SL.ShelleyTxOut shelleyAddr unspentCoin) + & SL.ttlTxBodyL .~ SlotNo maxBound + & SL.feeTxBodyL .~ fee + + bodyHash :: SL.SafeHash SL.EraIndependentTxBody + bodyHash = SL.hashAnnotated body + + -- Witness the use of bootstrap address's utxo. + byronWit :: SL.BootstrapWitness + byronWit = SL.makeBootstrapWitness (SL.extractHash bodyHash) byronSK $ - Byron.addrAttributes byronAddr + Byron.addrAttributes byronAddr - -- Witness the stake delegation. - delegWit :: SL.WitVKey 'SL.Witness - delegWit = + -- Witness the stake delegation. + delegWit :: SL.WitVKey 'SL.Witness + delegWit = TL.mkWitnessVKey bodyHash (Shelley.mkKeyPair stakingSK) - -- Witness the pool registration. 
- poolWit :: SL.WitVKey 'SL.Witness - poolWit = + -- Witness the pool registration. + poolWit :: SL.WitVKey 'SL.Witness + poolWit = TL.mkWitnessVKey bodyHash (Shelley.mkKeyPair poolSK) - - in - if Map.null picked then Nothing else - (Just . GenTxShelley. mkShelleyTx) $ - SL.ShelleyTx - { SL.body = body - , SL.auxiliaryData = SL.SNothing - , SL.wits = SL.mkBasicTxWits - & SL.addrTxWitsL .~ Set.fromList [delegWit, poolWit] - & SL.bootAddrTxWitsL .~ Set.singleton byronWit + in if Map.null picked + then Nothing + else + (Just . GenTxShelley . mkShelleyTx) $ + SL.ShelleyTx + { SL.body = body + , SL.auxiliaryData = SL.SNothing + , SL.wits = + SL.mkBasicTxWits + & SL.addrTxWitsL .~ Set.fromList [delegWit, poolWit] + & SL.bootAddrTxWitsL .~ Set.singleton byronWit + } + | otherwise = Nothing + where + mbUTxO :: Maybe (SL.UTxO ShelleyEra) + mbUTxO = + fmap getUTxOShelley + . ejectShelleyTickedLedgerState + . applyDiffs lst + . applyChainTick OmitLedgerEvents lcfg curSlot + . forgetLedgerTables + $ lst + + MigrationInfo + { byronMagic + , byronSK + , paymentSK + , poolSK + , stakingSK + , vrfSK + } = migrationInfo + + byronAddr :: Byron.Address + byronAddr = + Byron.makeVerKeyAddress byronMagic $ toVerification byronSK + + -- We use a base reference for the stake so that we can refer to it in the + -- same tx that registers it. 
+ shelleyAddr :: SL.Addr + shelleyAddr = + SL.Addr + Shelley.networkId + (Shelley.mkCredential paymentSK) + (SL.StakeRefBase $ Shelley.mkCredential stakingSK) + + -- A simplistic individual pool + poolParams :: SL.Coin -> SL.PoolParams + poolParams pledge = + SL.PoolParams + { SL.ppCost = SL.Coin 1 + , SL.ppMetadata = SL.SNothing + , SL.ppMargin = minBound + , SL.ppOwners = Set.singleton $ Shelley.mkKeyHash poolSK + , SL.ppPledge = pledge + , SL.ppId = Shelley.mkKeyHash poolSK + , SL.ppRewardAccount = + SL.RewardAccount Shelley.networkId $ Shelley.mkCredential poolSK + , SL.ppRelays = StrictSeq.empty + , SL.ppVrf = Shelley.mkKeyHashVrf @c vrfSK } - | otherwise = Nothing - - where - mbUTxO :: Maybe (SL.UTxO ShelleyEra) - mbUTxO = - fmap getUTxOShelley - . ejectShelleyTickedLedgerState - . applyDiffs lst - . applyChainTick OmitLedgerEvents lcfg curSlot - . forgetLedgerTables - $ lst - - MigrationInfo - { byronMagic - , byronSK - , paymentSK - , poolSK - , stakingSK - , vrfSK - } = migrationInfo - - byronAddr :: Byron.Address - byronAddr = - Byron.makeVerKeyAddress byronMagic $ toVerification byronSK - - -- We use a base reference for the stake so that we can refer to it in the - -- same tx that registers it. 
- shelleyAddr :: SL.Addr - shelleyAddr = - SL.Addr Shelley.networkId - (Shelley.mkCredential paymentSK) - (SL.StakeRefBase $ Shelley.mkCredential stakingSK) - - -- A simplistic individual pool - poolParams :: SL.Coin -> SL.PoolParams - poolParams pledge = SL.PoolParams - { SL.ppCost = SL.Coin 1 - , SL.ppMetadata = SL.SNothing - , SL.ppMargin = minBound - , SL.ppOwners = Set.singleton $ Shelley.mkKeyHash poolSK - , SL.ppPledge = pledge - , SL.ppId = Shelley.mkKeyHash poolSK - , SL.ppRewardAccount = - SL.RewardAccount Shelley.networkId $ Shelley.mkCredential poolSK - , SL.ppRelays = StrictSeq.empty - , SL.ppVrf = Shelley.mkKeyHashVrf @c vrfSK - } - ----- ejectShelleyNS :: - NS f (CardanoEras c) - -> Maybe (f (ShelleyBlock (TPraos c) ShelleyEra)) + NS f (CardanoEras c) -> + Maybe (f (ShelleyBlock (TPraos c) ShelleyEra)) ejectShelleyNS = \case - S (Z x) -> Just x - _ -> Nothing + S (Z x) -> Just x + _ -> Nothing -getUTxOShelley :: TickedLedgerState (ShelleyBlock proto era) mk - -> SL.UTxO era +getUTxOShelley :: + TickedLedgerState (ShelleyBlock proto era) mk -> + SL.UTxO era getUTxOShelley tls = - SL.utxosUtxo $ + SL.utxosUtxo $ SL.lsUTxOState $ - SL.esLState $ - SL.nesEs $ - tickedShelleyLedgerState tls + SL.esLState $ + SL.nesEs $ + tickedShelleyLedgerState tls ejectShelleyTickedLedgerState :: - TickedLedgerState (CardanoBlock c) mk - -> Maybe (TickedLedgerState (ShelleyBlock (TPraos c) ShelleyEra) mk) + TickedLedgerState (CardanoBlock c) mk -> + Maybe (TickedLedgerState (ShelleyBlock (TPraos c) ShelleyEra) mk) ejectShelleyTickedLedgerState ls = - fmap (getFlipTickedLedgerState . currentState) $ + fmap (getFlipTickedLedgerState . 
currentState) $ ejectShelleyNS $ - Tele.tip $ - getHardForkState $ - tickedHardForkLedgerStatePerEra $ - ls + Tele.tip $ + getHardForkState $ + tickedHardForkLedgerStatePerEra $ + ls diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Mary.hs b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Mary.hs index e3e8254e11..25a39e4df5 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Mary.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-testlib/Test/ThreadNet/TxGen/Mary.hs @@ -1,19 +1,17 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.ThreadNet.TxGen.Mary () where -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger -import Test.ThreadNet.TxGen (TxGen (..)) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger +import Test.ThreadNet.TxGen (TxGen (..)) -- | Dummy generator until CAD-2119 is done, i.e., the transaction generator in -- the ledger has been generalised over the eras. 
instance TxGen (ShelleyBlock (TPraos c) MaryEra) where - type TxGenExtra _ = () testGenTxs _ _ _ _ _ _ = pure [] diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Any.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Any.hs index 85fc46e039..c99b060983 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Any.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Any.hs @@ -5,26 +5,27 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} -module Cardano.Api.Any ( - module Cardano.Api.Any +module Cardano.Api.Any + ( module Cardano.Api.Any , module Cbor , module Proxy ) where - -import Cardano.Ledger.Binary as Cbor (DecCBOR (..), EncCBOR (..), - FromCBOR (..), ToCBOR (..)) -import qualified Cardano.Ledger.Binary.Plain as CBOR -import Control.Exception (Exception (..), IOException, throwIO) -import Data.ByteString (ByteString) -import qualified Data.ByteString.Base16 as Base16 (decode, encode) -import Data.Kind (Constraint, Type) -import Data.Proxy as Proxy (Proxy (..)) -import Data.Text as Text (Text) -import qualified Data.Text.Encoding as Text (decodeUtf8) -import System.IO (Handle) - - +import Cardano.Ledger.Binary as Cbor + ( DecCBOR (..) + , EncCBOR (..) + , FromCBOR (..) + , ToCBOR (..) + ) +import Cardano.Ledger.Binary.Plain qualified as CBOR +import Control.Exception (Exception (..), IOException, throwIO) +import Data.ByteString (ByteString) +import Data.ByteString.Base16 qualified as Base16 (decode, encode) +import Data.Kind (Constraint, Type) +import Data.Proxy as Proxy (Proxy (..)) +import Data.Text as Text (Text) +import Data.Text.Encoding qualified as Text (decodeUtf8) +import System.IO (Handle) -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/HasTypeProxy.hs @@ -33,36 +34,27 @@ class HasTypeProxy t where -- use where it would otherwise be ambiguous or merely unclear. 
-- -- Values of this type are passed to deserialisation functions for example. - -- data AsType t proxyToAsType :: Proxy t -> AsType t - data FromSomeType (c :: Type -> Constraint) b where - FromSomeType :: c a => AsType a -> (a -> b) -> FromSomeType c b - - + FromSomeType :: c a => AsType a -> (a -> b) -> FromSomeType c b -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/Hash.hs data family Hash keyrole :: Type class CastHash roleA roleB where - - castHash :: Hash roleA -> Hash roleB - + castHash :: Hash roleA -> Hash roleB instance HasTypeProxy a => HasTypeProxy (Hash a) where - data AsType (Hash a) = AsHash (AsType a) - proxyToAsType _ = AsHash (proxyToAsType (Proxy :: Proxy a)) - - + data AsType (Hash a) = AsHash (AsType a) + proxyToAsType _ = AsHash (proxyToAsType (Proxy :: Proxy a)) -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/SerialiseRaw.hs class HasTypeProxy a => SerialiseAsRawBytes a where - serialiseToRawBytes :: a -> ByteString deserialiseFromRawBytes :: AsType a -> ByteString -> Maybe a @@ -73,73 +65,73 @@ serialiseToRawBytesHex = Base16.encode . serialiseToRawBytes serialiseToRawBytesHexText :: SerialiseAsRawBytes a => a -> Text serialiseToRawBytesHexText = Text.decodeUtf8 . 
serialiseToRawBytesHex -deserialiseFromRawBytesHex :: SerialiseAsRawBytes a - => AsType a -> ByteString -> Maybe a +deserialiseFromRawBytesHex :: + SerialiseAsRawBytes a => + AsType a -> ByteString -> Maybe a deserialiseFromRawBytesHex proxy hex = - case Base16.decode hex of - Right raw -> deserialiseFromRawBytes proxy raw - Left _msg -> Nothing - - + case Base16.decode hex of + Right raw -> deserialiseFromRawBytes proxy raw + Left _msg -> Nothing -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/SerialiseAsCBOR.hs class HasTypeProxy a => SerialiseAsCBOR a where - serialiseToCBOR :: a -> ByteString - deserialiseFromCBOR :: AsType a -> ByteString -> Either CBOR.DecoderError a - - default serialiseToCBOR :: ToCBOR a => a -> ByteString - serialiseToCBOR = CBOR.serialize' - - default deserialiseFromCBOR :: FromCBOR a - => AsType a - -> ByteString - -> Either CBOR.DecoderError a - deserialiseFromCBOR _proxy = CBOR.decodeFull' + serialiseToCBOR :: a -> ByteString + deserialiseFromCBOR :: AsType a -> ByteString -> Either CBOR.DecoderError a + default serialiseToCBOR :: ToCBOR a => a -> ByteString + serialiseToCBOR = CBOR.serialize' + default deserialiseFromCBOR :: + FromCBOR a => + AsType a -> + ByteString -> + Either CBOR.DecoderError a + deserialiseFromCBOR _proxy = CBOR.decodeFull' -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/Error.hs class Show e => Error e where - - displayError :: e -> String + displayError :: e -> String instance Error () where - displayError () = "" - + displayError () = "" -- | The preferred approach is to use 'Except' or 'ExceptT', but you can if -- necessary use IO exceptions. 
--- throwErrorAsException :: Error e => e -> IO a throwErrorAsException e = throwIO (ErrorAsException e) data ErrorAsException where - ErrorAsException :: Error e => e -> ErrorAsException + ErrorAsException :: Error e => e -> ErrorAsException instance Show ErrorAsException where - show (ErrorAsException e) = show e + show (ErrorAsException e) = show e instance Exception ErrorAsException where - displayException (ErrorAsException e) = displayError e - - -data FileError e = FileError FilePath e - | FileErrorTempFile - FilePath - -- ^ Target path - FilePath - -- ^ Temporary path - Handle - | FileIOError FilePath IOException + displayException (ErrorAsException e) = displayError e + +data FileError e + = FileError FilePath e + | FileErrorTempFile + -- | Target path + FilePath + -- | Temporary path + FilePath + Handle + | FileIOError FilePath IOException deriving Show instance Error e => Error (FileError e) where - displayError (FileErrorTempFile targetPath tempPath h)= - "Error creating temporary file at: " ++ tempPath ++ - "/n" ++ "Target path: " ++ targetPath ++ - "/n" ++ "Handle: " ++ show h + displayError (FileErrorTempFile targetPath tempPath h) = + "Error creating temporary file at: " + ++ tempPath + ++ "\n" + ++ "Target path: " + ++ targetPath + ++ "\n" + ++ "Handle: " + ++ show h displayError (FileIOError path ioe) = path ++ ": " ++ displayException ioe displayError (FileError path e) = @@ -148,20 +140,14 @@ instance Error e => Error (FileError e) where instance Error IOException where displayError = show - - --- WARNING: STUB for Bech32 class (HasTypeProxy a, SerialiseAsRawBytes a) => SerialiseAsBech32 a where + -- | The human readable prefix to use when encoding this value to Bech32. + bech32PrefixFor :: a -> Text - -- | The human readable prefix to use when encoding this value to Bech32. - -- - bech32PrefixFor :: a -> Text - - -- | The set of human readable prefixes that can be used for this type. 
- -- - bech32PrefixesPermitted :: AsType a -> [Text] - + -- | The set of human readable prefixes that can be used for this type. + bech32PrefixesPermitted :: AsType a -> [Text] -- serialiseToBech32 :: SerialiseAsBech32 a => a -> Text serialiseToBech32 :: a -> Text @@ -174,7 +160,7 @@ deserialiseFromBech32 _ _ = error "deserialiseFromBech32: stub not implemented" data Bech32DecodeError instance Show Bech32DecodeError where - show = const "Bech32DecodeError: stub not implemented" + show = const "Bech32DecodeError: stub not implemented" instance Error Bech32DecodeError where - displayError = show + displayError = show diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Key.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Key.hs index 6322d3aa3d..7dcf1a5d4e 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Key.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Key.hs @@ -4,20 +4,19 @@ -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/Key.hs -module Cardano.Api.Key ( - AsType (AsVerificationKey, AsSigningKey) +module Cardano.Api.Key + ( AsType (AsVerificationKey, AsSigningKey) , CastSigningKeyRole (..) , CastVerificationKeyRole (..) , Key (..) , generateSigningKey ) where -import Cardano.Api.Any -import Cardano.Api.SerialiseTextEnvelope -import qualified Cardano.Crypto.DSIGN.Class as Crypto -import qualified Cardano.Crypto.Seed as Crypto -import Data.Kind (Type) - +import Cardano.Api.Any +import Cardano.Api.SerialiseTextEnvelope +import Cardano.Crypto.DSIGN.Class qualified as Crypto +import Cardano.Crypto.Seed qualified as Crypto +import Data.Kind (Type) -- | An interface for cryptographic keys used for signatures with a 'SigningKey' -- and a 'VerificationKey' key. 
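The `AsType`/`HasTypeProxy` machinery reformatted throughout this patch can be hard to follow through diff noise. A minimal, self-contained sketch of the pattern — `DemoKey` and `parseDemo` are hypothetical stand-ins, not part of the real API — showing how the empty `AsType` tag tells a deserialiser which result type to produce:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Sketch of the AsType/HasTypeProxy pattern used in these modules.
-- DemoKey and parseDemo are hypothetical; the real API lives in
-- Cardano.Api.Any and Cardano.Api.Key.
import Data.Proxy (Proxy (..))

class HasTypeProxy t where
  -- | An uninhabited-by-data "type tag": one nullary constructor per type.
  data AsType t
  proxyToAsType :: Proxy t -> AsType t

-- Hypothetical key role, analogous to ByronKey or KesKey.
data DemoKey

instance HasTypeProxy DemoKey where
  data AsType DemoKey = AsDemoKey
  proxyToAsType _ = AsDemoKey

-- A deserialiser takes the tag to fix its result type, mirroring the shape
-- of deserialiseFromRawBytes :: AsType a -> ByteString -> Maybe a.
parseDemo :: AsType DemoKey -> String -> Maybe String
parseDemo AsDemoKey s = if null s then Nothing else Just s
```

Under this sketch, code holding only a `Proxy t` can still recover the matching tag via `proxyToAsType`, which is how compound tags such as `AsVerificationKey (AsType a)` and `AsHash (AsType a)` are built up in the instances above.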
@@ -25,63 +24,57 @@ import Data.Kind (Type) -- This interface does not provide actual signing or verifying functions since -- this API is concerned with the management of keys: generating and -- serialising. --- -class (Eq (VerificationKey keyrole), - Show (VerificationKey keyrole), - SerialiseAsRawBytes (Hash keyrole), - HasTextEnvelope (VerificationKey keyrole), - HasTextEnvelope (SigningKey keyrole)) - => Key keyrole where - - -- | The type of cryptographic verification key, for each key role. - data VerificationKey keyrole :: Type +class + ( Eq (VerificationKey keyrole) + , Show (VerificationKey keyrole) + , SerialiseAsRawBytes (Hash keyrole) + , HasTextEnvelope (VerificationKey keyrole) + , HasTextEnvelope (SigningKey keyrole) + ) => + Key keyrole + where + -- | The type of cryptographic verification key, for each key role. + data VerificationKey keyrole :: Type - -- | The type of cryptographic signing key, for each key role. - data SigningKey keyrole :: Type + -- | The type of cryptographic signing key, for each key role. + data SigningKey keyrole :: Type - -- | Get the corresponding verification key from a signing key. - getVerificationKey :: SigningKey keyrole -> VerificationKey keyrole + -- | Get the corresponding verification key from a signing key. + getVerificationKey :: SigningKey keyrole -> VerificationKey keyrole - -- | Generate a 'SigningKey' deterministically, given a 'Crypto.Seed'. The - -- required size of the seed is given by 'deterministicSigningKeySeedSize'. - -- - deterministicSigningKey :: AsType keyrole -> Crypto.Seed -> SigningKey keyrole - deterministicSigningKeySeedSize :: AsType keyrole -> Word + -- | Generate a 'SigningKey' deterministically, given a 'Crypto.Seed'. The + -- required size of the seed is given by 'deterministicSigningKeySeedSize'. 
+ deterministicSigningKey :: AsType keyrole -> Crypto.Seed -> SigningKey keyrole - verificationKeyHash :: VerificationKey keyrole -> Hash keyrole + deterministicSigningKeySeedSize :: AsType keyrole -> Word + verificationKeyHash :: VerificationKey keyrole -> Hash keyrole -- TODO: We should move this into the Key type class, with the existing impl as the default impl. -- For KES we can then override it to keep the seed and key in mlocked memory at all times. + -- | Generate a 'SigningKey' using a seed from operating system entropy. --- generateSigningKey :: Key keyrole => AsType keyrole -> IO (SigningKey keyrole) generateSigningKey keytype = do - seed <- Crypto.readSeedFromSystemEntropy seedSize - return $! deterministicSigningKey keytype seed - where - seedSize = deterministicSigningKeySeedSize keytype - + seed <- Crypto.readSeedFromSystemEntropy seedSize + return $! deterministicSigningKey keytype seed + where + seedSize = deterministicSigningKeySeedSize keytype instance HasTypeProxy a => HasTypeProxy (VerificationKey a) where - data AsType (VerificationKey a) = AsVerificationKey (AsType a) - proxyToAsType _ = AsVerificationKey (proxyToAsType (Proxy :: Proxy a)) + data AsType (VerificationKey a) = AsVerificationKey (AsType a) + proxyToAsType _ = AsVerificationKey (proxyToAsType (Proxy :: Proxy a)) instance HasTypeProxy a => HasTypeProxy (SigningKey a) where - data AsType (SigningKey a) = AsSigningKey (AsType a) - proxyToAsType _ = AsSigningKey (proxyToAsType (Proxy :: Proxy a)) - + data AsType (SigningKey a) = AsSigningKey (AsType a) + proxyToAsType _ = AsSigningKey (proxyToAsType (Proxy :: Proxy a)) -- | Some key roles share the same representation and it is sometimes -- legitimate to change the role of a key. --- class CastVerificationKeyRole keyroleA keyroleB where - - -- | Change the role of a 'VerificationKey', if the representation permits. 
- castVerificationKey :: VerificationKey keyroleA -> VerificationKey keyroleB + -- | Change the role of a 'VerificationKey', if the representation permits. + castVerificationKey :: VerificationKey keyroleA -> VerificationKey keyroleB class CastSigningKeyRole keyroleA keyroleB where - - -- | Change the role of a 'SigningKey', if the representation permits. - castSigningKey :: SigningKey keyroleA -> SigningKey keyroleB - + -- | Change the role of a 'SigningKey', if the representation permits. + castSigningKey :: SigningKey keyroleA -> SigningKey keyroleB diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysByron.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysByron.hs index 892a18dc89..f853fd4cbd 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysByron.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysByron.hs @@ -12,16 +12,17 @@ -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/KeysByron.hs -- | Byron key types and their 'Key' class instances --- -module Cardano.Api.KeysByron ( - -- * Key types +module Cardano.Api.KeysByron + ( -- * Key types ByronKey , ByronKeyLegacy + -- * Data family instances , AsType (..) , Hash (..) , SigningKey (..) , VerificationKey (..) + -- * Legacy format , ByronKeyFormat (..) , IsByronKey (..) 
@@ -29,30 +30,32 @@ module Cardano.Api.KeysByron ( , toByronSigningKey ) where -import Cardano.Api.Any -import Cardano.Api.Key -import Cardano.Api.KeysShelley -import Cardano.Api.SerialiseTextEnvelope -import Cardano.Api.SerialiseUsing -import qualified Cardano.Chain.Common as Byron -import qualified Cardano.Crypto.DSIGN.Class as Crypto -import qualified Cardano.Crypto.Hashing as Byron -import qualified Cardano.Crypto.Seed as Crypto -import qualified Cardano.Crypto.Signing as Byron -import qualified Cardano.Crypto.Signing as Crypto -import qualified Cardano.Crypto.Wallet as Crypto.HD -import Cardano.Ledger.Binary (byronProtVer, toPlainDecoder, - toPlainEncoding) -import Cardano.Prelude (cborError, toCborError) -import qualified Codec.CBOR.Decoding as CBOR -import qualified Codec.CBOR.Read as CBOR -import qualified Codec.CBOR.Write as CBOR (toStrictByteString) -import Control.Monad -import qualified Data.ByteString.Lazy as LB -import Data.String (IsString) -import Data.Text (Text) -import qualified Data.Text as Text - +import Cardano.Api.Any +import Cardano.Api.Key +import Cardano.Api.KeysShelley +import Cardano.Api.SerialiseTextEnvelope +import Cardano.Api.SerialiseUsing +import Cardano.Chain.Common qualified as Byron +import Cardano.Crypto.DSIGN.Class qualified as Crypto +import Cardano.Crypto.Hashing qualified as Byron +import Cardano.Crypto.Seed qualified as Crypto +import Cardano.Crypto.Signing qualified as Byron +import Cardano.Crypto.Signing qualified as Crypto +import Cardano.Crypto.Wallet qualified as Crypto.HD +import Cardano.Ledger.Binary + ( byronProtVer + , toPlainDecoder + , toPlainEncoding + ) +import Cardano.Prelude (cborError, toCborError) +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Read qualified as CBOR +import Codec.CBOR.Write qualified as CBOR (toStrictByteString) +import Control.Monad +import Data.ByteString.Lazy qualified as LB +import Data.String (IsString) +import Data.Text (Text) +import Data.Text qualified as Text -- | 
Byron-era payment keys. Used for Byron addresses and witnessing -- transactions that spend from these addresses. @@ -63,12 +66,12 @@ import qualified Data.Text as Text -- a chaincode. It is safe to use a zero or random chaincode for new Byron keys. -- -- This is a type level tag, used with other interfaces like 'Key'. --- data ByronKey + data ByronKeyLegacy class IsByronKey key where - byronKeyFormat :: ByronKeyFormat key + byronKeyFormat :: ByronKeyFormat key data ByronKeyFormat key where ByronLegacyKeyFormat :: ByronKeyFormat ByronKeyLegacy @@ -82,72 +85,74 @@ toByronSigningKey :: SomeByronSigningKey -> Byron.SigningKey toByronSigningKey bWit = case bWit of AByronSigningKeyLegacy (ByronSigningKeyLegacy sKey) -> sKey - AByronSigningKey (ByronSigningKey sKey) -> sKey + AByronSigningKey (ByronSigningKey sKey) -> sKey -- -- Byron key -- instance Key ByronKey where - - newtype VerificationKey ByronKey = - ByronVerificationKey Byron.VerificationKey - deriving stock Eq - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey ByronKey) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - newtype SigningKey ByronKey = - ByronSigningKey Byron.SigningKey - deriving (Show, IsString) via UsingRawBytesHex (SigningKey ByronKey) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - deterministicSigningKey :: AsType ByronKey -> Crypto.Seed -> SigningKey ByronKey - deterministicSigningKey AsByronKey seed = - ByronSigningKey (snd (Crypto.runMonadRandomWithSeed seed Byron.keyGen)) - - deterministicSigningKeySeedSize :: AsType ByronKey -> Word - deterministicSigningKeySeedSize AsByronKey = 32 - - getVerificationKey :: SigningKey ByronKey -> VerificationKey ByronKey - getVerificationKey (ByronSigningKey sk) = - ByronVerificationKey (Byron.toVerification sk) - - verificationKeyHash :: VerificationKey ByronKey -> Hash ByronKey - verificationKeyHash (ByronVerificationKey vkey) = - ByronKeyHash (Byron.hashKey vkey) + newtype 
VerificationKey ByronKey + = ByronVerificationKey Byron.VerificationKey + deriving stock Eq + deriving (Show, IsString) via UsingRawBytesHex (VerificationKey ByronKey) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + newtype SigningKey ByronKey + = ByronSigningKey Byron.SigningKey + deriving (Show, IsString) via UsingRawBytesHex (SigningKey ByronKey) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + deterministicSigningKey :: AsType ByronKey -> Crypto.Seed -> SigningKey ByronKey + deterministicSigningKey AsByronKey seed = + ByronSigningKey (snd (Crypto.runMonadRandomWithSeed seed Byron.keyGen)) + + deterministicSigningKeySeedSize :: AsType ByronKey -> Word + deterministicSigningKeySeedSize AsByronKey = 32 + + getVerificationKey :: SigningKey ByronKey -> VerificationKey ByronKey + getVerificationKey (ByronSigningKey sk) = + ByronVerificationKey (Byron.toVerification sk) + + verificationKeyHash :: VerificationKey ByronKey -> Hash ByronKey + verificationKeyHash (ByronVerificationKey vkey) = + ByronKeyHash (Byron.hashKey vkey) instance HasTypeProxy ByronKey where - data AsType ByronKey = AsByronKey - proxyToAsType _ = AsByronKey + data AsType ByronKey = AsByronKey + proxyToAsType _ = AsByronKey instance HasTextEnvelope (VerificationKey ByronKey) where - textEnvelopeType _ = "PaymentVerificationKeyByron_ed25519_bip32" + textEnvelopeType _ = "PaymentVerificationKeyByron_ed25519_bip32" instance HasTextEnvelope (SigningKey ByronKey) where - textEnvelopeType _ = "PaymentSigningKeyByron_ed25519_bip32" + textEnvelopeType _ = "PaymentSigningKeyByron_ed25519_bip32" instance SerialiseAsRawBytes (VerificationKey ByronKey) where - serialiseToRawBytes (ByronVerificationKey (Byron.VerificationKey xvk)) = - Crypto.HD.unXPub xvk + serialiseToRawBytes (ByronVerificationKey (Byron.VerificationKey xvk)) = + Crypto.HD.unXPub xvk - deserialiseFromRawBytes (AsVerificationKey AsByronKey) bs = - either (const Nothing) (Just . 
ByronVerificationKey . Byron.VerificationKey) - (Crypto.HD.xpub bs) + deserialiseFromRawBytes (AsVerificationKey AsByronKey) bs = + either + (const Nothing) + (Just . ByronVerificationKey . Byron.VerificationKey) + (Crypto.HD.xpub bs) instance SerialiseAsRawBytes (SigningKey ByronKey) where - serialiseToRawBytes (ByronSigningKey (Byron.SigningKey xsk)) = - CBOR.toStrictByteString $ encCBORXPrv xsk - where - encCBORXPrv = toPlainEncoding byronProtVer . Crypto.encCBORXPrv - - deserialiseFromRawBytes (AsSigningKey AsByronKey) bs = - either (const Nothing) (Just . ByronSigningKey . Byron.SigningKey) - (snd <$> CBOR.deserialiseFromBytes decCBORXPrv (LB.fromStrict bs)) - where - decCBORXPrv = toPlainDecoder Nothing byronProtVer Byron.decCBORXPrv - + serialiseToRawBytes (ByronSigningKey (Byron.SigningKey xsk)) = + CBOR.toStrictByteString $ encCBORXPrv xsk + where + encCBORXPrv = toPlainEncoding byronProtVer . Crypto.encCBORXPrv + + deserialiseFromRawBytes (AsSigningKey AsByronKey) bs = + either + (const Nothing) + (Just . ByronSigningKey . Byron.SigningKey) + (snd <$> CBOR.deserialiseFromBytes decCBORXPrv (LB.fromStrict bs)) + where + decCBORXPrv = toPlainDecoder Nothing byronProtVer Byron.decCBORXPrv newtype instance Hash ByronKey = ByronKeyHash Byron.KeyHash deriving (Eq, Ord) @@ -156,23 +161,27 @@ newtype instance Hash ByronKey = ByronKeyHash Byron.KeyHash deriving anyclass SerialiseAsCBOR instance SerialiseAsRawBytes (Hash ByronKey) where - serialiseToRawBytes (ByronKeyHash (Byron.KeyHash vkh)) = - Byron.abstractHashToBytes vkh + serialiseToRawBytes (ByronKeyHash (Byron.KeyHash vkh)) = + Byron.abstractHashToBytes vkh - deserialiseFromRawBytes (AsHash AsByronKey) bs = - ByronKeyHash . Byron.KeyHash <$> Byron.abstractHashFromBytes bs + deserialiseFromRawBytes (AsHash AsByronKey) bs = + ByronKeyHash . 
Byron.KeyHash <$> Byron.abstractHashFromBytes bs instance CastVerificationKeyRole ByronKey PaymentExtendedKey where - castVerificationKey (ByronVerificationKey vk) = - PaymentExtendedVerificationKey - (Byron.unVerificationKey vk) + castVerificationKey (ByronVerificationKey vk) = + PaymentExtendedVerificationKey + (Byron.unVerificationKey vk) instance CastVerificationKeyRole ByronKey PaymentKey where - castVerificationKey = - (castVerificationKey :: VerificationKey PaymentExtendedKey - -> VerificationKey PaymentKey) - . (castVerificationKey :: VerificationKey ByronKey - -> VerificationKey PaymentExtendedKey) + castVerificationKey = + ( castVerificationKey :: + VerificationKey PaymentExtendedKey -> + VerificationKey PaymentKey + ) + . ( castVerificationKey :: + VerificationKey ByronKey -> + VerificationKey PaymentExtendedKey + ) instance IsByronKey ByronKey where byronKeyFormat = ByronModernKeyFormat @@ -182,43 +191,42 @@ instance IsByronKey ByronKey where -- instance Key ByronKeyLegacy where + newtype VerificationKey ByronKeyLegacy + = ByronVerificationKeyLegacy Byron.VerificationKey + deriving stock Eq + deriving (Show, IsString) via UsingRawBytesHex (VerificationKey ByronKeyLegacy) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR - newtype VerificationKey ByronKeyLegacy = - ByronVerificationKeyLegacy Byron.VerificationKey - deriving stock (Eq) - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey ByronKeyLegacy) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - newtype SigningKey ByronKeyLegacy = - ByronSigningKeyLegacy Byron.SigningKey - deriving (Show, IsString) via UsingRawBytesHex (SigningKey ByronKeyLegacy) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR + newtype SigningKey ByronKeyLegacy + = ByronSigningKeyLegacy Byron.SigningKey + deriving (Show, IsString) via UsingRawBytesHex (SigningKey ByronKeyLegacy) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass 
SerialiseAsCBOR - deterministicSigningKey :: AsType ByronKeyLegacy -> Crypto.Seed -> SigningKey ByronKeyLegacy - deterministicSigningKey _ _ = error "Please generate a non legacy Byron key instead" + deterministicSigningKey :: AsType ByronKeyLegacy -> Crypto.Seed -> SigningKey ByronKeyLegacy + deterministicSigningKey _ _ = error "Please generate a non legacy Byron key instead" - deterministicSigningKeySeedSize :: AsType ByronKeyLegacy -> Word - deterministicSigningKeySeedSize AsByronKeyLegacy = 32 + deterministicSigningKeySeedSize :: AsType ByronKeyLegacy -> Word + deterministicSigningKeySeedSize AsByronKeyLegacy = 32 - getVerificationKey :: SigningKey ByronKeyLegacy -> VerificationKey ByronKeyLegacy - getVerificationKey (ByronSigningKeyLegacy sk) = - ByronVerificationKeyLegacy (Byron.toVerification sk) + getVerificationKey :: SigningKey ByronKeyLegacy -> VerificationKey ByronKeyLegacy + getVerificationKey (ByronSigningKeyLegacy sk) = + ByronVerificationKeyLegacy (Byron.toVerification sk) - verificationKeyHash :: VerificationKey ByronKeyLegacy -> Hash ByronKeyLegacy - verificationKeyHash (ByronVerificationKeyLegacy vkey) = - ByronKeyHashLegacy (Byron.hashKey vkey) + verificationKeyHash :: VerificationKey ByronKeyLegacy -> Hash ByronKeyLegacy + verificationKeyHash (ByronVerificationKeyLegacy vkey) = + ByronKeyHashLegacy (Byron.hashKey vkey) instance HasTypeProxy ByronKeyLegacy where data AsType ByronKeyLegacy = AsByronKeyLegacy proxyToAsType _ = AsByronKeyLegacy instance HasTextEnvelope (VerificationKey ByronKeyLegacy) where - textEnvelopeType _ = "PaymentVerificationKeyByronLegacy_ed25519_bip32" + textEnvelopeType _ = "PaymentVerificationKeyByronLegacy_ed25519_bip32" instance HasTextEnvelope (SigningKey ByronKeyLegacy) where - textEnvelopeType _ = "PaymentSigningKeyByronLegacy_ed25519_bip32" + textEnvelopeType _ = "PaymentSigningKeyByronLegacy_ed25519_bip32" newtype instance Hash ByronKeyLegacy = ByronKeyHashLegacy Byron.KeyHash deriving (Eq, Ord) @@ -227,68 
+235,75 @@ newtype instance Hash ByronKeyLegacy = ByronKeyHashLegacy Byron.KeyHash deriving anyclass SerialiseAsCBOR instance SerialiseAsRawBytes (Hash ByronKeyLegacy) where - serialiseToRawBytes (ByronKeyHashLegacy (Byron.KeyHash vkh)) = - Byron.abstractHashToBytes vkh + serialiseToRawBytes (ByronKeyHashLegacy (Byron.KeyHash vkh)) = + Byron.abstractHashToBytes vkh - deserialiseFromRawBytes (AsHash AsByronKeyLegacy) bs = - ByronKeyHashLegacy . Byron.KeyHash <$> Byron.abstractHashFromBytes bs + deserialiseFromRawBytes (AsHash AsByronKeyLegacy) bs = + ByronKeyHashLegacy . Byron.KeyHash <$> Byron.abstractHashFromBytes bs instance SerialiseAsRawBytes (VerificationKey ByronKeyLegacy) where - serialiseToRawBytes (ByronVerificationKeyLegacy (Byron.VerificationKey xvk)) = - Crypto.HD.unXPub xvk + serialiseToRawBytes (ByronVerificationKeyLegacy (Byron.VerificationKey xvk)) = + Crypto.HD.unXPub xvk - deserialiseFromRawBytes (AsVerificationKey AsByronKeyLegacy) bs = - either (const Nothing) (Just . ByronVerificationKeyLegacy . Byron.VerificationKey) - (Crypto.HD.xpub bs) + deserialiseFromRawBytes (AsVerificationKey AsByronKeyLegacy) bs = + either + (const Nothing) + (Just . ByronVerificationKeyLegacy . Byron.VerificationKey) + (Crypto.HD.xpub bs) instance SerialiseAsRawBytes (SigningKey ByronKeyLegacy) where - serialiseToRawBytes (ByronSigningKeyLegacy (Byron.SigningKey xsk)) = - Crypto.HD.unXPrv xsk - - deserialiseFromRawBytes (AsSigningKey AsByronKeyLegacy) bs = - either (const Nothing) (Just . ByronSigningKeyLegacy . snd) - (CBOR.deserialiseFromBytes decodeLegacyDelegateKey $ LB.fromStrict bs) - where - -- Stolen from: cardano-sl/binary/src/Pos/Binary/Class/Core.hs - -- | Enforces that the input size is the same as the decoded one, failing in - -- case it's not. 
- enforceSize :: Text -> Int -> CBOR.Decoder s () - enforceSize lbl requestedSize = CBOR.decodeListLenCanonical >>= matchSize requestedSize lbl - - -- Stolen from: cardano-sl/binary/src/Pos/Binary/Class/Core.hs - -- | Compare two sizes, failing if they are not equal. - matchSize :: Int -> Text -> Int -> CBOR.Decoder s () - matchSize requestedSize lbl actualSize = - when (actualSize /= requestedSize) $ - cborError ( lbl <> " failed the size check. Expected " <> Text.pack (show requestedSize) - <> ", found " <> Text.pack (show actualSize) - ) - - decodeXPrv :: CBOR.Decoder s Crypto.HD.XPrv - decodeXPrv = CBOR.decodeBytesCanonical >>= toCborError . Crypto.HD.xprv - - - -- | Decoder for a Byron/Classic signing key. - -- Lifted from cardano-sl legacy codebase. - decodeLegacyDelegateKey :: CBOR.Decoder s Byron.SigningKey - decodeLegacyDelegateKey = do - enforceSize "UserSecret" 4 - _ <- do - enforceSize "vss" 1 - CBOR.decodeBytes - pkey <- do - enforceSize "pkey" 1 - Byron.SigningKey <$> decodeXPrv - _ <- do - CBOR.decodeListLenIndef - CBOR.decodeSequenceLenIndef (flip (:)) [] reverse CBOR.decodeNull - _ <- do - enforceSize "wallet" 0 - pure pkey + serialiseToRawBytes (ByronSigningKeyLegacy (Byron.SigningKey xsk)) = + Crypto.HD.unXPrv xsk + + deserialiseFromRawBytes (AsSigningKey AsByronKeyLegacy) bs = + either + (const Nothing) + (Just . ByronSigningKeyLegacy . snd) + (CBOR.deserialiseFromBytes decodeLegacyDelegateKey $ LB.fromStrict bs) + where + -- Stolen from: cardano-sl/binary/src/Pos/Binary/Class/Core.hs + -- \| Enforces that the input size is the same as the decoded one, failing in + -- case it's not. + enforceSize :: Text -> Int -> CBOR.Decoder s () + enforceSize lbl requestedSize = CBOR.decodeListLenCanonical >>= matchSize requestedSize lbl + + -- Stolen from: cardano-sl/binary/src/Pos/Binary/Class/Core.hs + -- \| Compare two sizes, failing if they are not equal. 
+ matchSize :: Int -> Text -> Int -> CBOR.Decoder s () + matchSize requestedSize lbl actualSize = + when (actualSize /= requestedSize) $ + cborError + ( lbl + <> " failed the size check. Expected " + <> Text.pack (show requestedSize) + <> ", found " + <> Text.pack (show actualSize) + ) + + decodeXPrv :: CBOR.Decoder s Crypto.HD.XPrv + decodeXPrv = CBOR.decodeBytesCanonical >>= toCborError . Crypto.HD.xprv + + -- \| Decoder for a Byron/Classic signing key. + -- Lifted from cardano-sl legacy codebase. + decodeLegacyDelegateKey :: CBOR.Decoder s Byron.SigningKey + decodeLegacyDelegateKey = do + enforceSize "UserSecret" 4 + _ <- do + enforceSize "vss" 1 + CBOR.decodeBytes + pkey <- do + enforceSize "pkey" 1 + Byron.SigningKey <$> decodeXPrv + _ <- do + CBOR.decodeListLenIndef + CBOR.decodeSequenceLenIndef (flip (:)) [] reverse CBOR.decodeNull + _ <- do + enforceSize "wallet" 0 + pure pkey instance CastVerificationKeyRole ByronKeyLegacy ByronKey where - castVerificationKey (ByronVerificationKeyLegacy vk) = - ByronVerificationKey vk + castVerificationKey (ByronVerificationKeyLegacy vk) = + ByronVerificationKey vk instance IsByronKey ByronKeyLegacy where byronKeyFormat = ByronLegacyKeyFormat diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysPraos.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysPraos.hs index 815371f197..2efe0f02f0 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysPraos.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysPraos.hs @@ -11,11 +11,11 @@ -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/KeysPraos.hs -- | Praos consensus key types and their 'Key' class instances --- -module Cardano.Api.KeysPraos ( - -- * Key types +module Cardano.Api.KeysPraos + ( -- * Key types KesKey , VrfKey + -- * Data family instances , AsType (..) , Hash (..) @@ -23,17 +23,17 @@ module Cardano.Api.KeysPraos ( , VerificationKey (..) 
) where -import Cardano.Api.Any -import Cardano.Api.Key -import Cardano.Api.SerialiseTextEnvelope -import Cardano.Api.SerialiseUsing -import qualified Cardano.Crypto.DSIGN.Class as Crypto -import qualified Cardano.Crypto.Hash.Class as Crypto -import qualified Cardano.Crypto.KES.Class as Crypto -import qualified Cardano.Crypto.VRF.Class as Crypto -import Cardano.Ledger.Hashes (HASH) -import Cardano.Protocol.Crypto (Crypto (..), StandardCrypto) -import Data.String (IsString (..)) +import Cardano.Api.Any +import Cardano.Api.Key +import Cardano.Api.SerialiseTextEnvelope +import Cardano.Api.SerialiseUsing +import Cardano.Crypto.DSIGN.Class qualified as Crypto +import Cardano.Crypto.Hash.Class qualified as Crypto +import Cardano.Crypto.KES.Class qualified as Crypto +import Cardano.Crypto.VRF.Class qualified as Crypto +import Cardano.Ledger.Hashes (HASH) +import Cardano.Protocol.Crypto (Crypto (..), StandardCrypto) +import Data.String (IsString (..)) -- -- KES keys @@ -42,98 +42,99 @@ import Data.String (IsString (..)) data KesKey instance HasTypeProxy KesKey where - data AsType KesKey = AsKesKey - proxyToAsType _ = AsKesKey + data AsType KesKey = AsKesKey + proxyToAsType _ = AsKesKey instance Key KesKey where - - newtype VerificationKey KesKey = - KesVerificationKey (Crypto.VerKeyKES (KES StandardCrypto)) - deriving stock (Eq) - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey KesKey) - deriving newtype (EncCBOR, DecCBOR, ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - newtype SigningKey KesKey = - KesSigningKey (Crypto.UnsoundPureSignKeyKES (KES StandardCrypto)) - deriving (Show, IsString) via UsingRawBytesHex (SigningKey KesKey) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass (EncCBOR, DecCBOR, SerialiseAsCBOR) - - --This loses the mlock safety of the seed, since it starts from a normal in-memory seed. - deterministicSigningKey :: AsType KesKey -> Crypto.Seed -> SigningKey KesKey - deterministicSigningKey AsKesKey = - KesSigningKey . 
Crypto.unsoundPureGenKeyKES - - deterministicSigningKeySeedSize :: AsType KesKey -> Word - deterministicSigningKeySeedSize AsKesKey = - Crypto.seedSizeKES proxy - where - proxy :: Proxy (KES StandardCrypto) - proxy = Proxy - - getVerificationKey :: SigningKey KesKey -> VerificationKey KesKey - getVerificationKey (KesSigningKey sk) = - KesVerificationKey (Crypto.unsoundPureDeriveVerKeyKES sk) - - verificationKeyHash :: VerificationKey KesKey -> Hash KesKey - verificationKeyHash (KesVerificationKey vkey) = - KesKeyHash (Crypto.hashVerKeyKES vkey) - + newtype VerificationKey KesKey + = KesVerificationKey (Crypto.VerKeyKES (KES StandardCrypto)) + deriving stock Eq + deriving (Show, IsString) via UsingRawBytesHex (VerificationKey KesKey) + deriving newtype (EncCBOR, DecCBOR, ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + newtype SigningKey KesKey + = KesSigningKey (Crypto.UnsoundPureSignKeyKES (KES StandardCrypto)) + deriving (Show, IsString) via UsingRawBytesHex (SigningKey KesKey) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass (EncCBOR, DecCBOR, SerialiseAsCBOR) + + -- This loses the mlock safety of the seed, since it starts from a normal in-memory seed. + deterministicSigningKey :: AsType KesKey -> Crypto.Seed -> SigningKey KesKey + deterministicSigningKey AsKesKey = + KesSigningKey . 
Crypto.unsoundPureGenKeyKES + + deterministicSigningKeySeedSize :: AsType KesKey -> Word + deterministicSigningKeySeedSize AsKesKey = + Crypto.seedSizeKES proxy + where + proxy :: Proxy (KES StandardCrypto) + proxy = Proxy + + getVerificationKey :: SigningKey KesKey -> VerificationKey KesKey + getVerificationKey (KesSigningKey sk) = + KesVerificationKey (Crypto.unsoundPureDeriveVerKeyKES sk) + + verificationKeyHash :: VerificationKey KesKey -> Hash KesKey + verificationKeyHash (KesVerificationKey vkey) = + KesKeyHash (Crypto.hashVerKeyKES vkey) instance SerialiseAsRawBytes (VerificationKey KesKey) where - serialiseToRawBytes (KesVerificationKey vk) = - Crypto.rawSerialiseVerKeyKES vk + serialiseToRawBytes (KesVerificationKey vk) = + Crypto.rawSerialiseVerKeyKES vk - deserialiseFromRawBytes (AsVerificationKey AsKesKey) bs = - KesVerificationKey <$> - Crypto.rawDeserialiseVerKeyKES bs + deserialiseFromRawBytes (AsVerificationKey AsKesKey) bs = + KesVerificationKey + <$> Crypto.rawDeserialiseVerKeyKES bs instance SerialiseAsRawBytes (SigningKey KesKey) where - serialiseToRawBytes (KesSigningKey sk) = - Crypto.rawSerialiseUnsoundPureSignKeyKES sk + serialiseToRawBytes (KesSigningKey sk) = + Crypto.rawSerialiseUnsoundPureSignKeyKES sk - deserialiseFromRawBytes (AsSigningKey AsKesKey) bs = - KesSigningKey <$> Crypto.rawDeserialiseUnsoundPureSignKeyKES bs + deserialiseFromRawBytes (AsSigningKey AsKesKey) bs = + KesSigningKey <$> Crypto.rawDeserialiseUnsoundPureSignKeyKES bs instance SerialiseAsBech32 (VerificationKey KesKey) where - bech32PrefixFor _ = "kes_vk" - bech32PrefixesPermitted _ = ["kes_vk"] + bech32PrefixFor _ = "kes_vk" + bech32PrefixesPermitted _ = ["kes_vk"] instance SerialiseAsBech32 (SigningKey KesKey) where - bech32PrefixFor _ = "kes_sk" - bech32PrefixesPermitted _ = ["kes_sk"] - - -newtype instance Hash KesKey = - KesKeyHash (Crypto.Hash HASH - (Crypto.VerKeyKES (KES StandardCrypto))) + bech32PrefixFor _ = "kes_sk" + bech32PrefixesPermitted _ = 
["kes_sk"] + +newtype instance Hash KesKey + = KesKeyHash + ( Crypto.Hash + HASH + (Crypto.VerKeyKES (KES StandardCrypto)) + ) deriving stock (Eq, Ord) deriving (Show, IsString) via UsingRawBytesHex (Hash KesKey) deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash KesKey) deriving anyclass SerialiseAsCBOR instance SerialiseAsRawBytes (Hash KesKey) where - serialiseToRawBytes (KesKeyHash vkh) = - Crypto.hashToBytes vkh + serialiseToRawBytes (KesKeyHash vkh) = + Crypto.hashToBytes vkh - deserialiseFromRawBytes (AsHash AsKesKey) bs = - KesKeyHash <$> Crypto.hashFromBytes bs + deserialiseFromRawBytes (AsHash AsKesKey) bs = + KesKeyHash <$> Crypto.hashFromBytes bs instance HasTextEnvelope (VerificationKey KesKey) where - textEnvelopeType _ = "KesVerificationKey_" - <> fromString (Crypto.algorithmNameKES proxy) - where - proxy :: Proxy (KES StandardCrypto) - proxy = Proxy + textEnvelopeType _ = + "KesVerificationKey_" + <> fromString (Crypto.algorithmNameKES proxy) + where + proxy :: Proxy (KES StandardCrypto) + proxy = Proxy instance HasTextEnvelope (SigningKey KesKey) where - textEnvelopeType _ = "KesSigningKey_" - <> fromString (Crypto.algorithmNameKES proxy) - where - proxy :: Proxy (KES StandardCrypto) - proxy = Proxy - + textEnvelopeType _ = + "KesSigningKey_" + <> fromString (Crypto.algorithmNameKES proxy) + where + proxy :: Proxy (KES StandardCrypto) + proxy = Proxy -- -- VRF keys @@ -142,89 +143,90 @@ instance HasTextEnvelope (SigningKey KesKey) where data VrfKey instance HasTypeProxy VrfKey where - data AsType VrfKey = AsVrfKey - proxyToAsType _ = AsVrfKey + data AsType VrfKey = AsVrfKey + proxyToAsType _ = AsVrfKey instance Key VrfKey where - - newtype VerificationKey VrfKey = - VrfVerificationKey (Crypto.VerKeyVRF (VRF StandardCrypto)) - deriving stock (Eq) - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey VrfKey) - deriving newtype (EncCBOR, DecCBOR, ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - newtype SigningKey VrfKey = - 
VrfSigningKey (Crypto.SignKeyVRF (VRF StandardCrypto)) - deriving (Show, IsString) via UsingRawBytesHex (SigningKey VrfKey) - deriving newtype (EncCBOR, DecCBOR, ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - deterministicSigningKey :: AsType VrfKey -> Crypto.Seed -> SigningKey VrfKey - deterministicSigningKey AsVrfKey seed = - VrfSigningKey (Crypto.genKeyVRF seed) - - deterministicSigningKeySeedSize :: AsType VrfKey -> Word - deterministicSigningKeySeedSize AsVrfKey = - Crypto.seedSizeVRF proxy - where - proxy :: Proxy (VRF StandardCrypto) - proxy = Proxy - - getVerificationKey :: SigningKey VrfKey -> VerificationKey VrfKey - getVerificationKey (VrfSigningKey sk) = - VrfVerificationKey (Crypto.deriveVerKeyVRF sk) - - verificationKeyHash :: VerificationKey VrfKey -> Hash VrfKey - verificationKeyHash (VrfVerificationKey vkey) = - VrfKeyHash (Crypto.hashVerKeyVRF vkey) + newtype VerificationKey VrfKey + = VrfVerificationKey (Crypto.VerKeyVRF (VRF StandardCrypto)) + deriving stock Eq + deriving (Show, IsString) via UsingRawBytesHex (VerificationKey VrfKey) + deriving newtype (EncCBOR, DecCBOR, ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + newtype SigningKey VrfKey + = VrfSigningKey (Crypto.SignKeyVRF (VRF StandardCrypto)) + deriving (Show, IsString) via UsingRawBytesHex (SigningKey VrfKey) + deriving newtype (EncCBOR, DecCBOR, ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + deterministicSigningKey :: AsType VrfKey -> Crypto.Seed -> SigningKey VrfKey + deterministicSigningKey AsVrfKey seed = + VrfSigningKey (Crypto.genKeyVRF seed) + + deterministicSigningKeySeedSize :: AsType VrfKey -> Word + deterministicSigningKeySeedSize AsVrfKey = + Crypto.seedSizeVRF proxy + where + proxy :: Proxy (VRF StandardCrypto) + proxy = Proxy + + getVerificationKey :: SigningKey VrfKey -> VerificationKey VrfKey + getVerificationKey (VrfSigningKey sk) = + VrfVerificationKey (Crypto.deriveVerKeyVRF sk) + + verificationKeyHash :: VerificationKey VrfKey -> Hash 
VrfKey + verificationKeyHash (VrfVerificationKey vkey) = + VrfKeyHash (Crypto.hashVerKeyVRF vkey) instance SerialiseAsRawBytes (VerificationKey VrfKey) where - serialiseToRawBytes (VrfVerificationKey vk) = - Crypto.rawSerialiseVerKeyVRF vk + serialiseToRawBytes (VrfVerificationKey vk) = + Crypto.rawSerialiseVerKeyVRF vk - deserialiseFromRawBytes (AsVerificationKey AsVrfKey) bs = - VrfVerificationKey <$> Crypto.rawDeserialiseVerKeyVRF bs + deserialiseFromRawBytes (AsVerificationKey AsVrfKey) bs = + VrfVerificationKey <$> Crypto.rawDeserialiseVerKeyVRF bs instance SerialiseAsRawBytes (SigningKey VrfKey) where - serialiseToRawBytes (VrfSigningKey sk) = - Crypto.rawSerialiseSignKeyVRF sk + serialiseToRawBytes (VrfSigningKey sk) = + Crypto.rawSerialiseSignKeyVRF sk - deserialiseFromRawBytes (AsSigningKey AsVrfKey) bs = - VrfSigningKey <$> Crypto.rawDeserialiseSignKeyVRF bs + deserialiseFromRawBytes (AsSigningKey AsVrfKey) bs = + VrfSigningKey <$> Crypto.rawDeserialiseSignKeyVRF bs instance SerialiseAsBech32 (VerificationKey VrfKey) where - bech32PrefixFor _ = "vrf_vk" - bech32PrefixesPermitted _ = ["vrf_vk"] + bech32PrefixFor _ = "vrf_vk" + bech32PrefixesPermitted _ = ["vrf_vk"] instance SerialiseAsBech32 (SigningKey VrfKey) where - bech32PrefixFor _ = "vrf_sk" - bech32PrefixesPermitted _ = ["vrf_sk"] - -newtype instance Hash VrfKey = - VrfKeyHash (Crypto.Hash HASH - (Crypto.VerKeyVRF (VRF StandardCrypto))) + bech32PrefixFor _ = "vrf_sk" + bech32PrefixesPermitted _ = ["vrf_sk"] + +newtype instance Hash VrfKey + = VrfKeyHash + ( Crypto.Hash + HASH + (Crypto.VerKeyVRF (VRF StandardCrypto)) + ) deriving stock (Eq, Ord) deriving (Show, IsString) via UsingRawBytesHex (Hash VrfKey) deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash VrfKey) deriving anyclass SerialiseAsCBOR instance SerialiseAsRawBytes (Hash VrfKey) where - serialiseToRawBytes (VrfKeyHash vkh) = - Crypto.hashToBytes vkh + serialiseToRawBytes (VrfKeyHash vkh) = + Crypto.hashToBytes vkh - 
deserialiseFromRawBytes (AsHash AsVrfKey) bs = - VrfKeyHash <$> Crypto.hashFromBytes bs + deserialiseFromRawBytes (AsHash AsVrfKey) bs = + VrfKeyHash <$> Crypto.hashFromBytes bs instance HasTextEnvelope (VerificationKey VrfKey) where - textEnvelopeType _ = "VrfVerificationKey_" <> fromString (Crypto.algorithmNameVRF proxy) - where - proxy :: Proxy (VRF StandardCrypto) - proxy = Proxy + textEnvelopeType _ = "VrfVerificationKey_" <> fromString (Crypto.algorithmNameVRF proxy) + where + proxy :: Proxy (VRF StandardCrypto) + proxy = Proxy instance HasTextEnvelope (SigningKey VrfKey) where - textEnvelopeType _ = "VrfSigningKey_" <> fromString (Crypto.algorithmNameVRF proxy) - where - proxy :: Proxy (VRF StandardCrypto) - proxy = Proxy - + textEnvelopeType _ = "VrfSigningKey_" <> fromString (Crypto.algorithmNameVRF proxy) + where + proxy :: Proxy (VRF StandardCrypto) + proxy = Proxy diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysShelley.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysShelley.hs index 673ead3ed6..df551d1bca 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysShelley.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/KeysShelley.hs @@ -1,3 +1,6 @@ +-- The Shelley ledger uses promoted data kinds which we have to use, but we do +-- not export any from this API. We also use them unticked as nature intended. +{-# LANGUAGE DataKinds #-} {-# LANGUAGE DeriveAnyClass #-} {-# LANGUAGE DerivingVia #-} {-# LANGUAGE FlexibleInstances #-} @@ -8,16 +11,11 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} --- The Shelley ledger uses promoted data kinds which we have to use, but we do --- not export any from this API. We also use them unticked as nature intended. 
-{-# LANGUAGE DataKinds #-} - -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/KeysShelley.hs -- | Shelley key types and their 'Key' class instances --- -module Cardano.Api.KeysShelley ( - -- * Key types +module Cardano.Api.KeysShelley + ( -- * Key types GenesisDelegateExtendedKey , GenesisDelegateKey , GenesisExtendedKey @@ -28,6 +26,7 @@ module Cardano.Api.KeysShelley ( , StakeExtendedKey , StakeKey , StakePoolKey + -- * Data family instances , AsType (..) , Hash (..) @@ -35,24 +34,29 @@ module Cardano.Api.KeysShelley ( , VerificationKey (..) ) where -import Cardano.Api.Any -import Cardano.Api.Key -import Cardano.Api.SerialiseTextEnvelope -import Cardano.Api.SerialiseUsing -import Cardano.Crypto.DSIGN (SignKeyDSIGN) -import qualified Cardano.Crypto.DSIGN.Class as Crypto -import qualified Cardano.Crypto.Hash.Class as Crypto -import qualified Cardano.Crypto.Seed as Crypto -import qualified Cardano.Crypto.Wallet as Crypto.HD -import Cardano.Ledger.Keys (DSIGN) -import qualified Cardano.Ledger.Keys as Shelley -import Data.Aeson.Types (FromJSON (..), ToJSON (..), ToJSONKey (..), - toJSONKeyText, withText) -import Data.ByteString (ByteString) -import qualified Data.ByteString as BS -import Data.Maybe -import Data.String (IsString (..)) -import qualified Data.Text as Text +import Cardano.Api.Any +import Cardano.Api.Key +import Cardano.Api.SerialiseTextEnvelope +import Cardano.Api.SerialiseUsing +import Cardano.Crypto.DSIGN (SignKeyDSIGN) +import Cardano.Crypto.DSIGN.Class qualified as Crypto +import Cardano.Crypto.Hash.Class qualified as Crypto +import Cardano.Crypto.Seed qualified as Crypto +import Cardano.Crypto.Wallet qualified as Crypto.HD +import Cardano.Ledger.Keys (DSIGN) +import Cardano.Ledger.Keys qualified as Shelley +import Data.Aeson.Types + ( FromJSON (..) + , ToJSON (..) + , ToJSONKey (..) 
+ , toJSONKeyText + , withText + ) +import Data.ByteString (ByteString) +import Data.ByteString qualified as BS +import Data.Maybe +import Data.String (IsString (..)) +import Data.Text qualified as Text -- -- Shelley payment keys @@ -62,98 +66,97 @@ import qualified Data.Text as Text -- transactions that spend from these addresses. -- -- This is a type level tag, used with other interfaces like 'Key'. --- data PaymentKey instance HasTypeProxy PaymentKey where - data AsType PaymentKey = AsPaymentKey - proxyToAsType _ = AsPaymentKey + data AsType PaymentKey = AsPaymentKey + proxyToAsType _ = AsPaymentKey instance Key PaymentKey where - - newtype VerificationKey PaymentKey = - PaymentVerificationKey (Shelley.VKey Shelley.Payment) - deriving stock (Eq) - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey PaymentKey) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - newtype SigningKey PaymentKey = - PaymentSigningKey (SignKeyDSIGN DSIGN) - deriving (Show, IsString) via UsingRawBytesHex (SigningKey PaymentKey) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - deterministicSigningKey :: AsType PaymentKey -> Crypto.Seed -> SigningKey PaymentKey - deterministicSigningKey AsPaymentKey seed = - PaymentSigningKey (Crypto.genKeyDSIGN seed) - - deterministicSigningKeySeedSize :: AsType PaymentKey -> Word - deterministicSigningKeySeedSize AsPaymentKey = - Crypto.seedSizeDSIGN proxy - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy - - getVerificationKey :: SigningKey PaymentKey -> VerificationKey PaymentKey - getVerificationKey (PaymentSigningKey sk) = - PaymentVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk)) - - verificationKeyHash :: VerificationKey PaymentKey -> Hash PaymentKey - verificationKeyHash (PaymentVerificationKey vkey) = - PaymentKeyHash (Shelley.hashKey vkey) + newtype VerificationKey PaymentKey + = PaymentVerificationKey (Shelley.VKey Shelley.Payment) + deriving stock Eq + 
deriving (Show, IsString) via UsingRawBytesHex (VerificationKey PaymentKey) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + newtype SigningKey PaymentKey + = PaymentSigningKey (SignKeyDSIGN DSIGN) + deriving (Show, IsString) via UsingRawBytesHex (SigningKey PaymentKey) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + deterministicSigningKey :: AsType PaymentKey -> Crypto.Seed -> SigningKey PaymentKey + deterministicSigningKey AsPaymentKey seed = + PaymentSigningKey (Crypto.genKeyDSIGN seed) + + deterministicSigningKeySeedSize :: AsType PaymentKey -> Word + deterministicSigningKeySeedSize AsPaymentKey = + Crypto.seedSizeDSIGN proxy + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy + + getVerificationKey :: SigningKey PaymentKey -> VerificationKey PaymentKey + getVerificationKey (PaymentSigningKey sk) = + PaymentVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk)) + + verificationKeyHash :: VerificationKey PaymentKey -> Hash PaymentKey + verificationKeyHash (PaymentVerificationKey vkey) = + PaymentKeyHash (Shelley.hashKey vkey) instance SerialiseAsRawBytes (VerificationKey PaymentKey) where - serialiseToRawBytes (PaymentVerificationKey (Shelley.VKey vk)) = - Crypto.rawSerialiseVerKeyDSIGN vk + serialiseToRawBytes (PaymentVerificationKey (Shelley.VKey vk)) = + Crypto.rawSerialiseVerKeyDSIGN vk - deserialiseFromRawBytes (AsVerificationKey AsPaymentKey) bs = - PaymentVerificationKey . Shelley.VKey <$> - Crypto.rawDeserialiseVerKeyDSIGN bs + deserialiseFromRawBytes (AsVerificationKey AsPaymentKey) bs = + PaymentVerificationKey . 
Shelley.VKey + <$> Crypto.rawDeserialiseVerKeyDSIGN bs instance SerialiseAsRawBytes (SigningKey PaymentKey) where - serialiseToRawBytes (PaymentSigningKey sk) = - Crypto.rawSerialiseSignKeyDSIGN sk + serialiseToRawBytes (PaymentSigningKey sk) = + Crypto.rawSerialiseSignKeyDSIGN sk - deserialiseFromRawBytes (AsSigningKey AsPaymentKey) bs = - PaymentSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs + deserialiseFromRawBytes (AsSigningKey AsPaymentKey) bs = + PaymentSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs instance SerialiseAsBech32 (VerificationKey PaymentKey) where - bech32PrefixFor _ = "addr_vk" - bech32PrefixesPermitted _ = ["addr_vk"] + bech32PrefixFor _ = "addr_vk" + bech32PrefixesPermitted _ = ["addr_vk"] instance SerialiseAsBech32 (SigningKey PaymentKey) where - bech32PrefixFor _ = "addr_sk" - bech32PrefixesPermitted _ = ["addr_sk"] + bech32PrefixFor _ = "addr_sk" + bech32PrefixesPermitted _ = ["addr_sk"] -newtype instance Hash PaymentKey = - PaymentKeyHash (Shelley.KeyHash Shelley.Payment) +newtype instance Hash PaymentKey + = PaymentKeyHash (Shelley.KeyHash Shelley.Payment) deriving stock (Eq, Ord) deriving (Show, IsString) via UsingRawBytesHex (Hash PaymentKey) deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash PaymentKey) deriving anyclass SerialiseAsCBOR instance SerialiseAsRawBytes (Hash PaymentKey) where - serialiseToRawBytes (PaymentKeyHash (Shelley.KeyHash vkh)) = - Crypto.hashToBytes vkh + serialiseToRawBytes (PaymentKeyHash (Shelley.KeyHash vkh)) = + Crypto.hashToBytes vkh - deserialiseFromRawBytes (AsHash AsPaymentKey) bs = - PaymentKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs + deserialiseFromRawBytes (AsHash AsPaymentKey) bs = + PaymentKeyHash . 
Shelley.KeyHash <$> Crypto.hashFromBytes bs instance HasTextEnvelope (VerificationKey PaymentKey) where - textEnvelopeType _ = "PaymentVerificationKeyShelley_" - <> fromString (Crypto.algorithmNameDSIGN proxy) - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy + textEnvelopeType _ = + "PaymentVerificationKeyShelley_" + <> fromString (Crypto.algorithmNameDSIGN proxy) + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy instance HasTextEnvelope (SigningKey PaymentKey) where - textEnvelopeType _ = "PaymentSigningKeyShelley_" - <> fromString (Crypto.algorithmNameDSIGN proxy) - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy - + textEnvelopeType _ = + "PaymentSigningKeyShelley_" + <> fromString (Crypto.algorithmNameDSIGN proxy) + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy -- -- Shelley payment extended ed25519 keys @@ -174,132 +177,137 @@ instance HasTextEnvelope (SigningKey PaymentKey) where -- key ('VerificationKey' 'PaymentKey'). -- -- This is a type level tag, used with other interfaces like 'Key'. 
--- data PaymentExtendedKey instance HasTypeProxy PaymentExtendedKey where - data AsType PaymentExtendedKey = AsPaymentExtendedKey - proxyToAsType _ = AsPaymentExtendedKey + data AsType PaymentExtendedKey = AsPaymentExtendedKey + proxyToAsType _ = AsPaymentExtendedKey instance Key PaymentExtendedKey where - - newtype VerificationKey PaymentExtendedKey = - PaymentExtendedVerificationKey Crypto.HD.XPub - deriving stock (Eq) - deriving anyclass SerialiseAsCBOR - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey PaymentExtendedKey) - - newtype SigningKey PaymentExtendedKey = - PaymentExtendedSigningKey Crypto.HD.XPrv - deriving anyclass SerialiseAsCBOR - deriving (Show, IsString) via UsingRawBytesHex (SigningKey PaymentExtendedKey) - - deterministicSigningKey :: AsType PaymentExtendedKey - -> Crypto.Seed - -> SigningKey PaymentExtendedKey - deterministicSigningKey AsPaymentExtendedKey seed = - PaymentExtendedSigningKey - (Crypto.HD.generate seedbs BS.empty) - where - (seedbs, _) = Crypto.getBytesFromSeedT 32 seed - - deterministicSigningKeySeedSize :: AsType PaymentExtendedKey -> Word - deterministicSigningKeySeedSize AsPaymentExtendedKey = 32 - - getVerificationKey :: SigningKey PaymentExtendedKey - -> VerificationKey PaymentExtendedKey - getVerificationKey (PaymentExtendedSigningKey sk) = - PaymentExtendedVerificationKey (Crypto.HD.toXPub sk) - - -- | We use the hash of the normal non-extended pub key so that it is - -- consistent with the one used in addresses and signatures. 
- -- - verificationKeyHash :: VerificationKey PaymentExtendedKey - -> Hash PaymentExtendedKey - verificationKeyHash (PaymentExtendedVerificationKey vk) = - PaymentExtendedKeyHash + newtype VerificationKey PaymentExtendedKey + = PaymentExtendedVerificationKey Crypto.HD.XPub + deriving stock Eq + deriving anyclass SerialiseAsCBOR + deriving (Show, IsString) via UsingRawBytesHex (VerificationKey PaymentExtendedKey) + + newtype SigningKey PaymentExtendedKey + = PaymentExtendedSigningKey Crypto.HD.XPrv + deriving anyclass SerialiseAsCBOR + deriving (Show, IsString) via UsingRawBytesHex (SigningKey PaymentExtendedKey) + + deterministicSigningKey :: + AsType PaymentExtendedKey -> + Crypto.Seed -> + SigningKey PaymentExtendedKey + deterministicSigningKey AsPaymentExtendedKey seed = + PaymentExtendedSigningKey + (Crypto.HD.generate seedbs BS.empty) + where + (seedbs, _) = Crypto.getBytesFromSeedT 32 seed + + deterministicSigningKeySeedSize :: AsType PaymentExtendedKey -> Word + deterministicSigningKeySeedSize AsPaymentExtendedKey = 32 + + getVerificationKey :: + SigningKey PaymentExtendedKey -> + VerificationKey PaymentExtendedKey + getVerificationKey (PaymentExtendedSigningKey sk) = + PaymentExtendedVerificationKey (Crypto.HD.toXPub sk) + + -- \| We use the hash of the normal non-extended pub key so that it is + -- consistent with the one used in addresses and signatures. + verificationKeyHash :: + VerificationKey PaymentExtendedKey -> + Hash PaymentExtendedKey + verificationKeyHash (PaymentExtendedVerificationKey vk) = + PaymentExtendedKeyHash . Shelley.KeyHash . Crypto.castHash $ Crypto.hashWith Crypto.HD.xpubPublicKey vk - instance ToCBOR (VerificationKey PaymentExtendedKey) where - toCBOR (PaymentExtendedVerificationKey xpub) = - toCBOR (Crypto.HD.unXPub xpub) + toCBOR (PaymentExtendedVerificationKey xpub) = + toCBOR (Crypto.HD.unXPub xpub) instance FromCBOR (VerificationKey PaymentExtendedKey) where - fromCBOR = do - bs <- fromCBOR - either fail (return . 
PaymentExtendedVerificationKey) - (Crypto.HD.xpub (bs :: ByteString)) + fromCBOR = do + bs <- fromCBOR + either + fail + (return . PaymentExtendedVerificationKey) + (Crypto.HD.xpub (bs :: ByteString)) instance ToCBOR (SigningKey PaymentExtendedKey) where - toCBOR (PaymentExtendedSigningKey xprv) = - toCBOR (Crypto.HD.unXPrv xprv) + toCBOR (PaymentExtendedSigningKey xprv) = + toCBOR (Crypto.HD.unXPrv xprv) instance FromCBOR (SigningKey PaymentExtendedKey) where - fromCBOR = do - bs <- fromCBOR - either fail (return . PaymentExtendedSigningKey) - (Crypto.HD.xprv (bs :: ByteString)) + fromCBOR = do + bs <- fromCBOR + either + fail + (return . PaymentExtendedSigningKey) + (Crypto.HD.xprv (bs :: ByteString)) instance SerialiseAsRawBytes (VerificationKey PaymentExtendedKey) where - serialiseToRawBytes (PaymentExtendedVerificationKey xpub) = - Crypto.HD.unXPub xpub + serialiseToRawBytes (PaymentExtendedVerificationKey xpub) = + Crypto.HD.unXPub xpub - deserialiseFromRawBytes (AsVerificationKey AsPaymentExtendedKey) bs = - either (const Nothing) (Just . PaymentExtendedVerificationKey) - (Crypto.HD.xpub bs) + deserialiseFromRawBytes (AsVerificationKey AsPaymentExtendedKey) bs = + either + (const Nothing) + (Just . PaymentExtendedVerificationKey) + (Crypto.HD.xpub bs) instance SerialiseAsRawBytes (SigningKey PaymentExtendedKey) where - serialiseToRawBytes (PaymentExtendedSigningKey xprv) = - Crypto.HD.unXPrv xprv + serialiseToRawBytes (PaymentExtendedSigningKey xprv) = + Crypto.HD.unXPrv xprv - deserialiseFromRawBytes (AsSigningKey AsPaymentExtendedKey) bs = - either (const Nothing) (Just . PaymentExtendedSigningKey) - (Crypto.HD.xprv bs) + deserialiseFromRawBytes (AsSigningKey AsPaymentExtendedKey) bs = + either + (const Nothing) + (Just . 
PaymentExtendedSigningKey) + (Crypto.HD.xprv bs) instance SerialiseAsBech32 (VerificationKey PaymentExtendedKey) where - bech32PrefixFor _ = "addr_xvk" - bech32PrefixesPermitted _ = ["addr_xvk"] + bech32PrefixFor _ = "addr_xvk" + bech32PrefixesPermitted _ = ["addr_xvk"] instance SerialiseAsBech32 (SigningKey PaymentExtendedKey) where - bech32PrefixFor _ = "addr_xsk" - bech32PrefixesPermitted _ = ["addr_xsk"] + bech32PrefixFor _ = "addr_xsk" + bech32PrefixesPermitted _ = ["addr_xsk"] - -newtype instance Hash PaymentExtendedKey = - PaymentExtendedKeyHash (Shelley.KeyHash Shelley.Payment) +newtype instance Hash PaymentExtendedKey + = PaymentExtendedKeyHash (Shelley.KeyHash Shelley.Payment) deriving stock (Eq, Ord) deriving (Show, IsString) via UsingRawBytesHex (Hash PaymentExtendedKey) deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash PaymentExtendedKey) deriving anyclass SerialiseAsCBOR instance SerialiseAsRawBytes (Hash PaymentExtendedKey) where - serialiseToRawBytes (PaymentExtendedKeyHash (Shelley.KeyHash vkh)) = - Crypto.hashToBytes vkh + serialiseToRawBytes (PaymentExtendedKeyHash (Shelley.KeyHash vkh)) = + Crypto.hashToBytes vkh - deserialiseFromRawBytes (AsHash AsPaymentExtendedKey) bs = - PaymentExtendedKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs + deserialiseFromRawBytes (AsHash AsPaymentExtendedKey) bs = + PaymentExtendedKeyHash . 
Shelley.KeyHash <$> Crypto.hashFromBytes bs instance HasTextEnvelope (VerificationKey PaymentExtendedKey) where - textEnvelopeType _ = "PaymentExtendedVerificationKeyShelley_ed25519_bip32" + textEnvelopeType _ = "PaymentExtendedVerificationKeyShelley_ed25519_bip32" instance HasTextEnvelope (SigningKey PaymentExtendedKey) where - textEnvelopeType _ = "PaymentExtendedSigningKeyShelley_ed25519_bip32" + textEnvelopeType _ = "PaymentExtendedSigningKeyShelley_ed25519_bip32" instance CastVerificationKeyRole PaymentExtendedKey PaymentKey where - castVerificationKey (PaymentExtendedVerificationKey vk) = - PaymentVerificationKey + castVerificationKey (PaymentExtendedVerificationKey vk) = + PaymentVerificationKey . Shelley.VKey . fromMaybe impossible . Crypto.rawDeserialiseVerKeyDSIGN . Crypto.HD.xpubPublicKey $ vk - where - impossible = - error "castVerificationKey: byron and shelley key sizes do not match!" - + where + impossible = + error "castVerificationKey: byron and shelley key sizes do not match!" 
-- -- Stake keys @@ -308,96 +316,94 @@ instance CastVerificationKeyRole PaymentExtendedKey PaymentKey where data StakeKey instance HasTypeProxy StakeKey where - data AsType StakeKey = AsStakeKey - proxyToAsType _ = AsStakeKey + data AsType StakeKey = AsStakeKey + proxyToAsType _ = AsStakeKey instance Key StakeKey where - - newtype VerificationKey StakeKey = - StakeVerificationKey (Shelley.VKey Shelley.Staking) - deriving stock (Eq) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey StakeKey) - - newtype SigningKey StakeKey = - StakeSigningKey (SignKeyDSIGN DSIGN) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - deriving (Show, IsString) via UsingRawBytesHex (SigningKey StakeKey) - - deterministicSigningKey :: AsType StakeKey -> Crypto.Seed -> SigningKey StakeKey - deterministicSigningKey AsStakeKey seed = - StakeSigningKey (Crypto.genKeyDSIGN seed) - - deterministicSigningKeySeedSize :: AsType StakeKey -> Word - deterministicSigningKeySeedSize AsStakeKey = - Crypto.seedSizeDSIGN proxy - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy - - getVerificationKey :: SigningKey StakeKey -> VerificationKey StakeKey - getVerificationKey (StakeSigningKey sk) = - StakeVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk)) - - verificationKeyHash :: VerificationKey StakeKey -> Hash StakeKey - verificationKeyHash (StakeVerificationKey vkey) = - StakeKeyHash (Shelley.hashKey vkey) - + newtype VerificationKey StakeKey + = StakeVerificationKey (Shelley.VKey Shelley.Staking) + deriving stock Eq + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + deriving (Show, IsString) via UsingRawBytesHex (VerificationKey StakeKey) + + newtype SigningKey StakeKey + = StakeSigningKey (SignKeyDSIGN DSIGN) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + deriving (Show, IsString) via UsingRawBytesHex (SigningKey StakeKey) + 
+ deterministicSigningKey :: AsType StakeKey -> Crypto.Seed -> SigningKey StakeKey + deterministicSigningKey AsStakeKey seed = + StakeSigningKey (Crypto.genKeyDSIGN seed) + + deterministicSigningKeySeedSize :: AsType StakeKey -> Word + deterministicSigningKeySeedSize AsStakeKey = + Crypto.seedSizeDSIGN proxy + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy + + getVerificationKey :: SigningKey StakeKey -> VerificationKey StakeKey + getVerificationKey (StakeSigningKey sk) = + StakeVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk)) + + verificationKeyHash :: VerificationKey StakeKey -> Hash StakeKey + verificationKeyHash (StakeVerificationKey vkey) = + StakeKeyHash (Shelley.hashKey vkey) instance SerialiseAsRawBytes (VerificationKey StakeKey) where - serialiseToRawBytes (StakeVerificationKey (Shelley.VKey vk)) = - Crypto.rawSerialiseVerKeyDSIGN vk + serialiseToRawBytes (StakeVerificationKey (Shelley.VKey vk)) = + Crypto.rawSerialiseVerKeyDSIGN vk - deserialiseFromRawBytes (AsVerificationKey AsStakeKey) bs = - StakeVerificationKey . Shelley.VKey <$> - Crypto.rawDeserialiseVerKeyDSIGN bs + deserialiseFromRawBytes (AsVerificationKey AsStakeKey) bs = + StakeVerificationKey . 
Shelley.VKey + <$> Crypto.rawDeserialiseVerKeyDSIGN bs instance SerialiseAsRawBytes (SigningKey StakeKey) where - serialiseToRawBytes (StakeSigningKey sk) = - Crypto.rawSerialiseSignKeyDSIGN sk + serialiseToRawBytes (StakeSigningKey sk) = + Crypto.rawSerialiseSignKeyDSIGN sk - deserialiseFromRawBytes (AsSigningKey AsStakeKey) bs = - StakeSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs + deserialiseFromRawBytes (AsSigningKey AsStakeKey) bs = + StakeSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs instance SerialiseAsBech32 (VerificationKey StakeKey) where - bech32PrefixFor _ = "stake_vk" - bech32PrefixesPermitted _ = ["stake_vk"] + bech32PrefixFor _ = "stake_vk" + bech32PrefixesPermitted _ = ["stake_vk"] instance SerialiseAsBech32 (SigningKey StakeKey) where - bech32PrefixFor _ = "stake_sk" - bech32PrefixesPermitted _ = ["stake_sk"] + bech32PrefixFor _ = "stake_sk" + bech32PrefixesPermitted _ = ["stake_sk"] - -newtype instance Hash StakeKey = - StakeKeyHash (Shelley.KeyHash Shelley.Staking) +newtype instance Hash StakeKey + = StakeKeyHash (Shelley.KeyHash Shelley.Staking) deriving stock (Eq, Ord) deriving (Show, IsString) via UsingRawBytesHex (Hash StakeKey) deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash StakeKey) deriving anyclass SerialiseAsCBOR instance SerialiseAsRawBytes (Hash StakeKey) where - serialiseToRawBytes (StakeKeyHash (Shelley.KeyHash vkh)) = - Crypto.hashToBytes vkh + serialiseToRawBytes (StakeKeyHash (Shelley.KeyHash vkh)) = + Crypto.hashToBytes vkh - deserialiseFromRawBytes (AsHash AsStakeKey) bs = - StakeKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs + deserialiseFromRawBytes (AsHash AsStakeKey) bs = + StakeKeyHash . 
Shelley.KeyHash <$> Crypto.hashFromBytes bs instance HasTextEnvelope (VerificationKey StakeKey) where - textEnvelopeType _ = "StakeVerificationKeyShelley_" - <> fromString (Crypto.algorithmNameDSIGN proxy) - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy + textEnvelopeType _ = + "StakeVerificationKeyShelley_" + <> fromString (Crypto.algorithmNameDSIGN proxy) + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy instance HasTextEnvelope (SigningKey StakeKey) where - textEnvelopeType _ = "StakeSigningKeyShelley_" - <> fromString (Crypto.algorithmNameDSIGN proxy) - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy - + textEnvelopeType _ = + "StakeSigningKeyShelley_" + <> fromString (Crypto.algorithmNameDSIGN proxy) + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy -- -- Shelley stake extended ed25519 keys @@ -418,132 +424,137 @@ instance HasTextEnvelope (SigningKey StakeKey) where -- key ('VerificationKey' 'StakeKey'). -- -- This is a type level tag, used with other interfaces like 'Key'. 
--- data StakeExtendedKey instance HasTypeProxy StakeExtendedKey where - data AsType StakeExtendedKey = AsStakeExtendedKey - proxyToAsType _ = AsStakeExtendedKey + data AsType StakeExtendedKey = AsStakeExtendedKey + proxyToAsType _ = AsStakeExtendedKey instance Key StakeExtendedKey where - - newtype VerificationKey StakeExtendedKey = - StakeExtendedVerificationKey Crypto.HD.XPub - deriving stock (Eq) - deriving anyclass SerialiseAsCBOR - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey StakeExtendedKey) - - newtype SigningKey StakeExtendedKey = - StakeExtendedSigningKey Crypto.HD.XPrv - deriving anyclass SerialiseAsCBOR - deriving (Show, IsString) via UsingRawBytesHex (SigningKey StakeExtendedKey) - - deterministicSigningKey :: AsType StakeExtendedKey - -> Crypto.Seed - -> SigningKey StakeExtendedKey - deterministicSigningKey AsStakeExtendedKey seed = - StakeExtendedSigningKey - (Crypto.HD.generate seedbs BS.empty) - where - (seedbs, _) = Crypto.getBytesFromSeedT 32 seed - - deterministicSigningKeySeedSize :: AsType StakeExtendedKey -> Word - deterministicSigningKeySeedSize AsStakeExtendedKey = 32 - - getVerificationKey :: SigningKey StakeExtendedKey - -> VerificationKey StakeExtendedKey - getVerificationKey (StakeExtendedSigningKey sk) = - StakeExtendedVerificationKey (Crypto.HD.toXPub sk) - - -- | We use the hash of the normal non-extended pub key so that it is - -- consistent with the one used in addresses and signatures. 
-    --
-    verificationKeyHash :: VerificationKey StakeExtendedKey
-                        -> Hash StakeExtendedKey
-    verificationKeyHash (StakeExtendedVerificationKey vk) =
-        StakeExtendedKeyHash
+  newtype VerificationKey StakeExtendedKey
+    = StakeExtendedVerificationKey Crypto.HD.XPub
+    deriving stock Eq
+    deriving anyclass SerialiseAsCBOR
+    deriving (Show, IsString) via UsingRawBytesHex (VerificationKey StakeExtendedKey)
+
+  newtype SigningKey StakeExtendedKey
+    = StakeExtendedSigningKey Crypto.HD.XPrv
+    deriving anyclass SerialiseAsCBOR
+    deriving (Show, IsString) via UsingRawBytesHex (SigningKey StakeExtendedKey)
+
+  deterministicSigningKey ::
+    AsType StakeExtendedKey ->
+    Crypto.Seed ->
+    SigningKey StakeExtendedKey
+  deterministicSigningKey AsStakeExtendedKey seed =
+    StakeExtendedSigningKey
+      (Crypto.HD.generate seedbs BS.empty)
+   where
+    (seedbs, _) = Crypto.getBytesFromSeedT 32 seed
+
+  deterministicSigningKeySeedSize :: AsType StakeExtendedKey -> Word
+  deterministicSigningKeySeedSize AsStakeExtendedKey = 32
+
+  getVerificationKey ::
+    SigningKey StakeExtendedKey ->
+    VerificationKey StakeExtendedKey
+  getVerificationKey (StakeExtendedSigningKey sk) =
+    StakeExtendedVerificationKey (Crypto.HD.toXPub sk)
+
+  -- \| We use the hash of the normal non-extended pub key so that it is
+  -- consistent with the one used in addresses and signatures.
+  verificationKeyHash ::
+    VerificationKey StakeExtendedKey ->
+    Hash StakeExtendedKey
+  verificationKeyHash (StakeExtendedVerificationKey vk) =
+    StakeExtendedKeyHash
       . Shelley.KeyHash
       . Crypto.castHash
       $ Crypto.hashWith Crypto.HD.xpubPublicKey vk
-
 instance ToCBOR (VerificationKey StakeExtendedKey) where
-    toCBOR (StakeExtendedVerificationKey xpub) =
-      toCBOR (Crypto.HD.unXPub xpub)
+  toCBOR (StakeExtendedVerificationKey xpub) =
+    toCBOR (Crypto.HD.unXPub xpub)
 
 instance FromCBOR (VerificationKey StakeExtendedKey) where
-    fromCBOR = do
-      bs <- fromCBOR
-      either fail (return . StakeExtendedVerificationKey)
-             (Crypto.HD.xpub (bs :: ByteString))
+  fromCBOR = do
+    bs <- fromCBOR
+    either
+      fail
+      (return . StakeExtendedVerificationKey)
+      (Crypto.HD.xpub (bs :: ByteString))
 
 instance ToCBOR (SigningKey StakeExtendedKey) where
-    toCBOR (StakeExtendedSigningKey xprv) =
-      toCBOR (Crypto.HD.unXPrv xprv)
+  toCBOR (StakeExtendedSigningKey xprv) =
+    toCBOR (Crypto.HD.unXPrv xprv)
 
 instance FromCBOR (SigningKey StakeExtendedKey) where
-    fromCBOR = do
-      bs <- fromCBOR
-      either fail (return . StakeExtendedSigningKey)
-             (Crypto.HD.xprv (bs :: ByteString))
+  fromCBOR = do
+    bs <- fromCBOR
+    either
+      fail
+      (return . StakeExtendedSigningKey)
+      (Crypto.HD.xprv (bs :: ByteString))
 
 instance SerialiseAsRawBytes (VerificationKey StakeExtendedKey) where
-    serialiseToRawBytes (StakeExtendedVerificationKey xpub) =
-      Crypto.HD.unXPub xpub
+  serialiseToRawBytes (StakeExtendedVerificationKey xpub) =
+    Crypto.HD.unXPub xpub
 
-    deserialiseFromRawBytes (AsVerificationKey AsStakeExtendedKey) bs =
-      either (const Nothing) (Just . StakeExtendedVerificationKey)
-             (Crypto.HD.xpub bs)
+  deserialiseFromRawBytes (AsVerificationKey AsStakeExtendedKey) bs =
+    either
+      (const Nothing)
+      (Just . StakeExtendedVerificationKey)
+      (Crypto.HD.xpub bs)
 
 instance SerialiseAsRawBytes (SigningKey StakeExtendedKey) where
-    serialiseToRawBytes (StakeExtendedSigningKey xprv) =
-      Crypto.HD.unXPrv xprv
+  serialiseToRawBytes (StakeExtendedSigningKey xprv) =
+    Crypto.HD.unXPrv xprv
 
-    deserialiseFromRawBytes (AsSigningKey AsStakeExtendedKey) bs =
-      either (const Nothing) (Just . StakeExtendedSigningKey)
-             (Crypto.HD.xprv bs)
+  deserialiseFromRawBytes (AsSigningKey AsStakeExtendedKey) bs =
+    either
+      (const Nothing)
+      (Just . StakeExtendedSigningKey)
+      (Crypto.HD.xprv bs)
 
 instance SerialiseAsBech32 (VerificationKey StakeExtendedKey) where
-    bech32PrefixFor _ = "stake_xvk"
-    bech32PrefixesPermitted _ = ["stake_xvk"]
+  bech32PrefixFor _ = "stake_xvk"
+  bech32PrefixesPermitted _ = ["stake_xvk"]
 
 instance SerialiseAsBech32 (SigningKey StakeExtendedKey) where
-    bech32PrefixFor _ = "stake_xsk"
-    bech32PrefixesPermitted _ = ["stake_xsk"]
-
+  bech32PrefixFor _ = "stake_xsk"
+  bech32PrefixesPermitted _ = ["stake_xsk"]
 
-newtype instance Hash StakeExtendedKey =
-    StakeExtendedKeyHash (Shelley.KeyHash Shelley.Staking)
+newtype instance Hash StakeExtendedKey
+  = StakeExtendedKeyHash (Shelley.KeyHash Shelley.Staking)
   deriving stock (Eq, Ord)
   deriving (Show, IsString) via UsingRawBytesHex (Hash StakeExtendedKey)
   deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash StakeExtendedKey)
   deriving anyclass SerialiseAsCBOR
 
 instance SerialiseAsRawBytes (Hash StakeExtendedKey) where
-    serialiseToRawBytes (StakeExtendedKeyHash (Shelley.KeyHash vkh)) =
-      Crypto.hashToBytes vkh
+  serialiseToRawBytes (StakeExtendedKeyHash (Shelley.KeyHash vkh)) =
+    Crypto.hashToBytes vkh
 
-    deserialiseFromRawBytes (AsHash AsStakeExtendedKey) bs =
-      StakeExtendedKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
+  deserialiseFromRawBytes (AsHash AsStakeExtendedKey) bs =
+    StakeExtendedKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
 
 instance HasTextEnvelope (VerificationKey StakeExtendedKey) where
-    textEnvelopeType _ = "StakeExtendedVerificationKeyShelley_ed25519_bip32"
+  textEnvelopeType _ = "StakeExtendedVerificationKeyShelley_ed25519_bip32"
 
 instance HasTextEnvelope (SigningKey StakeExtendedKey) where
-    textEnvelopeType _ = "StakeExtendedSigningKeyShelley_ed25519_bip32"
+  textEnvelopeType _ = "StakeExtendedSigningKeyShelley_ed25519_bip32"
 
 instance CastVerificationKeyRole StakeExtendedKey StakeKey where
-    castVerificationKey (StakeExtendedVerificationKey vk) =
-        StakeVerificationKey
+  castVerificationKey (StakeExtendedVerificationKey vk) =
+    StakeVerificationKey
       . Shelley.VKey
      . fromMaybe impossible
      . Crypto.rawDeserialiseVerKeyDSIGN
     . Crypto.HD.xpubPublicKey
      $ vk
-      where
-        impossible =
-          error "castVerificationKey: byron and shelley key sizes do not match!"
-
+   where
+    impossible =
+      error "castVerificationKey: byron and shelley key sizes do not match!"
 
 --
 -- Genesis keys
@@ -552,88 +563,86 @@ instance CastVerificationKeyRole StakeExtendedKey StakeKey where
 
 data GenesisKey
 
 instance HasTypeProxy GenesisKey where
-    data AsType GenesisKey = AsGenesisKey
-    proxyToAsType _ = AsGenesisKey
+  data AsType GenesisKey = AsGenesisKey
+  proxyToAsType _ = AsGenesisKey
 
 instance Key GenesisKey where
-
-    newtype VerificationKey GenesisKey =
-        GenesisVerificationKey (Shelley.VKey Shelley.Genesis)
-      deriving stock (Eq)
-      deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisKey)
-      deriving newtype (ToCBOR, FromCBOR)
-      deriving anyclass SerialiseAsCBOR
-
-    newtype SigningKey GenesisKey =
-        GenesisSigningKey (SignKeyDSIGN DSIGN)
-      deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisKey)
-      deriving newtype (ToCBOR, FromCBOR)
-      deriving anyclass SerialiseAsCBOR
-
-    deterministicSigningKey :: AsType GenesisKey -> Crypto.Seed -> SigningKey GenesisKey
-    deterministicSigningKey AsGenesisKey seed =
-        GenesisSigningKey (Crypto.genKeyDSIGN seed)
-
-    deterministicSigningKeySeedSize :: AsType GenesisKey -> Word
-    deterministicSigningKeySeedSize AsGenesisKey =
-        Crypto.seedSizeDSIGN proxy
-      where
-        proxy :: Proxy Shelley.DSIGN
-        proxy = Proxy
-
-    getVerificationKey :: SigningKey GenesisKey -> VerificationKey GenesisKey
-    getVerificationKey (GenesisSigningKey sk) =
-        GenesisVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk))
-
-    verificationKeyHash :: VerificationKey GenesisKey -> Hash GenesisKey
-    verificationKeyHash (GenesisVerificationKey vkey) =
-        GenesisKeyHash (Shelley.hashKey vkey)
-
+  newtype VerificationKey GenesisKey
+    = GenesisVerificationKey (Shelley.VKey Shelley.Genesis)
+    deriving stock Eq
+    deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisKey)
+    deriving newtype (ToCBOR, FromCBOR)
+    deriving anyclass SerialiseAsCBOR
+
+  newtype SigningKey GenesisKey
+    = GenesisSigningKey (SignKeyDSIGN DSIGN)
+    deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisKey)
+    deriving newtype (ToCBOR, FromCBOR)
+    deriving anyclass SerialiseAsCBOR
+
+  deterministicSigningKey :: AsType GenesisKey -> Crypto.Seed -> SigningKey GenesisKey
+  deterministicSigningKey AsGenesisKey seed =
+    GenesisSigningKey (Crypto.genKeyDSIGN seed)
+
+  deterministicSigningKeySeedSize :: AsType GenesisKey -> Word
+  deterministicSigningKeySeedSize AsGenesisKey =
+    Crypto.seedSizeDSIGN proxy
+   where
+    proxy :: Proxy Shelley.DSIGN
+    proxy = Proxy
+
+  getVerificationKey :: SigningKey GenesisKey -> VerificationKey GenesisKey
+  getVerificationKey (GenesisSigningKey sk) =
+    GenesisVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk))
+
+  verificationKeyHash :: VerificationKey GenesisKey -> Hash GenesisKey
+  verificationKeyHash (GenesisVerificationKey vkey) =
+    GenesisKeyHash (Shelley.hashKey vkey)
 
 instance SerialiseAsRawBytes (VerificationKey GenesisKey) where
-    serialiseToRawBytes (GenesisVerificationKey (Shelley.VKey vk)) =
-      Crypto.rawSerialiseVerKeyDSIGN vk
+  serialiseToRawBytes (GenesisVerificationKey (Shelley.VKey vk)) =
+    Crypto.rawSerialiseVerKeyDSIGN vk
 
-    deserialiseFromRawBytes (AsVerificationKey AsGenesisKey) bs =
-      GenesisVerificationKey . Shelley.VKey <$>
-        Crypto.rawDeserialiseVerKeyDSIGN bs
+  deserialiseFromRawBytes (AsVerificationKey AsGenesisKey) bs =
+    GenesisVerificationKey . Shelley.VKey
+      <$> Crypto.rawDeserialiseVerKeyDSIGN bs
 
 instance SerialiseAsRawBytes (SigningKey GenesisKey) where
-    serialiseToRawBytes (GenesisSigningKey sk) =
-      Crypto.rawSerialiseSignKeyDSIGN sk
-
-    deserialiseFromRawBytes (AsSigningKey AsGenesisKey) bs =
-      GenesisSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs
+  serialiseToRawBytes (GenesisSigningKey sk) =
+    Crypto.rawSerialiseSignKeyDSIGN sk
+  deserialiseFromRawBytes (AsSigningKey AsGenesisKey) bs =
+    GenesisSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs
 
-newtype instance Hash GenesisKey =
-    GenesisKeyHash (Shelley.KeyHash Shelley.Genesis)
+newtype instance Hash GenesisKey
+  = GenesisKeyHash (Shelley.KeyHash Shelley.Genesis)
   deriving stock (Eq, Ord)
   deriving (Show, IsString) via UsingRawBytesHex (Hash GenesisKey)
   deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash GenesisKey)
   deriving anyclass SerialiseAsCBOR
 
 instance SerialiseAsRawBytes (Hash GenesisKey) where
-    serialiseToRawBytes (GenesisKeyHash (Shelley.KeyHash vkh)) =
-      Crypto.hashToBytes vkh
+  serialiseToRawBytes (GenesisKeyHash (Shelley.KeyHash vkh)) =
+    Crypto.hashToBytes vkh
 
-    deserialiseFromRawBytes (AsHash AsGenesisKey) bs =
-      GenesisKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
+  deserialiseFromRawBytes (AsHash AsGenesisKey) bs =
+    GenesisKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
 
 instance HasTextEnvelope (VerificationKey GenesisKey) where
-    textEnvelopeType _ = "GenesisVerificationKey_"
-                      <> fromString (Crypto.algorithmNameDSIGN proxy)
-      where
-        proxy :: Proxy Shelley.DSIGN
-        proxy = Proxy
+  textEnvelopeType _ =
+    "GenesisVerificationKey_"
+      <> fromString (Crypto.algorithmNameDSIGN proxy)
+   where
+    proxy :: Proxy Shelley.DSIGN
+    proxy = Proxy
 
 instance HasTextEnvelope (SigningKey GenesisKey) where
-    textEnvelopeType _ = "GenesisSigningKey_"
-                      <> fromString (Crypto.algorithmNameDSIGN proxy)
-      where
-        proxy :: Proxy Shelley.DSIGN
-        proxy = Proxy
-
+  textEnvelopeType _ =
+    "GenesisSigningKey_"
+      <> fromString (Crypto.algorithmNameDSIGN proxy)
+   where
+    proxy :: Proxy Shelley.DSIGN
+    proxy = Proxy
 
 --
 -- Shelley genesis extended ed25519 keys
 --
@@ -651,124 +660,129 @@ instance HasTextEnvelope (SigningKey GenesisKey) where
 -- key ('VerificationKey' 'GenesisKey').
 --
 -- This is a type level tag, used with other interfaces like 'Key'.
---
 data GenesisExtendedKey
 
 instance HasTypeProxy GenesisExtendedKey where
-    data AsType GenesisExtendedKey = AsGenesisExtendedKey
-    proxyToAsType _ = AsGenesisExtendedKey
+  data AsType GenesisExtendedKey = AsGenesisExtendedKey
+  proxyToAsType _ = AsGenesisExtendedKey
 
 instance Key GenesisExtendedKey where
-
-    newtype VerificationKey GenesisExtendedKey =
-        GenesisExtendedVerificationKey Crypto.HD.XPub
-      deriving stock (Eq)
-      deriving anyclass SerialiseAsCBOR
-      deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisExtendedKey)
-
-    newtype SigningKey GenesisExtendedKey =
-        GenesisExtendedSigningKey Crypto.HD.XPrv
-      deriving anyclass SerialiseAsCBOR
-      deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisExtendedKey)
-
-    deterministicSigningKey :: AsType GenesisExtendedKey
-                            -> Crypto.Seed
-                            -> SigningKey GenesisExtendedKey
-    deterministicSigningKey AsGenesisExtendedKey seed =
-        GenesisExtendedSigningKey
-          (Crypto.HD.generate seedbs BS.empty)
-      where
-        (seedbs, _) = Crypto.getBytesFromSeedT 32 seed
-
-    deterministicSigningKeySeedSize :: AsType GenesisExtendedKey -> Word
-    deterministicSigningKeySeedSize AsGenesisExtendedKey = 32
-
-    getVerificationKey :: SigningKey GenesisExtendedKey
-                       -> VerificationKey GenesisExtendedKey
-    getVerificationKey (GenesisExtendedSigningKey sk) =
-        GenesisExtendedVerificationKey (Crypto.HD.toXPub sk)
-
-    -- | We use the hash of the normal non-extended pub key so that it is
-    -- consistent with the one used in addresses and signatures.
-    --
-    verificationKeyHash :: VerificationKey GenesisExtendedKey
-                        -> Hash GenesisExtendedKey
-    verificationKeyHash (GenesisExtendedVerificationKey vk) =
-        GenesisExtendedKeyHash
+  newtype VerificationKey GenesisExtendedKey
+    = GenesisExtendedVerificationKey Crypto.HD.XPub
+    deriving stock Eq
+    deriving anyclass SerialiseAsCBOR
+    deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisExtendedKey)
+
+  newtype SigningKey GenesisExtendedKey
+    = GenesisExtendedSigningKey Crypto.HD.XPrv
+    deriving anyclass SerialiseAsCBOR
+    deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisExtendedKey)
+
+  deterministicSigningKey ::
+    AsType GenesisExtendedKey ->
+    Crypto.Seed ->
+    SigningKey GenesisExtendedKey
+  deterministicSigningKey AsGenesisExtendedKey seed =
+    GenesisExtendedSigningKey
+      (Crypto.HD.generate seedbs BS.empty)
+   where
+    (seedbs, _) = Crypto.getBytesFromSeedT 32 seed
+
+  deterministicSigningKeySeedSize :: AsType GenesisExtendedKey -> Word
+  deterministicSigningKeySeedSize AsGenesisExtendedKey = 32
+
+  getVerificationKey ::
+    SigningKey GenesisExtendedKey ->
+    VerificationKey GenesisExtendedKey
+  getVerificationKey (GenesisExtendedSigningKey sk) =
+    GenesisExtendedVerificationKey (Crypto.HD.toXPub sk)
+
+  -- \| We use the hash of the normal non-extended pub key so that it is
+  -- consistent with the one used in addresses and signatures.
+  verificationKeyHash ::
+    VerificationKey GenesisExtendedKey ->
+    Hash GenesisExtendedKey
+  verificationKeyHash (GenesisExtendedVerificationKey vk) =
+    GenesisExtendedKeyHash
      . Shelley.KeyHash
      . Crypto.castHash
      $ Crypto.hashWith Crypto.HD.xpubPublicKey vk
-
 instance ToCBOR (VerificationKey GenesisExtendedKey) where
-    toCBOR (GenesisExtendedVerificationKey xpub) =
-      toCBOR (Crypto.HD.unXPub xpub)
+  toCBOR (GenesisExtendedVerificationKey xpub) =
+    toCBOR (Crypto.HD.unXPub xpub)
 
 instance FromCBOR (VerificationKey GenesisExtendedKey) where
-    fromCBOR = do
-      bs <- fromCBOR
-      either fail (return . GenesisExtendedVerificationKey)
-             (Crypto.HD.xpub (bs :: ByteString))
+  fromCBOR = do
+    bs <- fromCBOR
+    either
+      fail
+      (return . GenesisExtendedVerificationKey)
+      (Crypto.HD.xpub (bs :: ByteString))
 
 instance ToCBOR (SigningKey GenesisExtendedKey) where
-    toCBOR (GenesisExtendedSigningKey xprv) =
-      toCBOR (Crypto.HD.unXPrv xprv)
+  toCBOR (GenesisExtendedSigningKey xprv) =
+    toCBOR (Crypto.HD.unXPrv xprv)
 
 instance FromCBOR (SigningKey GenesisExtendedKey) where
-    fromCBOR = do
-      bs <- fromCBOR
-      either fail (return . GenesisExtendedSigningKey)
-             (Crypto.HD.xprv (bs :: ByteString))
+  fromCBOR = do
+    bs <- fromCBOR
+    either
+      fail
+      (return . GenesisExtendedSigningKey)
+      (Crypto.HD.xprv (bs :: ByteString))
 
 instance SerialiseAsRawBytes (VerificationKey GenesisExtendedKey) where
-    serialiseToRawBytes (GenesisExtendedVerificationKey xpub) =
-      Crypto.HD.unXPub xpub
+  serialiseToRawBytes (GenesisExtendedVerificationKey xpub) =
+    Crypto.HD.unXPub xpub
 
-    deserialiseFromRawBytes (AsVerificationKey AsGenesisExtendedKey) bs =
-      either (const Nothing) (Just . GenesisExtendedVerificationKey)
-             (Crypto.HD.xpub bs)
+  deserialiseFromRawBytes (AsVerificationKey AsGenesisExtendedKey) bs =
+    either
+      (const Nothing)
+      (Just . GenesisExtendedVerificationKey)
+      (Crypto.HD.xpub bs)
 
 instance SerialiseAsRawBytes (SigningKey GenesisExtendedKey) where
-    serialiseToRawBytes (GenesisExtendedSigningKey xprv) =
-      Crypto.HD.unXPrv xprv
-
-    deserialiseFromRawBytes (AsSigningKey AsGenesisExtendedKey) bs =
-      either (const Nothing) (Just . GenesisExtendedSigningKey)
-             (Crypto.HD.xprv bs)
+  serialiseToRawBytes (GenesisExtendedSigningKey xprv) =
+    Crypto.HD.unXPrv xprv
+  deserialiseFromRawBytes (AsSigningKey AsGenesisExtendedKey) bs =
+    either
+      (const Nothing)
+      (Just . GenesisExtendedSigningKey)
+      (Crypto.HD.xprv bs)
 
-newtype instance Hash GenesisExtendedKey =
-    GenesisExtendedKeyHash (Shelley.KeyHash Shelley.Staking)
+newtype instance Hash GenesisExtendedKey
+  = GenesisExtendedKeyHash (Shelley.KeyHash Shelley.Staking)
  deriving stock (Eq, Ord)
  deriving (Show, IsString) via UsingRawBytesHex (Hash GenesisExtendedKey)
  deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash GenesisExtendedKey)
  deriving anyclass SerialiseAsCBOR
 
 instance SerialiseAsRawBytes (Hash GenesisExtendedKey) where
-    serialiseToRawBytes (GenesisExtendedKeyHash (Shelley.KeyHash vkh)) =
-      Crypto.hashToBytes vkh
+  serialiseToRawBytes (GenesisExtendedKeyHash (Shelley.KeyHash vkh)) =
+    Crypto.hashToBytes vkh
 
-    deserialiseFromRawBytes (AsHash AsGenesisExtendedKey) bs =
-      GenesisExtendedKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
+  deserialiseFromRawBytes (AsHash AsGenesisExtendedKey) bs =
+    GenesisExtendedKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
 
 instance HasTextEnvelope (VerificationKey GenesisExtendedKey) where
-    textEnvelopeType _ = "GenesisExtendedVerificationKey_ed25519_bip32"
+  textEnvelopeType _ = "GenesisExtendedVerificationKey_ed25519_bip32"
 
 instance HasTextEnvelope (SigningKey GenesisExtendedKey) where
-    textEnvelopeType _ = "GenesisExtendedSigningKey_ed25519_bip32"
+  textEnvelopeType _ = "GenesisExtendedSigningKey_ed25519_bip32"
 
 instance CastVerificationKeyRole GenesisExtendedKey GenesisKey where
-    castVerificationKey (GenesisExtendedVerificationKey vk) =
-        GenesisVerificationKey
+  castVerificationKey (GenesisExtendedVerificationKey vk) =
+    GenesisVerificationKey
      . Shelley.VKey
      . fromMaybe impossible
     . Crypto.rawDeserialiseVerKeyDSIGN
     . Crypto.HD.xpubPublicKey
      $ vk
-      where
-        impossible =
-          error "castVerificationKey: byron and shelley key sizes do not match!"
-
+   where
+    impossible =
+      error "castVerificationKey: byron and shelley key sizes do not match!"
 
 --
 -- Genesis delegate keys
@@ -777,97 +791,94 @@ instance CastVerificationKeyRole GenesisExtendedKey GenesisKey where
 
 data GenesisDelegateKey
 
 instance HasTypeProxy GenesisDelegateKey where
-    data AsType GenesisDelegateKey = AsGenesisDelegateKey
-    proxyToAsType _ = AsGenesisDelegateKey
-
+  data AsType GenesisDelegateKey = AsGenesisDelegateKey
+  proxyToAsType _ = AsGenesisDelegateKey
 
 instance Key GenesisDelegateKey where
-
-    newtype VerificationKey GenesisDelegateKey =
-        GenesisDelegateVerificationKey (Shelley.VKey Shelley.GenesisDelegate)
-      deriving stock (Eq)
-      deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisDelegateKey)
-      deriving newtype (ToCBOR, FromCBOR)
-      deriving anyclass SerialiseAsCBOR
-
-    newtype SigningKey GenesisDelegateKey =
-        GenesisDelegateSigningKey ((SignKeyDSIGN DSIGN))
-      deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisDelegateKey)
-      deriving newtype (ToCBOR, FromCBOR)
-      deriving anyclass SerialiseAsCBOR
-
-    deterministicSigningKey :: AsType GenesisDelegateKey -> Crypto.Seed -> SigningKey GenesisDelegateKey
-    deterministicSigningKey AsGenesisDelegateKey seed =
-        GenesisDelegateSigningKey (Crypto.genKeyDSIGN seed)
-
-    deterministicSigningKeySeedSize :: AsType GenesisDelegateKey -> Word
-    deterministicSigningKeySeedSize AsGenesisDelegateKey =
-        Crypto.seedSizeDSIGN proxy
-      where
-        proxy :: Proxy Shelley.DSIGN
-        proxy = Proxy
-
-    getVerificationKey :: SigningKey GenesisDelegateKey -> VerificationKey GenesisDelegateKey
-    getVerificationKey (GenesisDelegateSigningKey sk) =
-        GenesisDelegateVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk))
-
-    verificationKeyHash :: VerificationKey GenesisDelegateKey -> Hash GenesisDelegateKey
-    verificationKeyHash (GenesisDelegateVerificationKey vkey) =
-        GenesisDelegateKeyHash (Shelley.hashKey vkey)
-
+  newtype VerificationKey GenesisDelegateKey
+    = GenesisDelegateVerificationKey (Shelley.VKey Shelley.GenesisDelegate)
+    deriving stock Eq
+    deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisDelegateKey)
+    deriving newtype (ToCBOR, FromCBOR)
+    deriving anyclass SerialiseAsCBOR
+
+  newtype SigningKey GenesisDelegateKey
+    = GenesisDelegateSigningKey ((SignKeyDSIGN DSIGN))
+    deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisDelegateKey)
+    deriving newtype (ToCBOR, FromCBOR)
+    deriving anyclass SerialiseAsCBOR
+
+  deterministicSigningKey :: AsType GenesisDelegateKey -> Crypto.Seed -> SigningKey GenesisDelegateKey
+  deterministicSigningKey AsGenesisDelegateKey seed =
+    GenesisDelegateSigningKey (Crypto.genKeyDSIGN seed)
+
+  deterministicSigningKeySeedSize :: AsType GenesisDelegateKey -> Word
+  deterministicSigningKeySeedSize AsGenesisDelegateKey =
+    Crypto.seedSizeDSIGN proxy
+   where
+    proxy :: Proxy Shelley.DSIGN
+    proxy = Proxy
+
+  getVerificationKey :: SigningKey GenesisDelegateKey -> VerificationKey GenesisDelegateKey
+  getVerificationKey (GenesisDelegateSigningKey sk) =
+    GenesisDelegateVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk))
+
+  verificationKeyHash :: VerificationKey GenesisDelegateKey -> Hash GenesisDelegateKey
+  verificationKeyHash (GenesisDelegateVerificationKey vkey) =
+    GenesisDelegateKeyHash (Shelley.hashKey vkey)
 
 instance SerialiseAsRawBytes (VerificationKey GenesisDelegateKey) where
-    serialiseToRawBytes (GenesisDelegateVerificationKey (Shelley.VKey vk)) =
-      Crypto.rawSerialiseVerKeyDSIGN vk
+  serialiseToRawBytes (GenesisDelegateVerificationKey (Shelley.VKey vk)) =
+    Crypto.rawSerialiseVerKeyDSIGN vk
 
-    deserialiseFromRawBytes (AsVerificationKey AsGenesisDelegateKey) bs =
-      GenesisDelegateVerificationKey . Shelley.VKey <$>
-        Crypto.rawDeserialiseVerKeyDSIGN bs
+  deserialiseFromRawBytes (AsVerificationKey AsGenesisDelegateKey) bs =
+    GenesisDelegateVerificationKey . Shelley.VKey
+      <$> Crypto.rawDeserialiseVerKeyDSIGN bs
 
 instance SerialiseAsRawBytes (SigningKey GenesisDelegateKey) where
-    serialiseToRawBytes (GenesisDelegateSigningKey sk) =
-      Crypto.rawSerialiseSignKeyDSIGN sk
+  serialiseToRawBytes (GenesisDelegateSigningKey sk) =
+    Crypto.rawSerialiseSignKeyDSIGN sk
 
-    deserialiseFromRawBytes (AsSigningKey AsGenesisDelegateKey) bs =
-      GenesisDelegateSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs
+  deserialiseFromRawBytes (AsSigningKey AsGenesisDelegateKey) bs =
+    GenesisDelegateSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs
 
-
-newtype instance Hash GenesisDelegateKey =
-    GenesisDelegateKeyHash (Shelley.KeyHash Shelley.GenesisDelegate)
+newtype instance Hash GenesisDelegateKey
+  = GenesisDelegateKeyHash (Shelley.KeyHash Shelley.GenesisDelegate)
  deriving stock (Eq, Ord)
  deriving (Show, IsString) via UsingRawBytesHex (Hash GenesisDelegateKey)
  deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash GenesisDelegateKey)
  deriving anyclass SerialiseAsCBOR
 
 instance SerialiseAsRawBytes (Hash GenesisDelegateKey) where
-    serialiseToRawBytes (GenesisDelegateKeyHash (Shelley.KeyHash vkh)) =
-      Crypto.hashToBytes vkh
+  serialiseToRawBytes (GenesisDelegateKeyHash (Shelley.KeyHash vkh)) =
    Crypto.hashToBytes vkh
 
-    deserialiseFromRawBytes (AsHash AsGenesisDelegateKey) bs =
-      GenesisDelegateKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
+  deserialiseFromRawBytes (AsHash AsGenesisDelegateKey) bs =
+    GenesisDelegateKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
 
 instance HasTextEnvelope (VerificationKey GenesisDelegateKey) where
-    textEnvelopeType _ = "GenesisDelegateVerificationKey_"
-                      <> fromString (Crypto.algorithmNameDSIGN proxy)
-      where
-        proxy :: Proxy Shelley.DSIGN
-        proxy = Proxy
+  textEnvelopeType _ =
+    "GenesisDelegateVerificationKey_"
+      <> fromString (Crypto.algorithmNameDSIGN proxy)
+   where
+    proxy :: Proxy Shelley.DSIGN
+    proxy = Proxy
 
 instance HasTextEnvelope (SigningKey GenesisDelegateKey) where
-    textEnvelopeType _ = "GenesisDelegateSigningKey_"
-                      <> fromString (Crypto.algorithmNameDSIGN proxy)
-      where
-        proxy :: Proxy Shelley.DSIGN
-        proxy = Proxy
+  textEnvelopeType _ =
+    "GenesisDelegateSigningKey_"
+      <> fromString (Crypto.algorithmNameDSIGN proxy)
+   where
+    proxy :: Proxy Shelley.DSIGN
+    proxy = Proxy
 
 instance CastVerificationKeyRole GenesisDelegateKey StakePoolKey where
-    castVerificationKey (GenesisDelegateVerificationKey (Shelley.VKey vkey)) =
-      StakePoolVerificationKey (Shelley.VKey vkey)
+  castVerificationKey (GenesisDelegateVerificationKey (Shelley.VKey vkey)) =
+    StakePoolVerificationKey (Shelley.VKey vkey)
 
 instance CastSigningKeyRole GenesisDelegateKey StakePoolKey where
-    castSigningKey (GenesisDelegateSigningKey skey) =
-      StakePoolSigningKey skey
-
+  castSigningKey (GenesisDelegateSigningKey skey) =
+    StakePoolSigningKey skey
 
 --
 -- Shelley genesis delegate extended ed25519 keys
 --
@@ -885,124 +896,129 @@ instance CastSigningKeyRole GenesisDelegateKey StakePoolKey where
 -- key ('VerificationKey' 'GenesisKey').
 --
 -- This is a type level tag, used with other interfaces like 'Key'.
---
 data GenesisDelegateExtendedKey
 
 instance HasTypeProxy GenesisDelegateExtendedKey where
-    data AsType GenesisDelegateExtendedKey = AsGenesisDelegateExtendedKey
-    proxyToAsType _ = AsGenesisDelegateExtendedKey
+  data AsType GenesisDelegateExtendedKey = AsGenesisDelegateExtendedKey
+  proxyToAsType _ = AsGenesisDelegateExtendedKey
 
 instance Key GenesisDelegateExtendedKey where
-
-    newtype VerificationKey GenesisDelegateExtendedKey =
-        GenesisDelegateExtendedVerificationKey Crypto.HD.XPub
-      deriving stock (Eq)
-      deriving anyclass SerialiseAsCBOR
-      deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisDelegateExtendedKey)
-
-    newtype SigningKey GenesisDelegateExtendedKey =
-        GenesisDelegateExtendedSigningKey Crypto.HD.XPrv
-      deriving anyclass SerialiseAsCBOR
-      deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisDelegateExtendedKey)
-
-    deterministicSigningKey :: AsType GenesisDelegateExtendedKey
-                            -> Crypto.Seed
-                            -> SigningKey GenesisDelegateExtendedKey
-    deterministicSigningKey AsGenesisDelegateExtendedKey seed =
-        GenesisDelegateExtendedSigningKey
-          (Crypto.HD.generate seedbs BS.empty)
-      where
-        (seedbs, _) = Crypto.getBytesFromSeedT 32 seed
-
-    deterministicSigningKeySeedSize :: AsType GenesisDelegateExtendedKey -> Word
-    deterministicSigningKeySeedSize AsGenesisDelegateExtendedKey = 32
-
-    getVerificationKey :: SigningKey GenesisDelegateExtendedKey
-                       -> VerificationKey GenesisDelegateExtendedKey
-    getVerificationKey (GenesisDelegateExtendedSigningKey sk) =
-        GenesisDelegateExtendedVerificationKey (Crypto.HD.toXPub sk)
-
-    -- | We use the hash of the normal non-extended pub key so that it is
-    -- consistent with the one used in addresses and signatures.
-    --
-    verificationKeyHash :: VerificationKey GenesisDelegateExtendedKey
-                        -> Hash GenesisDelegateExtendedKey
-    verificationKeyHash (GenesisDelegateExtendedVerificationKey vk) =
-        GenesisDelegateExtendedKeyHash
+  newtype VerificationKey GenesisDelegateExtendedKey
+    = GenesisDelegateExtendedVerificationKey Crypto.HD.XPub
+    deriving stock Eq
+    deriving anyclass SerialiseAsCBOR
+    deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisDelegateExtendedKey)
+
+  newtype SigningKey GenesisDelegateExtendedKey
+    = GenesisDelegateExtendedSigningKey Crypto.HD.XPrv
+    deriving anyclass SerialiseAsCBOR
+    deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisDelegateExtendedKey)
+
+  deterministicSigningKey ::
+    AsType GenesisDelegateExtendedKey ->
+    Crypto.Seed ->
+    SigningKey GenesisDelegateExtendedKey
+  deterministicSigningKey AsGenesisDelegateExtendedKey seed =
+    GenesisDelegateExtendedSigningKey
+      (Crypto.HD.generate seedbs BS.empty)
+   where
+    (seedbs, _) = Crypto.getBytesFromSeedT 32 seed
+
+  deterministicSigningKeySeedSize :: AsType GenesisDelegateExtendedKey -> Word
+  deterministicSigningKeySeedSize AsGenesisDelegateExtendedKey = 32
+
+  getVerificationKey ::
+    SigningKey GenesisDelegateExtendedKey ->
+    VerificationKey GenesisDelegateExtendedKey
+  getVerificationKey (GenesisDelegateExtendedSigningKey sk) =
+    GenesisDelegateExtendedVerificationKey (Crypto.HD.toXPub sk)
+
+  -- \| We use the hash of the normal non-extended pub key so that it is
+  -- consistent with the one used in addresses and signatures.
+  verificationKeyHash ::
+    VerificationKey GenesisDelegateExtendedKey ->
+    Hash GenesisDelegateExtendedKey
+  verificationKeyHash (GenesisDelegateExtendedVerificationKey vk) =
+    GenesisDelegateExtendedKeyHash
      . Shelley.KeyHash
      . Crypto.castHash
     $ Crypto.hashWith Crypto.HD.xpubPublicKey vk
-
 instance ToCBOR (VerificationKey GenesisDelegateExtendedKey) where
-    toCBOR (GenesisDelegateExtendedVerificationKey xpub) =
-      toCBOR (Crypto.HD.unXPub xpub)
+  toCBOR (GenesisDelegateExtendedVerificationKey xpub) =
+    toCBOR (Crypto.HD.unXPub xpub)
 
 instance FromCBOR (VerificationKey GenesisDelegateExtendedKey) where
-    fromCBOR = do
-      bs <- fromCBOR
-      either fail (return . GenesisDelegateExtendedVerificationKey)
-             (Crypto.HD.xpub (bs :: ByteString))
+  fromCBOR = do
+    bs <- fromCBOR
+    either
+      fail
+      (return . GenesisDelegateExtendedVerificationKey)
+      (Crypto.HD.xpub (bs :: ByteString))
 
 instance ToCBOR (SigningKey GenesisDelegateExtendedKey) where
-    toCBOR (GenesisDelegateExtendedSigningKey xprv) =
-      toCBOR (Crypto.HD.unXPrv xprv)
+  toCBOR (GenesisDelegateExtendedSigningKey xprv) =
+    toCBOR (Crypto.HD.unXPrv xprv)
 
 instance FromCBOR (SigningKey GenesisDelegateExtendedKey) where
-    fromCBOR = do
-      bs <- fromCBOR
-      either fail (return . GenesisDelegateExtendedSigningKey)
-             (Crypto.HD.xprv (bs :: ByteString))
+  fromCBOR = do
+    bs <- fromCBOR
+    either
+      fail
+      (return . GenesisDelegateExtendedSigningKey)
+      (Crypto.HD.xprv (bs :: ByteString))
 
 instance SerialiseAsRawBytes (VerificationKey GenesisDelegateExtendedKey) where
-    serialiseToRawBytes (GenesisDelegateExtendedVerificationKey xpub) =
-      Crypto.HD.unXPub xpub
+  serialiseToRawBytes (GenesisDelegateExtendedVerificationKey xpub) =
+    Crypto.HD.unXPub xpub
 
-    deserialiseFromRawBytes (AsVerificationKey AsGenesisDelegateExtendedKey) bs =
-      either (const Nothing) (Just . GenesisDelegateExtendedVerificationKey)
-             (Crypto.HD.xpub bs)
+  deserialiseFromRawBytes (AsVerificationKey AsGenesisDelegateExtendedKey) bs =
+    either
+      (const Nothing)
+      (Just . GenesisDelegateExtendedVerificationKey)
+      (Crypto.HD.xpub bs)
 
 instance SerialiseAsRawBytes (SigningKey GenesisDelegateExtendedKey) where
-    serialiseToRawBytes (GenesisDelegateExtendedSigningKey xprv) =
-      Crypto.HD.unXPrv xprv
+  serialiseToRawBytes (GenesisDelegateExtendedSigningKey xprv) =
    Crypto.HD.unXPrv xprv
 
-    deserialiseFromRawBytes (AsSigningKey AsGenesisDelegateExtendedKey) bs =
-      either (const Nothing) (Just . GenesisDelegateExtendedSigningKey)
-             (Crypto.HD.xprv bs)
+  deserialiseFromRawBytes (AsSigningKey AsGenesisDelegateExtendedKey) bs =
+    either
+      (const Nothing)
+      (Just . GenesisDelegateExtendedSigningKey)
+      (Crypto.HD.xprv bs)
 
-
-newtype instance Hash GenesisDelegateExtendedKey =
-    GenesisDelegateExtendedKeyHash (Shelley.KeyHash Shelley.Staking)
+newtype instance Hash GenesisDelegateExtendedKey
+  = GenesisDelegateExtendedKeyHash (Shelley.KeyHash Shelley.Staking)
  deriving stock (Eq, Ord)
  deriving (Show, IsString) via UsingRawBytesHex (Hash GenesisDelegateExtendedKey)
  deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash GenesisDelegateExtendedKey)
  deriving anyclass SerialiseAsCBOR
 
 instance SerialiseAsRawBytes (Hash GenesisDelegateExtendedKey) where
-    serialiseToRawBytes (GenesisDelegateExtendedKeyHash (Shelley.KeyHash vkh)) =
-      Crypto.hashToBytes vkh
+  serialiseToRawBytes (GenesisDelegateExtendedKeyHash (Shelley.KeyHash vkh)) =
+    Crypto.hashToBytes vkh
 
-    deserialiseFromRawBytes (AsHash AsGenesisDelegateExtendedKey) bs =
-      GenesisDelegateExtendedKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
+  deserialiseFromRawBytes (AsHash AsGenesisDelegateExtendedKey) bs =
+    GenesisDelegateExtendedKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
 
 instance HasTextEnvelope (VerificationKey GenesisDelegateExtendedKey) where
-    textEnvelopeType _ = "GenesisDelegateExtendedVerificationKey_ed25519_bip32"
+  textEnvelopeType _ = "GenesisDelegateExtendedVerificationKey_ed25519_bip32"
 
 instance HasTextEnvelope (SigningKey GenesisDelegateExtendedKey) where
-    textEnvelopeType _ = "GenesisDelegateExtendedSigningKey_ed25519_bip32"
+  textEnvelopeType _ = "GenesisDelegateExtendedSigningKey_ed25519_bip32"
 
 instance CastVerificationKeyRole GenesisDelegateExtendedKey GenesisDelegateKey where
-    castVerificationKey (GenesisDelegateExtendedVerificationKey vk) =
-        GenesisDelegateVerificationKey
+  castVerificationKey (GenesisDelegateExtendedVerificationKey vk) =
+    GenesisDelegateVerificationKey
     . Shelley.VKey
    . fromMaybe impossible
    . Crypto.rawDeserialiseVerKeyDSIGN
    . Crypto.HD.xpubPublicKey
    $ vk
-      where
-        impossible =
-          error "castVerificationKey: byron and shelley key sizes do not match!"
-
+   where
+    impossible =
+      error "castVerificationKey: byron and shelley key sizes do not match!"
 
 --
 -- Genesis UTxO keys
@@ -1011,99 +1027,97 @@ instance CastVerificationKeyRole GenesisDelegateExtendedKey GenesisDelegateKey w
 
 data GenesisUTxOKey
 
 instance HasTypeProxy GenesisUTxOKey where
-    data AsType GenesisUTxOKey = AsGenesisUTxOKey
-    proxyToAsType _ = AsGenesisUTxOKey
-
+  data AsType GenesisUTxOKey = AsGenesisUTxOKey
+  proxyToAsType _ = AsGenesisUTxOKey
 
 instance Key GenesisUTxOKey where
-
-    newtype VerificationKey GenesisUTxOKey =
-        GenesisUTxOVerificationKey (Shelley.VKey Shelley.Payment)
-      deriving stock (Eq)
-      deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisUTxOKey)
-      deriving newtype (ToCBOR, FromCBOR)
-      deriving anyclass SerialiseAsCBOR
-
-    newtype SigningKey GenesisUTxOKey =
-        GenesisUTxOSigningKey (SignKeyDSIGN DSIGN)
-      deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisUTxOKey)
-      deriving newtype (ToCBOR, FromCBOR)
-      deriving anyclass SerialiseAsCBOR
-
-    deterministicSigningKey :: AsType GenesisUTxOKey -> Crypto.Seed -> SigningKey GenesisUTxOKey
-    deterministicSigningKey AsGenesisUTxOKey seed =
-        GenesisUTxOSigningKey (Crypto.genKeyDSIGN seed)
-
-    deterministicSigningKeySeedSize :: AsType GenesisUTxOKey -> Word
-    deterministicSigningKeySeedSize AsGenesisUTxOKey =
-        Crypto.seedSizeDSIGN proxy
-      where
-        proxy :: Proxy Shelley.DSIGN
-        proxy = Proxy
-
-    getVerificationKey :: SigningKey GenesisUTxOKey -> VerificationKey GenesisUTxOKey
-    getVerificationKey (GenesisUTxOSigningKey sk) =
-        GenesisUTxOVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk))
-
-    verificationKeyHash :: VerificationKey GenesisUTxOKey -> Hash GenesisUTxOKey
-    verificationKeyHash (GenesisUTxOVerificationKey vkey) =
-        GenesisUTxOKeyHash (Shelley.hashKey vkey)
-
+  newtype VerificationKey GenesisUTxOKey
+    = GenesisUTxOVerificationKey (Shelley.VKey Shelley.Payment)
+    deriving stock Eq
+    deriving (Show, IsString) via UsingRawBytesHex (VerificationKey GenesisUTxOKey)
+    deriving newtype (ToCBOR, FromCBOR)
+    deriving anyclass SerialiseAsCBOR
+
+  newtype SigningKey GenesisUTxOKey
+    = GenesisUTxOSigningKey (SignKeyDSIGN DSIGN)
+    deriving (Show, IsString) via UsingRawBytesHex (SigningKey GenesisUTxOKey)
+    deriving newtype (ToCBOR, FromCBOR)
+    deriving anyclass SerialiseAsCBOR
+
+  deterministicSigningKey :: AsType GenesisUTxOKey -> Crypto.Seed -> SigningKey GenesisUTxOKey
+  deterministicSigningKey AsGenesisUTxOKey seed =
+    GenesisUTxOSigningKey (Crypto.genKeyDSIGN seed)
+
+  deterministicSigningKeySeedSize :: AsType GenesisUTxOKey -> Word
+  deterministicSigningKeySeedSize AsGenesisUTxOKey =
+    Crypto.seedSizeDSIGN proxy
+   where
+    proxy :: Proxy Shelley.DSIGN
+    proxy = Proxy
+
+  getVerificationKey :: SigningKey GenesisUTxOKey -> VerificationKey GenesisUTxOKey
+  getVerificationKey (GenesisUTxOSigningKey sk) =
+    GenesisUTxOVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk))
+
+  verificationKeyHash :: VerificationKey GenesisUTxOKey -> Hash GenesisUTxOKey
+  verificationKeyHash (GenesisUTxOVerificationKey vkey) =
+    GenesisUTxOKeyHash (Shelley.hashKey vkey)
 
 instance SerialiseAsRawBytes (VerificationKey GenesisUTxOKey) where
-    serialiseToRawBytes (GenesisUTxOVerificationKey (Shelley.VKey vk)) =
-      Crypto.rawSerialiseVerKeyDSIGN vk
+  serialiseToRawBytes (GenesisUTxOVerificationKey (Shelley.VKey vk)) =
+    Crypto.rawSerialiseVerKeyDSIGN vk
 
-    deserialiseFromRawBytes (AsVerificationKey AsGenesisUTxOKey) bs =
-      GenesisUTxOVerificationKey . Shelley.VKey <$>
-        Crypto.rawDeserialiseVerKeyDSIGN bs
+  deserialiseFromRawBytes (AsVerificationKey AsGenesisUTxOKey) bs =
+    GenesisUTxOVerificationKey . Shelley.VKey
+      <$> Crypto.rawDeserialiseVerKeyDSIGN bs
 
 instance SerialiseAsRawBytes (SigningKey GenesisUTxOKey) where
-    serialiseToRawBytes (GenesisUTxOSigningKey sk) =
-      Crypto.rawSerialiseSignKeyDSIGN sk
-
-    deserialiseFromRawBytes (AsSigningKey AsGenesisUTxOKey) bs =
-      GenesisUTxOSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs
+  serialiseToRawBytes (GenesisUTxOSigningKey sk) =
+    Crypto.rawSerialiseSignKeyDSIGN sk
+  deserialiseFromRawBytes (AsSigningKey AsGenesisUTxOKey) bs =
+    GenesisUTxOSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs
 
-newtype instance Hash GenesisUTxOKey =
-    GenesisUTxOKeyHash (Shelley.KeyHash Shelley.Payment)
+newtype instance Hash GenesisUTxOKey
  = GenesisUTxOKeyHash (Shelley.KeyHash Shelley.Payment)
  deriving stock (Eq, Ord)
  deriving (Show, IsString) via UsingRawBytesHex (Hash GenesisUTxOKey)
  deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash GenesisUTxOKey)
  deriving anyclass SerialiseAsCBOR
 
 instance SerialiseAsRawBytes (Hash GenesisUTxOKey) where
-    serialiseToRawBytes (GenesisUTxOKeyHash (Shelley.KeyHash vkh)) =
-      Crypto.hashToBytes vkh
+  serialiseToRawBytes (GenesisUTxOKeyHash (Shelley.KeyHash vkh)) =
    Crypto.hashToBytes vkh
 
-    deserialiseFromRawBytes (AsHash AsGenesisUTxOKey) bs =
-      GenesisUTxOKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs
+  deserialiseFromRawBytes (AsHash AsGenesisUTxOKey) bs =
+    GenesisUTxOKeyHash .
Shelley.KeyHash <$> Crypto.hashFromBytes bs instance HasTextEnvelope (VerificationKey GenesisUTxOKey) where - textEnvelopeType _ = "GenesisUTxOVerificationKey_" - <> fromString (Crypto.algorithmNameDSIGN proxy) - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy + textEnvelopeType _ = + "GenesisUTxOVerificationKey_" + <> fromString (Crypto.algorithmNameDSIGN proxy) + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy instance HasTextEnvelope (SigningKey GenesisUTxOKey) where - textEnvelopeType _ = "GenesisUTxOSigningKey_" - <> fromString (Crypto.algorithmNameDSIGN proxy) - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy - -- TODO: use a different type from the stake pool key, since some operations - -- need a genesis key specifically + textEnvelopeType _ = + "GenesisUTxOSigningKey_" + <> fromString (Crypto.algorithmNameDSIGN proxy) + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy + +-- TODO: use a different type from the stake pool key, since some operations +-- need a genesis key specifically instance CastVerificationKeyRole GenesisUTxOKey PaymentKey where - castVerificationKey (GenesisUTxOVerificationKey (Shelley.VKey vkey)) = - PaymentVerificationKey (Shelley.VKey vkey) + castVerificationKey (GenesisUTxOVerificationKey (Shelley.VKey vkey)) = + PaymentVerificationKey (Shelley.VKey vkey) instance CastSigningKeyRole GenesisUTxOKey PaymentKey where - castSigningKey (GenesisUTxOSigningKey skey) = - PaymentSigningKey skey - + castSigningKey (GenesisUTxOSigningKey skey) = + PaymentSigningKey skey -- -- stake pool keys @@ -1112,86 +1126,85 @@ instance CastSigningKeyRole GenesisUTxOKey PaymentKey where data StakePoolKey instance HasTypeProxy StakePoolKey where - data AsType StakePoolKey = AsStakePoolKey - proxyToAsType _ = AsStakePoolKey + data AsType StakePoolKey = AsStakePoolKey + proxyToAsType _ = AsStakePoolKey instance Key StakePoolKey where - - newtype VerificationKey StakePoolKey = - StakePoolVerificationKey (Shelley.VKey Shelley.StakePool) - 
deriving stock (Eq) - deriving (Show, IsString) via UsingRawBytesHex (VerificationKey StakePoolKey) - deriving newtype (EncCBOR, DecCBOR, ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - newtype SigningKey StakePoolKey = - StakePoolSigningKey (SignKeyDSIGN DSIGN) - deriving (Show, IsString) via UsingRawBytesHex (SigningKey StakePoolKey) - deriving newtype (ToCBOR, FromCBOR) - deriving anyclass SerialiseAsCBOR - - deterministicSigningKey :: AsType StakePoolKey -> Crypto.Seed -> SigningKey StakePoolKey - deterministicSigningKey AsStakePoolKey seed = - StakePoolSigningKey (Crypto.genKeyDSIGN seed) - - deterministicSigningKeySeedSize :: AsType StakePoolKey -> Word - deterministicSigningKeySeedSize AsStakePoolKey = - Crypto.seedSizeDSIGN proxy - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy - - getVerificationKey :: SigningKey StakePoolKey -> VerificationKey StakePoolKey - getVerificationKey (StakePoolSigningKey sk) = - StakePoolVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk)) - - verificationKeyHash :: VerificationKey StakePoolKey -> Hash StakePoolKey - verificationKeyHash (StakePoolVerificationKey vkey) = - StakePoolKeyHash (Shelley.hashKey vkey) + newtype VerificationKey StakePoolKey + = StakePoolVerificationKey (Shelley.VKey Shelley.StakePool) + deriving stock Eq + deriving (Show, IsString) via UsingRawBytesHex (VerificationKey StakePoolKey) + deriving newtype (EncCBOR, DecCBOR, ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + newtype SigningKey StakePoolKey + = StakePoolSigningKey (SignKeyDSIGN DSIGN) + deriving (Show, IsString) via UsingRawBytesHex (SigningKey StakePoolKey) + deriving newtype (ToCBOR, FromCBOR) + deriving anyclass SerialiseAsCBOR + + deterministicSigningKey :: AsType StakePoolKey -> Crypto.Seed -> SigningKey StakePoolKey + deterministicSigningKey AsStakePoolKey seed = + StakePoolSigningKey (Crypto.genKeyDSIGN seed) + + deterministicSigningKeySeedSize :: AsType StakePoolKey -> Word + 
deterministicSigningKeySeedSize AsStakePoolKey = + Crypto.seedSizeDSIGN proxy + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy + + getVerificationKey :: SigningKey StakePoolKey -> VerificationKey StakePoolKey + getVerificationKey (StakePoolSigningKey sk) = + StakePoolVerificationKey (Shelley.VKey (Crypto.deriveVerKeyDSIGN sk)) + + verificationKeyHash :: VerificationKey StakePoolKey -> Hash StakePoolKey + verificationKeyHash (StakePoolVerificationKey vkey) = + StakePoolKeyHash (Shelley.hashKey vkey) instance SerialiseAsRawBytes (VerificationKey StakePoolKey) where - serialiseToRawBytes (StakePoolVerificationKey (Shelley.VKey vk)) = - Crypto.rawSerialiseVerKeyDSIGN vk + serialiseToRawBytes (StakePoolVerificationKey (Shelley.VKey vk)) = + Crypto.rawSerialiseVerKeyDSIGN vk - deserialiseFromRawBytes (AsVerificationKey AsStakePoolKey) bs = - StakePoolVerificationKey . Shelley.VKey <$> - Crypto.rawDeserialiseVerKeyDSIGN bs + deserialiseFromRawBytes (AsVerificationKey AsStakePoolKey) bs = + StakePoolVerificationKey . 
Shelley.VKey + <$> Crypto.rawDeserialiseVerKeyDSIGN bs instance SerialiseAsRawBytes (SigningKey StakePoolKey) where - serialiseToRawBytes (StakePoolSigningKey sk) = - Crypto.rawSerialiseSignKeyDSIGN sk + serialiseToRawBytes (StakePoolSigningKey sk) = + Crypto.rawSerialiseSignKeyDSIGN sk - deserialiseFromRawBytes (AsSigningKey AsStakePoolKey) bs = - StakePoolSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs + deserialiseFromRawBytes (AsSigningKey AsStakePoolKey) bs = + StakePoolSigningKey <$> Crypto.rawDeserialiseSignKeyDSIGN bs instance SerialiseAsBech32 (VerificationKey StakePoolKey) where - bech32PrefixFor _ = "pool_vk" - bech32PrefixesPermitted _ = ["pool_vk"] + bech32PrefixFor _ = "pool_vk" + bech32PrefixesPermitted _ = ["pool_vk"] instance SerialiseAsBech32 (SigningKey StakePoolKey) where - bech32PrefixFor _ = "pool_sk" - bech32PrefixesPermitted _ = ["pool_sk"] + bech32PrefixFor _ = "pool_sk" + bech32PrefixesPermitted _ = ["pool_sk"] -newtype instance Hash StakePoolKey = - StakePoolKeyHash (Shelley.KeyHash Shelley.StakePool) +newtype instance Hash StakePoolKey + = StakePoolKeyHash (Shelley.KeyHash Shelley.StakePool) deriving stock (Eq, Ord) deriving (Show, IsString) via UsingRawBytesHex (Hash StakePoolKey) deriving (ToCBOR, FromCBOR) via UsingRawBytes (Hash StakePoolKey) deriving anyclass SerialiseAsCBOR instance SerialiseAsRawBytes (Hash StakePoolKey) where - serialiseToRawBytes (StakePoolKeyHash (Shelley.KeyHash vkh)) = - Crypto.hashToBytes vkh + serialiseToRawBytes (StakePoolKeyHash (Shelley.KeyHash vkh)) = + Crypto.hashToBytes vkh - deserialiseFromRawBytes (AsHash AsStakePoolKey) bs = - StakePoolKeyHash . Shelley.KeyHash <$> Crypto.hashFromBytes bs + deserialiseFromRawBytes (AsHash AsStakePoolKey) bs = + StakePoolKeyHash . 
Shelley.KeyHash <$> Crypto.hashFromBytes bs instance SerialiseAsBech32 (Hash StakePoolKey) where - bech32PrefixFor _ = "pool" - bech32PrefixesPermitted _ = ["pool"] + bech32PrefixFor _ = "pool" + bech32PrefixesPermitted _ = ["pool"] instance ToJSON (Hash StakePoolKey) where - toJSON = toJSON . serialiseToBech32 + toJSON = toJSON . serialiseToBech32 instance ToJSONKey (Hash StakePoolKey) where toJSONKey = toJSONKeyText serialiseToBech32 @@ -1200,21 +1213,25 @@ instance FromJSON (Hash StakePoolKey) where parseJSON = withText "PoolId" $ \str -> case deserialiseFromBech32 (AsHash AsStakePoolKey) str of Left err -> - fail $ "Error deserialising Hash StakePoolKey: " <> Text.unpack str <> - " Error: " <> displayError err + fail $ + "Error deserialising Hash StakePoolKey: " + <> Text.unpack str + <> " Error: " + <> displayError err Right h -> pure h instance HasTextEnvelope (VerificationKey StakePoolKey) where - textEnvelopeType _ = "StakePoolVerificationKey_" - <> fromString (Crypto.algorithmNameDSIGN proxy) - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy + textEnvelopeType _ = + "StakePoolVerificationKey_" + <> fromString (Crypto.algorithmNameDSIGN proxy) + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy instance HasTextEnvelope (SigningKey StakePoolKey) where - textEnvelopeType _ = "StakePoolSigningKey_" - <> fromString (Crypto.algorithmNameDSIGN proxy) - where - proxy :: Proxy Shelley.DSIGN - proxy = Proxy - + textEnvelopeType _ = + "StakePoolSigningKey_" + <> fromString (Crypto.algorithmNameDSIGN proxy) + where + proxy :: Proxy Shelley.DSIGN + proxy = Proxy diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/OperationalCertificate.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/OperationalCertificate.hs index b8eb736501..37b57f131c 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/OperationalCertificate.hs +++ 
b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/OperationalCertificate.hs @@ -6,93 +6,95 @@ -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/OperationalCertificate.hs -- | Operational certificates --- -module Cardano.Api.OperationalCertificate ( - -- OperationalCertIssueError (..) +module Cardano.Api.OperationalCertificate + ( -- OperationalCertIssueError (..) OperationalCertificate (..) , OperationalCertificateIssueCounter (..) , Shelley.KESPeriod (..) , getHotKey , getKesPeriod , getOpCertCount - -- , issueOperationalCertificate + -- , issueOperationalCertificate + -- * Data family instances , AsType (..) ) where -import Cardano.Api.Any -import Cardano.Api.Key -import Cardano.Api.KeysByron -import Cardano.Api.KeysPraos -import Cardano.Api.KeysShelley -import Cardano.Api.SerialiseTextEnvelope -import qualified Cardano.Ledger.Binary as CBOR (CBORGroup (..), shelleyProtVer, - toPlainDecoder, toPlainEncoding) -import Cardano.Protocol.Crypto (StandardCrypto) -import qualified Cardano.Protocol.TPraos.OCert as Shelley -import Data.Word - +import Cardano.Api.Any +import Cardano.Api.Key +import Cardano.Api.KeysByron +import Cardano.Api.KeysPraos +import Cardano.Api.KeysShelley +import Cardano.Api.SerialiseTextEnvelope +import Cardano.Ledger.Binary qualified as CBOR + ( CBORGroup (..) 
+ , shelleyProtVer + , toPlainDecoder + , toPlainEncoding + ) +import Cardano.Protocol.Crypto (StandardCrypto) +import Cardano.Protocol.TPraos.OCert qualified as Shelley +import Data.Word -- ---------------------------------------------------------------------------- -- Operational certificates -- -data OperationalCertificate = - OperationalCertificate - !(Shelley.OCert StandardCrypto) - !(VerificationKey StakePoolKey) +data OperationalCertificate + = OperationalCertificate + !(Shelley.OCert StandardCrypto) + !(VerificationKey StakePoolKey) deriving (Eq, Show) deriving anyclass SerialiseAsCBOR -data OperationalCertificateIssueCounter = - OperationalCertificateIssueCounter - { opCertIssueCount :: !Word64 - , opCertIssueColdKey :: !(VerificationKey StakePoolKey) -- For consistency checking - } +data OperationalCertificateIssueCounter + = OperationalCertificateIssueCounter + { opCertIssueCount :: !Word64 + , opCertIssueColdKey :: !(VerificationKey StakePoolKey) -- For consistency checking + } deriving (Eq, Show) deriving anyclass SerialiseAsCBOR - instance ToCBOR OperationalCertificate where - toCBOR = CBOR.toPlainEncoding CBOR.shelleyProtVer . encCBOR + toCBOR = CBOR.toPlainEncoding CBOR.shelleyProtVer . encCBOR instance FromCBOR OperationalCertificate where - fromCBOR = CBOR.toPlainDecoder Nothing CBOR.shelleyProtVer decCBOR + fromCBOR = CBOR.toPlainDecoder Nothing CBOR.shelleyProtVer decCBOR instance ToCBOR OperationalCertificateIssueCounter where - toCBOR = CBOR.toPlainEncoding CBOR.shelleyProtVer . encCBOR + toCBOR = CBOR.toPlainEncoding CBOR.shelleyProtVer . 
encCBOR instance FromCBOR OperationalCertificateIssueCounter where - fromCBOR = CBOR.toPlainDecoder Nothing CBOR.shelleyProtVer decCBOR + fromCBOR = CBOR.toPlainDecoder Nothing CBOR.shelleyProtVer decCBOR instance EncCBOR OperationalCertificate where - encCBOR (OperationalCertificate ocert vkey) = - encCBOR (CBOR.CBORGroup ocert, vkey) + encCBOR (OperationalCertificate ocert vkey) = + encCBOR (CBOR.CBORGroup ocert, vkey) instance DecCBOR OperationalCertificate where - decCBOR = do - (CBOR.CBORGroup ocert, vkey) <- decCBOR - return (OperationalCertificate ocert vkey) + decCBOR = do + (CBOR.CBORGroup ocert, vkey) <- decCBOR + return (OperationalCertificate ocert vkey) instance EncCBOR OperationalCertificateIssueCounter where - encCBOR (OperationalCertificateIssueCounter counter vkey) = - encCBOR (counter, vkey) + encCBOR (OperationalCertificateIssueCounter counter vkey) = + encCBOR (counter, vkey) instance DecCBOR OperationalCertificateIssueCounter where - decCBOR = do - (counter, vkey) <- decCBOR - return (OperationalCertificateIssueCounter counter vkey) + decCBOR = do + (counter, vkey) <- decCBOR + return (OperationalCertificateIssueCounter counter vkey) instance HasTypeProxy OperationalCertificate where - data AsType OperationalCertificate = AsOperationalCertificate - proxyToAsType _ = AsOperationalCertificate + data AsType OperationalCertificate = AsOperationalCertificate + proxyToAsType _ = AsOperationalCertificate instance HasTypeProxy OperationalCertificateIssueCounter where - data AsType OperationalCertificateIssueCounter = AsOperationalCertificateIssueCounter - proxyToAsType _ = AsOperationalCertificateIssueCounter + data AsType OperationalCertificateIssueCounter = AsOperationalCertificateIssueCounter + proxyToAsType _ = AsOperationalCertificateIssueCounter instance HasTextEnvelope OperationalCertificate where - textEnvelopeType _ = "NodeOperationalCertificate" + textEnvelopeType _ = "NodeOperationalCertificate" getHotKey :: OperationalCertificate -> 
VerificationKey KesKey getHotKey (OperationalCertificate cert _) = KesVerificationKey $ Shelley.ocertVkHot cert diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Protocol/Types.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Protocol/Types.hs index a1b804ebd4..13de6c533b 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Protocol/Types.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/Protocol/Types.hs @@ -9,99 +9,112 @@ -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/Protocol/Types.hs -module Cardano.Api.Protocol.Types ( - BlockType (..) +module Cardano.Api.Protocol.Types + ( BlockType (..) , Protocol (..) , ProtocolClient (..) , ProtocolClientInfoArgs (..) , ProtocolInfoArgs (..) ) where -import Cardano.Chain.Slotting (EpochSlots) -import Data.Bifunctor (bimap) -import Ouroboros.Consensus.Block.Forging (BlockForging) -import Ouroboros.Consensus.Byron.ByronHFC (ByronBlockHFC) -import Ouroboros.Consensus.Cardano -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.Node -import Ouroboros.Consensus.HardFork.Combinator.Embed.Unary -import qualified Ouroboros.Consensus.Ledger.SupportsProtocol as Consensus - (LedgerSupportsProtocol) -import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolClientInfo (..), - ProtocolInfo (..)) -import Ouroboros.Consensus.Node.Run (RunNode) -import qualified Ouroboros.Consensus.Protocol.TPraos as Consensus -import qualified Ouroboros.Consensus.Shelley.Eras as Consensus (ShelleyEra) -import Ouroboros.Consensus.Shelley.HFEras () -import qualified Ouroboros.Consensus.Shelley.Ledger.Block as Consensus - (ShelleyBlock) -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Shelley.ShelleyHFC (ShelleyBlockHFC) -import Ouroboros.Consensus.Util.IOLike (IOLike) - +import Cardano.Chain.Slotting (EpochSlots) +import Data.Bifunctor (bimap) +import 
Ouroboros.Consensus.Block.Forging (BlockForging) +import Ouroboros.Consensus.Byron.ByronHFC (ByronBlockHFC) +import Ouroboros.Consensus.Cardano +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.Node +import Ouroboros.Consensus.HardFork.Combinator.Embed.Unary +import Ouroboros.Consensus.Ledger.SupportsProtocol qualified as Consensus + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.Node.ProtocolInfo + ( ProtocolClientInfo (..) + , ProtocolInfo (..) + ) +import Ouroboros.Consensus.Node.Run (RunNode) +import Ouroboros.Consensus.Protocol.TPraos qualified as Consensus +import Ouroboros.Consensus.Shelley.Eras qualified as Consensus (ShelleyEra) +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger.Block qualified as Consensus + ( ShelleyBlock + ) +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.ShelleyHFC (ShelleyBlockHFC) +import Ouroboros.Consensus.Util.IOLike (IOLike) class (RunNode blk, IOLike m) => Protocol m blk where data ProtocolInfoArgs m blk - protocolInfo :: ProtocolInfoArgs m blk -> ( ProtocolInfo blk - , m [BlockForging m blk] - ) + protocolInfo :: + ProtocolInfoArgs m blk -> + ( ProtocolInfo blk + , m [BlockForging m blk] + ) -- | Node client support for each consensus protocol. -- -- This is like 'Protocol' but for clients of the node, so with less onerous -- requirements than to run a node. --- class RunNode blk => ProtocolClient blk where data ProtocolClientInfoArgs blk protocolClientInfo :: ProtocolClientInfoArgs blk -> ProtocolClientInfo blk - -- | Run PBFT against the Byron ledger instance IOLike m => Protocol m ByronBlockHFC where data ProtocolInfoArgs m ByronBlockHFC = ProtocolInfoArgsByron ProtocolParamsByron - protocolInfo (ProtocolInfoArgsByron params) = ( inject $ protocolInfoByron params - , pure . 
map inject $ blockForgingByron params - ) + protocolInfo (ProtocolInfoArgsByron params) = + ( inject $ protocolInfoByron params + , pure . map inject $ blockForgingByron params + ) instance (CardanoHardForkConstraints StandardCrypto, IOLike m) => Protocol m (CardanoBlock StandardCrypto) where - data ProtocolInfoArgs m (CardanoBlock StandardCrypto) = - ProtocolInfoArgsCardano - (CardanoProtocolParams StandardCrypto) + data ProtocolInfoArgs m (CardanoBlock StandardCrypto) + = ProtocolInfoArgsCardano + (CardanoProtocolParams StandardCrypto) protocolInfo (ProtocolInfoArgsCardano paramsCardano) = - protocolInfoCardano paramsCardano + protocolInfoCardano paramsCardano instance ProtocolClient ByronBlockHFC where - data ProtocolClientInfoArgs ByronBlockHFC = - ProtocolClientInfoArgsByron EpochSlots + data ProtocolClientInfoArgs ByronBlockHFC + = ProtocolClientInfoArgsByron EpochSlots protocolClientInfo (ProtocolClientInfoArgsByron epochSlots) = inject $ protocolClientInfoByron epochSlots instance CardanoHardForkConstraints StandardCrypto => ProtocolClient (CardanoBlock StandardCrypto) where - data ProtocolClientInfoArgs (CardanoBlock StandardCrypto) = - ProtocolClientInfoArgsCardano EpochSlots + data ProtocolClientInfoArgs (CardanoBlock StandardCrypto) + = ProtocolClientInfoArgsCardano EpochSlots protocolClientInfo (ProtocolClientInfoArgsCardano epochSlots) = protocolClientInfoCardano epochSlots -instance ( IOLike m - , Consensus.LedgerSupportsProtocol - (Consensus.ShelleyBlock - (Consensus.TPraos StandardCrypto) ShelleyEra) - ) - => Protocol m (ShelleyBlockHFC (Consensus.TPraos StandardCrypto) ShelleyEra) where - data ProtocolInfoArgs m (ShelleyBlockHFC (Consensus.TPraos StandardCrypto) ShelleyEra) = ProtocolInfoArgsShelley - ShelleyGenesis - (ProtocolParamsShelleyBased StandardCrypto) - ProtVer +instance + ( IOLike m + , Consensus.LedgerSupportsProtocol + ( Consensus.ShelleyBlock + (Consensus.TPraos StandardCrypto) + ShelleyEra + ) + ) => + Protocol m (ShelleyBlockHFC 
(Consensus.TPraos StandardCrypto) ShelleyEra) + where + data ProtocolInfoArgs m (ShelleyBlockHFC (Consensus.TPraos StandardCrypto) ShelleyEra) + = ProtocolInfoArgsShelley + ShelleyGenesis + (ProtocolParamsShelleyBased StandardCrypto) + ProtVer protocolInfo (ProtocolInfoArgsShelley genesis shelleyBasedProtocolParams' protVer) = bimap inject (fmap $ map inject) $ protocolInfoShelley genesis shelleyBasedProtocolParams' protVer -instance Consensus.LedgerSupportsProtocol - (Consensus.ShelleyBlock - (Consensus.TPraos StandardCrypto) Consensus.ShelleyEra) - => ProtocolClient (ShelleyBlockHFC (Consensus.TPraos StandardCrypto) ShelleyEra) where - data ProtocolClientInfoArgs (ShelleyBlockHFC (Consensus.TPraos StandardCrypto) ShelleyEra) = - ProtocolClientInfoArgsShelley +instance + Consensus.LedgerSupportsProtocol + ( Consensus.ShelleyBlock + (Consensus.TPraos StandardCrypto) + Consensus.ShelleyEra + ) => + ProtocolClient (ShelleyBlockHFC (Consensus.TPraos StandardCrypto) ShelleyEra) + where + data ProtocolClientInfoArgs (ShelleyBlockHFC (Consensus.TPraos StandardCrypto) ShelleyEra) + = ProtocolClientInfoArgsShelley protocolClientInfo ProtocolClientInfoArgsShelley = inject protocolClientInfoShelley diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/SerialiseTextEnvelope.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/SerialiseTextEnvelope.hs index 77e0393261..637af0bb88 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/SerialiseTextEnvelope.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/SerialiseTextEnvelope.hs @@ -9,9 +9,8 @@ -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/SerialiseTextEnvelope.hs -- | TextEnvelope Serialisation --- -module Cardano.Api.SerialiseTextEnvelope ( - FromSomeType (..) +module Cardano.Api.SerialiseTextEnvelope + ( FromSomeType (..) , HasTextEnvelope (..) , TextEnvelope (..) , TextEnvelopeDescr (..) 
@@ -24,26 +23,37 @@ module Cardano.Api.SerialiseTextEnvelope ( , readTextEnvelopeFromFile , readTextEnvelopeOfTypeFromFile , serialiseToTextEnvelope + -- * Data family instances , AsType (..) ) where -import Cardano.Api.Any -import Cardano.Ledger.Binary (DecoderError) -import Control.Monad (unless) -import Control.Monad.Trans.Except (ExceptT (..), runExceptT) -import Control.Monad.Trans.Except.Extra (firstExceptT, - handleIOExceptT, hoistEither) -import Data.Aeson as Aeson (FromJSON (..), ToJSON (..), - eitherDecodeStrict', object, withObject, (.:), (.=)) -import Data.Bifunctor (first) -import Data.ByteString (ByteString) -import qualified Data.ByteString as BS -import qualified Data.ByteString.Base16 as Base16 -import qualified Data.List as List -import Data.Maybe (fromMaybe) -import Data.String (IsString) -import qualified Data.Text.Encoding as Text +import Cardano.Api.Any +import Cardano.Ledger.Binary (DecoderError) +import Control.Monad (unless) +import Control.Monad.Trans.Except (ExceptT (..), runExceptT) +import Control.Monad.Trans.Except.Extra + ( firstExceptT + , handleIOExceptT + , hoistEither + ) +import Data.Aeson as Aeson + ( FromJSON (..) + , ToJSON (..) + , eitherDecodeStrict' + , object + , withObject + , (.:) + , (.=) + ) +import Data.Bifunctor (first) +import Data.ByteString (ByteString) +import Data.ByteString qualified as BS +import Data.ByteString.Base16 qualified as Base16 +import Data.List qualified as List +import Data.Maybe (fromMaybe) +import Data.String (IsString) +import Data.Text.Encoding qualified as Text -- ---------------------------------------------------------------------------- -- Text envelopes @@ -66,38 +76,39 @@ newtype TextEnvelopeDescr = TextEnvelopeDescr String -- -- It also contains a \"title\" field which is free-form, and could be used -- to indicate the role or purpose to a reader. 
--- data TextEnvelope = TextEnvelope - { teType :: !TextEnvelopeType + { teType :: !TextEnvelopeType , teDescription :: !TextEnvelopeDescr - , teRawCBOR :: !ByteString - } deriving (Eq, Show) + , teRawCBOR :: !ByteString + } + deriving (Eq, Show) instance HasTypeProxy TextEnvelope where - data AsType TextEnvelope = AsTextEnvelope - proxyToAsType _ = AsTextEnvelope + data AsType TextEnvelope = AsTextEnvelope + proxyToAsType _ = AsTextEnvelope instance ToJSON TextEnvelope where - toJSON TextEnvelope {teType, teDescription, teRawCBOR} = - object [ "type" .= teType - , "description" .= teDescription - , "cborHex" .= Text.decodeUtf8 (Base16.encode teRawCBOR) - ] + toJSON TextEnvelope{teType, teDescription, teRawCBOR} = + object + [ "type" .= teType + , "description" .= teDescription + , "cborHex" .= Text.decodeUtf8 (Base16.encode teRawCBOR) + ] instance FromJSON TextEnvelope where parseJSON = withObject "TextEnvelope" $ \v -> - TextEnvelope <$> (v .: "type") - <*> (v .: "description") - <*> (parseJSONBase16 =<< v .: "cborHex") - where - parseJSONBase16 v = - either fail return . Base16.decode . Text.encodeUtf8 =<< parseJSON v - + TextEnvelope + <$> (v .: "type") + <*> (v .: "description") + <*> (parseJSONBase16 =<< v .: "cborHex") + where + parseJSONBase16 v = + either fail return . Base16.decode . Text.encodeUtf8 =<< parseJSON v -- | The errors that the pure 'TextEnvelope' parsing\/decoding functions can return. 
--- data TextEnvelopeError - = TextEnvelopeTypeError ![TextEnvelopeType] !TextEnvelopeType -- ^ expected, actual + = -- | expected, actual + TextEnvelopeTypeError ![TextEnvelopeType] !TextEnvelopeType | TextEnvelopeDecodeError !DecoderError | TextEnvelopeAesonDecodeError !String deriving (Eq, Show) @@ -105,123 +116,130 @@ data TextEnvelopeError instance Error TextEnvelopeError where displayError tee = case tee of - TextEnvelopeTypeError [TextEnvelopeType expType] - (TextEnvelopeType actType) -> + TextEnvelopeTypeError + [TextEnvelopeType expType] + (TextEnvelopeType actType) -> "TextEnvelope type error: " - <> " Expected: " <> expType - <> " Actual: " <> actType - + <> " Expected: " + <> expType + <> " Actual: " + <> actType TextEnvelopeTypeError expTypes (TextEnvelopeType actType) -> - "TextEnvelope type error: " - <> " Expected one of: " - <> List.intercalate ", " - [ expType | TextEnvelopeType expType <- expTypes ] - <> " Actual: " <> actType + "TextEnvelope type error: " + <> " Expected one of: " + <> List.intercalate + ", " + [expType | TextEnvelopeType expType <- expTypes] + <> " Actual: " + <> actType TextEnvelopeAesonDecodeError decErr -> "TextEnvelope aeson decode error: " <> decErr TextEnvelopeDecodeError decErr -> "TextEnvelope decode error: " <> show decErr -- | Check that the \"type\" of the 'TextEnvelope' is as expected. -- -- For example, one might check that the type is \"TxSignedShelley\". 
--- expectTextEnvelopeOfType :: TextEnvelopeType -> TextEnvelope -> Either TextEnvelopeError () -expectTextEnvelopeOfType expectedType TextEnvelope { teType = actualType } = - unless (expectedType == actualType) $ - Left (TextEnvelopeTypeError [expectedType] actualType) - +expectTextEnvelopeOfType expectedType TextEnvelope{teType = actualType} = + unless (expectedType == actualType) $ + Left (TextEnvelopeTypeError [expectedType] actualType) -- ---------------------------------------------------------------------------- -- Serialisation in text envelope format -- class SerialiseAsCBOR a => HasTextEnvelope a where - textEnvelopeType :: AsType a -> TextEnvelopeType - - textEnvelopeDefaultDescr :: a -> TextEnvelopeDescr - textEnvelopeDefaultDescr _ = "" + textEnvelopeType :: AsType a -> TextEnvelopeType + textEnvelopeDefaultDescr :: a -> TextEnvelopeDescr + textEnvelopeDefaultDescr _ = "" -serialiseToTextEnvelope :: forall a. HasTextEnvelope a - => Maybe TextEnvelopeDescr -> a -> TextEnvelope +serialiseToTextEnvelope :: + forall a. 
+  HasTextEnvelope a =>
+  Maybe TextEnvelopeDescr -> a -> TextEnvelope
 serialiseToTextEnvelope mbDescr a =
-    TextEnvelope {
-      teType = textEnvelopeType ttoken
-    , teDescription = fromMaybe (textEnvelopeDefaultDescr a) mbDescr
+  TextEnvelope
+    { teType = textEnvelopeType ttoken
+    , teDescription = fromMaybe (textEnvelopeDefaultDescr a) mbDescr
     , teRawCBOR = serialiseToCBOR a
     }
-  where
-    ttoken :: AsType a
-    ttoken = proxyToAsType Proxy
-
-
-deserialiseFromTextEnvelope :: HasTextEnvelope a
-                            => AsType a
-                            -> TextEnvelope
-                            -> Either TextEnvelopeError a
+ where
+  ttoken :: AsType a
+  ttoken = proxyToAsType Proxy
+
+deserialiseFromTextEnvelope ::
+  HasTextEnvelope a =>
+  AsType a ->
+  TextEnvelope ->
+  Either TextEnvelopeError a
 deserialiseFromTextEnvelope ttoken te = do
-    expectTextEnvelopeOfType (textEnvelopeType ttoken) te
-    first TextEnvelopeDecodeError $
-      deserialiseFromCBOR ttoken (teRawCBOR te) --TODO: You have switched from CBOR to JSON
-
-
-deserialiseFromTextEnvelopeAnyOf :: [FromSomeType HasTextEnvelope b]
-                                 -> TextEnvelope
-                                 -> Either TextEnvelopeError b
+  expectTextEnvelopeOfType (textEnvelopeType ttoken) te
+  first TextEnvelopeDecodeError $
+    deserialiseFromCBOR ttoken (teRawCBOR te) -- TODO: You have switched from CBOR to JSON
+
+deserialiseFromTextEnvelopeAnyOf ::
+  [FromSomeType HasTextEnvelope b] ->
+  TextEnvelope ->
+  Either TextEnvelopeError b
 deserialiseFromTextEnvelopeAnyOf types te =
-    case List.find matching types of
-      Nothing ->
-        Left (TextEnvelopeTypeError expectedTypes actualType)
-
-      Just (FromSomeType ttoken f) ->
-        first TextEnvelopeDecodeError $
-          f <$> deserialiseFromCBOR ttoken (teRawCBOR te)
-  where
-    actualType    = teType te
-    expectedTypes = [ textEnvelopeType ttoken
-                    | FromSomeType ttoken _f <- types ]
-
-    matching (FromSomeType ttoken _f) = actualType == textEnvelopeType ttoken
-
-readFileTextEnvelope :: HasTextEnvelope a
-                     => AsType a
-                     -> FilePath
-                     -> IO (Either (FileError TextEnvelopeError) a)
+  case List.find matching types of
+    Nothing ->
+      Left (TextEnvelopeTypeError expectedTypes actualType)
+    Just (FromSomeType ttoken f) ->
+      first TextEnvelopeDecodeError $
+        f <$> deserialiseFromCBOR ttoken (teRawCBOR te)
+ where
+  actualType = teType te
+  expectedTypes =
+    [ textEnvelopeType ttoken
+    | FromSomeType ttoken _f <- types
+    ]
+
+  matching (FromSomeType ttoken _f) = actualType == textEnvelopeType ttoken
+
+readFileTextEnvelope ::
+  HasTextEnvelope a =>
+  AsType a ->
+  FilePath ->
+  IO (Either (FileError TextEnvelopeError) a)
 readFileTextEnvelope ttoken path =
-    runExceptT $ do
-      content <- handleIOExceptT (FileIOError path) $ BS.readFile path
-      firstExceptT (FileError path) $ hoistEither $ do
-        te <- first TextEnvelopeAesonDecodeError $ Aeson.eitherDecodeStrict' content
-        deserialiseFromTextEnvelope ttoken te
-
-
-readFileTextEnvelopeAnyOf :: [FromSomeType HasTextEnvelope b]
-                          -> FilePath
-                          -> IO (Either (FileError TextEnvelopeError) b)
+  runExceptT $ do
+    content <- handleIOExceptT (FileIOError path) $ BS.readFile path
+    firstExceptT (FileError path) $ hoistEither $ do
+      te <- first TextEnvelopeAesonDecodeError $ Aeson.eitherDecodeStrict' content
+      deserialiseFromTextEnvelope ttoken te
+
+readFileTextEnvelopeAnyOf ::
+  [FromSomeType HasTextEnvelope b] ->
+  FilePath ->
+  IO (Either (FileError TextEnvelopeError) b)
 readFileTextEnvelopeAnyOf types path =
-    runExceptT $ do
-      content <- handleIOExceptT (FileIOError path) $ BS.readFile path
-      firstExceptT (FileError path) $ hoistEither $ do
-        te <- first TextEnvelopeAesonDecodeError $ Aeson.eitherDecodeStrict' content
-        deserialiseFromTextEnvelopeAnyOf types te
-
-
-readTextEnvelopeFromFile :: FilePath
-                         -> IO (Either (FileError TextEnvelopeError) TextEnvelope)
+  runExceptT $ do
+    content <- handleIOExceptT (FileIOError path) $ BS.readFile path
+    firstExceptT (FileError path) $ hoistEither $ do
+      te <- first TextEnvelopeAesonDecodeError $ Aeson.eitherDecodeStrict' content
+      deserialiseFromTextEnvelopeAnyOf types te
+
+readTextEnvelopeFromFile ::
+  FilePath ->
+  IO (Either (FileError TextEnvelopeError) TextEnvelope)
 readTextEnvelopeFromFile path =
   runExceptT $ do
-    bs <- handleIOExceptT (FileIOError path) $
-            BS.readFile path
+    bs <-
+      handleIOExceptT (FileIOError path) $
+        BS.readFile path
     firstExceptT (FileError path . TextEnvelopeAesonDecodeError)
-      . hoistEither $ Aeson.eitherDecodeStrict' bs
-
+      . hoistEither
+      $ Aeson.eitherDecodeStrict' bs
 
 readTextEnvelopeOfTypeFromFile ::
-     TextEnvelopeType
-  -> FilePath
-  -> IO (Either (FileError TextEnvelopeError) TextEnvelope)
+  TextEnvelopeType ->
+  FilePath ->
+  IO (Either (FileError TextEnvelopeError) TextEnvelope)
 readTextEnvelopeOfTypeFromFile expectedType path =
   runExceptT $ do
     te <- ExceptT (readTextEnvelopeFromFile path)
-    firstExceptT (FileError path) $ hoistEither $
-      expectTextEnvelopeOfType expectedType te
+    firstExceptT (FileError path) $
+      hoistEither $
+        expectTextEnvelopeOfType expectedType te
     return te
diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/SerialiseUsing.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/SerialiseUsing.hs
index 5b53fca8be..5d81991b0b 100644
--- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/SerialiseUsing.hs
+++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Api/SerialiseUsing.hs
@@ -3,49 +3,49 @@
 -- DUPLICATE -- adapted from: cardano-api/src/Cardano/Api/SerialiseUsing.hs
 
 -- | Raw binary serialisation
---
-module Cardano.Api.SerialiseUsing (
-    UsingRawBytes (..)
+module Cardano.Api.SerialiseUsing
+  ( UsingRawBytes (..)
   , UsingRawBytesHex (..)
   ) where
 
-import           Cardano.Api.Any
-import           Data.Aeson.Types (FromJSON, FromJSONKey, ToJSON (..),
-                     ToJSONKey)
-import qualified Data.Aeson.Types as Aeson
-import           Data.ByteString (ByteString)
-import qualified Data.ByteString.Base16 as Base16
-import qualified Data.ByteString.Char8 as BSC
-import           Data.String (IsString (..))
-import qualified Data.Text.Encoding as Text
-import           Data.Typeable (Typeable, tyConName, typeRep, typeRepTyCon)
-
+import Cardano.Api.Any
+import Data.Aeson.Types
+  ( FromJSON
+  , FromJSONKey
+  , ToJSON (..)
+  , ToJSONKey
+  )
+import Data.Aeson.Types qualified as Aeson
+import Data.ByteString (ByteString)
+import Data.ByteString.Base16 qualified as Base16
+import Data.ByteString.Char8 qualified as BSC
+import Data.String (IsString (..))
+import Data.Text.Encoding qualified as Text
+import Data.Typeable (Typeable, tyConName, typeRep, typeRepTyCon)
 
 -- | For use with @deriving via@, to provide 'ToCBOR' and 'FromCBOR' instances,
 -- based on the 'SerialiseAsRawBytes' instance. Eg:
 --
 -- > deriving (ToCBOR, FromCBOR) via (UsingRawBytes Blah)
---
 newtype UsingRawBytes a = UsingRawBytes a
 
 instance (SerialiseAsRawBytes a, Typeable a) => ToCBOR (UsingRawBytes a) where
-    toCBOR (UsingRawBytes x) = toCBOR (serialiseToRawBytes x)
+  toCBOR (UsingRawBytes x) = toCBOR (serialiseToRawBytes x)
 
 instance (SerialiseAsRawBytes a, Typeable a) => FromCBOR (UsingRawBytes a) where
-    fromCBOR = do
-      bs <- fromCBOR
-      case deserialiseFromRawBytes ttoken bs of
-        Just x  -> return (UsingRawBytes x)
-        Nothing -> fail ("cannot deserialise as a " ++ tname)
-      where
-        ttoken = proxyToAsType (Proxy :: Proxy a)
-        tname  = (tyConName . typeRepTyCon . typeRep) (Proxy :: Proxy a)
+  fromCBOR = do
+    bs <- fromCBOR
+    case deserialiseFromRawBytes ttoken bs of
+      Just x -> return (UsingRawBytes x)
+      Nothing -> fail ("cannot deserialise as a " ++ tname)
+   where
+    ttoken = proxyToAsType (Proxy :: Proxy a)
+    tname = (tyConName . typeRepTyCon . typeRep) (Proxy :: Proxy a)
 
 instance (SerialiseAsRawBytes a, Typeable a) => EncCBOR (UsingRawBytes a)
 instance (SerialiseAsRawBytes a, Typeable a) => DecCBOR (UsingRawBytes a)
-
 -- | For use with @deriving via@, to provide instances for any\/all of 'Show',
 -- 'IsString', 'ToJSON', 'FromJSON', 'ToJSONKey', FromJSONKey' using a hex
 -- encoding, based on the 'SerialiseAsRawBytes' instance.
@@ -53,43 +53,40 @@ instance (SerialiseAsRawBytes a, Typeable a) => DecCBOR (UsingRawBytes a)
 -- > deriving (Show, IsString) via (UsingRawBytesHex Blah)
 -- > deriving (ToJSON, FromJSON) via (UsingRawBytesHex Blah)
 -- > deriving (ToJSONKey, FromJSONKey) via (UsingRawBytesHex Blah)
---
 newtype UsingRawBytesHex a = UsingRawBytesHex a
 
 instance SerialiseAsRawBytes a => Show (UsingRawBytesHex a) where
-    show (UsingRawBytesHex x) = show (serialiseToRawBytesHex x)
+  show (UsingRawBytesHex x) = show (serialiseToRawBytesHex x)
 
 instance SerialiseAsRawBytes a => IsString (UsingRawBytesHex a) where
-    fromString = either error id . deserialiseFromRawBytesBase16 . BSC.pack
+  fromString = either error id . deserialiseFromRawBytesBase16 . BSC.pack
 
 instance SerialiseAsRawBytes a => ToJSON (UsingRawBytesHex a) where
-    toJSON (UsingRawBytesHex x) = toJSON (serialiseToRawBytesHexText x)
+  toJSON (UsingRawBytesHex x) = toJSON (serialiseToRawBytesHexText x)
 
 instance (SerialiseAsRawBytes a, Typeable a) => FromJSON (UsingRawBytesHex a) where
   parseJSON =
     Aeson.withText tname $
      either fail pure . deserialiseFromRawBytesBase16 . Text.encodeUtf8
-    where
-      tname = (tyConName . typeRepTyCon . typeRep) (Proxy :: Proxy a)
+   where
+    tname = (tyConName . typeRepTyCon . typeRep) (Proxy :: Proxy a)
 
 instance SerialiseAsRawBytes a => ToJSONKey (UsingRawBytesHex a) where
   toJSONKey =
    Aeson.toJSONKeyText $ \(UsingRawBytesHex x) -> serialiseToRawBytesHexText x
 
-instance
-  (SerialiseAsRawBytes a, Typeable a) => FromJSONKey (UsingRawBytesHex a) where
-
+instance (SerialiseAsRawBytes a, Typeable a) => FromJSONKey (UsingRawBytesHex a) where
   fromJSONKey =
    Aeson.FromJSONKeyTextParser $
-    either fail pure . deserialiseFromRawBytesBase16 . Text.encodeUtf8
+      either fail pure . deserialiseFromRawBytesBase16 . Text.encodeUtf8
 
 deserialiseFromRawBytesBase16 ::
  SerialiseAsRawBytes a =>
  ByteString -> Either String (UsingRawBytesHex a)
 deserialiseFromRawBytesBase16 str =
  case Base16.decode str of
    Right raw -> case deserialiseFromRawBytes ttoken raw of
-      Just x  -> Right (UsingRawBytesHex x)
+      Just x -> Right (UsingRawBytesHex x)
      Nothing -> Left ("cannot deserialise " ++ show str)
-    Left msg  -> Left ("invalid hex " ++ show str ++ ", " ++ msg)
-  where
-    ttoken = proxyToAsType (Proxy :: Proxy a)
+    Left msg -> Left ("invalid hex " ++ show str ++ ", " ++ msg)
+ where
+  ttoken = proxyToAsType (Proxy :: Proxy a)
diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol.hs
index 8577a647d0..ca47c6ec72 100644
--- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol.hs
+++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol.hs
@@ -1,45 +1,42 @@
 -- DUPLICATE -- adapted from: cardano-node/src/Cardano/Node/Protocol.hs
 
-module Cardano.Node.Protocol (
-    ProtocolInstantiationError (..)
+module Cardano.Node.Protocol
+  ( ProtocolInstantiationError (..)
   , SomeConsensusProtocol (..)
   , mkConsensusProtocol
  ) where
 
-import           Cardano.Api.Any
-import           Cardano.Node.Protocol.Byron
-import           Cardano.Node.Protocol.Cardano
-import           Cardano.Node.Protocol.Shelley
-import           Cardano.Node.Protocol.Types (SomeConsensusProtocol (..))
-import           Cardano.Node.Types
-import           Control.Monad.Trans.Except (ExceptT)
-import           Control.Monad.Trans.Except.Extra (firstExceptT)
-
+import Cardano.Api.Any
+import Cardano.Node.Protocol.Byron
+import Cardano.Node.Protocol.Cardano
+import Cardano.Node.Protocol.Shelley
+import Cardano.Node.Protocol.Types (SomeConsensusProtocol (..))
+import Cardano.Node.Types
+import Control.Monad.Trans.Except (ExceptT)
+import Control.Monad.Trans.Except.Extra (firstExceptT)
 
 ------------------------------------------------------------------------------
 -- Conversions from configuration into specific protocols and their params
 --
 
 mkConsensusProtocol ::
-     NodeProtocolConfiguration
-  -> Maybe ProtocolFilepaths
-  -> ExceptT ProtocolInstantiationError IO SomeConsensusProtocol
+  NodeProtocolConfiguration ->
+  Maybe ProtocolFilepaths ->
+  ExceptT ProtocolInstantiationError IO SomeConsensusProtocol
 mkConsensusProtocol ncProtocolConfig mProtocolFiles =
-    case ncProtocolConfig of
-
-      NodeProtocolConfigurationByron config ->
-        firstExceptT ByronProtocolInstantiationError $
-          mkSomeConsensusProtocolByron config mProtocolFiles
-
-      NodeProtocolConfigurationShelley config ->
-        firstExceptT ShelleyProtocolInstantiationError $
-          mkSomeConsensusProtocolShelley config mProtocolFiles
-
-      NodeProtocolConfigurationCardano byronConfig
-                                       shelleyConfig
-                                       alonzoConfig
-                                       conwayConfig
-                                       hardForkConfig ->
+  case ncProtocolConfig of
+    NodeProtocolConfigurationByron config ->
+      firstExceptT ByronProtocolInstantiationError $
+        mkSomeConsensusProtocolByron config mProtocolFiles
+    NodeProtocolConfigurationShelley config ->
+      firstExceptT ShelleyProtocolInstantiationError $
+        mkSomeConsensusProtocolShelley config mProtocolFiles
+    NodeProtocolConfigurationCardano
+      byronConfig
+      shelleyConfig
+      alonzoConfig
+      conwayConfig
+      hardForkConfig ->
        firstExceptT CardanoProtocolInstantiationError $
          mkSomeConsensusProtocolCardano
            byronConfig
@@ -53,14 +50,13 @@ mkConsensusProtocol ncProtocolConfig mProtocolFiles =
 -- Errors
 --
 
-data ProtocolInstantiationError =
-    ByronProtocolInstantiationError ByronProtocolInstantiationError
+data ProtocolInstantiationError
+  = ByronProtocolInstantiationError ByronProtocolInstantiationError
   | ShelleyProtocolInstantiationError ShelleyProtocolInstantiationError
  | CardanoProtocolInstantiationError CardanoProtocolInstantiationError
  deriving Show
 
-
 instance Error ProtocolInstantiationError where
-  displayError (ByronProtocolInstantiationError err)   = displayError err
+  displayError (ByronProtocolInstantiationError err) = displayError err
  displayError (ShelleyProtocolInstantiationError err) = displayError err
  displayError (CardanoProtocolInstantiationError err) = displayError err
diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Alonzo.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Alonzo.hs
index 7fa3541957..eebabcf833 100644
--- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Alonzo.hs
+++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Alonzo.hs
@@ -3,34 +3,41 @@
 -- DUPLICATE -- adapted from: cardano-node/src/Cardano/Node/Protocol/Alonzo.hs
 
-module Cardano.Node.Protocol.Alonzo (
-    AlonzoProtocolInstantiationError (..)
+module Cardano.Node.Protocol.Alonzo
+  ( AlonzoProtocolInstantiationError (..)
+
+    -- * Reusable parts
  , readGenesis
  , validateGenesis
  ) where
 
-import           Cardano.Api.Any
-import qualified Cardano.Ledger.Alonzo.Genesis as Alonzo
-import           Cardano.Node.Protocol.Shelley (GenesisReadError,
-                     readGenesisAny)
-import           Cardano.Node.Types
-import           Cardano.Prelude
-import           Prelude (String)
+import Cardano.Api.Any
+import Cardano.Ledger.Alonzo.Genesis qualified as Alonzo
+import Cardano.Node.Protocol.Shelley
+  ( GenesisReadError
+  , readGenesisAny
+  )
+import Cardano.Node.Types
+import Cardano.Prelude
+import Prelude (String)
 
 --
 -- Alonzo genesis
 --
 
-readGenesis :: GenesisFile
-            -> Maybe GenesisHash
-            -> ExceptT GenesisReadError IO
-                       (Alonzo.AlonzoGenesis, GenesisHash)
+readGenesis ::
+  GenesisFile ->
+  Maybe GenesisHash ->
+  ExceptT
+    GenesisReadError
+    IO
+    (Alonzo.AlonzoGenesis, GenesisHash)
 readGenesis = readGenesisAny
 
-validateGenesis :: Alonzo.AlonzoGenesis
-                -> ExceptT AlonzoProtocolInstantiationError IO ()
-validateGenesis _ = return () --TODO alonzo: do the validation
+validateGenesis ::
+  Alonzo.AlonzoGenesis ->
+  ExceptT AlonzoProtocolInstantiationError IO ()
+validateGenesis _ = return () -- TODO alonzo: do the validation
 
 data AlonzoProtocolInstantiationError
  = InvalidCostModelError !FilePath
@@ -48,4 +55,3 @@ instance Error AlonzoProtocolInstantiationError where
      displayError err
  displayError (AlonzoCostModelDecodeError fp err) =
    "Error decoding cost model at: " <> show fp <> " Error: " <> err
-
diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Byron.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Byron.hs
index 9eb184f2ef..c2efad392a 100644
--- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Byron.hs
+++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Byron.hs
@@ -1,40 +1,46 @@
 {-# LANGUAGE DataKinds #-}
 {-# LANGUAGE FlexibleContexts #-}
 {-# LANGUAGE NamedFieldPuns #-}
-{-# LANGUAGE NoImplicitPrelude #-}
 {-# LANGUAGE OverloadedStrings #-}
+{-# LANGUAGE NoImplicitPrelude #-}
 
 -- DUPLICATE -- adapted from: cardano-node/src/Cardano/Node/Protocol/Byron.hs
 
-module Cardano.Node.Protocol.Byron (
-    mkSomeConsensusProtocolByron
+module Cardano.Node.Protocol.Byron
+  ( mkSomeConsensusProtocolByron
+
+    -- * Errors
  , ByronProtocolInstantiationError (..)
+
+    -- * Reusable parts
  , readGenesis
  , readLeaderCredentials
  ) where
 
-import           Cardano.Api.Any
-import           Cardano.Api.KeysByron
-import qualified Cardano.Api.Protocol.Types as Protocol
-import qualified Cardano.Chain.Genesis as Genesis
-import qualified Cardano.Chain.Update as Update
-import qualified Cardano.Chain.UTxO as UTxO
-import qualified Cardano.Crypto.Hash as Crypto
-import qualified Cardano.Crypto.Hashing as Byron.Crypto
-import           Cardano.Crypto.ProtocolMagic (RequiresNetworkMagic)
-import           Cardano.Node.Protocol.Types
-import           Cardano.Node.Types
-import           Cardano.Prelude
-import           Control.Monad.Trans.Except.Extra (bimapExceptT, firstExceptT,
-                     hoistEither, hoistMaybe, left)
-import qualified Data.ByteString.Lazy as LB
-import           Data.Text as Text (unpack)
-import           Ouroboros.Consensus.Cardano
-import qualified Ouroboros.Consensus.Cardano as Consensus
-import           Prelude hiding (show, (.))
-
+import Cardano.Api.Any
+import Cardano.Api.KeysByron
+import Cardano.Api.Protocol.Types qualified as Protocol
+import Cardano.Chain.Genesis qualified as Genesis
+import Cardano.Chain.UTxO qualified as UTxO
+import Cardano.Chain.Update qualified as Update
+import Cardano.Crypto.Hash qualified as Crypto
+import Cardano.Crypto.Hashing qualified as Byron.Crypto
+import Cardano.Crypto.ProtocolMagic (RequiresNetworkMagic)
+import Cardano.Node.Protocol.Types
+import Cardano.Node.Types
+import Cardano.Prelude
+import Control.Monad.Trans.Except.Extra
+  ( bimapExceptT
+  , firstExceptT
+  , hoistEither
+  , hoistMaybe
+  , left
+  )
+import Data.ByteString.Lazy qualified as LB
+import Data.Text as Text (unpack)
+import Ouroboros.Consensus.Cardano
+import Ouroboros.Consensus.Cardano qualified as Consensus
+import Prelude hiding (show, (.))
 
 ------------------------------------------------------------------------------
 -- Byron protocol
@@ -46,122 +52,137 @@ import Prelude hiding (show, (.))
 --
 -- This also serves a purpose as a sanity check that we have all the necessary
 -- type class instances available.
---
 mkSomeConsensusProtocolByron ::
-     NodeByronProtocolConfiguration
-  -> Maybe ProtocolFilepaths
-  -> ExceptT ByronProtocolInstantiationError IO SomeConsensusProtocol
-mkSomeConsensusProtocolByron NodeByronProtocolConfiguration {
-                               npcByronGenesisFile,
-                               npcByronGenesisFileHash,
-                               npcByronReqNetworkMagic,
-                               npcByronPbftSignatureThresh,
-                               npcByronApplicationName,
-                               npcByronApplicationVersion,
-                               npcByronSupportedProtocolVersionMajor,
-                               npcByronSupportedProtocolVersionMinor,
-                               npcByronSupportedProtocolVersionAlt
-                             }
-                             files = do
-    genesisConfig <- readGenesis npcByronGenesisFile
-                                 npcByronGenesisFileHash
-                                 npcByronReqNetworkMagic
+  NodeByronProtocolConfiguration ->
+  Maybe ProtocolFilepaths ->
+  ExceptT ByronProtocolInstantiationError IO SomeConsensusProtocol
+mkSomeConsensusProtocolByron
+  NodeByronProtocolConfiguration
+    { npcByronGenesisFile
+    , npcByronGenesisFileHash
+    , npcByronReqNetworkMagic
+    , npcByronPbftSignatureThresh
+    , npcByronApplicationName
+    , npcByronApplicationVersion
+    , npcByronSupportedProtocolVersionMajor
+    , npcByronSupportedProtocolVersionMinor
+    , npcByronSupportedProtocolVersionAlt
+    }
+  files = do
+    genesisConfig <-
+      readGenesis
+        npcByronGenesisFile
+        npcByronGenesisFileHash
+        npcByronReqNetworkMagic
 
    optionalLeaderCredentials <- readLeaderCredentials genesisConfig files
 
-    return $ SomeConsensusProtocol Protocol.ByronBlockType $ Protocol.ProtocolInfoArgsByron $ Consensus.ProtocolParamsByron {
-      byronGenesis = genesisConfig,
-      byronPbftSignatureThreshold =
-        PBftSignatureThreshold <$> npcByronPbftSignatureThresh,
-      byronProtocolVersion =
-        Update.ProtocolVersion
-          npcByronSupportedProtocolVersionMajor
-          npcByronSupportedProtocolVersionMinor
-          npcByronSupportedProtocolVersionAlt,
-      byronSoftwareVersion =
-        Update.SoftwareVersion
-          npcByronApplicationName
-          npcByronApplicationVersion,
-      byronLeaderCredentials = optionalLeaderCredentials
-      }
-
-readGenesis :: GenesisFile
-            -> Maybe GenesisHash
-            -> RequiresNetworkMagic
-            -> ExceptT ByronProtocolInstantiationError IO
-                       Genesis.Config
+    return $
+      SomeConsensusProtocol Protocol.ByronBlockType $
+        Protocol.ProtocolInfoArgsByron $
+          Consensus.ProtocolParamsByron
+            { byronGenesis = genesisConfig
+            , byronPbftSignatureThreshold =
+                PBftSignatureThreshold <$> npcByronPbftSignatureThresh
+            , byronProtocolVersion =
+                Update.ProtocolVersion
+                  npcByronSupportedProtocolVersionMajor
+                  npcByronSupportedProtocolVersionMinor
+                  npcByronSupportedProtocolVersionAlt
+            , byronSoftwareVersion =
+                Update.SoftwareVersion
+                  npcByronApplicationName
+                  npcByronApplicationVersion
+            , byronLeaderCredentials = optionalLeaderCredentials
+            }
+
+readGenesis ::
+  GenesisFile ->
+  Maybe GenesisHash ->
+  RequiresNetworkMagic ->
+  ExceptT
+    ByronProtocolInstantiationError
+    IO
+    Genesis.Config
 readGenesis (GenesisFile file) mbExpectedGenesisHash ncReqNetworkMagic = do
-    (genesisData, genesisHash) <- firstExceptT (GenesisReadError file) $
-                                    Genesis.readGenesisData file
-    checkExpectedGenesisHash genesisHash
-    return Genesis.Config {
-      Genesis.configGenesisData = genesisData,
-      Genesis.configGenesisHash = genesisHash,
-      Genesis.configReqNetMagic = ncReqNetworkMagic,
-      Genesis.configUTxOConfiguration = UTxO.defaultUTxOConfiguration
-      --TODO: add config support for the UTxOConfiguration if needed
-      }
-  where
-    checkExpectedGenesisHash :: Genesis.GenesisHash
-                             -> ExceptT ByronProtocolInstantiationError IO ()
-    checkExpectedGenesisHash actual' =
-      case mbExpectedGenesisHash of
-        Just expected | actual /= expected ->
+  (genesisData, genesisHash) <-
+    firstExceptT (GenesisReadError file) $
+      Genesis.readGenesisData file
+  checkExpectedGenesisHash genesisHash
+  return
+    Genesis.Config
+      { Genesis.configGenesisData = genesisData
+      , Genesis.configGenesisHash = genesisHash
+      , Genesis.configReqNetMagic = ncReqNetworkMagic
+      , Genesis.configUTxOConfiguration = UTxO.defaultUTxOConfiguration
+      -- TODO: add config support for the UTxOConfiguration if needed
+      }
+ where
+  checkExpectedGenesisHash ::
+    Genesis.GenesisHash ->
+    ExceptT ByronProtocolInstantiationError IO ()
+  checkExpectedGenesisHash actual' =
+    case mbExpectedGenesisHash of
+      Just expected
+        | actual /= expected ->
           throwError (GenesisHashMismatch actual expected)
-          where
-            actual = fromByronGenesisHash actual'
-
-        _ -> return ()
+       where
+        actual = fromByronGenesisHash actual'
+      _ -> return ()
 
-    fromByronGenesisHash :: Genesis.GenesisHash -> GenesisHash
-    fromByronGenesisHash (Genesis.GenesisHash h) =
-        GenesisHash
+  fromByronGenesisHash :: Genesis.GenesisHash -> GenesisHash
+  fromByronGenesisHash (Genesis.GenesisHash h) =
+    GenesisHash
      . fromMaybe impossible
      . Crypto.hashFromBytes
      . Byron.Crypto.hashToBytes
      $ h
-      where
-        impossible =
-          panic "fromByronGenesisHash: old and new crypto libs disagree on hash size"
-
-
-
-readLeaderCredentials :: Genesis.Config
-                      -> Maybe ProtocolFilepaths
-                      -> ExceptT ByronProtocolInstantiationError IO
-                                 (Maybe ByronLeaderCredentials)
+   where
+    impossible =
+      panic "fromByronGenesisHash: old and new crypto libs disagree on hash size"
+
+readLeaderCredentials ::
+  Genesis.Config ->
+  Maybe ProtocolFilepaths ->
+  ExceptT
+    ByronProtocolInstantiationError
+    IO
+    (Maybe ByronLeaderCredentials)
 readLeaderCredentials _ Nothing = return Nothing
-readLeaderCredentials genesisConfig
-                      (Just ProtocolFilepaths {
-                        byronCertFile,
-                        byronKeyFile
-                      }) =
-    case (byronCertFile, byronKeyFile) of
-      (Nothing, Nothing) -> pure Nothing
-      (Just _, Nothing)  -> left SigningKeyFilepathNotSpecified
-      (Nothing, Just _)  -> left DelegationCertificateFilepathNotSpecified
-      (Just delegCertFile, Just signingKeyFile) -> do
-
-        signingKeyFileBytes <- liftIO $ LB.readFile signingKeyFile
-        delegCertFileBytes  <- liftIO $ LB.readFile delegCertFile
-        ByronSigningKey signingKey <- hoistMaybe (SigningKeyDeserialiseFailure signingKeyFile)
-          $ deserialiseFromRawBytes (AsSigningKey AsByronKey) $ LB.toStrict signingKeyFileBytes
-        delegCert <- firstExceptT (CanonicalDecodeFailure delegCertFile)
-                       . hoistEither
-                       $ canonicalDecodePretty delegCertFileBytes
-
-        bimapExceptT CredentialsError Just
-          . hoistEither
-          $ mkByronLeaderCredentials genesisConfig signingKey delegCert "Byron"
-
-
+readLeaderCredentials
+  genesisConfig
+  ( Just
+      ProtocolFilepaths
+        { byronCertFile
+        , byronKeyFile
+        }
+    ) =
+    case (byronCertFile, byronKeyFile) of
+      (Nothing, Nothing) -> pure Nothing
+      (Just _, Nothing) -> left SigningKeyFilepathNotSpecified
+      (Nothing, Just _) -> left DelegationCertificateFilepathNotSpecified
+      (Just delegCertFile, Just signingKeyFile) -> do
+        signingKeyFileBytes <- liftIO $ LB.readFile signingKeyFile
+        delegCertFileBytes <- liftIO $ LB.readFile delegCertFile
+        ByronSigningKey signingKey <-
+          hoistMaybe (SigningKeyDeserialiseFailure signingKeyFile) $
+            deserialiseFromRawBytes (AsSigningKey AsByronKey) $
+              LB.toStrict signingKeyFileBytes
+        delegCert <-
+          firstExceptT (CanonicalDecodeFailure delegCertFile)
+            . hoistEither
+            $ canonicalDecodePretty delegCertFileBytes
+
+        bimapExceptT CredentialsError Just
+          . hoistEither
+          $ mkByronLeaderCredentials genesisConfig signingKey delegCert "Byron"
 
 ------------------------------------------------------------------------------
 -- Byron Errors
 --
 
-data ByronProtocolInstantiationError =
-    CanonicalDecodeFailure !FilePath !Text
+data ByronProtocolInstantiationError
+  = CanonicalDecodeFailure !FilePath !Text
  | GenesisHashMismatch !GenesisHash !GenesisHash -- actual, expected
  | DelegationCertificateFilepathNotSpecified
  | GenesisConfigurationError !FilePath !Genesis.ConfigurationError
@@ -173,26 +194,33 @@ data ByronProtocolInstantiationError =
 
 instance Error ByronProtocolInstantiationError where
  displayError (CanonicalDecodeFailure fp failure) =
-    "Canonical decode failure in " <> fp
-    <> " Canonical failure: " <> Text.unpack failure
+    "Canonical decode failure in "
+      <> fp
+      <> " Canonical failure: "
+      <> Text.unpack failure
  displayError (GenesisHashMismatch actual expected) =
-    "Wrong Byron genesis file: the actual hash is " <> show actual
-    <> ", but the expected Byron genesis hash given in the node configuration "
-    <> "file is " <> show expected
+    "Wrong Byron genesis file: the actual hash is "
+      <> show actual
+      <> ", but the expected Byron genesis hash given in the node configuration "
+      <> "file is "
+      <> show expected
  displayError DelegationCertificateFilepathNotSpecified =
-    "Delegation certificate filepath not specified"
-    --TODO: Implement configuration error render function in cardano-ledger
+    "Delegation certificate filepath not specified"
+  -- TODO: Implement configuration error render function in cardano-ledger
  displayError (GenesisConfigurationError fp genesisConfigError) =
-    "Genesis configuration error in: " <> toS fp
-    <> " Error: " <> show genesisConfigError
+    "Genesis configuration error in: "
+      <> toS fp
+      <> " Error: "
+      <> show genesisConfigError
  displayError (GenesisReadError fp err) =
-    "There was an error parsing the genesis file: " <> toS fp
-    <> " Error: " <> show err
-    -- TODO: Implement ByronLeaderCredentialsError render function in ouroboros-network
+    "There was an error parsing the genesis file: "
+      <> toS fp
+      <> " Error: "
+      <> show err
+  -- TODO: Implement ByronLeaderCredentialsError render function in ouroboros-network
  displayError (CredentialsError byronLeaderCredentialsError) =
-    "Byron leader credentials error: " <> show byronLeaderCredentialsError
+    "Byron leader credentials error: " <> show byronLeaderCredentialsError
  displayError (SigningKeyDeserialiseFailure fp) =
-    "Signing key deserialisation error in: " <> toS fp
+    "Signing key deserialisation error in: " <> toS fp
  displayError SigningKeyFilepathNotSpecified =
-    "Signing key filepath not specified"
-
+    "Signing key filepath not specified"
diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Cardano.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Cardano.hs
index bf1a35e957..f07bed1a9c 100644
--- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Cardano.hs
+++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Cardano.hs
@@ -5,39 +5,38 @@
 {-# LANGUAGE NamedFieldPuns #-}
 {-# LANGUAGE ScopedTypeVariables #-}
 {-# LANGUAGE TypeApplications #-}
-
-{-# OPTIONS_GHC -Wno-orphans #-}
+{-# OPTIONS_GHC -Wno-orphans #-}
 
 -- DUPLICATE -- adapted from: cardano-node/src/Cardano/Node/Protocol/Cardano.hs
 
-module Cardano.Node.Protocol.Cardano (
-    mkConsensusProtocolCardano
+module Cardano.Node.Protocol.Cardano
+  ( mkConsensusProtocolCardano
  , mkSomeConsensusProtocolCardano
+
+    -- * Errors
  , CardanoProtocolInstantiationError (..)
  ) where
 
-import           Cardano.Api.Any
-import           Cardano.Api.Protocol.Types
-import qualified Cardano.Chain.Update as Byron
-import qualified Cardano.Ledger.Api.Era as L
-import qualified Cardano.Ledger.Api.Transition as SL
-import qualified Cardano.Node.Protocol.Alonzo as Alonzo
-import qualified Cardano.Node.Protocol.Byron as Byron
-import qualified Cardano.Node.Protocol.Conway as Conway
-import qualified Cardano.Node.Protocol.Shelley as Shelley
-import           Cardano.Node.Protocol.Types
-import           Cardano.Node.Types
-import           Control.Monad.Trans.Except (ExceptT)
-import           Control.Monad.Trans.Except.Extra (firstExceptT)
-import           Ouroboros.Consensus.Cardano
-import qualified Ouroboros.Consensus.Cardano as Consensus
-import           Ouroboros.Consensus.Cardano.Condense ()
-import           Ouroboros.Consensus.Cardano.Node (CardanoProtocolParams (..))
-import           Ouroboros.Consensus.Config (emptyCheckpointsMap)
-import           Ouroboros.Consensus.HardFork.Combinator.Condense ()
-import           Ouroboros.Consensus.Shelley.Crypto (StandardCrypto)
-
+import Cardano.Api.Any
+import Cardano.Api.Protocol.Types
+import Cardano.Chain.Update qualified as Byron
+import Cardano.Ledger.Api.Era qualified as L
+import Cardano.Ledger.Api.Transition qualified as SL
+import Cardano.Node.Protocol.Alonzo qualified as Alonzo
+import Cardano.Node.Protocol.Byron qualified as Byron
+import Cardano.Node.Protocol.Conway qualified as Conway
+import Cardano.Node.Protocol.Shelley qualified as Shelley
+import Cardano.Node.Protocol.Types
+import Cardano.Node.Types
+import Control.Monad.Trans.Except (ExceptT)
+import Control.Monad.Trans.Except.Extra (firstExceptT)
+import Ouroboros.Consensus.Cardano
+import Ouroboros.Consensus.Cardano qualified as Consensus
+import Ouroboros.Consensus.Cardano.Condense ()
+import Ouroboros.Consensus.Cardano.Node (CardanoProtocolParams (..))
+import Ouroboros.Consensus.Config (emptyCheckpointsMap)
+import Ouroboros.Consensus.HardFork.Combinator.Condense ()
+import Ouroboros.Consensus.Shelley.Crypto (StandardCrypto)
 
 ------------------------------------------------------------------------------
 -- Real Cardano protocol
@@ -54,71 +53,73 @@ import Ouroboros.Consensus.Shelley.Crypto (StandardCrypto)
 --
 -- This also serves a purpose as a sanity check that we have all the necessary
 -- type class instances available.
---
 mkSomeConsensusProtocolCardano ::
-     NodeByronProtocolConfiguration
-  -> NodeShelleyProtocolConfiguration
-  -> NodeAlonzoProtocolConfiguration
-  -> NodeConwayProtocolConfiguration
-  -> NodeHardForkProtocolConfiguration
-  -> Maybe ProtocolFilepaths
-  -> ExceptT CardanoProtocolInstantiationError IO SomeConsensusProtocol
+  NodeByronProtocolConfiguration ->
+  NodeShelleyProtocolConfiguration ->
+  NodeAlonzoProtocolConfiguration ->
+  NodeConwayProtocolConfiguration ->
+  NodeHardForkProtocolConfiguration ->
+  Maybe ProtocolFilepaths ->
+  ExceptT CardanoProtocolInstantiationError IO SomeConsensusProtocol
 mkSomeConsensusProtocolCardano nbpc nspc napc ncpc nhpc files = do
-    params <- mkConsensusProtocolCardano nbpc nspc napc ncpc nhpc files
-    return $!
-      SomeConsensusProtocol CardanoBlockType $ ProtocolInfoArgsCardano params
+  params <- mkConsensusProtocolCardano nbpc nspc napc ncpc nhpc files
+  return $!
+    SomeConsensusProtocol CardanoBlockType $
+      ProtocolInfoArgsCardano params
 
 mkConsensusProtocolCardano ::
-     NodeByronProtocolConfiguration
-  -> NodeShelleyProtocolConfiguration
-  -> NodeAlonzoProtocolConfiguration
-  -> NodeConwayProtocolConfiguration
-  -> NodeHardForkProtocolConfiguration
-  -> Maybe ProtocolFilepaths
-  -> ExceptT CardanoProtocolInstantiationError IO (CardanoProtocolParams StandardCrypto)
-mkConsensusProtocolCardano NodeByronProtocolConfiguration {
-                             npcByronGenesisFile,
-                             npcByronGenesisFileHash,
-                             npcByronReqNetworkMagic,
-                             npcByronPbftSignatureThresh,
-                             npcByronApplicationName,
-                             npcByronApplicationVersion,
-                             npcByronSupportedProtocolVersionMajor,
-                             npcByronSupportedProtocolVersionMinor,
-                             npcByronSupportedProtocolVersionAlt
-                           }
-                           NodeShelleyProtocolConfiguration {
-                             npcShelleyGenesisFile,
-                             npcShelleyGenesisFileHash
-                           }
-                           NodeAlonzoProtocolConfiguration {
-                             npcAlonzoGenesisFile,
-                             npcAlonzoGenesisFileHash
-                           }
-                           NodeConwayProtocolConfiguration {
-                             npcConwayGenesisFile,
-                             npcConwayGenesisFileHash
-                           }
-                           NodeHardForkProtocolConfiguration {
-                             npcTestEnableDevelopmentHardForkEras = _,
-                             -- During testing of the latest unreleased era, we conditionally
-                             -- declared that we knew about it. We do so only when a config option
-                             -- for testing development/unstable eras is used. This lets us include
-                             -- not-yet-ready eras in released node versions without mainnet nodes
-                             -- prematurely advertising that they could hard fork into the new era.
-                             npcTestShelleyHardForkAtEpoch,
-                             npcTestAllegraHardForkAtEpoch,
-                             npcTestMaryHardForkAtEpoch,
-                             npcTestAlonzoHardForkAtEpoch,
-                             npcTestBabbageHardForkAtEpoch,
-                             npcTestConwayHardForkAtEpoch
-                           }
-                           files = do
+  NodeByronProtocolConfiguration ->
+  NodeShelleyProtocolConfiguration ->
+  NodeAlonzoProtocolConfiguration ->
+  NodeConwayProtocolConfiguration ->
+  NodeHardForkProtocolConfiguration ->
+  Maybe ProtocolFilepaths ->
+  ExceptT CardanoProtocolInstantiationError IO (CardanoProtocolParams StandardCrypto)
+mkConsensusProtocolCardano
+  NodeByronProtocolConfiguration
+    { npcByronGenesisFile
+    , npcByronGenesisFileHash
+    , npcByronReqNetworkMagic
+    , npcByronPbftSignatureThresh
+    , npcByronApplicationName
+    , npcByronApplicationVersion
+    , npcByronSupportedProtocolVersionMajor
+    , npcByronSupportedProtocolVersionMinor
+    , npcByronSupportedProtocolVersionAlt
+    }
+  NodeShelleyProtocolConfiguration
+    { npcShelleyGenesisFile
+    , npcShelleyGenesisFileHash
+    }
+  NodeAlonzoProtocolConfiguration
+    { npcAlonzoGenesisFile
+    , npcAlonzoGenesisFileHash
+    }
+  NodeConwayProtocolConfiguration
+    { npcConwayGenesisFile
+    , npcConwayGenesisFileHash
+    }
+  NodeHardForkProtocolConfiguration
+    { npcTestEnableDevelopmentHardForkEras = _
+    , -- During testing of the latest unreleased era, we conditionally
+      -- declared that we knew about it. We do so only when a config option
+      -- for testing development/unstable eras is used. This lets us include
+      -- not-yet-ready eras in released node versions without mainnet nodes
+      -- prematurely advertising that they could hard fork into the new era.
+      npcTestShelleyHardForkAtEpoch
+    , npcTestAllegraHardForkAtEpoch
+    , npcTestMaryHardForkAtEpoch
+    , npcTestAlonzoHardForkAtEpoch
+    , npcTestBabbageHardForkAtEpoch
+    , npcTestConwayHardForkAtEpoch
+    }
+  files = do
    byronGenesis <-
      firstExceptT CardanoProtocolInstantiationErrorByron $
-        Byron.readGenesis npcByronGenesisFile
-                          npcByronGenesisFileHash
-                          npcByronReqNetworkMagic
+        Byron.readGenesis
+          npcByronGenesisFile
+          npcByronGenesisFileHash
+          npcByronReqNetworkMagic
 
    byronLeaderCredentials <-
      firstExceptT CardanoProtocolInstantiationErrorByron $
@@ -126,18 +127,21 @@ mkConsensusProtocolCardano NodeByronProtocolConfiguration {
 
    (shelleyGenesis, shelleyGenesisHash) <-
      firstExceptT CardanoProtocolInstantiationShelleyGenesisReadError $
-        Shelley.readGenesis npcShelleyGenesisFile
-                            npcShelleyGenesisFileHash
+        Shelley.readGenesis
+          npcShelleyGenesisFile
+          npcShelleyGenesisFileHash
 
    (alonzoGenesis, _alonzoGenesisHash) <-
      firstExceptT CardanoProtocolInstantiationAlonzoGenesisReadError $
-        Alonzo.readGenesis npcAlonzoGenesisFile
-                           npcAlonzoGenesisFileHash
+        Alonzo.readGenesis
+          npcAlonzoGenesisFile
+          npcAlonzoGenesisFileHash
 
    (conwayGenesis, _conwayGenesisHash) <-
      firstExceptT CardanoProtocolInstantiationConwayGenesisReadError $
-        Conway.readGenesis npcConwayGenesisFile
-                           npcConwayGenesisFileHash
+        Conway.readGenesis
+          npcConwayGenesisFile
+          npcConwayGenesisFileHash
 
    shelleyLeaderCredentials <-
      firstExceptT CardanoProtocolInstantiationPraosLeaderCredentialsError $
@@ -146,97 +150,95 @@ mkConsensusProtocolCardano NodeByronProtocolConfiguration {
 
    let transitionLedgerConfig =
          SL.mkLatestTransitionConfig shelleyGenesis alonzoGenesis conwayGenesis
 
-    --TODO: all these protocol versions below are confusing and unnecessary.
+    -- TODO: all these protocol versions below are confusing and unnecessary.
    -- It could and should all be automated and these config entries eliminated.
    return $!
CardanoProtocolParams - Consensus.ProtocolParamsByron { - byronGenesis = byronGenesis, - byronPbftSignatureThreshold = - PBftSignatureThreshold <$> npcByronPbftSignatureThresh, - - -- This is /not/ the Byron protocol version. It is the protocol - -- version that this node will use in blocks it creates. It is used - -- in the Byron update mechanism to signal that this block-producing - -- node is ready to move to the new protocol. For example, when the - -- protocol version (according to the ledger state) is 0, this setting - -- should be 1 when we are ready to move. Similarly when the current - -- protocol version is 1, this should be 2 to indicate we are ready - -- to move into the Shelley era. - byronProtocolVersion = - Byron.ProtocolVersion - npcByronSupportedProtocolVersionMajor - npcByronSupportedProtocolVersionMinor - npcByronSupportedProtocolVersionAlt, - byronSoftwareVersion = - Byron.SoftwareVersion - npcByronApplicationName - npcByronApplicationVersion, - byronLeaderCredentials = byronLeaderCredentials - } - Consensus.ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = Shelley.genesisHashToPraosNonce - shelleyGenesisHash, - shelleyBasedLeaderCredentials = shelleyLeaderCredentials - } + Consensus.ProtocolParamsByron + { byronGenesis = byronGenesis + , byronPbftSignatureThreshold = + PBftSignatureThreshold <$> npcByronPbftSignatureThresh + , -- This is /not/ the Byron protocol version. It is the protocol + -- version that this node will use in blocks it creates. It is used + -- in the Byron update mechanism to signal that this block-producing + -- node is ready to move to the new protocol. For example, when the + -- protocol version (according to the ledger state) is 0, this setting + -- should be 1 when we are ready to move. Similarly when the current + -- protocol version is 1, this should be 2 to indicate we are ready + -- to move into the Shelley era. 
+ byronProtocolVersion = + Byron.ProtocolVersion + npcByronSupportedProtocolVersionMajor + npcByronSupportedProtocolVersionMinor + npcByronSupportedProtocolVersionAlt + , byronSoftwareVersion = + Byron.SoftwareVersion + npcByronApplicationName + npcByronApplicationVersion + , byronLeaderCredentials = byronLeaderCredentials + } + Consensus.ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = + Shelley.genesisHashToPraosNonce + shelleyGenesisHash + , shelleyBasedLeaderCredentials = shelleyLeaderCredentials + } -- The 'CardanoHardForkTriggers' specify the parameters needed to -- transition between two eras. The comments below also apply for all -- subsequent hard forks. -- -- Byron to Shelley hard fork parameters - Consensus.CardanoHardForkTriggers' { - triggerHardForkShelley = - -- What will trigger the Byron -> Shelley hard fork? - case npcTestShelleyHardForkAtEpoch of - - -- This specifies the major protocol version number update that will - -- trigger us moving to the Shelley protocol. - -- - -- Version 0 is Byron with Ouroboros classic - -- Version 1 is Byron with Ouroboros Permissive BFT - -- Version 2 is Shelley - -- Version 3 is Allegra - -- Version 4 is Mary - -- Version 5 is Alonzo - -- Version 6 is Alonzo (intra era hardfork) - -- Version 7 is Babbage - -- Version 8 is Babbage (intra era hardfork) - -- Version 9 is Conway - -- - -- But we also provide an override to allow for simpler test setups - -- such as triggering at the 0 -> 1 transition . - -- - Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion - - -- Alternatively, for testing we can transition at a specific epoch. 
- -- - Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo - -- Shelley to Allegra hard fork parameters - , triggerHardForkAllegra = - case npcTestAllegraHardForkAtEpoch of - Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion - Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo - -- Allegra to Mary hard fork parameters - , triggerHardForkMary = - case npcTestMaryHardForkAtEpoch of - Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion - Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo - -- Mary to Alonzo hard fork parameters - , triggerHardForkAlonzo = - case npcTestAlonzoHardForkAtEpoch of - Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion - Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo - -- Alonzo to Babbage hard fork parameters - , triggerHardForkBabbage = - case npcTestBabbageHardForkAtEpoch of - Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion + Consensus.CardanoHardForkTriggers' + { triggerHardForkShelley = + -- What will trigger the Byron -> Shelley hard fork? + case npcTestShelleyHardForkAtEpoch of + -- This specifies the major protocol version number update that will + -- trigger us moving to the Shelley protocol. + -- + -- Version 0 is Byron with Ouroboros classic + -- Version 1 is Byron with Ouroboros Permissive BFT + -- Version 2 is Shelley + -- Version 3 is Allegra + -- Version 4 is Mary + -- Version 5 is Alonzo + -- Version 6 is Alonzo (intra era hardfork) + -- Version 7 is Babbage + -- Version 8 is Babbage (intra era hardfork) + -- Version 9 is Conway + -- + -- But we also provide an override to allow for simpler test setups + -- such as triggering at the 0 -> 1 transition . + -- + Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion + -- Alternatively, for testing we can transition at a specific epoch. 
+ -- + Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo + , -- Shelley to Allegra hard fork parameters + triggerHardForkAllegra = + case npcTestAllegraHardForkAtEpoch of + Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo - -- Babbage to Conway hard fork parameters - , triggerHardForkConway = - case npcTestConwayHardForkAtEpoch of - Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion + , -- Allegra to Mary hard fork parameters + triggerHardForkMary = + case npcTestMaryHardForkAtEpoch of + Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo - } + , -- Mary to Alonzo hard fork parameters + triggerHardForkAlonzo = + case npcTestAlonzoHardForkAtEpoch of + Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion + Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo + , -- Alonzo to Babbage hard fork parameters + triggerHardForkBabbage = + case npcTestBabbageHardForkAtEpoch of + Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion + Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo + , -- Babbage to Conway hard fork parameters + triggerHardForkConway = + case npcTestConwayHardForkAtEpoch of + Nothing -> Consensus.CardanoTriggerHardForkAtDefaultVersion + Just epochNo -> Consensus.CardanoTriggerHardForkAtEpoch epochNo + } transitionLedgerConfig emptyCheckpointsMap (ProtVer (L.eraProtVerHigh @L.LatestKnownEra) 0) @@ -245,24 +247,19 @@ mkConsensusProtocolCardano NodeByronProtocolConfiguration { -- Errors -- -data CardanoProtocolInstantiationError = - CardanoProtocolInstantiationErrorByron - Byron.ByronProtocolInstantiationError - - | CardanoProtocolInstantiationShelleyGenesisReadError - Shelley.GenesisReadError - - | CardanoProtocolInstantiationAlonzoGenesisReadError - Shelley.GenesisReadError - - | CardanoProtocolInstantiationConwayGenesisReadError - 
Shelley.GenesisReadError - - | CardanoProtocolInstantiationPraosLeaderCredentialsError - Shelley.PraosLeaderCredentialsError - - | CardanoProtocolInstantiationErrorAlonzo - Alonzo.AlonzoProtocolInstantiationError +data CardanoProtocolInstantiationError + = CardanoProtocolInstantiationErrorByron + Byron.ByronProtocolInstantiationError + | CardanoProtocolInstantiationShelleyGenesisReadError + Shelley.GenesisReadError + | CardanoProtocolInstantiationAlonzoGenesisReadError + Shelley.GenesisReadError + | CardanoProtocolInstantiationConwayGenesisReadError + Shelley.GenesisReadError + | CardanoProtocolInstantiationPraosLeaderCredentialsError + Shelley.PraosLeaderCredentialsError + | CardanoProtocolInstantiationErrorAlonzo + Alonzo.AlonzoProtocolInstantiationError deriving Show instance Error CardanoProtocolInstantiationError where diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Conway.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Conway.hs index db1f90aecf..c0f54631b1 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Conway.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Conway.hs @@ -2,40 +2,47 @@ -- TODO DUPLICATE? -- as-if adapted? 
from: cardano-node/src/Cardano/Node/Protocol/Conway.hs -module Cardano.Node.Protocol.Conway ( - ConwayProtocolInstantiationError +module Cardano.Node.Protocol.Conway + ( ConwayProtocolInstantiationError + -- * Reusable parts , readGenesis , validateGenesis ) where -import qualified Cardano.Ledger.Conway.Genesis as Conway -import Cardano.Node.Protocol.Shelley (GenesisReadError, - readGenesisAny) -import Cardano.Node.Types -import Cardano.Prelude +import Cardano.Ledger.Conway.Genesis qualified as Conway +import Cardano.Node.Protocol.Shelley + ( GenesisReadError + , readGenesisAny + ) +import Cardano.Node.Types +import Cardano.Prelude -- -- Conway genesis -- -readGenesis :: GenesisFile - -> Maybe GenesisHash - -> ExceptT GenesisReadError IO - (Conway.ConwayGenesis, GenesisHash) +readGenesis :: + GenesisFile -> + Maybe GenesisHash -> + ExceptT + GenesisReadError + IO + (Conway.ConwayGenesis, GenesisHash) readGenesis = readGenesisAny -validateGenesis :: Conway.ConwayGenesis - -> ExceptT ConwayProtocolInstantiationError IO () -validateGenesis _ = return () --TODO conway: do the validation +validateGenesis :: + Conway.ConwayGenesis -> + ExceptT ConwayProtocolInstantiationError IO () +validateGenesis _ = return () -- TODO conway: do the validation data ConwayProtocolInstantiationError -{- TODO - = InvalidCostModelError !FilePath - | CostModelExtractionError !FilePath - | ConwayCostModelFileError !(FileError ()) - | ConwayCostModelDecodeError !FilePath !String --} + {- TODO + = InvalidCostModelError !FilePath + | CostModelExtractionError !FilePath + | ConwayCostModelFileError !(FileError ()) + | ConwayCostModelDecodeError !FilePath !String + -} deriving Show {- TODO diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Shelley.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Shelley.hs index 38f87e5ca3..a65dc6a8b1 100644 --- 
a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Shelley.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Shelley.hs @@ -7,13 +7,15 @@ -- DUPLICATE -- adapted from: cardano-node/src/Cardano/Node/Protocol/Shelley.hs -module Cardano.Node.Protocol.Shelley ( - mkSomeConsensusProtocolShelley +module Cardano.Node.Protocol.Shelley + ( mkSomeConsensusProtocolShelley + -- * Errors , GenesisReadError (..) , GenesisValidationError (..) , PraosLeaderCredentialsError (..) , ShelleyProtocolInstantiationError (..) + -- * Reusable parts , genesisHashToPraosNonce , readGenesis @@ -22,35 +24,43 @@ module Cardano.Node.Protocol.Shelley ( , validateGenesis ) where -import Cardano.Api.Any hiding (FileError (..)) -import qualified Cardano.Api.Any as Api (FileError (..)) -import Cardano.Api.Key -import Cardano.Api.KeysPraos as Praos -import Cardano.Api.KeysShelley -import Cardano.Api.OperationalCertificate -import qualified Cardano.Api.Protocol.Types as Protocol -import Cardano.Api.SerialiseTextEnvelope -import qualified Cardano.Crypto.Hash.Class as Crypto -import Cardano.Ledger.BaseTypes (ProtVer (..), natVersion) -import Cardano.Ledger.Keys (coerceKeyRole) -import qualified Cardano.Ledger.Shelley.Genesis as Shelley -import Cardano.Node.Protocol.Types -import Cardano.Node.Types -import Cardano.Prelude -import Cardano.Protocol.Crypto (StandardCrypto) -import Control.Monad.Trans.Except.Extra (firstExceptT, - handleIOExceptT, hoistEither, left, newExceptT) -import qualified Data.Aeson as Aeson (FromJSON (..), eitherDecodeStrict') -import qualified Data.ByteString as BS -import qualified Data.Text as T -import qualified Ouroboros.Consensus.Cardano as Consensus -import Ouroboros.Consensus.Protocol.Praos.Common - (PraosCanBeLeader (..)) -import Ouroboros.Consensus.Shelley.Node (Nonce (..), - ProtocolParamsShelleyBased (..), ShelleyGenesis (..), - ShelleyLeaderCredentials (..)) -import Prelude (String, id) - +import 
Cardano.Api.Any hiding (FileError (..)) +import Cardano.Api.Any qualified as Api (FileError (..)) +import Cardano.Api.Key +import Cardano.Api.KeysPraos as Praos +import Cardano.Api.KeysShelley +import Cardano.Api.OperationalCertificate +import Cardano.Api.Protocol.Types qualified as Protocol +import Cardano.Api.SerialiseTextEnvelope +import Cardano.Crypto.Hash.Class qualified as Crypto +import Cardano.Ledger.BaseTypes (ProtVer (..), natVersion) +import Cardano.Ledger.Keys (coerceKeyRole) +import Cardano.Ledger.Shelley.Genesis qualified as Shelley +import Cardano.Node.Protocol.Types +import Cardano.Node.Types +import Cardano.Prelude +import Cardano.Protocol.Crypto (StandardCrypto) +import Control.Monad.Trans.Except.Extra + ( firstExceptT + , handleIOExceptT + , hoistEither + , left + , newExceptT + ) +import Data.Aeson qualified as Aeson (FromJSON (..), eitherDecodeStrict') +import Data.ByteString qualified as BS +import Data.Text qualified as T +import Ouroboros.Consensus.Cardano qualified as Consensus +import Ouroboros.Consensus.Protocol.Praos.Common + ( PraosCanBeLeader (..) + ) +import Ouroboros.Consensus.Shelley.Node + ( Nonce (..) + , ProtocolParamsShelleyBased (..) + , ShelleyGenesis (..) + , ShelleyLeaderCredentials (..) + ) +import Prelude (String, id) ------------------------------------------------------------------------------ -- Shelley protocol @@ -63,93 +73,114 @@ import Prelude (String, id) -- This also serves a purpose as a sanity check that we have all the necessary -- type class instances available. 
mkSomeConsensusProtocolShelley :: - NodeShelleyProtocolConfiguration - -> Maybe ProtocolFilepaths - -> ExceptT ShelleyProtocolInstantiationError IO SomeConsensusProtocol -mkSomeConsensusProtocolShelley NodeShelleyProtocolConfiguration { - npcShelleyGenesisFile, - npcShelleyGenesisFileHash - } - files = do - (genesis, genesisHash) <- firstExceptT GenesisReadError $ - readGenesis npcShelleyGenesisFile - npcShelleyGenesisFileHash + NodeShelleyProtocolConfiguration -> + Maybe ProtocolFilepaths -> + ExceptT ShelleyProtocolInstantiationError IO SomeConsensusProtocol +mkSomeConsensusProtocolShelley + NodeShelleyProtocolConfiguration + { npcShelleyGenesisFile + , npcShelleyGenesisFileHash + } + files = do + (genesis, genesisHash) <- + firstExceptT GenesisReadError + $ readGenesis + npcShelleyGenesisFile + npcShelleyGenesisFileHash firstExceptT GenesisValidationError $ validateGenesis genesis - leaderCredentials <- firstExceptT PraosLeaderCredentialsError $ - readLeaderCredentials files - - return $ SomeConsensusProtocol Protocol.ShelleyBlockType $ Protocol.ProtocolInfoArgsShelley - genesis - Consensus.ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = genesisHashToPraosNonce genesisHash, - shelleyBasedLeaderCredentials = - leaderCredentials - } - (ProtVer (natVersion @2) 0) + leaderCredentials <- + firstExceptT PraosLeaderCredentialsError + $ readLeaderCredentials files + + return + $ SomeConsensusProtocol Protocol.ShelleyBlockType + $ Protocol.ProtocolInfoArgsShelley + genesis + Consensus.ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = genesisHashToPraosNonce genesisHash + , shelleyBasedLeaderCredentials = + leaderCredentials + } + (ProtVer (natVersion @2) 0) genesisHashToPraosNonce :: GenesisHash -> Nonce genesisHashToPraosNonce (GenesisHash h) = Nonce (Crypto.castHash h) -readGenesis :: GenesisFile - -> Maybe GenesisHash - -> ExceptT GenesisReadError IO - (ShelleyGenesis, GenesisHash) +readGenesis :: + GenesisFile -> + Maybe GenesisHash -> + ExceptT + 
GenesisReadError + IO + (ShelleyGenesis, GenesisHash) readGenesis = readGenesisAny -readGenesisAny :: Aeson.FromJSON genesis - => GenesisFile - -> Maybe GenesisHash - -> ExceptT GenesisReadError IO (genesis, GenesisHash) +readGenesisAny :: + Aeson.FromJSON genesis => + GenesisFile -> + Maybe GenesisHash -> + ExceptT GenesisReadError IO (genesis, GenesisHash) readGenesisAny (GenesisFile file) mbExpectedGenesisHash = do - content <- handleIOExceptT (GenesisReadFileError file) $ - BS.readFile file - let genesisHash = GenesisHash (Crypto.hashWith id content) - checkExpectedGenesisHash genesisHash - genesis <- firstExceptT (GenesisDecodeError file) $ hoistEither $ - Aeson.eitherDecodeStrict' content - return (genesis, genesisHash) - where - checkExpectedGenesisHash :: GenesisHash - -> ExceptT GenesisReadError IO () - checkExpectedGenesisHash actual = - case mbExpectedGenesisHash of - Just expected | actual /= expected - -> throwError (GenesisHashMismatch actual expected) - _ -> return () - -validateGenesis :: ShelleyGenesis - -> ExceptT GenesisValidationError IO () + content <- + handleIOExceptT (GenesisReadFileError file) + $ BS.readFile file + let genesisHash = GenesisHash (Crypto.hashWith id content) + checkExpectedGenesisHash genesisHash + genesis <- + firstExceptT (GenesisDecodeError file) + $ hoistEither + $ Aeson.eitherDecodeStrict' content + return (genesis, genesisHash) + where + checkExpectedGenesisHash :: + GenesisHash -> + ExceptT GenesisReadError IO () + checkExpectedGenesisHash actual = + case mbExpectedGenesisHash of + Just expected + | actual /= expected -> + throwError (GenesisHashMismatch actual expected) + _ -> return () + +validateGenesis :: + ShelleyGenesis -> + ExceptT GenesisValidationError IO () validateGenesis genesis = - firstExceptT GenesisValidationErrors . hoistEither $ - Shelley.validateGenesis genesis + firstExceptT GenesisValidationErrors + . 
hoistEither + $ Shelley.validateGenesis genesis readLeaderCredentials :: - Maybe ProtocolFilepaths - -> ExceptT PraosLeaderCredentialsError IO [ShelleyLeaderCredentials StandardCrypto] + Maybe ProtocolFilepaths -> + ExceptT PraosLeaderCredentialsError IO [ShelleyLeaderCredentials StandardCrypto] readLeaderCredentials Nothing = return [] readLeaderCredentials (Just pfp) = -- The set of credentials is a sum total of what comes from the CLI, -- as well as what's in the bulk credentials file. - (<>) <$> readLeaderCredentialsSingleton pfp - <*> readLeaderCredentialsBulk pfp + (<>) + <$> readLeaderCredentialsSingleton pfp + <*> readLeaderCredentialsBulk pfp readLeaderCredentialsSingleton :: - ProtocolFilepaths -> - ExceptT PraosLeaderCredentialsError IO - [ShelleyLeaderCredentials StandardCrypto] + ProtocolFilepaths -> + ExceptT + PraosLeaderCredentialsError + IO + [ShelleyLeaderCredentials StandardCrypto] -- It's OK to supply none of the files on the CLI readLeaderCredentialsSingleton - ProtocolFilepaths - { shelleyCertFile = Nothing, - shelleyVRFFile = Nothing, - shelleyKESFile = Nothing - } = pure [] + ProtocolFilepaths + { shelleyCertFile = Nothing + , shelleyVRFFile = Nothing + , shelleyKESFile = Nothing + } = pure [] -- Or to supply all of the files readLeaderCredentialsSingleton - ProtocolFilepaths { shelleyCertFile = Just opCertFile, - shelleyVRFFile = Just vrfFile, - shelleyKESFile = Just kesFile - } = do + ProtocolFilepaths + { shelleyCertFile = Just opCertFile + , shelleyVRFFile = Just vrfFile + , shelleyKESFile = Just kesFile + } = do vrfSKey <- firstExceptT FileError (newExceptT $ readFileTextEnvelope (AsSigningKey AsVrfKey) vrfFile) @@ -158,19 +189,19 @@ readLeaderCredentialsSingleton return [mkPraosLeaderCredentials opCert vrfSKey kesSKey] -- But not OK to supply some of the files without the others. 
-readLeaderCredentialsSingleton ProtocolFilepaths {shelleyCertFile = Nothing} = - left OCertNotSpecified -readLeaderCredentialsSingleton ProtocolFilepaths {shelleyVRFFile = Nothing} = - left VRFKeyNotSpecified -readLeaderCredentialsSingleton ProtocolFilepaths {shelleyKESFile = Nothing} = - left KESKeyNotSpecified +readLeaderCredentialsSingleton ProtocolFilepaths{shelleyCertFile = Nothing} = + left OCertNotSpecified +readLeaderCredentialsSingleton ProtocolFilepaths{shelleyVRFFile = Nothing} = + left VRFKeyNotSpecified +readLeaderCredentialsSingleton ProtocolFilepaths{shelleyKESFile = Nothing} = + left KESKeyNotSpecified opCertKesKeyCheck :: - FilePath - -- ^ KES key - -> FilePath - -- ^ Operational certificate - -> ExceptT PraosLeaderCredentialsError IO (OperationalCertificate, SigningKey KesKey) + -- | KES key + FilePath -> + -- | Operational certificate + FilePath -> + ExceptT PraosLeaderCredentialsError IO (OperationalCertificate, SigningKey KesKey) opCertKesKeyCheck kesFile certFile = do opCert <- firstExceptT FileError (newExceptT $ readFileTextEnvelope AsOperationalCertificate certFile) @@ -181,116 +212,124 @@ opCertKesKeyCheck kesFile certFile = do -- Specified KES key in operational certificate should match the one -- supplied to the node. 
if suppliedKesKeyHash /= opCertSpecifiedKesKeyhash - then left $ MismatchedKesKey kesFile certFile - else return (opCert, kesSKey) + then left $ MismatchedKesKey kesFile certFile + else return (opCert, kesSKey) data ShelleyCredentials = ShelleyCredentials - { scCert :: (TextEnvelope, FilePath) - , scVrf :: (TextEnvelope, FilePath) - , scKes :: (TextEnvelope, FilePath) - } + { scCert :: (TextEnvelope, FilePath) + , scVrf :: (TextEnvelope, FilePath) + , scKes :: (TextEnvelope, FilePath) + } readLeaderCredentialsBulk :: - ProtocolFilepaths - -> ExceptT PraosLeaderCredentialsError IO [ShelleyLeaderCredentials StandardCrypto] -readLeaderCredentialsBulk ProtocolFilepaths { shelleyBulkCredsFile = mfp } = + ProtocolFilepaths -> + ExceptT PraosLeaderCredentialsError IO [ShelleyLeaderCredentials StandardCrypto] +readLeaderCredentialsBulk ProtocolFilepaths{shelleyBulkCredsFile = mfp} = mapM parseShelleyCredentials =<< readBulkFile mfp where - parseShelleyCredentials - :: ShelleyCredentials - -> ExceptT PraosLeaderCredentialsError IO (ShelleyLeaderCredentials StandardCrypto) - parseShelleyCredentials ShelleyCredentials { scCert, scVrf, scKes } = do - mkPraosLeaderCredentials - <$> parseEnvelope AsOperationalCertificate scCert - <*> parseEnvelope (AsSigningKey AsVrfKey) scVrf - <*> parseEnvelope (AsSigningKey AsKesKey) scKes - - readBulkFile - :: Maybe FilePath - -> ExceptT PraosLeaderCredentialsError IO [ShelleyCredentials] - readBulkFile Nothing = pure [] - readBulkFile (Just fp) = do - content <- handleIOExceptT (CredentialsReadError fp) $ - BS.readFile fp - envelopes <- firstExceptT (EnvelopeParseError fp) $ hoistEither $ - Aeson.eitherDecodeStrict' content - pure $ uncurry mkCredentials <$> zip [0..] envelopes - where - mkCredentials :: Int -> (TextEnvelope, TextEnvelope, TextEnvelope) - -> ShelleyCredentials - mkCredentials ix (teCert, teVrf, teKes) = - let loc ty = fp <> "." 
<> show ix <> ty - in ShelleyCredentials (teCert, loc "cert") - (teVrf, loc "vrf") - (teKes, loc "kes") + parseShelleyCredentials :: + ShelleyCredentials -> + ExceptT PraosLeaderCredentialsError IO (ShelleyLeaderCredentials StandardCrypto) + parseShelleyCredentials ShelleyCredentials{scCert, scVrf, scKes} = do + mkPraosLeaderCredentials + <$> parseEnvelope AsOperationalCertificate scCert + <*> parseEnvelope (AsSigningKey AsVrfKey) scVrf + <*> parseEnvelope (AsSigningKey AsKesKey) scKes + + readBulkFile :: + Maybe FilePath -> + ExceptT PraosLeaderCredentialsError IO [ShelleyCredentials] + readBulkFile Nothing = pure [] + readBulkFile (Just fp) = do + content <- + handleIOExceptT (CredentialsReadError fp) + $ BS.readFile fp + envelopes <- + firstExceptT (EnvelopeParseError fp) + $ hoistEither + $ Aeson.eitherDecodeStrict' content + pure $ uncurry mkCredentials <$> zip [0 ..] envelopes + where + mkCredentials :: + Int -> + (TextEnvelope, TextEnvelope, TextEnvelope) -> + ShelleyCredentials + mkCredentials ix (teCert, teVrf, teKes) = + let loc ty = fp <> "." 
<> show ix <> ty + in ShelleyCredentials + (teCert, loc "cert") + (teVrf, loc "vrf") + (teKes, loc "kes") mkPraosLeaderCredentials :: - OperationalCertificate - -> SigningKey VrfKey - -> SigningKey KesKey - -> ShelleyLeaderCredentials StandardCrypto + OperationalCertificate -> + SigningKey VrfKey -> + SigningKey KesKey -> + ShelleyLeaderCredentials StandardCrypto mkPraosLeaderCredentials - (OperationalCertificate opcert (StakePoolVerificationKey vkey)) - (VrfSigningKey vrfKey) - (KesSigningKey kesKey) = + (OperationalCertificate opcert (StakePoolVerificationKey vkey)) + (VrfSigningKey vrfKey) + (KesSigningKey kesKey) = ShelleyLeaderCredentials - { shelleyLeaderCredentialsCanBeLeader = - PraosCanBeLeader { - praosCanBeLeaderOpCert = opcert, - praosCanBeLeaderColdVerKey = coerceKeyRole vkey, - praosCanBeLeaderSignKeyVRF = vrfKey - }, - shelleyLeaderCredentialsInitSignKey = kesKey, - shelleyLeaderCredentialsLabel = "Shelley" - } + { shelleyLeaderCredentialsCanBeLeader = + PraosCanBeLeader + { praosCanBeLeaderOpCert = opcert + , praosCanBeLeaderColdVerKey = coerceKeyRole vkey + , praosCanBeLeaderSignKeyVRF = vrfKey + } + , shelleyLeaderCredentialsInitSignKey = kesKey + , shelleyLeaderCredentialsLabel = "Shelley" + } parseEnvelope :: - HasTextEnvelope a - => AsType a - -> (TextEnvelope, String) - -> ExceptT PraosLeaderCredentialsError IO a + HasTextEnvelope a => + AsType a -> + (TextEnvelope, String) -> + ExceptT PraosLeaderCredentialsError IO a parseEnvelope as (te, loc) = - firstExceptT (FileError . Api.FileError loc) . hoistEither $ - deserialiseFromTextEnvelope as te - + firstExceptT (FileError . Api.FileError loc) + . 
hoistEither + $ deserialiseFromTextEnvelope as te ------------------------------------------------------------------------------ -- Errors -- -data ShelleyProtocolInstantiationError = - GenesisReadError GenesisReadError - | GenesisValidationError GenesisValidationError - | PraosLeaderCredentialsError PraosLeaderCredentialsError +data ShelleyProtocolInstantiationError + = GenesisReadError GenesisReadError + | GenesisValidationError GenesisValidationError + | PraosLeaderCredentialsError PraosLeaderCredentialsError deriving Show instance Error ShelleyProtocolInstantiationError where - displayError (GenesisReadError err) = displayError err - displayError (GenesisValidationError err) = displayError err + displayError (GenesisReadError err) = displayError err + displayError (GenesisValidationError err) = displayError err displayError (PraosLeaderCredentialsError err) = displayError err - -data GenesisReadError = - GenesisReadFileError !FilePath !IOException - | GenesisHashMismatch !GenesisHash !GenesisHash -- actual, expected - | GenesisDecodeError !FilePath !String +data GenesisReadError + = GenesisReadFileError !FilePath !IOException + | GenesisHashMismatch !GenesisHash !GenesisHash -- actual, expected + | GenesisDecodeError !FilePath !String deriving Show instance Error GenesisReadError where displayError (GenesisReadFileError fp err) = - "There was an error reading the genesis file: " - <> toS fp <> " Error: " <> show err - + "There was an error reading the genesis file: " + <> toS fp + <> " Error: " + <> show err displayError (GenesisHashMismatch actual expected) = - "Wrong genesis file: the actual hash is " <> show actual - <> ", but the expected genesis hash given in the node " - <> "configuration file is " <> show expected - + "Wrong genesis file: the actual hash is " + <> show actual + <> ", but the expected genesis hash given in the node " + <> "configuration file is " + <> show expected displayError (GenesisDecodeError fp err) = - "There was an error parsing 
the genesis file: " - <> toS fp <> " Error: " <> show err - + "There was an error parsing the genesis file: " + <> toS fp + <> " Error: " + <> show err newtype GenesisValidationError = GenesisValidationErrors [Shelley.ValidationErr] deriving Show @@ -299,35 +338,38 @@ instance Error GenesisValidationError where displayError (GenesisValidationErrors vErrs) = T.unpack (unlines (map Shelley.describeValidationErr vErrs)) - -data PraosLeaderCredentialsError = - CredentialsReadError !FilePath !IOException - | EnvelopeParseError !FilePath !String - | FileError !(Api.FileError TextEnvelopeError) - | OCertNotSpecified - | VRFKeyNotSpecified - | KESKeyNotSpecified - | MismatchedKesKey - FilePath - -- KES signing key - FilePath - -- Operational certificate +data PraosLeaderCredentialsError + = CredentialsReadError !FilePath !IOException + | EnvelopeParseError !FilePath !String + | FileError !(Api.FileError TextEnvelopeError) + | OCertNotSpecified + | VRFKeyNotSpecified + | KESKeyNotSpecified + | MismatchedKesKey + FilePath + -- KES signing key + FilePath + -- Operational certificate deriving Show instance Error PraosLeaderCredentialsError where displayError (CredentialsReadError fp err) = - "There was an error reading a credentials file: " - <> toS fp <> " Error: " <> show err - + "There was an error reading a credentials file: " + <> toS fp + <> " Error: " + <> show err displayError (EnvelopeParseError fp err) = - "There was an error parsing a credentials envelope: " - <> toS fp <> " Error: " <> show err - + "There was an error parsing a credentials envelope: " + <> toS fp + <> " Error: " + <> show err displayError (FileError fileErr) = displayError fileErr displayError (MismatchedKesKey kesFp certFp) = - "The KES key provided at: " <> show kesFp - <> " does not match the KES key specified in the operational certificate at: " <> show certFp - displayError OCertNotSpecified = missingFlagMessage "shelley-operational-certificate" + "The KES key provided at: " + <> show kesFp + 
<> " does not match the KES key specified in the operational certificate at: " + <> show certFp + displayError OCertNotSpecified = missingFlagMessage "shelley-operational-certificate" displayError VRFKeyNotSpecified = missingFlagMessage "shelley-vrf-key" displayError KESKeyNotSpecified = missingFlagMessage "shelley-kes-key" diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Types.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Types.hs index dee96391a1..60ded59f27 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Types.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Protocol/Types.hs @@ -9,24 +9,24 @@ -- DUPLICATE -- adapted from: cardano-node/src/Cardano/Node/Protocol/Types.hs -module Cardano.Node.Protocol.Types ( - Protocol (..) +module Cardano.Node.Protocol.Types + ( Protocol (..) , SomeConsensusProtocol (..) ) where -import qualified Cardano.Api.Protocol.Types as Cardano -import Cardano.Prelude (Generic, NFData) -import Data.Aeson -import NoThunks.Class (NoThunks) +import Cardano.Api.Protocol.Types qualified as Cardano +import Cardano.Prelude (Generic, NFData) +import Data.Aeson +import NoThunks.Class (NoThunks) - -data Protocol = ByronProtocol - | ShelleyProtocol - | CardanoProtocol +data Protocol + = ByronProtocol + | ShelleyProtocol + | CardanoProtocol deriving (Eq, Generic) instance Show Protocol where - show ByronProtocol = "Byron" + show ByronProtocol = "Byron" show ShelleyProtocol = "Shelley" show CardanoProtocol = "Byron; Shelley" @@ -36,23 +36,23 @@ deriving instance NoThunks Protocol instance FromJSON Protocol where parseJSON = withText "Protocol" $ \str -> case str of - -- The new names "Byron" -> pure ByronProtocol "Shelley" -> pure ShelleyProtocol "Cardano" -> pure CardanoProtocol - -- The old names "RealPBFT" -> pure ByronProtocol "TPraos" -> pure ShelleyProtocol - - _ -> fail $ "Parsing of Protocol failed. 
" - <> show str <> " is not a valid protocol" + _ -> + fail $ + "Parsing of Protocol failed. " + <> show str + <> " is not a valid protocol" data SomeConsensusProtocol where - - SomeConsensusProtocol :: forall blk. ( Cardano.Protocol IO blk - ) - => Cardano.BlockType blk - -> Cardano.ProtocolInfoArgs IO blk - -> SomeConsensusProtocol + SomeConsensusProtocol :: + forall blk. + Cardano.Protocol IO blk => + Cardano.BlockType blk -> + Cardano.ProtocolInfoArgs IO blk -> + SomeConsensusProtocol diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Types.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Types.hs index 13ce5530d0..b510c46ff1 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Types.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Node/Types.hs @@ -7,8 +7,8 @@ -- DUPLICATE -- adapted from: cardano-node/src/Cardano/Node/Types.hs -module Cardano.Node.Types ( - -- * Configuration +module Cardano.Node.Types + ( -- * Configuration AdjustFilePaths (..) , ConfigError (..) , ConfigYamlFilePath (..) @@ -18,6 +18,7 @@ module Cardano.Node.Types ( , MaxConcurrencyBulkSync (..) , MaxConcurrencyDeadline (..) , ProtocolFilepaths (..) + -- * Consensus protocol configuration , NodeAlonzoProtocolConfiguration (..) , NodeByronProtocolConfiguration (..) 
@@ -29,50 +30,52 @@ module Cardano.Node.Types ( , renderVRFPrivateKeyFilePermissionError ) where -import qualified Cardano.Chain.Update as Byron -import Cardano.Crypto (RequiresNetworkMagic) -import qualified Cardano.Crypto.Hash as Crypto -import Data.Aeson -import Data.String (IsString) -import Data.Text as Text (Text, pack, unpack) -import Data.Word (Word16, Word8) -import Ouroboros.Consensus.Block.Abstract (EpochNo) - +import Cardano.Chain.Update qualified as Byron +import Cardano.Crypto (RequiresNetworkMagic) +import Cardano.Crypto.Hash qualified as Crypto +import Data.Aeson +import Data.String (IsString) +import Data.Text as Text (Text, pack, unpack) +import Data.Word (Word16, Word8) +import Ouroboros.Consensus.Block.Abstract (EpochNo) -- | Errors for the cardano-config module. -data ConfigError = - ConfigErrorFileNotFound FilePath +data ConfigError + = ConfigErrorFileNotFound FilePath | ConfigErrorNoEKG - deriving Show + deriving Show -- | Filepath of the configuration yaml file. This file determines -- all the configuration settings required for the cardano node -- (logging, tracing, protocol, slot length etc) newtype ConfigYamlFilePath = ConfigYamlFilePath - { unConfigPath :: FilePath } + {unConfigPath :: FilePath} deriving newtype (Eq, Show) newtype DbFile = DbFile - { unDB :: FilePath } + {unDB :: FilePath} deriving newtype (Eq, Show) newtype GenesisFile = GenesisFile - { unGenesisFile :: FilePath } + {unGenesisFile :: FilePath} deriving stock (Eq, Ord) deriving newtype (IsString, Show) instance FromJSON GenesisFile where parseJSON (String genFp) = pure . GenesisFile $ Text.unpack genFp - parseJSON invalid = fail $ "Parsing of GenesisFile failed due to type mismatch. " - <> "Encountered: " <> show invalid + parseJSON invalid = + fail $ + "Parsing of GenesisFile failed due to type mismatch. 
" + <> "Encountered: " + <> show invalid newtype MaxConcurrencyBulkSync = MaxConcurrencyBulkSync - { unMaxConcurrencyBulkSync :: Word } + {unMaxConcurrencyBulkSync :: Word} deriving stock (Eq, Ord) deriving newtype (FromJSON, Show) newtype MaxConcurrencyDeadline = MaxConcurrencyDeadline - { unMaxConcurrencyDeadline :: Word } + {unMaxConcurrencyDeadline :: Word} deriving stock (Eq, Ord) deriving newtype (FromJSON, Show) @@ -97,178 +100,169 @@ instance FromJSON NodeDiffusionMode where class AdjustFilePaths a where adjustFilePaths :: (FilePath -> FilePath) -> a -> a - -data ProtocolFilepaths = - ProtocolFilepaths { - byronCertFile :: !(Maybe FilePath) - , byronKeyFile :: !(Maybe FilePath) - , shelleyKESFile :: !(Maybe FilePath) - , shelleyVRFFile :: !(Maybe FilePath) - , shelleyCertFile :: !(Maybe FilePath) - , shelleyBulkCredsFile :: !(Maybe FilePath) - } deriving (Eq, Show) +data ProtocolFilepaths + = ProtocolFilepaths + { byronCertFile :: !(Maybe FilePath) + , byronKeyFile :: !(Maybe FilePath) + , shelleyKESFile :: !(Maybe FilePath) + , shelleyVRFFile :: !(Maybe FilePath) + , shelleyCertFile :: !(Maybe FilePath) + , shelleyBulkCredsFile :: !(Maybe FilePath) + } + deriving (Eq, Show) newtype GenesisHash = GenesisHash (Crypto.Hash Crypto.Blake2b_256 Crypto.ByteString) deriving newtype (Eq, Show, ToJSON, FromJSON) -data NodeProtocolConfiguration = - NodeProtocolConfigurationByron NodeByronProtocolConfiguration - | NodeProtocolConfigurationShelley NodeShelleyProtocolConfiguration - | NodeProtocolConfigurationCardano NodeByronProtocolConfiguration - NodeShelleyProtocolConfiguration - NodeAlonzoProtocolConfiguration - NodeConwayProtocolConfiguration - NodeHardForkProtocolConfiguration +data NodeProtocolConfiguration + = NodeProtocolConfigurationByron NodeByronProtocolConfiguration + | NodeProtocolConfigurationShelley NodeShelleyProtocolConfiguration + | NodeProtocolConfigurationCardano + NodeByronProtocolConfiguration + NodeShelleyProtocolConfiguration + 
NodeAlonzoProtocolConfiguration + NodeConwayProtocolConfiguration + NodeHardForkProtocolConfiguration deriving (Eq, Show) -data NodeShelleyProtocolConfiguration = - NodeShelleyProtocolConfiguration { - npcShelleyGenesisFile :: !GenesisFile - , npcShelleyGenesisFileHash :: !(Maybe GenesisHash) - } +data NodeShelleyProtocolConfiguration + = NodeShelleyProtocolConfiguration + { npcShelleyGenesisFile :: !GenesisFile + , npcShelleyGenesisFileHash :: !(Maybe GenesisHash) + } deriving (Eq, Show) -data NodeAlonzoProtocolConfiguration = - NodeAlonzoProtocolConfiguration { - npcAlonzoGenesisFile :: !GenesisFile - , npcAlonzoGenesisFileHash :: !(Maybe GenesisHash) - } +data NodeAlonzoProtocolConfiguration + = NodeAlonzoProtocolConfiguration + { npcAlonzoGenesisFile :: !GenesisFile + , npcAlonzoGenesisFileHash :: !(Maybe GenesisHash) + } deriving (Eq, Show) -data NodeByronProtocolConfiguration = - NodeByronProtocolConfiguration { - npcByronGenesisFile :: !GenesisFile - , npcByronGenesisFileHash :: !(Maybe GenesisHash) - , npcByronReqNetworkMagic :: !RequiresNetworkMagic - , npcByronPbftSignatureThresh :: !(Maybe Double) - - --TODO: eliminate these two: it can be hard-coded - -- | Update application name. - , npcByronApplicationName :: !Byron.ApplicationName - - -- | Application (ie software) version. - , npcByronApplicationVersion :: !Byron.NumSoftwareVersion - - --TODO: eliminate these: it can be done automatically in consensus - -- | These declare the version of the protocol that the node is prepared - -- to run. This is usually the version of the protocol in use on the - -- chain now, but during protocol updates this version will be the one - -- that we declare that we are ready to move to. This is the endorsement - -- mechanism for determining when enough block producers are ready to - -- move to the next version. 
- -- - , npcByronSupportedProtocolVersionMajor :: !Word16 - , npcByronSupportedProtocolVersionMinor :: !Word16 - , npcByronSupportedProtocolVersionAlt :: !Word8 - } +data NodeByronProtocolConfiguration + = NodeByronProtocolConfiguration + { npcByronGenesisFile :: !GenesisFile + , npcByronGenesisFileHash :: !(Maybe GenesisHash) + , npcByronReqNetworkMagic :: !RequiresNetworkMagic + , npcByronPbftSignatureThresh :: !(Maybe Double) + , -- TODO: eliminate these two: it can be hard-coded + + npcByronApplicationName :: !Byron.ApplicationName + -- ^ Update application name. + , npcByronApplicationVersion :: !Byron.NumSoftwareVersion + -- ^ Application (ie software) version. + , -- TODO: eliminate these: it can be done automatically in consensus + + npcByronSupportedProtocolVersionMajor :: !Word16 + -- ^ These declare the version of the protocol that the node is prepared + -- to run. This is usually the version of the protocol in use on the + -- chain now, but during protocol updates this version will be the one + -- that we declare that we are ready to move to. This is the endorsement + -- mechanism for determining when enough block producers are ready to + -- move to the next version. + , npcByronSupportedProtocolVersionMinor :: !Word16 + , npcByronSupportedProtocolVersionAlt :: !Word8 + } deriving (Eq, Show) -data NodeConwayProtocolConfiguration = - NodeConwayProtocolConfiguration { - npcConwayGenesisFile :: !GenesisFile - , npcConwayGenesisFileHash :: !(Maybe GenesisHash) - } +data NodeConwayProtocolConfiguration + = NodeConwayProtocolConfiguration + { npcConwayGenesisFile :: !GenesisFile + , npcConwayGenesisFileHash :: !(Maybe GenesisHash) + } deriving (Eq, Show) -- | Configuration relating to a hard forks themselves, not the specific eras. 
--- -data NodeHardForkProtocolConfiguration = - NodeHardForkProtocolConfiguration { - - -- | During the development and integration of new eras we wish to be - -- able to test the hard fork transition into the new era, but we do not - -- wish to generally have the node advertise that it understands the new - -- era. Avoiding advertising new development eras until they are ready - -- makes it practical to include new not-yet-ready eras into the main - -- release version of the node without the danger that operators on the - -- mainnet will prematurely advertise that their nodes are capable of - -- crossing the next hard fork. - -- - -- It should /always/ remain at the default of false for nodes running - -- on the mainnet. - -- - -- This flag should be set to true for nodes taking part in testnets for - -- testing the new era. - -- - npcTestEnableDevelopmentHardForkEras :: Bool - - -- | For testing purposes we support specifying that the hard fork - -- happens at an exact epoch number (ie the first epoch of the new era). - -- - -- Obviously if this is used, all the nodes in the test cluster must be - -- configured the same, or they will disagree. - -- - , npcTestShelleyHardForkAtEpoch :: Maybe EpochNo - - -- | For testing purposes we support specifying that the hard fork - -- happens at an exact epoch number (ie the first epoch of the new era). - -- - -- Obviously if this is used, all the nodes in the test cluster must be - -- configured the same, or they will disagree. - -- - , npcTestAllegraHardForkAtEpoch :: Maybe EpochNo - - -- | For testing purposes we support specifying that the hard fork - -- happens at an exact epoch number (ie the first epoch of the new era). - -- - -- Obviously if this is used, all the nodes in the test cluster must be - -- configured the same, or they will disagree. 
- -- - , npcTestMaryHardForkAtEpoch :: Maybe EpochNo - - -- | For testing purposes we support specifying that the hard fork - -- happens at an exact epoch number (ie the first epoch of the new era). - -- - -- Obviously if this is used, all the nodes in the test cluster must be - -- configured the same, or they will disagree. - -- - , npcTestAlonzoHardForkAtEpoch :: Maybe EpochNo - - , npcTestBabbageHardForkAtEpoch :: Maybe EpochNo - - , npcTestConwayHardForkAtEpoch :: Maybe EpochNo - } +data NodeHardForkProtocolConfiguration + = NodeHardForkProtocolConfiguration + { npcTestEnableDevelopmentHardForkEras :: Bool + -- ^ During the development and integration of new eras we wish to be + -- able to test the hard fork transition into the new era, but we do not + -- wish to generally have the node advertise that it understands the new + -- era. Avoiding advertising new development eras until they are ready + -- makes it practical to include new not-yet-ready eras into the main + -- release version of the node without the danger that operators on the + -- mainnet will prematurely advertise that their nodes are capable of + -- crossing the next hard fork. + -- + -- It should /always/ remain at the default of false for nodes running + -- on the mainnet. + -- + -- This flag should be set to true for nodes taking part in testnets for + -- testing the new era. + , npcTestShelleyHardForkAtEpoch :: Maybe EpochNo + -- ^ For testing purposes we support specifying that the hard fork + -- happens at an exact epoch number (ie the first epoch of the new era). + -- + -- Obviously if this is used, all the nodes in the test cluster must be + -- configured the same, or they will disagree. + , npcTestAllegraHardForkAtEpoch :: Maybe EpochNo + -- ^ For testing purposes we support specifying that the hard fork + -- happens at an exact epoch number (ie the first epoch of the new era). 
+ -- + -- Obviously if this is used, all the nodes in the test cluster must be + -- configured the same, or they will disagree. + , npcTestMaryHardForkAtEpoch :: Maybe EpochNo + -- ^ For testing purposes we support specifying that the hard fork + -- happens at an exact epoch number (ie the first epoch of the new era). + -- + -- Obviously if this is used, all the nodes in the test cluster must be + -- configured the same, or they will disagree. + , npcTestAlonzoHardForkAtEpoch :: Maybe EpochNo + -- ^ For testing purposes we support specifying that the hard fork + -- happens at an exact epoch number (ie the first epoch of the new era). + -- + -- Obviously if this is used, all the nodes in the test cluster must be + -- configured the same, or they will disagree. + , npcTestBabbageHardForkAtEpoch :: Maybe EpochNo + , npcTestConwayHardForkAtEpoch :: Maybe EpochNo + } deriving (Eq, Show) - instance AdjustFilePaths NodeProtocolConfiguration where - adjustFilePaths f (NodeProtocolConfigurationByron pc) = NodeProtocolConfigurationByron (adjustFilePaths f pc) - adjustFilePaths f (NodeProtocolConfigurationShelley pc) = NodeProtocolConfigurationShelley (adjustFilePaths f pc) - adjustFilePaths f (NodeProtocolConfigurationCardano pcb pcs pca pcc pch) = - NodeProtocolConfigurationCardano (adjustFilePaths f pcb) - (adjustFilePaths f pcs) - (adjustFilePaths f pca) - (adjustFilePaths f pcc) - pch + NodeProtocolConfigurationCardano + (adjustFilePaths f pcb) + (adjustFilePaths f pcs) + (adjustFilePaths f pca) + (adjustFilePaths f pcc) + pch instance AdjustFilePaths NodeByronProtocolConfiguration where - adjustFilePaths f x@NodeByronProtocolConfiguration { - npcByronGenesisFile - } = - x { npcByronGenesisFile = adjustFilePaths f npcByronGenesisFile } + adjustFilePaths + f + x@NodeByronProtocolConfiguration + { npcByronGenesisFile + } = + x{npcByronGenesisFile = adjustFilePaths f npcByronGenesisFile} instance AdjustFilePaths NodeShelleyProtocolConfiguration where - adjustFilePaths f 
x@NodeShelleyProtocolConfiguration { - npcShelleyGenesisFile - } = - x { npcShelleyGenesisFile = adjustFilePaths f npcShelleyGenesisFile } + adjustFilePaths + f + x@NodeShelleyProtocolConfiguration + { npcShelleyGenesisFile + } = + x{npcShelleyGenesisFile = adjustFilePaths f npcShelleyGenesisFile} instance AdjustFilePaths NodeAlonzoProtocolConfiguration where - adjustFilePaths f x@NodeAlonzoProtocolConfiguration { - npcAlonzoGenesisFile - } = - x { npcAlonzoGenesisFile = adjustFilePaths f npcAlonzoGenesisFile } + adjustFilePaths + f + x@NodeAlonzoProtocolConfiguration + { npcAlonzoGenesisFile + } = + x{npcAlonzoGenesisFile = adjustFilePaths f npcAlonzoGenesisFile} instance AdjustFilePaths NodeConwayProtocolConfiguration where - adjustFilePaths f x@NodeConwayProtocolConfiguration { - npcConwayGenesisFile - } = - x { npcConwayGenesisFile = adjustFilePaths f npcConwayGenesisFile } + adjustFilePaths + f + x@NodeConwayProtocolConfiguration + { npcConwayGenesisFile + } = + x{npcConwayGenesisFile = adjustFilePaths f npcConwayGenesisFile} instance AdjustFilePaths GenesisFile where adjustFilePaths f (GenesisFile p) = GenesisFile (f p) @@ -276,7 +270,6 @@ instance AdjustFilePaths GenesisFile where instance AdjustFilePaths a => AdjustFilePaths (Maybe a) where adjustFilePaths f = fmap (adjustFilePaths f) - data VRFPrivateKeyFilePermissionError = OtherPermissionsExist FilePath | GroupPermissionsExist FilePath @@ -287,12 +280,14 @@ renderVRFPrivateKeyFilePermissionError :: VRFPrivateKeyFilePermissionError -> Te renderVRFPrivateKeyFilePermissionError err = case err of OtherPermissionsExist fp -> - "VRF private key file at: " <> Text.pack fp - <> " has \"other\" file permissions. Please remove all \"other\" file permissions." - + "VRF private key file at: " + <> Text.pack fp + <> " has \"other\" file permissions. Please remove all \"other\" file permissions." GroupPermissionsExist fp -> - "VRF private key file at: " <> Text.pack fp - <> "has \"group\" file permissions. 
Please remove all \"group\" file permissions." + "VRF private key file at: " + <> Text.pack fp + <> " has \"group\" file permissions. Please remove all \"group\" file permissions." GenericPermissionsExist fp -> - "VRF private key file at: " <> Text.pack fp - <> "has \"generic\" file permissions. Please remove all \"generic\" file permissions." + "VRF private key file at: " + <> Text.pack fp + <> " has \"generic\" file permissions. Please remove all \"generic\" file permissions." diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis.hs index b6355a5ce1..9891d55218 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis.hs @@ -12,8 +12,8 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} -module Cardano.Tools.DBAnalyser.Analysis ( - AnalysisEnv (..) +module Cardano.Tools.DBAnalyser.Analysis + ( AnalysisEnv (..) , AnalysisName (..) , AnalysisResult (..) , AnalysisStartFrom (..) 
@@ -26,181 +26,197 @@ module Cardano.Tools.DBAnalyser.Analysis ( , runAnalysis ) where -import qualified Cardano.Slotting.Slot as Slotting -import qualified Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.FileWriting as F -import qualified Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.SlotDataPoint as DP -import Cardano.Tools.DBAnalyser.CSV (computeAndWriteLine, - writeHeaderLine) -import Cardano.Tools.DBAnalyser.HasAnalysis (HasAnalysis) -import qualified Cardano.Tools.DBAnalyser.HasAnalysis as HasAnalysis -import Cardano.Tools.DBAnalyser.Types -import Control.Monad (unless, void, when) -import Control.Monad.Except (runExcept) -import Control.ResourceRegistry -import Control.Tracer (Tracer (..), nullTracer, traceWith) -import Data.Int (Int64) -import Data.List (intercalate) -import qualified Data.Map.Strict as Map -import Data.Singletons -import Data.Word (Word16, Word32, Word64) -import qualified Debug.Trace as Debug -import qualified GHC.Stats as GC -import NoThunks.Class (noThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Forecast (forecastFor) -import Ouroboros.Consensus.HeaderValidation (HasAnnTip (..), - HeaderState (..), headerStatePoint, revalidateHeader, - tickHeaderState, validateHeader) -import Ouroboros.Consensus.Ledger.Abstract - (ApplyBlock (getBlockKeySets, reapplyBlockLedgerResult), - applyBlockLedgerResult, tickThenApply, - tickThenApplyLedgerResult, tickThenReapply) -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool - (LedgerSupportsMempool) -import qualified Ouroboros.Consensus.Ledger.SupportsMempool as LedgerSupportsMempool -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol (..)) -import Ouroboros.Consensus.Ledger.Tables.Utils -import qualified Ouroboros.Consensus.Mempool as Mempool -import Ouroboros.Consensus.Protocol.Abstract (LedgerView) -import 
Ouroboros.Consensus.Storage.Common (BlockComponent (..)) -import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB) -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import qualified Ouroboros.Consensus.Util.IOLike as IOLike -import Ouroboros.Network.Protocol.LocalStateQuery.Type -import Ouroboros.Network.SizeInBytes -import qualified System.IO as IO +import Cardano.Slotting.Slot qualified as Slotting +import Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.FileWriting qualified as F +import Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.SlotDataPoint qualified as DP +import Cardano.Tools.DBAnalyser.CSV + ( computeAndWriteLine + , writeHeaderLine + ) +import Cardano.Tools.DBAnalyser.HasAnalysis (HasAnalysis) +import Cardano.Tools.DBAnalyser.HasAnalysis qualified as HasAnalysis +import Cardano.Tools.DBAnalyser.Types +import Control.Monad (unless, void, when) +import Control.Monad.Except (runExcept) +import Control.ResourceRegistry +import Control.Tracer (Tracer (..), nullTracer, traceWith) +import Data.Int (Int64) +import Data.List (intercalate) +import Data.Map.Strict qualified as Map +import Data.Singletons +import Data.Word (Word16, Word32, Word64) +import Debug.Trace qualified as Debug +import GHC.Stats qualified as GC +import NoThunks.Class (noThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Forecast (forecastFor) +import Ouroboros.Consensus.HeaderValidation + ( HasAnnTip (..) + , HeaderState (..) 
+ , headerStatePoint + , revalidateHeader + , tickHeaderState + , validateHeader + ) +import Ouroboros.Consensus.Ledger.Abstract + ( ApplyBlock (getBlockKeySets, reapplyBlockLedgerResult) + , applyBlockLedgerResult + , tickThenApply + , tickThenApplyLedgerResult + , tickThenReapply + ) +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool + ( LedgerSupportsMempool + ) +import Ouroboros.Consensus.Ledger.SupportsMempool qualified as LedgerSupportsMempool +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol (..) + ) +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Mempool qualified as Mempool +import Ouroboros.Consensus.Protocol.Abstract (LedgerView) +import Ouroboros.Consensus.Storage.Common (BlockComponent (..)) +import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB) +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Util.IOLike qualified as IOLike +import Ouroboros.Network.Protocol.LocalStateQuery.Type +import Ouroboros.Network.SizeInBytes +import System.IO qualified as IO {------------------------------------------------------------------------------- Run the requested analysis -------------------------------------------------------------------------------} runAnalysis :: - forall blk. - ( HasAnalysis blk - , LedgerSupportsMempool.HasTxId (LedgerSupportsMempool.GenTx blk) - , LedgerSupportsMempool.HasTxs blk - , LedgerSupportsMempool blk - , LedgerSupportsProtocol blk - , CanStowLedgerTables (LedgerState blk) - ) - => AnalysisName -> SomeAnalysis blk + forall blk. 
+ ( HasAnalysis blk + , LedgerSupportsMempool.HasTxId (LedgerSupportsMempool.GenTx blk) + , LedgerSupportsMempool.HasTxs blk + , LedgerSupportsMempool blk + , LedgerSupportsProtocol blk + , CanStowLedgerTables (LedgerState blk) + ) => + AnalysisName -> SomeAnalysis blk runAnalysis analysisName = case go analysisName of - SomeAnalysis p analysis -> SomeAnalysis p $ \env@AnalysisEnv{ tracer } -> do - traceWith tracer (StartedEvent analysisName) - result <- analysis env - traceWith tracer DoneEvent - pure result - where - go :: AnalysisName -> SomeAnalysis blk - go ShowSlotBlockNo = mkAnalysis $ showSlotBlockNo - go CountTxOutputs = mkAnalysis $ countTxOutputs - go ShowBlockHeaderSize = mkAnalysis $ showHeaderSize - go ShowBlockTxsSize = mkAnalysis $ showBlockTxsSize - go ShowEBBs = mkAnalysis $ showEBBs - go OnlyValidation = mkAnalysis @StartFromPoint $ \_ -> pure Nothing - go (StoreLedgerStateAt slotNo lgrAppMode) = mkAnalysis $ storeLedgerStateAt slotNo lgrAppMode - go CountBlocks = mkAnalysis $ countBlocks - go (CheckNoThunksEvery nBks) = mkAnalysis $ checkNoThunksEvery nBks - go TraceLedgerProcessing = mkAnalysis $ traceLedgerProcessing - go (ReproMempoolAndForge nBks) = mkAnalysis $ reproMempoolForge nBks - go (BenchmarkLedgerOps mOutfile lgrAppMode) = mkAnalysis $ benchmarkLedgerOps mOutfile lgrAppMode - go (GetBlockApplicationMetrics nrBlocks mOutfile) = mkAnalysis $ getBlockApplicationMetrics nrBlocks mOutfile - - mkAnalysis :: - forall startFrom. 
SingI startFrom - => Analysis blk startFrom -> SomeAnalysis blk - mkAnalysis = SomeAnalysis (Proxy @startFrom) + SomeAnalysis p analysis -> SomeAnalysis p $ \env@AnalysisEnv{tracer} -> do + traceWith tracer (StartedEvent analysisName) + result <- analysis env + traceWith tracer DoneEvent + pure result + where + go :: AnalysisName -> SomeAnalysis blk + go ShowSlotBlockNo = mkAnalysis $ showSlotBlockNo + go CountTxOutputs = mkAnalysis $ countTxOutputs + go ShowBlockHeaderSize = mkAnalysis $ showHeaderSize + go ShowBlockTxsSize = mkAnalysis $ showBlockTxsSize + go ShowEBBs = mkAnalysis $ showEBBs + go OnlyValidation = mkAnalysis @StartFromPoint $ \_ -> pure Nothing + go (StoreLedgerStateAt slotNo lgrAppMode) = mkAnalysis $ storeLedgerStateAt slotNo lgrAppMode + go CountBlocks = mkAnalysis $ countBlocks + go (CheckNoThunksEvery nBks) = mkAnalysis $ checkNoThunksEvery nBks + go TraceLedgerProcessing = mkAnalysis $ traceLedgerProcessing + go (ReproMempoolAndForge nBks) = mkAnalysis $ reproMempoolForge nBks + go (BenchmarkLedgerOps mOutfile lgrAppMode) = mkAnalysis $ benchmarkLedgerOps mOutfile lgrAppMode + go (GetBlockApplicationMetrics nrBlocks mOutfile) = mkAnalysis $ getBlockApplicationMetrics nrBlocks mOutfile + + mkAnalysis :: + forall startFrom. + SingI startFrom => + Analysis blk startFrom -> SomeAnalysis blk + mkAnalysis = SomeAnalysis (Proxy @startFrom) type Analysis blk startFrom = AnalysisEnv IO blk startFrom -> IO (Maybe AnalysisResult) -data SomeAnalysis blk = - forall startFrom. SingI startFrom - => SomeAnalysis (Proxy startFrom) (Analysis blk startFrom) +data SomeAnalysis blk + = forall startFrom. 
+ SingI startFrom => + SomeAnalysis (Proxy startFrom) (Analysis blk startFrom) -data AnalysisEnv m blk startFrom = AnalysisEnv { - cfg :: TopLevelConfig blk - , startFrom :: AnalysisStartFrom m blk startFrom - , db :: ImmutableDB IO blk - , registry :: ResourceRegistry IO - , limit :: Limit - , tracer :: Tracer m (TraceEvent blk) - } +data AnalysisEnv m blk startFrom = AnalysisEnv + { cfg :: TopLevelConfig blk + , startFrom :: AnalysisStartFrom m blk startFrom + , db :: ImmutableDB IO blk + , registry :: ResourceRegistry IO + , limit :: Limit + , tracer :: Tracer m (TraceEvent blk) + } -- | Whether the db-analyser pass needs access to a ledger state. data StartFrom = StartFromPoint | StartFromLedgerState data SStartFrom startFrom where - SStartFromPoint :: SStartFrom StartFromPoint + SStartFromPoint :: SStartFrom StartFromPoint SStartFromLedgerState :: SStartFrom StartFromLedgerState type instance Sing = SStartFrom -instance SingI StartFromPoint where sing = SStartFromPoint +instance SingI StartFromPoint where sing = SStartFromPoint instance SingI StartFromLedgerState where sing = SStartFromLedgerState data AnalysisStartFrom m blk startFrom where FromPoint :: Point blk -> AnalysisStartFrom m blk StartFromPoint FromLedgerState :: - LedgerDB.LedgerDB' m blk -> LedgerDB.TestInternals' m blk -> AnalysisStartFrom m blk StartFromLedgerState + LedgerDB.LedgerDB' m blk -> + LedgerDB.TestInternals' m blk -> + AnalysisStartFrom m blk StartFromLedgerState -startFromPoint :: (IOLike.IOLike m, HasAnnTip blk) => AnalysisStartFrom m blk startFrom -> m (Point blk) +startFromPoint :: + (IOLike.IOLike m, HasAnnTip blk) => AnalysisStartFrom m blk startFrom -> m (Point blk) startFromPoint = \case - FromPoint pt -> pure pt + FromPoint pt -> pure pt FromLedgerState st _ -> headerStatePoint . 
headerState <$> IOLike.atomically (LedgerDB.getVolatileTip st) -data TraceEvent blk = +data TraceEvent blk + = -- | triggered when given analysis has started StartedEvent AnalysisName - -- ^ triggered when given analysis has started - | DoneEvent - -- ^ triggered when analysis has ended - | BlockSlotEvent BlockNo SlotNo (HeaderHash blk) - -- ^ triggered when block has been found, it holds: + | -- | triggered when analysis has ended + DoneEvent + | -- | triggered when block has been found, it holds: -- * block's number -- * slot number when the block was forged - | CountTxOutputsEvent BlockNo SlotNo Int Int - -- ^ triggered when block has been found, it holds: + BlockSlotEvent BlockNo SlotNo (HeaderHash blk) + | -- | triggered when block has been found, it holds: -- * block's number -- * slot number when the block was forged -- * cumulative tx output -- * count tx output - | EbbEvent (HeaderHash blk) (ChainHash blk) Bool - -- ^ triggered when EBB block has been found, it holds: + CountTxOutputsEvent BlockNo SlotNo Int Int + | -- | triggered when EBB block has been found, it holds: -- * its hash, -- * hash of previous block -- * flag whether the EBB is known - | CountedBlocksEvent Int - -- ^ triggered once during CountBLocks analysis, + EbbEvent (HeaderHash blk) (ChainHash blk) Bool + | -- | triggered once during CountBlocks analysis, -- when blocks were counted - | HeaderSizeEvent BlockNo SlotNo Word16 Word32 - -- ^ triggered when header size has been measured + CountedBlocksEvent Int + | -- | triggered when header size has been measured -- * block's number -- * slot number when the block was forged -- * block's header size -- * block's size - | MaxHeaderSizeEvent Word16 - -- ^ triggered once during ShowBlockTxsSize analysis, + HeaderSizeEvent BlockNo SlotNo Word16 Word32 + | -- | triggered once during ShowBlockTxsSize analysis, -- holding maximum encountered header size - | SnapshotStoredEvent SlotNo - -- ^ triggered when snapshot of ledger has been stored for 
SlotNo
-  | SnapshotWarningEvent SlotNo SlotNo
-    -- ^ triggered once during StoreLedgerStateAt analysis,
+    MaxHeaderSizeEvent Word16
+  | -- | triggered when snapshot of ledger has been stored for SlotNo
+    SnapshotStoredEvent SlotNo
+  | -- | triggered once during StoreLedgerStateAt analysis,
     -- when snapshot was created in slot following the
     -- requested one
-  | LedgerErrorEvent (Point blk) (ExtValidationError blk)
-    -- ^ triggered when applying a block with the given point failed
-  | BlockTxSizeEvent SlotNo Int SizeInBytes
-    -- ^ triggered for all blocks during ShowBlockTxsSize analysis,
+    SnapshotWarningEvent SlotNo SlotNo
+  | -- | triggered when applying a block with the given point failed
+    LedgerErrorEvent (Point blk) (ExtValidationError blk)
+  | -- | triggered for all blocks during ShowBlockTxsSize analysis,
     -- it holds:
     -- * slot number when the block was forged
     -- * number of transactions in the block
     -- * total size of transactions in the block
-  | BlockMempoolAndForgeRepro BlockNo SlotNo Int SizeInBytes IOLike.DiffTime Int64 Int64 IOLike.DiffTime Int64 Int64
-    -- ^ triggered for all blocks during MempoolAndForgeRepro analysis,
+    BlockTxSizeEvent SlotNo Int SizeInBytes
+  | -- | triggered for all blocks during MempoolAndForgeRepro analysis,
     -- it holds:
     -- * block number
     -- * slot number when the block was forged
@@ -212,271 +228,310 @@ data TraceEvent blk =
     -- * monotonic time to call 'Mempool.getSnapshotFor'
     -- * total time spent in the mutator when calling 'Mempool.getSnapshotFor'
     -- * total time spent in gc when calling 'Mempool.getSnapshotFor'
+    BlockMempoolAndForgeRepro
+      BlockNo
+      SlotNo
+      Int
+      SizeInBytes
+      IOLike.DiffTime
+      Int64
+      Int64
+      IOLike.DiffTime
+      Int64
+      Int64

 instance (HasAnalysis blk, LedgerSupportsProtocol blk) => Show (TraceEvent blk) where
-  show (StartedEvent analysisName) = "Started " <> (show analysisName)
-  show DoneEvent = "Done"
-  show (BlockSlotEvent bn sn h) = intercalate "\t" $ [
-      show bn
-    , show sn
-    , show h
-    ]
-  show (CountTxOutputsEvent bn sn cumulative count) = intercalate "\t" $ [
-      show bn
-    , show sn
-    , "cumulative: " <> show cumulative
-    , "count: " <> show count
-    ]
-  show (EbbEvent ebb previous known) = intercalate "\t" [
-      "EBB: " <> show ebb
-    , "Prev: " <> show previous
-    , "Known: " <> show known
-    ]
-  show (CountedBlocksEvent counted) = "Counted " <> show counted <> " blocks."
-  show (HeaderSizeEvent bn sn hSz bSz) = intercalate "\t" $ [
-      show bn
-    , show sn
-    , "header size: " <> show hSz
-    , "block size: " <> show bSz
-    ]
-  show (MaxHeaderSizeEvent size) =
+  show (StartedEvent analysisName) = "Started " <> (show analysisName)
+  show DoneEvent = "Done"
+  show (BlockSlotEvent bn sn h) =
+    intercalate "\t" $
+      [ show bn
+      , show sn
+      , show h
+      ]
+  show (CountTxOutputsEvent bn sn cumulative count) =
+    intercalate "\t" $
+      [ show bn
+      , show sn
+      , "cumulative: " <> show cumulative
+      , "count: " <> show count
+      ]
+  show (EbbEvent ebb previous known) =
+    intercalate
+      "\t"
+      [ "EBB: " <> show ebb
+      , "Prev: " <> show previous
+      , "Known: " <> show known
+      ]
+  show (CountedBlocksEvent counted) = "Counted " <> show counted <> " blocks."
+  show (HeaderSizeEvent bn sn hSz bSz) =
+    intercalate "\t" $
+      [ show bn
+      , show sn
+      , "header size: " <> show hSz
+      , "block size: " <> show bSz
+      ]
+  show (MaxHeaderSizeEvent size) =
     "Maximum encountered header size = " <> show size
-  show (SnapshotStoredEvent slot) =
+  show (SnapshotStoredEvent slot) =
    "Snapshot stored at " <> show slot
   show (SnapshotWarningEvent requested actual) =
-    "Snapshot was created at " <> show actual <> " " <>
-    "because there was no block forged at requested " <> show requested
+    "Snapshot was created at "
+      <> show actual
+      <> " "
+      <> "because there was no block forged at requested "
+      <> show requested
   show (LedgerErrorEvent pt err) =
     "Applying block at " <> show pt <> " failed: " <> show err
-  show (BlockTxSizeEvent slot numBlocks txsSize) = intercalate "\t" [
-      show slot
-    , "Num txs in block = " <> show numBlocks
-    , "Total size of txs in block = " <> show txsSize
-    ]
-  show (BlockMempoolAndForgeRepro bno slot txsCount txsSize durTick mutTick gcTick durSnap mutSnap gcSnap) = intercalate "\t" [
-      show bno
-    , show slot
-    , "txsCount " <> show txsCount
-    , "txsSize " <> show txsSize
-    , "durTick " <> show durTick
-    , "mutTick " <> show mutTick
-    , "gcTick " <> show gcTick
-    , "durSnap " <> show durSnap
-    , "mutSnap " <> show mutSnap
-    , "gcSnap " <> show gcSnap
-    ]
-
+  show (BlockTxSizeEvent slot numBlocks txsSize) =
+    intercalate
+      "\t"
+      [ show slot
+      , "Num txs in block = " <> show numBlocks
+      , "Total size of txs in block = " <> show txsSize
+      ]
+  show (BlockMempoolAndForgeRepro bno slot txsCount txsSize durTick mutTick gcTick durSnap mutSnap gcSnap) =
+    intercalate
+      "\t"
+      [ show bno
+      , show slot
+      , "txsCount " <> show txsCount
+      , "txsSize " <> show txsSize
+      , "durTick " <> show durTick
+      , "mutTick " <> show mutTick
+      , "gcTick " <> show gcTick
+      , "durSnap " <> show durSnap
+      , "mutSnap " <> show mutSnap
+      , "gcSnap " <> show gcSnap
+      ]
 {-------------------------------------------------------------------------------
   Analysis: show block and slot number and hash for all blocks
-------------------------------------------------------------------------------}

 showSlotBlockNo :: forall blk. HasAnalysis blk => Analysis blk StartFromPoint
-showSlotBlockNo AnalysisEnv { db, registry, startFrom, limit, tracer } =
-    processAll_ db registry GetHeader startFrom limit process
-      >> pure Nothing
-  where
-    process :: Header blk -> IO ()
-    process hdr = traceWith tracer $
-        BlockSlotEvent (blockNo hdr) (blockSlot hdr) (headerHash hdr)
+showSlotBlockNo AnalysisEnv{db, registry, startFrom, limit, tracer} =
+  processAll_ db registry GetHeader startFrom limit process
+    >> pure Nothing
+ where
+  process :: Header blk -> IO ()
+  process hdr =
+    traceWith tracer $
+      BlockSlotEvent (blockNo hdr) (blockSlot hdr) (headerHash hdr)

 {-------------------------------------------------------------------------------
   Analysis: show total number of tx outputs per block
-------------------------------------------------------------------------------}

 countTxOutputs :: forall blk. HasAnalysis blk => Analysis blk StartFromPoint
-countTxOutputs AnalysisEnv { db, registry, startFrom, limit, tracer } = do
-    void $ processAll db registry GetBlock startFrom limit 0 process
-    pure Nothing
-  where
-    process :: Int -> blk -> IO Int
-    process cumulative blk = do
-        let cumulative' = cumulative + count
-            event = CountTxOutputsEvent (blockNo blk)
-                                        (blockSlot blk)
-                                        cumulative'
-                                        count
-        traceWith tracer event
-        return cumulative'
-      where
-        count = HasAnalysis.countTxOutputs blk
+countTxOutputs AnalysisEnv{db, registry, startFrom, limit, tracer} = do
+  void $ processAll db registry GetBlock startFrom limit 0 process
+  pure Nothing
+ where
+  process :: Int -> blk -> IO Int
+  process cumulative blk = do
+    let cumulative' = cumulative + count
+        event =
+          CountTxOutputsEvent
+            (blockNo blk)
+            (blockSlot blk)
+            cumulative'
+            count
+    traceWith tracer event
+    return cumulative'
+   where
+    count = HasAnalysis.countTxOutputs blk

 {-------------------------------------------------------------------------------
   Analysis: show the header size in bytes for all blocks
-------------------------------------------------------------------------------}

 showHeaderSize :: forall blk. HasAnalysis blk => Analysis blk StartFromPoint
-showHeaderSize AnalysisEnv { db, registry, startFrom, limit, tracer } = do
-    maxHeaderSize <-
-      processAll db registry ((,,) <$> GetHeader <*> GetHeaderSize <*> GetBlockSize) startFrom limit 0 process
-    traceWith tracer $ MaxHeaderSizeEvent maxHeaderSize
-    pure $ Just $ ResultMaxHeaderSize maxHeaderSize
-  where
-    process :: Word16 -> (Header blk, Word16, SizeInBytes) -> IO Word16
-    process maxHeaderSize (hdr, headerSize, blockSize) = do
-        let event = HeaderSizeEvent (blockNo hdr)
-                                    (blockSlot hdr)
-                                    headerSize
-                                    (getSizeInBytes blockSize)
-        traceWith tracer event
-        return $ maxHeaderSize `max` headerSize
+showHeaderSize AnalysisEnv{db, registry, startFrom, limit, tracer} = do
+  maxHeaderSize <-
+    processAll
+      db
+      registry
+      ((,,) <$> GetHeader <*> GetHeaderSize <*> GetBlockSize)
+      startFrom
+      limit
+      0
+      process
+  traceWith tracer $ MaxHeaderSizeEvent maxHeaderSize
+  pure $ Just $ ResultMaxHeaderSize maxHeaderSize
+ where
+  process :: Word16 -> (Header blk, Word16, SizeInBytes) -> IO Word16
+  process maxHeaderSize (hdr, headerSize, blockSize) = do
+    let event =
+          HeaderSizeEvent
+            (blockNo hdr)
+            (blockSlot hdr)
+            headerSize
+            (getSizeInBytes blockSize)
+    traceWith tracer event
+    return $ maxHeaderSize `max` headerSize

 {-------------------------------------------------------------------------------
   Analysis: show the total transaction sizes in bytes per block
-------------------------------------------------------------------------------}

 showBlockTxsSize :: forall blk. HasAnalysis blk => Analysis blk StartFromPoint
-showBlockTxsSize AnalysisEnv { db, registry, startFrom, limit, tracer } = do
-    processAll_ db registry GetBlock startFrom limit process
-    pure Nothing
-  where
-    process :: blk -> IO ()
-    process blk =
-        traceWith tracer $ BlockTxSizeEvent (blockSlot blk) numBlockTxs blockTxsSize
-      where
-        txSizes :: [SizeInBytes]
-        txSizes = HasAnalysis.blockTxSizes blk
-
-        numBlockTxs :: Int
-        numBlockTxs = length txSizes
-
-        blockTxsSize :: SizeInBytes
-        blockTxsSize = sum txSizes
+showBlockTxsSize AnalysisEnv{db, registry, startFrom, limit, tracer} = do
+  processAll_ db registry GetBlock startFrom limit process
+  pure Nothing
+ where
+  process :: blk -> IO ()
+  process blk =
+    traceWith tracer $ BlockTxSizeEvent (blockSlot blk) numBlockTxs blockTxsSize
+   where
+    txSizes :: [SizeInBytes]
+    txSizes = HasAnalysis.blockTxSizes blk
+
+    numBlockTxs :: Int
+    numBlockTxs = length txSizes
+
+    blockTxsSize :: SizeInBytes
+    blockTxsSize = sum txSizes

 {-------------------------------------------------------------------------------
   Analysis: show EBBs and their predecessors
-------------------------------------------------------------------------------}

 showEBBs :: forall blk. HasAnalysis blk => Analysis blk StartFromPoint
-showEBBs AnalysisEnv { db, registry, startFrom, limit, tracer } = do
-    processAll_ db registry GetBlock startFrom limit process
-    pure Nothing
-  where
-    process :: blk -> IO ()
-    process blk =
-        case blockIsEBB blk of
-          Just _epoch -> do
-            let known = Map.lookup
-                          (blockHash blk)
-                          (HasAnalysis.knownEBBs (Proxy @blk))
-                     == Just (blockPrevHash blk)
-                event = EbbEvent (blockHash blk) (blockPrevHash blk) known
-            traceWith tracer event
-          _otherwise -> return () -- Skip regular blocks
+showEBBs AnalysisEnv{db, registry, startFrom, limit, tracer} = do
+  processAll_ db registry GetBlock startFrom limit process
+  pure Nothing
+ where
+  process :: blk -> IO ()
+  process blk =
+    case blockIsEBB blk of
+      Just _epoch -> do
+        let known =
+              Map.lookup
                (blockHash blk)
                (HasAnalysis.knownEBBs (Proxy @blk))
                == Just (blockPrevHash blk)
            event = EbbEvent (blockHash blk) (blockPrevHash blk) known
        traceWith tracer event
      _otherwise -> return () -- Skip regular blocks

 {-------------------------------------------------------------------------------
   Analysis: store a ledger at a specific slot
-------------------------------------------------------------------------------}

 storeLedgerStateAt ::
-  forall blk .
-  ( LedgerSupportsProtocol blk
-  , HasAnalysis blk
-  )
-  => SlotNo
-  -> LedgerApplicationMode
-  -> Analysis blk StartFromLedgerState
+  forall blk.
+  ( LedgerSupportsProtocol blk
+  , HasAnalysis blk
+  ) =>
+  SlotNo ->
+  LedgerApplicationMode ->
+  Analysis blk StartFromLedgerState
 storeLedgerStateAt slotNo ledgerAppMode env = do
-    void $ processAllUntil db registry GetBlock startFrom limit () process
-    pure Nothing
-  where
-    AnalysisEnv { db, registry, startFrom, cfg, limit, tracer } = env
-    FromLedgerState initLedgerDB internal = startFrom
-
-    process :: () -> blk -> IO (NextStep, ())
-    process _ blk = do
-      let ledgerCfg = ExtLedgerCfg cfg
-      oldLedger <- IOLike.atomically $ LedgerDB.getVolatileTip initLedgerDB
-      frk <- LedgerDB.getForkerAtTarget initLedgerDB registry VolatileTip >>= \case
-        Left {} -> error "Unreachable, volatile tip MUST be in the LedgerDB"
+  void $ processAllUntil db registry GetBlock startFrom limit () process
+  pure Nothing
+ where
+  AnalysisEnv{db, registry, startFrom, cfg, limit, tracer} = env
+  FromLedgerState initLedgerDB internal = startFrom
+
+  process :: () -> blk -> IO (NextStep, ())
+  process _ blk = do
+    let ledgerCfg = ExtLedgerCfg cfg
+    oldLedger <- IOLike.atomically $ LedgerDB.getVolatileTip initLedgerDB
+    frk <-
+      LedgerDB.getForkerAtTarget initLedgerDB registry VolatileTip >>= \case
+        Left{} -> error "Unreachable, volatile tip MUST be in the LedgerDB"
        Right f -> pure f
-      tbs <- LedgerDB.forkerReadTables frk (getBlockKeySets blk)
-      LedgerDB.forkerClose frk
-      case runExcept $ tickThenXApply OmitLedgerEvents ledgerCfg blk (oldLedger `withLedgerTables` tbs) of
-        Right newLedger -> do
-          when (blockSlot blk >= slotNo) $ storeLedgerState newLedger
-          when (blockSlot blk > slotNo) $ issueWarning blk
-          when ((unBlockNo $ blockNo blk) `mod` 1000 == 0) $ reportProgress blk
-          LedgerDB.push internal newLedger
-          LedgerDB.tryFlush initLedgerDB
-          return (continue blk, ())
-        Left err -> do
-          traceWith tracer $ LedgerErrorEvent (blockPoint blk) err
-          storeLedgerState (oldLedger `withLedgerTables` tbs)
-          pure (Stop, ())
-
-    tickThenXApply = case ledgerAppMode of
-      LedgerReapply -> pure ...: tickThenReapply
-      LedgerApply -> tickThenApply
-
-    continue :: blk -> NextStep
-    continue blk
-      | blockSlot blk >= slotNo = Stop
-      | otherwise               = Continue
-
-    issueWarning blk = let event = SnapshotWarningEvent slotNo (blockSlot blk)
-                       in traceWith tracer event
-    reportProgress blk = let event = BlockSlotEvent (blockNo blk) (blockSlot blk) (blockHash blk)
-                         in traceWith tracer event
-
-    storeLedgerState :: ExtLedgerState blk mk -> IO ()
-    storeLedgerState ledgerState = case pointSlot pt of
-        NotOrigin slot -> do
-          LedgerDB.takeSnapshotNOW internal LedgerDB.TakeAtVolatileTip (Just "db-analyser")
-          traceWith tracer $ SnapshotStoredEvent slot
-        Origin -> pure ()
-      where
-        pt = headerStatePoint $ headerState ledgerState
+    tbs <- LedgerDB.forkerReadTables frk (getBlockKeySets blk)
+    LedgerDB.forkerClose frk
+    case runExcept $ tickThenXApply OmitLedgerEvents ledgerCfg blk (oldLedger `withLedgerTables` tbs) of
+      Right newLedger -> do
+        when (blockSlot blk >= slotNo) $ storeLedgerState newLedger
+        when (blockSlot blk > slotNo) $ issueWarning blk
+        when ((unBlockNo $ blockNo blk) `mod` 1000 == 0) $ reportProgress blk
+        LedgerDB.push internal newLedger
+        LedgerDB.tryFlush initLedgerDB
+        return (continue blk, ())
+      Left err -> do
+        traceWith tracer $ LedgerErrorEvent (blockPoint blk) err
+        storeLedgerState (oldLedger `withLedgerTables` tbs)
+        pure (Stop, ())
+
+  tickThenXApply = case ledgerAppMode of
+    LedgerReapply -> pure ...: tickThenReapply
+    LedgerApply -> tickThenApply
+
+  continue :: blk -> NextStep
+  continue blk
+    | blockSlot blk >= slotNo = Stop
+    | otherwise = Continue
+
+  issueWarning blk =
+    let event = SnapshotWarningEvent slotNo (blockSlot blk)
+     in traceWith tracer event
+  reportProgress blk =
+    let event = BlockSlotEvent (blockNo blk) (blockSlot blk) (blockHash blk)
+     in traceWith tracer event
+
+  storeLedgerState :: ExtLedgerState blk mk -> IO ()
+  storeLedgerState ledgerState = case pointSlot pt of
+    NotOrigin slot -> do
+      LedgerDB.takeSnapshotNOW internal LedgerDB.TakeAtVolatileTip (Just "db-analyser")
+      traceWith tracer $ SnapshotStoredEvent slot
+    Origin -> pure ()
+   where
+    pt = headerStatePoint $ headerState ledgerState

 countBlocks ::
-  forall blk .
-  ( HasAnalysis blk
-  )
-  => Analysis blk StartFromPoint
-countBlocks (AnalysisEnv { db, registry, startFrom, limit, tracer }) = do
-    counted <- processAll db registry (GetPure ()) startFrom limit 0 process
-    traceWith tracer $ CountedBlocksEvent counted
-    pure $ Just $ ResultCountBlock counted
-  where
-    process :: Int -> () -> IO Int
-    process count _ = pure $ count + 1
+  forall blk.
+  HasAnalysis blk =>
+  Analysis blk StartFromPoint
+countBlocks (AnalysisEnv{db, registry, startFrom, limit, tracer}) = do
+  counted <- processAll db registry (GetPure ()) startFrom limit 0 process
+  traceWith tracer $ CountedBlocksEvent counted
+  pure $ Just $ ResultCountBlock counted
+ where
+  process :: Int -> () -> IO Int
+  process count _ = pure $ count + 1
+
 {-------------------------------------------------------------------------------
   Analysis: check for ledger state thunks every n blocks
-------------------------------------------------------------------------------}

 checkNoThunksEvery ::
   forall blk.
-  ( HasAnalysis blk,
-    LedgerSupportsProtocol blk,
-    CanStowLedgerTables (LedgerState blk)
+  ( HasAnalysis blk
+  , LedgerSupportsProtocol blk
+  , CanStowLedgerTables (LedgerState blk)
   ) =>
   Word64 ->
   Analysis blk StartFromLedgerState
 checkNoThunksEvery
   nBlocks
-  (AnalysisEnv {db, registry, startFrom, cfg, limit}) = do
+  (AnalysisEnv{db, registry, startFrom, cfg, limit}) = do
    putStrLn $
      "Checking for thunks in each block where blockNo === 0 (mod " <> show nBlocks <> ")."
    void $ processAll db registry GetBlock startFrom limit () process
    pure Nothing
-  where
+   where
    FromLedgerState ldb internal = startFrom

    process :: () -> blk -> IO ()
    process _ blk = do
      oldLedger <- IOLike.atomically $ LedgerDB.getVolatileTip ldb
-      frk <- LedgerDB.getForkerAtTarget ldb registry VolatileTip >>= \case
-        Left {} -> error "Unreachable, volatile tip MUST be in the LedgerDB"
-        Right f -> pure f
+      frk <-
+        LedgerDB.getForkerAtTarget ldb registry VolatileTip >>= \case
+          Left{} -> error "Unreachable, volatile tip MUST be in the LedgerDB"
+          Right f -> pure f
      tbs <- LedgerDB.forkerReadTables frk (getBlockKeySets blk)
      LedgerDB.forkerClose frk
      let oldLedger' = oldLedger `withLedgerTables` tbs
-      let ledgerCfg     = ExtLedgerCfg cfg
+      let ledgerCfg = ExtLedgerCfg cfg
          appliedResult = tickThenApplyLedgerResult OmitLedgerEvents ledgerCfg blk oldLedger'
-          newLedger     = either (error . show) lrResult $ runExcept appliedResult
-          newLedger'    = applyDiffs oldLedger' newLedger
-          bn            = blockNo blk
-      when (unBlockNo bn `mod` nBlocks == 0 ) $ do
+          newLedger = either (error . show) lrResult $ runExcept appliedResult
+          newLedger' = applyDiffs oldLedger' newLedger
+          bn = blockNo blk
+      when (unBlockNo bn `mod` nBlocks == 0) $ do
        -- Check the new ledger state with new values stowed. This checks that
        -- the ledger has no thunks in its ledger state type.
        IOLike.evaluate (stowLedgerTables $ ledgerState newLedger') >>= checkNoThunks bn
@@ -490,7 +545,6 @@ checkNoThunksEvery
      LedgerDB.push internal newLedger
      LedgerDB.tryFlush ldb
-
    checkNoThunks :: NoThunksMK mk => BlockNo -> LedgerState blk mk -> IO ()
    checkNoThunks bn ls =
      noThunks ["--checkThunks"] ls >>= \case
@@ -506,38 +560,40 @@ checkNoThunksEvery
 traceLedgerProcessing ::
   forall blk.
-  ( HasAnalysis blk,
-    LedgerSupportsProtocol blk
+  ( HasAnalysis blk
+  , LedgerSupportsProtocol blk
   ) =>
   Analysis blk StartFromLedgerState
 traceLedgerProcessing
-  (AnalysisEnv {db, registry, startFrom, cfg, limit}) = do
+  (AnalysisEnv{db, registry, startFrom, cfg, limit}) = do
    void $ processAll db registry GetBlock startFrom limit () (process initLedger)
    pure Nothing
-  where
+   where
    FromLedgerState initLedger internal = startFrom

-    process
-      :: LedgerDB.LedgerDB' IO blk
-      -> ()
-      -> blk
-      -> IO ()
+    process ::
+      LedgerDB.LedgerDB' IO blk ->
+      () ->
+      blk ->
+      IO ()
    process ledgerDB _ blk = do
-      frk <- LedgerDB.getForkerAtTarget ledgerDB registry VolatileTip >>= \case
-        Left {} -> error "Unreachable, volatile tip MUST be in the LedgerDB"
-        Right f -> pure f
+      frk <-
+        LedgerDB.getForkerAtTarget ledgerDB registry VolatileTip >>= \case
+          Left{} -> error "Unreachable, volatile tip MUST be in the LedgerDB"
+          Right f -> pure f
      oldLedgerSt <- IOLike.atomically $ LedgerDB.forkerGetLedgerState frk
      oldLedgerTbs <- LedgerDB.forkerReadTables frk (getBlockKeySets blk)
      let oldLedger = oldLedgerSt `withLedgerTables` oldLedgerTbs
      LedgerDB.forkerClose frk
-      let ledgerCfg     = ExtLedgerCfg cfg
+      let ledgerCfg = ExtLedgerCfg cfg
          appliedResult = tickThenApplyLedgerResult OmitLedgerEvents ledgerCfg blk oldLedger
-          newLedger     = either (error . show) lrResult $ runExcept appliedResult
-          newLedger'    = applyDiffs oldLedger newLedger
-          traces        =
-            (HasAnalysis.emitTraces $
-              HasAnalysis.WithLedgerState blk (ledgerState oldLedger) (ledgerState newLedger'))
+          newLedger = either (error . show) lrResult $ runExcept appliedResult
+          newLedger' = applyDiffs oldLedger newLedger
+          traces =
+            ( HasAnalysis.emitTraces $
+                HasAnalysis.WithLedgerState blk (ledgerState oldLedger) (ledgerState newLedger')
+            )
      mapM_ Debug.traceMarkerIO traces

      LedgerDB.push internal newLedger
@@ -562,22 +618,23 @@ traceLedgerProcessing
 -------------------------------------------------------------------------------}

 benchmarkLedgerOps ::
-    forall blk.
-    ( LedgerSupportsProtocol blk
-    , HasAnalysis blk
-    )
-    => Maybe FilePath
-    -> LedgerApplicationMode
-    -> Analysis blk StartFromLedgerState
-benchmarkLedgerOps mOutfile ledgerAppMode AnalysisEnv {db, registry, startFrom, cfg, limit} = do
-  -- We default to CSV when no output file is provided (and thus the results are output to stdout).
-  outFormat <- F.getOutputFormat mOutfile
-
-  withFile mOutfile $ \outFileHandle -> do
-    F.writeMetadata outFileHandle outFormat ledgerAppMode
-    F.writeHeader outFileHandle outFormat
-
-    void $ processAll
+  forall blk.
+  ( LedgerSupportsProtocol blk
+  , HasAnalysis blk
+  ) =>
+  Maybe FilePath ->
+  LedgerApplicationMode ->
+  Analysis blk StartFromLedgerState
+benchmarkLedgerOps mOutfile ledgerAppMode AnalysisEnv{db, registry, startFrom, cfg, limit} = do
+  -- We default to CSV when no output file is provided (and thus the results are output to stdout).
+  outFormat <- F.getOutputFormat mOutfile
+
+  withFile mOutfile $ \outFileHandle -> do
+    F.writeMetadata outFileHandle outFormat ledgerAppMode
+    F.writeHeader outFileHandle outFormat
+
+    void $
+      processAll
        db
        registry
        ((,) <$> GetBlock <*> GetBlockSize)
@@ -585,193 +642,200 @@ benchmarkLedgerOps mOutfile ledgerAppMode AnalysisEnv {db, registry, startFrom,
        limit
        ()
        (process initLedger initial outFileHandle outFormat)
-    pure Nothing
-  where
-    ccfg = topLevelConfigProtocol cfg
-    lcfg = topLevelConfigLedger cfg
-
-    FromLedgerState initLedger initial = startFrom
-
-    process ::
-      LedgerDB.LedgerDB' IO blk
-      -> LedgerDB.TestInternals' IO blk
-      -> IO.Handle
-      -> F.OutputFormat
-      -> ()
-      -> (blk, SizeInBytes)
-      -> IO ()
-    process ledgerDB intLedgerDB outFileHandle outFormat _ (blk, sz) = do
-      (prevLedgerState, tables) <- LedgerDB.withPrivateTipForker ledgerDB $ \frk -> do
-        st <- IOLike.atomically $ LedgerDB.forkerGetLedgerState frk
-        tbs <- LedgerDB.forkerReadTables frk (getBlockKeySets blk)
-        pure (st, tbs)
-      prevRtsStats <- GC.getRTSStats
-      let
-        -- Compute how many nanoseconds the mutator used from the last
-        -- recorded 'elapsedTime' till the end of the execution of the given
-        -- action. This function forces the evaluation of its argument's
-        -- result.
-        time act = do
-          tPrev <- GC.mutator_elapsed_ns <$> GC.getRTSStats
-          !r <- act
-          tNow <- GC.mutator_elapsed_ns <$> GC.getRTSStats
-          pure (r, tNow - tPrev)
-
-      let slot = blockSlot blk
-      -- We do not use strictness annotation on the resulting tuples since
-      -- 'time' takes care of forcing the evaluation of its argument's result.
-      (ldgrView, tForecast) <- time $ forecast            slot prevLedgerState
-      (tkHdrSt,  tHdrTick)  <- time $ tickTheHeaderState  slot prevLedgerState ldgrView
-      (!newHeader, tHdrApp) <- time $ applyTheHeader      ldgrView tkHdrSt
-      (tkLdgrSt, tBlkTick)  <- time $ tickTheLedgerState  slot prevLedgerState
-      let !tkLdgrSt' = applyDiffs (prevLedgerState `withLedgerTables` tables) tkLdgrSt
-      (!newLedger, tBlkApp) <- time $ applyTheBlock       tkLdgrSt'
-
-      currentRtsStats <- GC.getRTSStats
-      let
-        currentMinusPrevious :: Num a => (GC.RTSStats -> a) -> a
-        currentMinusPrevious f = f currentRtsStats - f prevRtsStats
-        major_gcs = currentMinusPrevious GC.major_gcs
-        slotDataPoint =
-          DP.SlotDataPoint
-            { DP.slot            = realPointSlot rp
-            , DP.slotGap         = slot `slotCount` getTipSlot prevLedgerState
-            , DP.totalTime       = currentMinusPrevious GC.elapsed_ns `div` 1000
-            , DP.mut             = currentMinusPrevious GC.mutator_elapsed_ns `div` 1000
-            , DP.gc              = currentMinusPrevious GC.gc_elapsed_ns `div` 1000
-            , DP.majGcCount      = major_gcs
-            , DP.minGcCount      = currentMinusPrevious GC.gcs - major_gcs
-            , DP.allocatedBytes  = currentMinusPrevious GC.allocated_bytes
-            , DP.mut_forecast    = tForecast `div` 1000
-            , DP.mut_headerTick  = tHdrTick `div` 1000
-            , DP.mut_headerApply = tHdrApp `div` 1000
-            , DP.mut_blockTick   = tBlkTick `div` 1000
-            , DP.mut_blockApply  = tBlkApp `div` 1000
-            , DP.blockByteSize   = getSizeInBytes sz
-            , DP.blockStats      = DP.BlockStats $ HasAnalysis.blockStats blk
-            }
-
-        slotCount (SlotNo i) = \case
-          Slotting.Origin        -> i
-          Slotting.At (SlotNo j) -> i - j
-
-      F.writeDataPoint outFileHandle outFormat slotDataPoint
-
-      LedgerDB.push intLedgerDB $ ExtLedgerState newLedger newHeader
-      LedgerDB.tryFlush ledgerDB
-      where
-        rp = blockRealPoint blk
-
-        forecast ::
-          SlotNo
-          -> ExtLedgerState blk mk
-          -> IO (LedgerView (BlockProtocol blk))
-        forecast slot st = do
-          let forecaster = ledgerViewForecastAt lcfg (ledgerState st)
-          case runExcept $ forecastFor forecaster slot of
-            Left err -> fail $ "benchmark doesn't support headers beyond the forecast limit: " <> show rp <> " " <> show err
-            Right x -> pure x
-
-        tickTheHeaderState ::
-          SlotNo
-          -> ExtLedgerState blk mk
-          -> LedgerView (BlockProtocol blk)
-          -> IO (Ticked (HeaderState blk))
-        tickTheHeaderState slot st ledgerView =
-          pure $! tickHeaderState ccfg
-                                  ledgerView
-                                  slot
-                                  (headerState st)
-
-        applyTheHeader ::
-          LedgerView (BlockProtocol blk)
-          -> Ticked (HeaderState blk)
-          -> IO (HeaderState blk)
-        applyTheHeader ledgerView tickedHeaderState = case ledgerAppMode of
-          LedgerApply ->
-            case runExcept $ validateHeader cfg ledgerView (getHeader blk) tickedHeaderState of
-              Left err -> fail $ "benchmark doesn't support invalid headers: " <> show rp <> " " <> show err
-              Right x -> pure x
-          LedgerReapply ->
-            pure $! revalidateHeader cfg ledgerView (getHeader blk) tickedHeaderState
-
-        tickTheLedgerState ::
-          SlotNo
-          -> ExtLedgerState blk EmptyMK
-          -> IO (Ticked (LedgerState blk) DiffMK)
-        tickTheLedgerState slot st =
-          pure $ applyChainTick OmitLedgerEvents lcfg slot (ledgerState st)
-
-        applyTheBlock ::
-          TickedLedgerState blk ValuesMK
-          -> IO (LedgerState blk DiffMK)
-        applyTheBlock tickedLedgerSt = case ledgerAppMode of
-          LedgerApply ->
-            case runExcept (lrResult <$> applyBlockLedgerResult OmitLedgerEvents lcfg blk tickedLedgerSt) of
-              Left err -> fail $ "benchmark doesn't support invalid blocks: " <> show rp <> " " <> show err
-              Right x -> pure x
-          LedgerReapply ->
-            pure $! lrResult $ reapplyBlockLedgerResult OmitLedgerEvents lcfg blk tickedLedgerSt
+  pure Nothing
+ where
+  ccfg = topLevelConfigProtocol cfg
+  lcfg = topLevelConfigLedger cfg
+
+  FromLedgerState initLedger initial = startFrom
+
+  process ::
+    LedgerDB.LedgerDB' IO blk ->
+    LedgerDB.TestInternals' IO blk ->
+    IO.Handle ->
+    F.OutputFormat ->
+    () ->
+    (blk, SizeInBytes) ->
+    IO ()
+  process ledgerDB intLedgerDB outFileHandle outFormat _ (blk, sz) = do
+    (prevLedgerState, tables) <- LedgerDB.withPrivateTipForker ledgerDB $ \frk -> do
+      st <- IOLike.atomically $ LedgerDB.forkerGetLedgerState frk
+      tbs <- LedgerDB.forkerReadTables frk (getBlockKeySets blk)
+      pure (st, tbs)
+    prevRtsStats <- GC.getRTSStats
+    let
+      -- Compute how many nanoseconds the mutator used from the last
+      -- recorded 'elapsedTime' till the end of the execution of the given
+      -- action. This function forces the evaluation of its argument's
+      -- result.
+      time act = do
+        tPrev <- GC.mutator_elapsed_ns <$> GC.getRTSStats
+        !r <- act
+        tNow <- GC.mutator_elapsed_ns <$> GC.getRTSStats
+        pure (r, tNow - tPrev)

+    let slot = blockSlot blk
+    -- We do not use strictness annotation on the resulting tuples since
+    -- 'time' takes care of forcing the evaluation of its argument's result.
+    (ldgrView, tForecast) <- time $ forecast slot prevLedgerState
+    (tkHdrSt, tHdrTick) <- time $ tickTheHeaderState slot prevLedgerState ldgrView
+    (!newHeader, tHdrApp) <- time $ applyTheHeader ldgrView tkHdrSt
+    (tkLdgrSt, tBlkTick) <- time $ tickTheLedgerState slot prevLedgerState
+    let !tkLdgrSt' = applyDiffs (prevLedgerState `withLedgerTables` tables) tkLdgrSt
+    (!newLedger, tBlkApp) <- time $ applyTheBlock tkLdgrSt'
+
+    currentRtsStats <- GC.getRTSStats
+    let
+      currentMinusPrevious :: Num a => (GC.RTSStats -> a) -> a
+      currentMinusPrevious f = f currentRtsStats - f prevRtsStats
+      major_gcs = currentMinusPrevious GC.major_gcs
+      slotDataPoint =
+        DP.SlotDataPoint
+          { DP.slot = realPointSlot rp
+          , DP.slotGap = slot `slotCount` getTipSlot prevLedgerState
+          , DP.totalTime = currentMinusPrevious GC.elapsed_ns `div` 1000
+          , DP.mut = currentMinusPrevious GC.mutator_elapsed_ns `div` 1000
+          , DP.gc = currentMinusPrevious GC.gc_elapsed_ns `div` 1000
+          , DP.majGcCount = major_gcs
+          , DP.minGcCount = currentMinusPrevious GC.gcs - major_gcs
+          , DP.allocatedBytes = currentMinusPrevious GC.allocated_bytes
+          , DP.mut_forecast = tForecast `div` 1000
+          , DP.mut_headerTick = tHdrTick `div` 1000
+          , DP.mut_headerApply = tHdrApp `div` 1000
+          , DP.mut_blockTick = tBlkTick `div` 1000
+          , DP.mut_blockApply = tBlkApp `div` 1000
+          , DP.blockByteSize = getSizeInBytes sz
+          , DP.blockStats = DP.BlockStats $ HasAnalysis.blockStats blk
+          }

+      slotCount (SlotNo i) = \case
+        Slotting.Origin -> i
+        Slotting.At (SlotNo j) -> i - j
+
+    F.writeDataPoint outFileHandle outFormat slotDataPoint
+
+    LedgerDB.push intLedgerDB $ ExtLedgerState newLedger newHeader
+    LedgerDB.tryFlush ledgerDB
+   where
+    rp = blockRealPoint blk
+
+    forecast ::
+      SlotNo ->
+      ExtLedgerState blk mk ->
+      IO (LedgerView (BlockProtocol blk))
+    forecast slot st = do
+      let forecaster = ledgerViewForecastAt lcfg (ledgerState st)
+      case runExcept $ forecastFor forecaster slot of
+        Left err ->
+          fail $ "benchmark doesn't support headers beyond the forecast limit: " <> show rp <> " " <> show err
+        Right x -> pure x
+
+    tickTheHeaderState ::
+      SlotNo ->
+      ExtLedgerState blk mk ->
+      LedgerView (BlockProtocol blk) ->
+      IO (Ticked (HeaderState blk))
+    tickTheHeaderState slot st ledgerView =
+      pure $!
+        tickHeaderState
+          ccfg
+          ledgerView
+          slot
+          (headerState st)
+
+    applyTheHeader ::
+      LedgerView (BlockProtocol blk) ->
+      Ticked (HeaderState blk) ->
+      IO (HeaderState blk)
+    applyTheHeader ledgerView tickedHeaderState = case ledgerAppMode of
+      LedgerApply ->
+        case runExcept $ validateHeader cfg ledgerView (getHeader blk) tickedHeaderState of
+          Left err -> fail $ "benchmark doesn't support invalid headers: " <> show rp <> " " <> show err
+          Right x -> pure x
+      LedgerReapply ->
+        pure $! revalidateHeader cfg ledgerView (getHeader blk) tickedHeaderState
+
+    tickTheLedgerState ::
+      SlotNo ->
+      ExtLedgerState blk EmptyMK ->
+      IO (Ticked (LedgerState blk) DiffMK)
+    tickTheLedgerState slot st =
+      pure $ applyChainTick OmitLedgerEvents lcfg slot (ledgerState st)
+
+    applyTheBlock ::
+      TickedLedgerState blk ValuesMK ->
+      IO (LedgerState blk DiffMK)
+    applyTheBlock tickedLedgerSt = case ledgerAppMode of
+      LedgerApply ->
+        case runExcept (lrResult <$> applyBlockLedgerResult OmitLedgerEvents lcfg blk tickedLedgerSt) of
+          Left err -> fail $ "benchmark doesn't support invalid blocks: " <> show rp <> " " <> show err
+          Right x -> pure x
+      LedgerReapply ->
+        pure $! lrResult $ reapplyBlockLedgerResult OmitLedgerEvents lcfg blk tickedLedgerSt

 withFile :: Maybe FilePath -> (IO.Handle -> IO r) -> IO r
 withFile (Just outfile) = IO.withFile outfile IO.WriteMode
-withFile Nothing        = \f -> f IO.stdout
+withFile Nothing = \f -> f IO.stdout

 {-------------------------------------------------------------------------------
   Analysis: trace ledger state metrics
-------------------------------------------------------------------------------}

 getBlockApplicationMetrics ::
-  forall blk .
-  ( HasAnalysis blk
-  , LedgerSupportsProtocol blk
-  )
-  => NumberOfBlocks -> Maybe FilePath -> Analysis blk StartFromLedgerState
+  forall blk.
+  ( HasAnalysis blk
+  , LedgerSupportsProtocol blk
+  ) =>
+  NumberOfBlocks -> Maybe FilePath -> Analysis blk StartFromLedgerState
 getBlockApplicationMetrics (NumberOfBlocks nrBlocks) mOutFile env = do
-    withFile mOutFile $ \outFileHandle -> do
-      writeHeaderLine outFileHandle separator (HasAnalysis.blockApplicationMetrics @blk)
-      void $ processAll db registry GetBlock startFrom limit () (process initLedger internal outFileHandle)
-      pure Nothing
-  where
-    separator = ", "
-
-    AnalysisEnv {db, registry, startFrom, cfg, limit } = env
-    FromLedgerState initLedger internal = startFrom
-
-    process ::
-      LedgerDB.LedgerDB' IO blk
-      -> LedgerDB.TestInternals' IO blk
-      -> IO.Handle
-      -> ()
-      -> blk
-      -> IO ()
-    process ledgerDB intLedgerDB outFileHandle _ blk = do
-      frk <- LedgerDB.getForkerAtTarget ledgerDB registry VolatileTip >>= \case
-        Left {} -> error "Unreachable, volatile tip MUST be in the LedgerDB"
+  withFile mOutFile $ \outFileHandle -> do
+    writeHeaderLine outFileHandle separator (HasAnalysis.blockApplicationMetrics @blk)
+    void $
+      processAll db registry GetBlock startFrom limit () (process initLedger internal outFileHandle)
+    pure Nothing
+ where
+  separator = ", "
+
+  AnalysisEnv{db, registry, startFrom, cfg, limit} = env
+  FromLedgerState initLedger internal = startFrom
+
+  process ::
+    LedgerDB.LedgerDB' IO blk ->
+    LedgerDB.TestInternals' IO blk ->
+    IO.Handle ->
+    () ->
+    blk ->
+    IO ()
+  process ledgerDB intLedgerDB outFileHandle _ blk = do
+    frk <-
+      LedgerDB.getForkerAtTarget ledgerDB registry VolatileTip >>= \case
+        Left{} -> error "Unreachable, volatile tip MUST be in the LedgerDB"
        Right f -> pure f
-      oldLedgerSt <- IOLike.atomically $ LedgerDB.forkerGetLedgerState frk
-      oldLedgerTbs <- LedgerDB.forkerReadTables frk (getBlockKeySets blk)
-      let oldLedger = oldLedgerSt `withLedgerTables` oldLedgerTbs
-      LedgerDB.forkerClose frk
+    oldLedgerSt <- IOLike.atomically $ LedgerDB.forkerGetLedgerState frk
+    oldLedgerTbs <- LedgerDB.forkerReadTables frk (getBlockKeySets blk)
+    let oldLedger = oldLedgerSt `withLedgerTables` oldLedgerTbs
+    LedgerDB.forkerClose frk

-      let nextLedgerSt = tickThenReapply OmitLedgerEvents (ExtLedgerCfg cfg) blk oldLedger
-      when (unBlockNo (blockNo blk) `mod` nrBlocks == 0) $ do
-        let blockApplication =
-              HasAnalysis.WithLedgerState blk
-                                          (ledgerState oldLedger)
-                                          (ledgerState $ applyDiffs oldLedger nextLedgerSt)
+    let nextLedgerSt = tickThenReapply OmitLedgerEvents (ExtLedgerCfg cfg) blk oldLedger
+    when (unBlockNo (blockNo blk) `mod` nrBlocks == 0) $ do
+      let blockApplication =
+            HasAnalysis.WithLedgerState
+              blk
+              (ledgerState oldLedger)
+              (ledgerState $ applyDiffs oldLedger nextLedgerSt)

-        computeAndWriteLine outFileHandle
-                            separator
-                            (HasAnalysis.blockApplicationMetrics @blk)
-                            blockApplication
+      computeAndWriteLine
+        outFileHandle
+        separator
+        (HasAnalysis.blockApplicationMetrics @blk)
+        blockApplication

-        IO.hFlush outFileHandle
+      IO.hFlush outFileHandle

-      LedgerDB.push intLedgerDB nextLedgerSt
-      LedgerDB.tryFlush ledgerDB
+    LedgerDB.push intLedgerDB nextLedgerSt
+    LedgerDB.tryFlush ledgerDB

-      pure ()
+    pure ()

 {-------------------------------------------------------------------------------
   Analysis: reforge the blocks, via the mempool
@@ -790,39 +854,43 @@ reproMempoolForge ::
   Int -> Analysis blk StartFromLedgerState
 reproMempoolForge numBlks env = do
-  howManyBlocks <- case numBlks of
-    1 -> pure ReproMempoolForgeOneBlk
-    2 -> pure ReproMempoolForgeTwoBlks
-    _ -> fail $ "--repro-mempool-and-forge only supports"
-             <> "1 or 2 blocks at a time, not " <> show numBlks
-
-  mempool <- Mempool.openMempoolWithoutSyncThread
-    Mempool.LedgerInterface {
-        Mempool.getCurrentLedgerState = ledgerState <$> LedgerDB.getVolatileTip ledgerDB
+  howManyBlocks <- case numBlks of
+    1 -> pure ReproMempoolForgeOneBlk
+    2 -> pure ReproMempoolForgeTwoBlks
+    _ -> 
+ fail $ + "--repro-mempool-and-forge only supports" + <> "1 or 2 blocks at a time, not " + <> show numBlks + + mempool <- + Mempool.openMempoolWithoutSyncThread + Mempool.LedgerInterface + { Mempool.getCurrentLedgerState = ledgerState <$> LedgerDB.getVolatileTip ledgerDB , Mempool.getLedgerTablesAtFor = \pt keys -> do frk <- LedgerDB.getForkerAtTarget ledgerDB registry (SpecificPoint pt) case frk of Left _ -> pure Nothing Right fr -> do - tbs <- Just . castLedgerTables - <$> LedgerDB.forkerReadTables fr (castLedgerTables keys) + tbs <- + Just . castLedgerTables + <$> LedgerDB.forkerReadTables fr (castLedgerTables keys) LedgerDB.forkerClose fr pure tbs - - } + } lCfg -- one mebibyte should generously accomodate two blocks' worth of txs - ( Mempool.MempoolCapacityBytesOverride - $ LedgerSupportsMempool.ByteSize32 - $ 1024*1024 + ( Mempool.MempoolCapacityBytesOverride $ + LedgerSupportsMempool.ByteSize32 $ + 1024 * 1024 ) nullTracer - void $ processAll db registry GetBlock startFrom limit Nothing (process howManyBlocks mempool) - pure Nothing - where - AnalysisEnv { - cfg + void $ processAll db registry GetBlock startFrom limit Nothing (process howManyBlocks mempool) + pure Nothing + where + AnalysisEnv + { cfg , startFrom = startFrom@(FromLedgerState ledgerDB intLedgerDB) , db , registry @@ -830,29 +898,31 @@ reproMempoolForge numBlks env = do , tracer } = env - lCfg :: LedgerConfig blk - lCfg = configLedger cfg - - timed :: IO a -> IO (a, IOLike.DiffTime, Int64, Int64) - timed m = do - before <- IOLike.getMonotonicTime - prevRtsStats <- GC.getRTSStats - !x <- m - newRtsStats <- GC.getRTSStats - after <- IOLike.getMonotonicTime - pure ( x - , after `IOLike.diffTime` before - , (GC.mutator_elapsed_ns newRtsStats - GC.mutator_elapsed_ns prevRtsStats) `div` 1000 - , (GC.gc_elapsed_ns newRtsStats - GC.gc_elapsed_ns prevRtsStats) `div` 1000 - ) - - process - :: ReproMempoolForgeHowManyBlks - -> Mempool.Mempool IO blk - -> Maybe blk - -> blk - -> IO (Maybe blk) - process 
howManyBlocks mempool mbBlk blk' = (\() -> Just blk') <$> do + lCfg :: LedgerConfig blk + lCfg = configLedger cfg + + timed :: IO a -> IO (a, IOLike.DiffTime, Int64, Int64) + timed m = do + before <- IOLike.getMonotonicTime + prevRtsStats <- GC.getRTSStats + !x <- m + newRtsStats <- GC.getRTSStats + after <- IOLike.getMonotonicTime + pure + ( x + , after `IOLike.diffTime` before + , (GC.mutator_elapsed_ns newRtsStats - GC.mutator_elapsed_ns prevRtsStats) `div` 1000 + , (GC.gc_elapsed_ns newRtsStats - GC.gc_elapsed_ns prevRtsStats) `div` 1000 + ) + + process :: + ReproMempoolForgeHowManyBlks -> + Mempool.Mempool IO blk -> + Maybe blk -> + blk -> + IO (Maybe blk) + process howManyBlocks mempool mbBlk blk' = + (\() -> Just blk') <$> do -- add this block's transactions to the mempool do results <- Mempool.addTxs mempool $ LedgerSupportsMempool.extractTxs blk' @@ -861,20 +931,21 @@ reproMempoolForge numBlks env = do | rej@(Mempool.MempoolTxRejected tx _) <- results ] unless (null rejs) $ do - fail $ unlines $ - ["Mempool rejected some of the on-chain txs: " <> show rejs] - <> case howManyBlocks of - ReproMempoolForgeOneBlk -> [] - ReproMempoolForgeTwoBlks -> - [ "This might be expected, see the db-analyser README." - , "Consider trying again with `--repro-mempool-and-forge 1`." - ] + fail $ + unlines $ + ["Mempool rejected some of the on-chain txs: " <> show rejs] + <> case howManyBlocks of + ReproMempoolForgeOneBlk -> [] + ReproMempoolForgeTwoBlks -> + [ "This might be expected, see the db-analyser README." + , "Consider trying again with `--repro-mempool-and-forge 1`." 
+ ] let scrutinee = case howManyBlocks of - ReproMempoolForgeOneBlk -> Just blk' + ReproMempoolForgeOneBlk -> Just blk' ReproMempoolForgeTwoBlks -> mbBlk case scrutinee of - Nothing -> pure () + Nothing -> pure () Just blk -> do LedgerDB.withPrivateTipForker ledgerDB $ \forker -> do st <- IOLike.atomically $ LedgerDB.forkerGetLedgerState forker @@ -884,13 +955,16 @@ reproMempoolForge numBlks env = do -- -- Primary caveat: that thread's mempool may have had more transactions in it. let slot = blockSlot blk - (ticked, durTick, mutTick, gcTick) <- timed $ IOLike.evaluate $ - applyChainTick OmitLedgerEvents lCfg slot (ledgerState st) + (ticked, durTick, mutTick, gcTick) <- + timed $ + IOLike.evaluate $ + applyChainTick OmitLedgerEvents lCfg slot (ledgerState st) ((), durSnap, mutSnap, gcSnap) <- timed $ do - snap <- Mempool.getSnapshotFor mempool slot ticked $ + snap <- + Mempool.getSnapshotFor mempool slot ticked $ fmap castLedgerTables . LedgerDB.forkerReadTables forker . castLedgerTables - pure $ length (Mempool.snapshotTxs snap) `seq` Mempool.snapshotStateHash snap `seq` () + pure $ length (Mempool.snapshotTxs snap) `seq` Mempool.snapshotStateHash snap `seq` () let sizes = HasAnalysis.blockTxSizes blk traceWith tracer $ @@ -931,60 +1005,64 @@ decreaseLimit (Limit n) = Just . Limit $ n - 1 data NextStep = Continue | Stop - processAllUntil :: - forall blk b startFrom st. (HasHeader blk, HasAnnTip blk) - => ImmutableDB IO blk - -> ResourceRegistry IO - -> BlockComponent blk b - -> AnalysisStartFrom IO blk startFrom - -> Limit - -> st - -> (st -> b -> IO (NextStep, st)) - -> IO st + forall blk b startFrom st. 
+ (HasHeader blk, HasAnnTip blk) => + ImmutableDB IO blk -> + ResourceRegistry IO -> + BlockComponent blk b -> + AnalysisStartFrom IO blk startFrom -> + Limit -> + st -> + (st -> b -> IO (NextStep, st)) -> + IO st processAllUntil immutableDB registry blockComponent startFrom limit initState callback = do - st <- startFromPoint startFrom - itr <- ImmutableDB.streamAfterKnownPoint + st <- startFromPoint startFrom + itr <- + ImmutableDB.streamAfterKnownPoint immutableDB registry blockComponent st - go itr limit initState - where - go :: ImmutableDB.Iterator IO blk b -> Limit -> st -> IO st - go itr lt !st = case decreaseLimit lt of - Nothing -> return st - Just decreasedLimit -> do - itrResult <- ImmutableDB.iteratorNext itr - case itrResult of - ImmutableDB.IteratorExhausted -> return st - ImmutableDB.IteratorResult b -> callback st b >>= \case + go itr limit initState + where + go :: ImmutableDB.Iterator IO blk b -> Limit -> st -> IO st + go itr lt !st = case decreaseLimit lt of + Nothing -> return st + Just decreasedLimit -> do + itrResult <- ImmutableDB.iteratorNext itr + case itrResult of + ImmutableDB.IteratorExhausted -> return st + ImmutableDB.IteratorResult b -> + callback st b >>= \case (Continue, nst) -> go itr decreasedLimit nst - (Stop, nst) -> return nst + (Stop, nst) -> return nst processAll :: - forall blk b startFrom st. (HasHeader blk, HasAnnTip blk) - => ImmutableDB IO blk - -> ResourceRegistry IO - -> BlockComponent blk b - -> AnalysisStartFrom IO blk startFrom - -> Limit - -> st - -> (st -> b -> IO st) - -> IO st + forall blk b startFrom st. 
+ (HasHeader blk, HasAnnTip blk) => + ImmutableDB IO blk -> + ResourceRegistry IO -> + BlockComponent blk b -> + AnalysisStartFrom IO blk startFrom -> + Limit -> + st -> + (st -> b -> IO st) -> + IO st processAll db rr blockComponent startFrom limit initSt cb = processAllUntil db rr blockComponent startFrom limit initSt callback - where - callback st b = (Continue, ) <$> cb st b + where + callback st b = (Continue,) <$> cb st b processAll_ :: - forall blk b startFrom. (HasHeader blk, HasAnnTip blk) - => ImmutableDB IO blk - -> ResourceRegistry IO - -> BlockComponent blk b - -> AnalysisStartFrom IO blk startFrom - -> Limit - -> (b -> IO ()) - -> IO () + forall blk b startFrom. + (HasHeader blk, HasAnnTip blk) => + ImmutableDB IO blk -> + ResourceRegistry IO -> + BlockComponent blk b -> + AnalysisStartFrom IO blk startFrom -> + Limit -> + (b -> IO ()) -> + IO () processAll_ db registry blockComponent startFrom limit callback = - processAll db registry blockComponent startFrom limit () (const callback) + processAll db registry blockComponent startFrom limit () (const callback) diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/FileWriting.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/FileWriting.hs index 4db5fb6464..2154f108c0 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/FileWriting.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/FileWriting.hs @@ -1,28 +1,30 @@ {-# LANGUAGE OverloadedStrings #-} -module Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.FileWriting ( - -- * Output format +module Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.FileWriting + ( -- * Output format OutputFormat , getOutputFormat + -- * File writing functions , writeDataPoint , writeHeader , writeMetadata 
) where -import Cardano.Slotting.Slot (SlotNo (unSlotNo)) -import qualified Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.Metadata as BenchmarkLedgerOps.Metadata -import Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.SlotDataPoint - (SlotDataPoint) -import qualified Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.SlotDataPoint as DP -import qualified Cardano.Tools.DBAnalyser.CSV as CSV -import Cardano.Tools.DBAnalyser.Types (LedgerApplicationMode) -import Data.Aeson as Aeson -import qualified Data.ByteString.Lazy as BSL -import System.FilePath.Posix (takeExtension) -import qualified System.IO as IO -import qualified TextBuilder as Builder -import TextBuilder (TextBuilder, decimal) +import Cardano.Slotting.Slot (SlotNo (unSlotNo)) +import Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.Metadata qualified as BenchmarkLedgerOps.Metadata +import Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.SlotDataPoint + ( SlotDataPoint + ) +import Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.SlotDataPoint qualified as DP +import Cardano.Tools.DBAnalyser.CSV qualified as CSV +import Cardano.Tools.DBAnalyser.Types (LedgerApplicationMode) +import Data.Aeson as Aeson +import Data.ByteString.Lazy qualified as BSL +import System.FilePath.Posix (takeExtension) +import System.IO qualified as IO +import TextBuilder (TextBuilder, decimal) +import TextBuilder qualified as Builder {------------------------------------------------------------------------------- Output format @@ -40,14 +42,13 @@ data OutputFormat = CSV | JSON -- choice. getOutputFormat :: Maybe FilePath -> IO OutputFormat getOutputFormat (Just filePath) = - case takeExtension filePath of - ".csv" -> pure CSV + case takeExtension filePath of + ".csv" -> pure CSV ".json" -> pure JSON - ext -> do + ext -> do IO.hPutStr IO.stderr $ "Unsupported extension '" <> ext <> "'. Defaulting to CSV." 
pure CSV -getOutputFormat Nothing = pure CSV - +getOutputFormat Nothing = pure CSV {------------------------------------------------------------------------------- File writing functions @@ -67,31 +68,33 @@ csvSeparator = "\t" -- > "slot slotGap totalTime" ... -- -- then the third value written by 'writeDataPoint' will correspond to 'totalTime'. --- writeHeader :: IO.Handle -> OutputFormat -> IO () -writeHeader outFileHandle CSV = - CSV.writeHeaderLine outFileHandle (CSV.Separator csvSeparator) dataPointCsvBuilder -writeHeader _ JSON = pure () +writeHeader outFileHandle CSV = + CSV.writeHeaderLine outFileHandle (CSV.Separator csvSeparator) dataPointCsvBuilder +writeHeader _ JSON = pure () -- | NOTE: This function is not thread safe. writeDataPoint :: - IO.Handle - -> OutputFormat - -> SlotDataPoint - -> IO () -writeDataPoint outFileHandle CSV slotDataPoint = - CSV.computeAndWriteLinePure - outFileHandle (CSV.Separator csvSeparator) dataPointCsvBuilder slotDataPoint + IO.Handle -> + OutputFormat -> + SlotDataPoint -> + IO () +writeDataPoint outFileHandle CSV slotDataPoint = + CSV.computeAndWriteLinePure + outFileHandle + (CSV.Separator csvSeparator) + dataPointCsvBuilder + slotDataPoint writeDataPoint outFileHandle JSON slotDataPoint = - BSL.hPut outFileHandle $ Aeson.encode slotDataPoint + BSL.hPut outFileHandle $ Aeson.encode slotDataPoint -- | Write metadata to a JSON file if this is the selected -- format. Perform a no-op otherwise. writeMetadata :: IO.Handle -> OutputFormat -> LedgerApplicationMode -> IO () writeMetadata _outFileHandle CSV _lgrAppMode = pure () -writeMetadata outFileHandle JSON lgrAppMode = +writeMetadata outFileHandle JSON lgrAppMode = BenchmarkLedgerOps.Metadata.getMetadata lgrAppMode - >>= BSL.hPut outFileHandle . Aeson.encode + >>= BSL.hPut outFileHandle . 
Aeson.encode {------------------------------------------------------------------------------- Operations to assist CSV printing @@ -99,19 +102,19 @@ writeMetadata outFileHandle JSON lgrAppMode = dataPointCsvBuilder :: [(TextBuilder, SlotDataPoint -> TextBuilder)] dataPointCsvBuilder = - [ ("slot" , decimal . unSlotNo . DP.slot) - , ("slotGap" , decimal . DP.slotGap) - , ("totalTime" , decimal . DP.totalTime) - , ("mut" , decimal . DP.mut) - , ("gc" , decimal . DP.gc) - , ("majGcCount" , decimal . DP.majGcCount) - , ("minGcCount" , decimal . DP.minGcCount) - , ("allocatedBytes" , decimal . DP.allocatedBytes) - , ("mut_forecast" , decimal . DP.mut_forecast) - , ("mut_headerTick" , decimal . DP.mut_headerTick) - , ("mut_headerApply" , decimal . DP.mut_headerApply) - , ("mut_blockTick" , decimal . DP.mut_blockTick) - , ("mut_blockApply" , decimal . DP.mut_blockApply) - , ("blockBytes" , decimal . DP.blockByteSize) - , ("...era-specific stats" , Builder.intercalate csvSeparator . DP.unBlockStats . DP.blockStats) - ] + [ ("slot", decimal . unSlotNo . DP.slot) + , ("slotGap", decimal . DP.slotGap) + , ("totalTime", decimal . DP.totalTime) + , ("mut", decimal . DP.mut) + , ("gc", decimal . DP.gc) + , ("majGcCount", decimal . DP.majGcCount) + , ("minGcCount", decimal . DP.minGcCount) + , ("allocatedBytes", decimal . DP.allocatedBytes) + , ("mut_forecast", decimal . DP.mut_forecast) + , ("mut_headerTick", decimal . DP.mut_headerTick) + , ("mut_headerApply", decimal . DP.mut_headerApply) + , ("mut_blockTick", decimal . DP.mut_blockTick) + , ("mut_blockApply", decimal . DP.mut_blockApply) + , ("blockBytes", decimal . DP.blockByteSize) + , ("...era-specific stats", Builder.intercalate csvSeparator . DP.unBlockStats . 
DP.blockStats) + ] diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/Metadata.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/Metadata.hs index d55a31ac1f..95d03d6776 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/Metadata.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/Metadata.hs @@ -9,35 +9,35 @@ -- - OS and architecture. -- -- See 'Metadata' and 'getMetadata' for more details. --- -module Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.Metadata ( - Metadata (..) +module Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.Metadata + ( Metadata (..) , getMetadata ) where -import Cardano.Tools.DBAnalyser.Types (LedgerApplicationMode (..)) -import Cardano.Tools.GitRev (gitRev) -import Data.Aeson (ToJSON) -import qualified Data.Aeson as Aeson -import qualified Data.Text as T -import qualified Data.Version -import Data.Word (Word32, Word64) -import GHC.Generics (Generic) -import qualified GHC.RTS.Flags as RTS -import qualified System.Info +import Cardano.Tools.DBAnalyser.Types (LedgerApplicationMode (..)) +import Cardano.Tools.GitRev (gitRev) +import Data.Aeson (ToJSON) +import Data.Aeson qualified as Aeson +import Data.Text qualified as T +import Data.Version qualified +import Data.Word (Word32, Word64) +import GHC.Generics (Generic) +import GHC.RTS.Flags qualified as RTS +import System.Info qualified -data Metadata = Metadata { - rtsGCMaxStkSize :: Word32 - , rtsGCMaxHeapSize :: Word32 +data Metadata = Metadata + { rtsGCMaxStkSize :: Word32 + , rtsGCMaxHeapSize :: Word32 , rtsConcurrentCtxtSwitchTime :: Word64 - , rtsParNCapabilities :: Word32 - , compilerVersion :: String - , compilerName :: String - , operatingSystem :: String - , machineArchitecture :: String - , gitRevison :: 
String - , ledgerApplicationMode :: String - } deriving (Generic, Show, Eq) + , rtsParNCapabilities :: Word32 + , compilerVersion :: String + , compilerName :: String + , operatingSystem :: String + , machineArchitecture :: String + , gitRevison :: String + , ledgerApplicationMode :: String + } + deriving (Generic, Show, Eq) instance ToJSON Metadata where toEncoding = Aeson.genericToEncoding Aeson.defaultOptions @@ -45,17 +45,18 @@ instance ToJSON Metadata where getMetadata :: LedgerApplicationMode -> IO Metadata getMetadata lgrAppMode = do rtsFlags <- RTS.getRTSFlags - pure $ Metadata { - rtsGCMaxStkSize = RTS.maxStkSize $ RTS.gcFlags rtsFlags - , rtsGCMaxHeapSize = RTS.maxHeapSize $ RTS.gcFlags rtsFlags - , rtsConcurrentCtxtSwitchTime = RTS.ctxtSwitchTime $ RTS.concurrentFlags rtsFlags - , rtsParNCapabilities = RTS.nCapabilities $ RTS.parFlags rtsFlags - , compilerVersion = Data.Version.showVersion System.Info.compilerVersion - , compilerName = System.Info.compilerName - , operatingSystem = System.Info.os - , machineArchitecture = System.Info.arch - , gitRevison = T.unpack gitRev - , ledgerApplicationMode = case lgrAppMode of - LedgerApply -> "full-application" - LedgerReapply -> "reapplication" - } + pure $ + Metadata + { rtsGCMaxStkSize = RTS.maxStkSize $ RTS.gcFlags rtsFlags + , rtsGCMaxHeapSize = RTS.maxHeapSize $ RTS.gcFlags rtsFlags + , rtsConcurrentCtxtSwitchTime = RTS.ctxtSwitchTime $ RTS.concurrentFlags rtsFlags + , rtsParNCapabilities = RTS.nCapabilities $ RTS.parFlags rtsFlags + , compilerVersion = Data.Version.showVersion System.Info.compilerVersion + , compilerName = System.Info.compilerName + , operatingSystem = System.Info.os + , machineArchitecture = System.Info.arch + , gitRevison = T.unpack gitRev + , ledgerApplicationMode = case lgrAppMode of + LedgerApply -> "full-application" + LedgerReapply -> "reapplication" + } diff --git 
a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/SlotDataPoint.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/SlotDataPoint.hs index 15f8f69f4b..081c327249 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/SlotDataPoint.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Analysis/BenchmarkLedgerOps/SlotDataPoint.hs @@ -1,18 +1,18 @@ {-# LANGUAGE DeriveGeneric #-} -module Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.SlotDataPoint ( - BlockStats (BlockStats, unBlockStats) +module Cardano.Tools.DBAnalyser.Analysis.BenchmarkLedgerOps.SlotDataPoint + ( BlockStats (BlockStats, unBlockStats) , SlotDataPoint (..) ) where -import Cardano.Slotting.Slot (SlotNo) -import Data.Aeson as Aeson -import qualified Data.Aeson.Encoding as Aeson.Encoding -import Data.Int (Int64) -import Data.Word (Word32, Word64) -import GHC.Generics (Generic) -import qualified TextBuilder as Builder -import TextBuilder (TextBuilder) +import Cardano.Slotting.Slot (SlotNo) +import Data.Aeson as Aeson +import Data.Aeson.Encoding qualified as Aeson.Encoding +import Data.Int (Int64) +import Data.Word (Word32, Word64) +import GHC.Generics (Generic) +import TextBuilder (TextBuilder) +import TextBuilder qualified as Builder -- | Information about the time spent processing the block corresponding to -- 'slot', divided into the five major operations: @@ -25,42 +25,43 @@ import TextBuilder (TextBuilder) -- -- It is up to the user of a slot data point to decide which units the data -- represent (eg milliseconds, nanoseconds, etc) -data SlotDataPoint = - SlotDataPoint - { -- | Slot in which the 5 ledger operations were applied. - slot :: !SlotNo - -- | Gap to the previous slot. - , slotGap :: !Word64 - -- | Total time spent in the 5 ledger operations at 'slot'. 
- , totalTime :: !Int64 - -- | Time spent by the mutator while performing the 5 ledger operations - -- at 'slot'. - , mut :: !Int64 - -- | Time spent in garbage collection while performing the 5 ledger - -- operations at 'slot'. - , gc :: !Int64 - -- | Total number of __major__ garbage collections that took place while - -- performing the 5 ledger operations at 'slot'. - , majGcCount :: !Word32 - -- | Total number of __minor__ garbage collections that took place while - -- performing the 5 ledger operations at 'slot'. - , minGcCount :: !Word32 - -- | Allocated bytes while performing the 5 ledger operations - -- at 'slot'. - , allocatedBytes :: !Word64 - -- | Difference of the GC.mutator_elapsed_ns field when computing the - -- forecast. - , mut_forecast :: !Int64 - , mut_headerTick :: !Int64 - , mut_headerApply :: !Int64 - , mut_blockTick :: !Int64 - , mut_blockApply :: !Int64 - , blockByteSize :: !Word32 - -- | Free-form information about the block. - , blockStats :: !BlockStats - } deriving (Generic, Show) +data SlotDataPoint + = SlotDataPoint + { slot :: !SlotNo + -- ^ Slot in which the 5 ledger operations were applied. + , slotGap :: !Word64 + -- ^ Gap to the previous slot. + , totalTime :: !Int64 + -- ^ Total time spent in the 5 ledger operations at 'slot'. + , mut :: !Int64 + -- ^ Time spent by the mutator while performing the 5 ledger operations + -- at 'slot'. + , gc :: !Int64 + -- ^ Time spent in garbage collection while performing the 5 ledger + -- operations at 'slot'. + , majGcCount :: !Word32 + -- ^ Total number of __major__ garbage collections that took place while + -- performing the 5 ledger operations at 'slot'. + , minGcCount :: !Word32 + -- ^ Total number of __minor__ garbage collections that took place while + -- performing the 5 ledger operations at 'slot'. + , allocatedBytes :: !Word64 + -- ^ Allocated bytes while performing the 5 ledger operations + -- at 'slot'. 
+ , mut_forecast :: !Int64 + -- ^ Difference of the GC.mutator_elapsed_ns field when computing the + -- forecast. + , mut_headerTick :: !Int64 + , mut_headerApply :: !Int64 + , mut_blockTick :: !Int64 + , mut_blockApply :: !Int64 + , blockByteSize :: !Word32 + , blockStats :: !BlockStats + -- ^ Free-form information about the block. + } + deriving (Generic, Show) -newtype BlockStats = BlockStats { unBlockStats :: [TextBuilder] } +newtype BlockStats = BlockStats {unBlockStats :: [TextBuilder]} deriving (Generic, Show) instance ToJSON BlockStats where diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Byron.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Byron.hs index 4a040799f5..832c8ce626 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Byron.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Byron.hs @@ -1,116 +1,130 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Cardano.Tools.DBAnalyser.Block.Byron ( - Args (..) +module Cardano.Tools.DBAnalyser.Block.Byron + ( Args (..) 
, ByronBlockArgs , openGenesisByron ) where -import qualified Cardano.Chain.Block as Chain -import qualified Cardano.Chain.Genesis as Genesis -import qualified Cardano.Chain.Update as Update -import qualified Cardano.Chain.UTxO as Chain -import Cardano.Crypto (RequiresNetworkMagic (..)) -import qualified Cardano.Crypto as Crypto -import Cardano.Crypto.Raw (Raw) -import Cardano.Ledger.Binary (unAnnotated) -import Cardano.Tools.DBAnalyser.HasAnalysis -import Control.Monad.Except -import Data.ByteString (ByteString) -import qualified Data.ByteString.Lazy as BL -import Ouroboros.Consensus.Byron.Ledger (ByronBlock) -import qualified Ouroboros.Consensus.Byron.Ledger as Byron -import Ouroboros.Consensus.Byron.Node (PBftSignatureThreshold (..), - ProtocolParamsByron (..), protocolInfoByron) -import Ouroboros.Consensus.Node.ProtocolInfo -import TextBuilder (decimal) +import Cardano.Chain.Block qualified as Chain +import Cardano.Chain.Genesis qualified as Genesis +import Cardano.Chain.UTxO qualified as Chain +import Cardano.Chain.Update qualified as Update +import Cardano.Crypto (RequiresNetworkMagic (..)) +import Cardano.Crypto qualified as Crypto +import Cardano.Crypto.Raw (Raw) +import Cardano.Ledger.Binary (unAnnotated) +import Cardano.Tools.DBAnalyser.HasAnalysis +import Control.Monad.Except +import Data.ByteString (ByteString) +import Data.ByteString.Lazy qualified as BL +import Ouroboros.Consensus.Byron.Ledger (ByronBlock) +import Ouroboros.Consensus.Byron.Ledger qualified as Byron +import Ouroboros.Consensus.Byron.Node + ( PBftSignatureThreshold (..) + , ProtocolParamsByron (..) 
+ , protocolInfoByron + ) +import Ouroboros.Consensus.Node.ProtocolInfo +import TextBuilder (decimal) instance HasAnalysis ByronBlock where - countTxOutputs = aBlockOrBoundary (const 0) countTxOutputsByron - blockTxSizes = aBlockOrBoundary (const []) blockTxSizesByron - knownEBBs = const Byron.knownEBBs - emitTraces _ = [] - blockStats blk = [ decimal $ length $ blockTxSizes blk - , decimal $ sum $ blockTxSizes blk - ] - -- For the time being we do not support any block application - -- metrics for the Byron era only. - blockApplicationMetrics = [] + countTxOutputs = aBlockOrBoundary (const 0) countTxOutputsByron + blockTxSizes = aBlockOrBoundary (const []) blockTxSizesByron + knownEBBs = const Byron.knownEBBs + emitTraces _ = [] + blockStats blk = + [ decimal $ length $ blockTxSizes blk + , decimal $ sum $ blockTxSizes blk + ] + + -- For the time being we do not support any block application + -- metrics for the Byron era only. + blockApplicationMetrics = [] instance HasProtocolInfo ByronBlock where - data Args ByronBlock = - ByronBlockArgs { - configFile :: FilePath - , requiresNetworkMagic :: RequiresNetworkMagic - , genesisHash :: Maybe (Crypto.Hash Raw) - , threshold :: Maybe PBftSignatureThreshold - } - mkProtocolInfo args = do - config <- openGenesisByron (configFile args) (genesisHash args) (requiresNetworkMagic args) - return $ mkByronProtocolInfo config (threshold args) + data Args ByronBlock + = ByronBlockArgs + { configFile :: FilePath + , requiresNetworkMagic :: RequiresNetworkMagic + , genesisHash :: Maybe (Crypto.Hash Raw) + , threshold :: Maybe PBftSignatureThreshold + } + mkProtocolInfo args = do + config <- openGenesisByron (configFile args) (genesisHash args) (requiresNetworkMagic args) + return $ mkByronProtocolInfo config (threshold args) type ByronBlockArgs = Args ByronBlock -- | Equivalent of 'either' for 'ABlockOrBoundary'. 
-aBlockOrBoundary :: (Chain.ABoundaryBlock ByteString -> a) - -> (Chain.ABlock ByteString -> a) - -> ByronBlock -> a +aBlockOrBoundary :: + (Chain.ABoundaryBlock ByteString -> a) -> + (Chain.ABlock ByteString -> a) -> + ByronBlock -> + a aBlockOrBoundary fromBoundary fromRegular blk = case blk of - Byron.ByronBlock (Chain.ABOBBoundary boundaryBlock) _ _ - -> fromBoundary boundaryBlock - Byron.ByronBlock (Chain.ABOBBlock regularBlk) _ _ - -> fromRegular regularBlk + Byron.ByronBlock (Chain.ABOBBoundary boundaryBlock) _ _ -> + fromBoundary boundaryBlock + Byron.ByronBlock (Chain.ABOBBlock regularBlk) _ _ -> + fromRegular regularBlk countTxOutputsByron :: Chain.ABlock ByteString -> Int -countTxOutputsByron Chain.ABlock{ Chain.blockBody } = countTxPayload bodyTxPayload - where - Chain.ABody{ Chain.bodyTxPayload } = blockBody +countTxOutputsByron Chain.ABlock{Chain.blockBody} = countTxPayload bodyTxPayload + where + Chain.ABody{Chain.bodyTxPayload} = blockBody - countTxPayload :: Chain.ATxPayload a -> Int - countTxPayload = sum - . map (countTx . unAnnotated . Chain.aTaTx) - . Chain.aUnTxPayload + countTxPayload :: Chain.ATxPayload a -> Int + countTxPayload = + sum + . map (countTx . unAnnotated . Chain.aTaTx) + . Chain.aUnTxPayload - countTx :: Chain.Tx -> Int - countTx = length . Chain.txOutputs + countTx :: Chain.Tx -> Int + countTx = length . Chain.txOutputs blockTxSizesByron :: Chain.ABlock ByteString -> [SizeInBytes] blockTxSizesByron block = - map (fromIntegral . BL.length . BL.fromStrict . Chain.aTaAnnotation) blockTxAuxs - where - Chain.ABlock{ Chain.blockBody } = block - Chain.ABody{ Chain.bodyTxPayload } = blockBody - Chain.ATxPayload{ Chain.aUnTxPayload = blockTxAuxs } = bodyTxPayload + map (fromIntegral . BL.length . BL.fromStrict . 
Chain.aTaAnnotation) blockTxAuxs + where + Chain.ABlock{Chain.blockBody} = block + Chain.ABody{Chain.bodyTxPayload} = blockBody + Chain.ATxPayload{Chain.aUnTxPayload = blockTxAuxs} = bodyTxPayload openGenesisByron :: - FilePath - -> Maybe (Crypto.Hash Raw) - -> RequiresNetworkMagic - -> IO Genesis.Config + FilePath -> + Maybe (Crypto.Hash Raw) -> + RequiresNetworkMagic -> + IO Genesis.Config openGenesisByron configFile mHash requiresNetworkMagic = do - genesisHash <- case mHash of - Nothing -> either (error . show) return =<< runExceptT - (Genesis.unGenesisHash . snd <$> Genesis.readGenesisData configFile) - Just hash -> return hash - genesisConfig <- either (error . show) return =<< runExceptT - (Genesis.mkConfigFromFile - requiresNetworkMagic - configFile - genesisHash) - return genesisConfig + genesisHash <- case mHash of + Nothing -> + either (error . show) return + =<< runExceptT + (Genesis.unGenesisHash . snd <$> Genesis.readGenesisData configFile) + Just hash -> return hash + genesisConfig <- + either (error . 
show) return + =<< runExceptT + ( Genesis.mkConfigFromFile + requiresNetworkMagic + configFile + genesisHash + ) + return genesisConfig -mkByronProtocolInfo :: Genesis.Config - -> Maybe PBftSignatureThreshold - -> ProtocolInfo ByronBlock +mkByronProtocolInfo :: + Genesis.Config -> + Maybe PBftSignatureThreshold -> + ProtocolInfo ByronBlock mkByronProtocolInfo genesisConfig signatureThreshold = - protocolInfoByron $ ProtocolParamsByron { - byronGenesis = genesisConfig + protocolInfoByron $ + ProtocolParamsByron + { byronGenesis = genesisConfig , byronPbftSignatureThreshold = signatureThreshold - , byronProtocolVersion = Update.ProtocolVersion 1 0 0 - , byronSoftwareVersion = Update.SoftwareVersion (Update.ApplicationName "db-analyser") 2 - , byronLeaderCredentials = Nothing + , byronProtocolVersion = Update.ProtocolVersion 1 0 0 + , byronSoftwareVersion = Update.SoftwareVersion (Update.ApplicationName "db-analyser") 2 + , byronLeaderCredentials = Nothing } diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Cardano.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Cardano.hs index 7056a3c1e9..c10e9ae22c 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Cardano.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Cardano.hs @@ -14,81 +14,90 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Cardano.Tools.DBAnalyser.Block.Cardano ( - Args (configFile, threshold, CardanoBlockArgs) +module Cardano.Tools.DBAnalyser.Block.Cardano + ( Args (configFile, threshold, CardanoBlockArgs) , CardanoBlockArgs ) where -import qualified Cardano.Chain.Block as Byron.Block -import qualified Cardano.Chain.Genesis as Byron.Genesis -import qualified Cardano.Chain.Update as Byron.Update -import qualified 
Cardano.Chain.UTxO as Byron.UTxO -import Cardano.Crypto (RequiresNetworkMagic (..)) -import qualified Cardano.Crypto as Crypto -import qualified Cardano.Crypto.Hash.Class as CryptoClass -import Cardano.Crypto.Raw (Raw) -import qualified Cardano.Ledger.Api.Era as L -import qualified Cardano.Ledger.Api.Transition as SL -import Cardano.Ledger.Core (TxOut) -import qualified Cardano.Ledger.Shelley.LedgerState as Shelley.LedgerState -import qualified Cardano.Ledger.Shelley.UTxO as Shelley.UTxO -import Cardano.Ledger.TxIn (TxIn) -import Cardano.Node.Types (AdjustFilePaths (..)) -import Cardano.Protocol.Crypto -import qualified Cardano.Tools.DBAnalyser.Block.Byron as BlockByron -import Cardano.Tools.DBAnalyser.Block.Shelley () -import Cardano.Tools.DBAnalyser.HasAnalysis -import Control.Monad (when) -import qualified Data.Aeson as Aeson -import qualified Data.Aeson.Types as Aeson -import qualified Data.ByteString as BS -import qualified Data.Compact as Compact -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (fromJust) -import Data.SOP.BasicFunctors -import Data.SOP.Functors -import Data.SOP.Strict -import qualified Data.SOP.Telescope as Telescope -import Data.String (IsString (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger (ByronBlock) -import qualified Ouroboros.Consensus.Byron.Ledger.Ledger as Byron.Ledger -import Ouroboros.Consensus.Cardano -import Ouroboros.Consensus.Cardano.Block (CardanoEras) -import Ouroboros.Consensus.Cardano.Node (CardanoProtocolParams (..), - protocolInfoCardano) -import Ouroboros.Consensus.Config (emptyCheckpointsMap) -import Ouroboros.Consensus.HardFork.Combinator (HardForkBlock (..), - OneEraBlock (..), OneEraHash (..), getHardForkState, - hardForkLedgerStatePerEra) -import Ouroboros.Consensus.HardFork.Combinator.State (currentState) -import Ouroboros.Consensus.Ledger.Abstract hiding (TxIn, TxOut) -import Ouroboros.Consensus.Node.ProtocolInfo -import 
Ouroboros.Consensus.Shelley.HFEras () -import qualified Ouroboros.Consensus.Shelley.Ledger as Shelley.Ledger -import Ouroboros.Consensus.Shelley.Ledger.Block (IsShelleyBlock, - ShelleyBlock, ShelleyBlockLedgerEra) -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import System.Directory (makeAbsolute) -import System.FilePath (takeDirectory, (</>)) -import qualified TextBuilder as Builder -import TextBuilder (TextBuilder) +import Cardano.Chain.Block qualified as Byron.Block +import Cardano.Chain.Genesis qualified as Byron.Genesis +import Cardano.Chain.UTxO qualified as Byron.UTxO +import Cardano.Chain.Update qualified as Byron.Update +import Cardano.Crypto (RequiresNetworkMagic (..)) +import Cardano.Crypto qualified as Crypto +import Cardano.Crypto.Hash.Class qualified as CryptoClass +import Cardano.Crypto.Raw (Raw) +import Cardano.Ledger.Api.Era qualified as L +import Cardano.Ledger.Api.Transition qualified as SL +import Cardano.Ledger.Core (TxOut) +import Cardano.Ledger.Shelley.LedgerState qualified as Shelley.LedgerState +import Cardano.Ledger.Shelley.UTxO qualified as Shelley.UTxO +import Cardano.Ledger.TxIn (TxIn) +import Cardano.Node.Types (AdjustFilePaths (..)) +import Cardano.Protocol.Crypto +import Cardano.Tools.DBAnalyser.Block.Byron qualified as BlockByron +import Cardano.Tools.DBAnalyser.Block.Shelley () +import Cardano.Tools.DBAnalyser.HasAnalysis +import Control.Monad (when) +import Data.Aeson qualified as Aeson +import Data.Aeson.Types qualified as Aeson +import Data.ByteString qualified as BS +import Data.Compact qualified as Compact +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (fromJust) +import Data.SOP.BasicFunctors +import Data.SOP.Functors +import Data.SOP.Strict +import Data.SOP.Telescope qualified as Telescope +import Data.String (IsString (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger (ByronBlock) +import Ouroboros.Consensus.Byron.Ledger.Ledger qualified
as Byron.Ledger +import Ouroboros.Consensus.Cardano +import Ouroboros.Consensus.Cardano.Block (CardanoEras) +import Ouroboros.Consensus.Cardano.Node + ( CardanoProtocolParams (..) + , protocolInfoCardano + ) +import Ouroboros.Consensus.Config (emptyCheckpointsMap) +import Ouroboros.Consensus.HardFork.Combinator + ( HardForkBlock (..) + , OneEraBlock (..) + , OneEraHash (..) + , getHardForkState + , hardForkLedgerStatePerEra + ) +import Ouroboros.Consensus.HardFork.Combinator.State (currentState) +import Ouroboros.Consensus.Ledger.Abstract hiding (TxIn, TxOut) +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger qualified as Shelley.Ledger +import Ouroboros.Consensus.Shelley.Ledger.Block + ( IsShelleyBlock + , ShelleyBlock + , ShelleyBlockLedgerEra + ) +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import System.Directory (makeAbsolute) +import System.FilePath (takeDirectory, (</>)) +import TextBuilder (TextBuilder) +import TextBuilder qualified as Builder analyseBlock :: - (forall blk. HasAnalysis blk => blk -> a) - -> CardanoBlock StandardCrypto -> a + (forall blk. HasAnalysis blk => blk -> a) -> + CardanoBlock StandardCrypto -> + a analyseBlock f = - hcollapse + hcollapse . hcmap p (K . f . unI) . getOneEraBlock . getHardForkBlock - where - p :: Proxy HasAnalysis - p = Proxy + where + p :: Proxy HasAnalysis + p = Proxy -- | Lift a function polymorphic over all block types supporting `HasAnalysis` -- into a corresponding function over `CardanoBlock.` @@ -103,126 +112,124 @@ analyseWithLedgerState f (WithLedgerState cb sb sa) = . fromJust . hsequence' $ hzipWith3 zipLS (goLS sb) (goLS sa) oeb - where - p :: Proxy HasAnalysis - p = Proxy - - zipLS (Comp (Just (Flip sb'))) (Comp (Just (Flip sa'))) (I blk) = - Comp . Just $ WithLedgerState blk sb' sa' - zipLS _ _ _ = Comp Nothing - - oeb = getOneEraBlock .
getHardForkBlock $ cb - - goLS :: - LedgerState (CardanoBlock StandardCrypto) mk -> - NP (Maybe :.: Flip LedgerState mk) (CardanoEras StandardCrypto) - goLS = - hexpand (Comp Nothing) - . hmap (Comp . Just . currentState) - . Telescope.tip - . getHardForkState - . hardForkLedgerStatePerEra + where + p :: Proxy HasAnalysis + p = Proxy + + zipLS (Comp (Just (Flip sb'))) (Comp (Just (Flip sa'))) (I blk) = + Comp . Just $ WithLedgerState blk sb' sa' + zipLS _ _ _ = Comp Nothing + + oeb = getOneEraBlock . getHardForkBlock $ cb + + goLS :: + LedgerState (CardanoBlock StandardCrypto) mk -> + NP (Maybe :.: Flip LedgerState mk) (CardanoEras StandardCrypto) + goLS = + hexpand (Comp Nothing) + . hmap (Comp . Just . currentState) + . Telescope.tip + . getHardForkState + . hardForkLedgerStatePerEra instance HasProtocolInfo (CardanoBlock StandardCrypto) where - data Args (CardanoBlock StandardCrypto) = CardanoBlockArgs { - configFile :: FilePath - , threshold :: Maybe PBftSignatureThreshold - } + data Args (CardanoBlock StandardCrypto) = CardanoBlockArgs + { configFile :: FilePath + , threshold :: Maybe PBftSignatureThreshold + } mkProtocolInfo CardanoBlockArgs{configFile, threshold} = do relativeToConfig :: (FilePath -> FilePath) <- - (</>) . takeDirectory <$> makeAbsolute configFile + (</>) . takeDirectory <$> makeAbsolute configFile cc :: CardanoConfig <- - either (error . show) (return . adjustFilePaths relativeToConfig) =<< - Aeson.eitherDecodeFileStrict' configFile + either (error . show) (return . adjustFilePaths relativeToConfig) + =<< Aeson.eitherDecodeFileStrict' configFile - genesisByron <- + genesisByron <- BlockByron.openGenesisByron (byronGenesisPath cc) (byronGenesisHash cc) (requiresNetworkMagic cc) - genesisShelley <- either (error . show) return =<< - Aeson.eitherDecodeFileStrict' (shelleyGenesisPath cc) - genesisAlonzo <- either (error . show) return =<< - Aeson.eitherDecodeFileStrict' (alonzoGenesisPath cc) - genesisConway <- either (error .
show) return =<< - Aeson.eitherDecodeFileStrict' (conwayGenesisPath cc) + genesisShelley <- + either (error . show) return + =<< Aeson.eitherDecodeFileStrict' (shelleyGenesisPath cc) + genesisAlonzo <- + either (error . show) return + =<< Aeson.eitherDecodeFileStrict' (alonzoGenesisPath cc) + genesisConway <- + either (error . show) return + =<< Aeson.eitherDecodeFileStrict' (conwayGenesisPath cc) let transCfg = SL.mkLatestTransitionConfig genesisShelley genesisAlonzo genesisConway initialNonce <- case shelleyGenesisHash cc of - Just h -> pure h + Just h -> pure h Nothing -> do content <- BS.readFile (shelleyGenesisPath cc) - pure - $ Nonce - $ CryptoClass.castHash - $ CryptoClass.hashWith id - $ content - - return - $ mkCardanoProtocolInfo - genesisByron - threshold - transCfg - initialNonce - (cfgHardForkTriggers cc) - -data CardanoConfig = CardanoConfig { - -- | @RequiresNetworkMagic@ field - requiresNetworkMagic :: RequiresNetworkMagic - - -- | @ByronGenesisFile@ field - , byronGenesisPath :: FilePath - -- | @ByronGenesisHash@ field - , byronGenesisHash :: Maybe (Crypto.Hash Raw) - - -- | @ShelleyGenesisFile@ field - -- | @ShelleyGenesisHash@ field - , shelleyGenesisPath :: FilePath - , shelleyGenesisHash :: Maybe Nonce - - -- | @AlonzoGenesisFile@ field - , alonzoGenesisPath :: FilePath - - -- | @ConwayGenesisFile@ field - , conwayGenesisPath :: FilePath - - -- | @Test*HardForkAtEpoch@ for each Shelley era - , cfgHardForkTriggers :: CardanoHardForkTriggers + pure $ + Nonce $ + CryptoClass.castHash $ + CryptoClass.hashWith id $ + content + + return $ + mkCardanoProtocolInfo + genesisByron + threshold + transCfg + initialNonce + (cfgHardForkTriggers cc) + +data CardanoConfig = CardanoConfig + { requiresNetworkMagic :: RequiresNetworkMagic + -- ^ @RequiresNetworkMagic@ field + , byronGenesisPath :: FilePath + -- ^ @ByronGenesisFile@ field + , byronGenesisHash :: Maybe (Crypto.Hash Raw) + -- ^ @ByronGenesisHash@ field + , shelleyGenesisPath :: FilePath + -- ^ 
@ShelleyGenesisFile@ field + -- | @ShelleyGenesisHash@ field + , shelleyGenesisHash :: Maybe Nonce + , alonzoGenesisPath :: FilePath + -- ^ @AlonzoGenesisFile@ field + , conwayGenesisPath :: FilePath + -- ^ @ConwayGenesisFile@ field + , cfgHardForkTriggers :: CardanoHardForkTriggers + -- ^ @Test*HardForkAtEpoch@ for each Shelley era } instance AdjustFilePaths CardanoConfig where - adjustFilePaths f cc = - cc { - byronGenesisPath = f $ byronGenesisPath cc - , shelleyGenesisPath = f $ shelleyGenesisPath cc - , alonzoGenesisPath = f $ alonzoGenesisPath cc - , conwayGenesisPath = f $ conwayGenesisPath cc - -- Byron, Shelley, Alonzo, and Conway are the only eras that have genesis - -- data. The actual genesis block is a Byron block, therefore we needed a - -- genesis file. To transition to Shelley, we needed to add some additional - -- genesis data (eg some initial values of new protocol parametrers like - -- @d@). Similarly in Alonzo (eg Plutus interpreter parameters/limits) and - -- in Conway too (ie keys of the new genesis delegates). - -- - -- In contrast, the Allegra, Mary, and Babbage eras did not introduce any new - -- genesis data. - } + adjustFilePaths f cc = + cc + { byronGenesisPath = f $ byronGenesisPath cc + , shelleyGenesisPath = f $ shelleyGenesisPath cc + , alonzoGenesisPath = f $ alonzoGenesisPath cc + , conwayGenesisPath = f $ conwayGenesisPath cc + -- Byron, Shelley, Alonzo, and Conway are the only eras that have genesis + -- data. The actual genesis block is a Byron block, therefore we needed a + -- genesis file. To transition to Shelley, we needed to add some additional + -- genesis data (eg some initial values of new protocol parametrers like + -- @d@). Similarly in Alonzo (eg Plutus interpreter parameters/limits) and + -- in Conway too (ie keys of the new genesis delegates). + -- + -- In contrast, the Allegra, Mary, and Babbage eras did not introduce any new + -- genesis data. 
+ } instance Aeson.FromJSON CardanoConfig where parseJSON = Aeson.withObject "CardanoConfigFile" $ \v -> do - requiresNetworkMagic <- v Aeson..: "RequiresNetworkMagic" - byronGenesisPath <- v Aeson..: "ByronGenesisFile" + byronGenesisPath <- v Aeson..: "ByronGenesisFile" byronGenesisHash <- v Aeson..:? "ByronGenesisHash" shelleyGenesisPath <- v Aeson..: "ShelleyGenesisFile" - shelleyGenesisHash <- v Aeson..:? "ShelleyGenesisHash" >>= \case - Nothing -> pure Nothing - Just hex -> case CryptoClass.hashFromTextAsHex hex of - Nothing -> fail "could not parse ShelleyGenesisHash as a hex string" - Just h -> pure $ Just $ Nonce h + shelleyGenesisHash <- + v Aeson..:? "ShelleyGenesisHash" >>= \case + Nothing -> pure Nothing + Just hex -> case CryptoClass.hashFromTextAsHex hex of + Nothing -> fail "could not parse ShelleyGenesisHash as a hex string" + Just h -> pure $ Just $ Nonce h alonzoGenesisPath <- v Aeson..: "AlonzoGenesisFile" @@ -230,25 +237,30 @@ instance Aeson.FromJSON CardanoConfig where triggers <- do let parseTrigger :: - forall blk era. (IsShelleyBlock blk, ShelleyBlockLedgerEra blk ~ era) - => (Aeson.Parser :.: CardanoHardForkTrigger) blk - parseTrigger = Comp $ - (fmap CardanoTriggerHardForkAtEpoch <$> (v Aeson..:? nm)) - Aeson..!= CardanoTriggerHardForkAtDefaultVersion - where - nm = fromString $ "Test" <> L.eraName @era <> "HardForkAtEpoch" + forall blk era. + (IsShelleyBlock blk, ShelleyBlockLedgerEra blk ~ era) => + (Aeson.Parser :.: CardanoHardForkTrigger) blk + parseTrigger = + Comp $ + (fmap CardanoTriggerHardForkAtEpoch <$> (v Aeson..:? 
nm)) + Aeson..!= CardanoTriggerHardForkAtDefaultVersion + where + nm = fromString $ "Test" <> L.eraName @era <> "HardForkAtEpoch" triggers <- hsequence' $ hcpure (Proxy @IsShelleyBlock) parseTrigger let isBad :: NP CardanoHardForkTrigger xs -> Bool isBad = \case CardanoTriggerHardForkAtDefaultVersion - :* CardanoTriggerHardForkAtEpoch{} :* _ -> True + :* CardanoTriggerHardForkAtEpoch{} + :* _ -> True _ :* np -> isBad np - Nil -> False - fmap (\() -> triggers) $ when (isBad triggers) $ fail $ - "if the Cardano config file sets a Test*HardForkEpoch," - <> " it must also set it for all previous eras." + Nil -> False + fmap (\() -> triggers) $ + when (isBad triggers) $ + fail $ + "if the Cardano config file sets a Test*HardForkEpoch," + <> " it must also set it for all previous eras." pure $ CardanoConfig @@ -264,136 +276,151 @@ instance Aeson.FromJSON CardanoConfig where instance HasAnalysis (CardanoBlock StandardCrypto) where countTxOutputs = analyseBlock countTxOutputs - blockTxSizes = analyseBlock blockTxSizes - knownEBBs _ = - Map.mapKeys castHeaderHash . Map.map castChainHash $ - knownEBBs (Proxy @ByronBlock) + blockTxSizes = analyseBlock blockTxSizes + knownEBBs _ = + Map.mapKeys castHeaderHash . 
Map.map castChainHash $ + knownEBBs (Proxy @ByronBlock) emitTraces = analyseWithLedgerState emitTraces blockStats = analyseBlock blockStats blockApplicationMetrics = - [ ("Slot Number", \(WithLedgerState blk _preSt _postSt) -> - pure $ Builder.decimal $ unSlotNo $ blockSlot blk - ) - , ("Block Number", \(WithLedgerState blk _preSt _postSt) -> - pure $ Builder.decimal $ unBlockNo $ blockNo blk - ) - -- TODO the states will only contain the outputs produced by the block, + [ + ( "Slot Number" + , \(WithLedgerState blk _preSt _postSt) -> + pure $ Builder.decimal $ unSlotNo $ blockSlot blk + ) + , + ( "Block Number" + , \(WithLedgerState blk _preSt _postSt) -> + pure $ Builder.decimal $ unBlockNo $ blockNo blk + ) + , -- TODO the states will only contain the outputs produced by the block, -- not the whole UTxO set, so there is a regression here. - , ("UTxO size (via Compact)", \(WithLedgerState _blk _preSt postSt) -> do - let compactSize utxo = do - compactedUtxo <- Compact.compact utxo - compactedUtxoSize <- Compact.compactSize compactedUtxo - pure $ Builder.decimal $ compactedUtxoSize - - dispatch postSt - (applyToByronUtxo compactSize) - (applyToShelleyBasedUtxo compactSize) - ) - , ("UTxO map size", \(WithLedgerState _blk _preSt postSt) -> do - let mapSize = pure . Builder.decimal . Map.size - dispatch postSt - (applyToByronUtxo mapSize) - (applyToShelleyBasedUtxo mapSize) - ) - ] + + ( "UTxO size (via Compact)" + , \(WithLedgerState _blk _preSt postSt) -> do + let compactSize utxo = do + compactedUtxo <- Compact.compact utxo + compactedUtxoSize <- Compact.compactSize compactedUtxo + pure $ Builder.decimal $ compactedUtxoSize + + dispatch + postSt + (applyToByronUtxo compactSize) + (applyToShelleyBasedUtxo compactSize) + ) + , + ( "UTxO map size" + , \(WithLedgerState _blk _preSt postSt) -> do + let mapSize = pure . Builder.decimal . 
Map.size + dispatch + postSt + (applyToByronUtxo mapSize) + (applyToShelleyBasedUtxo mapSize) + ) + ] dispatch :: - LedgerState (CardanoBlock StandardCrypto) ValuesMK - -> (LedgerState ByronBlock ValuesMK -> IO TextBuilder) - -> (forall proto era. LedgerState (ShelleyBlock proto era) ValuesMK -> IO TextBuilder) - -> IO TextBuilder + LedgerState (CardanoBlock StandardCrypto) ValuesMK -> + (LedgerState ByronBlock ValuesMK -> IO TextBuilder) -> + (forall proto era. LedgerState (ShelleyBlock proto era) ValuesMK -> IO TextBuilder) -> + IO TextBuilder dispatch cardanoSt fByron fShelley = - hcollapse $ - hap ( fn k_fByron - :* fn k_fShelley - :* fn k_fShelley - :* fn k_fShelley - :* fn k_fShelley - :* fn k_fShelley - :* fn k_fShelley - :* Nil - ) - (hardForkLedgerStatePerEra cardanoSt) - where - k_fByron = K . fByron . unFlip - - k_fShelley :: - forall proto era. - Flip LedgerState ValuesMK (ShelleyBlock proto era) - -> K (IO TextBuilder) (ShelleyBlock proto era) - k_fShelley = K . fShelley . unFlip + hcollapse $ + hap + ( fn k_fByron + :* fn k_fShelley + :* fn k_fShelley + :* fn k_fShelley + :* fn k_fShelley + :* fn k_fShelley + :* fn k_fShelley + :* Nil + ) + (hardForkLedgerStatePerEra cardanoSt) + where + k_fByron = K . fByron . unFlip + + k_fShelley :: + forall proto era. + Flip LedgerState ValuesMK (ShelleyBlock proto era) -> + K (IO TextBuilder) (ShelleyBlock proto era) + k_fShelley = K . fShelley . unFlip applyToByronUtxo :: - (Map Byron.UTxO.CompactTxIn Byron.UTxO.CompactTxOut -> IO TextBuilder) - -> LedgerState ByronBlock ValuesMK - -> IO TextBuilder -applyToByronUtxo f st = - f $ getByronUtxo st - -getByronUtxo :: LedgerState ByronBlock ValuesMK - -> Map Byron.UTxO.CompactTxIn Byron.UTxO.CompactTxOut -getByronUtxo = Byron.UTxO.unUTxO - . Byron.Block.cvsUtxo - . 
Byron.Ledger.byronLedgerState + (Map Byron.UTxO.CompactTxIn Byron.UTxO.CompactTxOut -> IO TextBuilder) -> + LedgerState ByronBlock ValuesMK -> + IO TextBuilder +applyToByronUtxo f st = + f $ getByronUtxo st + +getByronUtxo :: + LedgerState ByronBlock ValuesMK -> + Map Byron.UTxO.CompactTxIn Byron.UTxO.CompactTxOut +getByronUtxo = + Byron.UTxO.unUTxO + . Byron.Block.cvsUtxo + . Byron.Ledger.byronLedgerState applyToShelleyBasedUtxo :: - (Map TxIn (TxOut era) -> IO TextBuilder) - -> LedgerState (ShelleyBlock proto era) ValuesMK - -> IO TextBuilder + (Map TxIn (TxOut era) -> IO TextBuilder) -> + LedgerState (ShelleyBlock proto era) ValuesMK -> + IO TextBuilder applyToShelleyBasedUtxo f st = do - f $ getShelleyBasedUtxo st + f $ getShelleyBasedUtxo st getShelleyBasedUtxo :: - LedgerState (ShelleyBlock proto era) ValuesMK - -> Map TxIn (TxOut era) -getShelleyBasedUtxo = (\(Shelley.UTxO.UTxO xs)-> xs) - . Shelley.LedgerState.utxosUtxo - . Shelley.LedgerState.lsUTxOState - . Shelley.LedgerState.esLState - . Shelley.LedgerState.nesEs - . Shelley.Ledger.shelleyLedgerState - + LedgerState (ShelleyBlock proto era) ValuesMK -> + Map TxIn (TxOut era) +getShelleyBasedUtxo = + (\(Shelley.UTxO.UTxO xs) -> xs) + . Shelley.LedgerState.utxosUtxo + . Shelley.LedgerState.lsUTxOState + . Shelley.LedgerState.esLState + . Shelley.LedgerState.nesEs + . 
Shelley.Ledger.shelleyLedgerState type CardanoBlockArgs = Args (CardanoBlock StandardCrypto) mkCardanoProtocolInfo :: - Byron.Genesis.Config - -> Maybe PBftSignatureThreshold - -> SL.TransitionConfig L.LatestKnownEra - -> Nonce - -> CardanoHardForkTriggers - -> ProtocolInfo (CardanoBlock StandardCrypto) + Byron.Genesis.Config -> + Maybe PBftSignatureThreshold -> + SL.TransitionConfig L.LatestKnownEra -> + Nonce -> + CardanoHardForkTriggers -> + ProtocolInfo (CardanoBlock StandardCrypto) mkCardanoProtocolInfo genesisByron signatureThreshold transitionConfig initialNonce triggers = - fst $ protocolInfoCardano @_ @IO - (CardanoProtocolParams - ProtocolParamsByron { - byronGenesis = genesisByron - , byronPbftSignatureThreshold = signatureThreshold - , byronProtocolVersion = Byron.Update.ProtocolVersion 1 2 0 - , byronSoftwareVersion = Byron.Update.SoftwareVersion (Byron.Update.ApplicationName "db-analyser") 2 - , byronLeaderCredentials = Nothing - } - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = initialNonce - , shelleyBasedLeaderCredentials = [] - } - triggers - transitionConfig - emptyCheckpointsMap - (ProtVer (L.eraProtVerHigh @L.LatestKnownEra) 0) + fst $ + protocolInfoCardano @_ @IO + ( CardanoProtocolParams + ProtocolParamsByron + { byronGenesis = genesisByron + , byronPbftSignatureThreshold = signatureThreshold + , byronProtocolVersion = Byron.Update.ProtocolVersion 1 2 0 + , byronSoftwareVersion = + Byron.Update.SoftwareVersion (Byron.Update.ApplicationName "db-analyser") 2 + , byronLeaderCredentials = Nothing + } + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = initialNonce + , shelleyBasedLeaderCredentials = [] + } + triggers + transitionConfig + emptyCheckpointsMap + (ProtVer (L.eraProtVerHigh @L.LatestKnownEra) 0) ) - where + where castHeaderHash :: - HeaderHash ByronBlock - -> HeaderHash (CardanoBlock StandardCrypto) + HeaderHash ByronBlock -> + HeaderHash (CardanoBlock StandardCrypto) castHeaderHash = OneEraHash . 
toShortRawHash (Proxy @ByronBlock) castChainHash :: - ChainHash ByronBlock - -> ChainHash (CardanoBlock StandardCrypto) -castChainHash GenesisHash = GenesisHash + ChainHash ByronBlock -> + ChainHash (CardanoBlock StandardCrypto) +castChainHash GenesisHash = GenesisHash castChainHash (BlockHash h) = BlockHash $ castHeaderHash h diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Shelley.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Shelley.hs index 3b1a3fd377..6803f5ead4 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Shelley.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Block/Shelley.hs @@ -6,147 +6,154 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Cardano.Tools.DBAnalyser.Block.Shelley ( - Args (..) +module Cardano.Tools.DBAnalyser.Block.Shelley + ( Args (..) 
, ShelleyBlockArgs ) where -import Cardano.Ledger.Allegra (AllegraEra) -import Cardano.Ledger.Alonzo (AlonzoEra) -import qualified Cardano.Ledger.Alonzo.Scripts as Alonzo -import qualified Cardano.Ledger.Alonzo.Tx as Alonzo -import Cardano.Ledger.Babbage (BabbageEra) -import qualified Cardano.Ledger.BaseTypes as CL (natVersion) -import Cardano.Ledger.Conway (ConwayEra) -import qualified Cardano.Ledger.Core as Core -import Cardano.Ledger.Mary (MaryEra) -import Cardano.Ledger.Shelley (ShelleyEra) -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.RewardUpdate as SL -import Cardano.Tools.DBAnalyser.HasAnalysis -import qualified Data.Aeson as Aeson -import Data.Foldable as Foldable (foldl', toList) -import qualified Data.Map.Strict as Map -import Data.Maybe (catMaybes, maybeToList) -import Data.Maybe.Strict -import Data.Sequence.Strict (StrictSeq) -import Data.Word (Word64) -import Lens.Micro ((^.)) -import Lens.Micro.Extras (view) -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras (StandardCrypto) -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger (ShelleyCompatible, - shelleyLedgerState) -import Ouroboros.Consensus.Shelley.Ledger.Block (ShelleyBlock) -import qualified Ouroboros.Consensus.Shelley.Ledger.Block as Shelley -import Ouroboros.Consensus.Shelley.Node (Nonce (..), - ProtocolParamsShelleyBased (..), ShelleyGenesis, - protocolInfoShelley) -import TextBuilder (decimal) +import Cardano.Ledger.Allegra (AllegraEra) +import Cardano.Ledger.Alonzo (AlonzoEra) +import Cardano.Ledger.Alonzo.Scripts qualified as Alonzo +import Cardano.Ledger.Alonzo.Tx qualified as Alonzo +import Cardano.Ledger.Babbage (BabbageEra) +import Cardano.Ledger.BaseTypes qualified as CL (natVersion) +import Cardano.Ledger.Conway (ConwayEra) +import Cardano.Ledger.Core qualified as Core +import Cardano.Ledger.Mary (MaryEra) +import 
Cardano.Ledger.Shelley (ShelleyEra) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.RewardUpdate qualified as SL +import Cardano.Tools.DBAnalyser.HasAnalysis +import Data.Aeson qualified as Aeson +import Data.Foldable as Foldable (foldl', toList) +import Data.Map.Strict qualified as Map +import Data.Maybe (catMaybes, maybeToList) +import Data.Maybe.Strict +import Data.Sequence.Strict (StrictSeq) +import Data.Word (Word64) +import Lens.Micro ((^.)) +import Lens.Micro.Extras (view) +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras (StandardCrypto) +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger + ( ShelleyCompatible + , shelleyLedgerState + ) +import Ouroboros.Consensus.Shelley.Ledger.Block (ShelleyBlock) +import Ouroboros.Consensus.Shelley.Ledger.Block qualified as Shelley +import Ouroboros.Consensus.Shelley.Node + ( Nonce (..) + , ProtocolParamsShelleyBased (..) + , ShelleyGenesis + , protocolInfoShelley + ) +import TextBuilder (decimal) -- | Usable for each Shelley-based era -instance ( ShelleyCompatible proto era - , PerEraAnalysis era - ) => HasAnalysis (ShelleyBlock proto era) where - +instance + ( ShelleyCompatible proto era + , PerEraAnalysis era + ) => + HasAnalysis (ShelleyBlock proto era) + where countTxOutputs blk = case Shelley.shelleyBlockRaw blk of - SL.Block _ body -> sum $ fmap countOutputs (Core.fromTxSeq @era body) - where - countOutputs :: Core.Tx era -> Int - countOutputs tx = length $ tx ^. Core.bodyTxL . Core.outputsTxBodyL + SL.Block _ body -> sum $ fmap countOutputs (Core.fromTxSeq @era body) + where + countOutputs :: Core.Tx era -> Int + countOutputs tx = length $ tx ^. Core.bodyTxL . Core.outputsTxBodyL blockTxSizes blk = case Shelley.shelleyBlockRaw blk of - SL.Block _ body -> - toList - $ fmap (fromIntegral . 
view Core.sizeTxF) (Core.fromTxSeq @era body) + SL.Block _ body -> + toList $ + fmap (fromIntegral . view Core.sizeTxF) (Core.fromTxSeq @era body) knownEBBs = const Map.empty - emitTraces (WithLedgerState _blk lsb lsa) = catMaybes - [ - let be = SL.nesEL . shelleyLedgerState $ lsb - ae = SL.nesEL . shelleyLedgerState $ lsa - in if be /= ae - then - Just $ "EPOCH_START_" <> show ae - else Nothing - , let brp = SL.nesRu . shelleyLedgerState $ lsb - arp = SL.nesRu . shelleyLedgerState $ lsa - in case (brp, arp) of - (SNothing, SJust _) -> Just "RWDPULSER_START" - (SJust (SL.Pulsing _ _), SJust (SL.Complete _)) -> Just "RWDPULSER_COMPLETE" - (SJust _, SNothing) -> Just "RWDPULSER_RESET" - (_, _) -> Nothing - ] - - blockStats blk = - [ decimal $ length $ blockTxSizes blk - , decimal $ sum $ blockTxSizes blk - ] - ++ - [ decimal $ Foldable.foldl' (\acc tx -> acc + f tx) 0 txs - | f <- maybeToList txExUnitsSteps + emitTraces (WithLedgerState _blk lsb lsa) = + catMaybes + [ let be = SL.nesEL . shelleyLedgerState $ lsb + ae = SL.nesEL . shelleyLedgerState $ lsa + in if be /= ae + then + Just $ "EPOCH_START_" <> show ae + else Nothing + , let brp = SL.nesRu . shelleyLedgerState $ lsb + arp = SL.nesRu . shelleyLedgerState $ lsa + in case (brp, arp) of + (SNothing, SJust _) -> Just "RWDPULSER_START" + (SJust (SL.Pulsing _ _), SJust (SL.Complete _)) -> Just "RWDPULSER_COMPLETE" + (SJust _, SNothing) -> Just "RWDPULSER_RESET" + (_, _) -> Nothing ] - where - txs :: StrictSeq (Core.Tx era) - txs = case Shelley.shelleyBlockRaw blk of - SL.Block _ body -> Core.fromTxSeq @era body - -- For the time being we do not support any block application - -- metrics for Shelley-only eras. 
+ blockStats blk = + [ decimal $ length $ blockTxSizes blk + , decimal $ sum $ blockTxSizes blk + ] + ++ [ decimal $ Foldable.foldl' (\acc tx -> acc + f tx) 0 txs + | f <- maybeToList txExUnitsSteps + ] + where + txs :: StrictSeq (Core.Tx era) + txs = case Shelley.shelleyBlockRaw blk of + SL.Block _ body -> Core.fromTxSeq @era body + + -- For the time being we do not support any block application + -- metrics for Shelley-only eras. blockApplicationMetrics = [] class PerEraAnalysis era where - txExUnitsSteps :: Maybe (Core.Tx era -> Word64) + txExUnitsSteps :: Maybe (Core.Tx era -> Word64) instance PerEraAnalysis ShelleyEra where txExUnitsSteps = Nothing instance PerEraAnalysis AllegraEra where txExUnitsSteps = Nothing -instance PerEraAnalysis MaryEra where txExUnitsSteps = Nothing +instance PerEraAnalysis MaryEra where txExUnitsSteps = Nothing instance PerEraAnalysis AlonzoEra where - txExUnitsSteps = Just $ \tx -> - let (Alonzo.ExUnits _mem steps) = Alonzo.totExUnits tx - in toEnum $ fromEnum steps + txExUnitsSteps = Just $ \tx -> + let (Alonzo.ExUnits _mem steps) = Alonzo.totExUnits tx + in toEnum $ fromEnum steps instance PerEraAnalysis BabbageEra where - txExUnitsSteps = Just $ \tx -> - let (Alonzo.ExUnits _mem steps) = Alonzo.totExUnits tx - in toEnum $ fromEnum steps + txExUnitsSteps = Just $ \tx -> + let (Alonzo.ExUnits _mem steps) = Alonzo.totExUnits tx + in toEnum $ fromEnum steps instance PerEraAnalysis ConwayEra where - txExUnitsSteps = Just $ \tx -> - let (Alonzo.ExUnits _mem steps) = Alonzo.totExUnits tx - in toEnum $ fromEnum steps + txExUnitsSteps = Just $ \tx -> + let (Alonzo.ExUnits _mem steps) = Alonzo.totExUnits tx + in toEnum $ fromEnum steps -- | Shelley-era specific instance HasProtocolInfo (ShelleyBlock (TPraos StandardCrypto) ShelleyEra) where - data Args (ShelleyBlock (TPraos StandardCrypto) ShelleyEra) = ShelleyBlockArgs { - configFileShelley :: FilePath - , initialNonce :: Nonce - } - deriving (Show) + data Args (ShelleyBlock (TPraos 
StandardCrypto) ShelleyEra) = ShelleyBlockArgs + { configFileShelley :: FilePath + , initialNonce :: Nonce + } + deriving Show mkProtocolInfo ShelleyBlockArgs{configFileShelley, initialNonce} = do - config <- either (error . show) return =<< - Aeson.eitherDecodeFileStrict' configFileShelley + config <- + either (error . show) return + =<< Aeson.eitherDecodeFileStrict' configFileShelley return $ mkShelleyProtocolInfo config initialNonce type ShelleyBlockArgs = Args (ShelleyBlock (TPraos StandardCrypto) ShelleyEra) mkShelleyProtocolInfo :: - ShelleyGenesis - -> Nonce - -> ProtocolInfo (ShelleyBlock (TPraos StandardCrypto) ShelleyEra) + ShelleyGenesis -> + Nonce -> + ProtocolInfo (ShelleyBlock (TPraos StandardCrypto) ShelleyEra) mkShelleyProtocolInfo genesis initialNonce = - fst $ protocolInfoShelley @IO + fst $ + protocolInfoShelley @IO genesis - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = initialNonce + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = initialNonce , shelleyBasedLeaderCredentials = [] } (SL.ProtVer (CL.natVersion @2) 0) diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/CSV.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/CSV.hs index 988b63d803..ea7ae57cfd 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/CSV.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/CSV.hs @@ -18,9 +18,8 @@ -- is needed to compute a row in the resulting CSV. -- -- We use 'TextBuilder' to efficiently intercalate values with the CSV 'Separator'. 
--- -module Cardano.Tools.DBAnalyser.CSV ( - Separator (Separator, unSeparator) +module Cardano.Tools.DBAnalyser.CSV + ( Separator (Separator, unSeparator) , computeAndWriteLine , computeAndWriteLinePure , computeColumns @@ -29,25 +28,25 @@ module Cardano.Tools.DBAnalyser.CSV ( , writeLine ) where -import Data.String (IsString) -import qualified Data.Text.IO as Text.IO -import qualified System.IO as IO -import qualified TextBuilder as TextBuilder -import TextBuilder (TextBuilder) +import Data.String (IsString) +import Data.Text.IO qualified as Text.IO +import System.IO qualified as IO +import TextBuilder (TextBuilder) +import TextBuilder qualified as TextBuilder -newtype Separator = Separator { unSeparator :: TextBuilder } +newtype Separator = Separator {unSeparator :: TextBuilder} deriving (Show, IsString, Monoid, Semigroup) writeHeaderLine :: IO.Handle -> Separator -> [(TextBuilder, a)] -> IO () writeHeaderLine handle (Separator separator) = - Text.IO.hPutStrLn handle + Text.IO.hPutStrLn handle . TextBuilder.toText . TextBuilder.intercalate separator . fmap fst writeLine :: IO.Handle -> Separator -> [TextBuilder] -> IO () writeLine handle (Separator separator) = - Text.IO.hPutStrLn handle + Text.IO.hPutStrLn handle . TextBuilder.toText . 
TextBuilder.intercalate separator @@ -57,7 +56,7 @@ computeAndWriteLine handle separator csvTextBuilder b = do computeAndWriteLinePure :: IO.Handle -> Separator -> [(a, b -> TextBuilder)] -> b -> IO () computeAndWriteLinePure handle separator csvTextBuilder b = - writeLine handle separator $ computeColumnsPure (fmap snd csvTextBuilder) b + writeLine handle separator $ computeColumnsPure (fmap snd csvTextBuilder) b computeColumns :: [a -> IO TextBuilder] -> a -> IO [TextBuilder] computeColumns fTextBuilders a = diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/HasAnalysis.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/HasAnalysis.hs index e1eca27072..9f5ce3248b 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/HasAnalysis.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/HasAnalysis.hs @@ -1,45 +1,44 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE TypeFamilies #-} -module Cardano.Tools.DBAnalyser.HasAnalysis ( - HasAnalysis (..) +module Cardano.Tools.DBAnalyser.HasAnalysis + ( HasAnalysis (..) , HasProtocolInfo (..) , SizeInBytes , WithLedgerState (..) 
) where -import Data.Map.Strict (Map) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation (HasAnnTip (..)) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Storage.Serialisation (SizeInBytes) -import Ouroboros.Consensus.Util.Condense (Condense) -import TextBuilder (TextBuilder) +import Data.Map.Strict (Map) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation (HasAnnTip (..)) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Storage.Serialisation (SizeInBytes) +import Ouroboros.Consensus.Util.Condense (Condense) +import TextBuilder (TextBuilder) {------------------------------------------------------------------------------- HasAnalysis -------------------------------------------------------------------------------} data WithLedgerState blk = WithLedgerState - { wlsBlk :: blk - -- | This ledger state contains only the values to be consumed by the block + { wlsBlk :: blk , wlsStateBefore :: LedgerState blk ValuesMK - -- | This ledger state contains only the values produced by the block - , wlsStateAfter :: LedgerState blk ValuesMK + -- ^ This ledger state contains only the values to be consumed by the block + , wlsStateAfter :: LedgerState blk ValuesMK + -- ^ This ledger state contains only the values produced by the block } class (HasAnnTip blk, GetPrevHash blk, Condense (HeaderHash blk)) => HasAnalysis blk where - countTxOutputs :: blk -> Int - blockTxSizes :: blk -> [SizeInBytes] - knownEBBs :: proxy blk -> Map (HeaderHash blk) (ChainHash blk) + blockTxSizes :: blk -> [SizeInBytes] + knownEBBs :: proxy blk -> Map (HeaderHash blk) (ChainHash blk) -- | Emit trace markers at points in processing. - emitTraces :: WithLedgerState blk -> [String] + emitTraces :: WithLedgerState blk -> [String] -- | This method was introduced for the sake of the 'BenchmarkLedgerOps' pass. 
- blockStats :: blk -> [TextBuilder] + blockStats :: blk -> [TextBuilder] -- | This function allows to define different metrics about block application. -- diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Run.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Run.hs index 4fcd692598..c3f4bf437a 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Run.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Run.hs @@ -9,77 +9,81 @@ module Cardano.Tools.DBAnalyser.Run (analyse) where -import Cardano.Ledger.BaseTypes -import Cardano.Tools.DBAnalyser.Analysis -import Cardano.Tools.DBAnalyser.HasAnalysis -import Cardano.Tools.DBAnalyser.Types -import Control.ResourceRegistry -import Control.Tracer (Tracer (..), nullTracer) -import Data.Singletons (Sing, SingI (..)) -import qualified Data.SOP.Dict as Dict -import qualified Debug.Trace as Debug -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.Inspect -import qualified Ouroboros.Consensus.Ledger.SupportsMempool as LedgerSupportsMempool - (HasTxs) -import Ouroboros.Consensus.Ledger.SupportsProtocol -import qualified Ouroboros.Consensus.Node as Node -import qualified Ouroboros.Consensus.Node.InitStorage as Node -import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) -import qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Args as ChainDB -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Stream as ImmutableDB -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1 as LedgerDB.V1 -import qualified 
Ouroboros.Consensus.Storage.LedgerDB.V1.Args as LedgerDB.V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB as LMDB -import qualified Ouroboros.Consensus.Storage.LedgerDB.V2 as LedgerDB.V2 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V2.Args as LedgerDB.V2 -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Network.Block (genesisPoint) -import System.IO -import Text.Printf (printf) +import Cardano.Ledger.BaseTypes +import Cardano.Tools.DBAnalyser.Analysis +import Cardano.Tools.DBAnalyser.HasAnalysis +import Cardano.Tools.DBAnalyser.Types +import Control.ResourceRegistry +import Control.Tracer (Tracer (..), nullTracer) +import Data.SOP.Dict qualified as Dict +import Data.Singletons (Sing, SingI (..)) +import Debug.Trace qualified as Debug +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsMempool qualified as LedgerSupportsMempool + ( HasTxs + ) +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Node qualified as Node +import Ouroboros.Consensus.Node.InitStorage qualified as Node +import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) +import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args qualified as ChainDB +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.ImmutableDB.Stream qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Storage.LedgerDB.V1 qualified as LedgerDB.V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.Args qualified as LedgerDB.V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB 
qualified as LMDB +import Ouroboros.Consensus.Storage.LedgerDB.V2 qualified as LedgerDB.V2 +import Ouroboros.Consensus.Storage.LedgerDB.V2.Args qualified as LedgerDB.V2 +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Network.Block (genesisPoint) +import System.IO +import Text.Printf (printf) {------------------------------------------------------------------------------- Analyse -------------------------------------------------------------------------------} openLedgerDB :: - ( LedgerSupportsProtocol blk - , InspectLedger blk - , LedgerDB.LedgerDbSerialiseConstraints blk - , HasHardForkHistory blk - , LedgerDB.LedgerSupportsLedgerDB blk - ) - => Complete LedgerDB.LedgerDbArgs IO blk - -> IO ( LedgerDB.LedgerDB' IO blk - , LedgerDB.TestInternals' IO blk - ) -openLedgerDB lgrDbArgs@LedgerDB.LedgerDbArgs{LedgerDB.lgrFlavorArgs=LedgerDB.LedgerDbFlavorArgsV1 bss} = do + ( LedgerSupportsProtocol blk + , InspectLedger blk + , LedgerDB.LedgerDbSerialiseConstraints blk + , HasHardForkHistory blk + , LedgerDB.LedgerSupportsLedgerDB blk + ) => + Complete LedgerDB.LedgerDbArgs IO blk -> + IO + ( LedgerDB.LedgerDB' IO blk + , LedgerDB.TestInternals' IO blk + ) +openLedgerDB lgrDbArgs@LedgerDB.LedgerDbArgs{LedgerDB.lgrFlavorArgs = LedgerDB.LedgerDbFlavorArgsV1 bss} = do (ledgerDB, _, intLedgerDB) <- LedgerDB.openDBInternal lgrDbArgs - (LedgerDB.V1.mkInitDb - lgrDbArgs - bss - (\_ -> error "no replay")) + ( LedgerDB.V1.mkInitDb + lgrDbArgs + bss + (\_ -> error "no replay") + ) emptyStream genesisPoint pure (ledgerDB, intLedgerDB) -openLedgerDB lgrDbArgs@LedgerDB.LedgerDbArgs{LedgerDB.lgrFlavorArgs=LedgerDB.LedgerDbFlavorArgsV2 args} = do +openLedgerDB lgrDbArgs@LedgerDB.LedgerDbArgs{LedgerDB.lgrFlavorArgs = LedgerDB.LedgerDbFlavorArgsV2 args} = do (ledgerDB, _, intLedgerDB) <- LedgerDB.openDBInternal lgrDbArgs - (LedgerDB.V2.mkInitDb - lgrDbArgs - args - (\_ -> error "no replay")) + ( 
LedgerDB.V2.mkInitDb + lgrDbArgs + args + (\_ -> error "no replay") + ) emptyStream genesisPoint pure (ledgerDB, intLedgerDB) @@ -88,140 +92,148 @@ emptyStream :: Applicative m => ImmutableDB.StreamAPI m blk a emptyStream = ImmutableDB.StreamAPI $ \_ k -> k $ Right $ pure ImmutableDB.NoMoreItems defaultLMDBLimits :: LMDB.LMDBLimits -defaultLMDBLimits = LMDB.LMDBLimits - { LMDB.lmdbMapSize = 16 * 1024 * 1024 * 1024 - , LMDB.lmdbMaxDatabases = 10 - , LMDB.lmdbMaxReaders = 16 - } +defaultLMDBLimits = + LMDB.LMDBLimits + { LMDB.lmdbMapSize = 16 * 1024 * 1024 * 1024 + , LMDB.lmdbMaxDatabases = 10 + , LMDB.lmdbMaxReaders = 16 + } analyse :: - forall blk . - ( Node.RunNode blk - , Show (Header blk) - , HasAnalysis blk - , HasProtocolInfo blk - , LedgerSupportsMempool.HasTxs blk - , CanStowLedgerTables (LedgerState blk) - ) - => DBAnalyserConfig - -> Args blk - -> IO (Maybe AnalysisResult) + forall blk. + ( Node.RunNode blk + , Show (Header blk) + , HasAnalysis blk + , HasProtocolInfo blk + , LedgerSupportsMempool.HasTxs blk + , CanStowLedgerTables (LedgerState blk) + ) => + DBAnalyserConfig -> + Args blk -> + IO (Maybe AnalysisResult) analyse dbaConfig args = - withRegistry $ \registry -> do - lock <- newMVar () - chainDBTracer <- mkTracer lock verbose - analysisTracer <- mkTracer lock True - ProtocolInfo { pInfoInitLedger = genesisLedger, pInfoConfig = cfg } <- - mkProtocolInfo args - let shfs = Node.stdMkChainDbHasFS dbDir - chunkInfo = Node.nodeImmutableDbChunkInfo (configStorage cfg) - flavargs = case ldbBackend of - V1InMem -> LedgerDB.LedgerDbFlavorArgsV1 - ( LedgerDB.V1.V1Args - LedgerDB.V1.DisableFlushing - LedgerDB.V1.InMemoryBackingStoreArgs - ) - V1LMDB -> LedgerDB.LedgerDbFlavorArgsV1 - ( LedgerDB.V1.V1Args - LedgerDB.V1.DisableFlushing - ( LedgerDB.V1.LMDBBackingStoreArgs - "lmdb" - defaultLMDBLimits - Dict.Dict - ) - ) - V2InMem -> LedgerDB.LedgerDbFlavorArgsV2 - (LedgerDB.V2.V2Args LedgerDB.V2.InMemoryHandleArgs) - args' = - ChainDB.completeChainDbArgs - 
registry - cfg - genesisLedger - chunkInfo - (const True) - shfs - shfs - flavargs $ - ChainDB.defaultArgs - -- Set @k=1@ to reduce the memory usage of the LedgerDB. We only ever - -- go forward so we don't need to account for rollbacks. - args'' = - args' { - ChainDB.cdbLgrDbArgs = - (\x -> x - { LedgerDB.lgrConfig = - LedgerDB.LedgerDbCfg - (SecurityParam (knownNonZeroBounded @1)) - (LedgerDB.ledgerDbCfg $ LedgerDB.lgrConfig x) - OmitLedgerEvents - } + withRegistry $ \registry -> do + lock <- newMVar () + chainDBTracer <- mkTracer lock verbose + analysisTracer <- mkTracer lock True + ProtocolInfo{pInfoInitLedger = genesisLedger, pInfoConfig = cfg} <- + mkProtocolInfo args + let shfs = Node.stdMkChainDbHasFS dbDir + chunkInfo = Node.nodeImmutableDbChunkInfo (configStorage cfg) + flavargs = case ldbBackend of + V1InMem -> + LedgerDB.LedgerDbFlavorArgsV1 + ( LedgerDB.V1.V1Args + LedgerDB.V1.DisableFlushing + LedgerDB.V1.InMemoryBackingStoreArgs + ) + V1LMDB -> + LedgerDB.LedgerDbFlavorArgsV1 + ( LedgerDB.V1.V1Args + LedgerDB.V1.DisableFlushing + ( LedgerDB.V1.LMDBBackingStoreArgs + "lmdb" + defaultLMDBLimits + Dict.Dict + ) + ) + V2InMem -> + LedgerDB.LedgerDbFlavorArgsV2 + (LedgerDB.V2.V2Args LedgerDB.V2.InMemoryHandleArgs) + args' = + ChainDB.completeChainDbArgs + registry + cfg + genesisLedger + chunkInfo + (const True) + shfs + shfs + flavargs + $ ChainDB.defaultArgs + -- Set @k=1@ to reduce the memory usage of the LedgerDB. We only ever + -- go forward so we don't need to account for rollbacks. 
+ args'' = + args' + { ChainDB.cdbLgrDbArgs = + ( \x -> + x + { LedgerDB.lgrConfig = + LedgerDB.LedgerDbCfg + (SecurityParam (knownNonZeroBounded @1)) + (LedgerDB.ledgerDbCfg $ LedgerDB.lgrConfig x) + OmitLedgerEvents + } ) - (ChainDB.cdbLgrDbArgs args') - } - chainDbArgs = maybeValidateAll $ ChainDB.updateTracer chainDBTracer args'' - immutableDbArgs = ChainDB.cdbImmDbArgs chainDbArgs - ldbArgs = ChainDB.cdbLgrDbArgs args'' - - withImmutableDB immutableDbArgs $ \(immutableDB, internal) -> do - SomeAnalysis (Proxy :: Proxy startFrom) ana <- pure $ runAnalysis analysis - startFrom <- case sing :: Sing startFrom of - SStartFromPoint -> FromPoint <$> case startSlot of - Origin -> pure GenesisPoint - NotOrigin slot -> ImmutableDB.getHashForSlot internal slot >>= \case - Just hash -> pure $ BlockPoint slot hash - Nothing -> fail $ "No block with given slot in the ImmutableDB: " <> show slot - SStartFromLedgerState -> do + (ChainDB.cdbLgrDbArgs args') + } + chainDbArgs = maybeValidateAll $ ChainDB.updateTracer chainDBTracer args'' + immutableDbArgs = ChainDB.cdbImmDbArgs chainDbArgs + ldbArgs = ChainDB.cdbLgrDbArgs args'' - (ledgerDB, intLedgerDB) <- openLedgerDB ldbArgs - -- This marker divides the "loading" phase of the program, where the - -- system is principally occupied with reading snapshot data from - -- disk, from the "processing" phase, where we are streaming blocks - -- and running the ledger processing on them. 
- Debug.traceMarkerIO "SNAPSHOT_LOADED" - pure $ FromLedgerState ledgerDB intLedgerDB + withImmutableDB immutableDbArgs $ \(immutableDB, internal) -> do + SomeAnalysis (Proxy :: Proxy startFrom) ana <- pure $ runAnalysis analysis + startFrom <- case sing :: Sing startFrom of + SStartFromPoint -> + FromPoint <$> case startSlot of + Origin -> pure GenesisPoint + NotOrigin slot -> + ImmutableDB.getHashForSlot internal slot >>= \case + Just hash -> pure $ BlockPoint slot hash + Nothing -> fail $ "No block with given slot in the ImmutableDB: " <> show slot + SStartFromLedgerState -> do + (ledgerDB, intLedgerDB) <- openLedgerDB ldbArgs + -- This marker divides the "loading" phase of the program, where the + -- system is principally occupied with reading snapshot data from + -- disk, from the "processing" phase, where we are streaming blocks + -- and running the ledger processing on them. + Debug.traceMarkerIO "SNAPSHOT_LOADED" + pure $ FromLedgerState ledgerDB intLedgerDB - result <- ana AnalysisEnv { - cfg - , startFrom - , db = immutableDB - , registry - , limit = confLimit - , tracer = analysisTracer - } - tipPoint <- atomically $ ImmutableDB.getTipPoint immutableDB - putStrLn $ "ImmutableDB tip: " ++ show tipPoint - pure result - where - DBAnalyserConfig{ - analysis - , confLimit - , dbDir - , selectDB - , validation - , verbose - , ldbBackend - } = dbaConfig + result <- + ana + AnalysisEnv + { cfg + , startFrom + , db = immutableDB + , registry + , limit = confLimit + , tracer = analysisTracer + } + tipPoint <- atomically $ ImmutableDB.getTipPoint immutableDB + putStrLn $ "ImmutableDB tip: " ++ show tipPoint + pure result + where + DBAnalyserConfig + { analysis + , confLimit + , dbDir + , selectDB + , validation + , verbose + , ldbBackend + } = dbaConfig - SelectImmutableDB startSlot = selectDB + SelectImmutableDB startSlot = selectDB - withImmutableDB immutableDbArgs = - bracket - (ImmutableDB.openDBInternal immutableDbArgs runWithTempRegistry) - 
(ImmutableDB.closeDB . fst) + withImmutableDB immutableDbArgs = + bracket + (ImmutableDB.openDBInternal immutableDbArgs runWithTempRegistry) + (ImmutableDB.closeDB . fst) - mkTracer _ False = return nullTracer - mkTracer lock True = do - startTime <- getMonotonicTime - return $ Tracer $ \ev -> withLock $ do - traceTime <- getMonotonicTime - let diff = diffTime traceTime startTime - hPutStrLn stderr $ printf "[%.6fs] %s" (realToFrac diff :: Double) (show ev) - hFlush stderr - where - withLock = bracket_ (takeMVar lock) (putMVar lock ()) + mkTracer _ False = return nullTracer + mkTracer lock True = do + startTime <- getMonotonicTime + return $ Tracer $ \ev -> withLock $ do + traceTime <- getMonotonicTime + let diff = diffTime traceTime startTime + hPutStrLn stderr $ printf "[%.6fs] %s" (realToFrac diff :: Double) (show ev) + hFlush stderr + where + withLock = bracket_ (takeMVar lock) (putMVar lock ()) - maybeValidateAll = case (analysis, validation) of - (_, Just ValidateAllBlocks) -> ChainDB.ensureValidateAll - (_, Just MinimumBlockValidation) -> id - (OnlyValidation, _ ) -> ChainDB.ensureValidateAll - _ -> id + maybeValidateAll = case (analysis, validation) of + (_, Just ValidateAllBlocks) -> ChainDB.ensureValidateAll + (_, Just MinimumBlockValidation) -> id + (OnlyValidation, _) -> ChainDB.ensureValidateAll + _ -> id diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Types.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Types.hs index d66d1e769d..a43929a250 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Types.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBAnalyser/Types.hs @@ -3,24 +3,24 @@ module Cardano.Tools.DBAnalyser.Types (module Cardano.Tools.DBAnalyser.Types) where -import Data.Word -import Ouroboros.Consensus.Block +import Data.Word +import Ouroboros.Consensus.Block -data SelectDB = - 
SelectImmutableDB (WithOrigin SlotNo) +data SelectDB + = SelectImmutableDB (WithOrigin SlotNo) -data DBAnalyserConfig = DBAnalyserConfig { - dbDir :: FilePath - , verbose :: Bool - , selectDB :: SelectDB +data DBAnalyserConfig = DBAnalyserConfig + { dbDir :: FilePath + , verbose :: Bool + , selectDB :: SelectDB , validation :: Maybe ValidateBlocks - , analysis :: AnalysisName - , confLimit :: Limit + , analysis :: AnalysisName + , confLimit :: Limit , ldbBackend :: LedgerDBBackend } -data AnalysisName = - ShowSlotBlockNo +data AnalysisName + = ShowSlotBlockNo | CountTxOutputs | ShowBlockHeaderSize | ShowBlockTxsSize @@ -32,19 +32,19 @@ data AnalysisName = | TraceLedgerProcessing | BenchmarkLedgerOps (Maybe FilePath) LedgerApplicationMode | ReproMempoolAndForge Int - -- | Compute different block application metrics every 'NumberOfBlocks'. + | -- | Compute different block application metrics every 'NumberOfBlocks'. -- -- The metrics will be written to the provided file path, or to -- the standard output if no file path is specified. 
- | GetBlockApplicationMetrics NumberOfBlocks (Maybe FilePath) + GetBlockApplicationMetrics NumberOfBlocks (Maybe FilePath) deriving Show -data AnalysisResult = - ResultCountBlock Int +data AnalysisResult + = ResultCountBlock Int | ResultMaxHeaderSize Word16 deriving (Eq, Show) -newtype NumberOfBlocks = NumberOfBlocks { unNumberOfBlocks :: Word64 } +newtype NumberOfBlocks = NumberOfBlocks {unNumberOfBlocks :: Word64} deriving (Eq, Show, Num, Read) data Limit = Limit Int | Unlimited diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBImmutaliser/Run.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBImmutaliser/Run.hs index d9da1814d6..172245d16f 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBImmutaliser/Run.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBImmutaliser/Run.hs @@ -7,128 +7,138 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE UndecidableInstances #-} -module Cardano.Tools.DBImmutaliser.Run ( - Opts (..) +module Cardano.Tools.DBImmutaliser.Run + ( Opts (..) , run + -- * Setup , DBDirs (..) , withDBs + -- * Immutalise , TraceImmutalisationEvent (..) 
, immutalise ) where -import qualified Cardano.Tools.DBAnalyser.Block.Cardano as Cardano -import Cardano.Tools.DBAnalyser.HasAnalysis (mkProtocolInfo) -import Control.Monad (unless) -import Control.ResourceRegistry -import Control.Tracer (Tracer (..), stdoutTracer, traceWith) -import Data.Foldable (for_) -import Data.Functor.Contravariant ((>$<)) -import Data.List (intercalate, sortOn) -import Data.List.NonEmpty (NonEmpty) -import qualified Data.List.NonEmpty as NE -import Data.Maybe (fromJust, listToMaybe) -import Data.Ord (Down (..)) -import Data.Set (Set) -import qualified Data.Set as Set -import qualified Data.Text as T -import Data.Traversable (for) -import qualified Dot -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) -import Ouroboros.Consensus.Protocol.Abstract -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Paths as Paths -import Ouroboros.Consensus.Storage.Common (BlockComponent (..)) -import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB, - ImmutableDbArgs (..), Tip, tipToPoint) -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import Ouroboros.Consensus.Storage.VolatileDB (VolatileDB, - VolatileDbArgs (..)) -import qualified Ouroboros.Consensus.Storage.VolatileDB.API as VolatileDB -import qualified Ouroboros.Consensus.Storage.VolatileDB.Impl as VolatileDB -import Ouroboros.Consensus.Util -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Block (MaxSlotNo) -import System.FS.API (SomeHasFS (..)) -import System.FS.API.Types (MountPoint (..)) -import System.FS.IO (ioHasFS) +import Cardano.Tools.DBAnalyser.Block.Cardano qualified as Cardano +import Cardano.Tools.DBAnalyser.HasAnalysis (mkProtocolInfo) +import Control.Monad (unless) +import Control.ResourceRegistry +import Control.Tracer 
(Tracer (..), stdoutTracer, traceWith) +import Data.Foldable (for_) +import Data.Functor.Contravariant ((>$<)) +import Data.List (intercalate, sortOn) +import Data.List.NonEmpty (NonEmpty) +import Data.List.NonEmpty qualified as NE +import Data.Maybe (fromJust, listToMaybe) +import Data.Ord (Down (..)) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Text qualified as T +import Data.Traversable (for) +import Dot qualified +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.Impl.Paths qualified as Paths +import Ouroboros.Consensus.Storage.Common (BlockComponent (..)) +import Ouroboros.Consensus.Storage.ImmutableDB + ( ImmutableDB + , ImmutableDbArgs (..) + , Tip + , tipToPoint + ) +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.VolatileDB + ( VolatileDB + , VolatileDbArgs (..) + ) +import Ouroboros.Consensus.Storage.VolatileDB.API qualified as VolatileDB +import Ouroboros.Consensus.Storage.VolatileDB.Impl qualified as VolatileDB +import Ouroboros.Consensus.Util +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Block (MaxSlotNo) +import System.FS.API (SomeHasFS (..)) +import System.FS.API.Types (MountPoint (..)) +import System.FS.IO (ioHasFS) -data Opts = Opts { - dbDirs :: DBDirs FilePath +data Opts = Opts + { dbDirs :: DBDirs FilePath , configFile :: FilePath - , verbose :: Bool - , dotOut :: Maybe FilePath - , dryRun :: Bool + , verbose :: Bool + , dotOut :: Maybe FilePath + , dryRun :: Bool } run :: Opts -> IO () -run Opts {dbDirs, configFile, verbose, dotOut, dryRun} = do - let dbDirs' = SomeHasFS . ioHasFS . 
MountPoint <$> dbDirs - args = Cardano.CardanoBlockArgs configFile Nothing - ProtocolInfo{pInfoConfig = cfg} <- mkProtocolInfo args - withRegistry $ \registry -> - withDBs cfg registry dbDirs' $ - immutalise (configBlock cfg) (tracer <> dotTracer) dryRun - where - tracer = prettyTrace verbose >$< stdoutTracer - dotTracer = Tracer $ \case - TraceAllCandidates candidates -> do - let dot = dotCandidates $ fst <$> candidates - whenJust dotOut $ flip Dot.encodeToFile dot - _ -> pure () +run Opts{dbDirs, configFile, verbose, dotOut, dryRun} = do + let dbDirs' = SomeHasFS . ioHasFS . MountPoint <$> dbDirs + args = Cardano.CardanoBlockArgs configFile Nothing + ProtocolInfo{pInfoConfig = cfg} <- mkProtocolInfo args + withRegistry $ \registry -> + withDBs cfg registry dbDirs' $ + immutalise (configBlock cfg) (tracer <> dotTracer) dryRun + where + tracer = prettyTrace verbose >$< stdoutTracer + dotTracer = Tracer $ \case + TraceAllCandidates candidates -> do + let dot = dotCandidates $ fst <$> candidates + whenJust dotOut $ flip Dot.encodeToFile dot + _ -> pure () {------------------------------------------------------------------------------- Setup -------------------------------------------------------------------------------} -data DBDirs a = DBDirs { - immDBDir :: a +data DBDirs a = DBDirs + { immDBDir :: a , volDBDir :: a } deriving stock (Functor, Foldable, Traversable) withDBs :: - forall m blk a. - ( IOLike m - , ConvertRawHash blk - , LedgerSupportsProtocol blk - , ImmutableDB.ImmutableDbSerialiseConstraints blk - , VolatileDB.VolatileDbSerialiseConstraints blk - , NodeInitStorage blk - ) - => TopLevelConfig blk - -> ResourceRegistry m - -> DBDirs (SomeHasFS m) - -> (ImmutableDB m blk -> VolatileDB m blk -> m a) - -> m a + forall m blk a. 
+ ( IOLike m + , ConvertRawHash blk + , LedgerSupportsProtocol blk + , ImmutableDB.ImmutableDbSerialiseConstraints blk + , VolatileDB.VolatileDbSerialiseConstraints blk + , NodeInitStorage blk + ) => + TopLevelConfig blk -> + ResourceRegistry m -> + DBDirs (SomeHasFS m) -> + (ImmutableDB m blk -> VolatileDB m blk -> m a) -> + m a withDBs cfg registry dbDirs f = - ImmutableDB.withDB (ImmutableDB.openDB immDBArgs runWithTempRegistry) $ \immDB -> - VolatileDB.withDB (VolatileDB.openDB volDBArgs runWithTempRegistry) $ \volDB -> do + ImmutableDB.withDB (ImmutableDB.openDB immDBArgs runWithTempRegistry) $ \immDB -> + VolatileDB.withDB (VolatileDB.openDB volDBArgs runWithTempRegistry) $ \volDB -> do f immDB volDB - where - codecCfg = configCodec cfg - storageCfg = configStorage cfg + where + codecCfg = configCodec cfg + storageCfg = configStorage cfg - immDBArgs :: Complete ImmutableDbArgs m blk - immDBArgs = ImmutableDB.defaultArgs { - immCheckIntegrity = nodeCheckIntegrity storageCfg - , immChunkInfo = nodeImmutableDbChunkInfo storageCfg - , immCodecConfig = codecCfg - , immRegistry = registry - , immHasFS = immDBDir dbDirs - } + immDBArgs :: Complete ImmutableDbArgs m blk + immDBArgs = + ImmutableDB.defaultArgs + { immCheckIntegrity = nodeCheckIntegrity storageCfg + , immChunkInfo = nodeImmutableDbChunkInfo storageCfg + , immCodecConfig = codecCfg + , immRegistry = registry + , immHasFS = immDBDir dbDirs + } - volDBArgs :: Complete VolatileDbArgs m blk - volDBArgs = VolatileDB.defaultArgs { - volCheckIntegrity = nodeCheckIntegrity (configStorage cfg) - , volCodecConfig = codecCfg - , volHasFS = volDBDir dbDirs - } + volDBArgs :: Complete VolatileDbArgs m blk + volDBArgs = + VolatileDB.defaultArgs + { volCheckIntegrity = nodeCheckIntegrity (configStorage cfg) + , volCodecConfig = codecCfg + , volHasFS = volDBDir dbDirs + } {------------------------------------------------------------------------------- Immutalise @@ -150,71 +160,75 @@ withDBs cfg registry dbDirs f = -- 
-- * picking a chain that contains particular points (user input) immutalise :: - forall m blk. - ( IOLike m - , BlockSupportsProtocol blk - ) - => BlockConfig blk - -> Tracer m (TraceImmutalisationEvent blk) - -> Bool -- ^ Dry run? - -> ImmutableDB m blk - -> VolatileDB m blk - -> m () + forall m blk. + ( IOLike m + , BlockSupportsProtocol blk + ) => + BlockConfig blk -> + Tracer m (TraceImmutalisationEvent blk) -> + -- | Dry run? + Bool -> + ImmutableDB m blk -> + VolatileDB m blk -> + m () immutalise bcfg tracer dryRun immDB volDB = do - immTip <- atomically $ ImmutableDB.getTip immDB - volMaxSlotNo <- atomically $ VolatileDB.getMaxSlotNo volDB - traceWith tracer $ TraceStartImmutalisation immTip volMaxSlotNo + immTip <- atomically $ ImmutableDB.getTip immDB + volMaxSlotNo <- atomically $ VolatileDB.getMaxSlotNo volDB + traceWith tracer $ TraceStartImmutalisation immTip volMaxSlotNo - (succsOf, getBlockInfo) <- atomically $ + (succsOf, getBlockInfo) <- + atomically $ (,) <$> VolatileDB.filterByPredecessor volDB <*> VolatileDB.getBlockInfo volDB - let candidates = - Paths.maximalCandidates succsOf Nothing (tipToPoint immTip) + let candidates = + Paths.maximalCandidates succsOf Nothing (tipToPoint immTip) - -- All blocks that are reachable from the immutable tip. There might be - -- further blocks in the VolatileDB, but the public API currently does - -- not provide a way to observe them. - reachableBlocks :: [VolatileDB.BlockInfo blk] - reachableBlocks = - fmap (fromJust . getBlockInfo) - $ Set.toAscList $ foldMap (Set.fromList . NE.toList) candidates - traceWith tracer $ TraceReachableBlocks reachableBlocks + -- All blocks that are reachable from the immutable tip. There might be + -- further blocks in the VolatileDB, but the public API currently does + -- not provide a way to observe them. + reachableBlocks :: [VolatileDB.BlockInfo blk] + reachableBlocks = + fmap (fromJust . getBlockInfo) $ + Set.toAscList $ + foldMap (Set.fromList . 
NE.toList) candidates + traceWith tracer $ TraceReachableBlocks reachableBlocks - candidatesAndTipHdrs <- for candidates $ \candidate -> do - tipHdr <- - VolatileDB.getKnownBlockComponent volDB GetHeader (NE.last candidate) - pure (candidate, tipHdr) - let sortedCandidates :: - [(NonEmpty (HeaderHash blk), SelectView (BlockProtocol blk))] - sortedCandidates = sortOn (Down . snd) $ do - (candidate, tipHdr) <- candidatesAndTipHdrs - pure (candidate, selectView bcfg tipHdr) + candidatesAndTipHdrs <- for candidates $ \candidate -> do + tipHdr <- + VolatileDB.getKnownBlockComponent volDB GetHeader (NE.last candidate) + pure (candidate, tipHdr) + let sortedCandidates :: + [(NonEmpty (HeaderHash blk), SelectView (BlockProtocol blk))] + sortedCandidates = sortOn (Down . snd) $ do + (candidate, tipHdr) <- candidatesAndTipHdrs + pure (candidate, selectView bcfg tipHdr) - traceWith tracer $ TraceAllCandidates sortedCandidates + traceWith tracer $ TraceAllCandidates sortedCandidates - case sortedCandidates of - [] -> do - traceWith tracer TraceNoVolatileCandidate - (candidate, sv) : _ -> do - traceWith tracer $ TraceCandidateToImmutalise + case sortedCandidates of + [] -> do + traceWith tracer TraceNoVolatileCandidate + (candidate, sv) : _ -> do + traceWith tracer $ + TraceCandidateToImmutalise (NE.last candidate) (NE.length candidate) sv - unless dryRun $ do - -- Copy the candidate blocks from volDB to immDB. - for_ candidate $ \hdrHash -> do - blk <- VolatileDB.getKnownBlockComponent volDB GetBlock hdrHash - ImmutableDB.appendBlock immDB blk + unless dryRun $ do + -- Copy the candidate blocks from volDB to immDB. 
+ for_ candidate $ \hdrHash -> do + blk <- VolatileDB.getKnownBlockComponent volDB GetBlock hdrHash + ImmutableDB.appendBlock immDB blk - newImmTip <- atomically $ ImmutableDB.getTip immDB - traceWith tracer $ TraceCopiedtoImmutableDB newImmTip + newImmTip <- atomically $ ImmutableDB.getTip immDB + traceWith tracer $ TraceCopiedtoImmutableDB newImmTip {------------------------------------------------------------------------------- Tracing -------------------------------------------------------------------------------} -data TraceImmutalisationEvent blk = - TraceStartImmutalisation +data TraceImmutalisationEvent blk + = TraceStartImmutalisation -- | Tip of the ImmutableDB. (WithOrigin (Tip blk)) -- | 'MaxSlotNo' of the VolatileDB. @@ -245,78 +259,88 @@ data TraceImmutalisationEvent blk = deriving stock instance ( ConsensusProtocol (BlockProtocol blk) , StandardHash blk - ) => Show (TraceImmutalisationEvent blk) + ) => + Show (TraceImmutalisationEvent blk) prettyTrace :: - forall blk. - ( ConsensusProtocol (BlockProtocol blk) - , StandardHash blk - ) - => Bool -- ^ verbose? - -> TraceImmutalisationEvent blk - -> String + forall blk. + ( ConsensusProtocol (BlockProtocol blk) + , StandardHash blk + ) => + -- | verbose? 
+ Bool -> + TraceImmutalisationEvent blk -> + String prettyTrace verbose = \case - TraceStartImmutalisation immTip volMaxSlot -> - "Start immutalisation: ImmutableDB tip at " <> show immTip - <> ", VolatileDB max slot at " <> show volMaxSlot - TraceReachableBlocks reachableBlocks -> - "Number of volatile blocks reachable from ImmutableDB tip: " - <> show (length reachableBlocks) <> " (VolatileDB might contain more blocks)" + TraceStartImmutalisation immTip volMaxSlot -> + "Start immutalisation: ImmutableDB tip at " + <> show immTip + <> ", VolatileDB max slot at " + <> show volMaxSlot + TraceReachableBlocks reachableBlocks -> + "Number of volatile blocks reachable from ImmutableDB tip: " + <> show (length reachableBlocks) + <> " (VolatileDB might contain more blocks)" <> if verbose then "\nAll hashes:\n" <> unlines (render <$> reachableBlocks) else "" - where - render :: VolatileDB.BlockInfo blk -> String - render bi = intercalate "\t" [show biHash, show biSlotNo, show biBlockNo] - where - VolatileDB.BlockInfo { - VolatileDB.biHash - , VolatileDB.biSlotNo - , VolatileDB.biBlockNo - } = bi - TraceNoVolatileCandidate -> - "No volatile candidate found for immutalisation" - TraceAllCandidates candidates -> unlines $ - "Number of candidates: " <> show (length candidates) - : concat [selectViewInfo | verbose] - where - selectViewInfo = - "All candidates:" - : [ unlines - [ " - Length: " <> show (NE.length c) - , " Tip hash: " <> show (NE.last c) - , " " <> show sv - ] - | (c, sv) <- candidates - ] - TraceCandidateToImmutalise tipHash numBlocks sv -> - "Immutalising volatile candidate of length " <> show numBlocks - <> " with tip hash " <> show tipHash + where + render :: VolatileDB.BlockInfo blk -> String + render bi = intercalate "\t" [show biHash, show biSlotNo, show biBlockNo] + where + VolatileDB.BlockInfo + { VolatileDB.biHash + , VolatileDB.biSlotNo + , VolatileDB.biBlockNo + } = bi + TraceNoVolatileCandidate -> + "No volatile candidate found for immutalisation" + 
TraceAllCandidates candidates -> + unlines $ + "Number of candidates: " <> show (length candidates) + : concat [selectViewInfo | verbose] + where + selectViewInfo = + "All candidates:" + : [ unlines + [ " - Length: " <> show (NE.length c) + , " Tip hash: " <> show (NE.last c) + , " " <> show sv + ] + | (c, sv) <- candidates + ] + TraceCandidateToImmutalise tipHash numBlocks sv -> + "Immutalising volatile candidate of length " + <> show numBlocks + <> " with tip hash " + <> show tipHash <> if verbose then " and tip select view " <> show sv else "" - TraceCopiedtoImmutableDB newImmTip -> - "Copied to ImmutableDB, new tip is " <> show newImmTip + TraceCopiedtoImmutableDB newImmTip -> + "Copied to ImmutableDB, new tip is " <> show newImmTip -- | Construct a 'Dot.DotGraph' out of a list of candidates. dotCandidates :: forall hash. Show hash => [NonEmpty hash] -> Dot.DotGraph dotCandidates candidates = - Dot.DotGraph Dot.Strict Dot.Directed Nothing $ do - candidate <- fmap renderHash . NE.toList <$> candidates - (from, to) <- zip ("ImmTip" : candidate) candidate - let fromTo = Dot.ListTwo (toNode from) (toNode to) [] - pure $ Dot.StatementEdge $ Dot.EdgeStatement fromTo [] - where - toNode :: String -> Dot.EdgeElement - toNode l = Dot.EdgeNode $ Dot.NodeId (Dot.Id $ T.pack l) Nothing + Dot.DotGraph Dot.Strict Dot.Directed Nothing $ do + candidate <- fmap renderHash . NE.toList <$> candidates + (from, to) <- zip ("ImmTip" : candidate) candidate + let fromTo = Dot.ListTwo (toNode from) (toNode to) [] + pure $ Dot.StatementEdge $ Dot.EdgeStatement fromTo [] + where + toNode :: String -> Dot.EdgeElement + toNode l = Dot.EdgeNode $ Dot.NodeId (Dot.Id $ T.pack l) Nothing - -- Render a shortened hash like in git, i.e. the smallest prefix length - -- such that the hashes still are unique. - renderHash :: hash -> String - renderHash = take prefix . show - where - prefix = fromJust $ listToMaybe $ - [ k - | k <- [4..] 
- , Set.size (Set.map (take k) allHashes) == Set.size allHashes - ] + -- Render a shortened hash like in git, i.e. the smallest prefix length + -- such that the hashes still are unique. + renderHash :: hash -> String + renderHash = take prefix . show + where + prefix = + fromJust $ + listToMaybe $ + [ k + | k <- [4 ..] + , Set.size (Set.map (take k) allHashes) == Set.size allHashes + ] - allHashes :: Set String - allHashes = - foldMap (Set.fromList . fmap show . NE.toList) candidates + allHashes :: Set String + allHashes = + foldMap (Set.fromList . fmap show . NE.toList) candidates diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Forging.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Forging.hs index b0d35f4f77..39195a5150 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Forging.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Forging.hs @@ -4,58 +4,79 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Cardano.Tools.DBSynthesizer.Forging ( - GenTxs +module Cardano.Tools.DBSynthesizer.Forging + ( GenTxs , runForge ) where -import Cardano.Tools.DBSynthesizer.Types (ForgeLimit (..), - ForgeResult (..)) -import Control.Monad (when) -import Control.Monad.Except (runExcept) -import Control.Monad.IO.Class (liftIO) -import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE) -import Control.ResourceRegistry -import Control.Tracer as Trace (nullTracer) -import Data.Either (isRight) -import Data.Maybe (isJust) -import Data.Proxy -import Data.Word (Word64) -import Ouroboros.Consensus.Block.Abstract as Block -import Ouroboros.Consensus.Block.Forging as Block (BlockForging (..), - ShouldForge (..), checkShouldForge) -import Ouroboros.Consensus.Config (TopLevelConfig, configConsensus, - configLedger) -import Ouroboros.Consensus.Forecast (forecastFor) -import 
Ouroboros.Consensus.HeaderValidation - (BasicEnvelopeValidation (..), HeaderState (..)) -import Ouroboros.Consensus.Ledger.Abstract (Validated) -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool (GenTx) -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables) -import Ouroboros.Consensus.Protocol.Abstract (ChainDepState, - tickChainDepState) -import Ouroboros.Consensus.Storage.ChainDB.API as ChainDB - (AddBlockResult (..), ChainDB, addBlockAsync, - blockProcessed, getCurrentChain, getPastLedger, - getReadOnlyForkerAtPoint) -import qualified Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment as InvalidBlockPunishment - (noPunishment) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Util.IOLike (atomically) -import Ouroboros.Network.AnchoredFragment as AF (Anchor (..), - AnchoredFragment, AnchoredSeq (..), headPoint) -import Ouroboros.Network.Protocol.LocalStateQuery.Type - - -data ForgeState = - ForgeState { - currentSlot :: !SlotNo - , forged :: !Word64 +import Cardano.Tools.DBSynthesizer.Types + ( ForgeLimit (..) + , ForgeResult (..) + ) +import Control.Monad (when) +import Control.Monad.Except (runExcept) +import Control.Monad.IO.Class (liftIO) +import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE) +import Control.ResourceRegistry +import Control.Tracer as Trace (nullTracer) +import Data.Either (isRight) +import Data.Maybe (isJust) +import Data.Proxy +import Data.Word (Word64) +import Ouroboros.Consensus.Block.Abstract as Block +import Ouroboros.Consensus.Block.Forging as Block + ( BlockForging (..) + , ShouldForge (..) + , checkShouldForge + ) +import Ouroboros.Consensus.Config + ( TopLevelConfig + , configConsensus + , configLedger + ) +import Ouroboros.Consensus.Forecast (forecastFor) +import Ouroboros.Consensus.HeaderValidation + ( BasicEnvelopeValidation (..) 
+ , HeaderState (..) + ) +import Ouroboros.Consensus.Ledger.Abstract (Validated) +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool (GenTx) +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables) +import Ouroboros.Consensus.Protocol.Abstract + ( ChainDepState + , tickChainDepState + ) +import Ouroboros.Consensus.Storage.ChainDB.API as ChainDB + ( AddBlockResult (..) + , ChainDB + , addBlockAsync + , blockProcessed + , getCurrentChain + , getPastLedger + , getReadOnlyForkerAtPoint + ) +import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment qualified as InvalidBlockPunishment + ( noPunishment + ) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Util.IOLike (atomically) +import Ouroboros.Network.AnchoredFragment as AF + ( Anchor (..) + , AnchoredFragment + , AnchoredSeq (..) + , headPoint + ) +import Ouroboros.Network.Protocol.LocalStateQuery.Type + +data ForgeState + = ForgeState + { currentSlot :: !SlotNo + , forged :: !Word64 , currentEpoch :: !Word64 - , processed :: !SlotNo + , processed :: !SlotNo } initialForgeState :: ForgeState @@ -63,180 +84,194 @@ initialForgeState = ForgeState 0 0 0 0 -- | An action to generate transactions for a given block type GenTxs blk mk = - SlotNo - -> IO (ReadOnlyForker IO (ExtLedgerState blk) blk) - -> TickedLedgerState blk DiffMK - -> IO [Validated (GenTx blk)] + SlotNo -> + IO (ReadOnlyForker IO (ExtLedgerState blk) blk) -> + TickedLedgerState blk DiffMK -> + IO [Validated (GenTx blk)] -- DUPLICATE: runForge mirrors forging loop from ouroboros-consensus/src/Ouroboros/Consensus/NodeKernel.hs -- For an extensive commentary of the forging loop, see there. runForge :: - forall blk mk. 
- ( LedgerSupportsProtocol blk ) - => EpochSize - -> SlotNo - -> ForgeLimit - -> ChainDB IO blk - -> [BlockForging IO blk] - -> TopLevelConfig blk - -> GenTxs blk mk - -> IO ForgeResult + forall blk mk. + LedgerSupportsProtocol blk => + EpochSize -> + SlotNo -> + ForgeLimit -> + ChainDB IO blk -> + [BlockForging IO blk] -> + TopLevelConfig blk -> + GenTxs blk mk -> + IO ForgeResult runForge epochSize_ nextSlot opts chainDB blockForging cfg genTxs = do - putStrLn $ "--> epoch size: " ++ show epochSize_ - putStrLn $ "--> will process until: " ++ show opts - endState <- go initialForgeState {currentSlot = nextSlot} - putStrLn $ "--> forged and adopted " ++ show (forged endState) ++ " blocks; reached " ++ show (currentSlot endState) - pure $ ForgeResult $ fromIntegral $ forged endState - where - epochSize = unEpochSize epochSize_ - - forgingDone :: ForgeState -> Bool - forgingDone = case opts of - ForgeLimitSlot s -> (s == ) . processed - ForgeLimitBlock b -> (b == ) . forged - ForgeLimitEpoch e -> (e == ) . currentEpoch - - go :: ForgeState -> IO ForgeState - go forgeState - | forgingDone forgeState = pure forgeState - | otherwise = go . nextForgeState forgeState . isRight + putStrLn $ "--> epoch size: " ++ show epochSize_ + putStrLn $ "--> will process until: " ++ show opts + endState <- go initialForgeState{currentSlot = nextSlot} + putStrLn $ + "--> forged and adopted " + ++ show (forged endState) + ++ " blocks; reached " + ++ show (currentSlot endState) + pure $ ForgeResult $ fromIntegral $ forged endState + where + epochSize = unEpochSize epochSize_ + + forgingDone :: ForgeState -> Bool + forgingDone = case opts of + ForgeLimitSlot s -> (s ==) . processed + ForgeLimitBlock b -> (b ==) . forged + ForgeLimitEpoch e -> (e ==) . currentEpoch + + go :: ForgeState -> IO ForgeState + go forgeState + | forgingDone forgeState = pure forgeState + | otherwise = + go . nextForgeState forgeState . 
isRight =<< runExceptT (goSlot $ currentSlot forgeState) - nextForgeState :: ForgeState -> Bool -> ForgeState - nextForgeState ForgeState{currentSlot, forged, currentEpoch, processed} didForge = ForgeState { - currentSlot = currentSlot + 1 - , forged = forged + if didForge then 1 else 0 - , currentEpoch = epoch' - , processed = processed' - } - where - processed' = processed + 1 - epoch' = currentEpoch + if unSlotNo processed' `rem` epochSize == 0 then 1 else 0 - - - -- just some shims; in this ported code, we use ExceptT instead of WithEarlyExit - exitEarly' = throwE - lift = liftIO - - goSlot :: SlotNo -> ExceptT String IO () - goSlot currentSlot = do - -- Figure out which block to connect to - BlockContext{bcBlockNo, bcPrevPoint} <- do - eBlkCtx <- lift $ atomically $ + nextForgeState :: ForgeState -> Bool -> ForgeState + nextForgeState ForgeState{currentSlot, forged, currentEpoch, processed} didForge = + ForgeState + { currentSlot = currentSlot + 1 + , forged = forged + if didForge then 1 else 0 + , currentEpoch = epoch' + , processed = processed' + } + where + processed' = processed + 1 + epoch' = currentEpoch + if unSlotNo processed' `rem` epochSize == 0 then 1 else 0 + + -- just some shims; in this ported code, we use ExceptT instead of WithEarlyExit + exitEarly' = throwE + lift = liftIO + + goSlot :: SlotNo -> ExceptT String IO () + goSlot currentSlot = do + -- Figure out which block to connect to + BlockContext{bcBlockNo, bcPrevPoint} <- do + eBlkCtx <- + lift $ + atomically $ mkCurrentBlockContext currentSlot - <$> ChainDB.getCurrentChain chainDB - case eBlkCtx of - Right blkCtx -> return blkCtx - Left{} -> exitEarly' "no block context" - - -- Get corresponding ledger state, ledgder view and ticked 'ChainDepState' - unticked <- do - mExtLedger <- lift $ atomically $ ChainDB.getPastLedger chainDB bcPrevPoint - case mExtLedger of - Just l -> return l - Nothing -> exitEarly' "no ledger state" - - ledgerView <- - case runExcept $ forecastFor - 
(ledgerViewForecastAt - (configLedger cfg) - (ledgerState unticked)) - currentSlot of - Left err -> exitEarly' $ "no ledger view: " ++ show err - Right lv -> return lv - - let tickedChainDepState :: Ticked (ChainDepState (BlockProtocol blk)) - tickedChainDepState = - tickChainDepState - (configConsensus cfg) - ledgerView - currentSlot - (headerStateChainDep (headerState unticked)) - - -- Check if any forger is slot leader - let - checkShouldForge' f = - checkShouldForge f nullTracer cfg currentSlot tickedChainDepState - - checks <- zip blockForging <$> liftIO (mapM checkShouldForge' blockForging) - - (blockForging', proof) <- case [(f, p) | (f, ShouldForge p) <- checks] of - x:_ -> pure x - _ -> exitEarly' "NoLeader" - - -- Tick the ledger state for the 'SlotNo' we're producing a block for - let tickedLedgerState :: Ticked (LedgerState blk) DiffMK - tickedLedgerState = - applyChainTick - OmitLedgerEvents - (configLedger cfg) - currentSlot - (ledgerState unticked) - - -- Let the caller generate transactions - txs <- lift $ withRegistry $ \reg -> - genTxs + <$> ChainDB.getCurrentChain chainDB + case eBlkCtx of + Right blkCtx -> return blkCtx + Left{} -> exitEarly' "no block context" + + -- Get corresponding ledger state, ledgder view and ticked 'ChainDepState' + unticked <- do + mExtLedger <- lift $ atomically $ ChainDB.getPastLedger chainDB bcPrevPoint + case mExtLedger of + Just l -> return l + Nothing -> exitEarly' "no ledger state" + + ledgerView <- + case runExcept $ + forecastFor + ( ledgerViewForecastAt + (configLedger cfg) + (ledgerState unticked) + ) + currentSlot of + Left err -> exitEarly' $ "no ledger view: " ++ show err + Right lv -> return lv + + let tickedChainDepState :: Ticked (ChainDepState (BlockProtocol blk)) + tickedChainDepState = + tickChainDepState + (configConsensus cfg) + ledgerView currentSlot - ( either (error "Impossible: we are forging on top of a block that the ChainDB cannot create forkers on!") id - <$> getReadOnlyForkerAtPoint 
chainDB reg (SpecificPoint bcPrevPoint) - ) - tickedLedgerState - - -- Actually produce the block - newBlock <- lift $ - Block.forgeBlock blockForging' - cfg - bcBlockNo + (headerStateChainDep (headerState unticked)) + + -- Check if any forger is slot leader + let + checkShouldForge' f = + checkShouldForge f nullTracer cfg currentSlot tickedChainDepState + + checks <- zip blockForging <$> liftIO (mapM checkShouldForge' blockForging) + + (blockForging', proof) <- case [(f, p) | (f, ShouldForge p) <- checks] of + x : _ -> pure x + _ -> exitEarly' "NoLeader" + + -- Tick the ledger state for the 'SlotNo' we're producing a block for + let tickedLedgerState :: Ticked (LedgerState blk) DiffMK + tickedLedgerState = + applyChainTick + OmitLedgerEvents + (configLedger cfg) currentSlot - (forgetLedgerTables tickedLedgerState) - txs - proof + (ledgerState unticked) + + -- Let the caller generate transactions + txs <- lift $ withRegistry $ \reg -> + genTxs + currentSlot + ( either + (error "Impossible: we are forging on top of a block that the ChainDB cannot create forkers on!") + id + <$> getReadOnlyForkerAtPoint chainDB reg (SpecificPoint bcPrevPoint) + ) + tickedLedgerState + + -- Actually produce the block + newBlock <- + lift $ + Block.forgeBlock + blockForging' + cfg + bcBlockNo + currentSlot + (forgetLedgerTables tickedLedgerState) + txs + proof - -- Add the block to the chain DB (synchronously) and verify adoption - let noPunish = InvalidBlockPunishment.noPunishment - result <- lift $ ChainDB.addBlockAsync chainDB noPunish newBlock - mbCurTip <- lift $ atomically $ ChainDB.blockProcessed result + -- Add the block to the chain DB (synchronously) and verify adoption + let noPunish = InvalidBlockPunishment.noPunishment + result <- lift $ ChainDB.addBlockAsync chainDB noPunish newBlock + mbCurTip <- lift $ atomically $ ChainDB.blockProcessed result - when (mbCurTip /= SuccesfullyAddedBlock (blockPoint newBlock)) $ - exitEarly' "block not adopted" + when (mbCurTip /= 
SuccesfullyAddedBlock (blockPoint newBlock)) $ + exitEarly' "block not adopted" -- | Context required to forge a block data BlockContext blk = BlockContext - { bcBlockNo :: !BlockNo + { bcBlockNo :: !BlockNo , bcPrevPoint :: !(Point blk) } -- | Create the 'BlockContext' from the header of the previous block blockContextFromPrevHeader :: - HasHeader (Header blk) - => Header blk - -> BlockContext blk + HasHeader (Header blk) => + Header blk -> + BlockContext blk blockContextFromPrevHeader hdr = - BlockContext (succ (blockNo hdr)) (headerPoint hdr) + BlockContext (succ (blockNo hdr)) (headerPoint hdr) -- | Determine the 'BlockContext' for a block about to be forged from the -- current slot, ChainDB chain fragment, and ChainDB tip block number mkCurrentBlockContext :: - forall blk. - ( GetHeader blk - , BasicEnvelopeValidation blk ) - => SlotNo - -> AnchoredFragment (Header blk) - -> Either () (BlockContext blk) + forall blk. + ( GetHeader blk + , BasicEnvelopeValidation blk + ) => + SlotNo -> + AnchoredFragment (Header blk) -> + Either () (BlockContext blk) mkCurrentBlockContext currentSlot c = case c of - Empty AF.AnchorGenesis -> - Right $ BlockContext (expectedFirstBlockNo (Proxy @blk)) GenesisPoint - - Empty (AF.Anchor anchorSlot anchorHash anchorBlockNo) -> - let p :: Point blk = BlockPoint anchorSlot anchorHash - in if anchorSlot < currentSlot - then Right $ BlockContext (succ anchorBlockNo) p - else Left () - - c' :> hdr -> case blockSlot hdr `compare` currentSlot of - LT -> Right $ blockContextFromPrevHeader hdr - GT -> Left () - EQ -> Right $ if isJust (headerIsEBB hdr) - then blockContextFromPrevHeader hdr - else BlockContext (blockNo hdr) $ castPoint $ AF.headPoint c' + Empty AF.AnchorGenesis -> + Right $ BlockContext (expectedFirstBlockNo (Proxy @blk)) GenesisPoint + Empty (AF.Anchor anchorSlot anchorHash anchorBlockNo) -> + let p :: Point blk = BlockPoint anchorSlot anchorHash + in if anchorSlot < currentSlot + then Right $ BlockContext (succ 
anchorBlockNo) p + else Left () + c' :> hdr -> case blockSlot hdr `compare` currentSlot of + LT -> Right $ blockContextFromPrevHeader hdr + GT -> Left () + EQ -> + Right $ + if isJust (headerIsEBB hdr) + then blockContextFromPrevHeader hdr + else BlockContext (blockNo hdr) $ castPoint $ AF.headPoint c' diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Orphans.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Orphans.hs index bda73a8f30..4131972ddc 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Orphans.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Orphans.hs @@ -1,78 +1,88 @@ {-# LANGUAGE GADTs #-} {-# LANGUAGE OverloadedStrings #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Cardano.Tools.DBSynthesizer.Orphans () where -import qualified Cardano.Chain.Update as Byron (ApplicationName (..)) -import Cardano.Crypto (RequiresNetworkMagic (..)) -import Cardano.Node.Types (AdjustFilePaths (..), - NodeByronProtocolConfiguration (..), - NodeHardForkProtocolConfiguration (..)) -import Cardano.Tools.DBSynthesizer.Types -import Control.Monad (when) -import Data.Aeson as Aeson (FromJSON (..), withObject, (.!=), (.:), - (.:?)) - +import Cardano.Chain.Update qualified as Byron (ApplicationName (..)) +import Cardano.Crypto (RequiresNetworkMagic (..)) +import Cardano.Node.Types + ( AdjustFilePaths (..) + , NodeByronProtocolConfiguration (..) + , NodeHardForkProtocolConfiguration (..) + ) +import Cardano.Tools.DBSynthesizer.Types +import Control.Monad (when) +import Data.Aeson as Aeson + ( FromJSON (..) + , withObject + , (.!=) + , (.:) + , (.:?) 
+ ) instance FromJSON NodeConfigStub where - parseJSON val = withObject "NodeConfigStub" (parse' val) val - where - parse' o v = do - proto <- v .: "Protocol" - when (proto /= ("Cardano" :: String)) $ - fail $ "nodeConfig.Protocol expected: Cardano; found: " ++ proto - NodeConfigStub o - <$> v .: "AlonzoGenesisFile" - <*> v .: "ShelleyGenesisFile" - <*> v .: "ByronGenesisFile" - <*> v .: "ConwayGenesisFile" + parseJSON val = withObject "NodeConfigStub" (parse' val) val + where + parse' o v = do + proto <- v .: "Protocol" + when (proto /= ("Cardano" :: String)) $ + fail $ + "nodeConfig.Protocol expected: Cardano; found: " ++ proto + NodeConfigStub o + <$> v .: "AlonzoGenesisFile" + <*> v .: "ShelleyGenesisFile" + <*> v .: "ByronGenesisFile" + <*> v .: "ConwayGenesisFile" instance AdjustFilePaths NodeConfigStub where - adjustFilePaths f nc = - nc { - ncsAlonzoGenesisFile = f $ ncsAlonzoGenesisFile nc - , ncsShelleyGenesisFile = f $ ncsShelleyGenesisFile nc - , ncsByronGenesisFile = f $ ncsByronGenesisFile nc - , ncsConwayGenesisFile = f $ ncsConwayGenesisFile nc - } + adjustFilePaths f nc = + nc + { ncsAlonzoGenesisFile = f $ ncsAlonzoGenesisFile nc + , ncsShelleyGenesisFile = f $ ncsShelleyGenesisFile nc + , ncsByronGenesisFile = f $ ncsByronGenesisFile nc + , ncsConwayGenesisFile = f $ ncsConwayGenesisFile nc + } instance AdjustFilePaths NodeCredentials where - adjustFilePaths f nc = - nc { - credCertFile = f <$> credCertFile nc - , credVRFFile = f <$> credVRFFile nc - , credKESFile = f <$> credKESFile nc - , credBulkFile = f <$> credBulkFile nc - } + adjustFilePaths f nc = + nc + { credCertFile = f <$> credCertFile nc + , credVRFFile = f <$> credVRFFile nc + , credKESFile = f <$> credKESFile nc + , credBulkFile = f <$> credBulkFile nc + } -- DUPLICATE: mirroring parsers from cardano-node/src/Cardano/Node/Configuration/POM.hs instance FromJSON NodeHardForkProtocolConfiguration where - parseJSON = withObject "NodeHardForkProtocolConfiguration" $ \v -> - 
NodeHardForkProtocolConfiguration - <$> v .:? "TestEnableDevelopmentHardForkEras" - .!= False - <*> v .:? "TestShelleyHardForkAtEpoch" - <*> v .:? "TestAllegraHardForkAtEpoch" - <*> v .:? "TestMaryHardForkAtEpoch" - <*> v .:? "TestAlonzoHardForkAtEpoch" - <*> v .:? "TestBabbageHardForkAtEpoch" - <*> v .:? "TestConwayHardForkAtEpoch" + parseJSON = withObject "NodeHardForkProtocolConfiguration" $ \v -> + NodeHardForkProtocolConfiguration + <$> v + .:? "TestEnableDevelopmentHardForkEras" + .!= False + <*> v .:? "TestShelleyHardForkAtEpoch" + <*> v .:? "TestAllegraHardForkAtEpoch" + <*> v .:? "TestMaryHardForkAtEpoch" + <*> v .:? "TestAlonzoHardForkAtEpoch" + <*> v .:? "TestBabbageHardForkAtEpoch" + <*> v .:? "TestConwayHardForkAtEpoch" instance FromJSON NodeByronProtocolConfiguration where - parseJSON = withObject "NodeByronProtocolConfiguration" $ \v -> - NodeByronProtocolConfiguration - <$> v .: "ByronGenesisFile" - <*> v .:? "ByronGenesisHash" - <*> v .:? "RequiresNetworkMagic" - .!= RequiresNoMagic - <*> v .:? "PBftSignatureThreshold" - <*> pure (Byron.ApplicationName "cardano-sl") - <*> v .:? "ApplicationVersion" - .!= 1 - <*> v .: "LastKnownBlockVersion-Major" - <*> v .: "LastKnownBlockVersion-Minor" - <*> v .: "LastKnownBlockVersion-Alt" - .!= 0 + parseJSON = withObject "NodeByronProtocolConfiguration" $ \v -> + NodeByronProtocolConfiguration + <$> v .: "ByronGenesisFile" + <*> v .:? "ByronGenesisHash" + <*> v + .:? "RequiresNetworkMagic" + .!= RequiresNoMagic + <*> v .:? "PBftSignatureThreshold" + <*> pure (Byron.ApplicationName "cardano-sl") + <*> v + .:? 
"ApplicationVersion" + .!= 1 + <*> v .: "LastKnownBlockVersion-Major" + <*> v .: "LastKnownBlockVersion-Minor" + <*> v + .: "LastKnownBlockVersion-Alt" + .!= 0 diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Run.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Run.hs index 1403723ed8..5c6ff41b33 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Run.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Run.hs @@ -1,106 +1,123 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE ScopedTypeVariables #-} -module Cardano.Tools.DBSynthesizer.Run ( - initialize +module Cardano.Tools.DBSynthesizer.Run + ( initialize , synthesize ) where -import Cardano.Api.Any (displayError) -import Cardano.Node.Protocol.Cardano (mkConsensusProtocolCardano) -import Cardano.Node.Types -import Cardano.Tools.DBSynthesizer.Forging -import Cardano.Tools.DBSynthesizer.Orphans () -import Cardano.Tools.DBSynthesizer.Types -import Control.Monad (filterM) -import Control.Monad.Trans.Except (ExceptT) -import Control.Monad.Trans.Except.Extra (firstExceptT, - handleIOExceptT, hoistEither, runExceptT) -import Control.ResourceRegistry -import Control.Tracer -import Data.Aeson as Aeson (FromJSON, Result (..), Value, - eitherDecodeFileStrict', eitherDecodeStrict', fromJSON) -import Data.Bool (bool) -import Data.ByteString as BS (ByteString, readFile) -import qualified Data.Set as Set -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.Node -import Ouroboros.Consensus.Config (TopLevelConfig, configStorage) -import qualified Ouroboros.Consensus.Node as Node (stdMkChainDbHasFS) -import qualified Ouroboros.Consensus.Node.InitStorage as Node - (nodeImmutableDbChunkInfo) -import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) -import Ouroboros.Consensus.Shelley.Node (ShelleyGenesis (..), - validateGenesis) -import 
qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB (getTipPoint) -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl as ChainDB -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Args as ChainDB -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import Ouroboros.Consensus.Storage.LedgerDB.V1.Args as LedgerDB.V1 -import Ouroboros.Consensus.Util.IOLike (atomically) -import Ouroboros.Network.Block -import Ouroboros.Network.Point (WithOrigin (..)) -import System.Directory -import System.FilePath (takeDirectory, (</>)) - +import Cardano.Api.Any (displayError) +import Cardano.Node.Protocol.Cardano (mkConsensusProtocolCardano) +import Cardano.Node.Types +import Cardano.Tools.DBSynthesizer.Forging +import Cardano.Tools.DBSynthesizer.Orphans () +import Cardano.Tools.DBSynthesizer.Types +import Control.Monad (filterM) +import Control.Monad.Trans.Except (ExceptT) +import Control.Monad.Trans.Except.Extra + ( firstExceptT + , handleIOExceptT + , hoistEither + , runExceptT + ) +import Control.ResourceRegistry +import Control.Tracer +import Data.Aeson as Aeson + ( FromJSON + , Result (..) + , Value + , eitherDecodeFileStrict' + , eitherDecodeStrict' + , fromJSON + ) +import Data.Bool (bool) +import Data.ByteString as BS (ByteString, readFile) +import Data.Set qualified as Set +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.Node +import Ouroboros.Consensus.Config (TopLevelConfig, configStorage) +import Ouroboros.Consensus.Node qualified as Node (stdMkChainDbHasFS) +import Ouroboros.Consensus.Node.InitStorage qualified as Node + ( nodeImmutableDbChunkInfo + ) +import Ouroboros.Consensus.Node.ProtocolInfo (ProtocolInfo (..)) +import Ouroboros.Consensus.Shelley.Node + ( ShelleyGenesis (..) 
+ , validateGenesis + ) +import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB (getTipPoint) +import Ouroboros.Consensus.Storage.ChainDB.Impl qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args qualified as ChainDB +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Storage.LedgerDB.V1.Args as LedgerDB.V1 +import Ouroboros.Consensus.Util.IOLike (atomically) +import Ouroboros.Network.Block +import Ouroboros.Network.Point (WithOrigin (..)) +import System.Directory +import System.FilePath (takeDirectory, (</>)) initialize :: - NodeFilePaths - -> NodeCredentials - -> DBSynthesizerOptions - -> IO (Either String (DBSynthesizerConfig, CardanoProtocolParams StandardCrypto)) + NodeFilePaths -> + NodeCredentials -> + DBSynthesizerOptions -> + IO (Either String (DBSynthesizerConfig, CardanoProtocolParams StandardCrypto)) initialize NodeFilePaths{nfpConfig, nfpChainDB} creds synthOptions = do - relativeToConfig :: (FilePath -> FilePath) <- - (</>) . 
takeDirectory <$> makeAbsolute nfpConfig - runExceptT $ do - conf <- initConf relativeToConfig - proto <- initProtocol relativeToConfig conf - pure (conf, proto) - where - initConf :: (FilePath -> FilePath) -> ExceptT String IO DBSynthesizerConfig - initConf relativeToConfig = do - inp <- handleIOExceptT show (BS.readFile nfpConfig) - configStub <- adjustFilePaths relativeToConfig <$> readJson inp - shelleyGenesis <- readFileJson $ ncsShelleyGenesisFile configStub - _ <- hoistEither $ validateGenesis shelleyGenesis - let - protocolCredentials = ProtocolFilepaths { - byronCertFile = Nothing - , byronKeyFile = Nothing - , shelleyKESFile = credKESFile creds - , shelleyVRFFile = credVRFFile creds - , shelleyCertFile = credCertFile creds - , shelleyBulkCredsFile = credBulkFile creds - } - pure DBSynthesizerConfig { - confConfigStub = configStub - , confOptions = synthOptions - , confProtocolCredentials = protocolCredentials - , confShelleyGenesis = shelleyGenesis - , confDbDir = nfpChainDB - } + relativeToConfig :: (FilePath -> FilePath) <- + (</>) . 
takeDirectory <$> makeAbsolute nfpConfig + runExceptT $ do + conf <- initConf relativeToConfig + proto <- initProtocol relativeToConfig conf + pure (conf, proto) + where + initConf :: (FilePath -> FilePath) -> ExceptT String IO DBSynthesizerConfig + initConf relativeToConfig = do + inp <- handleIOExceptT show (BS.readFile nfpConfig) + configStub <- adjustFilePaths relativeToConfig <$> readJson inp + shelleyGenesis <- readFileJson $ ncsShelleyGenesisFile configStub + _ <- hoistEither $ validateGenesis shelleyGenesis + let + protocolCredentials = + ProtocolFilepaths + { byronCertFile = Nothing + , byronKeyFile = Nothing + , shelleyKESFile = credKESFile creds + , shelleyVRFFile = credVRFFile creds + , shelleyCertFile = credCertFile creds + , shelleyBulkCredsFile = credBulkFile creds + } + pure + DBSynthesizerConfig + { confConfigStub = configStub + , confOptions = synthOptions + , confProtocolCredentials = protocolCredentials + , confShelleyGenesis = shelleyGenesis + , confDbDir = nfpChainDB + } - initProtocol :: (FilePath -> FilePath) -> DBSynthesizerConfig -> ExceptT String IO (CardanoProtocolParams StandardCrypto) - initProtocol relativeToConfig DBSynthesizerConfig{confConfigStub, confProtocolCredentials} = do - hfConfig :: NodeHardForkProtocolConfiguration <- - hoistEither hfConfig_ - byronConfig :: NodeByronProtocolConfiguration <- - adjustFilePaths relativeToConfig <$> hoistEither byConfig_ + initProtocol :: + (FilePath -> FilePath) -> + DBSynthesizerConfig -> + ExceptT String IO (CardanoProtocolParams StandardCrypto) + initProtocol relativeToConfig DBSynthesizerConfig{confConfigStub, confProtocolCredentials} = do + hfConfig :: NodeHardForkProtocolConfiguration <- + hoistEither hfConfig_ + byronConfig :: NodeByronProtocolConfiguration <- + adjustFilePaths relativeToConfig <$> hoistEither byConfig_ - firstExceptT displayError $ - mkConsensusProtocolCardano - byronConfig - shelleyConfig - alonzoConfig - conwayConfig - hfConfig - (Just confProtocolCredentials) - 
where - shelleyConfig = NodeShelleyProtocolConfiguration (GenesisFile $ ncsShelleyGenesisFile confConfigStub) Nothing - alonzoConfig = NodeAlonzoProtocolConfiguration (GenesisFile $ ncsAlonzoGenesisFile confConfigStub) Nothing - conwayConfig = NodeConwayProtocolConfiguration (GenesisFile $ ncsConwayGenesisFile confConfigStub) Nothing - hfConfig_ = eitherParseJson $ ncsNodeConfig confConfigStub - byConfig_ = eitherParseJson $ ncsNodeConfig confConfigStub + firstExceptT displayError $ + mkConsensusProtocolCardano + byronConfig + shelleyConfig + alonzoConfig + conwayConfig + hfConfig + (Just confProtocolCredentials) + where + shelleyConfig = NodeShelleyProtocolConfiguration (GenesisFile $ ncsShelleyGenesisFile confConfigStub) Nothing + alonzoConfig = NodeAlonzoProtocolConfiguration (GenesisFile $ ncsAlonzoGenesisFile confConfigStub) Nothing + conwayConfig = NodeConwayProtocolConfiguration (GenesisFile $ ncsConwayGenesisFile confConfigStub) Nothing + hfConfig_ = eitherParseJson $ ncsNodeConfig confConfigStub + byConfig_ = eitherParseJson $ ncsNodeConfig confConfigStub readJson :: (Monad m, FromJSON a) => ByteString -> ExceptT String m a readJson = hoistEither . 
eitherDecodeStrict' @@ -110,90 +127,93 @@ readFileJson f = handleIOExceptT show (eitherDecodeFileStrict' f) >>= hoistEithe eitherParseJson :: FromJSON a => Aeson.Value -> Either String a eitherParseJson v = case fromJSON v of - Error err -> Left err - Success a -> Right a + Error err -> Left err + Success a -> Right a synthesize :: - ( TopLevelConfig (CardanoBlock StandardCrypto) - -> GenTxs (CardanoBlock StandardCrypto) mk - ) - -> DBSynthesizerConfig - -> (CardanoProtocolParams StandardCrypto) - -> IO ForgeResult + ( TopLevelConfig (CardanoBlock StandardCrypto) -> + GenTxs (CardanoBlock StandardCrypto) mk + ) -> + DBSynthesizerConfig -> + (CardanoProtocolParams StandardCrypto) -> + IO ForgeResult synthesize genTxs DBSynthesizerConfig{confOptions, confShelleyGenesis, confDbDir} runP = - withRegistry $ \registry -> do + withRegistry $ \registry -> do + let + epochSize = sgEpochLength confShelleyGenesis + chunkInfo = Node.nodeImmutableDbChunkInfo (configStorage pInfoConfig) + bss = LedgerDB.V1.V1Args LedgerDB.V1.DisableFlushing InMemoryBackingStoreArgs + flavargs = LedgerDB.LedgerDbFlavorArgsV1 bss + dbArgs = + ChainDB.completeChainDbArgs + registry + pInfoConfig + pInfoInitLedger + chunkInfo + (const True) + (Node.stdMkChainDbHasFS confDbDir) + (Node.stdMkChainDbHasFS confDbDir) + flavargs + $ ChainDB.defaultArgs - let - epochSize = sgEpochLength confShelleyGenesis - chunkInfo = Node.nodeImmutableDbChunkInfo (configStorage pInfoConfig) - bss = LedgerDB.V1.V1Args LedgerDB.V1.DisableFlushing InMemoryBackingStoreArgs - flavargs = LedgerDB.LedgerDbFlavorArgsV1 bss - dbArgs = - ChainDB.completeChainDbArgs - registry - pInfoConfig - pInfoInitLedger - chunkInfo - (const True) - (Node.stdMkChainDbHasFS confDbDir) - (Node.stdMkChainDbHasFS confDbDir) - flavargs $ - ChainDB.defaultArgs + forgers <- blockForging + let fCount = length forgers + putStrLn $ "--> forger count: " ++ show fCount + if fCount > 0 + then do + putStrLn $ "--> opening ChainDB on file system with mode: " 
++ show synthOpenMode + preOpenChainDB synthOpenMode confDbDir + let dbTracer = nullTracer + ChainDB.withDB (ChainDB.updateTracer dbTracer dbArgs) $ \chainDB -> do + slotNo <- do + tip <- atomically (ChainDB.getTipPoint chainDB) + pure $ case pointSlot tip of + Origin -> 0 + At s -> succ s - forgers <- blockForging - let fCount = length forgers - putStrLn $ "--> forger count: " ++ show fCount - if fCount > 0 - then do - putStrLn $ "--> opening ChainDB on file system with mode: " ++ show synthOpenMode - preOpenChainDB synthOpenMode confDbDir - let dbTracer = nullTracer - ChainDB.withDB (ChainDB.updateTracer dbTracer dbArgs) $ \chainDB -> do - slotNo <- do - tip <- atomically (ChainDB.getTipPoint chainDB) - pure $ case pointSlot tip of - Origin -> 0 - At s -> succ s - - putStrLn $ "--> starting at: " ++ show slotNo - runForge epochSize slotNo synthLimit chainDB forgers pInfoConfig $ genTxs pInfoConfig - else do - putStrLn "--> no forgers found; leaving possibly existing ChainDB untouched" - pure $ ForgeResult 0 - where - DBSynthesizerOptions - { synthOpenMode - , synthLimit - } = confOptions - ( ProtocolInfo - { pInfoConfig - , pInfoInitLedger - } - , blockForging - ) = protocolInfoCardano runP + putStrLn $ "--> starting at: " ++ show slotNo + runForge epochSize slotNo synthLimit chainDB forgers pInfoConfig $ genTxs pInfoConfig + else do + putStrLn "--> no forgers found; leaving possibly existing ChainDB untouched" + pure $ ForgeResult 0 + where + DBSynthesizerOptions + { synthOpenMode + , synthLimit + } = confOptions + ( ProtocolInfo + { pInfoConfig + , pInfoInitLedger + } + , blockForging + ) = protocolInfoCardano runP preOpenChainDB :: DBSynthesizerOpenMode -> FilePath -> IO () preOpenChainDB mode db = - doesDirectoryExist db >>= bool create checkMode - where - checkIsDB ls = Set.fromList ls `Set.isSubsetOf` chainDBDirs - chainDBDirs = Set.fromList ["immutable", "ledger", "volatile", "gsm"] - loc = "preOpenChainDB: '" ++ db ++ "'" - create = 
createDirectoryIfMissing True db - checkMode = do - isChainDB <- checkIsDB <$> listSubdirectories db - case mode of - OpenCreate -> - fail $ loc ++ " already exists. Use -f to overwrite or -a to append." - OpenAppend | isChainDB -> - pure () - OpenCreateForce | isChainDB -> - removePathForcibly db >> create - _ -> - fail $ loc ++ " is non-empty and does not look like a ChainDB" - <> " (i.e. it contains directories other than" - <> " 'immutable'/'ledger'/'volatile'/'gsm'). Aborting." + doesDirectoryExist db >>= bool create checkMode + where + checkIsDB ls = Set.fromList ls `Set.isSubsetOf` chainDBDirs + chainDBDirs = Set.fromList ["immutable", "ledger", "volatile", "gsm"] + loc = "preOpenChainDB: '" ++ db ++ "'" + create = createDirectoryIfMissing True db + checkMode = do + isChainDB <- checkIsDB <$> listSubdirectories db + case mode of + OpenCreate -> + fail $ loc ++ " already exists. Use -f to overwrite or -a to append." + OpenAppend + | isChainDB -> + pure () + OpenCreateForce + | isChainDB -> + removePathForcibly db >> create + _ -> + fail $ + loc + ++ " is non-empty and does not look like a ChainDB" + <> " (i.e. it contains directories other than" + <> " 'immutable'/'ledger'/'volatile'/'gsm'). Aborting." 
-    listSubdirectories path = filterM isDir =<< listDirectory path
-      where
-        isDir p = doesDirectoryExist (path </> p)
+  listSubdirectories path = filterM isDir =<< listDirectory path
+   where
+    isDir p = doesDirectoryExist (path </> p)
diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Types.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Types.hs
index fff8dbd16b..abf734dd2f 100644
--- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Types.hs
+++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBSynthesizer/Types.hs
@@ -1,61 +1,60 @@
 module Cardano.Tools.DBSynthesizer.Types (module Cardano.Tools.DBSynthesizer.Types) where

-import           Cardano.Node.Types (ProtocolFilepaths)
-import           Data.Aeson as Aeson (Value)
-import           Data.Word (Word64)
-import           Ouroboros.Consensus.Block.Abstract (SlotNo)
-import           Ouroboros.Consensus.Shelley.Node (ShelleyGenesis)
-
-
-data NodeConfigStub = NodeConfigStub {
-    ncsNodeConfig         :: !Aeson.Value
-  , ncsAlonzoGenesisFile  :: !FilePath
+import Cardano.Node.Types (ProtocolFilepaths)
+import Data.Aeson as Aeson (Value)
+import Data.Word (Word64)
+import Ouroboros.Consensus.Block.Abstract (SlotNo)
+import Ouroboros.Consensus.Shelley.Node (ShelleyGenesis)
+
+data NodeConfigStub = NodeConfigStub
+  { ncsNodeConfig :: !Aeson.Value
+  , ncsAlonzoGenesisFile :: !FilePath
   , ncsShelleyGenesisFile :: !FilePath
-  , ncsByronGenesisFile   :: !FilePath
-  , ncsConwayGenesisFile  :: !FilePath
+  , ncsByronGenesisFile :: !FilePath
+  , ncsConwayGenesisFile :: !FilePath
   }
   deriving Show

-data NodeFilePaths = NodeFilePaths {
-    nfpConfig  :: !FilePath
+data NodeFilePaths = NodeFilePaths
+  { nfpConfig :: !FilePath
   , nfpChainDB :: !FilePath
   }
   deriving Show

-data NodeCredentials = NodeCredentials {
-    credCertFile :: !(Maybe FilePath)
-  , credVRFFile  :: !(Maybe FilePath)
-  , credKESFile  :: !(Maybe FilePath)
+data NodeCredentials = NodeCredentials
+  { 
credCertFile :: !(Maybe FilePath) + , credVRFFile :: !(Maybe FilePath) + , credKESFile :: !(Maybe FilePath) , credBulkFile :: !(Maybe FilePath) } deriving Show -data ForgeLimit = - ForgeLimitBlock !Word64 - | ForgeLimitSlot !SlotNo - | ForgeLimitEpoch !Word64 +data ForgeLimit + = ForgeLimitBlock !Word64 + | ForgeLimitSlot !SlotNo + | ForgeLimitEpoch !Word64 deriving (Eq, Show) newtype ForgeResult = ForgeResult {resultForged :: Int} deriving (Eq, Show) -data DBSynthesizerOpenMode = - OpenCreate - | OpenCreateForce - | OpenAppend - deriving (Eq, Show) +data DBSynthesizerOpenMode + = OpenCreate + | OpenCreateForce + | OpenAppend + deriving (Eq, Show) -data DBSynthesizerOptions = DBSynthesizerOptions { - synthLimit :: !ForgeLimit +data DBSynthesizerOptions = DBSynthesizerOptions + { synthLimit :: !ForgeLimit , synthOpenMode :: !DBSynthesizerOpenMode } deriving Show -data DBSynthesizerConfig = DBSynthesizerConfig { - confConfigStub :: NodeConfigStub - , confOptions :: DBSynthesizerOptions +data DBSynthesizerConfig = DBSynthesizerConfig + { confConfigStub :: NodeConfigStub + , confOptions :: DBSynthesizerOptions , confProtocolCredentials :: ProtocolFilepaths - , confShelleyGenesis :: ShelleyGenesis - , confDbDir :: FilePath + , confShelleyGenesis :: ShelleyGenesis + , confDbDir :: FilePath } deriving Show diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBTruncater/Run.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBTruncater/Run.hs index 40fabdcb74..3e663988ac 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBTruncater/Run.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBTruncater/Run.hs @@ -7,43 +7,49 @@ module Cardano.Tools.DBTruncater.Run (truncate) where -import Cardano.Slotting.Slot (WithOrigin (..)) -import Cardano.Tools.DBAnalyser.HasAnalysis -import Cardano.Tools.DBTruncater.Types -import Control.Monad -import Control.Monad.Trans.Class 
(lift) -import Control.Monad.Trans.Maybe (MaybeT (..)) -import Control.ResourceRegistry (runWithTempRegistry, withRegistry) -import Control.Tracer -import Data.Foldable (asum) -import Data.Functor ((<&>)) -import Data.Functor.Identity -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Node as Node -import Ouroboros.Consensus.Node.InitStorage as Node -import Ouroboros.Consensus.Storage.Common -import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB, Iterator, - IteratorResult (..), Tip (..)) -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import Ouroboros.Consensus.Storage.ImmutableDB.Impl -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike -import Prelude hiding (truncate) -import System.IO +import Cardano.Slotting.Slot (WithOrigin (..)) +import Cardano.Tools.DBAnalyser.HasAnalysis +import Cardano.Tools.DBTruncater.Types +import Control.Monad +import Control.Monad.Trans.Class (lift) +import Control.Monad.Trans.Maybe (MaybeT (..)) +import Control.ResourceRegistry (runWithTempRegistry, withRegistry) +import Control.Tracer +import Data.Foldable (asum) +import Data.Functor ((<&>)) +import Data.Functor.Identity +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Node as Node +import Ouroboros.Consensus.Node.InitStorage as Node +import Ouroboros.Consensus.Storage.Common +import Ouroboros.Consensus.Storage.ImmutableDB + ( ImmutableDB + , Iterator + , IteratorResult (..) + , Tip (..) + ) +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.ImmutableDB.Impl +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike +import System.IO +import Prelude hiding (truncate) truncate :: - forall block. 
(Node.RunNode block, HasProtocolInfo block) - => DBTruncaterConfig - -> Args block - -> IO () -truncate DBTruncaterConfig{ dbDir, truncateAfter, verbose } args = do + forall block. + (Node.RunNode block, HasProtocolInfo block) => + DBTruncaterConfig -> + Args block -> + IO () +truncate DBTruncaterConfig{dbDir, truncateAfter, verbose} args = do withRegistry $ \registry -> do lock <- mkLock immutableDBTracer <- mkTracer lock verbose - ProtocolInfo { - pInfoConfig = config - } <- mkProtocolInfo args + ProtocolInfo + { pInfoConfig = config + } <- + mkProtocolInfo args let fs = Node.stdMkChainDbHasFS dbDir (RelativeMountPoint "immutable") chunkInfo = Node.nodeImmutableDbChunkInfo (configStorage config) @@ -61,47 +67,50 @@ truncate DBTruncaterConfig{ dbDir, truncateAfter, verbose } args = do withDB immutableDBArgs $ \(immutableDB, internal) -> do tip <- atomically $ ImmutableDB.getTip immutableDB let truncationBeyondTip = case truncateAfter of - TruncateAfterSlot slotNo -> (tipSlotNo <$> tip) <= NotOrigin slotNo - TruncateAfterBlock bno -> (tipBlockNo <$> tip) <= NotOrigin bno + TruncateAfterSlot slotNo -> (tipSlotNo <$> tip) <= NotOrigin slotNo + TruncateAfterBlock bno -> (tipBlockNo <$> tip) <= NotOrigin bno if truncationBeyondTip - then putStrLn $ "Nothing to truncate, tip stays at " <> show tip - else do - mLastHdr :: Maybe (Header block) <- case truncateAfter of - TruncateAfterSlot slotNo -> runMaybeT $ asum $ - [slotNo, slotNo - 1 .. 0] <&> \s -> do - pt <- RealPoint s <$> MaybeT (getHashForSlot internal s) - lift $ ImmutableDB.getKnownBlockComponent immutableDB GetHeader pt + then putStrLn $ "Nothing to truncate, tip stays at " <> show tip + else do + mLastHdr :: Maybe (Header block) <- case truncateAfter of + TruncateAfterSlot slotNo -> + runMaybeT $ + asum $ + [slotNo, slotNo - 1 .. 
0] <&> \s -> do + pt <- RealPoint s <$> MaybeT (getHashForSlot internal s) + lift $ ImmutableDB.getKnownBlockComponent immutableDB GetHeader pt + TruncateAfterBlock bno -> do + -- At the moment, we're just running a linear search with streamAll to + -- find the correct block to truncate from, but we could in theory do this + -- more quickly by binary searching the chunks of the ImmutableDB. + iterator <- ImmutableDB.streamAll immutableDB registry GetHeader + findLast ((<= bno) . blockNo) iterator - TruncateAfterBlock bno -> do - -- At the moment, we're just running a linear search with streamAll to - -- find the correct block to truncate from, but we could in theory do this - -- more quickly by binary searching the chunks of the ImmutableDB. - iterator <- ImmutableDB.streamAll immutableDB registry GetHeader - findLast ((<= bno) . blockNo) iterator - - case ImmutableDB.headerToTip <$> mLastHdr of - Nothing -> fail "Couldn't find a point to truncate to!" - Just newTip -> do - putStrLn $ mconcat - [ "Truncating the ImmutableDB using the following block as the " - , "new tip:\n" - , " ", show newTip - ] - deleteAfter internal (At newTip) + case ImmutableDB.headerToTip <$> mLastHdr of + Nothing -> fail "Couldn't find a point to truncate to!" + Just newTip -> do + putStrLn $ + mconcat + [ "Truncating the ImmutableDB using the following block as the " + , "new tip:\n" + , " " + , show newTip + ] + deleteAfter internal (At newTip) -- | Given a predicate, and an iterator, find the last item for which -- the predicate passes. 
findLast :: Monad m => (a -> Bool) -> Iterator m blk a -> m (Maybe a) findLast p iter = - go Nothing - where - go acc = - ImmutableDB.iteratorNext iter >>= \case - IteratorExhausted -> do - ImmutableDB.iteratorClose iter - pure acc - IteratorResult a -> do - if p a then go (Just a) else pure acc + go Nothing + where + go acc = + ImmutableDB.iteratorNext iter >>= \case + IteratorExhausted -> do + ImmutableDB.iteratorClose iter + pure acc + IteratorResult a -> do + if p a then go (Just a) else pure acc mkLock :: MonadMVar m => m (StrictMVar m ()) mkLock = newMVar () @@ -118,8 +127,9 @@ mkTracer lock True = do hFlush stderr withDB :: - (Node.RunNode block, IOLike m) - => ImmutableDbArgs Identity m block - -> ((ImmutableDB m block, Internal m block) -> m a) - -> m a -withDB immutableDBArgs = bracket (ImmutableDB.openDBInternal immutableDBArgs runWithTempRegistry) (ImmutableDB.closeDB . fst) + (Node.RunNode block, IOLike m) => + ImmutableDbArgs Identity m block -> + ((ImmutableDB m block, Internal m block) -> m a) -> + m a +withDB immutableDBArgs = + bracket (ImmutableDB.openDBInternal immutableDBArgs runWithTempRegistry) (ImmutableDB.closeDB . fst) diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBTruncater/Types.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBTruncater/Types.hs index 2ce1134fa0..6141860f8c 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBTruncater/Types.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/DBTruncater/Types.hs @@ -1,22 +1,22 @@ -module Cardano.Tools.DBTruncater.Types ( - DBTruncaterConfig (..) +module Cardano.Tools.DBTruncater.Types + ( DBTruncaterConfig (..) , TruncateAfter (..) 
) where -import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Block.Abstract -data DBTruncaterConfig = DBTruncaterConfig { - dbDir :: FilePath +data DBTruncaterConfig = DBTruncaterConfig + { dbDir :: FilePath , truncateAfter :: TruncateAfter - , verbose :: Bool + , verbose :: Bool } -- | Where to truncate the ImmutableDB. data TruncateAfter - -- | Truncate after the given slot number, deleting all blocks with a higher + = -- | Truncate after the given slot number, deleting all blocks with a higher -- slot number. - = TruncateAfterSlot SlotNo - -- | Truncate after the given block number (such that the new tip has this + TruncateAfterSlot SlotNo + | -- | Truncate after the given block number (such that the new tip has this -- block number). - | TruncateAfterBlock BlockNo + TruncateAfterBlock BlockNo deriving (Show, Eq) diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/GitRev.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/GitRev.hs index 3f33d23ac7..be56c50bb3 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/GitRev.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/GitRev.hs @@ -3,26 +3,26 @@ module Cardano.Tools.GitRev (gitRev) where -import qualified Cardano.Git.Rev -import Data.Text (Text) -import qualified Data.Text as T -import GitHash (giDirty, giHash, tGitInfoCwdTry) +import Cardano.Git.Rev qualified +import Data.Text (Text) +import Data.Text qualified as T +import GitHash (giDirty, giHash, tGitInfoCwdTry) -- | A string representing what ouroboros-consensus git commit this code was -- built from. No particular format should be assumed. 
gitRev :: Text gitRev | T.all (== '0') rev = "unavailable (git info missing at build time)" - | otherwise = rev - where - rev = - $$(let eGitInfo = $$tGitInfoCwdTry in - -- Local binding only to avoid redundant pattern match warning - case eGitInfo of - Right gitInfo -> - [||T.pack (giHash gitInfo) <> if giDirty gitInfo then "-dirty" else ""||] - -- In case of failure, try cardano-git-rev (where the commit hash - -- can be embedded later). - Left _ -> [||otherRev||] - ) - otherRev = $(Cardano.Git.Rev.gitRev) + | otherwise = rev + where + rev = + $$( let eGitInfo = $$tGitInfoCwdTry + in -- Local binding only to avoid redundant pattern match warning + case eGitInfo of + Right gitInfo -> + [||T.pack (giHash gitInfo) <> if giDirty gitInfo then "-dirty" else ""||] + -- In case of failure, try cardano-git-rev (where the commit hash + -- can be embedded later). + Left _ -> [||otherRev||] + ) + otherRev = $(Cardano.Git.Rev.gitRev) diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/Headers.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/Headers.hs index 471ba1ff4e..49e905f5eb 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/Headers.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/Headers.hs @@ -4,78 +4,102 @@ {-# LANGUAGE TypeApplications #-} -- | Tooling to generate and validate (Praos) headers. -module Cardano.Tools.Headers ( - Options (..) +module Cardano.Tools.Headers + ( Options (..) , ValidationResult (..) 
, run , validate ) where -import Cardano.Crypto.DSIGN (deriveVerKeyDSIGN) -import Cardano.Crypto.VRF.Class (deriveVerKeyVRF) -import Cardano.Ledger.Api (ConwayEra) -import Cardano.Ledger.Coin (Coin (..)) -import Cardano.Ledger.Compactible (toCompact) -import Cardano.Ledger.Keys (VKey (..), hashKey) -import Cardano.Ledger.State (IndividualPoolStake (..)) -import Cardano.Prelude (ExitCode (..), exitWith, forM_, hPutStrLn, - stderr) -import Cardano.Protocol.Crypto (StandardCrypto, hashVerKeyVRF) -import Control.Monad.Except (runExcept) -import qualified Data.Aeson as Json -import qualified Data.ByteString.Lazy as LBS -import qualified Data.Map as Map -import Data.Maybe (fromJust) -import Ouroboros.Consensus.Block (validateView) -import Ouroboros.Consensus.Protocol.Praos (Praos, - doValidateKESSignature, doValidateVRFSignature) -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock, - mkShelleyHeader) -import Ouroboros.Consensus.Shelley.Protocol.Praos () -import Test.Ouroboros.Consensus.Protocol.Praos.Header - (GeneratorContext (..), MutatedHeader (..), Mutation (..), - Sample (..), expectedError, generateSamples, header, - mutation) +import Cardano.Crypto.DSIGN (deriveVerKeyDSIGN) +import Cardano.Crypto.VRF.Class (deriveVerKeyVRF) +import Cardano.Ledger.Api (ConwayEra) +import Cardano.Ledger.Coin (Coin (..)) +import Cardano.Ledger.Compactible (toCompact) +import Cardano.Ledger.Keys (VKey (..), hashKey) +import Cardano.Ledger.State (IndividualPoolStake (..)) +import Cardano.Prelude + ( ExitCode (..) 
+ , exitWith + , forM_ + , hPutStrLn + , stderr + ) +import Cardano.Protocol.Crypto (StandardCrypto, hashVerKeyVRF) +import Control.Monad.Except (runExcept) +import Data.Aeson qualified as Json +import Data.ByteString.Lazy qualified as LBS +import Data.Map qualified as Map +import Data.Maybe (fromJust) +import Ouroboros.Consensus.Block (validateView) +import Ouroboros.Consensus.Protocol.Praos + ( Praos + , doValidateKESSignature + , doValidateVRFSignature + ) +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger + ( ShelleyBlock + , mkShelleyHeader + ) +import Ouroboros.Consensus.Shelley.Protocol.Praos () +import Test.Ouroboros.Consensus.Protocol.Praos.Header + ( GeneratorContext (..) + , MutatedHeader (..) + , Mutation (..) + , Sample (..) + , expectedError + , generateSamples + , header + , mutation + ) type ConwayBlock = ShelleyBlock (Praos StandardCrypto) ConwayEra -- * Running Generator data Options - = Generate Int - | Validate + = Generate Int + | Validate run :: Options -> IO () run = \case - Generate n -> do - sample <- generateSamples n - LBS.putStr $ Json.encode sample <> "\n" - Validate -> - Json.eitherDecode <$> LBS.getContents >>= \case - Left err -> hPutStrLn stderr err >> exitWith (ExitFailure 1) - Right Sample{sample} -> - forM_ sample $ \(context, mutatedHeader) -> do - print $ validate context mutatedHeader + Generate n -> do + sample <- generateSamples n + LBS.putStr $ Json.encode sample <> "\n" + Validate -> + Json.eitherDecode <$> LBS.getContents >>= \case + Left err -> hPutStrLn stderr err >> exitWith (ExitFailure 1) + Right Sample{sample} -> + forM_ sample $ \(context, mutatedHeader) -> do + print $ validate context mutatedHeader data ValidationResult = Valid !Mutation | Invalid !Mutation !String - deriving (Eq, Show) + deriving (Eq, Show) validate :: GeneratorContext -> MutatedHeader -> ValidationResult validate context MutatedHeader{header, mutation} = - case (runExcept $ validateKES >> validateVRF, 
mutation) of - (Left err, mut) | expectedError mut err -> Valid mut - (Left err, mut) -> Invalid mut (show err) - (Right _, NoMutation) -> Valid NoMutation - (Right _, mut) -> Invalid mut $ "Expected error from mutation " <> show mut <> ", but validation succeeded" - where - GeneratorContext{praosSlotsPerKESPeriod, praosMaxKESEvo, nonce, coldSignKey, vrfSignKey, ocertCounters, activeSlotCoeff} = context - -- TODO: get these from the context - coin = fromJust . toCompact . Coin - ownsAllStake vrfKey = IndividualPoolStake 1 (coin 1) vrfKey - poolDistr = Map.fromList [(poolId, ownsAllStake hashVRFKey)] - poolId = hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey - hashVRFKey = hashVerKeyVRF @StandardCrypto $ deriveVerKeyVRF vrfSignKey + case (runExcept $ validateKES >> validateVRF, mutation) of + (Left err, mut) | expectedError mut err -> Valid mut + (Left err, mut) -> Invalid mut (show err) + (Right _, NoMutation) -> Valid NoMutation + (Right _, mut) -> Invalid mut $ "Expected error from mutation " <> show mut <> ", but validation succeeded" + where + GeneratorContext + { praosSlotsPerKESPeriod + , praosMaxKESEvo + , nonce + , coldSignKey + , vrfSignKey + , ocertCounters + , activeSlotCoeff + } = context + -- TODO: get these from the context + coin = fromJust . toCompact . 
Coin + ownsAllStake vrfKey = IndividualPoolStake 1 (coin 1) vrfKey + poolDistr = Map.fromList [(poolId, ownsAllStake hashVRFKey)] + poolId = hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey + hashVRFKey = hashVerKeyVRF @StandardCrypto $ deriveVerKeyVRF vrfSignKey - headerView = validateView @ConwayBlock undefined (mkShelleyHeader header) - validateKES = doValidateKESSignature praosMaxKESEvo praosSlotsPerKESPeriod poolDistr ocertCounters headerView - validateVRF = doValidateVRFSignature nonce poolDistr activeSlotCoeff headerView + headerView = validateView @ConwayBlock undefined (mkShelleyHeader header) + validateKES = doValidateKESSignature praosMaxKESEvo praosSlotsPerKESPeriod poolDistr ocertCounters headerView + validateVRF = doValidateVRFSignature nonce poolDistr activeSlotCoeff headerView diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/ImmDBServer/Diffusion.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/ImmDBServer/Diffusion.hs index ed6142daa7..53658e5634 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/ImmDBServer/Diffusion.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/ImmDBServer/Diffusion.hs @@ -5,102 +5,111 @@ module Cardano.Tools.ImmDBServer.Diffusion (run) where -import Cardano.Tools.ImmDBServer.MiniProtocols (immDBServer) -import Control.ResourceRegistry -import Control.Tracer -import qualified Data.ByteString.Lazy as BL -import Data.Functor.Contravariant ((>$<)) -import Data.Void (Void) -import qualified Network.Mux as Mux -import Network.Socket (SockAddr (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.Node.InitStorage - (NodeInitStorage (nodeCheckIntegrity, nodeImmutableDbChunkInfo)) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run (SerialiseNodeToNodeConstraints) -import 
Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDbArgs (..)) -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import Ouroboros.Consensus.Util -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.ErrorPolicy (nullErrorPolicies) -import Ouroboros.Network.IOManager (withIOManager) -import Ouroboros.Network.Mux -import qualified Ouroboros.Network.NodeToNode as N2N -import Ouroboros.Network.PeerSelection.PeerSharing.Codec - (decodeRemoteAddress, encodeRemoteAddress) -import qualified Ouroboros.Network.Snocket as Snocket -import Ouroboros.Network.Socket (configureSocket) -import System.FS.API (SomeHasFS (..)) -import System.FS.API.Types (MountPoint (MountPoint)) -import System.FS.IO (ioHasFS) +import Cardano.Tools.ImmDBServer.MiniProtocols (immDBServer) +import Control.ResourceRegistry +import Control.Tracer +import Data.ByteString.Lazy qualified as BL +import Data.Functor.Contravariant ((>$<)) +import Data.Void (Void) +import Network.Mux qualified as Mux +import Network.Socket (SockAddr (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.Node.InitStorage + ( NodeInitStorage (nodeCheckIntegrity, nodeImmutableDbChunkInfo) + ) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run (SerialiseNodeToNodeConstraints) +import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDbArgs (..)) +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Util +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.ErrorPolicy (nullErrorPolicies) +import Ouroboros.Network.IOManager (withIOManager) +import Ouroboros.Network.Mux +import Ouroboros.Network.NodeToNode qualified as N2N +import Ouroboros.Network.PeerSelection.PeerSharing.Codec + ( decodeRemoteAddress + , encodeRemoteAddress + ) +import Ouroboros.Network.Snocket qualified as Snocket +import 
Ouroboros.Network.Socket (configureSocket) +import System.FS.API (SomeHasFS (..)) +import System.FS.API.Types (MountPoint (MountPoint)) +import System.FS.IO (ioHasFS) -- | Glue code for using just the bits from the Diffusion Layer that we need in -- this context. serve :: - SockAddr - -> N2N.Versions N2N.NodeToNodeVersion N2N.NodeToNodeVersionData - (OuroborosApplicationWithMinimalCtx 'Mux.ResponderMode SockAddr BL.ByteString IO Void ()) - -> IO Void + SockAddr -> + N2N.Versions + N2N.NodeToNodeVersion + N2N.NodeToNodeVersionData + (OuroborosApplicationWithMinimalCtx 'Mux.ResponderMode SockAddr BL.ByteString IO Void ()) -> + IO Void serve sockAddr application = withIOManager \iocp -> do - let sn = Snocket.socketSnocket iocp - family = Snocket.addrFamily sn sockAddr - bracket (Snocket.open sn family) (Snocket.close sn) \socket -> do - networkMutableState <- N2N.newNetworkMutableState - configureSocket socket (Just sockAddr) - Snocket.bind sn socket sockAddr - Snocket.listen sn socket - N2N.withServer - sn - N2N.nullNetworkServerTracers { - N2N.nstHandshakeTracer = show >$< stdoutTracer + let sn = Snocket.socketSnocket iocp + family = Snocket.addrFamily sn sockAddr + bracket (Snocket.open sn family) (Snocket.close sn) \socket -> do + networkMutableState <- N2N.newNetworkMutableState + configureSocket socket (Just sockAddr) + Snocket.bind sn socket sockAddr + Snocket.listen sn socket + N2N.withServer + sn + N2N.nullNetworkServerTracers + { N2N.nstHandshakeTracer = show >$< stdoutTracer , N2N.nstErrorPolicyTracer = show >$< stdoutTracer } - networkMutableState - acceptedConnectionsLimit - socket - application - nullErrorPolicies - where - acceptedConnectionsLimit = N2N.AcceptedConnectionsLimit { - N2N.acceptedConnectionsHardLimit = maxBound - , N2N.acceptedConnectionsSoftLimit = maxBound - , N2N.acceptedConnectionsDelay = 0 - } + networkMutableState + acceptedConnectionsLimit + socket + application + nullErrorPolicies + where + acceptedConnectionsLimit = + 
N2N.AcceptedConnectionsLimit + { N2N.acceptedConnectionsHardLimit = maxBound + , N2N.acceptedConnectionsSoftLimit = maxBound + , N2N.acceptedConnectionsDelay = 0 + } run :: - forall blk. - ( GetPrevHash blk - , ShowProxy blk - , SupportedNetworkProtocolVersion blk - , SerialiseNodeToNodeConstraints blk - , ImmutableDB.ImmutableDbSerialiseConstraints blk - , NodeInitStorage blk - , ConfigSupportsNode blk - ) - => FilePath - -> SockAddr - -> TopLevelConfig blk - -> IO Void + forall blk. + ( GetPrevHash blk + , ShowProxy blk + , SupportedNetworkProtocolVersion blk + , SerialiseNodeToNodeConstraints blk + , ImmutableDB.ImmutableDbSerialiseConstraints blk + , NodeInitStorage blk + , ConfigSupportsNode blk + ) => + FilePath -> + SockAddr -> + TopLevelConfig blk -> + IO Void run immDBDir sockAddr cfg = withRegistry \registry -> - ImmutableDB.withDB - (ImmutableDB.openDB (immDBArgs registry) runWithTempRegistry) - \immDB -> serve sockAddr $ immDBServer - codecCfg - encodeRemoteAddress - decodeRemoteAddress - immDB - networkMagic - where - immDBArgs registry = ImmutableDB.defaultArgs { - immCheckIntegrity = nodeCheckIntegrity storageCfg - , immChunkInfo = nodeImmutableDbChunkInfo storageCfg - , immCodecConfig = codecCfg - , immRegistry = registry - , immHasFS = SomeHasFS $ ioHasFS $ MountPoint immDBDir - } + ImmutableDB.withDB + (ImmutableDB.openDB (immDBArgs registry) runWithTempRegistry) + \immDB -> + serve sockAddr $ + immDBServer + codecCfg + encodeRemoteAddress + decodeRemoteAddress + immDB + networkMagic + where + immDBArgs registry = + ImmutableDB.defaultArgs + { immCheckIntegrity = nodeCheckIntegrity storageCfg + , immChunkInfo = nodeImmutableDbChunkInfo storageCfg + , immCodecConfig = codecCfg + , immRegistry = registry + , immHasFS = SomeHasFS $ ioHasFS $ MountPoint immDBDir + } - codecCfg = configCodec cfg - storageCfg = configStorage cfg - networkMagic = getNetworkMagic . 
configBlock $ cfg + codecCfg = configCodec cfg + storageCfg = configStorage cfg + networkMagic = getNetworkMagic . configBlock $ cfg diff --git a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/ImmDBServer/MiniProtocols.hs b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/ImmDBServer/MiniProtocols.hs index 288be36eb0..c4fdd9f093 100644 --- a/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/ImmDBServer/MiniProtocols.hs +++ b/ouroboros-consensus-cardano/src/unstable-cardano-tools/Cardano/Tools/ImmDBServer/MiniProtocols.hs @@ -14,245 +14,264 @@ -- | Implement ChainSync and BlockFetch servers on top of just the immutable DB. module Cardano.Tools.ImmDBServer.MiniProtocols (immDBServer) where -import qualified Codec.CBOR.Decoding as CBOR -import qualified Codec.CBOR.Encoding as CBOR -import Control.Monad (forever) -import Control.ResourceRegistry -import Control.Tracer -import Data.Bifunctor (bimap) -import qualified Data.ByteString.Lazy as BL -import Data.Functor ((<&>)) -import qualified Data.Map.Strict as Map -import Data.Typeable (Typeable) -import Data.Void (Void) -import GHC.Generics (Generic) -import qualified Network.Mux as Mux -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server - (blockFetchServer') -import Ouroboros.Consensus.MiniProtocol.ChainSync.Server - (chainSyncServerForFollower) -import Ouroboros.Consensus.Network.NodeToNode (Codecs (..)) -import qualified Ouroboros.Consensus.Network.NodeToNode as Consensus.N2N -import Ouroboros.Consensus.Node (stdVersionDataNTN) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run (SerialiseNodeToNodeConstraints) -import Ouroboros.Consensus.Storage.ChainDB.API (Follower (..)) -import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB -import Ouroboros.Consensus.Storage.Common -import Ouroboros.Consensus.Storage.ImmutableDB.API (ImmutableDB) -import qualified 
Ouroboros.Consensus.Storage.ImmutableDB.API as ImmutableDB -import Ouroboros.Consensus.Util -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Block (ChainUpdate (..), Tip (..)) -import Ouroboros.Network.Driver (runPeer) -import Ouroboros.Network.KeepAlive (keepAliveServer) -import Ouroboros.Network.Magic (NetworkMagic) -import Ouroboros.Network.Mux (MiniProtocol (..), MiniProtocolCb (..), - OuroborosApplication (..), - OuroborosApplicationWithMinimalCtx, RunMiniProtocol (..)) -import Ouroboros.Network.NodeToNode (NodeToNodeVersionData (..), - Versions (..)) -import qualified Ouroboros.Network.NodeToNode as N2N -import Ouroboros.Network.PeerSelection.PeerSharing (PeerSharing (..)) -import Ouroboros.Network.Protocol.BlockFetch.Server -import Ouroboros.Network.Protocol.ChainSync.Server -import Ouroboros.Network.Protocol.Handshake.Version (Version (..)) -import Ouroboros.Network.Protocol.KeepAlive.Server - (keepAliveServerPeer) +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding qualified as CBOR +import Control.Monad (forever) +import Control.ResourceRegistry +import Control.Tracer +import Data.Bifunctor (bimap) +import Data.ByteString.Lazy qualified as BL +import Data.Functor ((<&>)) +import Data.Map.Strict qualified as Map +import Data.Typeable (Typeable) +import Data.Void (Void) +import GHC.Generics (Generic) +import Network.Mux qualified as Mux +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server + ( blockFetchServer' + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Server + ( chainSyncServerForFollower + ) +import Ouroboros.Consensus.Network.NodeToNode (Codecs (..)) +import Ouroboros.Consensus.Network.NodeToNode qualified as Consensus.N2N +import Ouroboros.Consensus.Node (stdVersionDataNTN) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run (SerialiseNodeToNodeConstraints) +import Ouroboros.Consensus.Storage.ChainDB.API (Follower (..)) 
+import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB +import Ouroboros.Consensus.Storage.Common +import Ouroboros.Consensus.Storage.ImmutableDB.API (ImmutableDB) +import Ouroboros.Consensus.Storage.ImmutableDB.API qualified as ImmutableDB +import Ouroboros.Consensus.Util +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Block (ChainUpdate (..), Tip (..)) +import Ouroboros.Network.Driver (runPeer) +import Ouroboros.Network.KeepAlive (keepAliveServer) +import Ouroboros.Network.Magic (NetworkMagic) +import Ouroboros.Network.Mux + ( MiniProtocol (..) + , MiniProtocolCb (..) + , OuroborosApplication (..) + , OuroborosApplicationWithMinimalCtx + , RunMiniProtocol (..) + ) +import Ouroboros.Network.NodeToNode + ( NodeToNodeVersionData (..) + , Versions (..) + ) +import Ouroboros.Network.NodeToNode qualified as N2N +import Ouroboros.Network.PeerSelection.PeerSharing (PeerSharing (..)) +import Ouroboros.Network.Protocol.BlockFetch.Server +import Ouroboros.Network.Protocol.ChainSync.Server +import Ouroboros.Network.Protocol.Handshake.Version (Version (..)) +import Ouroboros.Network.Protocol.KeepAlive.Server + ( keepAliveServerPeer + ) immDBServer :: - forall m blk addr. - ( IOLike m - , HasHeader blk - , ShowProxy blk - , SerialiseNodeToNodeConstraints blk - , SupportedNetworkProtocolVersion blk - ) - => CodecConfig blk - -> (NodeToNodeVersion -> addr -> CBOR.Encoding) - -> (NodeToNodeVersion -> forall s . CBOR.Decoder s addr) - -> ImmutableDB m blk - -> NetworkMagic - -> Versions NodeToNodeVersion NodeToNodeVersionData - (OuroborosApplicationWithMinimalCtx 'Mux.ResponderMode addr BL.ByteString m Void ()) + forall m blk addr. + ( IOLike m + , HasHeader blk + , ShowProxy blk + , SerialiseNodeToNodeConstraints blk + , SupportedNetworkProtocolVersion blk + ) => + CodecConfig blk -> + (NodeToNodeVersion -> addr -> CBOR.Encoding) -> + (NodeToNodeVersion -> forall s. 
CBOR.Decoder s addr) -> + ImmutableDB m blk -> + NetworkMagic -> + Versions + NodeToNodeVersion + NodeToNodeVersionData + (OuroborosApplicationWithMinimalCtx 'Mux.ResponderMode addr BL.ByteString m Void ()) immDBServer codecCfg encAddr decAddr immDB networkMagic = do - forAllVersions application - where - forAllVersions :: - (NodeToNodeVersion -> BlockNodeToNodeVersion blk -> r) - -> Versions NodeToNodeVersion NodeToNodeVersionData r - forAllVersions mkR = - Versions - $ Map.mapWithKey mkVersion - $ supportedNodeToNodeVersions (Proxy @blk) - where - mkVersion version blockVersion = Version { - versionApplication = const $ mkR version blockVersion - , versionData = - stdVersionDataNTN - networkMagic - N2N.InitiatorOnlyDiffusionMode - PeerSharingDisabled - } + forAllVersions application + where + forAllVersions :: + (NodeToNodeVersion -> BlockNodeToNodeVersion blk -> r) -> + Versions NodeToNodeVersion NodeToNodeVersionData r + forAllVersions mkR = + Versions $ + Map.mapWithKey mkVersion $ + supportedNodeToNodeVersions (Proxy @blk) + where + mkVersion version blockVersion = + Version + { versionApplication = const $ mkR version blockVersion + , versionData = + stdVersionDataNTN + networkMagic + N2N.InitiatorOnlyDiffusionMode + PeerSharingDisabled + } - application :: - NodeToNodeVersion - -> BlockNodeToNodeVersion blk - -> OuroborosApplicationWithMinimalCtx 'Mux.ResponderMode addr BL.ByteString m Void () - application version blockVersion = - OuroborosApplication miniprotocols - where - miniprotocols = - [ mkMiniProtocol - Mux.StartOnDemandAny - N2N.keepAliveMiniProtocolNum - N2N.keepAliveProtocolLimits - keepAliveProt - , mkMiniProtocol - Mux.StartOnDemand - N2N.chainSyncMiniProtocolNum - N2N.chainSyncProtocolLimits - chainSyncProt - , mkMiniProtocol - Mux.StartOnDemand - N2N.blockFetchMiniProtocolNum - N2N.blockFetchProtocolLimits - blockFetchProt - , mkMiniProtocol - Mux.StartOnDemand - N2N.txSubmissionMiniProtocolNum - N2N.txSubmissionProtocolLimits - 
txSubmissionProt - ] - where - Consensus.N2N.Codecs { - cKeepAliveCodec - , cChainSyncCodecSerialised - , cBlockFetchCodecSerialised - } = - Consensus.N2N.defaultCodecs codecCfg blockVersion encAddr decAddr version + application :: + NodeToNodeVersion -> + BlockNodeToNodeVersion blk -> + OuroborosApplicationWithMinimalCtx 'Mux.ResponderMode addr BL.ByteString m Void () + application version blockVersion = + OuroborosApplication miniprotocols + where + miniprotocols = + [ mkMiniProtocol + Mux.StartOnDemandAny + N2N.keepAliveMiniProtocolNum + N2N.keepAliveProtocolLimits + keepAliveProt + , mkMiniProtocol + Mux.StartOnDemand + N2N.chainSyncMiniProtocolNum + N2N.chainSyncProtocolLimits + chainSyncProt + , mkMiniProtocol + Mux.StartOnDemand + N2N.blockFetchMiniProtocolNum + N2N.blockFetchProtocolLimits + blockFetchProt + , mkMiniProtocol + Mux.StartOnDemand + N2N.txSubmissionMiniProtocolNum + N2N.txSubmissionProtocolLimits + txSubmissionProt + ] + where + Consensus.N2N.Codecs + { cKeepAliveCodec + , cChainSyncCodecSerialised + , cBlockFetchCodecSerialised + } = + Consensus.N2N.defaultCodecs codecCfg blockVersion encAddr decAddr version - keepAliveProt = - MiniProtocolCb $ \_ctx channel -> - runPeer nullTracer cKeepAliveCodec channel - $ keepAliveServerPeer keepAliveServer - chainSyncProt = - MiniProtocolCb $ \_ctx channel -> - withRegistry - $ runPeer nullTracer cChainSyncCodecSerialised channel + keepAliveProt = + MiniProtocolCb $ \_ctx channel -> + runPeer nullTracer cKeepAliveCodec channel $ + keepAliveServerPeer keepAliveServer + chainSyncProt = + MiniProtocolCb $ \_ctx channel -> + withRegistry $ + runPeer nullTracer cChainSyncCodecSerialised channel . chainSyncServerPeer . 
chainSyncServer immDB ChainDB.getSerialisedHeaderWithPoint - blockFetchProt = - MiniProtocolCb $ \_ctx channel -> - withRegistry - $ runPeer nullTracer cBlockFetchCodecSerialised channel + blockFetchProt = + MiniProtocolCb $ \_ctx channel -> + withRegistry $ + runPeer nullTracer cBlockFetchCodecSerialised channel . blockFetchServerPeer . blockFetchServer immDB ChainDB.getSerialisedBlockWithPoint - txSubmissionProt = - -- never reply, there is no timeout - MiniProtocolCb $ \_ctx _channel -> forever $ threadDelay 10 + txSubmissionProt = + -- never reply, there is no timeout + MiniProtocolCb $ \_ctx _channel -> forever $ threadDelay 10 - mkMiniProtocol miniProtocolStart miniProtocolNum limits proto = MiniProtocol { - miniProtocolNum - , miniProtocolLimits = limits N2N.defaultMiniProtocolParameters - , miniProtocolRun = ResponderProtocolOnly proto - , miniProtocolStart - } + mkMiniProtocol miniProtocolStart miniProtocolNum limits proto = + MiniProtocol + { miniProtocolNum + , miniProtocolLimits = limits N2N.defaultMiniProtocolParameters + , miniProtocolRun = ResponderProtocolOnly proto + , miniProtocolStart + } -- | The ChainSync specification requires sending a rollback instruction to the -- intersection point right after an intersection has been negotiated. (Opening -- a connection implicitly negotiates the Genesis point as the intersection.) -data ChainSyncIntersection blk = - JustNegotiatedIntersection !(Point blk) +data ChainSyncIntersection blk + = JustNegotiatedIntersection !(Point blk) | AlreadySentRollbackToIntersection - deriving stock (Generic) - deriving anyclass (NoThunks) + deriving stock Generic + deriving anyclass NoThunks chainSyncServer :: - forall m blk a. (IOLike m, HasHeader blk) - => ImmutableDB m blk - -> BlockComponent blk (ChainDB.WithPoint blk a) - -> ResourceRegistry m - -> ChainSyncServer a (Point blk) (Tip blk) m () + forall m blk a. 
+ (IOLike m, HasHeader blk) => + ImmutableDB m blk -> + BlockComponent blk (ChainDB.WithPoint blk a) -> + ResourceRegistry m -> + ChainSyncServer a (Point blk) (Tip blk) m () chainSyncServer immDB blockComponent registry = ChainSyncServer $ do - follower <- newImmutableDBFollower - runChainSyncServer $ - chainSyncServerForFollower nullTracer getImmutableTip follower - where - newImmutableDBFollower :: m (Follower m blk (ChainDB.WithPoint blk a)) - newImmutableDBFollower = do - varIterator <- - newTVarIO =<< ImmutableDB.streamAll immDB registry blockComponent - varIntersection <- - newTVarIO $ JustNegotiatedIntersection GenesisPoint + follower <- newImmutableDBFollower + runChainSyncServer $ + chainSyncServerForFollower nullTracer getImmutableTip follower + where + newImmutableDBFollower :: m (Follower m blk (ChainDB.WithPoint blk a)) + newImmutableDBFollower = do + varIterator <- + newTVarIO =<< ImmutableDB.streamAll immDB registry blockComponent + varIntersection <- + newTVarIO $ JustNegotiatedIntersection GenesisPoint - let followerInstructionBlocking = - readTVarIO varIntersection >>= \case - JustNegotiatedIntersection intersectionPt -> do - atomically $ - writeTVar varIntersection AlreadySentRollbackToIntersection - pure $ RollBack intersectionPt - -- Otherwise, get the next block from the iterator (or fail). - AlreadySentRollbackToIntersection -> do - iterator <- readTVarIO varIterator - ImmutableDB.iteratorNext iterator >>= \case - ImmutableDB.IteratorExhausted -> do - ImmutableDB.iteratorClose iterator - throwIO ReachedImmutableTip - ImmutableDB.IteratorResult a -> - pure $ AddBlock a + let followerInstructionBlocking = + readTVarIO varIntersection >>= \case + JustNegotiatedIntersection intersectionPt -> do + atomically $ + writeTVar varIntersection AlreadySentRollbackToIntersection + pure $ RollBack intersectionPt + -- Otherwise, get the next block from the iterator (or fail). 
+ AlreadySentRollbackToIntersection -> do + iterator <- readTVarIO varIterator + ImmutableDB.iteratorNext iterator >>= \case + ImmutableDB.IteratorExhausted -> do + ImmutableDB.iteratorClose iterator + throwIO ReachedImmutableTip + ImmutableDB.IteratorResult a -> + pure $ AddBlock a - followerClose = ImmutableDB.iteratorClose =<< readTVarIO varIterator + followerClose = ImmutableDB.iteratorClose =<< readTVarIO varIterator - followerForward [] = pure Nothing - followerForward (pt : pts) = - ImmutableDB.streamAfterPoint immDB registry blockComponent pt >>= \case - Left _ -> followerForward pts - Right iterator -> do - followerClose - atomically $ do - writeTVar varIterator iterator - writeTVar varIntersection $ JustNegotiatedIntersection pt - pure $ Just pt + followerForward [] = pure Nothing + followerForward (pt : pts) = + ImmutableDB.streamAfterPoint immDB registry blockComponent pt >>= \case + Left _ -> followerForward pts + Right iterator -> do + followerClose + atomically $ do + writeTVar varIterator iterator + writeTVar varIntersection $ JustNegotiatedIntersection pt + pure $ Just pt - pure Follower { - followerInstruction = Just <$> followerInstructionBlocking - , followerInstructionBlocking - , followerForward - , followerClose - } + pure + Follower + { followerInstruction = Just <$> followerInstructionBlocking + , followerInstructionBlocking + , followerForward + , followerClose + } - getImmutableTip :: STM m (Tip blk) - getImmutableTip = ImmutableDB.getTip immDB <&> \case - Origin -> TipGenesis - NotOrigin tip -> Tip tipSlotNo tipHash tipBlockNo - where - ImmutableDB.Tip tipSlotNo _ tipBlockNo tipHash = tip + getImmutableTip :: STM m (Tip blk) + getImmutableTip = + ImmutableDB.getTip immDB <&> \case + Origin -> TipGenesis + NotOrigin tip -> Tip tipSlotNo tipHash tipBlockNo + where + ImmutableDB.Tip tipSlotNo _ tipBlockNo tipHash = tip blockFetchServer :: - forall m blk a. 
(IOLike m, StandardHash blk, Typeable blk) - => ImmutableDB m blk - -> BlockComponent blk (ChainDB.WithPoint blk a) - -> ResourceRegistry m - -> BlockFetchServer a (Point blk) m () + forall m blk a. + (IOLike m, StandardHash blk, Typeable blk) => + ImmutableDB m blk -> + BlockComponent blk (ChainDB.WithPoint blk a) -> + ResourceRegistry m -> + BlockFetchServer a (Point blk) m () blockFetchServer immDB blockComponent registry = - blockFetchServer' nullTracer stream - where - stream from to = - bimap convertError convertIterator - <$> ImmutableDB.stream immDB registry blockComponent from to + blockFetchServer' nullTracer stream + where + stream from to = + bimap convertError convertIterator + <$> ImmutableDB.stream immDB registry blockComponent from to - convertError = ChainDB.MissingBlock . ImmutableDB.missingBlockPoint - convertIterator iterator = ChainDB.Iterator { - ChainDB.iteratorNext = ImmutableDB.iteratorNext iterator <&> \case - ImmutableDB.IteratorResult b -> ChainDB.IteratorResult b + convertError = ChainDB.MissingBlock . 
ImmutableDB.missingBlockPoint + convertIterator iterator = + ChainDB.Iterator + { ChainDB.iteratorNext = + ImmutableDB.iteratorNext iterator <&> \case + ImmutableDB.IteratorResult b -> ChainDB.IteratorResult b ImmutableDB.IteratorExhausted -> ChainDB.IteratorExhausted - , ChainDB.iteratorClose = ImmutableDB.iteratorClose iterator - } + , ChainDB.iteratorClose = ImmutableDB.iteratorClose iterator + } -data ImmDBServerException = - ReachedImmutableTip +data ImmDBServerException + = ReachedImmutableTip | TriedToFetchGenesis - deriving stock (Show) - deriving anyclass (Exception) + deriving stock Show + deriving anyclass Exception diff --git a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/Examples.hs b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/Examples.hs index 23b88d157e..a7c1383be7 100644 --- a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/Examples.hs +++ b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/Examples.hs @@ -6,10 +6,11 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} -module Test.Consensus.Shelley.Examples ( - -- * Setup +module Test.Consensus.Shelley.Examples + ( -- * Setup codecConfig , testShelleyGenesis + -- * Examples , examplesAllegra , examplesAlonzo @@ -19,61 +20,74 @@ module Test.Consensus.Shelley.Examples ( , examplesShelley ) where - -import qualified Cardano.Ledger.Block as SL -import Cardano.Ledger.Core -import qualified Cardano.Ledger.Core as LC -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Protocol.Crypto (StandardCrypto) -import qualified Cardano.Protocol.TPraos.BHeader as SL -import Cardano.Slotting.EpochInfo (fixedEpochInfo) -import Cardano.Slotting.Time (mkSlotLength) -import Data.Coerce (coerce) -import Data.Foldable (toList) -import Data.List.NonEmpty (NonEmpty ((:|))) -import qualified Data.Map as Map -import qualified Data.Set as Set -import Lens.Micro 
-import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.Tables hiding (TxIn) -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Protocol.Abstract (translateChainDepState) -import Ouroboros.Consensus.Protocol.Praos (Praos) -import Ouroboros.Consensus.Protocol.Praos.Header - (HeaderBody (HeaderBody)) -import qualified Ouroboros.Consensus.Protocol.Praos.Header as Praos -import Ouroboros.Consensus.Protocol.TPraos (TPraos, - TPraosState (TPraosState)) -import Ouroboros.Consensus.Shelley.HFEras -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Ledger.Query.Types -import Ouroboros.Consensus.Shelley.Protocol.TPraos () -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.Time (secondsToNominalDiffTime) -import Ouroboros.Network.Block (Serialised (..)) -import Ouroboros.Network.PeerSelection.LedgerPeers.Type -import Ouroboros.Network.PeerSelection.RelayAccessPoint -import Test.Cardano.Ledger.Allegra.Examples.Consensus - (ledgerExamplesAllegra) -import Test.Cardano.Ledger.Alonzo.Examples.Consensus - (ledgerExamplesAlonzo) -import Test.Cardano.Ledger.Babbage.Examples.Consensus - (ledgerExamplesBabbage) -import Test.Cardano.Ledger.Conway.Examples.Consensus - (ledgerExamplesConway) -import Test.Cardano.Ledger.Mary.Examples.Consensus - (ledgerExamplesMary) -import Test.Cardano.Ledger.Shelley.Examples.Consensus - (ShelleyLedgerExamples (..), ShelleyResultExamples (..), - ledgerExamplesShelley, testShelleyGenesis) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Serialisation.Examples (Examples (..), labelled, - unlabelled) -import Test.Util.Serialisation.SomeResult (SomeResult (..)) +import Cardano.Ledger.Block qualified as SL +import Cardano.Ledger.Core +import Cardano.Ledger.Core qualified as LC 
+import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Protocol.Crypto (StandardCrypto) +import Cardano.Protocol.TPraos.BHeader qualified as SL +import Cardano.Slotting.EpochInfo (fixedEpochInfo) +import Cardano.Slotting.Time (mkSlotLength) +import Data.Coerce (coerce) +import Data.Foldable (toList) +import Data.List.NonEmpty (NonEmpty ((:|))) +import Data.Map qualified as Map +import Data.Set qualified as Set +import Lens.Micro +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.Tables hiding (TxIn) +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Protocol.Abstract (translateChainDepState) +import Ouroboros.Consensus.Protocol.Praos (Praos) +import Ouroboros.Consensus.Protocol.Praos.Header + ( HeaderBody (HeaderBody) + ) +import Ouroboros.Consensus.Protocol.Praos.Header qualified as Praos +import Ouroboros.Consensus.Protocol.TPraos + ( TPraos + , TPraosState (TPraosState) + ) +import Ouroboros.Consensus.Shelley.HFEras +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Ledger.Query.Types +import Ouroboros.Consensus.Shelley.Protocol.TPraos () +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.Time (secondsToNominalDiffTime) +import Ouroboros.Network.Block (Serialised (..)) +import Ouroboros.Network.PeerSelection.LedgerPeers.Type +import Ouroboros.Network.PeerSelection.RelayAccessPoint +import Test.Cardano.Ledger.Allegra.Examples.Consensus + ( ledgerExamplesAllegra + ) +import Test.Cardano.Ledger.Alonzo.Examples.Consensus + ( ledgerExamplesAlonzo + ) +import Test.Cardano.Ledger.Babbage.Examples.Consensus + ( ledgerExamplesBabbage + ) +import Test.Cardano.Ledger.Conway.Examples.Consensus + ( ledgerExamplesConway + ) +import Test.Cardano.Ledger.Mary.Examples.Consensus + ( 
ledgerExamplesMary + ) +import Test.Cardano.Ledger.Shelley.Examples.Consensus + ( ShelleyLedgerExamples (..) + , ShelleyResultExamples (..) + , ledgerExamplesShelley + , testShelleyGenesis + ) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Examples + ( Examples (..) + , labelled + , unlabelled + ) +import Test.Util.Serialisation.SomeResult (SomeResult (..)) {------------------------------------------------------------------------------- Examples @@ -82,61 +96,63 @@ import Test.Util.Serialisation.SomeResult (SomeResult (..)) codecConfig :: CodecConfig StandardShelleyBlock codecConfig = ShelleyCodecConfig -mkLedgerTables :: forall proto era. - ShelleyCompatible proto era - => LC.Tx era - -> LedgerTables (LedgerState (ShelleyBlock proto era)) ValuesMK +mkLedgerTables :: + forall proto era. + ShelleyCompatible proto era => + LC.Tx era -> + LedgerTables (LedgerState (ShelleyBlock proto era)) ValuesMK mkLedgerTables tx = - LedgerTables - $ ValuesMK - $ Map.fromList - $ zip exampleTxIns exampleTxOuts - where - exampleTxIns :: [SL.TxIn] - exampleTxIns = case toList (tx ^. (LC.bodyTxL . LC.allInputsTxBodyF)) of - [] -> error "No transaction inputs were provided to construct the ledger tables" - -- We require at least one transaction input (and one - -- transaction output) in the example provided by - -- cardano-ledger to make sure that we test the serialization - -- of ledger tables with at least one non-trivial example. - -- - -- Also all transactions in Cardano have at least one input for - -- automatic replay protection. - xs -> xs + LedgerTables $ + ValuesMK $ + Map.fromList $ + zip exampleTxIns exampleTxOuts + where + exampleTxIns :: [SL.TxIn] + exampleTxIns = case toList (tx ^. (LC.bodyTxL . 
LC.allInputsTxBodyF)) of + [] -> error "No transaction inputs were provided to construct the ledger tables" + -- We require at least one transaction input (and one + -- transaction output) in the example provided by + -- cardano-ledger to make sure that we test the serialization + -- of ledger tables with at least one non-trivial example. + -- + -- Also all transactions in Cardano have at least one input for + -- automatic replay protection. + xs -> xs - exampleTxOuts :: [LC.TxOut era] - exampleTxOuts = case toList (tx ^. (LC.bodyTxL . LC.outputsTxBodyL)) of - [] -> error "No transaction outputs were provided to construct the ledger tables" - xs -> xs + exampleTxOuts :: [LC.TxOut era] + exampleTxOuts = case toList (tx ^. (LC.bodyTxL . LC.outputsTxBodyL)) of + [] -> error "No transaction outputs were provided to construct the ledger tables" + xs -> xs fromShelleyLedgerExamples :: - ShelleyCompatible (TPraos StandardCrypto) era - => ShelleyLedgerExamples era - -> Examples (ShelleyBlock (TPraos StandardCrypto) era) + ShelleyCompatible (TPraos StandardCrypto) era => + ShelleyLedgerExamples era -> + Examples (ShelleyBlock (TPraos StandardCrypto) era) fromShelleyLedgerExamples ShelleyLedgerExamples { sleResultExamples = ShelleyResultExamples{..} - , ..} = - Examples { - exampleBlock = unlabelled blk - , exampleSerialisedBlock = unlabelled serialisedBlock - , exampleHeader = unlabelled $ getHeader blk - , exampleSerialisedHeader = unlabelled serialisedHeader - , exampleHeaderHash = unlabelled hash - , exampleGenTx = unlabelled tx - , exampleGenTxId = unlabelled $ txId tx - , exampleApplyTxErr = unlabelled sleApplyTxError - , exampleQuery = queries - , exampleResult = results - , exampleAnnTip = unlabelled annTip - , exampleLedgerState = unlabelled ledgerState - , exampleChainDepState = unlabelled chainDepState - , exampleExtLedgerState = unlabelled extLedgerState - , exampleSlotNo = unlabelled slotNo - , exampleLedgerConfig = unlabelled ledgerConfig - , 
exampleLedgerTables = unlabelled $ mkLedgerTables sleTx - } - where + , .. + } = + Examples + { exampleBlock = unlabelled blk + , exampleSerialisedBlock = unlabelled serialisedBlock + , exampleHeader = unlabelled $ getHeader blk + , exampleSerialisedHeader = unlabelled serialisedHeader + , exampleHeaderHash = unlabelled hash + , exampleGenTx = unlabelled tx + , exampleGenTxId = unlabelled $ txId tx + , exampleApplyTxErr = unlabelled sleApplyTxError + , exampleQuery = queries + , exampleResult = results + , exampleAnnTip = unlabelled annTip + , exampleLedgerState = unlabelled ledgerState + , exampleChainDepState = unlabelled chainDepState + , exampleExtLedgerState = unlabelled extLedgerState + , exampleSlotNo = unlabelled slotNo + , exampleLedgerConfig = unlabelled ledgerConfig + , exampleLedgerTables = unlabelled $ mkLedgerTables sleTx + } + where blk = mkShelleyBlock sleBlock hash = ShelleyHash $ SL.unHashHeader sleHashHeader serialisedBlock = Serialised "" @@ -144,144 +160,184 @@ fromShelleyLedgerExamples slotNo = SlotNo 42 serialisedHeader = SerialisedHeaderFromDepPair $ GenDepPair (NestedCtxt CtxtShelley) (Serialised "
") - queries = labelled [ - ("GetLedgerTip", SomeBlockQuery GetLedgerTip) - , ("GetEpochNo", SomeBlockQuery GetEpochNo) - , ("GetCurrentPParams", SomeBlockQuery GetCurrentPParams) - , ("GetStakeDistribution", SomeBlockQuery GetStakeDistribution) + queries = + labelled + [ ("GetLedgerTip", SomeBlockQuery GetLedgerTip) + , ("GetEpochNo", SomeBlockQuery GetEpochNo) + , ("GetCurrentPParams", SomeBlockQuery GetCurrentPParams) + , ("GetStakeDistribution", SomeBlockQuery GetStakeDistribution) , ("GetNonMyopicMemberRewards", SomeBlockQuery $ GetNonMyopicMemberRewards sleRewardsCredentials) - , ("GetGenesisConfig", SomeBlockQuery GetGenesisConfig) - , ("GetBigLedgerPeerSnapshot", SomeBlockQuery GetBigLedgerPeerSnapshot) - ] - results = labelled [ - ("LedgerTip", SomeResult GetLedgerTip (blockPoint blk)) - , ("EpochNo", SomeResult GetEpochNo 10) - , ("EmptyPParams", SomeResult GetCurrentPParams srePParams) - , ("StakeDistribution", SomeResult GetStakeDistribution $ fromLedgerPoolDistr srePoolDistr) - , ("NonMyopicMemberRewards", SomeResult (GetNonMyopicMemberRewards Set.empty) - (NonMyopicMemberRewards $ sreNonMyopicRewards)) - , ("GenesisConfig", SomeResult GetGenesisConfig (compactGenesis sreShelleyGenesis)) - , ("GetBigLedgerPeerSnapshot", - SomeResult GetBigLedgerPeerSnapshot - (LedgerPeerSnapshot (NotOrigin slotNo, [(AccPoolStake 0.9 - , (PoolStake 0.9 - , RelayAccessAddress (IPv4 "1.1.1.1") 1234 :| []))]))) + , ("GetGenesisConfig", SomeBlockQuery GetGenesisConfig) + , ("GetBigLedgerPeerSnapshot", SomeBlockQuery GetBigLedgerPeerSnapshot) ] - annTip = AnnTip { - annTipSlotNo = SlotNo 14 - , annTipBlockNo = BlockNo 6 - , annTipInfo = hash - } - ledgerState = ShelleyLedgerState { - shelleyLedgerTip = NotOrigin ShelleyTip { - shelleyTipSlotNo = SlotNo 9 - , shelleyTipBlockNo = BlockNo 3 - , shelleyTipHash = hash - } - , shelleyLedgerState = sleNewEpochState - , shelleyLedgerTransition = ShelleyTransitionInfo {shelleyAfterVoting = 0} - , shelleyLedgerTables = LedgerTables 
EmptyMK - } + results = + labelled + [ ("LedgerTip", SomeResult GetLedgerTip (blockPoint blk)) + , ("EpochNo", SomeResult GetEpochNo 10) + , ("EmptyPParams", SomeResult GetCurrentPParams srePParams) + , ("StakeDistribution", SomeResult GetStakeDistribution $ fromLedgerPoolDistr srePoolDistr) + , + ( "NonMyopicMemberRewards" + , SomeResult + (GetNonMyopicMemberRewards Set.empty) + (NonMyopicMemberRewards $ sreNonMyopicRewards) + ) + , ("GenesisConfig", SomeResult GetGenesisConfig (compactGenesis sreShelleyGenesis)) + , + ( "GetBigLedgerPeerSnapshot" + , SomeResult + GetBigLedgerPeerSnapshot + ( LedgerPeerSnapshot + ( NotOrigin slotNo + , + [ + ( AccPoolStake 0.9 + , + ( PoolStake 0.9 + , RelayAccessAddress (IPv4 "1.1.1.1") 1234 :| [] + ) + ) + ] + ) + ) + ) + ] + annTip = + AnnTip + { annTipSlotNo = SlotNo 14 + , annTipBlockNo = BlockNo 6 + , annTipInfo = hash + } + ledgerState = + ShelleyLedgerState + { shelleyLedgerTip = + NotOrigin + ShelleyTip + { shelleyTipSlotNo = SlotNo 9 + , shelleyTipBlockNo = BlockNo 3 + , shelleyTipHash = hash + } + , shelleyLedgerState = sleNewEpochState + , shelleyLedgerTransition = ShelleyTransitionInfo{shelleyAfterVoting = 0} + , shelleyLedgerTables = LedgerTables EmptyMK + } chainDepState = TPraosState (NotOrigin 1) sleChainDepState - extLedgerState = ExtLedgerState - ledgerState - (genesisHeaderState chainDepState) + extLedgerState = + ExtLedgerState + ledgerState + (genesisHeaderState chainDepState) ledgerConfig = exampleShelleyLedgerConfig sleTranslationContext -- | TODO Factor this out into something nicer. fromShelleyLedgerExamplesPraos :: forall era. 
- ShelleyCompatible (Praos StandardCrypto) era - => ShelleyLedgerExamples era - -> Examples (ShelleyBlock (Praos StandardCrypto) era) -fromShelleyLedgerExamplesPraos ShelleyLedgerExamples { - sleResultExamples = ShelleyResultExamples{..} - , ..} = - Examples { - exampleBlock = unlabelled blk - , exampleSerialisedBlock = unlabelled serialisedBlock - , exampleHeader = unlabelled $ getHeader blk - , exampleSerialisedHeader = unlabelled serialisedHeader - , exampleHeaderHash = unlabelled hash - , exampleGenTx = unlabelled tx - , exampleGenTxId = unlabelled $ txId tx - , exampleApplyTxErr = unlabelled sleApplyTxError - , exampleQuery = queries - , exampleResult = results - , exampleAnnTip = unlabelled annTip - , exampleLedgerState = unlabelled ledgerState - , exampleLedgerTables = unlabelled $ mkLedgerTables sleTx - , exampleChainDepState = unlabelled chainDepState - , exampleExtLedgerState = unlabelled extLedgerState - , exampleSlotNo = unlabelled slotNo - , exampleLedgerConfig = unlabelled ledgerConfig - } - where - blk = mkShelleyBlock $ - let SL.Block hdr1 bdy = sleBlock - in SL.Block (translateHeader hdr1) bdy + ShelleyCompatible (Praos StandardCrypto) era => + ShelleyLedgerExamples era -> + Examples (ShelleyBlock (Praos StandardCrypto) era) +fromShelleyLedgerExamplesPraos + ShelleyLedgerExamples + { sleResultExamples = ShelleyResultExamples{..} + , .. 
+ } = + Examples + { exampleBlock = unlabelled blk + , exampleSerialisedBlock = unlabelled serialisedBlock + , exampleHeader = unlabelled $ getHeader blk + , exampleSerialisedHeader = unlabelled serialisedHeader + , exampleHeaderHash = unlabelled hash + , exampleGenTx = unlabelled tx + , exampleGenTxId = unlabelled $ txId tx + , exampleApplyTxErr = unlabelled sleApplyTxError + , exampleQuery = queries + , exampleResult = results + , exampleAnnTip = unlabelled annTip + , exampleLedgerState = unlabelled ledgerState + , exampleLedgerTables = unlabelled $ mkLedgerTables sleTx + , exampleChainDepState = unlabelled chainDepState + , exampleExtLedgerState = unlabelled extLedgerState + , exampleSlotNo = unlabelled slotNo + , exampleLedgerConfig = unlabelled ledgerConfig + } + where + blk = + mkShelleyBlock $ + let SL.Block hdr1 bdy = sleBlock + in SL.Block (translateHeader hdr1) bdy translateHeader :: SL.BHeader StandardCrypto -> Praos.Header StandardCrypto translateHeader (SL.BHeader bhBody bhSig) = - Praos.Header hBody hSig - where - hBody = HeaderBody { - hbBlockNo = SL.bheaderBlockNo bhBody, - hbSlotNo = SL.bheaderSlotNo bhBody, - hbPrev = SL.bheaderPrev bhBody, - hbVk = SL.bheaderVk bhBody, - hbVrfVk = SL.bheaderVrfVk bhBody, - hbVrfRes = coerce $ SL.bheaderEta bhBody, - hbBodySize = SL.bsize bhBody, - hbBodyHash = SL.bhash bhBody, - hbOCert = SL.bheaderOCert bhBody, - hbProtVer = SL.bprotver bhBody - } - hSig = coerce bhSig + Praos.Header hBody hSig + where + hBody = + HeaderBody + { hbBlockNo = SL.bheaderBlockNo bhBody + , hbSlotNo = SL.bheaderSlotNo bhBody + , hbPrev = SL.bheaderPrev bhBody + , hbVk = SL.bheaderVk bhBody + , hbVrfVk = SL.bheaderVrfVk bhBody + , hbVrfRes = coerce $ SL.bheaderEta bhBody + , hbBodySize = SL.bsize bhBody + , hbBodyHash = SL.bhash bhBody + , hbOCert = SL.bheaderOCert bhBody + , hbProtVer = SL.bprotver bhBody + } + hSig = coerce bhSig hash = ShelleyHash $ SL.unHashHeader sleHashHeader serialisedBlock = Serialised "" tx = mkShelleyTx 
sleTx slotNo = SlotNo 42 serialisedHeader = SerialisedHeaderFromDepPair $ GenDepPair (NestedCtxt CtxtShelley) (Serialised "
") - queries = labelled [ - ("GetLedgerTip", SomeBlockQuery GetLedgerTip) - , ("GetEpochNo", SomeBlockQuery GetEpochNo) - , ("GetCurrentPParams", SomeBlockQuery GetCurrentPParams) - , ("GetStakeDistribution", SomeBlockQuery GetStakeDistribution) + queries = + labelled + [ ("GetLedgerTip", SomeBlockQuery GetLedgerTip) + , ("GetEpochNo", SomeBlockQuery GetEpochNo) + , ("GetCurrentPParams", SomeBlockQuery GetCurrentPParams) + , ("GetStakeDistribution", SomeBlockQuery GetStakeDistribution) , ("GetNonMyopicMemberRewards", SomeBlockQuery $ GetNonMyopicMemberRewards sleRewardsCredentials) - , ("GetGenesisConfig", SomeBlockQuery GetGenesisConfig) - ] - results = labelled [ - ("LedgerTip", SomeResult GetLedgerTip (blockPoint blk)) - , ("EpochNo", SomeResult GetEpochNo 10) - , ("EmptyPParams", SomeResult GetCurrentPParams srePParams) - , ("StakeDistribution", SomeResult GetStakeDistribution $ fromLedgerPoolDistr srePoolDistr) - , ("NonMyopicMemberRewards", SomeResult (GetNonMyopicMemberRewards Set.empty) - (NonMyopicMemberRewards $ sreNonMyopicRewards)) - , ("GenesisConfig", SomeResult GetGenesisConfig (compactGenesis sreShelleyGenesis)) + , ("GetGenesisConfig", SomeBlockQuery GetGenesisConfig) ] - annTip = AnnTip { - annTipSlotNo = SlotNo 14 - , annTipBlockNo = BlockNo 6 - , annTipInfo = hash - } - ledgerState = ShelleyLedgerState { - shelleyLedgerTip = NotOrigin ShelleyTip { - shelleyTipSlotNo = SlotNo 9 - , shelleyTipBlockNo = BlockNo 3 - , shelleyTipHash = hash - } - , shelleyLedgerState = sleNewEpochState - , shelleyLedgerTransition = ShelleyTransitionInfo {shelleyAfterVoting = 0} - , shelleyLedgerTables = emptyLedgerTables - } - chainDepState = translateChainDepState (Proxy @(TPraos StandardCrypto, Praos StandardCrypto)) - $ TPraosState (NotOrigin 1) sleChainDepState - extLedgerState = ExtLedgerState - ledgerState - (genesisHeaderState chainDepState) + results = + labelled + [ ("LedgerTip", SomeResult GetLedgerTip (blockPoint blk)) + , ("EpochNo", SomeResult GetEpochNo 
10) + , ("EmptyPParams", SomeResult GetCurrentPParams srePParams) + , ("StakeDistribution", SomeResult GetStakeDistribution $ fromLedgerPoolDistr srePoolDistr) + , + ( "NonMyopicMemberRewards" + , SomeResult + (GetNonMyopicMemberRewards Set.empty) + (NonMyopicMemberRewards $ sreNonMyopicRewards) + ) + , ("GenesisConfig", SomeResult GetGenesisConfig (compactGenesis sreShelleyGenesis)) + ] + annTip = + AnnTip + { annTipSlotNo = SlotNo 14 + , annTipBlockNo = BlockNo 6 + , annTipInfo = hash + } + ledgerState = + ShelleyLedgerState + { shelleyLedgerTip = + NotOrigin + ShelleyTip + { shelleyTipSlotNo = SlotNo 9 + , shelleyTipBlockNo = BlockNo 3 + , shelleyTipHash = hash + } + , shelleyLedgerState = sleNewEpochState + , shelleyLedgerTransition = ShelleyTransitionInfo{shelleyAfterVoting = 0} + , shelleyLedgerTables = emptyLedgerTables + } + chainDepState = + translateChainDepState (Proxy @(TPraos StandardCrypto, Praos StandardCrypto)) $ + TPraosState (NotOrigin 1) sleChainDepState + extLedgerState = + ExtLedgerState + ledgerState + (genesisHeaderState chainDepState) ledgerConfig = exampleShelleyLedgerConfig sleTranslationContext @@ -304,13 +360,15 @@ examplesConway :: Examples StandardConwayBlock examplesConway = fromShelleyLedgerExamplesPraos ledgerExamplesConway exampleShelleyLedgerConfig :: TranslationContext era -> ShelleyLedgerConfig era -exampleShelleyLedgerConfig translationContext = ShelleyLedgerConfig { - shelleyLedgerCompactGenesis = compactGenesis testShelleyGenesis - , shelleyLedgerGlobals = SL.mkShelleyGlobals - testShelleyGenesis - epochInfo +exampleShelleyLedgerConfig translationContext = + ShelleyLedgerConfig + { shelleyLedgerCompactGenesis = compactGenesis testShelleyGenesis + , shelleyLedgerGlobals = + SL.mkShelleyGlobals + testShelleyGenesis + epochInfo , shelleyLedgerTranslationContext = translationContext } - where - epochInfo = fixedEpochInfo (EpochSize 4) slotLength - slotLength = mkSlotLength (secondsToNominalDiffTime 7) + where + epochInfo = 
fixedEpochInfo (EpochSize 4) slotLength + slotLength = mkSlotLength (secondsToNominalDiffTime 7) diff --git a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/Generators.hs b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/Generators.hs index dd5e86f2cc..3618f4a0bd 100644 --- a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/Generators.hs +++ b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/Generators.hs @@ -6,57 +6,61 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.Shelley.Generators (SomeResult (..)) where -import Cardano.Ledger.Core (TranslationContext, toTxSeq) -import Cardano.Ledger.Genesis -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Ledger.Shelley.Translation -import Cardano.Ledger.State (InstantStake) -import Cardano.Protocol.Crypto (Crypto) -import qualified Cardano.Protocol.TPraos.API as SL -import qualified Cardano.Protocol.TPraos.BHeader as SL -import Cardano.Slotting.EpochInfo -import Control.Monad (replicateM) -import Data.Coerce (coerce) -import Generic.Random (genericArbitraryU) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Protocol.Praos (Praos) -import qualified Ouroboros.Consensus.Protocol.Praos as Praos -import qualified Ouroboros.Consensus.Protocol.Praos.Header as Praos -import Ouroboros.Consensus.Protocol.TPraos (TPraos, TPraosState (..)) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Ledger.Query.Types -import Ouroboros.Consensus.Shelley.Protocol.Praos () -import Ouroboros.Consensus.Shelley.Protocol.TPraos () 
-import Ouroboros.Network.Block (mkSerialised) -import Test.Cardano.Ledger.AllegraEraGen () -import Test.Cardano.Ledger.Alonzo.AlonzoEraGen () -import Test.Cardano.Ledger.MaryEraGen () -import Test.Cardano.Ledger.Shelley.Constants (defaultConstants, - numCoreNodes) -import Test.Cardano.Ledger.Shelley.Generator.Presets (genIssuerKeys) -import Test.Cardano.Ledger.Shelley.Generator.ShelleyEraGen () -import Test.Cardano.Ledger.Shelley.Serialisation.EraIndepGenerators - (genCoherentBlock) -import Test.Cardano.Ledger.Shelley.Serialisation.Generators () -import Test.Cardano.Ledger.ShelleyMA.Serialisation.Generators () -import Test.Cardano.Protocol.TPraos.Arbitrary (genBlock) -import Test.Consensus.Protocol.Serialisation.Generators () -import Test.Consensus.Shelley.MockCrypto (CanMock) -import Test.QuickCheck hiding (Result) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Serialisation.Roundtrip (Coherent (..), - WithVersion (..)) -import Test.Util.Serialisation.SomeResult (SomeResult (..)) +import Cardano.Ledger.Core (TranslationContext, toTxSeq) +import Cardano.Ledger.Genesis +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Translation +import Cardano.Ledger.State (InstantStake) +import Cardano.Protocol.Crypto (Crypto) +import Cardano.Protocol.TPraos.API qualified as SL +import Cardano.Protocol.TPraos.BHeader qualified as SL +import Cardano.Slotting.EpochInfo +import Control.Monad (replicateM) +import Data.Coerce (coerce) +import Generic.Random (genericArbitraryU) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Protocol.Praos (Praos) +import Ouroboros.Consensus.Protocol.Praos qualified as Praos +import Ouroboros.Consensus.Protocol.Praos.Header qualified as Praos +import Ouroboros.Consensus.Protocol.TPraos (TPraos, TPraosState (..)) 
+import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Ledger.Query.Types +import Ouroboros.Consensus.Shelley.Protocol.Praos () +import Ouroboros.Consensus.Shelley.Protocol.TPraos () +import Ouroboros.Network.Block (mkSerialised) +import Test.Cardano.Ledger.AllegraEraGen () +import Test.Cardano.Ledger.Alonzo.AlonzoEraGen () +import Test.Cardano.Ledger.MaryEraGen () +import Test.Cardano.Ledger.Shelley.Constants + ( defaultConstants + , numCoreNodes + ) +import Test.Cardano.Ledger.Shelley.Generator.Presets (genIssuerKeys) +import Test.Cardano.Ledger.Shelley.Generator.ShelleyEraGen () +import Test.Cardano.Ledger.Shelley.Serialisation.EraIndepGenerators + ( genCoherentBlock + ) +import Test.Cardano.Ledger.Shelley.Serialisation.Generators () +import Test.Cardano.Ledger.ShelleyMA.Serialisation.Generators () +import Test.Cardano.Protocol.TPraos.Arbitrary (genBlock) +import Test.Consensus.Protocol.Serialisation.Generators () +import Test.Consensus.Shelley.MockCrypto (CanMock) +import Test.QuickCheck hiding (Result) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Roundtrip + ( Coherent (..) + , WithVersion (..) + ) +import Test.Util.Serialisation.SomeResult (SomeResult (..)) {------------------------------------------------------------------------------- Generators @@ -67,69 +71,87 @@ import Test.Util.Serialisation.SomeResult (SomeResult (..)) -- | The upstream 'Arbitrary' instance for Shelley blocks does not generate -- coherent blocks, so neither does this. 
-instance (CanMock (TPraos crypto) era) - => Arbitrary (ShelleyBlock (TPraos crypto) era) where +instance + CanMock (TPraos crypto) era => + Arbitrary (ShelleyBlock (TPraos crypto) era) + where arbitrary = do allPoolKeys <- - replicateM (fromIntegral $ numCoreNodes defaultConstants) - $ genIssuerKeys defaultConstants + replicateM (fromIntegral $ numCoreNodes defaultConstants) $ + genIssuerKeys defaultConstants mkShelleyBlock <$> genBlock allPoolKeys -instance (Praos.PraosCrypto crypto, CanMock (Praos crypto) era) - => Arbitrary (ShelleyBlock (Praos crypto) era) where +instance + (Praos.PraosCrypto crypto, CanMock (Praos crypto) era) => + Arbitrary (ShelleyBlock (Praos crypto) era) + where arbitrary = mkShelleyBlock <$> blk - where blk = SL.Block <$> arbitrary <*> (toTxSeq @era <$> arbitrary) + where + blk = SL.Block <$> arbitrary <*> (toTxSeq @era <$> arbitrary) -- | This uses a different upstream generator to ensure the header and block -- body relate as expected. -instance (CanMock (TPraos crypto) era) - => Arbitrary (Coherent (ShelleyBlock (TPraos crypto) era)) where +instance + CanMock (TPraos crypto) era => + Arbitrary (Coherent (ShelleyBlock (TPraos crypto) era)) + where arbitrary = do allPoolKeys <- - replicateM (fromIntegral $ numCoreNodes defaultConstants) - $ genIssuerKeys defaultConstants + replicateM (fromIntegral $ numCoreNodes defaultConstants) $ + genIssuerKeys defaultConstants Coherent . mkShelleyBlock <$> genCoherentBlock allPoolKeys -- | Create a coherent Praos block -- -- TODO Establish a coherent block without doing this translation from a -- TPraos header. 
-instance (CanMock (Praos crypto) era) - => Arbitrary (Coherent (ShelleyBlock (Praos crypto) era)) where +instance + CanMock (Praos crypto) era => + Arbitrary (Coherent (ShelleyBlock (Praos crypto) era)) + where arbitrary = do allPoolKeys <- - replicateM (fromIntegral $ numCoreNodes defaultConstants) - $ genIssuerKeys defaultConstants + replicateM (fromIntegral $ numCoreNodes defaultConstants) $ + genIssuerKeys defaultConstants blk <- genCoherentBlock allPoolKeys Coherent . mkBlk <$> pure blk - where - mkBlk sleBlock = mkShelleyBlock $ let - SL.Block hdr1 bdy = sleBlock in SL.Block (translateHeader hdr1) bdy - - translateHeader :: Crypto c => SL.BHeader c -> Praos.Header c - translateHeader (SL.BHeader bhBody bhSig) = - Praos.Header hBody hSig - where - hBody = Praos.HeaderBody { - Praos.hbBlockNo = SL.bheaderBlockNo bhBody, - Praos.hbSlotNo = SL.bheaderSlotNo bhBody, - Praos.hbPrev = SL.bheaderPrev bhBody, - Praos.hbVk = SL.bheaderVk bhBody, - Praos.hbVrfVk = SL.bheaderVrfVk bhBody, - Praos.hbVrfRes = coerce $ SL.bheaderEta bhBody, - Praos.hbBodySize = SL.bsize bhBody, - Praos.hbBodyHash = SL.bhash bhBody, - Praos.hbOCert = SL.bheaderOCert bhBody, - Praos.hbProtVer = SL.bprotver bhBody + where + mkBlk sleBlock = + mkShelleyBlock $ + let + SL.Block hdr1 bdy = sleBlock + in + SL.Block (translateHeader hdr1) bdy + + translateHeader :: Crypto c => SL.BHeader c -> Praos.Header c + translateHeader (SL.BHeader bhBody bhSig) = + Praos.Header hBody hSig + where + hBody = + Praos.HeaderBody + { Praos.hbBlockNo = SL.bheaderBlockNo bhBody + , Praos.hbSlotNo = SL.bheaderSlotNo bhBody + , Praos.hbPrev = SL.bheaderPrev bhBody + , Praos.hbVk = SL.bheaderVk bhBody + , Praos.hbVrfVk = SL.bheaderVrfVk bhBody + , Praos.hbVrfRes = coerce $ SL.bheaderEta bhBody + , Praos.hbBodySize = SL.bsize bhBody + , Praos.hbBodyHash = SL.bhash bhBody + , Praos.hbOCert = SL.bheaderOCert bhBody + , Praos.hbProtVer = SL.bprotver bhBody } - hSig = coerce bhSig + hSig = coerce bhSig -instance (CanMock 
(TPraos crypto) era) - => Arbitrary (Header (ShelleyBlock (TPraos crypto) era)) where +instance + CanMock (TPraos crypto) era => + Arbitrary (Header (ShelleyBlock (TPraos crypto) era)) + where arbitrary = getHeader <$> arbitrary -instance (CanMock (Praos crypto) era) - => Arbitrary (Header (ShelleyBlock (Praos crypto) era)) where +instance + CanMock (Praos crypto) era => + Arbitrary (Header (ShelleyBlock (Praos crypto) era)) + where arbitrary = do hdr <- arbitrary pure $ ShelleyHeader hdr (ShelleyHash $ Praos.headerHash hdr) @@ -144,34 +166,37 @@ instance CanMock proto era => Arbitrary (GenTxId (ShelleyBlock proto era)) where arbitrary = ShelleyTxId <$> arbitrary instance CanMock proto era => Arbitrary (SomeBlockQuery (BlockQuery (ShelleyBlock proto era))) where - arbitrary = oneof - [ pure $ SomeBlockQuery GetLedgerTip - , pure $ SomeBlockQuery GetEpochNo - , SomeBlockQuery . GetNonMyopicMemberRewards <$> arbitrary - , pure $ SomeBlockQuery GetCurrentPParams - , pure $ SomeBlockQuery GetStakeDistribution - , pure $ SomeBlockQuery DebugEpochState - , (\(SomeBlockQuery q) -> SomeBlockQuery (GetCBOR q)) <$> arbitrary - , SomeBlockQuery . GetFilteredDelegationsAndRewardAccounts <$> arbitrary - , pure $ SomeBlockQuery GetGenesisConfig - , pure $ SomeBlockQuery DebugNewEpochState - ] + arbitrary = + oneof + [ pure $ SomeBlockQuery GetLedgerTip + , pure $ SomeBlockQuery GetEpochNo + , SomeBlockQuery . GetNonMyopicMemberRewards <$> arbitrary + , pure $ SomeBlockQuery GetCurrentPParams + , pure $ SomeBlockQuery GetStakeDistribution + , pure $ SomeBlockQuery DebugEpochState + , (\(SomeBlockQuery q) -> SomeBlockQuery (GetCBOR q)) <$> arbitrary + , SomeBlockQuery . 
GetFilteredDelegationsAndRewardAccounts <$> arbitrary + , pure $ SomeBlockQuery GetGenesisConfig + , pure $ SomeBlockQuery DebugNewEpochState + ] instance (Arbitrary (InstantStake era), CanMock proto era) => Arbitrary (SomeResult (ShelleyBlock proto era)) where - arbitrary = oneof - [ SomeResult GetLedgerTip <$> arbitrary - , SomeResult GetEpochNo <$> arbitrary - , SomeResult <$> (GetNonMyopicMemberRewards <$> arbitrary) <*> arbitrary - , SomeResult GetCurrentPParams <$> arbitrary - , SomeResult GetStakeDistribution . fromLedgerPoolDistr <$> arbitrary - , SomeResult DebugEpochState <$> arbitrary - , (\(SomeResult q r) -> - SomeResult (GetCBOR q) (mkSerialised (encodeShelleyResult maxBound q) r)) <$> - arbitrary - , SomeResult <$> (GetFilteredDelegationsAndRewardAccounts <$> arbitrary) <*> arbitrary - , SomeResult GetGenesisConfig . compactGenesis <$> arbitrary - , SomeResult DebugNewEpochState <$> arbitrary - ] + arbitrary = + oneof + [ SomeResult GetLedgerTip <$> arbitrary + , SomeResult GetEpochNo <$> arbitrary + , SomeResult <$> (GetNonMyopicMemberRewards <$> arbitrary) <*> arbitrary + , SomeResult GetCurrentPParams <$> arbitrary + , SomeResult GetStakeDistribution . fromLedgerPoolDistr <$> arbitrary + , SomeResult DebugEpochState <$> arbitrary + , ( \(SomeResult q r) -> + SomeResult (GetCBOR q) (mkSerialised (encodeShelleyResult maxBound q) r) + ) + <$> arbitrary + , SomeResult <$> (GetFilteredDelegationsAndRewardAccounts <$> arbitrary) <*> arbitrary + , SomeResult GetGenesisConfig . compactGenesis <$> arbitrary + , SomeResult DebugNewEpochState <$> arbitrary + ] instance Arbitrary NonMyopicMemberRewards where arbitrary = NonMyopicMemberRewards <$> arbitrary @@ -181,42 +206,51 @@ instance CanMock proto era => Arbitrary (Point (ShelleyBlock proto era)) where instance Arbitrary TPraosState where arbitrary = do - lastSlot <- frequency + lastSlot <- + frequency [ (1, return Origin) , (5, NotOrigin . 
SlotNo <$> choose (0, 100)) ] - TPraosState lastSlot <$> arbitrary + TPraosState lastSlot <$> arbitrary -instance CanMock proto era=> Arbitrary (ShelleyTip proto era) where - arbitrary = ShelleyTip - <$> arbitrary - <*> arbitrary - <*> arbitrary +instance CanMock proto era => Arbitrary (ShelleyTip proto era) where + arbitrary = + ShelleyTip + <$> arbitrary + <*> arbitrary + <*> arbitrary instance Arbitrary ShelleyTransition where arbitrary = ShelleyTransitionInfo <$> arbitrary -instance (Arbitrary (InstantStake era), CanMock proto era) - => Arbitrary (LedgerState (ShelleyBlock proto era) EmptyMK) where - arbitrary = ShelleyLedgerState - <$> arbitrary - <*> arbitrary - <*> arbitrary - <*> pure (LedgerTables EmptyMK) - -instance (Arbitrary (InstantStake era), CanMock proto era) - => Arbitrary (LedgerState (ShelleyBlock proto era) ValuesMK) where - arbitrary = ShelleyLedgerState - <$> arbitrary - <*> arbitrary - <*> arbitrary - <*> (LedgerTables . ValuesMK <$> arbitrary) +instance + (Arbitrary (InstantStake era), CanMock proto era) => + Arbitrary (LedgerState (ShelleyBlock proto era) EmptyMK) + where + arbitrary = + ShelleyLedgerState + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> pure (LedgerTables EmptyMK) + +instance + (Arbitrary (InstantStake era), CanMock proto era) => + Arbitrary (LedgerState (ShelleyBlock proto era) ValuesMK) + where + arbitrary = + ShelleyLedgerState + <$> arbitrary + <*> arbitrary + <*> arbitrary + <*> (LedgerTables . 
ValuesMK <$> arbitrary) instance CanMock proto era => Arbitrary (AnnTip (ShelleyBlock proto era)) where - arbitrary = AnnTip - <$> arbitrary - <*> (BlockNo <$> arbitrary) - <*> arbitrary + arbitrary = + AnnTip + <$> arbitrary + <*> (BlockNo <$> arbitrary) + <*> arbitrary instance Arbitrary ShelleyNodeToNodeVersion where arbitrary = arbitraryBoundedEnum @@ -224,8 +258,10 @@ instance Arbitrary ShelleyNodeToNodeVersion where instance Arbitrary ShelleyNodeToClientVersion where arbitrary = arbitraryBoundedEnum -instance ShelleyBasedEra era - => Arbitrary (SomeSecond (NestedCtxt f) (ShelleyBlock proto era)) where +instance + ShelleyBasedEra era => + Arbitrary (SomeSecond (NestedCtxt f) (ShelleyBlock proto era)) + where arbitrary = return (SomeSecond indexIsTrivial) {------------------------------------------------------------------------------- @@ -234,12 +270,15 @@ instance ShelleyBasedEra era -- | Generate a 'ShelleyLedgerConfig' with a fixed 'EpochInfo' (see -- 'arbitraryGlobalsWithFixedEpochInfo'). -instance ( Arbitrary (TranslationContext era) - ) => Arbitrary (ShelleyLedgerConfig era) where - arbitrary = ShelleyLedgerConfig - <$> arbitrary - <*> arbitraryGlobalsWithFixedEpochInfo - <*> arbitrary +instance + Arbitrary (TranslationContext era) => + Arbitrary (ShelleyLedgerConfig era) + where + arbitrary = + ShelleyLedgerConfig + <$> arbitrary + <*> arbitraryGlobalsWithFixedEpochInfo + <*> arbitrary instance Arbitrary CompactGenesis where arbitrary = compactGenesis <$> arbitrary @@ -248,7 +287,8 @@ instance Arbitrary CompactGenesis where -- comprehensive in the case of generating a 'ShelleyLedgerConfig' (see the -- documentation for 'shelleyLedgerGlobals'). 
arbitraryGlobalsWithFixedEpochInfo :: Gen SL.Globals -arbitraryGlobalsWithFixedEpochInfo = SL.Globals +arbitraryGlobalsWithFixedEpochInfo = + SL.Globals <$> arbitraryFixedEpochInfo <*> arbitrary <*> arbitrary @@ -285,9 +325,12 @@ instance Arbitrary SL.ChainDepState where -- | Some 'Query's are only supported by 'ShelleyNodeToClientVersion2', so we -- make sure to not generate those queries in combination with -- 'ShelleyNodeToClientVersion1'. -instance CanMock proto era - => Arbitrary (WithVersion ShelleyNodeToClientVersion (SomeBlockQuery (BlockQuery (ShelleyBlock proto era)))) where +instance + CanMock proto era => + Arbitrary + (WithVersion ShelleyNodeToClientVersion (SomeBlockQuery (BlockQuery (ShelleyBlock proto era)))) + where arbitrary = do - query@(SomeBlockQuery q) <- arbitrary - version <- arbitrary `suchThat` blockQueryIsSupportedOnVersion q - return $ WithVersion version query + query@(SomeBlockQuery q) <- arbitrary + version <- arbitrary `suchThat` blockQueryIsSupportedOnVersion q + return $ WithVersion version query diff --git a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/MockCrypto.hs b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/MockCrypto.hs index b4f3ab5e4b..086085fc30 100644 --- a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/MockCrypto.hs +++ b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/Consensus/Shelley/MockCrypto.hs @@ -3,38 +3,40 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} +{-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -{-# LANGUAGE TypeOperators #-} -module Test.Consensus.Shelley.MockCrypto ( - Block +module Test.Consensus.Shelley.MockCrypto + ( Block , CanMock , MockCrypto ) where -import Cardano.Crypto.KES (MockKES) -import qualified Cardano.Crypto.KES as KES (Signable) -import Cardano.Crypto.Util 
(SignableRepresentation) -import Cardano.Crypto.VRF (MockVRF) -import Cardano.Ledger.BaseTypes (Seed) -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.Core as Core -import Cardano.Ledger.Shelley.LedgerState (StashedAVVMAddresses) -import Cardano.Protocol.Crypto (Crypto (..)) -import qualified Cardano.Protocol.TPraos.API as SL -import qualified Cardano.Protocol.TPraos.BHeader as SL -import Control.State.Transition.Extended (PredicateFailure) -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) -import qualified Ouroboros.Consensus.Protocol.Praos as Praos -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras (ShelleyEra) -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock, - ShelleyCompatible) -import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto) -import Test.QuickCheck (Arbitrary) +import Cardano.Crypto.KES (MockKES) +import Cardano.Crypto.KES qualified as KES (Signable) +import Cardano.Crypto.Util (SignableRepresentation) +import Cardano.Crypto.VRF (MockVRF) +import Cardano.Ledger.BaseTypes (Seed) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Core qualified as Core +import Cardano.Ledger.Shelley.LedgerState (StashedAVVMAddresses) +import Cardano.Protocol.Crypto (Crypto (..)) +import Cardano.Protocol.TPraos.API qualified as SL +import Cardano.Protocol.TPraos.BHeader qualified as SL +import Control.State.Transition.Extended (PredicateFailure) +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.Protocol.Praos qualified as Praos +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras (ShelleyEra) +import Ouroboros.Consensus.Shelley.Ledger + ( ShelleyBlock + , ShelleyCompatible + ) +import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto) +import Test.QuickCheck (Arbitrary) -- | A mock replacement for 
'StandardCrypto' -- @@ -45,8 +47,8 @@ import Test.QuickCheck (Arbitrary) data MockCrypto instance Crypto MockCrypto where - type KES MockCrypto = MockKES 10 - type VRF MockCrypto = MockVRF + type KES MockCrypto = MockKES 10 + type VRF MockCrypto = MockVRF instance SL.PraosCrypto MockCrypto instance Praos.PraosCrypto MockCrypto diff --git a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/ThreadNet/Infra/Shelley.hs b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/ThreadNet/Infra/Shelley.hs index 13f16e693a..c2bbaa4a42 100644 --- a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/ThreadNet/Infra/Shelley.hs +++ b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/ThreadNet/Infra/Shelley.hs @@ -11,8 +11,8 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} -module Test.ThreadNet.Infra.Shelley ( - CoreNode (..) +module Test.ThreadNet.Infra.Shelley + ( CoreNode (..) , CoreNodeKeyInfo (..) , DecentralizationParam (..) , KesConfig (..) 
@@ -36,70 +36,97 @@ module Test.ThreadNet.Infra.Shelley ( , tpraosSlotLength ) where -import Cardano.Crypto.DSIGN (DSIGNAlgorithm (..), SignKeyDSIGN, - seedSizeDSIGN) -import Cardano.Crypto.KES (KESAlgorithm (..), UnsoundPureSignKeyKES, - seedSizeKES, unsoundPureDeriveVerKeyKES, - unsoundPureGenKeyKES) -import Cardano.Crypto.Seed (mkSeedFromBytes) -import qualified Cardano.Crypto.Seed as Cardano.Crypto -import Cardano.Crypto.VRF (SignKeyVRF, deriveVerKeyVRF, genKeyVRF, - seedSizeVRF) -import qualified Cardano.Ledger.Allegra.Scripts as SL -import Cardano.Ledger.Alonzo (AlonzoEra) -import Cardano.Ledger.BaseTypes (boundRational, unNonZero) -import Cardano.Ledger.Hashes (EraIndependentTxBody, - HashAnnotated (..), SafeHash, hashAnnotated) -import qualified Cardano.Ledger.Keys as LK -import qualified Cardano.Ledger.Mary.Core as SL -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Val as SL -import Cardano.Protocol.Crypto (Crypto, KES, VRF, hashVerKeyVRF) -import Cardano.Protocol.TPraos.OCert - (OCert (ocertKESPeriod, ocertN, ocertSigma, ocertVkHot)) -import qualified Cardano.Protocol.TPraos.OCert as SL (KESPeriod, OCert (OCert), - OCertSignable (..)) -import Control.Monad.Except (throwError) -import qualified Data.ByteString as BS -import Data.Coerce (coerce) -import Data.ListMap (ListMap (ListMap)) -import qualified Data.ListMap as ListMap -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe.Strict (maybeToStrictMaybe) -import Data.Ratio (denominator, numerator) -import qualified Data.Sequence.Strict as Seq -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word (Word64) -import GHC.Generics (Generic) -import Lens.Micro -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Protocol.Praos.Common - (PraosCanBeLeader (PraosCanBeLeader), - 
praosCanBeLeaderColdVerKey, praosCanBeLeaderOpCert, - praosCanBeLeaderSignKeyVRF) -import Ouroboros.Consensus.Protocol.TPraos -import Ouroboros.Consensus.Shelley.Eras (ShelleyEra) -import Ouroboros.Consensus.Shelley.Ledger (GenTx (..), - ShelleyBasedEra, ShelleyBlock, ShelleyCompatible, - mkShelleyTx) -import Ouroboros.Consensus.Shelley.Node -import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto) -import Ouroboros.Consensus.Util.Assert -import Ouroboros.Consensus.Util.IOLike -import Quiet (Quiet (..)) -import qualified Test.Cardano.Ledger.Core.KeyPair as TL (KeyPair (..), - mkWitnessesVKey) -import qualified Test.Cardano.Ledger.Shelley.Generator.Core as Gen -import Test.Cardano.Ledger.Shelley.Utils (unsafeBoundRational) -import Test.QuickCheck -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) -import Test.Util.Time (dawnOfTime) +import Cardano.Crypto.DSIGN + ( DSIGNAlgorithm (..) + , SignKeyDSIGN + , seedSizeDSIGN + ) +import Cardano.Crypto.KES + ( KESAlgorithm (..) + , UnsoundPureSignKeyKES + , seedSizeKES + , unsoundPureDeriveVerKeyKES + , unsoundPureGenKeyKES + ) +import Cardano.Crypto.Seed (mkSeedFromBytes) +import Cardano.Crypto.Seed qualified as Cardano.Crypto +import Cardano.Crypto.VRF + ( SignKeyVRF + , deriveVerKeyVRF + , genKeyVRF + , seedSizeVRF + ) +import Cardano.Ledger.Allegra.Scripts qualified as SL +import Cardano.Ledger.Alonzo (AlonzoEra) +import Cardano.Ledger.BaseTypes (boundRational, unNonZero) +import Cardano.Ledger.Hashes + ( EraIndependentTxBody + , HashAnnotated (..) 
+ , SafeHash + , hashAnnotated + ) +import Cardano.Ledger.Keys qualified as LK +import Cardano.Ledger.Mary.Core qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Val qualified as SL +import Cardano.Protocol.Crypto (Crypto, KES, VRF, hashVerKeyVRF) +import Cardano.Protocol.TPraos.OCert + ( OCert (ocertKESPeriod, ocertN, ocertSigma, ocertVkHot) + ) +import Cardano.Protocol.TPraos.OCert qualified as SL + ( KESPeriod + , OCert (OCert) + , OCertSignable (..) + ) +import Control.Monad.Except (throwError) +import Data.ByteString qualified as BS +import Data.Coerce (coerce) +import Data.ListMap (ListMap (ListMap)) +import Data.ListMap qualified as ListMap +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe.Strict (maybeToStrictMaybe) +import Data.Ratio (denominator, numerator) +import Data.Sequence.Strict qualified as Seq +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import GHC.Generics (Generic) +import Lens.Micro +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Protocol.Praos.Common + ( PraosCanBeLeader (PraosCanBeLeader) + , praosCanBeLeaderColdVerKey + , praosCanBeLeaderOpCert + , praosCanBeLeaderSignKeyVRF + ) +import Ouroboros.Consensus.Protocol.TPraos +import Ouroboros.Consensus.Shelley.Eras (ShelleyEra) +import Ouroboros.Consensus.Shelley.Ledger + ( GenTx (..) + , ShelleyBasedEra + , ShelleyBlock + , ShelleyCompatible + , mkShelleyTx + ) +import Ouroboros.Consensus.Shelley.Node +import Ouroboros.Consensus.Shelley.Protocol.Abstract (ProtoCrypto) +import Ouroboros.Consensus.Util.Assert +import Ouroboros.Consensus.Util.IOLike +import Quiet (Quiet (..)) +import Test.Cardano.Ledger.Core.KeyPair qualified as TL + ( KeyPair (..) 
+ , mkWitnessesVKey + ) +import Test.Cardano.Ledger.Shelley.Generator.Core qualified as Gen +import Test.Cardano.Ledger.Shelley.Utils (unsafeBoundRational) +import Test.QuickCheck +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) +import Test.Util.Time (dawnOfTime) {------------------------------------------------------------------------------- The decentralization parameter @@ -109,17 +136,17 @@ import Test.Util.Time (dawnOfTime) -- -- In the range @0@ to @1@, inclusive. Beware the misnomer: @0@ means fully -- decentralized, and @1@ means fully centralized. -newtype DecentralizationParam = - DecentralizationParam {decentralizationParamToRational :: Rational } +newtype DecentralizationParam + = DecentralizationParam {decentralizationParamToRational :: Rational} deriving (Eq, Generic, Ord) - deriving (Show) via (Quiet DecentralizationParam) + deriving Show via (Quiet DecentralizationParam) -- | A fraction with denominator @10@ and numerator @0@ to @10@ inclusive instance Arbitrary DecentralizationParam where arbitrary = do - let d = 10 - n <- choose (0, d) - pure $ DecentralizationParam $ fromInteger n / fromInteger d + let d = 10 + n <- choose (0, d) + pure $ DecentralizationParam $ fromInteger n / fromInteger d {------------------------------------------------------------------------------- Important constants @@ -132,24 +159,24 @@ tpraosSlotLength = slotLengthFromSec 2 CoreNode secrets/etc -------------------------------------------------------------------------------} -data CoreNode c = CoreNode { - cnGenesisKey :: !(SignKeyDSIGN LK.DSIGN) - , cnDelegateKey :: !(SignKeyDSIGN LK.DSIGN) - -- ^ Cold delegate key. The hash of the corresponding verification - -- (public) key will be used as the payment credential. - , cnStakingKey :: !(SignKeyDSIGN LK.DSIGN) - -- ^ The hash of the corresponding verification (public) key will be - -- used as the staking credential. 
- , cnVRF :: !(SignKeyVRF (VRF c)) - , cnKES :: !(UnsoundPureSignKeyKES (KES c)) - , cnOCert :: !(SL.OCert c) - } +data CoreNode c = CoreNode + { cnGenesisKey :: !(SignKeyDSIGN LK.DSIGN) + , cnDelegateKey :: !(SignKeyDSIGN LK.DSIGN) + -- ^ Cold delegate key. The hash of the corresponding verification + -- (public) key will be used as the payment credential. + , cnStakingKey :: !(SignKeyDSIGN LK.DSIGN) + -- ^ The hash of the corresponding verification (public) key will be + -- used as the staking credential. + , cnVRF :: !(SignKeyVRF (VRF c)) + , cnKES :: !(UnsoundPureSignKeyKES (KES c)) + , cnOCert :: !(SL.OCert c) + } data CoreNodeKeyInfo c = CoreNodeKeyInfo - { cnkiKeyPair - :: ( TL.KeyPair 'SL.Payment - , TL.KeyPair 'SL.Staking - ) + { cnkiKeyPair :: + ( TL.KeyPair 'SL.Payment + , TL.KeyPair 'SL.Staking + ) , cnkiCoreNode :: ( TL.KeyPair 'SL.Genesis , Gen.AllIssuerKeys c 'SL.GenesisDelegate @@ -158,71 +185,75 @@ data CoreNodeKeyInfo c = CoreNodeKeyInfo coreNodeKeys :: CoreNode c -> CoreNodeKeyInfo c coreNodeKeys CoreNode{cnGenesisKey, cnDelegateKey, cnStakingKey} = - CoreNodeKeyInfo { - cnkiCoreNode = - ( mkKeyPair cnGenesisKey - , Gen.AllIssuerKeys - { Gen.aikCold = mkKeyPair cnDelegateKey - -- 'CoreNodeKeyInfo' is used for all sorts of generators, not + CoreNodeKeyInfo + { cnkiCoreNode = + ( mkKeyPair cnGenesisKey + , Gen.AllIssuerKeys + { Gen.aikCold = mkKeyPair cnDelegateKey + , -- 'CoreNodeKeyInfo' is used for all sorts of generators, not -- only transaction generators. To generate transactions we -- don't need all these keys, hence the 'error's. 
- , Gen.aikVrf = error "vrf used while generating transactions" - , Gen.aikHot = error "hot used while generating transactions" + Gen.aikVrf = error "vrf used while generating transactions" + , Gen.aikHot = error "hot used while generating transactions" , Gen.aikColdKeyHash = error "hk used while generating transactions" } - ) - , cnkiKeyPair = (mkKeyPair cnDelegateKey, mkKeyPair cnStakingKey) - } + ) + , cnkiKeyPair = (mkKeyPair cnDelegateKey, mkKeyPair cnStakingKey) + } genCoreNode :: - forall c. - Crypto c - => SL.KESPeriod - -> Gen (CoreNode c) + forall c. + Crypto c => + SL.KESPeriod -> + Gen (CoreNode c) genCoreNode startKESPeriod = do - genKey <- genKeyDSIGN <$> genSeed (seedSizeDSIGN (Proxy @LK.DSIGN)) - delKey <- genKeyDSIGN <$> genSeed (seedSizeDSIGN (Proxy @LK.DSIGN)) - stkKey <- genKeyDSIGN <$> genSeed (seedSizeDSIGN (Proxy @LK.DSIGN)) - vrfKey <- genKeyVRF <$> genSeed (seedSizeVRF (Proxy @(VRF c))) - kesKey <- unsoundPureGenKeyKES <$> genSeed (seedSizeKES (Proxy @(KES c))) - let kesPub = unsoundPureDeriveVerKeyKES kesKey - sigma = LK.signedDSIGN + genKey <- genKeyDSIGN <$> genSeed (seedSizeDSIGN (Proxy @LK.DSIGN)) + delKey <- genKeyDSIGN <$> genSeed (seedSizeDSIGN (Proxy @LK.DSIGN)) + stkKey <- genKeyDSIGN <$> genSeed (seedSizeDSIGN (Proxy @LK.DSIGN)) + vrfKey <- genKeyVRF <$> genSeed (seedSizeVRF (Proxy @(VRF c))) + kesKey <- unsoundPureGenKeyKES <$> genSeed (seedSizeKES (Proxy @(KES c))) + let kesPub = unsoundPureDeriveVerKeyKES kesKey + sigma = + LK.signedDSIGN delKey (SL.OCertSignable kesPub certificateIssueNumber startKESPeriod) - let ocert = SL.OCert { - ocertVkHot = kesPub - , ocertN = certificateIssueNumber + let ocert = + SL.OCert + { ocertVkHot = kesPub + , ocertN = certificateIssueNumber , ocertKESPeriod = startKESPeriod - , ocertSigma = sigma + , ocertSigma = sigma } - return CoreNode { - cnGenesisKey = genKey + return + CoreNode + { cnGenesisKey = genKey , cnDelegateKey = delKey - , cnStakingKey = stkKey - , cnVRF = vrfKey - , cnKES = 
kesKey - , cnOCert = ocert + , cnStakingKey = stkKey + , cnVRF = vrfKey + , cnKES = kesKey + , cnOCert = ocert } - where - certificateIssueNumber = 0 + where + certificateIssueNumber = 0 - genBytes :: Integral a => a -> Gen BS.ByteString - genBytes nbBytes = BS.pack <$> vectorOf (fromIntegral nbBytes) arbitrary + genBytes :: Integral a => a -> Gen BS.ByteString + genBytes nbBytes = BS.pack <$> vectorOf (fromIntegral nbBytes) arbitrary - genSeed :: Integral a => a -> Gen Cardano.Crypto.Seed - genSeed = fmap mkSeedFromBytes . genBytes + genSeed :: Integral a => a -> Gen Cardano.Crypto.Seed + genSeed = fmap mkSeedFromBytes . genBytes mkLeaderCredentials :: CoreNode c -> ShelleyLeaderCredentials c -mkLeaderCredentials CoreNode { cnDelegateKey, cnVRF, cnKES, cnOCert } = - ShelleyLeaderCredentials { - shelleyLeaderCredentialsInitSignKey = cnKES - , shelleyLeaderCredentialsCanBeLeader = PraosCanBeLeader { - praosCanBeLeaderOpCert = cnOCert - , praosCanBeLeaderColdVerKey = SL.VKey $ deriveVerKeyDSIGN cnDelegateKey - , praosCanBeLeaderSignKeyVRF = cnVRF - } - , shelleyLeaderCredentialsLabel = "ThreadNet" - } +mkLeaderCredentials CoreNode{cnDelegateKey, cnVRF, cnKES, cnOCert} = + ShelleyLeaderCredentials + { shelleyLeaderCredentialsInitSignKey = cnKES + , shelleyLeaderCredentialsCanBeLeader = + PraosCanBeLeader + { praosCanBeLeaderOpCert = cnOCert + , praosCanBeLeaderColdVerKey = SL.VKey $ deriveVerKeyDSIGN cnDelegateKey + , praosCanBeLeaderSignKeyVRF = cnVRF + } + , shelleyLeaderCredentialsLabel = "ThreadNet" + } {------------------------------------------------------------------------------- KES configuration @@ -234,27 +265,29 @@ mkLeaderCredentials CoreNode { cnDelegateKey, cnVRF, cnKES, cnOCert } = -- TODO This limitation may be lifted by PR #2107, see -- . data KesConfig = KesConfig - { maxEvolutions :: Word64 + { maxEvolutions :: Word64 , slotsPerEvolution :: Word64 } -- | A 'KesConfig' that will not require more evolutions than this test's crypto -- allows. 
mkKesConfig :: - forall proxy c. Crypto c - => proxy c -> NumSlots -> KesConfig -mkKesConfig _ (NumSlots t) = KesConfig + forall proxy c. + Crypto c => + proxy c -> NumSlots -> KesConfig +mkKesConfig _ (NumSlots t) = + KesConfig { maxEvolutions , slotsPerEvolution = divCeiling t maxEvolutions } - where - maxEvolutions = fromIntegral $ totalPeriodsKES (Proxy @(KES c)) + where + maxEvolutions = fromIntegral $ totalPeriodsKES (Proxy @(KES c)) - -- | Like 'div', but rounds-up. - divCeiling :: Integral a => a -> a -> a - divCeiling n d = q + min 1 r - where - (q, r) = quotRem n d + -- \| Like 'div', but rounds-up. + divCeiling :: Integral a => a -> a -> a + divCeiling n d = q + min 1 r + where + (q, r) = quotRem n d {------------------------------------------------------------------------------- TPraos node configuration @@ -265,165 +298,182 @@ mkKesConfig _ (NumSlots t) = KesConfig -- INVARIANT: @10 * k / f@ must be a whole number. mkEpochSize :: SecurityParam -> Rational -> EpochSize mkEpochSize (SecurityParam k) f = - if r /= 0 then error "10 * k / f must be a whole number" else - EpochSize q - where - n = numerator f - d = denominator f + if r /= 0 + then error "10 * k / f must be a whole number" + else + EpochSize q + where + n = numerator f + d = denominator f - (q, r) = quotRem (10 * unNonZero k * fromInteger d) (fromInteger n) + (q, r) = quotRem (10 * unNonZero k * fromInteger d) (fromInteger n) -- | Note: a KES algorithm supports a particular max number of KES evolutions, -- but we can configure a potentially lower maximum for the ledger, that's why -- we take it as an argument. mkGenesisConfig :: - forall c. PraosCrypto c - => ProtVer -- ^ Initial protocol version - -> SecurityParam - -> Rational -- ^ Initial active slot coefficient - -> DecentralizationParam - -> Word64 - -- ^ Max Lovelace supply, must be >= #coreNodes * initialLovelacePerCoreNode - -> SlotLength - -> KesConfig - -> [CoreNode c] - -> ShelleyGenesis + forall c. 
+ PraosCrypto c => + -- | Initial protocol version + ProtVer -> + SecurityParam -> + -- | Initial active slot coefficient + Rational -> + DecentralizationParam -> + -- | Max Lovelace supply, must be >= #coreNodes * initialLovelacePerCoreNode + Word64 -> + SlotLength -> + KesConfig -> + [CoreNode c] -> + ShelleyGenesis mkGenesisConfig pVer k f d maxLovelaceSupply slotLength kesCfg coreNodes = - assertWithMsg checkMaxLovelaceSupply $ - ShelleyGenesis { - -- Matches the start of the ThreadNet tests - sgSystemStart = dawnOfTime - , sgNetworkMagic = 0 - , sgNetworkId = networkId - , sgActiveSlotsCoeff = unsafeBoundRational f - , sgSecurityParam = maxRollbacks k - , sgEpochLength = mkEpochSize k f - , sgSlotsPerKESPeriod = slotsPerEvolution kesCfg - , sgMaxKESEvolutions = maxEvolutions kesCfg - , sgSlotLength = SL.toNominalDiffTimeMicroWithRounding $ getSlotLength slotLength - , sgUpdateQuorum = quorum - , sgMaxLovelaceSupply = maxLovelaceSupply - , sgProtocolParams = pparams - , sgGenDelegs = coreNodesToGenesisMapping - , sgInitialFunds = ListMap.fromMap initialFunds - , sgStaking = initialStake - } - where - checkMaxLovelaceSupply :: Either String () - checkMaxLovelaceSupply - | maxLovelaceSupply >= - fromIntegral (length coreNodes) * initialLovelacePerCoreNode - = return () - | otherwise - = throwError $ unwords [ - "Lovelace supply =" - , show maxLovelaceSupply - , "but must be at least" - , show (fromIntegral (length coreNodes) * initialLovelacePerCoreNode) - ] + assertWithMsg checkMaxLovelaceSupply $ + ShelleyGenesis + { -- Matches the start of the ThreadNet tests + sgSystemStart = dawnOfTime + , sgNetworkMagic = 0 + , sgNetworkId = networkId + , sgActiveSlotsCoeff = unsafeBoundRational f + , sgSecurityParam = maxRollbacks k + , sgEpochLength = mkEpochSize k f + , sgSlotsPerKESPeriod = slotsPerEvolution kesCfg + , sgMaxKESEvolutions = maxEvolutions kesCfg + , sgSlotLength = SL.toNominalDiffTimeMicroWithRounding $ getSlotLength slotLength + , sgUpdateQuorum = quorum 
+ , sgMaxLovelaceSupply = maxLovelaceSupply + , sgProtocolParams = pparams + , sgGenDelegs = coreNodesToGenesisMapping + , sgInitialFunds = ListMap.fromMap initialFunds + , sgStaking = initialStake + } + where + checkMaxLovelaceSupply :: Either String () + checkMaxLovelaceSupply + | maxLovelaceSupply + >= fromIntegral (length coreNodes) * initialLovelacePerCoreNode = + return () + | otherwise = + throwError $ + unwords + [ "Lovelace supply =" + , show maxLovelaceSupply + , "but must be at least" + , show (fromIntegral (length coreNodes) * initialLovelacePerCoreNode) + ] - quorum :: Word64 - quorum = nbCoreNodes `min` ((nbCoreNodes `div` 2) + 1) - where - nbCoreNodes = fromIntegral (length coreNodes) - - pparams :: SL.PParams ShelleyEra - pparams = SL.emptyPParams - & SL.ppDL .~ - unsafeBoundRational (decentralizationParamToRational d) - & SL.ppMaxBBSizeL .~ 10000 -- TODO - & SL.ppMaxBHSizeL .~ 1000 -- TODO + quorum :: Word64 + quorum = nbCoreNodes `min` ((nbCoreNodes `div` 2) + 1) + where + nbCoreNodes = fromIntegral (length coreNodes) + + pparams :: SL.PParams ShelleyEra + pparams = + SL.emptyPParams + & SL.ppDL + .~ unsafeBoundRational (decentralizationParamToRational d) + & SL.ppMaxBBSizeL .~ 10000 -- TODO + & SL.ppMaxBHSizeL .~ 1000 -- TODO & SL.ppProtocolVersionL .~ pVer - coreNodesToGenesisMapping :: - Map (SL.KeyHash 'SL.Genesis) SL.GenDelegPair - coreNodesToGenesisMapping = Map.fromList + coreNodesToGenesisMapping :: + Map (SL.KeyHash 'SL.Genesis) SL.GenDelegPair + coreNodesToGenesisMapping = + Map.fromList [ let gkh :: SL.KeyHash 'SL.Genesis gkh = SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnGenesisKey gdpair :: SL.GenDelegPair - gdpair = SL.GenDelegPair + gdpair = + SL.GenDelegPair (SL.hashKey . 
SL.VKey $ deriveVerKeyDSIGN cnDelegateKey) (hashVerKeyVRF @c $ deriveVerKeyVRF cnVRF) - - in (gkh, gdpair) - | CoreNode { cnGenesisKey, cnDelegateKey, cnVRF } <- coreNodes + in + (gkh, gdpair) + | CoreNode{cnGenesisKey, cnDelegateKey, cnVRF} <- coreNodes ] - initialFunds :: Map SL.Addr SL.Coin - initialFunds = Map.fromList + initialFunds :: Map SL.Addr SL.Coin + initialFunds = + Map.fromList [ (addr, coin) - | CoreNode { cnDelegateKey, cnStakingKey } <- coreNodes - , let addr = SL.Addr networkId - (mkCredential cnDelegateKey) - (SL.StakeRefBase (mkCredential cnStakingKey)) + | CoreNode{cnDelegateKey, cnStakingKey} <- coreNodes + , let addr = + SL.Addr + networkId + (mkCredential cnDelegateKey) + (SL.StakeRefBase (mkCredential cnStakingKey)) coin = SL.Coin $ fromIntegral initialLovelacePerCoreNode ] - -- In this initial stake, each core node delegates its stake to itself. - initialStake :: ShelleyGenesisStaking - initialStake = ShelleyGenesisStaking - { sgsPools = ListMap - [ (pk, pp) - | pp@SL.PoolParams { ppId = pk } <- Map.elems coreNodeToPoolMapping - ] - -- The staking key maps to the key hash of the pool, which is set to the + -- In this initial stake, each core node delegates its stake to itself. + initialStake :: ShelleyGenesisStaking + initialStake = + ShelleyGenesisStaking + { sgsPools = + ListMap + [ (pk, pp) + | pp@SL.PoolParams{ppId = pk} <- Map.elems coreNodeToPoolMapping + ] + , -- The staking key maps to the key hash of the pool, which is set to the -- "delegate key" in order that nodes may issue blocks both as delegates -- and as stake pools. - , sgsStake = ListMap - [ ( SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnStakingKey - , SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnDelegateKey - ) - | CoreNode {cnDelegateKey, cnStakingKey} <- coreNodes - ] - } - where - coreNodeToPoolMapping :: - Map (SL.KeyHash 'SL.StakePool) SL.PoolParams - coreNodeToPoolMapping = Map.fromList [ - ( SL.hashKey . SL.VKey . 
deriveVerKeyDSIGN $ cnStakingKey - , SL.PoolParams - { SL.ppId = poolHash - , SL.ppVrf = vrfHash - -- Each core node pledges its full stake to the pool. - , SL.ppPledge = SL.Coin $ fromIntegral initialLovelacePerCoreNode - , SL.ppCost = SL.Coin 1 - , SL.ppMargin = minBound - -- Reward accounts live in a separate "namespace" to other - -- accounts, so it should be fine to use the same address. - , SL.ppRewardAccount = SL.RewardAccount networkId $ mkCredential cnDelegateKey - , SL.ppOwners = Set.singleton poolOwnerHash - , SL.ppRelays = Seq.empty - , SL.ppMetadata = SL.SNothing - } + sgsStake = + ListMap + [ ( SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnStakingKey + , SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnDelegateKey ) - | CoreNode { cnDelegateKey, cnStakingKey, cnVRF } <- coreNodes - -- The pool and owner hashes are derived from the same key, but - -- use different hashing schemes - , let poolHash = SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnDelegateKey - , let poolOwnerHash = SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnDelegateKey - , let vrfHash = hashVerKeyVRF @c $ deriveVerKeyVRF cnVRF + | CoreNode{cnDelegateKey, cnStakingKey} <- coreNodes ] + } + where + coreNodeToPoolMapping :: + Map (SL.KeyHash 'SL.StakePool) SL.PoolParams + coreNodeToPoolMapping = + Map.fromList + [ ( SL.hashKey . SL.VKey . deriveVerKeyDSIGN $ cnStakingKey + , SL.PoolParams + { SL.ppId = poolHash + , SL.ppVrf = vrfHash + , -- Each core node pledges its full stake to the pool. + SL.ppPledge = SL.Coin $ fromIntegral initialLovelacePerCoreNode + , SL.ppCost = SL.Coin 1 + , SL.ppMargin = minBound + , -- Reward accounts live in a separate "namespace" to other + -- accounts, so it should be fine to use the same address. 
+ SL.ppRewardAccount = SL.RewardAccount networkId $ mkCredential cnDelegateKey + , SL.ppOwners = Set.singleton poolOwnerHash + , SL.ppRelays = Seq.empty + , SL.ppMetadata = SL.SNothing + } + ) + | CoreNode{cnDelegateKey, cnStakingKey, cnVRF} <- coreNodes + , -- The pool and owner hashes are derived from the same key, but + -- use different hashing schemes + let poolHash = SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnDelegateKey + , let poolOwnerHash = SL.hashKey . SL.VKey $ deriveVerKeyDSIGN cnDelegateKey + , let vrfHash = hashVerKeyVRF @c $ deriveVerKeyVRF cnVRF + ] mkProtocolShelley :: - forall m c. - (IOLike m, ShelleyCompatible (TPraos c) ShelleyEra) - => ShelleyGenesis - -> SL.Nonce - -> ProtVer - -> CoreNode c - -> ( ProtocolInfo (ShelleyBlock (TPraos c) ShelleyEra) - , m [BlockForging m (ShelleyBlock (TPraos c) ShelleyEra)] - ) + forall m c. + (IOLike m, ShelleyCompatible (TPraos c) ShelleyEra) => + ShelleyGenesis -> + SL.Nonce -> + ProtVer -> + CoreNode c -> + ( ProtocolInfo (ShelleyBlock (TPraos c) ShelleyEra) + , m [BlockForging m (ShelleyBlock (TPraos c) ShelleyEra)] + ) mkProtocolShelley genesis initialNonce protVer coreNode = - protocolInfoShelley - genesis - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = initialNonce - , shelleyBasedLeaderCredentials = [mkLeaderCredentials coreNode] - } - protVer + protocolInfoShelley + genesis + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = initialNonce + , shelleyBasedLeaderCredentials = [mkLeaderCredentials coreNode] + } + protVer + {------------------------------------------------------------------------------- Necessary transactions for updating the 'DecentralizationParam' -------------------------------------------------------------------------------} @@ -432,79 +482,88 @@ incrementMinorProtVer :: SL.ProtVer -> SL.ProtVer incrementMinorProtVer (SL.ProtVer major minor) = SL.ProtVer major (succ minor) mkSetDecentralizationParamTxs :: - forall c. 
(ShelleyBasedEra ShelleyEra) - => [CoreNode c] - -> ProtVer -- ^ The proposed protocol version - -> SlotNo -- ^ The TTL - -> DecentralizationParam -- ^ The new value - -> [GenTx (ShelleyBlock (TPraos c) ShelleyEra)] + forall c. + ShelleyBasedEra ShelleyEra => + [CoreNode c] -> + -- | The proposed protocol version + ProtVer -> + -- | The TTL + SlotNo -> + -- | The new value + DecentralizationParam -> + [GenTx (ShelleyBlock (TPraos c) ShelleyEra)] mkSetDecentralizationParamTxs coreNodes pVer ttl dNew = - (:[]) $ + (: []) $ mkShelleyTx $ - SL.mkBasicTx body & SL.witsTxL .~ witnesses - where - -- The funds touched by this transaction assume it's the first transaction - -- executed. - scheduledEpoch :: EpochNo - scheduledEpoch = EpochNo 0 - - - witnesses :: SL.TxWits ShelleyEra - witnesses = SL.mkBasicTxWits & SL.addrTxWitsL .~ signatures - - -- Every node signs the transaction body, since it includes a " vote " from - -- every node. - signatures :: Set (SL.WitVKey 'SL.Witness) - signatures = - TL.mkWitnessesVKey - (hashAnnotated body) - [ TL.KeyPair (SL.VKey vk) sk - | cn <- coreNodes - , let sk = cnDelegateKey cn - , let vk = deriveVerKeyDSIGN sk - ] - - -- Nothing but the parameter update and the obligatory touching of an - -- input. - body :: SL.TxBody ShelleyEra - body = SL.mkBasicTxBody - & SL.inputsTxBodyL .~ Set.singleton (fst touchCoins) - & SL.outputsTxBodyL .~ Seq.singleton (snd touchCoins) - & SL.ttlTxBodyL .~ ttl - & SL.updateTxBodyL .~ SL.SJust update - - -- Every Shelley transaction requires one input. - -- - -- We use the input of the first node, but we just put it all right back. - -- - -- ASSUMPTION: This transaction runs in the first slot. - touchCoins :: (SL.TxIn, SL.TxOut ShelleyEra) - touchCoins = case coreNodes of - [] -> error "no nodes!" 
-      cn:_ ->
-        ( SL.initialFundsPseudoTxIn addr
-        , SL.ShelleyTxOut addr coin
-        )
-        where
-          addr = SL.Addr networkId
-              (mkCredential (cnDelegateKey cn))
-              (SL.StakeRefBase (mkCredential (cnStakingKey cn)))
-          coin = SL.Coin $ fromIntegral initialLovelacePerCoreNode
+    SL.mkBasicTx body & SL.witsTxL .~ witnesses
+  where
+    -- The funds touched by this transaction assume it's the first transaction
+    -- executed.
+    scheduledEpoch :: EpochNo
+    scheduledEpoch = EpochNo 0
+
+    witnesses :: SL.TxWits ShelleyEra
+    witnesses = SL.mkBasicTxWits & SL.addrTxWitsL .~ signatures
+
+    -- Every node signs the transaction body, since it includes a " vote " from
+    -- every node.
+    signatures :: Set (SL.WitVKey 'SL.Witness)
+    signatures =
+      TL.mkWitnessesVKey
+        (hashAnnotated body)
+        [ TL.KeyPair (SL.VKey vk) sk
+        | cn <- coreNodes
+        , let sk = cnDelegateKey cn
+        , let vk = deriveVerKeyDSIGN sk
+        ]

-    -- One replicant of the parameter update per each node.
-    update :: SL.Update ShelleyEra
-    update =
-      flip SL.Update scheduledEpoch $ SL.ProposedPPUpdates $
+    -- Nothing but the parameter update and the obligatory touching of an
+    -- input.
+    body :: SL.TxBody ShelleyEra
+    body =
+      SL.mkBasicTxBody
+        & SL.inputsTxBodyL .~ Set.singleton (fst touchCoins)
+        & SL.outputsTxBodyL .~ Seq.singleton (snd touchCoins)
+        & SL.ttlTxBodyL .~ ttl
+        & SL.updateTxBodyL .~ SL.SJust update
+
+    -- Every Shelley transaction requires one input.
+    --
+    -- We use the input of the first node, but we just put it all right back.
+    --
+    -- ASSUMPTION: This transaction runs in the first slot.
+    touchCoins :: (SL.TxIn, SL.TxOut ShelleyEra)
+    touchCoins = case coreNodes of
+      [] -> error "no nodes!"
+      cn : _ ->
+        ( SL.initialFundsPseudoTxIn addr
+        , SL.ShelleyTxOut addr coin
+        )
+       where
+        addr =
+          SL.Addr
+            networkId
+            (mkCredential (cnDelegateKey cn))
+            (SL.StakeRefBase (mkCredential (cnStakingKey cn)))
+        coin = SL.Coin $ fromIntegral initialLovelacePerCoreNode
+
+    -- One replica of the parameter update per node.
+ update :: SL.Update ShelleyEra + update = + flip SL.Update scheduledEpoch $ + SL.ProposedPPUpdates $ Map.fromList $ - [ ( SL.hashKey $ SL.VKey $ deriveVerKeyDSIGN $ cnGenesisKey cn - , SL.emptyPParamsUpdate - & SL.ppuDL .~ (maybeToStrictMaybe $ - boundRational $ - decentralizationParamToRational dNew) - & SL.ppuProtocolVersionL .~ SL.SJust pVer - ) - | cn <- coreNodes - ] + [ ( SL.hashKey $ SL.VKey $ deriveVerKeyDSIGN $ cnGenesisKey cn + , SL.emptyPParamsUpdate + & SL.ppuDL + .~ ( maybeToStrictMaybe $ + boundRational $ + decentralizationParamToRational dNew + ) + & SL.ppuProtocolVersionL .~ SL.SJust pVer + ) + | cn <- coreNodes + ] {------------------------------------------------------------------------------- Auxiliary @@ -523,9 +582,10 @@ mkVerKey :: SignKeyDSIGN LK.DSIGN -> SL.VKey r mkVerKey = SL.VKey . deriveVerKeyDSIGN mkKeyPair :: SignKeyDSIGN LK.DSIGN -> TL.KeyPair r -mkKeyPair sk = TL.KeyPair { vKey = mkVerKey sk, sKey = sk } +mkKeyPair sk = TL.KeyPair{vKey = mkVerKey sk, sKey = sk} -mkKeyHashVrf :: forall c r. Crypto c => SignKeyVRF (VRF c) -> LK.VRFVerKeyHash (r :: LK.KeyRoleVRF) +mkKeyHashVrf :: + forall c r. Crypto c => SignKeyVRF (VRF c) -> LK.VRFVerKeyHash (r :: LK.KeyRoleVRF) mkKeyHashVrf = hashVerKeyVRF @c . deriveVerKeyVRF networkId :: SL.Network @@ -540,95 +600,105 @@ networkId = SL.Testnet -- Our current plan is to replace all of this infrastructure with the ThreadNet -- rewrite; so we're minimizing the work and maintenance here for now. mkMASetDecentralizationParamTxs :: - forall proto era. - ( ShelleyBasedEra era - , SL.AllegraEraTxBody era - , SL.ShelleyEraTxBody era - , SL.AtMostEra AlonzoEra era - ) - => [CoreNode (ProtoCrypto proto)] - -> ProtVer -- ^ The proposed protocol version - -> SlotNo -- ^ The TTL - -> DecentralizationParam -- ^ The new value - -> [GenTx (ShelleyBlock proto era)] + forall proto era. 
+ ( ShelleyBasedEra era + , SL.AllegraEraTxBody era + , SL.ShelleyEraTxBody era + , SL.AtMostEra AlonzoEra era + ) => + [CoreNode (ProtoCrypto proto)] -> + -- | The proposed protocol version + ProtVer -> + -- | The TTL + SlotNo -> + -- | The new value + DecentralizationParam -> + [GenTx (ShelleyBlock proto era)] mkMASetDecentralizationParamTxs coreNodes pVer ttl dNew = - (:[]) $ + (: []) $ mkShelleyTx $ - SL.mkBasicTx body & SL.witsTxL .~ witnesses - where - -- The funds touched by this transaction assume it's the first transaction - -- executed. - scheduledEpoch :: EpochNo - scheduledEpoch = EpochNo 0 - - witnesses :: SL.TxWits era - witnesses = SL.mkBasicTxWits & SL.addrTxWitsL .~ signatures - - -- Every node signs the transaction body, since it includes a " vote " from - -- every node. - signatures :: Set (SL.WitVKey 'SL.Witness) - signatures = - TL.mkWitnessesVKey - (eraIndTxBodyHash' body) - [ TL.KeyPair (SL.VKey vk) sk - | cn <- coreNodes - , let sk = cnDelegateKey cn - , let vk = deriveVerKeyDSIGN sk - ] + SL.mkBasicTx body & SL.witsTxL .~ witnesses + where + -- The funds touched by this transaction assume it's the first transaction + -- executed. + scheduledEpoch :: EpochNo + scheduledEpoch = EpochNo 0 + + witnesses :: SL.TxWits era + witnesses = SL.mkBasicTxWits & SL.addrTxWitsL .~ signatures + + -- Every node signs the transaction body, since it includes a " vote " from + -- every node. + signatures :: Set (SL.WitVKey 'SL.Witness) + signatures = + TL.mkWitnessesVKey + (eraIndTxBodyHash' body) + [ TL.KeyPair (SL.VKey vk) sk + | cn <- coreNodes + , let sk = cnDelegateKey cn + , let vk = deriveVerKeyDSIGN sk + ] - -- Nothing but the parameter update and the obligatory touching of an - -- input. - body :: SL.TxBody era - body = SL.mkBasicTxBody + -- Nothing but the parameter update and the obligatory touching of an + -- input. 
+    body :: SL.TxBody era
+    body =
+      SL.mkBasicTxBody
         & SL.inputsTxBodyL .~ inputs
         & SL.outputsTxBodyL .~ outputs
         & SL.vldtTxBodyL .~ vldt
         & SL.updateTxBodyL .~ update'
-      where
-        inputs = Set.singleton (fst touchCoins)
-        outputs = Seq.singleton (snd touchCoins)
-        vldt = SL.ValidityInterval {
-            invalidBefore = SL.SNothing
          , invalidHereafter = SL.SJust ttl
          }
-        update' = SL.SJust update
-
-    -- Every Shelley transaction requires one input.
-    --
-    -- We use the input of the first node, but we just put it all right back.
-    --
-    -- ASSUMPTION: This transaction runs in the first slot.
-    touchCoins :: (SL.TxIn, SL.TxOut era)
-    touchCoins = case coreNodes of
-        [] -> error "no nodes!"
-        cn:_ ->
-            ( SL.initialFundsPseudoTxIn addr
-            , SL.mkBasicTxOut addr coin
-            )
-          where
-            addr = SL.Addr networkId
-                (mkCredential (cnDelegateKey cn))
-                (SL.StakeRefBase (mkCredential (cnStakingKey cn)))
-            coin = SL.inject $ SL.Coin $ fromIntegral initialLovelacePerCoreNode
-
-    -- One replicant of the parameter update per each node.
-    update :: SL.Update era
-    update =
-      flip SL.Update scheduledEpoch $ SL.ProposedPPUpdates $
+      where
+        inputs = Set.singleton (fst touchCoins)
+        outputs = Seq.singleton (snd touchCoins)
+        vldt =
+          SL.ValidityInterval
+            { invalidBefore = SL.SNothing
+            , invalidHereafter = SL.SJust ttl
+            }
+        update' = SL.SJust update
+
+    -- Every Shelley transaction requires one input.
+    --
+    -- We use the input of the first node, but we just put it all right back.
+    --
+    -- ASSUMPTION: This transaction runs in the first slot.
+    touchCoins :: (SL.TxIn, SL.TxOut era)
+    touchCoins = case coreNodes of
+      [] -> error "no nodes!"
+      cn : _ ->
+        ( SL.initialFundsPseudoTxIn addr
+        , SL.mkBasicTxOut addr coin
+        )
+       where
+        addr =
+          SL.Addr
+            networkId
+            (mkCredential (cnDelegateKey cn))
+            (SL.StakeRefBase (mkCredential (cnStakingKey cn)))
+        coin = SL.inject $ SL.Coin $ fromIntegral initialLovelacePerCoreNode
+
+    -- One replica of the parameter update per node.
+ update :: SL.Update era + update = + flip SL.Update scheduledEpoch $ + SL.ProposedPPUpdates $ Map.fromList $ - [ ( SL.hashKey $ SL.VKey $ deriveVerKeyDSIGN $ cnGenesisKey cn - , SL.emptyPParamsUpdate - & SL.ppuDL .~ (maybeToStrictMaybe $ - boundRational $ - decentralizationParamToRational dNew) - & SL.ppuProtocolVersionL .~ SL.SJust pVer - ) - | cn <- coreNodes - ] + [ ( SL.hashKey $ SL.VKey $ deriveVerKeyDSIGN $ cnGenesisKey cn + , SL.emptyPParamsUpdate + & SL.ppuDL + .~ ( maybeToStrictMaybe $ + boundRational $ + decentralizationParamToRational dNew + ) + & SL.ppuProtocolVersionL .~ SL.SJust pVer + ) + | cn <- coreNodes + ] eraIndTxBodyHash' :: - HashAnnotated body EraIndependentTxBody - => body - -> SafeHash - EraIndependentTxBody + HashAnnotated body EraIndependentTxBody => + body -> + SafeHash + EraIndependentTxBody eraIndTxBodyHash' = coerce . hashAnnotated diff --git a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/ThreadNet/TxGen/Shelley.hs b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/ThreadNet/TxGen/Shelley.hs index af24405499..35c9842b13 100644 --- a/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/ThreadNet/TxGen/Shelley.hs +++ b/ouroboros-consensus-cardano/src/unstable-shelley-testlib/Test/ThreadNet/TxGen/Shelley.hs @@ -5,177 +5,180 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Test.ThreadNet.TxGen.Shelley ( - ShelleyTxGenExtra (..) +module Test.ThreadNet.TxGen.Shelley + ( ShelleyTxGenExtra (..) , WhetherToGeneratePPUs (..) 
, genTx , mkGenEnv ) where -import qualified Cardano.Ledger.Shelley.API as SL -import Control.Monad.Except (runExcept) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras (ShelleyEra) -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger -import qualified Test.Cardano.Ledger.Shelley.Constants as Gen -import qualified Test.Cardano.Ledger.Shelley.Generator.Core as Gen -import Test.Cardano.Ledger.Shelley.Generator.EraGen - (EraGen (genEraTwoPhase2Arg, genEraTwoPhase3Arg)) -import qualified Test.Cardano.Ledger.Shelley.Generator.Presets as Gen.Presets -import Test.Cardano.Ledger.Shelley.Generator.ShelleyEraGen () -import qualified Test.Cardano.Ledger.Shelley.Generator.Utxo as Gen -import Test.Consensus.Shelley.MockCrypto (MockCrypto) -import Test.QuickCheck -import Test.ThreadNet.Infra.Shelley -import Test.ThreadNet.TxGen (TxGen (..)) +import Cardano.Ledger.Shelley.API qualified as SL +import Control.Monad.Except (runExcept) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras (ShelleyEra) +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger +import Test.Cardano.Ledger.Shelley.Constants qualified as Gen +import Test.Cardano.Ledger.Shelley.Generator.Core qualified as Gen +import Test.Cardano.Ledger.Shelley.Generator.EraGen + ( EraGen (genEraTwoPhase2Arg, genEraTwoPhase3Arg) + ) +import Test.Cardano.Ledger.Shelley.Generator.Presets qualified as Gen.Presets +import 
Test.Cardano.Ledger.Shelley.Generator.ShelleyEraGen () +import Test.Cardano.Ledger.Shelley.Generator.Utxo qualified as Gen +import Test.Consensus.Shelley.MockCrypto (MockCrypto) +import Test.QuickCheck +import Test.ThreadNet.Infra.Shelley +import Test.ThreadNet.TxGen (TxGen (..)) data ShelleyTxGenExtra = ShelleyTxGenExtra - { -- | Generator environment. - stgeGenEnv :: Gen.GenEnv MockCrypto ShelleyEra - -- | Generate no transactions before this slot. + { stgeGenEnv :: Gen.GenEnv MockCrypto ShelleyEra + -- ^ Generator environment. , stgeStartAt :: SlotNo + -- ^ Generate no transactions before this slot. } instance TxGen (ShelleyBlock (TPraos MockCrypto) ShelleyEra) where - type TxGenExtra (ShelleyBlock (TPraos MockCrypto) ShelleyEra) = ShelleyTxGenExtra testGenTxs _coreNodeId _numCoreNodes curSlotNo cfg extra lst - | stgeStartAt > curSlotNo = pure [] - - -- TODO Temporarily disable the transaction generator until we fix the - -- failing assertion in TxSubmission.Inbound, see #2680. - -- - -- When fixed, remove the True case keepig the else case below to re-enable - -- the transaction generator. 
- - | otherwise = - if True - then pure [] - else do - n <- choose (0, 20) - go [] n - $ applyDiffs lst - $ applyChainTick OmitLedgerEvents lcfg curSlotNo - $ forgetLedgerTables lst - where - ShelleyTxGenExtra - { stgeGenEnv - , stgeStartAt - } = extra - - lcfg :: LedgerConfig (ShelleyBlock (TPraos MockCrypto) ShelleyEra) - lcfg = configLedger cfg - - go :: [GenTx (ShelleyBlock (TPraos MockCrypto) ShelleyEra)] -- ^ Accumulator - -> Integer -- ^ Number of txs to still produce - -> TickedLedgerState (ShelleyBlock (TPraos MockCrypto) ShelleyEra) ValuesMK - -> Gen [GenTx (ShelleyBlock (TPraos MockCrypto) ShelleyEra)] - go acc 0 _ = return (reverse acc) - go acc n st = do - mbTx <- genTx cfg curSlotNo st stgeGenEnv - case mbTx of - Nothing -> return (reverse acc) -- cannot afford more transactions - Just tx -> case runExcept $ fst <$> applyTx lcfg DoNotIntervene curSlotNo tx st of - -- We don't mind generating invalid transactions - Left _ -> go (tx:acc) (n - 1) st - Right st' -> go (tx:acc) (n - 1) (applyDiffs st st') + | stgeStartAt > curSlotNo = pure [] + -- TODO Temporarily disable the transaction generator until we fix the + -- failing assertion in TxSubmission.Inbound, see #2680. + -- + -- When fixed, remove the True case keeping the else case below to re-enable + -- the transaction generator.
+ + | otherwise = + if True + then pure [] + else do + n <- choose (0, 20) + go [] n $ + applyDiffs lst $ + applyChainTick OmitLedgerEvents lcfg curSlotNo $ + forgetLedgerTables lst + where + ShelleyTxGenExtra + { stgeGenEnv + , stgeStartAt + } = extra + + lcfg :: LedgerConfig (ShelleyBlock (TPraos MockCrypto) ShelleyEra) + lcfg = configLedger cfg + + go :: + [GenTx (ShelleyBlock (TPraos MockCrypto) ShelleyEra)] -> + -- \^ Accumulator + Integer -> + -- \^ Number of txs to still produce + TickedLedgerState (ShelleyBlock (TPraos MockCrypto) ShelleyEra) ValuesMK -> + Gen [GenTx (ShelleyBlock (TPraos MockCrypto) ShelleyEra)] + go acc 0 _ = return (reverse acc) + go acc n st = do + mbTx <- genTx cfg curSlotNo st stgeGenEnv + case mbTx of + Nothing -> return (reverse acc) -- cannot afford more transactions + Just tx -> case runExcept $ fst <$> applyTx lcfg DoNotIntervene curSlotNo tx st of + -- We don't mind generating invalid transactions + Left _ -> go (tx : acc) (n - 1) st + Right st' -> go (tx : acc) (n - 1) (applyDiffs st st') genTx :: - TopLevelConfig (ShelleyBlock (TPraos MockCrypto) ShelleyEra) - -> SlotNo - -> TickedLedgerState (ShelleyBlock (TPraos MockCrypto) ShelleyEra) ValuesMK - -> Gen.GenEnv MockCrypto ShelleyEra - -> Gen (Maybe (GenTx (ShelleyBlock (TPraos MockCrypto) ShelleyEra))) -genTx _cfg slotNo TickedShelleyLedgerState { tickedShelleyLedgerState } genEnv = - Just . mkShelleyTx <$> Gen.genTx + TopLevelConfig (ShelleyBlock (TPraos MockCrypto) ShelleyEra) -> + SlotNo -> + TickedLedgerState (ShelleyBlock (TPraos MockCrypto) ShelleyEra) ValuesMK -> + Gen.GenEnv MockCrypto ShelleyEra -> + Gen (Maybe (GenTx (ShelleyBlock (TPraos MockCrypto) ShelleyEra))) +genTx _cfg slotNo TickedShelleyLedgerState{tickedShelleyLedgerState} genEnv = + Just . 
mkShelleyTx + <$> Gen.genTx genEnv ledgerEnv (SL.LedgerState utxoSt dpState) - where - epochState :: SL.EpochState ShelleyEra - epochState = SL.nesEs tickedShelleyLedgerState - - ledgerEnv :: SL.LedgerEnv ShelleyEra - ledgerEnv = SL.LedgerEnv - { ledgerEpochNo = Nothing - , ledgerSlotNo = slotNo - , ledgerIx = minBound - , ledgerPp = getPParams tickedShelleyLedgerState - , ledgerAccount = SL.esAccountState epochState + where + epochState :: SL.EpochState ShelleyEra + epochState = SL.nesEs tickedShelleyLedgerState + + ledgerEnv :: SL.LedgerEnv ShelleyEra + ledgerEnv = + SL.LedgerEnv + { ledgerEpochNo = Nothing + , ledgerSlotNo = slotNo + , ledgerIx = minBound + , ledgerPp = getPParams tickedShelleyLedgerState + , ledgerAccount = SL.esAccountState epochState } - utxoSt :: SL.UTxOState ShelleyEra - utxoSt = - SL.lsUTxOState + utxoSt :: SL.UTxOState ShelleyEra + utxoSt = + SL.lsUTxOState . SL.esLState $ epochState - dpState :: SL.CertState ShelleyEra - dpState = - SL.lsCertState + dpState :: SL.CertState ShelleyEra + dpState = + SL.lsCertState . 
SL.esLState $ epochState data WhetherToGeneratePPUs = DoNotGeneratePPUs | DoGeneratePPUs - deriving (Show) + deriving Show mkGenEnv :: - WhetherToGeneratePPUs - -> [CoreNode MockCrypto] - -> Gen.GenEnv MockCrypto ShelleyEra + WhetherToGeneratePPUs -> + [CoreNode MockCrypto] -> + Gen.GenEnv MockCrypto ShelleyEra mkGenEnv whetherPPUs coreNodes = Gen.GenEnv keySpace scriptSpace constants - where - -- Configuration of the transaction generator - constants :: Gen.Constants - constants = - setCerts $ - setPPUs $ + where + -- Configuration of the transaction generator + constants :: Gen.Constants + constants = + setCerts $ + setPPUs $ Gen.defaultConstants { Gen.frequencyMIRCert = 0 , Gen.genTxStableUtxoSize = 100 , Gen.genTxUtxoIncrement = 3 } - where - -- Testing with certificates requires additional handling in the - -- testing framework, because, for example, they may transfer block - -- issuance rights from one node to another, and we must have the - -- relevant nodes brought online at that point. - setCerts cs = cs{ Gen.maxCertsPerTx = 0 } - - setPPUs cs = case whetherPPUs of - DoGeneratePPUs -> cs - DoNotGeneratePPUs -> cs{ Gen.frequencyTxUpdates = 0 } - - keySpace :: Gen.KeySpace MockCrypto ShelleyEra - keySpace = - Gen.KeySpace - (cnkiCoreNode <$> cn) - ksGenesisDelegates - ksStakePools - (ksKeyPairs <> (cnkiKeyPair <$> cn)) - ksMSigScripts - where - cn = coreNodeKeys <$> coreNodes - Gen.KeySpace_ - { ksKeyPairs, - ksMSigScripts, - ksGenesisDelegates, - ksStakePools - } = - Gen.Presets.keySpace @ShelleyEra constants - - scriptSpace :: Gen.ScriptSpace ShelleyEra - scriptSpace = - Gen.Presets.scriptSpace @ShelleyEra - (genEraTwoPhase3Arg @ShelleyEra) - (genEraTwoPhase2Arg @ShelleyEra) + where + -- Testing with certificates requires additional handling in the + -- testing framework, because, for example, they may transfer block + -- issuance rights from one node to another, and we must have the + -- relevant nodes brought online at that point. 
+ setCerts cs = cs{Gen.maxCertsPerTx = 0} + + setPPUs cs = case whetherPPUs of + DoGeneratePPUs -> cs + DoNotGeneratePPUs -> cs{Gen.frequencyTxUpdates = 0} + + keySpace :: Gen.KeySpace MockCrypto ShelleyEra + keySpace = + Gen.KeySpace + (cnkiCoreNode <$> cn) + ksGenesisDelegates + ksStakePools + (ksKeyPairs <> (cnkiKeyPair <$> cn)) + ksMSigScripts + where + cn = coreNodeKeys <$> coreNodes + Gen.KeySpace_ + { ksKeyPairs + , ksMSigScripts + , ksGenesisDelegates + , ksStakePools + } = + Gen.Presets.keySpace @ShelleyEra constants + + scriptSpace :: Gen.ScriptSpace ShelleyEra + scriptSpace = + Gen.Presets.scriptSpace @ShelleyEra + (genEraTwoPhase3Arg @ShelleyEra) + (genEraTwoPhase2Arg @ShelleyEra) diff --git a/ouroboros-consensus-cardano/test/byron-test/Main.hs b/ouroboros-consensus-cardano/test/byron-test/Main.hs index 752811b26f..4c1b1fbcff 100644 --- a/ouroboros-consensus-cardano/test/byron-test/Main.hs +++ b/ouroboros-consensus-cardano/test/byron-test/Main.hs @@ -1,23 +1,26 @@ module Main (main) where -import qualified Test.Consensus.Byron.Golden (tests) -import qualified Test.Consensus.Byron.LedgerTables (tests) -import qualified Test.Consensus.Byron.Serialisation (tests) -import Test.Tasty -import qualified Test.ThreadNet.Byron (tests) -import qualified Test.ThreadNet.DualByron (tests) -import Test.Util.TestEnv (defaultMainWithTestEnv, - defaultTestEnvConfig) +import Test.Consensus.Byron.Golden qualified (tests) +import Test.Consensus.Byron.LedgerTables qualified (tests) +import Test.Consensus.Byron.Serialisation qualified (tests) +import Test.Tasty +import Test.ThreadNet.Byron qualified (tests) +import Test.ThreadNet.DualByron qualified (tests) +import Test.Util.TestEnv + ( defaultMainWithTestEnv + , defaultTestEnvConfig + ) main :: IO () main = defaultMainWithTestEnv defaultTestEnvConfig tests tests :: TestTree tests = - testGroup "byron" - [ Test.Consensus.Byron.Golden.tests - , Test.Consensus.Byron.LedgerTables.tests - , 
Test.Consensus.Byron.Serialisation.tests - , Test.ThreadNet.Byron.tests - , Test.ThreadNet.DualByron.tests - ] + testGroup + "byron" + [ Test.Consensus.Byron.Golden.tests + , Test.Consensus.Byron.LedgerTables.tests + , Test.Consensus.Byron.Serialisation.tests + , Test.ThreadNet.Byron.tests + , Test.ThreadNet.DualByron.tests + ] diff --git a/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/Golden.hs b/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/Golden.hs index 6ca252e718..c704066c59 100644 --- a/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/Golden.hs +++ b/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/Golden.hs @@ -1,25 +1,25 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TemplateHaskell #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.Byron.Golden (tests) where -import Ouroboros.Consensus.Byron.Ledger.NetworkProtocolVersion -import Ouroboros.Consensus.Byron.Node () -import Ouroboros.Consensus.Ledger.Query (QueryVersion) -import System.FilePath ((</>)) -import Test.Consensus.Byron.Examples -import Test.Tasty -import Test.Util.Paths -import Test.Util.Serialisation.Golden +import Ouroboros.Consensus.Byron.Ledger.NetworkProtocolVersion +import Ouroboros.Consensus.Byron.Node () +import Ouroboros.Consensus.Ledger.Query (QueryVersion) +import System.FilePath ((</>)) +import Test.Consensus.Byron.Examples +import Test.Tasty +import Test.Util.Paths +import Test.Util.Serialisation.Golden tests :: TestTree tests = goldenTest_all codecConfig ($(getGoldenDir) </> "byron") examples instance ToGoldenDirectory ByronNodeToNodeVersion - -- Use defaults + +-- Use defaults instance ToGoldenDirectory (QueryVersion, ByronNodeToClientVersion) where - toGoldenDirectory (queryVersion, blockVersion) - = show queryVersion </> show blockVersion + toGoldenDirectory (queryVersion, blockVersion) = + show queryVersion </> show blockVersion diff --git
a/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/LedgerTables.hs b/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/LedgerTables.hs index 8e6c072869..838a20bafe 100644 --- a/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/LedgerTables.hs +++ b/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/LedgerTables.hs @@ -2,14 +2,16 @@ module Test.Consensus.Byron.LedgerTables (tests) where -import Ouroboros.Consensus.Byron.Ledger -import Test.Consensus.Byron.Generators () -import Test.LedgerTables -import Test.Tasty -import Test.Tasty.QuickCheck +import Ouroboros.Consensus.Byron.Ledger +import Test.Consensus.Byron.Generators () +import Test.LedgerTables +import Test.Tasty +import Test.Tasty.QuickCheck tests :: TestTree -tests = testGroup "LedgerTables" - [ testProperty "Stowable laws" (prop_stowable_laws @ByronBlock) - , testProperty "HasLedgerTables laws" (prop_hasledgertables_laws @ByronBlock) - ] +tests = + testGroup + "LedgerTables" + [ testProperty "Stowable laws" (prop_stowable_laws @ByronBlock) + , testProperty "HasLedgerTables laws" (prop_hasledgertables_laws @ByronBlock) + ] diff --git a/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/Serialisation.hs b/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/Serialisation.hs index dbb3fee198..55356b6424 100644 --- a/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/Serialisation.hs +++ b/ouroboros-consensus-cardano/test/byron-test/Test/Consensus/Byron/Serialisation.hs @@ -5,47 +5,49 @@ {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE RankNTypes #-} {-# LANGUAGE ScopedTypeVariables #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.Byron.Serialisation (tests) where -import Cardano.Chain.Block (ABlockOrBoundary (..)) -import qualified Cardano.Chain.Block as CC.Block -import qualified Cardano.Chain.Update as CC.Update -import Codec.CBOR.Write (toLazyByteString) -import qualified Data.ByteString.Lazy as 
Lazy -import Data.Constraint -import Ouroboros.Consensus.Byron.Ledger hiding (byronProtocolVersion, - byronSoftwareVersion) -import Ouroboros.Consensus.Byron.Node -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Node.Serialisation () -import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) -import qualified Test.Cardano.Chain.Genesis.Dummy as CC -import Test.Consensus.Byron.Generators -import Test.QuickCheck hiding (Result) -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Corruption -import Test.Util.Orphans.Arbitrary () -import Test.Util.Serialisation.Roundtrip +import Cardano.Chain.Block (ABlockOrBoundary (..)) +import Cardano.Chain.Block qualified as CC.Block +import Cardano.Chain.Update qualified as CC.Update +import Codec.CBOR.Write (toLazyByteString) +import Data.ByteString.Lazy qualified as Lazy +import Data.Constraint +import Ouroboros.Consensus.Byron.Ledger hiding + ( byronProtocolVersion + , byronSoftwareVersion + ) +import Ouroboros.Consensus.Byron.Node +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Node.Serialisation () +import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) +import Test.Cardano.Chain.Genesis.Dummy qualified as CC +import Test.Consensus.Byron.Generators +import Test.QuickCheck hiding (Result) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Corruption +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Roundtrip tests :: TestTree -tests = testGroup "Byron" +tests = + testGroup + "Byron" [ roundtrip_all testCodecCfg dictNestedHdr - , testProperty "BinaryBlockInfo sanity check" prop_byronBinaryBlockInfo - - , testGroup "Integrity" + , testGroup + "Integrity" [ testProperty "detect corruption in RegularBlock" prop_detectCorruption_RegularBlock ] ] - where - dictNestedHdr :: forall a. 
NestedCtxt_ ByronBlock Header a -> Dict (Eq a, Show a) - dictNestedHdr (CtxtByronBoundary _) = Dict - dictNestedHdr (CtxtByronRegular _) = Dict + where + dictNestedHdr :: forall a. NestedCtxt_ ByronBlock Header a -> Dict (Eq a, Show a) + dictNestedHdr (CtxtByronBoundary _) = Dict + dictNestedHdr (CtxtByronRegular _) = Dict {------------------------------------------------------------------------------- BinaryBlockInfo @@ -53,21 +55,21 @@ tests = testGroup "Byron" prop_byronBinaryBlockInfo :: ByronBlock -> Property prop_byronBinaryBlockInfo blk = - headerAnnotation === extractedHeader - where - BinaryBlockInfo { headerOffset, headerSize } = - byronBinaryBlockInfo blk + headerAnnotation === extractedHeader + where + BinaryBlockInfo{headerOffset, headerSize} = + byronBinaryBlockInfo blk - extractedHeader :: Lazy.ByteString - extractedHeader = - Lazy.take (fromIntegral headerSize) $ - Lazy.drop (fromIntegral headerOffset) $ + extractedHeader :: Lazy.ByteString + extractedHeader = + Lazy.take (fromIntegral headerSize) $ + Lazy.drop (fromIntegral headerOffset) $ toLazyByteString (encodeByronBlock blk) - headerAnnotation :: Lazy.ByteString - headerAnnotation = Lazy.fromStrict $ case byronBlockRaw blk of - ABOBBoundary b -> CC.Block.boundaryHeaderAnnotation $ CC.Block.boundaryHeader b - ABOBBlock b -> CC.Block.headerAnnotation $ CC.Block.blockHeader b + headerAnnotation :: Lazy.ByteString + headerAnnotation = Lazy.fromStrict $ case byronBlockRaw blk of + ABOBBoundary b -> CC.Block.boundaryHeaderAnnotation $ CC.Block.boundaryHeader b + ABOBBlock b -> CC.Block.headerAnnotation $ CC.Block.blockHeader b {------------------------------------------------------------------------------- Integrity @@ -79,24 +81,25 @@ prop_byronBinaryBlockInfo blk = -- only test with regular blocks. 
prop_detectCorruption_RegularBlock :: RegularBlock -> Corruption -> Property prop_detectCorruption_RegularBlock (RegularBlock blk) = - detectCorruption - encodeByronBlock - (decodeByronBlock epochSlots) - (verifyBlockIntegrity (configBlock testCfg)) - blk + detectCorruption + encodeByronBlock + (decodeByronBlock epochSlots) + (verifyBlockIntegrity (configBlock testCfg)) + blk -- | Matches the values used for the generators. testCfg :: TopLevelConfig ByronBlock testCfg = pInfoConfig protocolInfo - where - protocolInfo :: ProtocolInfo ByronBlock - protocolInfo = - protocolInfoByron $ ProtocolParamsByron { - byronGenesis = CC.dummyConfig + where + protocolInfo :: ProtocolInfo ByronBlock + protocolInfo = + protocolInfoByron $ + ProtocolParamsByron + { byronGenesis = CC.dummyConfig , byronPbftSignatureThreshold = Just (PBftSignatureThreshold 0.5) - , byronProtocolVersion = CC.Update.ProtocolVersion 1 0 0 - , byronSoftwareVersion = CC.Update.SoftwareVersion (CC.Update.ApplicationName "Cardano Test") 2 - , byronLeaderCredentials = Nothing + , byronProtocolVersion = CC.Update.ProtocolVersion 1 0 0 + , byronSoftwareVersion = CC.Update.SoftwareVersion (CC.Update.ApplicationName "Cardano Test") 2 + , byronLeaderCredentials = Nothing } -- | Matches the values used for the generators. diff --git a/ouroboros-consensus-cardano/test/byron-test/Test/ThreadNet/Byron.hs b/ouroboros-consensus-cardano/test/byron-test/Test/ThreadNet/Byron.hs index d2f0502662..26994ccb7a 100644 --- a/ouroboros-consensus-cardano/test/byron-test/Test/ThreadNet/Byron.hs +++ b/ouroboros-consensus-cardano/test/byron-test/Test/ThreadNet/Byron.hs @@ -8,8 +8,9 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Test.ThreadNet.Byron ( - tests +module Test.ThreadNet.Byron + ( tests + -- * To support the DualByron tests , TestSetup (..) 
, byronPBftParams @@ -18,116 +19,125 @@ module Test.ThreadNet.Byron ( , noEBBs ) where -import qualified Cardano.Chain.Block as Block -import qualified Cardano.Chain.Common as Common -import qualified Cardano.Chain.Delegation as Delegation -import qualified Cardano.Chain.Genesis as Genesis -import Cardano.Chain.ProtocolConstants (kEpochSlots) -import Cardano.Chain.Slotting (EpochNumber (..), unEpochSlots) -import qualified Cardano.Crypto as Crypto -import qualified Cardano.Crypto.DSIGN as Crypto -import Cardano.Crypto.Seed (mkSeedFromBytes) -import Cardano.Ledger.BaseTypes (knownNonZeroBounded, nonZero, - unNonZero) -import Cardano.Ledger.Binary (byronProtVer, reAnnotate) -import qualified Cardano.Ledger.Binary.Plain as Plain -import Control.Monad (join) -import qualified Data.ByteString as BS -import Data.Coerce (coerce) -import Data.Functor ((<&>)) -import qualified Data.Map.Strict as Map -import Data.Maybe (mapMaybe) -import qualified Data.Set as Set -import Data.Word (Word64) -import Numeric.Search.Range (searchFromTo) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import qualified Ouroboros.Consensus.Byron.Crypto.DSIGN as Crypto -import Ouroboros.Consensus.Byron.Ledger (ByronBlock, - ByronNodeToNodeVersion (..)) -import qualified Ouroboros.Consensus.Byron.Ledger as Byron -import Ouroboros.Consensus.Byron.Ledger.Conversions -import Ouroboros.Consensus.Byron.Node -import Ouroboros.Consensus.Byron.Protocol -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Tables -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.PBFT -import qualified Ouroboros.Consensus.Protocol.PBFT.Crypto as Crypto -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util.Condense (condense) -import Ouroboros.Network.Mock.Chain (Chain) -import qualified Ouroboros.Network.Mock.Chain as Chain -import 
Test.Cardano.Slotting.Numeric () -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import Test.ThreadNet.Infra.Byron -import Test.ThreadNet.Network (NodeOutput (..), - TestNodeInitialization (..)) -import qualified Test.ThreadNet.Ref.PBFT as Ref -import Test.ThreadNet.Rekeying -import Test.ThreadNet.TxGen.Byron () -import Test.ThreadNet.Util -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeRestarts -import Test.ThreadNet.Util.NodeToNodeVersion -import Test.ThreadNet.Util.NodeTopology -import Test.ThreadNet.Util.Seed -import Test.Util.HardFork.Future (singleEraFuture) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) -import qualified Test.Util.Stream as Stream -import Test.Util.TestEnv (adjustQuickCheckTests) +import Cardano.Chain.Block qualified as Block +import Cardano.Chain.Common qualified as Common +import Cardano.Chain.Delegation qualified as Delegation +import Cardano.Chain.Genesis qualified as Genesis +import Cardano.Chain.ProtocolConstants (kEpochSlots) +import Cardano.Chain.Slotting (EpochNumber (..), unEpochSlots) +import Cardano.Crypto qualified as Crypto +import Cardano.Crypto.DSIGN qualified as Crypto +import Cardano.Crypto.Seed (mkSeedFromBytes) +import Cardano.Ledger.BaseTypes + ( knownNonZeroBounded + , nonZero + , unNonZero + ) +import Cardano.Ledger.Binary (byronProtVer, reAnnotate) +import Cardano.Ledger.Binary.Plain qualified as Plain +import Control.Monad (join) +import Data.ByteString qualified as BS +import Data.Coerce (coerce) +import Data.Functor ((<&>)) +import Data.Map.Strict qualified as Map +import Data.Maybe (mapMaybe) +import Data.Set qualified as Set +import Data.Word (Word64) +import Numeric.Search.Range (searchFromTo) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Byron.Crypto.DSIGN qualified as Crypto +import Ouroboros.Consensus.Byron.Ledger + ( ByronBlock + , 
ByronNodeToNodeVersion (..) + ) +import Ouroboros.Consensus.Byron.Ledger qualified as Byron +import Ouroboros.Consensus.Byron.Ledger.Conversions +import Ouroboros.Consensus.Byron.Node +import Ouroboros.Consensus.Byron.Protocol +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Protocol.PBFT.Crypto qualified as Crypto +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util.Condense (condense) +import Ouroboros.Network.Mock.Chain (Chain) +import Ouroboros.Network.Mock.Chain qualified as Chain +import Test.Cardano.Slotting.Numeric () +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Infra.Byron +import Test.ThreadNet.Network + ( NodeOutput (..) + , TestNodeInitialization (..) 
+ ) +import Test.ThreadNet.Ref.PBFT qualified as Ref +import Test.ThreadNet.Rekeying +import Test.ThreadNet.TxGen.Byron () +import Test.ThreadNet.Util +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeRestarts +import Test.ThreadNet.Util.NodeToNodeVersion +import Test.ThreadNet.Util.NodeTopology +import Test.ThreadNet.Util.Seed +import Test.Util.HardFork.Future (singleEraFuture) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) +import Test.Util.Stream qualified as Stream +import Test.Util.TestEnv (adjustQuickCheckTests) data TestSetup = TestSetup - { setupEBBs :: ProduceEBBs - , setupK :: SecurityParam - , setupTestConfig :: TestConfig + { setupEBBs :: ProduceEBBs + , setupK :: SecurityParam + , setupTestConfig :: TestConfig , setupNodeJoinPlan :: NodeJoinPlan , setupNodeRestarts :: NodeRestarts - , setupSlotLength :: SlotLength - , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion ByronBlock) + , setupSlotLength :: SlotLength + , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion ByronBlock) } - deriving (Show) + deriving Show instance Arbitrary TestSetup where arbitrary = do - -- TODO Issue #1566 will bring this to k>=0 - k <- SecurityParam <$> choose (1, 10) `suchThatMap` nonZero + -- TODO Issue #1566 will bring this to k>=0 + k <- SecurityParam <$> choose (1, 10) `suchThatMap` nonZero - join $ genTestSetup k <$> arbitrary <*> arbitrary <*> arbitrary + join $ genTestSetup k <$> arbitrary <*> arbitrary <*> arbitrary - -- TODO shrink +-- TODO shrink -- | An entrypoint used by "Test.ThreadNet.DualByron" -- -- See the @'Arbitrary' 'Test.ThreadNet.DualByron.SetupDualByron'@ instance. 
genTestSetup :: SecurityParam -> NumCoreNodes -> NumSlots -> SlotLength -> Gen TestSetup genTestSetup k numCoreNodes numSlots setupSlotLength = do - setupEBBs <- arbitrary - initSeed <- arbitrary - nodeTopology <- genNodeTopology numCoreNodes + setupEBBs <- arbitrary + initSeed <- arbitrary + nodeTopology <- genNodeTopology numCoreNodes - let testConfig = TestConfig + let testConfig = + TestConfig { initSeed , nodeTopology , numCoreNodes , numSlots } - let params = byronPBftParams k numCoreNodes - - nodeJoinPlan <- genByronNodeJoinPlan params numSlots - nodeRestarts <- genNodeRestarts nodeJoinPlan numSlots >>= - genNodeRekeys params nodeJoinPlan nodeTopology numSlots + let params = byronPBftParams k numCoreNodes - setupVersion <- genVersion (Proxy @ByronBlock) + nodeJoinPlan <- genByronNodeJoinPlan params numSlots + nodeRestarts <- + genNodeRestarts nodeJoinPlan numSlots + >>= genNodeRekeys params nodeJoinPlan nodeTopology numSlots + setupVersion <- genVersion (Proxy @ByronBlock) - pure $ TestSetup + pure $ + TestSetup setupEBBs k testConfig @@ -137,691 +147,846 @@ genTestSetup k numCoreNodes numSlots setupSlotLength = do setupVersion tests :: TestTree -tests = testGroup "Byron" $ - [ testProperty "trivial join plan is considered deterministic" - $ \TestSetup{setupK = k, setupTestConfig = TestConfig{numCoreNodes}} -> +tests = + testGroup "Byron" $ + [ testProperty "trivial join plan is considered deterministic" $ + \TestSetup{setupK = k, setupTestConfig = TestConfig{numCoreNodes}} -> prop_deterministicPlan $ byronPBftParams k numCoreNodes , adjustQuickCheckTests (`div` 10) $ - -- as of merging PR #773, this test case fails without the commit that - -- introduces the InvalidRollForward exception - -- - -- See a related discussion at - -- https://github.com/IntersectMBO/ouroboros-network/pull/773#issuecomment-522192097 - testProperty "addressed by InvalidRollForward exception (PR #773)" $ + -- as of merging PR #773, this test case fails without the commit that + -- 
introduces the InvalidRollForward exception + -- + -- See a related discussion at + -- https://github.com/IntersectMBO/ouroboros-network/pull/773#issuecomment-522192097 + testProperty "addressed by InvalidRollForward exception (PR #773)" $ once $ - let ncn = NumCoreNodes 3 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @10 - , setupTestConfig = TestConfig - { initSeed = Seed 0 - , nodeTopology = meshNodeTopology ncn - , numCoreNodes = ncn - , numSlots = NumSlots 24 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList [(CoreNodeId 0,SlotNo 0), (CoreNodeId 1,SlotNo 20), (CoreNodeId 2,SlotNo 22)] - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + let ncn = NumCoreNodes 3 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @10 + , setupTestConfig = + TestConfig + { initSeed = Seed 0 + , nodeTopology = meshNodeTopology ncn + , numCoreNodes = ncn + , numSlots = NumSlots 24 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList [(CoreNodeId 0, SlotNo 0), (CoreNodeId 1, SlotNo 20), (CoreNodeId 2, SlotNo 22)] + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "rewind to EBB supported as of Issue #1312, #1" $ - once $ - let ncn = NumCoreNodes 2 in - -- When node 1 joins in slot 1, it leads with an empty chain and so - -- forges the 0-EBB again. This causes it to report slot 0 as the - -- found intersection point to node 0, which causes node 0 to - -- \"rewind\" to slot 0 (even though it's already there). That rewind - -- fails if EBBs don't affect the PBFT chain state, since its chain - -- state is empty. 
- prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @10 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 2 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo 0),(CoreNodeId 1,SlotNo 1)]) - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + once $ + let ncn = NumCoreNodes 2 + in -- When node 1 joins in slot 1, it leads with an empty chain and so + -- forges the 0-EBB again. This causes it to report slot 0 as the + -- found intersection point to node 0, which causes node 0 to + -- \"rewind\" to slot 0 (even though it's already there). That rewind + -- fails if EBBs don't affect the PBFT chain state, since its chain + -- state is empty. + prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @10 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 2 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0, SlotNo 0), (CoreNodeId 1, SlotNo 1)]) + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "rewind to EBB supported as of Issue #1312, #2" $ - once $ - let ncn = NumCoreNodes 2 in - -- Same as above, except node 0 gets to forge an actual block before - -- node 1 tells it to rewind to the EBB. 
- prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @10 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 4 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo {unSlotNo = 0}),(CoreNodeId 1,SlotNo {unSlotNo = 3})]) - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + once $ + let ncn = NumCoreNodes 2 + in -- Same as above, except node 0 gets to forge an actual block before + -- node 1 tells it to rewind to the EBB. + prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @10 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 4 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan + (Map.fromList [(CoreNodeId 0, SlotNo{unSlotNo = 0}), (CoreNodeId 1, SlotNo{unSlotNo = 3})]) + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "one testOutputTipBlockNos update per node per slot" $ - once $ - let ncn = NumCoreNodes 2 in - -- In this example, a node was forging a new block and then - -- restarting. Its instrumentation thread ran before and also after - -- the restart, which caused the 'testOutputTipBlockNos' field to - -- contain data from the middle of the slot (after the node lead) - -- instead of only from the onset of the slot. 
- prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @5 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 7 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo {unSlotNo = 0}),(CoreNodeId 1,SlotNo {unSlotNo = 0})]) - , setupNodeRestarts = NodeRestarts (Map.fromList [(SlotNo {unSlotNo = 5},Map.fromList [(CoreNodeId 1,NodeRestart)])]) - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + once $ + let ncn = NumCoreNodes 2 + in -- In this example, a node was forging a new block and then + -- restarting. Its instrumentation thread ran before and also after + -- the restart, which caused the 'testOutputTipBlockNos' field to + -- contain data from the middle of the slot (after the node lead) + -- instead of only from the onset of the slot. + prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @5 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 7 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan + (Map.fromList [(CoreNodeId 0, SlotNo{unSlotNo = 0}), (CoreNodeId 1, SlotNo{unSlotNo = 0})]) + , setupNodeRestarts = + NodeRestarts (Map.fromList [(SlotNo{unSlotNo = 5}, Map.fromList [(CoreNodeId 1, NodeRestart)])]) + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "BlockFetch live lock due to an EBB at the ImmutableDB tip, Issue #1435" $ - once $ - let ncn = NumCoreNodes 4 in - -- c0's ImmutableDB is T > U > V. Note that U is an EBB and U and V - -- are both in slot 50. 
When its BlockFetchServer tries to stream T - -- and U using a ChainDB.Iterator, instead of looking in the - -- ImmutableDB, we end up looking in the VolatileDB and incorrectly - -- return ForkTooOld. The client keeps on requesting this block range, - -- resulting in a live lock. - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @5 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 58 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList [(CoreNodeId 0,SlotNo 3),(CoreNodeId 1,SlotNo 3),(CoreNodeId 2,SlotNo 5),(CoreNodeId 3,SlotNo 57)] - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + once $ + let ncn = NumCoreNodes 4 + in -- c0's ImmutableDB is T > U > V. Note that U is an EBB and U and V + -- are both in slot 50. When its BlockFetchServer tries to stream T + -- and U using a ChainDB.Iterator, instead of looking in the + -- ImmutableDB, we end up looking in the VolatileDB and incorrectly + -- return ForkTooOld. The client keeps on requesting this block range, + -- resulting in a live lock. 
+ prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @5 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 58 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList + [ (CoreNodeId 0, SlotNo 3) + , (CoreNodeId 1, SlotNo 3) + , (CoreNodeId 2, SlotNo 5) + , (CoreNodeId 3, SlotNo 57) + ] + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "ImmutableDB is leaking file handles, #1543" $ - -- The failure was: c0 leaks one ImmutableDB file handle (for path - -- @00000.epoch@, read only, offset at 0). - -- - -- The test case seems somewhat fragile, since the 'slotLength' value - -- seems to matter! - once $ - let ncn5 = NumCoreNodes 5 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @2 - , setupTestConfig = TestConfig - { numCoreNodes = ncn5 - -- Still fails if I increase numSlots. - , numSlots = NumSlots 54 - , nodeTopology = meshNodeTopology ncn5 - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList - [ (CoreNodeId 0, SlotNo {unSlotNo = 0}) - , (CoreNodeId 1, SlotNo {unSlotNo = 0}) - , (CoreNodeId 2, SlotNo {unSlotNo = 0}) - , (CoreNodeId 3, SlotNo {unSlotNo = 53}) - , (CoreNodeId 4, SlotNo {unSlotNo = 53}) - ] - -- Passes if I drop either of these restarts. - , setupNodeRestarts = NodeRestarts $ Map.fromList - [ (SlotNo {unSlotNo = 50},Map.fromList [(CoreNodeId 0,NodeRestart)]) - , (SlotNo {unSlotNo = 53},Map.fromList [(CoreNodeId 3,NodeRestart)]) - ] - -- Slot length of 19s passes, and 21s also fails; I haven't seen this matter before. 
- , setupSlotLength = slotLengthFromSec 20 - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- The failure was: c0 leaks one ImmutableDB file handle (for path + -- @00000.epoch@, read only, offset at 0). + -- + -- The test case seems somewhat fragile, since the 'slotLength' value + -- seems to matter! + once $ + let ncn5 = NumCoreNodes 5 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @2 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn5 + , -- Still fails if I increase numSlots. + numSlots = NumSlots 54 + , nodeTopology = meshNodeTopology ncn5 + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList + [ (CoreNodeId 0, SlotNo{unSlotNo = 0}) + , (CoreNodeId 1, SlotNo{unSlotNo = 0}) + , (CoreNodeId 2, SlotNo{unSlotNo = 0}) + , (CoreNodeId 3, SlotNo{unSlotNo = 53}) + , (CoreNodeId 4, SlotNo{unSlotNo = 53}) + ] + , -- Passes if I drop either of these restarts. + setupNodeRestarts = + NodeRestarts $ + Map.fromList + [ (SlotNo{unSlotNo = 50}, Map.fromList [(CoreNodeId 0, NodeRestart)]) + , (SlotNo{unSlotNo = 53}, Map.fromList [(CoreNodeId 3, NodeRestart)]) + ] + , -- Slot length of 19s passes, and 21s also fails; I haven't seen this matter before. 
+ setupSlotLength = slotLengthFromSec 20 + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , -- Byron runs are slow, so do 10x less of this narrow test adjustQuickCheckTests (`div` 10) $ - testProperty "re-delegation via NodeRekey" $ \seed w -> + testProperty "re-delegation via NodeRekey" $ \seed w -> let ncn = NumCoreNodes 5 - k = knownNonZeroBounded @5 -- small so that multiple epochs fit into a simulation + k = knownNonZeroBounded @5 -- small so that multiple epochs fit into a simulation window :: Num a => a - window = 20 -- just for generality + window = 20 -- just for generality slotsPerEpoch :: Num a => a - slotsPerEpoch = fromIntegral $ unEpochSlots $ - kEpochSlots $ coerce (unNonZero k :: Word64) + slotsPerEpoch = + fromIntegral $ + unEpochSlots $ + kEpochSlots $ + coerce (unNonZero k :: Word64) slotsPerRekey :: Word64 - slotsPerRekey = 2 * unNonZero k -- delegations take effect 2k slots later - in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam k - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots $ window + slotsPerEpoch + slotsPerRekey + window - , nodeTopology = meshNodeTopology ncn - , initSeed = seed - } - , setupNodeJoinPlan = trivialNodeJoinPlan ncn - , setupNodeRestarts = NodeRestarts $ Map.singleton (SlotNo (slotsPerEpoch + mod w window)) (Map.singleton (CoreNodeId 0) NodeRekey) - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + slotsPerRekey = 2 * unNonZero k -- delegations take effect 2k slots later + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam k + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots $ window + slotsPerEpoch + slotsPerRekey + window + , nodeTopology = meshNodeTopology ncn + , initSeed = seed + } + , setupNodeJoinPlan = trivialNodeJoinPlan ncn + , setupNodeRestarts = + NodeRestarts $ + Map.singleton (SlotNo 
(slotsPerEpoch + mod w window)) (Map.singleton (CoreNodeId 0) NodeRekey) + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "exercise a corner case of mkCurrentBlockContext" $ - -- The current chain fragment is @Empty a :> B@ and we're trying to - -- forge B'; the oddity is that B and B' have the same slot, since - -- the node is actually leading for the /second/ time in that slot - -- due to the 'NodeRestart'. - -- - -- This failed with @Exception: the first block on the Byron chain - -- must be an EBB@. - let k = SecurityParam $ knownNonZeroBounded @1 - ncn = NumCoreNodes 2 - in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = k - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 2 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = trivialNodeJoinPlan ncn - , setupNodeRestarts = NodeRestarts $ Map.singleton (SlotNo 1) (Map.singleton (CoreNodeId 1) NodeRestart) - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- The current chain fragment is @Empty a :> B@ and we're trying to + -- forge B'; the oddity is that B and B' have the same slot, since + -- the node is actually leading for the /second/ time in that slot + -- due to the 'NodeRestart'. + -- + -- This failed with @Exception: the first block on the Byron chain + -- must be an EBB@. 
+ let k = SecurityParam $ knownNonZeroBounded @1 + ncn = NumCoreNodes 2 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = k + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 2 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = trivialNodeJoinPlan ncn + , setupNodeRestarts = + NodeRestarts $ Map.singleton (SlotNo 1) (Map.singleton (CoreNodeId 1) NodeRestart) + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "correct EpochNumber in delegation certificate 1" $ - -- Node 3 rekeys in slot 59, which is epoch 1. But Node 3 also leads - -- that slot, and it forged and adopted a block before restarting. So - -- the delegation transaction ends up in a block in slot 60, which is - -- epoch 2. - once $ - let ncn4 = NumCoreNodes 4 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @3 - , setupTestConfig = TestConfig - { numCoreNodes = ncn4 - , numSlots = NumSlots 72 - , nodeTopology = meshNodeTopology ncn4 - , initSeed = Seed 0 - } - , setupNodeJoinPlan = trivialNodeJoinPlan ncn4 - , setupNodeRestarts = NodeRestarts (Map.fromList [(SlotNo 59,Map.fromList [(CoreNodeId 3,NodeRekey)])]) - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- Node 3 rekeys in slot 59, which is epoch 1. But Node 3 also leads + -- that slot, and it forged and adopted a block before restarting. So + -- the delegation transaction ends up in a block in slot 60, which is + -- epoch 2. 
+ once $ + let ncn4 = NumCoreNodes 4 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @3 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn4 + , numSlots = NumSlots 72 + , nodeTopology = meshNodeTopology ncn4 + , initSeed = Seed 0 + } + , setupNodeJoinPlan = trivialNodeJoinPlan ncn4 + , setupNodeRestarts = + NodeRestarts (Map.fromList [(SlotNo 59, Map.fromList [(CoreNodeId 3, NodeRekey)])]) + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "correct EpochNumber in delegation certificate 2" $ - -- Revealed the incorrectness of setting the dlg cert epoch based on - -- the slot in which the node rekeyed. It must be based on the slot - -- in which the next block will be successfully forged; hence adding - -- 'rekeyOracle' fixed this. - -- - -- Node 2 joins and rekeys in slot 58, epoch 2. It also leads slot - -- 59. So its dlg cert tx will only be included in the block in slot - -- 60. However, since that's epoch 3, the tx is discarded as invalid - -- before the block is forged. - let ncn3 = NumCoreNodes 3 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @2 - , setupTestConfig = TestConfig - { numCoreNodes = ncn3 - , numSlots = NumSlots 84 - , nodeTopology = meshNodeTopology ncn3 - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo {unSlotNo = 1}),(CoreNodeId 1,SlotNo {unSlotNo = 1}),(CoreNodeId 2,SlotNo {unSlotNo = 58})]) - , setupNodeRestarts = NodeRestarts (Map.fromList [(SlotNo {unSlotNo = 58},Map.fromList [(CoreNodeId 2,NodeRekey)])]) - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- Revealed the incorrectness of setting the dlg cert epoch based on + -- the slot in which the node rekeyed. 
It must be based on the slot + -- in which the next block will be successfully forged; hence adding + -- 'rekeyOracle' fixed this. + -- + -- Node 2 joins and rekeys in slot 58, epoch 2. It also leads slot + -- 59. So its dlg cert tx will only be included in the block in slot + -- 60. However, since that's epoch 3, the tx is discarded as invalid + -- before the block is forged. + let ncn3 = NumCoreNodes 3 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @2 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn3 + , numSlots = NumSlots 84 + , nodeTopology = meshNodeTopology ncn3 + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan + ( Map.fromList + [ (CoreNodeId 0, SlotNo{unSlotNo = 1}) + , (CoreNodeId 1, SlotNo{unSlotNo = 1}) + , (CoreNodeId 2, SlotNo{unSlotNo = 58}) + ] + ) + , setupNodeRestarts = + NodeRestarts (Map.fromList [(SlotNo{unSlotNo = 58}, Map.fromList [(CoreNodeId 2, NodeRekey)])]) + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "repeatedly add the the dlg cert tx" $ - -- Revealed the incorrectness of only adding dlg cert tx to the - -- mempool once (since it may be essentially immediately discarded); - -- hence adding it every time the ledger tip changes fixed this. - -- - -- The failure was: PBftNotGenesisDelegate in Slot 95. It disappeared - -- with the mesh topology, which usually means subtle timings are - -- involved, unfortunately. - -- - -- c2 rekeys in s83. c0 leads s84. But c2's dlg cert tx never reached - -- c0. It turns out that c2 told c0 it exists but then discarded it - -- before c0 actually requested it. 
- -- - -- Excerpt of c2 trace events during s83: - -- - -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Send MsgRequestNext - -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Send MsgReplyTxIds (BlockingReply ((dlgid: certificateid: fb50aa22,202) :| [])) - -- > SwitchedToChain - -- > { _prevChain = AnchoredFragment {anchorPoint = BlockPoint (SlotNo 25) 26851f52, unanchorFragment = ChainFragment (SFT (fromList - -- > [ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 27}, byronHeaderHash = ByronHash {unByronHash = AbstractHash d50e0d2c}} - -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 28}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 1523de50}} - -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 30}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 77cb5dda}} - -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 31}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 7efd3ec2}} - -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 33}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 8903fa61}} - -- > {-an EBB-} ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 40}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 43f8067e}} - -- > ]))} - -- > , _newChain = AnchoredFragment {anchorPoint = BlockPoint (SlotNo 27) d50e0d2c, unanchorFragment = ChainFragment (SFT (fromList - -- > [ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 28}, byronHeaderHash = 1523de50} - -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 30}, byronHeaderHash = 77cb5dda} - -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 31}, byronHeaderHash = 7efd3ec2} - -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 33}, byronHeaderHash = 8903fa61} - -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 34}, byronHeaderHash = afa797b4} - -- > ]))}} - -- - -- That SwitchedToChain rolled back the slot 40 EBB (epoch 1) and - -- picked up a proper block in slot 34 
(epoch 0) instead. - -- - -- > TraceMempoolRemoveTxs (SyncWithLedger (At (SlotNo {unSlotNo = 35}))) [(dlg: Delegation.Certificate { w = #2, iVK = pub:a3219c1a, dVK = pub:1862f6a2 },MempoolDlgErr (WrongEpoch (EpochNumber {getEpochNumber = 0}) (EpochNumber {getEpochNumber = 2})))] (MempoolSize {msNumTxs = 0, msNumBytes = 0}) - -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Recv MsgBatchDone - -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Recv MsgRequestTxs [dlgid: certificateid: fb50aa22] - -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Send MsgReplyTxs [] - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @4 - , setupTestConfig = TestConfig - { numCoreNodes = NumCoreNodes 3 - , numSlots = NumSlots 96 - , nodeTopology = -- 1 <-> 0 <-> 2 - NodeTopology $ Map.fromList [(CoreNodeId 0,Set.fromList []),(CoreNodeId 1,Set.fromList [CoreNodeId 0]),(CoreNodeId 2,Set.fromList [CoreNodeId 0])] - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList [(CoreNodeId 0,SlotNo 0),(CoreNodeId 1,SlotNo 0),(CoreNodeId 2,SlotNo 83)] - , setupNodeRestarts = NodeRestarts $ Map.fromList [(SlotNo 83,Map.fromList [(CoreNodeId 2,NodeRekey)])] - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) + -- Revealed the incorrectness of only adding dlg cert tx to the + -- mempool once (since it may be essentially immediately discarded); + -- hence adding it every time the ledger tip changes fixed this. + -- + -- The failure was: PBftNotGenesisDelegate in Slot 95. It disappeared + -- with the mesh topology, which usually means subtle timings are + -- involved, unfortunately. + -- + -- c2 rekeys in s83. c0 leads s84. But c2's dlg cert tx never reached + -- c0. It turns out that c2 told c0 it exists but then discarded it + -- before c0 actually requested it. 
+ -- + -- Excerpt of c2 trace events during s83: + -- + -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Send MsgRequestNext + -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Send MsgReplyTxIds (BlockingReply ((dlgid: certificateid: fb50aa22,202) :| [])) + -- > SwitchedToChain + -- > { _prevChain = AnchoredFragment {anchorPoint = BlockPoint (SlotNo 25) 26851f52, unanchorFragment = ChainFragment (SFT (fromList + -- > [ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 27}, byronHeaderHash = ByronHash {unByronHash = AbstractHash d50e0d2c}} + -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 28}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 1523de50}} + -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 30}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 77cb5dda}} + -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 31}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 7efd3ec2}} + -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 33}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 8903fa61}} + -- > {-an EBB-} ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 40}, byronHeaderHash = ByronHash {unByronHash = AbstractHash 43f8067e}} + -- > ]))} + -- > , _newChain = AnchoredFragment {anchorPoint = BlockPoint (SlotNo 27) d50e0d2c, unanchorFragment = ChainFragment (SFT (fromList + -- > [ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 28}, byronHeaderHash = 1523de50} + -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 30}, byronHeaderHash = 77cb5dda} + -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 31}, byronHeaderHash = 7efd3ec2} + -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 33}, byronHeaderHash = 8903fa61} + -- > ,ByronHeader {..., byronHeaderSlotNo = SlotNo {unSlotNo = 34}, byronHeaderHash = afa797b4} + -- > ]))}} + -- + -- That SwitchedToChain rolled back the slot 40 EBB (epoch 1) and + -- picked up a proper block in slot 34 
(epoch 0) instead. + -- + -- > TraceMempoolRemoveTxs (SyncWithLedger (At (SlotNo {unSlotNo = 35}))) [(dlg: Delegation.Certificate { w = #2, iVK = pub:a3219c1a, dVK = pub:1862f6a2 },MempoolDlgErr (WrongEpoch (EpochNumber {getEpochNumber = 0}) (EpochNumber {getEpochNumber = 2})))] (MempoolSize {msNumTxs = 0, msNumBytes = 0}) + -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Recv MsgBatchDone + -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Recv MsgRequestTxs [dlgid: certificateid: fb50aa22] + -- > TraceLabelPeer (CoreId (CoreNodeId 0)) Send MsgReplyTxs [] + prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @4 + , setupTestConfig = + TestConfig + { numCoreNodes = NumCoreNodes 3 + , numSlots = NumSlots 96 + , nodeTopology -- 1 <-> 0 <-> 2 + = + NodeTopology $ + Map.fromList + [ (CoreNodeId 0, Set.fromList []) + , (CoreNodeId 1, Set.fromList [CoreNodeId 0]) + , (CoreNodeId 2, Set.fromList [CoreNodeId 0]) + ] + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList [(CoreNodeId 0, SlotNo 0), (CoreNodeId 1, SlotNo 0), (CoreNodeId 2, SlotNo 83)] + , setupNodeRestarts = + NodeRestarts $ Map.fromList [(SlotNo 83, Map.fromList [(CoreNodeId 2, NodeRekey)])] + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) } , testProperty "topology prevents timely dlg cert tx propagation" $ - -- Caught a bug in the test infrastructure. If node X rekeys in slot - -- S and Y leads slot S+1, then either the topology must connect X - -- and Y directly, or Y must join before slot S. Otherwise, X - -- successfully propagates its dlg cert tx to the pre-existing nodes, - -- but Y won't pull it from them in time to include the tx in its - -- block for S+1. When Y joined in S, its mini protocols all failed - -- and were delayed to restart in the next slot (S+1). They do so, - -- but it forges its block in S+1 before the dlg cert tx arrives. 
- -- - -- The expected failure is an unexpected block rejection (cf - -- 'pgaExpectedCannotForge') (PBftNotGenesisDelegate) in Slot 49. - -- It disappears with the mesh topology, which usually means subtle - -- timings are involved, unfortunately. - -- - -- c3 and c4 join in s37. c4 rekeys in s37. c3 leads in s38. - -- - -- The dlg cert tx does not arrive at c3 in time because of the - -- topology. When c3 and c4 join in s37, their mini protocol threads - -- that serve {c0,c1,c2} as clients fail and are scheduled to restart - -- at the onset of the next slot (s38). Since c3 and c4 are not - -- directly connected, and in particular the mini protocol instances - -- with clients in {c0,c1,c2} and server c4 are down, c4 cannot - -- communicate its dlg cert tx to c3 in time (it arrives in s38, but - -- after c3 has forged its block). - let ncn5 = NumCoreNodes 5 in - expectFailure $ - once $ - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @2 - , setupTestConfig = TestConfig - { numCoreNodes = ncn5 - , numSlots = NumSlots 50 - , nodeTopology = -- 3 <-> {0,1,2} <-> 4 - NodeTopology (Map.fromList [(CoreNodeId 0,Set.fromList []),(CoreNodeId 1,Set.fromList [CoreNodeId 0]),(CoreNodeId 2,Set.fromList [CoreNodeId 0, CoreNodeId 1]),(CoreNodeId 3,Set.fromList [CoreNodeId 0,CoreNodeId 1,CoreNodeId 2]),(CoreNodeId 4,Set.fromList [CoreNodeId 0,CoreNodeId 1,CoreNodeId 2])]) - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo {unSlotNo = 0}),(CoreNodeId 1,SlotNo {unSlotNo = 0}),(CoreNodeId 2,SlotNo {unSlotNo = 0}),(CoreNodeId 3,SlotNo {unSlotNo = 37}),(CoreNodeId 4,SlotNo {unSlotNo = 37})]) - , setupNodeRestarts = NodeRestarts (Map.fromList [(SlotNo {unSlotNo = 37},Map.fromList [(CoreNodeId 4,NodeRekey)])]) - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- Caught a bug in the test infrastructure. 
If node X rekeys in slot + -- S and Y leads slot S+1, then either the topology must connect X + -- and Y directly, or Y must join before slot S. Otherwise, X + -- successfully propagates its dlg cert tx to the pre-existing nodes, + -- but Y won't pull it from them in time to include the tx in its + -- block for S+1. When Y joined in S, its mini protocols all failed + -- and were delayed to restart in the next slot (S+1). They do so, + -- but it forges its block in S+1 before the dlg cert tx arrives. + -- + -- The expected failure is an unexpected block rejection (cf + -- 'pgaExpectedCannotForge') (PBftNotGenesisDelegate) in Slot 49. + -- It disappears with the mesh topology, which usually means subtle + -- timings are involved, unfortunately. + -- + -- c3 and c4 join in s37. c4 rekeys in s37. c3 leads in s38. + -- + -- The dlg cert tx does not arrive at c3 in time because of the + -- topology. When c3 and c4 join in s37, their mini protocol threads + -- that serve {c0,c1,c2} as clients fail and are scheduled to restart + -- at the onset of the next slot (s38). Since c3 and c4 are not + -- directly connected, and in particular the mini protocol instances + -- with clients in {c0,c1,c2} and server c4 are down, c4 cannot + -- communicate its dlg cert tx to c3 in time (it arrives in s38, but + -- after c3 has forged its block). 
+ let ncn5 = NumCoreNodes 5 + in expectFailure $ + once $ + prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @2 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn5 + , numSlots = NumSlots 50 + , nodeTopology -- 3 <-> {0,1,2} <-> 4 + = + NodeTopology + ( Map.fromList + [ (CoreNodeId 0, Set.fromList []) + , (CoreNodeId 1, Set.fromList [CoreNodeId 0]) + , (CoreNodeId 2, Set.fromList [CoreNodeId 0, CoreNodeId 1]) + , (CoreNodeId 3, Set.fromList [CoreNodeId 0, CoreNodeId 1, CoreNodeId 2]) + , (CoreNodeId 4, Set.fromList [CoreNodeId 0, CoreNodeId 1, CoreNodeId 2]) + ] + ) + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan + ( Map.fromList + [ (CoreNodeId 0, SlotNo{unSlotNo = 0}) + , (CoreNodeId 1, SlotNo{unSlotNo = 0}) + , (CoreNodeId 2, SlotNo{unSlotNo = 0}) + , (CoreNodeId 3, SlotNo{unSlotNo = 37}) + , (CoreNodeId 4, SlotNo{unSlotNo = 37}) + ] + ) + , setupNodeRestarts = + NodeRestarts (Map.fromList [(SlotNo{unSlotNo = 37}, Map.fromList [(CoreNodeId 4, NodeRekey)])]) + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "mkDelegationEnvironment uses currentSlot not latestSlot" $ - -- After rekeying, node 2 continues to emit its dlg cert tx. This an ugly - -- implementation detail of rekeying, but as a nice surprise it found a - -- bug! - -- - -- In slot 40, node 1 forged a block that included the now-/expired/ dlg - -- cert tx (cf @WrongEpoch@). This happened because the Byron transaction - -- validation logic was using the slot of the latest block (i.e. 39) as - -- the current slot (i.e. actually 40), so the transaction wasn't - -- identified as expired until it was already inside a block. 
- once $
- let ncn = NumCoreNodes 3 in
- prop_simple_real_pbft_convergence TestSetup
- { setupEBBs = NoEBBs
- , setupK = SecurityParam $ knownNonZeroBounded @2
- , setupTestConfig = TestConfig
- { numCoreNodes = ncn
- , numSlots = NumSlots 41
- , nodeTopology = meshNodeTopology ncn
- , initSeed = Seed 0
- }
- , setupNodeJoinPlan = trivialNodeJoinPlan ncn
- , setupNodeRestarts = NodeRestarts $ Map.singleton (SlotNo 30) $ Map.singleton (CoreNodeId 2) NodeRekey
- , setupSlotLength = defaultSlotLength
- , setupVersion = (minBound, ByronNodeToNodeVersion1)
- }
+ -- After rekeying, node 2 continues to emit its dlg cert tx. This is an ugly
+ -- implementation detail of rekeying, but as a nice surprise it found a
+ -- bug!
+ --
+ -- In slot 40, node 1 forged a block that included the now-/expired/ dlg
+ -- cert tx (cf @WrongEpoch@). This happened because the Byron transaction
+ -- validation logic was using the slot of the latest block (i.e. 39) as
+ -- the current slot (i.e. actually 40), so the transaction wasn't
+ -- identified as expired until it was already inside a block.
+ once $ + let ncn = NumCoreNodes 3 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @2 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 41 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = trivialNodeJoinPlan ncn + , setupNodeRestarts = + NodeRestarts $ Map.singleton (SlotNo 30) $ Map.singleton (CoreNodeId 2) NodeRekey + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "delayed message corner case" $ - once $ - let ncn = NumCoreNodes 2 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @7 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 10 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo {unSlotNo = 0}),(CoreNodeId 1,SlotNo {unSlotNo = 1})]) - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + once $ + let ncn = NumCoreNodes 2 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @7 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 10 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan + (Map.fromList [(CoreNodeId 0, SlotNo{unSlotNo = 0}), (CoreNodeId 1, SlotNo{unSlotNo = 1})]) + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "mkUpdateLabels anticipates instant confirmation" $ - -- caught a bug in 'mkUpdateLabels' where it didn't anticipate that - -- node c0 can confirm the proposal as soon as it joins when quorum - -- == 1 - let ncn = 
NumCoreNodes 3 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @9 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 1 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = trivialNodeJoinPlan ncn - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- caught a bug in 'mkUpdateLabels' where it didn't anticipate that + -- node c0 can confirm the proposal as soon as it joins when quorum + -- == 1 + let ncn = NumCoreNodes 3 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @9 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 1 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = trivialNodeJoinPlan ncn + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "have nodes add transactions as promptly as possible, as expected by proposal tracking" $ - -- this repro requires that changes to the ledger point triggers the - -- nearly oracular wallet to attempt to add its proposal vote again - -- - -- Without that, node c1's own vote is not included in the block it - -- forges in the last slot, because it attempts to add the vote - -- before the proposal arrives from c0. With the trigger, the arrival - -- of c0's block triggers it. In particular, the ledger *slot* - -- doesn't change in this repro, since the new block and its - -- predecessor both inhabit slot 0. EBBeeeeeees! 
- let ncn = NumCoreNodes 4 in
- prop_simple_real_pbft_convergence TestSetup
- { setupEBBs = NoEBBs
- , setupK = SecurityParam $ knownNonZeroBounded @8
- , setupTestConfig = TestConfig
- { numCoreNodes = ncn
- , numSlots = NumSlots 2
- , nodeTopology = meshNodeTopology ncn
- , initSeed = Seed 0
- }
- , setupNodeJoinPlan = trivialNodeJoinPlan ncn
- , setupNodeRestarts = noRestarts
- , setupSlotLength = defaultSlotLength
- , setupVersion = (minBound, ByronNodeToNodeVersion1)
- }
+ -- this repro requires that changes to the ledger point trigger the
+ -- nearly oracular wallet to attempt to add its proposal vote again
+ --
+ -- Without that, node c1's own vote is not included in the block it
+ -- forges in the last slot, because it attempts to add the vote
+ -- before the proposal arrives from c0. With the trigger, the arrival
+ -- of c0's block triggers it. In particular, the ledger *slot*
+ -- doesn't change in this repro, since the new block and its
+ -- predecessor both inhabit slot 0. EBBeeeeeees!
+ let ncn = NumCoreNodes 4
+ in prop_simple_real_pbft_convergence
+ TestSetup
+ { setupEBBs = NoEBBs
+ , setupK = SecurityParam $ knownNonZeroBounded @8
+ , setupTestConfig =
+ TestConfig
+ { numCoreNodes = ncn
+ , numSlots = NumSlots 2
+ , nodeTopology = meshNodeTopology ncn
+ , initSeed = Seed 0
+ }
+ , setupNodeJoinPlan = trivialNodeJoinPlan ncn
+ , setupNodeRestarts = noRestarts
+ , setupSlotLength = defaultSlotLength
+ , setupVersion = (minBound, ByronNodeToNodeVersion1)
+ }
 , testProperty "track proposals even when c0 is not the first to lead" $
- -- requires prompt and accurate vote tracking when c0 is not the
- -- first node to lead
- --
- -- The necessary promptness trigger in this case is the arrival of
- -- the proposal transaction.
- let ncn = NumCoreNodes 4 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @5 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 5 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList [ (CoreNodeId 0, SlotNo 2) , (CoreNodeId 1, SlotNo 3) , (CoreNodeId 2, SlotNo 4) , (CoreNodeId 3, SlotNo 4) ] - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- requires prompt and accurate vote tracking when c0 is not the + -- first node to lead + -- + -- The necessary promptness trigger in this case is the arrival of + -- the proposal transaction. + let ncn = NumCoreNodes 4 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @5 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 5 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList + [ (CoreNodeId 0, SlotNo 2) + , (CoreNodeId 1, SlotNo 3) + , (CoreNodeId 2, SlotNo 4) + , (CoreNodeId 3, SlotNo 4) + ] + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "cardano-ledger-byron checks for proposal confirmation before it checks for expiry" $ - -- must check for quorum before checking for expiration - let ncn = NumCoreNodes 5 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @10 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 12 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList [ (CoreNodeId 0, SlotNo 0) , (CoreNodeId 1, SlotNo 0) , (CoreNodeId 2, SlotNo 10) , 
(CoreNodeId 3, SlotNo 10) , (CoreNodeId 4, SlotNo 10) ] - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- must check for quorum before checking for expiration + let ncn = NumCoreNodes 5 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @10 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 12 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList + [ (CoreNodeId 0, SlotNo 0) + , (CoreNodeId 1, SlotNo 0) + , (CoreNodeId 2, SlotNo 10) + , (CoreNodeId 3, SlotNo 10) + , (CoreNodeId 4, SlotNo 10) + ] + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "repropose an expired proposal" $ - -- the proposal expires in slot 10, but then c0 reintroduces it in - -- slot 11 and it is eventually confirmed - let ncn = NumCoreNodes 5 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @10 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 17 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList - [(CoreNodeId 0, SlotNo 0) - ,(CoreNodeId 1, SlotNo 10) - ,(CoreNodeId 2, SlotNo 11) - ,(CoreNodeId 3, SlotNo 11) - ,(CoreNodeId 4, SlotNo 16) - ] - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- the proposal expires in slot 10, but then c0 reintroduces it in + -- slot 11 and it is eventually confirmed + let ncn = NumCoreNodes 5 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @10 + , setupTestConfig = + TestConfig + { 
numCoreNodes = ncn + , numSlots = NumSlots 17 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList + [ (CoreNodeId 0, SlotNo 0) + , (CoreNodeId 1, SlotNo 10) + , (CoreNodeId 2, SlotNo 11) + , (CoreNodeId 3, SlotNo 11) + , (CoreNodeId 4, SlotNo 16) + ] + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "only expect EBBs if the reference simulator does" $ - -- In this repro, block in the 20th slot is wasted since c2 just - -- joined. As a result, the final chains won't include that EBB. - let ncn = NumCoreNodes 3 in - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @2 - , setupTestConfig = TestConfig - { numCoreNodes = ncn - , numSlots = NumSlots 21 - , nodeTopology = meshNodeTopology ncn - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList - [ (CoreNodeId 0,SlotNo {unSlotNo = 0}) - , (CoreNodeId 1,SlotNo {unSlotNo = 0}) - , (CoreNodeId 2,SlotNo {unSlotNo = 20}) - ] - , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } + -- In this repro, block in the 20th slot is wasted since c2 just + -- joined. As a result, the final chains won't include that EBB. 
+ let ncn = NumCoreNodes 3 + in prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @2 + , setupTestConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = NumSlots 21 + , nodeTopology = meshNodeTopology ncn + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList + [ (CoreNodeId 0, SlotNo{unSlotNo = 0}) + , (CoreNodeId 1, SlotNo{unSlotNo = 0}) + , (CoreNodeId 2, SlotNo{unSlotNo = 20}) + ] + , setupNodeRestarts = noRestarts + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) + } , testProperty "only check updates for mesh topologies" $ - -- This repro exercises - -- 'Test.ThreadNet.Byron.TrackUpdates.checkTopo'. - -- - -- The predicted slot outcomes are - -- - -- > leader 01234 - -- > s0 NAAAA - -- > s5 NAAAA - -- > s10 NWN - -- - -- The votes of c1, c3, and c4 arrive to c2 during s11 via TxSub - -- /before/ the block containing the proposal does, so c2's mempool - -- rejects them as invalid. When it then forges in s12, it only - -- includes its own vote, which doesn't meet quota (3 = 5 * 0.6) and - -- so the proposal then expires (TTL 10 slots, but only after an - -- endorsement; see Issue 749 in cardano-ledger-byron). - -- - -- "Test.ThreadNet.Byron.TrackUpdates" does not otherwise - -- correctly anticipate such races, so it makes no requirement for - -- non-mesh topologies. 
- prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @10 - , setupTestConfig = TestConfig - { numCoreNodes = NumCoreNodes 5 - , numSlots = NumSlots 13 - , nodeTopology = NodeTopology $ Map.fromList - -- mesh except for 0 <-> 2 - [ (CoreNodeId 0, Set.fromList []) - , (CoreNodeId 1, Set.fromList [CoreNodeId 0]) - , (CoreNodeId 2, Set.fromList [CoreNodeId 1]) - , (CoreNodeId 3, Set.fromList [CoreNodeId 0, CoreNodeId 1, CoreNodeId 2]) - , (CoreNodeId 4, Set.fromList [CoreNodeId 0, CoreNodeId 1, CoreNodeId 2, CoreNodeId 3]) - ] - , initSeed = Seed 0 - } - , setupNodeJoinPlan = NodeJoinPlan $ Map.fromList - [ (CoreNodeId 0, SlotNo 0) - , (CoreNodeId 1, SlotNo 11) - , (CoreNodeId 2, SlotNo 11) - , (CoreNodeId 3, SlotNo 11) - , (CoreNodeId 4, SlotNo 11) - ] + -- This repro exercises + -- 'Test.ThreadNet.Byron.TrackUpdates.checkTopo'. + -- + -- The predicted slot outcomes are + -- + -- > leader 01234 + -- > s0 NAAAA + -- > s5 NAAAA + -- > s10 NWN + -- + -- The votes of c1, c3, and c4 arrive to c2 during s11 via TxSub + -- /before/ the block containing the proposal does, so c2's mempool + -- rejects them as invalid. When it then forges in s12, it only + -- includes its own vote, which doesn't meet quota (3 = 5 * 0.6) and + -- so the proposal then expires (TTL 10 slots, but only after an + -- endorsement; see Issue 749 in cardano-ledger-byron). + -- + -- "Test.ThreadNet.Byron.TrackUpdates" does not otherwise + -- correctly anticipate such races, so it makes no requirement for + -- non-mesh topologies. 
+ prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @10 + , setupTestConfig = + TestConfig + { numCoreNodes = NumCoreNodes 5 + , numSlots = NumSlots 13 + , nodeTopology = + NodeTopology $ + Map.fromList + -- mesh except for 0 <-> 2 + [ (CoreNodeId 0, Set.fromList []) + , (CoreNodeId 1, Set.fromList [CoreNodeId 0]) + , (CoreNodeId 2, Set.fromList [CoreNodeId 1]) + , (CoreNodeId 3, Set.fromList [CoreNodeId 0, CoreNodeId 1, CoreNodeId 2]) + , (CoreNodeId 4, Set.fromList [CoreNodeId 0, CoreNodeId 1, CoreNodeId 2, CoreNodeId 3]) + ] + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan $ + Map.fromList + [ (CoreNodeId 0, SlotNo 0) + , (CoreNodeId 1, SlotNo 11) + , (CoreNodeId 2, SlotNo 11) + , (CoreNodeId 3, SlotNo 11) + , (CoreNodeId 4, SlotNo 11) + ] , setupNodeRestarts = noRestarts - , setupSlotLength = defaultSlotLength - , setupVersion = (minBound, ByronNodeToNodeVersion1) + , setupSlotLength = defaultSlotLength + , setupVersion = (minBound, ByronNodeToNodeVersion1) } , testProperty "HeaderProtocolError prevents JIT EBB emission" $ - -- "extra" EBB generated in anticipation of a block that ends up - -- being PBftExceededSignThreshold - -- - -- PR 1942 reduced the input of blockProduction from ExtLedgerState - -- to just LedgerState, but that does not provide enough information - -- to fully anticipate the block's invalidity, since it excludes - -- protocol-level validation - -- - -- Remark: this particular repro involves a peculiar phenomenon - -- apparent for k=8 n=3 in which the nodes' steady-state behavior - -- involves a regularly occurring 'PBftExceededSignThreshold' - once $ - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @8 - , setupTestConfig = TestConfig - { numCoreNodes = NumCoreNodes 3 - , numSlots = NumSlots 81 - , nodeTopology = meshNodeTopology (NumCoreNodes 3) - , initSeed = Seed 0 + -- "extra" EBB 
generated in anticipation of a block that ends up + -- being PBftExceededSignThreshold + -- + -- PR 1942 reduced the input of blockProduction from ExtLedgerState + -- to just LedgerState, but that does not provide enough information + -- to fully anticipate the block's invalidity, since it excludes + -- protocol-level validation + -- + -- Remark: this particular repro involves a peculiar phenomenon + -- apparent for k=8 n=3 in which the nodes' steady-state behavior + -- involves a regularly occurring 'PBftExceededSignThreshold' + once $ + prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @8 + , setupTestConfig = + TestConfig + { numCoreNodes = NumCoreNodes 3 + , numSlots = NumSlots 81 + , nodeTopology = meshNodeTopology (NumCoreNodes 3) + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan + ( Map.fromList + [ (CoreNodeId 0, SlotNo{unSlotNo = 2}) + , (CoreNodeId 1, SlotNo{unSlotNo = 6}) + , (CoreNodeId 2, SlotNo{unSlotNo = 9}) + ] + ) + , setupNodeRestarts = noRestarts + , setupSlotLength = slotLengthFromSec 20 + , setupVersion = (minBound, ByronNodeToNodeVersion1) } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo {unSlotNo = 2}),(CoreNodeId 1,SlotNo {unSlotNo = 6}),(CoreNodeId 2,SlotNo {unSlotNo = 9})]) - , setupNodeRestarts = noRestarts - , setupSlotLength = slotLengthFromSec 20 - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } , testProperty "WallClock must handle PastHorizon by exactly slotLength delay" $ - -- Previously, 'PastTimeHorizon' put the node to sleep for 60s. That - -- had caused it to be offline for slots it shouldn't miss. 
- once $ - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = ProduceEBBs - , setupK = SecurityParam $ knownNonZeroBounded @2 - , setupTestConfig = TestConfig - { numCoreNodes = NumCoreNodes 2 - , numSlots = NumSlots 39 - , nodeTopology = meshNodeTopology (NumCoreNodes 2) - , initSeed = Seed 0 + -- Previously, 'PastTimeHorizon' put the node to sleep for 60s. That + -- had caused it to be offline for slots it shouldn't miss. + once $ + prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = ProduceEBBs + , setupK = SecurityParam $ knownNonZeroBounded @2 + , setupTestConfig = + TestConfig + { numCoreNodes = NumCoreNodes 2 + , numSlots = NumSlots 39 + , nodeTopology = meshNodeTopology (NumCoreNodes 2) + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan + (Map.fromList [(CoreNodeId 0, SlotNo{unSlotNo = 0}), (CoreNodeId 1, SlotNo{unSlotNo = 33})]) + , setupNodeRestarts = noRestarts + , setupSlotLength = slotLengthFromSec 20 + , setupVersion = (minBound, ByronNodeToNodeVersion1) } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo {unSlotNo = 0}),(CoreNodeId 1,SlotNo {unSlotNo = 33})]) - , setupNodeRestarts = noRestarts - , setupSlotLength = slotLengthFromSec 20 - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } , testProperty "systemTimeCurrent must not answer once clock is exhausted" $ - -- This test would pass (before fixing the bug) if I moved both the - -- end of the test and also the last node's join slot ahead by 3 - -- slots. So I think to the epoch boundary may be involved, likely - -- via its effect on the 'SafeZone'. - -- - -- The failure is due to the following sequence. The node joins with - -- an empty chain. The existing nodes disconnect from it. The new - -- node blocks due to 'PastTimeHorizon', and so syncs the whole chain. - -- Then the new node leads. But the old nodes don't get its new - -- block. Then the test ends. 
- -- - -- The new node should not have been able to create a block in that - -- slot. The 'PastTimeHorizon' should cause the node to sleep for an - -- entire slot duration, so it should have missed it's chance to - -- lead. - -- - -- This failure clarified that 'OracularClock.systemTimeCurrent' should not provide - -- a time after the clock is exhausted. - once $ - prop_simple_real_pbft_convergence TestSetup - { setupEBBs = NoEBBs - , setupK = SecurityParam $ knownNonZeroBounded @2 - , setupTestConfig = TestConfig - { numCoreNodes = NumCoreNodes 3 - , numSlots = NumSlots 21 - , nodeTopology = meshNodeTopology (NumCoreNodes 3) - , initSeed = Seed 0 + -- This test would pass (before fixing the bug) if I moved both the + -- end of the test and also the last node's join slot ahead by 3 + -- slots. So I think the epoch boundary may be involved, likely + -- via its effect on the 'SafeZone'. + -- + -- The failure is due to the following sequence. The node joins with + -- an empty chain. The existing nodes disconnect from it. The new + -- node blocks due to 'PastTimeHorizon', and so syncs the whole chain. + -- Then the new node leads. But the old nodes don't get its new + -- block. Then the test ends. + -- + -- The new node should not have been able to create a block in that + -- slot. The 'PastTimeHorizon' should cause the node to sleep for an + -- entire slot duration, so it should have missed its chance to + -- lead. + -- + -- This failure clarified that 'OracularClock.systemTimeCurrent' should not provide + -- a time after the clock is exhausted.
+ once $ + prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = NoEBBs + , setupK = SecurityParam $ knownNonZeroBounded @2 + , setupTestConfig = + TestConfig + { numCoreNodes = NumCoreNodes 3 + , numSlots = NumSlots 21 + , nodeTopology = meshNodeTopology (NumCoreNodes 3) + , initSeed = Seed 0 + } + , setupNodeJoinPlan = + NodeJoinPlan + ( Map.fromList + [ (CoreNodeId 0, SlotNo{unSlotNo = 0}) + , (CoreNodeId 1, SlotNo{unSlotNo = 0}) + , (CoreNodeId 2, SlotNo{unSlotNo = 20}) + ] + ) + , setupNodeRestarts = noRestarts + , setupSlotLength = slotLengthFromSec 20 + , setupVersion = (minBound, ByronNodeToNodeVersion1) } - , setupNodeJoinPlan = NodeJoinPlan (Map.fromList [(CoreNodeId 0,SlotNo {unSlotNo = 0}),(CoreNodeId 1,SlotNo {unSlotNo = 0}),(CoreNodeId 2,SlotNo {unSlotNo = 20})]) - , setupNodeRestarts = noRestarts - , setupSlotLength = slotLengthFromSec 20 - , setupVersion = (minBound, ByronNodeToNodeVersion1) - } , testProperty "simple convergence" $ \setup -> prop_simple_real_pbft_convergence setup ] - where - defaultSlotLength :: SlotLength - defaultSlotLength = slotLengthFromSec 1 + where + defaultSlotLength :: SlotLength + defaultSlotLength = slotLengthFromSec 1 prop_deterministicPlan :: PBftParams -> NumSlots -> NumCoreNodes -> Property prop_deterministicPlan params numSlots numCoreNodes = - property $ case Ref.simulate params njp numSlots of - Ref.Forked{} -> False - Ref.Outcomes{} -> True - Ref.Nondeterministic -> False - where - njp = trivialNodeJoinPlan numCoreNodes + property $ case Ref.simulate params njp numSlots of + Ref.Forked{} -> False + Ref.Outcomes{} -> True + Ref.Nondeterministic -> False + where + njp = trivialNodeJoinPlan numCoreNodes expectedCannotForge :: - SecurityParam - -> NumCoreNodes - -> NodeRestarts - -> SlotNo - -> NodeId - -> WrapCannotForge ByronBlock - -> Bool + SecurityParam -> + NumCoreNodes -> + NodeRestarts -> + SlotNo -> + NodeId -> + WrapCannotForge ByronBlock -> + Bool expectedCannotForge - k numCoreNodes (NodeRestarts 
nrs) - s (CoreId (CoreNodeId i)) (WrapCannotForge cl) - = case cl of - PBftCannotForgeThresholdExceeded{} -> + k + numCoreNodes + (NodeRestarts nrs) + s + (CoreId (CoreNodeId i)) + (WrapCannotForge cl) = + case cl of + PBftCannotForgeThresholdExceeded{} -> -- TODO validate this against Ref implementation? True - PBftCannotForgeInvalidDelegation {} -> + PBftCannotForgeInvalidDelegation{} -> -- only if it rekeyed within before a restarts latest possible -- maturation - not $ null $ - [ () - | (restartSlot, nrs') <- Map.toList nrs - , restartSlot <= s - && s < latestPossibleDlgMaturation k numCoreNodes restartSlot - , (CoreNodeId i', NodeRekey) <- Map.toList nrs' - , i' == i - ] + not $ + null $ + [ () + | (restartSlot, nrs') <- Map.toList nrs + , restartSlot <= s + && s < latestPossibleDlgMaturation k numCoreNodes restartSlot + , (CoreNodeId i', NodeRekey) <- Map.toList nrs' + , i' == i + ] expectedCannotForge _ _ _ _ _ _ = False -- | If we rekey in slot rekeySlot, it is in general possible that the leader @@ -843,62 +1008,68 @@ expectedCannotForge _ _ _ _ _ _ = False -- See @genNodeRekeys@ for the logic that ensures at least one of those slots' -- leaders will be able to lead. 
latestPossibleDlgMaturation :: - SecurityParam -> NumCoreNodes -> SlotNo -> SlotNo + SecurityParam -> NumCoreNodes -> SlotNo -> SlotNo latestPossibleDlgMaturation - (SecurityParam k) (NumCoreNodes n) (SlotNo rekeySlot) = + (SecurityParam k) + (NumCoreNodes n) + (SlotNo rekeySlot) = SlotNo $ rekeySlot + n + 2 * unNonZero k prop_simple_real_pbft_convergence :: TestSetup -> Property -prop_simple_real_pbft_convergence TestSetup - { setupEBBs = produceEBBs - , setupK = k - , setupTestConfig = testConfig - , setupNodeJoinPlan = nodeJoinPlan - , setupNodeRestarts = nodeRestarts - , setupSlotLength = slotLength - , setupVersion = version - } = - tabulate "produce EBBs" [show produceEBBs] $ - tabulate "Ref.PBFT result" [Ref.resultConstrName refResult] $ - tabulate "proposed protocol version was adopted" [show aPvuRequired] $ - tabulate "proposed software version was adopted" [show aSvuRequired] $ - counterexample ("params: " <> show params) $ - counterexample ("Ref.PBFT result: " <> show refResult) $ - counterexample - ("delegation certificates: " <> show [ - (,) nid $ - mapMaybe (>>= \x@(_, dlgs) -> if null dlgs then Nothing else Just x) $ - [ case Byron.byronBlockRaw blk of - Block.ABOBBlock b -> Just (Block.blockSlot b, Delegation.getPayload $ Block.blockDlgPayload b) - Block.ABOBBoundary _ -> Nothing - | blk <- Chain.chainToList ch - ] - | (nid, ch) <- finalChains - ]) $ - prop_general PropGeneralArgs - { pgaBlockProperty = const $ property True - , pgaCountTxs = Byron.countByronGenTxs - , pgaExpectedCannotForge = expectedCannotForge k numCoreNodes nodeRestarts - , pgaFirstBlockNo = 1 - , pgaFixedMaxForkLength = - Just $ NumBlocks $ case refResult of - Ref.Forked{} -> 1 - _ -> 0 - , pgaFixedSchedule = - Just $ roundRobinLeaderSchedule numCoreNodes numSlots - , pgaSecurityParam = k - , pgaTestConfig = testConfig - , pgaTestConfigB = testConfigB - } - testOutput .&&. - prop_pvu .&&. - prop_svu .&&. - not (all (Chain.null . snd) finalChains) .&&. 
- case refResult of - Ref.Outcomes outcomes -> - conjoin (map (hasAllEBBs k produceEBBs outcomes) finalChains) - _ -> property True - where +prop_simple_real_pbft_convergence + TestSetup + { setupEBBs = produceEBBs + , setupK = k + , setupTestConfig = testConfig + , setupNodeJoinPlan = nodeJoinPlan + , setupNodeRestarts = nodeRestarts + , setupSlotLength = slotLength + , setupVersion = version + } = + tabulate "produce EBBs" [show produceEBBs] + $ tabulate "Ref.PBFT result" [Ref.resultConstrName refResult] + $ tabulate "proposed protocol version was adopted" [show aPvuRequired] + $ tabulate "proposed software version was adopted" [show aSvuRequired] + $ counterexample ("params: " <> show params) + $ counterexample ("Ref.PBFT result: " <> show refResult) + $ counterexample + ( "delegation certificates: " + <> show + [ (,) nid $ + mapMaybe (>>= \x@(_, dlgs) -> if null dlgs then Nothing else Just x) $ + [ case Byron.byronBlockRaw blk of + Block.ABOBBlock b -> Just (Block.blockSlot b, Delegation.getPayload $ Block.blockDlgPayload b) + Block.ABOBBoundary _ -> Nothing + | blk <- Chain.chainToList ch + ] + | (nid, ch) <- finalChains + ] + ) + $ prop_general + PropGeneralArgs + { pgaBlockProperty = const $ property True + , pgaCountTxs = Byron.countByronGenTxs + , pgaExpectedCannotForge = expectedCannotForge k numCoreNodes nodeRestarts + , pgaFirstBlockNo = 1 + , pgaFixedMaxForkLength = + Just $ NumBlocks $ case refResult of + Ref.Forked{} -> 1 + _ -> 0 + , pgaFixedSchedule = + Just $ roundRobinLeaderSchedule numCoreNodes numSlots + , pgaSecurityParam = k + , pgaTestConfig = testConfig + , pgaTestConfigB = testConfigB + } + testOutput + .&&. prop_pvu + .&&. prop_svu + .&&. not (all (Chain.null . snd) finalChains) + .&&. 
case refResult of + Ref.Outcomes outcomes -> + conjoin (map (hasAllEBBs k produceEBBs outcomes) finalChains) + _ -> property True + where TestConfig { nodeTopology , numCoreNodes @@ -906,57 +1077,68 @@ prop_simple_real_pbft_convergence TestSetup , initSeed } = testConfig - testConfigB = TestConfigB - { forgeEbbEnv = case produceEBBs of - NoEBBs -> Nothing - ProduceEBBs -> Just byronForgeEbbEnv - , future = singleEraFuture slotLength epochSize - , messageDelay = noCalcMessageDelay - , nodeJoinPlan - , nodeRestarts - , txGenExtra = () - , version = version - } + testConfigB = + TestConfigB + { forgeEbbEnv = case produceEBBs of + NoEBBs -> Nothing + ProduceEBBs -> Just byronForgeEbbEnv + , future = singleEraFuture slotLength epochSize + , messageDelay = noCalcMessageDelay + , nodeJoinPlan + , nodeRestarts + , txGenExtra = () + , version = version + } testOutput = - runTestNetwork testConfig testConfigB TestConfigMB - { nodeInfo = \nid -> - mkProtocolByronAndHardForkTxs - params nid genesisConfig genesisSecrets - theProposedProtocolVersion - , mkRekeyM = Just $ fromRekeyingToRekeyM Rekeying - { rekeyOracle = \cid s -> - let nominalSlots = case refResult of - Ref.Forked{} -> Set.empty - Ref.Outcomes outcomes -> - Set.fromList $ - [ s' - | (Ref.Nominal, s') <- zip outcomes [0..] - -- ignore the 'Ref.Nominal's disrupted by the - -- rekey; see comment on 'refResult' - , cid /= Ref.mkLeaderOf params s' - ] - Ref.Nondeterministic{} -> Set.empty - in Set.lookupGT s nominalSlots - , rekeyUpd = mkRekeyUpd genesisConfig genesisSecrets - , rekeyFreshSKs = - let prj = Crypto.hashVerKey . Crypto.deriveVerKeyDSIGN - acc0 = -- the VKs of the operational keys at genesis - Set.fromList $ - map (Common.hashKey . Delegation.delegateVK) $ - Map.elems $ - Genesis.unGenesisDelegation $ - Genesis.gdHeavyDelegation $ - Genesis.configGenesisData genesisConfig - genKeyDSIGNRandom = do - Crypto.genKeyDSIGN . mkSeedFromBytes . 
BS.pack - <$> vectorOf 32 arbitrary - in - Stream.nubOrdBy prj acc0 $ - runGen initSeed $ -- seems fine to reuse seed for this - sequence $ let ms = genKeyDSIGNRandom Stream.:< ms in ms - } - } + runTestNetwork + testConfig + testConfigB + TestConfigMB + { nodeInfo = \nid -> + mkProtocolByronAndHardForkTxs + params + nid + genesisConfig + genesisSecrets + theProposedProtocolVersion + , mkRekeyM = + Just $ + fromRekeyingToRekeyM + Rekeying + { rekeyOracle = \cid s -> + let nominalSlots = case refResult of + Ref.Forked{} -> Set.empty + Ref.Outcomes outcomes -> + Set.fromList $ + [ s' + | (Ref.Nominal, s') <- zip outcomes [0 ..] + , -- ignore the 'Ref.Nominal's disrupted by the + -- rekey; see comment on 'refResult' + cid /= Ref.mkLeaderOf params s' + ] + Ref.Nondeterministic{} -> Set.empty + in Set.lookupGT s nominalSlots + , rekeyUpd = mkRekeyUpd genesisConfig genesisSecrets + , rekeyFreshSKs = + let prj = Crypto.hashVerKey . Crypto.deriveVerKeyDSIGN + acc0 = + -- the VKs of the operational keys at genesis + Set.fromList $ + map (Common.hashKey . Delegation.delegateVK) $ + Map.elems $ + Genesis.unGenesisDelegation $ + Genesis.gdHeavyDelegation $ + Genesis.configGenesisData genesisConfig + genKeyDSIGNRandom = do + Crypto.genKeyDSIGN . mkSeedFromBytes . 
BS.pack + <$> vectorOf 32 arbitrary + in Stream.nubOrdBy prj acc0 $ + runGen initSeed $ -- seems fine to reuse seed for this + sequence $ + let ms = genKeyDSIGNRandom Stream.:< ms in ms + } + } -- Byron has a hard-coded relation between k and the size of an epoch epochSize :: EpochSize @@ -979,10 +1161,10 @@ prop_simple_real_pbft_convergence TestSetup svuLabels :: [(NodeId, SoftwareVersionUpdateLabel)] svuLabels = map (fmap snd) updLabels - updLabels - :: [(NodeId, (ProtocolVersionUpdateLabel, SoftwareVersionUpdateLabel))] + updLabels :: + [(NodeId, (ProtocolVersionUpdateLabel, SoftwareVersionUpdateLabel))] updLabels = - [ (,) cid $ + [ (,) cid $ mkUpdateLabels params numSlots @@ -991,14 +1173,14 @@ prop_simple_real_pbft_convergence TestSetup nodeTopology refResult ldgr - | (cid, ldgr) <- finalLedgers - ] + | (cid, ldgr) <- finalLedgers + ] -- whether the proposed protocol version was required to have been adopted -- in one of the chains aPvuRequired :: Bool aPvuRequired = - or + or [ Just True == pvuRequired | (_, ProtocolVersionUpdateLabel{pvuRequired}) <- pvuLabels ] @@ -1007,7 +1189,7 @@ prop_simple_real_pbft_convergence TestSetup -- one of the chains aSvuRequired :: Bool aSvuRequired = - or + or [ Just True == svuRequired | (_, SoftwareVersionUpdateLabel{svuRequired}) <- svuLabels ] @@ -1016,47 +1198,46 @@ prop_simple_real_pbft_convergence TestSetup -- was adopted prop_pvu :: Property prop_pvu = - counterexample (show pvuLabels) $ + counterexample (show pvuLabels) $ conjoin - [ counterexample (show (cid, pvuLabel)) $ - let ProtocolVersionUpdateLabel - { pvuObserved - , pvuRequired - } = pvuLabel - in - property $ case pvuRequired of - Just b -> b == pvuObserved - Nothing -> True - | (cid, pvuLabel) <- pvuLabels - ] + [ counterexample (show (cid, pvuLabel)) $ + let ProtocolVersionUpdateLabel + { pvuObserved + , pvuRequired + } = pvuLabel + in property $ case pvuRequired of + Just b -> b == pvuObserved + Nothing -> True + | (cid, pvuLabel) <- pvuLabels + ] -- 
check whether the proposed software version should have been and if so -- was adopted prop_svu :: Property prop_svu = - counterexample (show svuLabels) $ + counterexample (show svuLabels) $ conjoin - [ counterexample (show (cid, svuLabel)) $ - let SoftwareVersionUpdateLabel - { svuObserved - , svuRequired - } = svuLabel - in - property $ case svuRequired of - Just b -> b == svuObserved - Nothing -> True - | (cid, svuLabel) <- svuLabels - ] + [ counterexample (show (cid, svuLabel)) $ + let SoftwareVersionUpdateLabel + { svuObserved + , svuRequired + } = svuLabel + in property $ case svuRequired of + Just b -> b == svuObserved + Nothing -> True + | (cid, svuLabel) <- svuLabels + ] params :: PBftParams params = byronPBftParams k numCoreNodes - genesisConfig :: Genesis.Config + genesisConfig :: Genesis.Config genesisSecrets :: Genesis.GeneratedSecrets (genesisConfig, genesisSecrets) = generateGenesisConfig slotLength params byronForgeEbbEnv :: ForgeEbbEnv ByronBlock -byronForgeEbbEnv = ForgeEbbEnv +byronForgeEbbEnv = + ForgeEbbEnv { forgeEBB = Byron.forgeEBB . configBlock } @@ -1064,45 +1245,45 @@ byronForgeEbbEnv = ForgeEbbEnv -- -- TODO add a case to generate EBBs upto some epoch, like on mainnet data ProduceEBBs - = NoEBBs - -- ^ No EBBs are produced in the tests. The node will still automatically + = -- | No EBBs are produced in the tests. The node will still automatically -- produce its own genesis EBB. - | ProduceEBBs - -- ^ In addition to the genesis EBB the node generates itself, the tests + NoEBBs + | -- | In addition to the genesis EBB the node generates itself, the tests -- also produce an EBB at the start of each subsequent epoch. + ProduceEBBs deriving (Eq, Show) -- | Exported alias for 'NoEBBs'. 
--- noEBBs :: ProduceEBBs noEBBs = NoEBBs instance Arbitrary ProduceEBBs where arbitrary = elements [NoEBBs, ProduceEBBs] - shrink NoEBBs = [] + shrink NoEBBs = [] shrink ProduceEBBs = [NoEBBs] -hasAllEBBs :: SecurityParam - -> ProduceEBBs - -> [Ref.Outcome] - -> (NodeId, Chain ByronBlock) - -> Property +hasAllEBBs :: + SecurityParam -> + ProduceEBBs -> + [Ref.Outcome] -> + (NodeId, Chain ByronBlock) -> + Property hasAllEBBs k produceEBBs outcomes (nid, c) = - counterexample ("Missing or unexpected EBBs in " <> condense (nid, c)) $ + counterexample ("Missing or unexpected EBBs in " <> condense (nid, c)) $ actual === expected - where - expected :: [EpochNo] - expected = case produceEBBs of - NoEBBs -> [0] - ProduceEBBs -> case reverse [ s :: SlotNo | (Ref.Nominal, s) <- zip outcomes [0..] ] of - [] -> [0] - s:_ -> coerce [0 .. hi] - where - hi :: Word64 - hi = unSlotNo s `div` denom - denom = unEpochSlots $ kEpochSlots $ coerce (unNonZero (maxRollbacks k) :: Word64) + where + expected :: [EpochNo] + expected = case produceEBBs of + NoEBBs -> [0] + ProduceEBBs -> case reverse [s :: SlotNo | (Ref.Nominal, s) <- zip outcomes [0 ..]] of + [] -> [0] + s : _ -> coerce [0 .. hi] + where + hi :: Word64 + hi = unSlotNo s `div` denom + denom = unEpochSlots $ kEpochSlots $ coerce (unNonZero (maxRollbacks k) :: Word64) - actual = mapMaybe blockIsEBB $ Chain.toOldestFirst c + actual = mapMaybe blockIsEBB $ Chain.toOldestFirst c {------------------------------------------------------------------------------- Generating node join plans that ensure sufficiently dense chains @@ -1118,65 +1299,69 @@ genSlot lo hi = SlotNo <$> choose (unSlotNo lo, unSlotNo hi) -- -- Note that there is only one chain: at any slot onset, the net's fork only -- has one tine. 
--- genByronNodeJoinPlan :: PBftParams -> NumSlots -> Gen NodeJoinPlan genByronNodeJoinPlan params numSlots@(NumSlots t) - | n < 0 || t < 1 = error $ "Cannot generate Byron NodeJoinPlan: " - ++ show (params, numSlots) - | otherwise = - go (NodeJoinPlan Map.empty) Ref.emptyState - `suchThat` (\njp -> Ref.definitelyEnoughBlocks params $ - Ref.simulate params njp numSlots) + | n < 0 || t < 1 = + error $ + "Cannot generate Byron NodeJoinPlan: " + ++ show (params, numSlots) + | otherwise = + go (NodeJoinPlan Map.empty) Ref.emptyState + `suchThat` ( \njp -> + Ref.definitelyEnoughBlocks params $ + Ref.simulate params njp numSlots + ) + where + -- This suchThat might loop a few times, but it should always + -- eventually succeed, since the plan where all nodes join immediately + -- satisfies it. + -- + -- In a run of 7000 successful Byron tests, this 'suchThat' retried: + -- + -- 486 retried once + -- 100 retried twice + -- 10 retried 3 times + -- 4 retried 4 times + -- 4 retried 5 times + -- 1 retried 6 times + + PBftParams{pbftNumNodes} = params + NumCoreNodes n = pbftNumNodes - -- This suchThat might loop a few times, but it should always - -- eventually succeed, since the plan where all nodes join immediately - -- satisfies it. + sentinel = SlotNo t + lastSlot = pred sentinel -- note the t guard above + go :: + NodeJoinPlan -> + -- \^ an /incomplete/ and /viable/ node join plan + Ref.State -> + -- \^ a state whose 'Ref.nextSlot' is <= the last join slot in given + -- plan (or 0 if the plan is empty) + Gen NodeJoinPlan + go nodeJoinPlan@(NodeJoinPlan m) st + | i == n = pure $ NodeJoinPlan m + | otherwise = do + -- @True@ if this join slot for @nid@ is viable -- - -- In a run of 7000 successful Byron tests, this 'suchThat' retried: + -- /Viable/ means the desired chain density invariant remains + -- satisfiable, at the very least the nodes after @nid@ may need to + -- also join in this same slot. 
-- - -- 486 retried once - -- 100 retried twice - -- 10 retried 3 times - -- 4 retried 4 times - -- 4 retried 5 times - -- 1 retried 6 times - where - PBftParams{pbftNumNodes} = params - NumCoreNodes n = pbftNumNodes - - sentinel = SlotNo t - lastSlot = pred sentinel -- note the t guard above - - go :: - NodeJoinPlan - -- ^ an /incomplete/ and /viable/ node join plan - -> Ref.State - -- ^ a state whose 'Ref.nextSlot' is <= the last join slot in given - -- plan (or 0 if the plan is empty) - -> Gen NodeJoinPlan - go nodeJoinPlan@(NodeJoinPlan m) st - | i == n = pure $ NodeJoinPlan m - | otherwise = do - -- @True@ if this join slot for @nid@ is viable - -- - -- /Viable/ means the desired chain density invariant remains - -- satisfiable, at the very least the nodes after @nid@ may need to - -- also join in this same slot. - -- - -- Assuming @nodeJoinPlan@ is indeed viable and @st@ is indeed not - -- ahead of it, then we should be able to find a join slot for - -- @nid@ that is also viable: the viability of @nodeJoinPlan@ means - -- @nid@ can at least join \"immediately\" wrt to @nodeJoinPlan@. - -- - -- The base case is that the empty join plan and empty state are - -- viable, which assumes that the invariant would be satisfied if - -- all nodes join in slot 0. For uninterrupted round-robin, that - -- merely requires @n * floor (k * t) >= k@. (TODO Does that - -- __always__ suffice?) + -- Assuming @nodeJoinPlan@ is indeed viable and @st@ is indeed not + -- ahead of it, then we should be able to find a join slot for + -- @nid@ that is also viable: the viability of @nodeJoinPlan@ means + -- @nid@ can at least join \"immediately\" wrt to @nodeJoinPlan@. + -- + -- The base case is that the empty join plan and empty state are + -- viable, which assumes that the invariant would be satisfied if + -- all nodes join in slot 0. For uninterrupted round-robin, that + -- merely requires @n * floor (k * t) >= k@. (TODO Does that + -- __always__ suffice?) 
let check s' = - Ref.viable params sentinel - (NodeJoinPlan (Map.insert nid s' m)) - st + Ref.viable + params + sentinel + (NodeJoinPlan (Map.insert nid s' m)) + st lo = Ref.nextSlot st -- @check@ is downward-closed, but 'searchFromTo' requires @@ -1184,12 +1369,13 @@ genByronNodeJoinPlan params numSlots@(NumSlots t) inn = (maxBound -) . unSlotNo out = SlotNo . (maxBound -) s' <- case out <$> searchFromTo (check . out) (inn lastSlot) (inn lo) of - Just hi -> genSlot lo hi - Nothing -> error $ - "Cannot find viable Byron NodeJoinPlan: " ++ - show (params, numSlots, nodeJoinPlan, st) + Just hi -> genSlot lo hi + Nothing -> + error $ + "Cannot find viable Byron NodeJoinPlan: " + ++ show (params, numSlots, nodeJoinPlan, st) - let m' = Map.insert nid s' m + let m' = Map.insert nid s' m -- optimization: avoid simulating from the same inputs multiple -- times @@ -1201,12 +1387,12 @@ genByronNodeJoinPlan params numSlots@(NumSlots t) -- NOTE @m@ is congruent to @m'@ for all slots prior to @s'@ st' = Ref.advanceUpTo params nodeJoinPlan st s' go (NodeJoinPlan m') st' - where - -- the next node to be added to the incomplete join plan - nid = CoreNodeId i - i = case fst <$> Map.lookupMax m of - Nothing -> 0 - Just (CoreNodeId h) -> succ h + where + -- the next node to be added to the incomplete join plan + nid = CoreNodeId i + i = case fst <$> Map.lookupMax m of + Nothing -> 0 + Just (CoreNodeId h) -> succ h -- | Possibly promote some 'NodeRestart's to 'NodeRekey's -- @@ -1218,52 +1404,56 @@ genByronNodeJoinPlan params numSlots@(NumSlots t) -- until it's finished. Therefore, at most one node will be rekeying at a time, -- since otherwise its inability to lead may spoil the invariants established -- by 'genByronNodeJoinPlan'. 
--- genNodeRekeys :: - PBftParams - -> NodeJoinPlan - -> NodeTopology - -> NumSlots - -> NodeRestarts - -> Gen NodeRestarts -genNodeRekeys params nodeJoinPlan nodeTopology numSlots@(NumSlots t) + PBftParams -> + NodeJoinPlan -> + NodeTopology -> + NumSlots -> + NodeRestarts -> + Gen NodeRestarts +genNodeRekeys + params + nodeJoinPlan + nodeTopology + numSlots@(NumSlots t) nodeRestarts@(NodeRestarts nrs) - | t <= 0 = pure nodeRestarts - | otherwise = - -- The necessary conditions are relatively rare, so favor adding a - -- 'NodeRekey' when we can. But not always. - (\x -> frequency [(2, pure nodeRestarts), (8, x)]) $ - -- TODO rekey nodes other than the last - -- TODO rekey more than one node - -- TODO rekey a node in a slot other than its join slot - case Map.lookupMax njp of - Just (cid, jslot) - -- last node joins after first epoch, ... - | jslot >= beginSecondEpoch - -- ... and could instead join unproblematically at the latest time - -- the delegation certificate would mature ... - , latestPossibleDlgMaturation pbftSecurityParam numCoreNodes jslot - < sentinel - , let nodeJoinPlan' = - NodeJoinPlan $ Map.insert cid (jslot + twoK) njp - , Ref.definitelyEnoughBlocks params $ - Ref.simulate params nodeJoinPlan' numSlots - -- ... and does not join in the same slot as the leader of the next - -- slot unless they are neighbors (otherwise the dlg cert tx might - -- not reach it soon enough) - , let nextLeader = Ref.mkLeaderOf params $ succ jslot - , jslot /= coreNodeIdJoinSlot nodeJoinPlan nextLeader || - cid `elem` coreNodeIdNeighbors nodeTopology nextLeader - -> pure $ NodeRestarts $ - -- We discard any 'NodeRestart's also scheduled for this slot. - -- 'NodeRestart's are less interesting, so it's fine. - -- - -- TODO retain those coincident node restarts as long as they - -- don't include every other node, since that risks forgetting - -- some relevant blocks. 
- Map.insert jslot (Map.singleton cid NodeRekey) nrs - _ -> pure nodeRestarts - where + | t <= 0 = pure nodeRestarts + | otherwise = + -- The necessary conditions are relatively rare, so favor adding a + -- 'NodeRekey' when we can. But not always. + (\x -> frequency [(2, pure nodeRestarts), (8, x)]) $ + -- TODO rekey nodes other than the last + -- TODO rekey more than one node + -- TODO rekey a node in a slot other than its join slot + case Map.lookupMax njp of + Just (cid, jslot) + -- last node joins after first epoch, ... + | jslot >= beginSecondEpoch + , -- ... and could instead join unproblematically at the latest time + -- the delegation certificate would mature ... + latestPossibleDlgMaturation pbftSecurityParam numCoreNodes jslot + < sentinel + , let nodeJoinPlan' = + NodeJoinPlan $ Map.insert cid (jslot + twoK) njp + , Ref.definitelyEnoughBlocks params $ + Ref.simulate params nodeJoinPlan' numSlots + , -- ... and does not join in the same slot as the leader of the next + -- slot unless they are neighbors (otherwise the dlg cert tx might + -- not reach it soon enough) + let nextLeader = Ref.mkLeaderOf params $ succ jslot + , jslot /= coreNodeIdJoinSlot nodeJoinPlan nextLeader + || cid `elem` coreNodeIdNeighbors nodeTopology nextLeader -> + pure $ + NodeRestarts $ + -- We discard any 'NodeRestart's also scheduled for this slot. + -- 'NodeRestart's are less interesting, so it's fine. + -- + -- TODO retain those coincident node restarts as long as they + -- don't include every other node, since that risks forgetting + -- some relevant blocks. + Map.insert jslot (Map.singleton cid NodeRekey) nrs + _ -> pure nodeRestarts + where PBftParams{pbftSecurityParam} = params k = unNonZero $ maxRollbacks pbftSecurityParam sentinel = SlotNo t @@ -1271,8 +1461,8 @@ genNodeRekeys params nodeJoinPlan nodeTopology numSlots@(NumSlots t) NodeJoinPlan njp = nodeJoinPlan - twoK = SlotNo $ 2 * k - beginSecondEpoch = SlotNo $ 10 * k -- c.f. 
Genesis.configEpochSlots + twoK = SlotNo $ 2 * k + beginSecondEpoch = SlotNo $ 10 * k -- c.f. Genesis.configEpochSlots {------------------------------------------------------------------------------- Updating operational keys @@ -1280,92 +1470,90 @@ genNodeRekeys params nodeJoinPlan nodeTopology numSlots@(NumSlots t) -- | Overwrite the 'ProtocolInfo''s operational key, if any, and provide a -- transaction for its new delegation certificate --- mkRekeyUpd :: - Monad m - => Genesis.Config - -> Genesis.GeneratedSecrets - -> CoreNodeId - -> ProtocolInfo ByronBlock - -> m [BlockForging m ByronBlock] - -> EpochNo - -> Crypto.SignKeyDSIGN Crypto.ByronDSIGN - -> m (Maybe (TestNodeInitialization m ByronBlock)) + Monad m => + Genesis.Config -> + Genesis.GeneratedSecrets -> + CoreNodeId -> + ProtocolInfo ByronBlock -> + m [BlockForging m ByronBlock] -> + EpochNo -> + Crypto.SignKeyDSIGN Crypto.ByronDSIGN -> + m (Maybe (TestNodeInitialization m ByronBlock)) mkRekeyUpd genesisConfig genesisSecrets cid pInfo blockForging eno newSK = do blockForging <&> \case - [] -> Nothing - (_:_) -> + [] -> Nothing + (_ : _) -> let genSK = genesisSecretFor genesisConfig genesisSecrets cid creds' = updSignKey genSK bcfg cid (coerce eno) newSK blockForging' = byronBlockForging creds' - - in Just TestNodeInitialization - { tniCrucialTxs = [dlgTx (blcDlgCert creds')] - , tniProtocolInfo = pInfo - , tniBlockForging = pure [blockForging'] - } - where - bcfg = configBlock (pInfoConfig pInfo) + in Just + TestNodeInitialization + { tniCrucialTxs = [dlgTx (blcDlgCert creds')] + , tniProtocolInfo = pInfo + , tniBlockForging = pure [blockForging'] + } + where + bcfg = configBlock (pInfoConfig pInfo) -- | The secret key for a node index --- genesisSecretFor :: - Genesis.Config - -> Genesis.GeneratedSecrets - -> CoreNodeId - -> Crypto.SignKeyDSIGN Crypto.ByronDSIGN + Genesis.Config -> + Genesis.GeneratedSecrets -> + CoreNodeId -> + Crypto.SignKeyDSIGN Crypto.ByronDSIGN genesisSecretFor genesisConfig 
genesisSecrets cid = - case hits of - [sec] -> Crypto.SignKeyByronDSIGN sec - _ -> error $ "Not exactly one genesis key " <> show (cid, hits) - where - hits :: [Crypto.SigningKey] - hits = - filter - ((Just cid ==) . gkToIdx) - (Genesis.gsDlgIssuersSecrets genesisSecrets) + case hits of + [sec] -> Crypto.SignKeyByronDSIGN sec + _ -> error $ "Not exactly one genesis key " <> show (cid, hits) + where + hits :: [Crypto.SigningKey] + hits = + filter + ((Just cid ==) . gkToIdx) + (Genesis.gsDlgIssuersSecrets genesisSecrets) - gkToIdx :: Crypto.SigningKey -> Maybe CoreNodeId - gkToIdx = - genesisKeyCoreNodeId genesisConfig - . Crypto.VerKeyByronDSIGN . Crypto.toVerification + gkToIdx :: Crypto.SigningKey -> Maybe CoreNodeId + gkToIdx = + genesisKeyCoreNodeId genesisConfig + . Crypto.VerKeyByronDSIGN + . Crypto.toVerification -- | Create new 'ByronLeaderCredentials' by generating a new delegation -- certificate for the given new operational key. --- updSignKey :: - Crypto.SignKeyDSIGN Crypto.ByronDSIGN - -> BlockConfig ByronBlock - -> CoreNodeId - -> EpochNumber - -> Crypto.SignKeyDSIGN Crypto.ByronDSIGN - -> ByronLeaderCredentials + Crypto.SignKeyDSIGN Crypto.ByronDSIGN -> + BlockConfig ByronBlock -> + CoreNodeId -> + EpochNumber -> + Crypto.SignKeyDSIGN Crypto.ByronDSIGN -> + ByronLeaderCredentials updSignKey genSK extCfg cid eno newSK = - ByronLeaderCredentials { - blcSignKey = sk' - , blcDlgCert = newCert - , blcCoreNodeId = cid - , blcLabel = "Updated Byron credentials" - } - where - newCert = - Delegation.signCertificate - (Byron.byronProtocolMagicId extCfg) - (Crypto.toVerification sk') - eno - (Crypto.noPassSafeSigner gsk') + ByronLeaderCredentials + { blcSignKey = sk' + , blcDlgCert = newCert + , blcCoreNodeId = cid + , blcLabel = "Updated Byron credentials" + } + where + newCert = + Delegation.signCertificate + (Byron.byronProtocolMagicId extCfg) + (Crypto.toVerification sk') + eno + (Crypto.noPassSafeSigner gsk') - Crypto.SignKeyByronDSIGN gsk' = genSK - 
Crypto.SignKeyByronDSIGN sk' = newSK + Crypto.SignKeyByronDSIGN gsk' = genSK + Crypto.SignKeyByronDSIGN sk' = newSK -- | Map a delegation certificate to a delegation transaction --- dlgTx :: Delegation.Certificate -> Byron.GenTx ByronBlock dlgTx cert = - let ann = Plain.serialize' (cert :: Delegation.Certificate) - cert' = cert - { Delegation.aEpoch = + let ann = Plain.serialize' (cert :: Delegation.Certificate) + cert' = + cert + { Delegation.aEpoch = reAnnotate byronProtVer (Delegation.aEpoch cert) , Delegation.annotation = ann } - in Byron.ByronDlg (Delegation.recoverCertificateId cert') cert' + in Byron.ByronDlg (Delegation.recoverCertificateId cert') cert' diff --git a/ouroboros-consensus-cardano/test/byron-test/Test/ThreadNet/DualByron.hs b/ouroboros-consensus-cardano/test/byron-test/Test/ThreadNet/DualByron.hs index 751e89df64..c182d3a756 100644 --- a/ouroboros-consensus-cardano/test/byron-test/Test/ThreadNet/DualByron.hs +++ b/ouroboros-consensus-cardano/test/byron-test/Test/ThreadNet/DualByron.hs @@ -3,177 +3,187 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | This runs the Byron ledger and the Byron specification in lockstep, -- verifying that they agree at every point. 
--- module Test.ThreadNet.DualByron (tests) where -import qualified Byron.Spec.Chain.STS.Rule.Chain as Spec -import qualified Byron.Spec.Ledger.Core as Spec -import qualified Byron.Spec.Ledger.UTxO as Spec -import qualified Cardano.Chain.ProtocolConstants as Impl -import qualified Cardano.Chain.UTxO as Impl -import Cardano.Ledger.BaseTypes (nonZeroOr) -import Control.Monad.Except -import qualified Control.State.Transition.Extended as Spec -import Data.ByteString (ByteString) -import qualified Data.Map.Strict as Map -import Data.Proxy -import qualified Data.Set as Set -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Ledger.Conversions -import Ouroboros.Consensus.ByronDual.Ledger -import Ouroboros.Consensus.ByronDual.Node -import Ouroboros.Consensus.ByronSpec.Ledger -import qualified Ouroboros.Consensus.ByronSpec.Ledger.Genesis as Genesis -import qualified Ouroboros.Consensus.ByronSpec.Ledger.Rules as Rules -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Dual -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.PBFT -import Ouroboros.Consensus.TypeFamilyWrappers -import qualified Test.Cardano.Chain.Elaboration.UTxO as Spec.Test -import qualified Test.Control.State.Transition.Generator as Spec.QC -import Test.QuickCheck -import Test.QuickCheck.Hedgehog (hedgehog) -import Test.Tasty -import Test.Tasty.QuickCheck -import qualified Test.ThreadNet.Byron as Byron -import Test.ThreadNet.General -import qualified Test.ThreadNet.Ref.PBFT as Ref -import Test.ThreadNet.TxGen -import Test.ThreadNet.Util -import Test.ThreadNet.Util.NodeRestarts (noRestarts) -import Test.ThreadNet.Util.NodeToNodeVersion (newestVersion) -import Test.Util.HardFork.Future 
(singleEraFuture) -import Test.Util.Slots (NumSlots (..)) -import Test.Util.TestEnv (adjustQuickCheckTests) +import Byron.Spec.Chain.STS.Rule.Chain qualified as Spec +import Byron.Spec.Ledger.Core qualified as Spec +import Byron.Spec.Ledger.UTxO qualified as Spec +import Cardano.Chain.ProtocolConstants qualified as Impl +import Cardano.Chain.UTxO qualified as Impl +import Cardano.Ledger.BaseTypes (nonZeroOr) +import Control.Monad.Except +import Control.State.Transition.Extended qualified as Spec +import Data.ByteString (ByteString) +import Data.Map.Strict qualified as Map +import Data.Proxy +import Data.Set qualified as Set +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Ledger.Conversions +import Ouroboros.Consensus.ByronDual.Ledger +import Ouroboros.Consensus.ByronDual.Node +import Ouroboros.Consensus.ByronSpec.Ledger +import Ouroboros.Consensus.ByronSpec.Ledger.Genesis qualified as Genesis +import Ouroboros.Consensus.ByronSpec.Ledger.Rules qualified as Rules +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Dual +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.TypeFamilyWrappers +import Test.Cardano.Chain.Elaboration.UTxO qualified as Spec.Test +import Test.Control.State.Transition.Generator qualified as Spec.QC +import Test.QuickCheck +import Test.QuickCheck.Hedgehog (hedgehog) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.Byron qualified as Byron +import Test.ThreadNet.General +import Test.ThreadNet.Ref.PBFT qualified as Ref +import Test.ThreadNet.TxGen +import Test.ThreadNet.Util +import Test.ThreadNet.Util.NodeRestarts (noRestarts) +import 
Test.ThreadNet.Util.NodeToNodeVersion (newestVersion) +import Test.Util.HardFork.Future (singleEraFuture) +import Test.Util.Slots (NumSlots (..)) +import Test.Util.TestEnv (adjustQuickCheckTests) tests :: TestTree -tests = testGroup "DualByron" [ - adjustQuickCheckTests (`div` 10) $ testProperty "convergence" $ prop_convergence +tests = + testGroup + "DualByron" + [ adjustQuickCheckTests (`div` 10) $ testProperty "convergence" $ prop_convergence ] -- These tests are very expensive, due to the Byron generators -- (100 tests take about 20 minutes) -- We limit it to 10 tests for now. prop_convergence :: SetupDualByron -> Property -prop_convergence setup = (\prop -> if mightForgeInSlot0 then discard else prop) $ +prop_convergence setup = + (\prop -> if mightForgeInSlot0 then discard else prop) $ tabulate "Ref.PBFT result" [Ref.resultConstrName refResult] $ - prop_general PropGeneralArgs - { pgaBlockProperty = const $ property True - , pgaCountTxs = countByronGenTxs . dualBlockMain - , pgaExpectedCannotForge = setupExpectedCannotForge setup - , pgaFirstBlockNo = 1 - , pgaFixedMaxForkLength = - Just $ NumBlocks $ case refResult of - Ref.Forked{} -> 1 - _ -> 0 - , pgaFixedSchedule = - Just $ roundRobinLeaderSchedule numCoreNodes numSlots - , pgaSecurityParam = setupK - , pgaTestConfig = setupTestConfig - , pgaTestConfigB = setupTestConfigB setup - } - (setupTestOutput setup) - where - SetupDualByron{..} = setup - Byron.TestSetup{..} = setupByron - TestConfig{..} = setupTestConfig + prop_general + PropGeneralArgs + { pgaBlockProperty = const $ property True + , pgaCountTxs = countByronGenTxs . 
dualBlockMain + , pgaExpectedCannotForge = setupExpectedCannotForge setup + , pgaFirstBlockNo = 1 + , pgaFixedMaxForkLength = + Just $ NumBlocks $ case refResult of + Ref.Forked{} -> 1 + _ -> 0 + , pgaFixedSchedule = + Just $ roundRobinLeaderSchedule numCoreNodes numSlots + , pgaSecurityParam = setupK + , pgaTestConfig = setupTestConfig + , pgaTestConfigB = setupTestConfigB setup + } + (setupTestOutput setup) + where + SetupDualByron{..} = setup + Byron.TestSetup{..} = setupByron + TestConfig{..} = setupTestConfig - refResult :: Ref.Result - refResult = - Ref.simulate (setupParams setup) setupNodeJoinPlan numSlots + refResult :: Ref.Result + refResult = + Ref.simulate (setupParams setup) setupNodeJoinPlan numSlots - -- The test infrastructure allows nodes to forge in slot 0; however, the - -- cardano-ledger-specs code causes @PBFTFailure (SlotNotAfterLastBlock - -- (Slot 0) (Slot 0))@ in that case. So we discard such tests. - -- - -- This is ultimately due to the spec not modeling EBBs, while Byron - -- requires that successor of the genesis block is always the epoch 0 EBB. - -- As a result, the PBFT implementation tests the slot progression with - -- @<=@ to accomodate EBBs whereas the executable STS spec uses @<@. - mightForgeInSlot0 :: Bool - mightForgeInSlot0 = case refResult of - Ref.Forked _ m -> any (0 `Set.member`) m - Ref.Nondeterministic -> True - Ref.Outcomes outcomes -> case outcomes of - [] -> False - o : _ -> case o of - Ref.Absent -> False - Ref.Nominal -> True - Ref.Unable -> True - Ref.Wasted -> True + -- The test infrastructure allows nodes to forge in slot 0; however, the + -- cardano-ledger-specs code causes @PBFTFailure (SlotNotAfterLastBlock + -- (Slot 0) (Slot 0))@ in that case. So we discard such tests. + -- + -- This is ultimately due to the spec not modeling EBBs, while Byron + -- requires that successor of the genesis block is always the epoch 0 EBB. 
+ -- As a result, the PBFT implementation tests the slot progression with + -- @<=@ to accommodate EBBs whereas the executable STS spec uses @<@. + mightForgeInSlot0 :: Bool + mightForgeInSlot0 = case refResult of + Ref.Forked _ m -> any (0 `Set.member`) m + Ref.Nondeterministic -> True + Ref.Outcomes outcomes -> case outcomes of + [] -> False + o : _ -> case o of + Ref.Absent -> False + Ref.Nominal -> True + Ref.Unable -> True + Ref.Wasted -> True {------------------------------------------------------------------------------- Test setup -------------------------------------------------------------------------------} -data SetupDualByron = SetupDualByron { - setupGenesis :: ByronSpecGenesis - , setupByron :: Byron.TestSetup - } - deriving (Show) +data SetupDualByron = SetupDualByron + { setupGenesis :: ByronSpecGenesis + , setupByron :: Byron.TestSetup + } + deriving Show setupParams :: SetupDualByron -> PBftParams setupParams = byronPBftParams . setupGenesis setupTestConfigB :: SetupDualByron -> TestConfigB DualByronBlock -setupTestConfigB SetupDualByron{..} = TestConfigB - { forgeEbbEnv = Nothing -- spec does not model EBBs - , future = singleEraFuture setupSlotLength epochSize - , messageDelay = noCalcMessageDelay - , nodeJoinPlan = setupNodeJoinPlan - , nodeRestarts = setupNodeRestarts - , txGenExtra = () - , version = newestVersion (Proxy @DualByronBlock) - } - where - Byron.TestSetup{..} = setupByron +setupTestConfigB SetupDualByron{..} = + TestConfigB + { forgeEbbEnv = Nothing -- spec does not model EBBs + , future = singleEraFuture setupSlotLength epochSize + , messageDelay = noCalcMessageDelay + , nodeJoinPlan = setupNodeJoinPlan + , nodeRestarts = setupNodeRestarts + , txGenExtra = () + , version = newestVersion (Proxy @DualByronBlock) + } + where + Byron.TestSetup{..} = setupByron - epochSize :: EpochSize - epochSize = - fromByronEpochSlots $ Impl.kEpochSlots (toByronBlockCount setupK) + epochSize :: EpochSize + epochSize = + fromByronEpochSlots $ 
Impl.kEpochSlots (toByronBlockCount setupK) setupTestOutput :: SetupDualByron -> TestOutput DualByronBlock setupTestOutput setup@SetupDualByron{..} = - runTestNetwork testConfig testConfigB TestConfigMB { - nodeInfo = \coreNodeId -> - uncurry plainTestNodeInitialization - (protocolInfoDualByron - setupGenesis - (setupParams setup) - [coreNodeId]) + runTestNetwork + testConfig + testConfigB + TestConfigMB + { nodeInfo = \coreNodeId -> + uncurry + plainTestNodeInitialization + ( protocolInfoDualByron + setupGenesis + (setupParams setup) + [coreNodeId] + ) , mkRekeyM = Nothing -- TODO } - where - testConfig = Byron.setupTestConfig setupByron - testConfigB = setupTestConfigB setup + where + testConfig = Byron.setupTestConfig setupByron + testConfigB = setupTestConfigB setup setupExpectedCannotForge :: - SetupDualByron - -> SlotNo - -> NodeId - -> WrapCannotForge DualByronBlock - -> Bool + SetupDualByron -> + SlotNo -> + NodeId -> + WrapCannotForge DualByronBlock -> + Bool setupExpectedCannotForge SetupDualByron{..} s nid (WrapCannotForge cl) = - Byron.expectedCannotForge - setupK - numCoreNodes - setupNodeRestarts - s nid (WrapCannotForge cl) - where - Byron.TestSetup{..} = setupByron - TestConfig{..} = setupTestConfig + Byron.expectedCannotForge + setupK + numCoreNodes + setupNodeRestarts + s + nid + (WrapCannotForge cl) + where + Byron.TestSetup{..} = setupByron + TestConfig{..} = setupTestConfig {------------------------------------------------------------------------------- Generator for 'SetupDualByron' @@ -185,42 +195,43 @@ setupExpectedCannotForge SetupDualByron{..} s nid (WrapCannotForge cl) = -- value provided by 'Byron.TestSetup' (eg @k@). 
instance Arbitrary SetupDualByron where arbitrary = do - numSlots <- arbitrary - slotLen <- arbitrary + numSlots <- arbitrary + slotLen <- arbitrary - genesis0 <- genSpecGenesis slotLen numSlots - let params@PBftParams{..} = byronPBftParams genesis0 - setupGenesis = adjustGenesis params genesis0 + genesis0 <- genSpecGenesis slotLen numSlots + let params@PBftParams{..} = byronPBftParams genesis0 + setupGenesis = adjustGenesis params genesis0 - -- TODO: Once we produce all kinds of transactions, we will need to - -- rethink rekeys/restarts (but might not be trivial, as we do not - -- generate the blocks upfront..). - setupByron <- - (\x -> x{Byron.setupNodeRestarts = noRestarts}) + -- TODO: Once we produce all kinds of transactions, we will need to + -- rethink rekeys/restarts (but might not be trivial, as we do not + -- generate the blocks upfront..). + setupByron <- + (\x -> x{Byron.setupNodeRestarts = noRestarts}) <$> Byron.genTestSetup - pbftSecurityParam - pbftNumNodes - numSlots - slotLen + pbftSecurityParam + pbftNumNodes + numSlots + slotLen - return SetupDualByron{..} - where - -- The spec tests and the Byron tests compute a different test value for - -- the PBFT threshold. For now we ignore the value computed by the spec - -- and override it with the value computed in the Byron tests. - -- - -- TODO: It would be interesting to see if we can bring these two in line, - -- but if we do, we probably need to adjust 'expectedBlockRejection'. - adjustGenesis :: PBftParams - -> ByronSpecGenesis - -> ByronSpecGenesis - adjustGenesis = - Genesis.modPBftThreshold - . const - . getPBftSignatureThreshold - . pbftSignatureThreshold + return SetupDualByron{..} + where + -- The spec tests and the Byron tests compute a different test value for + -- the PBFT threshold. For now we ignore the value computed by the spec + -- and override it with the value computed in the Byron tests. 
+ -- + -- TODO: It would be interesting to see if we can bring these two in line, + -- but if we do, we probably need to adjust 'expectedBlockRejection'. + adjustGenesis :: + PBftParams -> + ByronSpecGenesis -> + ByronSpecGenesis + adjustGenesis = + Genesis.modPBftThreshold + . const + . getPBftSignatureThreshold + . pbftSignatureThreshold - -- TODO shrink +-- TODO shrink -- | Generate abstract genesis config (environment for the CHAIN rule) -- @@ -229,30 +240,33 @@ instance Arbitrary SetupDualByron where -- this wants to know the chain length; we don't know that a-priority, but we -- do know the number of slots, and will use that as a stand-in. genSpecGenesis :: SlotLength -> NumSlots -> Gen ByronSpecGenesis -genSpecGenesis slotLen (NumSlots numSlots) = fmap fromEnv . hedgehog $ +genSpecGenesis slotLen (NumSlots numSlots) = + fmap fromEnv . hedgehog $ -- Convert Hedgehog generator to QuickCheck one -- Unfortunately, this does mean we lose any shrinking. Spec.QC.envGen @Spec.CHAIN numSlots - where - -- Start with a larger initial UTxO. This is important, because the Byron - -- spec TX generator is wasteful, and with every transaction the UTxO - -- shrinks. By starting with a larger initial UTxO we avoid the depleting - -- the UTxO too early (at which point we'd not be able to generate further - -- transactions, and produce empty blocks only). - fromEnv :: Spec.Environment Spec.CHAIN -> ByronSpecGenesis - fromEnv = Genesis.modUtxoValues (* 10000) - . Genesis.fromChainEnv (toByronSlotLength slotLen) + where + -- Start with a larger initial UTxO. This is important, because the Byron + -- spec TX generator is wasteful, and with every transaction the UTxO + -- shrinks. By starting with a larger initial UTxO we avoid depleting + -- the UTxO too early (at which point we'd not be able to generate further + -- transactions, and produce empty blocks only). + fromEnv :: Spec.Environment Spec.CHAIN -> ByronSpecGenesis + fromEnv = + Genesis.modUtxoValues (* 10000) + . 
Genesis.fromChainEnv (toByronSlotLength slotLen) byronPBftParams :: ByronSpecGenesis -> PBftParams byronPBftParams ByronSpecGenesis{..} = - Byron.byronPBftParams (SecurityParam k') numCoreNodes - where - Spec.BlockCount k = byronSpecGenesisSecurityParam - k' = nonZeroOr k $ error "Got zero. Expected nonzero." + Byron.byronPBftParams (SecurityParam k') numCoreNodes + where + Spec.BlockCount k = byronSpecGenesisSecurityParam + k' = nonZeroOr k $ error "Got zero. Expected nonzero." - numCoreNodes :: NumCoreNodes - numCoreNodes = NumCoreNodes $ - fromIntegral (Set.size byronSpecGenesisDelegators) + numCoreNodes :: NumCoreNodes + numCoreNodes = + NumCoreNodes $ + fromIntegral (Set.size byronSpecGenesisDelegators) {------------------------------------------------------------------------------- Generate transactions @@ -260,30 +274,32 @@ byronPBftParams ByronSpecGenesis{..} = instance TxGen DualByronBlock where testGenTxs _coreNodeId _numCoreNodes curSlotNo cfg () = \st -> do - n <- choose (0, 20) - go [] n - $ applyDiffs st - $ applyChainTick OmitLedgerEvents (configLedger cfg) curSlotNo - $ forgetLedgerTables st - where - -- Attempt to produce @n@ transactions - -- Stops when the transaction generator cannot produce more txs - go :: [GenTx DualByronBlock] -- Accumulator - -> Integer -- Number of txs to still produce - -> TickedLedgerState DualByronBlock ValuesMK - -> Gen [GenTx DualByronBlock] - go acc 0 _ = return (reverse acc) - go acc n st = do - tx <- genTx cfg st - case runExcept $ applyTx - (configLedger cfg) - DoNotIntervene - curSlotNo - tx - st of - Right (st', _vtx) -> - go (tx:acc) (n - 1) (applyDiffs st st') - Left _ -> error "testGenTxs: unexpected invalid tx" + n <- choose (0, 20) + go [] n $ + applyDiffs st $ + applyChainTick OmitLedgerEvents (configLedger cfg) curSlotNo $ + forgetLedgerTables st + where + -- Attempt to produce @n@ transactions + -- Stops when the transaction generator cannot produce more txs + go :: + [GenTx DualByronBlock] -> -- 
Accumulator + Integer -> -- Number of txs to still produce + TickedLedgerState DualByronBlock ValuesMK -> + Gen [GenTx DualByronBlock] + go acc 0 _ = return (reverse acc) + go acc n st = do + tx <- genTx cfg st + case runExcept $ + applyTx + (configLedger cfg) + DoNotIntervene + curSlotNo + tx + st of + Right (st', _vtx) -> + go (tx : acc) (n - 1) (applyDiffs st st') + Left _ -> error "testGenTxs: unexpected invalid tx" -- | Generate transaction -- @@ -291,42 +307,48 @@ instance TxGen DualByronBlock where -- certificates and update proposals/votes is out of the scope of this test, -- for now. Extending the scope will require integration with the restart/rekey -- infrastructure of the Byron tests. -genTx :: TopLevelConfig DualByronBlock - -> TickedLedgerState DualByronBlock ValuesMK - -> Gen (GenTx DualByronBlock) +genTx :: + TopLevelConfig DualByronBlock -> + TickedLedgerState DualByronBlock ValuesMK -> + Gen (GenTx DualByronBlock) genTx cfg st = do - aux <- sigGen (Rules.ctxtUTXOW cfg') st' - let main :: Impl.ATxAux ByteString - main = Spec.Test.elaborateTxBS - elaborateTxId - aux + aux <- sigGen (Rules.ctxtUTXOW cfg') st' + let main :: Impl.ATxAux ByteString + main = + Spec.Test.elaborateTxBS + elaborateTxId + aux - return $ DualGenTx { - dualGenTxMain = ByronTx (byronIdTx main) main - , dualGenTxAux = ByronSpecGenTx $ ByronSpecGenTxTx aux + return $ + DualGenTx + { dualGenTxMain = ByronTx (byronIdTx main) main + , dualGenTxAux = ByronSpecGenTx $ ByronSpecGenTxTx aux , dualGenTxBridge = specToImplTx aux main } - where - cfg' :: ByronSpecGenesis - st' :: Spec.State Spec.CHAIN + where + cfg' :: ByronSpecGenesis + st' :: Spec.State Spec.CHAIN - cfg' = dualLedgerConfigAux (configLedger cfg) - st' = tickedByronSpecLedgerState $ tickedDualLedgerStateAux st + cfg' = dualLedgerConfigAux (configLedger cfg) + st' = tickedByronSpecLedgerState $ tickedDualLedgerStateAux st - bridge :: ByronSpecBridge - bridge = tickedDualLedgerStateBridge st + bridge :: ByronSpecBridge + 
bridge = tickedDualLedgerStateBridge st - elaborateTxId :: Spec.TxId -> Impl.TxId - elaborateTxId tid = - case Map.lookup tid (bridgeTransactionIds bridge) of - Nothing -> error $ "elaborateTxId: unknown tx ID " ++ show tid - Just tid' -> tid' + elaborateTxId :: Spec.TxId -> Impl.TxId + elaborateTxId tid = + case Map.lookup tid (bridgeTransactionIds bridge) of + Nothing -> error $ "elaborateTxId: unknown tx ID " ++ show tid + Just tid' -> tid' -sigGen :: forall sts. (Spec.QC.HasTrace sts) - => Rules.RuleContext sts - -> Spec.State Spec.CHAIN - -> Gen (Spec.Signal sts) -sigGen Rules.RuleContext{..} st = hedgehog $ +sigGen :: + forall sts. + Spec.QC.HasTrace sts => + Rules.RuleContext sts -> + Spec.State Spec.CHAIN -> + Gen (Spec.Signal sts) +sigGen Rules.RuleContext{..} st = + hedgehog $ -- Convert Hedgehog generator to QuickCheck one -- Unfortunately, this does mean we lose any shrinking. Spec.QC.sigGen @sts (getRuleEnv st) (getRuleState st) diff --git a/ouroboros-consensus-cardano/test/cardano-test/Main.hs b/ouroboros-consensus-cardano/test/cardano-test/Main.hs index 1bb74fceed..35eb3f8e23 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Main.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Main.hs @@ -1,21 +1,23 @@ module Main (main) where -import System.IO (BufferMode (LineBuffering), hSetBuffering, stdout) -import qualified Test.Consensus.Cardano.DiffusionPipelining -import qualified Test.Consensus.Cardano.Golden -import qualified Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.Server -import qualified Test.Consensus.Cardano.Serialisation (tests) -import qualified Test.Consensus.Cardano.Show () -import qualified Test.Consensus.Cardano.SupportedNetworkProtocolVersion -import qualified Test.Consensus.Cardano.SupportsSanityCheck -import qualified Test.Consensus.Cardano.Translation (tests) -import Test.Tasty -import qualified Test.ThreadNet.AllegraMary -import qualified Test.ThreadNet.Cardano -import qualified Test.ThreadNet.MaryAlonzo -import 
qualified Test.ThreadNet.ShelleyAllegra -import Test.Util.TestEnv (defaultMainWithTestEnv, - defaultTestEnvConfig) +import System.IO (BufferMode (LineBuffering), hSetBuffering, stdout) +import Test.Consensus.Cardano.DiffusionPipelining qualified +import Test.Consensus.Cardano.Golden qualified +import Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.Server qualified +import Test.Consensus.Cardano.Serialisation qualified (tests) +import Test.Consensus.Cardano.Show qualified () +import Test.Consensus.Cardano.SupportedNetworkProtocolVersion qualified +import Test.Consensus.Cardano.SupportsSanityCheck qualified +import Test.Consensus.Cardano.Translation qualified (tests) +import Test.Tasty +import Test.ThreadNet.AllegraMary qualified +import Test.ThreadNet.Cardano qualified +import Test.ThreadNet.MaryAlonzo qualified +import Test.ThreadNet.ShelleyAllegra qualified +import Test.Util.TestEnv + ( defaultMainWithTestEnv + , defaultTestEnvConfig + ) main :: IO () main = do @@ -24,16 +26,17 @@ main = do tests :: TestTree tests = - testGroup "cardano" - [ Test.Consensus.Cardano.DiffusionPipelining.tests - , Test.Consensus.Cardano.Golden.tests - , Test.Consensus.Cardano.Serialisation.tests - , Test.Consensus.Cardano.SupportedNetworkProtocolVersion.tests - , Test.Consensus.Cardano.SupportsSanityCheck.tests - , Test.ThreadNet.AllegraMary.tests - , Test.ThreadNet.Cardano.tests - , Test.ThreadNet.MaryAlonzo.tests - , Test.ThreadNet.ShelleyAllegra.tests - , Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.Server.tests - , Test.Consensus.Cardano.Translation.tests - ] + testGroup + "cardano" + [ Test.Consensus.Cardano.DiffusionPipelining.tests + , Test.Consensus.Cardano.Golden.tests + , Test.Consensus.Cardano.Serialisation.tests + , Test.Consensus.Cardano.SupportedNetworkProtocolVersion.tests + , Test.Consensus.Cardano.SupportsSanityCheck.tests + , Test.ThreadNet.AllegraMary.tests + , Test.ThreadNet.Cardano.tests + , Test.ThreadNet.MaryAlonzo.tests + , 
Test.ThreadNet.ShelleyAllegra.tests + , Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.Server.tests + , Test.Consensus.Cardano.Translation.tests + ] diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/DiffusionPipelining.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/DiffusionPipelining.hs index befd26f186..458f3c2d0a 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/DiffusionPipelining.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/DiffusionPipelining.hs @@ -9,87 +9,95 @@ module Test.Consensus.Cardano.DiffusionPipelining (tests) where -import Control.Monad (replicateM) -import Data.Containers.ListUtils (nubOrd) -import Data.List (sort) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Strict -import Data.Traversable (for) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger (ByronBlock) -import Ouroboros.Consensus.Cardano (CardanoBlock) -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.Protocol.PBFT -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock, - ShelleyCompatible) -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Shelley.Node.DiffusionPipelining -import Ouroboros.Consensus.TypeFamilyWrappers -import Test.Cardano.Ledger.Binary.Arbitrary () -import Test.Cardano.Ledger.Core.Arbitrary () -import Test.Ouroboros.Consensus.DiffusionPipelining -import Test.Tasty -import Test.Tasty.QuickCheck +import Control.Monad (replicateM) +import Data.Containers.ListUtils (nubOrd) +import Data.List (sort) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Strict +import Data.Traversable (for) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger 
(ByronBlock) +import Ouroboros.Consensus.Cardano (CardanoBlock) +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger + ( ShelleyBlock + , ShelleyCompatible + ) +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.Node.DiffusionPipelining +import Ouroboros.Consensus.TypeFamilyWrappers +import Test.Cardano.Ledger.Binary.Arbitrary () +import Test.Cardano.Ledger.Core.Arbitrary () +import Test.Ouroboros.Consensus.DiffusionPipelining +import Test.Tasty +import Test.Tasty.QuickCheck tests :: TestTree -tests = testGroup "Cardano diffusion pipelining" +tests = + testGroup + "Cardano diffusion pipelining" [ testProperty "subsequence consistency" $ prop_cardanoDiffusionPipeliningSubsequenceConsistency ] prop_cardanoDiffusionPipeliningSubsequenceConsistency :: Property prop_cardanoDiffusionPipeliningSubsequenceConsistency = - forAllShrink - (genTentativeHeaderViews p) - shrinkThvs - (prop_diffusionPipeliningSubsequenceConsistency p) - where - p = Proxy @(CardanoBlock StandardCrypto) + forAllShrink + (genTentativeHeaderViews p) + shrinkThvs + (prop_diffusionPipeliningSubsequenceConsistency p) + where + p = Proxy @(CardanoBlock StandardCrypto) - shrinkThvs = shrinkList (const []) + shrinkThvs = shrinkList (const []) class GenTentativeHeaderViews blk where genTentativeHeaderViews :: Proxy blk -> Gen [TentativeHeaderView blk] instance All GenTentativeHeaderViews xs => GenTentativeHeaderViews (HardForkBlock xs) where genTentativeHeaderViews _ = - fmap OneEraTentativeHeaderView - . foldMap hsequence' - . hcollapse . hap injections + fmap OneEraTentativeHeaderView + . foldMap hsequence' + . hcollapse + . 
hap injections <$> gen - where - gen :: Gen (NP ([] :.: WrapTentativeHeaderView) xs) - gen = hctraverse' (Proxy @GenTentativeHeaderViews) + where + gen :: Gen (NP ([] :.: WrapTentativeHeaderView) xs) + gen = + hctraverse' + (Proxy @GenTentativeHeaderViews) (\p -> Comp . fmap WrapTentativeHeaderView <$> genTentativeHeaderViews p) (hpure Proxy) instance GenTentativeHeaderViews ByronBlock where genTentativeHeaderViews _ = - nubOrd . sort <$> listOf do - bno <- arbitrary - isEBB <- toIsEBB <$> arbitrary - pure $ PBftSelectView bno isEBB + nubOrd . sort <$> listOf do + bno <- arbitrary + isEBB <- toIsEBB <$> arbitrary + pure $ PBftSelectView bno isEBB instance ShelleyCompatible proto era => GenTentativeHeaderViews (ShelleyBlock proto era) where genTentativeHeaderViews _ = do - bnos <- nubOrd <$> orderedList - issuerHashes <- nubOrd <$> replicateM numIssuers arbitrary - hotIdentities <- concat <$> for issuerHashes \issuerHash -> do + bnos <- nubOrd <$> orderedList + issuerHashes <- nubOrd <$> replicateM numIssuers arbitrary + hotIdentities <- + concat <$> for issuerHashes \issuerHash -> do -- Due to the constraints placed by the OCERT rule on how the issue -- number can evolve, the number of issue numbers per block number and -- issuer (cold) identity is bounded. Note that we don't actually -- enforce those exact constraints here across different block numbers -- as their details are not relevant for this test. numIssueNos <- elements [1, 2] - issueNos <- take numIssueNos . iterate succ <$> arbitrary + issueNos <- take numIssueNos . 
iterate succ <$> arbitrary pure $ HotIdentity issuerHash <$> issueNos - concat <$> for bnos \bno -> do - hotIds <- shuffle =<< sublistOf hotIdentities - pure $ ShelleyTentativeHeaderView bno <$> hotIds - where - -- Upper bound on the number of issuer identities - numIssuers = 5 + concat <$> for bnos \bno -> do + hotIds <- shuffle =<< sublistOf hotIdentities + pure $ ShelleyTentativeHeaderView bno <$> hotIds + where + -- Upper bound on the number of issuer identities + numIssuers = 5 diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Golden.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Golden.hs index b89e7708ea..e749638889 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Golden.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Golden.hs @@ -2,39 +2,43 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TemplateHaskell #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.Cardano.Golden (tests) where -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.Node -import Ouroboros.Consensus.HardFork.Combinator.Serialisation -import Ouroboros.Consensus.Ledger.Query (QueryVersion) -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import System.FilePath ((</>)) -import Test.Consensus.Cardano.Examples -import Test.Tasty -import Test.Util.Paths -import Test.Util.Serialisation.Golden +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.Node +import Ouroboros.Consensus.HardFork.Combinator.Serialisation +import Ouroboros.Consensus.Ledger.Query (QueryVersion) +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import System.FilePath ((</>)) +import Test.Consensus.Cardano.Examples +import Test.Tasty +import Test.Util.Paths +import Test.Util.Serialisation.Golden tests :: 
TestTree tests = goldenTest_all codecConfig ($(getGoldenDir) </> "cardano") examples -instance CardanoHardForkConstraints c - => ToGoldenDirectory (HardForkNodeToNodeVersion (CardanoEras c)) where +instance + CardanoHardForkConstraints c => + ToGoldenDirectory (HardForkNodeToNodeVersion (CardanoEras c)) + where toGoldenDirectory v = case v of CardanoNodeToNodeVersion1 -> "CardanoNodeToNodeVersion1" CardanoNodeToNodeVersion2 -> "CardanoNodeToNodeVersion2" - _ -> error $ "Unknown version: " <> show v + _ -> error $ "Unknown version: " <> show v -instance CardanoHardForkConstraints c - => ToGoldenDirectory (QueryVersion, HardForkNodeToClientVersion (CardanoEras c)) where - toGoldenDirectory (queryVersion, blockVersion) = show queryVersion </> case blockVersion of - CardanoNodeToClientVersion12 -> "CardanoNodeToClientVersion12" - CardanoNodeToClientVersion13 -> "CardanoNodeToClientVersion13" - CardanoNodeToClientVersion14 -> "CardanoNodeToClientVersion14" - CardanoNodeToClientVersion15 -> "CardanoNodeToClientVersion15" - CardanoNodeToClientVersion16 -> "CardanoNodeToClientVersion16" - _ -> error $ "Unknown version: " <> show blockVersion +instance + CardanoHardForkConstraints c => + ToGoldenDirectory (QueryVersion, HardForkNodeToClientVersion (CardanoEras c)) + where + toGoldenDirectory (queryVersion, blockVersion) = + show queryVersion </> case blockVersion of + CardanoNodeToClientVersion12 -> "CardanoNodeToClientVersion12" + CardanoNodeToClientVersion13 -> "CardanoNodeToClientVersion13" + CardanoNodeToClientVersion14 -> "CardanoNodeToClientVersion14" + CardanoNodeToClientVersion15 -> "CardanoNodeToClientVersion15" + CardanoNodeToClientVersion16 -> "CardanoNodeToClientVersion16" + _ -> error $ "Unknown version: " <> show blockVersion diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/MiniProtocol/LocalTxSubmission/ByteStringTxParser.hs 
b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/MiniProtocol/LocalTxSubmission/ByteStringTxParser.hs index b785dbdabb..480fd6e9ee 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/MiniProtocol/LocalTxSubmission/ByteStringTxParser.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/MiniProtocol/LocalTxSubmission/ByteStringTxParser.hs @@ -8,57 +8,60 @@ -- -- > cabal repl ouroboros-consensus-cardano:test:cardano-test -- > import Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.ByteStringTxParser --- -module Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.ByteStringTxParser ( - cardanoCodecCfg +module Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.ByteStringTxParser + ( cardanoCodecCfg , deserialiseTx , printDeserializedTx ) where -import Cardano.Chain.Epoch.File (mainnetEpochSlots) -import Codec.CBOR.Read (DeserialiseFailure, deserialiseFromBytes) -import Data.ByteString.Base16.Lazy (decodeLenient) -import qualified Data.ByteString.Lazy as BL -import qualified Data.Map.Strict as Map -import Data.Proxy (Proxy (..)) -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.Node () -import Ouroboros.Consensus.Node.NetworkProtocolVersion - (latestReleasedNodeVersion, supportedNodeToClientVersions) -import Ouroboros.Consensus.Node.Serialisation - (SerialiseNodeToClient (decodeNodeToClient)) -import Ouroboros.Consensus.Shelley.Ledger - (CodecConfig (ShelleyCodecConfig)) -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Text.Pretty.Simple (pPrint) +import Cardano.Chain.Epoch.File (mainnetEpochSlots) +import Codec.CBOR.Read (DeserialiseFailure, deserialiseFromBytes) +import Data.ByteString.Base16.Lazy (decodeLenient) +import Data.ByteString.Lazy qualified as BL +import Data.Map.Strict qualified as Map +import Data.Proxy (Proxy (..)) +import Ouroboros.Consensus.Byron.Ledger +import 
Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.Node () +import Ouroboros.Consensus.Node.NetworkProtocolVersion + ( latestReleasedNodeVersion + , supportedNodeToClientVersions + ) +import Ouroboros.Consensus.Node.Serialisation + ( SerialiseNodeToClient (decodeNodeToClient) + ) +import Ouroboros.Consensus.Shelley.Ledger + ( CodecConfig (ShelleyCodecConfig) + ) +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Text.Pretty.Simple (pPrint) cardanoCodecCfg :: CodecConfig (CardanoBlock StandardCrypto) cardanoCodecCfg = CardanoCodecConfig - (ByronCodecConfig mainnetEpochSlots) - ShelleyCodecConfig - ShelleyCodecConfig - ShelleyCodecConfig - ShelleyCodecConfig - ShelleyCodecConfig - ShelleyCodecConfig + (ByronCodecConfig mainnetEpochSlots) + ShelleyCodecConfig + ShelleyCodecConfig + ShelleyCodecConfig + ShelleyCodecConfig + ShelleyCodecConfig + ShelleyCodecConfig deserialiseTx :: - BL.ByteString - -> Either DeserialiseFailure (BL.ByteString, GenTx (CardanoBlock StandardCrypto)) + BL.ByteString -> + Either DeserialiseFailure (BL.ByteString, GenTx (CardanoBlock StandardCrypto)) deserialiseTx = deserialiseFromBytes cborDecoder . decodeLenient - where - cborDecoder = decodeNodeToClient cardanoCodecCfg latestReleasedBlockNodeToClientVersion - latestReleasedBlockNodeToClientVersion = - case latestReleasedNodeVersion p of - (_, Just n2c) -> supportedNodeToClientVersions p Map.! n2c - _ -> error "no latest released Cardano NodeToClient version" - where - p = Proxy @(CardanoBlock StandardCrypto) + where + cborDecoder = decodeNodeToClient cardanoCodecCfg latestReleasedBlockNodeToClientVersion + latestReleasedBlockNodeToClientVersion = + case latestReleasedNodeVersion p of + (_, Just n2c) -> supportedNodeToClientVersions p Map.! 
n2c + _ -> error "no latest released Cardano NodeToClient version" + where + p = Proxy @(CardanoBlock StandardCrypto) printDeserializedTx :: BL.ByteString -> IO () printDeserializedTx bs = case deserialiseTx bs of - Left err -> pPrint err + Left err -> pPrint err Right (_rest, result) -> pPrint result diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/MiniProtocol/LocalTxSubmission/Server.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/MiniProtocol/LocalTxSubmission/Server.hs index 643eafa446..b18ee8b240 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/MiniProtocol/LocalTxSubmission/Server.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/MiniProtocol/LocalTxSubmission/Server.hs @@ -6,132 +6,154 @@ -- | Test that we can submit transactions to the mempool using the local -- submission server, in different Cardano eras. --- module Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.Server (tests) where -import Cardano.Ledger.BaseTypes (knownNonZeroBounded) -import Control.Monad (void) -import Control.Tracer (Tracer, nullTracer, stdoutTracer) -import Data.Functor.Contravariant ((>$<)) -import Data.SOP.Strict (index_NS) -import qualified Data.SOP.Telescope as Telescope -import Network.TypedProtocol.Proofs (connect) -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Config (topLevelConfigLedger) -import qualified Ouroboros.Consensus.Config as Consensus -import Ouroboros.Consensus.HardFork.Combinator (getHardForkState, - hardForkLedgerStatePerEra) -import Ouroboros.Consensus.Ledger.Extended (ledgerState) -import Ouroboros.Consensus.Ledger.SupportsMempool (ByteSize32 (..)) -import qualified Ouroboros.Consensus.Ledger.SupportsMempool as Ledger -import qualified Ouroboros.Consensus.Ledger.SupportsMempool as LedgerSupportsMempool -import qualified Ouroboros.Consensus.Mempool.Capacity as Mempool -import 
Ouroboros.Consensus.MiniProtocol.LocalTxSubmission.Server - (TraceLocalTxSubmissionServerEvent, - localTxSubmissionServer) -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Network.Protocol.LocalTxSubmission.Client - (SubmitResult, localTxSubmissionClientPeer) -import Ouroboros.Network.Protocol.LocalTxSubmission.Examples - (localTxSubmissionClient) -import Ouroboros.Network.Protocol.LocalTxSubmission.Server - (localTxSubmissionServerPeer) -import Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.ByteStringTxParser - (deserialiseTx) -import Test.Consensus.Cardano.ProtocolInfo - (ByronSlotLengthInSeconds (..), Era (..), - ShelleySlotLengthInSeconds (..), hardForkInto, - mkSimpleTestProtocolInfo, protocolVersionZero) -import qualified Test.Consensus.Mempool.Mocked as Mocked -import Test.Consensus.Mempool.Mocked (MockedMempool) -import Test.Tasty (TestTree, testGroup) -import Test.Tasty.HUnit (testCase, (@=?)) -import qualified Test.ThreadNet.Infra.Shelley as Shelley +import Cardano.Ledger.BaseTypes (knownNonZeroBounded) +import Control.Monad (void) +import Control.Tracer (Tracer, nullTracer, stdoutTracer) +import Data.Functor.Contravariant ((>$<)) +import Data.SOP.Strict (index_NS) +import Data.SOP.Telescope qualified as Telescope +import Network.TypedProtocol.Proofs (connect) +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Config (topLevelConfigLedger) +import Ouroboros.Consensus.Config qualified as Consensus +import Ouroboros.Consensus.HardFork.Combinator + ( getHardForkState + , hardForkLedgerStatePerEra + ) +import Ouroboros.Consensus.Ledger.Extended (ledgerState) +import Ouroboros.Consensus.Ledger.SupportsMempool (ByteSize32 (..)) +import Ouroboros.Consensus.Ledger.SupportsMempool qualified as Ledger +import Ouroboros.Consensus.Ledger.SupportsMempool qualified as LedgerSupportsMempool +import Ouroboros.Consensus.Mempool.Capacity qualified as Mempool +import Ouroboros.Consensus.MiniProtocol.LocalTxSubmission.Server + ( 
TraceLocalTxSubmissionServerEvent + , localTxSubmissionServer + ) +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Network.Protocol.LocalTxSubmission.Client + ( SubmitResult + , localTxSubmissionClientPeer + ) +import Ouroboros.Network.Protocol.LocalTxSubmission.Examples + ( localTxSubmissionClient + ) +import Ouroboros.Network.Protocol.LocalTxSubmission.Server + ( localTxSubmissionServerPeer + ) +import Test.Consensus.Cardano.MiniProtocol.LocalTxSubmission.ByteStringTxParser + ( deserialiseTx + ) +import Test.Consensus.Cardano.ProtocolInfo + ( ByronSlotLengthInSeconds (..) + , Era (..) + , ShelleySlotLengthInSeconds (..) + , hardForkInto + , mkSimpleTestProtocolInfo + , protocolVersionZero + ) +import Test.Consensus.Mempool.Mocked (MockedMempool) +import Test.Consensus.Mempool.Mocked qualified as Mocked +import Test.Tasty (TestTree, testGroup) +import Test.Tasty.HUnit (testCase, (@=?)) +import Test.ThreadNet.Infra.Shelley qualified as Shelley tests :: TestTree tests = - testGroup "LocalTxSubmissionServer" - $ fmap localServerPassesRegressionTests [Byron ..] - where - localServerPassesRegressionTests era = - testCase ("Passes the regression tests (" ++ show era ++ ")") $ do - let - pInfo :: ProtocolInfo (CardanoBlock StandardCrypto) - pInfo = mkSimpleTestProtocolInfo - (Shelley.DecentralizationParam 1) - (Consensus.SecurityParam $ knownNonZeroBounded @10) - (ByronSlotLengthInSeconds 1) - (ShelleySlotLengthInSeconds 1) - protocolVersionZero - (hardForkInto era) + testGroup "LocalTxSubmissionServer" $ + fmap localServerPassesRegressionTests [Byron ..] 
+ where + localServerPassesRegressionTests era = + testCase ("Passes the regression tests (" ++ show era ++ ")") $ do + let + pInfo :: ProtocolInfo (CardanoBlock StandardCrypto) + pInfo = + mkSimpleTestProtocolInfo + (Shelley.DecentralizationParam 1) + (Consensus.SecurityParam $ knownNonZeroBounded @10) + (ByronSlotLengthInSeconds 1) + (ShelleySlotLengthInSeconds 1) + protocolVersionZero + (hardForkInto era) - eraIndex = index_NS - . Telescope.tip - . getHardForkState - . hardForkLedgerStatePerEra - . ledgerState - $ pInfoInitLedger pInfo + eraIndex = + index_NS + . Telescope.tip + . getHardForkState + . hardForkLedgerStatePerEra + . ledgerState + $ pInfoInitLedger pInfo - eraIndex @=? fromEnum era + eraIndex @=? fromEnum era - let - -- We don't want the mempool to fill up during these tests. - capcityBytesOverride = - Mempool.mkCapacityBytesOverride (ByteSize32 100_000) - -- Use 'show >$< stdoutTracer' for debugging. - tracer = nullTracer - mempoolParams = Mocked.MempoolAndModelParams { - Mocked.immpInitialState = - ledgerState $ pInfoInitLedger pInfo - , Mocked.immpLedgerConfig = - topLevelConfigLedger $ pInfoConfig pInfo - } + let + -- We don't want the mempool to fill up during these tests. + capcityBytesOverride = + Mempool.mkCapacityBytesOverride (ByteSize32 100_000) + -- Use 'show >$< stdoutTracer' for debugging. + tracer = nullTracer + mempoolParams = + Mocked.MempoolAndModelParams + { Mocked.immpInitialState = + ledgerState $ pInfoInitLedger pInfo + , Mocked.immpLedgerConfig = + topLevelConfigLedger $ pInfoConfig pInfo + } - mempool <- Mocked.openMockedMempool - capcityBytesOverride - tracer - mempoolParams + mempool <- + Mocked.openMockedMempool + capcityBytesOverride + tracer + mempoolParams - mempool `should_process` [ _137 ] - where - -- Reported in https://github.com/IntersectMBO/ouroboros-consensus/issues/137 - _137 :: GenTx (CardanoBlock StandardCrypto) - _137 = either (error . 
show) snd (deserialiseTx _137_bs) - where - _137_bs = "8205d818590210" <> "84a400828258203a79a6a834e7779c67b0b3cbd3b7271883bbbeac15b1a89d78f057edc25e000b008258203a79a6a834e7779c67b0b3cbd3b7271883bbbeac15b1a89d78f057edc25e000b010182825839007d5a2560d23c3443b98d84c57b0c491311da4b3098de1945c7bcfc4c63ea8c5404f9ed9ae80d95b5544857b2011e3f26b63ddc3be1abd42d1a001e84808258390009ecea977429fa7a4993bc045ea618f3697e6b8eac9d5ea68bba7e4b63ea8c5404f9ed9ae80d95b5544857b2011e3f26b63ddc3be1abd42d821a560ea01ca4581c47be64fcc8a7fe5321b976282ce4e43e4d29015f6613cfabcea28eaba244546573741a3b97c0aa51576f52456d706972654c696368303037391a3443f4a0581c4cd2ea369880853541c5f446725f3e4ecaf141635f0c56c43104923ba14574464c41431b0de0b6b346d4b018581c85ef026c7da6a91f7acc1e662c50301bcce79eb401a3217690aa7044a14574464c41431b000000022eaca140581c92bd3be92d6a6eadd7c01ce9ff485809f3f2eb36845cd7a25c9177bfa14b546f20746865206d6f6f6e01021a0002b9b5031a05a18ef7a100818258202726733baa5c15d8d856c8d94e7d83bcfc7f5661ec7f952f052f311a2443feb258405f9d3d8a703baf700a3015994a3e8702fd7fe2e25d640487944b32ea999f36b314be9674be09b8b8f2c678976ecf994c83086180e854120d81243476c2b89e05f5f6" + mempool `should_process` [_137] + where + -- Reported in https://github.com/IntersectMBO/ouroboros-consensus/issues/137 + _137 :: GenTx (CardanoBlock StandardCrypto) + _137 = either (error . 
show) snd (deserialiseTx _137_bs) + where + _137_bs = + "8205d818590210" + <> "84a400828258203a79a6a834e7779c67b0b3cbd3b7271883bbbeac15b1a89d78f057edc25e000b008258203a79a6a834e7779c67b0b3cbd3b7271883bbbeac15b1a89d78f057edc25e000b010182825839007d5a2560d23c3443b98d84c57b0c491311da4b3098de1945c7bcfc4c63ea8c5404f9ed9ae80d95b5544857b2011e3f26b63ddc3be1abd42d1a001e84808258390009ecea977429fa7a4993bc045ea618f3697e6b8eac9d5ea68bba7e4b63ea8c5404f9ed9ae80d95b5544857b2011e3f26b63ddc3be1abd42d821a560ea01ca4581c47be64fcc8a7fe5321b976282ce4e43e4d29015f6613cfabcea28eaba244546573741a3b97c0aa51576f52456d706972654c696368303037391a3443f4a0581c4cd2ea369880853541c5f446725f3e4ecaf141635f0c56c43104923ba14574464c41431b0de0b6b346d4b018581c85ef026c7da6a91f7acc1e662c50301bcce79eb401a3217690aa7044a14574464c41431b000000022eaca140581c92bd3be92d6a6eadd7c01ce9ff485809f3f2eb36845cd7a25c9177bfa14b546f20746865206d6f6f6e01021a0002b9b5031a05a18ef7a100818258202726733baa5c15d8d856c8d94e7d83bcfc7f5661ec7f952f052f311a2443feb258405f9d3d8a703baf700a3015994a3e8702fd7fe2e25d640487944b32ea999f36b314be9674be09b8b8f2c678976ecf994c83086180e854120d81243476c2b89e05f5f6" -- | Check that the given transactions can be processed, irrespective of whether -- they were sucessfully validated. 
should_process :: MockedMempool IO blk -> [Ledger.GenTx blk] -> IO () should_process mockedMempool txs = do - void $ processTxs nullTracer mockedMempool txs + void $ processTxs nullTracer mockedMempool txs processTxs :: - Tracer IO (TraceLocalTxSubmissionServerEvent blk) - -> MockedMempool IO blk - -> [Ledger.GenTx blk] - -> IO [(GenTx blk, SubmitResult (LedgerSupportsMempool.ApplyTxErr blk))] + Tracer IO (TraceLocalTxSubmissionServerEvent blk) -> + MockedMempool IO blk -> + [Ledger.GenTx blk] -> + IO [(GenTx blk, SubmitResult (LedgerSupportsMempool.ApplyTxErr blk))] processTxs tracer mockedMempool txs = - (\(a, _, _) -> a) <$> - connect (localTxSubmissionClientPeer client) (localTxSubmissionServerPeer mServer) - where - mServer = pure $ localTxSubmissionServer tracer - (Mocked.getMempool mockedMempool) - client = localTxSubmissionClient txs + (\(a, _, _) -> a) + <$> connect (localTxSubmissionClientPeer client) (localTxSubmissionServerPeer mServer) + where + mServer = + pure $ + localTxSubmissionServer + tracer + (Mocked.getMempool mockedMempool) + client = localTxSubmissionClient txs -- TODO: this function is unused at the moment. We will use it once we add tests -- for Cardano transactions that are supposed to succeed. 
_should_process_and_return :: - ( Show (Ledger.GenTx blk) - , Eq (Ledger.ApplyTxErr blk) - , Show (SubmitResult (LedgerSupportsMempool.ApplyTxErr blk)) - ) - => MockedMempool IO blk -> [(Ledger.GenTx blk, SubmitResult (LedgerSupportsMempool.ApplyTxErr blk))] -> IO () + ( Show (Ledger.GenTx blk) + , Eq (Ledger.ApplyTxErr blk) + , Show (SubmitResult (LedgerSupportsMempool.ApplyTxErr blk)) + ) => + MockedMempool IO blk -> + [(Ledger.GenTx blk, SubmitResult (LedgerSupportsMempool.ApplyTxErr blk))] -> + IO () _should_process_and_return mockedMempool txs_ress = do - processResult <- processTxs (show >$< stdoutTracer) mockedMempool (fmap fst txs_ress) - let - actualResults = fmap snd processResult - expectedResults = fmap snd txs_ress - length actualResults @=? length expectedResults - mapM_ (uncurry (@=?)) $ zip expectedResults actualResults - pure () + processResult <- processTxs (show >$< stdoutTracer) mockedMempool (fmap fst txs_ress) + let + actualResults = fmap snd processResult + expectedResults = fmap snd txs_ress + length actualResults @=? 
length expectedResults + mapM_ (uncurry (@=?)) $ zip expectedResults actualResults + pure () diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Serialisation.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Serialisation.hs index 31a42feafe..85d28872d2 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Serialisation.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Serialisation.hs @@ -5,38 +5,41 @@ module Test.Consensus.Cardano.Serialisation (tests) where -import qualified Codec.CBOR.Write as CBOR -import qualified Data.ByteString.Lazy as Lazy -import Data.Constraint -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Byron.Ledger -import Ouroboros.Consensus.Byron.Node () -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.Node () -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Node () -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Network.Block (Serialised (..)) -import Test.Consensus.Byron.Generators (epochSlots) -import qualified Test.Consensus.Cardano.Examples as Cardano.Examples -import Test.Consensus.Cardano.Generators () -import Test.Consensus.Cardano.MockCrypto (MockCryptoCompatByron) -import Test.Tasty -import Test.Tasty.QuickCheck (Property, testProperty, (===)) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Serialisation.Roundtrip +import Codec.CBOR.Write qualified as CBOR +import Data.ByteString.Lazy qualified as Lazy +import Data.Constraint +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Byron.Ledger +import Ouroboros.Consensus.Byron.Node () +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.Node () +import Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Node () +import 
Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Network.Block (Serialised (..)) +import Test.Consensus.Byron.Generators (epochSlots) +import Test.Consensus.Cardano.Examples qualified as Cardano.Examples +import Test.Consensus.Cardano.Generators () +import Test.Consensus.Cardano.MockCrypto (MockCryptoCompatByron) +import Test.Tasty +import Test.Tasty.QuickCheck (Property, testProperty, (===)) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Roundtrip tests :: TestTree -tests = testGroup "Cardano" - [ testGroup "Examples roundtrip" $ examplesRoundtrip Cardano.Examples.codecConfig Cardano.Examples.examples +tests = + testGroup + "Cardano" + [ testGroup "Examples roundtrip" $ + examplesRoundtrip Cardano.Examples.codecConfig Cardano.Examples.examples , roundtrip_all_skipping result testCodecCfg dictNestedHdr , testProperty "BinaryBlockInfo sanity check" prop_CardanoBinaryBlockInfo ] - where - -- See https://github.com/IntersectMBO/cardano-ledger/issues/3800 - result "roundtrip Result" = DoNotCheckCBORValidity - result _ = CheckCBORValidity + where + -- See https://github.com/IntersectMBO/cardano-ledger/issues/3800 + result "roundtrip Result" = DoNotCheckCBORValidity + result _ = CheckCBORValidity testCodecCfg :: CardanoCodecConfig MockCryptoCompatByron testCodecCfg = @@ -50,18 +53,18 @@ testCodecCfg = ShelleyCodecConfig dictNestedHdr :: - forall a. - NestedCtxt_ (CardanoBlock MockCryptoCompatByron) Header a - -> Dict (Eq a, Show a) + forall a. 
+ NestedCtxt_ (CardanoBlock MockCryptoCompatByron) Header a -> + Dict (Eq a, Show a) dictNestedHdr = \case - NCZ (CtxtByronBoundary {}) -> Dict - NCZ (CtxtByronRegular {}) -> Dict - NCS (NCZ CtxtShelley) -> Dict - NCS (NCS (NCZ CtxtShelley)) -> Dict - NCS (NCS (NCS (NCZ CtxtShelley))) -> Dict - NCS (NCS (NCS (NCS (NCZ CtxtShelley)))) -> Dict - NCS (NCS (NCS (NCS (NCS (NCZ CtxtShelley))))) -> Dict - NCS (NCS (NCS (NCS (NCS (NCS (NCZ CtxtShelley)))))) -> Dict + NCZ (CtxtByronBoundary{}) -> Dict + NCZ (CtxtByronRegular{}) -> Dict + NCS (NCZ CtxtShelley) -> Dict + NCS (NCS (NCZ CtxtShelley)) -> Dict + NCS (NCS (NCS (NCZ CtxtShelley))) -> Dict + NCS (NCS (NCS (NCS (NCZ CtxtShelley)))) -> Dict + NCS (NCS (NCS (NCS (NCS (NCZ CtxtShelley))))) -> Dict + NCS (NCS (NCS (NCS (NCS (NCS (NCZ CtxtShelley)))))) -> Dict {------------------------------------------------------------------------------- BinaryBlockInfo @@ -69,17 +72,17 @@ dictNestedHdr = \case prop_CardanoBinaryBlockInfo :: CardanoBlock MockCryptoCompatByron -> Property prop_CardanoBinaryBlockInfo blk = - encodedNestedHeader === extractedHeader - where - BinaryBlockInfo { headerOffset, headerSize } = - getBinaryBlockInfo blk + encodedNestedHeader === extractedHeader + where + BinaryBlockInfo{headerOffset, headerSize} = + getBinaryBlockInfo blk - extractedHeader :: Lazy.ByteString - extractedHeader = - Lazy.take (fromIntegral headerSize) $ - Lazy.drop (fromIntegral headerOffset) $ + extractedHeader :: Lazy.ByteString + extractedHeader = + Lazy.take (fromIntegral headerSize) $ + Lazy.drop (fromIntegral headerOffset) $ CBOR.toLazyByteString (encodeDisk testCodecCfg blk) - encodedNestedHeader :: Lazy.ByteString - encodedNestedHeader = case encodeDepPair testCodecCfg (unnest (getHeader blk)) of - GenDepPair _ (Serialised bytes) -> bytes + encodedNestedHeader :: Lazy.ByteString + encodedNestedHeader = case encodeDepPair testCodecCfg (unnest (getHeader blk)) of + GenDepPair _ (Serialised bytes) -> bytes diff --git 
a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Show.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Show.hs index 65052b64c9..f91fd16778 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Show.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Show.hs @@ -1,8 +1,8 @@ module Test.Consensus.Cardano.Show () where -import Ouroboros.Consensus.Cardano () -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Cardano () +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -- | This definition exists solely to ensure that 'CardanoLedgerConfig' has a working 'Show' instance that isn't accidentally removed or broken in future _showCardanoLedgerConfig :: CardanoLedgerConfig StandardCrypto -> String diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/SupportedNetworkProtocolVersion.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/SupportedNetworkProtocolVersion.hs index 612c342e93..eff2a4587f 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/SupportedNetworkProtocolVersion.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/SupportedNetworkProtocolVersion.hs @@ -2,16 +2,16 @@ module Test.Consensus.Cardano.SupportedNetworkProtocolVersion (tests) where -import Data.Proxy -import Ouroboros.Consensus.Cardano -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Test.Tasty -import Test.Tasty.HUnit -import Test.Util.SupportedNetworkProtocolVersion +import Data.Proxy +import Ouroboros.Consensus.Cardano +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Test.Tasty +import Test.Tasty.HUnit +import 
Test.Util.SupportedNetworkProtocolVersion tests :: TestTree tests = - testCase "Cardano exhaustive network protocol versions" - $ exhaustiveSupportedNetworkProtocolVersions - (Proxy @(CardanoBlock StandardCrypto)) + testCase "Cardano exhaustive network protocol versions" $ + exhaustiveSupportedNetworkProtocolVersions + (Proxy @(CardanoBlock StandardCrypto)) diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/SupportsSanityCheck.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/SupportsSanityCheck.hs index 0a40b80e2f..57058da6f2 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/SupportsSanityCheck.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/SupportsSanityCheck.hs @@ -1,26 +1,31 @@ {-# LANGUAGE NamedFieldPuns #-} + module Test.Consensus.Cardano.SupportsSanityCheck (tests) where -import Cardano.Ledger.BaseTypes (nonZero, nonZeroOr, unNonZero) -import Ouroboros.Consensus.Cardano (CardanoHardForkTriggers) -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Test.Consensus.Cardano.ProtocolInfo -import qualified Test.QuickCheck as QC -import qualified Test.QuickCheck.Gen as Gen -import Test.Tasty -import Test.Tasty.QuickCheck -import qualified Test.ThreadNet.Infra.Shelley as Shelley -import Test.Util.SanityCheck +import Cardano.Ledger.BaseTypes (nonZero, nonZeroOr, unNonZero) +import Ouroboros.Consensus.Cardano (CardanoHardForkTriggers) +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Test.Consensus.Cardano.ProtocolInfo +import Test.QuickCheck qualified as QC 
+import Test.QuickCheck.Gen qualified as Gen +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.Infra.Shelley qualified as Shelley +import Test.Util.SanityCheck tests :: TestTree -tests = testGroup "SupportsSanityCheck" - [ testProperty "cardano block top level config passes a sanity check" prop_cardanoBlockSanityChecks - , testProperty "intentionally-misconfigured top level config fails a sanity check" prop_intentionallyBrokenConfigDoesNotSanityCheck - ] +tests = + testGroup + "SupportsSanityCheck" + [ testProperty "cardano block top level config passes a sanity check" prop_cardanoBlockSanityChecks + , testProperty + "intentionally-misconfigured top level config fails a sanity check" + prop_intentionallyBrokenConfigDoesNotSanityCheck + ] prop_cardanoBlockSanityChecks :: QC.Property prop_cardanoBlockSanityChecks = @@ -32,18 +37,23 @@ prop_intentionallyBrokenConfigDoesNotSanityCheck = let saneTopLevelConfig = pInfoConfig pinfo brokenConfig = breakTopLevelConfig saneTopLevelConfig - in expectFailure $ prop_sanityChecks brokenConfig + in expectFailure $ prop_sanityChecks brokenConfig -breakTopLevelConfig :: TopLevelConfig (CardanoBlock StandardCrypto) -> TopLevelConfig (CardanoBlock StandardCrypto) +breakTopLevelConfig :: + TopLevelConfig (CardanoBlock StandardCrypto) -> TopLevelConfig (CardanoBlock StandardCrypto) breakTopLevelConfig tlc = let TopLevelConfig{topLevelConfigProtocol} = tlc HardForkConsensusConfig{hardForkConsensusConfigK} = topLevelConfigProtocol k = unNonZero $ maxRollbacks hardForkConsensusConfigK - in tlc - { topLevelConfigProtocol = topLevelConfigProtocol - { hardForkConsensusConfigK = SecurityParam $ nonZeroOr (succ k) $ error "Impossible! In breakTopLevelConfig, found zero, expected a positive number." - } - } + in tlc + { topLevelConfigProtocol = + topLevelConfigProtocol + { hardForkConsensusConfigK = + SecurityParam $ + nonZeroOr (succ k) $ + error "Impossible! 
In breakTopLevelConfig, found zero, expected a positive number." + } + } genSimpleTestProtocolInfo :: Gen (ProtocolInfo (CardanoBlock StandardCrypto)) genSimpleTestProtocolInfo = do @@ -59,10 +69,10 @@ genSimpleTestProtocolInfo = do data SimpleTestProtocolInfoSetup = SimpleTestProtocolInfoSetup { decentralizationParam :: Shelley.DecentralizationParam - , securityParam :: SecurityParam - , byronSlotLength :: ByronSlotLengthInSeconds - , shelleySlotLength :: ShelleySlotLengthInSeconds - , hardForkTriggers :: CardanoHardForkTriggers + , securityParam :: SecurityParam + , byronSlotLength :: ByronSlotLengthInSeconds + , shelleySlotLength :: ShelleySlotLengthInSeconds + , hardForkTriggers :: CardanoHardForkTriggers } instance Arbitrary SimpleTestProtocolInfoSetup where @@ -73,12 +83,12 @@ instance Arbitrary SimpleTestProtocolInfoSetup where <*> genByronSlotLength <*> genShelleySlotLength <*> genHardForkTriggers - where - genSecurityParam = - SecurityParam <$> Gen.choose (8, 12) `suchThatMap` nonZero - genByronSlotLength = - ByronSlotLengthInSeconds <$> Gen.choose (1, 4) - genShelleySlotLength = - ShelleySlotLengthInSeconds <$> Gen.choose (1, 4) - genHardForkTriggers = - hardForkInto <$> Gen.chooseEnum (Byron, Conway) + where + genSecurityParam = + SecurityParam <$> Gen.choose (8, 12) `suchThatMap` nonZero + genByronSlotLength = + ByronSlotLengthInSeconds <$> Gen.choose (1, 4) + genShelleySlotLength = + ShelleySlotLengthInSeconds <$> Gen.choose (1, 4) + genHardForkTriggers = + hardForkInto <$> Gen.chooseEnum (Byron, Conway) diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Translation.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Translation.hs index 6fac8cff09..639ab2b833 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Translation.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/Consensus/Cardano/Translation.hs @@ -11,107 +11,137 @@ {-# LANGUAGE TypeFamilies #-} {-# 
LANGUAGE UndecidableInstances #-} {-# OPTIONS_GHC -Wno-orphans #-} + module Test.Consensus.Cardano.Translation (tests) where -import qualified Cardano.Chain.Block as Byron -import qualified Cardano.Chain.UTxO as Byron -import Cardano.Ledger.Alonzo () -import Cardano.Ledger.BaseTypes (TxIx (..)) -import qualified Cardano.Ledger.Core as Core -import qualified Cardano.Ledger.Genesis as Genesis -import Cardano.Ledger.Shelley.API - (NewEpochState (stashedAVVMAddresses), ShelleyGenesis (..), - TxIn (..), translateCompactTxOutByronToShelley, - translateTxIdByronToShelley) -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Ledger.Shelley.LedgerState (esLState, lsUTxOState, - nesEs, utxosUtxo) -import Cardano.Ledger.Shelley.Translation -import Cardano.Ledger.Shelley.UTxO (UTxO (..)) -import Cardano.Slotting.EpochInfo (fixedEpochInfo) -import Cardano.Slotting.Slot (EpochNo (..)) -import qualified Data.Map.Strict as Map -import Data.SOP.BasicFunctors -import Data.SOP.Functors -import Data.SOP.InPairs (RequiringBoth (..), provideBoth) -import Ouroboros.Consensus.BlockchainTime.WallClock.Types - (slotLengthFromSec) -import Ouroboros.Consensus.Byron.Ledger (ByronBlock, byronLedgerState) -import Ouroboros.Consensus.Cardano.Block (CardanoEras) -import Ouroboros.Consensus.Cardano.CanHardFork -import Ouroboros.Consensus.Cardano.CanHardFork () -import Ouroboros.Consensus.HardFork.Combinator (InPairs (..), - hardForkEraTranslation, translateLedgerState) -import Ouroboros.Consensus.HardFork.Combinator.State.Types - (TranslateLedgerState (TranslateLedgerState, translateLedgerStateWith)) -import Ouroboros.Consensus.Ledger.Basics (LedgerCfg, LedgerConfig, - LedgerState) -import Ouroboros.Consensus.Ledger.Tables hiding (TxIn) -import Ouroboros.Consensus.Ledger.Tables.Diff (Diff) -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Protocol.Praos -import 
Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock, - ShelleyLedgerConfig, mkShelleyLedgerConfig, - shelleyLedgerState, shelleyLedgerTables) -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.TypeFamilyWrappers -import Test.Cardano.Ledger.Alonzo.Serialisation.Generators () -import Test.Cardano.Ledger.Babbage.Serialisation.Generators () -import Test.Cardano.Ledger.Conway.Arbitrary () -import Test.Cardano.Ledger.Shelley.Examples.Consensus -import Test.Consensus.Byron.Generators (genByronLedgerConfig, - genByronLedgerState) -import Test.Consensus.Cardano.MockCrypto (MockCryptoCompatByron) -import Test.Consensus.Shelley.Generators () -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck +import Cardano.Chain.Block qualified as Byron +import Cardano.Chain.UTxO qualified as Byron +import Cardano.Ledger.Alonzo () +import Cardano.Ledger.BaseTypes (TxIx (..)) +import Cardano.Ledger.Core qualified as Core +import Cardano.Ledger.Genesis qualified as Genesis +import Cardano.Ledger.Shelley.API + ( NewEpochState (stashedAVVMAddresses) + , ShelleyGenesis (..) + , TxIn (..) 
+ , translateCompactTxOutByronToShelley + , translateTxIdByronToShelley + ) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.LedgerState + ( esLState + , lsUTxOState + , nesEs + , utxosUtxo + ) +import Cardano.Ledger.Shelley.Translation +import Cardano.Ledger.Shelley.UTxO (UTxO (..)) +import Cardano.Slotting.EpochInfo (fixedEpochInfo) +import Cardano.Slotting.Slot (EpochNo (..)) +import Data.Map.Strict qualified as Map +import Data.SOP.BasicFunctors +import Data.SOP.Functors +import Data.SOP.InPairs (RequiringBoth (..), provideBoth) +import Ouroboros.Consensus.BlockchainTime.WallClock.Types + ( slotLengthFromSec + ) +import Ouroboros.Consensus.Byron.Ledger (ByronBlock, byronLedgerState) +import Ouroboros.Consensus.Cardano.Block (CardanoEras) +import Ouroboros.Consensus.Cardano.CanHardFork +import Ouroboros.Consensus.Cardano.CanHardFork () +import Ouroboros.Consensus.HardFork.Combinator + ( InPairs (..) + , hardForkEraTranslation + , translateLedgerState + ) +import Ouroboros.Consensus.HardFork.Combinator.State.Types + ( TranslateLedgerState (TranslateLedgerState, translateLedgerStateWith) + ) +import Ouroboros.Consensus.Ledger.Basics + ( LedgerCfg + , LedgerConfig + , LedgerState + ) +import Ouroboros.Consensus.Ledger.Tables hiding (TxIn) +import Ouroboros.Consensus.Ledger.Tables.Diff (Diff) +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Protocol.Praos +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger + ( ShelleyBlock + , ShelleyLedgerConfig + , mkShelleyLedgerConfig + , shelleyLedgerState + , shelleyLedgerTables + ) +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.TypeFamilyWrappers +import Test.Cardano.Ledger.Alonzo.Serialisation.Generators () +import 
Test.Cardano.Ledger.Babbage.Serialisation.Generators () +import Test.Cardano.Ledger.Conway.Arbitrary () +import Test.Cardano.Ledger.Shelley.Examples.Consensus +import Test.Consensus.Byron.Generators + ( genByronLedgerConfig + , genByronLedgerState + ) +import Test.Consensus.Cardano.MockCrypto (MockCryptoCompatByron) +import Test.Consensus.Shelley.Generators () +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck -- Definitions to make the signatures a bit less unwieldy type Crypto = MockCryptoCompatByron -type Proto = TPraos Crypto +type Proto = TPraos Crypto tests :: TestTree -tests = testGroup "UpdateTablesOnEraTransition" - [ testTablesTranslation "Byron to Shelley" - byronToShelleyLedgerStateTranslation - byronUtxosAreInsertsInShelleyUtxoDiff - (\st -> cover 50 ( nonEmptyUtxosByron st) "UTxO set is not empty" - -- The Byron ledger generators are very - -- unlikely to generate an empty UTxO, but we - -- want to test with the empty UTxO as well. - -- See 'Test.Cardano.Chain.UTxO.Gen.genUTxO' - -- and the @Arbitrary - -- 'Cardano.Chain.UTxO.UTxO'@ instance in - -- "Test.Consensus.Byron.Generators". - . 
cover 0.1 (not $ nonEmptyUtxosByron st) "UTxO set is empty" - ) - , testTablesTranslation "Shelley to Allegra" - shelleyToAllegraLedgerStateTranslation - shelleyAvvmAddressesAreDeletesInUtxoDiff - (\st -> cover 50 (nonEmptyAvvmAddresses st) "AVVM set is not empty") - , testTablesTranslation "Allegra to Mary" - allegraToMaryLedgerStateTranslation - utxoTablesAreEmpty - (\st -> cover 50 (nonEmptyUtxosShelley st) "UTxO set is not empty") - , testTablesTranslation "Mary to Alonzo" - maryToAlonzoLedgerStateTranslation - utxoTablesAreEmpty - (\st -> cover 50 (nonEmptyUtxosShelley st) "UTxO set is not empty") - , testTablesTranslation "Alonzo to Babbage" - alonzoToBabbageLedgerStateTranslation - utxoTablesAreEmpty - (\st -> cover 50 (nonEmptyUtxosShelley st) "UTxO set is not empty") - , testTablesTranslation "Babbage to Conway" - babbageToConwayLedgerStateTranslation - utxoTablesAreEmpty - (\st -> cover 50 (nonEmptyUtxosShelley st) "UTxO set is not empty") - ] - +tests = + testGroup + "UpdateTablesOnEraTransition" + [ testTablesTranslation + "Byron to Shelley" + byronToShelleyLedgerStateTranslation + byronUtxosAreInsertsInShelleyUtxoDiff + ( \st -> + cover 50 (nonEmptyUtxosByron st) "UTxO set is not empty" + -- The Byron ledger generators are very + -- unlikely to generate an empty UTxO, but we + -- want to test with the empty UTxO as well. + -- See 'Test.Cardano.Chain.UTxO.Gen.genUTxO' + -- and the @Arbitrary + -- 'Cardano.Chain.UTxO.UTxO'@ instance in + -- "Test.Consensus.Byron.Generators". + . 
cover 0.1 (not $ nonEmptyUtxosByron st) "UTxO set is empty" + ) + , testTablesTranslation + "Shelley to Allegra" + shelleyToAllegraLedgerStateTranslation + shelleyAvvmAddressesAreDeletesInUtxoDiff + (\st -> cover 50 (nonEmptyAvvmAddresses st) "AVVM set is not empty") + , testTablesTranslation + "Allegra to Mary" + allegraToMaryLedgerStateTranslation + utxoTablesAreEmpty + (\st -> cover 50 (nonEmptyUtxosShelley st) "UTxO set is not empty") + , testTablesTranslation + "Mary to Alonzo" + maryToAlonzoLedgerStateTranslation + utxoTablesAreEmpty + (\st -> cover 50 (nonEmptyUtxosShelley st) "UTxO set is not empty") + , testTablesTranslation + "Alonzo to Babbage" + alonzoToBabbageLedgerStateTranslation + utxoTablesAreEmpty + (\st -> cover 50 (nonEmptyUtxosShelley st) "UTxO set is not empty") + , testTablesTranslation + "Babbage to Conway" + babbageToConwayLedgerStateTranslation + utxoTablesAreEmpty + (\st -> cover 50 (nonEmptyUtxosShelley st) "UTxO set is not empty") + ] {------------------------------------------------------------------------------- Ledger-state translations between eras that we test in this module @@ -121,48 +151,65 @@ tests = testGroup "UpdateTablesOnEraTransition" -- and other translations in 'Ouroboros.Consensus.Cardano.CanHardFork'. 
byronToShelleyLedgerStateTranslation :: RequiringBoth - WrapLedgerConfig - TranslateLedgerState - ByronBlock - (ShelleyBlock (TPraos Crypto) ShelleyEra) -shelleyToAllegraLedgerStateTranslation :: RequiringBoth - WrapLedgerConfig - TranslateLedgerState - (ShelleyBlock (TPraos Crypto) ShelleyEra) - (ShelleyBlock (TPraos Crypto) AllegraEra) -allegraToMaryLedgerStateTranslation :: RequiringBoth - WrapLedgerConfig - TranslateLedgerState - (ShelleyBlock (TPraos Crypto) AllegraEra) - (ShelleyBlock (TPraos Crypto) MaryEra) -maryToAlonzoLedgerStateTranslation :: RequiringBoth - WrapLedgerConfig - TranslateLedgerState - (ShelleyBlock (TPraos Crypto) MaryEra) - (ShelleyBlock (TPraos Crypto) AlonzoEra) -alonzoToBabbageLedgerStateTranslation :: RequiringBoth - WrapLedgerConfig - TranslateLedgerState - (ShelleyBlock (TPraos Crypto) AlonzoEra) - (ShelleyBlock (Praos Crypto) BabbageEra) -PCons byronToShelleyLedgerStateTranslation - (PCons shelleyToAllegraLedgerStateTranslation - (PCons allegraToMaryLedgerStateTranslation - (PCons maryToAlonzoLedgerStateTranslation - (PCons alonzoToBabbageLedgerStateTranslation - (PCons _ - PNil))))) = tls - where - tls :: InPairs - (RequiringBoth WrapLedgerConfig TranslateLedgerState) - (CardanoEras Crypto) + WrapLedgerConfig + TranslateLedgerState + ByronBlock + (ShelleyBlock (TPraos Crypto) ShelleyEra) +shelleyToAllegraLedgerStateTranslation :: + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + (ShelleyBlock (TPraos Crypto) ShelleyEra) + (ShelleyBlock (TPraos Crypto) AllegraEra) +allegraToMaryLedgerStateTranslation :: + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + (ShelleyBlock (TPraos Crypto) AllegraEra) + (ShelleyBlock (TPraos Crypto) MaryEra) +maryToAlonzoLedgerStateTranslation :: + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + (ShelleyBlock (TPraos Crypto) MaryEra) + (ShelleyBlock (TPraos Crypto) AlonzoEra) +alonzoToBabbageLedgerStateTranslation :: + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + 
(ShelleyBlock (TPraos Crypto) AlonzoEra) + (ShelleyBlock (Praos Crypto) BabbageEra) +PCons + byronToShelleyLedgerStateTranslation + ( PCons + shelleyToAllegraLedgerStateTranslation + ( PCons + allegraToMaryLedgerStateTranslation + ( PCons + maryToAlonzoLedgerStateTranslation + ( PCons + alonzoToBabbageLedgerStateTranslation + ( PCons + _ + PNil + ) + ) + ) + ) + ) = tls + where + tls :: + InPairs + (RequiringBoth WrapLedgerConfig TranslateLedgerState) + (CardanoEras Crypto) tls = translateLedgerState hardForkEraTranslation -babbageToConwayLedgerStateTranslation :: RequiringBoth - WrapLedgerConfig - TranslateLedgerState - (ShelleyBlock (Praos Crypto) BabbageEra) - (ShelleyBlock (Praos Crypto) ConwayEra) +babbageToConwayLedgerStateTranslation :: + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + (ShelleyBlock (Praos Crypto) BabbageEra) + (ShelleyBlock (Praos Crypto) ConwayEra) babbageToConwayLedgerStateTranslation = translateLedgerStateBabbageToConwayWrapper -- | Tech debt: The babbage to conway translation performs a tick, and we would @@ -171,189 +218,230 @@ babbageToConwayLedgerStateTranslation = translateLedgerStateBabbageToConwayWrapp -- -- This should be fixed once the real translation is fixed. translateLedgerStateBabbageToConwayWrapper :: - RequiringBoth - WrapLedgerConfig - TranslateLedgerState - (ShelleyBlock (Praos Crypto) BabbageEra) - (ShelleyBlock (Praos Crypto) ConwayEra) + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + (ShelleyBlock (Praos Crypto) BabbageEra) + (ShelleyBlock (Praos Crypto) ConwayEra) translateLedgerStateBabbageToConwayWrapper = - RequireBoth $ \_ cfgConway -> - TranslateLedgerState $ \_ -> - noNewTickingDiffs - . unFlip - . unComp - . Core.translateEra' (getConwayTranslationContext cfgConway) - . Comp - . Flip + RequireBoth $ \_ cfgConway -> + TranslateLedgerState $ \_ -> + noNewTickingDiffs + . unFlip + . unComp + . Core.translateEra' (getConwayTranslationContext cfgConway) + . Comp + . 
Flip -- | Check that the tables are correctly translated from one era to the next. testTablesTranslation :: - forall srcBlk dstBlk. - ( Arbitrary (TestSetup srcBlk dstBlk) - , Show (LedgerCfg (LedgerState srcBlk)) - , Show (LedgerCfg (LedgerState dstBlk)) - , Show (LedgerState srcBlk EmptyMK) - ) - => String - -- ^ Property label - -> RequiringBoth - WrapLedgerConfig - TranslateLedgerState - srcBlk - dstBlk - -> (LedgerState srcBlk EmptyMK -> LedgerState dstBlk DiffMK -> Bool) - -> (LedgerState srcBlk EmptyMK -> Property -> Property) - -- ^ Coverage testing function - -> TestTree + forall srcBlk dstBlk. + ( Arbitrary (TestSetup srcBlk dstBlk) + , Show (LedgerCfg (LedgerState srcBlk)) + , Show (LedgerCfg (LedgerState dstBlk)) + , Show (LedgerState srcBlk EmptyMK) + ) => + -- | Property label + String -> + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + srcBlk + dstBlk -> + (LedgerState srcBlk EmptyMK -> LedgerState dstBlk DiffMK -> Bool) -> + -- | Coverage testing function + (LedgerState srcBlk EmptyMK -> Property -> Property) -> + TestTree testTablesTranslation propLabel translateWithConfig translationShouldSatisfy ledgerStateShouldCover = - testProperty propLabel withTestSetup - where - withTestSetup :: TestSetup srcBlk dstBlk -> Property - withTestSetup ts = - checkCoverage $ ledgerStateShouldCover tsSrcLedgerState - $ property - $ translationShouldSatisfy tsSrcLedgerState destState - where - TestSetup {tsSrcLedgerConfig, tsDestLedgerConfig, tsSrcLedgerState, tsEpochNo} = ts - destState = translateLedgerStateWith translation tsEpochNo tsSrcLedgerState - where - translation :: TranslateLedgerState srcBlk dstBlk - translation = provideBoth translateWithConfig - (WrapLedgerConfig tsSrcLedgerConfig) - (WrapLedgerConfig tsDestLedgerConfig) + testProperty propLabel withTestSetup + where + withTestSetup :: TestSetup srcBlk dstBlk -> Property + withTestSetup ts = + checkCoverage $ + ledgerStateShouldCover tsSrcLedgerState $ + property $ + 
translationShouldSatisfy tsSrcLedgerState destState + where + TestSetup{tsSrcLedgerConfig, tsDestLedgerConfig, tsSrcLedgerState, tsEpochNo} = ts + destState = translateLedgerStateWith translation tsEpochNo tsSrcLedgerState + where + translation :: TranslateLedgerState srcBlk dstBlk + translation = + provideBoth + translateWithConfig + (WrapLedgerConfig tsSrcLedgerConfig) + (WrapLedgerConfig tsDestLedgerConfig) {------------------------------------------------------------------------------- Specific predicates -------------------------------------------------------------------------------} -byronUtxosAreInsertsInShelleyUtxoDiff - :: LedgerState ByronBlock EmptyMK - -> LedgerState (ShelleyBlock Proto ShelleyEra) DiffMK - -> Bool +byronUtxosAreInsertsInShelleyUtxoDiff :: + LedgerState ByronBlock EmptyMK -> + LedgerState (ShelleyBlock Proto ShelleyEra) DiffMK -> + Bool byronUtxosAreInsertsInShelleyUtxoDiff srcLedgerState destLedgerState = - toNextUtxoDiff srcLedgerState == extractUtxoDiff destLedgerState - where - toNextUtxoDiff - :: LedgerState ByronBlock mk - -> Diff.Diff SL.TxIn (Core.TxOut ShelleyEra) - toNextUtxoDiff ledgerState = - let - Byron.UTxO utxo = Byron.cvsUtxo $ byronLedgerState ledgerState - keyFn = translateTxInByronToShelley . Byron.fromCompactTxIn - valFn = Diff.Insert . translateCompactTxOutByronToShelley - in - Diff.Diff $ Map.map valFn $ Map.mapKeys keyFn utxo + toNextUtxoDiff srcLedgerState == extractUtxoDiff destLedgerState + where + toNextUtxoDiff :: + LedgerState ByronBlock mk -> + Diff.Diff SL.TxIn (Core.TxOut ShelleyEra) + toNextUtxoDiff ledgerState = + let + Byron.UTxO utxo = Byron.cvsUtxo $ byronLedgerState ledgerState + keyFn = translateTxInByronToShelley . Byron.fromCompactTxIn + valFn = Diff.Insert . 
translateCompactTxOutByronToShelley + in + Diff.Diff $ Map.map valFn $ Map.mapKeys keyFn utxo - translateTxInByronToShelley :: Byron.TxIn -> TxIn - translateTxInByronToShelley byronTxIn = - let - Byron.TxInUtxo txId txIx = byronTxIn - shelleyTxId' = translateTxIdByronToShelley txId - in - TxIn shelleyTxId' (TxIx txIx) + translateTxInByronToShelley :: Byron.TxIn -> TxIn + translateTxInByronToShelley byronTxIn = + let + Byron.TxInUtxo txId txIx = byronTxIn + shelleyTxId' = translateTxIdByronToShelley txId + in + TxIn shelleyTxId' (TxIx txIx) -shelleyAvvmAddressesAreDeletesInUtxoDiff - :: LedgerState (ShelleyBlock Proto ShelleyEra) EmptyMK - -> LedgerState (ShelleyBlock Proto AllegraEra) DiffMK - -> Bool +shelleyAvvmAddressesAreDeletesInUtxoDiff :: + LedgerState (ShelleyBlock Proto ShelleyEra) EmptyMK -> + LedgerState (ShelleyBlock Proto AllegraEra) DiffMK -> + Bool shelleyAvvmAddressesAreDeletesInUtxoDiff srcLedgerState destLedgerState = - toNextUtxoDiff srcLedgerState == extractUtxoDiff destLedgerState - where - toNextUtxoDiff - :: LedgerState (ShelleyBlock Proto ShelleyEra) EmptyMK - -> Diff.Diff SL.TxIn (Core.TxOut AllegraEra) - toNextUtxoDiff = avvmAddressesToUtxoDiff . stashedAVVMAddresses . shelleyLedgerState - avvmAddressesToUtxoDiff (UTxO m) = Diff.Diff $ Map.map (\_ -> Diff.Delete) m + toNextUtxoDiff srcLedgerState == extractUtxoDiff destLedgerState + where + toNextUtxoDiff :: + LedgerState (ShelleyBlock Proto ShelleyEra) EmptyMK -> + Diff.Diff SL.TxIn (Core.TxOut AllegraEra) + toNextUtxoDiff = avvmAddressesToUtxoDiff . stashedAVVMAddresses . 
shelleyLedgerState + avvmAddressesToUtxoDiff (UTxO m) = Diff.Diff $ Map.map (\_ -> Diff.Delete) m -utxoTablesAreEmpty - :: LedgerState (ShelleyBlock srcProto srcEra) EmptyMK - -> LedgerState (ShelleyBlock destProto destEra) DiffMK - -> Bool +utxoTablesAreEmpty :: + LedgerState (ShelleyBlock srcProto srcEra) EmptyMK -> + LedgerState (ShelleyBlock destProto destEra) DiffMK -> + Bool utxoTablesAreEmpty _ destLedgerState = Diff.null $ extractUtxoDiff destLedgerState nonEmptyUtxosByron :: LedgerState ByronBlock EmptyMK -> Bool nonEmptyUtxosByron ledgerState = let Byron.UTxO utxo = Byron.cvsUtxo $ byronLedgerState ledgerState - in not $ Map.null utxo + in not $ Map.null utxo nonEmptyUtxosShelley :: LedgerState (ShelleyBlock proto era) EmptyMK -> Bool nonEmptyUtxosShelley ledgerState = let UTxO m = utxosUtxo $ lsUTxOState $ esLState $ nesEs $ shelleyLedgerState ledgerState - in not $ Map.null m + in not $ Map.null m nonEmptyAvvmAddresses :: LedgerState (ShelleyBlock Proto ShelleyEra) EmptyMK -> Bool nonEmptyAvvmAddresses ledgerState = let UTxO m = stashedAVVMAddresses $ shelleyLedgerState ledgerState - in not $ Map.null m + in not $ Map.null m {------------------------------------------------------------------------------- Utilities -------------------------------------------------------------------------------} -extractUtxoDiff - :: LedgerState (ShelleyBlock proto era) DiffMK - -> Diff SL.TxIn (Core.TxOut era) +extractUtxoDiff :: + LedgerState (ShelleyBlock proto era) DiffMK -> + Diff SL.TxIn (Core.TxOut era) extractUtxoDiff shelleyLedgerState = let DiffMK tables = getLedgerTables $ shelleyLedgerTables shelleyLedgerState - in tables + in tables {------------------------------------------------------------------------------- TestSetup -------------------------------------------------------------------------------} -data TestSetup src dest = TestSetup { - tsSrcLedgerConfig :: LedgerConfig src +data TestSetup src dest = TestSetup + { tsSrcLedgerConfig :: LedgerConfig src , 
tsDestLedgerConfig :: LedgerConfig dest - , tsSrcLedgerState :: LedgerState src EmptyMK - , tsEpochNo :: EpochNo -} + , tsSrcLedgerState :: LedgerState src EmptyMK + , tsEpochNo :: EpochNo + } -deriving instance ( Show (LedgerConfig src) - , Show (LedgerConfig dest) - , Show (LedgerState src EmptyMK)) => Show (TestSetup src dest) +deriving instance + ( Show (LedgerConfig src) + , Show (LedgerConfig dest) + , Show (LedgerState src EmptyMK) + ) => + Show (TestSetup src dest) instance Arbitrary (TestSetup ByronBlock (ShelleyBlock Proto ShelleyEra)) where arbitrary = let ledgerConfig = fixedShelleyLedgerConfig emptyFromByronTranslationContext - in TestSetup <$> genByronLedgerConfig - <*> pure ledgerConfig - <*> genByronLedgerState - <*> (EpochNo <$> arbitrary) + in TestSetup + <$> genByronLedgerConfig + <*> pure ledgerConfig + <*> genByronLedgerState + <*> (EpochNo <$> arbitrary) -instance Arbitrary (TestSetup (ShelleyBlock Proto ShelleyEra) - (ShelleyBlock Proto AllegraEra)) where - arbitrary = TestSetup (fixedShelleyLedgerConfig emptyFromByronTranslationContext) - (fixedShelleyLedgerConfig Genesis.NoGenesis) - <$> arbitrary - <*> (EpochNo <$> arbitrary) +instance + Arbitrary + ( TestSetup + (ShelleyBlock Proto ShelleyEra) + (ShelleyBlock Proto AllegraEra) + ) + where + arbitrary = + TestSetup + (fixedShelleyLedgerConfig emptyFromByronTranslationContext) + (fixedShelleyLedgerConfig Genesis.NoGenesis) + <$> arbitrary + <*> (EpochNo <$> arbitrary) -instance Arbitrary (TestSetup (ShelleyBlock Proto AllegraEra) - (ShelleyBlock Proto MaryEra)) where - arbitrary = TestSetup (fixedShelleyLedgerConfig Genesis.NoGenesis) - (fixedShelleyLedgerConfig Genesis.NoGenesis) - <$> arbitrary - <*> (EpochNo <$> arbitrary) +instance + Arbitrary + ( TestSetup + (ShelleyBlock Proto AllegraEra) + (ShelleyBlock Proto MaryEra) + ) + where + arbitrary = + TestSetup + (fixedShelleyLedgerConfig Genesis.NoGenesis) + (fixedShelleyLedgerConfig Genesis.NoGenesis) + <$> arbitrary + <*> (EpochNo <$> 
arbitrary) -instance Arbitrary (TestSetup (ShelleyBlock Proto MaryEra) - (ShelleyBlock Proto AlonzoEra)) where - arbitrary = TestSetup (fixedShelleyLedgerConfig Genesis.NoGenesis) - <$> (fixedShelleyLedgerConfig <$> arbitrary) - <*> arbitrary - <*> (EpochNo <$> arbitrary) +instance + Arbitrary + ( TestSetup + (ShelleyBlock Proto MaryEra) + (ShelleyBlock Proto AlonzoEra) + ) + where + arbitrary = + TestSetup (fixedShelleyLedgerConfig Genesis.NoGenesis) + <$> (fixedShelleyLedgerConfig <$> arbitrary) + <*> arbitrary + <*> (EpochNo <$> arbitrary) -instance Arbitrary (TestSetup (ShelleyBlock (TPraos Crypto) AlonzoEra) - (ShelleyBlock (Praos Crypto) BabbageEra)) where - arbitrary = TestSetup <$> (fixedShelleyLedgerConfig <$> arbitrary) - <*> (pure $ fixedShelleyLedgerConfig Genesis.NoGenesis) - <*> arbitrary - <*> (EpochNo <$> arbitrary) +instance + Arbitrary + ( TestSetup + (ShelleyBlock (TPraos Crypto) AlonzoEra) + (ShelleyBlock (Praos Crypto) BabbageEra) + ) + where + arbitrary = + TestSetup + <$> (fixedShelleyLedgerConfig <$> arbitrary) + <*> (pure $ fixedShelleyLedgerConfig Genesis.NoGenesis) + <*> arbitrary + <*> (EpochNo <$> arbitrary) -instance Arbitrary (TestSetup (ShelleyBlock (Praos Crypto) BabbageEra) - (ShelleyBlock (Praos Crypto) ConwayEra)) where - arbitrary = TestSetup <$> (pure $ fixedShelleyLedgerConfig Genesis.NoGenesis) - <*> (fixedShelleyLedgerConfig <$> arbitrary) - <*> arbitrary - <*> (EpochNo <$> arbitrary) +instance + Arbitrary + ( TestSetup + (ShelleyBlock (Praos Crypto) BabbageEra) + (ShelleyBlock (Praos Crypto) ConwayEra) + ) + where + arbitrary = + TestSetup + <$> (pure $ fixedShelleyLedgerConfig Genesis.NoGenesis) + <*> (fixedShelleyLedgerConfig <$> arbitrary) + <*> arbitrary + <*> (EpochNo <$> arbitrary) {------------------------------------------------------------------------------- Generators @@ -363,8 +451,8 @@ instance Arbitrary (TestSetup (ShelleyBlock (Praos Crypto) BabbageEra) -- tables on era transitions does not depend on the 
configurations of any of -- the ledgers involved. fixedShelleyLedgerConfig :: - Core.TranslationContext era - -> ShelleyLedgerConfig era + Core.TranslationContext era -> + ShelleyLedgerConfig era fixedShelleyLedgerConfig translationContext = mkShelleyLedgerConfig testShelleyGenesis diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/AllegraMary.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/AllegraMary.hs index 84625dc6c8..662264f3d2 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/AllegraMary.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/AllegraMary.hs @@ -9,64 +9,68 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.ThreadNet.AllegraMary (tests) where -import qualified Cardano.Ledger.Api.Transition as L -import Cardano.Ledger.BaseTypes (nonZero, unNonZero) -import qualified Cardano.Ledger.BaseTypes as SL -import qualified Cardano.Ledger.Shelley.Core as SL -import qualified Cardano.Protocol.TPraos.OCert as SL -import Cardano.Slotting.Slot (EpochSize (..), SlotNo (..)) -import Control.Monad (replicateM) -import qualified Data.Map.Strict as Map -import Data.Maybe (maybeToList) -import Data.Proxy (Proxy (..)) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.SOP.Strict (NP (..)) -import Data.Word (Word64) -import Lens.Micro ((^.)) -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Cardano.Condense () -import Ouroboros.Consensus.Cardano.Node (TriggerHardFork (..)) -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common - (isHardForkNodeToNodeEnabled) -import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import 
Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Shelley.Node - (ProtocolParamsShelleyBased (..), ShelleyGenesis (..)) -import Test.Consensus.Shelley.MockCrypto (MockCrypto) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import qualified Test.ThreadNet.Infra.Shelley as Shelley -import Test.ThreadNet.Infra.ShelleyBasedHardFork -import Test.ThreadNet.Infra.TwoEras -import Test.ThreadNet.Network (NodeOutput (..), - TestNodeInitialization (..)) -import Test.ThreadNet.TxGen -import Test.ThreadNet.TxGen.Allegra () -import Test.ThreadNet.TxGen.Mary () -import Test.ThreadNet.Util.Expectations (NumBlocks (..)) -import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) -import Test.ThreadNet.Util.NodeRestarts (noRestarts) -import Test.ThreadNet.Util.NodeToNodeVersion (genVersionFiltered) -import Test.ThreadNet.Util.Seed (runGen) -import qualified Test.Util.BoolProps as BoolProps -import Test.Util.HardFork.Future (EraSize (..), Future (..)) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) -import Test.Util.TestEnv +import Cardano.Ledger.Api.Transition qualified as L +import Cardano.Ledger.BaseTypes (nonZero, unNonZero) +import Cardano.Ledger.BaseTypes qualified as SL +import Cardano.Ledger.Shelley.Core qualified as SL +import Cardano.Protocol.TPraos.OCert qualified as SL +import Cardano.Slotting.Slot (EpochSize (..), SlotNo (..)) +import Control.Monad (replicateM) +import Data.Map.Strict qualified as Map +import Data.Maybe (maybeToList) +import Data.Proxy (Proxy (..)) +import Data.SOP.Strict (NP (..)) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import Lens.Micro ((^.)) +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Cardano.Condense () +import Ouroboros.Consensus.Cardano.Node (TriggerHardFork 
(..)) +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common + ( isHardForkNodeToNodeEnabled + ) +import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.Node + ( ProtocolParamsShelleyBased (..) + , ShelleyGenesis (..) + ) +import Test.Consensus.Shelley.MockCrypto (MockCrypto) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Infra.Shelley qualified as Shelley +import Test.ThreadNet.Infra.ShelleyBasedHardFork +import Test.ThreadNet.Infra.TwoEras +import Test.ThreadNet.Network + ( NodeOutput (..) + , TestNodeInitialization (..) + ) +import Test.ThreadNet.TxGen +import Test.ThreadNet.TxGen.Allegra () +import Test.ThreadNet.TxGen.Mary () +import Test.ThreadNet.Util.Expectations (NumBlocks (..)) +import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) +import Test.ThreadNet.Util.NodeRestarts (noRestarts) +import Test.ThreadNet.Util.NodeToNodeVersion (genVersionFiltered) +import Test.ThreadNet.Util.Seed (runGen) +import Test.Util.BoolProps qualified as BoolProps +import Test.Util.HardFork.Future (EraSize (..), Future (..)) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) +import Test.Util.TestEnv type AllegraMaryBlock = ShelleyBasedHardForkBlock (TPraos MockCrypto) AllegraEra (TPraos MockCrypto) MaryEra @@ -77,185 +81,202 @@ type AllegraMaryBlock = -- it literally as soon as possible. Therefore, if the test reaches the end of -- the first epoch, the proposal will be adopted. 
data TestSetup = TestSetup - { setupD :: Shelley.DecentralizationParam - , setupHardFork :: Bool - -- ^ whether the proposal should trigger a hard fork or not + { setupD :: Shelley.DecentralizationParam + , setupHardFork :: Bool + -- ^ whether the proposal should trigger a hard fork or not , setupInitialNonce :: SL.Nonce - -- ^ the initial Shelley 'SL.ticknStateEpochNonce' - -- - -- We vary it to ensure we explore different leader schedules. - , setupK :: SecurityParam - , setupPartition :: Partition - , setupSlotLength :: SlotLength - , setupTestConfig :: TestConfig - , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion AllegraMaryBlock) + -- ^ the initial Shelley 'SL.ticknStateEpochNonce' + -- + -- We vary it to ensure we explore different leader schedules. + , setupK :: SecurityParam + , setupPartition :: Partition + , setupSlotLength :: SlotLength + , setupTestConfig :: TestConfig + , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion AllegraMaryBlock) } - deriving (Show) + deriving Show instance Arbitrary TestSetup where arbitrary = do - setupD <- arbitrary - -- The decentralization parameter cannot be 0 in the first - -- Shelley epoch, since stake pools can only be created and - -- delegated to via Shelley transactions. - `suchThat` ((/= 0) . Shelley.decentralizationParamToRational) + setupD <- + arbitrary + -- The decentralization parameter cannot be 0 in the first + -- Shelley epoch, since stake pools can only be created and + -- delegated to via Shelley transactions. + `suchThat` ((/= 0) . Shelley.decentralizationParamToRational) setupK <- SecurityParam <$> choose (8, 10) `suchThatMap` nonZero - -- If k < 8, common prefix violations become too likely in - -- Praos mode for thin overlay schedules (ie low d), even for - -- f=0.2. + -- If k < 8, common prefix violations become too likely in + -- Praos mode for thin overlay schedules (ie low d), even for + -- f=0.2. 
setupInitialNonce <- genNonce - setupSlotLength <- arbitrary + setupSlotLength <- arbitrary let epochSize = EpochSize $ shelleyEpochSize setupK - setupTestConfig <- genTestConfig - setupK - (epochSize, epochSize) + setupTestConfig <- + genTestConfig + setupK + (epochSize, epochSize) let TestConfig{numCoreNodes, numSlots} = setupTestConfig - setupHardFork <- frequency [(49, pure True), (1, pure False)] + setupHardFork <- frequency [(49, pure True), (1, pure False)] -- TODO How reliable is the Byron-based partition duration logic when -- reused for Shelley? setupPartition <- genPartition numCoreNodes numSlots setupK - setupVersion <- genVersionFiltered - isHardForkNodeToNodeEnabled - (Proxy @AllegraMaryBlock) - - pure TestSetup - { setupD - , setupHardFork - , setupInitialNonce - , setupK - , setupPartition - , setupSlotLength - , setupTestConfig - , setupVersion - } + setupVersion <- + genVersionFiltered + isHardForkNodeToNodeEnabled + (Proxy @AllegraMaryBlock) + + pure + TestSetup + { setupD + , setupHardFork + , setupInitialNonce + , setupK + , setupPartition + , setupSlotLength + , setupTestConfig + , setupVersion + } - -- TODO shrink +-- TODO shrink tests :: TestTree -tests = testGroup "AllegraMary ThreadNet" [ - askTestEnv $ adjustTestEnv $ testProperty "simple convergence" prop_simple_allegraMary_convergence - ] - where - adjustTestEnv :: TestTree -> TestEnv -> TestTree - adjustTestEnv tree = \case - Nightly -> tree - _ -> adjustQuickCheckTests (`div` 10) tree +tests = + testGroup + "AllegraMary ThreadNet" + [ askTestEnv $ adjustTestEnv $ testProperty "simple convergence" prop_simple_allegraMary_convergence + ] + where + adjustTestEnv :: TestTree -> TestEnv -> TestTree + adjustTestEnv tree = \case + Nightly -> tree + _ -> adjustQuickCheckTests (`div` 10) tree prop_simple_allegraMary_convergence :: TestSetup -> Property -prop_simple_allegraMary_convergence TestSetup - { setupD - , setupHardFork - , setupInitialNonce - , setupK - , setupPartition - , 
setupSlotLength - , setupTestConfig - , setupVersion - } = - prop_general_semisync pga testOutput .&&. - prop_inSync testOutput .&&. - prop_ReachesEra2 reachesEra2 .&&. - prop_noCPViolation .&&. - ( tabulate "ReachesEra2 label" [label_ReachesEra2 reachesEra2] $ - tabulate "Observed forge during a non-overlay slot in the second era" - [ label_hadActiveNonOverlaySlots - testOutput - overlaySlots - ] $ - tabulatePartitionDuration setupK setupPartition $ - tabulateFinalIntersectionDepth - setupK - (NumBlocks finalIntersectionDepth) - finalBlockEra $ - tabulatePartitionPosition - (NumSlots numFirstEraSlots) - setupPartition - (ledgerReachesEra2 reachesEra2) $ - property True - ) - where +prop_simple_allegraMary_convergence + TestSetup + { setupD + , setupHardFork + , setupInitialNonce + , setupK + , setupPartition + , setupSlotLength + , setupTestConfig + , setupVersion + } = + prop_general_semisync pga testOutput + .&&. prop_inSync testOutput + .&&. prop_ReachesEra2 reachesEra2 + .&&. prop_noCPViolation + .&&. ( tabulate "ReachesEra2 label" [label_ReachesEra2 reachesEra2] + $ tabulate + "Observed forge during a non-overlay slot in the second era" + [ label_hadActiveNonOverlaySlots + testOutput + overlaySlots + ] + $ tabulatePartitionDuration setupK setupPartition + $ tabulateFinalIntersectionDepth + setupK + (NumBlocks finalIntersectionDepth) + finalBlockEra + $ tabulatePartitionPosition + (NumSlots numFirstEraSlots) + setupPartition + (ledgerReachesEra2 reachesEra2) + $ property True + ) + where TestConfig { initSeed , numCoreNodes , numSlots } = setupTestConfig - pga = PropGeneralArgs - { pgaBlockProperty = const $ property True - , pgaCountTxs = fromIntegral . length . extractTxs + pga = + PropGeneralArgs + { pgaBlockProperty = const $ property True + , pgaCountTxs = fromIntegral . length . 
extractTxs , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = 0 - , pgaFixedMaxForkLength = Just maxForkLength - -- the leader schedule isn't fixed because the Shelley leader + , pgaFirstBlockNo = 0 + , pgaFixedMaxForkLength = Just maxForkLength + , -- the leader schedule isn't fixed because the Shelley leader -- schedule is (at least ideally) unpredictable - , pgaFixedSchedule = Nothing - , pgaSecurityParam = setupK - , pgaTestConfig = setupTestConfig - , pgaTestConfigB = testConfigB + pgaFixedSchedule = Nothing + , pgaSecurityParam = setupK + , pgaTestConfig = setupTestConfig + , pgaTestConfigB = testConfigB } - testConfigB = TestConfigB - { forgeEbbEnv = Nothing - , future = - if setupHardFork - then - -- In this case the PVU will trigger the transition to the second era - -- - -- By FACT (B), the PVU is always successful if we reach the second - -- era. - EraCons setupSlotLength epochSize firstEraSize $ - EraFinal setupSlotLength epochSize - else - EraFinal setupSlotLength epochSize - , messageDelay = mkMessageDelay setupPartition - , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes - , nodeRestarts = noRestarts - , txGenExtra = WrapTxGenExtra () :* WrapTxGenExtra () :* Nil - , version = setupVersion - } + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = + if setupHardFork + then + -- In this case the PVU will trigger the transition to the second era + -- + -- By FACT (B), the PVU is always successful if we reach the second + -- era. 
+ EraCons setupSlotLength epochSize firstEraSize $ + EraFinal setupSlotLength epochSize + else + EraFinal setupSlotLength epochSize + , messageDelay = mkMessageDelay setupPartition + , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes + , nodeRestarts = noRestarts + , txGenExtra = WrapTxGenExtra () :* WrapTxGenExtra () :* Nil + , version = setupVersion + } testOutput :: TestOutput AllegraMaryBlock - testOutput = runTestNetwork setupTestConfig testConfigB TestConfigMB { - nodeInfo = \(CoreNodeId nid) -> - let protocolParamsShelleyBased = - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = setupInitialNonce - , shelleyBasedLeaderCredentials = - [Shelley.mkLeaderCredentials - (coreNodes !! fromIntegral nid)] - } - hardForkTrigger = - TriggerHardForkAtVersion $ SL.getVersion majorVersion2 - (protocolInfo, blockForging) = - protocolInfoShelleyBasedHardFork - protocolParamsShelleyBased - (SL.ProtVer majorVersion1 0) - (SL.ProtVer majorVersion2 0) - ( L.mkTransitionConfig L.NoGenesis - $ L.mkTransitionConfig L.NoGenesis - $ L.mkShelleyTransitionConfig genesisShelley - ) - hardForkTrigger - in TestNodeInitialization { - tniCrucialTxs = - if not setupHardFork then [] else - fmap GenTxShelley1 $ - Shelley.mkMASetDecentralizationParamTxs - coreNodes + testOutput = + runTestNetwork + setupTestConfig + testConfigB + TestConfigMB + { nodeInfo = \(CoreNodeId nid) -> + let protocolParamsShelleyBased = + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = setupInitialNonce + , shelleyBasedLeaderCredentials = + [ Shelley.mkLeaderCredentials + (coreNodes !! 
fromIntegral nid) + ] + } + hardForkTrigger = + TriggerHardForkAtVersion $ SL.getVersion majorVersion2 + (protocolInfo, blockForging) = + protocolInfoShelleyBasedHardFork + protocolParamsShelleyBased + (SL.ProtVer majorVersion1 0) (SL.ProtVer majorVersion2 0) - (SlotNo $ unNumSlots numSlots) -- never expire - setupD -- unchanged - , tniProtocolInfo = protocolInfo - , tniBlockForging = blockForging - } - , mkRekeyM = Nothing - } + ( L.mkTransitionConfig L.NoGenesis $ + L.mkTransitionConfig L.NoGenesis $ + L.mkShelleyTransitionConfig genesisShelley + ) + hardForkTrigger + in TestNodeInitialization + { tniCrucialTxs = + if not setupHardFork + then [] + else + fmap GenTxShelley1 $ + Shelley.mkMASetDecentralizationParamTxs + coreNodes + (SL.ProtVer majorVersion2 0) + (SlotNo $ unNumSlots numSlots) -- never expire + setupD -- unchanged + , tniProtocolInfo = protocolInfo + , tniBlockForging = blockForging + } + , mkRekeyM = Nothing + } maxForkLength :: NumBlocks maxForkLength = NumBlocks $ unNonZero $ maxRollbacks setupK @@ -264,11 +285,12 @@ prop_simple_allegraMary_convergence TestSetup initialKESPeriod = SL.KESPeriod 0 coreNodes :: [Shelley.CoreNode MockCrypto] - coreNodes = runGen initSeed $ + coreNodes = + runGen initSeed $ replicateM (fromIntegral n) $ Shelley.genCoreNode initialKESPeriod - where - NumCoreNodes n = numCoreNodes + where + NumCoreNodes n = numCoreNodes maxLovelaceSupply :: Word64 maxLovelaceSupply = @@ -276,15 +298,15 @@ prop_simple_allegraMary_convergence TestSetup genesisShelley :: ShelleyGenesis genesisShelley = - Shelley.mkGenesisConfig - (SL.ProtVer majorVersion1 0) - setupK - activeSlotCoeff - setupD - maxLovelaceSupply - setupSlotLength - (Shelley.mkKesConfig (Proxy @MockCrypto) numSlots) - coreNodes + Shelley.mkGenesisConfig + (SL.ProtVer majorVersion1 0) + setupK + activeSlotCoeff + setupD + maxLovelaceSupply + setupSlotLength + (Shelley.mkKesConfig (Proxy @MockCrypto) numSlots) + coreNodes -- the Shelley ledger is designed to use a fixed 
epoch size, so this test -- does not randomize it @@ -297,62 +319,65 @@ prop_simple_allegraMary_convergence TestSetup -- Classifying test cases reachesEra2 :: ReachesEra2 - reachesEra2 = ReachesEra2 - { rsEra1Slots = - BoolProps.enabledIf $ t > numFirstEraSlots - , rsPV = BoolProps.enabledIf setupHardFork - , rsEra2Blocks = - or $ - [ not $ isFirstEraBlock blk - | (_nid, no) <- Map.toList testOutputNodes - , let NodeOutput{nodeOutputForges} = no - , (blk, _m) <- maybeToList $ Map.maxView nodeOutputForges - -- the last block the node forged - ] - , rsEra2Slots = - --- TODO this comment and code are wrong - - BoolProps.requiredIf $ - -- The active slots in the first two Shelley epochs are all overlay - -- slots, so the first Shelley block will arise from one of those. - not $ Set.null overlaySlots - } - where - NumSlots t = numSlots - TestOutput{testOutputNodes} = testOutput + reachesEra2 = + ReachesEra2 + { rsEra1Slots = + BoolProps.enabledIf $ t > numFirstEraSlots + , rsPV = BoolProps.enabledIf setupHardFork + , rsEra2Blocks = + or $ + [ not $ isFirstEraBlock blk + | (_nid, no) <- Map.toList testOutputNodes + , let NodeOutput{nodeOutputForges} = no + , (blk, _m) <- maybeToList $ Map.maxView nodeOutputForges + -- the last block the node forged + ] + , rsEra2Slots = + --- TODO this comment and code are wrong + + BoolProps.requiredIf $ + -- The active slots in the first two Shelley epochs are all overlay + -- slots, so the first Shelley block will arise from one of those. + not $ + Set.null overlaySlots + } + where + NumSlots t = numSlots + TestOutput{testOutputNodes} = testOutput -- All OBFT overlay slots in the second era. overlaySlots :: Set SlotNo overlaySlots = - secondEraOverlaySlots - numSlots - (NumSlots numFirstEraSlots) - (sgProtocolParams genesisShelley ^. SL.ppDG) - epochSize + secondEraOverlaySlots + numSlots + (NumSlots numFirstEraSlots) + (sgProtocolParams genesisShelley ^. 
SL.ppDG)
+      epochSize

   numFirstEraSlots :: Word64
   numFirstEraSlots =
-    numFirstEraEpochs * unEpochSize epochSize
+      numFirstEraEpochs * unEpochSize epochSize

   finalBlockEra :: String
   finalBlockEra =
-    if rsEra2Blocks reachesEra2
+      if rsEra2Blocks reachesEra2
        then "Mary"
        else "Allegra"

   finalIntersectionDepth :: Word64
   finalIntersectionDepth = depth
-    where
-      NumBlocks depth = calcFinalIntersectionDepth pga testOutput
+     where
+      NumBlocks depth = calcFinalIntersectionDepth pga testOutput

   prop_noCPViolation :: Property
   prop_noCPViolation =
-      counterexample
-        ( "finalChains: " <>
-          show (nodeOutputFinalChain <$> testOutputNodes testOutput)
-        ) $
-      counterexample "CP violation in final chains!" $
-      property $ unNonZero (maxRollbacks setupK) >= finalIntersectionDepth
+      counterexample
+        ( "finalChains: "
+            <> show (nodeOutputFinalChain <$> testOutputNodes testOutput)
+        )
+        $ counterexample "CP violation in final chains!"
+        $ property
+        $ unNonZero (maxRollbacks setupK) >= finalIntersectionDepth

 {-------------------------------------------------------------------------------
   Constants
diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/Cardano.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/Cardano.hs
index c2a6b5db3a..af9a9d51d4 100644
--- a/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/Cardano.hs
+++ b/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/Cardano.hs
@@ -10,75 +10,80 @@
 module Test.ThreadNet.Cardano (tests) where

-import qualified Cardano.Chain.Block as CC
-import qualified Cardano.Chain.Common as CC.Common
-import qualified Cardano.Chain.Genesis as CC.Genesis
-import Cardano.Chain.ProtocolConstants (kEpochSlots)
-import Cardano.Chain.Slotting (unEpochSlots)
-import qualified Cardano.Chain.Update as CC.Update
-import qualified Cardano.Chain.Update.Validation.Interface as CC
-import qualified Cardano.Ledger.Api.Era as L
-import Cardano.Ledger.BaseTypes (nonZero, unNonZero)
-import qualified 
Cardano.Ledger.BaseTypes as SL -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.Core as SL -import qualified Cardano.Protocol.TPraos.OCert as SL -import Cardano.Slotting.Slot (EpochSize (..), SlotNo (..)) -import Control.Exception (assert) -import Control.Monad (replicateM) -import qualified Data.Map.Strict as Map -import Data.Maybe (maybeToList) -import Data.Proxy (Proxy (..)) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.SOP.Functors -import Data.Word (Word64) -import Lens.Micro -import Ouroboros.Consensus.Block.Forging (BlockForging) -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Byron.Ledger (LedgerState (..)) -import Ouroboros.Consensus.Byron.Ledger.Block (ByronBlock) -import Ouroboros.Consensus.Byron.Ledger.Conversions -import Ouroboros.Consensus.Byron.Node -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Cardano.Condense () -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common - (isHardForkNodeToNodeEnabled) -import Ouroboros.Consensus.HardFork.Combinator.State (Current (..)) -import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState (..)) -import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.PBFT -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Shelley.Node -import Ouroboros.Consensus.Util.IOLike (IOLike) -import Test.Consensus.Cardano.ProtocolInfo - (hardForkOnDefaultProtocolVersions, mkTestProtocolInfo) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import qualified Test.ThreadNet.Infra.Byron as Byron -import 
qualified Test.ThreadNet.Infra.Shelley as Shelley -import Test.ThreadNet.Infra.TwoEras -import Test.ThreadNet.Network (NodeOutput (..), - TestNodeInitialization (..)) -import Test.ThreadNet.TxGen.Cardano (CardanoTxGenExtra (..)) -import Test.ThreadNet.Util.Expectations (NumBlocks (..)) -import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) -import Test.ThreadNet.Util.NodeRestarts (noRestarts) -import Test.ThreadNet.Util.NodeToNodeVersion (genVersionFiltered) -import Test.ThreadNet.Util.Seed (runGen) -import qualified Test.Util.BoolProps as BoolProps -import Test.Util.HardFork.Future -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) -import Test.Util.TestEnv +import Cardano.Chain.Block qualified as CC +import Cardano.Chain.Common qualified as CC.Common +import Cardano.Chain.Genesis qualified as CC.Genesis +import Cardano.Chain.ProtocolConstants (kEpochSlots) +import Cardano.Chain.Slotting (unEpochSlots) +import Cardano.Chain.Update qualified as CC.Update +import Cardano.Chain.Update.Validation.Interface qualified as CC +import Cardano.Ledger.Api.Era qualified as L +import Cardano.Ledger.BaseTypes (nonZero, unNonZero) +import Cardano.Ledger.BaseTypes qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Core qualified as SL +import Cardano.Protocol.TPraos.OCert qualified as SL +import Cardano.Slotting.Slot (EpochSize (..), SlotNo (..)) +import Control.Exception (assert) +import Control.Monad (replicateM) +import Data.Map.Strict qualified as Map +import Data.Maybe (maybeToList) +import Data.Proxy (Proxy (..)) +import Data.SOP.Functors +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import Lens.Micro +import Ouroboros.Consensus.Block.Forging (BlockForging) +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Byron.Ledger (LedgerState (..)) +import Ouroboros.Consensus.Byron.Ledger.Block (ByronBlock) +import 
Ouroboros.Consensus.Byron.Ledger.Conversions +import Ouroboros.Consensus.Byron.Node +import Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Cardano.Condense () +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common + ( isHardForkNodeToNodeEnabled + ) +import Ouroboros.Consensus.HardFork.Combinator.State (Current (..)) +import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState (..)) +import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.Node +import Ouroboros.Consensus.Util.IOLike (IOLike) +import Test.Consensus.Cardano.ProtocolInfo + ( hardForkOnDefaultProtocolVersions + , mkTestProtocolInfo + ) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Infra.Byron qualified as Byron +import Test.ThreadNet.Infra.Shelley qualified as Shelley +import Test.ThreadNet.Infra.TwoEras +import Test.ThreadNet.Network + ( NodeOutput (..) + , TestNodeInitialization (..) 
+ ) +import Test.ThreadNet.TxGen.Cardano (CardanoTxGenExtra (..)) +import Test.ThreadNet.Util.Expectations (NumBlocks (..)) +import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) +import Test.ThreadNet.Util.NodeRestarts (noRestarts) +import Test.ThreadNet.Util.NodeToNodeVersion (genVersionFiltered) +import Test.ThreadNet.Util.Seed (runGen) +import Test.Util.BoolProps qualified as BoolProps +import Test.Util.HardFork.Future +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) +import Test.Util.TestEnv -- | Use 'MockCryptoCompatByron' so that bootstrap addresses and -- bootstrap witnesses are supported. @@ -90,197 +95,213 @@ type Crypto = StandardCrypto -- and endorse it literally as soon as possible. Therefore, if the test reaches -- the end of the first epoch, the proposal will be adopted. data TestSetup = TestSetup - { setupD :: Shelley.DecentralizationParam - , setupHardFork :: Bool - -- ^ whether the proposal should trigger a hard fork or not - , setupInitialNonce :: SL.Nonce - -- ^ the initial Shelley 'SL.ticknStateEpochNonce' - -- - -- We vary it to ensure we explore different leader schedules. - , setupK :: SecurityParam - , setupPartition :: Partition - , setupSlotLengthByron :: SlotLength + { setupD :: Shelley.DecentralizationParam + , setupHardFork :: Bool + -- ^ whether the proposal should trigger a hard fork or not + , setupInitialNonce :: SL.Nonce + -- ^ the initial Shelley 'SL.ticknStateEpochNonce' + -- + -- We vary it to ensure we explore different leader schedules. 
+ , setupK :: SecurityParam + , setupPartition :: Partition + , setupSlotLengthByron :: SlotLength , setupSlotLengthShelley :: SlotLength - , setupTestConfig :: TestConfig - , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion (CardanoBlock Crypto)) + , setupTestConfig :: TestConfig + , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion (CardanoBlock Crypto)) } - deriving (Show) + deriving Show instance Arbitrary TestSetup where arbitrary = do - setupD <- arbitrary - -- The decentralization parameter cannot be 0 in the first - -- Shelley epoch, since stake pools can only be created and - -- delegated to via Shelley transactions. - `suchThat` ((/= 0) . Shelley.decentralizationParamToRational) + setupD <- + arbitrary + -- The decentralization parameter cannot be 0 in the first + -- Shelley epoch, since stake pools can only be created and + -- delegated to via Shelley transactions. + `suchThat` ((/= 0) . Shelley.decentralizationParamToRational) setupK <- SecurityParam <$> choose (8, 10) `suchThatMap` nonZero - -- If k < 8, common prefix violations become too likely in - -- Praos mode for thin overlay schedules (ie low d), even for - -- f=0.2. + -- If k < 8, common prefix violations become too likely in + -- Praos mode for thin overlay schedules (ie low d), even for + -- f=0.2. 
setupInitialNonce <- genNonce - setupSlotLengthByron <- arbitrary + setupSlotLengthByron <- arbitrary setupSlotLengthShelley <- arbitrary - setupTestConfig <- genTestConfig - setupK - ( EpochSize $ byronEpochSize setupK - , EpochSize $ shelleyEpochSize setupK - ) + setupTestConfig <- + genTestConfig + setupK + ( EpochSize $ byronEpochSize setupK + , EpochSize $ shelleyEpochSize setupK + ) let TestConfig{numCoreNodes, numSlots} = setupTestConfig - setupHardFork <- frequency [(49, pure True), (1, pure False)] - setupPartition <- genPartition numCoreNodes numSlots setupK - - setupVersion <- genVersionFiltered - isHardForkNodeToNodeEnabled - (Proxy @(CardanoBlock Crypto)) - - pure TestSetup - { setupD - , setupHardFork - , setupInitialNonce - , setupK - , setupPartition - , setupSlotLengthByron - , setupSlotLengthShelley - , setupTestConfig - , setupVersion - } + setupHardFork <- frequency [(49, pure True), (1, pure False)] + setupPartition <- genPartition numCoreNodes numSlots setupK + + setupVersion <- + genVersionFiltered + isHardForkNodeToNodeEnabled + (Proxy @(CardanoBlock Crypto)) + + pure + TestSetup + { setupD + , setupHardFork + , setupInitialNonce + , setupK + , setupPartition + , setupSlotLengthByron + , setupSlotLengthShelley + , setupTestConfig + , setupVersion + } - -- TODO shrink +-- TODO shrink tests :: TestTree -tests = testGroup "Cardano ThreadNet" [ - let name = "simple convergence" in - askTestEnv $ adjustTestMode $ - testProperty name prop_simple_cardano_convergence +tests = + testGroup + "Cardano ThreadNet" + [ let name = "simple convergence" + in askTestEnv $ + adjustTestMode $ + testProperty name prop_simple_cardano_convergence ] - where - adjustTestMode :: TestTree -> TestEnv -> TestTree - adjustTestMode tree = \case - Nightly -> tree - _ -> adjustQuickCheckTests (\n -> (2 * n) `div` 5) tree + where + adjustTestMode :: TestTree -> TestEnv -> TestTree + adjustTestMode tree = \case + Nightly -> tree + _ -> adjustQuickCheckTests (\n -> (2 * n) 
`div` 5) tree prop_simple_cardano_convergence :: TestSetup -> Property -prop_simple_cardano_convergence TestSetup - { setupD - , setupHardFork - , setupInitialNonce - , setupK - , setupPartition - , setupSlotLengthByron - , setupSlotLengthShelley - , setupTestConfig - , setupVersion - } = - prop_general_semisync pga testOutput .&&. - prop_inSync testOutput .&&. - prop_ReachesEra2 reachesEra2 .&&. - prop_noCPViolation .&&. - ( tabulate "ReachesEra2 label" [label_ReachesEra2 reachesEra2] $ - tabulate "Observed forge during a non-overlay Shelley slot" - [label_hadActiveNonOverlaySlots testOutput overlaySlots] $ - tabulatePartitionDuration setupK setupPartition $ - tabulateFinalIntersectionDepth - setupK - (NumBlocks finalIntersectionDepth) - finalBlockEra $ - tabulatePartitionPosition - (NumSlots numByronSlots) - setupPartition - (ledgerReachesEra2 reachesEra2) $ - property True - ) - where +prop_simple_cardano_convergence + TestSetup + { setupD + , setupHardFork + , setupInitialNonce + , setupK + , setupPartition + , setupSlotLengthByron + , setupSlotLengthShelley + , setupTestConfig + , setupVersion + } = + prop_general_semisync pga testOutput + .&&. prop_inSync testOutput + .&&. prop_ReachesEra2 reachesEra2 + .&&. prop_noCPViolation + .&&. ( tabulate "ReachesEra2 label" [label_ReachesEra2 reachesEra2] + $ tabulate + "Observed forge during a non-overlay Shelley slot" + [label_hadActiveNonOverlaySlots testOutput overlaySlots] + $ tabulatePartitionDuration setupK setupPartition + $ tabulateFinalIntersectionDepth + setupK + (NumBlocks finalIntersectionDepth) + finalBlockEra + $ tabulatePartitionPosition + (NumSlots numByronSlots) + setupPartition + (ledgerReachesEra2 reachesEra2) + $ property True + ) + where TestConfig { initSeed , numCoreNodes , numSlots } = setupTestConfig - pga = PropGeneralArgs - { pgaBlockProperty = const $ property True - , pgaCountTxs = fromIntegral . length . 
extractTxs + pga = + PropGeneralArgs + { pgaBlockProperty = const $ property True + , pgaCountTxs = fromIntegral . length . extractTxs , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = 1 - , pgaFixedMaxForkLength = Just maxForkLength - , pgaFixedSchedule = + , pgaFirstBlockNo = 1 + , pgaFixedMaxForkLength = Just maxForkLength + , pgaFixedSchedule = -- the leader schedule isn't fixed because the Shelley leader -- schedule is (at least ideally) unpredictable Nothing - , pgaSecurityParam = setupK - , pgaTestConfig = setupTestConfig - , pgaTestConfigB = testConfigB + , pgaSecurityParam = setupK + , pgaTestConfig = setupTestConfig + , pgaTestConfigB = testConfigB } - testConfigB = TestConfigB - { forgeEbbEnv = Nothing - , future = - if setupHardFork - then - -- In this case the PVU will trigger the transition to Shelley. - -- - -- By FACT (B), the PVU is always successful if we reach the second - -- era. - EraCons setupSlotLengthByron epochSizeByron eraSizeByron $ - EraFinal setupSlotLengthShelley epochSizeShelley - else - EraFinal setupSlotLengthByron epochSizeByron - , messageDelay = mkMessageDelay setupPartition - , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes - , nodeRestarts = noRestarts - , txGenExtra = CardanoTxGenExtra - { ctgeByronGenesisKeys = generatedSecrets - , ctgeNetworkMagic = - CC.Common.makeNetworkMagic $ - CC.Genesis.configProtocolMagic genesisByron - , ctgeShelleyCoreNodes = coreNodes + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = + if setupHardFork + then + -- In this case the PVU will trigger the transition to Shelley. + -- + -- By FACT (B), the PVU is always successful if we reach the second + -- era. 
+ EraCons setupSlotLengthByron epochSizeByron eraSizeByron $ + EraFinal setupSlotLengthShelley epochSizeShelley + else + EraFinal setupSlotLengthByron epochSizeByron + , messageDelay = mkMessageDelay setupPartition + , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes + , nodeRestarts = noRestarts + , txGenExtra = + CardanoTxGenExtra + { ctgeByronGenesisKeys = generatedSecrets + , ctgeNetworkMagic = + CC.Common.makeNetworkMagic $ + CC.Genesis.configProtocolMagic genesisByron + , ctgeShelleyCoreNodes = coreNodes + } + , version = setupVersion } - , version = setupVersion - } testOutput :: TestOutput (CardanoBlock Crypto) testOutput = - runTestNetwork setupTestConfig testConfigB TestConfigMB - { nodeInfo = \coreNodeId@(CoreNodeId nid) -> - mkProtocolCardanoAndHardForkTxs - pbftParams - coreNodeId - genesisByron - generatedSecrets - propPV - genesisShelley - setupInitialNonce - (coreNodes !! fromIntegral nid) - , mkRekeyM = Nothing - } + runTestNetwork + setupTestConfig + testConfigB + TestConfigMB + { nodeInfo = \coreNodeId@(CoreNodeId nid) -> + mkProtocolCardanoAndHardForkTxs + pbftParams + coreNodeId + genesisByron + generatedSecrets + propPV + genesisShelley + setupInitialNonce + (coreNodes !! fromIntegral nid) + , mkRekeyM = Nothing + } maxForkLength :: NumBlocks - maxForkLength = NumBlocks $ + maxForkLength = + NumBlocks $ if rsEra2Blocks reachesEra2 - then - -- Shelley inherently creates small forks, but we haven't yet seen a - -- Common Prefix violation in this test even though @k@ is small - -- - -- TODO I'd value having a repro that demonstrates a violation of - -- this typical limit, so I'm leaving it in for now. If it never - -- fails, we should figure out why not. Even with @k=2 ncn=5 d=0.1@ - -- fixed the deepest fork I'm seeing is ~2.5% @k-1@ - -- 'finalIntersectionDepth'. - unNonZero $ maxRollbacks setupK - else - -- Recall that all nodes join ASAP, so the partition is the only - -- potential cause for a fork during Byron. 
See the reasoning in - -- 'genPartition' for the motivation of this limit. - div partitionDuration 2 + mod partitionDuration 2 + then + -- Shelley inherently creates small forks, but we haven't yet seen a + -- Common Prefix violation in this test even though @k@ is small + -- + -- TODO I'd value having a repro that demonstrates a violation of + -- this typical limit, so I'm leaving it in for now. If it never + -- fails, we should figure out why not. Even with @k=2 ncn=5 d=0.1@ + -- fixed the deepest fork I'm seeing is ~2.5% @k-1@ + -- 'finalIntersectionDepth'. + unNonZero $ maxRollbacks setupK + else + -- Recall that all nodes join ASAP, so the partition is the only + -- potential cause for a fork during Byron. See the reasoning in + -- 'genPartition' for the motivation of this limit. + div partitionDuration 2 + mod partitionDuration 2 partitionDuration :: Word64 partitionDuration = dur - where - Partition _ (NumSlots dur) = setupPartition + where + Partition _ (NumSlots dur) = setupPartition -- Byron @@ -291,15 +312,15 @@ prop_simple_cardano_convergence TestSetup -- does not randomize it epochSizeByron :: EpochSize epochSizeByron = - fromByronEpochSlots $ CC.Genesis.configEpochSlots genesisByron + fromByronEpochSlots $ CC.Genesis.configEpochSlots genesisByron eraSizeByron :: EraSize eraSizeByron = EraSize numFirstEraEpochs - genesisByron :: CC.Genesis.Config + genesisByron :: CC.Genesis.Config generatedSecrets :: CC.Genesis.GeneratedSecrets (genesisByron, generatedSecrets) = - Byron.generateGenesisConfig setupSlotLengthByron pbftParams + Byron.generateGenesisConfig setupSlotLengthByron pbftParams -- Shelley @@ -307,11 +328,12 @@ prop_simple_cardano_convergence TestSetup initialKESPeriod = SL.KESPeriod 0 coreNodes :: [Shelley.CoreNode Crypto] - coreNodes = runGen initSeed $ + coreNodes = + runGen initSeed $ replicateM (fromIntegral n) $ Shelley.genCoreNode initialKESPeriod - where - NumCoreNodes n = numCoreNodes + where + NumCoreNodes n = numCoreNodes -- Same value 
as for mainnet. Must be larger than the amount of Lovelace in -- circulation in the Byron ledger. Since this is the maximum value of @@ -321,15 +343,15 @@ prop_simple_cardano_convergence TestSetup genesisShelley :: ShelleyGenesis genesisShelley = - Shelley.mkGenesisConfig - (SL.ProtVer shelleyMajorVersion 0) - setupK - activeSlotCoeff - setupD - maxLovelaceSupply - setupSlotLengthShelley - (Shelley.mkKesConfig (Proxy @Crypto) numSlots) - coreNodes + Shelley.mkGenesisConfig + (SL.ProtVer shelleyMajorVersion 0) + setupK + activeSlotCoeff + setupD + maxLovelaceSupply + setupSlotLengthShelley + (Shelley.mkKesConfig (Proxy @Crypto) numSlots) + coreNodes -- the Shelley ledger is designed to use a fixed epoch size, so this test -- does not randomize it @@ -364,115 +386,127 @@ prop_simple_cardano_convergence TestSetup propPV :: CC.Update.ProtocolVersion propPV = if setupHardFork - then - -- this new version must induce the hard fork if accepted - CC.Update.ProtocolVersion (SL.getVersion shelleyMajorVersion) 0 0 - else - -- this new version must not induce the hard fork if accepted - CC.Update.ProtocolVersion - byronMajorVersion (byronInitialMinorVersion + 1) 0 + then + -- this new version must induce the hard fork if accepted + CC.Update.ProtocolVersion (SL.getVersion shelleyMajorVersion) 0 0 + else + -- this new version must not induce the hard fork if accepted + CC.Update.ProtocolVersion + byronMajorVersion + (byronInitialMinorVersion + 1) + 0 -- Classifying test cases reachesEra2 :: ReachesEra2 - reachesEra2 = ReachesEra2 - { rsEra1Slots = - BoolProps.enabledIf $ t > numByronSlots - , rsPV = BoolProps.enabledIf setupHardFork - , rsEra2Blocks = - or $ - [ not $ isFirstEraBlock blk - | (_nid, no) <- Map.toList testOutputNodes - , let NodeOutput{nodeOutputForges} = no - , (blk, _m) <- maybeToList $ Map.maxView nodeOutputForges - -- the last block the node forged - ] - , rsEra2Slots = - assert (w >= k) $ - BoolProps.requiredIf $ - -- The active slots in the first two 
Shelley epochs are all overlay - -- slots, so the first Shelley block will arise from one of those. - not $ Set.null $ overlaySlots - } - where - NumSlots t = numSlots - TestOutput{testOutputNodes} = testOutput + reachesEra2 = + ReachesEra2 + { rsEra1Slots = + BoolProps.enabledIf $ t > numByronSlots + , rsPV = BoolProps.enabledIf setupHardFork + , rsEra2Blocks = + or $ + [ not $ isFirstEraBlock blk + | (_nid, no) <- Map.toList testOutputNodes + , let NodeOutput{nodeOutputForges} = no + , (blk, _m) <- maybeToList $ Map.maxView nodeOutputForges + -- the last block the node forged + ] + , rsEra2Slots = + assert (w >= k) $ + BoolProps.requiredIf $ + -- The active slots in the first two Shelley epochs are all overlay + -- slots, so the first Shelley block will arise from one of those. + not $ + Set.null $ + overlaySlots + } + where + NumSlots t = numSlots + TestOutput{testOutputNodes} = testOutput - k :: Word64 - k = unNonZero $ maxRollbacks setupK + k :: Word64 + k = unNonZero $ maxRollbacks setupK - coeff :: SL.ActiveSlotCoeff - coeff = SL.sgActiveSlotCoeff genesisShelley + coeff :: SL.ActiveSlotCoeff + coeff = SL.sgActiveSlotCoeff genesisShelley - w :: Word64 - w = SL.computeStabilityWindow k coeff + w :: Word64 + w = SL.computeStabilityWindow k coeff overlaySlots :: Set SlotNo overlaySlots = - secondEraOverlaySlots - numSlots - (NumSlots numByronSlots) - (sgProtocolParams genesisShelley ^. SL.ppDL) - epochSizeShelley + secondEraOverlaySlots + numSlots + (NumSlots numByronSlots) + (sgProtocolParams genesisShelley ^. 
SL.ppDL) + epochSizeShelley numByronSlots :: Word64 numByronSlots = numFirstEraEpochs * unEpochSize epochSizeByron finalBlockEra :: String finalBlockEra = - if rsEra2Blocks reachesEra2 then "Shelley" else "Byron" + if rsEra2Blocks reachesEra2 then "Shelley" else "Byron" finalIntersectionDepth :: Word64 finalIntersectionDepth = depth - where - NumBlocks depth = calcFinalIntersectionDepth pga testOutput + where + NumBlocks depth = calcFinalIntersectionDepth pga testOutput prop_noCPViolation :: Property prop_noCPViolation = - counterexample - ( "finalChains: " <> - show (nodeOutputFinalChain <$> testOutputNodes testOutput) - ) $ - counterexample "CP violation in final chains!" $ - property $ unNonZero (maxRollbacks setupK) >= finalIntersectionDepth + counterexample + ( "finalChains: " + <> show (nodeOutputFinalChain <$> testOutputNodes testOutput) + ) + $ counterexample "CP violation in final chains!" + $ property + $ unNonZero (maxRollbacks setupK) >= finalIntersectionDepth mkProtocolCardanoAndHardForkTxs :: - forall c m. (IOLike m, c ~ StandardCrypto) - -- Byron - => PBftParams - -> CoreNodeId - -> CC.Genesis.Config - -> CC.Genesis.GeneratedSecrets - -> CC.Update.ProtocolVersion - -- Shelley - -> ShelleyGenesis - -> SL.Nonce - -> Shelley.CoreNode c - -> TestNodeInitialization m (CardanoBlock c) + forall c m. 
+ (IOLike m, c ~ StandardCrypto) => + -- Byron + PBftParams -> + CoreNodeId -> + CC.Genesis.Config -> + CC.Genesis.GeneratedSecrets -> + CC.Update.ProtocolVersion -> + -- Shelley + ShelleyGenesis -> + SL.Nonce -> + Shelley.CoreNode c -> + TestNodeInitialization m (CardanoBlock c) mkProtocolCardanoAndHardForkTxs - pbftParams coreNodeId genesisByron generatedSecretsByron propPV - genesisShelley initialNonce coreNodeShelley - = + pbftParams + coreNodeId + genesisByron + generatedSecretsByron + propPV + genesisShelley + initialNonce + coreNodeShelley = TestNodeInitialization - { tniCrucialTxs = crucialTxs + { tniCrucialTxs = crucialTxs , tniProtocolInfo = protocolInfo , tniBlockForging = blockForging } - where + where crucialTxs :: [GenTx (CardanoBlock c)] crucialTxs = - GenTxByron <$> tniCrucialTxs tniByron - where - -- reuse the Byron logic for generating the crucial txs, ie the - -- proposal and votes - tniByron :: TestNodeInitialization m ByronBlock - tniByron = - Byron.mkProtocolByronAndHardForkTxs - pbftParams - coreNodeId - genesisByron - generatedSecretsByron - propPV + GenTxByron <$> tniCrucialTxs tniByron + where + -- reuse the Byron logic for generating the crucial txs, ie the + -- proposal and votes + tniByron :: TestNodeInitialization m ByronBlock + tniByron = + Byron.mkProtocolByronAndHardForkTxs + pbftParams + coreNodeId + genesisByron + generatedSecretsByron + propPV protocolInfo :: ProtocolInfo (CardanoBlock c) blockForging :: m [BlockForging m (CardanoBlock c)] @@ -485,9 +519,9 @@ mkProtocolCardanoAndHardForkTxs genesisByron generatedSecretsByron (Just $ PBftSignatureThreshold 1) -- Trivialize the PBFT signature - -- window so that the forks induced by - -- the network partition are as deep - -- as possible. + -- window so that the forks induced by + -- the network partition are as deep + -- as possible. -- This test only enters the Shelley era. 
(SL.ProtVer shelleyMajorVersion 0) hardForkOnDefaultProtocolVersions @@ -514,27 +548,28 @@ shelleyMajorVersion = L.eraProtVerLow @ShelleyEra byronEpochSize :: SecurityParam -> Word64 byronEpochSize (SecurityParam k) = - unEpochSlots $ kEpochSlots $ CC.Common.BlockCount $ unNonZero k + unEpochSlots $ kEpochSlots $ CC.Common.BlockCount $ unNonZero k -- | By default, the initial major Byron protocol version is @0@, but we want to -- set it to 'byronMajorVersion'. setByronProtVer :: ProtocolInfo (CardanoBlock c) -> ProtocolInfo (CardanoBlock c) setByronProtVer = - modifyInitLedger $ modifyExtLedger $ modifyHFLedgerState $ \st -> - let cvs = byronLedgerState st - us = (CC.cvsUpdateState cvs) { - CC.adoptedProtocolVersion = + modifyInitLedger $ modifyExtLedger $ modifyHFLedgerState $ \st -> + let cvs = byronLedgerState st + us = + (CC.cvsUpdateState cvs) + { CC.adoptedProtocolVersion = CC.Update.ProtocolVersion byronMajorVersion byronInitialMinorVersion 0 } - in st { byronLedgerState = cvs { CC.cvsUpdateState = us } } - where - modifyInitLedger f pinfo = pinfo { pInfoInitLedger = f (pInfoInitLedger pinfo) } - modifyExtLedger f elgr = elgr { ledgerState = f (ledgerState elgr ) } - - modifyHFLedgerState :: - (LedgerState x mk -> LedgerState x mk) - -> LedgerState (HardForkBlock (x : xs)) mk - -> LedgerState (HardForkBlock (x : xs)) mk - modifyHFLedgerState f (HardForkLedgerState (HardForkState (TZ st))) = - HardForkLedgerState (HardForkState (TZ st {currentState = Flip $ f (unFlip $ currentState st)})) - modifyHFLedgerState _ st = st + in st{byronLedgerState = cvs{CC.cvsUpdateState = us}} + where + modifyInitLedger f pinfo = pinfo{pInfoInitLedger = f (pInfoInitLedger pinfo)} + modifyExtLedger f elgr = elgr{ledgerState = f (ledgerState elgr)} + + modifyHFLedgerState :: + (LedgerState x mk -> LedgerState x mk) -> + LedgerState (HardForkBlock (x : xs)) mk -> + LedgerState (HardForkBlock (x : xs)) mk + modifyHFLedgerState f (HardForkLedgerState (HardForkState (TZ st))) = + 
HardForkLedgerState (HardForkState (TZ st{currentState = Flip $ f (unFlip $ currentState st)})) + modifyHFLedgerState _ st = st diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/MaryAlonzo.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/MaryAlonzo.hs index ac4116f72f..8e07d7dcbf 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/MaryAlonzo.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/MaryAlonzo.hs @@ -12,63 +12,71 @@ module Test.ThreadNet.MaryAlonzo (tests) where -import Cardano.Ledger.Alonzo.Genesis (AlonzoGenesis) -import qualified Cardano.Ledger.Api.Transition as L -import Cardano.Ledger.BaseTypes (nonZero, unNonZero) -import qualified Cardano.Ledger.BaseTypes as SL (Version, getVersion, - natVersion) -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.Core as SL -import qualified Cardano.Protocol.TPraos.OCert as SL -import Cardano.Slotting.Slot (EpochSize (..), SlotNo (..)) -import Control.Monad (replicateM) -import qualified Data.Map.Strict as Map -import Data.Maybe (maybeToList) -import Data.Proxy (Proxy (..)) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.SOP.Strict (NP (..)) -import Data.Word (Word64) -import Lens.Micro -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Cardano.Condense () -import Ouroboros.Consensus.Cardano.Node (TriggerHardFork (..)) -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common - (isHardForkNodeToNodeEnabled) -import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import 
Ouroboros.Consensus.Shelley.Node - (ProtocolParamsShelleyBased (..), ShelleyGenesis (..)) -import qualified Test.Cardano.Ledger.Alonzo.Examples.Consensus as SL -import Test.Consensus.Shelley.MockCrypto (MockCrypto) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import qualified Test.ThreadNet.Infra.Shelley as Shelley -import Test.ThreadNet.Infra.ShelleyBasedHardFork -import Test.ThreadNet.Infra.TwoEras -import Test.ThreadNet.Network (NodeOutput (..), - TestNodeInitialization (..)) -import Test.ThreadNet.TxGen -import Test.ThreadNet.TxGen.Alonzo () -import Test.ThreadNet.TxGen.Mary () -import Test.ThreadNet.Util.Expectations (NumBlocks (..)) -import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) -import Test.ThreadNet.Util.NodeRestarts (noRestarts) -import Test.ThreadNet.Util.NodeToNodeVersion (genVersionFiltered) -import Test.ThreadNet.Util.Seed (runGen) -import qualified Test.Util.BoolProps as BoolProps -import Test.Util.HardFork.Future (EraSize (..), Future (..)) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) -import Test.Util.TestEnv +import Cardano.Ledger.Alonzo.Genesis (AlonzoGenesis) +import Cardano.Ledger.Api.Transition qualified as L +import Cardano.Ledger.BaseTypes (nonZero, unNonZero) +import Cardano.Ledger.BaseTypes qualified as SL + ( Version + , getVersion + , natVersion + ) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Core qualified as SL +import Cardano.Protocol.TPraos.OCert qualified as SL +import Cardano.Slotting.Slot (EpochSize (..), SlotNo (..)) +import Control.Monad (replicateM) +import Data.Map.Strict qualified as Map +import Data.Maybe (maybeToList) +import Data.Proxy (Proxy (..)) +import Data.SOP.Strict (NP (..)) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import Lens.Micro +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Cardano.Condense () 
+import Ouroboros.Consensus.Cardano.Node (TriggerHardFork (..)) +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common + ( isHardForkNodeToNodeEnabled + ) +import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.Node + ( ProtocolParamsShelleyBased (..) + , ShelleyGenesis (..) + ) +import Test.Cardano.Ledger.Alonzo.Examples.Consensus qualified as SL +import Test.Consensus.Shelley.MockCrypto (MockCrypto) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Infra.Shelley qualified as Shelley +import Test.ThreadNet.Infra.ShelleyBasedHardFork +import Test.ThreadNet.Infra.TwoEras +import Test.ThreadNet.Network + ( NodeOutput (..) + , TestNodeInitialization (..) + ) +import Test.ThreadNet.TxGen +import Test.ThreadNet.TxGen.Alonzo () +import Test.ThreadNet.TxGen.Mary () +import Test.ThreadNet.Util.Expectations (NumBlocks (..)) +import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) +import Test.ThreadNet.Util.NodeRestarts (noRestarts) +import Test.ThreadNet.Util.NodeToNodeVersion (genVersionFiltered) +import Test.ThreadNet.Util.Seed (runGen) +import Test.Util.BoolProps qualified as BoolProps +import Test.Util.HardFork.Future (EraSize (..), Future (..)) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) +import Test.Util.TestEnv type MaryAlonzoBlock = ShelleyBasedHardForkBlock (TPraos MockCrypto) MaryEra (TPraos MockCrypto) AlonzoEra @@ -79,188 +87,204 @@ type MaryAlonzoBlock = -- it literally as soon as possible. 
Therefore, if the test reaches the end of -- the first epoch, the proposal will be adopted. data TestSetup = TestSetup - { setupD :: Shelley.DecentralizationParam - , setupHardFork :: Bool - -- ^ whether the proposal should trigger a hard fork or not + { setupD :: Shelley.DecentralizationParam + , setupHardFork :: Bool + -- ^ whether the proposal should trigger a hard fork or not , setupInitialNonce :: SL.Nonce - -- ^ the initial Shelley 'SL.ticknStateEpochNonce' - -- - -- We vary it to ensure we explore different leader schedules. - , setupK :: SecurityParam - , setupPartition :: Partition - , setupSlotLength :: SlotLength - , setupTestConfig :: TestConfig - , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion MaryAlonzoBlock) + -- ^ the initial Shelley 'SL.ticknStateEpochNonce' + -- + -- We vary it to ensure we explore different leader schedules. + , setupK :: SecurityParam + , setupPartition :: Partition + , setupSlotLength :: SlotLength + , setupTestConfig :: TestConfig + , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion MaryAlonzoBlock) } - deriving (Show) + deriving Show instance Arbitrary TestSetup where arbitrary = do - setupD <- arbitrary - -- The decentralization parameter cannot be 0 in the first - -- Shelley epoch, since stake pools can only be created and - -- delegated to via Shelley transactions. - `suchThat` ((/= 0) . Shelley.decentralizationParamToRational) + setupD <- + arbitrary + -- The decentralization parameter cannot be 0 in the first + -- Shelley epoch, since stake pools can only be created and + -- delegated to via Shelley transactions. + `suchThat` ((/= 0) . Shelley.decentralizationParamToRational) setupK <- SecurityParam <$> choose (8, 10) `suchThatMap` nonZero - -- If k < 8, common prefix violations become too likely in - -- Praos mode for thin overlay schedules (ie low d), even for - -- f=0.2. 
+ -- If k < 8, common prefix violations become too likely in + -- Praos mode for thin overlay schedules (ie low d), even for + -- f=0.2. setupInitialNonce <- genNonce - setupSlotLength <- arbitrary + setupSlotLength <- arbitrary let epochSize = EpochSize $ shelleyEpochSize setupK - setupTestConfig <- genTestConfig - setupK - (epochSize, epochSize) + setupTestConfig <- + genTestConfig + setupK + (epochSize, epochSize) let TestConfig{numCoreNodes, numSlots} = setupTestConfig - setupHardFork <- frequency [(49, pure True), (1, pure False)] + setupHardFork <- frequency [(49, pure True), (1, pure False)] -- TODO How reliable is the Byron-based partition duration logic when -- reused for Shelley? setupPartition <- genPartition numCoreNodes numSlots setupK - setupVersion <- genVersionFiltered - isHardForkNodeToNodeEnabled - (Proxy @MaryAlonzoBlock) - - pure TestSetup - { setupD - , setupHardFork - , setupInitialNonce - , setupK - , setupPartition - , setupSlotLength - , setupTestConfig - , setupVersion - } + setupVersion <- + genVersionFiltered + isHardForkNodeToNodeEnabled + (Proxy @MaryAlonzoBlock) + + pure + TestSetup + { setupD + , setupHardFork + , setupInitialNonce + , setupK + , setupPartition + , setupSlotLength + , setupTestConfig + , setupVersion + } - -- TODO shrink +-- TODO shrink tests :: TestTree -tests = testGroup "MaryAlonzo ThreadNet" [ - let name = "simple convergence" in - askTestEnv $ adjustTestMode $ - testProperty name prop_simple_allegraAlonzo_convergence +tests = + testGroup + "MaryAlonzo ThreadNet" + [ let name = "simple convergence" + in askTestEnv $ + adjustTestMode $ + testProperty name prop_simple_allegraAlonzo_convergence ] - - where - adjustTestMode :: TestTree -> TestEnv -> TestTree - adjustTestMode tree = \case - Nightly -> tree - _ -> adjustQuickCheckTests (`div` 10) tree + where + adjustTestMode :: TestTree -> TestEnv -> TestTree + adjustTestMode tree = \case + Nightly -> tree + _ -> adjustQuickCheckTests (`div` 10) tree 
prop_simple_allegraAlonzo_convergence :: TestSetup -> Property -prop_simple_allegraAlonzo_convergence TestSetup - { setupD - , setupHardFork - , setupInitialNonce - , setupK - , setupPartition - , setupSlotLength - , setupTestConfig - , setupVersion - } = - prop_general_semisync pga testOutput .&&. - prop_inSync testOutput .&&. - prop_ReachesEra2 reachesEra2 .&&. - prop_noCPViolation .&&. - ( tabulate "ReachesEra2 label" [label_ReachesEra2 reachesEra2] $ - tabulate "Observed forge during a non-overlay slot in the second era" - [ label_hadActiveNonOverlaySlots - testOutput - overlaySlots - ] $ - tabulatePartitionDuration setupK setupPartition $ - tabulateFinalIntersectionDepth - setupK - (NumBlocks finalIntersectionDepth) - finalBlockEra $ - tabulatePartitionPosition - (NumSlots numFirstEraSlots) - setupPartition - (ledgerReachesEra2 reachesEra2) $ - property True - ) - where +prop_simple_allegraAlonzo_convergence + TestSetup + { setupD + , setupHardFork + , setupInitialNonce + , setupK + , setupPartition + , setupSlotLength + , setupTestConfig + , setupVersion + } = + prop_general_semisync pga testOutput + .&&. prop_inSync testOutput + .&&. prop_ReachesEra2 reachesEra2 + .&&. prop_noCPViolation + .&&. ( tabulate "ReachesEra2 label" [label_ReachesEra2 reachesEra2] + $ tabulate + "Observed forge during a non-overlay slot in the second era" + [ label_hadActiveNonOverlaySlots + testOutput + overlaySlots + ] + $ tabulatePartitionDuration setupK setupPartition + $ tabulateFinalIntersectionDepth + setupK + (NumBlocks finalIntersectionDepth) + finalBlockEra + $ tabulatePartitionPosition + (NumSlots numFirstEraSlots) + setupPartition + (ledgerReachesEra2 reachesEra2) + $ property True + ) + where TestConfig { initSeed , numCoreNodes , numSlots } = setupTestConfig - pga = PropGeneralArgs - { pgaBlockProperty = const $ property True - , pgaCountTxs = fromIntegral . length . 
extractTxs + pga = + PropGeneralArgs + { pgaBlockProperty = const $ property True + , pgaCountTxs = fromIntegral . length . extractTxs , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = 0 - , pgaFixedMaxForkLength = Just maxForkLength - -- the leader schedule isn't fixed because the Shelley leader + , pgaFirstBlockNo = 0 + , pgaFixedMaxForkLength = Just maxForkLength + , -- the leader schedule isn't fixed because the Shelley leader -- schedule is (at least ideally) unpredictable - , pgaFixedSchedule = Nothing - , pgaSecurityParam = setupK - , pgaTestConfig = setupTestConfig - , pgaTestConfigB = testConfigB + pgaFixedSchedule = Nothing + , pgaSecurityParam = setupK + , pgaTestConfig = setupTestConfig + , pgaTestConfigB = testConfigB } - testConfigB = TestConfigB - { forgeEbbEnv = Nothing - , future = - if setupHardFork - then - -- In this case the PVU will trigger the transition to the second era - -- - -- By FACT (B), the PVU is always successful if we reach the second - -- era. - EraCons setupSlotLength epochSize firstEraSize $ - EraFinal setupSlotLength epochSize - else - EraFinal setupSlotLength epochSize - , messageDelay = mkMessageDelay setupPartition - , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes - , nodeRestarts = noRestarts - , txGenExtra = WrapTxGenExtra () :* WrapTxGenExtra () :* Nil - , version = setupVersion - } + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = + if setupHardFork + then + -- In this case the PVU will trigger the transition to the second era + -- + -- By FACT (B), the PVU is always successful if we reach the second + -- era. 
+ EraCons setupSlotLength epochSize firstEraSize $ + EraFinal setupSlotLength epochSize + else + EraFinal setupSlotLength epochSize + , messageDelay = mkMessageDelay setupPartition + , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes + , nodeRestarts = noRestarts + , txGenExtra = WrapTxGenExtra () :* WrapTxGenExtra () :* Nil + , version = setupVersion + } testOutput :: TestOutput MaryAlonzoBlock - testOutput = runTestNetwork setupTestConfig testConfigB TestConfigMB { - nodeInfo = \(CoreNodeId nid) -> - let protocolParamsShelleyBased = - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = setupInitialNonce - , shelleyBasedLeaderCredentials = - [Shelley.mkLeaderCredentials - (coreNodes !! fromIntegral nid)] + testOutput = + runTestNetwork + setupTestConfig + testConfigB + TestConfigMB + { nodeInfo = \(CoreNodeId nid) -> + let protocolParamsShelleyBased = + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = setupInitialNonce + , shelleyBasedLeaderCredentials = + [ Shelley.mkLeaderCredentials + (coreNodes !! 
fromIntegral nid) + ] + } + hardForkTrigger = + TriggerHardForkAtVersion $ SL.getVersion majorVersion2 + (protocolInfo, blockForging) = + protocolInfoShelleyBasedHardFork + protocolParamsShelleyBased + (SL.ProtVer majorVersion1 0) + (SL.ProtVer majorVersion2 0) + ( L.mkTransitionConfig alonzoGenesis $ + L.mkTransitionConfig L.NoGenesis $ + L.mkTransitionConfig L.NoGenesis $ + L.mkShelleyTransitionConfig shelleyGenesis + ) + hardForkTrigger + in TestNodeInitialization + { tniCrucialTxs = + if not setupHardFork + then [] + else + fmap GenTxShelley1 $ + Shelley.mkMASetDecentralizationParamTxs + coreNodes + (SL.ProtVer majorVersion2 0) + (SlotNo $ unNumSlots numSlots) -- never expire + setupD -- unchanged + , tniProtocolInfo = protocolInfo + , tniBlockForging = blockForging } - hardForkTrigger = - TriggerHardForkAtVersion $ SL.getVersion majorVersion2 - (protocolInfo, blockForging) = - protocolInfoShelleyBasedHardFork - protocolParamsShelleyBased - (SL.ProtVer majorVersion1 0) - (SL.ProtVer majorVersion2 0) - ( L.mkTransitionConfig alonzoGenesis - $ L.mkTransitionConfig L.NoGenesis - $ L.mkTransitionConfig L.NoGenesis - $ L.mkShelleyTransitionConfig shelleyGenesis - ) - hardForkTrigger - in - TestNodeInitialization { - tniCrucialTxs = - if not setupHardFork then [] else - fmap GenTxShelley1 $ - Shelley.mkMASetDecentralizationParamTxs - coreNodes - (SL.ProtVer majorVersion2 0) - (SlotNo $ unNumSlots numSlots) -- never expire - setupD -- unchanged - , tniProtocolInfo = protocolInfo - , tniBlockForging = blockForging - } , mkRekeyM = Nothing } @@ -271,11 +295,12 @@ prop_simple_allegraAlonzo_convergence TestSetup initialKESPeriod = SL.KESPeriod 0 coreNodes :: [Shelley.CoreNode MockCrypto] - coreNodes = runGen initSeed $ + coreNodes = + runGen initSeed $ replicateM (fromIntegral n) $ Shelley.genCoreNode initialKESPeriod - where - NumCoreNodes n = numCoreNodes + where + NumCoreNodes n = numCoreNodes maxLovelaceSupply :: Word64 maxLovelaceSupply = @@ -283,15 +308,15 @@ 
prop_simple_allegraAlonzo_convergence TestSetup shelleyGenesis :: ShelleyGenesis shelleyGenesis = - Shelley.mkGenesisConfig - (SL.ProtVer majorVersion1 0) - setupK - activeSlotCoeff - setupD - maxLovelaceSupply - setupSlotLength - (Shelley.mkKesConfig (Proxy @MockCrypto) numSlots) - coreNodes + Shelley.mkGenesisConfig + (SL.ProtVer majorVersion1 0) + setupK + activeSlotCoeff + setupD + maxLovelaceSupply + setupSlotLength + (Shelley.mkKesConfig (Proxy @MockCrypto) numSlots) + coreNodes alonzoGenesis :: AlonzoGenesis alonzoGenesis = SL.exampleAlonzoGenesis @@ -307,62 +332,65 @@ prop_simple_allegraAlonzo_convergence TestSetup -- Classifying test cases reachesEra2 :: ReachesEra2 - reachesEra2 = ReachesEra2 - { rsEra1Slots = - BoolProps.enabledIf $ t > numFirstEraSlots - , rsPV = BoolProps.enabledIf setupHardFork - , rsEra2Blocks = - or $ - [ not $ isFirstEraBlock blk - | (_nid, no) <- Map.toList testOutputNodes - , let NodeOutput{nodeOutputForges} = no - , (blk, _m) <- maybeToList $ Map.maxView nodeOutputForges - -- the last block the node forged - ] - , rsEra2Slots = - --- TODO this comment and code are wrong - - BoolProps.requiredIf $ - -- The active slots in the first two Shelley epochs are all overlay - -- slots, so the first Shelley block will arise from one of those. 
- not $ Set.null overlaySlots - } - where - NumSlots t = numSlots - TestOutput{testOutputNodes} = testOutput + reachesEra2 = + ReachesEra2 + { rsEra1Slots = + BoolProps.enabledIf $ t > numFirstEraSlots + , rsPV = BoolProps.enabledIf setupHardFork + , rsEra2Blocks = + or $ + [ not $ isFirstEraBlock blk + | (_nid, no) <- Map.toList testOutputNodes + , let NodeOutput{nodeOutputForges} = no + , (blk, _m) <- maybeToList $ Map.maxView nodeOutputForges + -- the last block the node forged + ] + , rsEra2Slots = + --- TODO this comment and code are wrong + + BoolProps.requiredIf $ + -- The active slots in the first two Shelley epochs are all overlay + -- slots, so the first Shelley block will arise from one of those. + not $ + Set.null overlaySlots + } + where + NumSlots t = numSlots + TestOutput{testOutputNodes} = testOutput -- All OBFT overlay slots in the second era. overlaySlots :: Set SlotNo overlaySlots = - secondEraOverlaySlots - numSlots - (NumSlots numFirstEraSlots) - (sgProtocolParams shelleyGenesis ^. SL.ppDG) - epochSize + secondEraOverlaySlots + numSlots + (NumSlots numFirstEraSlots) + (sgProtocolParams shelleyGenesis ^. SL.ppDG) + epochSize numFirstEraSlots :: Word64 numFirstEraSlots = - numFirstEraEpochs * unEpochSize epochSize + numFirstEraEpochs * unEpochSize epochSize finalBlockEra :: String finalBlockEra = - if rsEra2Blocks reachesEra2 + if rsEra2Blocks reachesEra2 then "Mary" else "Alonzo" finalIntersectionDepth :: Word64 finalIntersectionDepth = depth - where - NumBlocks depth = calcFinalIntersectionDepth pga testOutput + where + NumBlocks depth = calcFinalIntersectionDepth pga testOutput prop_noCPViolation :: Property prop_noCPViolation = - counterexample - ( "finalChains: " <> - show (nodeOutputFinalChain <$> testOutputNodes testOutput) - ) $ - counterexample "CP violation in final chains!" 
$ - property $ unNonZero (maxRollbacks setupK) >= finalIntersectionDepth + counterexample + ( "finalChains: " + <> show (nodeOutputFinalChain <$> testOutputNodes testOutput) + ) + $ counterexample "CP violation in final chains!" + $ property + $ unNonZero (maxRollbacks setupK) >= finalIntersectionDepth {------------------------------------------------------------------------------- Constants diff --git a/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/ShelleyAllegra.hs b/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/ShelleyAllegra.hs index cadeac97ca..f847f279cd 100644 --- a/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/ShelleyAllegra.hs +++ b/ouroboros-consensus-cardano/test/cardano-test/Test/ThreadNet/ShelleyAllegra.hs @@ -9,64 +9,68 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.ThreadNet.ShelleyAllegra (tests) where -import qualified Cardano.Ledger.Api.Transition as L -import Cardano.Ledger.BaseTypes (nonZero, unNonZero) -import qualified Cardano.Ledger.BaseTypes as SL -import qualified Cardano.Ledger.Shelley.Core as SL -import qualified Cardano.Protocol.TPraos.OCert as SL -import Cardano.Slotting.Slot (EpochSize (..), SlotNo (..)) -import Control.Monad (replicateM) -import qualified Data.Map.Strict as Map -import Data.Maybe (maybeToList) -import Data.Proxy (Proxy (..)) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.SOP.Strict (NP (..)) -import Data.Word (Word64) -import Lens.Micro ((^.)) -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Cardano.Condense () -import Ouroboros.Consensus.Cardano.Node (TriggerHardFork (..)) -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common - (isHardForkNodeToNodeEnabled) -import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) -import 
Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Shelley.Node - (ProtocolParamsShelleyBased (..), ShelleyGenesis (..)) -import Test.Consensus.Shelley.MockCrypto (MockCrypto) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import qualified Test.ThreadNet.Infra.Shelley as Shelley -import Test.ThreadNet.Infra.ShelleyBasedHardFork -import Test.ThreadNet.Infra.TwoEras -import Test.ThreadNet.Network (NodeOutput (..), - TestNodeInitialization (..)) -import Test.ThreadNet.TxGen -import Test.ThreadNet.TxGen.Allegra () -import Test.ThreadNet.TxGen.Shelley -import Test.ThreadNet.Util.Expectations (NumBlocks (..)) -import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) -import Test.ThreadNet.Util.NodeRestarts (noRestarts) -import Test.ThreadNet.Util.NodeToNodeVersion (genVersionFiltered) -import Test.ThreadNet.Util.Seed (runGen) -import qualified Test.Util.BoolProps as BoolProps -import Test.Util.HardFork.Future (EraSize (..), Future (..)) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) -import Test.Util.TestEnv +import Cardano.Ledger.Api.Transition qualified as L +import Cardano.Ledger.BaseTypes (nonZero, unNonZero) +import Cardano.Ledger.BaseTypes qualified as SL +import Cardano.Ledger.Shelley.Core qualified as SL +import Cardano.Protocol.TPraos.OCert qualified as SL +import Cardano.Slotting.Slot (EpochSize (..), SlotNo (..)) +import Control.Monad (replicateM) +import Data.Map.Strict qualified as Map +import Data.Maybe (maybeToList) +import Data.Proxy (Proxy (..)) +import Data.SOP.Strict (NP (..)) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import Lens.Micro ((^.)) +import 
Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Cardano.Condense () +import Ouroboros.Consensus.Cardano.Node (TriggerHardFork (..)) +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common + ( isHardForkNodeToNodeEnabled + ) +import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.Node + ( ProtocolParamsShelleyBased (..) + , ShelleyGenesis (..) + ) +import Test.Consensus.Shelley.MockCrypto (MockCrypto) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Infra.Shelley qualified as Shelley +import Test.ThreadNet.Infra.ShelleyBasedHardFork +import Test.ThreadNet.Infra.TwoEras +import Test.ThreadNet.Network + ( NodeOutput (..) + , TestNodeInitialization (..) + ) +import Test.ThreadNet.TxGen +import Test.ThreadNet.TxGen.Allegra () +import Test.ThreadNet.TxGen.Shelley +import Test.ThreadNet.Util.Expectations (NumBlocks (..)) +import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) +import Test.ThreadNet.Util.NodeRestarts (noRestarts) +import Test.ThreadNet.Util.NodeToNodeVersion (genVersionFiltered) +import Test.ThreadNet.Util.Seed (runGen) +import Test.Util.BoolProps qualified as BoolProps +import Test.Util.HardFork.Future (EraSize (..), Future (..)) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) +import Test.Util.TestEnv type ShelleyAllegraBlock = ShelleyBasedHardForkBlock (TPraos MockCrypto) ShelleyEra (TPraos MockCrypto) AllegraEra @@ -77,194 +81,209 @@ type ShelleyAllegraBlock = -- it literally as soon as possible. 
Therefore, if the test reaches the end of -- the first epoch, the proposal will be adopted. data TestSetup = TestSetup - { setupD :: Shelley.DecentralizationParam - , setupHardFork :: Bool - -- ^ whether the proposal should trigger a hard fork or not + { setupD :: Shelley.DecentralizationParam + , setupHardFork :: Bool + -- ^ whether the proposal should trigger a hard fork or not , setupInitialNonce :: SL.Nonce - -- ^ the initial Shelley 'SL.ticknStateEpochNonce' - -- - -- We vary it to ensure we explore different leader schedules. - , setupK :: SecurityParam - , setupPartition :: Partition - , setupSlotLength :: SlotLength - , setupTestConfig :: TestConfig - , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion ShelleyAllegraBlock) + -- ^ the initial Shelley 'SL.ticknStateEpochNonce' + -- + -- We vary it to ensure we explore different leader schedules. + , setupK :: SecurityParam + , setupPartition :: Partition + , setupSlotLength :: SlotLength + , setupTestConfig :: TestConfig + , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion ShelleyAllegraBlock) } - deriving (Show) + deriving Show instance Arbitrary TestSetup where arbitrary = do - setupD <- arbitrary - -- The decentralization parameter cannot be 0 in the first - -- Shelley epoch, since stake pools can only be created and - -- delegated to via Shelley transactions. - `suchThat` ((/= 0) . Shelley.decentralizationParamToRational) + setupD <- + arbitrary + -- The decentralization parameter cannot be 0 in the first + -- Shelley epoch, since stake pools can only be created and + -- delegated to via Shelley transactions. + `suchThat` ((/= 0) . Shelley.decentralizationParamToRational) setupK <- SecurityParam <$> choose (8, 10) `suchThatMap` nonZero - -- If k < 8, common prefix violations become too likely in - -- Praos mode for thin overlay schedules (ie low d), even for - -- f=0.2. 
+ -- If k < 8, common prefix violations become too likely in + -- Praos mode for thin overlay schedules (ie low d), even for + -- f=0.2. setupInitialNonce <- genNonce - setupSlotLength <- arbitrary + setupSlotLength <- arbitrary let epochSize = EpochSize $ shelleyEpochSize setupK - setupTestConfig <- genTestConfig - setupK - (epochSize, epochSize) + setupTestConfig <- + genTestConfig + setupK + (epochSize, epochSize) let TestConfig{numCoreNodes, numSlots} = setupTestConfig - setupHardFork <- frequency [(49, pure True), (1, pure False)] + setupHardFork <- frequency [(49, pure True), (1, pure False)] -- TODO How reliable is the Byron-based partition duration logic when -- reused for Shelley? setupPartition <- genPartition numCoreNodes numSlots setupK - setupVersion <- genVersionFiltered - isHardForkNodeToNodeEnabled - (Proxy @ShelleyAllegraBlock) - - pure TestSetup - { setupD - , setupHardFork - , setupInitialNonce - , setupK - , setupPartition - , setupSlotLength - , setupTestConfig - , setupVersion - } + setupVersion <- + genVersionFiltered + isHardForkNodeToNodeEnabled + (Proxy @ShelleyAllegraBlock) + + pure + TestSetup + { setupD + , setupHardFork + , setupInitialNonce + , setupK + , setupPartition + , setupSlotLength + , setupTestConfig + , setupVersion + } - -- TODO shrink +-- TODO shrink tests :: TestTree -tests = testGroup "ShelleyAllegra ThreadNet" $ - [ let name = "simple convergence" in - askTestEnv $ adjustTestMode $ testProperty name $ \setup -> - prop_simple_shelleyAllegra_convergence setup +tests = + testGroup "ShelleyAllegra ThreadNet" $ + [ let name = "simple convergence" + in askTestEnv $ adjustTestMode $ testProperty name $ \setup -> + prop_simple_shelleyAllegra_convergence setup ] - - where - adjustTestMode :: TestTree -> TestEnv -> TestTree - adjustTestMode tree = \case - Nightly -> tree - _ -> adjustQuickCheckTests (`div` 10) tree + where + adjustTestMode :: TestTree -> TestEnv -> TestTree + adjustTestMode tree = \case + Nightly -> tree + _ -> 
adjustQuickCheckTests (`div` 10) tree prop_simple_shelleyAllegra_convergence :: TestSetup -> Property -prop_simple_shelleyAllegra_convergence TestSetup - { setupD - , setupHardFork - , setupInitialNonce - , setupK - , setupPartition - , setupSlotLength - , setupTestConfig - , setupVersion - } = - prop_general_semisync pga testOutput .&&. - prop_inSync testOutput .&&. - prop_ReachesEra2 reachesEra2 .&&. - prop_noCPViolation .&&. - ( tabulate "ReachesEra2 label" [label_ReachesEra2 reachesEra2] $ - tabulate "Observed forge during a non-overlay slot in the second era" - [ label_hadActiveNonOverlaySlots - testOutput - overlaySlots - ] $ - tabulatePartitionDuration setupK setupPartition $ - tabulateFinalIntersectionDepth - setupK - (NumBlocks finalIntersectionDepth) - finalBlockEra $ - tabulatePartitionPosition - (NumSlots numFirstEraSlots) - setupPartition - (ledgerReachesEra2 reachesEra2) $ - property True - ) - where +prop_simple_shelleyAllegra_convergence + TestSetup + { setupD + , setupHardFork + , setupInitialNonce + , setupK + , setupPartition + , setupSlotLength + , setupTestConfig + , setupVersion + } = + prop_general_semisync pga testOutput + .&&. prop_inSync testOutput + .&&. prop_ReachesEra2 reachesEra2 + .&&. prop_noCPViolation + .&&. ( tabulate "ReachesEra2 label" [label_ReachesEra2 reachesEra2] + $ tabulate + "Observed forge during a non-overlay slot in the second era" + [ label_hadActiveNonOverlaySlots + testOutput + overlaySlots + ] + $ tabulatePartitionDuration setupK setupPartition + $ tabulateFinalIntersectionDepth + setupK + (NumBlocks finalIntersectionDepth) + finalBlockEra + $ tabulatePartitionPosition + (NumSlots numFirstEraSlots) + setupPartition + (ledgerReachesEra2 reachesEra2) + $ property True + ) + where TestConfig { initSeed , numCoreNodes , numSlots } = setupTestConfig - pga = PropGeneralArgs - { pgaBlockProperty = const $ property True - , pgaCountTxs = fromIntegral . length . 
extractTxs + pga = + PropGeneralArgs + { pgaBlockProperty = const $ property True + , pgaCountTxs = fromIntegral . length . extractTxs , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = 0 - , pgaFixedMaxForkLength = Just maxForkLength - -- the leader schedule isn't fixed because the Shelley leader + , pgaFirstBlockNo = 0 + , pgaFixedMaxForkLength = Just maxForkLength + , -- the leader schedule isn't fixed because the Shelley leader -- schedule is (at least ideally) unpredictable - , pgaFixedSchedule = Nothing - , pgaSecurityParam = setupK - , pgaTestConfig = setupTestConfig - , pgaTestConfigB = testConfigB + pgaFixedSchedule = Nothing + , pgaSecurityParam = setupK + , pgaTestConfig = setupTestConfig + , pgaTestConfigB = testConfigB } - txGenExtra = ShelleyTxGenExtra - { stgeGenEnv = mkGenEnv DoNotGeneratePPUs coreNodes - -- We don't generate any transactions before the transaction - -- carrying the proposal because they might consume its inputs - -- before it does, thereby rendering it invalid. - , stgeStartAt = SlotNo 1 - } - - testConfigB = TestConfigB - { forgeEbbEnv = Nothing - , future = - if setupHardFork - then - -- In this case the PVU will trigger the transition to the second era - -- - -- By FACT (B), the PVU is always successful if we reach the second - -- era. - EraCons setupSlotLength epochSize firstEraSize $ - EraFinal setupSlotLength epochSize - else - EraFinal setupSlotLength epochSize - , messageDelay = mkMessageDelay setupPartition - , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes - , nodeRestarts = noRestarts - , txGenExtra = WrapTxGenExtra txGenExtra :* WrapTxGenExtra () :* Nil - , version = setupVersion - } + txGenExtra = + ShelleyTxGenExtra + { stgeGenEnv = mkGenEnv DoNotGeneratePPUs coreNodes + , -- We don't generate any transactions before the transaction + -- carrying the proposal because they might consume its inputs + -- before it does, thereby rendering it invalid. 
+ stgeStartAt = SlotNo 1 + } + + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = + if setupHardFork + then + -- In this case the PVU will trigger the transition to the second era + -- + -- By FACT (B), the PVU is always successful if we reach the second + -- era. + EraCons setupSlotLength epochSize firstEraSize $ + EraFinal setupSlotLength epochSize + else + EraFinal setupSlotLength epochSize + , messageDelay = mkMessageDelay setupPartition + , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes + , nodeRestarts = noRestarts + , txGenExtra = WrapTxGenExtra txGenExtra :* WrapTxGenExtra () :* Nil + , version = setupVersion + } testOutput :: TestOutput ShelleyAllegraBlock - testOutput = runTestNetwork setupTestConfig testConfigB TestConfigMB { - nodeInfo = \(CoreNodeId nid) -> - let protocolParamsShelleyBased = - ProtocolParamsShelleyBased { - shelleyBasedInitialNonce = setupInitialNonce - , shelleyBasedLeaderCredentials = - [Shelley.mkLeaderCredentials - (coreNodes !! fromIntegral nid)] + testOutput = + runTestNetwork + setupTestConfig + testConfigB + TestConfigMB + { nodeInfo = \(CoreNodeId nid) -> + let protocolParamsShelleyBased = + ProtocolParamsShelleyBased + { shelleyBasedInitialNonce = setupInitialNonce + , shelleyBasedLeaderCredentials = + [ Shelley.mkLeaderCredentials + (coreNodes !! 
fromIntegral nid) + ] + } + hardForkTrigger = + TriggerHardForkAtVersion $ SL.getVersion majorVersion2 + (protocolInfo, blockForging) = + protocolInfoShelleyBasedHardFork + protocolParamsShelleyBased + (SL.ProtVer majorVersion1 0) + (SL.ProtVer majorVersion2 0) + ( L.mkTransitionConfig L.NoGenesis $ + L.mkShelleyTransitionConfig genesisShelley + ) + hardForkTrigger + in TestNodeInitialization + { tniCrucialTxs = + if not setupHardFork + then [] + else + fmap GenTxShelley1 $ + Shelley.mkSetDecentralizationParamTxs + coreNodes + (SL.ProtVer majorVersion2 0) + (SlotNo $ unNumSlots numSlots) -- never expire + setupD -- unchanged + , tniProtocolInfo = protocolInfo + , tniBlockForging = blockForging } - hardForkTrigger = - TriggerHardForkAtVersion $ SL.getVersion majorVersion2 - (protocolInfo, blockForging) = - protocolInfoShelleyBasedHardFork - protocolParamsShelleyBased - (SL.ProtVer majorVersion1 0) - (SL.ProtVer majorVersion2 0) - ( L.mkTransitionConfig L.NoGenesis - $ L.mkShelleyTransitionConfig genesisShelley - ) - hardForkTrigger - in - TestNodeInitialization { - tniCrucialTxs = - if not setupHardFork then [] else - fmap GenTxShelley1 $ - Shelley.mkSetDecentralizationParamTxs - coreNodes - (SL.ProtVer majorVersion2 0) - (SlotNo $ unNumSlots numSlots) -- never expire - setupD -- unchanged - , tniProtocolInfo = protocolInfo - , tniBlockForging = blockForging - } , mkRekeyM = Nothing } @@ -275,11 +294,12 @@ prop_simple_shelleyAllegra_convergence TestSetup initialKESPeriod = SL.KESPeriod 0 coreNodes :: [Shelley.CoreNode MockCrypto] - coreNodes = runGen initSeed $ + coreNodes = + runGen initSeed $ replicateM (fromIntegral n) $ Shelley.genCoreNode initialKESPeriod - where - NumCoreNodes n = numCoreNodes + where + NumCoreNodes n = numCoreNodes maxLovelaceSupply :: Word64 maxLovelaceSupply = @@ -287,15 +307,15 @@ prop_simple_shelleyAllegra_convergence TestSetup genesisShelley :: ShelleyGenesis genesisShelley = - Shelley.mkGenesisConfig - (SL.ProtVer majorVersion1 0) - 
setupK - activeSlotCoeff - setupD - maxLovelaceSupply - setupSlotLength - (Shelley.mkKesConfig (Proxy @MockCrypto) numSlots) - coreNodes + Shelley.mkGenesisConfig + (SL.ProtVer majorVersion1 0) + setupK + activeSlotCoeff + setupD + maxLovelaceSupply + setupSlotLength + (Shelley.mkKesConfig (Proxy @MockCrypto) numSlots) + coreNodes -- the Shelley ledger is designed to use a fixed epoch size, so this test -- does not randomize it @@ -308,62 +328,65 @@ prop_simple_shelleyAllegra_convergence TestSetup -- Classifying test cases reachesEra2 :: ReachesEra2 - reachesEra2 = ReachesEra2 - { rsEra1Slots = - BoolProps.enabledIf $ t > numFirstEraSlots - , rsPV = BoolProps.enabledIf setupHardFork - , rsEra2Blocks = - or $ - [ not $ isFirstEraBlock blk - | (_nid, no) <- Map.toList testOutputNodes - , let NodeOutput{nodeOutputForges} = no - , (blk, _m) <- maybeToList $ Map.maxView nodeOutputForges - -- the last block the node forged - ] - , rsEra2Slots = - --- TODO this comment and code are wrong - - BoolProps.requiredIf $ - -- The active slots in the first two Shelley epochs are all overlay - -- slots, so the first Shelley block will arise from one of those. - not $ Set.null overlaySlots - } - where - NumSlots t = numSlots - TestOutput{testOutputNodes} = testOutput + reachesEra2 = + ReachesEra2 + { rsEra1Slots = + BoolProps.enabledIf $ t > numFirstEraSlots + , rsPV = BoolProps.enabledIf setupHardFork + , rsEra2Blocks = + or $ + [ not $ isFirstEraBlock blk + | (_nid, no) <- Map.toList testOutputNodes + , let NodeOutput{nodeOutputForges} = no + , (blk, _m) <- maybeToList $ Map.maxView nodeOutputForges + -- the last block the node forged + ] + , rsEra2Slots = + --- TODO this comment and code are wrong + + BoolProps.requiredIf $ + -- The active slots in the first two Shelley epochs are all overlay + -- slots, so the first Shelley block will arise from one of those. 
+ not $ + Set.null overlaySlots + } + where + NumSlots t = numSlots + TestOutput{testOutputNodes} = testOutput -- All OBFT overlay slots in the second era. overlaySlots :: Set SlotNo overlaySlots = - secondEraOverlaySlots - numSlots - (NumSlots numFirstEraSlots) - (sgProtocolParams genesisShelley ^. SL.ppDG) - epochSize + secondEraOverlaySlots + numSlots + (NumSlots numFirstEraSlots) + (sgProtocolParams genesisShelley ^. SL.ppDG) + epochSize numFirstEraSlots :: Word64 numFirstEraSlots = - numFirstEraEpochs * unEpochSize epochSize + numFirstEraEpochs * unEpochSize epochSize finalBlockEra :: String finalBlockEra = - if rsEra2Blocks reachesEra2 + if rsEra2Blocks reachesEra2 then "Allegra" else "Shelley" finalIntersectionDepth :: Word64 finalIntersectionDepth = depth - where - NumBlocks depth = calcFinalIntersectionDepth pga testOutput + where + NumBlocks depth = calcFinalIntersectionDepth pga testOutput prop_noCPViolation :: Property prop_noCPViolation = - counterexample - ( "finalChains: " <> - show (nodeOutputFinalChain <$> testOutputNodes testOutput) - ) $ - counterexample "CP violation in final chains!" $ - property $ unNonZero (maxRollbacks setupK) >= finalIntersectionDepth + counterexample + ( "finalChains: " + <> show (nodeOutputFinalChain <$> testOutputNodes testOutput) + ) + $ counterexample "CP violation in final chains!" 
+ $ property + $ unNonZero (maxRollbacks setupK) >= finalIntersectionDepth {------------------------------------------------------------------------------- Constants diff --git a/ouroboros-consensus-cardano/test/shelley-test/Main.hs b/ouroboros-consensus-cardano/test/shelley-test/Main.hs index 302ef96b0d..6d8e6f5a50 100644 --- a/ouroboros-consensus-cardano/test/shelley-test/Main.hs +++ b/ouroboros-consensus-cardano/test/shelley-test/Main.hs @@ -1,25 +1,28 @@ module Main (main) where -import qualified Test.Consensus.Shelley.Coherence (tests) -import qualified Test.Consensus.Shelley.Golden (tests) -import qualified Test.Consensus.Shelley.LedgerTables (tests) -import qualified Test.Consensus.Shelley.Serialisation (tests) -import qualified Test.Consensus.Shelley.SupportedNetworkProtocolVersion (tests) -import Test.Tasty -import qualified Test.ThreadNet.Shelley (tests) -import Test.Util.TestEnv (defaultMainWithTestEnv, - defaultTestEnvConfig) +import Test.Consensus.Shelley.Coherence qualified (tests) +import Test.Consensus.Shelley.Golden qualified (tests) +import Test.Consensus.Shelley.LedgerTables qualified (tests) +import Test.Consensus.Shelley.Serialisation qualified (tests) +import Test.Consensus.Shelley.SupportedNetworkProtocolVersion qualified (tests) +import Test.Tasty +import Test.ThreadNet.Shelley qualified (tests) +import Test.Util.TestEnv + ( defaultMainWithTestEnv + , defaultTestEnvConfig + ) main :: IO () main = defaultMainWithTestEnv defaultTestEnvConfig tests tests :: TestTree tests = - testGroup "shelley" - [ Test.Consensus.Shelley.Coherence.tests - , Test.Consensus.Shelley.Golden.tests - , Test.Consensus.Shelley.LedgerTables.tests - , Test.Consensus.Shelley.Serialisation.tests - , Test.Consensus.Shelley.SupportedNetworkProtocolVersion.tests - , Test.ThreadNet.Shelley.tests - ] + testGroup + "shelley" + [ Test.Consensus.Shelley.Coherence.tests + , Test.Consensus.Shelley.Golden.tests + , Test.Consensus.Shelley.LedgerTables.tests + , 
Test.Consensus.Shelley.Serialisation.tests + , Test.Consensus.Shelley.SupportedNetworkProtocolVersion.tests + , Test.ThreadNet.Shelley.tests + ] diff --git a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Coherence.hs b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Coherence.hs index 5d948f88d8..029bbcdda1 100644 --- a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Coherence.hs +++ b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Coherence.hs @@ -1,34 +1,41 @@ module Test.Consensus.Shelley.Coherence (tests) where -import Cardano.Ledger.Alonzo.Scripts (ExUnits, pointWiseExUnits) -import qualified Data.Measure as Measure -import Data.Word (Word32) -import Ouroboros.Consensus.Ledger.SupportsMempool (ByteSize32 (..), - IgnoringOverflow (..)) -import Ouroboros.Consensus.Shelley.Ledger.Mempool (AlonzoMeasure (..), - ConwayMeasure (..), fromExUnits) -import Test.Cardano.Ledger.Alonzo.Serialisation.Generators () -import Test.Tasty -import Test.Tasty.QuickCheck +import Cardano.Ledger.Alonzo.Scripts (ExUnits, pointWiseExUnits) +import Data.Measure qualified as Measure +import Data.Word (Word32) +import Ouroboros.Consensus.Ledger.SupportsMempool + ( ByteSize32 (..) + , IgnoringOverflow (..) + ) +import Ouroboros.Consensus.Shelley.Ledger.Mempool + ( AlonzoMeasure (..) + , ConwayMeasure (..) 
+ , fromExUnits + ) +import Test.Cardano.Ledger.Alonzo.Serialisation.Generators () +import Test.Tasty +import Test.Tasty.QuickCheck tests :: TestTree -tests = testGroup "Shelley coherences" [ - testProperty "Measure.<= uses pointWiseExUnits (<=)" leqCoherence +tests = + testGroup + "Shelley coherences" + [ testProperty "Measure.<= uses pointWiseExUnits (<=)" leqCoherence ] -- | 'Measure.<=' and @'pointWiseExUnits' (<=)@ must agree leqCoherence :: Word32 -> Word32 -> ExUnits -> ExUnits -> Property leqCoherence w1 w2 eu1 eu2 = - actual === expected - where - -- ConwayMeasure is the fullest TxMeasure and mainnet's - inj eu = - ConwayMeasure - (AlonzoMeasure - (IgnoringOverflow $ ByteSize32 w1) - (fromExUnits eu) - ) - (IgnoringOverflow $ ByteSize32 w2) + actual === expected + where + -- ConwayMeasure is the fullest TxMeasure and mainnet's + inj eu = + ConwayMeasure + ( AlonzoMeasure + (IgnoringOverflow $ ByteSize32 w1) + (fromExUnits eu) + ) + (IgnoringOverflow $ ByteSize32 w2) - actual = inj eu1 Measure.<= inj eu2 - expected = pointWiseExUnits (<=) eu1 eu2 + actual = inj eu1 Measure.<= inj eu2 + expected = pointWiseExUnits (<=) eu1 eu2 diff --git a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Golden.hs b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Golden.hs index a6bbbb2959..acfab846ec 100644 --- a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Golden.hs +++ b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Golden.hs @@ -1,28 +1,28 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TemplateHaskell #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.Shelley.Golden (tests) where -import Ouroboros.Consensus.Ledger.Query (QueryVersion) -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Shelley.Node () -import 
Ouroboros.Consensus.Shelley.ShelleyHFC () -import System.FilePath ((</>)) -import Test.Consensus.Shelley.Examples -import Test.Tasty -import Test.Util.Paths -import Test.Util.Serialisation.Golden +import Ouroboros.Consensus.Ledger.Query (QueryVersion) +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger.NetworkProtocolVersion +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.Node () +import Ouroboros.Consensus.Shelley.ShelleyHFC () +import System.FilePath ((</>)) +import Test.Consensus.Shelley.Examples +import Test.Tasty +import Test.Util.Paths +import Test.Util.Serialisation.Golden tests :: TestTree tests = goldenTest_all codecConfig ($(getGoldenDir) </> "shelley") examplesShelley instance ToGoldenDirectory ShelleyNodeToNodeVersion - -- Use defaults + +-- Use defaults instance ToGoldenDirectory (QueryVersion, ShelleyNodeToClientVersion) where - toGoldenDirectory (queryVersion, blockVersion) - = show queryVersion </> show blockVersion + toGoldenDirectory (queryVersion, blockVersion) = + show queryVersion </> show blockVersion diff --git a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/LedgerTables.hs b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/LedgerTables.hs index 21e4a18aab..66296e45f7 100644 --- a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/LedgerTables.hs +++ b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/LedgerTables.hs @@ -6,44 +6,45 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.Shelley.LedgerTables (tests) where -import qualified Cardano.Ledger.Api.Era as L -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Strict -import Ouroboros.Consensus.Cardano.Block (CardanoShelleyEras) -import Ouroboros.Consensus.Ledger.Tables -import 
Ouroboros.Consensus.Shelley.Eras -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Test.Cardano.Ledger.Alonzo.Serialisation.Generators () -import Test.Cardano.Ledger.Babbage.Arbitrary () -import Test.Cardano.Ledger.Babbage.Serialisation.Generators () -import Test.Cardano.Ledger.Conway.Arbitrary () -import Test.Consensus.Shelley.Generators () -import Test.Consensus.Shelley.MockCrypto (CanMock) -import Test.LedgerTables -import Test.Tasty -import Test.Tasty.QuickCheck +import Cardano.Ledger.Api.Era qualified as L +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Strict +import Ouroboros.Consensus.Cardano.Block (CardanoShelleyEras) +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Shelley.Eras +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Test.Cardano.Ledger.Alonzo.Serialisation.Generators () +import Test.Cardano.Ledger.Babbage.Arbitrary () +import Test.Cardano.Ledger.Babbage.Serialisation.Generators () +import Test.Cardano.Ledger.Conway.Arbitrary () +import Test.Consensus.Shelley.Generators () +import Test.Consensus.Shelley.MockCrypto (CanMock) +import Test.LedgerTables +import Test.Tasty +import Test.Tasty.QuickCheck tests :: TestTree tests = - testGroup "LedgerTables" + testGroup "LedgerTables" . hcollapse . hcmap (Proxy @TestLedgerTables) (K . f) $ (hpure Proxy :: NP Proxy (CardanoShelleyEras StandardCrypto)) - where - f :: forall blk. TestLedgerTables blk => Proxy blk -> TestTree - f _ = testGroup (L.eraName @(ShelleyBlockLedgerEra blk)) - [ testProperty "Stowable laws" (prop_stowable_laws @blk) - , testProperty "HasLedgerTables laws" (prop_hasledgertables_laws @blk) - ] + where + f :: forall blk. 
TestLedgerTables blk => Proxy blk -> TestTree + f _ = + testGroup + (L.eraName @(ShelleyBlockLedgerEra blk)) + [ testProperty "Stowable laws" (prop_stowable_laws @blk) + , testProperty "HasLedgerTables laws" (prop_hasledgertables_laws @blk) + ] class ( HasLedgerTables (LedgerState blk) @@ -52,7 +53,8 @@ class , (Show `And` Arbitrary) (LedgerState blk ValuesMK) , (Show `And` Arbitrary) (LedgerTables (LedgerState blk) ValuesMK) , L.Era (ShelleyBlockLedgerEra blk) - ) => TestLedgerTables blk + ) => + TestLedgerTables blk instance ( HasLedgerTables (LedgerState blk) @@ -61,9 +63,13 @@ instance , (Show `And` Arbitrary) (LedgerState blk ValuesMK) , (Show `And` Arbitrary) (LedgerTables (LedgerState blk) ValuesMK) , L.Era (ShelleyBlockLedgerEra blk) - ) => TestLedgerTables blk + ) => + TestLedgerTables blk -instance ( CanMock proto era - , Arbitrary (LedgerState (ShelleyBlock proto era) EmptyMK) - ) => Arbitrary (LedgerTables (LedgerState (ShelleyBlock proto era)) ValuesMK) where +instance + ( CanMock proto era + , Arbitrary (LedgerState (ShelleyBlock proto era) EmptyMK) + ) => + Arbitrary (LedgerTables (LedgerState (ShelleyBlock proto era)) ValuesMK) + where arbitrary = projectLedgerTables . 
unstowLedgerTables <$> arbitrary diff --git a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Serialisation.hs b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Serialisation.hs index 251143266e..1aa3f26058 100644 --- a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Serialisation.hs +++ b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/Serialisation.hs @@ -5,56 +5,57 @@ module Test.Consensus.Shelley.Serialisation (tests) where -import qualified Codec.CBOR.Write as CBOR -import qualified Data.ByteString.Lazy as Lazy -import Data.Constraint -import Data.Proxy (Proxy (..)) -import Data.Word (Word64) -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.HFEras () -import Ouroboros.Consensus.Shelley.Ledger -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Shelley.Node () -import Ouroboros.Consensus.Shelley.Node.Serialisation () -import Ouroboros.Consensus.Shelley.Protocol.TPraos () -import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) -import Test.Consensus.Cardano.Generators () -import Test.Consensus.Shelley.Generators () -import Test.Consensus.Shelley.MockCrypto -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Corruption -import Test.Util.Orphans.Arbitrary () -import Test.Util.Serialisation.Roundtrip +import Codec.CBOR.Write qualified as CBOR +import Data.ByteString.Lazy qualified as Lazy +import Data.Constraint +import Data.Proxy (Proxy (..)) +import Data.Word (Word64) +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.HFEras () +import Ouroboros.Consensus.Shelley.Ledger +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.Node () +import Ouroboros.Consensus.Shelley.Node.Serialisation () +import Ouroboros.Consensus.Shelley.Protocol.TPraos () +import Ouroboros.Consensus.Storage.Common 
(BinaryBlockInfo (..)) +import Test.Consensus.Cardano.Generators () +import Test.Consensus.Shelley.Generators () +import Test.Consensus.Shelley.MockCrypto +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Corruption +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Roundtrip tests :: TestTree -tests = testGroup "Shelley" +tests = + testGroup + "Shelley" [ roundtrip_all testCodecCfg dictNestedHdr - - -- Test for real crypto too - , testProperty "hashSize real crypto" $ prop_hashSize pReal + , -- Test for real crypto too + testProperty "hashSize real crypto" $ prop_hashSize pReal , testProperty "ConvertRawHash real crypto" $ roundtrip_ConvertRawHash pReal - , testProperty "BinaryBlockInfo sanity check" prop_shelleyBinaryBlockInfo - - , testGroup "Integrity" - [ testProperty "generate non-corrupt blocks" prop_blockIntegrity + , testGroup + "Integrity" + [ testProperty "generate non-corrupt blocks" prop_blockIntegrity , testProperty "generate non-corrupt headers" prop_headerIntegrity - , testProperty "detect corruption in blocks" prop_detectCorruption_Block + , testProperty "detect corruption in blocks" prop_detectCorruption_Block , testProperty "detect corruption in headers" prop_detectCorruption_Header ] ] - where - pReal :: Proxy Block - pReal = Proxy + where + pReal :: Proxy Block + pReal = Proxy - testCodecCfg :: CodecConfig Block - testCodecCfg = ShelleyCodecConfig + testCodecCfg :: CodecConfig Block + testCodecCfg = ShelleyCodecConfig - dictNestedHdr :: - forall a era proto. ShelleyCompatible proto era - => NestedCtxt_ (ShelleyBlock proto era) Header a -> Dict (Eq a, Show a) - dictNestedHdr CtxtShelley = Dict + dictNestedHdr :: + forall a era proto. 
+ ShelleyCompatible proto era => + NestedCtxt_ (ShelleyBlock proto era) Header a -> Dict (Eq a, Show a) + dictNestedHdr CtxtShelley = Dict {------------------------------------------------------------------------------- BinaryBlockInfo @@ -62,19 +63,19 @@ tests = testGroup "Shelley" prop_shelleyBinaryBlockInfo :: Block -> Property prop_shelleyBinaryBlockInfo blk = - encodedHeader === extractedHeader - where - BinaryBlockInfo { headerOffset, headerSize } = - shelleyBinaryBlockInfo blk - - extractedHeader :: Lazy.ByteString - extractedHeader = - Lazy.take (fromIntegral headerSize) $ - Lazy.drop (fromIntegral headerOffset) $ + encodedHeader === extractedHeader + where + BinaryBlockInfo{headerOffset, headerSize} = + shelleyBinaryBlockInfo blk + + extractedHeader :: Lazy.ByteString + extractedHeader = + Lazy.take (fromIntegral headerSize) $ + Lazy.drop (fromIntegral headerOffset) $ CBOR.toLazyByteString (encodeShelleyBlock blk) - encodedHeader :: Lazy.ByteString - encodedHeader = CBOR.toLazyByteString $ encodeShelleyHeader (getHeader blk) + encodedHeader :: Lazy.ByteString + encodedHeader = CBOR.toLazyByteString $ encodeShelleyHeader (getHeader blk) {------------------------------------------------------------------------------- Integrity @@ -88,7 +89,7 @@ testTPraosSlotsPerKESPeriod = maxBound -- | Test that the block we generate pass the 'verifyBlockIntegrity' check prop_blockIntegrity :: Coherent Block -> Bool prop_blockIntegrity = - verifyBlockIntegrity testTPraosSlotsPerKESPeriod . getCoherent + verifyBlockIntegrity testTPraosSlotsPerKESPeriod . getCoherent -- | Test that the block we generate pass the 'verifyHeaderIntegrity' check prop_headerIntegrity :: Header Block -> Bool @@ -99,17 +100,18 @@ prop_headerIntegrity = -- | Test that we can detect random bitflips in blocks. 
prop_detectCorruption_Block :: Coherent Block -> Corruption -> Property prop_detectCorruption_Block (Coherent blk) = - detectCorruption - encodeShelleyBlock - decodeShelleyBlock - (verifyBlockIntegrity testTPraosSlotsPerKESPeriod) - blk + detectCorruption + encodeShelleyBlock + decodeShelleyBlock + (verifyBlockIntegrity testTPraosSlotsPerKESPeriod) + blk -- | Test that we can detect random bitflips in blocks. prop_detectCorruption_Header :: Header Block -> Corruption -> Property prop_detectCorruption_Header = - detectCorruption - encodeShelleyHeader - decodeShelleyHeader - (verifyHeaderIntegrity @(TPraos MockCrypto) testTPraosSlotsPerKESPeriod - . shelleyHeaderRaw) + detectCorruption + encodeShelleyHeader + decodeShelleyHeader + ( verifyHeaderIntegrity @(TPraos MockCrypto) testTPraosSlotsPerKESPeriod + . shelleyHeaderRaw + ) diff --git a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/SupportedNetworkProtocolVersion.hs b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/SupportedNetworkProtocolVersion.hs index d6790b1641..fe8bdb770d 100644 --- a/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/SupportedNetworkProtocolVersion.hs +++ b/ouroboros-consensus-cardano/test/shelley-test/Test/Consensus/Shelley/SupportedNetworkProtocolVersion.hs @@ -2,28 +2,29 @@ module Test.Consensus.Shelley.SupportedNetworkProtocolVersion (tests) where -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Strict -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Cardano.Block -import Ouroboros.Consensus.Node.NetworkProtocolVersion - (SupportedNetworkProtocolVersion) -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Test.Tasty -import Test.Tasty.HUnit -import Test.Util.SupportedNetworkProtocolVersion +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Strict +import Data.Typeable (Typeable) +import 
Ouroboros.Consensus.Cardano.Block +import Ouroboros.Consensus.Node.NetworkProtocolVersion + ( SupportedNetworkProtocolVersion + ) +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Test.Tasty +import Test.Tasty.HUnit +import Test.Util.SupportedNetworkProtocolVersion tests :: TestTree tests = - testCase "Shelley exhaustive network protocol versions" + testCase "Shelley exhaustive network protocol versions" . sequence_ . hcollapse . hcmap - (Proxy @(And Typeable SupportedNetworkProtocolVersion)) - (K . exhaustiveSupportedNetworkProtocolVersions) + (Proxy @(And Typeable SupportedNetworkProtocolVersion)) + (K . exhaustiveSupportedNetworkProtocolVersions) $ shelleyBlocks - where - shelleyBlocks :: NP Proxy (CardanoShelleyEras StandardCrypto) - shelleyBlocks = hpure Proxy + where + shelleyBlocks :: NP Proxy (CardanoShelleyEras StandardCrypto) + shelleyBlocks = hpure Proxy diff --git a/ouroboros-consensus-cardano/test/shelley-test/Test/ThreadNet/Shelley.hs b/ouroboros-consensus-cardano/test/shelley-test/Test/ThreadNet/Shelley.hs index 5296d90cc8..70fcac6f34 100644 --- a/ouroboros-consensus-cardano/test/shelley-test/Test/ThreadNet/Shelley.hs +++ b/ouroboros-consensus-cardano/test/shelley-test/Test/ThreadNet/Shelley.hs @@ -5,95 +5,107 @@ module Test.ThreadNet.Shelley (tests) where -import Cardano.Ledger.BaseTypes (nonZero) -import qualified Cardano.Ledger.BaseTypes as SL (UnitInterval, - mkNonceFromNumber, shelleyProtVer, unboundRational) -import Cardano.Ledger.Shelley (ShelleyEra) -import qualified Cardano.Ledger.Shelley.API as SL -import qualified Cardano.Ledger.Shelley.Core as SL -import qualified Cardano.Ledger.Shelley.Translation as SL - (toFromByronTranslationContext) -import qualified Cardano.Protocol.TPraos.OCert as SL -import Cardano.Slotting.EpochInfo (fixedEpochInfo) -import Control.Monad (replicateM) -import qualified Data.Map.Strict as Map -import Data.Word (Word64) -import Lens.Micro ((^.)) -import Ouroboros.Consensus.Block -import 
Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.TPraos (TPraos) -import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock) -import qualified Ouroboros.Consensus.Shelley.Ledger as Shelley -import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () -import Ouroboros.Consensus.Shelley.Node -import Ouroboros.Consensus.Shelley.ShelleyHFC () -import Test.Consensus.Shelley.MockCrypto (MockCrypto) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import Test.ThreadNet.Infra.Shelley -import Test.ThreadNet.Network (TestNodeInitialization (..), - nodeOutputFinalLedger) -import Test.ThreadNet.TxGen.Shelley -import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) -import Test.ThreadNet.Util.NodeRestarts (noRestarts) -import Test.ThreadNet.Util.NodeToNodeVersion (genVersion) -import Test.ThreadNet.Util.Seed (runGen) -import Test.Util.HardFork.Future (singleEraFuture) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) -import Test.Util.TestEnv +import Cardano.Ledger.BaseTypes (nonZero) +import Cardano.Ledger.BaseTypes qualified as SL + ( UnitInterval + , mkNonceFromNumber + , shelleyProtVer + , unboundRational + ) +import Cardano.Ledger.Shelley (ShelleyEra) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Ledger.Shelley.Core qualified as SL +import Cardano.Ledger.Shelley.Translation qualified as SL + ( toFromByronTranslationContext + ) +import Cardano.Protocol.TPraos.OCert qualified as SL +import Cardano.Slotting.EpochInfo (fixedEpochInfo) +import Control.Monad (replicateM) +import Data.Map.Strict qualified as Map +import Data.Word (Word64) +import Lens.Micro ((^.)) +import Ouroboros.Consensus.Block 
+import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool (extractTxs) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.TPraos (TPraos) +import Ouroboros.Consensus.Shelley.Ledger (ShelleyBlock) +import Ouroboros.Consensus.Shelley.Ledger qualified as Shelley +import Ouroboros.Consensus.Shelley.Ledger.SupportsProtocol () +import Ouroboros.Consensus.Shelley.Node +import Ouroboros.Consensus.Shelley.ShelleyHFC () +import Test.Consensus.Shelley.MockCrypto (MockCrypto) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Infra.Shelley +import Test.ThreadNet.Network + ( TestNodeInitialization (..) + , nodeOutputFinalLedger + ) +import Test.ThreadNet.TxGen.Shelley +import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) +import Test.ThreadNet.Util.NodeRestarts (noRestarts) +import Test.ThreadNet.Util.NodeToNodeVersion (genVersion) +import Test.ThreadNet.Util.Seed (runGen) +import Test.Util.HardFork.Future (singleEraFuture) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) +import Test.Util.TestEnv data TestSetup = TestSetup - { setupD :: DecentralizationParam - , setupD2 :: DecentralizationParam - -- ^ scheduled value - -- - -- If not equal to 'setupD', every node immediately (ie slot 0) issues a - -- protocol update transaction that will change the @d@ protocol parameter - -- accordingly. + { setupD :: DecentralizationParam + , setupD2 :: DecentralizationParam + -- ^ scheduled value + -- + -- If not equal to 'setupD', every node immediately (ie slot 0) issues a + -- protocol update transaction that will change the @d@ protocol parameter + -- accordingly. 
, setupInitialNonce :: SL.Nonce - -- ^ the initial Shelley 'SL.ticknStateEpochNonce' - -- - -- This test varies it too ensure it explores different leader schedules. - , setupK :: SecurityParam - , setupTestConfig :: TestConfig - , setupVersion :: (NodeToNodeVersion, BlockNodeToNodeVersion (ShelleyBlock (TPraos MockCrypto) ShelleyEra)) + -- ^ the initial Shelley 'SL.ticknStateEpochNonce' + -- + -- This test varies it to ensure it explores different leader schedules. + , setupK :: SecurityParam + , setupTestConfig :: TestConfig + , setupVersion :: + ( NodeToNodeVersion + , BlockNodeToNodeVersion (ShelleyBlock (TPraos MockCrypto) ShelleyEra) + ) } - deriving (Show) + deriving Show minK :: Word64 -minK = 5 -- Less than this increases risk of CP violations +minK = 5 -- Less than this increases risk of CP violations maxK :: Word64 -maxK = 10 -- More than this wastes execution time +maxK = 10 -- More than this wastes execution time activeSlotCoeff :: Rational -activeSlotCoeff = 0.5 -- TODO this is high +activeSlotCoeff = 0.5 -- TODO this is high instance Arbitrary TestSetup where arbitrary = do - setupD <- arbitrary - setupD2 <- arbitrary + setupD <- arbitrary + setupD2 <- arbitrary - setupInitialNonce <- frequency + setupInitialNonce <- + frequency [ (1, pure SL.NeutralNonce) , (9, SL.mkNonceFromNumber <$> arbitrary) ] - setupK <- SecurityParam <$> choose (minK, maxK) `suchThatMap` nonZero + setupK <- SecurityParam <$> choose (minK, maxK) `suchThatMap` nonZero - setupTestConfig <- arbitrary + setupTestConfig <- arbitrary - setupVersion <- genVersion (Proxy @(ShelleyBlock (TPraos MockCrypto) ShelleyEra)) + setupVersion <- genVersion (Proxy @(ShelleyBlock (TPraos MockCrypto) ShelleyEra)) - pure TestSetup + pure + TestSetup { setupD , setupD2 , setupInitialNonce @@ -102,93 +114,108 @@ instance Arbitrary TestSetup where , setupVersion } - -- TODO shrink +-- TODO shrink -- | We run for more slots at night.
newtype NightlyTestSetup = NightlyTestSetup TestSetup - deriving (Show) + deriving Show instance Arbitrary NightlyTestSetup where shrink (NightlyTestSetup setup) = NightlyTestSetup <$> shrink setup arbitrary = do - setup <- arbitrary + setup <- arbitrary - -- This caused 100 tests to have an expected run time of half an hour on - -- a Buildkite machine. Note that the Buildkite CI infrastructure is now - -- deprecated in favour of self-hosted Hydra instances. - -- - -- 100 extended tests had an average run time of 4643 seconds - -- 100 unextended tests had an average of 689 seconds - -- - -- 3/4*689 + 1/4*4643 seconds =~= 28 minutes. - moreEpochs <- frequency [(3, pure False), (1, pure True)] - - NightlyTestSetup <$> if not moreEpochs then pure setup else do - let TestSetup - { setupK - , setupTestConfig - } = setup - TestConfig - { numSlots - } = setupTestConfig - NumSlots t = numSlots - - -- run for multiple epochs - factor <- choose (1, 2) - let t' = t + factor * unEpochSize (mkEpochSize setupK activeSlotCoeff) - - pure setup - { setupTestConfig = setupTestConfig - { numSlots = NumSlots t' + -- This caused 100 tests to have an expected run time of half an hour on + -- a Buildkite machine. Note that the Buildkite CI infrastructure is now + -- deprecated in favour of self-hosted Hydra instances. + -- + -- 100 extended tests had an average run time of 4643 seconds + -- 100 unextended tests had an average of 689 seconds + -- + -- 3/4*689 + 1/4*4643 seconds =~= 28 minutes. 
+ moreEpochs <- frequency [(3, pure False), (1, pure True)] + + NightlyTestSetup + <$> if not moreEpochs + then pure setup + else do + let TestSetup + { setupK + , setupTestConfig + } = setup + TestConfig + { numSlots + } = setupTestConfig + NumSlots t = numSlots + + -- run for multiple epochs + factor <- choose (1, 2) + let t' = t + factor * unEpochSize (mkEpochSize setupK activeSlotCoeff) + + pure + setup + { setupTestConfig = + setupTestConfig + { numSlots = NumSlots t' + } } - } tests :: TestTree -tests = testGroup "Shelley ThreadNet" - [ let name = "simple convergence" in - askTestEnv $ \case - Nightly -> testProperty name $ \(NightlyTestSetup setup) -> - prop_simple_real_tpraos_convergence setup - _ -> adjustQuickCheckTests (`div` 5) $ testProperty name prop_simple_real_tpraos_convergence +tests = + testGroup + "Shelley ThreadNet" + [ let name = "simple convergence" + in askTestEnv $ \case + Nightly -> testProperty name $ \(NightlyTestSetup setup) -> + prop_simple_real_tpraos_convergence setup + _ -> adjustQuickCheckTests (`div` 5) $ testProperty name prop_simple_real_tpraos_convergence ] prop_simple_real_tpraos_convergence :: TestSetup -> Property -prop_simple_real_tpraos_convergence TestSetup - { setupD - , setupD2 - , setupInitialNonce - , setupK - , setupTestConfig - , setupVersion - } = - countertabulate "Epoch number of last slot" +prop_simple_real_tpraos_convergence + TestSetup + { setupD + , setupD2 + , setupInitialNonce + , setupK + , setupTestConfig + , setupVersion + } = + countertabulate + "Epoch number of last slot" ( show $ - if 0 >= unNumSlots numSlots then 0 else - (unNumSlots numSlots - 1) `div` unEpochSize epochSize - ) $ - countertabulate "Updating d" - ( if not dShouldUpdate then "No" else - "Yes, " <> show (compare setupD setupD2) - ) $ - counterexample (show setupK) $ - prop_general PropGeneralArgs - { pgaBlockProperty = const $ property True - , pgaCountTxs = fromIntegral . length . 
extractTxs - , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = 0 - , pgaFixedMaxForkLength = Nothing - , pgaFixedSchedule = Nothing - , pgaSecurityParam = setupK - , pgaTestConfig = setupTestConfig - , pgaTestConfigB = testConfigB - } - testOutput .&&. - prop_checkFinalD - where + if 0 >= unNumSlots numSlots + then 0 + else + (unNumSlots numSlots - 1) `div` unEpochSize epochSize + ) + $ countertabulate + "Updating d" + ( if not dShouldUpdate + then "No" + else + "Yes, " <> show (compare setupD setupD2) + ) + $ counterexample (show setupK) + $ prop_general + PropGeneralArgs + { pgaBlockProperty = const $ property True + , pgaCountTxs = fromIntegral . length . extractTxs + , pgaExpectedCannotForge = noExpectedCannotForges + , pgaFirstBlockNo = 0 + , pgaFixedMaxForkLength = Nothing + , pgaFixedSchedule = Nothing + , pgaSecurityParam = setupK + , pgaTestConfig = setupTestConfig + , pgaTestConfigB = testConfigB + } + testOutput + .&&. prop_checkFinalD + where countertabulate :: String -> String -> Property -> Property countertabulate lbl s = - tabulate lbl [s] . counterexample (lbl <> ": " <> s) + tabulate lbl [s] . counterexample (lbl <> ": " <> s) TestConfig { initSeed @@ -197,36 +224,38 @@ prop_simple_real_tpraos_convergence TestSetup } = setupTestConfig testConfigB :: TestConfigB (ShelleyBlock (TPraos MockCrypto) ShelleyEra) - testConfigB = TestConfigB - { forgeEbbEnv = Nothing - , future = singleEraFuture tpraosSlotLength epochSize - , messageDelay = noCalcMessageDelay - , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes - , nodeRestarts = noRestarts - , txGenExtra = ShelleyTxGenExtra - { stgeGenEnv = mkGenEnv inclPPUs coreNodes - , stgeStartAt = - SlotNo $ if includingDUpdateTx then 1 else 0 - -- We don't generate any transactions before the transaction - -- carrying the proposal because they might consume its inputs - -- before it does, thereby rendering it invalid. 
+ testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = singleEraFuture tpraosSlotLength epochSize + , messageDelay = noCalcMessageDelay + , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes + , nodeRestarts = noRestarts + , txGenExtra = + ShelleyTxGenExtra + { stgeGenEnv = mkGenEnv inclPPUs coreNodes + , stgeStartAt = + SlotNo $ if includingDUpdateTx then 1 else 0 + -- We don't generate any transactions before the transaction + -- carrying the proposal because they might consume its inputs + -- before it does, thereby rendering it invalid. + } + , version = setupVersion } - , version = setupVersion - } inclPPUs :: WhetherToGeneratePPUs inclPPUs = - -- We don't generate any other updates, since doing so might - -- accidentally supplant the bespoke update that these tests are - -- expecting. - -- - -- The transaction this test introduces causes all nodes to propose the - -- same parameter update. It'd technically be OK if some nodes then - -- changed their proposal to a different update, as long as at least - -- @Quorum@-many nodes were still proposing this test's original update - -- as of the epoch boundary. However, we keep the test simple and just - -- avoid introducing any other proposals. - if includingDUpdateTx then DoNotGeneratePPUs else DoGeneratePPUs + -- We don't generate any other updates, since doing so might + -- accidentally supplant the bespoke update that these tests are + -- expecting. + -- + -- The transaction this test introduces causes all nodes to propose the + -- same parameter update. It'd technically be OK if some nodes then + -- changed their proposal to a different update, as long as at least + -- @Quorum@-many nodes were still proposing this test's original update + -- as of the epoch boundary. However, we keep the test simple and just + -- avoid introducing any other proposals. + if includingDUpdateTx then DoNotGeneratePPUs else DoGeneratePPUs -- The slot immediately after the end of this test. 
sentinel :: SlotNo @@ -245,8 +274,11 @@ prop_simple_real_tpraos_convergence TestSetup dShouldUpdate = includingDUpdateTx && sentinel >= dUpdatedAsOf testOutput = - runTestNetwork setupTestConfig testConfigB TestConfigMB - { nodeInfo = \(CoreNodeId nid) -> + runTestNetwork + setupTestConfig + testConfigB + TestConfigMB + { nodeInfo = \(CoreNodeId nid) -> let (protocolInfo, blockForging) = mkProtocolShelley genesisConfig @@ -256,26 +288,29 @@ prop_simple_real_tpraos_convergence TestSetup in TestNodeInitialization { tniProtocolInfo = protocolInfo , tniCrucialTxs = - if not includingDUpdateTx then [] else - mkSetDecentralizationParamTxs - coreNodes - nextProtVer - sentinel -- Does not expire during test - setupD2 + if not includingDUpdateTx + then [] + else + mkSetDecentralizationParamTxs + coreNodes + nextProtVer + sentinel -- Does not expire during test + setupD2 , tniBlockForging = blockForging } - , mkRekeyM = Nothing - } + , mkRekeyM = Nothing + } initialKESPeriod :: SL.KESPeriod initialKESPeriod = SL.KESPeriod 0 coreNodes :: [CoreNode MockCrypto] - coreNodes = runGen initSeed $ + coreNodes = + runGen initSeed $ replicateM (fromIntegral n) $ genCoreNode initialKESPeriod - where - NumCoreNodes n = numCoreNodes + where + NumCoreNodes n = numCoreNodes maxLovelaceSupply :: Word64 maxLovelaceSupply = @@ -283,15 +318,15 @@ prop_simple_real_tpraos_convergence TestSetup genesisConfig :: ShelleyGenesis genesisConfig = - mkGenesisConfig - genesisProtVer - setupK - activeSlotCoeff - setupD - maxLovelaceSupply - tpraosSlotLength - (mkKesConfig (Proxy @MockCrypto) numSlots) - coreNodes + mkGenesisConfig + genesisProtVer + setupK + activeSlotCoeff + setupD + maxLovelaceSupply + tpraosSlotLength + (mkKesConfig (Proxy @MockCrypto) numSlots) + coreNodes epochSize :: EpochSize epochSize = sgEpochLength genesisConfig @@ -307,17 +342,18 @@ prop_simple_real_tpraos_convergence TestSetup -- to the 'sentinel' slot? 
prop_checkFinalD :: Property prop_checkFinalD = - conjoin $ + conjoin $ [ let ls = - -- Handle the corner case where the test has enough scheduled - -- slots to reach the epoch transition but the last several - -- slots end up empty. - Shelley.tickedShelleyLedgerState $ + -- Handle the corner case where the test has enough scheduled + -- slots to reach the epoch transition but the last several + -- slots end up empty. + Shelley.tickedShelleyLedgerState $ applyChainTick OmitLedgerEvents ledgerConfig sentinel lsUnticked msg = - "The ticked final ledger state of " <> show nid <> - " has an unexpected value for the d protocol parameter." + "The ticked final ledger state of " + <> show nid + <> " has an unexpected value for the d protocol parameter." -- The actual final value of @d@ actual :: SL.UnitInterval @@ -328,35 +364,35 @@ prop_simple_real_tpraos_convergence TestSetup -- NOTE: Not applicable if 'dWasFreeToVary'. expected :: DecentralizationParam expected = if dShouldUpdate then setupD2 else setupD - in - counterexample ("unticked " <> show lsUnticked) $ - counterexample ("ticked " <> show ls) $ - counterexample ("(d,d2) = " <> show (setupD, setupD2)) $ - counterexample - ( "(dUpdatedAsOf, dShouldUpdate) = " <> - show (dUpdatedAsOf, dShouldUpdate) - ) $ - counterexample msg $ - dWasFreeToVary .||. - SL.unboundRational actual === - decentralizationParamToRational expected + in counterexample ("unticked " <> show lsUnticked) + $ counterexample ("ticked " <> show ls) + $ counterexample ("(d,d2) = " <> show (setupD, setupD2)) + $ counterexample + ( "(dUpdatedAsOf, dShouldUpdate) = " + <> show (dUpdatedAsOf, dShouldUpdate) + ) + $ counterexample msg + $ dWasFreeToVary + .||. SL.unboundRational actual + === decentralizationParamToRational expected | (nid, lsUnticked) <- finalLedgers ] - where - -- If the test setup does not introduce a PPU then the normal Shelley - -- generator might do so, and so we will not know what d to expect at - -- the end. 
- dWasFreeToVary :: Bool - dWasFreeToVary = case inclPPUs of - DoGeneratePPUs -> True - DoNotGeneratePPUs -> False - - finalLedgers :: [(NodeId, LedgerState (ShelleyBlock (TPraos MockCrypto) ShelleyEra) EmptyMK)] - finalLedgers = - Map.toList $ nodeOutputFinalLedger <$> testOutputNodes testOutput - - ledgerConfig :: LedgerConfig (ShelleyBlock (TPraos MockCrypto) ShelleyEra) - ledgerConfig = Shelley.mkShelleyLedgerConfig - genesisConfig - (SL.toFromByronTranslationContext genesisConfig) -- trivial translation context - (fixedEpochInfo epochSize tpraosSlotLength) + where + -- If the test setup does not introduce a PPU then the normal Shelley + -- generator might do so, and so we will not know what d to expect at + -- the end. + dWasFreeToVary :: Bool + dWasFreeToVary = case inclPPUs of + DoGeneratePPUs -> True + DoNotGeneratePPUs -> False + + finalLedgers :: [(NodeId, LedgerState (ShelleyBlock (TPraos MockCrypto) ShelleyEra) EmptyMK)] + finalLedgers = + Map.toList $ nodeOutputFinalLedger <$> testOutputNodes testOutput + + ledgerConfig :: LedgerConfig (ShelleyBlock (TPraos MockCrypto) ShelleyEra) + ledgerConfig = + Shelley.mkShelleyLedgerConfig + genesisConfig + (SL.toFromByronTranslationContext genesisConfig) -- trivial translation context + (fixedEpochInfo epochSize tpraosSlotLength) diff --git a/ouroboros-consensus-cardano/test/tools-test/Main.hs b/ouroboros-consensus-cardano/test/tools-test/Main.hs index ac6b492e9d..763802aa6d 100644 --- a/ouroboros-consensus-cardano/test/tools-test/Main.hs +++ b/ouroboros-consensus-cardano/test/tools-test/Main.hs @@ -1,77 +1,76 @@ module Main (main) where -import qualified Cardano.Tools.DBAnalyser.Block.Cardano as Cardano -import qualified Cardano.Tools.DBAnalyser.Run as DBAnalyser -import Cardano.Tools.DBAnalyser.Types -import qualified Cardano.Tools.DBImmutaliser.Run as DBImmutaliser -import qualified Cardano.Tools.DBSynthesizer.Run as DBSynthesizer -import Cardano.Tools.DBSynthesizer.Types -import Ouroboros.Consensus.Block 
-import Ouroboros.Consensus.Cardano.Block -import qualified Test.Cardano.Tools.Headers -import Test.Tasty -import Test.Tasty.HUnit -import Test.Util.TestEnv - +import Cardano.Tools.DBAnalyser.Block.Cardano qualified as Cardano +import Cardano.Tools.DBAnalyser.Run qualified as DBAnalyser +import Cardano.Tools.DBAnalyser.Types +import Cardano.Tools.DBImmutaliser.Run qualified as DBImmutaliser +import Cardano.Tools.DBSynthesizer.Run qualified as DBSynthesizer +import Cardano.Tools.DBSynthesizer.Types +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Cardano.Block +import Test.Cardano.Tools.Headers qualified +import Test.Tasty +import Test.Tasty.HUnit +import Test.Util.TestEnv nodeConfig, chainDB :: FilePath -nodeConfig = "test/tools-test/disk/config/config.json" -chainDB = "test/tools-test/disk/chaindb" - +nodeConfig = "test/tools-test/disk/config/config.json" +chainDB = "test/tools-test/disk/chaindb" testSynthOptionsCreate :: DBSynthesizerOptions testSynthOptionsCreate = - DBSynthesizerOptions { - synthLimit = ForgeLimitEpoch 1 - , synthOpenMode = OpenCreateForce + DBSynthesizerOptions + { synthLimit = ForgeLimitEpoch 1 + , synthOpenMode = OpenCreateForce } testSynthOptionsAppend :: DBSynthesizerOptions testSynthOptionsAppend = - DBSynthesizerOptions { - synthLimit = ForgeLimitSlot 8192 - , synthOpenMode = OpenAppend + DBSynthesizerOptions + { synthLimit = ForgeLimitSlot 8192 + , synthOpenMode = OpenAppend } testNodeFilePaths :: NodeFilePaths testNodeFilePaths = - NodeFilePaths { - nfpConfig = nodeConfig - , nfpChainDB = chainDB + NodeFilePaths + { nfpConfig = nodeConfig + , nfpChainDB = chainDB } testNodeCredentials :: NodeCredentials testNodeCredentials = - NodeCredentials { - credCertFile = Nothing - , credVRFFile = Nothing - , credKESFile = Nothing - , credBulkFile = Just "test/tools-test/disk/config/bulk-creds-k2.json" + NodeCredentials + { credCertFile = Nothing + , credVRFFile = Nothing + , credKESFile = Nothing + , credBulkFile = Just 
"test/tools-test/disk/config/bulk-creds-k2.json" } testImmutaliserConfig :: DBImmutaliser.Opts testImmutaliserConfig = - DBImmutaliser.Opts { - DBImmutaliser.dbDirs = DBImmutaliser.DBDirs { - DBImmutaliser.immDBDir = chainDB <> "/immutable" - , DBImmutaliser.volDBDir = chainDB <> "/volatile" - } - , DBImmutaliser.configFile = nodeConfig - , DBImmutaliser.verbose = False - , DBImmutaliser.dotOut = Nothing - , DBImmutaliser.dryRun = False - } + DBImmutaliser.Opts + { DBImmutaliser.dbDirs = + DBImmutaliser.DBDirs + { DBImmutaliser.immDBDir = chainDB <> "/immutable" + , DBImmutaliser.volDBDir = chainDB <> "/volatile" + } + , DBImmutaliser.configFile = nodeConfig + , DBImmutaliser.verbose = False + , DBImmutaliser.dotOut = Nothing + , DBImmutaliser.dryRun = False + } testAnalyserConfig :: DBAnalyserConfig testAnalyserConfig = - DBAnalyserConfig { - dbDir = chainDB - , ldbBackend = V2InMem - , verbose = False - , selectDB = SelectImmutableDB Origin - , validation = Just ValidateAllBlocks - , analysis = CountBlocks - , confLimit = Unlimited + DBAnalyserConfig + { dbDir = chainDB + , ldbBackend = V2InMem + , verbose = False + , selectDB = SelectImmutableDB Origin + , validation = Just ValidateAllBlocks + , analysis = CountBlocks + , confLimit = Unlimited } testBlockArgs :: Cardano.Args (CardanoBlock StandardCrypto) @@ -87,40 +86,47 @@ testBlockArgs = Cardano.CardanoBlockArgs nodeConfig Nothing -- blockCountTest :: (String -> IO ()) -> Assertion blockCountTest logStep = do - logStep "running synthesis - create" - (options, protocol) <- either assertFailure pure - =<< DBSynthesizer.initialize - testNodeFilePaths - testNodeCredentials - testSynthOptionsCreate - resultCreate <- DBSynthesizer.synthesize genTxs options protocol - let blockCountCreate = resultForged resultCreate - blockCountCreate > 0 @? 
"no blocks have been forged during create step" - - logStep "running synthesis - append" - resultAppend <- DBSynthesizer.synthesize genTxs options {confOptions = testSynthOptionsAppend} protocol - let blockCountAppend = resultForged resultAppend - blockCountAppend > 0 @? "no blocks have been forged during append step" - - logStep "copy volatile to immutable DB" - DBImmutaliser.run testImmutaliserConfig - - logStep "running analysis" - resultAnalysis <- DBAnalyser.analyse testAnalyserConfig testBlockArgs - - let blockCount = blockCountCreate + blockCountAppend - resultAnalysis == Just (ResultCountBlock blockCount) @? - "wrong number of blocks encountered during analysis \ - \ (counted: " ++ show resultAnalysis ++ "; expected: " ++ show blockCount ++ ")" - where - genTxs _ _ _ _ = pure [] + logStep "running synthesis - create" + (options, protocol) <- + either assertFailure pure + =<< DBSynthesizer.initialize + testNodeFilePaths + testNodeCredentials + testSynthOptionsCreate + resultCreate <- DBSynthesizer.synthesize genTxs options protocol + let blockCountCreate = resultForged resultCreate + blockCountCreate > 0 @? "no blocks have been forged during create step" + + logStep "running synthesis - append" + resultAppend <- + DBSynthesizer.synthesize genTxs options{confOptions = testSynthOptionsAppend} protocol + let blockCountAppend = resultForged resultAppend + blockCountAppend > 0 @? "no blocks have been forged during append step" + + logStep "copy volatile to immutable DB" + DBImmutaliser.run testImmutaliserConfig + + logStep "running analysis" + resultAnalysis <- DBAnalyser.analyse testAnalyserConfig testBlockArgs + + let blockCount = blockCountCreate + blockCountAppend + resultAnalysis == Just (ResultCountBlock blockCount) + @? 
"wrong number of blocks encountered during analysis \ + \ (counted: " + ++ show resultAnalysis + ++ "; expected: " + ++ show blockCount + ++ ")" + where + genTxs _ _ _ _ = pure [] tests :: TestTree tests = - testGroup "cardano-tools" - [ testCaseSteps "synthesize and analyse: blockCount\n" blockCountTest - , Test.Cardano.Tools.Headers.tests - ] + testGroup + "cardano-tools" + [ testCaseSteps "synthesize and analyse: blockCount\n" blockCountTest + , Test.Cardano.Tools.Headers.tests + ] main :: IO () main = defaultMainWithTestEnv defaultTestEnvConfig tests diff --git a/ouroboros-consensus-cardano/test/tools-test/Test/Cardano/Tools/Headers.hs b/ouroboros-consensus-cardano/test/tools-test/Test/Cardano/Tools/Headers.hs index d259f30c28..ff191d435c 100644 --- a/ouroboros-consensus-cardano/test/tools-test/Test/Cardano/Tools/Headers.hs +++ b/ouroboros-consensus-cardano/test/tools-test/Test/Cardano/Tools/Headers.hs @@ -1,50 +1,60 @@ module Test.Cardano.Tools.Headers (tests) where -import Cardano.Tools.Headers (ValidationResult (..), validate) -import qualified Data.Aeson as Json -import Data.Function ((&)) -import qualified Data.Text.Lazy as LT -import Data.Text.Lazy.Encoding (decodeUtf8) -import Test.Ouroboros.Consensus.Protocol.Praos.Header (genContext, - genMutatedHeader, genSample) -import Test.QuickCheck (Property, counterexample, forAll, forAllBlind, - label, property, (===)) -import Test.Tasty (TestTree, testGroup) -import Test.Tasty.QuickCheck (testProperty) +import Cardano.Tools.Headers (ValidationResult (..), validate) +import Data.Aeson qualified as Json +import Data.Function ((&)) +import Data.Text.Lazy qualified as LT +import Data.Text.Lazy.Encoding (decodeUtf8) +import Test.Ouroboros.Consensus.Protocol.Praos.Header + ( genContext + , genMutatedHeader + , genSample + ) +import Test.QuickCheck + ( Property + , counterexample + , forAll + , forAllBlind + , label + , property + , (===) + ) +import Test.Tasty (TestTree, testGroup) +import Test.Tasty.QuickCheck 
(testProperty) tests :: TestTree tests = - testGroup - "HeaderValidation" - [ testProperty "roundtrip To/FromJSON samples" prop_roundtrip_json_samples - , testProperty "validate legit header" prop_validate_legit_header - ] + testGroup + "HeaderValidation" + [ testProperty "roundtrip To/FromJSON samples" prop_roundtrip_json_samples + , testProperty "validate legit header" prop_validate_legit_header + ] prop_roundtrip_json_samples :: Property prop_roundtrip_json_samples = - forAll genSample $ \sample -> - let encoded = Json.encode sample - decoded = Json.eitherDecode encoded - in decoded === Right sample + forAll genSample $ \sample -> + let encoded = Json.encode sample + decoded = Json.eitherDecode encoded + in decoded === Right sample prop_validate_legit_header :: Property prop_validate_legit_header = - forAllBlind genContext $ \context -> - forAllBlind (genMutatedHeader context) $ \(context', header) -> - annotate context' header $ - case validate context' header of - Valid mut -> property True & label (show mut) - Invalid mut err -> property False & counterexample ("Expected: " <> show mut <> "\nError: " <> err) - where - annotate context header = - counterexample - ( unlines $ - [ "context:" - , asJson context - , "header:" - , show header - ] - ) + forAllBlind genContext $ \context -> + forAllBlind (genMutatedHeader context) $ \(context', header) -> + annotate context' header $ + case validate context' header of + Valid mut -> property True & label (show mut) + Invalid mut err -> property False & counterexample ("Expected: " <> show mut <> "\nError: " <> err) + where + annotate context header = + counterexample + ( unlines $ + [ "context:" + , asJson context + , "header:" + , show header + ] + ) - asJson :: (Json.ToJSON a) => a -> String - asJson = LT.unpack . decodeUtf8 . Json.encode + asJson :: Json.ToJSON a => a -> String + asJson = LT.unpack . decodeUtf8 . 
Json.encode diff --git a/ouroboros-consensus-diffusion/ouroboros-consensus-diffusion.cabal b/ouroboros-consensus-diffusion/ouroboros-consensus-diffusion.cabal index 23727ae9b9..1af5daf4f0 100644 --- a/ouroboros-consensus-diffusion/ouroboros-consensus-diffusion.cabal +++ b/ouroboros-consensus-diffusion/ouroboros-consensus-diffusion.cabal @@ -30,6 +30,7 @@ flag asserts common common-lib default-language: Haskell2010 + default-extensions: ImportQualifiedPost ghc-options: -Wall -Wcompat @@ -41,6 +42,7 @@ common common-lib -Wmissing-export-lists -Wunused-packages -Wno-unticked-promoted-constructors + -Wprepositive-qualified-module if flag(asserts) ghc-options: -fno-ignore-asserts diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Network/NodeToClient.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Network/NodeToClient.hs index b0a47738e7..70c6c96721 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Network/NodeToClient.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Network/NodeToClient.hs @@ -10,10 +10,11 @@ {-# LANGUAGE ScopedTypeVariables #-} -- | Intended for qualified import -module Ouroboros.Consensus.Network.NodeToClient ( - -- * Handlers +module Ouroboros.Consensus.Network.NodeToClient + ( -- * Handlers Handlers (..) , mkHandlers + -- * Codecs , ClientCodecs , Codecs @@ -22,143 +23,153 @@ module Ouroboros.Consensus.Network.NodeToClient ( , clientCodecs , defaultCodecs , identityCodecs + -- * ClientCodecs + -- * Tracers , Tracers , Tracers' (..) , nullTracers , showTracers + -- * Applications , App , Apps (..) 
, mkApps + -- ** Projections , responder ) where -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding) -import Codec.CBOR.Read (DeserialiseFailure) -import Codec.Serialise (Serialise) -import Control.ResourceRegistry -import Control.Tracer -import Data.ByteString.Lazy (ByteString) -import Data.Typeable -import Data.Void (Void) -import qualified Network.Mux as Mux -import Network.TypedProtocol.Codec -import qualified Network.TypedProtocol.Stateful.Codec as Stateful -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.MiniProtocol.ChainSync.Server -import Ouroboros.Consensus.MiniProtocol.LocalStateQuery.Server -import Ouroboros.Consensus.MiniProtocol.LocalTxMonitor.Server -import Ouroboros.Consensus.MiniProtocol.LocalTxSubmission.Server -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import qualified Ouroboros.Consensus.Node.Tracers as Node -import Ouroboros.Consensus.NodeKernel -import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB -import Ouroboros.Consensus.Util (ShowProxy) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Network.Block (Serialised, decodePoint, decodeTip, - encodePoint, encodeTip) -import Ouroboros.Network.BlockFetch -import Ouroboros.Network.Channel -import Ouroboros.Network.Context -import Ouroboros.Network.Driver -import qualified Ouroboros.Network.Driver.Stateful as Stateful -import Ouroboros.Network.Mux -import Ouroboros.Network.NodeToClient hiding - (NodeToClientVersion (..)) -import qualified Ouroboros.Network.NodeToClient as N (NodeToClientVersion (..), - NodeToClientVersionData) -import Ouroboros.Network.Protocol.ChainSync.Codec -import 
Ouroboros.Network.Protocol.ChainSync.Server -import Ouroboros.Network.Protocol.ChainSync.Type -import Ouroboros.Network.Protocol.LocalStateQuery.Codec -import Ouroboros.Network.Protocol.LocalStateQuery.Server -import Ouroboros.Network.Protocol.LocalStateQuery.Type as LocalStateQuery -import Ouroboros.Network.Protocol.LocalTxMonitor.Codec -import Ouroboros.Network.Protocol.LocalTxMonitor.Server -import Ouroboros.Network.Protocol.LocalTxMonitor.Type -import Ouroboros.Network.Protocol.LocalTxSubmission.Codec -import Ouroboros.Network.Protocol.LocalTxSubmission.Server -import Ouroboros.Network.Protocol.LocalTxSubmission.Type +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Read (DeserialiseFailure) +import Codec.Serialise (Serialise) +import Control.ResourceRegistry +import Control.Tracer +import Data.ByteString.Lazy (ByteString) +import Data.Typeable +import Data.Void (Void) +import Network.Mux qualified as Mux +import Network.TypedProtocol.Codec +import Network.TypedProtocol.Stateful.Codec qualified as Stateful +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.MiniProtocol.ChainSync.Server +import Ouroboros.Consensus.MiniProtocol.LocalStateQuery.Server +import Ouroboros.Consensus.MiniProtocol.LocalTxMonitor.Server +import Ouroboros.Consensus.MiniProtocol.LocalTxSubmission.Server +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Node.Tracers qualified as Node +import Ouroboros.Consensus.NodeKernel +import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB +import Ouroboros.Consensus.Util (ShowProxy) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Orphans () +import 
Ouroboros.Network.Block + ( Serialised + , decodePoint + , decodeTip + , encodePoint + , encodeTip + ) +import Ouroboros.Network.BlockFetch +import Ouroboros.Network.Channel +import Ouroboros.Network.Context +import Ouroboros.Network.Driver +import Ouroboros.Network.Driver.Stateful qualified as Stateful +import Ouroboros.Network.Mux +import Ouroboros.Network.NodeToClient hiding + ( NodeToClientVersion (..) + ) +import Ouroboros.Network.NodeToClient qualified as N + ( NodeToClientVersion (..) + , NodeToClientVersionData + ) +import Ouroboros.Network.Protocol.ChainSync.Codec +import Ouroboros.Network.Protocol.ChainSync.Server +import Ouroboros.Network.Protocol.ChainSync.Type +import Ouroboros.Network.Protocol.LocalStateQuery.Codec +import Ouroboros.Network.Protocol.LocalStateQuery.Server +import Ouroboros.Network.Protocol.LocalStateQuery.Type as LocalStateQuery +import Ouroboros.Network.Protocol.LocalTxMonitor.Codec +import Ouroboros.Network.Protocol.LocalTxMonitor.Server +import Ouroboros.Network.Protocol.LocalTxMonitor.Type +import Ouroboros.Network.Protocol.LocalTxSubmission.Codec +import Ouroboros.Network.Protocol.LocalTxSubmission.Server +import Ouroboros.Network.Protocol.LocalTxSubmission.Type {------------------------------------------------------------------------------- Handlers -------------------------------------------------------------------------------} -- | Protocol handlers for node-to-client (local) communication -data Handlers m peer blk = Handlers { - hChainSyncServer - :: ChainDB.Follower m blk (ChainDB.WithPoint blk (Serialised blk)) - -> ChainSyncServer (Serialised blk) (Point blk) (Tip blk) m () - - , hTxSubmissionServer - :: LocalTxSubmissionServer (GenTx blk) (ApplyTxErr blk) m () - - , hStateQueryServer - :: ResourceRegistry m - -> LocalStateQueryServer blk (Point blk) (Query blk) m () - - , hTxMonitorServer - :: LocalTxMonitorServer (GenTxId blk) (GenTx blk) SlotNo m () - } +data Handlers m peer blk = Handlers + { hChainSyncServer :: + 
ChainDB.Follower m blk (ChainDB.WithPoint blk (Serialised blk)) -> + ChainSyncServer (Serialised blk) (Point blk) (Tip blk) m () + , hTxSubmissionServer :: + LocalTxSubmissionServer (GenTx blk) (ApplyTxErr blk) m () + , hStateQueryServer :: + ResourceRegistry m -> + LocalStateQueryServer blk (Point blk) (Query blk) m () + , hTxMonitorServer :: + LocalTxMonitorServer (GenTxId blk) (GenTx blk) SlotNo m () + } mkHandlers :: - forall m blk addrNTN addrNTC. - ( IOLike m - , LedgerSupportsMempool blk - , LedgerSupportsProtocol blk - , BlockSupportsLedgerQuery blk - , ConfigSupportsNode blk - ) - => NodeKernelArgs m addrNTN addrNTC blk - -> NodeKernel m addrNTN addrNTC blk - -> Handlers m addrNTC blk -mkHandlers NodeKernelArgs {cfg, tracers} NodeKernel {getChainDB, getMempool} = - Handlers { - hChainSyncServer = - chainSyncBlocksServer - (Node.chainSyncServerBlockTracer tracers) - getChainDB - , hTxSubmissionServer = - localTxSubmissionServer - (Node.localTxSubmissionServerTracer tracers) - getMempool - , hStateQueryServer = - localStateQueryServer (ExtLedgerCfg cfg) - . ChainDB.getReadOnlyForkerAtPoint getChainDB - , hTxMonitorServer = - localTxMonitorServer - getMempool - } + forall m blk addrNTN addrNTC. + ( IOLike m + , LedgerSupportsMempool blk + , LedgerSupportsProtocol blk + , BlockSupportsLedgerQuery blk + , ConfigSupportsNode blk + ) => + NodeKernelArgs m addrNTN addrNTC blk -> + NodeKernel m addrNTN addrNTC blk -> + Handlers m addrNTC blk +mkHandlers NodeKernelArgs{cfg, tracers} NodeKernel{getChainDB, getMempool} = + Handlers + { hChainSyncServer = + chainSyncBlocksServer + (Node.chainSyncServerBlockTracer tracers) + getChainDB + , hTxSubmissionServer = + localTxSubmissionServer + (Node.localTxSubmissionServerTracer tracers) + getMempool + , hStateQueryServer = + localStateQueryServer (ExtLedgerCfg cfg) + . 
ChainDB.getReadOnlyForkerAtPoint getChainDB + , hTxMonitorServer = + localTxMonitorServer + getMempool + } {------------------------------------------------------------------------------- Codecs -------------------------------------------------------------------------------} -- | Node-to-client protocol codecs needed to run 'Handlers'. -data Codecs' blk serialisedBlk e m bCS bTX bSQ bTM = Codecs { - cChainSyncCodec :: Codec (ChainSync serialisedBlk (Point blk) (Tip blk)) e m bCS - , cTxSubmissionCodec :: Codec (LocalTxSubmission (GenTx blk) (ApplyTxErr blk)) e m bTX - , cStateQueryCodec :: Stateful.Codec (LocalStateQuery blk (Point blk) (Query blk)) e LocalStateQuery.State m bSQ - , cTxMonitorCodec :: Codec (LocalTxMonitor (GenTxId blk) (GenTx blk) SlotNo) e m bTM - } +data Codecs' blk serialisedBlk e m bCS bTX bSQ bTM = Codecs + { cChainSyncCodec :: Codec (ChainSync serialisedBlk (Point blk) (Tip blk)) e m bCS + , cTxSubmissionCodec :: Codec (LocalTxSubmission (GenTx blk) (ApplyTxErr blk)) e m bTX + , cStateQueryCodec :: + Stateful.Codec (LocalStateQuery blk (Point blk) (Query blk)) e LocalStateQuery.State m bSQ + , cTxMonitorCodec :: Codec (LocalTxMonitor (GenTxId blk) (GenTx blk) SlotNo) e m bTM + } type Codecs blk e m bCS bTX bSQ bTM = - Codecs' blk (Serialised blk) e m bCS bTX bSQ bTM + Codecs' blk (Serialised blk) e m bCS bTX bSQ bTM type DefaultCodecs blk m = - Codecs' blk (Serialised blk) DeserialiseFailure m ByteString ByteString ByteString ByteString -type ClientCodecs blk m = - Codecs' blk blk DeserialiseFailure m ByteString ByteString ByteString ByteString + Codecs' blk (Serialised blk) DeserialiseFailure m ByteString ByteString ByteString ByteString +type ClientCodecs blk m = + Codecs' blk blk DeserialiseFailure m ByteString ByteString ByteString ByteString -- | Protocol codecs for the node-to-client protocols -- @@ -176,35 +187,35 @@ type ClientCodecs blk m = -- Implementation mode: currently none of the consensus encoders/decoders do -- anything 
different based on the version, so @_version@ is unused; it's just -- that not all codecs are used, depending on the version number. -defaultCodecs :: forall m blk. - ( MonadST m - , SerialiseNodeToClientConstraints blk - , BlockSupportsLedgerQuery blk - , Show (BlockNodeToClientVersion blk) - , StandardHash blk - , Serialise (HeaderHash blk) - ) - => CodecConfig blk - -> BlockNodeToClientVersion blk - -> N.NodeToClientVersion - -> DefaultCodecs blk m -defaultCodecs ccfg version networkVersion = Codecs { - cChainSyncCodec = +defaultCodecs :: + forall m blk. + ( MonadST m + , SerialiseNodeToClientConstraints blk + , BlockSupportsLedgerQuery blk + , Show (BlockNodeToClientVersion blk) + , StandardHash blk + , Serialise (HeaderHash blk) + ) => + CodecConfig blk -> + BlockNodeToClientVersion blk -> + N.NodeToClientVersion -> + DefaultCodecs blk m +defaultCodecs ccfg version networkVersion = + Codecs + { cChainSyncCodec = codecChainSync enc dec (encodePoint (encodeRawHash p)) (decodePoint (decodeRawHash p)) - (encodeTip (encodeRawHash p)) - (decodeTip (decodeRawHash p)) - + (encodeTip (encodeRawHash p)) + (decodeTip (decodeRawHash p)) , cTxSubmissionCodec = codecLocalTxSubmission enc dec enc dec - , cStateQueryCodec = codecLocalStateQuery networkVersion @@ -214,59 +225,60 @@ defaultCodecs ccfg version networkVersion = Codecs { ((\(SomeSecond q) -> Some q) <$> queryDecodeNodeToClient ccfg queryVersion version) (encodeResult ccfg version) (decodeResult ccfg version) - , cTxMonitorCodec = codecLocalTxMonitor networkVersion - enc dec - enc dec - enc dec + enc + dec + enc + dec + enc + dec } - where - queryVersion :: QueryVersion - queryVersion = nodeToClientVersionToQueryVersion networkVersion + where + queryVersion :: QueryVersion + queryVersion = nodeToClientVersionToQueryVersion networkVersion - p :: Proxy blk - p = Proxy + p :: Proxy blk + p = Proxy - enc :: SerialiseNodeToClient blk a => a -> Encoding - enc = encodeNodeToClient ccfg version + enc :: 
SerialiseNodeToClient blk a => a -> Encoding + enc = encodeNodeToClient ccfg version - dec :: SerialiseNodeToClient blk a => forall s. Decoder s a - dec = decodeNodeToClient ccfg version + dec :: SerialiseNodeToClient blk a => forall s. Decoder s a + dec = decodeNodeToClient ccfg version -- | Protocol codecs for the node-to-client protocols which serialise -- / deserialise blocks in /chain-sync/ protocol. --- -clientCodecs :: forall m blk. - ( MonadST m - , SerialiseNodeToClientConstraints blk - , BlockSupportsLedgerQuery blk - , Show (BlockNodeToClientVersion blk) - , StandardHash blk - , Serialise (HeaderHash blk) - ) - => CodecConfig blk - -> BlockNodeToClientVersion blk - -> N.NodeToClientVersion - -> ClientCodecs blk m -clientCodecs ccfg version networkVersion = Codecs { - cChainSyncCodec = +clientCodecs :: + forall m blk. + ( MonadST m + , SerialiseNodeToClientConstraints blk + , BlockSupportsLedgerQuery blk + , Show (BlockNodeToClientVersion blk) + , StandardHash blk + , Serialise (HeaderHash blk) + ) => + CodecConfig blk -> + BlockNodeToClientVersion blk -> + N.NodeToClientVersion -> + ClientCodecs blk m +clientCodecs ccfg version networkVersion = + Codecs + { cChainSyncCodec = codecChainSync enc dec (encodePoint (encodeRawHash p)) (decodePoint (decodeRawHash p)) - (encodeTip (encodeRawHash p)) - (decodeTip (decodeRawHash p)) - + (encodeTip (encodeRawHash p)) + (decodeTip (decodeRawHash p)) , cTxSubmissionCodec = codecLocalTxSubmission enc dec enc dec - , cStateQueryCodec = codecLocalStateQuery networkVersion @@ -276,39 +288,46 @@ clientCodecs ccfg version networkVersion = Codecs { ((\(SomeSecond q) -> Some q) <$> queryDecodeNodeToClient ccfg queryVersion version) (encodeResult ccfg version) (decodeResult ccfg version) - , cTxMonitorCodec = codecLocalTxMonitor networkVersion - enc dec - enc dec - enc dec + enc + dec + enc + dec + enc + dec } - where - queryVersion :: QueryVersion - queryVersion = nodeToClientVersionToQueryVersion networkVersion + where + 
queryVersion :: QueryVersion + queryVersion = nodeToClientVersionToQueryVersion networkVersion - p :: Proxy blk - p = Proxy + p :: Proxy blk + p = Proxy - enc :: SerialiseNodeToClient blk a => a -> Encoding - enc = encodeNodeToClient ccfg version + enc :: SerialiseNodeToClient blk a => a -> Encoding + enc = encodeNodeToClient ccfg version - dec :: SerialiseNodeToClient blk a => forall s. Decoder s a - dec = decodeNodeToClient ccfg version + dec :: SerialiseNodeToClient blk a => forall s. Decoder s a + dec = decodeNodeToClient ccfg version -- | Identity codecs used in tests. -identityCodecs :: (Monad m, BlockSupportsLedgerQuery blk) - => Codecs blk CodecFailure m - (AnyMessage (ChainSync (Serialised blk) (Point blk) (Tip blk))) - (AnyMessage (LocalTxSubmission (GenTx blk) (ApplyTxErr blk))) - (Stateful.AnyMessage (LocalStateQuery blk (Point blk) (Query blk)) LocalStateQuery.State) - (AnyMessage (LocalTxMonitor (GenTxId blk) (GenTx blk) SlotNo)) -identityCodecs = Codecs { - cChainSyncCodec = codecChainSyncId +identityCodecs :: + (Monad m, BlockSupportsLedgerQuery blk) => + Codecs + blk + CodecFailure + m + (AnyMessage (ChainSync (Serialised blk) (Point blk) (Tip blk))) + (AnyMessage (LocalTxSubmission (GenTx blk) (ApplyTxErr blk))) + (Stateful.AnyMessage (LocalStateQuery blk (Point blk) (Query blk)) LocalStateQuery.State) + (AnyMessage (LocalTxMonitor (GenTxId blk) (GenTx blk) SlotNo)) +identityCodecs = + Codecs + { cChainSyncCodec = codecChainSyncId , cTxSubmissionCodec = codecLocalTxSubmissionId - , cStateQueryCodec = codecLocalStateQueryId sameDepIndex - , cTxMonitorCodec = codecLocalTxMonitorId + , cStateQueryCodec = codecLocalStateQueryId sameDepIndex + , cTxMonitorCodec = codecLocalTxMonitorId } {------------------------------------------------------------------------------- @@ -317,50 +336,64 @@ identityCodecs = Codecs { -- | A record of 'Tracer's for the different protocols. 
type Tracers m peer blk e = - Tracers' peer blk e (Tracer m) - -data Tracers' peer blk e f = Tracers { - tChainSyncTracer :: f (TraceLabelPeer peer (TraceSendRecv (ChainSync (Serialised blk) (Point blk) (Tip blk)))) - , tTxSubmissionTracer :: f (TraceLabelPeer peer (TraceSendRecv (LocalTxSubmission (GenTx blk) (ApplyTxErr blk)))) - , tStateQueryTracer :: f (TraceLabelPeer peer (Stateful.TraceSendRecv (LocalStateQuery blk (Point blk) (Query blk)) LocalStateQuery.State)) - , tTxMonitorTracer :: f (TraceLabelPeer peer (TraceSendRecv (LocalTxMonitor (GenTxId blk) (GenTx blk) SlotNo))) - } + Tracers' peer blk e (Tracer m) + +data Tracers' peer blk e f = Tracers + { tChainSyncTracer :: + f (TraceLabelPeer peer (TraceSendRecv (ChainSync (Serialised blk) (Point blk) (Tip blk)))) + , tTxSubmissionTracer :: + f (TraceLabelPeer peer (TraceSendRecv (LocalTxSubmission (GenTx blk) (ApplyTxErr blk)))) + , tStateQueryTracer :: + f + ( TraceLabelPeer + peer + (Stateful.TraceSendRecv (LocalStateQuery blk (Point blk) (Query blk)) LocalStateQuery.State) + ) + , tTxMonitorTracer :: + f (TraceLabelPeer peer (TraceSendRecv (LocalTxMonitor (GenTxId blk) (GenTx blk) SlotNo))) + } instance (forall a. Semigroup (f a)) => Semigroup (Tracers' peer blk e f) where - l <> r = Tracers { - tChainSyncTracer = f tChainSyncTracer + l <> r = + Tracers + { tChainSyncTracer = f tChainSyncTracer , tTxSubmissionTracer = f tTxSubmissionTracer - , tStateQueryTracer = f tStateQueryTracer - , tTxMonitorTracer = f tTxMonitorTracer + , tStateQueryTracer = f tStateQueryTracer + , tTxMonitorTracer = f tTxMonitorTracer } - where - f :: forall a. Semigroup a - => (Tracers' peer blk e f -> a) - -> a - f prj = prj l <> prj r + where + f :: + forall a. + Semigroup a => + (Tracers' peer blk e f -> a) -> + a + f prj = prj l <> prj r -- | Use a 'nullTracer' for each protocol. 
nullTracers :: Monad m => Tracers m peer blk e -nullTracers = Tracers { - tChainSyncTracer = nullTracer +nullTracers = + Tracers + { tChainSyncTracer = nullTracer , tTxSubmissionTracer = nullTracer - , tStateQueryTracer = nullTracer - , tTxMonitorTracer = nullTracer + , tStateQueryTracer = nullTracer + , tTxMonitorTracer = nullTracer } -showTracers :: ( Show peer - , Show (GenTx blk) - , Show (GenTxId blk) - , Show (ApplyTxErr blk) - , forall fp. ShowQuery (BlockQuery blk fp) - , HasHeader blk - ) - => Tracer m String -> Tracers m peer blk e -showTracers tr = Tracers { - tChainSyncTracer = showTracing tr +showTracers :: + ( Show peer + , Show (GenTx blk) + , Show (GenTxId blk) + , Show (ApplyTxErr blk) + , forall fp. ShowQuery (BlockQuery blk fp) + , HasHeader blk + ) => + Tracer m String -> Tracers m peer blk e +showTracers tr = + Tracers + { tChainSyncTracer = showTracing tr , tTxSubmissionTracer = showTracing tr - , tStateQueryTracer = showTracing tr - , tTxMonitorTracer = showTracing tr + , tStateQueryTracer = showTracing tr + , tTxMonitorTracer = showTracing tr } {------------------------------------------------------------------------------- @@ -373,94 +406,91 @@ type App m peer bytes a = peer -> Channel m bytes -> m (a, Maybe bytes) -- | Applications for the node-to-client (i.e., local) protocols -- -- See 'Network.Mux.Types.MuxApplication' -data Apps m peer bCS bTX bSQ bTM a = Apps { - -- | Start a local chain sync server. - aChainSyncServer :: App m peer bCS a - - -- | Start a local transaction submission server. - , aTxSubmissionServer :: App m peer bTX a - - -- | Start a local state query server. - , aStateQueryServer :: App m peer bSQ a - - -- | Start a local transaction monitor server - , aTxMonitorServer :: App m peer bTM a - } +data Apps m peer bCS bTX bSQ bTM a = Apps + { aChainSyncServer :: App m peer bCS a + -- ^ Start a local chain sync server. + , aTxSubmissionServer :: App m peer bTX a + -- ^ Start a local transaction submission server. 
+ , aStateQueryServer :: App m peer bSQ a + -- ^ Start a local state query server. + , aTxMonitorServer :: App m peer bTM a + -- ^ Start a local transaction monitor server + } -- | Construct the 'NetworkApplication' for the node-to-client protocols mkApps :: - forall m addrNTN addrNTC blk e bCS bTX bSQ bTM. - ( IOLike m - , Exception e - , ShowProxy blk - , ShowProxy (ApplyTxErr blk) - , ShowProxy (GenTx blk) - , ShowProxy (GenTxId blk) - , ShowProxy (Query blk) - , forall fp. ShowQuery (BlockQuery blk fp) - ) - => NodeKernel m addrNTN addrNTC blk - -> Tracers m addrNTC blk e - -> Codecs blk e m bCS bTX bSQ bTM - -> Handlers m addrNTC blk - -> Apps m addrNTC bCS bTX bSQ bTM () -mkApps kernel Tracers {..} Codecs {..} Handlers {..} = - Apps {..} - where - aChainSyncServer - :: addrNTC - -> Channel m bCS - -> m ((), Maybe bCS) - aChainSyncServer them channel = do - labelThisThread "LocalChainSyncServer" - bracketWithPrivateRegistry - (chainSyncBlockServerFollower (getChainDB kernel)) - ChainDB.followerClose - $ \flr -> - runPeer - (contramap (TraceLabelPeer them) tChainSyncTracer) - cChainSyncCodec - channel - $ chainSyncServerPeer - $ hChainSyncServer flr - - aTxSubmissionServer - :: addrNTC - -> Channel m bTX - -> m ((), Maybe bTX) - aTxSubmissionServer them channel = do - labelThisThread "LocalTxSubmissionServer" - runPeer - (contramap (TraceLabelPeer them) tTxSubmissionTracer) - cTxSubmissionCodec - channel - (localTxSubmissionServerPeer (pure hTxSubmissionServer)) - - aStateQueryServer - :: addrNTC - -> Channel m bSQ - -> m ((), Maybe bSQ) - aStateQueryServer them channel = do - labelThisThread "LocalStateQueryServer" - withRegistry $ \rr -> - Stateful.runPeer - (contramap (TraceLabelPeer them) tStateQueryTracer) - cStateQueryCodec + forall m addrNTN addrNTC blk e bCS bTX bSQ bTM. + ( IOLike m + , Exception e + , ShowProxy blk + , ShowProxy (ApplyTxErr blk) + , ShowProxy (GenTx blk) + , ShowProxy (GenTxId blk) + , ShowProxy (Query blk) + , forall fp. 
ShowQuery (BlockQuery blk fp) + ) => + NodeKernel m addrNTN addrNTC blk -> + Tracers m addrNTC blk e -> + Codecs blk e m bCS bTX bSQ bTM -> + Handlers m addrNTC blk -> + Apps m addrNTC bCS bTX bSQ bTM () +mkApps kernel Tracers{..} Codecs{..} Handlers{..} = + Apps{..} + where + aChainSyncServer :: + addrNTC -> + Channel m bCS -> + m ((), Maybe bCS) + aChainSyncServer them channel = do + labelThisThread "LocalChainSyncServer" + bracketWithPrivateRegistry + (chainSyncBlockServerFollower (getChainDB kernel)) + ChainDB.followerClose + $ \flr -> + runPeer + (contramap (TraceLabelPeer them) tChainSyncTracer) + cChainSyncCodec channel - LocalStateQuery.StateIdle - (localStateQueryServerPeer (hStateQueryServer rr)) - - aTxMonitorServer - :: addrNTC - -> Channel m bTM - -> m ((), Maybe bTM) - aTxMonitorServer them channel = do - labelThisThread "LocalTxMonitorServer" - runPeer - (contramap (TraceLabelPeer them) tTxMonitorTracer) - cTxMonitorCodec + $ chainSyncServerPeer + $ hChainSyncServer flr + + aTxSubmissionServer :: + addrNTC -> + Channel m bTX -> + m ((), Maybe bTX) + aTxSubmissionServer them channel = do + labelThisThread "LocalTxSubmissionServer" + runPeer + (contramap (TraceLabelPeer them) tTxSubmissionTracer) + cTxSubmissionCodec + channel + (localTxSubmissionServerPeer (pure hTxSubmissionServer)) + + aStateQueryServer :: + addrNTC -> + Channel m bSQ -> + m ((), Maybe bSQ) + aStateQueryServer them channel = do + labelThisThread "LocalStateQueryServer" + withRegistry $ \rr -> + Stateful.runPeer + (contramap (TraceLabelPeer them) tStateQueryTracer) + cStateQueryCodec channel - (localTxMonitorServerPeer hTxMonitorServer) + LocalStateQuery.StateIdle + (localStateQueryServerPeer (hStateQueryServer rr)) + + aTxMonitorServer :: + addrNTC -> + Channel m bTM -> + m ((), Maybe bTM) + aTxMonitorServer them channel = do + labelThisThread "LocalTxMonitorServer" + runPeer + (contramap (TraceLabelPeer them) tTxMonitorTracer) + cTxMonitorCodec + channel + (localTxMonitorServerPeer 
hTxMonitorServer) {------------------------------------------------------------------------------- Projections from 'Apps' @@ -469,25 +499,26 @@ mkApps kernel Tracers {..} Codecs {..} Handlers {..} = -- | A projection from 'NetworkApplication' to a server-side -- 'OuroborosApplication' for the node-to-client protocols. responder :: - N.NodeToClientVersion - -> N.NodeToClientVersionData - -> Apps m (ConnectionId peer) b b b b a - -> OuroborosApplicationWithMinimalCtx 'Mux.ResponderMode peer b m Void a -responder version versionData Apps {..} = - nodeToClientProtocols - (NodeToClientProtocols { - localChainSyncProtocol = + N.NodeToClientVersion -> + N.NodeToClientVersionData -> + Apps m (ConnectionId peer) b b b b a -> + OuroborosApplicationWithMinimalCtx 'Mux.ResponderMode peer b m Void a +responder version versionData Apps{..} = + nodeToClientProtocols + ( NodeToClientProtocols + { localChainSyncProtocol = ResponderProtocolOnly $ MiniProtocolCb $ \ctx -> - aChainSyncServer (rcConnectionId ctx), - localTxSubmissionProtocol = + aChainSyncServer (rcConnectionId ctx) + , localTxSubmissionProtocol = ResponderProtocolOnly $ MiniProtocolCb $ \ctx -> - aTxSubmissionServer (rcConnectionId ctx), - localStateQueryProtocol = + aTxSubmissionServer (rcConnectionId ctx) + , localStateQueryProtocol = ResponderProtocolOnly $ MiniProtocolCb $ \ctx -> - aStateQueryServer (rcConnectionId ctx), - localTxMonitorProtocol = + aStateQueryServer (rcConnectionId ctx) + , localTxMonitorProtocol = ResponderProtocolOnly $ MiniProtocolCb $ \ctx -> aTxMonitorServer (rcConnectionId ctx) - }) - version - versionData + } + ) + version + versionData diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Network/NodeToNode.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Network/NodeToNode.hs index 03a006ea88..ef6852e3f2 100644 --- 
a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Network/NodeToNode.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Network/NodeToNode.hs @@ -7,227 +7,265 @@ {-# LANGUAGE ScopedTypeVariables #-} -- | Intended for qualified import -module Ouroboros.Consensus.Network.NodeToNode ( - -- * Handlers +module Ouroboros.Consensus.Network.NodeToNode + ( -- * Handlers Handlers (..) , mkHandlers + -- * Codecs , Codecs (..) , defaultCodecs , identityCodecs + -- * Byte Limits , ByteLimits , byteLimits , noByteLimits + -- * Tracers , Tracers , Tracers' (..) , nullTracers , showTracers + -- * Applications , Apps (..) , ClientApp , ServerApp , mkApps + -- ** Projections , initiator , initiatorAndResponder + -- * Re-exports , ChainSyncTimeout (..) ) where -import Codec.CBOR.Decoding (Decoder) -import qualified Codec.CBOR.Decoding as CBOR -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.CBOR.Encoding as CBOR -import Codec.CBOR.Read (DeserialiseFailure) -import qualified Control.Concurrent.Class.MonadSTM.Strict.TVar as TVar.Unchecked -import Control.Monad.Class.MonadTime.SI (MonadTime) -import Control.Monad.Class.MonadTimer.SI (MonadTimer) -import Control.ResourceRegistry -import Control.Tracer -import Data.ByteString.Lazy (ByteString) -import qualified Data.ByteString.Lazy as BSL -import Data.Hashable (Hashable) -import Data.Int (Int64) -import Data.Map.Strict (Map) -import Data.Void (Void) -import qualified Network.Mux as Mux -import Network.TypedProtocol.Codec -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config (DiffusionPipeliningSupport (..)) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime) -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncStateView (..)) -import qualified 
Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CsClient -import Ouroboros.Consensus.MiniProtocol.ChainSync.Server -import Ouroboros.Consensus.Node.ExitPolicy -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import qualified Ouroboros.Consensus.Node.Tracers as Node -import Ouroboros.Consensus.NodeKernel -import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB -import Ouroboros.Consensus.Storage.Serialisation (SerialisedHeader) -import Ouroboros.Consensus.Util (ShowProxy) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Network.Block (Serialised (..), decodePoint, - decodeTip, encodePoint, encodeTip) -import Ouroboros.Network.BlockFetch -import Ouroboros.Network.BlockFetch.Client (BlockFetchClient, - blockFetchClient) -import Ouroboros.Network.Channel -import Ouroboros.Network.Context -import Ouroboros.Network.DeltaQ -import Ouroboros.Network.Driver -import Ouroboros.Network.Driver.Limits -import Ouroboros.Network.KeepAlive -import Ouroboros.Network.Mux -import Ouroboros.Network.NodeToNode -import Ouroboros.Network.PeerSelection.PeerMetric.Type - (FetchedMetricsTracer, ReportPeerMetrics (..)) -import Ouroboros.Network.PeerSharing (PeerSharingController, - bracketPeerSharingClient, peerSharingClient, - peerSharingServer) -import Ouroboros.Network.Protocol.BlockFetch.Codec -import Ouroboros.Network.Protocol.BlockFetch.Server (BlockFetchServer, - blockFetchServerPeer) -import Ouroboros.Network.Protocol.BlockFetch.Type (BlockFetch (..)) -import Ouroboros.Network.Protocol.ChainSync.ClientPipelined -import Ouroboros.Network.Protocol.ChainSync.Codec -import Ouroboros.Network.Protocol.ChainSync.PipelineDecision -import Ouroboros.Network.Protocol.ChainSync.Server -import Ouroboros.Network.Protocol.ChainSync.Type -import Ouroboros.Network.Protocol.KeepAlive.Client -import Ouroboros.Network.Protocol.KeepAlive.Codec 
-import Ouroboros.Network.Protocol.KeepAlive.Server -import Ouroboros.Network.Protocol.KeepAlive.Type -import Ouroboros.Network.Protocol.PeerSharing.Client - (PeerSharingClient, peerSharingClientPeer) -import Ouroboros.Network.Protocol.PeerSharing.Codec - (byteLimitsPeerSharing, codecPeerSharing, - codecPeerSharingId, timeLimitsPeerSharing) -import Ouroboros.Network.Protocol.PeerSharing.Server - (PeerSharingServer, peerSharingServerPeer) -import Ouroboros.Network.Protocol.PeerSharing.Type (PeerSharing) -import Ouroboros.Network.Protocol.TxSubmission2.Client -import Ouroboros.Network.Protocol.TxSubmission2.Codec -import Ouroboros.Network.Protocol.TxSubmission2.Server -import Ouroboros.Network.Protocol.TxSubmission2.Type -import Ouroboros.Network.TxSubmission.Inbound -import Ouroboros.Network.TxSubmission.Mempool.Reader - (mapTxSubmissionMempoolReader) -import Ouroboros.Network.TxSubmission.Outbound - +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Encoding qualified as CBOR +import Codec.CBOR.Read (DeserialiseFailure) +import Control.Concurrent.Class.MonadSTM.Strict.TVar qualified as TVar.Unchecked +import Control.Monad.Class.MonadTime.SI (MonadTime) +import Control.Monad.Class.MonadTimer.SI (MonadTimer) +import Control.ResourceRegistry +import Control.Tracer +import Data.ByteString.Lazy (ByteString) +import Data.ByteString.Lazy qualified as BSL +import Data.Hashable (Hashable) +import Data.Int (Int64) +import Data.Map.Strict (Map) +import Data.Void (Void) +import Network.Mux qualified as Mux +import Network.TypedProtocol.Codec +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config (DiffusionPipeliningSupport (..)) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime) +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server +import 
Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncStateView (..) + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CsClient +import Ouroboros.Consensus.MiniProtocol.ChainSync.Server +import Ouroboros.Consensus.Node.ExitPolicy +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Node.Tracers qualified as Node +import Ouroboros.Consensus.NodeKernel +import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB +import Ouroboros.Consensus.Storage.Serialisation (SerialisedHeader) +import Ouroboros.Consensus.Util (ShowProxy) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Network.Block + ( Serialised (..) + , decodePoint + , decodeTip + , encodePoint + , encodeTip + ) +import Ouroboros.Network.BlockFetch +import Ouroboros.Network.BlockFetch.Client + ( BlockFetchClient + , blockFetchClient + ) +import Ouroboros.Network.Channel +import Ouroboros.Network.Context +import Ouroboros.Network.DeltaQ +import Ouroboros.Network.Driver +import Ouroboros.Network.Driver.Limits +import Ouroboros.Network.KeepAlive +import Ouroboros.Network.Mux +import Ouroboros.Network.NodeToNode +import Ouroboros.Network.PeerSelection.PeerMetric.Type + ( FetchedMetricsTracer + , ReportPeerMetrics (..) 
+ ) +import Ouroboros.Network.PeerSharing + ( PeerSharingController + , bracketPeerSharingClient + , peerSharingClient + , peerSharingServer + ) +import Ouroboros.Network.Protocol.BlockFetch.Codec +import Ouroboros.Network.Protocol.BlockFetch.Server + ( BlockFetchServer + , blockFetchServerPeer + ) +import Ouroboros.Network.Protocol.BlockFetch.Type (BlockFetch (..)) +import Ouroboros.Network.Protocol.ChainSync.ClientPipelined +import Ouroboros.Network.Protocol.ChainSync.Codec +import Ouroboros.Network.Protocol.ChainSync.PipelineDecision +import Ouroboros.Network.Protocol.ChainSync.Server +import Ouroboros.Network.Protocol.ChainSync.Type +import Ouroboros.Network.Protocol.KeepAlive.Client +import Ouroboros.Network.Protocol.KeepAlive.Codec +import Ouroboros.Network.Protocol.KeepAlive.Server +import Ouroboros.Network.Protocol.KeepAlive.Type +import Ouroboros.Network.Protocol.PeerSharing.Client + ( PeerSharingClient + , peerSharingClientPeer + ) +import Ouroboros.Network.Protocol.PeerSharing.Codec + ( byteLimitsPeerSharing + , codecPeerSharing + , codecPeerSharingId + , timeLimitsPeerSharing + ) +import Ouroboros.Network.Protocol.PeerSharing.Server + ( PeerSharingServer + , peerSharingServerPeer + ) +import Ouroboros.Network.Protocol.PeerSharing.Type (PeerSharing) +import Ouroboros.Network.Protocol.TxSubmission2.Client +import Ouroboros.Network.Protocol.TxSubmission2.Codec +import Ouroboros.Network.Protocol.TxSubmission2.Server +import Ouroboros.Network.Protocol.TxSubmission2.Type +import Ouroboros.Network.TxSubmission.Inbound +import Ouroboros.Network.TxSubmission.Mempool.Reader + ( mapTxSubmissionMempoolReader + ) +import Ouroboros.Network.TxSubmission.Outbound {------------------------------------------------------------------------------- Handlers -------------------------------------------------------------------------------} -- | Protocol handlers for node-to-node (remote) communication -data Handlers m addr blk = Handlers { - hChainSyncClient - :: ConnectionId 
addr - -> IsBigLedgerPeer - -> CsClient.DynamicEnv m blk - -> ChainSyncClientPipelined (Header blk) (Point blk) (Tip blk) m - CsClient.ChainSyncClientResult - -- TODO: we should reconsider bundling these context parameters into a - -- record, perhaps instead extending the protocol handler - -- representation to support bracket-style initialisation so that we - -- could have the closure include these and not need to be explicit - -- about them here. - - , hChainSyncServer - :: ConnectionId addr - -> NodeToNodeVersion - -> ChainDB.Follower m blk (ChainDB.WithPoint blk (SerialisedHeader blk)) - -> ChainSyncServer (SerialisedHeader blk) (Point blk) (Tip blk) m () - - -- TODO block fetch client does not have GADT view of the handlers. - , hBlockFetchClient - :: NodeToNodeVersion - -> ControlMessageSTM m - -> FetchedMetricsTracer m - -> BlockFetchClient (HeaderWithTime blk) blk m () - - , hBlockFetchServer - :: ConnectionId addr - -> NodeToNodeVersion - -> ResourceRegistry m - -> BlockFetchServer (Serialised blk) (Point blk) m () - - , hTxSubmissionClient - :: NodeToNodeVersion - -> ControlMessageSTM m - -> ConnectionId addr - -> TxSubmissionClient (GenTxId blk) (GenTx blk) m () - - , hTxSubmissionServer - :: NodeToNodeVersion - -> ConnectionId addr - -> TxSubmissionServerPipelined (GenTxId blk) (GenTx blk) m () - - , hKeepAliveClient - :: NodeToNodeVersion - -> ControlMessageSTM m - -> ConnectionId addr - -> TVar.Unchecked.StrictTVar m (Map (ConnectionId addr) PeerGSV) - -> KeepAliveInterval - -> KeepAliveClient m () - - , hKeepAliveServer - :: NodeToNodeVersion - -> ConnectionId addr - -> KeepAliveServer m () - - , hPeerSharingClient - :: NodeToNodeVersion - -> ControlMessageSTM m - -> ConnectionId addr - -> PeerSharingController addr m - -> m (PeerSharingClient addr m ()) - - , hPeerSharingServer - :: NodeToNodeVersion - -> ConnectionId addr - -> PeerSharingServer addr m - } +data Handlers m addr blk = Handlers + { hChainSyncClient :: + ConnectionId addr -> + 
IsBigLedgerPeer -> + CsClient.DynamicEnv m blk -> + ChainSyncClientPipelined + (Header blk) + (Point blk) + (Tip blk) + m + CsClient.ChainSyncClientResult + , -- TODO: we should reconsider bundling these context parameters into a + -- record, perhaps instead extending the protocol handler + -- representation to support bracket-style initialisation so that we + -- could have the closure include these and not need to be explicit + -- about them here. + + hChainSyncServer :: + ConnectionId addr -> + NodeToNodeVersion -> + ChainDB.Follower m blk (ChainDB.WithPoint blk (SerialisedHeader blk)) -> + ChainSyncServer (SerialisedHeader blk) (Point blk) (Tip blk) m () + , -- TODO block fetch client does not have GADT view of the handlers. + hBlockFetchClient :: + NodeToNodeVersion -> + ControlMessageSTM m -> + FetchedMetricsTracer m -> + BlockFetchClient (HeaderWithTime blk) blk m () + , hBlockFetchServer :: + ConnectionId addr -> + NodeToNodeVersion -> + ResourceRegistry m -> + BlockFetchServer (Serialised blk) (Point blk) m () + , hTxSubmissionClient :: + NodeToNodeVersion -> + ControlMessageSTM m -> + ConnectionId addr -> + TxSubmissionClient (GenTxId blk) (GenTx blk) m () + , hTxSubmissionServer :: + NodeToNodeVersion -> + ConnectionId addr -> + TxSubmissionServerPipelined (GenTxId blk) (GenTx blk) m () + , hKeepAliveClient :: + NodeToNodeVersion -> + ControlMessageSTM m -> + ConnectionId addr -> + TVar.Unchecked.StrictTVar m (Map (ConnectionId addr) PeerGSV) -> + KeepAliveInterval -> + KeepAliveClient m () + , hKeepAliveServer :: + NodeToNodeVersion -> + ConnectionId addr -> + KeepAliveServer m () + , hPeerSharingClient :: + NodeToNodeVersion -> + ControlMessageSTM m -> + ConnectionId addr -> + PeerSharingController addr m -> + m (PeerSharingClient addr m ()) + , hPeerSharingServer :: + NodeToNodeVersion -> + ConnectionId addr -> + PeerSharingServer addr m + } mkHandlers :: - forall m blk addrNTN addrNTC. 
- ( IOLike m - , MonadTime m - , MonadTimer m - , LedgerSupportsMempool blk - , HasTxId (GenTx blk) - , LedgerSupportsProtocol blk - , Ord addrNTN - , Hashable addrNTN - ) - => NodeKernelArgs m addrNTN addrNTC blk - -> NodeKernel m addrNTN addrNTC blk - -> Handlers m addrNTN blk + forall m blk addrNTN addrNTC. + ( IOLike m + , MonadTime m + , MonadTimer m + , LedgerSupportsMempool blk + , HasTxId (GenTx blk) + , LedgerSupportsProtocol blk + , Ord addrNTN + , Hashable addrNTN + ) => + NodeKernelArgs m addrNTN addrNTC blk -> + NodeKernel m addrNTN addrNTC blk -> + Handlers m addrNTN blk mkHandlers - NodeKernelArgs {chainSyncFutureCheck, chainSyncHistoricityCheck, keepAliveRng, miniProtocolParameters, getDiffusionPipeliningSupport} - NodeKernel {getChainDB, getMempool, getTopLevelConfig, getTracers = tracers, getPeerSharingAPI, getGsmState} = - Handlers { - hChainSyncClient = \peer _isBigLedgerpeer dynEnv -> + NodeKernelArgs + { chainSyncFutureCheck + , chainSyncHistoricityCheck + , keepAliveRng + , miniProtocolParameters + , getDiffusionPipeliningSupport + } + NodeKernel + { getChainDB + , getMempool + , getTopLevelConfig + , getTracers = tracers + , getPeerSharingAPI + , getGsmState + } = + Handlers + { hChainSyncClient = \peer _isBigLedgerpeer dynEnv -> CsClient.chainSyncClient - CsClient.ConfigEnv { - CsClient.cfg = getTopLevelConfig + CsClient.ConfigEnv + { CsClient.cfg = getTopLevelConfig , CsClient.someHeaderInFutureCheck = chainSyncFutureCheck - , CsClient.historicityCheck = chainSyncHistoricityCheck (atomically getGsmState) - , CsClient.chainDbView = + , CsClient.historicityCheck = chainSyncHistoricityCheck (atomically getGsmState) + , CsClient.chainDbView = CsClient.defaultChainDbView getChainDB - , CsClient.mkPipelineDecision0 = pipelineDecisionLowHighMark - (chainSyncPipeliningLowMark miniProtocolParameters) - (chainSyncPipeliningHighMark miniProtocolParameters) - , CsClient.tracer = + , CsClient.mkPipelineDecision0 = + pipelineDecisionLowHighMark + 
(chainSyncPipeliningLowMark miniProtocolParameters) + (chainSyncPipeliningHighMark miniProtocolParameters) + , CsClient.tracer = contramap (TraceLabelPeer peer) (Node.chainSyncClientTracer tracers) , CsClient.getDiffusionPipeliningSupport = getDiffusionPipeliningSupport } @@ -268,100 +306,114 @@ mkHandlers -------------------------------------------------------------------------------} -- | Node-to-node protocol codecs needed to run 'Handlers'. -data Codecs blk addr e m bCS bSCS bBF bSBF bTX bKA bPS = Codecs { - cChainSyncCodec :: Codec (ChainSync (Header blk) (Point blk) (Tip blk)) e m bCS - , cChainSyncCodecSerialised :: Codec (ChainSync (SerialisedHeader blk) (Point blk) (Tip blk)) e m bSCS - , cBlockFetchCodec :: Codec (BlockFetch blk (Point blk)) e m bBF - , cBlockFetchCodecSerialised :: Codec (BlockFetch (Serialised blk) (Point blk)) e m bSBF - , cTxSubmission2Codec :: Codec (TxSubmission2 (GenTxId blk) (GenTx blk)) e m bTX - , cKeepAliveCodec :: Codec KeepAlive e m bKA - , cPeerSharingCodec :: Codec (PeerSharing addr) e m bPS - } +data Codecs blk addr e m bCS bSCS bBF bSBF bTX bKA bPS = Codecs + { cChainSyncCodec :: Codec (ChainSync (Header blk) (Point blk) (Tip blk)) e m bCS + , cChainSyncCodecSerialised :: + Codec (ChainSync (SerialisedHeader blk) (Point blk) (Tip blk)) e m bSCS + , cBlockFetchCodec :: Codec (BlockFetch blk (Point blk)) e m bBF + , cBlockFetchCodecSerialised :: + Codec (BlockFetch (Serialised blk) (Point blk)) e m bSBF + , cTxSubmission2Codec :: Codec (TxSubmission2 (GenTxId blk) (GenTx blk)) e m bTX + , cKeepAliveCodec :: Codec KeepAlive e m bKA + , cPeerSharingCodec :: Codec (PeerSharing addr) e m bPS + } -- | Protocol codecs for the node-to-node protocols -defaultCodecs :: forall m blk addr. - ( IOLike m - , SerialiseNodeToNodeConstraints blk - ) - => CodecConfig blk - -> BlockNodeToNodeVersion blk - -> (NodeToNodeVersion -> addr -> CBOR.Encoding) - -> (NodeToNodeVersion -> forall s . 
CBOR.Decoder s addr) - -> NodeToNodeVersion - -> Codecs blk addr DeserialiseFailure m - ByteString ByteString ByteString ByteString ByteString ByteString ByteString -defaultCodecs ccfg version encAddr decAddr nodeToNodeVersion = Codecs { - cChainSyncCodec = +defaultCodecs :: + forall m blk addr. + ( IOLike m + , SerialiseNodeToNodeConstraints blk + ) => + CodecConfig blk -> + BlockNodeToNodeVersion blk -> + (NodeToNodeVersion -> addr -> CBOR.Encoding) -> + (NodeToNodeVersion -> forall s. CBOR.Decoder s addr) -> + NodeToNodeVersion -> + Codecs + blk + addr + DeserialiseFailure + m + ByteString + ByteString + ByteString + ByteString + ByteString + ByteString + ByteString +defaultCodecs ccfg version encAddr decAddr nodeToNodeVersion = + Codecs + { cChainSyncCodec = codecChainSync enc dec (encodePoint (encodeRawHash p)) (decodePoint (decodeRawHash p)) - (encodeTip (encodeRawHash p)) - (decodeTip (decodeRawHash p)) - + (encodeTip (encodeRawHash p)) + (decodeTip (decodeRawHash p)) , cChainSyncCodecSerialised = codecChainSync enc dec (encodePoint (encodeRawHash p)) (decodePoint (decodeRawHash p)) - (encodeTip (encodeRawHash p)) - (decodeTip (decodeRawHash p)) - + (encodeTip (encodeRawHash p)) + (decodeTip (decodeRawHash p)) , cBlockFetchCodec = codecBlockFetch enc dec (encodePoint (encodeRawHash p)) (decodePoint (decodeRawHash p)) - , cBlockFetchCodecSerialised = codecBlockFetch enc dec (encodePoint (encodeRawHash p)) (decodePoint (decodeRawHash p)) - , cTxSubmission2Codec = codecTxSubmission2 enc dec enc dec - , cKeepAliveCodec = codecKeepAlive_v2 - , cPeerSharingCodec = codecPeerSharing (encAddr nodeToNodeVersion) (decAddr nodeToNodeVersion) } - where - p :: Proxy blk - p = Proxy + where + p :: Proxy blk + p = Proxy - enc :: SerialiseNodeToNode blk a => a -> Encoding - enc = encodeNodeToNode ccfg version + enc :: SerialiseNodeToNode blk a => a -> Encoding + enc = encodeNodeToNode ccfg version - dec :: SerialiseNodeToNode blk a => forall s. 
Decoder s a - dec = decodeNodeToNode ccfg version + dec :: SerialiseNodeToNode blk a => forall s. Decoder s a + dec = decodeNodeToNode ccfg version -- | Identity codecs used in tests. -identityCodecs :: Monad m - => Codecs blk addr CodecFailure m - (AnyMessage (ChainSync (Header blk) (Point blk) (Tip blk))) - (AnyMessage (ChainSync (SerialisedHeader blk) (Point blk) (Tip blk))) - (AnyMessage (BlockFetch blk (Point blk))) - (AnyMessage (BlockFetch (Serialised blk) (Point blk))) - (AnyMessage (TxSubmission2 (GenTxId blk) (GenTx blk))) - (AnyMessage KeepAlive) - (AnyMessage (PeerSharing addr)) -identityCodecs = Codecs { - cChainSyncCodec = codecChainSyncId - , cChainSyncCodecSerialised = codecChainSyncId - , cBlockFetchCodec = codecBlockFetchId +identityCodecs :: + Monad m => + Codecs + blk + addr + CodecFailure + m + (AnyMessage (ChainSync (Header blk) (Point blk) (Tip blk))) + (AnyMessage (ChainSync (SerialisedHeader blk) (Point blk) (Tip blk))) + (AnyMessage (BlockFetch blk (Point blk))) + (AnyMessage (BlockFetch (Serialised blk) (Point blk))) + (AnyMessage (TxSubmission2 (GenTxId blk) (GenTx blk))) + (AnyMessage KeepAlive) + (AnyMessage (PeerSharing addr)) +identityCodecs = + Codecs + { cChainSyncCodec = codecChainSyncId + , cChainSyncCodecSerialised = codecChainSyncId + , cBlockFetchCodec = codecBlockFetchId , cBlockFetchCodecSerialised = codecBlockFetchId - , cTxSubmission2Codec = codecTxSubmission2Id - , cKeepAliveCodec = codecKeepAliveId - , cPeerSharingCodec = codecPeerSharingId + , cTxSubmission2Codec = codecTxSubmission2Id + , cKeepAliveCodec = codecKeepAliveId + , cPeerSharingCodec = codecPeerSharingId } {------------------------------------------------------------------------------- @@ -370,63 +422,73 @@ identityCodecs = Codecs { -- | A record of 'Tracer's for the different protocols. 
type Tracers m ntnAddr blk e = - Tracers' (ConnectionId ntnAddr) ntnAddr blk e (Tracer m) - -data Tracers' peer ntnAddr blk e f = Tracers { - tChainSyncTracer :: f (TraceLabelPeer peer (TraceSendRecv (ChainSync (Header blk) (Point blk) (Tip blk)))) - , tChainSyncSerialisedTracer :: f (TraceLabelPeer peer (TraceSendRecv (ChainSync (SerialisedHeader blk) (Point blk) (Tip blk)))) - , tBlockFetchTracer :: f (TraceLabelPeer peer (TraceSendRecv (BlockFetch blk (Point blk)))) - , tBlockFetchSerialisedTracer :: f (TraceLabelPeer peer (TraceSendRecv (BlockFetch (Serialised blk) (Point blk)))) - , tTxSubmission2Tracer :: f (TraceLabelPeer peer (TraceSendRecv (TxSubmission2 (GenTxId blk) (GenTx blk)))) - , tKeepAliveTracer :: f (TraceLabelPeer peer (TraceSendRecv KeepAlive)) - , tPeerSharingTracer :: f (TraceLabelPeer peer (TraceSendRecv (PeerSharing ntnAddr))) - } + Tracers' (ConnectionId ntnAddr) ntnAddr blk e (Tracer m) + +data Tracers' peer ntnAddr blk e f = Tracers + { tChainSyncTracer :: + f (TraceLabelPeer peer (TraceSendRecv (ChainSync (Header blk) (Point blk) (Tip blk)))) + , tChainSyncSerialisedTracer :: + f (TraceLabelPeer peer (TraceSendRecv (ChainSync (SerialisedHeader blk) (Point blk) (Tip blk)))) + , tBlockFetchTracer :: f (TraceLabelPeer peer (TraceSendRecv (BlockFetch blk (Point blk)))) + , tBlockFetchSerialisedTracer :: + f (TraceLabelPeer peer (TraceSendRecv (BlockFetch (Serialised blk) (Point blk)))) + , tTxSubmission2Tracer :: + f (TraceLabelPeer peer (TraceSendRecv (TxSubmission2 (GenTxId blk) (GenTx blk)))) + , tKeepAliveTracer :: f (TraceLabelPeer peer (TraceSendRecv KeepAlive)) + , tPeerSharingTracer :: f (TraceLabelPeer peer (TraceSendRecv (PeerSharing ntnAddr))) + } instance (forall a. 
Semigroup (f a)) => Semigroup (Tracers' peer ntnAddr blk e f) where - l <> r = Tracers { - tChainSyncTracer = f tChainSyncTracer - , tChainSyncSerialisedTracer = f tChainSyncSerialisedTracer - , tBlockFetchTracer = f tBlockFetchTracer + l <> r = + Tracers + { tChainSyncTracer = f tChainSyncTracer + , tChainSyncSerialisedTracer = f tChainSyncSerialisedTracer + , tBlockFetchTracer = f tBlockFetchTracer , tBlockFetchSerialisedTracer = f tBlockFetchSerialisedTracer - , tTxSubmission2Tracer = f tTxSubmission2Tracer - , tKeepAliveTracer = f tKeepAliveTracer - , tPeerSharingTracer = f tPeerSharingTracer + , tTxSubmission2Tracer = f tTxSubmission2Tracer + , tKeepAliveTracer = f tKeepAliveTracer + , tPeerSharingTracer = f tPeerSharingTracer } - where - f :: forall a. Semigroup a - => (Tracers' peer ntnAddr blk e f -> a) - -> a - f prj = prj l <> prj r + where + f :: + forall a. + Semigroup a => + (Tracers' peer ntnAddr blk e f -> a) -> + a + f prj = prj l <> prj r -- | Use a 'nullTracer' for each protocol. 
nullTracers :: Monad m => Tracers m ntnAddr blk e -nullTracers = Tracers { - tChainSyncTracer = nullTracer - , tChainSyncSerialisedTracer = nullTracer - , tBlockFetchTracer = nullTracer +nullTracers = + Tracers + { tChainSyncTracer = nullTracer + , tChainSyncSerialisedTracer = nullTracer + , tBlockFetchTracer = nullTracer , tBlockFetchSerialisedTracer = nullTracer - , tTxSubmission2Tracer = nullTracer - , tKeepAliveTracer = nullTracer - , tPeerSharingTracer = nullTracer + , tTxSubmission2Tracer = nullTracer + , tKeepAliveTracer = nullTracer + , tPeerSharingTracer = nullTracer } -showTracers :: ( Show blk - , Show ntnAddr - , Show (Header blk) - , Show (GenTx blk) - , Show (GenTxId blk) - , HasHeader blk - , HasNestedContent Header blk - ) - => Tracer m String -> Tracers m ntnAddr blk e -showTracers tr = Tracers { - tChainSyncTracer = showTracing tr - , tChainSyncSerialisedTracer = showTracing tr - , tBlockFetchTracer = showTracing tr +showTracers :: + ( Show blk + , Show ntnAddr + , Show (Header blk) + , Show (GenTx blk) + , Show (GenTxId blk) + , HasHeader blk + , HasNestedContent Header blk + ) => + Tracer m String -> Tracers m ntnAddr blk e +showTracers tr = + Tracers + { tChainSyncTracer = showTracing tr + , tChainSyncSerialisedTracer = showTracing tr + , tBlockFetchTracer = showTracing tr , tBlockFetchSerialisedTracer = showTracing tr - , tTxSubmission2Tracer = showTracing tr - , tKeepAliveTracer = showTracing tr - , tPeerSharingTracer = showTracing tr + , tTxSubmission2Tracer = showTracing tr + , tKeepAliveTracer = showTracing tr + , tPeerSharingTracer = showTracing tr } {------------------------------------------------------------------------------- @@ -435,143 +497,138 @@ showTracers tr = Tracers { -- | A node-to-node application type ClientApp m addr bytes a = - NodeToNodeVersion - -> ExpandedInitiatorContext addr m - -> Channel m bytes - -> m (a, Maybe bytes) + NodeToNodeVersion -> + ExpandedInitiatorContext addr m -> + Channel m bytes -> + m (a, Maybe 
bytes) type ServerApp m addr bytes a = - NodeToNodeVersion - -> ResponderContext addr - -> Channel m bytes - -> m (a, Maybe bytes) + NodeToNodeVersion -> + ResponderContext addr -> + Channel m bytes -> + m (a, Maybe bytes) -- | Applications for the node-to-node protocols -- -- See 'Network.Mux.Types.MuxApplication' -data Apps m addr bCS bBF bTX bKA bPS a b = Apps { - -- | Start a chain sync client that communicates with the given upstream - -- node. - aChainSyncClient :: ClientApp m addr bCS a - - -- | Start a chain sync server. - , aChainSyncServer :: ServerApp m addr bCS b - - -- | Start a block fetch client that communicates with the given - -- upstream node. - , aBlockFetchClient :: ClientApp m addr bBF a - - -- | Start a block fetch server. - , aBlockFetchServer :: ServerApp m addr bBF b - - -- | Start a transaction submission v2 client that communicates with the - -- given upstream node. - , aTxSubmission2Client :: ClientApp m addr bTX a - - -- | Start a transaction submission v2 server. - , aTxSubmission2Server :: ServerApp m addr bTX b - - -- | Start a keep-alive client. - , aKeepAliveClient :: ClientApp m addr bKA a - - -- | Start a keep-alive server. - , aKeepAliveServer :: ServerApp m addr bKA b - - -- | Start a peer-sharing client. - , aPeerSharingClient :: ClientApp m addr bPS a - - -- | Start a peer-sharing server. - , aPeerSharingServer :: ServerApp m addr bPS b - } - +data Apps m addr bCS bBF bTX bKA bPS a b = Apps + { aChainSyncClient :: ClientApp m addr bCS a + -- ^ Start a chain sync client that communicates with the given upstream + -- node. + , aChainSyncServer :: ServerApp m addr bCS b + -- ^ Start a chain sync server. + , aBlockFetchClient :: ClientApp m addr bBF a + -- ^ Start a block fetch client that communicates with the given + -- upstream node. + , aBlockFetchServer :: ServerApp m addr bBF b + -- ^ Start a block fetch server. 
+ , aTxSubmission2Client :: ClientApp m addr bTX a + -- ^ Start a transaction submission v2 client that communicates with the + -- given upstream node. + , aTxSubmission2Server :: ServerApp m addr bTX b + -- ^ Start a transaction submission v2 server. + , aKeepAliveClient :: ClientApp m addr bKA a + -- ^ Start a keep-alive client. + , aKeepAliveServer :: ServerApp m addr bKA b + -- ^ Start a keep-alive server. + , aPeerSharingClient :: ClientApp m addr bPS a + -- ^ Start a peer-sharing client. + , aPeerSharingServer :: ServerApp m addr bPS b + -- ^ Start a peer-sharing server. + } -- | Per mini-protocol byte limits; For each mini-protocol they provide -- per-state byte size limits, i.e. how much data can arrive from the network. -- -- They don't depend on the instantiation of the protocol parameters (which -- block type is used, etc.), hence the use of 'RankNTypes'. --- -data ByteLimits bCS bBF bTX bKA = ByteLimits { - blChainSync :: forall header point tip. - ProtocolSizeLimits - (ChainSync header point tip) - bCS - - , blBlockFetch :: forall block point. - ProtocolSizeLimits - (BlockFetch block point) - bBF - - , blTxSubmission2 :: forall txid tx. - ProtocolSizeLimits - (TxSubmission2 txid tx) - bTX - - , blKeepAlive :: ProtocolSizeLimits - KeepAlive - bKA - - } +data ByteLimits bCS bBF bTX bKA = ByteLimits + { blChainSync :: + forall header point tip. + ProtocolSizeLimits + (ChainSync header point tip) + bCS + , blBlockFetch :: + forall block point. + ProtocolSizeLimits + (BlockFetch block point) + bBF + , blTxSubmission2 :: + forall txid tx. 
+ ProtocolSizeLimits + (TxSubmission2 txid tx) + bTX + , blKeepAlive :: + ProtocolSizeLimits + KeepAlive + bKA + } noByteLimits :: ByteLimits bCS bBF bTX bKA -noByteLimits = ByteLimits { - blChainSync = byteLimitsChainSync (const 0) - , blBlockFetch = byteLimitsBlockFetch (const 0) - , blTxSubmission2 = byteLimitsTxSubmission2 (const 0) - , blKeepAlive = byteLimitsKeepAlive (const 0) - } +noByteLimits = + ByteLimits + { blChainSync = byteLimitsChainSync (const 0) + , blBlockFetch = byteLimitsBlockFetch (const 0) + , blTxSubmission2 = byteLimitsTxSubmission2 (const 0) + , blKeepAlive = byteLimitsKeepAlive (const 0) + } byteLimits :: ByteLimits ByteString ByteString ByteString ByteString -byteLimits = ByteLimits { - blChainSync = byteLimitsChainSync size - , blBlockFetch = byteLimitsBlockFetch size +byteLimits = + ByteLimits + { blChainSync = byteLimitsChainSync size + , blBlockFetch = byteLimitsBlockFetch size , blTxSubmission2 = byteLimitsTxSubmission2 size - , blKeepAlive = byteLimitsKeepAlive size + , blKeepAlive = byteLimitsKeepAlive size } - where - size :: ByteString -> Word - size = (fromIntegral :: Int64 -> Word) - . BSL.length + where + size :: ByteString -> Word + size = + (fromIntegral :: Int64 -> Word) + . BSL.length -- | Construct the 'NetworkApplication' for the node-to-node protocols mkApps :: - forall m addrNTN addrNTC blk e bCS bBF bTX bKA bPS. 
- ( IOLike m - , MonadTimer m - , Ord addrNTN - , Exception e - , LedgerSupportsProtocol blk - , ShowProxy blk - , ShowProxy (Header blk) - , ShowProxy (TxId (GenTx blk)) - , ShowProxy (GenTx blk) - ) - => NodeKernel m addrNTN addrNTC blk -- ^ Needed for bracketing only - -> Tracers m addrNTN blk e - -> (NodeToNodeVersion -> Codecs blk addrNTN e m bCS bCS bBF bBF bTX bKA bPS) - -> ByteLimits bCS bBF bTX bKA - -> m ChainSyncTimeout - -> CsClient.ChainSyncLoPBucketConfig - -> CsClient.CSJConfig - -> ReportPeerMetrics m (ConnectionId addrNTN) - -> Handlers m addrNTN blk - -> Apps m addrNTN bCS bBF bTX bKA bPS NodeToNodeInitiatorResult () -mkApps kernel Tracers {..} mkCodecs ByteLimits {..} genChainSyncTimeout lopBucketConfig csjConfig ReportPeerMetrics {..} Handlers {..} = - Apps {..} - where - NodeKernel { getDiffusionPipeliningSupport } = kernel - - aChainSyncClient - :: NodeToNodeVersion - -> ExpandedInitiatorContext addrNTN m - -> Channel m bCS - -> m (NodeToNodeInitiatorResult, Maybe bCS) - aChainSyncClient version ExpandedInitiatorContext { - eicConnectionId = them, - eicControlMessage = controlMessageSTM, - eicIsBigLedgerPeer = isBigLedgerPeer - } - channel = do + forall m addrNTN addrNTC blk e bCS bBF bTX bKA bPS. 
+ ( IOLike m + , MonadTimer m + , Ord addrNTN + , Exception e + , LedgerSupportsProtocol blk + , ShowProxy blk + , ShowProxy (Header blk) + , ShowProxy (TxId (GenTx blk)) + , ShowProxy (GenTx blk) + ) => + -- | Needed for bracketing only + NodeKernel m addrNTN addrNTC blk -> + Tracers m addrNTN blk e -> + (NodeToNodeVersion -> Codecs blk addrNTN e m bCS bCS bBF bBF bTX bKA bPS) -> + ByteLimits bCS bBF bTX bKA -> + m ChainSyncTimeout -> + CsClient.ChainSyncLoPBucketConfig -> + CsClient.CSJConfig -> + ReportPeerMetrics m (ConnectionId addrNTN) -> + Handlers m addrNTN blk -> + Apps m addrNTN bCS bBF bTX bKA bPS NodeToNodeInitiatorResult () +mkApps kernel Tracers{..} mkCodecs ByteLimits{..} genChainSyncTimeout lopBucketConfig csjConfig ReportPeerMetrics{..} Handlers{..} = + Apps{..} + where + NodeKernel{getDiffusionPipeliningSupport} = kernel + + aChainSyncClient :: + NodeToNodeVersion -> + ExpandedInitiatorContext addrNTN m -> + Channel m bCS -> + m (NodeToNodeInitiatorResult, Maybe bCS) + aChainSyncClient + version + ExpandedInitiatorContext + { eicConnectionId = them + , eicControlMessage = controlMessageSTM + , eicIsBigLedgerPeer = isBigLedgerPeer + } + channel = do labelThisThread "ChainSyncClient" -- Note that it is crucial that we sync with the fetch client "outside" -- of registering the state for the sync client. This is needed to @@ -579,225 +636,246 @@ mkApps kernel Tracers {..} mkCodecs ByteLimits {..} genChainSyncTimeout lopBucke -- each candidate chain there is a corresponding block fetch client that -- can be used to fetch blocks for that chain. 
bracketSyncWithFetchClient - (getFetchClientRegistry kernel) them $ - CsClient.bracketChainSyncClient - (contramap (TraceLabelPeer them) (Node.chainSyncClientTracer (getTracers kernel))) - (contramap (TraceLabelPeer them) (Node.csjTracer (getTracers kernel))) - (CsClient.defaultChainDbView (getChainDB kernel)) - (getChainSyncHandles kernel) - (getGsmState kernel) - them - version - lopBucketConfig - csjConfig - getDiffusionPipeliningSupport - $ \csState -> do - chainSyncTimeout <- genChainSyncTimeout - (r, trailing) <- - runPipelinedPeerWithLimits - (contramap (TraceLabelPeer them) tChainSyncTracer) - (cChainSyncCodec (mkCodecs version)) - blChainSync - (timeLimitsChainSync chainSyncTimeout) - channel - $ chainSyncClientPeerPipelined - $ hChainSyncClient - them - isBigLedgerPeer - CsClient.DynamicEnv { - CsClient.version - , CsClient.controlMessageSTM - , CsClient.headerMetricsTracer = TraceLabelPeer them `contramap` reportHeader - , CsClient.setCandidate = csvSetCandidate csState - , CsClient.idling = csvIdling csState - , CsClient.loPBucket = csvLoPBucket csState - , CsClient.setLatestSlot = csvSetLatestSlot csState - , CsClient.jumping = csvJumping csState - } - return (ChainSyncInitiatorResult r, trailing) - - aChainSyncServer - :: NodeToNodeVersion - -> ResponderContext addrNTN - -> Channel m bCS - -> m ((), Maybe bCS) - aChainSyncServer version ResponderContext { rcConnectionId = them } channel = do - labelThisThread "ChainSyncServer" - chainSyncTimeout <- genChainSyncTimeout - bracketWithPrivateRegistry - (chainSyncHeaderServerFollower - (getChainDB kernel) - ( case getDiffusionPipeliningSupport of - DiffusionPipeliningOn -> ChainDB.TentativeChain + (getFetchClientRegistry kernel) + them + $ CsClient.bracketChainSyncClient + (contramap (TraceLabelPeer them) (Node.chainSyncClientTracer (getTracers kernel))) + (contramap (TraceLabelPeer them) (Node.csjTracer (getTracers kernel))) + (CsClient.defaultChainDbView (getChainDB kernel)) + (getChainSyncHandles
kernel) + (getGsmState kernel) + them + version + lopBucketConfig + csjConfig + getDiffusionPipeliningSupport + $ \csState -> do + chainSyncTimeout <- genChainSyncTimeout + (r, trailing) <- + runPipelinedPeerWithLimits + (contramap (TraceLabelPeer them) tChainSyncTracer) + (cChainSyncCodec (mkCodecs version)) + blChainSync + (timeLimitsChainSync chainSyncTimeout) + channel + $ chainSyncClientPeerPipelined + $ hChainSyncClient + them + isBigLedgerPeer + CsClient.DynamicEnv + { CsClient.version + , CsClient.controlMessageSTM + , CsClient.headerMetricsTracer = TraceLabelPeer them `contramap` reportHeader + , CsClient.setCandidate = csvSetCandidate csState + , CsClient.idling = csvIdling csState + , CsClient.loPBucket = csvLoPBucket csState + , CsClient.setLatestSlot = csvSetLatestSlot csState + , CsClient.jumping = csvJumping csState + } + return (ChainSyncInitiatorResult r, trailing) + + aChainSyncServer :: + NodeToNodeVersion -> + ResponderContext addrNTN -> + Channel m bCS -> + m ((), Maybe bCS) + aChainSyncServer version ResponderContext{rcConnectionId = them} channel = do + labelThisThread "ChainSyncServer" + chainSyncTimeout <- genChainSyncTimeout + bracketWithPrivateRegistry + ( chainSyncHeaderServerFollower + (getChainDB kernel) + ( case getDiffusionPipeliningSupport of + DiffusionPipeliningOn -> ChainDB.TentativeChain DiffusionPipeliningOff -> ChainDB.SelectedChain - ) - ) - ChainDB.followerClose - $ \flr -> - runPeerWithLimits - (contramap (TraceLabelPeer them) tChainSyncSerialisedTracer) - (cChainSyncCodecSerialised (mkCodecs version)) - blChainSync - (timeLimitsChainSync chainSyncTimeout) - channel - $ chainSyncServerPeer - $ hChainSyncServer them version flr - - aBlockFetchClient - :: NodeToNodeVersion - -> ExpandedInitiatorContext addrNTN m - -> Channel m bBF - -> m (NodeToNodeInitiatorResult, Maybe bBF) - aBlockFetchClient version ExpandedInitiatorContext { - eicConnectionId = them, - eicControlMessage = controlMessageSTM - } - channel = do -
labelThisThread "BlockFetchClient" - bracketFetchClient (getFetchClientRegistry kernel) version - them $ \clientCtx -> do - ((), trailing) <- runPipelinedPeerWithLimits - (contramap (TraceLabelPeer them) tBlockFetchTracer) - (cBlockFetchCodec (mkCodecs version)) - blBlockFetch - timeLimitsBlockFetch - channel - $ hBlockFetchClient version controlMessageSTM - (TraceLabelPeer them `contramap` reportFetch) clientCtx - return (NoInitiatorResult, trailing) - - aBlockFetchServer - :: NodeToNodeVersion - -> ResponderContext addrNTN - -> Channel m bBF - -> m ((), Maybe bBF) - aBlockFetchServer version ResponderContext { rcConnectionId = them } channel = do - labelThisThread "BlockFetchServer" - withRegistry $ \registry -> + ) + ) + ChainDB.followerClose + $ \flr -> runPeerWithLimits - (contramap (TraceLabelPeer them) tBlockFetchSerialisedTracer) - (cBlockFetchCodecSerialised (mkCodecs version)) - blBlockFetch - timeLimitsBlockFetch + (contramap (TraceLabelPeer them) tChainSyncSerialisedTracer) + (cChainSyncCodecSerialised (mkCodecs version)) + blChainSync + (timeLimitsChainSync chainSyncTimeout) channel - $ blockFetchServerPeer - $ hBlockFetchServer them version registry - - aTxSubmission2Client - :: NodeToNodeVersion - -> ExpandedInitiatorContext addrNTN m - -> Channel m bTX - -> m (NodeToNodeInitiatorResult, Maybe bTX) - aTxSubmission2Client version ExpandedInitiatorContext { - eicConnectionId = them, - eicControlMessage = controlMessageSTM - } - channel = do - labelThisThread "TxSubmissionClient" - ((), trailing) <- runPeerWithLimits - (contramap (TraceLabelPeer them) tTxSubmission2Tracer) - (cTxSubmission2Codec (mkCodecs version)) - blTxSubmission2 - timeLimitsTxSubmission2 + $ chainSyncServerPeer + $ hChainSyncServer them version flr + + aBlockFetchClient :: + NodeToNodeVersion -> + ExpandedInitiatorContext addrNTN m -> + Channel m bBF -> + m (NodeToNodeInitiatorResult, Maybe bBF) + aBlockFetchClient + version + ExpandedInitiatorContext + { eicConnectionId = them + ,
eicControlMessage = controlMessageSTM + } + channel = do + labelThisThread "BlockFetchClient" + bracketFetchClient + (getFetchClientRegistry kernel) + version + them + $ \clientCtx -> do + ((), trailing) <- + runPipelinedPeerWithLimits + (contramap (TraceLabelPeer them) tBlockFetchTracer) + (cBlockFetchCodec (mkCodecs version)) + blBlockFetch + timeLimitsBlockFetch + channel + $ hBlockFetchClient + version + controlMessageSTM + (TraceLabelPeer them `contramap` reportFetch) + clientCtx + return (NoInitiatorResult, trailing) + + aBlockFetchServer :: + NodeToNodeVersion -> + ResponderContext addrNTN -> + Channel m bBF -> + m ((), Maybe bBF) + aBlockFetchServer version ResponderContext{rcConnectionId = them} channel = do + labelThisThread "BlockFetchServer" + withRegistry $ \registry -> + runPeerWithLimits + (contramap (TraceLabelPeer them) tBlockFetchSerialisedTracer) + (cBlockFetchCodecSerialised (mkCodecs version)) + blBlockFetch + timeLimitsBlockFetch channel - (txSubmissionClientPeer (hTxSubmissionClient version controlMessageSTM them)) + $ blockFetchServerPeer + $ hBlockFetchServer them version registry + + aTxSubmission2Client :: + NodeToNodeVersion -> + ExpandedInitiatorContext addrNTN m -> + Channel m bTX -> + m (NodeToNodeInitiatorResult, Maybe bTX) + aTxSubmission2Client + version + ExpandedInitiatorContext + { eicConnectionId = them + , eicControlMessage = controlMessageSTM + } + channel = do + labelThisThread "TxSubmissionClient" + ((), trailing) <- + runPeerWithLimits + (contramap (TraceLabelPeer them) tTxSubmission2Tracer) + (cTxSubmission2Codec (mkCodecs version)) + blTxSubmission2 + timeLimitsTxSubmission2 + channel + (txSubmissionClientPeer (hTxSubmissionClient version controlMessageSTM them)) return (NoInitiatorResult, trailing) - aTxSubmission2Server - :: NodeToNodeVersion - -> ResponderContext addrNTN - -> Channel m bTX - -> m ((), Maybe bTX) - aTxSubmission2Server version ResponderContext { rcConnectionId = them } channel = do - labelThisThread
"TxSubmissionServer" - runPipelinedPeerWithLimits - (contramap (TraceLabelPeer them) tTxSubmission2Tracer) - (cTxSubmission2Codec (mkCodecs version)) - blTxSubmission2 - timeLimitsTxSubmission2 - channel - (txSubmissionServerPeerPipelined (hTxSubmissionServer version them)) - - aKeepAliveClient - :: NodeToNodeVersion - -> ExpandedInitiatorContext addrNTN m - -> Channel m bKA - -> m (NodeToNodeInitiatorResult, Maybe bKA) - aKeepAliveClient version ExpandedInitiatorContext { - eicConnectionId = them, - eicControlMessage = controlMessageSTM - } - channel = do + aTxSubmission2Server :: + NodeToNodeVersion -> + ResponderContext addrNTN -> + Channel m bTX -> + m ((), Maybe bTX) + aTxSubmission2Server version ResponderContext{rcConnectionId = them} channel = do + labelThisThread "TxSubmissionServer" + runPipelinedPeerWithLimits + (contramap (TraceLabelPeer them) tTxSubmission2Tracer) + (cTxSubmission2Codec (mkCodecs version)) + blTxSubmission2 + timeLimitsTxSubmission2 + channel + (txSubmissionServerPeerPipelined (hTxSubmissionServer version them)) + + aKeepAliveClient :: + NodeToNodeVersion -> + ExpandedInitiatorContext addrNTN m -> + Channel m bKA -> + m (NodeToNodeInitiatorResult, Maybe bKA) + aKeepAliveClient + version + ExpandedInitiatorContext + { eicConnectionId = them + , eicControlMessage = controlMessageSTM + } + channel = do labelThisThread "KeepAliveClient" let kacApp = \dqCtx -> - runPeerWithLimits - (TraceLabelPeer them `contramap` tKeepAliveTracer) - (cKeepAliveCodec (mkCodecs version)) - blKeepAlive - timeLimitsKeepAlive - channel - $ keepAliveClientPeer - $ hKeepAliveClient version controlMessageSTM them dqCtx - (KeepAliveInterval 10) + runPeerWithLimits + (TraceLabelPeer them `contramap` tKeepAliveTracer) + (cKeepAliveCodec (mkCodecs version)) + blKeepAlive + timeLimitsKeepAlive + channel + $ keepAliveClientPeer + $ hKeepAliveClient + version + controlMessageSTM + them + dqCtx + (KeepAliveInterval 10) ((), trailing) <- bracketKeepAliveClient
(getFetchClientRegistry kernel) them kacApp return (NoInitiatorResult, trailing) - aKeepAliveServer - :: NodeToNodeVersion - -> ResponderContext addrNTN - -> Channel m bKA - -> m ((), Maybe bKA) - aKeepAliveServer version ResponderContext { rcConnectionId = them } channel = do - labelThisThread "KeepAliveServer" - runPeerWithLimits - (TraceLabelPeer them `contramap` tKeepAliveTracer) - (cKeepAliveCodec (mkCodecs version)) - (byteLimitsKeepAlive (const 0)) -- TODO: Real Bytelimits, see #1727 - timeLimitsKeepAlive - channel - $ keepAliveServerPeer - $ keepAliveServer - - - aPeerSharingClient - :: NodeToNodeVersion - -> ExpandedInitiatorContext addrNTN m - -> Channel m bPS - -> m (NodeToNodeInitiatorResult, Maybe bPS) - aPeerSharingClient version ExpandedInitiatorContext { - eicConnectionId = them, - eicControlMessage = controlMessageSTM - } - channel = do + aKeepAliveServer :: + NodeToNodeVersion -> + ResponderContext addrNTN -> + Channel m bKA -> + m ((), Maybe bKA) + aKeepAliveServer version ResponderContext{rcConnectionId = them} channel = do + labelThisThread "KeepAliveServer" + runPeerWithLimits + (TraceLabelPeer them `contramap` tKeepAliveTracer) + (cKeepAliveCodec (mkCodecs version)) + (byteLimitsKeepAlive (const 0)) -- TODO: Real Bytelimits, see #1727 + timeLimitsKeepAlive + channel + $ keepAliveServerPeer + $ keepAliveServer + + aPeerSharingClient :: + NodeToNodeVersion -> + ExpandedInitiatorContext addrNTN m -> + Channel m bPS -> + m (NodeToNodeInitiatorResult, Maybe bPS) + aPeerSharingClient + version + ExpandedInitiatorContext + { eicConnectionId = them + , eicControlMessage = controlMessageSTM + } + channel = do labelThisThread "PeerSharingClient" - bracketPeerSharingClient (getPeerSharingRegistry kernel) (remoteAddress them) - $ \controller -> do + bracketPeerSharingClient (getPeerSharingRegistry kernel) (remoteAddress them) $ + \controller -> do psClient <- hPeerSharingClient version controlMessageSTM them controller - ((), trailing) <-
runPeerWithLimits - (TraceLabelPeer them `contramap` tPeerSharingTracer) - (cPeerSharingCodec (mkCodecs version)) - (byteLimitsPeerSharing (const 0)) - timeLimitsPeerSharing - channel - (peerSharingClientPeer psClient) + ((), trailing) <- + runPeerWithLimits + (TraceLabelPeer them `contramap` tPeerSharingTracer) + (cPeerSharingCodec (mkCodecs version)) + (byteLimitsPeerSharing (const 0)) + timeLimitsPeerSharing + channel + (peerSharingClientPeer psClient) return (NoInitiatorResult, trailing) - aPeerSharingServer - :: NodeToNodeVersion - -> ResponderContext addrNTN - -> Channel m bPS - -> m ((), Maybe bPS) - aPeerSharingServer version ResponderContext { rcConnectionId = them } channel = do - labelThisThread "PeerSharingServer" - runPeerWithLimits - (TraceLabelPeer them `contramap` tPeerSharingTracer) - (cPeerSharingCodec (mkCodecs version)) - (byteLimitsPeerSharing (const 0)) - timeLimitsPeerSharing - channel - $ peerSharingServerPeer - $ hPeerSharingServer version them + aPeerSharingServer :: + NodeToNodeVersion -> + ResponderContext addrNTN -> + Channel m bPS -> + m ((), Maybe bPS) + aPeerSharingServer version ResponderContext{rcConnectionId = them} channel = do + labelThisThread "PeerSharingServer" + runPeerWithLimits + (TraceLabelPeer them `contramap` tPeerSharingTracer) + (cPeerSharingCodec (mkCodecs version)) + (byteLimitsPeerSharing (const 0)) + timeLimitsPeerSharing + channel + $ peerSharingServerPeer + $ hPeerSharingServer version them {------------------------------------------------------------------------------- Projections from 'Apps' @@ -810,34 +888,35 @@ mkApps kernel Tracers {..} mkCodecs ByteLimits {..} genChainSyncTimeout lopBucke -- on the protocol version, but it eventually may; this is why @_version@ is -- currently unused.
initiator :: - MiniProtocolParameters - -> NodeToNodeVersion - -> NodeToNodeVersionData - -> Apps m addr b b b b b a c - -> OuroborosBundleWithExpandedCtx 'Mux.InitiatorMode addr b m a Void -initiator miniProtocolParameters version versionData Apps {..} = - nodeToNodeProtocols - miniProtocolParameters - -- TODO: currently consensus is using 'ConnectionId' for its 'peer' type. - -- This is currently ok, as we might accept multiple connections from the - -- same ip address, however this will change when we will switch to - -- p2p-governor & connection-manager. Then consensus can use peer's ip - -- address & port number, rather than 'ConnectionId' (which is - -- a quadruple uniquely determining a connection). - (NodeToNodeProtocols { - chainSyncProtocol = - (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aChainSyncClient version ctx))), - blockFetchProtocol = - (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aBlockFetchClient version ctx))), - txSubmissionProtocol = - (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aTxSubmission2Client version ctx))), - keepAliveProtocol = - (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aKeepAliveClient version ctx))), - peerSharingProtocol = + MiniProtocolParameters -> + NodeToNodeVersion -> + NodeToNodeVersionData -> + Apps m addr b b b b b a c -> + OuroborosBundleWithExpandedCtx 'Mux.InitiatorMode addr b m a Void +initiator miniProtocolParameters version versionData Apps{..} = + nodeToNodeProtocols + miniProtocolParameters + -- TODO: currently consensus is using 'ConnectionId' for its 'peer' type. + -- This is currently ok, as we might accept multiple connections from the + -- same ip address, however this will change when we will switch to + -- p2p-governor & connection-manager. Then consensus can use peer's ip + -- address & port number, rather than 'ConnectionId' (which is + -- a quadruple uniquely determining a connection).
+ ( NodeToNodeProtocols + { chainSyncProtocol = + (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aChainSyncClient version ctx))) + , blockFetchProtocol = + (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aBlockFetchClient version ctx))) + , txSubmissionProtocol = + (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aTxSubmission2Client version ctx))) + , keepAliveProtocol = + (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aKeepAliveClient version ctx))) + , peerSharingProtocol = (InitiatorProtocolOnly (MiniProtocolCb (\ctx -> aPeerSharingClient version ctx))) - }) - version - versionData + } + ) + version + versionData -- | A bi-directional network application. -- @@ -845,36 +924,41 @@ initiator miniProtocolParameters version versionData Apps {..} = -- on the protocol version, but it eventually may; this is why @_version@ is -- currently unused. initiatorAndResponder :: - MiniProtocolParameters - -> NodeToNodeVersion - -> NodeToNodeVersionData - -> Apps m addr b b b b b a c - -> OuroborosBundleWithExpandedCtx 'Mux.InitiatorResponderMode addr b m a c -initiatorAndResponder miniProtocolParameters version versionData Apps {..} = - nodeToNodeProtocols - miniProtocolParameters - (NodeToNodeProtocols { - chainSyncProtocol = - (InitiatorAndResponderProtocol - (MiniProtocolCb (\initiatorCtx -> aChainSyncClient version initiatorCtx)) - (MiniProtocolCb (\responderCtx -> aChainSyncServer version responderCtx))), - blockFetchProtocol = - (InitiatorAndResponderProtocol - (MiniProtocolCb (\initiatorCtx -> aBlockFetchClient version initiatorCtx)) - (MiniProtocolCb (\responderCtx -> aBlockFetchServer version responderCtx))), - txSubmissionProtocol = - (InitiatorAndResponderProtocol - (MiniProtocolCb (\initiatorCtx -> aTxSubmission2Client version initiatorCtx)) - (MiniProtocolCb (\responderCtx -> aTxSubmission2Server version responderCtx))), - keepAliveProtocol = - (InitiatorAndResponderProtocol - (MiniProtocolCb (\initiatorCtx -> aKeepAliveClient version initiatorCtx)) -
(MiniProtocolCb (\responderCtx -> aKeepAliveServer version responderCtx))), - - peerSharingProtocol = - (InitiatorAndResponderProtocol - (MiniProtocolCb (\initiatorCtx -> aPeerSharingClient version initiatorCtx)) - (MiniProtocolCb (\responderCtx -> aPeerSharingServer version responderCtx))) - }) - version - versionData + MiniProtocolParameters -> + NodeToNodeVersion -> + NodeToNodeVersionData -> + Apps m addr b b b b b a c -> + OuroborosBundleWithExpandedCtx 'Mux.InitiatorResponderMode addr b m a c +initiatorAndResponder miniProtocolParameters version versionData Apps{..} = + nodeToNodeProtocols + miniProtocolParameters + ( NodeToNodeProtocols + { chainSyncProtocol = + ( InitiatorAndResponderProtocol + (MiniProtocolCb (\initiatorCtx -> aChainSyncClient version initiatorCtx)) + (MiniProtocolCb (\responderCtx -> aChainSyncServer version responderCtx)) + ) + , blockFetchProtocol = + ( InitiatorAndResponderProtocol + (MiniProtocolCb (\initiatorCtx -> aBlockFetchClient version initiatorCtx)) + (MiniProtocolCb (\responderCtx -> aBlockFetchServer version responderCtx)) + ) + , txSubmissionProtocol = + ( InitiatorAndResponderProtocol + (MiniProtocolCb (\initiatorCtx -> aTxSubmission2Client version initiatorCtx)) + (MiniProtocolCb (\responderCtx -> aTxSubmission2Server version responderCtx)) + ) + , keepAliveProtocol = + ( InitiatorAndResponderProtocol + (MiniProtocolCb (\initiatorCtx -> aKeepAliveClient version initiatorCtx)) + (MiniProtocolCb (\responderCtx -> aKeepAliveServer version responderCtx)) + ) + , peerSharingProtocol = + ( InitiatorAndResponderProtocol + (MiniProtocolCb (\initiatorCtx -> aPeerSharingClient version initiatorCtx)) + (MiniProtocolCb (\responderCtx -> aPeerSharingServer version responderCtx)) + ) + } + ) + version + versionData diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node.hs index
037425f737..cd56683d23 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node.hs @@ -19,10 +19,10 @@ -- | Run the whole Node -- -- Intended for qualified import. --- -module Ouroboros.Consensus.Node ( - run +module Ouroboros.Consensus.Node + ( run , runWith + -- * Standard arguments , StdRunNodeArgs (..) , stdBfcSaltIO @@ -34,8 +34,10 @@ module Ouroboros.Consensus.Node ( , stdVersionDataNTC , stdVersionDataNTN , stdWithCheckedDB + -- ** P2P Switch , NetworkP2PMode (..) + -- * Exposed by 'run' et al , ChainDB.RelativeMountPoint (..) , ChainDB.TraceEvent (..) @@ -55,112 +57,144 @@ module Ouroboros.Consensus.Node ( , Tracers' (..) , pattern DoDiskSnapshotChecksum , pattern NoDoDiskSnapshotChecksum + -- * Internal helpers , mkNodeKernelArgs , nodeKernelArgsEnforceInvariants , openChainDB ) where -import Cardano.Network.PeerSelection.Bootstrap - (UseBootstrapPeers (..)) -import Cardano.Network.Types (LedgerStateJudgement (..)) -import qualified Codec.CBOR.Decoding as CBOR -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (DeserialiseFailure) -import qualified Control.Concurrent.Class.MonadSTM.Strict as StrictSTM -import Control.DeepSeq (NFData) -import Control.Exception (IOException) -import Control.Monad (forM_, when) -import Control.Monad.Class.MonadTime.SI (MonadTime) -import Control.Monad.Class.MonadTimer.SI (MonadTimer) -import Control.ResourceRegistry -import Control.Tracer (Tracer, contramap, traceWith) -import Data.ByteString.Lazy (ByteString) -import Data.Functor.Contravariant (Predicate (..)) -import Data.Hashable (Hashable) -import Data.Kind (Type) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe, isNothing) -import Data.Time (NominalDiffTime) -import Data.Typeable (Typeable) -import Network.DNS.Resolver (Resolver) -import Network.Mux.Types
-import qualified Ouroboros.Cardano.Network.ArgumentsExtra as Cardano -import qualified Ouroboros.Cardano.Network.LedgerPeerConsensusInterface as Cardano -import qualified Ouroboros.Cardano.Network.PeerSelection.Governor.PeerSelectionState as Cardano -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime hiding (getSystemStart) -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.Ledger.Basics (ValuesMK) -import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState (..)) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck - (HistoricityCheck) -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck as HistoricityCheck -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck as InFutureCheck -import qualified Ouroboros.Consensus.Network.NodeToClient as NTC -import qualified Ouroboros.Consensus.Network.NodeToNode as NTN -import Ouroboros.Consensus.Node.DbLock -import Ouroboros.Consensus.Node.DbMarker -import Ouroboros.Consensus.Node.ErrorPolicy -import Ouroboros.Consensus.Node.ExitPolicy -import Ouroboros.Consensus.Node.Genesis (GenesisConfig (..), - GenesisNodeKernelArgs, mkGenesisNodeKernelArgs) -import Ouroboros.Consensus.Node.GSM (GsmNodeKernelArgs (..)) -import qualified Ouroboros.Consensus.Node.GSM as GSM -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Node.Recovery -import Ouroboros.Consensus.Node.RethrowPolicy -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Tracers -import Ouroboros.Consensus.NodeKernel -import Ouroboros.Consensus.Storage.ChainDB (ChainDB, ChainDbArgs, - TraceEvent) -import qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Args as ChainDB -import
Ouroboros.Consensus.Storage.LedgerDB.Args -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Consensus.Util.Time (secondsToNominalDiffTime) -import Ouroboros.Network.BlockFetch (BlockFetchConfiguration (..), - FetchMode) -import qualified Ouroboros.Network.Diffusion as Diffusion -import qualified Ouroboros.Network.Diffusion.Common as Diffusion -import qualified Ouroboros.Network.Diffusion.Configuration as Diffusion -import qualified Ouroboros.Network.Diffusion.NonP2P as NonP2P -import qualified Ouroboros.Network.Diffusion.P2P as Diffusion.P2P -import Ouroboros.Network.Magic -import Ouroboros.Network.NodeToClient (ConnectionId, LocalAddress, - LocalSocket, NodeToClientVersionData (..), combineVersions, - simpleSingletonVersions) -import Ouroboros.Network.NodeToNode (DiffusionMode (..), - ExceptionInHandler (..), MiniProtocolParameters, - NodeToNodeVersionData (..), RemoteAddress, Socket, - blockFetchPipeliningMax, defaultMiniProtocolParameters) -import Ouroboros.Network.PeerSelection.Governor.Types - (PeerSelectionState, PublicPeerSelectionState) -import Ouroboros.Network.PeerSelection.LedgerPeers - (LedgerPeersConsensusInterface (..), UseLedgerPeers (..)) -import Ouroboros.Network.PeerSelection.PeerMetric (PeerMetrics, - newPeerMetric, reportMetric) -import Ouroboros.Network.PeerSelection.PeerSharing (PeerSharing) -import Ouroboros.Network.PeerSelection.PeerSharing.Codec - (decodeRemoteAddress, encodeRemoteAddress) -import Ouroboros.Network.PeerSelection.RootPeersDNS.PublicRootPeers - (TracePublicRootPeers) -import Ouroboros.Network.RethrowPolicy -import qualified SafeWildCards -import System.Exit (ExitCode (..)) -import System.FilePath ((</>)) -import System.FS.API (SomeHasFS (..)) -import System.FS.API.Types (MountPoint (..)) -import System.FS.IO (ioHasFS) -import System.Random (StdGen, newStdGen, randomIO, split) +import
Cardano.Network.PeerSelection.Bootstrap + ( UseBootstrapPeers (..) + ) +import Cardano.Network.Types (LedgerStateJudgement (..)) +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (DeserialiseFailure) +import Control.Concurrent.Class.MonadSTM.Strict qualified as StrictSTM +import Control.DeepSeq (NFData) +import Control.Exception (IOException) +import Control.Monad (forM_, when) +import Control.Monad.Class.MonadTime.SI (MonadTime) +import Control.Monad.Class.MonadTimer.SI (MonadTimer) +import Control.ResourceRegistry +import Control.Tracer (Tracer, contramap, traceWith) +import Data.ByteString.Lazy (ByteString) +import Data.Functor.Contravariant (Predicate (..)) +import Data.Hashable (Hashable) +import Data.Kind (Type) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe, isNothing) +import Data.Time (NominalDiffTime) +import Data.Typeable (Typeable) +import Network.DNS.Resolver (Resolver) +import Network.Mux.Types +import Ouroboros.Cardano.Network.ArgumentsExtra qualified as Cardano +import Ouroboros.Cardano.Network.LedgerPeerConsensusInterface qualified as Cardano +import Ouroboros.Cardano.Network.PeerSelection.Governor.PeerSelectionState qualified as Cardano +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime hiding (getSystemStart) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.Ledger.Basics (ValuesMK) +import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState (..)) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck + ( HistoricityCheck + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck qualified as HistoricityCheck +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck qualified as InFutureCheck +import Ouroboros.Consensus.Network.NodeToClient qualified as NTC +import
Ouroboros.Consensus.Network.NodeToNode qualified as NTN +import Ouroboros.Consensus.Node.DbLock +import Ouroboros.Consensus.Node.DbMarker +import Ouroboros.Consensus.Node.ErrorPolicy +import Ouroboros.Consensus.Node.ExitPolicy +import Ouroboros.Consensus.Node.GSM (GsmNodeKernelArgs (..)) +import Ouroboros.Consensus.Node.GSM qualified as GSM +import Ouroboros.Consensus.Node.Genesis + ( GenesisConfig (..) + , GenesisNodeKernelArgs + , mkGenesisNodeKernelArgs + ) +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Node.Recovery +import Ouroboros.Consensus.Node.RethrowPolicy +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Tracers +import Ouroboros.Consensus.NodeKernel +import Ouroboros.Consensus.Storage.ChainDB + ( ChainDB + , ChainDbArgs + , TraceEvent + ) +import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args qualified as ChainDB +import Ouroboros.Consensus.Storage.LedgerDB.Args +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Consensus.Util.Time (secondsToNominalDiffTime) +import Ouroboros.Network.BlockFetch + ( BlockFetchConfiguration (..) + , FetchMode + ) +import Ouroboros.Network.Diffusion qualified as Diffusion +import Ouroboros.Network.Diffusion.Common qualified as Diffusion +import Ouroboros.Network.Diffusion.Configuration qualified as Diffusion +import Ouroboros.Network.Diffusion.NonP2P qualified as NonP2P +import Ouroboros.Network.Diffusion.P2P qualified as Diffusion.P2P +import Ouroboros.Network.Magic +import Ouroboros.Network.NodeToClient + ( ConnectionId + , LocalAddress + , LocalSocket + , NodeToClientVersionData (..)
+ , combineVersions + , simpleSingletonVersions + ) +import Ouroboros.Network.NodeToNode + ( DiffusionMode (..) + , ExceptionInHandler (..) + , MiniProtocolParameters + , NodeToNodeVersionData (..) + , RemoteAddress + , Socket + , blockFetchPipeliningMax + , defaultMiniProtocolParameters + ) +import Ouroboros.Network.PeerSelection.Governor.Types + ( PeerSelectionState + , PublicPeerSelectionState + ) +import Ouroboros.Network.PeerSelection.LedgerPeers + ( LedgerPeersConsensusInterface (..) + , UseLedgerPeers (..) + ) +import Ouroboros.Network.PeerSelection.PeerMetric + ( PeerMetrics + , newPeerMetric + , reportMetric + ) +import Ouroboros.Network.PeerSelection.PeerSharing (PeerSharing) +import Ouroboros.Network.PeerSelection.PeerSharing.Codec + ( decodeRemoteAddress + , encodeRemoteAddress + ) +import Ouroboros.Network.PeerSelection.RootPeersDNS.PublicRootPeers + ( TracePublicRootPeers + ) +import Ouroboros.Network.RethrowPolicy +import SafeWildCards qualified +import System.Exit (ExitCode (..)) +import System.FS.API (SomeHasFS (..)) +import System.FS.API.Types (MountPoint (..)) +import System.FS.IO (ioHasFS) +import System.FilePath ((</>)) +import System.Random (StdGen, newStdGen, randomIO, split) {------------------------------------------------------------------------------- The arguments to the Consensus Layer node functionality @@ -189,44 +223,36 @@ import System.Random (StdGen, newStdGen, randomIO, split) -- | Arguments expected from any invocation of 'runWith', whether by deployed -- code, tests, etc.
type RunNodeArgs :: - (Type -> Type) - -> Type - -> Type - -> Type - -> Diffusion.P2P - -> Type -data RunNodeArgs m addrNTN addrNTC blk p2p = RunNodeArgs { - -- | Consensus tracers - rnTraceConsensus :: Tracers m (ConnectionId addrNTN) (ConnectionId addrNTC) blk - - -- | Protocol tracers for node-to-node communication - , rnTraceNTN :: NTN.Tracers m addrNTN blk DeserialiseFailure - - -- | Protocol tracers for node-to-client communication - , rnTraceNTC :: NTC.Tracers m (ConnectionId addrNTC) blk DeserialiseFailure - - -- | Protocol info - , rnProtocolInfo :: ProtocolInfo blk - - -- | Hook called after the initialisation of the 'NodeKernel' - -- - -- Called on the 'NodeKernel' after creating it, but before the network - -- layer is initialised. - , rnNodeKernelHook :: ResourceRegistry m - -> NodeKernel m addrNTN (ConnectionId addrNTC) blk - -> m () - - -- | Network P2P Mode switch - , rnEnableP2P :: NetworkP2PMode p2p - - -- | Network PeerSharing miniprotocol willingness flag - , rnPeerSharing :: PeerSharing - - , rnGetUseBootstrapPeers :: STM m UseBootstrapPeers - - , rnGenesisConfig :: GenesisConfig - } - + (Type -> Type) -> + Type -> + Type -> + Type -> + Diffusion.P2P -> + Type +data RunNodeArgs m addrNTN addrNTC blk p2p = RunNodeArgs + { rnTraceConsensus :: Tracers m (ConnectionId addrNTN) (ConnectionId addrNTC) blk + -- ^ Consensus tracers + , rnTraceNTN :: NTN.Tracers m addrNTN blk DeserialiseFailure + -- ^ Protocol tracers for node-to-node communication + , rnTraceNTC :: NTC.Tracers m (ConnectionId addrNTC) blk DeserialiseFailure + -- ^ Protocol tracers for node-to-client communication + , rnProtocolInfo :: ProtocolInfo blk + -- ^ Protocol info + , rnNodeKernelHook :: + ResourceRegistry m -> + NodeKernel m addrNTN (ConnectionId addrNTC) blk -> + m () + -- ^ Hook called after the initialisation of the 'NodeKernel' + -- + -- Called on the 'NodeKernel' after creating it, but before the network + -- layer is initialised.
+ , rnEnableP2P :: NetworkP2PMode p2p + -- ^ Network P2P Mode switch + , rnPeerSharing :: PeerSharing + -- ^ Network PeerSharing miniprotocol willingness flag + , rnGetUseBootstrapPeers :: STM m UseBootstrapPeers + , rnGenesisConfig :: GenesisConfig + } -- | Arguments that usually only tests /directly/ specify. -- @@ -235,188 +261,232 @@ data RunNodeArgs m addrNTN addrNTC blk p2p = RunNodeArgs { -- abbreviation, which uses 'stdLowLevelRunNodeArgsIO' to indirectly specify -- these low-level values from the higher-level 'StdRunNodeArgs'. type LowLevelRunNodeArgs :: - (Type -> Type) - -> Type - -> Type - -> Type - -> Diffusion.P2P - -> Type - -> Type -data LowLevelRunNodeArgs m addrNTN addrNTC blk p2p extraAPI = - LowLevelRunNodeArgs { - - -- | An action that will receive a marker indicating whether the previous - -- shutdown was considered clean and a wrapper for installing a handler to - -- create a clean file on exit if needed. See - -- 'Ouroboros.Consensus.Node.Recovery.runWithCheckedDB'. - llrnWithCheckedDB :: forall a. ( LastShutDownWasClean - -> (ChainDB m blk -> m a -> m a) - -> m a) - -> m a - - -- | The " static " ChainDB arguments - , llrnChainDbArgsDefaults :: Incomplete ChainDbArgs m blk - - -- | File-system on which the directory for the ImmutableDB will - -- be created. - , llrnMkImmutableHasFS :: ChainDB.RelativeMountPoint -> SomeHasFS m - - -- | File-system on which the directories for databases other than the ImmutableDB will - -- be created. - , llrnMkVolatileHasFS :: ChainDB.RelativeMountPoint -> SomeHasFS m - - -- | Customise the 'ChainDbArgs'. 'StdRunNodeArgs' will use this field to - -- set various options that are exposed in @cardano-node@ configuration - -- files. 
- , llrnCustomiseChainDbArgs :: - Complete ChainDbArgs m blk - -> Complete ChainDbArgs m blk - - -- | Customise the 'NodeArgs' - , llrnCustomiseNodeKernelArgs :: - NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk - -> NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk - - -- | Ie 'bfcSalt' - , llrnBfcSalt :: Int - - -- | Ie 'gsmAntiThunderingHerd' - , llrnGsmAntiThunderingHerd :: StdGen - - -- | Ie 'keepAliveRng' - , llrnKeepAliveRng :: StdGen - - -- | Customise the 'HardForkBlockchainTimeArgs' - , llrnCustomiseHardForkBlockchainTimeArgs :: - HardForkBlockchainTimeArgs m blk - -> HardForkBlockchainTimeArgs m blk - - -- | See 'NTN.ChainSyncTimeout' - , llrnChainSyncTimeout :: m NTN.ChainSyncTimeout - - , llrnGenesisConfig :: GenesisConfig - - -- | How to run the data diffusion applications - -- - -- 'run' will not return before this does. - , llrnRunDataDiffusion :: - NodeKernel m addrNTN (ConnectionId addrNTC) blk - -> Diffusion.Applications - addrNTN NodeToNodeVersion NodeToNodeVersionData - addrNTC NodeToClientVersion NodeToClientVersionData - extraAPI m NodeToNodeInitiatorResult - -> Diffusion.ApplicationsExtra p2p addrNTN m NodeToNodeInitiatorResult - -> m () - - , llrnVersionDataNTC :: NodeToClientVersionData - - , llrnVersionDataNTN :: NodeToNodeVersionData - - -- | node-to-node protocol versions to run. - , llrnNodeToNodeVersions :: Map NodeToNodeVersion (BlockNodeToNodeVersion blk) - - -- | node-to-client protocol versions to run. - , llrnNodeToClientVersions :: Map NodeToClientVersion (BlockNodeToClientVersion blk) - - -- | If the volatile tip is older than this, then the node will exit the - -- @CaughtUp@ state. 
- , llrnMaxCaughtUpAge :: NominalDiffTime - - -- | Maximum clock skew - , llrnMaxClockSkew :: InFutureCheck.ClockSkew - - , llrnPublicPeerSelectionStateVar :: StrictSTM.StrictTVar m (PublicPeerSelectionState addrNTN) - - -- | The flavor arguments - , llrnLdbFlavorArgs :: Complete LedgerDbFlavorArgs m - } + (Type -> Type) -> + Type -> + Type -> + Type -> + Diffusion.P2P -> + Type -> + Type +data LowLevelRunNodeArgs m addrNTN addrNTC blk p2p extraAPI + = LowLevelRunNodeArgs + { llrnWithCheckedDB :: + forall a. + ( LastShutDownWasClean -> + (ChainDB m blk -> m a -> m a) -> + m a + ) -> + m a + -- ^ An action that will receive a marker indicating whether the previous + -- shutdown was considered clean and a wrapper for installing a handler to + -- create a clean file on exit if needed. See + -- 'Ouroboros.Consensus.Node.Recovery.runWithCheckedDB'. + , llrnChainDbArgsDefaults :: Incomplete ChainDbArgs m blk + -- ^ The " static " ChainDB arguments + , llrnMkImmutableHasFS :: ChainDB.RelativeMountPoint -> SomeHasFS m + -- ^ File-system on which the directory for the ImmutableDB will + -- be created. + , llrnMkVolatileHasFS :: ChainDB.RelativeMountPoint -> SomeHasFS m + -- ^ File-system on which the directories for databases other than the ImmutableDB will + -- be created. + , llrnCustomiseChainDbArgs :: + Complete ChainDbArgs m blk -> + Complete ChainDbArgs m blk + -- ^ Customise the 'ChainDbArgs'. 'StdRunNodeArgs' will use this field to + -- set various options that are exposed in @cardano-node@ configuration + -- files. 
+ , llrnCustomiseNodeKernelArgs :: + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk -> + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk + -- ^ Customise the 'NodeArgs' + , llrnBfcSalt :: Int + -- ^ Ie 'bfcSalt' + , llrnGsmAntiThunderingHerd :: StdGen + -- ^ Ie 'gsmAntiThunderingHerd' + , llrnKeepAliveRng :: StdGen + -- ^ Ie 'keepAliveRng' + , llrnCustomiseHardForkBlockchainTimeArgs :: + HardForkBlockchainTimeArgs m blk -> + HardForkBlockchainTimeArgs m blk + -- ^ Customise the 'HardForkBlockchainTimeArgs' + , llrnChainSyncTimeout :: m NTN.ChainSyncTimeout + -- ^ See 'NTN.ChainSyncTimeout' + , llrnGenesisConfig :: GenesisConfig + , llrnRunDataDiffusion :: + NodeKernel m addrNTN (ConnectionId addrNTC) blk -> + Diffusion.Applications + addrNTN + NodeToNodeVersion + NodeToNodeVersionData + addrNTC + NodeToClientVersion + NodeToClientVersionData + extraAPI + m + NodeToNodeInitiatorResult -> + Diffusion.ApplicationsExtra p2p addrNTN m NodeToNodeInitiatorResult -> + m () + -- ^ How to run the data diffusion applications + -- + -- 'run' will not return before this does. + , llrnVersionDataNTC :: NodeToClientVersionData + , llrnVersionDataNTN :: NodeToNodeVersionData + , llrnNodeToNodeVersions :: Map NodeToNodeVersion (BlockNodeToNodeVersion blk) + -- ^ node-to-node protocol versions to run. + , llrnNodeToClientVersions :: Map NodeToClientVersion (BlockNodeToClientVersion blk) + -- ^ node-to-client protocol versions to run. + , llrnMaxCaughtUpAge :: NominalDiffTime + -- ^ If the volatile tip is older than this, then the node will exit the + -- @CaughtUp@ state. 
+ , llrnMaxClockSkew :: InFutureCheck.ClockSkew + -- ^ Maximum clock skew + , llrnPublicPeerSelectionStateVar :: StrictSTM.StrictTVar m (PublicPeerSelectionState addrNTN) + , llrnLdbFlavorArgs :: Complete LedgerDbFlavorArgs m + -- ^ The flavor arguments + } -data NodeDatabasePaths = +data NodeDatabasePaths + = -- | Databases will be stored under this path, such that given a + -- path @/foo@, databases will be in @/foo/{immutable,volatile,...}@. OnePathForAllDbs - FilePath -- ^ Databases will be stored under this path, such that given a - -- path @/foo@, databases will be in @/foo/{immutable,volatile,...}@. + FilePath | MultipleDbPaths - FilePath -- ^ Immutable path, usually pointing to a non-necessarily - -- performant volume. ImmutableDB will be stored under this path, - -- so given @/foo@, the ImmutableDB will be in @/foo/immutable@. - FilePath -- ^ Non-immutable (volatile data) path, usually pointing to a - -- performant volume. Databases other than the ImmutableDB will - -- be stored under this path, so given @/bar@, it will contain - -- @/bar/{volatile,ledger,...}@. + -- | Immutable path, usually pointing to a non-necessarily + -- performant volume. ImmutableDB will be stored under this path, + -- so given @/foo@, the ImmutableDB will be in @/foo/immutable@. + FilePath + -- | Non-immutable (volatile data) path, usually pointing to a + -- performant volume. Databases other than the ImmutableDB will + -- be stored under this path, so given @/bar@, it will contain + -- @/bar/{volatile,ledger,...}@. 
+ FilePath immutableDbPath :: NodeDatabasePaths -> FilePath -immutableDbPath (OnePathForAllDbs f) = f +immutableDbPath (OnePathForAllDbs f) = f immutableDbPath (MultipleDbPaths imm _) = imm nonImmutableDbPath :: NodeDatabasePaths -> FilePath -nonImmutableDbPath (OnePathForAllDbs f) = f +nonImmutableDbPath (OnePathForAllDbs f) = f nonImmutableDbPath (MultipleDbPaths _ vol) = vol -- | Higher-level arguments that can determine the 'LowLevelRunNodeArgs' under -- some usual assumptions for realistic use cases such as in @cardano-node@. -- -- See 'stdLowLevelRunNodeArgsIO'. -data StdRunNodeArgs m blk (p2p :: Diffusion.P2P) extraArgs extraState extraDebugState extraActions extraAPI extraPeers extraFlags extraChurnArgs extraCounters exception = StdRunNodeArgs - { srnBfcMaxConcurrencyBulkSync :: Maybe Word - , srnBfcMaxConcurrencyDeadline :: Maybe Word - , srnChainDbValidateOverride :: Bool - -- ^ If @True@, validate the ChainDB on init no matter what - , srnDatabasePath :: NodeDatabasePaths - -- ^ Location of the DBs - , srnDiffusionArguments :: Diffusion.Arguments - IO - Socket RemoteAddress - LocalSocket LocalAddress - , srnDiffusionArgumentsExtra :: Diffusion.P2PDecision p2p (Tracer IO TracePublicRootPeers) () - -> Diffusion.P2PDecision p2p (STM IO FetchMode) () - -> Diffusion.P2PDecision p2p extraAPI () - -> Diffusion.ArgumentsExtra p2p - extraArgs extraState extraDebugState - extraFlags extraPeers extraAPI - extraChurnArgs extraCounters - exception RemoteAddress LocalAddress - Resolver IOException IO - , srnDiffusionTracers :: Diffusion.Tracers - RemoteAddress NodeToNodeVersion - LocalAddress NodeToClientVersion - IO - , srnDiffusionTracersExtra :: Diffusion.ExtraTracers p2p extraState extraDebugState extraFlags extraPeers extraCounters IO - , srnSigUSR1SignalHandler :: ( forall (mode :: Mode) x y. 
- Diffusion.ExtraTracers p2p - extraState - Cardano.DebugPeerSelectionState - extraFlags extraPeers extraCounters - IO - -> STM IO UseLedgerPeers - -> PeerSharing - -> STM IO UseBootstrapPeers - -> STM IO LedgerStateJudgement - -> Diffusion.P2P.NodeToNodeConnectionManager mode Socket - RemoteAddress NodeToNodeVersionData - NodeToNodeVersion IO x y - -> StrictSTM.StrictTVar IO - (PeerSelectionState extraState extraFlags extraPeers - RemoteAddress - (Diffusion.P2P.NodeToNodePeerConnectionHandle - mode RemoteAddress - NodeToNodeVersionData IO x y)) - -> PeerMetrics IO RemoteAddress - -> IO ()) - , srnEnableInDevelopmentVersions :: Bool - -- ^ If @False@, then the node will limit the negotiated NTN and NTC - -- versions to the latest " official " release (as chosen by Network and - -- Consensus Team, with input from Node Team) - , srnTraceChainDB :: Tracer m (ChainDB.TraceEvent blk) +data + StdRunNodeArgs + m + blk + (p2p :: Diffusion.P2P) + extraArgs + extraState + extraDebugState + extraActions + extraAPI + extraPeers + extraFlags + extraChurnArgs + extraCounters + exception + = StdRunNodeArgs + { srnBfcMaxConcurrencyBulkSync :: Maybe Word + , srnBfcMaxConcurrencyDeadline :: Maybe Word + , srnChainDbValidateOverride :: Bool + -- ^ If @True@, validate the ChainDB on init no matter what + , srnDatabasePath :: NodeDatabasePaths + -- ^ Location of the DBs + , srnDiffusionArguments :: + Diffusion.Arguments + IO + Socket + RemoteAddress + LocalSocket + LocalAddress + , srnDiffusionArgumentsExtra :: + Diffusion.P2PDecision p2p (Tracer IO TracePublicRootPeers) () -> + Diffusion.P2PDecision p2p (STM IO FetchMode) () -> + Diffusion.P2PDecision p2p extraAPI () -> + Diffusion.ArgumentsExtra + p2p + extraArgs + extraState + extraDebugState + extraFlags + extraPeers + extraAPI + extraChurnArgs + extraCounters + exception + RemoteAddress + LocalAddress + Resolver + IOException + IO + , srnDiffusionTracers :: + Diffusion.Tracers + RemoteAddress + NodeToNodeVersion + LocalAddress + 
NodeToClientVersion + IO + , srnDiffusionTracersExtra :: + Diffusion.ExtraTracers p2p extraState extraDebugState extraFlags extraPeers extraCounters IO + , srnSigUSR1SignalHandler :: + ( forall (mode :: Mode) x y. + Diffusion.ExtraTracers + p2p + extraState + Cardano.DebugPeerSelectionState + extraFlags + extraPeers + extraCounters + IO -> + STM IO UseLedgerPeers -> + PeerSharing -> + STM IO UseBootstrapPeers -> + STM IO LedgerStateJudgement -> + Diffusion.P2P.NodeToNodeConnectionManager + mode + Socket + RemoteAddress + NodeToNodeVersionData + NodeToNodeVersion + IO + x + y -> + StrictSTM.StrictTVar + IO + ( PeerSelectionState + extraState + extraFlags + extraPeers + RemoteAddress + ( Diffusion.P2P.NodeToNodePeerConnectionHandle + mode + RemoteAddress + NodeToNodeVersionData + IO + x + y + ) + ) -> + PeerMetrics IO RemoteAddress -> + IO () + ) + , srnEnableInDevelopmentVersions :: Bool + -- ^ If @False@, then the node will limit the negotiated NTN and NTC + -- versions to the latest " official " release (as chosen by Network and + -- Consensus Team, with input from Node Team) + , srnTraceChainDB :: Tracer m (ChainDB.TraceEvent blk) , srnMaybeMempoolCapacityOverride :: Maybe MempoolCapacityBytesOverride - -- ^ Determine whether to use the system default mempool capacity or explicitly set - -- capacity of the mempool. - , srnChainSyncTimeout :: Maybe (m NTN.ChainSyncTimeout) - -- ^ A custom timeout for ChainSync. - - -- Ad hoc values to replace default ChainDB configurations - , srnSnapshotPolicyArgs :: SnapshotPolicyArgs - , srnQueryBatchSize :: QueryBatchSize - , srnLdbFlavorArgs :: Complete LedgerDbFlavorArgs m + -- ^ Determine whether to use the system default mempool capacity or explicitly set + -- capacity of the mempool. + , srnChainSyncTimeout :: Maybe (m NTN.ChainSyncTimeout) + -- ^ A custom timeout for ChainSync. 
+ , -- Ad hoc values to replace default ChainDB configurations + srnSnapshotPolicyArgs :: SnapshotPolicyArgs + , srnQueryBatchSize :: QueryBatchSize + , srnLdbFlavorArgs :: Complete LedgerDbFlavorArgs m } {------------------------------------------------------------------------------- @@ -424,46 +494,58 @@ data StdRunNodeArgs m blk (p2p :: Diffusion.P2P) extraArgs extraState extraDebug -------------------------------------------------------------------------------} -- | P2P Switch --- data NetworkP2PMode (p2p :: Diffusion.P2P) where - EnabledP2PMode :: NetworkP2PMode 'Diffusion.P2P - DisabledP2PMode :: NetworkP2PMode 'Diffusion.NonP2P + EnabledP2PMode :: NetworkP2PMode 'Diffusion.P2P + DisabledP2PMode :: NetworkP2PMode 'Diffusion.NonP2P -deriving instance Eq (NetworkP2PMode p2p) +deriving instance Eq (NetworkP2PMode p2p) deriving instance Show (NetworkP2PMode p2p) pure [] -- | Combination of 'runWith' and 'stdLowLevelRunArgsIO' -run :: forall blk p2p extraState extraActions extraPeers extraFlags extraChurnArgs extraCounters exception. - ( RunNode blk - , Monoid extraPeers - , Eq extraCounters - , Eq extraFlags - , Exception exception - ) - => RunNodeArgs IO RemoteAddress LocalAddress blk p2p - -> StdRunNodeArgs IO blk p2p (Cardano.ExtraArguments IO) extraState Cardano.DebugPeerSelectionState extraActions (Cardano.LedgerPeersConsensusInterface IO) extraPeers extraFlags extraChurnArgs extraCounters exception - -> IO () +run :: + forall blk p2p extraState extraActions extraPeers extraFlags extraChurnArgs extraCounters exception. 
+ ( RunNode blk + , Monoid extraPeers + , Eq extraCounters + , Eq extraFlags + , Exception exception + ) => + RunNodeArgs IO RemoteAddress LocalAddress blk p2p -> + StdRunNodeArgs + IO + blk + p2p + (Cardano.ExtraArguments IO) + extraState + Cardano.DebugPeerSelectionState + extraActions + (Cardano.LedgerPeersConsensusInterface IO) + extraPeers + extraFlags + extraChurnArgs + extraCounters + exception -> + IO () run args stdArgs = - stdLowLevelRunNodeArgsIO args stdArgs - >>= runWith args encodeRemoteAddress decodeRemoteAddress + stdLowLevelRunNodeArgsIO args stdArgs + >>= runWith args encodeRemoteAddress decodeRemoteAddress -- | Extra constraints used by `ouroboros-network`. --- -type NetworkIO m = ( - MonadTime m, - MonadTimer m, - MonadLabelledSTM m - ) +type NetworkIO m = + ( MonadTime m + , MonadTimer m + , MonadLabelledSTM m + ) -- | Extra constraints used by `ouroboros-network`. -type NetworkAddr addr = ( - Ord addr, - Typeable addr, - NoThunks addr, - NFData addr - ) +type NetworkAddr addr = + ( Ord addr + , Typeable addr + , NoThunks addr + , NFData addr + ) -- | Start a node. -- @@ -471,373 +553,403 @@ type NetworkAddr addr = ( -- network layer. -- -- This function runs forever unless an exception is thrown. -runWith :: forall m addrNTN addrNTC blk p2p. - ( RunNode blk - , IOLike m - , Hashable addrNTN -- the constraint comes from `initNodeKernel` - , NetworkIO m - , NetworkAddr addrNTN - ) - => RunNodeArgs m addrNTN addrNTC blk p2p - -> (NodeToNodeVersion -> addrNTN -> CBOR.Encoding) - -> (NodeToNodeVersion -> forall s . CBOR.Decoder s addrNTN) - -> LowLevelRunNodeArgs m addrNTN addrNTC blk p2p (Cardano.LedgerPeersConsensusInterface m) - -> m () +runWith :: + forall m addrNTN addrNTC blk p2p. 
+ ( RunNode blk + , IOLike m + , Hashable addrNTN -- the constraint comes from `initNodeKernel` + , NetworkIO m + , NetworkAddr addrNTN + ) => + RunNodeArgs m addrNTN addrNTC blk p2p -> + (NodeToNodeVersion -> addrNTN -> CBOR.Encoding) -> + (NodeToNodeVersion -> forall s. CBOR.Decoder s addrNTN) -> + LowLevelRunNodeArgs m addrNTN addrNTC blk p2p (Cardano.LedgerPeersConsensusInterface m) -> + m () runWith RunNodeArgs{..} encAddrNtN decAddrNtN LowLevelRunNodeArgs{..} = - - llrnWithCheckedDB $ \(LastShutDownWasClean lastShutDownWasClean) continueWithCleanChainDB -> + llrnWithCheckedDB $ \(LastShutDownWasClean lastShutDownWasClean) continueWithCleanChainDB -> withRegistry $ \registry -> handleJust - -- Ignore exception thrown in connection handlers and diffusion. - -- Also ignore 'ExitSuccess'. - (runPredicate $ - Predicate ( \err -> - case fromException @ExceptionInLinkedThread err of - Just (ExceptionInLinkedThread _ err') - -> (/= Just ExitSuccess) $ fromException err' - Nothing -> False) - <> Predicate (isNothing . fromException @ExceptionInHandler) - <> Predicate (isNothing . fromException @Diffusion.Failure) + -- Ignore exception thrown in connection handlers and diffusion. + -- Also ignore 'ExitSuccess'. + ( runPredicate $ + Predicate + ( \err -> + case fromException @ExceptionInLinkedThread err of + Just (ExceptionInLinkedThread _ err') -> + (/= Just ExitSuccess) $ fromException err' + Nothing -> False + ) + <> Predicate (isNothing . fromException @ExceptionInHandler) + <> Predicate (isNothing . 
fromException @Diffusion.Failure) + ) + ( \err -> + traceWith (consensusErrorTracer rnTraceConsensus) err + >> throwIO err + ) + $ do + let systemStart :: SystemStart + systemStart = getSystemStart (configBlock cfg) + + systemTime :: SystemTime m + systemTime = + defaultSystemTime + systemStart + (blockchainTimeTracer rnTraceConsensus) + + (genesisArgs, setLoEinChainDbArgs) <- + mkGenesisNodeKernelArgs llrnGenesisConfig + + let maybeValidateAll + | lastShutDownWasClean = + id + | otherwise = + -- When the last shutdown was not clean, validate the complete + -- ChainDB to detect and recover from any disk corruption. + ChainDB.ensureValidateAll + + forM_ (sanityCheckConfig cfg) $ \issue -> + traceWith (consensusSanityCheckTracer rnTraceConsensus) issue + + (chainDB, finalArgs) <- + openChainDB + registry + cfg + initLedger + llrnMkImmutableHasFS + llrnMkVolatileHasFS + llrnLdbFlavorArgs + llrnChainDbArgsDefaults + ( setLoEinChainDbArgs + . maybeValidateAll + . llrnCustomiseChainDbArgs ) - (\err -> traceWith (consensusErrorTracer rnTraceConsensus) err - >> throwIO err - ) $ do - let systemStart :: SystemStart - systemStart = getSystemStart (configBlock cfg) - - systemTime :: SystemTime m - systemTime = defaultSystemTime - systemStart - (blockchainTimeTracer rnTraceConsensus) - - (genesisArgs, setLoEinChainDbArgs) <- - mkGenesisNodeKernelArgs llrnGenesisConfig - - let maybeValidateAll - | lastShutDownWasClean - = id - | otherwise - -- When the last shutdown was not clean, validate the complete - -- ChainDB to detect and recover from any disk corruption. - = ChainDB.ensureValidateAll - - forM_ (sanityCheckConfig cfg) $ \issue -> - traceWith (consensusSanityCheckTracer rnTraceConsensus) issue - - (chainDB, finalArgs) <- openChainDB - registry - cfg - initLedger - llrnMkImmutableHasFS - llrnMkVolatileHasFS - llrnLdbFlavorArgs - llrnChainDbArgsDefaults - ( setLoEinChainDbArgs - . maybeValidateAll - . 
llrnCustomiseChainDbArgs - ) - continueWithCleanChainDB chainDB $ do - btime <- - hardForkBlockchainTime $ - llrnCustomiseHardForkBlockchainTimeArgs $ - HardForkBlockchainTimeArgs - { hfbtBackoffDelay = pure $ BackoffDelay 60 - , hfbtGetLedgerState = - ledgerState <$> ChainDB.getCurrentLedger chainDB - , hfbtLedgerConfig = configLedger cfg - , hfbtRegistry = registry - , hfbtSystemTime = systemTime - , hfbtTracer = - contramap (fmap (fromRelativeTime systemStart)) - (blockchainTimeTracer rnTraceConsensus) - , hfbtMaxClockRewind = secondsToNominalDiffTime 20 - } + continueWithCleanChainDB chainDB $ do + btime <- + hardForkBlockchainTime $ + llrnCustomiseHardForkBlockchainTimeArgs $ + HardForkBlockchainTimeArgs + { hfbtBackoffDelay = pure $ BackoffDelay 60 + , hfbtGetLedgerState = + ledgerState <$> ChainDB.getCurrentLedger chainDB + , hfbtLedgerConfig = configLedger cfg + , hfbtRegistry = registry + , hfbtSystemTime = systemTime + , hfbtTracer = + contramap + (fmap (fromRelativeTime systemStart)) + (blockchainTimeTracer rnTraceConsensus) + , hfbtMaxClockRewind = secondsToNominalDiffTime 20 + } - nodeKernelArgs <- do - durationUntilTooOld <- GSM.realDurationUntilTooOld - (configLedger cfg) - (ledgerState <$> ChainDB.getCurrentLedger chainDB) - llrnMaxCaughtUpAge - systemTime + nodeKernelArgs <- do + durationUntilTooOld <- + GSM.realDurationUntilTooOld + (configLedger cfg) + (ledgerState <$> ChainDB.getCurrentLedger chainDB) + llrnMaxCaughtUpAge + systemTime let gsmMarkerFileView = case ChainDB.cdbsHasFSGsmDB $ ChainDB.cdbsArgs finalArgs of - SomeHasFS x -> GSM.realMarkerFileView chainDB x + SomeHasFS x -> GSM.realMarkerFileView chainDB x historicityCheck getGsmState = case gcHistoricityCutoff llrnGenesisConfig of - Nothing -> HistoricityCheck.noCheck + Nothing -> HistoricityCheck.noCheck Just historicityCutoff -> HistoricityCheck.mkCheck systemTime getGsmState historicityCutoff - fmap (nodeKernelArgsEnforceInvariants . 
llrnCustomiseNodeKernelArgs) - $ mkNodeKernelArgs - registry - llrnBfcSalt - llrnGsmAntiThunderingHerd - llrnKeepAliveRng - cfg - rnTraceConsensus - btime - (InFutureCheck.realHeaderInFutureCheck llrnMaxClockSkew systemTime) - historicityCheck - chainDB - llrnMaxCaughtUpAge - (Just durationUntilTooOld) - gsmMarkerFileView - rnGetUseBootstrapPeers - llrnPublicPeerSelectionStateVar - genesisArgs - DiffusionPipeliningOn - nodeKernel <- initNodeKernel nodeKernelArgs - rnNodeKernelHook registry nodeKernel - - peerMetrics <- newPeerMetric Diffusion.peerMetricsConfiguration - let ntnApps = mkNodeToNodeApps nodeKernelArgs nodeKernel peerMetrics encAddrNtN decAddrNtN - ntcApps = mkNodeToClientApps nodeKernelArgs nodeKernel - (apps, appsExtra) = - mkDiffusionApplications rnEnableP2P - (miniProtocolParameters nodeKernelArgs) - ntnApps - ntcApps - nodeKernel - peerMetrics - - llrnRunDataDiffusion nodeKernel apps appsExtra - where - ProtocolInfo - { pInfoConfig = cfg - , pInfoInitLedger = initLedger - } = rnProtocolInfo - - codecConfig :: CodecConfig blk - codecConfig = configCodec cfg - - mkNodeToNodeApps - :: NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk - -> NodeKernel m addrNTN (ConnectionId addrNTC) blk - -> PeerMetrics m addrNTN - -> (NodeToNodeVersion -> addrNTN -> CBOR.Encoding) - -> (NodeToNodeVersion -> forall s . 
CBOR.Decoder s addrNTN) - -> BlockNodeToNodeVersion blk - -> NTN.Apps m - addrNTN - ByteString - ByteString - ByteString - ByteString - ByteString - NodeToNodeInitiatorResult - () - mkNodeToNodeApps nodeKernelArgs nodeKernel peerMetrics encAddrNTN decAddrNTN version = - NTN.mkApps - nodeKernel - rnTraceNTN - (NTN.defaultCodecs codecConfig version encAddrNTN decAddrNTN) - NTN.byteLimits - llrnChainSyncTimeout - (gcChainSyncLoPBucketConfig llrnGenesisConfig) - (gcCSJConfig llrnGenesisConfig) - (reportMetric Diffusion.peerMetricsConfiguration peerMetrics) - (NTN.mkHandlers nodeKernelArgs nodeKernel) - - mkNodeToClientApps - :: NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk - -> NodeKernel m addrNTN (ConnectionId addrNTC) blk - -> BlockNodeToClientVersion blk - -> NodeToClientVersion - -> NTC.Apps m (ConnectionId addrNTC) ByteString ByteString ByteString ByteString () - mkNodeToClientApps nodeKernelArgs nodeKernel blockVersion networkVersion = - NTC.mkApps - nodeKernel - rnTraceNTC - (NTC.defaultCodecs codecConfig blockVersion networkVersion) - (NTC.mkHandlers nodeKernelArgs nodeKernel) - - mkDiffusionApplications - :: NetworkP2PMode p2p - -> MiniProtocolParameters - -> ( BlockNodeToNodeVersion blk - -> NTN.Apps - m - addrNTN - ByteString - ByteString - ByteString - ByteString - ByteString - NodeToNodeInitiatorResult - () - ) - -> ( BlockNodeToClientVersion blk - -> NodeToClientVersion - -> NTC.Apps - m (ConnectionId addrNTC) ByteString ByteString ByteString ByteString () - ) - -> NodeKernel m addrNTN (ConnectionId addrNTC) blk - -> PeerMetrics m addrNTN - -> ( Diffusion.Applications - addrNTN NodeToNodeVersion NodeToNodeVersionData - addrNTC NodeToClientVersion NodeToClientVersionData - (Cardano.LedgerPeersConsensusInterface m) - m NodeToNodeInitiatorResult - , Diffusion.ApplicationsExtra p2p addrNTN m NodeToNodeInitiatorResult - ) - mkDiffusionApplications - enP2P - miniProtocolParams - ntnApps - ntcApps - kernel - peerMetrics = + fmap 
(nodeKernelArgsEnforceInvariants . llrnCustomiseNodeKernelArgs) $ + mkNodeKernelArgs + registry + llrnBfcSalt + llrnGsmAntiThunderingHerd + llrnKeepAliveRng + cfg + rnTraceConsensus + btime + (InFutureCheck.realHeaderInFutureCheck llrnMaxClockSkew systemTime) + historicityCheck + chainDB + llrnMaxCaughtUpAge + (Just durationUntilTooOld) + gsmMarkerFileView + rnGetUseBootstrapPeers + llrnPublicPeerSelectionStateVar + genesisArgs + DiffusionPipeliningOn + nodeKernel <- initNodeKernel nodeKernelArgs + rnNodeKernelHook registry nodeKernel + + peerMetrics <- newPeerMetric Diffusion.peerMetricsConfiguration + let ntnApps = mkNodeToNodeApps nodeKernelArgs nodeKernel peerMetrics encAddrNtN decAddrNtN + ntcApps = mkNodeToClientApps nodeKernelArgs nodeKernel + (apps, appsExtra) = + mkDiffusionApplications + rnEnableP2P + (miniProtocolParameters nodeKernelArgs) + ntnApps + ntcApps + nodeKernel + peerMetrics + + llrnRunDataDiffusion nodeKernel apps appsExtra + where + ProtocolInfo + { pInfoConfig = cfg + , pInfoInitLedger = initLedger + } = rnProtocolInfo + + codecConfig :: CodecConfig blk + codecConfig = configCodec cfg + + mkNodeToNodeApps :: + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk -> + NodeKernel m addrNTN (ConnectionId addrNTC) blk -> + PeerMetrics m addrNTN -> + (NodeToNodeVersion -> addrNTN -> CBOR.Encoding) -> + (NodeToNodeVersion -> forall s. 
CBOR.Decoder s addrNTN) -> + BlockNodeToNodeVersion blk -> + NTN.Apps + m + addrNTN + ByteString + ByteString + ByteString + ByteString + ByteString + NodeToNodeInitiatorResult + () + mkNodeToNodeApps nodeKernelArgs nodeKernel peerMetrics encAddrNTN decAddrNTN version = + NTN.mkApps + nodeKernel + rnTraceNTN + (NTN.defaultCodecs codecConfig version encAddrNTN decAddrNTN) + NTN.byteLimits + llrnChainSyncTimeout + (gcChainSyncLoPBucketConfig llrnGenesisConfig) + (gcCSJConfig llrnGenesisConfig) + (reportMetric Diffusion.peerMetricsConfiguration peerMetrics) + (NTN.mkHandlers nodeKernelArgs nodeKernel) + + mkNodeToClientApps :: + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk -> + NodeKernel m addrNTN (ConnectionId addrNTC) blk -> + BlockNodeToClientVersion blk -> + NodeToClientVersion -> + NTC.Apps m (ConnectionId addrNTC) ByteString ByteString ByteString ByteString () + mkNodeToClientApps nodeKernelArgs nodeKernel blockVersion networkVersion = + NTC.mkApps + nodeKernel + rnTraceNTC + (NTC.defaultCodecs codecConfig blockVersion networkVersion) + (NTC.mkHandlers nodeKernelArgs nodeKernel) + + mkDiffusionApplications :: + NetworkP2PMode p2p -> + MiniProtocolParameters -> + ( BlockNodeToNodeVersion blk -> + NTN.Apps + m + addrNTN + ByteString + ByteString + ByteString + ByteString + ByteString + NodeToNodeInitiatorResult + () + ) -> + ( BlockNodeToClientVersion blk -> + NodeToClientVersion -> + NTC.Apps + m + (ConnectionId addrNTC) + ByteString + ByteString + ByteString + ByteString + () + ) -> + NodeKernel m addrNTN (ConnectionId addrNTC) blk -> + PeerMetrics m addrNTN -> + ( Diffusion.Applications + addrNTN + NodeToNodeVersion + NodeToNodeVersionData + addrNTC + NodeToClientVersion + NodeToClientVersionData + (Cardano.LedgerPeersConsensusInterface m) + m + NodeToNodeInitiatorResult + , Diffusion.ApplicationsExtra p2p addrNTN m NodeToNodeInitiatorResult + ) + mkDiffusionApplications + enP2P + miniProtocolParams + ntnApps + ntcApps + kernel + peerMetrics = case 
enP2P of EnabledP2PMode -> ( apps , Diffusion.P2PApplicationsExtra - Diffusion.P2P.ApplicationsExtra { - Diffusion.P2P.daRethrowPolicy = consensusRethrowPolicy (Proxy @blk), - Diffusion.P2P.daReturnPolicy = returnPolicy, - Diffusion.P2P.daLocalRethrowPolicy = localRethrowPolicy, - Diffusion.P2P.daPeerMetrics = peerMetrics, - Diffusion.P2P.daPeerSharingRegistry = getPeerSharingRegistry kernel - } + Diffusion.P2P.ApplicationsExtra + { Diffusion.P2P.daRethrowPolicy = consensusRethrowPolicy (Proxy @blk) + , Diffusion.P2P.daReturnPolicy = returnPolicy + , Diffusion.P2P.daLocalRethrowPolicy = localRethrowPolicy + , Diffusion.P2P.daPeerMetrics = peerMetrics + , Diffusion.P2P.daPeerSharingRegistry = getPeerSharingRegistry kernel + } ) DisabledP2PMode -> ( apps , Diffusion.NonP2PApplicationsExtra - NonP2P.ApplicationsExtra { - NonP2P.daErrorPolicies = consensusErrorPolicy (Proxy @blk) - } + NonP2P.ApplicationsExtra + { NonP2P.daErrorPolicies = consensusErrorPolicy (Proxy @blk) + } ) - where - apps = Diffusion.Applications { - Diffusion.daApplicationInitiatorMode = + where + apps = + Diffusion.Applications + { Diffusion.daApplicationInitiatorMode = combineVersions [ simpleSingletonVersions version llrnVersionDataNTN - (\versionData -> - NTN.initiator miniProtocolParams version versionData - -- Initiator side won't start responder side of Peer - -- Sharing protocol so we give a dummy implementation - -- here. - $ ntnApps blockVersion) + ( \versionData -> + NTN.initiator miniProtocolParams version versionData + -- Initiator side won't start responder side of Peer + -- Sharing protocol so we give a dummy implementation + -- here. 
+ $ + ntnApps blockVersion + ) | (version, blockVersion) <- Map.toList llrnNodeToNodeVersions - ], - Diffusion.daApplicationInitiatorResponderMode = + ] + , Diffusion.daApplicationInitiatorResponderMode = combineVersions [ simpleSingletonVersions version llrnVersionDataNTN - (\versionData -> - NTN.initiatorAndResponder miniProtocolParams version versionData - $ ntnApps blockVersion) + ( \versionData -> + NTN.initiatorAndResponder miniProtocolParams version versionData $ + ntnApps blockVersion + ) | (version, blockVersion) <- Map.toList llrnNodeToNodeVersions - ], - Diffusion.daLocalResponderApplication = + ] + , Diffusion.daLocalResponderApplication = combineVersions [ simpleSingletonVersions version llrnVersionDataNTC (\versionData -> NTC.responder version versionData $ ntcApps blockVersion version) | (version, blockVersion) <- Map.toList llrnNodeToClientVersions - ], - Diffusion.daLedgerPeersCtx = - LedgerPeersConsensusInterface { - lpGetLatestSlot = getImmTipSlot kernel, - lpGetLedgerPeers = fromMaybe [] <$> getPeersFromCurrentLedger kernel (const True), - lpExtraAPI = - Cardano.LedgerPeersConsensusInterface { - Cardano.getLedgerStateJudgement = GSM.gsmStateToLedgerJudgement <$> getGsmState kernel - , Cardano.updateOutboundConnectionsState = - let varOcs = getOutboundConnectionsState kernel in \newOcs -> do - oldOcs <- readTVar varOcs - when (newOcs /= oldOcs) $ writeTVar varOcs newOcs - } + ] + , Diffusion.daLedgerPeersCtx = + LedgerPeersConsensusInterface + { lpGetLatestSlot = getImmTipSlot kernel + , lpGetLedgerPeers = fromMaybe [] <$> getPeersFromCurrentLedger kernel (const True) + , lpExtraAPI = + Cardano.LedgerPeersConsensusInterface + { Cardano.getLedgerStateJudgement = GSM.gsmStateToLedgerJudgement <$> getGsmState kernel + , Cardano.updateOutboundConnectionsState = + let varOcs = getOutboundConnectionsState kernel + in \newOcs -> do + oldOcs <- readTVar varOcs + when (newOcs /= oldOcs) $ writeTVar varOcs newOcs + } } } - localRethrowPolicy :: 
RethrowPolicy - localRethrowPolicy = mempty - - runPredicate :: Predicate a -> a -> Maybe a - runPredicate (Predicate p) err = if p err then Just err else Nothing + localRethrowPolicy :: RethrowPolicy + localRethrowPolicy = mempty + runPredicate :: Predicate a -> a -> Maybe a + runPredicate (Predicate p) err = if p err then Just err else Nothing -- | Check the DB marker, lock the DB and look for the clean shutdown marker. -- -- Run the body action with the DB locked. --- stdWithCheckedDB :: - forall blk a. (StandardHash blk, Typeable blk) - => Proxy blk - -> Tracer IO (TraceEvent blk) - -> FilePath - -> NetworkMagic - -> (LastShutDownWasClean -> (ChainDB IO blk -> IO a -> IO a) -> IO a) -- ^ Body action with last shutdown was clean. - -> IO a + forall blk a. + (StandardHash blk, Typeable blk) => + Proxy blk -> + Tracer IO (TraceEvent blk) -> + FilePath -> + NetworkMagic -> + -- | Body action with last shutdown was clean. + (LastShutDownWasClean -> (ChainDB IO blk -> IO a -> IO a) -> IO a) -> + IO a stdWithCheckedDB pb tracer databasePath networkMagic body = do - - -- Check the DB marker first, before doing the lock file, since if the - -- marker is not present, it expects an empty DB dir. - either throwIO return =<< checkDbMarker + -- Check the DB marker first, before doing the lock file, since if the + -- marker is not present, it expects an empty DB dir. + either throwIO return + =<< checkDbMarker hasFS mountPoint networkMagic - -- Then create the lock file. - withLockDB mountPoint $ runWithCheckedDB pb tracer hasFS body - where - mountPoint = MountPoint databasePath - hasFS = ioHasFS mountPoint + -- Then create the lock file. + withLockDB mountPoint $ runWithCheckedDB pb tracer hasFS body + where + mountPoint = MountPoint databasePath + hasFS = ioHasFS mountPoint openChainDB :: - forall m blk. 
(RunNode blk, IOLike m) - => ResourceRegistry m - -> TopLevelConfig blk - -> ExtLedgerState blk ValuesMK - -- ^ Initial ledger - -> (ChainDB.RelativeMountPoint -> SomeHasFS m) - -- ^ Immutable FS, see 'NodeDatabasePaths' - -> (ChainDB.RelativeMountPoint -> SomeHasFS m) - -- ^ Volatile FS, see 'NodeDatabasePaths' - -> Complete LedgerDbFlavorArgs m - -> Incomplete ChainDbArgs m blk - -- ^ A set of default arguments (possibly modified from 'defaultArgs') - -> (Complete ChainDbArgs m blk -> Complete ChainDbArgs m blk) - -- ^ Customise the 'ChainDbArgs' - -> m (ChainDB m blk, Complete ChainDbArgs m blk) + forall m blk. + (RunNode blk, IOLike m) => + ResourceRegistry m -> + TopLevelConfig blk -> + -- | Initial ledger + ExtLedgerState blk ValuesMK -> + -- | Immutable FS, see 'NodeDatabasePaths' + (ChainDB.RelativeMountPoint -> SomeHasFS m) -> + -- | Volatile FS, see 'NodeDatabasePaths' + (ChainDB.RelativeMountPoint -> SomeHasFS m) -> + Complete LedgerDbFlavorArgs m -> + -- | A set of default arguments (possibly modified from 'defaultArgs') + Incomplete ChainDbArgs m blk -> + -- | Customise the 'ChainDbArgs' + (Complete ChainDbArgs m blk -> Complete ChainDbArgs m blk) -> + m (ChainDB m blk, Complete ChainDbArgs m blk) openChainDB registry cfg initLedger fsImm fsVol flavorArgs defArgs customiseArgs = - let args = customiseArgs $ ChainDB.completeChainDbArgs - registry - cfg - initLedger - (nodeImmutableDbChunkInfo (configStorage cfg)) - (nodeCheckIntegrity (configStorage cfg)) - fsImm - fsVol - flavorArgs - defArgs - in (,args) <$> ChainDB.openDB args + let args = + customiseArgs $ + ChainDB.completeChainDbArgs + registry + cfg + initLedger + (nodeImmutableDbChunkInfo (configStorage cfg)) + (nodeCheckIntegrity (configStorage cfg)) + fsImm + fsVol + flavorArgs + defArgs + in (,args) <$> ChainDB.openDB args mkNodeKernelArgs :: - forall m addrNTN addrNTC blk. 
(RunNode blk, IOLike m) - => ResourceRegistry m - -> Int - -> StdGen - -> StdGen - -> TopLevelConfig blk - -> Tracers m (ConnectionId addrNTN) (ConnectionId addrNTC) blk - -> BlockchainTime m - -> InFutureCheck.SomeHeaderInFutureCheck m blk - -> (m GSM.GsmState -> HistoricityCheck m blk) - -> ChainDB m blk - -> NominalDiffTime - -> Maybe (GSM.WrapDurationUntilTooOld m blk) - -> GSM.MarkerFileView m - -> STM m UseBootstrapPeers - -> StrictSTM.StrictTVar m (PublicPeerSelectionState addrNTN) - -> GenesisNodeKernelArgs m blk - -> DiffusionPipeliningSupport - -> m (NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk) + forall m addrNTN addrNTC blk. + (RunNode blk, IOLike m) => + ResourceRegistry m -> + Int -> + StdGen -> + StdGen -> + TopLevelConfig blk -> + Tracers m (ConnectionId addrNTN) (ConnectionId addrNTC) blk -> + BlockchainTime m -> + InFutureCheck.SomeHeaderInFutureCheck m blk -> + (m GSM.GsmState -> HistoricityCheck m blk) -> + ChainDB m blk -> + NominalDiffTime -> + Maybe (GSM.WrapDurationUntilTooOld m blk) -> + GSM.MarkerFileView m -> + STM m UseBootstrapPeers -> + StrictSTM.StrictTVar m (PublicPeerSelectionState addrNTN) -> + GenesisNodeKernelArgs m blk -> + DiffusionPipeliningSupport -> + m (NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk) mkNodeKernelArgs registry bfcSalt @@ -855,60 +967,67 @@ mkNodeKernelArgs getUseBootstrapPeers publicPeerSelectionStateVar genesisArgs - getDiffusionPipeliningSupport - = do - let (kaRng, psRng) = split rng - return NodeKernelArgs - { tracers - , registry - , cfg - , btime - , chainDB - , initChainDB = nodeInitChainDB - , chainSyncFutureCheck - , chainSyncHistoricityCheck - , blockFetchSize = estimateBlockSize - , mempoolCapacityOverride = NoMempoolCapacityBytesOverride - , miniProtocolParameters = defaultMiniProtocolParameters - , blockFetchConfiguration = Diffusion.defaultBlockFetchConfiguration bfcSalt - , gsmArgs = GsmNodeKernelArgs { - gsmAntiThunderingHerd - , gsmDurationUntilTooOld - , gsmMarkerFileView - , 
gsmMinCaughtUpDuration = maxCaughtUpAge - } - , getUseBootstrapPeers - , keepAliveRng = kaRng - , peerSharingRng = psRng - , publicPeerSelectionStateVar - , genesisArgs - , getDiffusionPipeliningSupport - } + getDiffusionPipeliningSupport = + do + let (kaRng, psRng) = split rng + return + NodeKernelArgs + { tracers + , registry + , cfg + , btime + , chainDB + , initChainDB = nodeInitChainDB + , chainSyncFutureCheck + , chainSyncHistoricityCheck + , blockFetchSize = estimateBlockSize + , mempoolCapacityOverride = NoMempoolCapacityBytesOverride + , miniProtocolParameters = defaultMiniProtocolParameters + , blockFetchConfiguration = Diffusion.defaultBlockFetchConfiguration bfcSalt + , gsmArgs = + GsmNodeKernelArgs + { gsmAntiThunderingHerd + , gsmDurationUntilTooOld + , gsmMarkerFileView + , gsmMinCaughtUpDuration = maxCaughtUpAge + } + , getUseBootstrapPeers + , keepAliveRng = kaRng + , peerSharingRng = psRng + , publicPeerSelectionStateVar + , genesisArgs + , getDiffusionPipeliningSupport + } -- | We allow the user running the node to customise the 'NodeKernelArgs' -- through 'llrnCustomiseNodeKernelArgs', but there are some limits to some -- values. This function makes sure we don't exceed those limits and that the -- values are consistent. nodeKernelArgsEnforceInvariants :: - NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk - -> NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk -nodeKernelArgsEnforceInvariants nodeKernelArgs = nodeKernelArgs - { miniProtocolParameters = miniProtocolParameters - -- If 'blockFetchPipeliningMax' exceeds the configured default, it - -- would be a protocol violation. 
- { blockFetchPipeliningMax = - min (blockFetchPipeliningMax miniProtocolParameters) + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk -> + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk +nodeKernelArgsEnforceInvariants nodeKernelArgs = + nodeKernelArgs + { miniProtocolParameters = + miniProtocolParameters + { -- If 'blockFetchPipeliningMax' exceeds the configured default, it + -- would be a protocol violation. + blockFetchPipeliningMax = + min + (blockFetchPipeliningMax miniProtocolParameters) (blockFetchPipeliningMax defaultMiniProtocolParameters) - } - , blockFetchConfiguration = blockFetchConfiguration - -- 'bfcMaxRequestsInflight' must be <= 'blockFetchPipeliningMax' - { bfcMaxRequestsInflight = - min (bfcMaxRequestsInflight blockFetchConfiguration) + } + , blockFetchConfiguration = + blockFetchConfiguration + { -- 'bfcMaxRequestsInflight' must be <= 'blockFetchPipeliningMax' + bfcMaxRequestsInflight = + min + (bfcMaxRequestsInflight blockFetchConfiguration) (fromIntegral $ blockFetchPipeliningMax miniProtocolParameters) - } + } } - where - NodeKernelArgs{..} = nodeKernelArgs + where + NodeKernelArgs{..} = nodeKernelArgs {------------------------------------------------------------------------------- Arguments for use in the real node @@ -916,11 +1035,11 @@ nodeKernelArgsEnforceInvariants nodeKernelArgs = nodeKernelArgs -- | How to locate the ChainDB on disk stdMkChainDbHasFS :: - FilePath - -> ChainDB.RelativeMountPoint - -> SomeHasFS IO + FilePath -> + ChainDB.RelativeMountPoint -> + SomeHasFS IO stdMkChainDbHasFS rootPath (ChainDB.RelativeMountPoint relPath) = - SomeHasFS $ ioHasFS $ MountPoint $ rootPath relPath + SomeHasFS $ ioHasFS $ MountPoint $ rootPath relPath stdBfcSaltIO :: IO Int stdBfcSaltIO = randomIO @@ -931,261 +1050,300 @@ stdGsmAntiThunderingHerdIO = newStdGen stdKeepAliveRngIO :: IO StdGen stdKeepAliveRngIO = newStdGen -stdVersionDataNTN :: NetworkMagic - -> DiffusionMode - -> PeerSharing - -> NodeToNodeVersionData 
-stdVersionDataNTN networkMagic diffusionMode peerSharing = NodeToNodeVersionData +stdVersionDataNTN :: + NetworkMagic -> + DiffusionMode -> + PeerSharing -> + NodeToNodeVersionData +stdVersionDataNTN networkMagic diffusionMode peerSharing = + NodeToNodeVersionData { networkMagic , diffusionMode , peerSharing - , query = False + , query = False } stdVersionDataNTC :: NetworkMagic -> NodeToClientVersionData -stdVersionDataNTC networkMagic = NodeToClientVersionData +stdVersionDataNTC networkMagic = + NodeToClientVersionData { networkMagic - , query = False + , query = False } -stdRunDataDiffusion - :: ( Monoid extraPeers - , Eq extraCounters - , Eq extraFlags - , Exception exception - ) - => ( forall (mode :: Mode) x y. - Diffusion.P2P.NodeToNodeConnectionManager - mode - Socket - RemoteAddress - NodeToNodeVersionData - NodeToNodeVersion - IO - x - y - -> StrictSTM.StrictTVar - IO - (PeerSelectionState - extraState - extraFlags - extraPeers - RemoteAddress - (Diffusion.P2P.NodeToNodePeerConnectionHandle - mode - RemoteAddress - NodeToNodeVersionData - IO - x - y) - ) - -> PeerMetrics IO RemoteAddress - -> IO () - ) -> Diffusion.Tracers +stdRunDataDiffusion :: + ( Monoid extraPeers + , Eq extraCounters + , Eq extraFlags + , Exception exception + ) => + ( forall (mode :: Mode) x y. 
+ Diffusion.P2P.NodeToNodeConnectionManager + mode + Socket RemoteAddress + NodeToNodeVersionData NodeToNodeVersion - LocalAddress - NodeToClientVersion IO - -> Diffusion.ExtraTracers - p2p - extraState - extraDebugState - extraFlags - extraPeers - extraCounters + x + y -> + StrictSTM.StrictTVar IO - -> Diffusion.Arguments - IO - Socket - RemoteAddress - LocalSocket - LocalAddress - -> Diffusion.ArgumentsExtra - p2p - extraArgs - extraState - extraDebugState - extraFlags - extraPeers - extraAPI - extraChurnArgs - extraCounters - exception - RemoteAddress - LocalAddress - Resolver - IOException - IO - -> Diffusion.Applications - RemoteAddress NodeToNodeVersion NodeToNodeVersionData - LocalAddress NodeToClientVersion NodeToClientVersionData - extraAPI IO a - -> Diffusion.ApplicationsExtra p2p RemoteAddress IO a - -> IO () + ( PeerSelectionState + extraState + extraFlags + extraPeers + RemoteAddress + ( Diffusion.P2P.NodeToNodePeerConnectionHandle + mode + RemoteAddress + NodeToNodeVersionData + IO + x + y + ) + ) -> + PeerMetrics IO RemoteAddress -> + IO () + ) -> + Diffusion.Tracers + RemoteAddress + NodeToNodeVersion + LocalAddress + NodeToClientVersion + IO -> + Diffusion.ExtraTracers + p2p + extraState + extraDebugState + extraFlags + extraPeers + extraCounters + IO -> + Diffusion.Arguments + IO + Socket + RemoteAddress + LocalSocket + LocalAddress -> + Diffusion.ArgumentsExtra + p2p + extraArgs + extraState + extraDebugState + extraFlags + extraPeers + extraAPI + extraChurnArgs + extraCounters + exception + RemoteAddress + LocalAddress + Resolver + IOException + IO -> + Diffusion.Applications + RemoteAddress + NodeToNodeVersion + NodeToNodeVersionData + LocalAddress + NodeToClientVersion + NodeToClientVersionData + extraAPI + IO + a -> + Diffusion.ApplicationsExtra p2p RemoteAddress IO a -> + IO () stdRunDataDiffusion = Diffusion.run -- | Conveniently packaged 'LowLevelRunNodeArgs' arguments from a standard -- non-testing invocation. 
-stdLowLevelRunNodeArgsIO - :: forall blk p2p extraState extraActions extraPeers extraFlags extraChurnArgs extraCounters exception . +stdLowLevelRunNodeArgsIO :: + forall blk p2p extraState extraActions extraPeers extraFlags extraChurnArgs extraCounters exception. ( RunNode blk , Monoid extraPeers , Eq extraCounters , Eq extraFlags , Exception exception - ) - => RunNodeArgs IO RemoteAddress LocalAddress blk p2p - -> StdRunNodeArgs IO blk p2p (Cardano.ExtraArguments IO) extraState Cardano.DebugPeerSelectionState extraActions (Cardano.LedgerPeersConsensusInterface IO) extraPeers extraFlags extraChurnArgs extraCounters exception - -> IO (LowLevelRunNodeArgs - IO - RemoteAddress - LocalAddress - blk - p2p - (Cardano.LedgerPeersConsensusInterface IO)) -stdLowLevelRunNodeArgsIO RunNodeArgs{ rnProtocolInfo - , rnEnableP2P - , rnPeerSharing - , rnGenesisConfig - , rnGetUseBootstrapPeers - } - $(SafeWildCards.fields 'StdRunNodeArgs) = do - llrnBfcSalt <- stdBfcSaltIO + ) => + RunNodeArgs IO RemoteAddress LocalAddress blk p2p -> + StdRunNodeArgs + IO + blk + p2p + (Cardano.ExtraArguments IO) + extraState + Cardano.DebugPeerSelectionState + extraActions + (Cardano.LedgerPeersConsensusInterface IO) + extraPeers + extraFlags + extraChurnArgs + extraCounters + exception -> + IO + ( LowLevelRunNodeArgs + IO + RemoteAddress + LocalAddress + blk + p2p + (Cardano.LedgerPeersConsensusInterface IO) + ) +stdLowLevelRunNodeArgsIO + RunNodeArgs + { rnProtocolInfo + , rnEnableP2P + , rnPeerSharing + , rnGenesisConfig + , rnGetUseBootstrapPeers + } + $(SafeWildCards.fields 'StdRunNodeArgs) = do + llrnBfcSalt <- stdBfcSaltIO llrnGsmAntiThunderingHerd <- stdGsmAntiThunderingHerdIO - llrnKeepAliveRng <- stdKeepAliveRngIO - pure LowLevelRunNodeArgs - { llrnBfcSalt - , llrnChainSyncTimeout = fromMaybe Diffusion.defaultChainSyncTimeout srnChainSyncTimeout - , llrnGenesisConfig = rnGenesisConfig - , llrnCustomiseHardForkBlockchainTimeArgs = id - , llrnGsmAntiThunderingHerd - , llrnKeepAliveRng - 
, llrnMkImmutableHasFS = stdMkChainDbHasFS $ immutableDbPath srnDatabasePath - , llrnMkVolatileHasFS = stdMkChainDbHasFS $ nonImmutableDbPath srnDatabasePath - , llrnChainDbArgsDefaults = updateChainDbDefaults ChainDB.defaultArgs - , llrnCustomiseChainDbArgs = id - , llrnCustomiseNodeKernelArgs - , llrnRunDataDiffusion = - \kernel apps extraApps -> do - case rnEnableP2P of - EnabledP2PMode -> - case srnDiffusionTracersExtra of - Diffusion.P2PTracers extraTracers -> do - let srnDiffusionArgumentsExtra' = - srnDiffusionArgumentsExtra (Diffusion.P2PDecision (Diffusion.P2P.dtTracePublicRootPeersTracer extraTracers)) - (Diffusion.P2PDecision (getFetchMode kernel)) - (Diffusion.P2PDecision (lpExtraAPI (Diffusion.daLedgerPeersCtx apps))) - case srnDiffusionArgumentsExtra' of - Diffusion.P2PArguments extraArgs -> - stdRunDataDiffusion - (srnSigUSR1SignalHandler + llrnKeepAliveRng <- stdKeepAliveRngIO + pure + LowLevelRunNodeArgs + { llrnBfcSalt + , llrnChainSyncTimeout = fromMaybe Diffusion.defaultChainSyncTimeout srnChainSyncTimeout + , llrnGenesisConfig = rnGenesisConfig + , llrnCustomiseHardForkBlockchainTimeArgs = id + , llrnGsmAntiThunderingHerd + , llrnKeepAliveRng + , llrnMkImmutableHasFS = stdMkChainDbHasFS $ immutableDbPath srnDatabasePath + , llrnMkVolatileHasFS = stdMkChainDbHasFS $ nonImmutableDbPath srnDatabasePath + , llrnChainDbArgsDefaults = updateChainDbDefaults ChainDB.defaultArgs + , llrnCustomiseChainDbArgs = id + , llrnCustomiseNodeKernelArgs + , llrnRunDataDiffusion = + \kernel apps extraApps -> do + case rnEnableP2P of + EnabledP2PMode -> + case srnDiffusionTracersExtra of + Diffusion.P2PTracers extraTracers -> do + let srnDiffusionArgumentsExtra' = + srnDiffusionArgumentsExtra + (Diffusion.P2PDecision (Diffusion.P2P.dtTracePublicRootPeersTracer extraTracers)) + (Diffusion.P2PDecision (getFetchMode kernel)) + (Diffusion.P2PDecision (lpExtraAPI (Diffusion.daLedgerPeersCtx apps))) + case srnDiffusionArgumentsExtra' of + Diffusion.P2PArguments extraArgs 
-> + stdRunDataDiffusion + ( srnSigUSR1SignalHandler srnDiffusionTracersExtra (Diffusion.P2P.daReadUseLedgerPeers extraArgs) rnPeerSharing rnGetUseBootstrapPeers - (GSM.gsmStateToLedgerJudgement <$> getGsmState kernel)) - srnDiffusionTracers - srnDiffusionTracersExtra - srnDiffusionArguments - srnDiffusionArgumentsExtra' - apps extraApps - - DisabledP2PMode -> - stdRunDataDiffusion - (srnSigUSR1SignalHandler - (Diffusion.NonP2PTracers NonP2P.nullTracers) - (pure DontUseLedgerPeers) - rnPeerSharing - (pure DontUseBootstrapPeers) - (pure TooOld)) - srnDiffusionTracers - srnDiffusionTracersExtra - srnDiffusionArguments - (srnDiffusionArgumentsExtra - (Diffusion.NonP2PDecision ()) - (Diffusion.NonP2PDecision ()) - (Diffusion.NonP2PDecision ())) - apps extraApps - , llrnVersionDataNTC = - stdVersionDataNTC networkMagic - , llrnVersionDataNTN = - stdVersionDataNTN - networkMagic - (case rnEnableP2P of - EnabledP2PMode -> Diffusion.daMode srnDiffusionArguments - -- Every connection in non-p2p mode is unidirectional; We connect - -- from an ephemeral port. We still pass `srnDiffusionArguments` - -- to the diffusion layer, so the server side will be run also in - -- `InitiatorAndResponderDiffusionMode`. - DisabledP2PMode -> InitiatorOnlyDiffusionMode - ) - rnPeerSharing - , llrnNodeToNodeVersions = - limitToLatestReleasedVersion - fst - (supportedNodeToNodeVersions (Proxy @blk)) - , llrnNodeToClientVersions = - limitToLatestReleasedVersion - snd - (supportedNodeToClientVersions (Proxy @blk)) - , llrnWithCheckedDB = - -- 'stdWithCheckedDB' uses the FS just to check for the clean file. - -- We put that one in the immutable path. 
- stdWithCheckedDB (Proxy @blk) srnTraceChainDB (immutableDbPath srnDatabasePath) networkMagic - , llrnMaxCaughtUpAge = secondsToNominalDiffTime $ 20 * 60 -- 20 min - , llrnMaxClockSkew = - InFutureCheck.defaultClockSkew - , llrnPublicPeerSelectionStateVar = - Diffusion.daPublicPeerSelectionVar srnDiffusionArguments - , llrnLdbFlavorArgs = - srnLdbFlavorArgs - } - where + (GSM.gsmStateToLedgerJudgement <$> getGsmState kernel) + ) + srnDiffusionTracers + srnDiffusionTracersExtra + srnDiffusionArguments + srnDiffusionArgumentsExtra' + apps + extraApps + DisabledP2PMode -> + stdRunDataDiffusion + ( srnSigUSR1SignalHandler + (Diffusion.NonP2PTracers NonP2P.nullTracers) + (pure DontUseLedgerPeers) + rnPeerSharing + (pure DontUseBootstrapPeers) + (pure TooOld) + ) + srnDiffusionTracers + srnDiffusionTracersExtra + srnDiffusionArguments + ( srnDiffusionArgumentsExtra + (Diffusion.NonP2PDecision ()) + (Diffusion.NonP2PDecision ()) + (Diffusion.NonP2PDecision ()) + ) + apps + extraApps + , llrnVersionDataNTC = + stdVersionDataNTC networkMagic + , llrnVersionDataNTN = + stdVersionDataNTN + networkMagic + ( case rnEnableP2P of + EnabledP2PMode -> Diffusion.daMode srnDiffusionArguments + -- Every connection in non-p2p mode is unidirectional; We connect + -- from an ephemeral port. We still pass `srnDiffusionArguments` + -- to the diffusion layer, so the server side will be run also in + -- `InitiatorAndResponderDiffusionMode`. + DisabledP2PMode -> InitiatorOnlyDiffusionMode + ) + rnPeerSharing + , llrnNodeToNodeVersions = + limitToLatestReleasedVersion + fst + (supportedNodeToNodeVersions (Proxy @blk)) + , llrnNodeToClientVersions = + limitToLatestReleasedVersion + snd + (supportedNodeToClientVersions (Proxy @blk)) + , llrnWithCheckedDB = + -- 'stdWithCheckedDB' uses the FS just to check for the clean file. + -- We put that one in the immutable path. 
+ stdWithCheckedDB (Proxy @blk) srnTraceChainDB (immutableDbPath srnDatabasePath) networkMagic + , llrnMaxCaughtUpAge = secondsToNominalDiffTime $ 20 * 60 -- 20 min + , llrnMaxClockSkew = + InFutureCheck.defaultClockSkew + , llrnPublicPeerSelectionStateVar = + Diffusion.daPublicPeerSelectionVar srnDiffusionArguments + , llrnLdbFlavorArgs = + srnLdbFlavorArgs + } + where networkMagic :: NetworkMagic networkMagic = getNetworkMagic $ configBlock $ pInfoConfig rnProtocolInfo updateChainDbDefaults :: - Incomplete ChainDbArgs IO blk - -> Incomplete ChainDbArgs IO blk + Incomplete ChainDbArgs IO blk -> + Incomplete ChainDbArgs IO blk updateChainDbDefaults = - ChainDB.updateSnapshotPolicyArgs srnSnapshotPolicyArgs + ChainDB.updateSnapshotPolicyArgs srnSnapshotPolicyArgs . ChainDB.updateQueryBatchSize srnQueryBatchSize . ChainDB.updateTracer srnTraceChainDB - . (if not srnChainDbValidateOverride - then id - else ChainDB.ensureValidateAll) + . ( if not srnChainDbValidateOverride + then id + else ChainDB.ensureValidateAll + ) llrnCustomiseNodeKernelArgs :: - NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk - -> NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk -> + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk llrnCustomiseNodeKernelArgs = - overBlockFetchConfiguration modifyBlockFetchConfiguration - . modifyMempoolCapacityOverride - where - modifyBlockFetchConfiguration = - maybe id - (\mc bfc -> bfc { bfcMaxConcurrencyDeadline = mc }) - srnBfcMaxConcurrencyDeadline - . maybe id - (\mc bfc -> bfc { bfcMaxConcurrencyBulkSync = mc }) - srnBfcMaxConcurrencyBulkSync - modifyMempoolCapacityOverride = - maybe id - (\mc nka -> nka { mempoolCapacityOverride = mc }) - srnMaybeMempoolCapacityOverride + overBlockFetchConfiguration modifyBlockFetchConfiguration + . 
modifyMempoolCapacityOverride + where + modifyBlockFetchConfiguration = + maybe + id + (\mc bfc -> bfc{bfcMaxConcurrencyDeadline = mc}) + srnBfcMaxConcurrencyDeadline + . maybe + id + (\mc bfc -> bfc{bfcMaxConcurrencyBulkSync = mc}) + srnBfcMaxConcurrencyBulkSync + modifyMempoolCapacityOverride = + maybe + id + (\mc nka -> nka{mempoolCapacityOverride = mc}) + srnMaybeMempoolCapacityOverride -- Limit the node version unless srnEnableInDevelopmentVersions is set - limitToLatestReleasedVersion :: forall k v. - Ord k - => ((Maybe NodeToNodeVersion, Maybe NodeToClientVersion) -> Maybe k) - -> Map k v - -> Map k v + limitToLatestReleasedVersion :: + forall k v. + Ord k => + ((Maybe NodeToNodeVersion, Maybe NodeToClientVersion) -> Maybe k) -> + Map k v -> + Map k v limitToLatestReleasedVersion prj = - if srnEnableInDevelopmentVersions then id - else - case prj $ latestReleasedNodeVersion (Proxy @blk) of - Nothing -> id + if srnEnableInDevelopmentVersions + then id + else case prj $ latestReleasedNodeVersion (Proxy @blk) of + Nothing -> id Just version -> Map.takeWhileAntitone (<= version) {------------------------------------------------------------------------------- @@ -1193,11 +1351,12 @@ stdLowLevelRunNodeArgsIO RunNodeArgs{ rnProtocolInfo -------------------------------------------------------------------------------} overBlockFetchConfiguration :: - (BlockFetchConfiguration -> BlockFetchConfiguration) - -> NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk - -> NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk -overBlockFetchConfiguration f args = args { - blockFetchConfiguration = f blockFetchConfiguration + (BlockFetchConfiguration -> BlockFetchConfiguration) -> + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk -> + NodeKernelArgs m addrNTN (ConnectionId addrNTC) blk +overBlockFetchConfiguration f args = + args + { blockFetchConfiguration = f blockFetchConfiguration } - where - NodeKernelArgs { blockFetchConfiguration } = args + where + 
NodeKernelArgs{blockFetchConfiguration} = args diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/DbLock.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/DbLock.hs index ce3e31c6b7..7597b1a5c3 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/DbLock.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/DbLock.hs @@ -2,22 +2,24 @@ {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE ScopedTypeVariables #-} -module Ouroboros.Consensus.Node.DbLock ( - DbLocked (..) +module Ouroboros.Consensus.Node.DbLock + ( DbLocked (..) , withLockDB + -- * Defaults , dbLockFsPath , dbLockTimeout + -- * For testing purposes , withLockDB_ ) where -import Control.Monad.Class.MonadFork -import Control.Monad.Class.MonadTimer.SI -import qualified Data.Time.Clock as Time -import Ouroboros.Consensus.Util.FileLock -import Ouroboros.Consensus.Util.IOLike -import System.FS.API.Types +import Control.Monad.Class.MonadFork +import Control.Monad.Class.MonadTimer.SI +import Data.Time.Clock qualified as Time +import Ouroboros.Consensus.Util.FileLock +import Ouroboros.Consensus.Util.IOLike +import System.FS.API.Types -- | We use an empty file ('dbLockFsPath') as a lock of the database so that -- the database cannot be opened by more than one process. We wait up to @@ -25,11 +27,11 @@ import System.FS.API.Types -- 'DbLocked' exception. withLockDB :: MountPoint -> IO a -> IO a withLockDB mountPoint = - withLockDB_ - ioFileLock - mountPoint - dbLockFsPath - dbLockTimeout + withLockDB_ + ioFileLock + mountPoint + dbLockFsPath + dbLockTimeout -- | The default lock file dbLockFsPath :: FsPath @@ -46,44 +48,51 @@ dbLockTimeout = Time.secondsToDiffTime 2 -- -- Some systems may delete the empty file when all its handles are closed. -- This is not an issue, since the file is created if it doesn't exist. 
-withLockDB_ - :: forall m a. (IOLike m, MonadTimer m) - => FileLock m - -> MountPoint -- ^ Root of the path - -> FsPath -- ^ File to lock - -> DiffTime -- ^ Timeout - -> m a - -> m a +withLockDB_ :: + forall m a. + (IOLike m, MonadTimer m) => + FileLock m -> + -- | Root of the path + MountPoint -> + -- | File to lock + FsPath -> + -- | Timeout + DiffTime -> + m a -> + m a withLockDB_ fileLock mountPoint lockFsPath lockTimeout action = - bracket acquireLock id (const action) - where - -- We want to avoid blocking the main thread at an uninterruptible ffi, to - -- avoid unresponsiveness to timeouts and ^C. So we use 'async' and let a - -- new thread do the actual ffi call. - -- - -- We shouldn't be tempted to use 'withAsync', which is usually mentioned - -- as a better alternative, or try to synchronously cancel the forked - -- thread during cleanup, since this would block the main thread and negate - -- the whole point of using 'async'. - -- - -- This means that we leave the thread taking the lock running in case of - -- a timeout. This is not a problem, though, since if we fail to take the - -- lock, the whole process will soon die. - acquireLock :: m (m ()) - acquireLock = do - lockFileAsync <- async (do - labelThisThread "ChainDB lock" - lockFile fileLock lockFilePath) - timeout lockTimeout (wait lockFileAsync) >>= \case - -- We timed out while waiting on the lock. The db is still locked. - Nothing -> throwIO $ DbLocked lockFilePath - Just unlock -> return unlock + bracket acquireLock id (const action) + where + -- We want to avoid blocking the main thread at an uninterruptible ffi, to + -- avoid unresponsiveness to timeouts and ^C. So we use 'async' and let a + -- new thread do the actual ffi call. 
+ -- + -- We shouldn't be tempted to use 'withAsync', which is usually mentioned + -- as a better alternative, or try to synchronously cancel the forked + -- thread during cleanup, since this would block the main thread and negate + -- the whole point of using 'async'. + -- + -- This means that we leave the thread taking the lock running in case of + -- a timeout. This is not a problem, though, since if we fail to take the + -- lock, the whole process will soon die. + acquireLock :: m (m ()) + acquireLock = do + lockFileAsync <- + async + ( do + labelThisThread "ChainDB lock" + lockFile fileLock lockFilePath + ) + timeout lockTimeout (wait lockFileAsync) >>= \case + -- We timed out while waiting on the lock. The db is still locked. + Nothing -> throwIO $ DbLocked lockFilePath + Just unlock -> return unlock - lockFilePath = fsToFilePath mountPoint lockFsPath + lockFilePath = fsToFilePath mountPoint lockFsPath newtype DbLocked = DbLocked FilePath - deriving (Eq, Show) + deriving (Eq, Show) instance Exception DbLocked where - displayException (DbLocked f) = - "The db is used by another process. File \"" <> f <> "\" is locked" + displayException (DbLocked f) = + "The db is used by another process. File \"" <> f <> "\" is locked" diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/DbMarker.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/DbMarker.hs index c5e68031db..946c4273bd 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/DbMarker.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/DbMarker.hs @@ -2,28 +2,29 @@ {-# LANGUAGE ScopedTypeVariables #-} -- | Special file we store in the DB dir to avoid unintended deletions -module Ouroboros.Consensus.Node.DbMarker ( - DbMarkerError (..) +module Ouroboros.Consensus.Node.DbMarker + ( DbMarkerError (..) 
, checkDbMarker + -- * For the benefit of testing only , dbMarkerContents , dbMarkerFile , dbMarkerParse ) where -import Control.Monad (void, when) -import Control.Monad.Except (ExceptT (..), runExceptT, throwError) -import Control.Monad.Trans.Class (lift) -import Data.ByteString (ByteString) -import qualified Data.ByteString.Char8 as BS.Char8 -import Data.ByteString.Lazy (fromStrict, toStrict) -import qualified Data.Set as Set -import Data.Text (Text) -import Data.Word -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Magic (NetworkMagic (..)) -import System.FS.API.Lazy -import Text.Read (readMaybe) +import Control.Monad (void, when) +import Control.Monad.Except (ExceptT (..), runExceptT, throwError) +import Control.Monad.Trans.Class (lift) +import Data.ByteString (ByteString) +import Data.ByteString.Char8 qualified as BS.Char8 +import Data.ByteString.Lazy (fromStrict, toStrict) +import Data.Set qualified as Set +import Data.Text (Text) +import Data.Word +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Magic (NetworkMagic (..)) +import System.FS.API.Lazy +import Text.Read (readMaybe) {------------------------------------------------------------------------------- Check proper @@ -62,77 +63,90 @@ import Text.Read (readMaybe) -- -- Note that an 'FsError' can also be thrown. checkDbMarker :: - forall m h. MonadThrow m - => HasFS m h - -> MountPoint - -- ^ Database directory. Should be the mount point of the @HasFS@. Used - -- in error messages. - -> NetworkMagic - -> m (Either DbMarkerError ()) + forall m h. + MonadThrow m => + HasFS m h -> + -- | Database directory. Should be the mount point of the @HasFS@. Used + -- in error messages. 
+ MountPoint -> + NetworkMagic -> + m (Either DbMarkerError ()) checkDbMarker hasFS mountPoint networkMagic = runExceptT $ do - fileExists <- lift $ doesFileExist hasFS pFile - if fileExists then do + fileExists <- lift $ doesFileExist hasFS pFile + if fileExists + then do actualNetworkMagic <- readNetworkMagicFile when (actualNetworkMagic /= networkMagic) $ - throwError $ NetworkMagicMismatch - fullPath - actualNetworkMagic - networkMagic + throwError $ + NetworkMagicMismatch + fullPath + actualNetworkMagic + networkMagic else do lift $ createDirectoryIfMissing hasFS False root isEmpty <- lift $ Set.null <$> listDirectory hasFS root - if isEmpty then - createNetworkMagicFile - else - throwError $ NoDbMarkerAndNotEmpty fullPath - where - root = mkFsPath [] - pFile = fsPathFromList [dbMarkerFile] - fullPath = fsToFilePath mountPoint pFile + if isEmpty + then + createNetworkMagicFile + else + throwError $ NoDbMarkerAndNotEmpty fullPath + where + root = mkFsPath [] + pFile = fsPathFromList [dbMarkerFile] + fullPath = fsToFilePath mountPoint pFile - readNetworkMagicFile :: ExceptT DbMarkerError m NetworkMagic - readNetworkMagicFile = ExceptT $ - withFile hasFS pFile ReadMode $ \h -> do - bs <- toStrict <$> hGetAll hasFS h - runExceptT $ dbMarkerParse fullPath bs + readNetworkMagicFile :: ExceptT DbMarkerError m NetworkMagic + readNetworkMagicFile = ExceptT $ + withFile hasFS pFile ReadMode $ \h -> do + bs <- toStrict <$> hGetAll hasFS h + runExceptT $ dbMarkerParse fullPath bs - createNetworkMagicFile :: ExceptT DbMarkerError m () - createNetworkMagicFile = lift $ - withFile hasFS pFile (AppendMode MustBeNew) $ \h -> - void $ hPutAll hasFS h $ - fromStrict $ dbMarkerContents networkMagic + createNetworkMagicFile :: ExceptT DbMarkerError m () + createNetworkMagicFile = lift $ + withFile hasFS pFile (AppendMode MustBeNew) $ \h -> + void $ + hPutAll hasFS h $ + fromStrict $ + dbMarkerContents networkMagic 
{------------------------------------------------------------------------------- Error -------------------------------------------------------------------------------} -data DbMarkerError = - -- | There was a 'dbMarkerFile' in the database folder, but it +data DbMarkerError + = -- | There was a 'dbMarkerFile' in the database folder, but it -- contained a different 'NetworkMagic' than the expected one. This -- indicates that this database folder corresponds to another net. NetworkMagicMismatch - FilePath -- ^ The full path to the 'dbMarkerFile' - NetworkMagic -- ^ Actual - NetworkMagic -- ^ Expected - - -- | The database folder contained no 'dbMarkerFile', but also + -- | The full path to the 'dbMarkerFile' + FilePath + -- | Actual + NetworkMagic + -- | Expected + NetworkMagic + | -- | The database folder contained no 'dbMarkerFile', but also -- contained some files. Either the given folder is a non-database folder -- or it is a database folder, but its 'dbMarkerFile' has been -- deleted. - | NoDbMarkerAndNotEmpty - FilePath -- ^ The full path to the 'dbMarkerFile' - - -- | The database folder contained a 'dbMarkerFile' that could not + NoDbMarkerAndNotEmpty + -- | The full path to the 'dbMarkerFile' + FilePath + | -- | The database folder contained a 'dbMarkerFile' that could not -- be read. The file has been tampered with or it was corrupted somehow. 
- | CorruptDbMarker - FilePath -- ^ The full path to the 'dbMarkerFile' + CorruptDbMarker + -- | The full path to the 'dbMarkerFile' + FilePath deriving (Eq, Show) instance Exception DbMarkerError where displayException e = case e of NetworkMagicMismatch f actual expected -> - "Wrong NetworkMagic in \"" <> f <> "\": " <> show actual <> - ", but expected: " <> show expected + "Wrong NetworkMagic in \"" + <> f + <> "\": " + <> show actual + <> ", but expected: " + <> show expected NoDbMarkerAndNotEmpty f -> "Missing \"" <> f <> "\" but the folder was not empty" CorruptDbMarker f -> @@ -156,16 +170,17 @@ dbMarkerFile = "protocolMagicId" -- type, we must consider how this affects existing DB deployments. dbMarkerContents :: NetworkMagic -> ByteString dbMarkerContents (NetworkMagic (nm :: Word32)) = - BS.Char8.pack $ show nm + BS.Char8.pack $ show nm -- | Parse contents of the DB marker file -- -- Must be inverse to 'dbMarkerContents' -dbMarkerParse :: Monad m - => FilePath - -> ByteString - -> ExceptT DbMarkerError m NetworkMagic +dbMarkerParse :: + Monad m => + FilePath -> + ByteString -> + ExceptT DbMarkerError m NetworkMagic dbMarkerParse fullPath bs = - case readMaybe (BS.Char8.unpack bs) of - Just nm -> return $ NetworkMagic nm - Nothing -> throwError $ CorruptDbMarker fullPath + case readMaybe (BS.Char8.unpack bs) of + Just nm -> return $ NetworkMagic nm + Nothing -> throwError $ CorruptDbMarker fullPath diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/ErrorPolicy.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/ErrorPolicy.hs index 4ff689017c..01e4d91e13 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/ErrorPolicy.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/ErrorPolicy.hs @@ -2,40 +2,48 @@ module Ouroboros.Consensus.Node.ErrorPolicy (consensusErrorPolicy) where 
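The `dbMarkerParse`/`dbMarkerContents` pair above is documented as an inverse: the file holds the decimal rendering of the `Word32` magic, and parsing anything else is `CorruptDbMarker`. A minimal self-contained sketch of that round trip, with `contents` and `parse` as stand-in names for the real definitions:

```haskell
-- Illustrative re-implementation of the DB marker round trip; the real
-- definitions are dbMarkerContents and dbMarkerParse, where a failed
-- parse becomes the CorruptDbMarker error rather than Nothing.
import Data.ByteString (ByteString)
import qualified Data.ByteString.Char8 as BS.Char8
import Data.Word (Word32)
import Text.Read (readMaybe)

newtype NetworkMagic = NetworkMagic Word32
  deriving (Eq, Show)

-- Serialise the magic as its decimal rendering.
contents :: NetworkMagic -> ByteString
contents (NetworkMagic nm) = BS.Char8.pack (show nm)

-- Must be the inverse of 'contents' on its range.
parse :: ByteString -> Maybe NetworkMagic
parse bs = NetworkMagic <$> readMaybe (BS.Char8.unpack bs)
```

`parse (contents nm) == Just nm` holds for every magic, and non-numeric input parses to `Nothing`, which is the property the "Must be inverse to 'dbMarkerContents'" comment pins down.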
-import Control.Monad.Class.MonadAsync (ExceptionInLinkedThread (..)) -import Control.ResourceRegistry (RegistryClosedException, - ResourceRegistryThreadException, TempRegistryException) -import Data.Proxy (Proxy) -import Data.Time.Clock (DiffTime) -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block (StandardHash) -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server - (BlockFetchServerException) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncClientException) -import Ouroboros.Consensus.Node.DbLock -import Ouroboros.Consensus.Node.DbMarker (DbMarkerError) -import Ouroboros.Consensus.Storage.ChainDB.API (ChainDbError (..), - ChainDbFailure) -import Ouroboros.Consensus.Storage.ImmutableDB.API (ImmutableDBError) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.API as ImmutableDB -import Ouroboros.Consensus.Storage.VolatileDB.API (VolatileDBError) -import qualified Ouroboros.Consensus.Storage.VolatileDB.API as VolatileDB -import Ouroboros.Network.ErrorPolicy -import System.FS.API.Types (FsError) +import Control.Monad.Class.MonadAsync (ExceptionInLinkedThread (..)) +import Control.ResourceRegistry + ( RegistryClosedException + , ResourceRegistryThreadException + , TempRegistryException + ) +import Data.Proxy (Proxy) +import Data.Time.Clock (DiffTime) +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block (StandardHash) +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server + ( BlockFetchServerException + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientException + ) +import Ouroboros.Consensus.Node.DbLock +import Ouroboros.Consensus.Node.DbMarker (DbMarkerError) +import Ouroboros.Consensus.Storage.ChainDB.API + ( ChainDbError (..) 
+ , ChainDbFailure + ) +import Ouroboros.Consensus.Storage.ImmutableDB.API (ImmutableDBError) +import Ouroboros.Consensus.Storage.ImmutableDB.API qualified as ImmutableDB +import Ouroboros.Consensus.Storage.VolatileDB.API (VolatileDBError) +import Ouroboros.Consensus.Storage.VolatileDB.API qualified as VolatileDB +import Ouroboros.Network.ErrorPolicy +import System.FS.API.Types (FsError) consensusErrorPolicy :: - forall blk. (Typeable blk, StandardHash blk) - => Proxy blk - -> ErrorPolicies -consensusErrorPolicy pb = ErrorPolicies { - -- Exception raised during connect + forall blk. + (Typeable blk, StandardHash blk) => + Proxy blk -> + ErrorPolicies +consensusErrorPolicy pb = + ErrorPolicies + { -- Exception raised during connect -- -- This is entirely a network-side concern. epConErrorPolicies = [] - - -- Exception raised during interaction with the peer + , -- Exception raised during interaction with the peer -- -- The list below should contain an entry for every type declared as an -- instance of 'Exception' within ouroboros-consensus. @@ -49,89 +57,83 @@ consensusErrorPolicy pb = ErrorPolicies { -- -- TODO: Talk to devops about what they should do when the node does -- terminate with a storage layer exception (restart with full recovery). - , epAppErrorPolicies = [ - -- Any exceptions in the storage layer should terminate the node + epAppErrorPolicies = + [ -- Any exceptions in the storage layer should terminate the node -- -- NOTE: We do not catch IOExceptions here; they /ought/ to be caught -- by the FS layer (and turn into FsError). If we do want to catch -- them, we'd somehow have to distinguish between IO exceptions -- arising from disk I/O (shutdownNode) and those arising from -- network failures (SuspendConsumer). 
- ErrorPolicy $ \(_ :: DbMarkerError) -> Just shutdownNode - , ErrorPolicy $ \(_ :: DbLocked) -> Just shutdownNode - , ErrorPolicy $ \(_ :: ChainDbFailure blk) -> Just shutdownNode - , ErrorPolicy $ \(e :: VolatileDBError blk) -> + ErrorPolicy $ \(_ :: DbMarkerError) -> Just shutdownNode + , ErrorPolicy $ \(_ :: DbLocked) -> Just shutdownNode + , ErrorPolicy $ \(_ :: ChainDbFailure blk) -> Just shutdownNode + , ErrorPolicy $ \(e :: VolatileDBError blk) -> case e of - VolatileDB.ApiMisuse{} -> Just ourBug + VolatileDB.ApiMisuse{} -> Just ourBug VolatileDB.UnexpectedFailure{} -> Just shutdownNode , ErrorPolicy $ \(e :: ImmutableDBError blk) -> case e of - ImmutableDB.ApiMisuse{} -> Just ourBug + ImmutableDB.ApiMisuse{} -> Just ourBug ImmutableDB.UnexpectedFailure{} -> Just shutdownNode , ErrorPolicy $ \(_ :: FsError) -> Just shutdownNode - - -- When the system clock moved back, we have to restart the node. - , ErrorPolicy $ \(_ :: SystemClockMovedBackException) -> Just shutdownNode - - -- Some chain DB errors are indicative of a bug in our code, others + , -- When the system clock moved back, we have to restart the node. + ErrorPolicy $ \(_ :: SystemClockMovedBackException) -> Just shutdownNode + , -- Some chain DB errors are indicative of a bug in our code, others -- indicate an invalid request from the peer. If the DB is closed -- entirely, it will only be reopened after a node restart. - , ErrorPolicy $ \(e :: ChainDbError blk) -> + ErrorPolicy $ \(e :: ChainDbError blk) -> case e of - ClosedDBError{} -> Just shutdownNode - ClosedFollowerError{} -> Just ourBug + ClosedDBError{} -> Just shutdownNode + ClosedFollowerError{} -> Just ourBug InvalidIteratorRange{} -> Just theyBuggyOrEvil - - -- We have some resource registries that are used per-connection, + , -- We have some resource registries that are used per-connection, -- and so if we have ResourceRegistry related exception, we close -- the connection but leave the rest of the node running. 
- , ErrorPolicy $ \(_ :: RegistryClosedException) -> Just ourBug + ErrorPolicy $ \(_ :: RegistryClosedException) -> Just ourBug , ErrorPolicy $ \(_ :: ResourceRegistryThreadException) -> Just ourBug - , ErrorPolicy $ \(_ :: TempRegistryException) -> Just ourBug - - -- An exception in the block fetch server meant the client asked + , ErrorPolicy $ \(_ :: TempRegistryException) -> Just ourBug + , -- An exception in the block fetch server meant the client asked -- for some blocks we used to have but got GCed. This means the -- peer is on a chain that forks off more than @k@ blocks away. - , ErrorPolicy $ \(_ :: BlockFetchServerException) -> Just distantPeer - - -- Chain sync client exceptions indicate malicious behaviour. When we + ErrorPolicy $ \(_ :: BlockFetchServerException) -> Just distantPeer + , -- Chain sync client exceptions indicate malicious behaviour. When we -- have diverged too much from a client, making it no longer -- interesting to us, we terminate with a result. - , ErrorPolicy $ \(_ :: ChainSyncClientException) -> Just theyBuggyOrEvil - - -- Dispatch on nested exception - , ErrorPolicy $ \(ExceptionInLinkedThread _ e) -> + ErrorPolicy $ \(_ :: ChainSyncClientException) -> Just theyBuggyOrEvil + , -- Dispatch on nested exception + ErrorPolicy $ \(ExceptionInLinkedThread _ e) -> evalErrorPolicies e (epAppErrorPolicies (consensusErrorPolicy pb)) ] } - where - -- Shutdown the node. If we have a storage layer failure, the node /must/ - -- be restarted (triggering recovery). - shutdownNode :: SuspendDecision DiffTime - shutdownNode = Throw + where + -- Shutdown the node. If we have a storage layer failure, the node /must/ + -- be restarted (triggering recovery). + shutdownNode :: SuspendDecision DiffTime + shutdownNode = Throw - -- Peer is either on a distant chain (one that forks more than k blocks ago) - -- or else is just too far behind; the chain sync client doesn't really have - -- any way of distinguishing between these two cases. 
If they are merely - -- far behind, we might want to reconnect to them later. - distantPeer :: SuspendDecision DiffTime - distantPeer = SuspendConsumer defaultDelay + -- Peer is either on a distant chain (one that forks more than k blocks ago) + -- or else is just too far behind; the chain sync client doesn't really have + -- any way of distinguishing between these two cases. If they are merely + -- far behind, we might want to reconnect to them later. + distantPeer :: SuspendDecision DiffTime + distantPeer = SuspendConsumer defaultDelay - -- The peer sent us some data that they could have known was invalid. - -- This can only be due to a bug or malice. - theyBuggyOrEvil :: SuspendDecision DiffTime - theyBuggyOrEvil = SuspendPeer defaultDelay defaultDelay + -- The peer sent us some data that they could have known was invalid. + -- This can only be due to a bug or malice. + theyBuggyOrEvil :: SuspendDecision DiffTime + theyBuggyOrEvil = SuspendPeer defaultDelay defaultDelay - -- Something went wrong due to a bug in our code. We disconnect from the - -- peer, but allow to try again later in the hope the bug was transient. - -- We do not close the connection in the other direction; if the bug was - -- indeed local, it might not affect communication in the other direction. - ourBug :: SuspendDecision DiffTime - ourBug = SuspendConsumer defaultDelay + -- Something went wrong due to a bug in our code. We disconnect from the + -- peer, but allow to try again later in the hope the bug was transient. + -- We do not close the connection in the other direction; if the bug was + -- indeed local, it might not affect communication in the other direction. + ourBug :: SuspendDecision DiffTime + ourBug = SuspendConsumer defaultDelay - -- Default delay - -- - -- We might want to tweak the delays for the various different kinds of - -- problems, but we'd need to establish a policy on how to set them. 
- defaultDelay :: DiffTime - defaultDelay = 200 -- seconds + -- Default delay + -- + -- We might want to tweak the delays for the various different kinds of + -- problems, but we'd need to establish a policy on how to set them. + defaultDelay :: DiffTime + defaultDelay = 200 -- seconds diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Exit.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Exit.hs index 759bf72c09..5c1792c7e1 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Exit.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Exit.hs @@ -2,28 +2,32 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE ScopedTypeVariables #-} -module Ouroboros.Consensus.Node.Exit ( - -- * ExitFailure +module Ouroboros.Consensus.Node.Exit + ( -- * ExitFailure ExitFailure , exitReasontoExitFailure + -- * ExitReason , ExitReason (..) , toExitReason ) where -import Control.Exception (AsyncException (..), SomeException, - fromException) -import Control.Monad.Class.MonadAsync (ExceptionInLinkedThread (..)) -import Data.Proxy (Proxy) -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block (StandardHash) -import Ouroboros.Consensus.Node.DbMarker (DbMarkerError) -import Ouroboros.Consensus.Storage.ChainDB.API (ChainDbFailure (..)) -import Ouroboros.Consensus.Storage.ImmutableDB.API (ImmutableDBError) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.API as ImmutableDB -import Ouroboros.Consensus.Storage.VolatileDB (VolatileDBError) -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import System.FS.API.Types (FsError (..), FsErrorType (..)) +import Control.Exception + ( AsyncException (..) 
+ , SomeException + , fromException + ) +import Control.Monad.Class.MonadAsync (ExceptionInLinkedThread (..)) +import Data.Proxy (Proxy) +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block (StandardHash) +import Ouroboros.Consensus.Node.DbMarker (DbMarkerError) +import Ouroboros.Consensus.Storage.ChainDB.API (ChainDbFailure (..)) +import Ouroboros.Consensus.Storage.ImmutableDB.API (ImmutableDBError) +import Ouroboros.Consensus.Storage.ImmutableDB.API qualified as ImmutableDB +import Ouroboros.Consensus.Storage.VolatileDB (VolatileDBError) +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import System.FS.API.Types (FsError (..), FsErrorType (..)) {------------------------------------------------------------------------------- ExitFailure @@ -40,112 +44,102 @@ type ExitFailure = Int -- | Convert an 'ExitReason' to an 'ExitFailure'. exitReasontoExitFailure :: ExitReason -> ExitFailure exitReasontoExitFailure = \case - -- Some action should be taken before restarting in the cases below. - ConfigurationError -> 3 - WrongDatabase -> 4 - DiskFull -> 5 - InsufficientPermissions -> 6 - NoNetwork -> 7 - - -- The node can simply be restarted in the cases below. - -- - -- NOTE: Database corruption is handled automically: when the node is - -- restarted, it will do a full validation pass. - Killed -> 1 - DatabaseCorruption -> 2 - Other -> 2 + -- Some action should be taken before restarting in the cases below. + ConfigurationError -> 3 + WrongDatabase -> 4 + DiskFull -> 5 + InsufficientPermissions -> 6 + NoNetwork -> 7 + -- The node can simply be restarted in the cases below. + -- + -- NOTE: Database corruption is handled automically: when the node is + -- restarted, it will do a full validation pass. 
+ Killed -> 1 + DatabaseCorruption -> 2 + Other -> 2 {------------------------------------------------------------------------------- ExitReason -------------------------------------------------------------------------------} -- | The reason of shutting down -data ExitReason = - -- | The node process was killed, by the @kill@ command, @CTRL-C@ or some +data ExitReason + = -- | The node process was killed, by the @kill@ command, @CTRL-C@ or some -- other means. This is normal way for a user to terminate the node -- process. The node can simply be restarted. Killed - - -- | Something is wrong with the node configuration, the user should check it. + | -- | Something is wrong with the node configuration, the user should check it. -- -- For example, for PBFT, it could be that the block signing key and the -- delegation certificate do not match. - | ConfigurationError - - -- | We were unable to open the database, probably the user is using the + ConfigurationError + | -- | We were unable to open the database, probably the user is using the -- wrong directory. See 'DbMarkerError' for details. - | WrongDatabase - - -- | The disk is full, make some space before restarting the node. - | DiskFull - - -- | The database folder doesn't have the right permissions. - | InsufficientPermissions - - -- | There is a problem with the network connection, the user should + WrongDatabase + | -- | The disk is full, make some space before restarting the node. + DiskFull + | -- | The database folder doesn't have the right permissions. + InsufficientPermissions + | -- | There is a problem with the network connection, the user should -- investigate. -- -- TODO We're not yet returning this. - | NoNetwork - - -- | Something went wrong with the database, restart the node with + NoNetwork + | -- | Something went wrong with the database, restart the node with -- recovery enabled. - | DatabaseCorruption - - -- | Some exception was thrown. The node should just be restarted. 
- | Other + DatabaseCorruption + | -- | Some exception was thrown. The node should just be restarted. + Other -- | Return the 'ExitReason' for the given 'SomeException'. Defaults to -- 'Other'. toExitReason :: - forall blk. (Typeable blk, StandardHash blk) - => Proxy blk - -> SomeException - -> ExitReason + forall blk. + (Typeable blk, StandardHash blk) => + Proxy blk -> + SomeException -> + ExitReason toExitReason pb e - | Just (e' :: AsyncException) <- fromException e - = case e' of - ThreadKilled -> Killed + | Just (e' :: AsyncException) <- fromException e = + case e' of + ThreadKilled -> Killed UserInterrupt -> Killed - _ -> Other - - | Just (ExceptionInLinkedThread _ e') <- fromException e - = toExitReason pb e' - | Just (_ :: DbMarkerError) <- fromException e - = WrongDatabase - | Just (e' :: ChainDbFailure blk) <- fromException e - = case e' of + _ -> Other + | Just (ExceptionInLinkedThread _ e') <- fromException e = + toExitReason pb e' + | Just (_ :: DbMarkerError) <- fromException e = + WrongDatabase + | Just (e' :: ChainDbFailure blk) <- fromException e = + case e' of LgrDbFailure fe -> fsError fe - _ -> DatabaseCorruption - - | Just (e' :: VolatileDBError blk) <- fromException e - = case e' of + _ -> DatabaseCorruption + | Just (e' :: VolatileDBError blk) <- fromException e = + case e' of VolatileDB.UnexpectedFailure uf -> volatileDbUnexpectedFailure uf - _ -> Other - -- The two exceptions below will always be wrapped in a - -- 'ChainDbFailure', but we include them just in case. - | Just (e' :: ImmutableDBError blk) <- fromException e - = case e' of + _ -> Other + -- The two exceptions below will always be wrapped in a + -- 'ChainDbFailure', but we include them just in case. 
+ | Just (e' :: ImmutableDBError blk) <- fromException e = + case e' of ImmutableDB.UnexpectedFailure uf -> immutableDbUnexpectedFailure uf - _ -> Other - | Just (e' :: FsError) <- fromException e - = fsError e' - - | otherwise - = Other - where - immutableDbUnexpectedFailure :: ImmutableDB.UnexpectedFailure blk -> ExitReason - immutableDbUnexpectedFailure = \case - ImmutableDB.FileSystemError fe -> fsError fe - _ -> DatabaseCorruption - - volatileDbUnexpectedFailure :: VolatileDB.UnexpectedFailure blk -> ExitReason - volatileDbUnexpectedFailure = \case - VolatileDB.FileSystemError fe -> fsError fe - _ -> DatabaseCorruption - - fsError :: FsError -> ExitReason - fsError FsError { fsErrorType } = case fsErrorType of - FsDeviceFull -> DiskFull - FsInsufficientPermissions -> InsufficientPermissions - _ -> DatabaseCorruption + _ -> Other + | Just (e' :: FsError) <- fromException e = + fsError e' + | otherwise = + Other + where + immutableDbUnexpectedFailure :: ImmutableDB.UnexpectedFailure blk -> ExitReason + immutableDbUnexpectedFailure = \case + ImmutableDB.FileSystemError fe -> fsError fe + _ -> DatabaseCorruption + + volatileDbUnexpectedFailure :: VolatileDB.UnexpectedFailure blk -> ExitReason + volatileDbUnexpectedFailure = \case + VolatileDB.FileSystemError fe -> fsError fe + _ -> DatabaseCorruption + + fsError :: FsError -> ExitReason + fsError FsError{fsErrorType} = case fsErrorType of + FsDeviceFull -> DiskFull + FsInsufficientPermissions -> InsufficientPermissions + _ -> DatabaseCorruption diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/ExitPolicy.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/ExitPolicy.hs index 81f99159b1..df8677ec68 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/ExitPolicy.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/ExitPolicy.hs @@ 
-1,31 +1,31 @@ -module Ouroboros.Consensus.Node.ExitPolicy ( - NodeToNodeInitiatorResult (..) +module Ouroboros.Consensus.Node.ExitPolicy + ( NodeToNodeInitiatorResult (..) , returnPolicy + -- * Re-exports , ReturnPolicy ) where -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CSClient -import Ouroboros.Network.ExitPolicy - +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CSClient +import Ouroboros.Network.ExitPolicy -- | Result of any of the `node-to-node` mini-protocols. We ignore all but -- `chain-sync` results. --- -data NodeToNodeInitiatorResult = - ChainSyncInitiatorResult !CSClient.ChainSyncClientResult +data NodeToNodeInitiatorResult + = ChainSyncInitiatorResult !CSClient.ChainSyncClientResult | NoInitiatorResult - returnPolicy :: ReturnPolicy NodeToNodeInitiatorResult returnPolicy NoInitiatorResult = RepromoteDelay 10 returnPolicy (ChainSyncInitiatorResult result) = case result of -- TODO: it would be nice to have additional context to predict when we will -- be ready to reconnect. - CSClient.ForkTooDeep _ _ourTip _theirTip -> RepromoteDelay 120 + CSClient.ForkTooDeep _ _ourTip _theirTip -> RepromoteDelay 120 CSClient.NoMoreIntersection _ourTip _theirTip -> RepromoteDelay 120 CSClient.RolledBackPastIntersection - _ _ourTip _theirTip -> RepromoteDelay 180 + _ + _ourTip + _theirTip -> RepromoteDelay 180 -- the outbound-governor asked for hot to warm demotion; it's up to the -- governor to decide to promote the peer to hot. 
- CSClient.AskedToTerminate -> RepromoteDelay 10 + CSClient.AskedToTerminate -> RepromoteDelay 10 diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/GSM.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/GSM.hs index dbc5894f29..d4cf28cb25 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/GSM.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/GSM.hs @@ -7,8 +7,8 @@ {-# LANGUAGE ViewPatterns #-} -- | The Genesis State Machine decides whether the node is caught-up or not. -module Ouroboros.Consensus.Node.GSM ( - CandidateVersusSelection (..) +module Ouroboros.Consensus.Node.GSM + ( CandidateVersusSelection (..) , DurationFromNow (..) , GsmEntryPoints (..) , GsmNodeKernelArgs (..) @@ -16,122 +16,129 @@ module Ouroboros.Consensus.Node.GSM ( , GsmView (..) , MarkerFileView (..) , WrapDurationUntilTooOld (..) + -- * Auxiliaries , TraceGsmEvent (..) 
, gsmStateToLedgerJudgement , initializationGsmState + -- * Constructors , realDurationUntilTooOld , realGsmEntryPoints , realMarkerFileView + -- * Re-exported , module Ouroboros.Consensus.Node.GsmState ) where -import Cardano.Network.Types (LedgerStateJudgement (..)) -import qualified Cardano.Slotting.Slot as Slot -import qualified Control.Concurrent.Class.MonadSTM.TVar as LazySTM -import Control.Monad (forever, join, unless) -import Control.Monad.Class.MonadSTM (MonadSTM, STM, atomically, check, - orElse) -import Control.Monad.Class.MonadThrow (MonadThrow) -import Control.Monad.Class.MonadTimer (threadDelay) -import qualified Control.Monad.Class.MonadTimer.SI as SI -import Control.Tracer (Tracer, traceWith) -import Data.Functor ((<&>)) -import qualified Data.Map.Strict as Map -import Data.Time (NominalDiffTime) -import qualified Ouroboros.Consensus.BlockchainTime.WallClock.Types as Clock -import qualified Ouroboros.Consensus.HardFork.Abstract as HardFork -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import qualified Ouroboros.Consensus.HardFork.History.Qry as Qry -import qualified Ouroboros.Consensus.Ledger.Basics as L -import Ouroboros.Consensus.Node.GsmState -import Ouroboros.Consensus.Storage.ChainDB.API (ChainDB) -import Ouroboros.Consensus.Util.NormalForm.StrictTVar (StrictTVar) -import qualified Ouroboros.Consensus.Util.NormalForm.StrictTVar as StrictSTM -import System.FS.API (HasFS, createDirectoryIfMissing, doesFileExist, - removeFile, withFile) -import System.FS.API.Types (AllowExisting (..), FsPath, OpenMode (..), - mkFsPath) -import System.Random (StdGen, uniformR) +import Cardano.Network.Types (LedgerStateJudgement (..)) +import Cardano.Slotting.Slot qualified as Slot +import Control.Concurrent.Class.MonadSTM.TVar qualified as LazySTM +import Control.Monad (forever, join, unless) +import Control.Monad.Class.MonadSTM + ( MonadSTM + , STM + , atomically + , check + , orElse + ) +import Control.Monad.Class.MonadThrow (MonadThrow) 
+import Control.Monad.Class.MonadTimer (threadDelay) +import Control.Monad.Class.MonadTimer.SI qualified as SI +import Control.Tracer (Tracer, traceWith) +import Data.Functor ((<&>)) +import Data.Map.Strict qualified as Map +import Data.Time (NominalDiffTime) +import Ouroboros.Consensus.BlockchainTime.WallClock.Types qualified as Clock +import Ouroboros.Consensus.HardFork.Abstract qualified as HardFork +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.HardFork.History.Qry qualified as Qry +import Ouroboros.Consensus.Ledger.Basics qualified as L +import Ouroboros.Consensus.Node.GsmState +import Ouroboros.Consensus.Storage.ChainDB.API (ChainDB) +import Ouroboros.Consensus.Util.NormalForm.StrictTVar (StrictTVar) +import Ouroboros.Consensus.Util.NormalForm.StrictTVar qualified as StrictSTM +import System.FS.API + ( HasFS + , createDirectoryIfMissing + , doesFileExist + , removeFile + , withFile + ) +import System.FS.API.Types + ( AllowExisting (..) + , FsPath + , OpenMode (..) + , mkFsPath + ) +import System.Random (StdGen, uniformR) {------------------------------------------------------------------------------- Interface -------------------------------------------------------------------------------} -data DurationFromNow = +data DurationFromNow + = -- | INVARIANT positive After !NominalDiffTime - -- ^ INVARIANT positive - | - Already - -- ^ This value represents all non-positive durations, ie events from the + | -- | This value represents all non-positive durations, ie events from the -- past + Already deriving (Eq, Show) -data CandidateVersusSelection = - CandidateDoesNotIntersect - -- ^ The GSM assumes that this is ephemeral +data CandidateVersusSelection + = -- | The GSM assumes that this is ephemeral -- -- For example, the ChainSync client will either disconnect from the peer -- or update the candidate to one that is not stale. 
It's also technically -- possible that the selection is stale, which the ChainDB would also -- resolve as soon as possible. - | + CandidateDoesNotIntersect + | -- | Whether the candidate is better than the selection WhetherCandidateIsBetter !Bool - -- ^ Whether the candidate is better than the selection deriving (Eq, Show) -data GsmView m upstreamPeer selection chainSyncState = GsmView { - antiThunderingHerd :: Maybe StdGen - -- ^ An initial seed used to randomly increase 'minCaughtUpDuration' by up - -- to 15% every transition from Syncing to CaughtUp, in order to avoid a - -- thundering herd phenomenon. - -- - -- 'Nothing' should only be used for testing. - , - candidateOverSelection :: - selection -> chainSyncState -> CandidateVersusSelection - , - peerIsIdle :: chainSyncState -> Bool - , - durationUntilTooOld :: Maybe (selection -> m DurationFromNow) - -- ^ How long from now until the selection will be so old that the node - -- should exit the @CaughtUp@ state - -- - -- 'Nothing' means the selection can never become too old. - , - equivalent :: selection -> selection -> Bool - -- ^ Whether the two selections are equivalent for the purpose of the - -- Genesis State Machine - , - getChainSyncStates :: - STM m (Map.Map upstreamPeer (StrictTVar m chainSyncState)) - -- ^ The current ChainSync state with the latest candidates from the - -- upstream peers - , - getCurrentSelection :: STM m selection - -- ^ The node's current selection - , - minCaughtUpDuration :: NominalDiffTime - -- ^ How long the node must stay in CaughtUp after transitioning to it from - -- Syncing, regardless of the selection's age. This prevents the whole - -- network from thrashing between CaughtUp and (Pre)Syncing if there's an - -- outage in block production. - -- - -- See also 'antiThunderingHerd'. 
- , - setCaughtUpPersistentMark :: Bool -> m () - -- ^ EG touch/delete the marker file on disk - , - writeGsmState :: GsmState -> m () - -- ^ EG update the TVar that the Diffusion Layer monitors, or en-/disable - -- certain components of Genesis - , - isHaaSatisfied :: STM m Bool - -- ^ Whether the Honest Availability Assumption is currently satisfied. This - -- is used as the trigger for transitioning from 'PreSyncing' to 'Syncing' - -- and vice versa. +data GsmView m upstreamPeer selection chainSyncState = GsmView + { antiThunderingHerd :: Maybe StdGen + -- ^ An initial seed used to randomly increase 'minCaughtUpDuration' by up + -- to 15% every transition from Syncing to CaughtUp, in order to avoid a + -- thundering herd phenomenon. + -- + -- 'Nothing' should only be used for testing. + , candidateOverSelection :: + selection -> + chainSyncState -> + CandidateVersusSelection + , peerIsIdle :: chainSyncState -> Bool + , durationUntilTooOld :: Maybe (selection -> m DurationFromNow) + -- ^ How long from now until the selection will be so old that the node + -- should exit the @CaughtUp@ state + -- + -- 'Nothing' means the selection can never become too old. + , equivalent :: selection -> selection -> Bool + -- ^ Whether the two selections are equivalent for the purpose of the + -- Genesis State Machine + , getChainSyncStates :: + STM m (Map.Map upstreamPeer (StrictTVar m chainSyncState)) + -- ^ The current ChainSync state with the latest candidates from the + -- upstream peers + , getCurrentSelection :: STM m selection + -- ^ The node's current selection + , minCaughtUpDuration :: NominalDiffTime + -- ^ How long the node must stay in CaughtUp after transitioning to it from + -- Syncing, regardless of the selection's age. This prevents the whole + -- network from thrashing between CaughtUp and (Pre)Syncing if there's an + -- outage in block production. + -- + -- See also 'antiThunderingHerd'. 
+ , setCaughtUpPersistentMark :: Bool -> m () + -- ^ EG touch/delete the marker file on disk + , writeGsmState :: GsmState -> m () + -- ^ EG update the TVar that the Diffusion Layer monitors, or en-/disable + -- certain components of Genesis + , isHaaSatisfied :: STM m Bool + -- ^ Whether the Honest Availability Assumption is currently satisfied. This + -- is used as the trigger for transitioning from 'PreSyncing' to 'Syncing' + -- and vice versa. } -- | The two proper GSM entrypoints. @@ -139,20 +146,19 @@ data GsmView m upstreamPeer selection chainSyncState = GsmView { -- See the @BootstrapPeersIER.md@ document for documentation. -- -- See 'initializationLedgerJudgement' for the @Initializing@ pseudo-state. -data GsmEntryPoints m = GsmEntryPoints { - enterCaughtUp :: forall neverTerminates. m neverTerminates - -- ^ ASSUMPTION the marker file is present on disk, a la - -- @'setCaughtUpPersistentMark' True@ - -- - -- Thus this can be invoked at node start up after determining the marker - -- file is present (and the tip is still not stale) - , - enterPreSyncing :: forall neverTerminates. m neverTerminates - -- ^ ASSUMPTION the marker file is absent on disk, a la - -- @'setCaughtUpPersistentMark' False@ - -- - -- Thus this can be invoked at node start up after determining the marker - -- file is absent. +data GsmEntryPoints m = GsmEntryPoints + { enterCaughtUp :: forall neverTerminates. m neverTerminates + -- ^ ASSUMPTION the marker file is present on disk, a la + -- @'setCaughtUpPersistentMark' True@ + -- + -- Thus this can be invoked at node start up after determining the marker + -- file is present (and the tip is still not stale) + , enterPreSyncing :: forall neverTerminates. m neverTerminates + -- ^ ASSUMPTION the marker file is absent on disk, a la + -- @'setCaughtUpPersistentMark' False@ + -- + -- Thus this can be invoked at node start up after determining the marker + -- file is absent. 
} ----- @@ -161,38 +167,40 @@ data GsmEntryPoints m = GsmEntryPoints { -- -- Also initializes the persistent marker file. initializationGsmState :: - ( L.GetTip (L.LedgerState blk) - , Monad m - ) - => m (L.LedgerState blk L.EmptyMK) - -> Maybe (WrapDurationUntilTooOld m blk) - -- ^ 'Nothing' if @blk@ has no age limit - -> MarkerFileView m - -> m GsmState + ( L.GetTip (L.LedgerState blk) + , Monad m + ) => + m (L.LedgerState blk L.EmptyMK) -> + -- | 'Nothing' if @blk@ has no age limit + Maybe (WrapDurationUntilTooOld m blk) -> + MarkerFileView m -> + m GsmState initializationGsmState - getCurrentLedger - mbDurationUntilTooOld - markerFileView - = do - wasCaughtUp <- hasMarkerFile markerFileView - if not wasCaughtUp then pure PreSyncing else do - case mbDurationUntilTooOld of + getCurrentLedger + mbDurationUntilTooOld + markerFileView = + do + wasCaughtUp <- hasMarkerFile markerFileView + if not wasCaughtUp + then pure PreSyncing + else do + case mbDurationUntilTooOld of Nothing -> return CaughtUp Just wd -> do - sno <- L.getTipSlot <$> getCurrentLedger - getDurationUntilTooOld wd sno >>= \case - After{} -> return CaughtUp - Already -> do - removeMarkerFile markerFileView - return PreSyncing + sno <- L.getTipSlot <$> getCurrentLedger + getDurationUntilTooOld wd sno >>= \case + After{} -> return CaughtUp + Already -> do + removeMarkerFile markerFileView + return PreSyncing -- | For 'LedgerStateJudgement' as used in the Diffusion layer, there is no -- difference between 'PreSyncing' and 'Syncing'. 
gsmStateToLedgerJudgement :: GsmState -> LedgerStateJudgement gsmStateToLedgerJudgement = \case - PreSyncing -> TooOld - Syncing -> TooOld - CaughtUp -> YoungEnough + PreSyncing -> TooOld + Syncing -> TooOld + CaughtUp -> YoungEnough {------------------------------------------------------------------------------- A real implementation @@ -205,276 +213,268 @@ gsmStateToLedgerJudgement = \case -- states OnlyBootstrap is split into: -- -- - PreSyncing ⟶ Syncing: The Honest Availability Assumption is satisfied. --- + --- - Syncing ⟶ PreSyncing: The Honest Availability Assumption is no longer --- satisfied. -realGsmEntryPoints :: forall m upstreamPeer selection tracedSelection candidate. - ( SI.MonadDelay m - , SI.MonadTimer m - ) - => (selection -> tracedSelection, Tracer m (TraceGsmEvent tracedSelection)) - -> GsmView m upstreamPeer selection candidate - -> GsmEntryPoints m -realGsmEntryPoints tracerArgs gsmView = GsmEntryPoints { - enterCaughtUp - , - enterPreSyncing - } - where - (cnvSelection, tracer) = tracerArgs - - GsmView { - antiThunderingHerd - , - candidateOverSelection - , - peerIsIdle - , - durationUntilTooOld - , - equivalent - , - getChainSyncStates - , - getCurrentSelection - , - minCaughtUpDuration - , - setCaughtUpPersistentMark - , - writeGsmState - , - isHaaSatisfied - } = gsmView - - enterCaughtUp :: forall neverTerminates. m neverTerminates - enterCaughtUp = enterCaughtUp' antiThunderingHerd - - enterPreSyncing :: forall neverTerminates. m neverTerminates - enterPreSyncing = enterPreSyncing' antiThunderingHerd - - enterCaughtUp' :: forall neverTerminates. Maybe StdGen -> m neverTerminates - enterCaughtUp' g = do - (g', ev) <- blockWhileCaughtUp g - - setCaughtUpPersistentMark False +realGsmEntryPoints :: + forall m upstreamPeer selection tracedSelection candidate. 
+ ( SI.MonadDelay m + , SI.MonadTimer m + ) => + (selection -> tracedSelection, Tracer m (TraceGsmEvent tracedSelection)) -> + GsmView m upstreamPeer selection candidate -> + GsmEntryPoints m +realGsmEntryPoints tracerArgs gsmView = + GsmEntryPoints + { enterCaughtUp + , enterPreSyncing + } + where + (cnvSelection, tracer) = tracerArgs + + GsmView + { antiThunderingHerd + , candidateOverSelection + , peerIsIdle + , durationUntilTooOld + , equivalent + , getChainSyncStates + , getCurrentSelection + , minCaughtUpDuration + , setCaughtUpPersistentMark + , writeGsmState + , isHaaSatisfied + } = gsmView + + enterCaughtUp :: forall neverTerminates. m neverTerminates + enterCaughtUp = enterCaughtUp' antiThunderingHerd + + enterPreSyncing :: forall neverTerminates. m neverTerminates + enterPreSyncing = enterPreSyncing' antiThunderingHerd + + enterCaughtUp' :: forall neverTerminates. Maybe StdGen -> m neverTerminates + enterCaughtUp' g = do + (g', ev) <- blockWhileCaughtUp g + + setCaughtUpPersistentMark False + writeGsmState PreSyncing + traceWith tracer ev + + enterPreSyncing' g' + + enterPreSyncing' :: Maybe StdGen -> forall neverTerminates. m neverTerminates + enterPreSyncing' g = do + blockUntilHonestAvailabilityAssumption + + writeGsmState Syncing + traceWith tracer GsmEventPreSyncingToSyncing + + enterSyncing' g + + enterSyncing' :: Maybe StdGen -> forall neverTerminates. m neverTerminates + enterSyncing' g = do + -- Wait until either the Honest Availability Assumption is no longer + -- satisfied, or we are caught up. + mev <- + atomically $ + (Nothing <$ blockWhileHonestAvailabilityAssumption) + `orElse` (Just <$> blockUntilCaughtUp) + + case mev of + Nothing -> do writeGsmState PreSyncing + traceWith tracer GsmEventSyncingToPreSyncing + + enterPreSyncing' g + Just ev -> do + writeGsmState CaughtUp + setCaughtUpPersistentMark True traceWith tracer ev - enterPreSyncing' g' - - enterPreSyncing' :: Maybe StdGen -> forall neverTerminates. 
m neverTerminates - enterPreSyncing' g = do - blockUntilHonestAvailabilityAssumption - - writeGsmState Syncing - traceWith tracer GsmEventPreSyncingToSyncing - - enterSyncing' g - - enterSyncing' :: Maybe StdGen -> forall neverTerminates. m neverTerminates - enterSyncing' g = do - -- Wait until either the Honest Availability Assumption is no longer - -- satisfied, or we are caught up. - mev <- atomically $ - (Nothing <$ blockWhileHonestAvailabilityAssumption) - `orElse` - (Just <$> blockUntilCaughtUp) - - case mev of - Nothing -> do - writeGsmState PreSyncing - traceWith tracer GsmEventSyncingToPreSyncing - - enterPreSyncing' g - Just ev -> do - writeGsmState CaughtUp - setCaughtUpPersistentMark True - traceWith tracer ev - - -- When transitioning from Syncing to CaughtUp, the node will remain - -- in CaughtUp for at least 'minCaughtUpDuration', regardless of the - -- selection's age. - SI.threadDelay $ realToFrac minCaughtUpDuration - - enterCaughtUp' g - - blockWhileCaughtUp :: - Maybe StdGen - -> m (Maybe StdGen, TraceGsmEvent tracedSelection) - blockWhileCaughtUp g = do - -- Randomly add up to 5min. - -- - -- Under the ideal circumstances, nodes have perfectly synchronized - -- clocks. However, if there's a block production outage, that means - -- /all/ nodes will switch back to the bootstrap peers - -- /simultaneously/, incurring a thundering herd of requests on that - -- relatively small population. This random change will spread that - -- load out. - -- - -- TODO should the Diffusion Layer do this? IE the node /promptly/ - -- switches to PreSyncing, but then the Diffusion Layer introces a delay - -- before reaching out to the bootstrap peers? 
- let (bonus, g') = case g of - Nothing -> (0, Nothing) -- it's disabled in some tests - Just x -> - let (seconds, !g'') = - uniformR (0, 300 :: Int) x - in - (fromIntegral seconds, Just g'') - - ev <- atomically getCurrentSelection >>= blockWhileCaughtUpHelper bonus - - pure (g', ev) - - blockWhileCaughtUpHelper :: - SI.DiffTime - -> selection - -> m (TraceGsmEvent tracedSelection) - blockWhileCaughtUpHelper bonus selection = do - let tracedSelection = cnvSelection selection - - computeDuration :: m (Maybe DurationFromNow) - computeDuration = mapM ($ selection) durationUntilTooOld - computeDuration >>= \case - Nothing -> forever $ threadDelay maxBound - Just Already -> do -- it's already too old - pure $ GsmEventLeaveCaughtUp tracedSelection Already - Just (After dur) -> do - varTimeoutExpired <- SI.registerDelay (realToFrac dur + bonus) - - -- If the selection changes before the timeout expires, loop to - -- setup a new timeout for the new tip. - -- - -- Otherwise the timeout expired before the selection changed - -- (or they both happened after the previous attempt of this - -- STM transaction), so the node is no longer in @CaughtUp@. 
- join $ atomically $ do - expired <- LazySTM.readTVar varTimeoutExpired - let ev = GsmEventLeaveCaughtUp tracedSelection (After dur) - if expired then pure (pure ev) else do - selection' <- getCurrentSelection - check $ not $ equivalent selection selection' - pure $ blockWhileCaughtUpHelper bonus selection' - - blockUntilCaughtUp :: STM m (TraceGsmEvent tracedSelection) - blockUntilCaughtUp = do - -- STAGE 1: all ChainSync clients report no subsequent headers - varsState <- getChainSyncStates - states <- traverse StrictSTM.readTVar varsState - check $ - not (Map.null states) - && all peerIsIdle states - - -- STAGE 2: no candidate is better than the node's current - -- selection - -- - -- For the Bootstrap State Machine, it's fine to completely ignore - -- block diffusion pipelining here, because all bootstrap peers will - -- /promptly/ rollback the tentative header if its block body turns out - -- to be invalid (aka /trap header/). Thus the node will stay in - -- CaughtUp slighty longer, until the system is no longer pipelining a - -- block; general Praos reasoning ensures that won't take particularly - -- long. - selection <- getCurrentSelection - candidates <- traverse StrictSTM.readTVar varsState - let ok candidate = - WhetherCandidateIsBetter False - == candidateOverSelection selection candidate - check $ all ok candidates - - pure $ GsmEventEnterCaughtUp - (Map.size states) - (cnvSelection selection) - - -- STAGE 3: the previous stages weren't so slow that the idler - -- set/candidate set/individual candidates changed - -- - -- At this point, the STM scheduler will automatically retry this - -- transaction if and only if any of the TVars are no longer - -- pointer-equal to what was read above. That outcome is unlikely as - -- long as there are not a huge number of peers; as Simon Marlow wrote, - -- "Never read an unbounded number of TVars in a single transaction - -- because the O(n) performance of readTVar then gives O(n*n) for the - -- whole transaction." 
+ -- When transitioning from Syncing to CaughtUp, the node will remain + -- in CaughtUp for at least 'minCaughtUpDuration', regardless of the + -- selection's age. + SI.threadDelay $ realToFrac minCaughtUpDuration + + enterCaughtUp' g + + blockWhileCaughtUp :: + Maybe StdGen -> + m (Maybe StdGen, TraceGsmEvent tracedSelection) + blockWhileCaughtUp g = do + -- Randomly add up to 5min. + -- + -- Under the ideal circumstances, nodes have perfectly synchronized + -- clocks. However, if there's a block production outage, that means + -- /all/ nodes will switch back to the bootstrap peers + -- /simultaneously/, incurring a thundering herd of requests on that + -- relatively small population. This random change will spread that + -- load out. + -- + -- TODO should the Diffusion Layer do this? IE the node /promptly/ + -- switches to PreSyncing, but then the Diffusion Layer introces a delay + -- before reaching out to the bootstrap peers? + let (bonus, g') = case g of + Nothing -> (0, Nothing) -- it's disabled in some tests + Just x -> + let (seconds, !g'') = + uniformR (0, 300 :: Int) x + in (fromIntegral seconds, Just g'') + + ev <- atomically getCurrentSelection >>= blockWhileCaughtUpHelper bonus + + pure (g', ev) + + blockWhileCaughtUpHelper :: + SI.DiffTime -> + selection -> + m (TraceGsmEvent tracedSelection) + blockWhileCaughtUpHelper bonus selection = do + let tracedSelection = cnvSelection selection + + computeDuration :: m (Maybe DurationFromNow) + computeDuration = mapM ($ selection) durationUntilTooOld + computeDuration >>= \case + Nothing -> forever $ threadDelay maxBound + Just Already -> do + -- it's already too old + pure $ GsmEventLeaveCaughtUp tracedSelection Already + Just (After dur) -> do + varTimeoutExpired <- SI.registerDelay (realToFrac dur + bonus) + + -- If the selection changes before the timeout expires, loop to + -- setup a new timeout for the new tip. -- - -- (NSF: I peeked at ghc/rts/STM.c today. 
The thing being counted by - -- the O(n*n) notation in the quote above is iterations of a C for loop - -- that reads a C array. The transaction log is a linked list of - -- chunks, each a 16 element array. So the 4 node kernel tvars + one - -- tvar for each of the first 12 peers fill up the first chunk, and - -- then there's a new chunk for each group of 16 peers beyond that. For - -- example, 44 peers would exactly fill 3 chunks. Thus, each readTVar - -- pages in at most 4 VM pages for the number of peers we're - -- anticipating. And then the STM validation at the end touches them - -- all one last time. Summary: seems likely to be fast enough.) - - blockUntilHonestAvailabilityAssumption :: m () - blockUntilHonestAvailabilityAssumption = - atomically $ check =<< isHaaSatisfied - - blockWhileHonestAvailabilityAssumption :: STM m () - blockWhileHonestAvailabilityAssumption = - check . not =<< isHaaSatisfied - -data TraceGsmEvent selection = + -- Otherwise the timeout expired before the selection changed + -- (or they both happened after the previous attempt of this + -- STM transaction), so the node is no longer in @CaughtUp@. 
+ join $ atomically $ do + expired <- LazySTM.readTVar varTimeoutExpired + let ev = GsmEventLeaveCaughtUp tracedSelection (After dur) + if expired + then pure (pure ev) + else do + selection' <- getCurrentSelection + check $ not $ equivalent selection selection' + pure $ blockWhileCaughtUpHelper bonus selection' + + blockUntilCaughtUp :: STM m (TraceGsmEvent tracedSelection) + blockUntilCaughtUp = do + -- STAGE 1: all ChainSync clients report no subsequent headers + varsState <- getChainSyncStates + states <- traverse StrictSTM.readTVar varsState + check $ + not (Map.null states) + && all peerIsIdle states + + -- STAGE 2: no candidate is better than the node's current + -- selection + -- + -- For the Bootstrap State Machine, it's fine to completely ignore + -- block diffusion pipelining here, because all bootstrap peers will + -- /promptly/ roll back the tentative header if its block body turns out + -- to be invalid (aka /trap header/). Thus the node will stay in + -- CaughtUp slightly longer, until the system is no longer pipelining a + -- block; general Praos reasoning ensures that won't take particularly + -- long. + selection <- getCurrentSelection + candidates <- traverse StrictSTM.readTVar varsState + let ok candidate = + WhetherCandidateIsBetter False + == candidateOverSelection selection candidate + check $ all ok candidates + + pure $ + GsmEventEnterCaughtUp + (Map.size states) + (cnvSelection selection) + + -- STAGE 3: the previous stages weren't so slow that the idler + -- set/candidate set/individual candidates changed + -- + -- At this point, the STM scheduler will automatically retry this + -- transaction if and only if any of the TVars are no longer + -- pointer-equal to what was read above.
That outcome is unlikely as + -- long as there are not a huge number of peers; as Simon Marlow wrote, + -- "Never read an unbounded number of TVars in a single transaction + -- because the O(n) performance of readTVar then gives O(n*n) for the + -- whole transaction." + -- + -- (NSF: I peeked at ghc/rts/STM.c today. The thing being counted by + -- the O(n*n) notation in the quote above is iterations of a C for loop + -- that reads a C array. The transaction log is a linked list of + -- chunks, each a 16 element array. So the 4 node kernel tvars + one + -- tvar for each of the first 12 peers fill up the first chunk, and + -- then there's a new chunk for each group of 16 peers beyond that. For + -- example, 44 peers would exactly fill 3 chunks. Thus, each readTVar + -- pages in at most 4 VM pages for the number of peers we're + -- anticipating. And then the STM validation at the end touches them + -- all one last time. Summary: seems likely to be fast enough.) + + blockUntilHonestAvailabilityAssumption :: m () + blockUntilHonestAvailabilityAssumption = + atomically $ check =<< isHaaSatisfied + + blockWhileHonestAvailabilityAssumption :: STM m () + blockWhileHonestAvailabilityAssumption = + check . 
not =<< isHaaSatisfied + +data TraceGsmEvent selection + = -- | how many peers and the current selection GsmEventEnterCaughtUp !Int !selection - -- ^ how many peers and the current selection - | + | -- | the current selection and its age GsmEventLeaveCaughtUp !selection !DurationFromNow - -- ^ the current selection and its age - | + | -- | the Honest Availability Assumption is now satisfied GsmEventPreSyncingToSyncing - -- ^ the Honest Availability Assumption is now satisfied - | + | -- | the Honest Availability Assumption is no longer satisfied GsmEventSyncingToPreSyncing - -- ^ the Honest Availability Assumption is no longer satisfied deriving (Eq, Show) {------------------------------------------------------------------------------- A helper for constructing a real 'GsmView' -------------------------------------------------------------------------------} -newtype WrapDurationUntilTooOld m blk = DurationUntilTooOld { - getDurationUntilTooOld :: Slot.WithOrigin Slot.SlotNo -> m DurationFromNow +newtype WrapDurationUntilTooOld m blk = DurationUntilTooOld + { getDurationUntilTooOld :: Slot.WithOrigin Slot.SlotNo -> m DurationFromNow } -- | The real system's 'durationUntilTooOld' realDurationUntilTooOld :: - ( HardFork.HasHardForkHistory blk - , MonadSTM m - ) - => L.LedgerConfig blk - -> STM m (L.LedgerState blk L.EmptyMK) - -> NominalDiffTime - -- ^ If the volatile tip is older than this, then the node will exit the - -- @CaughtUp@ state. - -- - -- Eg 'Ouroboros.Consensus.Node.llrnMaxCaughtUpAge' - -- - -- WARNING This function returns 'Already' if the wall clock is beyond the - -- current ledger state's translation horizon; that may be confusing if an - -- unexpectedly large 'NominalDiffTime' is given here (eg 1 one week). 
- -> Clock.SystemTime m - -> m (WrapDurationUntilTooOld m blk) + ( HardFork.HasHardForkHistory blk + , MonadSTM m + ) => + L.LedgerConfig blk -> + STM m (L.LedgerState blk L.EmptyMK) -> + -- | If the volatile tip is older than this, then the node will exit the + -- @CaughtUp@ state. + -- + -- Eg 'Ouroboros.Consensus.Node.llrnMaxCaughtUpAge' + -- + -- WARNING This function returns 'Already' if the wall clock is beyond the + -- current ledger state's translation horizon; that may be confusing if an + -- unexpectedly large 'NominalDiffTime' is given here (eg one week). + NominalDiffTime -> + Clock.SystemTime m -> + m (WrapDurationUntilTooOld m blk) realDurationUntilTooOld lcfg getLedgerState maxCaughtUpAge systemTime = do - runner <- - HardFork.runWithCachedSummary - $ HardFork.hardForkSummary lcfg <$> getLedgerState - pure $ DurationUntilTooOld $ \woSlot -> do - now <- Clock.systemTimeCurrent systemTime - case woSlot of - Slot.Origin -> pure $ toDur now $ Clock.RelativeTime 0 - Slot.At slot -> do - let qry = Qry.slotToWallclock slot - atomically $ HardFork.cachedRunQuery runner qry <&> \case - Left Qry.PastHorizon{} -> Already - Right (onset, _slotLen) -> toDur now onset - where - toDur - (Clock.RelativeTime now) - (Clock.getRelativeTime -> (+ maxCaughtUpAge) -> limit) - = if limit <= now then Already else After (limit - now) + runner <- + HardFork.runWithCachedSummary $ + HardFork.hardForkSummary lcfg <$> getLedgerState + pure $ DurationUntilTooOld $ \woSlot -> do + now <- Clock.systemTimeCurrent systemTime + case woSlot of + Slot.Origin -> pure $ toDur now $ Clock.RelativeTime 0 + Slot.At slot -> do + let qry = Qry.slotToWallclock slot + atomically $ + HardFork.cachedRunQuery runner qry <&> \case + Left Qry.PastHorizon{} -> Already + Right (onset, _slotLen) -> toDur now onset + where + toDur + (Clock.RelativeTime now) + (Clock.getRelativeTime -> (+ maxCaughtUpAge) -> limit) = + if limit <= now then Already else After (limit - now)
{------------------------------------------------------------------------------- A helper for constructing a real 'GsmView' @@ -486,18 +486,16 @@ realDurationUntilTooOld lcfg getLedgerState maxCaughtUpAge systemTime = do -- -- These comments constrain the result of 'realMarkerFile'; mock views in -- testing are free to be different. -data MarkerFileView m = MarkerFileView { - hasMarkerFile :: m Bool - , - -- | Remove the marker file - -- - -- Will throw an 'FsResourceDoesNotExist' error when it does not exist. - removeMarkerFile :: m () - , - -- | Create the marker file - -- - -- Idempotent. - touchMarkerFile :: m () +data MarkerFileView m = MarkerFileView + { hasMarkerFile :: m Bool + , removeMarkerFile :: m () + -- ^ Remove the marker file + -- + -- Will throw an 'FsResourceDoesNotExist' error when it does not exist. + , touchMarkerFile :: m () + -- ^ Create the marker file + -- + -- Idempotent. } -- | The real system's 'MarkerFileView' @@ -505,26 +503,24 @@ data MarkerFileView m = MarkerFileView { -- The strict 'ChainDB' argument is unused, but its existence ensures there's -- only one process using this file system. 
realMarkerFileView :: - MonadThrow m - => ChainDB m blk - -> HasFS m h - -- ^ should be independent of other filesystems, eg @gsm/@ - -> MarkerFileView m + MonadThrow m => + ChainDB m blk -> + -- | should be independent of other filesystems, eg @gsm/@ + HasFS m h -> + MarkerFileView m realMarkerFileView !_cdb hasFS = - MarkerFileView { - hasMarkerFile - , - removeMarkerFile = removeFile hasFS markerFile - , - touchMarkerFile = do - createDirectoryIfMissing hasFS True (mkFsPath []) - alreadyExists <- hasMarkerFile - unless alreadyExists $ - withFile hasFS markerFile (WriteMode MustBeNew) $ \_h -> - return () - } - where - hasMarkerFile = doesFileExist hasFS markerFile + MarkerFileView + { hasMarkerFile + , removeMarkerFile = removeFile hasFS markerFile + , touchMarkerFile = do + createDirectoryIfMissing hasFS True (mkFsPath []) + alreadyExists <- hasMarkerFile + unless alreadyExists $ + withFile hasFS markerFile (WriteMode MustBeNew) $ \_h -> + return () + } + where + hasMarkerFile = doesFileExist hasFS markerFile -- | The path to the GSM's /Caught-Up persistent marker/ inside its dedicated -- 'HasFS' @@ -539,15 +535,12 @@ markerFile = mkFsPath ["CaughtUpMarker"] -------------------------------------------------------------------------------} -- | Arguments the NodeKernel has to take because of the GSM -data GsmNodeKernelArgs m blk = GsmNodeKernelArgs { - gsmAntiThunderingHerd :: StdGen - -- ^ See 'antiThunderingHerd' - , - gsmDurationUntilTooOld :: Maybe (WrapDurationUntilTooOld m blk) - -- ^ See 'durationUntilTooOld' - , - gsmMarkerFileView :: MarkerFileView m - , - gsmMinCaughtUpDuration :: NominalDiffTime - -- ^ See 'minCaughtUpDuration' +data GsmNodeKernelArgs m blk = GsmNodeKernelArgs + { gsmAntiThunderingHerd :: StdGen + -- ^ See 'antiThunderingHerd' + , gsmDurationUntilTooOld :: Maybe (WrapDurationUntilTooOld m blk) + -- ^ See 'durationUntilTooOld' + , gsmMarkerFileView :: MarkerFileView m + , gsmMinCaughtUpDuration :: NominalDiffTime + -- ^ See 
'minCaughtUpDuration' } diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Genesis.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Genesis.hs index 585902f892..45ad239a7c 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Genesis.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Genesis.hs @@ -7,8 +7,8 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Node.Genesis ( - -- * 'GenesisConfig' +module Ouroboros.Consensus.Node.Genesis + ( -- * 'GenesisConfig' GenesisConfig (..) , GenesisConfigFlags (..) , LoEAndGDDConfig (..) @@ -16,6 +16,7 @@ module Ouroboros.Consensus.Node.Genesis ( , disableGenesisConfig , enableGenesisConfigDefault , mkGenesisConfig + -- * NodeKernel helpers , GenesisNodeKernelArgs (..) , LoEAndGDDNodeKernelArgs (..) @@ -23,34 +24,38 @@ module Ouroboros.Consensus.Node.Genesis ( , setGetLoEFragment ) where -import Control.Monad (join) -import Data.Maybe (fromMaybe) -import Data.Traversable (for) -import Data.Typeable (Typeable) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (CSJConfig (..), CSJEnabledConfig (..), - ChainSyncLoPBucketConfig (..), - ChainSyncLoPBucketEnabledConfig (..)) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck - (HistoricityCutoff (..)) -import qualified Ouroboros.Consensus.Node.GsmState as GSM -import Ouroboros.Consensus.Storage.ChainDB (ChainDbArgs) -import qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Args as ChainDB -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.AnchoredFragment 
(AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.BlockFetch - (GenesisBlockFetchConfiguration (..)) +import Control.Monad (join) +import Data.Maybe (fromMaybe) +import Data.Traversable (for) +import Data.Typeable (Typeable) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( CSJConfig (..) + , CSJEnabledConfig (..) + , ChainSyncLoPBucketConfig (..) + , ChainSyncLoPBucketEnabledConfig (..) + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck + ( HistoricityCutoff (..) + ) +import Ouroboros.Consensus.Node.GsmState qualified as GSM +import Ouroboros.Consensus.Storage.ChainDB (ChainDbArgs) +import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args qualified as ChainDB +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.BlockFetch + ( GenesisBlockFetchConfiguration (..) + ) -- | Whether to en-/disable the Limit on Eagerness and the Genesis Density -- Disconnector. -data LoEAndGDDConfig a = - LoEAndGDDEnabled !a +data LoEAndGDDConfig a + = LoEAndGDDEnabled !a | LoEAndGDDDisabled deriving stock (Eq, Generic, Show, Functor, Foldable, Traversable) @@ -59,36 +64,39 @@ data LoEAndGDDConfig a = -- Usually, 'enableGenesisConfigDefault' or 'disableGenesisConfig' can be used. -- See the haddocks of the types of the individual fields for details. 
data GenesisConfig = GenesisConfig - { gcBlockFetchConfig :: !GenesisBlockFetchConfiguration + { gcBlockFetchConfig :: !GenesisBlockFetchConfiguration , gcChainSyncLoPBucketConfig :: !ChainSyncLoPBucketConfig - , gcCSJConfig :: !CSJConfig - , gcLoEAndGDDConfig :: !(LoEAndGDDConfig LoEAndGDDParams) - , gcHistoricityCutoff :: !(Maybe HistoricityCutoff) - } deriving stock (Eq, Generic, Show) + , gcCSJConfig :: !CSJConfig + , gcLoEAndGDDConfig :: !(LoEAndGDDConfig LoEAndGDDParams) + , gcHistoricityCutoff :: !(Maybe HistoricityCutoff) + } + deriving stock (Eq, Generic, Show) -- | Genesis configuration flags and low-level args, as parsed from config file or CLI data GenesisConfigFlags = GenesisConfigFlags - { gcfEnableCSJ :: Bool - , gcfEnableLoEAndGDD :: Bool - , gcfEnableLoP :: Bool + { gcfEnableCSJ :: Bool + , gcfEnableLoEAndGDD :: Bool + , gcfEnableLoP :: Bool , gcfBlockFetchGracePeriod :: Maybe DiffTime - , gcfBucketCapacity :: Maybe Integer - , gcfBucketRate :: Maybe Integer - , gcfCSJJumpSize :: Maybe SlotNo - , gcfGDDRateLimit :: Maybe DiffTime - } deriving stock (Eq, Generic, Show) + , gcfBucketCapacity :: Maybe Integer + , gcfBucketRate :: Maybe Integer + , gcfCSJJumpSize :: Maybe SlotNo + , gcfGDDRateLimit :: Maybe DiffTime + } + deriving stock (Eq, Generic, Show) defaultGenesisConfigFlags :: GenesisConfigFlags -defaultGenesisConfigFlags = GenesisConfigFlags - { gcfEnableCSJ = True - , gcfEnableLoEAndGDD = True - , gcfEnableLoP = True - , gcfBlockFetchGracePeriod = Nothing - , gcfBucketCapacity = Nothing - , gcfBucketRate = Nothing - , gcfCSJJumpSize = Nothing - , gcfGDDRateLimit = Nothing - } +defaultGenesisConfigFlags = + GenesisConfigFlags + { gcfEnableCSJ = True + , gcfEnableLoEAndGDD = True + , gcfEnableLoP = True + , gcfBlockFetchGracePeriod = Nothing + , gcfBucketCapacity = Nothing + , gcfBucketRate = Nothing + , gcfCSJJumpSize = Nothing + , gcfGDDRateLimit = Nothing + } enableGenesisConfigDefault :: GenesisConfig enableGenesisConfigDefault = 
mkGenesisConfig $ Just defaultGenesisConfigFlags @@ -98,143 +106,164 @@ disableGenesisConfig :: GenesisConfig disableGenesisConfig = mkGenesisConfig Nothing mkGenesisConfig :: Maybe GenesisConfigFlags -> GenesisConfig -mkGenesisConfig Nothing = -- disable Genesis +mkGenesisConfig Nothing = + -- disable Genesis GenesisConfig - { gcBlockFetchConfig = GenesisBlockFetchConfiguration - { gbfcGracePeriod = 0 -- no grace period when Genesis is disabled - } + { gcBlockFetchConfig = + GenesisBlockFetchConfiguration + { gbfcGracePeriod = 0 -- no grace period when Genesis is disabled + } , gcChainSyncLoPBucketConfig = ChainSyncLoPBucketDisabled - , gcCSJConfig = CSJDisabled - , gcLoEAndGDDConfig = LoEAndGDDDisabled - , gcHistoricityCutoff = Nothing + , gcCSJConfig = CSJDisabled + , gcLoEAndGDDConfig = LoEAndGDDDisabled + , gcHistoricityCutoff = Nothing } mkGenesisConfig (Just cfg) = GenesisConfig - { gcBlockFetchConfig = GenesisBlockFetchConfiguration - { gbfcGracePeriod - } - , gcChainSyncLoPBucketConfig = if gcfEnableLoP - then ChainSyncLoPBucketEnabled ChainSyncLoPBucketEnabledConfig - { csbcCapacity - , csbcRate + { gcBlockFetchConfig = + GenesisBlockFetchConfiguration + { gbfcGracePeriod } - else ChainSyncLoPBucketDisabled - , gcCSJConfig = if gcfEnableCSJ - then CSJEnabled CSJEnabledConfig - { csjcJumpSize - } - else CSJDisabled - , gcLoEAndGDDConfig = if gcfEnableLoEAndGDD - then LoEAndGDDEnabled LoEAndGDDParams{lgpGDDRateLimit} - else LoEAndGDDDisabled + , gcChainSyncLoPBucketConfig = + if gcfEnableLoP + then + ChainSyncLoPBucketEnabled + ChainSyncLoPBucketEnabledConfig + { csbcCapacity + , csbcRate + } + else ChainSyncLoPBucketDisabled + , gcCSJConfig = + if gcfEnableCSJ + then + CSJEnabled + CSJEnabledConfig + { csjcJumpSize + } + else CSJDisabled + , gcLoEAndGDDConfig = + if gcfEnableLoEAndGDD + then LoEAndGDDEnabled LoEAndGDDParams{lgpGDDRateLimit} + else LoEAndGDDDisabled , -- Duration in seconds of one Cardano mainnet Shelley stability window -- (3k/f slots 
times one second per slot) plus one extra hour as a -- safety margin. gcHistoricityCutoff = Just $ HistoricityCutoff $ 3 * 2160 * 20 + 3600 } - where - GenesisConfigFlags { - gcfEnableLoP - , gcfEnableCSJ - , gcfEnableLoEAndGDD - , gcfBlockFetchGracePeriod - , gcfBucketCapacity - , gcfBucketRate - , gcfCSJJumpSize - , gcfGDDRateLimit - } = cfg - - -- The minimum amount of time during which the Genesis BlockFetch logic will - -- download blocks from a specific peer (even if it is not performing well - -- during that period). - defaultBlockFetchGracePeriod = 10 -- seconds - - -- LoP parameters. Empirically, it takes less than 1ms to validate a header, - -- so leaking one token per 2ms is conservative. The capacity of 100_000 - -- tokens corresponds to 200s, which is definitely enough to handle long GC - -- pauses; we could even make this more conservative. - defaultCapacity = 100_000 -- number of tokens - defaultRate = 500 -- tokens per second leaking, 1/2ms - - -- The larger Shelley forecast range (3 * 2160 * 20) works in more recent - -- ranges of slots, but causes syncing to block in Byron. A future - -- improvement would be to make this era-dynamic, such that we can use the - -- larger (and hence more efficient) larger CSJ jump size in Shelley-based - -- eras. - defaultCSJJumpSize = 2 * 2160 -- Byron forecast range - - -- Limiting the performance impact of the GDD. 
- defaultGDDRateLimit = 1.0 -- seconds
-
- gbfcGracePeriod = fromMaybe defaultBlockFetchGracePeriod gcfBlockFetchGracePeriod
- csbcCapacity = fromMaybe defaultCapacity gcfBucketCapacity
- csbcRate = maybe defaultRate (fromInteger @Rational) gcfBucketRate
- csjcJumpSize = fromMaybe defaultCSJJumpSize gcfCSJJumpSize
- lgpGDDRateLimit = fromMaybe defaultGDDRateLimit gcfGDDRateLimit
+ where
+ GenesisConfigFlags
+ { gcfEnableLoP
+ , gcfEnableCSJ
+ , gcfEnableLoEAndGDD
+ , gcfBlockFetchGracePeriod
+ , gcfBucketCapacity
+ , gcfBucketRate
+ , gcfCSJJumpSize
+ , gcfGDDRateLimit
+ } = cfg
+
+ -- The minimum amount of time during which the Genesis BlockFetch logic will
+ -- download blocks from a specific peer (even if it is not performing well
+ -- during that period).
+ defaultBlockFetchGracePeriod = 10 -- seconds
+
+ -- LoP parameters. Empirically, it takes less than 1ms to validate a header,
+ -- so leaking one token per 2ms is conservative. The capacity of 100_000
+ -- tokens corresponds to 200s, which is definitely enough to handle long GC
+ -- pauses; we could even make this more conservative.
+ defaultCapacity = 100_000 -- number of tokens
+ defaultRate = 500 -- tokens per second leaking, 1/2ms
+
+ -- The larger Shelley forecast range (3 * 2160 * 20) works in more recent
+ -- ranges of slots, but causes syncing to block in Byron. A future
+ -- improvement would be to make this era-dynamic, such that we can use the
+ -- larger (and hence more efficient) CSJ jump size in Shelley-based
+ -- eras.
+ defaultCSJJumpSize = 2 * 2160 -- Byron forecast range
+
+ -- Limiting the performance impact of the GDD.
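Editor's note, not part of the patch: the figures quoted in the LoP and historicity comments above can be sanity-checked with a few lines of standalone Haskell. All names below are illustrative stand-ins, not definitions from the codebase.

```haskell
{-# LANGUAGE NumericUnderscores #-}

-- Illustrative stand-ins restating the defaults discussed above.
defaultCapacityTokens :: Integer
defaultCapacityTokens = 100_000 -- LoP bucket capacity

defaultRateTokensPerSec :: Integer
defaultRateTokensPerSec = 500 -- one token leaked per 2ms

-- Draining a full bucket takes 100_000 / 500 = 200 seconds, the figure
-- the comment above cites as ample headroom for long GC pauses.
bucketDrainSeconds :: Integer
bucketDrainSeconds = defaultCapacityTokens `div` defaultRateTokensPerSec

-- One mainnet Shelley stability window (3k/f slots at one second per
-- slot) plus a one-hour safety margin, as used for 'gcHistoricityCutoff'.
historicityCutoffSeconds :: Integer
historicityCutoffSeconds = 3 * 2160 * 20 + 3600

main :: IO ()
main = print (bucketDrainSeconds, historicityCutoffSeconds)
```

Running it prints `(200,133200)`: the 200 s drain time matches the comment, and 133 200 s is the 36-hour stability window plus the one-hour margin.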
+ defaultGDDRateLimit = 1.0 -- seconds + gbfcGracePeriod = fromMaybe defaultBlockFetchGracePeriod gcfBlockFetchGracePeriod + csbcCapacity = fromMaybe defaultCapacity gcfBucketCapacity + csbcRate = maybe defaultRate (fromInteger @Rational) gcfBucketRate + csjcJumpSize = fromMaybe defaultCSJJumpSize gcfCSJJumpSize + lgpGDDRateLimit = fromMaybe defaultGDDRateLimit gcfGDDRateLimit newtype LoEAndGDDParams = LoEAndGDDParams - { -- | How often to evaluate GDD. 0 means as soon as possible. - -- Otherwise, no faster than once every T seconds, where T is the - -- value of the field. - lgpGDDRateLimit :: DiffTime - } deriving stock (Eq, Generic, Show) + { lgpGDDRateLimit :: DiffTime + -- ^ How often to evaluate GDD. 0 means as soon as possible. + -- Otherwise, no faster than once every T seconds, where T is the + -- value of the field. + } + deriving stock (Eq, Generic, Show) -- | Genesis-related arguments needed by the NodeKernel initialization logic. -data GenesisNodeKernelArgs m blk = GenesisNodeKernelArgs { - gnkaLoEAndGDDArgs :: !(LoEAndGDDConfig (LoEAndGDDNodeKernelArgs m blk)) +data GenesisNodeKernelArgs m blk = GenesisNodeKernelArgs + { gnkaLoEAndGDDArgs :: !(LoEAndGDDConfig (LoEAndGDDNodeKernelArgs m blk)) } -data LoEAndGDDNodeKernelArgs m blk = LoEAndGDDNodeKernelArgs { - -- | A TVar containing an action that returns the 'ChainDB.GetLoEFragment' - -- action. We use this extra indirection to update this action after we - -- opened the ChainDB (which happens before we initialize the NodeKernel). - -- After that, this TVar will not be modified again. - lgnkaLoEFragmentTVar :: !(StrictTVar m (ChainDB.GetLoEFragment m blk)) - , lgnkaGDDRateLimit :: DiffTime +data LoEAndGDDNodeKernelArgs m blk = LoEAndGDDNodeKernelArgs + { lgnkaLoEFragmentTVar :: !(StrictTVar m (ChainDB.GetLoEFragment m blk)) + -- ^ A TVar containing an action that returns the 'ChainDB.GetLoEFragment' + -- action. 
We use this extra indirection to update this action after we
+ -- opened the ChainDB (which happens before we initialize the NodeKernel).
+ -- After that, this TVar will not be modified again.
+ , lgnkaGDDRateLimit :: DiffTime
 }
+
 -- | Create the initial 'GenesisNodeKernelArgs' (with a temporary
 -- 'ChainDB.GetLoEFragment' that will be replaced via 'setGetLoEFragment') and a
 -- function to update the 'ChainDbArgs' accordingly.
 mkGenesisNodeKernelArgs ::
- forall m blk. (IOLike m, GetHeader blk, Typeable blk)
- => GenesisConfig
- -> m ( GenesisNodeKernelArgs m blk
- , Complete ChainDbArgs m blk -> Complete ChainDbArgs m blk
- )
+ forall m blk.
+ (IOLike m, GetHeader blk, Typeable blk) =>
+ GenesisConfig ->
+ m
+ ( GenesisNodeKernelArgs m blk
+ , Complete ChainDbArgs m blk -> Complete ChainDbArgs m blk
+ )
 mkGenesisNodeKernelArgs gcfg = do
- gnkaLoEAndGDDArgs <- for (gcLoEAndGDDConfig gcfg) $ \p -> do
- loeFragmentTVar <- newTVarIO $ pure $
+ gnkaLoEAndGDDArgs <- for (gcLoEAndGDDConfig gcfg) $ \p -> do
+ loeFragmentTVar <-
+ newTVarIO $
+ pure $
 -- Use the most conservative LoE fragment until 'setGetLoEFragment'
 -- is called.
- ChainDB.LoEEnabled $ AF.Empty AF.AnchorGenesis - pure LoEAndGDDNodeKernelArgs - { lgnkaLoEFragmentTVar = loeFragmentTVar - , lgnkaGDDRateLimit = lgpGDDRateLimit p - } - let updateChainDbArgs = case gnkaLoEAndGDDArgs of - LoEAndGDDDisabled -> id - LoEAndGDDEnabled lgnkArgs -> \cfg -> - cfg { ChainDB.cdbsArgs = - (ChainDB.cdbsArgs cfg) { ChainDB.cdbsLoE = getLoEFragment } - } - where - getLoEFragment = join $ readTVarIO $ lgnkaLoEFragmentTVar lgnkArgs - pure (GenesisNodeKernelArgs{gnkaLoEAndGDDArgs}, updateChainDbArgs) + ChainDB.LoEEnabled $ + AF.Empty AF.AnchorGenesis + pure + LoEAndGDDNodeKernelArgs + { lgnkaLoEFragmentTVar = loeFragmentTVar + , lgnkaGDDRateLimit = lgpGDDRateLimit p + } + let updateChainDbArgs = case gnkaLoEAndGDDArgs of + LoEAndGDDDisabled -> id + LoEAndGDDEnabled lgnkArgs -> \cfg -> + cfg + { ChainDB.cdbsArgs = + (ChainDB.cdbsArgs cfg){ChainDB.cdbsLoE = getLoEFragment} + } + where + getLoEFragment = join $ readTVarIO $ lgnkaLoEFragmentTVar lgnkArgs + pure (GenesisNodeKernelArgs{gnkaLoEAndGDDArgs}, updateChainDbArgs) -- | Set 'gnkaGetLoEFragment' to the actual logic for determining the current -- LoE fragment. setGetLoEFragment :: - forall m blk. (IOLike m, GetHeader blk, Typeable blk) - => STM m GSM.GsmState - -> STM m (AnchoredFragment (HeaderWithTime blk)) - -- ^ The LoE fragment. - -> StrictTVar m (ChainDB.GetLoEFragment m blk) - -> m () + forall m blk. + (IOLike m, GetHeader blk, Typeable blk) => + STM m GSM.GsmState -> + -- | The LoE fragment. 
+ STM m (AnchoredFragment (HeaderWithTime blk)) -> + StrictTVar m (ChainDB.GetLoEFragment m blk) -> + m () setGetLoEFragment readGsmState readLoEFragment varGetLoEFragment = - atomically $ writeTVar varGetLoEFragment getLoEFragment - where - getLoEFragment :: ChainDB.GetLoEFragment m blk - getLoEFragment = atomically $ readGsmState >>= \case + atomically $ writeTVar varGetLoEFragment getLoEFragment + where + getLoEFragment :: ChainDB.GetLoEFragment m blk + getLoEFragment = + atomically $ + readGsmState >>= \case -- When the Honest Availability Assumption cannot currently be -- guaranteed, we should not select any blocks that would cause our -- immutable tip to advance, so we return the most conservative LoE @@ -242,8 +271,8 @@ setGetLoEFragment readGsmState readLoEFragment varGetLoEFragment = GSM.PreSyncing -> pure $ ChainDB.LoEEnabled $ AF.Empty AF.AnchorGenesis -- When we are syncing, return the current LoE fragment. - GSM.Syncing -> + GSM.Syncing -> ChainDB.LoEEnabled <$> readLoEFragment -- When we are caught up, the LoE is disabled. - GSM.CaughtUp -> + GSM.CaughtUp -> pure ChainDB.LoEDisabled diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Recovery.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Recovery.hs index 8c44c86f79..d1a4b1926e 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Recovery.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Recovery.hs @@ -1,25 +1,29 @@ {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE ScopedTypeVariables #-} -module Ouroboros.Consensus.Node.Recovery ( - LastShutDownWasClean (..) +module Ouroboros.Consensus.Node.Recovery + ( LastShutDownWasClean (..) 
, createCleanShutdownMarker , hasCleanShutdownMarker , removeCleanShutdownMarker , runWithCheckedDB ) where -import Control.Monad (unless, when) -import Control.Tracer (Tracer, traceWith) -import Data.Proxy (Proxy) -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block (StandardHash) -import Ouroboros.Consensus.Node.Exit (ExitReason (..), toExitReason) -import Ouroboros.Consensus.Storage.ChainDB -import Ouroboros.Consensus.Util.IOLike -import System.FS.API (HasFS, doesFileExist, removeFile, withFile) -import System.FS.API.Types (AllowExisting (..), FsPath, OpenMode (..), - mkFsPath) +import Control.Monad (unless, when) +import Control.Tracer (Tracer, traceWith) +import Data.Proxy (Proxy) +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block (StandardHash) +import Ouroboros.Consensus.Node.Exit (ExitReason (..), toExitReason) +import Ouroboros.Consensus.Storage.ChainDB +import Ouroboros.Consensus.Util.IOLike +import System.FS.API (HasFS, doesFileExist, removeFile, withFile) +import System.FS.API.Types + ( AllowExisting (..) + , FsPath + , OpenMode (..) + , mkFsPath + ) -- | The path to the /clean shutdown marker file/. cleanShutdownMarkerFile :: FsPath @@ -31,43 +35,44 @@ newtype LastShutDownWasClean = LastShutDownWasClean Bool -- | Return 'True' when 'cleanShutdownMarkerFile' exists. hasCleanShutdownMarker :: - HasFS m h - -> m Bool + HasFS m h -> + m Bool hasCleanShutdownMarker hasFS = - doesFileExist hasFS cleanShutdownMarkerFile + doesFileExist hasFS cleanShutdownMarkerFile -- | Create the 'cleanShutdownMarkerFile'. -- -- Idempotent. 
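Editor's note, not part of the patch: the check-then-create pattern that makes `createCleanShutdownMarker` idempotent can be sketched with plain `System.Directory` in place of the fs-api `HasFS` abstraction; `createMarkerIfMissing` and the file name below are hypothetical.

```haskell
import Control.Monad (unless)
import System.Directory (doesFileExist, removeFile)

-- Create the marker only when it is absent, so repeated calls are safe.
createMarkerIfMissing :: FilePath -> IO ()
createMarkerIfMissing path = do
  alreadyExists <- doesFileExist path
  unless alreadyExists $ writeFile path ""

main :: IO ()
main = do
  let marker = "clean-shutdown-marker.tmp"
  createMarkerIfMissing marker
  createMarkerIfMissing marker -- second call is a no-op
  exists <- doesFileExist marker
  removeFile marker
  print exists
```

Printing `True` after two calls illustrates the idempotence the real function documents.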
createCleanShutdownMarker :: - IOLike m - => HasFS m h - -> m () + IOLike m => + HasFS m h -> + m () createCleanShutdownMarker hasFS = do - alreadyExists <- hasCleanShutdownMarker hasFS - unless alreadyExists $ - withFile hasFS cleanShutdownMarkerFile (WriteMode MustBeNew) $ \_h -> - return () + alreadyExists <- hasCleanShutdownMarker hasFS + unless alreadyExists $ + withFile hasFS cleanShutdownMarkerFile (WriteMode MustBeNew) $ \_h -> + return () -- | Remove 'cleanShutdownMarkerFile'. -- -- Will throw an 'FsResourceDoesNotExist' error when it does not exist. removeCleanShutdownMarker :: - HasFS m h - -> m () + HasFS m h -> + m () removeCleanShutdownMarker hasFS = - removeFile hasFS cleanShutdownMarkerFile + removeFile hasFS cleanShutdownMarkerFile -- | Return 'True' if the given exception indicates that recovery of the -- database is required on the next startup. exceptionRequiresRecovery :: - forall blk. (StandardHash blk, Typeable blk) - => Proxy blk - -> SomeException - -> Bool + forall blk. + (StandardHash blk, Typeable blk) => + Proxy blk -> + SomeException -> + Bool exceptionRequiresRecovery pb e = case toExitReason pb e of - DatabaseCorruption -> True - _ -> False + DatabaseCorruption -> True + _ -> False -- | A bracket function that manages the clean-shutdown marker on disk. -- @@ -87,48 +92,54 @@ exceptionRequiresRecovery pb e = case toExitReason pb e of -- (see 'exceptionRequiresRecovery') that indicate corruption, for which we -- want the next startup to do revalidation. runWithCheckedDB :: - forall a m h blk. (IOLike m, StandardHash blk, Typeable blk) - => Proxy blk - -> Tracer m (TraceEvent blk) - -> HasFS m h - -> (LastShutDownWasClean -> (ChainDB m blk -> m a -> m a) -> m a) - -> m a + forall a m h blk. 
+ (IOLike m, StandardHash blk, Typeable blk) => + Proxy blk -> + Tracer m (TraceEvent blk) -> + HasFS m h -> + (LastShutDownWasClean -> (ChainDB m blk -> m a -> m a) -> m a) -> + m a runWithCheckedDB pb tracer hasFS body = do - -- When we shut down cleanly, we create a marker file so that the next - -- time we start, we know we don't have to validate the contents of the - -- whole ChainDB. When we shut down with an exception indicating - -- corruption or something going wrong with the file system, we don't - -- create this marker file so that the next time we start, we do a full - -- validation. - wasClean <- hasCleanShutdownMarker hasFS - unless wasClean (traceWith tracer TraceLastShutdownUnclean) - removeMarkerOnUncleanShutdown wasClean - $ body - (LastShutDownWasClean wasClean) - (\_cdb runWithInitializedChainDB -> createMarkerOnCleanShutdown $ do - -- ChainDB initialization has finished by the time we reach this - -- point. We remove the marker so that a SIGKILL will cause an unclean - -- shutdown. - when wasClean $ removeCleanShutdownMarker hasFS - runWithInitializedChainDB - ) - where - -- | If there is a unclean exception during ChainDB initialization, we want - -- to remove the marker file, so we install this handler. - -- - -- It is OK to also wrap this handler around code that runs after ChainDB - -- initialization, because the condition on this handler is the opposite of - -- the condition in the @createMarkerOnCleanShutdown@ handler. - removeMarkerOnUncleanShutdown wasClean = if not wasClean then id else onExceptionIf - (exceptionRequiresRecovery pb) - (removeCleanShutdownMarker hasFS) + -- When we shut down cleanly, we create a marker file so that the next + -- time we start, we know we don't have to validate the contents of the + -- whole ChainDB. 
When we shut down with an exception indicating
+ -- corruption or something going wrong with the file system, we don't
+ -- create this marker file so that the next time we start, we do a full
+ -- validation.
+ wasClean <- hasCleanShutdownMarker hasFS
+ unless wasClean (traceWith tracer TraceLastShutdownUnclean)
+ removeMarkerOnUncleanShutdown wasClean $
+ body
+ (LastShutDownWasClean wasClean)
+ ( \_cdb runWithInitializedChainDB -> createMarkerOnCleanShutdown $ do
+ -- ChainDB initialization has finished by the time we reach this
+ -- point. We remove the marker so that a SIGKILL will cause an unclean
+ -- shutdown.
+ when wasClean $ removeCleanShutdownMarker hasFS
+ runWithInitializedChainDB
+ )
+ where
+ -- \| If there is an unclean exception during ChainDB initialization, we want
+ -- to remove the marker file, so we install this handler.
+ --
+ -- It is OK to also wrap this handler around code that runs after ChainDB
+ -- initialization, because the condition on this handler is the opposite of
+ -- the condition in the @createMarkerOnCleanShutdown@ handler.
+ removeMarkerOnUncleanShutdown wasClean =
+ if not wasClean
+ then id
+ else
+ onExceptionIf
+ (exceptionRequiresRecovery pb)
+ (removeCleanShutdownMarker hasFS)
- -- | If a clean exception terminates the running node after ChainDB
- -- initialization, we want to create the marker file.
- --
- -- NOTE: we assume the action (i.e., the node itself) never terminates without
- -- an exception.
- createMarkerOnCleanShutdown = onExceptionIf
+ -- \| If a clean exception terminates the running node after ChainDB
+ -- initialization, we want to create the marker file.
+ --
+ -- NOTE: we assume the action (i.e., the node itself) never terminates without
+ -- an exception.
+ createMarkerOnCleanShutdown =
+ onExceptionIf
 (not . 
exceptionRequiresRecovery pb)
 (createCleanShutdownMarker hasFS)
@@ -137,11 +148,14 @@ runWithCheckedDB pb tracer hasFS body = do
-------------------------------------------------------------------------------}
onExceptionIf ::
- (IOLike m, Exception e)
- => (e -> Bool) -- ^ Predicate to selection exceptions
- -> m () -- ^ Exception handler
- -> m a
- -> m a
-onExceptionIf p h m = m `catch` \e -> do
+ (IOLike m, Exception e) =>
+ -- | Predicate to select exceptions
+ (e -> Bool) ->
+ -- | Exception handler
+ m () ->
+ m a ->
+ m a
+onExceptionIf p h m =
+ m `catch` \e -> do
 when (p e) h
 throwIO e
diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/RethrowPolicy.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/RethrowPolicy.hs
index 4261ac8193..c67f62a236 100644
--- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/RethrowPolicy.hs
+++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/RethrowPolicy.hs
@@ -2,27 +2,34 @@
 module Ouroboros.Consensus.Node.RethrowPolicy (consensusRethrowPolicy) where
-import Control.Monad.Class.MonadAsync (ExceptionInLinkedThread (..))
-import Control.ResourceRegistry (RegistryClosedException,
- ResourceRegistryThreadException, TempRegistryException)
-import Data.Proxy (Proxy)
-import Data.Typeable (Typeable)
-import Ouroboros.Consensus.Block (StandardHash)
-import Ouroboros.Consensus.BlockchainTime
-import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server
- (BlockFetchServerException)
-import Ouroboros.Consensus.MiniProtocol.ChainSync.Client
- (ChainSyncClientException)
-import Ouroboros.Consensus.Node.DbLock
-import Ouroboros.Consensus.Node.DbMarker (DbMarkerError)
-import Ouroboros.Consensus.Storage.ChainDB.API (ChainDbError (..),
- ChainDbFailure)
-import Ouroboros.Consensus.Storage.ImmutableDB.API (ImmutableDBError)
-import qualified 
Ouroboros.Consensus.Storage.ImmutableDB.API as ImmutableDB -import Ouroboros.Consensus.Storage.VolatileDB.API (VolatileDBError) -import qualified Ouroboros.Consensus.Storage.VolatileDB.API as VolatileDB -import Ouroboros.Network.RethrowPolicy -import System.FS.API.Types (FsError) +import Control.Monad.Class.MonadAsync (ExceptionInLinkedThread (..)) +import Control.ResourceRegistry + ( RegistryClosedException + , ResourceRegistryThreadException + , TempRegistryException + ) +import Data.Proxy (Proxy) +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block (StandardHash) +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server + ( BlockFetchServerException + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientException + ) +import Ouroboros.Consensus.Node.DbLock +import Ouroboros.Consensus.Node.DbMarker (DbMarkerError) +import Ouroboros.Consensus.Storage.ChainDB.API + ( ChainDbError (..) + , ChainDbFailure + ) +import Ouroboros.Consensus.Storage.ImmutableDB.API (ImmutableDBError) +import Ouroboros.Consensus.Storage.ImmutableDB.API qualified as ImmutableDB +import Ouroboros.Consensus.Storage.VolatileDB.API (VolatileDBError) +import Ouroboros.Consensus.Storage.VolatileDB.API qualified as VolatileDB +import Ouroboros.Network.RethrowPolicy +import System.FS.API.Types (FsError) -- Exception raised during interaction with the peer -- @@ -39,82 +46,85 @@ import System.FS.API.Types (FsError) -- TODO: Talk to devops about what they should do when the node does -- terminate with a storage layer exception (restart with full recovery). consensusRethrowPolicy :: - forall blk. (Typeable blk, StandardHash blk) - => Proxy blk - -> RethrowPolicy + forall blk. 
+ (Typeable blk, StandardHash blk) => + Proxy blk -> + RethrowPolicy consensusRethrowPolicy pb = - mkRethrowPolicy (\_ctx (_ :: DbMarkerError) -> shutdownNode) - -- Any exceptions in the storage layer should terminate the node - -- - -- NOTE: We do not catch IOExceptions here; they /ought/ to be caught - -- by the FS layer (and turn into FsError). If we do want to catch - -- them, we'd somehow have to distinguish between IO exceptions - -- arising from disk I/O (shutdownNode) and those arising from - -- network failures (SuspendConsumer). - <> mkRethrowPolicy (\_ctx (_ :: DbMarkerError) -> shutdownNode) - <> mkRethrowPolicy (\_ctx (_ :: DbLocked) -> shutdownNode) - <> mkRethrowPolicy (\_ctx (_ :: ChainDbFailure blk) -> shutdownNode) - <> mkRethrowPolicy (\_ctx (e :: VolatileDBError blk) -> + mkRethrowPolicy (\_ctx (_ :: DbMarkerError) -> shutdownNode) + -- Any exceptions in the storage layer should terminate the node + -- + -- NOTE: We do not catch IOExceptions here; they /ought/ to be caught + -- by the FS layer (and turn into FsError). If we do want to catch + -- them, we'd somehow have to distinguish between IO exceptions + -- arising from disk I/O (shutdownNode) and those arising from + -- network failures (SuspendConsumer). 
+ <> mkRethrowPolicy (\_ctx (_ :: DbMarkerError) -> shutdownNode) + <> mkRethrowPolicy (\_ctx (_ :: DbLocked) -> shutdownNode) + <> mkRethrowPolicy (\_ctx (_ :: ChainDbFailure blk) -> shutdownNode) + <> mkRethrowPolicy + ( \_ctx (e :: VolatileDBError blk) -> case e of - VolatileDB.ApiMisuse{} -> ourBug - VolatileDB.UnexpectedFailure{} -> shutdownNode) - <> mkRethrowPolicy (\_ctx (e :: ImmutableDBError blk) -> + VolatileDB.ApiMisuse{} -> ourBug + VolatileDB.UnexpectedFailure{} -> shutdownNode + ) + <> mkRethrowPolicy + ( \_ctx (e :: ImmutableDBError blk) -> case e of - ImmutableDB.ApiMisuse{} -> ourBug - ImmutableDB.UnexpectedFailure{} -> shutdownNode) - <> mkRethrowPolicy (\_ctx (_ :: FsError) -> shutdownNode) - - -- When the system clock moved back, we have to restart the node. + ImmutableDB.ApiMisuse{} -> ourBug + ImmutableDB.UnexpectedFailure{} -> shutdownNode + ) + <> mkRethrowPolicy (\_ctx (_ :: FsError) -> shutdownNode) + -- When the system clock moved back, we have to restart the node. <> mkRethrowPolicy (\_ctx (_ :: SystemClockMovedBackException) -> shutdownNode) - - -- Some chain DB errors are indicative of a bug in our code, others - -- indicate an invalid request from the peer. If the DB is closed - -- entirely, it will only be reopened after a node restart. - <> mkRethrowPolicy (\_ctx (e :: ChainDbError blk) -> - case e of - ClosedDBError{} -> shutdownNode - ClosedFollowerError{} -> ourBug - InvalidIteratorRange{} -> theyBuggyOrEvil) - - -- We have some resource registries that are used per-connection, - -- and so if we have ResourceRegistry related exception, we close - -- the connection but leave the rest of the node running. - <> mkRethrowPolicy (\_ctx (_ :: RegistryClosedException) -> ourBug) + -- Some chain DB errors are indicative of a bug in our code, others + -- indicate an invalid request from the peer. If the DB is closed + -- entirely, it will only be reopened after a node restart. 
+ <> mkRethrowPolicy + ( \_ctx (e :: ChainDbError blk) -> + case e of + ClosedDBError{} -> shutdownNode + ClosedFollowerError{} -> ourBug + InvalidIteratorRange{} -> theyBuggyOrEvil + ) + -- We have some resource registries that are used per-connection, + -- and so if we have ResourceRegistry related exception, we close + -- the connection but leave the rest of the node running. + <> mkRethrowPolicy (\_ctx (_ :: RegistryClosedException) -> ourBug) <> mkRethrowPolicy (\_ctx (_ :: ResourceRegistryThreadException) -> ourBug) - <> mkRethrowPolicy (\_ctx (_ :: TempRegistryException) -> ourBug) - - -- An exception in the block fetch server meant the client asked - -- for some blocks we used to have but got GCed. This means the - -- peer is on a chain that forks off more than @k@ blocks away. + <> mkRethrowPolicy (\_ctx (_ :: TempRegistryException) -> ourBug) + -- An exception in the block fetch server meant the client asked + -- for some blocks we used to have but got GCed. This means the + -- peer is on a chain that forks off more than @k@ blocks away. <> mkRethrowPolicy (\_ctx (_ :: BlockFetchServerException) -> distantPeer) + -- Some chain sync client exceptions indicate malicious behaviour, + -- others merely mean that we should disconnect from this client + -- because we have diverged too much. + <> mkRethrowPolicy (\_ctx (_ :: ChainSyncClientException) -> theyBuggyOrEvil) + -- Dispatch on nested exception + <> mkRethrowPolicy + ( \ctx (ExceptionInLinkedThread _ e) -> + runRethrowPolicy (consensusRethrowPolicy pb) ctx e + ) + where + -- Shutdown the node. If we have a storage layer failure, the node /must/ + -- be restarted (triggering recovery). + shutdownNode :: ErrorCommand + shutdownNode = ShutdownNode - -- Some chain sync client exceptions indicate malicious behaviour, - -- others merely mean that we should disconnect from this client - -- because we have diverged too much. 
- <> mkRethrowPolicy (\_ctx (_ :: ChainSyncClientException) -> theyBuggyOrEvil) - - -- Dispatch on nested exception - <> mkRethrowPolicy (\ctx (ExceptionInLinkedThread _ e) -> - runRethrowPolicy (consensusRethrowPolicy pb) ctx e) - where - -- Shutdown the node. If we have a storage layer failure, the node /must/ - -- be restarted (triggering recovery). - shutdownNode :: ErrorCommand - shutdownNode = ShutdownNode - - -- Peer is either on a distant chain (one that forks more than k blocks ago) - -- or else is just too far behind; the chain sync client doesn't really have - -- any way of distinguishing between these two cases. If they are merely - -- far behind, we might want to reconnect to them later. - distantPeer :: ErrorCommand - distantPeer = ShutdownPeer + -- Peer is either on a distant chain (one that forks more than k blocks ago) + -- or else is just too far behind; the chain sync client doesn't really have + -- any way of distinguishing between these two cases. If they are merely + -- far behind, we might want to reconnect to them later. + distantPeer :: ErrorCommand + distantPeer = ShutdownPeer - -- The peer sent us some data that they could have known was invalid. - -- This can only be due to a bug or malice. - theyBuggyOrEvil :: ErrorCommand - theyBuggyOrEvil = ShutdownPeer + -- The peer sent us some data that they could have known was invalid. + -- This can only be due to a bug or malice. + theyBuggyOrEvil :: ErrorCommand + theyBuggyOrEvil = ShutdownPeer - -- Something went wrong due to a bug in our code. We disconnect from the - -- peer, but allow to try again later in the hope the bug was transient. - ourBug :: ErrorCommand - ourBug = ShutdownPeer + -- Something went wrong due to a bug in our code. We disconnect from the + -- peer, but allow to try again later in the hope the bug was transient. 
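Editor's note, not part of the patch: the `where` clause above classifies failures into two `ErrorCommand`s; the sketch below restates that mapping. `ErrorClass` and `commandFor` are hypothetical illustrative names, while the `ErrorCommand` constructors mirror the real ones.

```haskell
-- Hypothetical grouping of the exceptions handled by 'consensusRethrowPolicy'.
data ErrorClass
  = StorageFailure   -- DbMarkerError, ChainDbFailure, FsError, unexpected DB failures
  | ClockMovedBack   -- SystemClockMovedBackException
  | DistantPeer      -- BlockFetchServerException: fork more than @k@ blocks back
  | PeerMisbehaviour -- ChainSyncClientException, InvalidIteratorRange
  | LocalBug         -- registry exceptions, DB API misuse, ClosedFollowerError

-- Mirrors the real 'ErrorCommand' constructors.
data ErrorCommand = ShutdownNode | ShutdownPeer
  deriving (Eq, Show)

commandFor :: ErrorClass -> ErrorCommand
commandFor StorageFailure   = ShutdownNode -- restart triggers recovery
commandFor ClockMovedBack   = ShutdownNode
commandFor DistantPeer      = ShutdownPeer -- may reconnect later
commandFor PeerMisbehaviour = ShutdownPeer
commandFor LocalBug         = ShutdownPeer -- hope the bug was transient

main :: IO ()
main =
  mapM_
    (print . commandFor)
    [StorageFailure, ClockMovedBack, DistantPeer, PeerMisbehaviour, LocalBug]
```

The split is coarse but faithful to the policy: only storage-layer corruption and clock regression shut the node down; everything else only drops the peer.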
+ ourBug :: ErrorCommand + ourBug = ShutdownPeer diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Tracers.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Tracers.hs index 9b5a6eb8db..f1f1b72861 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Tracers.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/Node/Tracers.hs @@ -5,169 +5,190 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Node.Tracers ( - -- * All tracers of a node bundled together +module Ouroboros.Consensus.Node.Tracers + ( -- * All tracers of a node bundled together Tracers , Tracers' (..) , nullTracers , showTracers + -- * Specific tracers , TraceForgeEvent (..) , TraceLabelCreds (..) ) where -import Control.Exception (SomeException) -import Control.Tracer (Tracer, nullTracer, showTracing) -import Data.Text (Text) -import Data.Time (UTCTime) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Forecast (OutsideForecastRange) -import Ouroboros.Consensus.Genesis.Governor (TraceGDDEvent) -import Ouroboros.Consensus.Ledger.Extended (ExtValidationError) -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Mempool (MempoolSize, TraceEventMempool) -import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server - (TraceBlockFetchServerEvent) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (TraceChainSyncClientEvent) -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping as CSJumping -import Ouroboros.Consensus.MiniProtocol.ChainSync.Server - (TraceChainSyncServerEvent) -import Ouroboros.Consensus.MiniProtocol.LocalTxSubmission.Server - (TraceLocalTxSubmissionServerEvent (..)) -import 
Ouroboros.Consensus.Node.GSM (TraceGsmEvent) -import Ouroboros.Network.Block (Tip) -import Ouroboros.Network.BlockFetch (TraceFetchClientState, - TraceLabelPeer) -import Ouroboros.Network.BlockFetch.Decision.Trace - (TraceDecisionEvent) -import Ouroboros.Network.KeepAlive (TraceKeepAliveClient) -import Ouroboros.Network.TxSubmission.Inbound - (TraceTxSubmissionInbound) -import Ouroboros.Network.TxSubmission.Outbound - (TraceTxSubmissionOutbound) +import Control.Exception (SomeException) +import Control.Tracer (Tracer, nullTracer, showTracing) +import Data.Text (Text) +import Data.Time (UTCTime) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Forecast (OutsideForecastRange) +import Ouroboros.Consensus.Genesis.Governor (TraceGDDEvent) +import Ouroboros.Consensus.Ledger.Extended (ExtValidationError) +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Mempool (MempoolSize, TraceEventMempool) +import Ouroboros.Consensus.MiniProtocol.BlockFetch.Server + ( TraceBlockFetchServerEvent + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( TraceChainSyncClientEvent + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping qualified as CSJumping +import Ouroboros.Consensus.MiniProtocol.ChainSync.Server + ( TraceChainSyncServerEvent + ) +import Ouroboros.Consensus.MiniProtocol.LocalTxSubmission.Server + ( TraceLocalTxSubmissionServerEvent (..) 
+ ) +import Ouroboros.Consensus.Node.GSM (TraceGsmEvent) +import Ouroboros.Network.Block (Tip) +import Ouroboros.Network.BlockFetch + ( TraceFetchClientState + , TraceLabelPeer + ) +import Ouroboros.Network.BlockFetch.Decision.Trace + ( TraceDecisionEvent + ) +import Ouroboros.Network.KeepAlive (TraceKeepAliveClient) +import Ouroboros.Network.TxSubmission.Inbound + ( TraceTxSubmissionInbound + ) +import Ouroboros.Network.TxSubmission.Outbound + ( TraceTxSubmissionOutbound + ) {------------------------------------------------------------------------------- All tracers of a node bundled together -------------------------------------------------------------------------------} data Tracers' remotePeer localPeer blk f = Tracers - { chainSyncClientTracer :: f (TraceLabelPeer remotePeer (TraceChainSyncClientEvent blk)) - , chainSyncServerHeaderTracer :: f (TraceLabelPeer remotePeer (TraceChainSyncServerEvent blk)) - , chainSyncServerBlockTracer :: f (TraceChainSyncServerEvent blk) - , blockFetchDecisionTracer :: f (TraceDecisionEvent remotePeer (Header blk)) - , blockFetchClientTracer :: f (TraceLabelPeer remotePeer (TraceFetchClientState (Header blk))) - , blockFetchServerTracer :: f (TraceLabelPeer remotePeer (TraceBlockFetchServerEvent blk)) - , txInboundTracer :: f (TraceLabelPeer remotePeer (TraceTxSubmissionInbound (GenTxId blk) (GenTx blk))) - , txOutboundTracer :: f (TraceLabelPeer remotePeer (TraceTxSubmissionOutbound (GenTxId blk) (GenTx blk))) + { chainSyncClientTracer :: f (TraceLabelPeer remotePeer (TraceChainSyncClientEvent blk)) + , chainSyncServerHeaderTracer :: f (TraceLabelPeer remotePeer (TraceChainSyncServerEvent blk)) + , chainSyncServerBlockTracer :: f (TraceChainSyncServerEvent blk) + , blockFetchDecisionTracer :: f (TraceDecisionEvent remotePeer (Header blk)) + , blockFetchClientTracer :: f (TraceLabelPeer remotePeer (TraceFetchClientState (Header blk))) + , blockFetchServerTracer :: f (TraceLabelPeer remotePeer (TraceBlockFetchServerEvent blk)) + 
, txInboundTracer :: + f (TraceLabelPeer remotePeer (TraceTxSubmissionInbound (GenTxId blk) (GenTx blk))) + , txOutboundTracer :: + f (TraceLabelPeer remotePeer (TraceTxSubmissionOutbound (GenTxId blk) (GenTx blk))) , localTxSubmissionServerTracer :: f (TraceLocalTxSubmissionServerEvent blk) - , mempoolTracer :: f (TraceEventMempool blk) - , forgeTracer :: f (TraceLabelCreds (TraceForgeEvent blk)) - , blockchainTimeTracer :: f (TraceBlockchainTimeEvent UTCTime) - , forgeStateInfoTracer :: f (TraceLabelCreds (ForgeStateInfo blk)) - , keepAliveClientTracer :: f (TraceKeepAliveClient remotePeer) - , consensusSanityCheckTracer :: f SanityCheckIssue - , consensusErrorTracer :: f SomeException - , gsmTracer :: f (TraceGsmEvent (Tip blk)) - , gddTracer :: f (TraceGDDEvent remotePeer blk) - , csjTracer :: f (TraceLabelPeer remotePeer (CSJumping.TraceEventCsj remotePeer blk)) - , dbfTracer :: f (CSJumping.TraceEventDbf remotePeer) + , mempoolTracer :: f (TraceEventMempool blk) + , forgeTracer :: f (TraceLabelCreds (TraceForgeEvent blk)) + , blockchainTimeTracer :: f (TraceBlockchainTimeEvent UTCTime) + , forgeStateInfoTracer :: f (TraceLabelCreds (ForgeStateInfo blk)) + , keepAliveClientTracer :: f (TraceKeepAliveClient remotePeer) + , consensusSanityCheckTracer :: f SanityCheckIssue + , consensusErrorTracer :: f SomeException + , gsmTracer :: f (TraceGsmEvent (Tip blk)) + , gddTracer :: f (TraceGDDEvent remotePeer blk) + , csjTracer :: + f (TraceLabelPeer remotePeer (CSJumping.TraceEventCsj remotePeer blk)) + , dbfTracer :: f (CSJumping.TraceEventDbf remotePeer) } -instance (forall a. 
Semigroup (f a)) - => Semigroup (Tracers' remotePeer localPeer blk f) where - l <> r = Tracers - { chainSyncClientTracer = f chainSyncClientTracer - , chainSyncServerHeaderTracer = f chainSyncServerHeaderTracer - , chainSyncServerBlockTracer = f chainSyncServerBlockTracer - , blockFetchDecisionTracer = f blockFetchDecisionTracer - , blockFetchClientTracer = f blockFetchClientTracer - , blockFetchServerTracer = f blockFetchServerTracer - , txInboundTracer = f txInboundTracer - , txOutboundTracer = f txOutboundTracer +instance + (forall a. Semigroup (f a)) => + Semigroup (Tracers' remotePeer localPeer blk f) + where + l <> r = + Tracers + { chainSyncClientTracer = f chainSyncClientTracer + , chainSyncServerHeaderTracer = f chainSyncServerHeaderTracer + , chainSyncServerBlockTracer = f chainSyncServerBlockTracer + , blockFetchDecisionTracer = f blockFetchDecisionTracer + , blockFetchClientTracer = f blockFetchClientTracer + , blockFetchServerTracer = f blockFetchServerTracer + , txInboundTracer = f txInboundTracer + , txOutboundTracer = f txOutboundTracer , localTxSubmissionServerTracer = f localTxSubmissionServerTracer - , mempoolTracer = f mempoolTracer - , forgeTracer = f forgeTracer - , blockchainTimeTracer = f blockchainTimeTracer - , forgeStateInfoTracer = f forgeStateInfoTracer - , keepAliveClientTracer = f keepAliveClientTracer - , consensusSanityCheckTracer = f consensusSanityCheckTracer - , consensusErrorTracer = f consensusErrorTracer - , gsmTracer = f gsmTracer - , gddTracer = f gddTracer - , csjTracer = f csjTracer - , dbfTracer = f dbfTracer + , mempoolTracer = f mempoolTracer + , forgeTracer = f forgeTracer + , blockchainTimeTracer = f blockchainTimeTracer + , forgeStateInfoTracer = f forgeStateInfoTracer + , keepAliveClientTracer = f keepAliveClientTracer + , consensusSanityCheckTracer = f consensusSanityCheckTracer + , consensusErrorTracer = f consensusErrorTracer + , gsmTracer = f gsmTracer + , gddTracer = f gddTracer + , csjTracer = f csjTracer + , 
dbfTracer = f dbfTracer } - where - f :: forall a. Semigroup a - => (Tracers' remotePeer localPeer blk f -> a) -> a - f prj = prj l <> prj r + where + f :: + forall a. + Semigroup a => + (Tracers' remotePeer localPeer blk f -> a) -> a + f prj = prj l <> prj r -- | A record of 'Tracer's for the node. type Tracers m remotePeer localPeer blk = - Tracers' remotePeer localPeer blk (Tracer m) + Tracers' remotePeer localPeer blk (Tracer m) -- | Use a 'nullTracer' for each of the 'Tracer's in 'Tracers' nullTracers :: Monad m => Tracers m remotePeer localPeer blk -nullTracers = Tracers - { chainSyncClientTracer = nullTracer - , chainSyncServerHeaderTracer = nullTracer - , chainSyncServerBlockTracer = nullTracer - , blockFetchDecisionTracer = nullTracer - , blockFetchClientTracer = nullTracer - , blockFetchServerTracer = nullTracer - , txInboundTracer = nullTracer - , txOutboundTracer = nullTracer +nullTracers = + Tracers + { chainSyncClientTracer = nullTracer + , chainSyncServerHeaderTracer = nullTracer + , chainSyncServerBlockTracer = nullTracer + , blockFetchDecisionTracer = nullTracer + , blockFetchClientTracer = nullTracer + , blockFetchServerTracer = nullTracer + , txInboundTracer = nullTracer + , txOutboundTracer = nullTracer , localTxSubmissionServerTracer = nullTracer - , mempoolTracer = nullTracer - , forgeTracer = nullTracer - , blockchainTimeTracer = nullTracer - , forgeStateInfoTracer = nullTracer - , keepAliveClientTracer = nullTracer - , consensusSanityCheckTracer = nullTracer - , consensusErrorTracer = nullTracer - , gsmTracer = nullTracer - , gddTracer = nullTracer - , csjTracer = nullTracer - , dbfTracer = nullTracer + , mempoolTracer = nullTracer + , forgeTracer = nullTracer + , blockchainTimeTracer = nullTracer + , forgeStateInfoTracer = nullTracer + , keepAliveClientTracer = nullTracer + , consensusSanityCheckTracer = nullTracer + , consensusErrorTracer = nullTracer + , gsmTracer = nullTracer + , gddTracer = nullTracer + , csjTracer = nullTracer + , 
dbfTracer = nullTracer } -showTracers :: ( Show blk - , Show (GenTx blk) - , Show (Validated (GenTx blk)) - , Show (GenTxId blk) - , Show (ApplyTxErr blk) - , Show (Header blk) - , Show (ForgeStateInfo blk) - , Show (ForgeStateUpdateError blk) - , Show (CannotForge blk) - , Show remotePeer - , LedgerSupportsProtocol blk - ) - => Tracer m String -> Tracers m remotePeer localPeer blk -showTracers tr = Tracers - { chainSyncClientTracer = showTracing tr - , chainSyncServerHeaderTracer = showTracing tr - , chainSyncServerBlockTracer = showTracing tr - , blockFetchDecisionTracer = showTracing tr - , blockFetchClientTracer = showTracing tr - , blockFetchServerTracer = showTracing tr - , txInboundTracer = showTracing tr - , txOutboundTracer = showTracing tr +showTracers :: + ( Show blk + , Show (GenTx blk) + , Show (Validated (GenTx blk)) + , Show (GenTxId blk) + , Show (ApplyTxErr blk) + , Show (Header blk) + , Show (ForgeStateInfo blk) + , Show (ForgeStateUpdateError blk) + , Show (CannotForge blk) + , Show remotePeer + , LedgerSupportsProtocol blk + ) => + Tracer m String -> Tracers m remotePeer localPeer blk +showTracers tr = + Tracers + { chainSyncClientTracer = showTracing tr + , chainSyncServerHeaderTracer = showTracing tr + , chainSyncServerBlockTracer = showTracing tr + , blockFetchDecisionTracer = showTracing tr + , blockFetchClientTracer = showTracing tr + , blockFetchServerTracer = showTracing tr + , txInboundTracer = showTracing tr + , txOutboundTracer = showTracing tr , localTxSubmissionServerTracer = showTracing tr - , mempoolTracer = showTracing tr - , forgeTracer = showTracing tr - , blockchainTimeTracer = showTracing tr - , forgeStateInfoTracer = showTracing tr - , keepAliveClientTracer = showTracing tr - , consensusSanityCheckTracer = showTracing tr - , consensusErrorTracer = showTracing tr - , gsmTracer = showTracing tr - , gddTracer = showTracing tr - , csjTracer = showTracing tr - , dbfTracer = showTracing tr + , mempoolTracer = showTracing tr + , 
forgeTracer = showTracing tr + , blockchainTimeTracer = showTracing tr + , forgeStateInfoTracer = showTracing tr + , keepAliveClientTracer = showTracing tr + , consensusSanityCheckTracer = showTracing tr + , consensusErrorTracer = showTracing tr + , gsmTracer = showTracing tr + , gddTracer = showTracing tr + , csjTracer = showTracing tr + , dbfTracer = showTracing tr } {------------------------------------------------------------------------------- @@ -210,12 +231,11 @@ showTracers tr = Tracers -- > | -- > TraceAdoptedBlock data TraceForgeEvent blk - -- | Start of the leadership check + = -- | Start of the leadership check -- -- We record the current slot number. - = TraceStartLeadershipCheck SlotNo - - -- | Leadership check failed: the tip of the ImmutableDB inhabits the + TraceStartLeadershipCheck SlotNo + | -- | Leadership check failed: the tip of the ImmutableDB inhabits the -- current slot -- -- This might happen in two cases. @@ -232,9 +252,8 @@ data TraceForgeEvent blk -- ImmutableDB. -- -- See also - | TraceSlotIsImmutable SlotNo (Point blk) BlockNo - - -- | Leadership check failed: the current chain contains a block from a slot + TraceSlotIsImmutable SlotNo (Point blk) BlockNo + | -- | Leadership check failed: the current chain contains a block from a slot -- /after/ the current slot -- -- This can only happen if the system is under heavy load. @@ -243,9 +262,8 @@ data TraceForgeEvent blk -- block at the tip of the chain. -- -- See also - | TraceBlockFromFuture SlotNo SlotNo - - -- | We found out to which block we are going to connect the block we are about + TraceBlockFromFuture SlotNo SlotNo + | -- | We found out to which block we are going to connect the block we are about -- to forge. -- -- We record the current slot number, the block number of the block to @@ -253,9 +271,8 @@ data TraceForgeEvent blk -- -- Note that block number of the block we will try to forge is one more than -- the recorded block number. 
- | TraceBlockContext SlotNo BlockNo (Point blk) - - -- | Leadership check failed: we were unable to get the ledger state for the + TraceBlockContext SlotNo BlockNo (Point blk) + | -- | Leadership check failed: we were unable to get the ledger state for the -- point of the block we want to connect to -- -- This can happen if after choosing which block to connect to the node @@ -266,69 +283,59 @@ data TraceForgeEvent blk -- We record both the current slot number as well as the point of the block -- we attempt to connect the new block to (that we requested the ledger -- state for). - | TraceNoLedgerState SlotNo (Point blk) - - -- | We obtained a ledger state for the point of the block we want to + TraceNoLedgerState SlotNo (Point blk) + | -- | We obtained a ledger state for the point of the block we want to -- connect to -- -- We record both the current slot number as well as the point of the block -- we attempt to connect the new block to (that we requested the ledger -- state for). - | TraceLedgerState SlotNo (Point blk) - - -- | Leadership check failed: we were unable to get the ledger view for the + TraceLedgerState SlotNo (Point blk) + | -- | Leadership check failed: we were unable to get the ledger view for the -- current slot number -- -- This will only happen if there are many missing blocks between the tip of -- our chain and the current slot. -- -- We record also the failure returned by 'forecastFor'. - | TraceNoLedgerView SlotNo OutsideForecastRange - - -- | We obtained a ledger view for the current slot number + TraceNoLedgerView SlotNo OutsideForecastRange + | -- | We obtained a ledger view for the current slot number -- -- We record the current slot number. - | TraceLedgerView SlotNo - - -- | Updating the forge state failed. + TraceLedgerView SlotNo + | -- | Updating the forge state failed. -- -- For example, the KES key could not be evolved anymore. -- -- We record the error returned by 'updateForgeState'. 
- | TraceForgeStateUpdateError SlotNo (ForgeStateUpdateError blk) - - -- | We did the leadership check and concluded that we should lead and forge + TraceForgeStateUpdateError SlotNo (ForgeStateUpdateError blk) + | -- | We did the leadership check and concluded that we should lead and forge -- a block, but cannot. -- -- This should only happen rarely and should be logged with warning severity. -- -- Records why we cannot forge a block. - | TraceNodeCannotForge SlotNo (CannotForge blk) - - -- | We did the leadership check and concluded we are not the leader + TraceNodeCannotForge SlotNo (CannotForge blk) + | -- | We did the leadership check and concluded we are not the leader -- -- We record the current slot number - | TraceNodeNotLeader SlotNo - - -- | We did the leadership check and concluded we /are/ the leader + TraceNodeNotLeader SlotNo + | -- | We did the leadership check and concluded we /are/ the leader -- -- The node will soon forge; it is about to read its transactions from the -- Mempool. This will be followed by TraceForgedBlock. - | TraceNodeIsLeader SlotNo - - -- | We ticked the ledger state for the slot of the to-be-forged block. + TraceNodeIsLeader SlotNo + | -- | We ticked the ledger state for the slot of the to-be-forged block. -- -- We record the current slot number and the point of the block we attempt -- to connect the new block to. - | TraceForgeTickedLedgerState SlotNo (Point blk) - - -- | We acquired a mempool snapshot. + TraceForgeTickedLedgerState SlotNo (Point blk) + | -- | We acquired a mempool snapshot. -- -- We record the the point of the state we are starting from (ie the point -- from 'TraceLedgerState') and point the mempool had most last synced wrt. 
- | TraceForgingMempoolSnapshot SlotNo (Point blk) (ChainHash blk) SlotNo - - -- | We forged a block + TraceForgingMempoolSnapshot SlotNo (Point blk) (ChainHash blk) SlotNo + | -- | We forged a block -- -- We record the current slot number, the point of the predecessor, the block -- itself, and the total size of the mempool snapshot at the time we produced @@ -340,39 +347,39 @@ data TraceForgeEvent blk -- * TraceAdoptedBlock (normally) -- * TraceDidntAdoptBlock (rarely) -- * TraceForgedInvalidBlock (hopefully never -- this would indicate a bug) - | TraceForgedBlock SlotNo (Point blk) blk MempoolSize - - -- | We did not adopt the block we produced, but the block was valid. We + TraceForgedBlock SlotNo (Point blk) blk MempoolSize + | -- | We did not adopt the block we produced, but the block was valid. We -- must have adopted a block that another leader of the same slot produced -- before we got the chance of adopting our own block. This is very rare, -- this warrants a warning. - | TraceDidntAdoptBlock SlotNo blk - - -- | We did not adopt the block we produced, because the adoption thread + TraceDidntAdoptBlock SlotNo blk + | -- | We did not adopt the block we produced, because the adoption thread -- died. Most likely because of an async exception. - | TraceAdoptionThreadDied SlotNo blk - - -- | We forged a block that is invalid according to the ledger in the + TraceAdoptionThreadDied SlotNo blk + | -- | We forged a block that is invalid according to the ledger in the -- ChainDB. This means there is an inconsistency between the mempool -- validation and the ledger validation. This is a serious error! - | TraceForgedInvalidBlock SlotNo blk (ExtValidationError blk) - - -- | We adopted the block we produced, we also trace the transactions + TraceForgedInvalidBlock SlotNo blk (ExtValidationError blk) + | -- | We adopted the block we produced, we also trace the transactions -- that were adopted. 
- | TraceAdoptedBlock SlotNo blk [Validated (GenTx blk)] + TraceAdoptedBlock SlotNo blk [Validated (GenTx blk)] -deriving instance ( LedgerSupportsProtocol blk - , Eq blk - , Eq (Validated (GenTx blk)) - , Eq (ForgeStateUpdateError blk) - , Eq (CannotForge blk) - ) => Eq (TraceForgeEvent blk) -deriving instance ( LedgerSupportsProtocol blk - , Show blk - , Show (Validated (GenTx blk)) - , Show (ForgeStateUpdateError blk) - , Show (CannotForge blk) - ) => Show (TraceForgeEvent blk) +deriving instance + ( LedgerSupportsProtocol blk + , Eq blk + , Eq (Validated (GenTx blk)) + , Eq (ForgeStateUpdateError blk) + , Eq (CannotForge blk) + ) => + Eq (TraceForgeEvent blk) +deriving instance + ( LedgerSupportsProtocol blk + , Show blk + , Show (Validated (GenTx blk)) + , Show (ForgeStateUpdateError blk) + , Show (CannotForge blk) + ) => + Show (TraceForgeEvent blk) -- | Label a forge-related trace event with the label associated with its -- credentials. diff --git a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/NodeKernel.hs b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/NodeKernel.hs index 188ac9b4c2..2142c228a4 100644 --- a/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/NodeKernel.hs +++ b/ouroboros-consensus-diffusion/src/ouroboros-consensus-diffusion/Ouroboros/Consensus/NodeKernel.hs @@ -12,8 +12,8 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.NodeKernel ( - -- * Node kernel +module Ouroboros.Consensus.NodeKernel + ( -- * Node kernel MempoolCapacityBytesOverride (..) , NodeKernel (..) , NodeKernelArgs (..) 
@@ -26,206 +26,222 @@ module Ouroboros.Consensus.NodeKernel ( , initNodeKernel ) where - -import Cardano.Network.ConsensusMode (ConsensusMode (..)) -import Cardano.Network.PeerSelection.Bootstrap (UseBootstrapPeers) -import Cardano.Network.PeerSelection.LocalRootPeers - (OutboundConnectionsState (..)) -import Cardano.Network.Types (LedgerStateJudgement (..)) -import qualified Control.Concurrent.Class.MonadSTM as LazySTM -import qualified Control.Concurrent.Class.MonadSTM.Strict as StrictSTM -import Control.DeepSeq (force) -import Control.Monad -import qualified Control.Monad.Class.MonadTimer.SI as SI -import Control.Monad.Except -import Control.ResourceRegistry -import Control.Tracer -import Data.Bifunctor (second) -import Data.Data (Typeable) -import Data.Foldable (traverse_) -import Data.Function (on) -import Data.Functor ((<&>)) -import Data.Hashable (Hashable) -import Data.List.NonEmpty (NonEmpty) -import qualified Data.List.NonEmpty as NE -import Data.Maybe (isJust, mapMaybe) -import Data.Proxy -import qualified Data.Text as Text -import Data.Void (Void) -import Ouroboros.Consensus.Block hiding (blockMatchesHeader) -import qualified Ouroboros.Consensus.Block as Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.Genesis.Governor (gddWatcher) -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsPeerSelection -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables) -import Ouroboros.Consensus.Mempool -import qualified Ouroboros.Consensus.MiniProtocol.BlockFetch.ClientInterface as BlockFetchClientInterface -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncClientHandle (..), - ChainSyncClientHandleCollection (..), 
ChainSyncState (..), - newChainSyncClientHandleCollection) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck - (HistoricityCheck) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck - (SomeHeaderInFutureCheck) -import Ouroboros.Consensus.Node.Genesis (GenesisNodeKernelArgs (..), - LoEAndGDDConfig (..), LoEAndGDDNodeKernelArgs (..), - setGetLoEFragment) -import Ouroboros.Consensus.Node.GSM (GsmNodeKernelArgs (..)) -import qualified Ouroboros.Consensus.Node.GSM as GSM -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Tracers -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB.API (AddBlockResult (..), - ChainDB) -import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB -import qualified Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment as InvalidBlockPunishment -import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB) -import qualified Ouroboros.Consensus.Storage.ChainDB.Init as InitChainDB -import Ouroboros.Consensus.Storage.LedgerDB -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import Ouroboros.Consensus.Util (whenJust) -import Ouroboros.Consensus.Util.AnchoredFragment - (preferAnchoredCandidate) -import Ouroboros.Consensus.Util.EarlyExit -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.LeakyBucket - (atomicallyWithMonotonicTime) -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Consensus.Util.STM -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - AnchoredSeq (..)) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (castTip, tipFromHeader) -import Ouroboros.Network.BlockFetch -import Ouroboros.Network.BlockFetch.ClientState - (mapTraceFetchClientState) -import Ouroboros.Network.BlockFetch.Decision.Trace - (TraceDecisionEvent (..)) -import Ouroboros.Network.NodeToNode (ConnectionId, - MiniProtocolParameters (..)) -import 
Ouroboros.Network.PeerSelection.Governor.Types - (PublicPeerSelectionState) -import Ouroboros.Network.PeerSharing (PeerSharingAPI, - PeerSharingRegistry, newPeerSharingAPI, - newPeerSharingRegistry, ps_POLICY_PEER_SHARE_MAX_PEERS, - ps_POLICY_PEER_SHARE_STICKY_TIME) -import Ouroboros.Network.Protocol.LocalStateQuery.Type (Target (..)) -import Ouroboros.Network.SizeInBytes -import Ouroboros.Network.TxSubmission.Inbound - (TxSubmissionMempoolWriter) -import qualified Ouroboros.Network.TxSubmission.Inbound as Inbound -import Ouroboros.Network.TxSubmission.Mempool.Reader - (TxSubmissionMempoolReader) -import qualified Ouroboros.Network.TxSubmission.Mempool.Reader as MempoolReader -import System.Random (StdGen) +import Cardano.Network.ConsensusMode (ConsensusMode (..)) +import Cardano.Network.PeerSelection.Bootstrap (UseBootstrapPeers) +import Cardano.Network.PeerSelection.LocalRootPeers + ( OutboundConnectionsState (..) + ) +import Cardano.Network.Types (LedgerStateJudgement (..)) +import Control.Concurrent.Class.MonadSTM qualified as LazySTM +import Control.Concurrent.Class.MonadSTM.Strict qualified as StrictSTM +import Control.DeepSeq (force) +import Control.Monad +import Control.Monad.Class.MonadTimer.SI qualified as SI +import Control.Monad.Except +import Control.ResourceRegistry +import Control.Tracer +import Data.Bifunctor (second) +import Data.Data (Typeable) +import Data.Foldable (traverse_) +import Data.Function (on) +import Data.Functor ((<&>)) +import Data.Hashable (Hashable) +import Data.List.NonEmpty (NonEmpty) +import Data.List.NonEmpty qualified as NE +import Data.Maybe (isJust, mapMaybe) +import Data.Proxy +import Data.Text qualified as Text +import Data.Void (Void) +import Ouroboros.Consensus.Block hiding (blockMatchesHeader) +import Ouroboros.Consensus.Block qualified as Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.Genesis.Governor (gddWatcher) 
+import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsPeerSelection +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables) +import Ouroboros.Consensus.Mempool +import Ouroboros.Consensus.MiniProtocol.BlockFetch.ClientInterface qualified as BlockFetchClientInterface +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientHandle (..) + , ChainSyncClientHandleCollection (..) + , ChainSyncState (..) + , newChainSyncClientHandleCollection + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck + ( HistoricityCheck + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck + ( SomeHeaderInFutureCheck + ) +import Ouroboros.Consensus.Node.GSM (GsmNodeKernelArgs (..)) +import Ouroboros.Consensus.Node.GSM qualified as GSM +import Ouroboros.Consensus.Node.Genesis + ( GenesisNodeKernelArgs (..) + , LoEAndGDDConfig (..) + , LoEAndGDDNodeKernelArgs (..) + , setGetLoEFragment + ) +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Tracers +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.API + ( AddBlockResult (..) 
+ , ChainDB + ) +import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment qualified as InvalidBlockPunishment +import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB) +import Ouroboros.Consensus.Storage.ChainDB.Init qualified as InitChainDB +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Util (whenJust) +import Ouroboros.Consensus.Util.AnchoredFragment + ( preferAnchoredCandidate + ) +import Ouroboros.Consensus.Util.EarlyExit +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.LeakyBucket + ( atomicallyWithMonotonicTime + ) +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Consensus.Util.STM +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , AnchoredSeq (..) + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (castTip, tipFromHeader) +import Ouroboros.Network.BlockFetch +import Ouroboros.Network.BlockFetch.ClientState + ( mapTraceFetchClientState + ) +import Ouroboros.Network.BlockFetch.Decision.Trace + ( TraceDecisionEvent (..) + ) +import Ouroboros.Network.NodeToNode + ( ConnectionId + , MiniProtocolParameters (..) 
+ ) +import Ouroboros.Network.PeerSelection.Governor.Types + ( PublicPeerSelectionState + ) +import Ouroboros.Network.PeerSharing + ( PeerSharingAPI + , PeerSharingRegistry + , newPeerSharingAPI + , newPeerSharingRegistry + , ps_POLICY_PEER_SHARE_MAX_PEERS + , ps_POLICY_PEER_SHARE_STICKY_TIME + ) +import Ouroboros.Network.Protocol.LocalStateQuery.Type (Target (..)) +import Ouroboros.Network.SizeInBytes +import Ouroboros.Network.TxSubmission.Inbound + ( TxSubmissionMempoolWriter + ) +import Ouroboros.Network.TxSubmission.Inbound qualified as Inbound +import Ouroboros.Network.TxSubmission.Mempool.Reader + ( TxSubmissionMempoolReader + ) +import Ouroboros.Network.TxSubmission.Mempool.Reader qualified as MempoolReader +import System.Random (StdGen) {------------------------------------------------------------------------------- Relay node -------------------------------------------------------------------------------} -- | Interface against running relay node -data NodeKernel m addrNTN addrNTC blk = NodeKernel { - -- | The 'ChainDB' of the node - getChainDB :: ChainDB m blk - - -- | The node's mempool - , getMempool :: Mempool m blk - - -- | The node's top-level static configuration - , getTopLevelConfig :: TopLevelConfig blk - - -- | The fetch client registry, used for the block fetch clients. - , getFetchClientRegistry :: FetchClientRegistry (ConnectionId addrNTN) (HeaderWithTime blk) blk m - - -- | The fetch mode, used by diffusion. - -- - , getFetchMode :: STM m FetchMode - - -- | The GSM state, used by diffusion. A ledger judgement can be derived - -- from it with 'GSM.gsmStateToLedgerJudgement'. - -- - , getGsmState :: STM m GSM.GsmState - - -- | The kill handle and exposed state for each ChainSync client. 
- , getChainSyncHandles :: ChainSyncClientHandleCollection (ConnectionId addrNTN) m blk - - -- | Read the current peer sharing registry, used for interacting with - -- the PeerSharing protocol - , getPeerSharingRegistry :: PeerSharingRegistry addrNTN m - - -- | The node's tracers - , getTracers :: Tracers m (ConnectionId addrNTN) addrNTC blk - - -- | Set block forging - -- - -- When set with the empty list '[]' block forging will be disabled. - -- - , setBlockForging :: [BlockForging m blk] -> m () - - , getPeerSharingAPI :: PeerSharingAPI addrNTN StdGen m - - , getOutboundConnectionsState - :: StrictTVar m OutboundConnectionsState - , getDiffusionPipeliningSupport - :: DiffusionPipeliningSupport - , getBlockchainTime :: BlockchainTime m - } +data NodeKernel m addrNTN addrNTC blk = NodeKernel + { getChainDB :: ChainDB m blk + -- ^ The 'ChainDB' of the node + , getMempool :: Mempool m blk + -- ^ The node's mempool + , getTopLevelConfig :: TopLevelConfig blk + -- ^ The node's top-level static configuration + , getFetchClientRegistry :: FetchClientRegistry (ConnectionId addrNTN) (HeaderWithTime blk) blk m + -- ^ The fetch client registry, used for the block fetch clients. + , getFetchMode :: STM m FetchMode + -- ^ The fetch mode, used by diffusion. + , getGsmState :: STM m GSM.GsmState + -- ^ The GSM state, used by diffusion. A ledger judgement can be derived + -- from it with 'GSM.gsmStateToLedgerJudgement'. + , getChainSyncHandles :: ChainSyncClientHandleCollection (ConnectionId addrNTN) m blk + -- ^ The kill handle and exposed state for each ChainSync client. + , getPeerSharingRegistry :: PeerSharingRegistry addrNTN m + -- ^ Read the current peer sharing registry, used for interacting with + -- the PeerSharing protocol + , getTracers :: Tracers m (ConnectionId addrNTN) addrNTC blk + -- ^ The node's tracers + , setBlockForging :: [BlockForging m blk] -> m () + -- ^ Set block forging + -- + -- When set with the empty list '[]' block forging will be disabled. 
+ , getPeerSharingAPI :: PeerSharingAPI addrNTN StdGen m + , getOutboundConnectionsState :: + StrictTVar m OutboundConnectionsState + , getDiffusionPipeliningSupport :: + DiffusionPipeliningSupport + , getBlockchainTime :: BlockchainTime m + } -- | Arguments required when initializing a node -data NodeKernelArgs m addrNTN addrNTC blk = NodeKernelArgs { - tracers :: Tracers m (ConnectionId addrNTN) addrNTC blk - , registry :: ResourceRegistry m - , cfg :: TopLevelConfig blk - , btime :: BlockchainTime m - , chainDB :: ChainDB m blk - , initChainDB :: StorageConfig blk -> InitChainDB m blk -> m () - , chainSyncFutureCheck :: SomeHeaderInFutureCheck m blk - -- | See 'HistoricityCheck' for details. - , chainSyncHistoricityCheck - :: m GSM.GsmState -> HistoricityCheck m blk - , blockFetchSize :: Header blk -> SizeInBytes - , mempoolCapacityOverride :: MempoolCapacityBytesOverride - , miniProtocolParameters :: MiniProtocolParameters - , blockFetchConfiguration :: BlockFetchConfiguration - , keepAliveRng :: StdGen - , gsmArgs :: GsmNodeKernelArgs m blk - , getUseBootstrapPeers :: STM m UseBootstrapPeers - , peerSharingRng :: StdGen - , publicPeerSelectionStateVar - :: StrictSTM.StrictTVar m (PublicPeerSelectionState addrNTN) - , genesisArgs :: GenesisNodeKernelArgs m blk - , getDiffusionPipeliningSupport :: DiffusionPipeliningSupport - } +data NodeKernelArgs m addrNTN addrNTC blk = NodeKernelArgs + { tracers :: Tracers m (ConnectionId addrNTN) addrNTC blk + , registry :: ResourceRegistry m + , cfg :: TopLevelConfig blk + , btime :: BlockchainTime m + , chainDB :: ChainDB m blk + , initChainDB :: StorageConfig blk -> InitChainDB m blk -> m () + , chainSyncFutureCheck :: SomeHeaderInFutureCheck m blk + , chainSyncHistoricityCheck :: + m GSM.GsmState -> + HistoricityCheck m blk + -- ^ See 'HistoricityCheck' for details. 
+ , blockFetchSize :: Header blk -> SizeInBytes + , mempoolCapacityOverride :: MempoolCapacityBytesOverride + , miniProtocolParameters :: MiniProtocolParameters + , blockFetchConfiguration :: BlockFetchConfiguration + , keepAliveRng :: StdGen + , gsmArgs :: GsmNodeKernelArgs m blk + , getUseBootstrapPeers :: STM m UseBootstrapPeers + , peerSharingRng :: StdGen + , publicPeerSelectionStateVar :: + StrictSTM.StrictTVar m (PublicPeerSelectionState addrNTN) + , genesisArgs :: GenesisNodeKernelArgs m blk + , getDiffusionPipeliningSupport :: DiffusionPipeliningSupport + } initNodeKernel :: - forall m addrNTN addrNTC blk. - ( IOLike m - , SI.MonadTimer m - , RunNode blk - , Ord addrNTN - , Hashable addrNTN - , Typeable addrNTN - ) - => NodeKernelArgs m addrNTN addrNTC blk - -> m (NodeKernel m addrNTN addrNTC blk) -initNodeKernel args@NodeKernelArgs { registry, cfg, tracers - , chainDB, initChainDB - , blockFetchConfiguration - , btime - , gsmArgs - , peerSharingRng - , publicPeerSelectionStateVar - , genesisArgs - , getDiffusionPipeliningSupport - } = do + forall m addrNTN addrNTC blk. + ( IOLike m + , SI.MonadTimer m + , RunNode blk + , Ord addrNTN + , Hashable addrNTN + , Typeable addrNTN + ) => + NodeKernelArgs m addrNTN addrNTC blk -> + m (NodeKernel m addrNTN addrNTC blk) +initNodeKernel + args@NodeKernelArgs + { registry + , cfg + , tracers + , chainDB + , initChainDB + , blockFetchConfiguration + , btime + , gsmArgs + , peerSharingRng + , publicPeerSelectionStateVar + , genesisArgs + , getDiffusionPipeliningSupport + } = do -- using a lazy 'TVar', 'BlockForging' does not have a 'NoThunks' instance. blockForgingVar :: LazySTM.TMVar m [BlockForging m blk] <- LazySTM.newTMVarIO [] initChainDB (configStorage cfg) (InitChainDB.fromFull chainDB) @@ -242,63 +258,68 @@ initNodeKernel args@NodeKernelArgs { registry, cfg, tracers varOutboundConnectionsState <- newTVarIO UntrustedState - do let GsmNodeKernelArgs {..} = gsmArgs - gsmTracerArgs = - ( castTip . 
either AF.anchorToTip tipFromHeader . AF.head . fst - , gsmTracer tracers - ) - - let gsm = GSM.realGsmEntryPoints gsmTracerArgs GSM.GsmView - { GSM.antiThunderingHerd = Just gsmAntiThunderingHerd - , GSM.candidateOverSelection = \(headers, _lst) state -> + do + let GsmNodeKernelArgs{..} = gsmArgs + gsmTracerArgs = + ( castTip . either AF.anchorToTip tipFromHeader . AF.head . fst + , gsmTracer tracers + ) + + let gsm = + GSM.realGsmEntryPoints + gsmTracerArgs + GSM.GsmView + { GSM.antiThunderingHerd = Just gsmAntiThunderingHerd + , GSM.candidateOverSelection = \(headers, _lst) state -> case AF.intersectionPoint headers (csCandidate state) of - Nothing -> GSM.CandidateDoesNotIntersect - Just{} -> - GSM.WhetherCandidateIsBetter - $ -- precondition requires intersection - preferAnchoredCandidate - (configBlock cfg) - headers - (csCandidate state) - , GSM.peerIsIdle = csIdling - , GSM.durationUntilTooOld = - gsmDurationUntilTooOld - <&> \wd (_headers, lst) -> + Nothing -> GSM.CandidateDoesNotIntersect + Just{} -> + GSM.WhetherCandidateIsBetter $ -- precondition requires intersection + preferAnchoredCandidate + (configBlock cfg) + headers + (csCandidate state) + , GSM.peerIsIdle = csIdling + , GSM.durationUntilTooOld = + gsmDurationUntilTooOld + <&> \wd (_headers, lst) -> GSM.getDurationUntilTooOld wd (getTipSlot lst) - , GSM.equivalent = (==) `on` (AF.headPoint . 
fst) - , GSM.getChainSyncStates = fmap cschState <$> cschcMap varChainSyncHandles - , GSM.getCurrentSelection = do - headers <- ChainDB.getCurrentChainWithTime chainDB - extLedgerState <- ChainDB.getCurrentLedger chainDB - return (headers, ledgerState extLedgerState) - , GSM.minCaughtUpDuration = gsmMinCaughtUpDuration - , GSM.setCaughtUpPersistentMark = \upd -> - (if upd then GSM.touchMarkerFile else GSM.removeMarkerFile) - gsmMarkerFileView - , GSM.writeGsmState = \gsmState -> - atomicallyWithMonotonicTime $ \time -> do - writeTVar varGsmState gsmState - handles <- cschcMap varChainSyncHandles - traverse_ (($ time) . ($ gsmState) . cschOnGsmStateChanged) handles - , GSM.isHaaSatisfied = do - readTVar varOutboundConnectionsState <&> \case - -- See the upstream Haddocks for the exact conditions under - -- which the diffusion layer is in this state. - TrustedStateWithExternalPeers -> True - UntrustedState -> False - } - judgment <- GSM.gsmStateToLedgerJudgement <$> readTVarIO varGsmState - void $ forkLinkedThread registry "NodeKernel.GSM" $ case judgment of - TooOld -> GSM.enterPreSyncing gsm - YoungEnough -> GSM.enterCaughtUp gsm - - peerSharingAPI <- newPeerSharingAPI publicPeerSelectionStateVar - peerSharingRng - ps_POLICY_PEER_SHARE_STICKY_TIME - ps_POLICY_PEER_SHARE_MAX_PEERS + , GSM.equivalent = (==) `on` (AF.headPoint . fst) + , GSM.getChainSyncStates = fmap cschState <$> cschcMap varChainSyncHandles + , GSM.getCurrentSelection = do + headers <- ChainDB.getCurrentChainWithTime chainDB + extLedgerState <- ChainDB.getCurrentLedger chainDB + return (headers, ledgerState extLedgerState) + , GSM.minCaughtUpDuration = gsmMinCaughtUpDuration + , GSM.setCaughtUpPersistentMark = \upd -> + (if upd then GSM.touchMarkerFile else GSM.removeMarkerFile) + gsmMarkerFileView + , GSM.writeGsmState = \gsmState -> + atomicallyWithMonotonicTime $ \time -> do + writeTVar varGsmState gsmState + handles <- cschcMap varChainSyncHandles + traverse_ (($ time) . ($ gsmState) . 
cschOnGsmStateChanged) handles + , GSM.isHaaSatisfied = do + readTVar varOutboundConnectionsState <&> \case + -- See the upstream Haddocks for the exact conditions under + -- which the diffusion layer is in this state. + TrustedStateWithExternalPeers -> True + UntrustedState -> False + } + judgment <- GSM.gsmStateToLedgerJudgement <$> readTVarIO varGsmState + void $ forkLinkedThread registry "NodeKernel.GSM" $ case judgment of + TooOld -> GSM.enterPreSyncing gsm + YoungEnough -> GSM.enterCaughtUp gsm + + peerSharingAPI <- + newPeerSharingAPI + publicPeerSelectionStateVar + peerSharingRng + ps_POLICY_PEER_SHARE_STICKY_TIME + ps_POLICY_PEER_SHARE_MAX_PEERS case gnkaLoEAndGDDArgs genesisArgs of - LoEAndGDDDisabled -> pure () + LoEAndGDDDisabled -> pure () LoEAndGDDEnabled lgArgs -> do varLoEFragment <- newTVarIO $ AF.Empty AF.AnchorGenesis setGetLoEFragment @@ -306,396 +327,431 @@ initNodeKernel args@NodeKernelArgs { registry, cfg, tracers (readTVar varLoEFragment) (lgnkaLoEFragmentTVar lgArgs) - void $ forkLinkedWatcher registry "NodeKernel.GDD" $ - gddWatcher - cfg - (gddTracer tracers) - chainDB - (lgnkaGDDRateLimit lgArgs) - (readTVar varGsmState) - (cschcMap varChainSyncHandles) - varLoEFragment - - void $ forkLinkedThread registry "NodeKernel.blockForging" $ - blockForgingController st (LazySTM.takeTMVar blockForgingVar) + void $ + forkLinkedWatcher registry "NodeKernel.GDD" $ + gddWatcher + cfg + (gddTracer tracers) + chainDB + (lgnkaGDDRateLimit lgArgs) + (readTVar varGsmState) + (cschcMap varChainSyncHandles) + varLoEFragment + + void $ + forkLinkedThread registry "NodeKernel.blockForging" $ + blockForgingController st (LazySTM.takeTMVar blockForgingVar) -- Run the block fetch logic in the background. This will call -- 'addFetchedBlock' whenever a new block is downloaded. 
- void $ forkLinkedThread registry "NodeKernel.blockFetchLogic" $ - blockFetchLogic - (contramap castTraceFetchDecision $ blockFetchDecisionTracer tracers) - (contramap (fmap castTraceFetchClientState) $ blockFetchClientTracer tracers) - blockFetchInterface - fetchClientRegistry - blockFetchConfiguration - - return NodeKernel - { getChainDB = chainDB - , getMempool = mempool - , getTopLevelConfig = cfg - , getFetchClientRegistry = fetchClientRegistry - , getFetchMode = readFetchMode blockFetchInterface - , getGsmState = readTVar varGsmState - , getChainSyncHandles = varChainSyncHandles - , getPeerSharingRegistry = peerSharingRegistry - , getTracers = tracers - , setBlockForging = \a -> atomically . LazySTM.putTMVar blockForgingVar $! a - , getPeerSharingAPI = peerSharingAPI - , getOutboundConnectionsState - = varOutboundConnectionsState - , getDiffusionPipeliningSupport - , getBlockchainTime = btime - } - where - blockForgingController :: InternalState m remotePeer localPeer blk - -> STM m [BlockForging m blk] - -> m Void + void $ + forkLinkedThread registry "NodeKernel.blockFetchLogic" $ + blockFetchLogic + (contramap castTraceFetchDecision $ blockFetchDecisionTracer tracers) + (contramap (fmap castTraceFetchClientState) $ blockFetchClientTracer tracers) + blockFetchInterface + fetchClientRegistry + blockFetchConfiguration + + return + NodeKernel + { getChainDB = chainDB + , getMempool = mempool + , getTopLevelConfig = cfg + , getFetchClientRegistry = fetchClientRegistry + , getFetchMode = readFetchMode blockFetchInterface + , getGsmState = readTVar varGsmState + , getChainSyncHandles = varChainSyncHandles + , getPeerSharingRegistry = peerSharingRegistry + , getTracers = tracers + , setBlockForging = \a -> atomically . LazySTM.putTMVar blockForgingVar $! 
a + , getPeerSharingAPI = peerSharingAPI + , getOutboundConnectionsState = + varOutboundConnectionsState + , getDiffusionPipeliningSupport + , getBlockchainTime = btime + } + where + blockForgingController :: + InternalState m remotePeer localPeer blk -> + STM m [BlockForging m blk] -> + m Void blockForgingController st getBlockForging = go [] - where - go :: [Thread m Void] -> m Void - go !forgingThreads = do - blockForging <- atomically getBlockForging - traverse_ cancelThread forgingThreads - blockForging' <- traverse (forkBlockForging st) blockForging - go blockForging' + where + go :: [Thread m Void] -> m Void + go !forgingThreads = do + blockForging <- atomically getBlockForging + traverse_ cancelThread forgingThreads + blockForging' <- traverse (forkBlockForging st) blockForging + go blockForging' castTraceFetchDecision :: - forall remotePeer blk. - TraceDecisionEvent remotePeer (HeaderWithTime blk) -> TraceDecisionEvent remotePeer (Header blk) + forall remotePeer blk. + TraceDecisionEvent remotePeer (HeaderWithTime blk) -> TraceDecisionEvent remotePeer (Header blk) castTraceFetchDecision = \case - PeersFetch xs -> PeersFetch (map (fmap (second (map castPoint))) xs) -- [TraceLabelPeer peer (FetchDecision [Point header])] - PeerStarvedUs peer -> PeerStarvedUs peer + PeersFetch xs -> PeersFetch (map (fmap (second (map castPoint))) xs) -- [TraceLabelPeer peer (FetchDecision [Point header])] + PeerStarvedUs peer -> PeerStarvedUs peer castTraceFetchClientState :: - forall blk. HasHeader (Header blk) - => TraceFetchClientState (HeaderWithTime blk) -> TraceFetchClientState (Header blk) + forall blk. 
+ HasHeader (Header blk) => + TraceFetchClientState (HeaderWithTime blk) -> TraceFetchClientState (Header blk) castTraceFetchClientState = mapTraceFetchClientState hwtHeader {------------------------------------------------------------------------------- Internal node components -------------------------------------------------------------------------------} -data InternalState m addrNTN addrNTC blk = IS { - tracers :: Tracers m (ConnectionId addrNTN) addrNTC blk - , cfg :: TopLevelConfig blk - , registry :: ResourceRegistry m - , btime :: BlockchainTime m - , chainDB :: ChainDB m blk - , blockFetchInterface :: BlockFetchConsensusInterface (ConnectionId addrNTN) (HeaderWithTime blk) blk m - , fetchClientRegistry :: FetchClientRegistry (ConnectionId addrNTN) (HeaderWithTime blk) blk m - , varChainSyncHandles :: ChainSyncClientHandleCollection (ConnectionId addrNTN) m blk - , varGsmState :: StrictTVar m GSM.GsmState - , mempool :: Mempool m blk - , peerSharingRegistry :: PeerSharingRegistry addrNTN m - } +data InternalState m addrNTN addrNTC blk = IS + { tracers :: Tracers m (ConnectionId addrNTN) addrNTC blk + , cfg :: TopLevelConfig blk + , registry :: ResourceRegistry m + , btime :: BlockchainTime m + , chainDB :: ChainDB m blk + , blockFetchInterface :: + BlockFetchConsensusInterface (ConnectionId addrNTN) (HeaderWithTime blk) blk m + , fetchClientRegistry :: FetchClientRegistry (ConnectionId addrNTN) (HeaderWithTime blk) blk m + , varChainSyncHandles :: ChainSyncClientHandleCollection (ConnectionId addrNTN) m blk + , varGsmState :: StrictTVar m GSM.GsmState + , mempool :: Mempool m blk + , peerSharingRegistry :: PeerSharingRegistry addrNTN m + } initInternalState :: - forall m addrNTN addrNTC blk. 
- ( IOLike m - , Ord addrNTN - , Typeable addrNTN - , RunNode blk - ) - => NodeKernelArgs m addrNTN addrNTC blk - -> m (InternalState m addrNTN addrNTC blk) -initInternalState NodeKernelArgs { tracers, chainDB, registry, cfg - , blockFetchSize, btime - , mempoolCapacityOverride - , gsmArgs, getUseBootstrapPeers - , getDiffusionPipeliningSupport - , genesisArgs - } = do + forall m addrNTN addrNTC blk. + ( IOLike m + , Ord addrNTN + , Typeable addrNTN + , RunNode blk + ) => + NodeKernelArgs m addrNTN addrNTC blk -> + m (InternalState m addrNTN addrNTC blk) +initInternalState + NodeKernelArgs + { tracers + , chainDB + , registry + , cfg + , blockFetchSize + , btime + , mempoolCapacityOverride + , gsmArgs + , getUseBootstrapPeers + , getDiffusionPipeliningSupport + , genesisArgs + } = do varGsmState <- do - let GsmNodeKernelArgs {..} = gsmArgs - gsmState <- GSM.initializationGsmState - (atomically $ ledgerState <$> ChainDB.getCurrentLedger chainDB) - gsmDurationUntilTooOld - gsmMarkerFileView + let GsmNodeKernelArgs{..} = gsmArgs + gsmState <- + GSM.initializationGsmState + (atomically $ ledgerState <$> ChainDB.getCurrentLedger chainDB) + gsmDurationUntilTooOld + gsmMarkerFileView newTVarIO gsmState varChainSyncHandles <- atomically newChainSyncClientHandleCollection - mempool <- openMempool registry - (chainDBLedgerInterface chainDB) - (configLedger cfg) - mempoolCapacityOverride - (mempoolTracer tracers) + mempool <- + openMempool + registry + (chainDBLedgerInterface chainDB) + (configLedger cfg) + mempoolCapacityOverride + (mempoolTracer tracers) fetchClientRegistry <- newFetchClientRegistry - let readFetchMode = BlockFetchClientInterface.readFetchModeDefault - (toConsensusMode $ gnkaLoEAndGDDArgs genesisArgs) - btime - (ChainDB.getCurrentChain chainDB) - getUseBootstrapPeers - (GSM.gsmStateToLedgerJudgement <$> readTVar varGsmState) - blockFetchInterface :: BlockFetchConsensusInterface (ConnectionId addrNTN) (HeaderWithTime blk) blk m - blockFetchInterface = 
BlockFetchClientInterface.mkBlockFetchConsensusInterface - (dbfTracer tracers) - (configBlock cfg) - (BlockFetchClientInterface.defaultChainDbView chainDB) - varChainSyncHandles - blockFetchSize - readFetchMode - getDiffusionPipeliningSupport + let readFetchMode = + BlockFetchClientInterface.readFetchModeDefault + (toConsensusMode $ gnkaLoEAndGDDArgs genesisArgs) + btime + (ChainDB.getCurrentChain chainDB) + getUseBootstrapPeers + (GSM.gsmStateToLedgerJudgement <$> readTVar varGsmState) + blockFetchInterface :: + BlockFetchConsensusInterface (ConnectionId addrNTN) (HeaderWithTime blk) blk m + blockFetchInterface = + BlockFetchClientInterface.mkBlockFetchConsensusInterface + (dbfTracer tracers) + (configBlock cfg) + (BlockFetchClientInterface.defaultChainDbView chainDB) + varChainSyncHandles + blockFetchSize + readFetchMode + getDiffusionPipeliningSupport peerSharingRegistry <- newPeerSharingRegistry - return IS {..} - where + return IS{..} + where toConsensusMode :: forall a. LoEAndGDDConfig a -> ConsensusMode toConsensusMode = \case - LoEAndGDDDisabled -> PraosMode + LoEAndGDDDisabled -> PraosMode LoEAndGDDEnabled _ -> GenesisMode forkBlockForging :: - forall m addrNTN addrNTC blk. - (IOLike m, RunNode blk) - => InternalState m addrNTN addrNTC blk - -> BlockForging m blk - -> m (Thread m Void) + forall m addrNTN addrNTC blk. + (IOLike m, RunNode blk) => + InternalState m addrNTN addrNTC blk -> + BlockForging m blk -> + m (Thread m Void) forkBlockForging IS{..} blockForging = - forkLinkedWatcher registry threadLabel - $ knownSlotWatcher btime - $ \currentSlot -> withRegistry (\rr -> withEarlyExit_ $ go rr currentSlot) - where - threadLabel :: String - threadLabel = - "NodeKernel.blockForging." 
<> Text.unpack (forgeLabel blockForging) - - go :: ResourceRegistry m -> SlotNo -> WithEarlyExit m () - go reg currentSlot = do - trace $ TraceStartLeadershipCheck currentSlot - - -- Figure out which block to connect to - -- - -- Normally this will be the current block at the tip, but it may be the - -- /previous/ block, if there were multiple slot leaders - BlockContext{bcBlockNo, bcPrevPoint} <- do - eBlkCtx <- lift $ atomically $ - mkCurrentBlockContext currentSlot - <$> ChainDB.getCurrentChain chainDB - case eBlkCtx of - Right blkCtx -> return blkCtx - Left failure -> do - trace failure - exitEarly - - trace $ TraceBlockContext currentSlot bcBlockNo bcPrevPoint - - -- Get forker corresponding to bcPrevPoint - -- - -- This might fail if, in between choosing 'bcPrevPoint' and this call to - -- 'ChainDB.getReadOnlyForkerAtPoint', we switched to a fork where 'bcPrevPoint' - -- is no longer on our chain. When that happens, we simply give up on the - -- chance to produce a block. - forkerEith <- lift $ ChainDB.getReadOnlyForkerAtPoint chainDB reg (SpecificPoint bcPrevPoint) - -- Remember to close this forker before exiting! - forker <- case forkerEith of - Left _ -> do - trace $ TraceNoLedgerState currentSlot bcPrevPoint + forkLinkedWatcher registry threadLabel $ + knownSlotWatcher btime $ + \currentSlot -> withRegistry (\rr -> withEarlyExit_ $ go rr currentSlot) + where + threadLabel :: String + threadLabel = + "NodeKernel.blockForging." 
<> Text.unpack (forgeLabel blockForging) + + go :: ResourceRegistry m -> SlotNo -> WithEarlyExit m () + go reg currentSlot = do + trace $ TraceStartLeadershipCheck currentSlot + + -- Figure out which block to connect to + -- + -- Normally this will be the current block at the tip, but it may be the + -- /previous/ block, if there were multiple slot leaders + BlockContext{bcBlockNo, bcPrevPoint} <- do + eBlkCtx <- + lift $ + atomically $ + mkCurrentBlockContext currentSlot + <$> ChainDB.getCurrentChain chainDB + case eBlkCtx of + Right blkCtx -> return blkCtx + Left failure -> do + trace failure exitEarly - Right forker -> pure forker - - unticked <- lift $ atomically $ LedgerDB.roforkerGetLedgerState forker - - trace $ TraceLedgerState currentSlot bcPrevPoint - - -- We require the ticked ledger view in order to construct the ticked - -- 'ChainDepState'. - ledgerView <- - case runExcept $ forecastFor - (ledgerViewForecastAt - (configLedger cfg) - (ledgerState unticked)) - currentSlot of - Left err -> do - -- There are so many empty slots between the tip of our chain and the - -- current slot that we cannot get an ledger view anymore In - -- principle, this is no problem; we can still produce a block (we use - -- the ticked ledger state). However, we probably don't /want/ to - -- produce a block in this case; we are most likely missing a blocks - -- on our chain. - trace $ TraceNoLedgerView currentSlot err - lift $ roforkerClose forker - exitEarly - Right lv -> - return lv - - trace $ TraceLedgerView currentSlot - - -- Tick the 'ChainDepState' for the 'SlotNo' we're producing a block for. We - -- only need the ticked 'ChainDepState' to check the whether we're a leader. - -- This is much cheaper than ticking the entire 'ExtLedgerState'. 
- let tickedChainDepState :: Ticked (ChainDepState (BlockProtocol blk))
- tickedChainDepState =
- tickChainDepState
- (configConsensus cfg)
- ledgerView
- currentSlot
- (headerStateChainDep (headerState unticked))
-
- -- Check if we are the leader
- proof <- do
- shouldForge <- lift $
+
+ trace $ TraceBlockContext currentSlot bcBlockNo bcPrevPoint
+
+ -- Get forker corresponding to bcPrevPoint
+ --
+ -- This might fail if, in between choosing 'bcPrevPoint' and this call to
+ -- 'ChainDB.getReadOnlyForkerAtPoint', we switched to a fork where 'bcPrevPoint'
+ -- is no longer on our chain. When that happens, we simply give up on the
+ -- chance to produce a block.
+ forkerEith <- lift $ ChainDB.getReadOnlyForkerAtPoint chainDB reg (SpecificPoint bcPrevPoint)
+ -- Remember to close this forker before exiting!
+ forker <- case forkerEith of
+ Left _ -> do
+ trace $ TraceNoLedgerState currentSlot bcPrevPoint
+ exitEarly
+ Right forker -> pure forker
+
+ unticked <- lift $ atomically $ LedgerDB.roforkerGetLedgerState forker
+
+ trace $ TraceLedgerState currentSlot bcPrevPoint
+
+ -- We require the ticked ledger view in order to construct the ticked
+ -- 'ChainDepState'.
+ ledgerView <-
+ case runExcept $
+ forecastFor
+ ( ledgerViewForecastAt
+ (configLedger cfg)
+ (ledgerState unticked)
+ )
+ currentSlot of
+ Left err -> do
+ -- There are so many empty slots between the tip of our chain and the
+ -- current slot that we cannot get a ledger view anymore. In
+ -- principle, this is no problem; we can still produce a block (we use
+ -- the ticked ledger state). However, we probably don't /want/ to
+ -- produce a block in this case; we are most likely missing blocks
+ -- on our chain.
+ trace $ TraceNoLedgerView currentSlot err
+ lift $ roforkerClose forker
+ exitEarly
+ Right lv ->
+ return lv
+
+ trace $ TraceLedgerView currentSlot
+
+ -- Tick the 'ChainDepState' for the 'SlotNo' we're producing a block for. 
We
+ -- only need the ticked 'ChainDepState' to check whether we're a leader.
+ -- This is much cheaper than ticking the entire 'ExtLedgerState'.
+ let tickedChainDepState :: Ticked (ChainDepState (BlockProtocol blk))
+ tickedChainDepState =
+ tickChainDepState
+ (configConsensus cfg)
+ ledgerView
+ currentSlot
+ (headerStateChainDep (headerState unticked))
+
+ -- Check if we are the leader
+ proof <- do
+ shouldForge <-
+ lift $
checkShouldForge blockForging
- (contramap (TraceLabelCreds (forgeLabel blockForging))
- (forgeStateInfoTracer tracers))
+ ( contramap
+ (TraceLabelCreds (forgeLabel blockForging))
+ (forgeStateInfoTracer tracers)
+ )
cfg
currentSlot
tickedChainDepState
- case shouldForge of
- ForgeStateUpdateError err -> do
- trace $ TraceForgeStateUpdateError currentSlot err
- lift $ roforkerClose forker
- exitEarly
- CannotForge cannotForge -> do
- trace $ TraceNodeCannotForge currentSlot cannotForge
- lift $ roforkerClose forker
- exitEarly
- NotLeader -> do
- trace $ TraceNodeNotLeader currentSlot
- lift $ roforkerClose forker
- exitEarly
- ShouldForge p -> return p
-
- -- At this point we have established that we are indeed slot leader
- trace $ TraceNodeIsLeader currentSlot
-
- -- Tick the ledger state for the 'SlotNo' we're producing a block for
- let tickedLedgerState :: Ticked (LedgerState blk) DiffMK
- tickedLedgerState =
- applyChainTick
- OmitLedgerEvents
- (configLedger cfg)
- currentSlot
- (ledgerState unticked)
-
- _ <- evaluate tickedLedgerState
- trace $ TraceForgeTickedLedgerState currentSlot bcPrevPoint
-
- -- Get a snapshot of the mempool that is consistent with the ledger
- --
- -- NOTE: It is possible that due to adoption of new blocks the
- -- /current/ ledger will have changed. This doesn't matter: we will
- -- produce a block that fits onto the ledger we got above; if the
- -- ledger in the meantime changes, the block we produce here may or
- -- may not be adopted, but it won't be invalid. 
- (mempoolHash, mempoolSlotNo) <- lift $ atomically $ do - snap <- getSnapshot mempool -- only used for its tip-like information - pure (castHash $ snapshotStateHash snap, snapshotSlotNo snap) - - let readTables = fmap castLedgerTables . roforkerReadTables forker . castLedgerTables - - mempoolSnapshot <- lift $ getSnapshotFor - mempool - currentSlot - tickedLedgerState - readTables - - lift $ roforkerClose forker - - let txs = snapshotTake mempoolSnapshot - $ blockCapacityTxMeasure (configLedger cfg) tickedLedgerState - -- NB respect the capacity of the ledger state we're extending, - -- which is /not/ 'snapshotLedgerState' - - -- force the mempool's computation before the tracer event - _ <- evaluate (length txs) - _ <- evaluate mempoolHash - - trace $ TraceForgingMempoolSnapshot currentSlot bcPrevPoint mempoolHash mempoolSlotNo - - -- Actually produce the block - newBlock <- lift $ Block.forgeBlock - blockForging - cfg - bcBlockNo - currentSlot - (forgetLedgerTables tickedLedgerState) - txs - proof - - trace $ TraceForgedBlock - currentSlot - (ledgerTipPoint (ledgerState unticked)) - newBlock - (snapshotMempoolSize mempoolSnapshot) - - -- Add the block to the chain DB - let noPunish = InvalidBlockPunishment.noPunishment -- no way to punish yourself - -- Make sure that if an async exception is thrown while a block is - -- added to the chain db, we will remove txs from the mempool. - - -- 'addBlockAsync' is a non-blocking action, so `mask_` would suffice, - -- but the finalizer is a blocking operation, hence we need to use - -- 'uninterruptibleMask_' to make sure that async exceptions do not - -- interrupt it. 
- uninterruptibleMask_ $ do - result <- lift $ ChainDB.addBlockAsync chainDB noPunish newBlock - -- Block until we have processed the block - mbCurTip <- lift $ atomically $ ChainDB.blockProcessed result - - -- Check whether we adopted our block - when (mbCurTip /= SuccesfullyAddedBlock (blockPoint newBlock)) $ do - isInvalid <- lift $ atomically $ - ($ blockHash newBlock) . forgetFingerprint <$> - ChainDB.getIsInvalidBlock chainDB - case isInvalid of - Nothing -> - trace $ TraceDidntAdoptBlock currentSlot newBlock - Just reason -> do - trace $ TraceForgedInvalidBlock currentSlot newBlock reason - -- We just produced a block that is invalid according to the - -- ledger in the ChainDB, while the mempool said it is valid. - -- There is an inconsistency between the two! - -- - -- Remove all the transactions in that block, otherwise we'll - -- run the risk of forging the same invalid block again. This - -- means that we'll throw away some good transactions in the - -- process. - whenJust - (NE.nonEmpty (map (txId . txForgetValidated) txs)) - (lift . removeTxsEvenIfValid mempool) + case shouldForge of + ForgeStateUpdateError err -> do + trace $ TraceForgeStateUpdateError currentSlot err + lift $ roforkerClose forker exitEarly + CannotForge cannotForge -> do + trace $ TraceNodeCannotForge currentSlot cannotForge + lift $ roforkerClose forker + exitEarly + NotLeader -> do + trace $ TraceNodeNotLeader currentSlot + lift $ roforkerClose forker + exitEarly + ShouldForge p -> return p - -- We successfully produced /and/ adopted a block - -- - -- NOTE: we are tracing the transactions we retrieved from the Mempool, - -- not the transactions actually /in the block/. - -- The transactions in the block should be a prefix of the transactions - -- in the mempool. If this is not the case, this is a bug. 
- -- Unfortunately, we can't - -- assert this here because the ability to extract transactions from a - -- block, i.e., the @HasTxs@ class, is not implementable by all blocks, - -- e.g., @DualBlock@. - trace $ TraceAdoptedBlock currentSlot newBlock txs - - trace :: TraceForgeEvent blk -> WithEarlyExit m () - trace = - lift - . traceWith (forgeTracer tracers) - . TraceLabelCreds (forgeLabel blockForging) + -- At this point we have established that we are indeed slot leader + trace $ TraceNodeIsLeader currentSlot + + -- Tick the ledger state for the 'SlotNo' we're producing a block for + let tickedLedgerState :: Ticked (LedgerState blk) DiffMK + tickedLedgerState = + applyChainTick + OmitLedgerEvents + (configLedger cfg) + currentSlot + (ledgerState unticked) + + _ <- evaluate tickedLedgerState + trace $ TraceForgeTickedLedgerState currentSlot bcPrevPoint + + -- Get a snapshot of the mempool that is consistent with the ledger + -- + -- NOTE: It is possible that due to adoption of new blocks the + -- /current/ ledger will have changed. This doesn't matter: we will + -- produce a block that fits onto the ledger we got above; if the + -- ledger in the meantime changes, the block we produce here may or + -- may not be adopted, but it won't be invalid. + (mempoolHash, mempoolSlotNo) <- lift $ atomically $ do + snap <- getSnapshot mempool -- only used for its tip-like information + pure (castHash $ snapshotStateHash snap, snapshotSlotNo snap) + + let readTables = fmap castLedgerTables . roforkerReadTables forker . 
castLedgerTables + + mempoolSnapshot <- + lift $ + getSnapshotFor + mempool + currentSlot + tickedLedgerState + readTables + + lift $ roforkerClose forker + + let txs = + snapshotTake mempoolSnapshot $ + blockCapacityTxMeasure (configLedger cfg) tickedLedgerState + -- NB respect the capacity of the ledger state we're extending, + -- which is /not/ 'snapshotLedgerState' + + -- force the mempool's computation before the tracer event + _ <- evaluate (length txs) + _ <- evaluate mempoolHash + + trace $ TraceForgingMempoolSnapshot currentSlot bcPrevPoint mempoolHash mempoolSlotNo + + -- Actually produce the block + newBlock <- + lift $ + Block.forgeBlock + blockForging + cfg + bcBlockNo + currentSlot + (forgetLedgerTables tickedLedgerState) + txs + proof + + trace $ + TraceForgedBlock + currentSlot + (ledgerTipPoint (ledgerState unticked)) + newBlock + (snapshotMempoolSize mempoolSnapshot) + + -- Add the block to the chain DB + let noPunish = InvalidBlockPunishment.noPunishment -- no way to punish yourself + -- Make sure that if an async exception is thrown while a block is + -- added to the chain db, we will remove txs from the mempool. + + -- 'addBlockAsync' is a non-blocking action, so `mask_` would suffice, + -- but the finalizer is a blocking operation, hence we need to use + -- 'uninterruptibleMask_' to make sure that async exceptions do not + -- interrupt it. + uninterruptibleMask_ $ do + result <- lift $ ChainDB.addBlockAsync chainDB noPunish newBlock + -- Block until we have processed the block + mbCurTip <- lift $ atomically $ ChainDB.blockProcessed result + + -- Check whether we adopted our block + when (mbCurTip /= SuccesfullyAddedBlock (blockPoint newBlock)) $ do + isInvalid <- + lift $ + atomically $ + ($ blockHash newBlock) . 
forgetFingerprint + <$> ChainDB.getIsInvalidBlock chainDB + case isInvalid of + Nothing -> + trace $ TraceDidntAdoptBlock currentSlot newBlock + Just reason -> do + trace $ TraceForgedInvalidBlock currentSlot newBlock reason + -- We just produced a block that is invalid according to the + -- ledger in the ChainDB, while the mempool said it is valid. + -- There is an inconsistency between the two! + -- + -- Remove all the transactions in that block, otherwise we'll + -- run the risk of forging the same invalid block again. This + -- means that we'll throw away some good transactions in the + -- process. + whenJust + (NE.nonEmpty (map (txId . txForgetValidated) txs)) + (lift . removeTxsEvenIfValid mempool) + exitEarly + + -- We successfully produced /and/ adopted a block + -- + -- NOTE: we are tracing the transactions we retrieved from the Mempool, + -- not the transactions actually /in the block/. + -- The transactions in the block should be a prefix of the transactions + -- in the mempool. If this is not the case, this is a bug. + -- Unfortunately, we can't + -- assert this here because the ability to extract transactions from a + -- block, i.e., the @HasTxs@ class, is not implementable by all blocks, + -- e.g., @DualBlock@. + trace $ TraceAdoptedBlock currentSlot newBlock txs + + trace :: TraceForgeEvent blk -> WithEarlyExit m () + trace = + lift + . traceWith (forgeTracer tracers) + . TraceLabelCreds (forgeLabel blockForging) -- | Context required to forge a block data BlockContext blk = BlockContext - { bcBlockNo :: !BlockNo - -- ^ the block number of the block to be forged + { bcBlockNo :: !BlockNo + -- ^ the block number of the block to be forged , bcPrevPoint :: !(Point blk) - -- ^ the point of /the predecessor of/ the block - -- - -- Note that a block/header stores the hash of its predecessor but not the - -- slot. + -- ^ the point of /the predecessor of/ the block + -- + -- Note that a block/header stores the hash of its predecessor but not the + -- slot. 
} -- | Create the 'BlockContext' from the header of the previous block blockContextFromPrevHeader :: - HasHeader (Header blk) - => Header blk -> BlockContext blk + HasHeader (Header blk) => + Header blk -> BlockContext blk blockContextFromPrevHeader hdr = - -- Recall that an EBB has the same block number as its predecessor, so this - -- @succ@ is even correct when @hdr@ is an EBB. - BlockContext (succ (blockNo hdr)) (headerPoint hdr) + -- Recall that an EBB has the same block number as its predecessor, so this + -- @succ@ is even correct when @hdr@ is an EBB. + BlockContext (succ (blockNo hdr)) (headerPoint hdr) -- | Determine the 'BlockContext' for a block about to be forged from the -- current slot, ChainDB chain fragment, and ChainDB tip block number @@ -706,91 +762,94 @@ blockContextFromPrevHeader hdr = -- predecessor. If the chain is empty, then it will refer to the chain's anchor -- point, which may be genesis. mkCurrentBlockContext :: - forall blk. RunNode blk - => SlotNo - -- ^ the current slot, i.e. the slot of the block about to be forged - -> AnchoredFragment (Header blk) - -- ^ the current chain fragment - -- - -- Recall that the anchor point is the tip of the ImmutableDB. - -> Either (TraceForgeEvent blk) (BlockContext blk) - -- ^ the event records the cause of the failure + forall blk. + RunNode blk => + -- | the current slot, i.e. the slot of the block about to be forged + SlotNo -> + -- | the current chain fragment + -- + -- Recall that the anchor point is the tip of the ImmutableDB. + AnchoredFragment (Header blk) -> + -- | the event records the cause of the failure + Either (TraceForgeEvent blk) (BlockContext blk) mkCurrentBlockContext currentSlot c = case c of - Empty AF.AnchorGenesis -> - -- The chain is entirely empty. 
- Right $ BlockContext (expectedFirstBlockNo (Proxy @blk)) GenesisPoint - - Empty (AF.Anchor anchorSlot anchorHash anchorBlockNo) -> - let p :: Point blk = BlockPoint anchorSlot anchorHash - in if anchorSlot < currentSlot - then Right $ BlockContext (succ anchorBlockNo) p - else Left $ TraceSlotIsImmutable currentSlot p anchorBlockNo - - c' :> hdr -> case blockSlot hdr `compare` currentSlot of - - -- The block at the tip of our chain has a slot number /before/ the - -- current slot number. This is the common case, and we just want to - -- connect our new block to the block at the tip. - LT -> Right $ blockContextFromPrevHeader hdr - - -- The block at the tip of our chain has a slot that lies in the - -- future. Although the chain DB should not contain blocks from the - -- future, if the volatile DB contained such blocks on startup - -- (due to a node clock misconfiguration) this invariant may be - -- violated. See: https://github.com/IntersectMBO/ouroboros-consensus/blob/main/docs/website/contents/for-developers/HandlingBlocksFromTheFuture.md#handling-blocks-from-the-future - -- Also note that if the - -- system is under heavy load, it is possible (though unlikely) that - -- one or more slots have passed after @currentSlot@ that we got from - -- @onSlotChange@ and before we queried the chain DB for the block - -- at its tip. At the moment, we simply don't produce a block if this - -- happens. - - -- TODO: We may wish to produce a block here anyway, treating this - -- as similar to the @EQ@ case below, but we should be careful: - -- - -- 1. We should think about what slot number to use. - -- 2. We should be careful to distinguish between the case where we - -- need to drop a block from the chain and where we don't. - -- 3. We should be careful about slot numbers and EBBs. - -- 4. We should probably not produce a block if the system is under - -- very heavy load (e.g., if a lot of blocks have been produced - -- after @currentTime@). 
- -- - -- See - GT -> Left $ TraceBlockFromFuture currentSlot (blockSlot hdr) - - -- The block at the tip has the same slot as the block we're going to - -- produce (@currentSlot@). - EQ -> Right $ if isJust (headerIsEBB hdr) - -- We allow forging a block that is the successor of an EBB in the - -- same slot. - then blockContextFromPrevHeader hdr - -- If @hdr@ is not an EBB, then forge an alternative to @hdr@: same - -- block no and same predecessor. - else BlockContext (blockNo hdr) $ castPoint $ AF.headPoint c' + Empty AF.AnchorGenesis -> + -- The chain is entirely empty. + Right $ BlockContext (expectedFirstBlockNo (Proxy @blk)) GenesisPoint + Empty (AF.Anchor anchorSlot anchorHash anchorBlockNo) -> + let p :: Point blk = BlockPoint anchorSlot anchorHash + in if anchorSlot < currentSlot + then Right $ BlockContext (succ anchorBlockNo) p + else Left $ TraceSlotIsImmutable currentSlot p anchorBlockNo + c' :> hdr -> case blockSlot hdr `compare` currentSlot of + -- The block at the tip of our chain has a slot number /before/ the + -- current slot number. This is the common case, and we just want to + -- connect our new block to the block at the tip. + LT -> Right $ blockContextFromPrevHeader hdr + -- The block at the tip of our chain has a slot that lies in the + -- future. Although the chain DB should not contain blocks from the + -- future, if the volatile DB contained such blocks on startup + -- (due to a node clock misconfiguration) this invariant may be + -- violated. See: https://github.com/IntersectMBO/ouroboros-consensus/blob/main/docs/website/contents/for-developers/HandlingBlocksFromTheFuture.md#handling-blocks-from-the-future + -- Also note that if the + -- system is under heavy load, it is possible (though unlikely) that + -- one or more slots have passed after @currentSlot@ that we got from + -- @onSlotChange@ and before we queried the chain DB for the block + -- at its tip. At the moment, we simply don't produce a block if this + -- happens. 
+ + -- TODO: We may wish to produce a block here anyway, treating this + -- as similar to the @EQ@ case below, but we should be careful: + -- + -- 1. We should think about what slot number to use. + -- 2. We should be careful to distinguish between the case where we + -- need to drop a block from the chain and where we don't. + -- 3. We should be careful about slot numbers and EBBs. + -- 4. We should probably not produce a block if the system is under + -- very heavy load (e.g., if a lot of blocks have been produced + -- after @currentTime@). + -- + -- See + GT -> Left $ TraceBlockFromFuture currentSlot (blockSlot hdr) + -- The block at the tip has the same slot as the block we're going to + -- produce (@currentSlot@). + EQ -> + Right $ + if isJust (headerIsEBB hdr) + -- We allow forging a block that is the successor of an EBB in the + -- same slot. + then blockContextFromPrevHeader hdr + -- If @hdr@ is not an EBB, then forge an alternative to @hdr@: same + -- block no and same predecessor. + else BlockContext (blockNo hdr) $ castPoint $ AF.headPoint c' {------------------------------------------------------------------------------- TxSubmission integration -------------------------------------------------------------------------------} getMempoolReader :: - forall m blk. - ( LedgerSupportsMempool blk - , IOLike m - , HasTxId (GenTx blk) - ) - => Mempool m blk - -> TxSubmissionMempoolReader (GenTxId blk) (Validated (GenTx blk)) TicketNo m -getMempoolReader mempool = MempoolReader.TxSubmissionMempoolReader - { mempoolZeroIdx = zeroTicketNo + forall m blk. 
+ ( LedgerSupportsMempool blk + , IOLike m + , HasTxId (GenTx blk) + ) => + Mempool m blk -> + TxSubmissionMempoolReader (GenTxId blk) (Validated (GenTx blk)) TicketNo m +getMempoolReader mempool = + MempoolReader.TxSubmissionMempoolReader + { mempoolZeroIdx = zeroTicketNo , mempoolGetSnapshot = convertSnapshot <$> getSnapshot mempool } - where - convertSnapshot - :: MempoolSnapshot blk - -> MempoolReader.MempoolSnapshot (GenTxId blk) (Validated (GenTx blk)) TicketNo - convertSnapshot MempoolSnapshot { snapshotTxsAfter, snapshotLookupTx, - snapshotHasTx } = + where + convertSnapshot :: + MempoolSnapshot blk -> + MempoolReader.MempoolSnapshot (GenTxId blk) (Validated (GenTx blk)) TicketNo + convertSnapshot + MempoolSnapshot + { snapshotTxsAfter + , snapshotLookupTx + , snapshotHasTx + } = MempoolReader.MempoolSnapshot { mempoolTxIdsAfter = \idx -> [ ( txId (txForgetValidated tx) @@ -799,22 +858,23 @@ getMempoolReader mempool = MempoolReader.TxSubmissionMempoolReader ) | (tx, idx', msr) <- snapshotTxsAfter idx ] - , mempoolLookupTx = snapshotLookupTx - , mempoolHasTx = snapshotHasTx + , mempoolLookupTx = snapshotLookupTx + , mempoolHasTx = snapshotHasTx } getMempoolWriter :: - ( LedgerSupportsMempool blk - , IOLike m - , HasTxId (GenTx blk) - ) - => Mempool m blk - -> TxSubmissionMempoolWriter (GenTxId blk) (GenTx blk) TicketNo m -getMempoolWriter mempool = Inbound.TxSubmissionMempoolWriter - { Inbound.txId = txId + ( LedgerSupportsMempool blk + , IOLike m + , HasTxId (GenTx blk) + ) => + Mempool m blk -> + TxSubmissionMempoolWriter (GenTxId blk) (GenTx blk) TicketNo m +getMempoolWriter mempool = + Inbound.TxSubmissionMempoolWriter + { Inbound.txId = txId , mempoolAddTxs = \txs -> - map (txId . txForgetValidated) . mapMaybe mempoolTxAddedToMaybe <$> - addTxs mempool txs + map (txId . txForgetValidated) . 
mapMaybe mempoolTxAddedToMaybe + <$> addTxs mempool txs } {------------------------------------------------------------------------------- @@ -836,47 +896,47 @@ getMempoolWriter mempool = Inbound.TxSubmissionMempoolWriter -- pools registered in that ledger state are guaranteed to be stable. This -- justifies merging the future and current stake pools. getPeersFromCurrentLedger :: - (IOLike m, LedgerSupportsPeerSelection blk) - => NodeKernel m addrNTN addrNTC blk - -> (LedgerState blk EmptyMK -> Bool) - -> STM m (Maybe [(PoolStake, NonEmpty RelayAccessPoint)]) + (IOLike m, LedgerSupportsPeerSelection blk) => + NodeKernel m addrNTN addrNTC blk -> + (LedgerState blk EmptyMK -> Bool) -> + STM m (Maybe [(PoolStake, NonEmpty RelayAccessPoint)]) getPeersFromCurrentLedger kernel p = do - immutableLedger <- - ledgerState <$> ChainDB.getImmutableLedger (getChainDB kernel) - return $ do - guard (p immutableLedger) - return - $ map (second (fmap stakePoolRelayAccessPoint)) - $ force - $ getPeers immutableLedger + immutableLedger <- + ledgerState <$> ChainDB.getImmutableLedger (getChainDB kernel) + return $ do + guard (p immutableLedger) + return $ + map (second (fmap stakePoolRelayAccessPoint)) $ + force $ + getPeers immutableLedger -- | Like 'getPeersFromCurrentLedger' but with a \"after slot number X\" -- condition. getPeersFromCurrentLedgerAfterSlot :: - forall m blk addrNTN addrNTC . - ( IOLike m - , LedgerSupportsPeerSelection blk - , UpdateLedger blk - ) - => NodeKernel m addrNTN addrNTC blk - -> SlotNo - -> STM m (Maybe [(PoolStake, NonEmpty RelayAccessPoint)]) + forall m blk addrNTN addrNTC. 
+ ( IOLike m + , LedgerSupportsPeerSelection blk + , UpdateLedger blk + ) => + NodeKernel m addrNTN addrNTC blk -> + SlotNo -> + STM m (Maybe [(PoolStake, NonEmpty RelayAccessPoint)]) getPeersFromCurrentLedgerAfterSlot kernel slotNo = - getPeersFromCurrentLedger kernel afterSlotNo - where - afterSlotNo :: LedgerState blk mk -> Bool - afterSlotNo st = - case ledgerTipSlot st of - Origin -> False - NotOrigin tip -> tip > slotNo + getPeersFromCurrentLedger kernel afterSlotNo + where + afterSlotNo :: LedgerState blk mk -> Bool + afterSlotNo st = + case ledgerTipSlot st of + Origin -> False + NotOrigin tip -> tip > slotNo -- | Retrieve the slot of the immutable tip getImmTipSlot :: - ( IOLike m - , UpdateLedger blk - ) - => NodeKernel m addrNTN addrNTC blk - -> STM m (WithOrigin SlotNo) + ( IOLike m + , UpdateLedger blk + ) => + NodeKernel m addrNTN addrNTC blk -> + STM m (WithOrigin SlotNo) getImmTipSlot kernel = - getTipSlot + getTipSlot <$> ChainDB.getImmutableLedger (getChainDB kernel) diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/General.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/General.hs index 348d183e31..4885d31665 100644 --- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/General.hs +++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/General.hs @@ -9,13 +9,14 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE UndecidableInstances #-} -module Test.ThreadNet.General ( - PropGeneralArgs (..) +module Test.ThreadNet.General + ( PropGeneralArgs (..) , calcFinalIntersectionDepth , prop_general , prop_general_semisync , prop_inSync , runTestNetwork + -- * TestConfig , TestConfig (..) , TestConfigB (..) @@ -23,8 +24,10 @@ module Test.ThreadNet.General ( , truncateNodeJoinPlan , truncateNodeRestarts , truncateNodeTopology + -- * Expected CannotForge , noExpectedCannotForges + -- * Re-exports , ForgeEbbEnv (..) , TestOutput (..) 
@@ -32,53 +35,53 @@ module Test.ThreadNet.General ( , plainTestNodeInitialization ) where -import Control.Exception (assert) -import Control.Monad (guard) -import Control.Monad.IOSim (runSimOrThrow, setCurrentTime) -import Control.Tracer (nullTracer) -import qualified Data.Map.Strict as Map -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word (Word64) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import qualified Ouroboros.Consensus.Block.Abstract as BA -import qualified Ouroboros.Consensus.BlockchainTime as BTime -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Ledger.Extended (ExtValidationError) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.Abstract (LedgerView) -import Ouroboros.Consensus.Protocol.LeaderSchedule -import qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.Enclose (pattern FallingEdge) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Consensus.Util.RedundantConstraints -import qualified Ouroboros.Network.Mock.Chain as MockChain -import qualified System.FS.Sim.MockFS as Mock -import System.FS.Sim.MockFS (MockFS) -import Test.QuickCheck -import Test.ThreadNet.Network -import Test.ThreadNet.TxGen -import Test.ThreadNet.Util -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeRestarts -import Test.ThreadNet.Util.NodeTopology -import Test.ThreadNet.Util.Seed -import Test.Util.HardFork.Future (Future) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Orphans.IOLike () -import Test.Util.Orphans.NoThunks () -import Test.Util.QuickCheck -import Test.Util.Range -import Test.Util.Shrink (andId, dropId) -import 
Test.Util.Slots (NumSlots (..)) -import Test.Util.Time (dawnOfTime) +import Control.Exception (assert) +import Control.Monad (guard) +import Control.Monad.IOSim (runSimOrThrow, setCurrentTime) +import Control.Tracer (nullTracer) +import Data.Map.Strict qualified as Map +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Block.Abstract qualified as BA +import Ouroboros.Consensus.BlockchainTime qualified as BTime +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Ledger.Extended (ExtValidationError) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.Abstract (LedgerView) +import Ouroboros.Consensus.Protocol.LeaderSchedule +import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.Enclose (pattern FallingEdge) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Consensus.Util.RedundantConstraints +import Ouroboros.Network.Mock.Chain qualified as MockChain +import System.FS.Sim.MockFS (MockFS) +import System.FS.Sim.MockFS qualified as Mock +import Test.QuickCheck +import Test.ThreadNet.Network +import Test.ThreadNet.TxGen +import Test.ThreadNet.Util +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeRestarts +import Test.ThreadNet.Util.NodeTopology +import Test.ThreadNet.Util.Seed +import Test.Util.HardFork.Future (Future) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Orphans.IOLike () +import Test.Util.Orphans.NoThunks () +import Test.Util.QuickCheck +import Test.Util.Range +import Test.Util.Shrink (andId, dropId) +import Test.Util.Slots (NumSlots (..)) 
+import Test.Util.Time (dawnOfTime) {------------------------------------------------------------------------------- Configuring tests @@ -92,66 +95,71 @@ import Test.Util.Time (dawnOfTime) -- (each of which realizes a ledger-protocol combination) influences the -- validity of these data. data TestConfig = TestConfig - { initSeed :: Seed + { initSeed :: Seed , nodeTopology :: NodeTopology , numCoreNodes :: NumCoreNodes - , numSlots :: NumSlots - -- ^ TODO generate in function of @k@ + , numSlots :: NumSlots + -- ^ TODO generate in function of @k@ } - deriving (Show) + deriving Show truncateNodeJoinPlan :: - NodeJoinPlan -> NumCoreNodes -> (NumSlots, NumSlots) -> NodeJoinPlan + NodeJoinPlan -> NumCoreNodes -> (NumSlots, NumSlots) -> NodeJoinPlan truncateNodeJoinPlan - (NodeJoinPlan m) (NumCoreNodes n') (NumSlots t, NumSlots t') = + (NodeJoinPlan m) + (NumCoreNodes n') + (NumSlots t, NumSlots t') = NodeJoinPlan $ - -- scale by t' / t - Map.map (\(SlotNo i) -> SlotNo $ (i * t') `div` t) $ - -- discard discarded nodes - Map.filterWithKey (\(CoreNodeId nid) _ -> nid < n') - m + -- scale by t' / t + Map.map (\(SlotNo i) -> SlotNo $ (i * t') `div` t) $ + -- discard discarded nodes + Map.filterWithKey + (\(CoreNodeId nid) _ -> nid < n') + m truncateNodeTopology :: NodeTopology -> NumCoreNodes -> NodeTopology truncateNodeTopology (NodeTopology m) (NumCoreNodes n') = - NodeTopology $ Map.filterWithKey (\(CoreNodeId i) _ -> i < n') m + NodeTopology $ Map.filterWithKey (\(CoreNodeId i) _ -> i < n') m truncateNodeRestarts :: NodeRestarts -> NumSlots -> NodeRestarts truncateNodeRestarts (NodeRestarts m) (NumSlots t) = - NodeRestarts $ Map.filterWithKey (\(SlotNo s) _ -> s < t) m + NodeRestarts $ Map.filterWithKey (\(SlotNo s) _ -> s < t) m instance Arbitrary TestConfig where arbitrary = do - initSeed <- arbitrary + initSeed <- arbitrary - numCoreNodes <- arbitrary - nodeTopology <- genNodeTopology numCoreNodes + numCoreNodes <- arbitrary + nodeTopology <- genNodeTopology 
numCoreNodes - numSlots <- arbitrary - pure TestConfig + numSlots <- arbitrary + pure + TestConfig { initSeed , nodeTopology , numCoreNodes , numSlots } - shrink TestConfig - { initSeed - , nodeTopology - , numCoreNodes - , numSlots - } = + shrink + TestConfig + { initSeed + , nodeTopology + , numCoreNodes + , numSlots + } = dropId $ - [ TestConfig - { initSeed - , nodeTopology = top' - , numCoreNodes = n' - , numSlots = t' - } - | n' <- andId shrink numCoreNodes - , t' <- andId shrink numSlots - , let adjustedTop = truncateNodeTopology nodeTopology n' - , top' <- andId shrinkNodeTopology adjustedTop - ] + [ TestConfig + { initSeed + , nodeTopology = top' + , numCoreNodes = n' + , numSlots = t' + } + | n' <- andId shrink numCoreNodes + , t' <- andId shrink numSlots + , let adjustedTop = truncateNodeTopology nodeTopology n' + , top' <- andId shrinkNodeTopology adjustedTop + ] {------------------------------------------------------------------------------- Configuring tests for a specific block type @@ -167,17 +175,18 @@ instance Arbitrary TestConfig where -- absence/lateness. And 'epochSize' is here because eg the Byron ledger -- assumes a fixed epoch size of @10k@. And so on. 
data TestConfigB blk = TestConfigB - { forgeEbbEnv :: Maybe (ForgeEbbEnv blk) - , future :: Future + { forgeEbbEnv :: Maybe (ForgeEbbEnv blk) + , future :: Future , messageDelay :: CalcMessageDelay blk , nodeJoinPlan :: NodeJoinPlan , nodeRestarts :: NodeRestarts - , txGenExtra :: TxGenExtra blk - , version :: (NodeToNodeVersion, BlockNodeToNodeVersion blk) + , txGenExtra :: TxGenExtra blk + , version :: (NodeToNodeVersion, BlockNodeToNodeVersion blk) } -deriving instance (Show (TxGenExtra blk), Show (BlockNodeToNodeVersion blk)) - => Show (TestConfigB blk) +deriving instance + (Show (TxGenExtra blk), Show (BlockNodeToNodeVersion blk)) => + Show (TestConfigB blk) -- | Test configuration that depends on the block and the monad -- @@ -187,8 +196,8 @@ deriving instance (Show (TxGenExtra blk), Show (BlockNodeToNodeVersion blk)) data TestConfigMB m blk = TestConfigMB { nodeInfo :: CoreNodeId -> TestNodeInitialization m blk , mkRekeyM :: Maybe (m (RekeyM m blk)) - -- ^ 'runTestNetwork' immediately runs this action once in order to - -- initialize an 'RekeyM' value that it then reuses throughout the test + -- ^ 'runTestNetwork' immediately runs this action once in order to + -- initialize an 'RekeyM' value that it then reuses throughout the test } {------------------------------------------------------------------------------- @@ -196,111 +205,107 @@ data TestConfigMB m blk = TestConfigMB -------------------------------------------------------------------------------} -- | Thin wrapper around 'runThreadNetwork' --- runTestNetwork :: forall blk. - ( RunNode blk - , TxGen blk - , TracingConstraints blk - , HasCallStack - ) - => TestConfig - -> TestConfigB blk - -> (forall m. 
IOLike m => TestConfigMB m blk) - -> TestOutput blk -runTestNetwork TestConfig - { numCoreNodes - , numSlots - , nodeTopology - , initSeed - } TestConfigB - { forgeEbbEnv - , future - , messageDelay - , nodeJoinPlan - , nodeRestarts - , txGenExtra - , version = (networkVersion, blockVersion) - } - mkTestConfigMB - = runSimOrThrow $ do - setCurrentTime dawnOfTime - let TestConfigMB - { nodeInfo - , mkRekeyM - } = mkTestConfigMB - let systemTime = + ( RunNode blk + , TxGen blk + , TracingConstraints blk + , HasCallStack + ) => + TestConfig -> + TestConfigB blk -> + (forall m. IOLike m => TestConfigMB m blk) -> + TestOutput blk +runTestNetwork + TestConfig + { numCoreNodes + , numSlots + , nodeTopology + , initSeed + } + TestConfigB + { forgeEbbEnv + , future + , messageDelay + , nodeJoinPlan + , nodeRestarts + , txGenExtra + , version = (networkVersion, blockVersion) + } + mkTestConfigMB = + runSimOrThrow $ do + setCurrentTime dawnOfTime + let TestConfigMB + { nodeInfo + , mkRekeyM + } = mkTestConfigMB + let systemTime = BTime.defaultSystemTime (BTime.SystemStart dawnOfTime) nullTracer - runThreadNetwork systemTime ThreadNetworkArgs - { tnaForgeEbbEnv = forgeEbbEnv - , tnaFuture = future - , tnaJoinPlan = nodeJoinPlan - , tnaMessageDelay = messageDelay - , tnaNodeInfo = nodeInfo - , tnaNumCoreNodes = numCoreNodes - , tnaNumSlots = numSlots - , tnaSeed = initSeed - , tnaMkRekeyM = mkRekeyM - , tnaRestarts = nodeRestarts - , tnaTopology = nodeTopology - , tnaTxGenExtra = txGenExtra - , tnaVersion = networkVersion - , tnaBlockVersion = blockVersion - } + runThreadNetwork + systemTime + ThreadNetworkArgs + { tnaForgeEbbEnv = forgeEbbEnv + , tnaFuture = future + , tnaJoinPlan = nodeJoinPlan + , tnaMessageDelay = messageDelay + , tnaNodeInfo = nodeInfo + , tnaNumCoreNodes = numCoreNodes + , tnaNumSlots = numSlots + , tnaSeed = initSeed + , tnaMkRekeyM = mkRekeyM + , tnaRestarts = nodeRestarts + , tnaTopology = nodeTopology + , tnaTxGenExtra = txGenExtra + , tnaVersion = 
networkVersion + , tnaBlockVersion = blockVersion + } {------------------------------------------------------------------------------- Test properties -------------------------------------------------------------------------------} -- | Data about a node rejecting a block as invalid --- data BlockRejection blk = BlockRejection { brBlockHash :: !(HeaderHash blk) , brBlockSlot :: !SlotNo - , brReason :: !(ExtValidationError blk) - , brRejector :: !NodeId + , brReason :: !(ExtValidationError blk) + , brRejector :: !NodeId } - deriving (Show) + deriving Show data PropGeneralArgs blk = PropGeneralArgs - { pgaBlockProperty :: blk -> Property - -- ^ test if the block is as expected - -- - -- For example, it may fail if the block includes transactions that should - -- have expired before/when the block was forged. - -- - , pgaCountTxs :: blk -> Word64 - -- ^ the number of transactions in the block - -- + { pgaBlockProperty :: blk -> Property + -- ^ test if the block is as expected + -- + -- For example, it may fail if the block includes transactions that should + -- have expired before/when the block was forged. + , pgaCountTxs :: blk -> Word64 + -- ^ the number of transactions in the block , pgaExpectedCannotForge :: SlotNo -> NodeId -> WrapCannotForge blk -> Bool - -- ^ whether this 'CannotForge' was expected - -- - , pgaFirstBlockNo :: BlockNo - -- ^ the block number of the first proper block on the chain - -- - -- At time of writing this comment... For example, this is 1 for Byron - -- tests and 0 for mock tests. The epoch boundary block (EBB) in slot 0 - -- specifies itself as having block number 0, which implies the genesis - -- block is block number 0, and so the first proper block is number 1. For - -- the mock tests, the first proper block is block number 0. - -- - , pgaFixedMaxForkLength :: Maybe NumBlocks - -- ^ the maximum length of a unique suffix among the final chains - -- - -- If not provided, it will be crudely estimated. 
For example, this - -- estimation is known to be incorrect for PBFT; it does not anticipate - -- 'Ouroboros.Consensus.Protocol.PBFT.PBftExceededSignThreshold'. - -- - , pgaFixedSchedule :: Maybe LeaderSchedule - -- ^ the leader schedule of the nodes - -- - -- If not provided, it will be recovered from the nodes' 'Tracer' data. - -- - , pgaSecurityParam :: SecurityParam - , pgaTestConfig :: TestConfig - , pgaTestConfigB :: TestConfigB blk + -- ^ whether this 'CannotForge' was expected + , pgaFirstBlockNo :: BlockNo + -- ^ the block number of the first proper block on the chain + -- + -- At time of writing this comment... For example, this is 1 for Byron + -- tests and 0 for mock tests. The epoch boundary block (EBB) in slot 0 + -- specifies itself as having block number 0, which implies the genesis + -- block is block number 0, and so the first proper block is number 1. For + -- the mock tests, the first proper block is block number 0. + , pgaFixedMaxForkLength :: Maybe NumBlocks + -- ^ the maximum length of a unique suffix among the final chains + -- + -- If not provided, it will be crudely estimated. For example, this + -- estimation is known to be incorrect for PBFT; it does not anticipate + -- 'Ouroboros.Consensus.Protocol.PBFT.PBftExceededSignThreshold'. + , pgaFixedSchedule :: Maybe LeaderSchedule + -- ^ the leader schedule of the nodes + -- + -- If not provided, it will be recovered from the nodes' 'Tracer' data. + , pgaSecurityParam :: SecurityParam + , pgaTestConfig :: TestConfig + , pgaTestConfigB :: TestConfigB blk } -- | Expect no 'CannotForge's @@ -326,6 +331,7 @@ noExpectedCannotForges _ _ _ = False -- * No nodes are unduly unable to lead (see 'pgaExpectedCannotForge'). -- * No blocks are rejected as invalid. + -- -- Those properties are currently checked under several assumptions. If the -- nodes violate any of these assumptions, the tests will fail. 
The following @@ -403,14 +409,14 @@ noExpectedCannotForges _ _ _ = False -- assumptions about delegation certificates, update proposals, etc. prop_general :: forall blk. - ( Condense blk - , Condense (HeaderHash blk) - , Eq blk - , RunNode blk - ) - => PropGeneralArgs blk - -> TestOutput blk - -> Property + ( Condense blk + , Condense (HeaderHash blk) + , Eq blk + , RunNode blk + ) => + PropGeneralArgs blk -> + TestOutput blk -> + Property prop_general = prop_general_internal Sync -- | /Synchrony/ or /Semi-synchrony/ @@ -431,455 +437,477 @@ data Synchronicity = SemiSync | Sync -- For now, this simply disables a few 'Property's that depend on synchrony. prop_general_semisync :: forall blk. - ( Condense blk - , Condense (HeaderHash blk) - , Eq blk - , RunNode blk - ) - => PropGeneralArgs blk - -> TestOutput blk - -> Property + ( Condense blk + , Condense (HeaderHash blk) + , Eq blk + , RunNode blk + ) => + PropGeneralArgs blk -> + TestOutput blk -> + Property prop_general_semisync = prop_general_internal SemiSync prop_general_internal :: forall blk. 
- ( Condense blk - , Condense (HeaderHash blk) - , Eq blk - , RunNode blk - ) - => Synchronicity - -> PropGeneralArgs blk - -> TestOutput blk - -> Property + ( Condense blk + , Condense (HeaderHash blk) + , Eq blk + , RunNode blk + ) => + Synchronicity -> + PropGeneralArgs blk -> + TestOutput blk -> + Property prop_general_internal syncity pga testOutput = - counterexample ("nodeChains: " <> nodeChainsString) $ - counterexample ("nodeJoinPlan: " <> condense nodeJoinPlan) $ - counterexample ("nodeRestarts: " <> condense nodeRestarts) $ - counterexample ("nodeTopology: " <> condense nodeTopology) $ - counterexample ("slot-node-tipBlockNo: " <> condense tipBlockNos) $ - counterexample ("mbSchedule: " <> condense mbSchedule) $ - counterexample ("growth schedule: " <> condense growthSchedule) $ - counterexample ("actual leader schedule: " <> condense actualLeaderSchedule) $ - counterexample ("consensus expected: " <> show isConsensusExpected) $ - counterexample ("maxForkLength: " <> show maxForkLength) $ - tabulateSync "consensus expected" [show isConsensusExpected] $ - tabulate "k" [show (maxRollbacks k)] $ - tabulate ("shortestLength (k = " <> show (maxRollbacks k) <> ")") - [show (rangeK k (shortestLength nodeChains))] $ - tabulate "floor(4 * lastJoinSlot / numSlots)" [show lastJoinSlot] $ - tabulate "minimumDegreeNodeTopology" [show (minimumDegreeNodeTopology nodeTopology)] $ - tabulate "involves >=1 re-delegation" [show hasNodeRekey] $ - tabulate "average #txs/block" [show (range averageNumTxs)] $ - tabulate "updates" [unlines ("" : map (\x -> " " <> condense x) (Map.toList nodeUpdates))] $ - prop_no_BlockRejections .&&. - prop_no_unexpected_CannotForges .&&. - prop_no_invalid_blocks .&&. - prop_pipelining .&&. - propSync - ( prop_all_common_prefix maxForkLength (Map.elems nodeChains) .&&. - prop_all_growth .&&. - prop_no_unexpected_message_delays - ) .&&. 
- conjoin - [ fileHandleLeakCheck nid nodeDBs - | (nid, nodeDBs) <- Map.toList nodeOutputDBs ] - where - tabulateSync = case syncity of - Sync -> tabulate - SemiSync -> \_ _ -> id - propSync prop = case syncity of - Sync -> prop - SemiSync -> property True - - _ = keepRedundantConstraint (Proxy @(Show (LedgerView (BlockProtocol blk)))) - - PropGeneralArgs - { pgaBlockProperty = prop_valid_block - , pgaCountTxs = countTxs - , pgaExpectedCannotForge = expectedCannotForge - , pgaFirstBlockNo = firstBlockNo - , pgaFixedMaxForkLength = mbMaxForkLength - , pgaFixedSchedule = mbSchedule - , pgaSecurityParam = k - , pgaTestConfig - , pgaTestConfigB - } = pga - TestConfig - { numSlots - , nodeTopology - } = pgaTestConfig - TestConfigB - { nodeJoinPlan - , nodeRestarts - } = pgaTestConfigB - TestOutput - { testOutputNodes - , testOutputTipBlockNos - } = testOutput - - prop_no_BlockRejections = - counterexample msg $ - null brs - where - msg = - "There were unexpected block rejections: " <> - unlines (map show brs) - brs = - [ BlockRejection - { brBlockHash = h - , brBlockSlot = s - , brRejector = nid - , brReason = err - } - | (nid, no) <- Map.toList testOutputNodes - , let NodeOutput{nodeOutputInvalids} = no - , (RealPoint s h, errs) <- Map.toList nodeOutputInvalids - , err <- errs - ] - - prop_no_unexpected_CannotForges = - counterexample msg $ - Map.null cls - where - msg = "There were unexpected CannotForges: " <> show cls - cls = - Map.unionsWith (++) $ - [ Map.filter (not . null) $ - Map.mapWithKey (\s -> filter (not . 
ok s nid)) $ + counterexample ("nodeChains: " <> nodeChainsString) + $ counterexample ("nodeJoinPlan: " <> condense nodeJoinPlan) + $ counterexample ("nodeRestarts: " <> condense nodeRestarts) + $ counterexample ("nodeTopology: " <> condense nodeTopology) + $ counterexample ("slot-node-tipBlockNo: " <> condense tipBlockNos) + $ counterexample ("mbSchedule: " <> condense mbSchedule) + $ counterexample ("growth schedule: " <> condense growthSchedule) + $ counterexample ("actual leader schedule: " <> condense actualLeaderSchedule) + $ counterexample ("consensus expected: " <> show isConsensusExpected) + $ counterexample ("maxForkLength: " <> show maxForkLength) + $ tabulateSync "consensus expected" [show isConsensusExpected] + $ tabulate "k" [show (maxRollbacks k)] + $ tabulate + ("shortestLength (k = " <> show (maxRollbacks k) <> ")") + [show (rangeK k (shortestLength nodeChains))] + $ tabulate "floor(4 * lastJoinSlot / numSlots)" [show lastJoinSlot] + $ tabulate "minimumDegreeNodeTopology" [show (minimumDegreeNodeTopology nodeTopology)] + $ tabulate "involves >=1 re-delegation" [show hasNodeRekey] + $ tabulate "average #txs/block" [show (range averageNumTxs)] + $ tabulate "updates" [unlines ("" : map (\x -> " " <> condense x) (Map.toList nodeUpdates))] + $ prop_no_BlockRejections + .&&. prop_no_unexpected_CannotForges + .&&. prop_no_invalid_blocks + .&&. prop_pipelining + .&&. propSync + ( prop_all_common_prefix maxForkLength (Map.elems nodeChains) + .&&. prop_all_growth + .&&. prop_no_unexpected_message_delays + ) + .&&. 
conjoin + [ fileHandleLeakCheck nid nodeDBs + | (nid, nodeDBs) <- Map.toList nodeOutputDBs + ] + where + tabulateSync = case syncity of + Sync -> tabulate + SemiSync -> \_ _ -> id + propSync prop = case syncity of + Sync -> prop + SemiSync -> property True + + _ = keepRedundantConstraint (Proxy @(Show (LedgerView (BlockProtocol blk)))) + + PropGeneralArgs + { pgaBlockProperty = prop_valid_block + , pgaCountTxs = countTxs + , pgaExpectedCannotForge = expectedCannotForge + , pgaFirstBlockNo = firstBlockNo + , pgaFixedMaxForkLength = mbMaxForkLength + , pgaFixedSchedule = mbSchedule + , pgaSecurityParam = k + , pgaTestConfig + , pgaTestConfigB + } = pga + TestConfig + { numSlots + , nodeTopology + } = pgaTestConfig + TestConfigB + { nodeJoinPlan + , nodeRestarts + } = pgaTestConfigB + TestOutput + { testOutputNodes + , testOutputTipBlockNos + } = testOutput + + prop_no_BlockRejections = + counterexample msg $ + null brs + where + msg = + "There were unexpected block rejections: " + <> unlines (map show brs) + brs = + [ BlockRejection + { brBlockHash = h + , brBlockSlot = s + , brRejector = nid + , brReason = err + } + | (nid, no) <- Map.toList testOutputNodes + , let NodeOutput{nodeOutputInvalids} = no + , (RealPoint s h, errs) <- Map.toList nodeOutputInvalids + , err <- errs + ] + + prop_no_unexpected_CannotForges = + counterexample msg $ + Map.null cls + where + msg = "There were unexpected CannotForges: " <> show cls + cls = + Map.unionsWith (++) $ + [ Map.filter (not . null) $ + Map.mapWithKey (\s -> filter (not . 
ok s nid)) $ nodeOutputCannotForges - | (nid, no) <- Map.toList testOutputNodes - , let NodeOutput{nodeOutputCannotForges} = no - ] - ok s nid cl = - expectedCannotForge s nid (WrapCannotForge cl) - - schedule = case mbSchedule of - Nothing -> actualLeaderSchedule - Just sched -> sched - - NumBlocks maxForkLength = case mbMaxForkLength of - Nothing -> determineForkLength k nodeJoinPlan schedule - Just fl -> fl - - -- build a leader schedule which includes every node that forged unless: - -- - -- * the node rejected its own new block (eg 'PBftExceededSignThreshold') - -- - actualLeaderSchedule :: LeaderSchedule - actualLeaderSchedule = - foldl (<>) (emptyLeaderSchedule numSlots) $ - [ let NodeOutput - { nodeOutputForges - , nodeOutputInvalids - } = no - in - LeaderSchedule $ - Map.mapMaybeWithKey - (actuallyLead cid (Map.keysSet nodeOutputInvalids)) - nodeOutputForges - | (cid, no) <- Map.toList testOutputNodes + | (nid, no) <- Map.toList testOutputNodes + , let NodeOutput{nodeOutputCannotForges} = no ] - where - actuallyLead :: - NodeId - -> Set (RealPoint blk) - -> SlotNo - -> blk - -> Maybe [CoreNodeId] - actuallyLead nid invalids s b = do - cid <- case nid of - CoreId i -> Just i - RelayId _ -> Nothing - - let j = nodeIdJoinSlot nodeJoinPlan nid - guard $ j <= s - - guard $ not $ Set.member (blockRealPoint b) invalids - - pure [cid] - - -- Refine 'actualLeaderSchedule' to also ignore a leader if: - -- - -- * the node just joined in this slot (unless it's the earliest slot in - -- which any nodes joined) - -- - growthSchedule :: LeaderSchedule - growthSchedule = - LeaderSchedule $ Map.mapWithKey (\s -> filter (keep s)) mlead - where - LeaderSchedule mlead = actualLeaderSchedule - - keep s cid = - isFirstJoinSlot s - || coreNodeIdJoinSlot nodeJoinPlan cid < s - - isFirstJoinSlot s = - Just s == (snd <$> Map.lookupMin mjoin) - where - NodeJoinPlan mjoin = nodeJoinPlan - - nodeChains = nodeOutputFinalChain <$> testOutputNodes - nodeOutputDBs = nodeOutputNodeDBs <$> 
testOutputNodes - nodeUpdates = nodeOutputUpdates <$> testOutputNodes - - nodeChainsString :: String - nodeChainsString = - unlines $ ("" :) $ + ok s nid cl = + expectedCannotForge s nid (WrapCannotForge cl) + + schedule = case mbSchedule of + Nothing -> actualLeaderSchedule + Just sched -> sched + + NumBlocks maxForkLength = case mbMaxForkLength of + Nothing -> determineForkLength k nodeJoinPlan schedule + Just fl -> fl + + -- build a leader schedule which includes every node that forged unless: + -- + -- \* the node rejected its own new block (eg 'PBftExceededSignThreshold') + -- + actualLeaderSchedule :: LeaderSchedule + actualLeaderSchedule = + foldl (<>) (emptyLeaderSchedule numSlots) $ + [ let NodeOutput + { nodeOutputForges + , nodeOutputInvalids + } = no + in LeaderSchedule $ + Map.mapMaybeWithKey + (actuallyLead cid (Map.keysSet nodeOutputInvalids)) + nodeOutputForges + | (cid, no) <- Map.toList testOutputNodes + ] + where + actuallyLead :: + NodeId -> + Set (RealPoint blk) -> + SlotNo -> + blk -> + Maybe [CoreNodeId] + actuallyLead nid invalids s b = do + cid <- case nid of + CoreId i -> Just i + RelayId _ -> Nothing + + let j = nodeIdJoinSlot nodeJoinPlan nid + guard $ j <= s + + guard $ not $ Set.member (blockRealPoint b) invalids + + pure [cid] + + -- Refine 'actualLeaderSchedule' to also ignore a leader if: + -- + -- \* the node just joined in this slot (unless it's the earliest slot in + -- which any nodes joined) + -- + growthSchedule :: LeaderSchedule + growthSchedule = + LeaderSchedule $ Map.mapWithKey (\s -> filter (keep s)) mlead + where + LeaderSchedule mlead = actualLeaderSchedule + + keep s cid = + isFirstJoinSlot s + || coreNodeIdJoinSlot nodeJoinPlan cid < s + + isFirstJoinSlot s = + Just s == (snd <$> Map.lookupMin mjoin) + where + NodeJoinPlan mjoin = nodeJoinPlan + + nodeChains = nodeOutputFinalChain <$> testOutputNodes + nodeOutputDBs = nodeOutputNodeDBs <$> testOutputNodes + nodeUpdates = nodeOutputUpdates <$> testOutputNodes + + 
nodeChainsString :: String + nodeChainsString = + unlines $ + ("" :) $ map (\x -> " " <> condense x) $ - Map.toList $ fmap MockChain.headTip nodeChains + Map.toList $ + fmap MockChain.headTip nodeChains - isConsensusExpected :: Bool - isConsensusExpected = consensusExpected k nodeJoinPlan schedule + isConsensusExpected :: Bool + isConsensusExpected = consensusExpected k nodeJoinPlan schedule - fileHandleLeakCheck :: NodeId -> NodeDBs MockFS -> Property - fileHandleLeakCheck nid nodeDBs = conjoin - [ checkLeak "ImmutableDB" $ nodeDBsImm nodeDBs - , checkLeak "VolatileDB" $ nodeDBsVol nodeDBs - , checkLeak "LedgerDB" $ nodeDBsLgr nodeDBs + fileHandleLeakCheck :: NodeId -> NodeDBs MockFS -> Property + fileHandleLeakCheck nid nodeDBs = + conjoin + [ checkLeak "ImmutableDB" $ nodeDBsImm nodeDBs + , checkLeak "VolatileDB" $ nodeDBsVol nodeDBs + , checkLeak "LedgerDB" $ nodeDBsLgr nodeDBs + ] + where + checkLeak dbName fs = + counterexample + ("Node " <> show nid <> "'s " <> dbName <> " is leaking file handles") + (Mock.numOpenHandles fs === 0) + + -- in which quarter of the simulation does the last node join? + lastJoinSlot :: Maybe Word64 + lastJoinSlot = + fmap (\(SlotNo i, _) -> (4 * i) `div` t) $ + Map.maxView m + where + NumSlots t = numSlots + NodeJoinPlan m = nodeJoinPlan + + -- check for Chain Growth violations if there are no Common Prefix + -- violations + -- + -- We consider all possible non-empty intervals, so the interval span + -- @s@ varies but is always at least 1. We compute a different /speed + -- coefficient/ @τ@ for each interval under the assumption that there are + -- no message delays (ie @Δ = 0@). This is essentially a count of the + -- active slots for that interval in the refined @growthSchedule@. + -- + -- The paper defines + -- Common Growth as follows. + -- + -- \* Chain Growth (CG); with parameters τ ∈ (0, 1], s ∈ N. 
Consider the + -- chains C1, C2 possessed by two honest parties at the onset of two + -- slots sl1, sl2 with sl2 at least s slots ahead of sl1. Then it holds + -- that len(C2) − len(C1) ≥ τs. We call τ the speed coefficient. + prop_all_growth = + isConsensusExpected + `implies` conjoin + [ prop_growth (s1, max1) (s2, min2) + | ((s1, _, max1), (s2, min2, _)) <- orderedPairs extrema ] - where - checkLeak dbName fs = counterexample - ("Node " <> show nid <> "'s " <> dbName <> " is leaking file handles") - (Mock.numOpenHandles fs === 0) - - -- in which quarter of the simulation does the last node join? - lastJoinSlot :: Maybe Word64 - lastJoinSlot = - fmap (\(SlotNo i, _) -> (4 * i) `div` t) $ - Map.maxView m - where - NumSlots t = numSlots - NodeJoinPlan m = nodeJoinPlan - - -- check for Chain Growth violations if there are no Common Prefix - -- violations - -- - -- We consider all possible non-empty intervals, so the interval span - -- @s@ varies but is always at least 1. We compute a different /speed - -- coefficient/ @τ@ for each interval under the assumption that there are - -- no message delays (ie @Δ = 0@). This is essentially a count of the - -- active slots for that interval in the refined @growthSchedule@. - -- - -- The paper defines - -- Common Growth as follows. - -- - -- * Chain Growth (CG); with parameters τ ∈ (0, 1], s ∈ N. Consider the - -- chains C1, C2 possessed by two honest parties at the onset of two - -- slots sl1, sl2 with sl2 at least s slots ahead of sl1. Then it holds - -- that len(C2) − len(C1) ≥ τs. We call τ the speed coefficient. - prop_all_growth = - isConsensusExpected `implies` - conjoin - [ prop_growth (s1, max1) (s2, min2) - | ((s1, _, max1), (s2, min2, _)) <- orderedPairs extrema - ] - where - -- all pairs @(x, y)@ where @x@ precedes @y@ in the given list - orderedPairs :: [a] -> [(a, a)] - orderedPairs = \case - [] -> [] - x:ys -> foldr ((:) . 
(,) x) (orderedPairs ys) ys - - prop_growth :: (SlotNo, WithOrigin BlockNo) - -> (SlotNo, WithOrigin BlockNo) - -> Property - prop_growth (s1, b1) (s2, b2) = - counterexample (condense (s1, s2, b1, b2, numActiveSlots)) $ - nonNegativeGrowth .&&. - sufficientGrowth - where - nonNegativeGrowth = - counterexample "negative chain growth" $ - property (b2 >= b1) - - sufficientGrowth = - counterexample "insufficient chain growth" $ - property (d >= toEnum numActiveSlots) - - BlockNo d = case (b1, b2) of - (NotOrigin b1', NotOrigin b2') -> b2' - b1' - (Origin, NotOrigin b2') -> b2' + 1 - (Origin, Origin) -> 0 - (NotOrigin _, Origin) -> error "prop_growth: negative growth" - numActiveSlots = - Map.size $ - flip Map.filterWithKey (getLeaderSchedule growthSchedule) $ - \slot ls -> s1 <= slot && slot < s2 && (not . null) ls - - -- @(s, min, max)@ the minimum and maximum block number of the tip of a - -- chain at the onset of slot @s@. - extrema :: [(SlotNo, WithOrigin BlockNo, WithOrigin BlockNo)] - extrema = - [ case map snd bnos' of - [] -> (slot, Origin, Origin) - o -> (slot, minimum o, maximum o) - | (slot, bnos) <- tipBlockNos - , let bnos' = filter (joinedBefore slot . fst) bnos - ] - - joinedBefore slot nid = nodeIdJoinSlot nodeJoinPlan nid < slot - - -- swizzled 'testOutputTipBlockNos' - tipBlockNos :: [(SlotNo, [(NodeId, WithOrigin BlockNo)])] - tipBlockNos = - Map.toAscList $ - fmap Map.toAscList $ + where + -- all pairs @(x, y)@ where @x@ precedes @y@ in the given list + orderedPairs :: [a] -> [(a, a)] + orderedPairs = \case + [] -> [] + x : ys -> foldr ((:) . (,) x) (orderedPairs ys) ys + + prop_growth :: + (SlotNo, WithOrigin BlockNo) -> + (SlotNo, WithOrigin BlockNo) -> + Property + prop_growth (s1, b1) (s2, b2) = + counterexample (condense (s1, s2, b1, b2, numActiveSlots)) $ + nonNegativeGrowth + .&&. 
sufficientGrowth + where + nonNegativeGrowth = + counterexample "negative chain growth" $ + property (b2 >= b1) + + sufficientGrowth = + counterexample "insufficient chain growth" $ + property (d >= toEnum numActiveSlots) + + BlockNo d = case (b1, b2) of + (NotOrigin b1', NotOrigin b2') -> b2' - b1' + (Origin, NotOrigin b2') -> b2' + 1 + (Origin, Origin) -> 0 + (NotOrigin _, Origin) -> error "prop_growth: negative growth" + numActiveSlots = + Map.size $ + flip Map.filterWithKey (getLeaderSchedule growthSchedule) $ + \slot ls -> s1 <= slot && slot < s2 && (not . null) ls + + -- @(s, min, max)@ the minimum and maximum block number of the tip of a + -- chain at the onset of slot @s@. + extrema :: [(SlotNo, WithOrigin BlockNo, WithOrigin BlockNo)] + extrema = + [ case map snd bnos' of + [] -> (slot, Origin, Origin) + o -> (slot, minimum o, maximum o) + | (slot, bnos) <- tipBlockNos + , let bnos' = filter (joinedBefore slot . fst) bnos + ] + + joinedBefore slot nid = nodeIdJoinSlot nodeJoinPlan nid < slot + + -- swizzled 'testOutputTipBlockNos' + tipBlockNos :: [(SlotNo, [(NodeId, WithOrigin BlockNo)])] + tipBlockNos = + Map.toAscList $ + fmap Map.toAscList $ testOutputTipBlockNos - -- In the paper , a - -- /message/ carries a chain from one party to another. When a party forges - -- a block, it \"diffuses\" the chain with that block as its head by - -- sending a message to each other party (actually, to itself too, but - -- that's ultimately redundant). The adversary is able to delay each - -- message differently, so some parties may receive it before others do. - -- Once a party receives a message, the party can consider that chain for - -- selection. - -- - -- In the implementation, on the other hand, our messages are varied and - -- much more granular than a whole chain. 
We therefore observe a delay - -- analogous to the paper's /message/ /delay/ by comparing the slot in - -- which a block is added to each node's ChainDB against the slot in which - -- that block was forged. - -- - -- Since our mock network currently introduces only negligible latency - -- compared to the slot duration, we generally expect all messages to have - -- no delay: they should arrive to all nodes during the same slot in which - -- they were forged. However, some delays are expected, due to nodes - -- joining late and also due to the practicality of the ChainSync and - -- BlockFetch policies, which try to avoid /unnecessary/ header/block - -- fetches. See the relevant comments below. - -- - -- NOTE: This current property does not check for interminable message - -- delay: i.e. for blocks that were never added to some ChainDBs. It only - -- checks the slot difference once a message does arrive. This seems - -- acceptable: if there are no Common Prefix or Chain Growth violations, - -- then each message must have either arrived or ultimately been - -- irrelevant. 
- -- - prop_no_unexpected_message_delays :: HasCallStack => Property - prop_no_unexpected_message_delays = - conjoin $ - [ case p of - RealPoint sendSlot hsh -> - prop1 nid recvSlot sendSlot hsh bno - | (nid, m) <- Map.toList adds - , (recvSlot, pbnos) <- Map.toList m - , (p, bno) <- Set.toList pbnos - ] - where - -- INVARIANT: these AddBlock events are *not* for EBBs - adds = nodeOutputAdds <$> testOutputNodes - - prop1 nid recvSlot sendSlot hsh bno = - counterexample msg $ - delayOK || noDelay - where - msg = - "Unexpected message delay " <> - "(" <> "recipient: " <> condense nid <> - "," <> "expected receive slot: " - <> condense firstPossibleReception <> - "," <> "actual receive slot: " <> condense recvSlot <> - "," <> "blockHash: " <> show hsh <> - "," <> "blockNo: " <> condense (unBlockNo bno) <> - ")" - - -- a node cannot receive a block until both exist - firstPossibleReception = - nodeIdJoinSlot nodeJoinPlan nid `max` sendSlot - - noDelay = recvSlot == firstPossibleReception - - delayOK = delayOK1 || delayOK2 - - -- When a node leads in the same slot in which it joins the - -- network, it immediately forges a single block on top of Genesis; - -- this block then prevents it from fetching the network's current - -- chain if that also consists of just one block. - -- - -- NOTE This predicate is more general than that specific scenario, - -- but we don't anticipate it wholly masking any interesting cases. - delayOK1 = firstBlockNo == bno - - -- When a slot has multiple leaders, each node chooses one of the - -- mutually-exclusive forged blocks and won't fetch any of the - -- others until it's later compelled to switch to a chain - -- containing one of them - -- - -- TODO This predicate is more general than that specific scenario, - -- and should be tightened accordingly. We currently anticipate - -- that Issues #229 and #230 will handle that. 
- delayOK2 = case Map.lookup sendSlot sched of - Just (_:_:_) -> True - _ -> False - where - LeaderSchedule sched = actualLeaderSchedule - - hasNodeRekey :: Bool - hasNodeRekey = - NodeRekey `Set.member` (foldMap . foldMap) Set.singleton m - where - NodeRestarts m = nodeRestarts - - -- Average number of txs/block - averageNumTxs :: Double - averageNumTxs = - average - . map (fromIntegral . countTxs) - . concatMap MockChain.toOldestFirst - $ Map.elems nodeChains - where - average :: [Double] -> Double - average [] = 0 - average xs = sum xs / fromIntegral (length xs) - - -- The 'prop_valid_block' argument could, for example, check for no expired - -- transactions. - prop_no_invalid_blocks :: Property - prop_no_invalid_blocks = conjoin $ - [ counterexample - ("In slot " <> condense s <> ", node " <> condense nid) $ - counterexample ("forged an invalid block " <> condense blk) $ - prop_valid_block blk - | (nid, NodeOutput{nodeOutputForges}) <- Map.toList testOutputNodes - -- checking all forged blocks, even if they were never or only - -- temporarily selected. - , (s, blk) <- Map.toAscList nodeOutputForges - ] + -- In the paper , a + -- /message/ carries a chain from one party to another. When a party forges + -- a block, it \"diffuses\" the chain with that block as its head by + -- sending a message to each other party (actually, to itself too, but + -- that's ultimately redundant). The adversary is able to delay each + -- message differently, so some parties may receive it before others do. + -- Once a party receives a message, the party can consider that chain for + -- selection. + -- + -- In the implementation, on the other hand, our messages are varied and + -- much more granular than a whole chain. We therefore observe a delay + -- analogous to the paper's /message/ /delay/ by comparing the slot in + -- which a block is added to each node's ChainDB against the slot in which + -- that block was forged. 
+ -- + -- Since our mock network currently introduces only negligible latency + -- compared to the slot duration, we generally expect all messages to have + -- no delay: they should arrive to all nodes during the same slot in which + -- they were forged. However, some delays are expected, due to nodes + -- joining late and also due to the practicality of the ChainSync and + -- BlockFetch policies, which try to avoid /unnecessary/ header/block + -- fetches. See the relevant comments below. + -- + -- NOTE: This current property does not check for interminable message + -- delay: i.e. for blocks that were never added to some ChainDBs. It only + -- checks the slot difference once a message does arrive. This seems + -- acceptable: if there are no Common Prefix or Chain Growth violations, + -- then each message must have either arrived or ultimately been + -- irrelevant. + -- + prop_no_unexpected_message_delays :: HasCallStack => Property + prop_no_unexpected_message_delays = + conjoin $ + [ case p of + RealPoint sendSlot hsh -> + prop1 nid recvSlot sendSlot hsh bno + | (nid, m) <- Map.toList adds + , (recvSlot, pbnos) <- Map.toList m + , (p, bno) <- Set.toList pbnos + ] + where + -- INVARIANT: these AddBlock events are *not* for EBBs + adds = nodeOutputAdds <$> testOutputNodes + + prop1 nid recvSlot sendSlot hsh bno = + counterexample msg $ + delayOK || noDelay + where + msg = + "Unexpected message delay " + <> "(" + <> "recipient: " + <> condense nid + <> "," + <> "expected receive slot: " + <> condense firstPossibleReception + <> "," + <> "actual receive slot: " + <> condense recvSlot + <> "," + <> "blockHash: " + <> show hsh + <> "," + <> "blockNo: " + <> condense (unBlockNo bno) + <> ")" + + -- a node cannot receive a block until both exist + firstPossibleReception = + nodeIdJoinSlot nodeJoinPlan nid `max` sendSlot + + noDelay = recvSlot == firstPossibleReception + + delayOK = delayOK1 || delayOK2 + + -- When a node leads in the same slot in which it joins the + -- 
network, it immediately forges a single block on top of Genesis; + -- this block then prevents it from fetching the network's current + -- chain if that also consists of just one block. + -- + -- NOTE This predicate is more general than that specific scenario, + -- but we don't anticipate it wholly masking any interesting cases. + delayOK1 = firstBlockNo == bno + + -- When a slot has multiple leaders, each node chooses one of the + -- mutually-exclusive forged blocks and won't fetch any of the + -- others until it's later compelled to switch to a chain + -- containing one of them + -- + -- TODO This predicate is more general than that specific scenario, + -- and should be tightened accordingly. We currently anticipate + -- that Issues #229 and #230 will handle that. + delayOK2 = case Map.lookup sendSlot sched of + Just (_ : _ : _) -> True + _ -> False + where + LeaderSchedule sched = actualLeaderSchedule + + hasNodeRekey :: Bool + hasNodeRekey = + NodeRekey `Set.member` (foldMap . foldMap) Set.singleton m + where + NodeRestarts m = nodeRestarts + + -- Average number of txs/block + averageNumTxs :: Double + averageNumTxs = + average + . map (fromIntegral . countTxs) + . concatMap MockChain.toOldestFirst + $ Map.elems nodeChains + where + average :: [Double] -> Double + average [] = 0 + average xs = sum xs / fromIntegral (length xs) + + -- The 'prop_valid_block' argument could, for example, check for no expired + -- transactions. + prop_no_invalid_blocks :: Property + prop_no_invalid_blocks = + conjoin $ + [ counterexample + ("In slot " <> condense s <> ", node " <> condense nid) + $ counterexample ("forged an invalid block " <> condense blk) + $ prop_valid_block blk + | (nid, NodeOutput{nodeOutputForges}) <- Map.toList testOutputNodes + , -- checking all forged blocks, even if they were never or only + -- temporarily selected. + (s, blk) <- Map.toAscList nodeOutputForges + ] - -- Check that all self-issued blocks are pipelined. 
- prop_pipelining :: Property - prop_pipelining = case syncity of - -- See #545 for why this is trivially true - SemiSync -> property True - Sync -> conjoin + -- Check that all self-issued blocks are pipelined. + prop_pipelining :: Property + prop_pipelining = case syncity of + -- See #545 for why this is trivially true + SemiSync -> property True + Sync -> + conjoin [ counterexample ("Node " <> condense nid <> " did not pipeline") $ - counterexample ("some of its blocks forged as the sole slot leader:") $ - counterexample (condense forgedButNotPipelined) $ - Set.null forgedButNotPipelined - | (nid, NodeOutput - { nodeOutputForges - , nodePipeliningEvents - }) <- Map.toList testOutputNodes + counterexample ("some of its blocks forged as the sole slot leader:") $ + counterexample (condense forgedButNotPipelined) $ + Set.null forgedButNotPipelined + | ( nid + , NodeOutput + { nodeOutputForges + , nodePipeliningEvents + } + ) <- + Map.toList testOutputNodes , CoreId cnid <- [nid] - , let tentativePoints = Set.fromList - [ headerPoint hdr - | ChainDB.SetTentativeHeader hdr FallingEdge <- nodePipeliningEvents - ] - forgedAsSoleLeaderPoints = Set.fromList $ - [ blockPoint blk - | blk <- Map.elems nodeOutputForges - , let s = blockSlot blk - NodeRestarts nrs = nodeRestarts - , getLeaderSchedule actualLeaderSchedule Map.! s == [cnid] - -- When the node is restarted while it is a slot + , let tentativePoints = + Set.fromList + [ headerPoint hdr + | ChainDB.SetTentativeHeader hdr FallingEdge <- nodePipeliningEvents + ] + forgedAsSoleLeaderPoints = + Set.fromList $ + [ blockPoint blk + | blk <- Map.elems nodeOutputForges + , let s = blockSlot blk + NodeRestarts nrs = nodeRestarts + , getLeaderSchedule actualLeaderSchedule Map.! s == [cnid] + , -- When the node is restarted while it is a slot -- leader, this property is often not satisfied in -- the Byron ThreadNet tests. As diffusion -- pipelining is concerned with up-to-date, -- long-running nodes, we ignore this edge case. 
- , cnid `Map.notMember` Map.findWithDefault mempty s nrs - ] + cnid `Map.notMember` Map.findWithDefault mempty s nrs + ] forgedButNotPipelined = forgedAsSoleLeaderPoints Set.\\ tentativePoints ] @@ -892,56 +920,61 @@ prop_general_internal syncity pga testOutput = -- final chain in order to reach the final chains' common prefix? -- -- NOTE: This count excludes EBBs. -calcFinalIntersectionDepth :: forall blk. (BA.HasHeader blk) - => PropGeneralArgs blk - -> TestOutput blk - -> NumBlocks +calcFinalIntersectionDepth :: + forall blk. + BA.HasHeader blk => + PropGeneralArgs blk -> + TestOutput blk -> + NumBlocks calcFinalIntersectionDepth pga testOutput = - NumBlocks $ unBlockNo $ - case (MockChain.headBlockNo commonPrefix, maxLength) of - (BA.Origin, BA.Origin) -> 0 - (BA.Origin, BA.NotOrigin b) -> 1 + b - pgaFirstBlockNo - (BA.NotOrigin{}, BA.Origin) -> error "impossible" - (BA.NotOrigin cp, BA.NotOrigin b) -> - assert (b >= cp) $ -- guaranteed by the foldl below - b - cp - where - PropGeneralArgs{pgaFirstBlockNo} = pga - TestOutput{testOutputNodes} = testOutput - - -- length of longest chain - maxLength :: BA.WithOrigin BlockNo - -- the common prefix - commonPrefix :: MockChain.Chain blk - (maxLength, commonPrefix) = - case map prj $ Map.toList testOutputNodes of - [] -> (BA.Origin, MockChain.Genesis) - x:xs -> foldl combine x xs - where - prj (_nid, NodeOutput{nodeOutputFinalChain}) = (d, c) - where - d = MockChain.headBlockNo nodeOutputFinalChain - c = nodeOutputFinalChain - - combine (dl, cl) (dr, cr) = (max dl dr, chainCommonPrefix cl cr) + NumBlocks $ + unBlockNo $ + case (MockChain.headBlockNo commonPrefix, maxLength) of + (BA.Origin, BA.Origin) -> 0 + (BA.Origin, BA.NotOrigin b) -> 1 + b - pgaFirstBlockNo + (BA.NotOrigin{}, BA.Origin) -> error "impossible" + (BA.NotOrigin cp, BA.NotOrigin b) -> + assert (b >= cp) $ -- guaranteed by the foldl below + b - cp + where + PropGeneralArgs{pgaFirstBlockNo} = pga + TestOutput{testOutputNodes} = testOutput + + -- length 
of longest chain + maxLength :: BA.WithOrigin BlockNo + -- the common prefix + commonPrefix :: MockChain.Chain blk + (maxLength, commonPrefix) = + case map prj $ Map.toList testOutputNodes of + [] -> (BA.Origin, MockChain.Genesis) + x : xs -> foldl combine x xs + where + prj (_nid, NodeOutput{nodeOutputFinalChain}) = (d, c) + where + d = MockChain.headBlockNo nodeOutputFinalChain + c = nodeOutputFinalChain + + combine (dl, cl) (dr, cr) = (max dl dr, chainCommonPrefix cl cr) -- | All final chains have the same block number -prop_inSync :: forall blk. (BA.HasHeader blk) - => TestOutput blk -> Property +prop_inSync :: + forall blk. + BA.HasHeader blk => + TestOutput blk -> Property prop_inSync testOutput = - counterexample (show lengths) $ + counterexample (show lengths) $ counterexample "the nodes' final chains have different block numbers" $ - property $ - case lengths of - [] -> False - l:ls -> all (== l) ls - where - TestOutput{testOutputNodes} = testOutput - - -- the length of each final chain - lengths :: [BA.WithOrigin BlockNo] - lengths = - [ MockChain.headBlockNo nodeOutputFinalChain - | (_nid, no) <- Map.toList testOutputNodes - , let NodeOutput{nodeOutputFinalChain} = no - ] + property $ + case lengths of + [] -> False + l : ls -> all (== l) ls + where + TestOutput{testOutputNodes} = testOutput + + -- the length of each final chain + lengths :: [BA.WithOrigin BlockNo] + lengths = + [ MockChain.headBlockNo nodeOutputFinalChain + | (_nid, no) <- Map.toList testOutputNodes + , let NodeOutput{nodeOutputFinalChain} = no + ] diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Network.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Network.hs index 0ef7e84bba..c16bc548c5 100644 --- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Network.hs +++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Network.hs @@ -17,8 +17,8 @@ {-# LANGUAGE 
UndecidableInstances #-} -- | Setup network -module Test.ThreadNet.Network ( - CalcMessageDelay (..) +module Test.ThreadNet.Network + ( CalcMessageDelay (..) , ForgeEbbEnv (..) , RekeyM , TestNodeInitialization (..) @@ -27,132 +27,145 @@ module Test.ThreadNet.Network ( , noCalcMessageDelay , plainTestNodeInitialization , runThreadNetwork + -- * Tracers , MiniProtocolFatalException (..) , MiniProtocolState (..) + -- * Test Output , NodeDBs (..) , NodeOutput (..) , TestOutput (..) ) where -import Cardano.Network.PeerSelection.Bootstrap - (UseBootstrapPeers (..)) -import Codec.CBOR.Read (DeserialiseFailure) -import qualified Control.Concurrent.Class.MonadSTM as MonadSTM -import Control.Concurrent.Class.MonadSTM.Strict (newTMVar) -import qualified Control.Exception as Exn -import Control.Monad -import Control.Monad.Class.MonadTime.SI (MonadTime) -import Control.Monad.Class.MonadTimer.SI (MonadTimer) -import qualified Control.Monad.Except as Exc -import Control.ResourceRegistry -import Control.Tracer -import qualified Data.ByteString.Lazy as Lazy -import Data.Functor.Contravariant ((>$<)) -import Data.Functor.Identity (Identity) -import qualified Data.List as List -import qualified Data.List.NonEmpty as NE -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Set (Set) -import qualified Data.Set as Set -import qualified Data.Typeable as Typeable -import Data.Void (Void) -import GHC.Stack -import Network.TypedProtocol.Codec (AnyMessage (..), CodecFailure, - mapFailureCodec) -import qualified Network.TypedProtocol.Codec as Codec -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import 
Ouroboros.Consensus.Mempool -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CSClient -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck as HistoricityCheck -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck as InFutureCheck -import qualified Ouroboros.Consensus.Network.NodeToNode as NTN -import Ouroboros.Consensus.Node.ExitPolicy -import Ouroboros.Consensus.Node.Genesis -import qualified Ouroboros.Consensus.Node.GSM as GSM -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Tracers -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.NodeKernel as NodeKernel -import Ouroboros.Consensus.Protocol.Abstract -import qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB -import qualified Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment as InvalidBlockPunishment -import Ouroboros.Consensus.Storage.ChainDB.Impl.Args -import Ouroboros.Consensus.Storage.ChainDB.Impl.Types -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import Ouroboros.Consensus.Util.Assert -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.Enclose (pattern FallingEdge) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Consensus.Util.RedundantConstraints -import Ouroboros.Consensus.Util.STM -import Ouroboros.Consensus.Util.Time -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.BlockFetch (BlockFetchConfiguration (..), - TraceLabelPeer (..)) -import Ouroboros.Network.Channel -import Ouroboros.Network.ControlMessage 
(ControlMessage (..)) -import Ouroboros.Network.Mock.Chain (Chain (Genesis)) -import Ouroboros.Network.NodeToNode (ConnectionId (..), - ExpandedInitiatorContext (..), IsBigLedgerPeer (..), - MiniProtocolParameters (..), ResponderContext (..)) -import Ouroboros.Network.PeerSelection.Governor - (makePublicPeerSelectionStateVar) -import Ouroboros.Network.PeerSelection.PeerMetric (nullMetric) -import Ouroboros.Network.Point (WithOrigin (..)) -import qualified Ouroboros.Network.Protocol.ChainSync.Type as CS -import Ouroboros.Network.Protocol.KeepAlive.Type -import Ouroboros.Network.Protocol.Limits (waitForever) -import Ouroboros.Network.Protocol.LocalStateQuery.Type -import Ouroboros.Network.Protocol.PeerSharing.Type (PeerSharing) -import Ouroboros.Network.Protocol.TxSubmission2.Type -import qualified System.FS.Sim.MockFS as Mock -import System.FS.Sim.MockFS (MockFS) -import System.Random (mkStdGen, split) -import Test.ThreadNet.TxGen -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeRestarts -import Test.ThreadNet.Util.NodeTopology -import Test.ThreadNet.Util.Seed -import Test.Util.ChainDB -import qualified Test.Util.HardFork.Future as HFF -import Test.Util.HardFork.Future (Future) -import qualified Test.Util.HardFork.OracularClock as OracularClock -import Test.Util.HardFork.OracularClock (OracularClock (..)) -import Test.Util.Slots (NumSlots (..)) -import Test.Util.Time -import Test.Util.Tracer +import Cardano.Network.PeerSelection.Bootstrap + ( UseBootstrapPeers (..) 
+ ) +import Codec.CBOR.Read (DeserialiseFailure) +import Control.Concurrent.Class.MonadSTM qualified as MonadSTM +import Control.Concurrent.Class.MonadSTM.Strict (newTMVar) +import Control.Exception qualified as Exn +import Control.Monad +import Control.Monad.Class.MonadTime.SI (MonadTime) +import Control.Monad.Class.MonadTimer.SI (MonadTimer) +import Control.Monad.Except qualified as Exc +import Control.ResourceRegistry +import Control.Tracer +import Data.ByteString.Lazy qualified as Lazy +import Data.Functor.Contravariant ((>$<)) +import Data.Functor.Identity (Identity) +import Data.List qualified as List +import Data.List.NonEmpty qualified as NE +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Typeable qualified as Typeable +import Data.Void (Void) +import GHC.Stack +import Network.TypedProtocol.Codec + ( AnyMessage (..) + , CodecFailure + , mapFailureCodec + ) +import Network.TypedProtocol.Codec qualified as Codec +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Mempool +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CSClient +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck qualified as HistoricityCheck +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck qualified as InFutureCheck +import Ouroboros.Consensus.Network.NodeToNode qualified as NTN +import Ouroboros.Consensus.Node.ExitPolicy +import Ouroboros.Consensus.Node.GSM qualified as GSM +import Ouroboros.Consensus.Node.Genesis +import Ouroboros.Consensus.Node.InitStorage +import 
Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Tracers +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.NodeKernel as NodeKernel +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment qualified as InvalidBlockPunishment +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args +import Ouroboros.Consensus.Storage.ChainDB.Impl.Types +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.LedgerDB as LedgerDB +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import Ouroboros.Consensus.Util.Assert +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.Enclose (pattern FallingEdge) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Consensus.Util.RedundantConstraints +import Ouroboros.Consensus.Util.STM +import Ouroboros.Consensus.Util.Time +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.BlockFetch + ( BlockFetchConfiguration (..) + , TraceLabelPeer (..) + ) +import Ouroboros.Network.Channel +import Ouroboros.Network.ControlMessage (ControlMessage (..)) +import Ouroboros.Network.Mock.Chain (Chain (Genesis)) +import Ouroboros.Network.NodeToNode + ( ConnectionId (..) + , ExpandedInitiatorContext (..) + , IsBigLedgerPeer (..) + , MiniProtocolParameters (..) + , ResponderContext (..) 
+ ) +import Ouroboros.Network.PeerSelection.Governor + ( makePublicPeerSelectionStateVar + ) +import Ouroboros.Network.PeerSelection.PeerMetric (nullMetric) +import Ouroboros.Network.Point (WithOrigin (..)) +import Ouroboros.Network.Protocol.ChainSync.Type qualified as CS +import Ouroboros.Network.Protocol.KeepAlive.Type +import Ouroboros.Network.Protocol.Limits (waitForever) +import Ouroboros.Network.Protocol.LocalStateQuery.Type +import Ouroboros.Network.Protocol.PeerSharing.Type (PeerSharing) +import Ouroboros.Network.Protocol.TxSubmission2.Type +import System.FS.Sim.MockFS (MockFS) +import System.FS.Sim.MockFS qualified as Mock +import System.Random (mkStdGen, split) +import Test.ThreadNet.TxGen +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeRestarts +import Test.ThreadNet.Util.NodeTopology +import Test.ThreadNet.Util.Seed +import Test.Util.ChainDB +import Test.Util.HardFork.Future (Future) +import Test.Util.HardFork.Future qualified as HFF +import Test.Util.HardFork.OracularClock (OracularClock (..)) +import Test.Util.HardFork.OracularClock qualified as OracularClock +import Test.Util.Slots (NumSlots (..)) +import Test.Util.Time +import Test.Util.Tracer -- | How to forge an EBB --- data ForgeEbbEnv blk = ForgeEbbEnv { forgeEBB :: - TopLevelConfig blk - -> SlotNo - -- EBB slot - -> BlockNo - -- EBB block number (i.e. that of its predecessor) - -> ChainHash blk - -- EBB predecessor's hash - -> blk + TopLevelConfig blk -> + SlotNo -> + -- EBB slot + BlockNo -> + -- EBB block number (i.e. that of its predecessor) + ChainHash blk -> + -- EBB predecessor's hash + blk } + instance Show (ForgeEbbEnv blk) where showsPrec p _ = showParen (p > 10) $ showString "ForgeEbbEnv _" @@ -161,38 +174,39 @@ instance Show (ForgeEbbEnv blk) where -- When there is a 'NodeRekey' scheduled in the 'NodeRestarts', the test node -- will restart and use 'tnaRekeyM' to compute its new 'ProtocolInfo'. 
type RekeyM m blk = - CoreNodeId - -> ProtocolInfo blk - -> m [BlockForging m blk] - -> SlotNo - -- ^ The slot in which the node is rekeying - -> (SlotNo -> m EpochNo) - -- ^ Which epoch the slot is in - -> m (TestNodeInitialization m blk) - -- ^ 'tniProtocolInfo' should include new delegation cert/operational key, - -- and 'tniCrucialTxs' should include the new delegation certificate - -- transaction + CoreNodeId -> + ProtocolInfo blk -> + m [BlockForging m blk] -> + -- | The slot in which the node is rekeying + SlotNo -> + -- | Which epoch the slot is in + (SlotNo -> m EpochNo) -> + -- | 'tniProtocolInfo' should include new delegation cert/operational key, + -- and 'tniCrucialTxs' should include the new delegation certificate + -- transaction + m (TestNodeInitialization m blk) -- | Data used when starting/restarting a node data TestNodeInitialization m blk = TestNodeInitialization - { tniCrucialTxs :: [GenTx blk] - -- ^ these transactions are added immediately and repeatedly (whenever the - -- 'ledgerTipSlot' changes) - -- - -- In particular, a leading node's crucial transactions must (if valid) - -- enter its mempool each slot /before/ the node takes the mempool snapshot - -- that determines which transactions will be included in the block it's - -- about to forge. + { tniCrucialTxs :: [GenTx blk] + -- ^ these transactions are added immediately and repeatedly (whenever the + -- 'ledgerTipSlot' changes) + -- + -- In particular, a leading node's crucial transactions must (if valid) + -- enter its mempool each slot /before/ the node takes the mempool snapshot + -- that determines which transactions will be included in the block it's + -- about to forge. 
, tniProtocolInfo :: ProtocolInfo blk , tniBlockForging :: m [BlockForging m blk] } plainTestNodeInitialization :: - ProtocolInfo blk - -> m [BlockForging m blk] - -> TestNodeInitialization m blk -plainTestNodeInitialization pInfo blockForging = TestNodeInitialization - { tniCrucialTxs = [] + ProtocolInfo blk -> + m [BlockForging m blk] -> + TestNodeInitialization m blk +plainTestNodeInitialization pInfo blockForging = + TestNodeInitialization + { tniCrucialTxs = [] , tniProtocolInfo = pInfo , tniBlockForging = blockForging } @@ -203,8 +217,9 @@ plainTestNodeInitialization pInfo blockForging = TestNodeInitialization -- forge slot of the header it is carrying. -- -- It may depend on the @(sender, recipient)@, the current slot, and header. -newtype CalcMessageDelay blk = CalcMessageDelay - ((CoreNodeId, CoreNodeId) -> SlotNo -> Header blk -> NumSlots) +newtype CalcMessageDelay blk + = CalcMessageDelay + ((CoreNodeId, CoreNodeId) -> SlotNo -> Header blk -> NumSlots) noCalcMessageDelay :: CalcMessageDelay blk noCalcMessageDelay = CalcMessageDelay $ \_ _ _ -> NumSlots 0 @@ -214,21 +229,20 @@ instance Show (CalcMessageDelay blk) where show _ = "_CalcMessageDelay" -- | Parameters for the test node net --- data ThreadNetworkArgs m blk = ThreadNetworkArgs - { tnaForgeEbbEnv :: Maybe (ForgeEbbEnv blk) - , tnaFuture :: Future - , tnaJoinPlan :: NodeJoinPlan - , tnaNodeInfo :: CoreNodeId -> TestNodeInitialization m blk + { tnaForgeEbbEnv :: Maybe (ForgeEbbEnv blk) + , tnaFuture :: Future + , tnaJoinPlan :: NodeJoinPlan + , tnaNodeInfo :: CoreNodeId -> TestNodeInitialization m blk , tnaNumCoreNodes :: NumCoreNodes - , tnaNumSlots :: NumSlots + , tnaNumSlots :: NumSlots , tnaMessageDelay :: CalcMessageDelay blk - , tnaSeed :: Seed - , tnaMkRekeyM :: Maybe (m (RekeyM m blk)) - , tnaRestarts :: NodeRestarts - , tnaTopology :: NodeTopology - , tnaTxGenExtra :: TxGenExtra blk - , tnaVersion :: NodeToNodeVersion + , tnaSeed :: Seed + , tnaMkRekeyM :: Maybe (m (RekeyM m blk)) + , 
tnaRestarts :: NodeRestarts + , tnaTopology :: NodeTopology + , tnaTxGenExtra :: TxGenExtra blk + , tnaVersion :: NodeToNodeVersion , tnaBlockVersion :: BlockNodeToNodeVersion blk } @@ -248,17 +262,16 @@ data ThreadNetworkArgs m blk = ThreadNetworkArgs -- \"node\", which can mean either \"vertex\" or \"node instance\". We take -- more care than usual in this module to be explicit, but still often rely on -- context. --- data VertexStatus m blk - = VDown (Chain blk) (LedgerState blk EmptyMK) - -- ^ The vertex does not currently have a node instance; its previous + = -- | The vertex does not currently have a node instance; its previous -- instance stopped with this chain and ledger state (empty/initial before -- first instance) - | VFalling - -- ^ The vertex has a node instance, but it is about to transition to + VDown (Chain blk) (LedgerState blk EmptyMK) + | -- | The vertex has a node instance, but it is about to transition to -- 'VDown' as soon as its edges transition to 'EDown'. - | VUp !(NodeKernel m NodeId Void blk) !(LimitedApp m NodeId blk) - -- ^ The vertex currently has a node instance, with these handles. + VFalling + | -- | The vertex currently has a node instance, with these handles. + VUp !(NodeKernel m NodeId Void blk) !(LimitedApp m NodeId blk) -- | A directed /edge/ denotes the \"operator of a node-to-node connection\"; -- in production, that's generally the TCP connection and the networking layers @@ -272,13 +285,12 @@ data VertexStatus m blk -- (We do not need 'EFalling' because node instances can exist without mini -- protocols; we only need 'VFalling' because mini protocol instances cannot -- exist without node instances.) --- data EdgeStatus - = EDown - -- ^ The edge does not currently have mini protocol instances. - | EUp - -- ^ The edge currently has mini protocol instances. - deriving (Eq) + = -- | The edge does not currently have mini protocol instances. + EDown + | -- | The edge currently has mini protocol instances. 
+ EUp + deriving Eq type VertexStatusVar m blk = StrictTVar m (VertexStatus m blk) type EdgeStatusVar m = StrictTVar m EdgeStatus @@ -292,32 +304,35 @@ type EdgeStatusVar m = StrictTVar m EdgeStatus -- -- We run for the specified number of blocks, then return the final state of -- each node. -runThreadNetwork :: forall m blk. - ( IOLike m - , MonadTime m - , MonadTimer m - , RunNode blk - , TxGen blk - , TracingConstraints blk - , HasCallStack - ) - => SystemTime m -> ThreadNetworkArgs m blk -> m (TestOutput blk) -runThreadNetwork systemTime ThreadNetworkArgs - { tnaForgeEbbEnv = mbForgeEbbEnv - , tnaFuture = future - , tnaJoinPlan = nodeJoinPlan - , tnaNodeInfo = mkProtocolInfo - , tnaNumCoreNodes = numCoreNodes - , tnaNumSlots = numSlots - , tnaMessageDelay = calcMessageDelay - , tnaSeed = seed - , tnaMkRekeyM = mbMkRekeyM - , tnaRestarts = nodeRestarts - , tnaTopology = nodeTopology - , tnaTxGenExtra = txGenExtra - , tnaVersion = version - , tnaBlockVersion = blockVersion - } = withRegistry $ \sharedRegistry -> do +runThreadNetwork :: + forall m blk. 
+ ( IOLike m + , MonadTime m + , MonadTimer m + , RunNode blk + , TxGen blk + , TracingConstraints blk + , HasCallStack + ) => + SystemTime m -> ThreadNetworkArgs m blk -> m (TestOutput blk) +runThreadNetwork + systemTime + ThreadNetworkArgs + { tnaForgeEbbEnv = mbForgeEbbEnv + , tnaFuture = future + , tnaJoinPlan = nodeJoinPlan + , tnaNodeInfo = mkProtocolInfo + , tnaNumCoreNodes = numCoreNodes + , tnaNumSlots = numSlots + , tnaMessageDelay = calcMessageDelay + , tnaSeed = seed + , tnaMkRekeyM = mbMkRekeyM + , tnaRestarts = nodeRestarts + , tnaTopology = nodeTopology + , tnaTxGenExtra = txGenExtra + , tnaVersion = version + , tnaBlockVersion = blockVersion + } = withRegistry $ \sharedRegistry -> do mbRekeyM <- sequence mbMkRekeyM -- This shared registry is used for 'newTestBlockchainTime' and the @@ -355,9 +370,9 @@ runThreadNetwork systemTime ThreadNetworkArgs ProtocolInfo{pInfoInitLedger} = tniProtocolInfo ExtLedgerState{ledgerState} = pInfoInitLedger v <- - uncheckedNewTVarM - $ VDown Genesis - $ forgetLedgerTables ledgerState + uncheckedNewTVarM $ + VDown Genesis $ + forgetLedgerTables ledgerState pure (nid, v) -- fork the directed edges, which also allocates their status variables @@ -381,9 +396,9 @@ runThreadNetwork systemTime ThreadNetworkArgs -- fork the vertices let nodesByJoinSlot = - List.sortOn fst $ -- sort non-descending by join slot - map (\nv@(n, _) -> (joinSlotOf n, nv)) $ - Map.toList vertexStatusVars + List.sortOn fst $ -- sort non-descending by join slot + map (\nv@(n, _) -> (joinSlotOf n, nv)) $ + Map.toList vertexStatusVars vertexInfos0 <- forM nodesByJoinSlot $ \vertexData -> do let (joinSlot, (coreNodeId, vertexStatusVar)) = vertexData @@ -431,13 +446,13 @@ runThreadNetwork systemTime ThreadNetworkArgs -- Collect all nodes' final chains vertexInfos <- atomically $ - forM vertexInfos0 $ \(coreNodeId, vertexStatusVar, readNodeInfo) -> do - readTVar vertexStatusVar >>= \case - VDown ch ldgr -> pure (coreNodeId, readNodeInfo, ch, ldgr) - _ -> 
retry + forM vertexInfos0 $ \(coreNodeId, vertexStatusVar, readNodeInfo) -> do + readTVar vertexStatusVar >>= \case + VDown ch ldgr -> pure (coreNodeId, readNodeInfo, ch, ldgr) + _ -> retry mkTestOutput vertexInfos - where + where _ = keepRedundantConstraint (Proxy @(Show (LedgerView (BlockProtocol blk)))) -- epoch size of the first era (ie the one that might have EBBs) @@ -450,17 +465,17 @@ runThreadNetwork systemTime ThreadNetworkArgs joinSlotOf :: CoreNodeId -> SlotNo joinSlotOf = coreNodeIdJoinSlot nodeJoinPlan - forkVertex - :: Maybe (RekeyM m blk) - -> OracularClock m - -> SlotNo - -> ResourceRegistry m - -> CoreNodeId - -> VertexStatusVar m blk - -> [EdgeStatusVar m] - -> NodeInfo blk (StrictTMVar m MockFS) (Tracer m) - -> StrictTVar m SlotNo - -> m () + forkVertex :: + Maybe (RekeyM m blk) -> + OracularClock m -> + SlotNo -> + ResourceRegistry m -> + CoreNodeId -> + VertexStatusVar m blk -> + [EdgeStatusVar m] -> + NodeInfo blk (StrictTMVar m MockFS) (Tracer m) -> + StrictTVar m SlotNo -> + m () forkVertex mbRekeyM clock @@ -473,25 +488,27 @@ runThreadNetwork systemTime ThreadNetworkArgs nextInstrSlotVar = void $ forkLinkedThread sharedRegistry label $ do loop 0 tniProtocolInfo tniBlockForging NodeRestart restarts0 - where + where label = "vertex-" <> condense coreNodeId TestNodeInitialization - { tniCrucialTxs - , tniProtocolInfo - , tniBlockForging - } = mkProtocolInfo coreNodeId + { tniCrucialTxs + , tniProtocolInfo + , tniBlockForging + } = mkProtocolInfo coreNodeId restarts0 :: Map SlotNo NodeRestart restarts0 = Map.mapMaybe (Map.lookup coreNodeId) m - where - NodeRestarts m = nodeRestarts - - loop :: SlotNo - -> ProtocolInfo blk - -> m [BlockForging m blk] - -> NodeRestart - -> Map SlotNo NodeRestart -> m () + where + NodeRestarts m = nodeRestarts + + loop :: + SlotNo -> + ProtocolInfo blk -> + m [BlockForging m blk] -> + NodeRestart -> + Map SlotNo NodeRestart -> + m () loop s pInfo blockForging nr rs = do -- a registry solely for the resources of 
this specific node instance (again, finalChain, finalLdgr) <- withRegistry $ \nodeRegistry -> do @@ -500,10 +517,10 @@ runThreadNetwork systemTime ThreadNetworkArgs tni' <- case (nr, mbRekeyM) of (NodeRekey, Just rekeyM) -> do rekeyM coreNodeId pInfo blockForging s (pure . HFF.futureSlotToEpoch future) - _ -> - pure $ plainTestNodeInitialization pInfo blockForging + _ -> + pure $ plainTestNodeInitialization pInfo blockForging let TestNodeInitialization - { tniCrucialTxs = crucialTxs' + { tniCrucialTxs = crucialTxs' , tniProtocolInfo = pInfo' , tniBlockForging = blockForging' } = tni' @@ -511,21 +528,22 @@ runThreadNetwork systemTime ThreadNetworkArgs -- allocate the node's internal state and fork its internal threads -- (specifically not the communication threads running the Mini -- Protocols, like the ChainSync Client) - (kernel, app) <- forkNode - coreNodeId - clock - joinSlot - nodeRegistry - pInfo' - blockForging' - nodeInfo - (crucialTxs' ++ tniCrucialTxs) + (kernel, app) <- + forkNode + coreNodeId + clock + joinSlot + nodeRegistry + pInfo' + blockForging' + nodeInfo + (crucialTxs' ++ tniCrucialTxs) atomically $ writeTVar vertexStatusVar $ VUp kernel app -- wait until this node instance should stop again <- case Map.minViewWithKey rs of -- end of test - Nothing -> do + Nothing -> do OracularClock.waitUntilDone clock pure Nothing -- onset of schedule restart slot @@ -533,8 +551,9 @@ runThreadNetwork systemTime ThreadNetworkArgs -- wait until the node should stop tooLate <- OracularClock.blockUntilSlot clock s' when tooLate $ do - error $ "unsatisfiable nodeRestarts: " - ++ show (coreNodeId, s') + error $ + "unsatisfiable nodeRestarts: " + ++ show (coreNodeId, s') -- this synchronization prevents a race with the -- instrumentation thread: we it want to record the current @@ -553,33 +572,34 @@ runThreadNetwork systemTime ThreadNetworkArgs -- assuming nothing else is changing it, read the final chain let chainDB = getChainDB kernel - ExtLedgerState{ledgerState} <- 
atomically $ - ChainDB.getCurrentLedger chainDB + ExtLedgerState{ledgerState} <- + atomically $ + ChainDB.getCurrentLedger chainDB finalChain <- ChainDB.toChain chainDB - pure (again, finalChain, ledgerState) - -- end of the node's withRegistry + -- end of the node's withRegistry - atomically $ writeTVar vertexStatusVar $ - VDown finalChain finalLdgr + atomically $ + writeTVar vertexStatusVar $ + VDown finalChain finalLdgr case again of - Nothing -> pure () + Nothing -> pure () Just (s', pInfo', blockForging', nr', rs') -> loop s' pInfo' blockForging' nr' rs' - -- | Instrumentation: record the tip's block number at the onset of the + -- \| Instrumentation: record the tip's block number at the onset of the -- slot. -- -- With such a short transaction (read a few TVars) we assume this runs (1) -- before anything else in the slot and (2) once per slot. - forkInstrumentation - :: OracularClock m - -> ResourceRegistry m - -> VertexStatusVar m blk - -> NodeInfo blk (StrictTMVar m MockFS) (Tracer m) - -> StrictTVar m SlotNo - -> m () + forkInstrumentation :: + OracularClock m -> + ResourceRegistry m -> + VertexStatusVar m blk -> + NodeInfo blk (StrictTMVar m MockFS) (Tracer m) -> + StrictTVar m SlotNo -> + m () forkInstrumentation clock registry @@ -587,37 +607,37 @@ runThreadNetwork systemTime ThreadNetworkArgs nodeInfo nextInstrSlotVar = void $ OracularClock.forkEachSlot registry clock lbl $ \s -> do - bno <- atomically $ readTVar vertexStatusVar >>= \case - VUp kernel _ -> ChainDB.getTipBlockNo (getChainDB kernel) - _ -> retry + bno <- + atomically $ + readTVar vertexStatusVar >>= \case + VUp kernel _ -> ChainDB.getTipBlockNo (getChainDB kernel) + _ -> retry traceWith nodeEventsTipBlockNos (s, bno) atomically $ modifyTVar nextInstrSlotVar $ max (succ s) - where - NodeInfo{nodeInfoEvents} = nodeInfo + where + NodeInfo{nodeInfoEvents} = nodeInfo NodeEvents{nodeEventsTipBlockNos} = nodeInfoEvents - lbl = "instrumentation" + lbl = "instrumentation" - -- | Persistently 
attempt to add the given transactions to the mempool + -- \| Persistently attempt to add the given transactions to the mempool -- every time the ledger slot changes, even if successful! -- -- If we add the transaction and then the mempools discards it for some -- reason, this thread will add it again. - -- - forkCrucialTxs - :: HasCallStack - => OracularClock m - -> SlotNo - -> ResourceRegistry m - -> (SlotNo -> STM m ()) - -> STM m (Point blk) - -> (ResourceRegistry m -> m (ReadOnlyForker' m blk)) - -> Mempool m blk - -> [GenTx blk] - -- ^ valid transactions the node should immediately propagate - -> m () + forkCrucialTxs :: + HasCallStack => + OracularClock m -> + SlotNo -> + ResourceRegistry m -> + (SlotNo -> STM m ()) -> + STM m (Point blk) -> + (ResourceRegistry m -> m (ReadOnlyForker' m blk)) -> + Mempool m blk -> + [GenTx blk] -> + -- \^ valid transactions the node should immediately propagate + m () forkCrucialTxs clock s0 registry unblockForge getTipPoint mforker mempool txs0 = do void $ forkLinkedThread registry "crucialTxs" $ withRegistry $ \reg -> do - let loop (slot, mempFp) = do forker <- mforker reg extLedger <- atomically $ roforkerGetLedgerState forker @@ -652,8 +672,9 @@ runThreadNetwork systemTime ThreadNetworkArgs -- wake up when any of those change -- -- key observation: it's OK to add the crucial txs too often - fps' <- fmap (either (either id id) id) $ - slotChanged `race` mempChanged `race` ldgrChanged + fps' <- + fmap (either (either id id) id) $ + slotChanged `race` mempChanged `race` ldgrChanged -- avoid the race in which we wake up before the mempool's -- background thread wakes up by mimicking it before we do @@ -663,113 +684,125 @@ runThreadNetwork systemTime ThreadNetworkArgs loop fps' loop (s0, []) - -- | Produce transactions every time the slot changes and submit them to + -- \| Produce transactions every time the slot changes and submit them to -- the mempool. 
- forkTxProducer :: HasCallStack - => CoreNodeId - -> ResourceRegistry m - -> OracularClock m - -> TopLevelConfig blk - -> Seed - -> (ResourceRegistry m -> m (ReadOnlyForker' m blk)) - -- ^ How to get the current ledger state - -> Mempool m blk - -> m () + forkTxProducer :: + HasCallStack => + CoreNodeId -> + ResourceRegistry m -> + OracularClock m -> + TopLevelConfig blk -> + Seed -> + (ResourceRegistry m -> m (ReadOnlyForker' m blk)) -> + -- \^ How to get the current ledger state + Mempool m blk -> + m () forkTxProducer coreNodeId registry clock cfg nodeSeed mforker mempool = - void $ OracularClock.forkEachSlot registry clock "txProducer" $ \curSlotNo -> withRegistry $ \reg -> do - forker <- mforker reg - emptySt' <- atomically $ roforkerGetLedgerState forker - let emptySt = emptySt' - doRangeQuery = roforkerRangeReadTables forker - fullLedgerSt <- fmap ledgerState $ do - fullUTxO <- doRangeQuery NoPreviousQuery - pure $! withLedgerTables emptySt fullUTxO - roforkerClose forker - -- Combine the node's seed with the current slot number, to make sure - -- we generate different transactions in each slot. 
- let txs = runGen - (nodeSeed `combineWith` unSlotNo curSlotNo) - (testGenTxs coreNodeId numCoreNodes curSlotNo cfg txGenExtra fullLedgerSt) - void $ addTxs mempool txs - - mkArgs :: ResourceRegistry m - -> TopLevelConfig blk - -> ExtLedgerState blk ValuesMK - -> Tracer m (RealPoint blk, ExtValidationError blk) - -- ^ invalid block tracer - -> Tracer m (RealPoint blk, BlockNo) - -- ^ added block tracer - -> Tracer m (RealPoint blk, BlockNo) - -- ^ block selection tracer - -> Tracer m (LedgerUpdate blk) - -- ^ ledger updates tracer - -> Tracer m (ChainDB.TracePipeliningEvent blk) - -> NodeDBs (StrictTMVar m MockFS) - -> CoreNodeId - -> ChainDbArgs Identity m blk + void $ OracularClock.forkEachSlot registry clock "txProducer" $ \curSlotNo -> withRegistry $ \reg -> do + forker <- mforker reg + emptySt' <- atomically $ roforkerGetLedgerState forker + let emptySt = emptySt' + doRangeQuery = roforkerRangeReadTables forker + fullLedgerSt <- fmap ledgerState $ do + fullUTxO <- doRangeQuery NoPreviousQuery + pure $! withLedgerTables emptySt fullUTxO + roforkerClose forker + -- Combine the node's seed with the current slot number, to make sure + -- we generate different transactions in each slot. 
+ let txs = + runGen + (nodeSeed `combineWith` unSlotNo curSlotNo) + (testGenTxs coreNodeId numCoreNodes curSlotNo cfg txGenExtra fullLedgerSt) + void $ addTxs mempool txs + + mkArgs :: + ResourceRegistry m -> + TopLevelConfig blk -> + ExtLedgerState blk ValuesMK -> + Tracer m (RealPoint blk, ExtValidationError blk) -> + -- \^ invalid block tracer + Tracer m (RealPoint blk, BlockNo) -> + -- \^ added block tracer + Tracer m (RealPoint blk, BlockNo) -> + -- \^ block selection tracer + Tracer m (LedgerUpdate blk) -> + -- \^ ledger updates tracer + Tracer m (ChainDB.TracePipeliningEvent blk) -> + NodeDBs (StrictTMVar m MockFS) -> + CoreNodeId -> + ChainDbArgs Identity m blk mkArgs registry - cfg initLedger - invalidTracer addTracer selTracer updatesTracer pipeliningTracer - nodeDBs _coreNodeId = - let args = fromMinimalChainDbArgs MinimalChainDbArgs { - mcdbTopLevelConfig = cfg - , mcdbChunkInfo = ImmutableDB.simpleChunkInfo epochSize0 - , mcdbInitLedger = initLedger - , mcdbRegistry = registry - , mcdbNodeDBs = nodeDBs - } + cfg + initLedger + invalidTracer + addTracer + selTracer + updatesTracer + pipeliningTracer + nodeDBs + _coreNodeId = + let args = + fromMinimalChainDbArgs + MinimalChainDbArgs + { mcdbTopLevelConfig = cfg + , mcdbChunkInfo = ImmutableDB.simpleChunkInfo epochSize0 + , mcdbInitLedger = initLedger + , mcdbRegistry = registry + , mcdbNodeDBs = nodeDBs + } tr = instrumentationTracer <> nullDebugTracer - in args { cdbImmDbArgs = (cdbImmDbArgs args) { - ImmutableDB.immCheckIntegrity = nodeCheckIntegrity (configStorage cfg) + in args + { cdbImmDbArgs = + (cdbImmDbArgs args) + { ImmutableDB.immCheckIntegrity = nodeCheckIntegrity (configStorage cfg) , ImmutableDB.immTracer = TraceImmutableDBEvent >$< tr } - , cdbVolDbArgs = (cdbVolDbArgs args) { - VolatileDB.volCheckIntegrity = nodeCheckIntegrity (configStorage cfg) + , cdbVolDbArgs = + (cdbVolDbArgs args) + { VolatileDB.volCheckIntegrity = nodeCheckIntegrity (configStorage cfg) , VolatileDB.volTracer = 
TraceVolatileDBEvent >$< tr } - , cdbLgrDbArgs = (cdbLgrDbArgs args) { - LedgerDB.lgrTracer = TraceLedgerDBEvent >$< tr + , cdbLgrDbArgs = + (cdbLgrDbArgs args) + { LedgerDB.lgrTracer = TraceLedgerDBEvent >$< tr } - , cdbsArgs = (cdbsArgs args) { - -- TODO: Vary cdbsGcDelay, cdbsGcInterval, cdbsBlockToAddSize + , cdbsArgs = + (cdbsArgs args) + { -- TODO: Vary cdbsGcDelay, cdbsGcInterval, cdbsBlockToAddSize cdbsGcDelay = 0 , cdbsTracer = instrumentationTracer <> nullDebugTracer } - } - where + } + where prj af = case AF.headBlockNo af of - At bno -> bno - Origin -> error "selTracer" + At bno -> bno + Origin -> error "selTracer" -- prop_general relies on this tracer instrumentationTracer = Tracer $ \case ChainDB.TraceAddBlockEvent - (ChainDB.AddBlockValidation (ChainDB.InvalidBlock e p)) - -> traceWith invalidTracer (p, e) - + (ChainDB.AddBlockValidation (ChainDB.InvalidBlock e p)) -> + traceWith invalidTracer (p, e) ChainDB.TraceAddBlockEvent - (ChainDB.AddedBlockToVolatileDB p bno IsNotEBB FallingEdge) - -> traceWith addTracer (p, bno) - + (ChainDB.AddedBlockToVolatileDB p bno IsNotEBB FallingEdge) -> + traceWith addTracer (p, bno) ChainDB.TraceAddBlockEvent - (ChainDB.AddedToCurrentChain events p _old new) - -> let (warnings, updates) = partitionLedgerEvents events in - assertWithMsg (noWarnings warnings) $ do - mapM_ (traceWith updatesTracer) updates - traceWith selTracer (ChainDB.newTipPoint p, prj new) + (ChainDB.AddedToCurrentChain events p _old new) -> + let (warnings, updates) = partitionLedgerEvents events + in assertWithMsg (noWarnings warnings) $ do + mapM_ (traceWith updatesTracer) updates + traceWith selTracer (ChainDB.newTipPoint p, prj new) ChainDB.TraceAddBlockEvent - (ChainDB.SwitchedToAFork events p _old new) - -> let (warnings, updates) = partitionLedgerEvents events in - assertWithMsg (noWarnings warnings) $ do - mapM_ (traceWith updatesTracer) updates - traceWith selTracer (ChainDB.newTipPoint p, prj new) - + (ChainDB.SwitchedToAFork events p 
_old new) -> + let (warnings, updates) = partitionLedgerEvents events + in assertWithMsg (noWarnings warnings) $ do + mapM_ (traceWith updatesTracer) updates + traceWith selTracer (ChainDB.newTipPoint p, prj new) ChainDB.TraceAddBlockEvent - (ChainDB.PipeliningEvent e) - -> traceWith pipeliningTracer e - - _ -> pure () + (ChainDB.PipeliningEvent e) -> + traceWith pipeliningTracer e + _ -> pure () -- We don't expect any ledger warnings -- (that would indicate node misconfiguration in the tests) @@ -780,22 +813,23 @@ runThreadNetwork systemTime ThreadNetworkArgs -- Augment a tracer message with the node which produces it. _decorateId :: CoreNodeId -> Tracer m String -> Tracer m String _decorateId (CoreNodeId cid) = contramap $ \s -> - show cid <> " | " <> s - - forkNode - :: HasCallStack - => CoreNodeId - -> OracularClock m - -> SlotNo - -> ResourceRegistry m - -> ProtocolInfo blk - -> m [BlockForging m blk] - -> NodeInfo blk (StrictTMVar m MockFS) (Tracer m) - -> [GenTx blk] - -- ^ valid transactions the node should immediately propagate - -> m ( NodeKernel m NodeId Void blk - , LimitedApp m NodeId blk - ) + show cid <> " | " <> s + + forkNode :: + HasCallStack => + CoreNodeId -> + OracularClock m -> + SlotNo -> + ResourceRegistry m -> + ProtocolInfo blk -> + m [BlockForging m blk] -> + NodeInfo blk (StrictTMVar m MockFS) (Tracer m) -> + [GenTx blk] -> + -- \^ valid transactions the node should immediately propagate + m + ( NodeKernel m NodeId Void blk + , LimitedApp m NodeId blk + ) forkNode coreNodeId clock joinSlot registry pInfo blockForging nodeInfo txs0 = do let ProtocolInfo{..} = pInfo @@ -806,36 +840,39 @@ runThreadNetwork systemTime ThreadNetworkArgs -- prop_general relies on these tracers let invalidTracer = nodeEventsInvalids nodeInfoEvents - updatesTracer = nodeEventsUpdates nodeInfoEvents - wrapTracer tr = Tracer $ \(p, bno) -> do + updatesTracer = nodeEventsUpdates nodeInfoEvents + wrapTracer tr = Tracer $ \(p, bno) -> do s <- 
OracularClock.getCurrentSlot clock traceWith tr (s, p, bno) - addTracer = wrapTracer $ nodeEventsAdds nodeInfoEvents - selTracer = wrapTracer $ nodeEventsSelects nodeInfoEvents - headerAddTracer = wrapTracer $ nodeEventsHeaderAdds nodeInfoEvents + addTracer = wrapTracer $ nodeEventsAdds nodeInfoEvents + selTracer = wrapTracer $ nodeEventsSelects nodeInfoEvents + headerAddTracer = wrapTracer $ nodeEventsHeaderAdds nodeInfoEvents pipeliningTracer = nodeEventsPipelining nodeInfoEvents - let chainDbArgs = mkArgs - registry - pInfoConfig pInfoInitLedger - invalidTracer - addTracer - selTracer - updatesTracer - pipeliningTracer - nodeInfoDBs - coreNodeId - chainDB <- snd <$> - allocate registry (const (ChainDB.openDB chainDbArgs)) ChainDB.closeDB + let chainDbArgs = + mkArgs + registry + pInfoConfig + pInfoInitLedger + invalidTracer + addTracer + selTracer + updatesTracer + pipeliningTracer + nodeInfoDBs + coreNodeId + chainDB <- + snd + <$> allocate registry (const (ChainDB.openDB chainDbArgs)) ChainDB.closeDB let customForgeBlock :: - BlockForging m blk - -> TopLevelConfig blk - -> BlockNo - -> SlotNo - -> TickedLedgerState blk mk - -> [Validated (GenTx blk)] - -> IsLeader (BlockProtocol blk) - -> m blk + BlockForging m blk -> + TopLevelConfig blk -> + BlockNo -> + SlotNo -> + TickedLedgerState blk mk -> + [Validated (GenTx blk)] -> + IsLeader (BlockProtocol blk) -> + m blk customForgeBlock origBlockForging cfg' currentBno currentSlot tickedLdgSt txs prf = do let currentEpoch = HFF.futureSlotToEpoch future currentSlot @@ -844,9 +881,9 @@ runThreadNetwork systemTime ThreadNetworkArgs let ebbSlot :: SlotNo ebbSlot = SlotNo $ x * y - where - EpochNo x = currentEpoch - EpochSize y = epochSize0 + where + EpochNo x = currentEpoch + EpochSize y = epochSize0 let p :: Point blk p = castPoint $ getTip tickedLdgSt @@ -854,59 +891,68 @@ runThreadNetwork systemTime ThreadNetworkArgs let needEBB = inFirstEra && NotOrigin ebbSlot > pointSlot p case mbForgeEbbEnv <* guard needEBB of 
Nothing -> - -- no EBB needed, forge without making one - forgeBlock - origBlockForging - cfg' - currentBno - currentSlot - (forgetLedgerTables tickedLdgSt) - txs - prf + -- no EBB needed, forge without making one + forgeBlock + origBlockForging + cfg' + currentBno + currentSlot + (forgetLedgerTables tickedLdgSt) + txs + prf Just forgeEbbEnv -> do - -- The EBB shares its BlockNo with its predecessor (if - -- there is one) - let ebbBno = case currentBno of - -- We assume this invariant: - -- - -- If forging of EBBs is enabled then the node - -- initialization is responsible for producing any - -- proper non-EBB blocks with block number 0. - -- - -- So this case is unreachable. - 0 -> error "Error, only node initialization can forge non-EBB with block number 0." - n -> pred n - let ebb = forgeEBB forgeEbbEnv pInfoConfig - ebbSlot ebbBno (pointHash p) - - -- fail if the EBB is invalid - -- if it is valid, we retick to the /same/ slot - let apply = applyLedgerBlock OmitLedgerEvents (configLedger pInfoConfig) - tables = emptyLedgerTables -- EBBs need no input tables - tickedLdgSt' <- case Exc.runExcept $ apply ebb (tickedLdgSt `withLedgerTables` tables) of - Left e -> Exn.throw $ JitEbbError @blk e - Right st -> pure $ applyChainTick OmitLedgerEvents - (configLedger pInfoConfig) - currentSlot - (forgetLedgerTables st) - - -- forge the block usings the ledger state that includes - -- the EBB - blk <- forgeBlock - origBlockForging - cfg' - currentBno - currentSlot - (forgetLedgerTables tickedLdgSt') - txs - prf - - -- If the EBB or the subsequent block is invalid, then the - -- ChainDB will reject it as invalid, and - -- 'Test.ThreadNet.General.prop_general' will eventually fail - -- because of a block rejection. 
- void $ ChainDB.addBlock chainDB InvalidBlockPunishment.noPunishment ebb
- pure blk
+ -- The EBB shares its BlockNo with its predecessor (if
+ -- there is one)
+ let ebbBno = case currentBno of
+ -- We assume this invariant:
+ --
+ -- If forging of EBBs is enabled then the node
+ -- initialization is responsible for producing any
+ -- proper non-EBB blocks with block number 0.
+ --
+ -- So this case is unreachable.
+ 0 -> error "Error, only node initialization can forge non-EBB with block number 0."
+ n -> pred n
+ let ebb =
+ forgeEBB
+ forgeEbbEnv
+ pInfoConfig
+ ebbSlot
+ ebbBno
+ (pointHash p)
+
+ -- fail if the EBB is invalid
+ -- if it is valid, we retick to the /same/ slot
+ let apply = applyLedgerBlock OmitLedgerEvents (configLedger pInfoConfig)
+ tables = emptyLedgerTables -- EBBs need no input tables
+ tickedLdgSt' <- case Exc.runExcept $ apply ebb (tickedLdgSt `withLedgerTables` tables) of
+ Left e -> Exn.throw $ JitEbbError @blk e
+ Right st ->
+ pure $
+ applyChainTick
+ OmitLedgerEvents
+ (configLedger pInfoConfig)
+ currentSlot
+ (forgetLedgerTables st)
+
+ -- forge the block using the ledger state that includes
+ -- the EBB
+ blk <-
+ forgeBlock
+ origBlockForging
+ cfg'
+ currentBno
+ currentSlot
+ (forgetLedgerTables tickedLdgSt')
+ txs
+ prf
+
+ -- If the EBB or the subsequent block is invalid, then the
+ -- ChainDB will reject it as invalid, and
+ -- 'Test.ThreadNet.General.prop_general' will eventually fail
+ -- because of a block rejection.
+ void $ ChainDB.addBlock chainDB InvalidBlockPunishment.noPunishment ebb
+ pure blk
 -- This variable holds the number of the earliest slot in which the
 -- crucial txs have not yet been added.
In other words, it holds the @@ -925,152 +971,167 @@ runThreadNetwork systemTime ThreadNetworkArgs check $ s < sentinel ) - let -- prop_general relies on these tracers - instrumentationTracers = nullTracers - { chainSyncClientTracer = Tracer $ \case - TraceLabelPeer _ (CSClient.TraceDownloadedHeader hdr) - -> case blockPoint hdr of - GenesisPoint -> pure () - BlockPoint s h -> - -- TODO include tip in TraceDownloadedHeader - -- and only trace if hdr == tip? - traceWith headerAddTracer - (RealPoint s h, blockNo hdr) - _ -> pure () - , forgeTracer = Tracer $ \(TraceLabelCreds _ ev) -> do - traceWith (nodeEventsForges nodeInfoEvents) ev - case ev of - TraceNodeIsLeader s -> atomically $ blockOnCrucial s - _ -> pure () - } + let + -- prop_general relies on these tracers + instrumentationTracers = + nullTracers + { chainSyncClientTracer = Tracer $ \case + TraceLabelPeer _ (CSClient.TraceDownloadedHeader hdr) -> + case blockPoint hdr of + GenesisPoint -> pure () + BlockPoint s h -> + -- TODO include tip in TraceDownloadedHeader + -- and only trace if hdr == tip? + traceWith + headerAddTracer + (RealPoint s h, blockNo hdr) + _ -> pure () + , forgeTracer = Tracer $ \(TraceLabelCreds _ ev) -> do + traceWith (nodeEventsForges nodeInfoEvents) ev + case ev of + TraceNodeIsLeader s -> atomically $ blockOnCrucial s + _ -> pure () + } - -- traces the node's local events other than those from the -- ChainDB - tracers = instrumentationTracers <> nullDebugTracers - - let -- use a backoff delay of exactly one slot length (which the - -- 'OracularClock' always knows) for the following reasons - -- - -- o It gives the node a chance to sync some blocks so that it will - -- eventually not need to backoff - -- - -- o It maintains the invariant that the node's activities all happen " - -- during " a slot onset - -- - -- o It avoids causing the node to miss a slot it could have - -- nominally lead. 
EG If we used a backoff of two slot durations,
-          -- then it might have synced during the first slot and then been
-          -- able to productively lead the second slot had it not still been
-          -- asleep.
-          --
-          -- o We assume a node will only backoff when it joins late and only
-          -- until it syncs enough of the net's existing common prefix.
-          hfbtBackoffDelay =
-            BackoffDelay <$> OracularClock.delayUntilNextSlot clock
-      btime <- hardForkBlockchainTime HardForkBlockchainTimeArgs
-        { hfbtBackoffDelay
-        , hfbtGetLedgerState =
-            ledgerState <$> ChainDB.getCurrentLedger chainDB
-        , hfbtLedgerConfig = configLedger pInfoConfig
-        , hfbtRegistry = registry
-        , hfbtSystemTime = OracularClock.finiteSystemTime clock
-        , hfbtTracer =
-            contramap
-              -- We don't really have a SystemStart in the tests
-              (fmap (fromRelativeTime (SystemStart dawnOfTime)))
-              (blockchainTimeTracer tracers)
-        , hfbtMaxClockRewind = secondsToNominalDiffTime 0
-        }
+        -- traces the node's local events other than those from the
+        -- ChainDB
+        tracers = instrumentationTracers <> nullDebugTracers
+
+      let
+        -- use a backoff delay of exactly one slot length (which the
+        -- 'OracularClock' always knows) for the following reasons
+        --
+        -- o It gives the node a chance to sync some blocks so that it will
+        -- eventually not need to back off
+        --
+        -- o It maintains the invariant that the node's activities all happen "
+        -- during " a slot onset
+        --
+        -- o It avoids causing the node to miss a slot it could have
+        -- nominally led. E.g., if we used a backoff of two slot durations,
+        -- then it might have synced during the first slot and then been
+        -- able to productively lead the second slot had it not still been
+        -- asleep.
+        --
+        -- o We assume a node will only back off when it joins late and only
+        -- until it syncs enough of the net's existing common prefix.
+ hfbtBackoffDelay = + BackoffDelay <$> OracularClock.delayUntilNextSlot clock + btime <- + hardForkBlockchainTime + HardForkBlockchainTimeArgs + { hfbtBackoffDelay + , hfbtGetLedgerState = + ledgerState <$> ChainDB.getCurrentLedger chainDB + , hfbtLedgerConfig = configLedger pInfoConfig + , hfbtRegistry = registry + , hfbtSystemTime = OracularClock.finiteSystemTime clock + , hfbtTracer = + contramap + -- We don't really have a SystemStart in the tests + (fmap (fromRelativeTime (SystemStart dawnOfTime))) + (blockchainTimeTracer tracers) + , hfbtMaxClockRewind = secondsToNominalDiffTime 0 + } let rng = case seed of - Seed s -> mkStdGen s + Seed s -> mkStdGen s (kaRng, psRng) = split rng publicPeerSelectionStateVar <- makePublicPeerSelectionStateVar - let nodeKernelArgs = NodeKernelArgs - { tracers - , registry - , cfg = pInfoConfig - , btime - , chainDB - , initChainDB = nodeInitChainDB - , chainSyncFutureCheck = + let nodeKernelArgs = + NodeKernelArgs + { tracers + , registry + , cfg = pInfoConfig + , btime + , chainDB + , initChainDB = nodeInitChainDB + , chainSyncFutureCheck = InFutureCheck.realHeaderInFutureCheck InFutureCheck.defaultClockSkew (OracularClock.finiteSystemTime clock) - , chainSyncHistoricityCheck = \_getGsmState -> HistoricityCheck.noCheck - , blockFetchSize = estimateBlockSize - , mempoolCapacityOverride = NoMempoolCapacityBytesOverride - , keepAliveRng = kaRng - , peerSharingRng = psRng - , miniProtocolParameters = MiniProtocolParameters { - chainSyncPipeliningHighMark = 4, - chainSyncPipeliningLowMark = 2, - blockFetchPipeliningMax = 10, - txSubmissionMaxUnacked = 1000 -- TODO ? - } - , blockFetchConfiguration = BlockFetchConfiguration { - bfcMaxConcurrencyBulkSync = 1 - , bfcMaxConcurrencyDeadline = 2 - , bfcMaxRequestsInflight = 10 - , bfcDecisionLoopIntervalPraos = 0.0 -- Mock testsuite can use sub-second slot - , bfcDecisionLoopIntervalGenesis = 0.0 -- interval which doesn't play nice with - -- blockfetch descision interval. 
-          , bfcSalt = 0
-          , bfcGenesisBFConfig = gcBlockFetchConfig enableGenesisConfigDefault
-          }
-        , gsmArgs = GSM.GsmNodeKernelArgs {
-            gsmAntiThunderingHerd = kaRng
-          , gsmDurationUntilTooOld = Nothing
-          , gsmMarkerFileView = GSM.MarkerFileView {
-                touchMarkerFile = pure ()
-              , removeMarkerFile = pure ()
-              , hasMarkerFile = pure False
+          , chainSyncHistoricityCheck = \_getGsmState -> HistoricityCheck.noCheck
+          , blockFetchSize = estimateBlockSize
+          , mempoolCapacityOverride = NoMempoolCapacityBytesOverride
+          , keepAliveRng = kaRng
+          , peerSharingRng = psRng
+          , miniProtocolParameters =
+              MiniProtocolParameters
+                { chainSyncPipeliningHighMark = 4
+                , chainSyncPipeliningLowMark = 2
+                , blockFetchPipeliningMax = 10
+                , txSubmissionMaxUnacked = 1000 -- TODO ?
+                }
-          , gsmMinCaughtUpDuration = 0
-          }
-        , getUseBootstrapPeers = pure DontUseBootstrapPeers
-        , publicPeerSelectionStateVar
-        , genesisArgs = GenesisNodeKernelArgs {
-            gnkaLoEAndGDDArgs = LoEAndGDDDisabled
-          }
-        , getDiffusionPipeliningSupport = DiffusionPipeliningOn
-        }
+          , blockFetchConfiguration =
+              BlockFetchConfiguration
+                { bfcMaxConcurrencyBulkSync = 1
+                , bfcMaxConcurrencyDeadline = 2
+                , bfcMaxRequestsInflight = 10
+                , bfcDecisionLoopIntervalPraos = 0.0 -- Mock testsuite can use sub-second slot
+                , bfcDecisionLoopIntervalGenesis = 0.0 -- interval which doesn't play nice with
+                -- blockfetch decision interval.
+ , bfcSalt = 0 + , bfcGenesisBFConfig = gcBlockFetchConfig enableGenesisConfigDefault + } + , gsmArgs = + GSM.GsmNodeKernelArgs + { gsmAntiThunderingHerd = kaRng + , gsmDurationUntilTooOld = Nothing + , gsmMarkerFileView = + GSM.MarkerFileView + { touchMarkerFile = pure () + , removeMarkerFile = pure () + , hasMarkerFile = pure False + } + , gsmMinCaughtUpDuration = 0 + } + , getUseBootstrapPeers = pure DontUseBootstrapPeers + , publicPeerSelectionStateVar + , genesisArgs = + GenesisNodeKernelArgs + { gnkaLoEAndGDDArgs = LoEAndGDDDisabled + } + , getDiffusionPipeliningSupport = DiffusionPipeliningOn + } nodeKernel <- initNodeKernel nodeKernelArgs blockForging' <- - map (\bf -> bf { forgeBlock = customForgeBlock bf }) - <$> blockForging + map (\bf -> bf{forgeBlock = customForgeBlock bf}) + <$> blockForging setBlockForging nodeKernel blockForging' let mempool = getMempool nodeKernel - let app = NTN.mkApps - nodeKernel - -- these tracers report every message sent/received by this - -- node - nullDebugProtocolTracers - (customNodeToNodeCodecs pInfoConfig) - NTN.noByteLimits - -- see #1882, tests that can't cope with timeouts. - (pure $ NTN.ChainSyncTimeout - { canAwaitTimeout = waitForever - , intersectTimeout = waitForever - , mustReplyTimeout = waitForever - , idleTimeout = waitForever - }) - CSClient.ChainSyncLoPBucketDisabled - CSClient.CSJDisabled - nullMetric - -- The purpose of this test is not testing protocols, so - -- returning constant empty list is fine if we have thorough - -- tests about the peer sharing protocol itself. - (NTN.mkHandlers nodeKernelArgs nodeKernel) + let app = + NTN.mkApps + nodeKernel + -- these tracers report every message sent/received by this + -- node + nullDebugProtocolTracers + (customNodeToNodeCodecs pInfoConfig) + NTN.noByteLimits + -- see #1882, tests that can't cope with timeouts. 
+ ( pure $ + NTN.ChainSyncTimeout + { canAwaitTimeout = waitForever + , intersectTimeout = waitForever + , mustReplyTimeout = waitForever + , idleTimeout = waitForever + } + ) + CSClient.ChainSyncLoPBucketDisabled + CSClient.CSJDisabled + nullMetric + -- The purpose of this test is not testing protocols, so + -- returning constant empty list is fine if we have thorough + -- tests about the peer sharing protocol itself. + (NTN.mkHandlers nodeKernelArgs nodeKernel) -- Create a 'ReadOnlyForker' to be used in 'forkTxProducer'. This function -- needs the read-only forker to elaborate a complete UTxO set to generate -- transactions. let getForker rr = do ChainDB.getReadOnlyForkerAtPoint chainDB rr VolatileTip >>= \case - Left e -> error $ show e + Left e -> error $ show e Right l -> pure l -- In practice, a robust wallet/user can persistently add a transaction @@ -1114,18 +1175,23 @@ runThreadNetwork systemTime ThreadNetworkArgs return (nodeKernel, LimitedApp app) - customNodeToNodeCodecs - :: TopLevelConfig blk - -> NodeToNodeVersion - -> NTN.Codecs blk NodeId CodecError m - Lazy.ByteString - Lazy.ByteString - Lazy.ByteString - Lazy.ByteString - (AnyMessage (TxSubmission2 (GenTxId blk) (GenTx blk))) - (AnyMessage KeepAlive) - (AnyMessage (PeerSharing NodeId)) - customNodeToNodeCodecs cfg ntnVersion = NTN.Codecs + customNodeToNodeCodecs :: + TopLevelConfig blk -> + NodeToNodeVersion -> + NTN.Codecs + blk + NodeId + CodecError + m + Lazy.ByteString + Lazy.ByteString + Lazy.ByteString + Lazy.ByteString + (AnyMessage (TxSubmission2 (GenTxId blk) (GenTx blk))) + (AnyMessage KeepAlive) + (AnyMessage (PeerSharing NodeId)) + customNodeToNodeCodecs cfg ntnVersion = + NTN.Codecs { cChainSyncCodec = mapFailureCodec (CodecBytesFailure "ChainSync") $ NTN.cChainSyncCodec binaryProtocolCodecs @@ -1148,15 +1214,22 @@ runThreadNetwork systemTime ThreadNetworkArgs mapFailureCodec CodecIdFailure $ NTN.cPeerSharingCodec NTN.identityCodecs } - where - binaryProtocolCodecs = 
NTN.defaultCodecs (configCodec cfg) blockVersion (const encodeNodeId) (const decodeNodeId) ntnVersion + where + binaryProtocolCodecs = + NTN.defaultCodecs + (configCodec cfg) + blockVersion + (const encodeNodeId) + (const decodeNodeId) + ntnVersion -- | Sum of 'CodecFailure' (from @identityCodecs@) and 'DeserialiseFailure' -- (from @defaultCodecs@). data CodecError = CodecIdFailure CodecFailure | CodecBytesFailure - String -- ^ Extra error message, e.g., the name of the codec + -- | Extra error message, e.g., the name of the codec + String DeserialiseFailure deriving (Show, Exception) @@ -1165,37 +1238,36 @@ data CodecError -------------------------------------------------------------------------------} -- | Cause for an edge to restart --- data RestartCause - = RestartScheduled - -- ^ restart because at least one of the two nodes set its status to + = -- | restart because at least one of the two nodes set its status to -- 'VFalling' because of a scheduled restart in 'tnaRestarts' - | RestartChainSyncTerminated - -- ^ restart because the ChainSync client terminated the mini protocol + RestartScheduled + | -- | restart because the ChainSync client terminated the mini protocol + RestartChainSyncTerminated -- | Fork two directed edges, one in each direction between the two vertices --- forkBothEdges :: - (IOLike m, RunNode blk, HasCallStack) - => ResourceRegistry m - -> OracularClock m - -> Tracer m (SlotNo, MiniProtocolState) - -> (NodeToNodeVersion, BlockNodeToNodeVersion blk) - -> (CodecConfig blk, CalcMessageDelay blk) - -> Map CoreNodeId (VertexStatusVar m blk) - -> (CoreNodeId, CoreNodeId) - -> m [((CoreNodeId, CoreNodeId), EdgeStatusVar m)] + (IOLike m, RunNode blk, HasCallStack) => + ResourceRegistry m -> + OracularClock m -> + Tracer m (SlotNo, MiniProtocolState) -> + (NodeToNodeVersion, BlockNodeToNodeVersion blk) -> + (CodecConfig blk, CalcMessageDelay blk) -> + Map CoreNodeId (VertexStatusVar m blk) -> + (CoreNodeId, CoreNodeId) -> + m [((CoreNodeId, 
CoreNodeId), EdgeStatusVar m)] forkBothEdges sharedRegistry clock tr version cfg vertexStatusVars (node1, node2) = do let endpoint1 = mkEndpoint node1 endpoint2 = mkEndpoint node2 mkEndpoint node = case Map.lookup node vertexStatusVars of - Nothing -> error $ "node not found: " ++ show node - Just var -> (node, var) + Nothing -> error $ "node not found: " ++ show node + Just var -> (node, var) let mkDirEdge e1 e2 = do v <- uncheckedNewTVarM EDown - let label = concat - ["directed-edge-", condense (fst e1), "-", condense (fst e2)] + let label = + concat + ["directed-edge-", condense (fst e1), "-", condense (fst e2)] void $ forkLinkedThread sharedRegistry label $ do directedEdge sharedRegistry tr version cfg clock v e1 e2 pure ((fst e1, fst e2), v) @@ -1225,44 +1297,48 @@ forkBothEdges sharedRegistry clock tr version cfg vertexStatusVars (node1, node2 -- edge via the @async@ interface rather than relying on some sort of mock -- socket semantics to convey the cancellation. directedEdge :: - forall m blk. (IOLike m, RunNode blk) - => ResourceRegistry m - -> Tracer m (SlotNo, MiniProtocolState) - -> (NodeToNodeVersion, BlockNodeToNodeVersion blk) - -> (CodecConfig blk, CalcMessageDelay blk) - -> OracularClock m - -> EdgeStatusVar m - -> (CoreNodeId, VertexStatusVar m blk) - -> (CoreNodeId, VertexStatusVar m blk) - -> m () + forall m blk. 
+ (IOLike m, RunNode blk) => + ResourceRegistry m -> + Tracer m (SlotNo, MiniProtocolState) -> + (NodeToNodeVersion, BlockNodeToNodeVersion blk) -> + (CodecConfig blk, CalcMessageDelay blk) -> + OracularClock m -> + EdgeStatusVar m -> + (CoreNodeId, VertexStatusVar m blk) -> + (CoreNodeId, VertexStatusVar m blk) -> + m () directedEdge registry tr version cfg clock edgeStatusVar client server = + loop + where + loop = do + restart <- + directedEdgeInner registry clock version cfg edgeStatusVar client server + `catch` hUnexpected + atomically $ writeTVar edgeStatusVar EDown + case restart of + RestartScheduled -> pure () + RestartChainSyncTerminated -> do + -- "error" policy: restart at beginning of next slot + s <- OracularClock.getCurrentSlot clock + let s' = succ s + traceWith tr (s, MiniProtocolDelayed) + void $ OracularClock.blockUntilSlot clock s' + traceWith tr (s', MiniProtocolRestarting) loop - where - loop = do - restart <- directedEdgeInner registry clock version cfg edgeStatusVar client server - `catch` hUnexpected - atomically $ writeTVar edgeStatusVar EDown - case restart of - RestartScheduled -> pure () - RestartChainSyncTerminated -> do - -- "error" policy: restart at beginning of next slot - s <- OracularClock.getCurrentSlot clock - let s' = succ s - traceWith tr (s, MiniProtocolDelayed) - void $ OracularClock.blockUntilSlot clock s' - traceWith tr (s', MiniProtocolRestarting) - loop - where - -- Wrap synchronous exceptions in 'MiniProtocolFatalException' - -- - hUnexpected :: forall a. SomeException -> m a - hUnexpected e@(Exn.SomeException e') = case fromException e of - Just (_ :: Exn.AsyncException) -> throwIO e - Nothing -> case fromException e of - Just (_ :: Exn.SomeAsyncException) -> throwIO e - Nothing -> throwIO MiniProtocolFatalException - { mpfeType = Typeable.typeOf e' - , mpfeExn = e + where + -- Wrap synchronous exceptions in 'MiniProtocolFatalException' + -- + hUnexpected :: forall a. 
SomeException -> m a + hUnexpected e@(Exn.SomeException e') = case fromException e of + Just (_ :: Exn.AsyncException) -> throwIO e + Nothing -> case fromException e of + Just (_ :: Exn.SomeAsyncException) -> throwIO e + Nothing -> + throwIO + MiniProtocolFatalException + { mpfeType = Typeable.typeOf e' + , mpfeExn = e , mpfeClient = fst client , mpfeServer = fst server } @@ -1271,19 +1347,26 @@ directedEdge registry tr version cfg clock edgeStatusVar client server = -- -- See 'directedEdge'. directedEdgeInner :: - forall m blk. (IOLike m, RunNode blk) - => ResourceRegistry m - -> OracularClock m - -> (NodeToNodeVersion, BlockNodeToNodeVersion blk) - -> (CodecConfig blk, CalcMessageDelay blk) - -> EdgeStatusVar m - -> (CoreNodeId, VertexStatusVar m blk) - -- ^ client threads on this node - -> (CoreNodeId, VertexStatusVar m blk) - -- ^ server threads on this node - -> m RestartCause -directedEdgeInner registry clock (version, blockVersion) (cfg, calcMessageDelay) edgeStatusVar - (node1, vertexStatusVar1) (node2, vertexStatusVar2) = do + forall m blk. 
+ (IOLike m, RunNode blk) => + ResourceRegistry m -> + OracularClock m -> + (NodeToNodeVersion, BlockNodeToNodeVersion blk) -> + (CodecConfig blk, CalcMessageDelay blk) -> + EdgeStatusVar m -> + -- | client threads on this node + (CoreNodeId, VertexStatusVar m blk) -> + -- | server threads on this node + (CoreNodeId, VertexStatusVar m blk) -> + m RestartCause +directedEdgeInner + registry + clock + (version, blockVersion) + (cfg, calcMessageDelay) + edgeStatusVar + (node1, vertexStatusVar1) + (node2, vertexStatusVar2) = do -- block until both nodes are 'VUp' (LimitedApp app1, LimitedApp app2) <- atomically $ do (,) <$> getApp vertexStatusVar1 <*> getApp vertexStatusVar2 @@ -1291,77 +1374,83 @@ directedEdgeInner registry clock (version, blockVersion) (cfg, calcMessageDelay) atomically $ writeTVar edgeStatusVar EUp let miniProtocol :: - String - -- ^ protocol name - -> (String -> a -> RestartCause) - -> (String -> b -> RestartCause) - -> ( LimitedApp' m NodeId blk - -> NodeToNodeVersion - -> ExpandedInitiatorContext NodeId m - -> Channel m msg - -> m (a, trailingBytes) - ) - -- ^ client action to run on node1 - -> ( LimitedApp' m NodeId blk - -> NodeToNodeVersion - -> ResponderContext NodeId - -> Channel m msg - -> m (b, trailingBytes) - ) - -- ^ server action to run on node2 - -> (msg -> m ()) - -> m (m RestartCause, m RestartCause) + String -> + -- \^ protocol name + (String -> a -> RestartCause) -> + (String -> b -> RestartCause) -> + ( LimitedApp' m NodeId blk -> + NodeToNodeVersion -> + ExpandedInitiatorContext NodeId m -> + Channel m msg -> + m (a, trailingBytes) + ) -> + -- \^ client action to run on node1 + ( LimitedApp' m NodeId blk -> + NodeToNodeVersion -> + ResponderContext NodeId -> + Channel m msg -> + m (b, trailingBytes) + ) -> + -- \^ server action to run on node2 + (msg -> m ()) -> + m (m RestartCause, m RestartCause) miniProtocol proto retClient retServer client server middle = do - (chan, dualChan) <- - createConnectedChannelsWithDelay registry 
(node1, node2, proto) middle - pure - ( (retClient (proto <> ".client") . fst) <$> client app1 version initiatorCtx chan - , (retServer (proto <> ".server") . fst) <$> server app2 version responderCtx dualChan - ) - where - initiatorCtx = ExpandedInitiatorContext { - eicConnectionId = ConnectionId (fromCoreNodeId node1) (fromCoreNodeId node2), - eicControlMessage = return Continue, - eicIsBigLedgerPeer = IsNotBigLedgerPeer + (chan, dualChan) <- + createConnectedChannelsWithDelay registry (node1, node2, proto) middle + pure + ( (retClient (proto <> ".client") . fst) <$> client app1 version initiatorCtx chan + , (retServer (proto <> ".server") . fst) <$> server app2 version responderCtx dualChan + ) + where + initiatorCtx = + ExpandedInitiatorContext + { eicConnectionId = ConnectionId (fromCoreNodeId node1) (fromCoreNodeId node2) + , eicControlMessage = return Continue + , eicIsBigLedgerPeer = IsNotBigLedgerPeer } - responderCtx = ResponderContext { - rcConnectionId = ConnectionId (fromCoreNodeId node1) (fromCoreNodeId node2) + responderCtx = + ResponderContext + { rcConnectionId = ConnectionId (fromCoreNodeId node1) (fromCoreNodeId node2) } (>>= withAsyncsWaitAny) $ fmap flattenPairs $ - sequence $ - pure (watcher vertexStatusVar1, watcher vertexStatusVar2) - NE.:| - [ miniProtocol "ChainSync" - (\_s _ -> RestartChainSyncTerminated) - (\_s () -> RestartChainSyncTerminated) - NTN.aChainSyncClient - NTN.aChainSyncServer - chainSyncMiddle - , miniProtocol "BlockFetch" - neverReturns - neverReturns - NTN.aBlockFetchClient - NTN.aBlockFetchServer - (\_ -> pure ()) - , miniProtocol "TxSubmission" - neverReturns - neverReturns - NTN.aTxSubmission2Client - NTN.aTxSubmission2Server - (\_ -> pure ()) - , miniProtocol "KeepAlive" - neverReturns - neverReturns - NTN.aKeepAliveClient - NTN.aKeepAliveServer - (\_ -> pure ()) - ] - where - getApp v = readTVar v >>= \case - VUp _ app -> pure app - _ -> retry + sequence $ + pure (watcher vertexStatusVar1, watcher vertexStatusVar2) + 
NE.:| [ miniProtocol + "ChainSync" + (\_s _ -> RestartChainSyncTerminated) + (\_s () -> RestartChainSyncTerminated) + NTN.aChainSyncClient + NTN.aChainSyncServer + chainSyncMiddle + , miniProtocol + "BlockFetch" + neverReturns + neverReturns + NTN.aBlockFetchClient + NTN.aBlockFetchServer + (\_ -> pure ()) + , miniProtocol + "TxSubmission" + neverReturns + neverReturns + NTN.aTxSubmission2Client + NTN.aTxSubmission2Server + (\_ -> pure ()) + , miniProtocol + "KeepAlive" + neverReturns + neverReturns + NTN.aKeepAliveClient + NTN.aKeepAliveServer + (\_ -> pure ()) + ] + where + getApp v = + readTVar v >>= \case + VUp _ app -> pure app + _ -> retry flattenPairs :: forall a. NE.NonEmpty (a, a) -> NE.NonEmpty a flattenPairs = uncurry (<>) . neUnzip @@ -1376,9 +1465,10 @@ directedEdgeInner registry clock (version, blockVersion) (cfg, calcMessageDelay) -- edge watcher :: VertexStatusVar m blk -> m RestartCause watcher v = do - atomically $ readTVar v >>= \case + atomically $ + readTVar v >>= \case VFalling -> pure RestartScheduled - _ -> retry + _ -> retry -- introduce a delay for 'CS.MsgRollForward' -- @@ -1386,22 +1476,26 @@ directedEdgeInner registry clock (version, blockVersion) (cfg, calcMessageDelay) -- first step in process of one node diffusing a block to another node. 
chainSyncMiddle :: Lazy.ByteString -> m () chainSyncMiddle bs = do - let tok = CS.SingNext CS.SingMustReply - decodeStep :: Codec.DecodeStep - Lazy.ByteString DeserialiseFailure m - (Codec.SomeMessage ('CS.StNext 'CS.StMustReply)) - <- Codec.decode codec tok - Codec.runDecoder [bs] decodeStep >>= \case - Right (Codec.SomeMessage (CS.MsgRollForward hdr _tip)) -> do - s <- OracularClock.getCurrentSlot clock - let NumSlots d = f (node1, node2) s hdr - where - CalcMessageDelay f = calcMessageDelay - void $ OracularClock.blockUntilSlot clock $ blockSlot hdr + SlotNo d - _ -> pure () - where - codec = - NTN.cChainSyncCodec $ NTN.defaultCodecs cfg blockVersion (const encodeNodeId) (const decodeNodeId) version + let tok = CS.SingNext CS.SingMustReply + decodeStep :: + Codec.DecodeStep + Lazy.ByteString + DeserialiseFailure + m + (Codec.SomeMessage ('CS.StNext 'CS.StMustReply)) <- + Codec.decode codec tok + Codec.runDecoder [bs] decodeStep >>= \case + Right (Codec.SomeMessage (CS.MsgRollForward hdr _tip)) -> do + s <- OracularClock.getCurrentSlot clock + let NumSlots d = f (node1, node2) s hdr + where + CalcMessageDelay f = calcMessageDelay + void $ OracularClock.blockUntilSlot clock $ blockSlot hdr + SlotNo d + _ -> pure () + where + codec = + NTN.cChainSyncCodec $ + NTN.defaultCodecs cfg blockVersion (const encodeNodeId) (const decodeNodeId) version -- | Variant of 'createConnectChannels' with intermediate queues for -- delayed-but-in-order messages @@ -1410,39 +1504,44 @@ directedEdgeInner registry clock (version, blockVersion) (cfg, calcMessageDelay) -- 'threadDelay' in order to delay the transfer of the given message from the -- queue to the recipient. 
createConnectedChannelsWithDelay :: - IOLike m - => ResourceRegistry m - -> (CoreNodeId, CoreNodeId, String) - -- ^ (client, server, protocol) - -> (a -> m ()) - -- ^ per-message delay - -> m (Channel m a, Channel m a) + IOLike m => + ResourceRegistry m -> + -- | (client, server, protocol) + (CoreNodeId, CoreNodeId, String) -> + -- | per-message delay + (a -> m ()) -> + m (Channel m a, Channel m a) createConnectedChannelsWithDelay registry (client, server, proto) middle = do - -- queue for async send and an mvar for delayed-but-in-order reads from the - -- queue - qA <- atomically $ MonadSTM.newTQueue - bA <- atomically $ MonadSTM.newEmptyTMVar - spawn (client, server) qA bA - - qB <- atomically $ MonadSTM.newTQueue - bB <- atomically $ MonadSTM.newEmptyTMVar - spawn (server, client) qB bB - - return (chan qA bB, chan qB bA) -- note the crossover - where - spawn (cid1, cid2) q b = do - let label = - "delaying thread for " <> proto <> " " <> - show cid1 <> " to " <> show cid2 - void $ forkLinkedThread registry label $ forever $ do - x <- atomically $ MonadSTM.readTQueue q - middle x - atomically $ MonadSTM.putTMVar b x - - chan q b = Channel - { recv = fmap Just $ atomically $ MonadSTM.takeTMVar b - , send = atomically . 
MonadSTM.writeTQueue q - } + -- queue for async send and an mvar for delayed-but-in-order reads from the + -- queue + qA <- atomically $ MonadSTM.newTQueue + bA <- atomically $ MonadSTM.newEmptyTMVar + spawn (client, server) qA bA + + qB <- atomically $ MonadSTM.newTQueue + bB <- atomically $ MonadSTM.newEmptyTMVar + spawn (server, client) qB bB + + return (chan qA bB, chan qB bA) -- note the crossover + where + spawn (cid1, cid2) q b = do + let label = + "delaying thread for " + <> proto + <> " " + <> show cid1 + <> " to " + <> show cid2 + void $ forkLinkedThread registry label $ forever $ do + x <- atomically $ MonadSTM.readTQueue q + middle x + atomically $ MonadSTM.putTMVar b x + + chan q b = + Channel + { recv = fmap Just $ atomically $ MonadSTM.takeTMVar b + , send = atomically . MonadSTM.writeTQueue q + } {------------------------------------------------------------------------------- Node information not bound to lifetime of a specific node instance @@ -1450,7 +1549,7 @@ createConnectedChannelsWithDelay registry (client, server, proto) middle = do data NodeInfo blk db ev = NodeInfo { nodeInfoEvents :: NodeEvents blk ev - , nodeInfoDBs :: NodeDBs db + , nodeInfoDBs :: NodeDBs db } -- | A vector with an @ev@-shaped element for a particular set of @@ -1459,98 +1558,102 @@ data NodeInfo blk db ev = NodeInfo -- The @ev@ type parameter is instantiated by this module at types for -- 'Tracer's and lists: actions for accumulating and lists as accumulations. 
data NodeEvents blk ev = NodeEvents - { nodeEventsAdds :: ev (SlotNo, RealPoint blk, BlockNo) - -- ^ every 'AddedBlockToVolatileDB' excluding EBBs - , nodeEventsForges :: ev (TraceForgeEvent blk) - -- ^ every 'TraceForgeEvent' - , nodeEventsHeaderAdds :: ev (SlotNo, RealPoint blk, BlockNo) - -- ^ every 'TraceDownloadedHeader', excluding EBBs - , nodeEventsInvalids :: ev (RealPoint blk, ExtValidationError blk) - -- ^ the point of every 'ChainDB.InvalidBlock' event - , nodeEventsSelects :: ev (SlotNo, RealPoint blk, BlockNo) - -- ^ every 'ChainDB.AddedToCurrentChain' and 'ChainDB.SwitchedToAFork' + { nodeEventsAdds :: ev (SlotNo, RealPoint blk, BlockNo) + -- ^ every 'AddedBlockToVolatileDB' excluding EBBs + , nodeEventsForges :: ev (TraceForgeEvent blk) + -- ^ every 'TraceForgeEvent' + , nodeEventsHeaderAdds :: ev (SlotNo, RealPoint blk, BlockNo) + -- ^ every 'TraceDownloadedHeader', excluding EBBs + , nodeEventsInvalids :: ev (RealPoint blk, ExtValidationError blk) + -- ^ the point of every 'ChainDB.InvalidBlock' event + , nodeEventsSelects :: ev (SlotNo, RealPoint blk, BlockNo) + -- ^ every 'ChainDB.AddedToCurrentChain' and 'ChainDB.SwitchedToAFork' , nodeEventsTipBlockNos :: ev (SlotNo, WithOrigin BlockNo) - -- ^ 'ChainDB.getTipBlockNo' for each node at the onset of each slot - , nodeEventsUpdates :: ev (LedgerUpdate blk) - -- ^ Ledger updates every time we adopt a block/switch to a fork - , nodeEventsPipelining :: ev (ChainDB.TracePipeliningEvent blk) - -- ^ Pipelining events tracking the tentative header + -- ^ 'ChainDB.getTipBlockNo' for each node at the onset of each slot + , nodeEventsUpdates :: ev (LedgerUpdate blk) + -- ^ Ledger updates every time we adopt a block/switch to a fork + , nodeEventsPipelining :: ev (ChainDB.TracePipeliningEvent blk) + -- ^ Pipelining events tracking the tentative header } newNodeInfo :: forall blk m. 
- IOLike m - => m ( NodeInfo blk (StrictTMVar m MockFS) (Tracer m) - , m (NodeInfo blk MockFS []) - ) + IOLike m => + m + ( NodeInfo blk (StrictTMVar m MockFS) (Tracer m) + , m (NodeInfo blk MockFS []) + ) newNodeInfo = do (nodeInfoEvents, readEvents) <- do - (t1, m1) <- recordingTracerTVar - (t2, m2) <- recordingTracerTVar - (t3, m3) <- recordingTracerTVar - (t4, m4) <- recordingTracerTVar - (t5, m5) <- recordingTracerTVar - (t6, m6) <- recordingTracerTVar - (t7, m7) <- recordingTracerTVar - (t8, m8) <- recordingTracerTVar - pure - ( NodeEvents t1 t2 t3 t4 t5 t6 t7 t8 - , NodeEvents <$> m1 <*> m2 <*> m3 <*> m4 <*> m5 <*> m6 <*> m7 <*> m8 - ) + (t1, m1) <- recordingTracerTVar + (t2, m2) <- recordingTracerTVar + (t3, m3) <- recordingTracerTVar + (t4, m4) <- recordingTracerTVar + (t5, m5) <- recordingTracerTVar + (t6, m6) <- recordingTracerTVar + (t7, m7) <- recordingTracerTVar + (t8, m8) <- recordingTracerTVar + pure + ( NodeEvents t1 t2 t3 t4 t5 t6 t7 t8 + , NodeEvents <$> m1 <*> m2 <*> m3 <*> m4 <*> m5 <*> m6 <*> m7 <*> m8 + ) (nodeInfoDBs, readDBs) <- do - let mk :: m (StrictTMVar m MockFS, STM m MockFS) - mk = do - v <- atomically $ newTMVar Mock.empty - pure (v, readTMVar v) - (v1, m1) <- mk - (v2, m2) <- mk - (v3, m3) <- mk - (v4, m4) <- mk - pure - ( NodeDBs v1 v2 v3 v4 - , NodeDBs <$> m1 <*> m2 <*> m3 <*> m4 - ) + let mk :: m (StrictTMVar m MockFS, STM m MockFS) + mk = do + v <- atomically $ newTMVar Mock.empty + pure (v, readTMVar v) + (v1, m1) <- mk + (v2, m2) <- mk + (v3, m3) <- mk + (v4, m4) <- mk + pure + ( NodeDBs v1 v2 v3 v4 + , NodeDBs <$> m1 <*> m2 <*> m3 <*> m4 + ) pure - ( NodeInfo{nodeInfoEvents, nodeInfoDBs } - , NodeInfo <$> readEvents <*> atomically readDBs - ) + ( NodeInfo{nodeInfoEvents, nodeInfoDBs} + , NodeInfo <$> readEvents <*> atomically readDBs + ) {------------------------------------------------------------------------------- Test Output - output data about each node 
-------------------------------------------------------------------------------} data NodeOutput blk = NodeOutput - { nodeOutputAdds :: Map SlotNo (Set (RealPoint blk, BlockNo)) + { nodeOutputAdds :: Map SlotNo (Set (RealPoint blk, BlockNo)) , nodeOutputCannotForges :: Map SlotNo [CannotForge blk] - , nodeOutputFinalChain :: Chain blk - , nodeOutputFinalLedger :: LedgerState blk EmptyMK - , nodeOutputForges :: Map SlotNo blk - , nodeOutputHeaderAdds :: Map SlotNo [(RealPoint blk, BlockNo)] - , nodeOutputInvalids :: Map (RealPoint blk) [ExtValidationError blk] - , nodeOutputNodeDBs :: NodeDBs MockFS - , nodeOutputSelects :: Map SlotNo [(RealPoint blk, BlockNo)] - , nodeOutputUpdates :: [LedgerUpdate blk] - , nodePipeliningEvents :: [ChainDB.TracePipeliningEvent blk] + , nodeOutputFinalChain :: Chain blk + , nodeOutputFinalLedger :: LedgerState blk EmptyMK + , nodeOutputForges :: Map SlotNo blk + , nodeOutputHeaderAdds :: Map SlotNo [(RealPoint blk, BlockNo)] + , nodeOutputInvalids :: Map (RealPoint blk) [ExtValidationError blk] + , nodeOutputNodeDBs :: NodeDBs MockFS + , nodeOutputSelects :: Map SlotNo [(RealPoint blk, BlockNo)] + , nodeOutputUpdates :: [LedgerUpdate blk] + , nodePipeliningEvents :: [ChainDB.TracePipeliningEvent blk] } data TestOutput blk = TestOutput - { testOutputNodes :: Map NodeId (NodeOutput blk) - , testOutputTipBlockNos :: Map SlotNo (Map NodeId (WithOrigin BlockNo)) - } + { testOutputNodes :: Map NodeId (NodeOutput blk) + , testOutputTipBlockNos :: Map SlotNo (Map NodeId (WithOrigin BlockNo)) + } -- | Gather the test output from the nodes mkTestOutput :: - forall m blk. (IOLike m, HasHeader blk) - => [( CoreNodeId - , m (NodeInfo blk MockFS []) - , Chain blk - , LedgerState blk EmptyMK - )] - -> m (TestOutput blk) + forall m blk. 
+ (IOLike m, HasHeader blk) => + [ ( CoreNodeId + , m (NodeInfo blk MockFS []) + , Chain blk + , LedgerState blk EmptyMK + ) + ] -> + m (TestOutput blk) mkTestOutput vertexInfos = do - (nodeOutputs', tipBlockNos') <- fmap unzip $ forM vertexInfos $ + (nodeOutputs', tipBlockNos') <- fmap unzip $ + forM vertexInfos $ \(cid, readNodeInfo, ch, ldgr) -> do let nid = fromCoreNodeId cid nodeInfo <- readNodeInfo @@ -1568,43 +1671,45 @@ mkTestOutput vertexInfos = do , nodeEventsUpdates , nodeEventsPipelining } = nodeInfoEvents - let nodeOutput = NodeOutput - { nodeOutputAdds = - Map.fromListWith Set.union $ - [ (s, Set.singleton (p, bno)) | (s, p, bno) <- nodeEventsAdds ] - , nodeOutputCannotForges = - Map.fromListWith (flip (++)) $ - [ (s, [err]) | TraceNodeCannotForge s err <- nodeEventsForges ] - , nodeOutputFinalChain = ch - , nodeOutputFinalLedger = ldgr - , nodeOutputForges = - Map.fromList $ - [ (s, b) | TraceForgedBlock s _ b _ <- nodeEventsForges ] - , nodeOutputHeaderAdds = - Map.fromListWith (flip (++)) $ - [ (s, [(p, bno)]) - | (s, p, bno) <- nodeEventsHeaderAdds - ] - , nodeOutputSelects = - Map.fromListWith (flip (++)) $ - [ (s, [(p, bno)]) - | (s, p, bno) <- nodeEventsSelects - ] - , nodeOutputInvalids = (:[]) <$> Map.fromList nodeEventsInvalids - , nodeOutputNodeDBs = nodeInfoDBs - , nodeOutputUpdates = nodeEventsUpdates - , nodePipeliningEvents = nodeEventsPipelining - } + let nodeOutput = + NodeOutput + { nodeOutputAdds = + Map.fromListWith Set.union $ + [(s, Set.singleton (p, bno)) | (s, p, bno) <- nodeEventsAdds] + , nodeOutputCannotForges = + Map.fromListWith (flip (++)) $ + [(s, [err]) | TraceNodeCannotForge s err <- nodeEventsForges] + , nodeOutputFinalChain = ch + , nodeOutputFinalLedger = ldgr + , nodeOutputForges = + Map.fromList $ + [(s, b) | TraceForgedBlock s _ b _ <- nodeEventsForges] + , nodeOutputHeaderAdds = + Map.fromListWith (flip (++)) $ + [ (s, [(p, bno)]) + | (s, p, bno) <- nodeEventsHeaderAdds + ] + , nodeOutputSelects = + 
Map.fromListWith (flip (++)) $ + [ (s, [(p, bno)]) + | (s, p, bno) <- nodeEventsSelects + ] + , nodeOutputInvalids = (: []) <$> Map.fromList nodeEventsInvalids + , nodeOutputNodeDBs = nodeInfoDBs + , nodeOutputUpdates = nodeEventsUpdates + , nodePipeliningEvents = nodeEventsPipelining + } pure ( Map.singleton nid nodeOutput , Map.singleton nid <$> Map.fromList nodeEventsTipBlockNos ) - pure $ TestOutput - { testOutputNodes = Map.unions nodeOutputs' - , testOutputTipBlockNos = Map.unionsWith Map.union tipBlockNos' - } + pure $ + TestOutput + { testOutputNodes = Map.unions nodeOutputs' + , testOutputTipBlockNos = Map.unionsWith Map.union tipBlockNos' + } {------------------------------------------------------------------------------- Constraints needed for verbose tracing @@ -1616,22 +1721,22 @@ nullDebugTracer = nullTracer `asTypeOf` showTracing debugTracer -- | Occurs throughout in positions that might be useful for debugging. nullDebugTracers :: - ( Monad m - , Show peer - , LedgerSupportsProtocol blk - , TracingConstraints blk - ) - => Tracers m peer Void blk + ( Monad m + , Show peer + , LedgerSupportsProtocol blk + , TracingConstraints blk + ) => + Tracers m peer Void blk nullDebugTracers = nullTracers `asTypeOf` showTracers debugTracer -- | Occurs throughout in positions that might be useful for debugging. nullDebugProtocolTracers :: - ( Monad m - , HasHeader blk - , TracingConstraints blk - , Show peer - ) - => NTN.Tracers m peer blk failure + ( Monad m + , HasHeader blk + , TracingConstraints blk + , Show peer + ) => + NTN.Tracers m peer blk failure nullDebugProtocolTracers = NTN.nullTracers `asTypeOf` NTN.showTracers debugTracer @@ -1665,66 +1770,70 @@ type TracingConstraints blk = -- which is likely not intended. withAsyncsWaitAny :: forall m a. IOLike m => NE.NonEmpty (m a) -> m a withAsyncsWaitAny = go [] . 
NE.toList - where - go acc = \case - [] -> snd <$> waitAny acc - m:ms -> withAsync m $ \h -> go (h:acc) ms + where + go acc = \case + [] -> snd <$> waitAny acc + m : ms -> withAsync m $ \h -> go (h : acc) ms -- | The partially instantiation of the 'NetworkApplication' type according to -- its use in this module -- -- Used internal to this module, essentially as an abbreviation. -data LimitedApp m addr blk = - LimitedApp (LimitedApp' m addr blk) +data LimitedApp m addr blk + = LimitedApp (LimitedApp' m addr blk) -- | Argument of 'LimitedApp' data constructor -- -- Used internal to this module, essentially as an abbreviation. type LimitedApp' m addr blk = - NTN.Apps m addr - -- The 'ChainSync' and 'BlockFetch' protocols use @'Serialised' x@ for - -- the servers and @x@ for the clients. Since both have to match to be - -- sent across a channel, we can't use @'AnyMessage' ..@, instead, we - -- (de)serialise the messages so that they can be sent across the - -- channel with the same type on both ends, i.e., 'Lazy.ByteString'. - Lazy.ByteString - Lazy.ByteString - (AnyMessage (TxSubmission2 (GenTxId blk) (GenTx blk))) - (AnyMessage KeepAlive) - (AnyMessage (PeerSharing addr)) - NodeToNodeInitiatorResult () + NTN.Apps + m + addr + -- The 'ChainSync' and 'BlockFetch' protocols use @'Serialised' x@ for + -- the servers and @x@ for the clients. Since both have to match to be + -- sent across a channel, we can't use @'AnyMessage' ..@, instead, we + -- (de)serialise the messages so that they can be sent across the + -- channel with the same type on both ends, i.e., 'Lazy.ByteString'. 
+ Lazy.ByteString + Lazy.ByteString + (AnyMessage (TxSubmission2 (GenTxId blk) (GenTx blk))) + (AnyMessage KeepAlive) + (AnyMessage (PeerSharing addr)) + NodeToNodeInitiatorResult + () {------------------------------------------------------------------------------- Tracing -------------------------------------------------------------------------------} data MiniProtocolState = MiniProtocolDelayed | MiniProtocolRestarting - deriving (Show) + deriving Show -- | Any synchronous exception from a 'directedEdge' data MiniProtocolFatalException = MiniProtocolFatalException - { mpfeType :: !Typeable.TypeRep - -- ^ Including the type explicitly makes it easier for a human to debug - , mpfeExn :: !SomeException + { mpfeType :: !Typeable.TypeRep + -- ^ Including the type explicitly makes it easier for a human to debug + , mpfeExn :: !SomeException , mpfeClient :: !CoreNodeId , mpfeServer :: !CoreNodeId } - deriving (Show) + deriving Show instance Exception MiniProtocolFatalException -- | Our scheme for Just-In-Time EBBs makes some assumptions --- data JitEbbError blk - = JitEbbError (LedgerError blk) - -- ^ we were unable to extend the ledger state with the JIT EBB + = -- | we were unable to extend the ledger state with the JIT EBB + JitEbbError (LedgerError blk) deriving instance LedgerSupportsProtocol blk => Show (JitEbbError blk) instance LedgerSupportsProtocol blk => Exception (JitEbbError blk) -- | The 'TxGen' generator consecutively failed too many times -data TxGenFailure = TxGenFailure Int -- ^ how many times it failed - deriving (Show) +data TxGenFailure + = -- | how many times it failed + TxGenFailure Int + deriving Show instance Exception TxGenFailure @@ -1733,5 +1842,5 @@ instance Exception TxGenFailure -- 4.20 do not have that. -- Neatest solution is to cargo cult it here and switch to Data.Function.unzip -- later. 
-neUnzip :: Functor f => f (a,b) -> (f a, f b) +neUnzip :: Functor f => f (a, b) -> (f a, f b) neUnzip xs = (fst <$> xs, snd <$> xs) diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Ref/PBFT.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Ref/PBFT.hs index 027253a862..41ea4b98b7 100644 --- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Ref/PBFT.hs +++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Ref/PBFT.hs @@ -6,9 +6,8 @@ -- | A reference simulator of the PBFT protocol under \"ideal circumstances\" -- -- See 'step'. --- -module Test.ThreadNet.Ref.PBFT ( - Outcome (..) +module Test.ThreadNet.Ref.PBFT + ( Outcome (..) , Result (..) , State (..) , advanceUpTo @@ -24,38 +23,40 @@ module Test.ThreadNet.Ref.PBFT ( , viable ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Control.Applicative ((<|>)) -import Control.Arrow ((&&&)) -import Control.Monad (guard) -import Data.Foldable as Foldable (foldl', toList) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Sequence (Seq) -import qualified Data.Sequence as Seq -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word (Word64) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.PBFT (PBftParams (..), - PBftSignatureThreshold (..)) -import Ouroboros.Consensus.Util.Condense (Condense (..)) -import Test.ThreadNet.Util.NodeJoinPlan -import qualified Test.Util.InvertedMap as InvertedMap -import Test.Util.InvertedMap (InvertedMap) -import Test.Util.Slots (NumSlots (..)) +import Cardano.Ledger.BaseTypes (unNonZero) +import Control.Applicative ((<|>)) +import Control.Arrow ((&&&)) +import Control.Monad (guard) +import Data.Foldable as 
Foldable (foldl', toList) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Sequence (Seq) +import Data.Sequence qualified as Seq +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.PBFT + ( PBftParams (..) + , PBftSignatureThreshold (..) + ) +import Ouroboros.Consensus.Util.Condense (Condense (..)) +import Test.ThreadNet.Util.NodeJoinPlan +import Test.Util.InvertedMap (InvertedMap) +import Test.Util.InvertedMap qualified as InvertedMap +import Test.Util.Slots (NumSlots (..)) oneK :: Num a => PBftParams -> a oneK PBftParams{pbftSecurityParam} = - fromIntegral $ unNonZero $ maxRollbacks pbftSecurityParam + fromIntegral $ unNonZero $ maxRollbacks pbftSecurityParam twoK :: Num a => PBftParams -> a twoK PBftParams{pbftSecurityParam} = - 2 * fromIntegral (unNonZero $ maxRollbacks pbftSecurityParam) + 2 * fromIntegral (unNonZero $ maxRollbacks pbftSecurityParam) oneN :: Num a => PBftParams -> a oneN PBftParams{pbftNumNodes = NumCoreNodes n} = fromIntegral n @@ -65,48 +66,38 @@ oneN PBftParams{pbftNumNodes = NumCoreNodes n} = fromIntegral n -------------------------------------------------------------------------------} -- | Outcome of a node leading a slot --- data Outcome - - = Absent - -- ^ the leader hadn't yet joined - - | Nominal - -- ^ the leader extended the networks' common chain with a valid block + = -- | the leader hadn't yet joined + Absent + | -- | the leader extended the networks' common chain with a valid block -- -- (We're using /nominal/ in the engineer's sense of \"according to -- plan\".) 
- - | Unable - -- ^ the leader could not forge a valid block - - | Wasted - -- ^ the leader forged a valid-but-irrelevant block, because it forged + Nominal + | -- | the leader could not forge a valid block + Unable + | -- | the leader forged a valid-but-irrelevant block, because it forged -- before sufficiently synchronizing -- -- As of commit 4eb23cfe \"Merge #1260\", this only happens if a node leads -- in the slot it joins. In that slot, it forges before the mini protocols -- are able to pull in any of the existing nodes' chain, and so it -- re-forges the epoch 0 boundary block (EBB). - + Wasted deriving (Eq, Show) instance Condense Outcome where condense = show -- | The state of a PBFT net with only one longest chain --- data State = State - { forgers :: !(Seq CoreNodeId) - -- ^ who forged each of the last @k@ blocks - + { forgers :: !(Seq CoreNodeId) + -- ^ who forged each of the last @k@ blocks , nomCount :: !NumNominals - -- ^ cache for optimization: number of 'Nominal's in 'outs' - + -- ^ cache for optimization: number of 'Nominal's in 'outs' , nextSlot :: !SlotNo - - , outs :: !(Seq Outcome) - -- ^ the outcome of each the last @2k@ slots + , outs :: !(Seq Outcome) + -- ^ the outcome of each the last @2k@ slots } deriving (Eq, Show) @@ -117,80 +108,77 @@ emptyState :: State emptyState = State Seq.empty (NumNominals 0) 0 Seq.empty -- | There are no recorded forgings --- nullState :: State -> Bool nullState State{forgers} = Seq.null forgers -- | Record the latest forging in the state -- -- Should be followed by a call to 'extendOutcome'. 
--- extendForger :: PBftParams -> State -> CoreNodeId -> State -extendForger params st@State{forgers} i = st - { forgers = snd $ prune (oneK params) $ forgers Seq.|> i - } +extendForger params st@State{forgers} i = + st + { forgers = snd $ prune (oneK params) $ forgers Seq.|> i + } count :: (Eq a, Foldable t) => a -> t a -> Int -count a bs = length [ () | b <- toList bs, a == b ] +count a bs = length [() | b <- toList bs, a == b] prune :: Int -> Seq a -> (Seq a, Seq a) prune lim x = Seq.splitAt excess x - where - excess = Seq.length x - lim + where + excess = Seq.length x - lim -- | Record the latest outcome in the state --- extendOutcome :: PBftParams -> State -> Outcome -> State -extendOutcome params State{forgers, nomCount, nextSlot, outs} out = State - { forgers - , nomCount = nomCount' - , outs = outs' - , nextSlot = succ nextSlot - } - where - (dropped, outs') = prune (twoK params) $ outs Seq.|> out +extendOutcome params State{forgers, nomCount, nextSlot, outs} out = + State + { forgers + , nomCount = nomCount' + , outs = outs' + , nextSlot = succ nextSlot + } + where + (dropped, outs') = prune (twoK params) $ outs Seq.|> out - NumNominals nc = nomCount - nomCount' = NumNominals $ nc - + (if Nominal == out then 1 else 0) - - count Nominal dropped + NumNominals nc = nomCount + nomCount' = + NumNominals $ + nc + + (if Nominal == out then 1 else 0) + - count Nominal dropped -- | @True@ if the state would violate 'pbftSignatureThreshold' --- tooMany :: PBftParams -> State -> Bool tooMany params st@State{forgers} = - count i forgers > pbftLimit params - where - i = nextLeader params st + count i forgers > pbftLimit params + where + i = nextLeader params st -- | How many blocks in the latest @k@-blocks that a single core node is -- allowed to have signed --- pbftLimit :: Integral a => PBftParams -> a pbftLimit params = - floor $ oneK params * getPBftSignatureThreshold pbftSignatureThreshold - where - PBftParams{pbftSignatureThreshold} = params + floor $ oneK params 
* getPBftSignatureThreshold pbftSignatureThreshold + where + PBftParams{pbftSignatureThreshold} = params -- | @True@ if the state resulted from a sequence of @2k@ slots that had less -- than @k@ 'Nominal' outcomes --- tooSparse :: PBftParams -> State -> Bool tooSparse params State{nomCount, outs} = - Seq.length outs - nc > oneK params - where - NumNominals nc = nomCount + Seq.length outs - nc > oneK params + where + NumNominals nc = nomCount -- | @True@ if the state has a suffix of @n@ sequential 'Nominal' outcomes -- -- Once this happens, by the assumption list on 'step', all remaining slots -- will be 'Nominal'. --- saturated :: PBftParams -> State -> Bool saturated params State{outs} = - Seq.length outs >= oneN params && all (== Nominal) nouts - where - (_, nouts) = prune (oneN params) outs + Seq.length outs >= oneN params && all (== Nominal) nouts + where + (_, nouts) = prune (oneN params) outs {------------------------------------------------------------------------------- PBFT reference implementation @@ -206,43 +194,41 @@ saturated params State{outs} = -- -- The assumption is useful in this reference implementation because it lets us -- maintain a single \"global\" 'State' common to all nodes. 
--- step :: PBftParams -> NodeJoinPlan -> State -> (Outcome, State) step params nodeJoinPlan st - | maybe True (s <) mbJ = (id &&& stuck) Absent + | maybe True (s <) mbJ = (id &&& stuck) Absent | joinLead, not isFirst = (id &&& stuck) Wasted - | tooMany params st' = (id &&& stuck) Unable - | otherwise = (id &&& extendOutcome params st') Nominal - where - s = nextSlot st + | tooMany params st' = (id &&& stuck) Unable + | otherwise = (id &&& extendOutcome params st') Nominal + where + s = nextSlot st - -- @s@'s scheduled leader - i = nextLeader params st + -- @s@'s scheduled leader + i = nextLeader params st - -- when @i@ joins the network - mbJ = Map.lookup i m where NodeJoinPlan m = nodeJoinPlan + -- when @i@ joins the network + mbJ = Map.lookup i m where NodeJoinPlan m = nodeJoinPlan - -- @i@ is joining and also leading - joinLead = Just s == mbJ + -- @i@ is joining and also leading + joinLead = Just s == mbJ - -- whether @i@ would be the first to lead - isFirst = nullState st + -- whether @i@ would be the first to lead + isFirst = nullState st - -- the state that @i@ would make - st' = extendForger params st i + -- the state that @i@ would make + st' = extendForger params st i - stuck o = extendOutcome params st o + stuck o = extendOutcome params st o -- | Iterate 'step' -- -- POST 'nextSlot' is @>=@ the given slot --- advanceUpTo :: PBftParams -> NodeJoinPlan -> State -> SlotNo -> State advanceUpTo params nodeJoinPlan = go - where - go st s - | nextSlot st >= s = st - | otherwise = go (snd $ step params nodeJoinPlan st) s + where + go st s + | nextSlot st >= s = st + | otherwise = go (snd $ step params nodeJoinPlan st) s -- | The result of a reference simulation -- @@ -259,277 +245,290 @@ advanceUpTo params nodeJoinPlan = go -- up to that slot. Moreover, it cannot predict past that slot, since the -- choice of chain affects who may be able to lead next. Therefore, the -- simulator offers no information beyond identifying this case. 
--- data Result - = Forked !NumSlots !(Map CoreNodeId (Set SlotNo)) - -- ^ All the chains in the net consist of 1 block. The map contains the + = -- | All the chains in the net consist of 1 block. The map contains the -- deterministic block selections made by nodes, identifying each one-block -- chain by the slot in which its block was forged. - | Nondeterministic - -- ^ The outcomes cannot be determined statically. - | Outcomes ![Outcome] - -- ^ The expected outcomes of each slot. - deriving (Show) + Forked !NumSlots !(Map CoreNodeId (Set SlotNo)) + | -- | The outcomes cannot be determined statically. + Nondeterministic + | -- | The expected outcomes of each slot. + Outcomes ![Outcome] + deriving Show resultConstrName :: Result -> String resultConstrName = \case - Outcomes{} -> "Outcomes" - Forked{} -> "Forked" + Outcomes{} -> "Outcomes" + Forked{} -> "Forked" Nondeterministic -> "Nondeterministic" -- | Run 'simulateStep', switching to iterating 'step' once there is a 2-block -- chain -- -- See 'Result'. 
--- simulate :: - HasCallStack => PBftParams -> NodeJoinPlan -> NumSlots -> Result + HasCallStack => PBftParams -> NodeJoinPlan -> NumSlots -> Result simulate params nodeJoinPlan numSlots = - case simulateShort params nodeJoinPlan numSlots of - Left (Det st@State{outs}) -> Outcomes $ toList outs <> go st - Left Nondet -> Nondeterministic - Right st -> Forked numSlots $ selected <> forged - where - ShortState{ssChains, ssJoined} = st - - -- nodes that forged a chain definitely selected it - forged :: Map CoreNodeId (Set SlotNo) - forged = Set.singleton <$> ssChains - - -- each chain is identified by the slot of the only block in the - -- chain - chains :: Set SlotNo - chains = foldMap Set.singleton ssChains - - -- nodes that did not forge their own chain select among the first - -- available - selected :: Map CoreNodeId (Set SlotNo) - selected = flip Map.map ssJoined $ \joinSlot -> - case Set.lookupMin chains of - -- if no chains /ever/ exist, the node selects none - Nothing -> Set.empty - -- the first chain was forged in this slot - Just s - -- if the node joined before or during the forging of the first - -- chain, it selects it - | joinSlot <= s -> Set.singleton s - -- the node selects from among the chains that existed when it - -- joined - | otherwise -> - (fst . 
Set.split (succ joinSlot)) $ + case simulateShort params nodeJoinPlan numSlots of + Left (Det st@State{outs}) -> Outcomes $ toList outs <> go st + Left Nondet -> Nondeterministic + Right st -> Forked numSlots $ selected <> forged + where + ShortState{ssChains, ssJoined} = st + + -- nodes that forged a chain definitely selected it + forged :: Map CoreNodeId (Set SlotNo) + forged = Set.singleton <$> ssChains + + -- each chain is identified by the slot of the only block in the + -- chain + chains :: Set SlotNo + chains = foldMap Set.singleton ssChains + + -- nodes that did not forge their own chain select among the first + -- available + selected :: Map CoreNodeId (Set SlotNo) + selected = flip Map.map ssJoined $ \joinSlot -> + case Set.lookupMin chains of + -- if no chains /ever/ exist, the node selects none + Nothing -> Set.empty + -- the first chain was forged in this slot + Just s + -- if the node joined before or during the forging of the first + -- chain, it selects it + | joinSlot <= s -> Set.singleton s + -- the node selects from among the chains that existed when it + -- joined + | otherwise -> + (fst . Set.split (succ joinSlot)) $ chains - where - NumSlots t = numSlots + where + NumSlots t = numSlots - go st - | nextSlot st >= SlotNo t = [] - | otherwise = o : go st' - where - (o, st') = step params nodeJoinPlan st + go st + | nextSlot st >= SlotNo t = [] + | otherwise = o : go st' + where + (o, st') = step params nodeJoinPlan st {------------------------------------------------------------------------------- Prior to a 2-block chain arising -------------------------------------------------------------------------------} data DetOrNondet - = Det !State - -- ^ The plan is /deterministic/ and results in the given 'State'. - | Nondet - -- ^ The plan is /nondeterministic/; we cannot predict which 'State' it + = -- | The plan is /deterministic/ and results in the given 'State'. 
+ Det !State + | -- | The plan is /nondeterministic/; we cannot predict which 'State' it -- yields. -- -- See 'Result'. + Nondet simulateShort :: - HasCallStack - => PBftParams -> NodeJoinPlan -> NumSlots -> Either DetOrNondet ShortState + HasCallStack => + PBftParams -> NodeJoinPlan -> NumSlots -> Either DetOrNondet ShortState simulateShort params nodeJoinPlan (NumSlots t) -- no one has time to forge - | 0 == t = Left $ Det emptyState + | 0 == t = Left $ Det emptyState | otherwise = go $ emptyShortState nodeJoinPlan - where - go :: ShortState -> Either DetOrNondet ShortState - go st = do - st'@ShortState{ssNextSlot = s'} <- stepShort params st - (if s' < SlotNo t then go else Right) st' + where + go :: ShortState -> Either DetOrNondet ShortState + go st = do + st'@ShortState{ssNextSlot = s'} <- stepShort params st + (if s' < SlotNo t then go else Right) st' -- | The state of a PBFT net with no chains or any number of one-blocks chains -- -- See 'Result'. --- data ShortState = ShortState - { ssAbsent :: !(InvertedMap SlotNo CoreNodeId) - -- ^ The nodes that haven't yet joined - -- - -- INVARIANT @all (>= 'ssNextSlot') 'ssAbsent'@ - , ssChains :: !(Map CoreNodeId SlotNo) - -- ^ The nodes that have forged a one-block chain and when - , ssJoined :: !(Map CoreNodeId SlotNo) - -- ^ The nodes that have joined in previous slots + { ssAbsent :: !(InvertedMap SlotNo CoreNodeId) + -- ^ The nodes that haven't yet joined + -- + -- INVARIANT @all (>= 'ssNextSlot') 'ssAbsent'@ + , ssChains :: !(Map CoreNodeId SlotNo) + -- ^ The nodes that have forged a one-block chain and when + , ssJoined :: !(Map CoreNodeId SlotNo) + -- ^ The nodes that have joined in previous slots , ssNextSlot :: !SlotNo } deriving Show -- | Origin: no joined nodes and no chains --- emptyShortState :: NodeJoinPlan -> ShortState -emptyShortState (NodeJoinPlan m) = ShortState - { ssAbsent = InvertedMap.fromMap m - , ssChains = Map.empty - , ssJoined = Map.empty - , ssNextSlot = 0 - } +emptyShortState 
(NodeJoinPlan m) = + ShortState + { ssAbsent = InvertedMap.fromMap m + , ssChains = Map.empty + , ssJoined = Map.empty + , ssNextSlot = 0 + } stepShort :: - HasCallStack - => PBftParams -> ShortState -> Either DetOrNondet ShortState + HasCallStack => + PBftParams -> ShortState -> Either DetOrNondet ShortState stepShort params st | Just don <- mbDon = Left don - | otherwise = Right ShortState - { ssAbsent = ssAbsent' - , ssChains = ssChains' - , ssJoined = ssJoined' - , ssNextSlot = ssNextSlot' - } - where - ShortState{ssAbsent, ssChains, ssJoined, ssNextSlot} = st - - leaderOf :: SlotNo -> CoreNodeId - leaderOf = mkLeaderOf params - - joinSlotOf :: CoreNodeId -> SlotNo - joinSlotOf = - coreNodeIdJoinSlot $ NodeJoinPlan $ ssJoined' <> InvertedMap.toMap ssAbsent' - - (ssAbsent', ssJoined') = - (later, ssJoined <> InvertedMap.toMap now) - where - (now, later) = assertInvariant $ - InvertedMap.spanAntitone (== ssNextSlot) ssAbsent - ssChains' = - maybe id (\l -> Map.insert l ssNextSlot) mbLeader $ + | otherwise = + Right + ShortState + { ssAbsent = ssAbsent' + , ssChains = ssChains' + , ssJoined = ssJoined' + , ssNextSlot = ssNextSlot' + } + where + ShortState{ssAbsent, ssChains, ssJoined, ssNextSlot} = st + + leaderOf :: SlotNo -> CoreNodeId + leaderOf = mkLeaderOf params + + joinSlotOf :: CoreNodeId -> SlotNo + joinSlotOf = + coreNodeIdJoinSlot $ NodeJoinPlan $ ssJoined' <> InvertedMap.toMap ssAbsent' + + (ssAbsent', ssJoined') = + (later, ssJoined <> InvertedMap.toMap now) + where + (now, later) = + assertInvariant $ + InvertedMap.spanAntitone (== ssNextSlot) ssAbsent + ssChains' = + maybe id (\l -> Map.insert l ssNextSlot) mbLeader $ ssChains - ssNextSlot' = succ ssNextSlot - - assertInvariant :: forall a. 
a -> a - assertInvariant x = case InvertedMap.minViewWithKey ssAbsent of - Just ((s, _), _) | s < ssNextSlot -> error $ - "ShortState invariant violation: " <> show (ssNextSlot, ssAbsent) - _ -> x - - -- 'Just' if we can decide 'DetOrNondet' - mbDon :: Maybe DetOrNondet - mbDon = - degenerateLim <|> - detSecondLead <|> - choiceMade <|> - detAllHaveJoinedWithoutMultipleChains - - -- each node is 'Unable' to lead even once - degenerateLim :: Maybe DetOrNondet - degenerateLim = do - guard $ (0 :: Word64) == pbftLimit params - Just $ Det State - { forgers = Seq.empty - , nomCount = NumNominals 0 - , nextSlot = ssNextSlot' - , outs = Seq.fromList $ - [ if s < joinSlotOf (leaderOf s) then Absent else Unable - | s <- [0 .. ssNextSlot] - ] - } - - -- a node is leading for the second time; it has won the race - detSecondLead :: Maybe DetOrNondet - detSecondLead = do - winner <- mbLeader - leadSlot <- Map.lookup winner ssChains - Just $ Det State - { forgers = Seq.fromList [winner, winner] - , nomCount = NumNominals 2 - , nextSlot = ssNextSlot' - , outs = Seq.fromList $ - [ let l = leaderOf s in - if | s == leadSlot || s == ssNextSlot -> Nominal - | s < joinSlotOf l -> Absent - | Just s == Map.lookup l ssChains -> Wasted - | otherwise -> Unable - | s <- [0 .. ssNextSlot] - ] - } - - -- old node is leading for the first time in a slot later than its join - -- slot - choiceMade :: Maybe DetOrNondet - choiceMade = do - leader <- mbLeader - guard $ leader `Map.notMember` ssChains - guard $ leader `Map.member` ssJoined - case Map.toList ssChains of - -- it's merely forging the first 1-block chain - [] -> Nothing - - [(winner, leadSlot)] -> Just $ Det State - { forgers = Seq.fromList [winner, leader] - , nomCount = NumNominals 2 + ssNextSlot' = succ ssNextSlot + + assertInvariant :: forall a. 
a -> a + assertInvariant x = case InvertedMap.minViewWithKey ssAbsent of + Just ((s, _), _) + | s < ssNextSlot -> + error $ + "ShortState invariant violation: " <> show (ssNextSlot, ssAbsent) + _ -> x + + -- 'Just' if we can decide 'DetOrNondet' + mbDon :: Maybe DetOrNondet + mbDon = + degenerateLim + <|> detSecondLead + <|> choiceMade + <|> detAllHaveJoinedWithoutMultipleChains + + -- each node is 'Unable' to lead even once + degenerateLim :: Maybe DetOrNondet + degenerateLim = do + guard $ (0 :: Word64) == pbftLimit params + Just $ + Det + State + { forgers = Seq.empty + , nomCount = NumNominals 0 , nextSlot = ssNextSlot' - , outs = Seq.fromList - [ let l = leaderOf s in - if | s == leadSlot || s == ssNextSlot -> Nominal - | s < joinSlotOf l -> Absent - | Just s == Map.lookup l ssChains -> Wasted - | otherwise -> Unable - | s <- [0 .. ssNextSlot] - ] + , outs = + Seq.fromList $ + [ if s < joinSlotOf (leaderOf s) then Absent else Unable + | s <- [0 .. ssNextSlot] + ] } - _:_:_ -> Just Nondet - - -- all nodes have joined and there are not multiple chains - detAllHaveJoinedWithoutMultipleChains :: Maybe DetOrNondet - detAllHaveJoinedWithoutMultipleChains = do - guard $ InvertedMap.null ssAbsent' - case Map.toList ssChains' of - -- the degenerateLim guard should have prevented this evaluation; we - -- know no other way for this state to be reachable - [] -> error "impossible!" 
- - [(winner, leadSlot)] -> Just $ Det State - { forgers = Seq.singleton winner - , nomCount = NumNominals 1 + -- a node is leading for the second time; it has won the race + detSecondLead :: Maybe DetOrNondet + detSecondLead = do + winner <- mbLeader + leadSlot <- Map.lookup winner ssChains + Just $ + Det + State + { forgers = Seq.fromList [winner, winner] + , nomCount = NumNominals 2 , nextSlot = ssNextSlot' - , outs = Seq.fromList - [ let l = leaderOf s in - if | s == leadSlot -> Nominal - | s < joinSlotOf l -> Absent - | Just s == Map.lookup l ssChains -> Wasted - | otherwise -> Unable - | s <- [0 .. ssNextSlot] - ] + , outs = + Seq.fromList $ + [ let l = leaderOf s + in if + | s == leadSlot || s == ssNextSlot -> Nominal + | s < joinSlotOf l -> Absent + | Just s == Map.lookup l ssChains -> Wasted + | otherwise -> Unable + | s <- [0 .. ssNextSlot] + ] } - _:_:_ -> Nothing - - -- the node that successfully leads in ssNextSlot - mbLeader :: Maybe CoreNodeId - mbLeader - -- node is 'Absent' - | not $ cid `Map.member` ssJoined' - = Nothing - - -- each node is 'Unable' to lead twice before all lead once - -- - -- Note: This whole function will reach a 'DetOrNondet' before any node - -- leads twice. 
- | (1 :: Word64) < oneK params - , (1 :: Word64) == pbftLimit params - , cid `Map.member` ssChains - = Nothing - - | otherwise = Just cid - where - -- the node scheduled to lead in ssNextSlot - cid = leaderOf ssNextSlot + -- old node is leading for the first time in a slot later than its join + -- slot + choiceMade :: Maybe DetOrNondet + choiceMade = do + leader <- mbLeader + guard $ leader `Map.notMember` ssChains + guard $ leader `Map.member` ssJoined + case Map.toList ssChains of + -- it's merely forging the first 1-block chain + [] -> Nothing + [(winner, leadSlot)] -> + Just $ + Det + State + { forgers = Seq.fromList [winner, leader] + , nomCount = NumNominals 2 + , nextSlot = ssNextSlot' + , outs = + Seq.fromList + [ let l = leaderOf s + in if + | s == leadSlot || s == ssNextSlot -> Nominal + | s < joinSlotOf l -> Absent + | Just s == Map.lookup l ssChains -> Wasted + | otherwise -> Unable + | s <- [0 .. ssNextSlot] + ] + } + _ : _ : _ -> Just Nondet + + -- all nodes have joined and there are not multiple chains + detAllHaveJoinedWithoutMultipleChains :: Maybe DetOrNondet + detAllHaveJoinedWithoutMultipleChains = do + guard $ InvertedMap.null ssAbsent' + case Map.toList ssChains' of + -- the degenerateLim guard should have prevented this evaluation; we + -- know no other way for this state to be reachable + [] -> error "impossible!" + [(winner, leadSlot)] -> + Just $ + Det + State + { forgers = Seq.singleton winner + , nomCount = NumNominals 1 + , nextSlot = ssNextSlot' + , outs = + Seq.fromList + [ let l = leaderOf s + in if + | s == leadSlot -> Nominal + | s < joinSlotOf l -> Absent + | Just s == Map.lookup l ssChains -> Wasted + | otherwise -> Unable + | s <- [0 .. 
ssNextSlot] + ] + } + _ : _ : _ -> Nothing + + -- the node that successfully leads in ssNextSlot + mbLeader :: Maybe CoreNodeId + mbLeader + -- node is 'Absent' + | not $ cid `Map.member` ssJoined' = + Nothing + -- each node is 'Unable' to lead twice before all lead once + -- + -- Note: This whole function will reach a 'DetOrNondet' before any node + -- leads twice. + | (1 :: Word64) < oneK params + , (1 :: Word64) == pbftLimit params + , cid `Map.member` ssChains = + Nothing + | otherwise = Just cid + where + -- the node scheduled to lead in ssNextSlot + cid = leaderOf ssNextSlot {------------------------------------------------------------------------------- Queries @@ -542,49 +541,47 @@ stepShort params st -- -- NOTE this function does not consider the competition between 1-block -- multichains, so 'definitelyEnoughBlocks' must still be checked to confirm --- viable :: PBftParams -> SlotNo -> NodeJoinPlan -> State -> Bool viable params sentinel nodeJoinPlan st0 = go st0 - where - nodeJoinPlan' = fillOut params nodeJoinPlan (nextSlot st0) + where + nodeJoinPlan' = fillOut params nodeJoinPlan (nextSlot st0) - go st - | sentinel <= nextSlot st = True - | saturated params st = True -- an optimization - | tooSparse params st' = False - | otherwise = go st' - where - (_o, st') = step params nodeJoinPlan' st + go st + | sentinel <= nextSlot st = True + | saturated params st = True -- an optimization + | tooSparse params st' = False + | otherwise = go st' + where + (_o, st') = step params nodeJoinPlan' st -- | Confirm that the simulated chain includes at least @k@ blocks within every -- @2k@-slot window --- definitelyEnoughBlocks :: PBftParams -> Result -> Bool definitelyEnoughBlocks params = \case - Forked numSlots m -> - let NumSlots t = numSlots - nominals = if Map.null m then 0 else 1 - badCount = max 0 $ t - nominals - in badCount < k - Nondeterministic -> False - Outcomes outcomes -> - let enters = map tick outcomes - exits = replicate (2 * fromIntegral k) 0 ++ 
enters - in go 0 $ zip exits enters - where - PBftParams{pbftSecurityParam} = params - k = unNonZero $ maxRollbacks pbftSecurityParam - - tick :: Outcome -> Word64 - tick Nominal = 0 - tick _ = 1 - - go :: Word64 -> [(Word64, Word64)] -> Bool - go badCount exens - | badCount > k = False - | otherwise = case exens of - [] -> True - (exit, enter) : exens' -> go (badCount - exit + enter) exens' + Forked numSlots m -> + let NumSlots t = numSlots + nominals = if Map.null m then 0 else 1 + badCount = max 0 $ t - nominals + in badCount < k + Nondeterministic -> False + Outcomes outcomes -> + let enters = map tick outcomes + exits = replicate (2 * fromIntegral k) 0 ++ enters + in go 0 $ zip exits enters + where + PBftParams{pbftSecurityParam} = params + k = unNonZero $ maxRollbacks pbftSecurityParam + + tick :: Outcome -> Word64 + tick Nominal = 0 + tick _ = 1 + + go :: Word64 -> [(Word64, Word64)] -> Bool + go badCount exens + | badCount > k = False + | otherwise = case exens of + [] -> True + (exit, enter) : exens' -> go (badCount - exit + enter) exens' {------------------------------------------------------------------------------- Auxiliaries @@ -592,13 +589,12 @@ definitelyEnoughBlocks params = \case mkLeaderOf :: PBftParams -> SlotNo -> CoreNodeId mkLeaderOf params (SlotNo s) = - CoreNodeId $ s `mod` n - where - PBftParams{pbftNumNodes} = params - NumCoreNodes n = pbftNumNodes + CoreNodeId $ s `mod` n + where + PBftParams{pbftNumNodes} = params + NumCoreNodes n = pbftNumNodes -- | The scheduled leader of 'nextSlot' --- nextLeader :: PBftParams -> State -> CoreNodeId nextLeader params State{nextSlot} = mkLeaderOf params nextSlot @@ -606,15 +602,14 @@ nextLeader params State{nextSlot} = mkLeaderOf params nextSlot -- -- Specifically, \"ASAP\" is either when the last already-scheduled node joins -- or \"now\" (the given 'SlotNo'), whichever is latest. 
--- fillOut :: PBftParams -> NodeJoinPlan -> SlotNo -> NodeJoinPlan fillOut params (NodeJoinPlan m) s = - NodeJoinPlan $ + NodeJoinPlan $ Foldable.foldl' (\acc i -> Map.insert i j acc) m $ - CoreNodeId <$> [i0 .. iN] - where - iN = oneN params - 1 - j = max s j0 - (i0, j0) = case Map.lookupMax m of - Nothing -> (0, 0) - Just (CoreNodeId h, x) -> (succ h, x) + CoreNodeId <$> [i0 .. iN] + where + iN = oneN params - 1 + j = max s j0 + (i0, j0) = case Map.lookupMax m of + Nothing -> (0, 0) + Just (CoreNodeId h, x) -> (succ h, x) diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Rekeying.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Rekeying.hs index 4490a3c73a..6a2a48362b 100644 --- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Rekeying.hs +++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Rekeying.hs @@ -2,18 +2,18 @@ {-# LANGUAGE LambdaCase #-} {-# LANGUAGE NamedFieldPuns #-} -module Test.ThreadNet.Rekeying ( - Rekeying (..) +module Test.ThreadNet.Rekeying + ( Rekeying (..) , fromRekeyingToRekeyM ) where -import Data.Functor ((<&>)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Util.IOLike -import Test.ThreadNet.Network -import Test.Util.Stream +import Data.Functor ((<&>)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Util.IOLike +import Test.ThreadNet.Network +import Test.Util.Stream -- | Functionality used by test node in order to update its operational key -- @@ -21,43 +21,45 @@ import Test.Util.Stream -- is used to define 'tnaRekeyM', which the test infrastructure invokes per the -- 'NodeRestarts' schedule. data Rekeying m blk = forall opKey. 
Rekeying - { rekeyOracle - :: CoreNodeId -> SlotNo -> Maybe SlotNo - -- ^ The first /nominal/ slot after the given slot, assuming the given core - -- node cannot lead. - -- - -- IE the first slot that will result in a block successfully being forged - -- and diffused (eg no @PBftExceededSignThreshold@). + { rekeyOracle :: + CoreNodeId -> + SlotNo -> + Maybe SlotNo + -- ^ The first /nominal/ slot after the given slot, assuming the given core + -- node cannot lead. + -- + -- IE the first slot that will result in a block successfully being forged + -- and diffused (eg no @PBftExceededSignThreshold@). , rekeyUpd :: - CoreNodeId - -> ProtocolInfo blk - -> m [BlockForging m blk] - -> EpochNo - -> opKey - -> m (Maybe (TestNodeInitialization m blk)) - -- ^ new config and any corresponding delegation certificate transactions - -- - -- The given epoch contains the first nominal slot whose block will - -- include the redelegation certificate transaction. - -- - -- The 'TestNodeInitialization' includes the new 'ProtocolInfo' used when - -- the node completes restarting. + CoreNodeId -> + ProtocolInfo blk -> + m [BlockForging m blk] -> + EpochNo -> + opKey -> + m (Maybe (TestNodeInitialization m blk)) + -- ^ new config and any corresponding delegation certificate transactions + -- + -- The given epoch contains the first nominal slot whose block will + -- include the redelegation certificate transaction. + -- + -- The 'TestNodeInitialization' includes the new 'ProtocolInfo' used when + -- the node completes restarting. 
, rekeyFreshSKs :: Stream opKey - -- ^ a stream that only repeats itself after an *effectively* *infinite* - -- number of iterations and also never includes an operational key from - -- the genesis configuration + -- ^ a stream that only repeats itself after an *effectively* *infinite* + -- number of iterations and also never includes an operational key from + -- the genesis configuration } fromRekeyingToRekeyM :: IOLike m => Rekeying m blk -> m (RekeyM m blk) fromRekeyingToRekeyM Rekeying{rekeyFreshSKs, rekeyOracle, rekeyUpd} = do - rekeyVar <- uncheckedNewTVarM rekeyFreshSKs - pure $ \cid pInfo blockForging s mkEno -> case rekeyOracle cid s of - Nothing -> pure $ plainTestNodeInitialization pInfo blockForging - Just s' -> do - x <- atomically $ do - x :< xs <- readTVar rekeyVar - x <$ writeTVar rekeyVar xs - eno <- mkEno s' - rekeyUpd cid pInfo blockForging eno x <&> \case - Nothing -> plainTestNodeInitialization pInfo blockForging - Just tni -> tni + rekeyVar <- uncheckedNewTVarM rekeyFreshSKs + pure $ \cid pInfo blockForging s mkEno -> case rekeyOracle cid s of + Nothing -> pure $ plainTestNodeInitialization pInfo blockForging + Just s' -> do + x <- atomically $ do + x :< xs <- readTVar rekeyVar + x <$ writeTVar rekeyVar xs + eno <- mkEno s' + rekeyUpd cid pInfo blockForging eno x <&> \case + Nothing -> plainTestNodeInitialization pInfo blockForging + Just tni -> tni diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/TxGen.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/TxGen.hs index 3f53d70267..1b15d520a3 100644 --- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/TxGen.hs +++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/TxGen.hs @@ -6,36 +6,37 @@ {-# LANGUAGE UndecidableInstances #-} -- | Transaction generator for testing -module Test.ThreadNet.TxGen ( - TxGen (..) +module Test.ThreadNet.TxGen + ( TxGen (..) 
+ -- * Implementation for HFC , WrapTxGenExtra (..) , testGenTxsHfc ) where -import Data.Kind (Type) -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Index -import Data.SOP.Strict -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..)) -import Ouroboros.Consensus.NodeId (CoreNodeId) -import Test.QuickCheck (Gen) +import Data.Kind (Type) +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Functors (Flip (..)) +import Data.SOP.Index +import Data.SOP.Strict +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..)) +import Ouroboros.Consensus.NodeId (CoreNodeId) +import Test.QuickCheck (Gen) {------------------------------------------------------------------------------- TxGen class -------------------------------------------------------------------------------} class TxGen blk where - -- | Extra information required to generate transactions type TxGenExtra blk :: Type + type TxGenExtra blk = () -- | Generate a number of transactions, valid or invalid, that can be @@ -45,13 +46,14 @@ class TxGen blk where -- -- Note: this function returns a list so that an empty list can be returned -- in case we are unable to generate transactions for a @blk@. 
- testGenTxs :: CoreNodeId - -> NumCoreNodes - -> SlotNo - -> TopLevelConfig blk - -> TxGenExtra blk - -> LedgerState blk ValuesMK - -> Gen [GenTx blk] + testGenTxs :: + CoreNodeId -> + NumCoreNodes -> + SlotNo -> + TopLevelConfig blk -> + TxGenExtra blk -> + LedgerState blk ValuesMK -> + Gen [GenTx blk] {------------------------------------------------------------------------------- Implementation for HFC @@ -59,9 +61,9 @@ class TxGen blk where -- | Newtypes wrapper around the 'TxGenExtra' type family so that it can be -- partially applied. -newtype WrapTxGenExtra blk = WrapTxGenExtra { - unwrapTxGenExtra :: TxGenExtra blk - } +newtype WrapTxGenExtra blk = WrapTxGenExtra + { unwrapTxGenExtra :: TxGenExtra blk + } -- | Function that can be used for 'TxGen' instances for 'HardForkBlock'. -- @@ -73,35 +75,39 @@ newtype WrapTxGenExtra blk = WrapTxGenExtra { -- Choose @NP WrapTxGenExtra xs@ for the instance of the 'TxGenExtra' type -- family, where @xs@ matches the concrete instantiation. testGenTxsHfc :: - forall xs. (All TxGen xs, CanHardFork xs) - => CoreNodeId - -> NumCoreNodes - -> SlotNo - -> TopLevelConfig (HardForkBlock xs) - -> NP WrapTxGenExtra xs - -> LedgerState (HardForkBlock xs) ValuesMK - -> Gen [GenTx (HardForkBlock xs)] + forall xs. + (All TxGen xs, CanHardFork xs) => + CoreNodeId -> + NumCoreNodes -> + SlotNo -> + TopLevelConfig (HardForkBlock xs) -> + NP WrapTxGenExtra xs -> + LedgerState (HardForkBlock xs) ValuesMK -> + Gen [GenTx (HardForkBlock xs)] testGenTxsHfc coreNodeId numCoreNodes curSlotNo cfg extras state = - hcollapse $ + hcollapse $ hcizipWith3 (Proxy @TxGen) aux cfgs extras (State.tip (hardForkLedgerStatePerEra state)) - where - cfgs = distribTopLevelConfig ei cfg - ei = State.epochInfoLedger - (configLedger cfg) - (hardForkLedgerStatePerEra state) + where + cfgs = distribTopLevelConfig ei cfg + ei = + State.epochInfoLedger + (configLedger cfg) + (hardForkLedgerStatePerEra state) - aux :: - forall blk. 
TxGen blk - => Index xs blk - -> TopLevelConfig blk - -> WrapTxGenExtra blk - -> Flip LedgerState ValuesMK blk - -> K (Gen [GenTx (HardForkBlock xs)]) blk - aux index cfg' (WrapTxGenExtra extra') (Flip state') = K $ - fmap (injectNS' (Proxy @GenTx) index) - <$> testGenTxs coreNodeId numCoreNodes curSlotNo cfg' extra' state' + aux :: + forall blk. + TxGen blk => + Index xs blk -> + TopLevelConfig blk -> + WrapTxGenExtra blk -> + Flip LedgerState ValuesMK blk -> + K (Gen [GenTx (HardForkBlock xs)]) blk + aux index cfg' (WrapTxGenExtra extra') (Flip state') = + K $ + fmap (injectNS' (Proxy @GenTx) index) + <$> testGenTxs coreNodeId numCoreNodes curSlotNo cfg' extra' state' diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util.hs index 8f6e67ef5c..de2b44bce1 100644 --- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util.hs +++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util.hs @@ -4,51 +4,56 @@ {-# LANGUAGE TupleSections #-} {-# LANGUAGE TypeFamilies #-} -module Test.ThreadNet.Util ( - -- * Chain properties +module Test.ThreadNet.Util + ( -- * Chain properties chainCommonPrefix , prop_all_common_prefix , shortestLength + -- * LeaderSchedule , consensusExpected , emptyLeaderSchedule , roundRobinLeaderSchedule + -- * GraphViz Dot , tracesToDot + -- * Re-exports , module Test.ThreadNet.Util.Expectations ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Data.Graph.Inductive.Graph -import Data.Graph.Inductive.PatriciaTree -import Data.GraphViz -import Data.GraphViz.Attributes.Complete -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (catMaybes) -import Data.Set (Set) -import qualified Data.Set as Set -import qualified Data.Text.Lazy as Text -import Data.Word (Word64) -import Numeric.Natural (Natural) -import Ouroboros.Consensus.Block 
-import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..)) -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.LeaderSchedule -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Network.Mock.Chain (Chain (..)) -import qualified Ouroboros.Network.Mock.Chain as Chain -import Test.QuickCheck -import Test.ThreadNet.Network (NodeOutput (..)) -import Test.ThreadNet.Util.Expectations (NumBlocks (..), - determineForkLength) -import Test.ThreadNet.Util.HasCreator -import Test.ThreadNet.Util.NodeJoinPlan (NodeJoinPlan) -import qualified Test.Util.MockChain as Chain -import Test.Util.Slots (NumSlots (..)) +import Cardano.Ledger.BaseTypes (unNonZero) +import Data.Graph.Inductive.Graph +import Data.Graph.Inductive.PatriciaTree +import Data.GraphViz +import Data.GraphViz.Attributes.Complete +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (catMaybes) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Text.Lazy qualified as Text +import Data.Word (Word64) +import Numeric.Natural (Natural) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..)) +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.LeaderSchedule +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Network.Mock.Chain (Chain (..)) +import Ouroboros.Network.Mock.Chain qualified as Chain +import Test.QuickCheck +import Test.ThreadNet.Network (NodeOutput (..)) +import Test.ThreadNet.Util.Expectations + ( NumBlocks (..) 
+ , determineForkLength + ) +import Test.ThreadNet.Util.HasCreator +import Test.ThreadNet.Util.NodeJoinPlan (NodeJoinPlan) +import Test.Util.MockChain qualified as Chain +import Test.Util.Slots (NumSlots (..)) {------------------------------------------------------------------------------- Chain properties @@ -57,196 +62,219 @@ import Test.Util.Slots (NumSlots (..)) shortestLength :: Map NodeId (Chain b) -> Natural shortestLength = fromIntegral . minimum . map Chain.length . Map.elems -prop_all_common_prefix :: (HasHeader b, Condense (HeaderHash b), Eq b) - => Word64 -> [Chain b] -> Property -prop_all_common_prefix _ [] = property True -prop_all_common_prefix l (c:cs) = conjoin [prop_common_prefix l c d | d <- cs] +prop_all_common_prefix :: + (HasHeader b, Condense (HeaderHash b), Eq b) => + Word64 -> [Chain b] -> Property +prop_all_common_prefix _ [] = property True +prop_all_common_prefix l (c : cs) = conjoin [prop_common_prefix l c d | d <- cs] -prop_common_prefix :: forall b. (HasHeader b, Condense (HeaderHash b), Eq b) - => Word64 -> Chain b -> Chain b -> Property +prop_common_prefix :: + forall b. + (HasHeader b, Condense (HeaderHash b), Eq b) => + Word64 -> Chain b -> Chain b -> Property prop_common_prefix l x y = go x y .&&. 
go y x - where - go c d = - let (l', c') = findPrefix c d - e = "after dropping " - <> show l' - <> " blocks from " - <> showChain c - <> ",\n\nthe resulting " - <> showChain c' - <> "\n\nis a prefix of " - <> showChain d - <> ",\n\nbut only " - <> show l - <> " block(s) should have been necessary" - in counterexample e $ l' <= l - - findPrefix c' d - | c' `Chain.isPrefixOf` d = (0, c') - | otherwise = let (l', c'') = findPrefix (Chain.dropLastBlocks 1 c') d - in (l' + 1, c'') - - showChain :: Chain b -> String - showChain c = condense (Chain.headTip c) - <> "\n(length " - <> show (Chain.length c) - <> ")" + where + go c d = + let (l', c') = findPrefix c d + e = + "after dropping " + <> show l' + <> " blocks from " + <> showChain c + <> ",\n\nthe resulting " + <> showChain c' + <> "\n\nis a prefix of " + <> showChain d + <> ",\n\nbut only " + <> show l + <> " block(s) should have been necessary" + in counterexample e $ l' <= l + + findPrefix c' d + | c' `Chain.isPrefixOf` d = (0, c') + | otherwise = + let (l', c'') = findPrefix (Chain.dropLastBlocks 1 c') d + in (l' + 1, c'') + + showChain :: Chain b -> String + showChain c = + condense (Chain.headTip c) + <> "\n(length " + <> show (Chain.length c) + <> ")" -- | Find the common prefix of two chains chainCommonPrefix :: HasHeader b => Chain b -> Chain b -> Chain b -chainCommonPrefix Genesis _ = Genesis -chainCommonPrefix _ Genesis = Genesis +chainCommonPrefix Genesis _ = Genesis +chainCommonPrefix _ Genesis = Genesis chainCommonPrefix cl@(cl' :> bl) cr@(cr' :> br) = - case blockNo bl `compare` blockNo br of - LT -> chainCommonPrefix cl cr' - GT -> chainCommonPrefix cl' cr - EQ -> - if blockHash bl /= blockHash br - then chainCommonPrefix cl' cr' - else cl + case blockNo bl `compare` blockNo br of + LT -> chainCommonPrefix cl cr' + GT -> chainCommonPrefix cl' cr + EQ -> + if blockHash bl /= blockHash br + then chainCommonPrefix cl' cr' + else cl 
{------------------------------------------------------------------------------- Generation of a dot-file to represent the trace as a graph -------------------------------------------------------------------------------} data BlockInfo b = BlockInfo - { biSlot :: !SlotNo - , biCreator :: !(Maybe CoreNodeId) - , biHash :: !(ChainHash b) - , biPrevious :: !(Maybe (ChainHash b)) - } + { biSlot :: !SlotNo + , biCreator :: !(Maybe CoreNodeId) + , biHash :: !(ChainHash b) + , biPrevious :: !(Maybe (ChainHash b)) + } genesisBlockInfo :: BlockInfo b -genesisBlockInfo = BlockInfo - { biSlot = 0 - , biCreator = Nothing - , biHash = GenesisHash +genesisBlockInfo = + BlockInfo + { biSlot = 0 + , biCreator = Nothing + , biHash = GenesisHash , biPrevious = Nothing } - -blockInfo :: (GetPrevHash b, HasCreator b) - => b -> BlockInfo b -blockInfo b = BlockInfo - { biSlot = blockSlot b - , biCreator = Just $ getCreator b - , biHash = BlockHash $ blockHash b +blockInfo :: + (GetPrevHash b, HasCreator b) => + b -> BlockInfo b +blockInfo b = + BlockInfo + { biSlot = blockSlot b + , biCreator = Just $ getCreator b + , biHash = BlockHash $ blockHash b , biPrevious = Just $ blockPrevHash b } data NodeLabel = NodeLabel - { nlSlot :: SlotNo - , nlCreator :: Maybe CoreNodeId - , nlBelievers :: Set NodeId - } + { nlSlot :: SlotNo + , nlCreator :: Maybe CoreNodeId + , nlBelievers :: Set NodeId + } instance Labellable NodeLabel where - toLabelValue NodeLabel{..} = StrLabel $ Text.pack $ - show (unSlotNo nlSlot) - <> " " - <> maybe "" (showNodeId . fromCoreNodeId) nlCreator - <> showNodeIds nlBelievers - where - fromNodeId :: NodeId -> Maybe Word64 - fromNodeId (CoreId (CoreNodeId nid)) = Just nid - fromNodeId (RelayId _) = Nothing - - showNodeId :: NodeId -> String - showNodeId = maybe "" show . 
fromNodeId - - showNodeIds :: Set NodeId -> String - showNodeIds nids = case catMaybes $ map fromNodeId $ Set.toList nids of - [] -> "" - xs -> " [" <> unwords (map show xs) <> "]" + toLabelValue NodeLabel{..} = + StrLabel $ + Text.pack $ + show (unSlotNo nlSlot) + <> " " + <> maybe "" (showNodeId . fromCoreNodeId) nlCreator + <> showNodeIds nlBelievers + where + fromNodeId :: NodeId -> Maybe Word64 + fromNodeId (CoreId (CoreNodeId nid)) = Just nid + fromNodeId (RelayId _) = Nothing + + showNodeId :: NodeId -> String + showNodeId = maybe "" show . fromNodeId + + showNodeIds :: Set NodeId -> String + showNodeIds nids = case catMaybes $ map fromNodeId $ Set.toList nids of + [] -> "" + xs -> " [" <> unwords (map show xs) <> "]" data EdgeLabel = EdgeLabel instance Labellable EdgeLabel where - toLabelValue = const $ StrLabel Text.empty + toLabelValue = const $ StrLabel Text.empty -tracesToDot :: forall b. (GetPrevHash b, HasCreator b) - => Map NodeId (NodeOutput b) - -> String +tracesToDot :: + forall b. 
+ (GetPrevHash b, HasCreator b) => + Map NodeId (NodeOutput b) -> + String tracesToDot traces = Text.unpack $ printDotGraph $ graphToDot quickParams graph - where - chainBlockInfos :: Chain b - -> Map (ChainHash b) (BlockInfo b) - chainBlockInfos = Chain.foldChain f (Map.singleton GenesisHash genesisBlockInfo) - where - f m b = let info = blockInfo b - in Map.insert (biHash info) info m - - blockInfos :: Map (ChainHash b) (BlockInfo b) - blockInfos = Map.unions + where + chainBlockInfos :: + Chain b -> + Map (ChainHash b) (BlockInfo b) + chainBlockInfos = Chain.foldChain f (Map.singleton GenesisHash genesisBlockInfo) + where + f m b = + let info = blockInfo b + in Map.insert (biHash info) info m + + blockInfos :: Map (ChainHash b) (BlockInfo b) + blockInfos = + Map.unions [ chainBlockInfos (nodeOutputFinalChain no) | no <- Map.elems traces ] - lastHash :: Chain b -> ChainHash b - lastHash Genesis = GenesisHash - lastHash (_ :> b) = BlockHash $ blockHash b - - blockInfosAndBelievers :: Map (ChainHash b) (BlockInfo b, Set NodeId) - blockInfosAndBelievers = - Map.foldlWithKey f i (nodeOutputFinalChain <$> traces) - where - i = (\info -> (info, Set.empty)) <$> blockInfos - - f m nid chain = Map.adjust - (\(info, believers) -> - (info, Set.insert nid believers)) - (lastHash chain) - m - - hashToId :: Map (ChainHash b) Node - hashToId = Map.fromList $ zip (Map.keys blockInfosAndBelievers) [0..] - - ns :: [LNode NodeLabel] - ns = [ ( hashToId Map.! h - , NodeLabel - { nlSlot = biSlot info - , nlCreator = biCreator info - , nlBelievers = believers - } - ) - | (h, (info, believers)) <- Map.toList blockInfosAndBelievers - ] - - es :: [LEdge EdgeLabel] - es = map g - $ catMaybes - $ map f - [ (biHash info, biPrevious info) | info <- Map.elems blockInfos] - where f (h, mh) = (h,) <$> mh - g (h1, h2) = (hashToId Map.! h1, hashToId Map.! 
h2, EdgeLabel) - - graph :: Gr NodeLabel EdgeLabel - graph = mkGraph ns es + lastHash :: Chain b -> ChainHash b + lastHash Genesis = GenesisHash + lastHash (_ :> b) = BlockHash $ blockHash b + + blockInfosAndBelievers :: Map (ChainHash b) (BlockInfo b, Set NodeId) + blockInfosAndBelievers = + Map.foldlWithKey f i (nodeOutputFinalChain <$> traces) + where + i = (\info -> (info, Set.empty)) <$> blockInfos + + f m nid chain = + Map.adjust + ( \(info, believers) -> + (info, Set.insert nid believers) + ) + (lastHash chain) + m + + hashToId :: Map (ChainHash b) Node + hashToId = Map.fromList $ zip (Map.keys blockInfosAndBelievers) [0 ..] + + ns :: [LNode NodeLabel] + ns = + [ ( hashToId Map.! h + , NodeLabel + { nlSlot = biSlot info + , nlCreator = biCreator info + , nlBelievers = believers + } + ) + | (h, (info, believers)) <- Map.toList blockInfosAndBelievers + ] + + es :: [LEdge EdgeLabel] + es = + map g $ + catMaybes $ + map + f + [(biHash info, biPrevious info) | info <- Map.elems blockInfos] + where + f (h, mh) = (h,) <$> mh + g (h1, h2) = (hashToId Map.! h1, hashToId Map.! h2, EdgeLabel) + + graph :: Gr NodeLabel EdgeLabel + graph = mkGraph ns es {------------------------------------------------------------------------------- Leader Schedule -------------------------------------------------------------------------------} consensusExpected :: - SecurityParam - -> NodeJoinPlan - -> LeaderSchedule - -> Bool + SecurityParam -> + NodeJoinPlan -> + LeaderSchedule -> + Bool consensusExpected k nodeJoinPlan schedule = - maxForkLength <= unNonZero (maxRollbacks k) - where - NumBlocks maxForkLength = determineForkLength k nodeJoinPlan schedule + maxForkLength <= unNonZero (maxRollbacks k) + where + NumBlocks maxForkLength = determineForkLength k nodeJoinPlan schedule emptyLeaderSchedule :: NumSlots -> LeaderSchedule -emptyLeaderSchedule (NumSlots t) = LeaderSchedule $ +emptyLeaderSchedule (NumSlots t) = + LeaderSchedule $ Map.fromList $ - [ (SlotNo i, []) - | i <- [ 0 .. 
t - 1 ] - ] + [ (SlotNo i, []) + | i <- [0 .. t - 1] + ] roundRobinLeaderSchedule :: NumCoreNodes -> NumSlots -> LeaderSchedule -roundRobinLeaderSchedule (NumCoreNodes n) (NumSlots t) = LeaderSchedule $ +roundRobinLeaderSchedule (NumCoreNodes n) (NumSlots t) = + LeaderSchedule $ Map.fromList $ - [ (SlotNo i, [CoreNodeId (i `mod` n)]) - | i <- [ 0 .. t - 1 ] - ] + [ (SlotNo i, [CoreNodeId (i `mod` n)]) + | i <- [0 .. t - 1] + ] diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/Expectations.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/Expectations.hs index 6d605b61a0..15ecacc25a 100644 --- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/Expectations.hs +++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/Expectations.hs @@ -1,19 +1,19 @@ {-# LANGUAGE NamedFieldPuns #-} -module Test.ThreadNet.Util.Expectations ( - NumBlocks (..) +module Test.ThreadNet.Util.Expectations + ( NumBlocks (..) 
   , determineForkLength
   ) where
 
-import           Cardano.Ledger.BaseTypes (unNonZero)
-import           Data.Foldable as Foldable (foldl')
-import qualified Data.Map.Strict as Map
-import           Data.Word (Word64)
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Config.SecurityParam
-import           Ouroboros.Consensus.NodeId (CoreNodeId (..))
-import           Ouroboros.Consensus.Protocol.LeaderSchedule
-import           Test.ThreadNet.Util.NodeJoinPlan
+import Cardano.Ledger.BaseTypes (unNonZero)
+import Data.Foldable as Foldable (foldl')
+import Data.Map.Strict qualified as Map
+import Data.Word (Word64)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Config.SecurityParam
+import Ouroboros.Consensus.NodeId (CoreNodeId (..))
+import Ouroboros.Consensus.Protocol.LeaderSchedule
+import Test.ThreadNet.Util.NodeJoinPlan
 
 newtype NumBlocks = NumBlocks Word64
   deriving (Eq, Show)
@@ -21,12 +21,12 @@ newtype NumBlocks = NumBlocks Word64
 
 -- | Internal accumulator of 'determineForkLength'
 data Acc = Acc
   { maxChainLength :: !Word64
-    -- ^ Upper bound on length of the longest chain in the network
-  , maxForkLength  :: !Word64
-    -- ^ Upper bound on length of the longest fork in the network, excluding
-    -- the common prefix
-    --
-    -- Note that @0@ corresponds to /consensus/.
+  -- ^ Upper bound on length of the longest chain in the network
+  , maxForkLength :: !Word64
+  -- ^ Upper bound on length of the longest fork in the network, excluding
+  -- the common prefix
+  --
+  -- Note that @0@ corresponds to /consensus/.
   }
 
 -- | Compute a bound for the length of the final forks
@@ -93,75 +93,71 @@ data Acc = Acc
 -- with an empty chain). Except for the edge cases when the longest chains in
 -- the network are of length 0 or 1, this means that leaders who are joining
 -- can be disregarded.
--- determineForkLength :: - SecurityParam - -> NodeJoinPlan - -> LeaderSchedule - -> NumBlocks + SecurityParam -> + NodeJoinPlan -> + LeaderSchedule -> + NumBlocks determineForkLength k (NodeJoinPlan joinPlan) (LeaderSchedule sched) = - prj $ Foldable.foldl' step initial (Map.toAscList sched) - where - prj Acc{maxForkLength} = NumBlocks maxForkLength + prj $ Foldable.foldl' step initial (Map.toAscList sched) + where + prj Acc{maxForkLength} = NumBlocks maxForkLength - -- assume the network begins in consensus (eg all nodes start at Genesis) - initial = Acc + -- assume the network begins in consensus (eg all nodes start at Genesis) + initial = + Acc { maxChainLength = 0 , maxForkLength = 0 } - -- this logic focuses on the new chains made in this slot that are longer - -- than the longest chains from the previous slot - step Acc{maxChainLength, maxForkLength} (slot, leaders) = - Acc - { maxChainLength = grow maxChainLength - , maxForkLength = update maxForkLength - } - where - grow = if 0 == pullingAhead then id else (+ 1) - - update - -- too late to reach consensus, so further diverge - | maxForkLength > unNonZero (maxRollbacks k) = grow - - -- assume (common) worst-case: each leader creates a unique longer - -- chain - | pullingAhead > 1 = (+ 1) - - -- the sole leader creates the sole longer chain, bringing the - -- network into consensus - | pullingAhead == 1 = const 0 - - -- there will be multiple longest chains that disagree on at least - -- the latest block - -- - -- Note that pullingAhead == 0 by the preceding guards - | pullingEven > 0 = max 1 + -- this logic focuses on the new chains made in this slot that are longer + -- than the longest chains from the previous slot + step Acc{maxChainLength, maxForkLength} (slot, leaders) = + Acc + { maxChainLength = grow maxChainLength + , maxForkLength = update maxForkLength + } + where + grow = if 0 == pullingAhead then id else (+ 1) - -- no nodes are extending their chain, so the longest chains are - -- the same as 
in the previous slot
-      | otherwise = id
+    update
+      -- too late to reach consensus, so further diverge
+      | maxForkLength > unNonZero (maxRollbacks k) = grow
+      -- assume (common) worst-case: each leader creates a unique longer
+      -- chain
+      | pullingAhead > 1 = (+ 1)
+      -- the sole leader creates the sole longer chain, bringing the
+      -- network into consensus
+      | pullingAhead == 1 = const 0
+      -- there will be multiple longest chains that disagree on at least
+      -- the latest block
+      --
+      -- Note that pullingAhead == 0 by the preceding guards
+      | pullingEven > 0 = max 1
+      -- no nodes are extending their chain, so the longest chains are
+      -- the same as in the previous slot
+      | otherwise = id

-    -- how many leaders are forging a block onto a @maxForkLength@-chain
-    pullingAhead = nlOld + nlNew (maxChainLength == 0)
-    -- how many leaders are forging a block onto a @maxForkLength - 1@-chain
-    pullingEven = nlNew (maxChainLength == 1)
+    -- how many leaders are forging a block onto a @maxForkLength@-chain
+    pullingAhead = nlOld + nlNew (maxChainLength == 0)
+    -- how many leaders are forging a block onto a @maxForkLength - 1@-chain
+    pullingEven = nlNew (maxChainLength == 1)

-    -- how many leaders joined before this slot
-    nlOld = length $ filter ((< slot) . joinSlot) leaders
-    nlNew b
-      | not b = 0
-      -- how many leaders are joining during this slot; these might not
-      -- be relevant
-      --
-      -- A node leading the same slot it joins always forges a 1-block
-      -- chain; there's actually a race-condition between block forging
-      -- and BlockFetch/ChainSync, but forging always wins in the current
-      -- test framework implementation.
-      | otherwise = length $ filter ((== slot) . joinSlot) leaders
+    -- how many leaders joined before this slot
+    nlOld = length $ filter ((< slot) . joinSlot) leaders
+    nlNew b
+      | not b = 0
+      -- how many leaders are joining during this slot; these might not
+      -- be relevant
+      --
+      -- A node leading the same slot it joins always forges a 1-block
+      -- chain; there's actually a race-condition between block forging
+      -- and BlockFetch/ChainSync, but forging always wins in the current
+      -- test framework implementation.
+      | otherwise = length $ filter ((== slot) . joinSlot) leaders

-    -- slot in which the node joins the network
-    joinSlot :: CoreNodeId -> SlotNo
-    joinSlot nid = case Map.lookup nid joinPlan of
-      Nothing -> error "determineForkLength: incomplete node join plan"
-      Just slot' -> slot'
+    -- slot in which the node joins the network
+    joinSlot :: CoreNodeId -> SlotNo
+    joinSlot nid = case Map.lookup nid joinPlan of
+      Nothing -> error "determineForkLength: incomplete node join plan"
+      Just slot' -> slot'
diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/HasCreator.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/HasCreator.hs
index ae5374ee01..64f7f51aa1 100644
--- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/HasCreator.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/HasCreator.hs
@@ -2,7 +2,7 @@
 -- it.
 module Test.ThreadNet.Util.HasCreator (HasCreator (..)) where

-import           Ouroboros.Consensus.NodeId (CoreNodeId (..))
+import Ouroboros.Consensus.NodeId (CoreNodeId (..))

 class HasCreator b where
-  getCreator :: b -> CoreNodeId
+  getCreator :: b -> CoreNodeId
diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeJoinPlan.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeJoinPlan.hs
index 5e090dbde2..abc375bb67 100644
--- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeJoinPlan.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeJoinPlan.hs
@@ -6,8 +6,8 @@
 {-# OPTIONS_GHC -Wno-x-partial #-}
 #endif

-module Test.ThreadNet.Util.NodeJoinPlan (
-    -- * Node Join Plan
+module Test.ThreadNet.Util.NodeJoinPlan
+  ( -- * Node Join Plan
     NodeJoinPlan (..)
   , coreNodeIdJoinSlot
   , genNodeJoinPlan
@@ -16,102 +16,100 @@ module Test.ThreadNet.Util.NodeJoinPlan (
   , trivialNodeJoinPlan
   ) where

-import qualified Data.List as List
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           GHC.Stack (HasCallStack)
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Node.ProtocolInfo
-import           Ouroboros.Consensus.NodeId
-import           Ouroboros.Consensus.Util.Condense
-import           Ouroboros.Consensus.Util.Orphans ()
-import           Test.QuickCheck
-import           Test.Util.Slots (NumSlots (..))
+import Data.List qualified as List
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Node.ProtocolInfo
+import Ouroboros.Consensus.NodeId
+import Ouroboros.Consensus.Util.Condense
+import Ouroboros.Consensus.Util.Orphans ()
+import Test.QuickCheck
+import Test.Util.Slots (NumSlots (..))

 {-------------------------------------------------------------------------------
   Node Join Plans
 -------------------------------------------------------------------------------}

 -- | In which slot each node joins the network
---
 newtype NodeJoinPlan = NodeJoinPlan (Map CoreNodeId SlotNo)
-  deriving (Show)
+  deriving Show

 instance Condense NodeJoinPlan where
-  condense (NodeJoinPlan m) = condense
-      [ (fromCoreNodeId nid, slot) | (nid, slot) <- Map.toAscList m ]
+  condense (NodeJoinPlan m) =
+    condense
+      [(fromCoreNodeId nid, slot) | (nid, slot) <- Map.toAscList m]

 -- | All nodes join immediately
---
 trivialNodeJoinPlan :: NumCoreNodes -> NodeJoinPlan
 trivialNodeJoinPlan numCoreNodes =
-    NodeJoinPlan $
+  NodeJoinPlan $
     Map.fromList $
-      [ (nid, SlotNo 0) | nid <- enumCoreNodes numCoreNodes ]
+      [(nid, SlotNo 0) | nid <- enumCoreNodes numCoreNodes]

 -- | Generate a 'NodeJoinPlan' consistent with the given properties
 --
 -- INVARIANT: Nodes with higher Ids will not join before nodes with lower Ids.
 -- This eliminates some uninteresting symmetry and makes the counter-examples
 -- easier for humans to interpret.
---
 genNodeJoinPlan ::
-     NumCoreNodes
-     -- ^ PRECONDITION: non-negative
-  -> NumSlots
-     -- ^ PRECONDITION: positive
-  -> Gen NodeJoinPlan
+  -- | PRECONDITION: non-negative
+  NumCoreNodes ->
+  -- | PRECONDITION: positive
+  NumSlots ->
+  Gen NodeJoinPlan
 genNodeJoinPlan numCoreNodes@(NumCoreNodes n) numSlots@(NumSlots t)
-  | n < 0 || t < 1 = error $ "Cannot generate TestConfig: "
-      ++ show (numCoreNodes, numSlots)
-
+  | n < 0 || t < 1 =
+      error $
+        "Cannot generate TestConfig: "
+          ++ show (numCoreNodes, numSlots)
   | otherwise = do
-    let genJoinSlot = do
+      let genJoinSlot = do
             let lastSlot = t - 1
             SlotNo <$> choose (0, lastSlot)

-    let nids = enumCoreNodes numCoreNodes :: [CoreNodeId]
-    schedules <- vectorOf (fromIntegral n) genJoinSlot
-    -- without loss of generality, the nodes start initializing in order of
-    -- their Ids; this merely makes it easer to interpret the counterexamples
-    pure $ NodeJoinPlan $ Map.fromList $ zip nids $ List.sort schedules
+      let nids = enumCoreNodes numCoreNodes :: [CoreNodeId]
+      schedules <- vectorOf (fromIntegral n) genJoinSlot
+      -- without loss of generality, the nodes start initializing in order of
+      -- their Ids; this merely makes it easer to interpret the counterexamples
+      pure $ NodeJoinPlan $ Map.fromList $ zip nids $ List.sort schedules

 -- | Shrink a node join plan
 --
 -- INVARIANT no inter-join delay increases
 --
 -- Specifically, we shrink by setting some of the delays to 0.
---
 shrinkNodeJoinPlan :: NodeJoinPlan -> [NodeJoinPlan]
 shrinkNodeJoinPlan (NodeJoinPlan m0) =
-    init $ -- the last one is the same as the input
-    map (NodeJoinPlan . snd) $ go diffs0
-  where
-    slots = map snd (Map.toDescList m0) ++ [0]
-    diffs0 = zipWith (\j2 j1 -> j2 - j1) slots (tail slots)
+  init $ -- the last one is the same as the input
+    map (NodeJoinPlan . snd) $
+      go diffs0
+ where
+  slots = map snd (Map.toDescList m0) ++ [0]
+  diffs0 = zipWith (\j2 j1 -> j2 - j1) slots (tail slots)

-    go = \case
-      [] -> [((CoreNodeId 0, 0), Map.empty)]
-      d:ds -> do
-        ((CoreNodeId i, mx), m) <- go ds
-        let f s = ((CoreNodeId (succ i), s), Map.insert (CoreNodeId i) s m)
-        [f mx] ++ [f (mx + d) | d > 0]
+  go = \case
+    [] -> [((CoreNodeId 0, 0), Map.empty)]
+    d : ds -> do
+      ((CoreNodeId i, mx), m) <- go ds
+      let f s = ((CoreNodeId (succ i), s), Map.insert (CoreNodeId i) s m)
+      [f mx] ++ [f (mx + d) | d > 0]

 -- | Partial; @error@ for a node not in the plan
---
 coreNodeIdJoinSlot ::
-     HasCallStack
-  => NodeJoinPlan -> CoreNodeId -> SlotNo
+  HasCallStack =>
+  NodeJoinPlan -> CoreNodeId -> SlotNo
 coreNodeIdJoinSlot (NodeJoinPlan m) nid =
-    Map.findWithDefault
-      (error $ "not found: " <> condense (nid, Map.toList m))
-      nid m
+  Map.findWithDefault
+    (error $ "not found: " <> condense (nid, Map.toList m))
+    nid
+    m

 -- | Partial; @error@ for a node not in the plan
---
 nodeIdJoinSlot ::
-     HasCallStack
-  => NodeJoinPlan -> NodeId -> SlotNo
+  HasCallStack =>
+  NodeJoinPlan -> NodeId -> SlotNo
 nodeIdJoinSlot nodeJoinPlan@(NodeJoinPlan m) ni = case ni of
-    CoreId cni -> coreNodeIdJoinSlot nodeJoinPlan cni
-    _ -> error $ "not found: " <> condense (ni, Map.toList m)
+  CoreId cni -> coreNodeIdJoinSlot nodeJoinPlan cni
+  _ -> error $ "not found: " <> condense (ni, Map.toList m)
diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeRestarts.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeRestarts.hs
index 12c254aabe..c8f1da6222 100644
--- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeRestarts.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeRestarts.hs
@@ -1,28 +1,28 @@
-module Test.ThreadNet.Util.NodeRestarts (
-    NodeRestart (..)
+module Test.ThreadNet.Util.NodeRestarts
+  ( NodeRestart (..)
   , NodeRestarts (..)
   , genNodeRestarts
   , noRestarts
   , shrinkNodeRestarts
   ) where

-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import qualified Data.Set as Set
-import           Data.Traversable (forM)
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.NodeId
-import           Ouroboros.Consensus.Util.Condense
-import           Test.QuickCheck
-import           Test.ThreadNet.Util.NodeJoinPlan
-import           Test.Util.Slots (NumSlots (..))
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Set qualified as Set
+import Data.Traversable (forM)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.NodeId
+import Ouroboros.Consensus.Util.Condense
+import Test.QuickCheck
+import Test.ThreadNet.Util.NodeJoinPlan
+import Test.Util.Slots (NumSlots (..))

 data NodeRestart
-  = NodeRekey
-    -- ^ restart the node with a fresh operational key and immediately emit a
+  = -- | restart the node with a fresh operational key and immediately emit a
     -- delegation certificate transaction
-  | NodeRestart
-    -- ^ restart the node without rekeying
+    NodeRekey
+  | -- | restart the node without rekeying
+    NodeRestart
   deriving (Eq, Ord, Show)

 instance Condense NodeRestart where
@@ -31,16 +31,16 @@ instance Condense NodeRestart where
 -- | Which nodes are scheduled to restart in each slot
 --
 -- INVARIANT no element 'Map' is empty
---
 newtype NodeRestarts = NodeRestarts (Map SlotNo (Map CoreNodeId NodeRestart))
   deriving (Eq, Show)

 instance Condense NodeRestarts where
-  condense (NodeRestarts m) = condense
+  condense (NodeRestarts m) =
+    condense
       [ (,) slot $
-          [ (fromCoreNodeId cid, r)
-          | (cid, r) <- Map.toAscList m'
-          ]
+        [ (fromCoreNodeId cid, r)
+        | (cid, r) <- Map.toAscList m'
+        ]
       | (slot, m') <- Map.toAscList m
       ]
@@ -56,39 +56,41 @@ noRestarts = NodeRestarts Map.empty
 --
 -- POSTCONDITION will not simultaneously restart all nodes that have previously
 -- joined
---
 genNodeRestarts :: NodeJoinPlan -> NumSlots -> Gen NodeRestarts
 genNodeRestarts (NodeJoinPlan m) (NumSlots t)
-  | t < 1     = pure noRestarts
+  | t < 1 = pure noRestarts
   | otherwise =
-    fmap (NodeRestarts . Map.filter (not . Map.null) . Map.fromList) $ do
-      ss <- sublistOf [0 .. SlotNo (t - 1)]
-      forM ss $ \s ->
-        fmap ((,) s) $
-        let alreadyJoined = Map.keysSet $ Map.filter (< s) m
-            keepSome
-              | Set.null alreadyJoined = id
-              | otherwise =
-                (`suchThat` \x -> not $ alreadyJoined `Set.isSubsetOf` Map.keysSet x)
-            candidates = Map.filterWithKey (canRestartIn s) m
-        in
-        keepSome $
-        if Map.null candidates
-        then pure Map.empty
-        else fmap (Map.fromList . map (flip (,) NodeRestart)) $
-          sublistOf $ Map.keys $ candidates
-  where
-    isLeading (CoreNodeId i) s = i /= unSlotNo s `mod` n
-      where
-        n = fromIntegral $ Map.size m
+      fmap (NodeRestarts . Map.filter (not . Map.null) . Map.fromList) $ do
+        ss <- sublistOf [0 .. SlotNo (t - 1)]
+        forM ss $ \s ->
+          fmap ((,) s) $
+            let alreadyJoined = Map.keysSet $ Map.filter (< s) m
+                keepSome
+                  | Set.null alreadyJoined = id
+                  | otherwise =
+                      (`suchThat` \x -> not $ alreadyJoined `Set.isSubsetOf` Map.keysSet x)
+                candidates = Map.filterWithKey (canRestartIn s) m
+             in keepSome $
+                  if Map.null candidates
+                    then pure Map.empty
+                    else
+                      fmap (Map.fromList . map (flip (,) NodeRestart)) $
+                        sublistOf $
+                          Map.keys $
+                            candidates
+ where
+  isLeading (CoreNodeId i) s = i /= unSlotNo s `mod` n
+   where
+    n = fromIntegral $ Map.size m

-    canRestartIn s nid joinSlot =
-      -- must be present
-      joinSlot <= s &&
-      -- must not be leading (TODO relax this somehow?)
-      not (isLeading nid s)
+  canRestartIn s nid joinSlot =
+    -- must be present
+    joinSlot <= s
+      &&
+      -- must not be leading (TODO relax this somehow?)
+      not (isLeading nid s)

 shrinkNodeRestarts :: NodeRestarts -> [NodeRestarts]
 shrinkNodeRestarts (NodeRestarts m)
-  | Map.null m = []   -- TODO better shrink
-  | otherwise  = [noRestarts]
+  | Map.null m = [] -- TODO better shrink
+  | otherwise = [noRestarts]
diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeToNodeVersion.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeToNodeVersion.hs
index 4a074f7cac..48ae99f7a6 100644
--- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeToNodeVersion.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeToNodeVersion.hs
@@ -1,26 +1,26 @@
-module Test.ThreadNet.Util.NodeToNodeVersion (
-    genVersion
+module Test.ThreadNet.Util.NodeToNodeVersion
+  ( genVersion
   , genVersionFiltered
   , newestVersion
   ) where

-import qualified Data.Map.Strict as Map
-import           Data.Proxy (Proxy (..))
-import           Ouroboros.Consensus.Node.NetworkProtocolVersion
-import           Test.QuickCheck
+import Data.Map.Strict qualified as Map
+import Data.Proxy (Proxy (..))
+import Ouroboros.Consensus.Node.NetworkProtocolVersion
+import Test.QuickCheck

 genVersion ::
-     SupportedNetworkProtocolVersion blk
-  => Proxy blk -> Gen (NodeToNodeVersion, BlockNodeToNodeVersion blk)
+  SupportedNetworkProtocolVersion blk =>
+  Proxy blk -> Gen (NodeToNodeVersion, BlockNodeToNodeVersion blk)
 genVersion = genVersionFiltered (const True)

 genVersionFiltered ::
-     SupportedNetworkProtocolVersion blk
-  => (BlockNodeToNodeVersion blk -> Bool)
-  -> Proxy blk
-  -> Gen (NodeToNodeVersion, BlockNodeToNodeVersion blk)
+  SupportedNetworkProtocolVersion blk =>
+  (BlockNodeToNodeVersion blk -> Bool) ->
+  Proxy blk ->
+  Gen (NodeToNodeVersion, BlockNodeToNodeVersion blk)
 genVersionFiltered f =
-    elements
+  elements
     . filter (f . snd)
     . Map.toList
     . supportedNodeToNodeVersions
@@ -29,6 +29,6 @@ genVersionFiltered f =
 -- 'NodeToNodeVersion'. This can be used when you don't care about the
 -- versioning of a block.
 newestVersion ::
-     SupportedNetworkProtocolVersion blk
-  => Proxy blk -> (NodeToNodeVersion, BlockNodeToNodeVersion blk)
+  SupportedNetworkProtocolVersion blk =>
+  Proxy blk -> (NodeToNodeVersion, BlockNodeToNodeVersion blk)
 newestVersion = Map.findMax . supportedNodeToNodeVersions
diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeTopology.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeTopology.hs
index b19b3b62e7..fd07d0ab54 100644
--- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeTopology.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/NodeTopology.hs
@@ -3,8 +3,8 @@
 {-# LANGUAGE LambdaCase #-}
 {-# LANGUAGE ScopedTypeVariables #-}

-module Test.ThreadNet.Util.NodeTopology (
-    -- * Node Topology
+module Test.ThreadNet.Util.NodeTopology
+  ( -- * Node Topology
     NodeTopology (..)
   , coreNodeIdNeighbors
   , edgesNodeTopology
@@ -16,18 +16,18 @@ module Test.ThreadNet.Util.NodeTopology (
   , unionNodeTopology
   ) where

-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           Data.Set (Set)
-import qualified Data.Set as Set
-import           GHC.Generics (Generic)
-import           GHC.Stack (HasCallStack)
-import           Ouroboros.Consensus.Node.ProtocolInfo
-import           Ouroboros.Consensus.NodeId
-import           Ouroboros.Consensus.Util.Condense
-import           Ouroboros.Consensus.Util.Orphans ()
-import           Quiet (Quiet (..))
-import           Test.QuickCheck
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Set (Set)
+import Data.Set qualified as Set
+import GHC.Generics (Generic)
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Node.ProtocolInfo
+import Ouroboros.Consensus.NodeId
+import Ouroboros.Consensus.Util.Condense
+import Ouroboros.Consensus.Util.Orphans ()
+import Quiet (Quiet (..))
+import Test.QuickCheck

 {-------------------------------------------------------------------------------
   Node Topologies
@@ -48,113 +48,113 @@ import Test.QuickCheck
 -- network partitions, the network may split (temporarily or not) into separate
 -- connected components during the execution, but the base topology is
 -- connected.
---
-newtype NodeTopology =
-    NodeTopology {unNodeTopology :: Map CoreNodeId (Set CoreNodeId)}
+newtype NodeTopology
+  = NodeTopology {unNodeTopology :: Map CoreNodeId (Set CoreNodeId)}
   deriving (Eq, Generic)
-  deriving (Show) via Quiet NodeTopology
+  deriving Show via Quiet NodeTopology

 instance Condense NodeTopology where
   condense top@(NodeTopology m)
     | top == mesh = "meshNodeTopology (NumCoreNodes " ++ show (Map.size m) ++ ")"
-    | otherwise = condense
-        [ (fromCoreNodeId nid, Set.map fromCoreNodeId nids)
-        | (nid, nids) <- Map.toAscList m ]
-    where
-      mesh = meshNodeTopology (NumCoreNodes (fromIntegral (Map.size m)))
+    | otherwise =
+        condense
+          [ (fromCoreNodeId nid, Set.map fromCoreNodeId nids)
+          | (nid, nids) <- Map.toAscList m
+          ]
+   where
+    mesh = meshNodeTopology (NumCoreNodes (fromIntegral (Map.size m)))

 -- | Connect every pair of nodes
---
 meshNodeTopology ::
-     NumCoreNodes
-     -- ^ PRECONDITION: non-negative
-  -> NodeTopology
+  -- | PRECONDITION: non-negative
+  NumCoreNodes ->
+  NodeTopology
 meshNodeTopology numCoreNodes =
-    NodeTopology $
+  NodeTopology $
     Map.fromList $
-      [ (nid, Set.fromList $ enumCoreNodes (NumCoreNodes i))
-      | nid@(CoreNodeId i) <- enumCoreNodes numCoreNodes ]
+      [ (nid, Set.fromList $ enumCoreNodes (NumCoreNodes i))
+      | nid@(CoreNodeId i) <- enumCoreNodes numCoreNodes
+      ]

 -- | Generate a 'NodeTopology' consistent with the given properties
---
 genNodeTopology ::
-     HasCallStack
-  => NumCoreNodes
-     -- ^ PRECONDITION: non-negative
-  -> Gen NodeTopology
+  HasCallStack =>
+  -- | PRECONDITION: non-negative
+  NumCoreNodes ->
+  Gen NodeTopology
 genNodeTopology numCoreNodes@(NumCoreNodes n)
-  | n < 0 = error $ "Unsatisfiable parameters: "
-      ++ show numCoreNodes
-
+  | n < 0 =
+      error $
+        "Unsatisfiable parameters: "
+          ++ show numCoreNodes
   | otherwise = do
-    let genNeighbors me@(CoreNodeId i) = case i of
-          0 -> pure (me, Set.empty)
-          _ ->
-            fmap ((,) me . Set.fromList) $
-            flip suchThat (not . null) $
-            sublistOf (enumCoreNodes (NumCoreNodes i))
+      let genNeighbors me@(CoreNodeId i) = case i of
+            0 -> pure (me, Set.empty)
+            _ ->
+              fmap ((,) me . Set.fromList) $
+                flip suchThat (not . null) $
+                  sublistOf (enumCoreNodes (NumCoreNodes i))

-    fmap (NodeTopology . Map.fromList) $
-      mapM genNeighbors (enumCoreNodes numCoreNodes)
+      fmap (NodeTopology . Map.fromList) $
+        mapM genNeighbors (enumCoreNodes numCoreNodes)

 -- | Shrink a node topology
 --
 -- The new topologies must be usable with the same number of nodes and slots as
 -- the old topology
---
 shrinkNodeTopology :: NodeTopology -> [NodeTopology]
 shrinkNodeTopology top@(NodeTopology m)
-  | top == mesh = []
-  | otherwise = [mesh]
-  where
-    mesh = meshNodeTopology (NumCoreNodes (fromIntegral (Map.size m)))
-    -- TODO more sophisticated shrinks. I anticipate that they'll need to use
-    -- 'Test.QuickCheck.Shrinking' or else risk very slow responses
+  | top == mesh = []
+  | otherwise = [mesh]
+ where
+  mesh = meshNodeTopology (NumCoreNodes (fromIntegral (Map.size m)))
+
+-- TODO more sophisticated shrinks. I anticipate that they'll need to use
+-- 'Test.QuickCheck.Shrinking' or else risk very slow responses

 -- | The neighbors of this node
---
 coreNodeIdNeighbors ::
-     HasCallStack
-  => NodeTopology -> CoreNodeId -> [CoreNodeId]
+  HasCallStack =>
+  NodeTopology -> CoreNodeId -> [CoreNodeId]
 coreNodeIdNeighbors (NodeTopology m) nid =
-    case hit of
-      Nothing ->
-        error $
-          "invariant violated: " <>
-          "could not find " <> condense (nid, Map.toList m)
-      Just lessers -> Set.toList lessers ++ greaters
-  where
-    (_, hit, greaters0) = Map.splitLookup nid m
-    greaters = Map.keys (Map.filter (nid `Set.member`) greaters0)
+  case hit of
+    Nothing ->
+      error $
+        "invariant violated: "
+          <> "could not find "
+          <> condense (nid, Map.toList m)
+    Just lessers -> Set.toList lessers ++ greaters
+ where
+  (_, hit, greaters0) = Map.splitLookup nid m
+  greaters = Map.keys (Map.filter (nid `Set.member`) greaters0)

 -- | The edges in this topology
---
 edgesNodeTopology :: NodeTopology -> [(CoreNodeId, CoreNodeId)]
 edgesNodeTopology (NodeTopology m) =
-    flip foldMap (Map.toList m) $ \(greater, lessers) ->
-      map (flip (,) greater) (Set.toList lessers)
+  flip foldMap (Map.toList m) $ \(greater, lessers) ->
+    map (flip (,) greater) (Set.toList lessers)

 -- | The neighbor count of the node with the fewest neighbors, unless there are
 -- zero nodes
---
 minimumDegreeNodeTopology :: NodeTopology -> Maybe Int
 minimumDegreeNodeTopology top@(NodeTopology m) =
-    check [ length (coreNodeIdNeighbors top nid) | nid <- Map.keys m ]
-  where
-    check = \case
-      [] -> Nothing
-      x:xs -> Just $ foldl min x xs
+  check [length (coreNodeIdNeighbors top nid) | nid <- Map.keys m]
+ where
+  check = \case
+    [] -> Nothing
+    x : xs -> Just $ foldl min x xs

 unionNodeTopology :: NodeTopology -> NodeTopology -> NodeTopology
 unionNodeTopology (NodeTopology l) (NodeTopology r) =
-    NodeTopology $ Map.unionWith Set.union l r
+  NodeTopology $ Map.unionWith Set.union l r

 mapNodeTopology :: (CoreNodeId -> CoreNodeId) -> NodeTopology -> NodeTopology
 mapNodeTopology f topo =
-    NodeTopology $ Map.fromListWith Set.union $
-      [ f l `sortedSingleton` f r
-      | (l, r) <- edgesNodeTopology topo
-      ]
-  where
-    sortedSingleton l r =
-      if l > r then (l, Set.singleton r) else (r, Set.singleton l)
+  NodeTopology $
+    Map.fromListWith Set.union $
+      [ f l `sortedSingleton` f r
+      | (l, r) <- edgesNodeTopology topo
+      ]
+ where
+  sortedSingleton l r =
+    if l > r then (l, Set.singleton r) else (r, Set.singleton l)
diff --git a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/Seed.hs b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/Seed.hs
index 9a77b41849..96689dfa49 100644
--- a/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/Seed.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Util/Seed.hs
@@ -1,17 +1,17 @@
 {-# LANGUAGE TypeApplications #-}

 -- | Seed used for the ThreadNet tests
-module Test.ThreadNet.Util.Seed (
-    Seed (..)
+module Test.ThreadNet.Util.Seed
+  ( Seed (..)
   , combineWith
   , runGen
   ) where

-import           Data.Bits (xor)
-import           Data.Coerce (coerce)
-import           Test.QuickCheck
-import           Test.QuickCheck.Gen
-import           Test.QuickCheck.Random (mkQCGen)
+import Data.Bits (xor)
+import Data.Coerce (coerce)
+import Test.QuickCheck
+import Test.QuickCheck.Gen
+import Test.QuickCheck.Random (mkQCGen)

 newtype Seed = Seed Int
   deriving (Eq, Show)
@@ -24,11 +24,11 @@ combineWith seed x = seed <> Seed (fromIntegral x)

 runGen :: Seed -> Gen a -> a
 runGen (Seed seed) g =
-    unGen g qcSeed qcSize
-  where
-    -- The traditional initial QC size
-    qcSize = 30 :: Int
-    qcSeed = mkQCGen seed
+  unGen g qcSeed qcSize
+ where
+  -- The traditional initial QC size
+  qcSize = 30 :: Int
+  qcSeed = mkQCGen seed

 instance Arbitrary Seed where
   arbitrary = Seed <$> choose (minBound, maxBound)
diff --git a/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/Consensus/Ledger/Mock/Generators.hs b/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/Consensus/Ledger/Mock/Generators.hs
index fcb21a9a1c..b928f318c0 100644
--- a/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/Consensus/Ledger/Mock/Generators.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/Consensus/Ledger/Mock/Generators.hs
@@ -5,49 +5,50 @@
 {-# LANGUAGE ScopedTypeVariables #-}
 {-# LANGUAGE TypeApplications #-}
 {-# LANGUAGE UndecidableInstances #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 module Test.Consensus.Ledger.Mock.Generators () where

-import           Cardano.Crypto.DSIGN
-import           Cardano.Crypto.Hash
-import           Codec.Serialise (Serialise, encode, serialise)
-import qualified Data.ByteString.Lazy as Lazy
-import qualified Data.Map.Strict as Map
-import qualified Data.Set as Set
-import           Data.Typeable
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.HeaderValidation
-import           Ouroboros.Consensus.Ledger.Basics
-import           Ouroboros.Consensus.Ledger.Query
-import           Ouroboros.Consensus.Ledger.SupportsMempool
-import           Ouroboros.Consensus.Ledger.Tables.Utils
-import           Ouroboros.Consensus.Mock.Ledger.Block
-import           Ouroboros.Consensus.Mock.Ledger.Block.BFT
-import qualified Ouroboros.Consensus.Mock.Ledger.State as L
-import qualified Ouroboros.Consensus.Mock.Ledger.UTxO as L
-import           Ouroboros.Consensus.Mock.Node.Serialisation ()
-import           Ouroboros.Consensus.Protocol.BFT
-import           Test.Crypto.Hash ()
-import           Test.QuickCheck
-import           Test.Util.Orphans.Arbitrary ()
-import           Test.Util.Serialisation.Roundtrip
-import           Test.Util.Serialisation.SomeResult (SomeResult (..))
+import Cardano.Crypto.DSIGN
+import Cardano.Crypto.Hash
+import Codec.Serialise (Serialise, encode, serialise)
+import Data.ByteString.Lazy qualified as Lazy
+import Data.Map.Strict qualified as Map
+import Data.Set qualified as Set
+import Data.Typeable
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.HeaderValidation
+import Ouroboros.Consensus.Ledger.Basics
+import Ouroboros.Consensus.Ledger.Query
+import Ouroboros.Consensus.Ledger.SupportsMempool
+import Ouroboros.Consensus.Ledger.Tables.Utils
+import Ouroboros.Consensus.Mock.Ledger.Block
+import Ouroboros.Consensus.Mock.Ledger.Block.BFT
+import Ouroboros.Consensus.Mock.Ledger.State qualified as L
+import Ouroboros.Consensus.Mock.Ledger.UTxO qualified as L
+import Ouroboros.Consensus.Mock.Node.Serialisation ()
+import Ouroboros.Consensus.Protocol.BFT
+import Test.Crypto.Hash ()
+import Test.QuickCheck
+import Test.Util.Orphans.Arbitrary ()
+import Test.Util.Serialisation.Roundtrip
+import Test.Util.Serialisation.SomeResult (SomeResult (..))

 {-------------------------------------------------------------------------------
   General instances
 -------------------------------------------------------------------------------}

 instance Arbitrary (HeaderHash blk) => Arbitrary (ChainHash blk) where
-  arbitrary = oneof [
-      return GenesisHash
+  arbitrary =
+    oneof
+      [ return GenesisHash
       , BlockHash <$> arbitrary
       ]

 instance Arbitrary (HeaderHash blk) => Arbitrary (Point blk) where
-  arbitrary = oneof [
-      return GenesisPoint
+  arbitrary =
+    oneof
+      [ return GenesisPoint
       , BlockPoint <$> arbitrary <*> arbitrary
       ]
@@ -57,44 +58,54 @@ instance Arbitrary (HeaderHash blk) => Arbitrary (Point blk) where

 -- | This blindly creates random values, so the block will not be valid, but
 -- this does not matter for serialisation tests.
-instance (SimpleCrypto c, Arbitrary ext, Serialise ext)
-      => Arbitrary (SimpleBlock c ext) where
+instance
+  (SimpleCrypto c, Arbitrary ext, Serialise ext) =>
+  Arbitrary (SimpleBlock c ext)
+  where
   arbitrary = do
     simpleStdHeader <- arbitrary
-    body            <- arbitrary
-    ext             <- arbitrary
+    body <- arbitrary
+    ext <- arbitrary
     let hdr = mkSimpleHeader encode simpleStdHeader ext
     return $ SimpleBlock hdr body

 -- | This blindly creates random values, so the block will not be valid, but
 -- this does not matter for serialisation tests. Except we do touch-up the
 -- 'simpleBodySize'; hence 'Coherent'.
-instance (SimpleCrypto c, Arbitrary ext, Serialise ext)
-      => Arbitrary (Coherent (SimpleBlock c ext)) where
+instance
+  (SimpleCrypto c, Arbitrary ext, Serialise ext) =>
+  Arbitrary (Coherent (SimpleBlock c ext))
+  where
   arbitrary = do
     simpleStdHeader <- arbitrary
-    body            <- arbitrary
-    ext             <- arbitrary
-    let simpleStdHeader' = simpleStdHeader {
-            -- Fill in the right body size, because we rely on this in the
-            -- serialisation tests
-            simpleBodySize = fromIntegral $ Lazy.length $ serialise body
-          }
+    body <- arbitrary
+    ext <- arbitrary
+    let simpleStdHeader' =
+          simpleStdHeader
+            { -- Fill in the right body size, because we rely on this in the
+              -- serialisation tests
+              simpleBodySize = fromIntegral $ Lazy.length $ serialise body
+            }
         hdr = mkSimpleHeader encode simpleStdHeader' ext
     return $ Coherent $ SimpleBlock hdr body

-instance (SimpleCrypto c, Arbitrary ext, Serialise ext, Typeable ext)
-      => Arbitrary (Header (SimpleBlock c ext)) where
+instance
+  (SimpleCrypto c, Arbitrary ext, Serialise ext, Typeable ext) =>
+  Arbitrary (Header (SimpleBlock c ext))
+  where
   arbitrary = getHeader <$> arbitrary

-instance (HashAlgorithm (SimpleHash c), Arbitrary ext, Serialise ext)
-      => Arbitrary (SimpleStdHeader c ext) where
-  arbitrary = SimpleStdHeader
-      <$> arbitrary
-      <*> arbitrary
-      <*> arbitrary
-      <*> arbitrary
-      <*> arbitrary
+instance
+  (HashAlgorithm (SimpleHash c), Arbitrary ext, Serialise ext) =>
+  Arbitrary (SimpleStdHeader c ext)
+  where
+  arbitrary =
+    SimpleStdHeader
+      <$> arbitrary
+      <*> arbitrary
+      <*> arbitrary
+      <*> arbitrary
+      <*> arbitrary

 instance Arbitrary SimpleBody where
   arbitrary = SimpleBody <$> listOf arbitrary
@@ -108,18 +119,22 @@ instance Arbitrary (SomeBlockQuery (BlockQuery (SimpleBlock c ext))) where
 instance (SimpleCrypto c, Typeable ext) => Arbitrary (SomeResult (SimpleBlock c ext)) where
   arbitrary = SomeResult QueryLedgerTip <$> arbitrary

-instance (SimpleCrypto c, Typeable ext)
-      => Arbitrary (LedgerState (SimpleBlock c ext) EmptyMK) where
+instance
+  (SimpleCrypto c, Typeable ext) =>
+  Arbitrary (LedgerState (SimpleBlock c ext) EmptyMK)
+  where
   arbitrary =
-      forgetLedgerTables
-    <$> arbitrary @(LedgerState (SimpleBlock c ext) ValuesMK)
+    forgetLedgerTables
+      <$> arbitrary @(LedgerState (SimpleBlock c ext) ValuesMK)

-instance (SimpleCrypto c, Typeable ext)
-      => Arbitrary (LedgerState (SimpleBlock c ext) ValuesMK) where
+instance
+  (SimpleCrypto c, Typeable ext) =>
+  Arbitrary (LedgerState (SimpleBlock c ext) ValuesMK)
+  where
   arbitrary =
-      unstowLedgerTables
-    . flip SimpleLedgerState emptyLedgerTables
-    <$> arbitrary
+    unstowLedgerTables
+      . flip SimpleLedgerState emptyLedgerTables
+      <$> arbitrary

 instance Arbitrary (LedgerTables (LedgerState (SimpleBlock c ext)) ValuesMK) where
   arbitrary = LedgerTables . ValuesMK <$> arbitrary
@@ -130,22 +145,24 @@ instance Arbitrary ByteSize32 where
 instance Arbitrary L.MockConfig where
   arbitrary = L.MockConfig <$> arbitrary

-instance ( Arbitrary (MockLedgerConfig c ext)
-         ) => Arbitrary (SimpleLedgerConfig c ext) where
+instance
+  Arbitrary (MockLedgerConfig c ext) =>
+  Arbitrary (SimpleLedgerConfig c ext)
+  where
   arbitrary = SimpleLedgerConfig <$> arbitrary <*> arbitrary <*> arbitrary

 instance HashAlgorithm (SimpleHash c) => Arbitrary (AnnTip (SimpleBlock c ext)) where
   arbitrary = do
-    annTipSlotNo  <- SlotNo <$> arbitrary
-    annTipBlockNo <- BlockNo <$> arbitrary
-    annTipInfo    <- arbitrary
-    return AnnTip{..}
+    annTipSlotNo <- SlotNo <$> arbitrary
+    annTipBlockNo <- BlockNo <$> arbitrary
+    annTipInfo <- arbitrary
+    return AnnTip{..}

 instance Arbitrary (GenTx (SimpleBlock c ext)) where
   arbitrary = do
-    simpleGenTx   <- arbitrary
-    simpleGenTxId <- arbitrary
-    return SimpleGenTx{..}
+    simpleGenTx <- arbitrary
+    simpleGenTxId <- arbitrary
+    return SimpleGenTx{..}

 instance Arbitrary (TxId (GenTx (SimpleBlock c ext))) where
   arbitrary = SimpleGenTxId <$> arbitrary
@@ -157,26 +174,30 @@ instance Arbitrary (TxId (GenTx (SimpleBlock c ext))) where
 -------------------------------------------------------------------------------}

 instance Arbitrary L.Tx where
-  arbitrary = L.Tx L.DoNotExpire
-      <$> pure mempty -- For simplicity
-      <*> arbitrary
+  arbitrary =
+    L.Tx L.DoNotExpire
+      <$> pure mempty -- For simplicity
+      <*> arbitrary

 instance Arbitrary L.Addr where
   arbitrary = elements ["a", "b", "c"]

 instance Arbitrary (L.MockState blk) where
-  arbitrary = return $ L.MockState {
-      mockUtxo      = Map.empty
-    , mockConfirmed = Set.empty
-    , mockTip       = GenesisPoint
-    }
+  arbitrary =
+    return $
+      L.MockState
+        { mockUtxo = Map.empty
+        , mockConfirmed = Set.empty
+        , mockTip = GenesisPoint
+        }

 instance Arbitrary (HeaderHash blk) => Arbitrary (L.MockError blk) where
-  arbitrary = oneof [
-      L.MockExpired <$> arbitrary <*> arbitrary
-      -- , MockUtxOError <$> arbitrary -- TODO
-    , L.MockInvalidHash <$> arbitrary <*> arbitrary
-    ]
+  arbitrary =
+    oneof
+      [ L.MockExpired <$> arbitrary <*> arbitrary
+      , -- , MockUtxOError <$> arbitrary -- TODO
+        L.MockInvalidHash <$> arbitrary <*> arbitrary
+      ]

 {-------------------------------------------------------------------------------
   Per protocol
@@ -184,11 +205,12 @@ instance Arbitrary (HeaderHash blk) => Arbitrary (L.MockError blk) where

 instance Arbitrary (SimpleBftExt c BftMockCrypto) where
   arbitrary = do
-      simpleBftExt <- arbitrary
-      return SimpleBftExt{..}
+    simpleBftExt <- arbitrary
+    return SimpleBftExt{..}

 instance Arbitrary (BftFields BftMockCrypto toSign) where
   arbitrary = do
-      bftSignature <- SignedDSIGN <$>
-        (SigMockDSIGN <$> arbitrary <*> arbitrary)
-      return BftFields{..}
+    bftSignature <-
+      SignedDSIGN
+        <$> (SigMockDSIGN <$> arbitrary <*> arbitrary)
+    return BftFields{..}
diff --git a/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/TxGen/Mock.hs b/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/TxGen/Mock.hs
index 2c9fcae124..68f3478480 100644
--- a/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/TxGen/Mock.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/TxGen/Mock.hs
@@ -1,19 +1,18 @@
 {-# LANGUAGE FlexibleInstances #-}
 {-# LANGUAGE LambdaCase #-}
 {-# LANGUAGE ScopedTypeVariables #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 module Test.ThreadNet.TxGen.Mock () where

-import           Control.Monad (replicateM)
-import qualified Data.Map.Strict as Map
-import qualified Data.Set as Set
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Ledger.Tables
-import           Ouroboros.Consensus.Mock.Ledger
-import           Test.QuickCheck
-import           Test.ThreadNet.TxGen
+import Control.Monad (replicateM)
+import Data.Map.Strict qualified as Map
+import Data.Set qualified as Set
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Ledger.Tables
+import Ouroboros.Consensus.Mock.Ledger
+import Test.QuickCheck
+import Test.ThreadNet.TxGen

 {-------------------------------------------------------------------------------
   TxGen SimpleBlock
@@ -21,36 +20,37 @@ import Test.ThreadNet.TxGen

 instance TxGen (SimpleBlock SimpleMockCrypto ext) where
   testGenTxs _coreNodeId numCoreNodes curSlotNo _cfg () ledgerState = do
-      n <- choose (0, 20)
-      -- We don't update the UTxO after each transaction, so some of the
-      -- generated transactions could very well be invalid.
-      replicateM n $
-        mkSimpleGenTx <$> genSimpleTx curSlotNo addrs utxo
-    where
-      addrs :: [Addr]
-      addrs = Map.keys $ mkAddrDist numCoreNodes
+    n <- choose (0, 20)
+    -- We don't update the UTxO after each transaction, so some of the
+    -- generated transactions could very well be invalid.
+    replicateM n $
+      mkSimpleGenTx <$> genSimpleTx curSlotNo addrs utxo
+   where
+    addrs :: [Addr]
+    addrs = Map.keys $ mkAddrDist numCoreNodes

-      utxo :: Utxo
-      utxo = mockUtxo $ simpleLedgerState $ stowLedgerTables ledgerState
+    utxo :: Utxo
+    utxo = mockUtxo $ simpleLedgerState $ stowLedgerTables ledgerState

 genSimpleTx :: SlotNo -> [Addr] -> Utxo -> Gen Tx
 genSimpleTx curSlotNo addrs u = do
-    let senders = Set.toList . Set.fromList . map fst . Map.elems $ u -- people with funds
-    sender <- elements senders
-    recipient <- elements $ filter (/= sender) addrs
-    let assets = filter (\(_, (a, _)) -> a == sender) $ Map.toList u
-        fortune = sum [c | (_, (_, c)) <- assets]
-        ins = Set.fromList $ map fst assets
-    amount <- choose (1, fortune)
-    let outRecipient = (recipient, amount)
-        outs = if amount == fortune
-          then [outRecipient]
-          else [outRecipient, (sender, fortune - amount)]
-    -- generate transactions within several slots in the future or never
-    expiry <- elements $ map mkExpiry $ Nothing : map Just [0 .. 10]
-    return $ Tx expiry ins outs
-  where
-    mkExpiry :: Maybe SlotNo -> Expiry
-    mkExpiry = \case
-      Nothing -> DoNotExpire
-      Just delta -> ExpireAtOnsetOf $ curSlotNo + delta
+  let senders = Set.toList . Set.fromList . map fst . Map.elems $ u -- people with funds
+  sender <- elements senders
+  recipient <- elements $ filter (/= sender) addrs
+  let assets = filter (\(_, (a, _)) -> a == sender) $ Map.toList u
+      fortune = sum [c | (_, (_, c)) <- assets]
+      ins = Set.fromList $ map fst assets
+  amount <- choose (1, fortune)
+  let outRecipient = (recipient, amount)
+      outs =
+        if amount == fortune
+          then [outRecipient]
+          else [outRecipient, (sender, fortune - amount)]
+  -- generate transactions within several slots in the future or never
+  expiry <- elements $ map mkExpiry $ Nothing : map Just [0 .. 10]
+  return $ Tx expiry ins outs
+ where
+  mkExpiry :: Maybe SlotNo -> Expiry
+  mkExpiry = \case
+    Nothing -> DoNotExpire
+    Just delta -> ExpireAtOnsetOf $ curSlotNo + delta
diff --git a/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/Util/HasCreator/Mock.hs b/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/Util/HasCreator/Mock.hs
index 61aa13543c..52795a1f20 100644
--- a/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/Util/HasCreator/Mock.hs
+++ b/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/Util/HasCreator/Mock.hs
@@ -1,50 +1,52 @@
 {-# LANGUAGE FlexibleInstances #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 module Test.ThreadNet.Util.HasCreator.Mock () where

-import           Cardano.Crypto.DSIGN
-import           Data.Word (Word64)
-import           Ouroboros.Consensus.Mock.Ledger
-import           Ouroboros.Consensus.Mock.Protocol.Praos
-import           Ouroboros.Consensus.NodeId (CoreNodeId (..))
-import           Ouroboros.Consensus.Protocol.BFT
-import           Ouroboros.Consensus.Protocol.PBFT
-import           Test.ThreadNet.Util.HasCreator
-
+import Cardano.Crypto.DSIGN
+import Data.Word (Word64)
+import Ouroboros.Consensus.Mock.Ledger
+import Ouroboros.Consensus.Mock.Protocol.Praos
+import Ouroboros.Consensus.NodeId (CoreNodeId (..))
+import Ouroboros.Consensus.Protocol.BFT
+import Ouroboros.Consensus.Protocol.PBFT
+import Test.ThreadNet.Util.HasCreator

 instance 
HasCreator (SimpleBftBlock c BftMockCrypto) where - getCreator = coreNodeId - . bftSignature - . simpleBftExt - . simpleHeaderExt - . simpleHeader - where - coreNodeId :: SignedDSIGN MockDSIGN a -> CoreNodeId - coreNodeId = CoreNodeId . verKeyIdFromSigned + getCreator = + coreNodeId + . bftSignature + . simpleBftExt + . simpleHeaderExt + . simpleHeader + where + coreNodeId :: SignedDSIGN MockDSIGN a -> CoreNodeId + coreNodeId = CoreNodeId . verKeyIdFromSigned instance HasCreator (SimplePBftBlock c PBftMockCrypto) where - getCreator = coreNodeId - . pbftSignature - . simplePBftExt - . simpleHeaderExt - . simpleHeader - where - coreNodeId :: SignedDSIGN MockDSIGN a -> CoreNodeId - coreNodeId = CoreNodeId . verKeyIdFromSigned + getCreator = + coreNodeId + . pbftSignature + . simplePBftExt + . simpleHeaderExt + . simpleHeader + where + coreNodeId :: SignedDSIGN MockDSIGN a -> CoreNodeId + coreNodeId = CoreNodeId . verKeyIdFromSigned instance HasCreator (SimplePraosBlock c PraosMockCrypto) where - getCreator = praosCreator - . praosExtraFields - . simplePraosExt - . simpleHeaderExt - . simpleHeader + getCreator = + praosCreator + . praosExtraFields + . simplePraosExt + . simpleHeaderExt + . simpleHeader instance HasCreator (SimplePraosRuleBlock c) where - getCreator = simplePraosRuleExt - . simpleHeaderExt - . simpleHeader + getCreator = + simplePraosRuleExt + . simpleHeaderExt + . simpleHeader -- | Get the id of the signer from a signature. Used for testing. 
verKeyIdFromSigned :: SignedDSIGN MockDSIGN a -> Word64 diff --git a/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/Util/SimpleBlock.hs b/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/Util/SimpleBlock.hs index 105912d795..ad07990864 100644 --- a/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/Util/SimpleBlock.hs +++ b/ouroboros-consensus-diffusion/src/unstable-mock-testlib/Test/ThreadNet/Util/SimpleBlock.hs @@ -2,26 +2,26 @@ module Test.ThreadNet.Util.SimpleBlock (prop_validSimpleBlock) where -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Util.Condense (condense) -import Test.QuickCheck +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Util.Condense (condense) +import Test.QuickCheck -prop_validSimpleBlock - :: (SimpleCrypto c, Typeable ext, Typeable ext') - => SimpleBlock' c ext ext' -> Property +prop_validSimpleBlock :: + (SimpleCrypto c, Typeable ext, Typeable ext') => + SimpleBlock' c ext ext' -> Property prop_validSimpleBlock blk = conjoin $ map each $ simpleTxs $ simpleBody blk - where - now :: SlotNo - now = blockSlot blk + where + now :: SlotNo + now = blockSlot blk - msg :: String - msg = "block contains expired transaction:" + msg :: String + msg = "block contains expired transaction:" - each :: Tx -> Property - each tx@(Tx expiry _ins _outs) = - counterexample (msg <> " " <> condense (now, tx)) $ + each :: Tx -> Property + each tx@(Tx expiry _ins _outs) = + counterexample (msg <> " " <> condense (now, tx)) $ case expiry of - DoNotExpire -> True + DoNotExpire -> True ExpireAtOnsetOf s -> now < s diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Main.hs b/ouroboros-consensus-diffusion/test/consensus-test/Main.hs index e7174e7503..b4d16543d7 100644 --- 
a/ouroboros-consensus-diffusion/test/consensus-test/Main.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Main.hs @@ -1,31 +1,36 @@ module Main (main) where -import qualified Test.Consensus.Genesis.Tests (tests) -import qualified Test.Consensus.GSM (tests) -import qualified Test.Consensus.HardFork.Combinator (tests) -import qualified Test.Consensus.Node (tests) -import qualified Test.Consensus.PeerSimulator.Tests (tests) -import qualified Test.Consensus.PointSchedule.Shrinking.Tests (tests) -import qualified Test.Consensus.PointSchedule.Tests (tests) -import Test.Tasty -import Test.Util.TestEnv (defaultMainWithTestEnv, - defaultTestEnvConfig) +import Test.Consensus.GSM qualified (tests) +import Test.Consensus.Genesis.Tests qualified (tests) +import Test.Consensus.HardFork.Combinator qualified (tests) +import Test.Consensus.Node qualified (tests) +import Test.Consensus.PeerSimulator.Tests qualified (tests) +import Test.Consensus.PointSchedule.Shrinking.Tests qualified (tests) +import Test.Consensus.PointSchedule.Tests qualified (tests) +import Test.Tasty +import Test.Util.TestEnv + ( defaultMainWithTestEnv + , defaultTestEnvConfig + ) main :: IO () main = defaultMainWithTestEnv defaultTestEnvConfig tests tests :: TestTree tests = - testGroup "ouroboros-consensus" - [ Test.Consensus.Node.tests - , testGroup "HardFork" [ - testGroup "Combinator" [ - Test.Consensus.HardFork.Combinator.tests - ] - ] - , Test.Consensus.Genesis.Tests.tests - , testGroup "GSM" Test.Consensus.GSM.tests - , Test.Consensus.PeerSimulator.Tests.tests - , Test.Consensus.PointSchedule.Shrinking.Tests.tests - , Test.Consensus.PointSchedule.Tests.tests - ] + testGroup + "ouroboros-consensus" + [ Test.Consensus.Node.tests + , testGroup + "HardFork" + [ testGroup + "Combinator" + [ Test.Consensus.HardFork.Combinator.tests + ] + ] + , Test.Consensus.Genesis.Tests.tests + , testGroup "GSM" Test.Consensus.GSM.tests + , Test.Consensus.PeerSimulator.Tests.tests + , 
Test.Consensus.PointSchedule.Shrinking.Tests.tests + , Test.Consensus.PointSchedule.Tests.tests + ] diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/BlockTree.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/BlockTree.hs index 9a77a50fba..d33b0bf58e 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/BlockTree.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/BlockTree.hs @@ -10,8 +10,8 @@ -- different mechanisms but maybe we should rely on that instead to avoid -- duplication. -module Test.Consensus.BlockTree ( - BlockTree (..) +module Test.Consensus.BlockTree + ( BlockTree (..) , BlockTreeBranch (..) , PathAnchoredAtSource (..) , addBranch @@ -23,18 +23,23 @@ module Test.Consensus.BlockTree ( , prettyBlockTree ) where -import Cardano.Slotting.Slot (SlotNo (unSlotNo)) -import Data.Foldable (asum) -import Data.Function ((&)) -import Data.Functor ((<&>)) -import Data.List (sortOn) -import Data.Maybe (fromJust, fromMaybe) -import Data.Ord (Down (Down)) -import qualified Data.Vector as Vector -import Ouroboros.Consensus.Block.Abstract (blockNo, blockSlot, - fromWithOrigin, pointSlot, unBlockNo) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Text.Printf (printf) +import Cardano.Slotting.Slot (SlotNo (unSlotNo)) +import Data.Foldable (asum) +import Data.Function ((&)) +import Data.Functor ((<&>)) +import Data.List (sortOn) +import Data.Maybe (fromJust, fromMaybe) +import Data.Ord (Down (Down)) +import Data.Vector qualified as Vector +import Ouroboros.Consensus.Block.Abstract + ( blockNo + , blockSlot + , fromWithOrigin + , pointSlot + , unBlockNo + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Text.Printf (printf) -- | Represent a branch of a block tree by a prefix and a suffix. 
The full -- fragment (the prefix and suffix catenated) and the trunk suffix (the rest of @@ -43,13 +48,13 @@ import Text.Printf (printf) -- INVARIANT: the head of @btbPrefix@ is the anchor of @btbSuffix@. -- -- INVARIANT: @btbFull == fromJust $ AF.join btbPrefix btbSuffix@. -data BlockTreeBranch blk = BlockTreeBranch { - btbPrefix :: AF.AnchoredFragment blk, - btbSuffix :: AF.AnchoredFragment blk, - btbTrunkSuffix :: AF.AnchoredFragment blk, - btbFull :: AF.AnchoredFragment blk +data BlockTreeBranch blk = BlockTreeBranch + { btbPrefix :: AF.AnchoredFragment blk + , btbSuffix :: AF.AnchoredFragment blk + , btbTrunkSuffix :: AF.AnchoredFragment blk + , btbFull :: AF.AnchoredFragment blk } - deriving (Show) + deriving Show -- | Represent a block tree with a main trunk and branches leaving from the -- trunk in question. All the branches are represented by their prefix to and @@ -69,15 +74,15 @@ data BlockTreeBranch blk = BlockTreeBranch { -- -- REVIEW: Find another name so as not to clash with 'BlockTree' from -- `unstable-consensus-testlib/Test/Util/TestBlock.hs`. -data BlockTree blk = BlockTree { - btTrunk :: AF.AnchoredFragment blk, - btBranches :: [BlockTreeBranch blk] +data BlockTree blk = BlockTree + { btTrunk :: AF.AnchoredFragment blk + , btBranches :: [BlockTreeBranch blk] } - deriving (Show) + deriving Show -- | Make a block tree made of only a trunk. mkTrunk :: AF.AnchoredFragment blk -> BlockTree blk -mkTrunk btTrunk = BlockTree { btTrunk, btBranches = [] } +mkTrunk btTrunk = BlockTree{btTrunk, btBranches = []} -- | Add a branch to an existing block tree. -- @@ -94,7 +99,7 @@ addBranch branch bt = do -- NOTE: We could use the monadic bind for @Maybe@ here but we would rather -- catch bugs quicker. let btbFull = fromJust $ AF.join btbPrefix btbSuffix - pure $ bt { btBranches = BlockTreeBranch { .. } : btBranches bt } + pure $ bt{btBranches = BlockTreeBranch{..} : btBranches bt} -- | Same as @addBranch@ but calls to 'error' if the former yields 'Nothing'. 
addBranch' :: AF.HasHeader blk => AF.AnchoredFragment blk -> BlockTree blk -> BlockTree blk @@ -107,7 +112,8 @@ allFragments bt = btTrunk bt : map btbFull (btBranches bt) -- | Look for a point in the block tree and return a fragment going from the -- root of the tree to the point in question. -findFragment :: AF.HasHeader blk => AF.Point blk -> BlockTree blk -> Maybe (AF.AnchoredFragment blk) +findFragment :: + AF.HasHeader blk => AF.Point blk -> BlockTree blk -> Maybe (AF.AnchoredFragment blk) findFragment point blockTree = allFragments blockTree & map (\fragment -> AF.splitAfterPoint fragment point) @@ -150,9 +156,9 @@ findPath source target blockTree = do sourceFragment <- findFragment source blockTree targetFragment <- findFragment target blockTree (_, _, _, targetSuffix) <- AF.intersect sourceFragment targetFragment - pure ( - PathAnchoredAtSource (AF.anchorPoint targetSuffix == source), - targetSuffix + pure + ( PathAnchoredAtSource (AF.anchorPoint targetSuffix == source) + , targetSuffix ) -- | Pretty prints a block tree for human readability. For instance: @@ -167,59 +173,60 @@ prettyBlockTree blockTree = ["slots: " ++ unwords (map (printf "%2d" . unSlotNo) [veryFirstSlot .. veryLastSlot])] ++ [printTrunk honestFragment] ++ (map (uncurry printBranch) adversarialFragments) - - where - honestFragment = btTrunk blockTree - adversarialFragments = - sortOn (Down . pointSlot . AF.anchorPoint . snd) - $ zip [1..] 
- $ map btbSuffix (btBranches blockTree) - - veryFirstSlot = firstNo $ honestFragment - - veryLastSlot = - foldl max 0 $ - map - lastNo - (honestFragment : map snd adversarialFragments) - - printTrunk :: AF.HasHeader blk => AF.AnchoredFragment blk -> String - printTrunk = printLine (\_ -> "trunk: ─") - - printBranch :: AF.HasHeader blk => Int -> AF.AnchoredFragment blk -> String - printBranch idx = printLine $ \firstSlot -> - let sidx = " (" ++ show idx ++ ")" - pad = replicate 6 ' ' - prefix = sidx ++ drop (length sidx) pad - in - prefix ++ replicate (3 * fromIntegral (unSlotNo (firstSlot - veryFirstSlot))) ' ' ++ " ╰─" - - printLine :: AF.HasHeader blk => (SlotNo -> String) -> AF.AnchoredFragment blk -> String - printLine printHeader fragment = - let firstSlot = firstNo fragment - lastSlot = lastNo fragment - in printHeader firstSlot ++ printFragment firstSlot lastSlot fragment - - printFragment :: AF.HasHeader blk => SlotNo -> SlotNo -> AF.AnchoredFragment blk -> String - printFragment firstSlot lastSlot fragment = - fragment - & AF.toOldestFirst - -- Turn the fragment into a list of (SlotNo, Just BlockNo) - & map (\block -> (fromIntegral (unSlotNo (blockSlot block) - unSlotNo firstSlot), Just (unBlockNo (blockNo block)))) - -- Update only the Vector elements that have blocks in them - & Vector.toList . (slotRange Vector.//) - & map (maybe " " (printf "%2d")) - & unwords - & map (\c -> if c == ' ' then '─' else c) - where - -- Initialize a Vector with the length of the fragment containing only Nothings - slotRange = Vector.replicate (fromIntegral (unSlotNo lastSlot - unSlotNo firstSlot + 1)) Nothing - - lastNo :: AF.HasHeader blk => AF.AnchoredFragment blk -> SlotNo - lastNo = fromWithOrigin 0 . AF.headSlot - - firstNo :: AF.AnchoredFragment blk -> SlotNo - firstNo frag = - case AF.anchor frag of - AF.AnchorGenesis -> 0 - AF.Anchor slotNo _ _ -> slotNo + 1 + where + honestFragment = btTrunk blockTree + adversarialFragments = + sortOn (Down . pointSlot . 
AF.anchorPoint . snd) $ + zip [1 ..] $ + map btbSuffix (btBranches blockTree) + + veryFirstSlot = firstNo $ honestFragment + + veryLastSlot = + foldl max 0 $ + map + lastNo + (honestFragment : map snd adversarialFragments) + + printTrunk :: AF.HasHeader blk => AF.AnchoredFragment blk -> String + printTrunk = printLine (\_ -> "trunk: ─") + + printBranch :: AF.HasHeader blk => Int -> AF.AnchoredFragment blk -> String + printBranch idx = printLine $ \firstSlot -> + let sidx = " (" ++ show idx ++ ")" + pad = replicate 6 ' ' + prefix = sidx ++ drop (length sidx) pad + in prefix ++ replicate (3 * fromIntegral (unSlotNo (firstSlot - veryFirstSlot))) ' ' ++ " ╰─" + + printLine :: AF.HasHeader blk => (SlotNo -> String) -> AF.AnchoredFragment blk -> String + printLine printHeader fragment = + let firstSlot = firstNo fragment + lastSlot = lastNo fragment + in printHeader firstSlot ++ printFragment firstSlot lastSlot fragment + + printFragment :: AF.HasHeader blk => SlotNo -> SlotNo -> AF.AnchoredFragment blk -> String + printFragment firstSlot lastSlot fragment = + fragment + & AF.toOldestFirst + -- Turn the fragment into a list of (SlotNo, Just BlockNo) + & map + ( \block -> + (fromIntegral (unSlotNo (blockSlot block) - unSlotNo firstSlot), Just (unBlockNo (blockNo block))) + ) + -- Update only the Vector elements that have blocks in them + & Vector.toList . (slotRange Vector.//) + & map (maybe " " (printf "%2d")) + & unwords + & map (\c -> if c == ' ' then '─' else c) + where + -- Initialize a Vector with the length of the fragment containing only Nothings + slotRange = Vector.replicate (fromIntegral (unSlotNo lastSlot - unSlotNo firstSlot + 1)) Nothing + + lastNo :: AF.HasHeader blk => AF.AnchoredFragment blk -> SlotNo + lastNo = fromWithOrigin 0 . 
AF.headSlot + + firstNo :: AF.AnchoredFragment blk -> SlotNo + firstNo frag = + case AF.anchor frag of + AF.AnchorGenesis -> 0 + AF.Anchor slotNo _ _ -> slotNo + 1 diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/GSM.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/GSM.hs index 59cf2e810f..f7b933595c 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/GSM.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/GSM.hs @@ -3,57 +3,59 @@ module Test.Consensus.GSM (tests) where -import Cardano.Network.Types (LedgerStateJudgement (..)) -import Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked -import Control.Monad (replicateM_) -import Control.Monad.Class.MonadAsync (poll, withAsync) -import Control.Monad.Class.MonadFork (MonadFork, yield) -import Control.Monad.Class.MonadSTM -import qualified Control.Monad.Class.MonadTime.SI as SI -import qualified Control.Monad.Class.MonadTimer.SI as SI -import qualified Control.Monad.IOSim as IOSim -import Control.Tracer (Tracer (Tracer)) -import Data.Functor ((<&>)) -import qualified Data.Map.Strict as Map -import qualified Data.Set as Set -import qualified Ouroboros.Consensus.Node.GSM as GSM -import Ouroboros.Consensus.Util.IOLike (IOLike) -import Test.Consensus.GSM.Model -import Test.Consensus.IOSimQSM.Test.StateMachine.Sequential - (runCommands') -import qualified Test.QuickCheck as QC -import qualified Test.QuickCheck.Monadic as QC -import qualified Test.StateMachine as QSM -import Test.StateMachine (Concrete) -import qualified Test.StateMachine.Types as QSM -import Test.Tasty (TestTree) -import Test.Tasty.QuickCheck (testProperty) -import Test.Util.Orphans.IOLike () -import Test.Util.TestEnv (adjustQuickCheckTests) -import Test.Util.ToExpr () +import Cardano.Network.Types (LedgerStateJudgement (..)) +import Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked +import Control.Monad (replicateM_) +import 
Control.Monad.Class.MonadAsync (poll, withAsync) +import Control.Monad.Class.MonadFork (MonadFork, yield) +import Control.Monad.Class.MonadSTM +import Control.Monad.Class.MonadTime.SI qualified as SI +import Control.Monad.Class.MonadTimer.SI qualified as SI +import Control.Monad.IOSim qualified as IOSim +import Control.Tracer (Tracer (Tracer)) +import Data.Functor ((<&>)) +import Data.Map.Strict qualified as Map +import Data.Set qualified as Set +import Ouroboros.Consensus.Node.GSM qualified as GSM +import Ouroboros.Consensus.Util.IOLike (IOLike) +import Test.Consensus.GSM.Model +import Test.Consensus.IOSimQSM.Test.StateMachine.Sequential + ( runCommands' + ) +import Test.QuickCheck qualified as QC +import Test.QuickCheck.Monadic qualified as QC +import Test.StateMachine (Concrete) +import Test.StateMachine qualified as QSM +import Test.StateMachine.Types qualified as QSM +import Test.Tasty (TestTree) +import Test.Tasty.QuickCheck (testProperty) +import Test.Util.Orphans.IOLike () +import Test.Util.TestEnv (adjustQuickCheckTests) +import Test.Util.ToExpr () ----- tests :: [TestTree] tests = - adhoc <> core - where - adhoc = [ - testProperty "GSM yield regression" prop_yield_regression - ] - - core = [ - adjustQuickCheckTests (* 10) - $ testProperty ("GSM (" <> coreTestName ub <> ")") - $ prop_sequential ub - | ub <- Nothing : map Just [minBound .. maxBound :: UpstreamPeer] - ] - - coreTestName = \case - Nothing -> "no peers" - Just ub -> "at most " <> case fromEnum ub of - 0 -> "1 peer" - n -> show (n + 1) <> " peers" + adhoc <> core + where + adhoc = + [ testProperty "GSM yield regression" prop_yield_regression + ] + + core = + [ adjustQuickCheckTests (* 10) $ + testProperty ("GSM (" <> coreTestName ub <> ")") $ + prop_sequential ub + | ub <- Nothing : map Just [minBound .. 
maxBound :: UpstreamPeer] + ] + + coreTestName = \case + Nothing -> "no peers" + Just ub -> + "at most " <> case fromEnum ub of + 0 -> "1 peer" + n -> show (n + 1) <> " peers" ----- the QSM code under test @@ -62,252 +64,232 @@ tests = -- except the model definition is in "Test.Consensus.GSM.Model". semantics :: - IOLike m - => Vars m - -> Command Concrete - -> m (Response Concrete) + IOLike m => + Vars m -> + Command Concrete -> + m (Response Concrete) semantics vars cmd = pre $ case cmd of - Disconnect peer -> do - atomically $ do - modifyTVar varStates $ Map.delete peer - pure Unit - ExtendSelection sdel -> do - atomically $ do - Selection b s <- readTVar varSelection - writeTVar varSelection $! Selection (b + 1) (s + sdel) - pure Unit - ModifyCandidate peer bdel -> do - atomically $ do - - v <- (Map.! peer) <$> readTVar varStates - Candidate b <- psCandidate <$> readTVar v - writeTVar v $! PeerState (Candidate (b + bdel)) (Idling False) - - pure Unit - NewCandidate peer bdel -> do - atomically $ do - Selection b _s <- readTVar varSelection - v <- newTVar $! PeerState (Candidate (b + bdel)) (Idling False) - modifyTVar varStates $ Map.insert peer v - pure Unit - ReadGsmState -> do - fmap ReadThisGsmState $ atomically $ readTVar varGsmState - ReadMarker -> do - fmap ReadThisMarker $ atomically $ readTVar varMarker - StartIdling peer -> atomically $ do - v <- (Map.! 
peer) <$> readTVar varStates - modifyTVar v $ \ (PeerState c _) -> PeerState c (Idling True) - pure Unit - TimePasses dur -> do - SI.threadDelay (0.1 * fromIntegral dur) - pure Unit - where - Vars - varSelection - varStates - varGsmState - varMarker - varEvents - = vars - - pre m = do - push varEvents (EvBegin cmd) - x <- m - yieldSeveralTimes -- see Note [Why yield after the command] - push varEvents EvEnd - pure x + Disconnect peer -> do + atomically $ do + modifyTVar varStates $ Map.delete peer + pure Unit + ExtendSelection sdel -> do + atomically $ do + Selection b s <- readTVar varSelection + writeTVar varSelection $! Selection (b + 1) (s + sdel) + pure Unit + ModifyCandidate peer bdel -> do + atomically $ do + v <- (Map.! peer) <$> readTVar varStates + Candidate b <- psCandidate <$> readTVar v + writeTVar v $! PeerState (Candidate (b + bdel)) (Idling False) + + pure Unit + NewCandidate peer bdel -> do + atomically $ do + Selection b _s <- readTVar varSelection + v <- newTVar $! PeerState (Candidate (b + bdel)) (Idling False) + modifyTVar varStates $ Map.insert peer v + pure Unit + ReadGsmState -> do + fmap ReadThisGsmState $ atomically $ readTVar varGsmState + ReadMarker -> do + fmap ReadThisMarker $ atomically $ readTVar varMarker + StartIdling peer -> atomically $ do + v <- (Map.! 
peer) <$> readTVar varStates + modifyTVar v $ \(PeerState c _) -> PeerState c (Idling True) + pure Unit + TimePasses dur -> do + SI.threadDelay (0.1 * fromIntegral dur) + pure Unit + where + Vars + varSelection + varStates + varGsmState + varMarker + varEvents = + vars + + pre m = do + push varEvents (EvBegin cmd) + x <- m + yieldSeveralTimes -- see Note [Why yield after the command] + push varEvents EvEnd + pure x sm :: - IOLike m - => Maybe UpstreamPeer - -> Vars m - -> Context - -> QSM.StateMachine Model Command m Response -sm ub vars ctx = QSM.StateMachine { - QSM.cleanup = \_model -> pure () - , - QSM.generator = generator ub - , - QSM.initModel = initModel ctx - , - QSM.invariant = Nothing - , - QSM.mock = \model cmd -> pure $ mock model cmd - , - QSM.postcondition = postcondition - , - QSM.precondition = precondition ctx - , - QSM.semantics = semantics vars - , - QSM.shrinker = shrinker - , - QSM.transition = transition ctx - } + IOLike m => + Maybe UpstreamPeer -> + Vars m -> + Context -> + QSM.StateMachine Model Command m Response +sm ub vars ctx = + QSM.StateMachine + { QSM.cleanup = \_model -> pure () + , QSM.generator = generator ub + , QSM.initModel = initModel ctx + , QSM.invariant = Nothing + , QSM.mock = \model cmd -> pure $ mock model cmd + , QSM.postcondition = postcondition + , QSM.precondition = precondition ctx + , QSM.semantics = semantics vars + , QSM.shrinker = shrinker + , QSM.transition = transition ctx + } prop_sequential :: - Maybe UpstreamPeer - -> LedgerStateJudgement - -> QC.Fun (Set.Set UpstreamPeer) Bool - -> QC.Property + Maybe UpstreamPeer -> + LedgerStateJudgement -> + QC.Fun (Set.Set UpstreamPeer) Bool -> + QC.Property prop_sequential ub initialJudgement (QC.Fn isHaaSatisfied) = - QSM.forAllCommands - commandArbitrarySM - mbMinimumCommandLen - (prop_sequential1 ctx) - where - ctx = Context { - cInitialJudgement = initialJudgement - , - cIsHaaSatisfied = isHaaSatisfied + QSM.forAllCommands + commandArbitrarySM + 
mbMinimumCommandLen + (prop_sequential1 ctx) + where + ctx = + Context + { cInitialJudgement = initialJudgement + , cIsHaaSatisfied = isHaaSatisfied } - mbMinimumCommandLen = Just 20 -- just a guess + mbMinimumCommandLen = Just 20 -- just a guess - -- NB the monad is irrelevant here but is ambiguous, so we merely ascribe a - -- convenient concrete one - commandArbitrarySM = sm ub (undefined :: Vars IO) ctx + -- NB the monad is irrelevant here but is ambiguous, so we merely ascribe a + -- convenient concrete one + commandArbitrarySM = sm ub (undefined :: Vars IO) ctx prop_sequential1 :: - Context - -> QSM.Commands Command Response - -> QC.Property + Context -> + QSM.Commands Command Response -> + QC.Property prop_sequential1 ctx cmds = runSimQC $ do - let initialGsmState = case initialJudgement of - TooOld -> GSM.PreSyncing - YoungEnough -> GSM.CaughtUp - - -- these variables are part of the 'GSM.GsmView' - varSelection <- newTVarIO (mSelection $ initModel ctx) - varStates <- newTVarIO Map.empty - varGsmState <- newTVarIO initialGsmState - varMarker <- newTVarIO (toMarker initialGsmState) - - -- this variable is for better 'QC.counterexample' messages - varEvents <- newRecorder - let tracer = Tracer $ push varEvents . 
EvGsm - - let vars = - Vars - varSelection - varStates - varGsmState - varMarker - varEvents - - let executionSM = sm (Just maxBound) vars ctx - - -- NB the specific IO type is unused here - prettySM = sm undefined undefined ctx - - let gsm = GSM.realGsmEntryPoints (id, tracer) GSM.GsmView { - GSM.antiThunderingHerd = Nothing - , - GSM.candidateOverSelection = \ s (PeerState c _) -> candidateOverSelection s c - , - GSM.peerIsIdle = isIdling - , - GSM.durationUntilTooOld = Just durationUntilTooOld - , - GSM.equivalent = (==) -- unsound, but harmless in this test - , - GSM.getChainSyncStates = readTVar varStates - , - GSM.getCurrentSelection = readTVar varSelection - , - GSM.minCaughtUpDuration = thrashLimit - , - GSM.setCaughtUpPersistentMark = \b -> + let initialGsmState = case initialJudgement of + TooOld -> GSM.PreSyncing + YoungEnough -> GSM.CaughtUp + + -- these variables are part of the 'GSM.GsmView' + varSelection <- newTVarIO (mSelection $ initModel ctx) + varStates <- newTVarIO Map.empty + varGsmState <- newTVarIO initialGsmState + varMarker <- newTVarIO (toMarker initialGsmState) + + -- this variable is for better 'QC.counterexample' messages + varEvents <- newRecorder + let tracer = Tracer $ push varEvents . 
EvGsm + + let vars = + Vars + varSelection + varStates + varGsmState + varMarker + varEvents + + let executionSM = sm (Just maxBound) vars ctx + + -- NB the specific IO type is unused here + prettySM = sm undefined undefined ctx + + let gsm = + GSM.realGsmEntryPoints + (id, tracer) + GSM.GsmView + { GSM.antiThunderingHerd = Nothing + , GSM.candidateOverSelection = \s (PeerState c _) -> candidateOverSelection s c + , GSM.peerIsIdle = isIdling + , GSM.durationUntilTooOld = Just durationUntilTooOld + , GSM.equivalent = (==) -- unsound, but harmless in this test + , GSM.getChainSyncStates = readTVar varStates + , GSM.getCurrentSelection = readTVar varSelection + , GSM.minCaughtUpDuration = thrashLimit + , GSM.setCaughtUpPersistentMark = \b -> atomically $ do - writeTVar varMarker $ if b then Present else Absent - , - GSM.writeGsmState = \x -> atomically $ do + writeTVar varMarker $ if b then Present else Absent + , GSM.writeGsmState = \x -> atomically $ do writeTVar varGsmState x - , - GSM.isHaaSatisfied = + , GSM.isHaaSatisfied = isHaaSatisfied . Map.keysSet <$> readTVar varStates - } - gsmEntryPoint = case initialJudgement of - TooOld -> GSM.enterPreSyncing gsm - YoungEnough -> GSM.enterCaughtUp gsm - - ((hist, model', res), mbExn) <- id - $ withAsync gsmEntryPoint - $ \hGSM -> do - - yieldSeveralTimes -- see Note [Why yield after the command] - - x <- runCommands' (pure executionSM) cmds - - -- notice if the GSM thread raised an exception while processing the - -- commands - poll hGSM <&> \case - Just Right{} -> - error "impossible! 
GSM terminated" - Just (Left exn) -> - -- we don't simply rethrow it, since we still want to pretty print - -- the command sequence - (x, Just exn) - Nothing -> - (x, Nothing) - - let noExn = case mbExn of - Nothing -> QC.property () - Just exn -> QC.counterexample (show exn) False - - -- effectively add a 'ReadGsmState' to the end of the command list - lastCheck <- do - actual <- semantics vars ReadGsmState - let expected = mock model' ReadGsmState - pure $ case (actual, expected) of - (ReadThisGsmState x, ReadThisGsmState y) -> - QC.counterexample "lastCheck" $ x QC.=== y - _ -> - error "impossible! lastCheck response" - - watcherEvents <- dumpEvents varEvents - - pure - $ QC.monadicIO - $ QSM.prettyCommands prettySM hist - $ QC.counterexample - (unlines - $ (:) "WATCHER" - $ map ((" " <>) . show) - $ watcherEvents - ) - $ QC.tabulate - "Notables" - (case Set.toList $ mNotables model' of - [] -> [""] - notables -> map show notables - ) - $ QSM.checkCommandNames cmds - $ noExn QC..&&. lastCheck QC..&&. res QC.=== QSM.Ok - where - Context { - cInitialJudgement = initialJudgement - , - cIsHaaSatisfied = isHaaSatisfied - } = ctx + } + gsmEntryPoint = case initialJudgement of + TooOld -> GSM.enterPreSyncing gsm + YoungEnough -> GSM.enterCaughtUp gsm + + ((hist, model', res), mbExn) <- id $ + withAsync gsmEntryPoint $ + \hGSM -> do + yieldSeveralTimes -- see Note [Why yield after the command] + x <- runCommands' (pure executionSM) cmds + + -- notice if the GSM thread raised an exception while processing the + -- commands + poll hGSM <&> \case + Just Right{} -> + error "impossible! 
GSM terminated" + Just (Left exn) -> + -- we don't simply rethrow it, since we still want to pretty print + -- the command sequence + (x, Just exn) + Nothing -> + (x, Nothing) + + let noExn = case mbExn of + Nothing -> QC.property () + Just exn -> QC.counterexample (show exn) False + + -- effectively add a 'ReadGsmState' to the end of the command list + lastCheck <- do + actual <- semantics vars ReadGsmState + let expected = mock model' ReadGsmState + pure $ case (actual, expected) of + (ReadThisGsmState x, ReadThisGsmState y) -> + QC.counterexample "lastCheck" $ x QC.=== y + _ -> + error "impossible! lastCheck response" + + watcherEvents <- dumpEvents varEvents + + pure + $ QC.monadicIO + $ QSM.prettyCommands prettySM hist + $ QC.counterexample + ( unlines $ + (:) "WATCHER" $ + map ((" " <>) . show) $ + watcherEvents + ) + $ QC.tabulate + "Notables" + ( case Set.toList $ mNotables model' of + [] -> [""] + notables -> map show notables + ) + $ QSM.checkCommandNames cmds + $ noExn QC..&&. lastCheck QC..&&. res QC.=== QSM.Ok + where + Context + { cInitialJudgement = initialJudgement + , cIsHaaSatisfied = isHaaSatisfied + } = ctx ----- durationUntilTooOld :: Selection -> IOSim.IOSim s GSM.DurationFromNow durationUntilTooOld sel = do - let expiryAge = ageLimit `SI.addTime` onset sel - now <- SI.getMonotonicTime - pure $ case compare expiryAge now of - LT -> GSM.Already - GT -> GSM.After $ realToFrac $ expiryAge `SI.diffTime` now - - -- 'boringDur' cannot prevent this case. In particular, this case - -- necessarily arises in the GSM itself during a 'TimePasses' that - -- incurs a so-called /flicker/ event, in which the anti-thrashing - -- timer expires and yet the node state at that moment still - -- _immediately_ indicates that it's CaughtUp. For the specific case of - -- this test suite, the answer here must be 'GSM.Already'. 
- EQ -> GSM.Already + let expiryAge = ageLimit `SI.addTime` onset sel + now <- SI.getMonotonicTime + pure $ case compare expiryAge now of + LT -> GSM.Already + GT -> GSM.After $ realToFrac $ expiryAge `SI.diffTime` now + -- 'boringDur' cannot prevent this case. In particular, this case + -- necessarily arises in the GSM itself during a 'TimePasses' that + -- incurs a so-called /flicker/ event, in which the anti-thrashing + -- timer expires and yet the node state at that moment still + -- _immediately_ indicates that it's CaughtUp. For the specific case of + -- this test suite, the answer here must be 'GSM.Already'. + EQ -> GSM.Already ----- @@ -362,34 +344,32 @@ first command, for consistency. -- ()@. prop_yield_regression :: QC.Property prop_yield_regression = - QC.once - $ prop_sequential1 ctx - $ QSM.Commands - [ QSM.Command (NewCandidate Amara (B 1)) Unit [] - , QSM.Command (StartIdling Amara) Unit [] - , QSM.Command (TimePasses 61) Unit [] - , QSM.Command (ExtendSelection (S (-4))) Unit [] - , QSM.Command ReadMarker (ReadThisMarker Absent) [] - ] - where - ctx = Context { - cInitialJudgement = YoungEnough - , - cIsHaaSatisfied = \_peers -> True + QC.once $ + prop_sequential1 ctx $ + QSM.Commands + [ QSM.Command (NewCandidate Amara (B 1)) Unit [] + , QSM.Command (StartIdling Amara) Unit [] + , QSM.Command (TimePasses 61) Unit [] + , QSM.Command (ExtendSelection (S (-4))) Unit [] + , QSM.Command ReadMarker (ReadThisMarker Absent) [] + ] + where + ctx = + Context + { cInitialJudgement = YoungEnough + , cIsHaaSatisfied = \_peers -> True } ----- trivial event accumulator, useful for debugging test failures -data Ev = +data Ev + = -- | 'semantics' started stimulating the GSM code being tested EvBegin (Command Concrete) - -- ^ 'semantics' started stimulating the GSM code being tested - | + | -- | 'semantics' stopped stimulating the GSM code being tested EvEnd - -- ^ 'semantics' stopped stimulating the GSM code being tested - | + | -- | the GSM code being tested 
emitted an event EvGsm (GSM.TraceGsmEvent Selection) - -- ^ the GSM code being tested emitted an event - deriving (Show) + deriving Show newtype EvRecorder m = EvRecorder (StrictTVar m [(SI.Time, Ev)]) @@ -401,26 +381,27 @@ dumpEvents (EvRecorder var) = reverse <$> readTVarIO var push :: IOLike m => EvRecorder m -> Ev -> m () push (EvRecorder var) ev = do - now <- SI.getMonotonicTime - atomically $ modifyTVar var $ (:) (now, ev) + now <- SI.getMonotonicTime + atomically $ modifyTVar var $ (:) (now, ev) isIdling :: PeerState -> Bool -isIdling (PeerState {psIdling = Idling i}) = i +isIdling (PeerState{psIdling = Idling i}) = i ----- -- | merely a tidy bundle of arguments -data Vars m = Vars - (StrictTVar m Selection) - (StrictTVar m (Map.Map UpstreamPeer (StrictTVar m PeerState))) - (StrictTVar m GSM.GsmState) - (StrictTVar m MarkerState) - (EvRecorder m) +data Vars m + = Vars + (StrictTVar m Selection) + (StrictTVar m (Map.Map UpstreamPeer (StrictTVar m PeerState))) + (StrictTVar m GSM.GsmState) + (StrictTVar m MarkerState) + (EvRecorder m) newtype Idling = Idling Bool deriving (Eq, Ord, Show) -data PeerState = PeerState { psCandidate :: !Candidate, psIdling :: !Idling } +data PeerState = PeerState {psCandidate :: !Candidate, psIdling :: !Idling} deriving (Eq, Ord, Show) ----- @@ -428,5 +409,5 @@ data PeerState = PeerState { psCandidate :: !Candidate, psIdling :: !Idling } -- | a straight-forwardtrivial alias runSimQC :: (forall s. 
IOSim.IOSim s QC.Property) -> QC.Property runSimQC m = case IOSim.runSim m of - Left failure -> QC.counterexample (show failure) False - Right prop -> prop + Left failure -> QC.counterexample (show failure) False + Right prop -> prop diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/GSM/Model.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/GSM/Model.hs index f9cdc6f175..90438ea1ba 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/GSM/Model.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/GSM/Model.hs @@ -6,29 +6,27 @@ {-# LANGUAGE PolyKinds #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE StandaloneKindSignatures #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | The definition of the GSM QSM model and its auxiliaries - module Test.Consensus.GSM.Model (module Test.Consensus.GSM.Model) where -import Cardano.Network.Types (LedgerStateJudgement (..)) -import qualified Control.Monad.Class.MonadTime.SI as SI -import Data.Kind (Type) -import Data.List ((\\)) -import qualified Data.Map.Strict as Map -import qualified Data.Set as Set -import Data.Time (diffTimeToPicoseconds) -import qualified Data.TreeDiff as TD -import GHC.Generics (Generic, Generic1) -import qualified Ouroboros.Consensus.Node.GSM as GSM -import qualified Test.QuickCheck as QC -import Test.QuickCheck (choose, elements, shrink) -import qualified Test.StateMachine as QSM -import Test.StateMachine (Concrete, Symbolic) -import qualified Test.StateMachine.Types.Rank2 as QSM -import Test.Util.Orphans.ToExpr () +import Cardano.Network.Types (LedgerStateJudgement (..)) +import Control.Monad.Class.MonadTime.SI qualified as SI +import Data.Kind (Type) +import Data.List ((\\)) +import Data.Map.Strict qualified as Map +import Data.Set qualified as Set +import Data.Time (diffTimeToPicoseconds) +import Data.TreeDiff qualified as TD +import GHC.Generics (Generic, Generic1) +import Ouroboros.Consensus.Node.GSM qualified as 
GSM +import Test.QuickCheck (choose, elements, shrink) +import Test.QuickCheck qualified as QC +import Test.StateMachine (Concrete, Symbolic) +import Test.StateMachine qualified as QSM +import Test.StateMachine.Types.Rank2 qualified as QSM +import Test.Util.Orphans.ToExpr () ----- the QSM model @@ -39,36 +37,28 @@ import Test.Util.Orphans.ToExpr () -- | TODO restarts (or maybe just killing the GSM thread) type Command :: (Type -> Type) -> Type -data Command r = - Disconnect UpstreamPeer - -- ^ INVARIANT must be an existing peer +data Command r + = -- | INVARIANT must be an existing peer -- -- Mocks the necessary ChainSync client behavior. - | - ExtendSelection S - -- ^ INVARIANT 'selectionIsBehind' + Disconnect UpstreamPeer + | -- | INVARIANT 'selectionIsBehind' -- -- NOTE Harmless to assume it only advances by @'B' 1@ at a time. - | - ModifyCandidate UpstreamPeer B - -- ^ INVARIANT existing peer + ExtendSelection S + | -- | INVARIANT existing peer -- -- Mocks the necessary ChainSync client behavior. - | - NewCandidate UpstreamPeer B - -- ^ INVARIANT new peer + ModifyCandidate UpstreamPeer B + | -- | INVARIANT new peer -- -- Mocks the necessary ChainSync client behavior. - | - ReadGsmState - | - ReadMarker - | + NewCandidate UpstreamPeer B + | ReadGsmState + | ReadMarker + | -- | INVARIANT existing peer, not idling StartIdling UpstreamPeer - -- ^ INVARIANT existing peer, not idling - | - TimePasses Int - -- ^ tenths of a second + | -- | tenths of a second -- -- INVARIANT positive -- @@ -77,88 +67,74 @@ data Command r = -- -- NOTE The generator does not yield consecutive 'TimePasses' commands, -- though shrinking might. - -- - - -- DRAFT NOTE An earlier design attempted to prevent TimePasses from ever landing - -- exactly on an interesting moment, ie when some timeout expires. Notably: - -- when a slot becomes too old or when the anti-thrashing limit expires. - -- This is feasible and its complexity is tolerable for the generators. 
- -- However, shrinking------wait, doesn't precondition guard against it? - deriving stock (Generic1, Read, Show) + TimePasses Int + -- DRAFT NOTE An earlier design attempted to prevent TimePasses from ever landing + -- exactly on an interesting moment, ie when some timeout expires. Notably: + -- when a slot becomes too old or when the anti-thrashing limit expires. + -- This is feasible and its complexity is tolerable for the generators. + -- However, shrinking------wait, doesn't precondition guard against it? + deriving stock (Generic1, Read, Show) deriving anyclass (QSM.CommandNames, QSM.Foldable, QSM.Functor, QSM.Traversable) type Response :: (Type -> Type) -> Type -data Response r = - ReadThisGsmState GSM.GsmState - | - ReadThisMarker MarkerState - | - Unit - deriving stock (Generic1, Read, Show) +data Response r + = ReadThisGsmState GSM.GsmState + | ReadThisMarker MarkerState + | Unit + deriving stock (Generic1, Read, Show) deriving anyclass (QSM.Foldable, QSM.Functor, QSM.Traversable) -data Context = Context { - cInitialJudgement :: LedgerStateJudgement - , - cIsHaaSatisfied :: Set.Set UpstreamPeer -> Bool +data Context = Context + { cInitialJudgement :: LedgerStateJudgement + , cIsHaaSatisfied :: Set.Set UpstreamPeer -> Bool } type Model :: (Type -> Type) -> Type -data Model r = Model { - mCandidates :: Map.Map UpstreamPeer Candidate - , - mClock :: SI.Time - , - mIdlers :: Set.Set UpstreamPeer - , - mNotables :: Set.Set Notable - , - mPrev :: WhetherPrevTimePasses - , - mSelection :: Selection - , - mState :: ModelState +data Model r = Model + { mCandidates :: Map.Map UpstreamPeer Candidate + , mClock :: SI.Time + , mIdlers :: Set.Set UpstreamPeer + , mNotables :: Set.Set Notable + , mPrev :: WhetherPrevTimePasses + , mSelection :: Selection + , mState :: ModelState } deriving (Generic, Show) - deriving anyclass (TD.ToExpr) + deriving anyclass TD.ToExpr addNotableWhen :: Notable -> Bool -> Model r -> Model r addNotableWhen n b model = - if not b then model 
else - model { mNotables = n `Set.insert` mNotables model } + if not b + then model + else + model{mNotables = n `Set.insert` mNotables model} initModel :: Context -> Model r -initModel ctx = Model { - mCandidates = Map.empty - , - mClock = SI.Time 0 - , - mIdlers = idlers - , - mNotables = Set.empty - , - mPrev = WhetherPrevTimePasses True - , - mSelection = Selection 0 s - , - mState = case j of +initModel ctx = + Model + { mCandidates = Map.empty + , mClock = SI.Time 0 + , mIdlers = idlers + , mNotables = Set.empty + , mPrev = WhetherPrevTimePasses True + , mSelection = Selection 0 s + , mState = case j of TooOld | isHaaSatisfied idlers -> ModelSyncing - | otherwise -> ModelPreSyncing - YoungEnough -> ModelCaughtUp (SI.Time (-10000)) - } - where - Context { - cInitialJudgement = j - , - cIsHaaSatisfied = isHaaSatisfied - } = ctx + | otherwise -> ModelPreSyncing + YoungEnough -> ModelCaughtUp (SI.Time (-10000)) + } + where + Context + { cInitialJudgement = j + , cIsHaaSatisfied = isHaaSatisfied + } = ctx - idlers = Set.empty + idlers = Set.empty - s = S $ case j of - TooOld -> (-11) - YoungEnough -> 0 + s = S $ case j of + TooOld -> (-11) + YoungEnough -> 0 -- The extra expressivity of 'QSM.Logic' beyond 'Bool' will not be useful in -- this test module as-is, since we only run commands (via 'runCommands'') that @@ -167,423 +143,389 @@ initModel ctx = Model { -- annotations may be helpful. 
precondition :: Context -> Model Symbolic -> Command Symbolic -> QSM.Logic precondition ctx model = pre $ \case - cmd@ExtendSelection{} -> - let model' = transition ctx model cmd Unit - in - "syncing node got ahead" `atom` selectionIsBehind model - QSM..&& - "early selection" `atom` selectionIsNotEarly model' - QSM..&& - boringDur model' 0 - Disconnect peer -> - "double disconnect" `atom` (peer `Map.member` cands) - ModifyCandidate peer _bdel -> - "modify after disconnect" `atom` (peer `Map.member` cands) - NewCandidate peer _bdel -> - "double connect" `atom` (peer `Map.notMember` cands) - ReadGsmState -> - QSM.Top - ReadMarker -> - QSM.Top - StartIdling peer -> - "idle after disconnect" `atom` (peer `Map.member` cands) - QSM..&& - "double idle" `atom` (peer `Set.notMember` idlers) - TimePasses dur -> - "non-positive duration" `atom` (0 < dur) - QSM..&& - boringDur model dur - where - Model { - mCandidates = cands - , - mIdlers = idlers - } = model - - pre f cmd = f cmd QSM..// show cmd + cmd@ExtendSelection{} -> + let model' = transition ctx model cmd Unit + in "syncing node got ahead" + `atom` selectionIsBehind model + QSM..&& "early selection" + `atom` selectionIsNotEarly model' + QSM..&& boringDur model' 0 + Disconnect peer -> + "double disconnect" `atom` (peer `Map.member` cands) + ModifyCandidate peer _bdel -> + "modify after disconnect" `atom` (peer `Map.member` cands) + NewCandidate peer _bdel -> + "double connect" `atom` (peer `Map.notMember` cands) + ReadGsmState -> + QSM.Top + ReadMarker -> + QSM.Top + StartIdling peer -> + "idle after disconnect" + `atom` (peer `Map.member` cands) + QSM..&& "double idle" + `atom` (peer `Set.notMember` idlers) + TimePasses dur -> + "non-positive duration" + `atom` (0 < dur) + QSM..&& boringDur model dur + where + Model + { mCandidates = cands + , mIdlers = idlers + } = model + + pre f cmd = f cmd QSM..// show cmd transition :: Context -> Model r -> Command r -> Response r -> Model r transition ctx model cmd resp = 
fixupModelState ctx cmd $ case (cmd, resp) of - (Disconnect peer, Unit) -> - model' { - mCandidates = Map.delete peer cands - , - mIdlers = Set.delete peer idlers - } - (ExtendSelection sdel, Unit) -> - model' { mSelection = Selection (b + 1) (s + sdel) } - (ModifyCandidate peer bdel, Unit) -> - model' { - mCandidates = Map.insertWith plusC peer (Candidate bdel) cands - , - mIdlers = Set.delete peer idlers - } - (NewCandidate peer bdel, Unit) -> - model' { mCandidates = Map.insert peer (Candidate (b + bdel)) cands } - (ReadGsmState, ReadThisGsmState{}) -> - model' - (ReadMarker, ReadThisMarker{}) -> - model' - (StartIdling peer, Unit) -> - model' { mIdlers = Set.insert peer idlers } - (TimePasses dur, Unit) -> - addNotableWhen BigDurN (dur > 300) - $ model { - mClock = SI.addTime (0.1 * fromIntegral dur) clk - , - mPrev = WhetherPrevTimePasses True - } - o -> error $ "impossible response: " <> show o - where - Model { - mCandidates = cands - , - mClock = clk - , - mIdlers = idlers - , - mSelection = Selection b s - } = model - - model' = model { mPrev = WhetherPrevTimePasses False } - - plusC (Candidate x) (Candidate y) = Candidate (x + y) + (Disconnect peer, Unit) -> + model' + { mCandidates = Map.delete peer cands + , mIdlers = Set.delete peer idlers + } + (ExtendSelection sdel, Unit) -> + model'{mSelection = Selection (b + 1) (s + sdel)} + (ModifyCandidate peer bdel, Unit) -> + model' + { mCandidates = Map.insertWith plusC peer (Candidate bdel) cands + , mIdlers = Set.delete peer idlers + } + (NewCandidate peer bdel, Unit) -> + model'{mCandidates = Map.insert peer (Candidate (b + bdel)) cands} + (ReadGsmState, ReadThisGsmState{}) -> + model' + (ReadMarker, ReadThisMarker{}) -> + model' + (StartIdling peer, Unit) -> + model'{mIdlers = Set.insert peer idlers} + (TimePasses dur, Unit) -> + addNotableWhen BigDurN (dur > 300) $ + model + { mClock = SI.addTime (0.1 * fromIntegral dur) clk + , mPrev = WhetherPrevTimePasses True + } + o -> error $ "impossible response: 
" <> show o + where + Model + { mCandidates = cands + , mClock = clk + , mIdlers = idlers + , mSelection = Selection b s + } = model + + model' = model{mPrev = WhetherPrevTimePasses False} + + plusC (Candidate x) (Candidate y) = Candidate (x + y) -- | Update the 'mState', assuming that's the only stale field in the given -- 'Model' fixupModelState :: Context -> Command r -> Model r -> Model r fixupModelState ctx cmd model = - case st of - ModelPreSyncing - | haaSatisfied -> - avoidTransientState - $ addNotableWhen PreSyncingToSyncingN True - $ model { mState = ModelSyncing } - | otherwise -> - model - ModelSyncing - | not haaSatisfied -> - addNotableWhen SyncingToPreSyncingN True - $ model { mState = ModelPreSyncing } - | caughtUp -> - -- ASSUMPTION This new state was /NOT/ incurred by the 'TimePasses' - -- command. - -- - -- Therefore the current clock is necessarily the correct timestamp - -- to record. - addNotableWhen CaughtUpN True - $ model { mState = ModelCaughtUp clk } - | otherwise -> - model - ModelCaughtUp timestamp - | flicker timestamp -> - addNotableWhen FlickerN True - $ model { mState = ModelCaughtUp (flickerTimestamp timestamp) } - | fellBehind timestamp -> - avoidTransientState - $ addNotableWhen FellBehindN True - $ model { mState = ModelPreSyncing } - | otherwise -> - -- NB in this branch, these notables are mutually exclusive - addNotableWhen TooOldN (expiryAge < clk) - $ addNotableWhen - NotThrashingN - (SI.Time 0 < timestamp && expiryThrashing timestamp < clk) - $ model - where - Model { - mCandidates = cands - , - mClock = clk - , - mIdlers = idlers - , - mSelection = sel - , - mState = st - } = model - - Context { - cIsHaaSatisfied = isHaaSatisfied - } = ctx - - haaSatisfied = isHaaSatisfied $ Map.keysSet cands - caughtUp = some && allIdling && all ok cands - fellBehind timestamp = expiry timestamp < clk -- NB 'boringDur' prevents == - - flicker timestamp = fellBehind timestamp && caughtUp && haaSatisfied - - some = 0 < Map.size cands - - 
allIdling = idlers == Map.keysSet cands - - ok cand = - GSM.WhetherCandidateIsBetter False == candidateOverSelection sel cand - - -- the first time the node would transition to PreSyncing - expiry timestamp = expiryAge `max` expiryThrashing timestamp - expiryAge = SI.addTime ageLimit (onset sel) - expiryThrashing timestamp = SI.addTime thrashLimit timestamp - - -- It's possible for the node to instantly return to CaughtUp, but that - -- might have happened /during/ the 'TimePasses' command, not only when it - -- ends. - -- - -- Therefore the age limit of the selection is the correct timestamp to - -- record, instead of the current clock (ie when the 'TimePasses' ended). - -- - -- NOTE Superficially, in the real implementation, the Diffusion Layer - -- should be discarding all peers when transitioning from CaughtUp to - -- PreSyncing. However, it would be plausible for an implementation to - -- retain any bootstrap/ledger peers it happened to have, so the - -- idiosyncratic behavior of the system under test in this module is not - -- totally irrelevant. - -- - -- the /last/ time the node instantaneously visited PreSyncing during the - -- 'TimePasses' command, assuming it did so at least once - flickerTimestamp timestamp = case cmd of - ExtendSelection sdel | sdel < 0 -> + case st of + ModelPreSyncing + | haaSatisfied -> + avoidTransientState $ + addNotableWhen PreSyncingToSyncingN True $ + model{mState = ModelSyncing} + | otherwise -> + model + ModelSyncing + | not haaSatisfied -> + addNotableWhen SyncingToPreSyncingN True $ + model{mState = ModelPreSyncing} + | caughtUp -> + -- ASSUMPTION This new state was /NOT/ incurred by the 'TimePasses' + -- command. + -- + -- Therefore the current clock is necessarily the correct timestamp + -- to record. 
+ addNotableWhen CaughtUpN True $ + model{mState = ModelCaughtUp clk} + | otherwise -> + model + ModelCaughtUp timestamp + | flicker timestamp -> + addNotableWhen FlickerN True $ + model{mState = ModelCaughtUp (flickerTimestamp timestamp)} + | fellBehind timestamp -> + avoidTransientState $ + addNotableWhen FellBehindN True $ + model{mState = ModelPreSyncing} + | otherwise -> + -- NB in this branch, these notables are mutually exclusive + addNotableWhen TooOldN (expiryAge < clk) + $ addNotableWhen + NotThrashingN + (SI.Time 0 < timestamp && expiryThrashing timestamp < clk) + $ model + where + Model + { mCandidates = cands + , mClock = clk + , mIdlers = idlers + , mSelection = sel + , mState = st + } = model + + Context + { cIsHaaSatisfied = isHaaSatisfied + } = ctx + + haaSatisfied = isHaaSatisfied $ Map.keysSet cands + caughtUp = some && allIdling && all ok cands + fellBehind timestamp = expiry timestamp < clk -- NB 'boringDur' prevents == + flicker timestamp = fellBehind timestamp && caughtUp && haaSatisfied + + some = 0 < Map.size cands + + allIdling = idlers == Map.keysSet cands + + ok cand = + GSM.WhetherCandidateIsBetter False == candidateOverSelection sel cand + + -- the first time the node would transition to PreSyncing + expiry timestamp = expiryAge `max` expiryThrashing timestamp + expiryAge = SI.addTime ageLimit (onset sel) + expiryThrashing timestamp = SI.addTime thrashLimit timestamp + + -- It's possible for the node to instantly return to CaughtUp, but that + -- might have happened /during/ the 'TimePasses' command, not only when it + -- ends. + -- + -- Therefore the age limit of the selection is the correct timestamp to + -- record, instead of the current clock (ie when the 'TimePasses' ended). + -- + -- NOTE Superficially, in the real implementation, the Diffusion Layer + -- should be discarding all peers when transitioning from CaughtUp to + -- PreSyncing. 
However, it would be plausible for an implementation to + -- retain any bootstrap/ledger peers it happened to have, so the + -- idiosyncratic behavior of the system under test in this module is not + -- totally irrelevant. + -- + -- the /last/ time the node instantaneously visited PreSyncing during the + -- 'TimePasses' command, assuming it did so at least once + flickerTimestamp timestamp = case cmd of + ExtendSelection sdel + | sdel < 0 -> clk - TimePasses{} -> - foldl max (expiry timestamp) - $ takeWhile (< clk) -- NB 'boringDur' prevents == - $ iterate (SI.addTime thrashLimit) (expiry timestamp) - _ -> - error - $ "impossible! flicker but neither " - <> - "negative ExtendSelection nor TimePasses: " - <> - show cmd - - avoidTransientState = fixupModelState ctx cmd + TimePasses{} -> + foldl max (expiry timestamp) $ + takeWhile (< clk) $ -- NB 'boringDur' prevents == + iterate (SI.addTime thrashLimit) (expiry timestamp) + _ -> + error $ + "impossible! flicker but neither " + <> "negative ExtendSelection nor TimePasses: " + <> show cmd + + avoidTransientState = fixupModelState ctx cmd postcondition :: - Model Concrete - -> Command Concrete - -> Response Concrete - -> QSM.Logic + Model Concrete -> + Command Concrete -> + Response Concrete -> + QSM.Logic postcondition model _cmd = \case - ReadThisGsmState s' -> - s' QSM..== s - ReadThisMarker m' -> - m' QSM..== toMarker s - Unit -> - QSM.Top - where - s = toGsmState $ mState model + ReadThisGsmState s' -> + s' QSM..== s + ReadThisMarker m' -> + m' QSM..== toMarker s + Unit -> + QSM.Top + where + s = toGsmState $ mState model generator :: - Maybe UpstreamPeer - -> Model Symbolic - -> Maybe (QC.Gen (Command Symbolic)) -generator ub model = Just $ QC.frequency $ - [ (,) 5 $ Disconnect <$> elements old | notNull old ] - <> - [ (,) 10 $ ExtendSelection <$> elements sdels - | notNull sdels - , selectionIsBehind model -- NB harmless to assume this node never mints - ] - <> - [ (,) 20 $ do - (peer, bdel) <- elements bdels - 
ModifyCandidate peer <$> elements bdel - | notNull bdels - ] - <> - [ (,) 100 $ - NewCandidate - <$> elements new - <*> (B <$> choose (-10, 10)) - | notNull new - ] - <> - [ (,) 20 $ pure ReadGsmState ] - <> - [ (,) 20 $ pure ReadMarker ] - <> - [ (,) 50 $ StartIdling <$> elements oldNotIdling | notNull oldNotIdling ] - <> - [ (,) 100 $ TimePasses <$> genTimePassesDur | prev == WhetherPrevTimePasses False ] - where - Model { - mCandidates = cands - , - mClock = clk - , - mIdlers = idlers - , - mPrev = prev - , - mSelection = sel - } = model - - notNull :: [a] -> Bool - notNull = not . null - - old = Map.keys cands - - new = case ub of - Nothing -> [] - Just peer -> [ minBound .. peer ] \\ old - - oldNotIdling = old \\ Set.toList idlers - - genTimePassesDur = QC.frequency $ - [ (,) 10 $ choose (1, 70) ] - <> - [ (,) 1 $ choose (300, 600) - | case mState model of - ModelCaughtUp{} -> True - ModelPreSyncing{} -> False - ModelSyncing{} -> False + Maybe UpstreamPeer -> + Model Symbolic -> + Maybe (QC.Gen (Command Symbolic)) +generator ub model = + Just $ + QC.frequency $ + [(,) 5 $ Disconnect <$> elements old | notNull old] + <> [ (,) 10 $ ExtendSelection <$> elements sdels + | notNull sdels + , selectionIsBehind model -- NB harmless to assume this node never mints + ] + <> [ (,) 20 $ do + (peer, bdel) <- elements bdels + ModifyCandidate peer <$> elements bdel + | notNull bdels + ] + <> [ (,) 100 $ + NewCandidate + <$> elements new + <*> (B <$> choose (-10, 10)) + | notNull new + ] + <> [(,) 20 $ pure ReadGsmState] + <> [(,) 20 $ pure ReadMarker] + <> [(,) 50 $ StartIdling <$> elements oldNotIdling | notNull oldNotIdling] + <> [(,) 100 $ TimePasses <$> genTimePassesDur | prev == WhetherPrevTimePasses False] + where + Model + { mCandidates = cands + , mClock = clk + , mIdlers = idlers + , mPrev = prev + , mSelection = sel + } = model + + notNull :: [a] -> Bool + notNull = not . null + + old = Map.keys cands + + new = case ub of + Nothing -> [] + Just peer -> [minBound .. 
peer] \\ old + + oldNotIdling = old \\ Set.toList idlers + + genTimePassesDur = + QC.frequency $ + [(,) 10 $ choose (1, 70)] + <> [ (,) 1 $ choose (300, 600) + | case mState model of + ModelCaughtUp{} -> True + ModelPreSyncing{} -> False + ModelSyncing{} -> False + ] + + -- sdels that would not cause the selection to be in the future + sdels = + let Selection b s = sel + in [ sdel + | sdel <- map S [-4 .. 10] + , 0 /= sdel + , onset (Selection b (s + sdel)) <= clk ] - -- sdels that would not cause the selection to be in the future - sdels = - let Selection b s = sel - in - [ sdel - | sdel <- map S [-4 .. 10] - , 0 /= sdel - , onset (Selection b (s + sdel)) <= clk - ] - - -- bdels that keep the candidates' lengths near the selection - bdels = - let Selection b _s = sel - lim = 3 - in - [ (,) - peer - (filter (/= 0) [ b + offset - c | offset <- [-lim .. lim] ]) - | (peer, Candidate c) <- Map.assocs cands - ] - + -- bdels that keep the candidates' lengths near the selection + bdels = + let Selection b _s = sel + lim = 3 + in [ (,) + peer + (filter (/= 0) [b + offset - c | offset <- [-lim .. 
lim]]) + | (peer, Candidate c) <- Map.assocs cands + ] shrinker :: Model Symbolic -> Command Symbolic -> [Command Symbolic] shrinker _model = \case - Disconnect{} -> - [] - ExtendSelection sdel -> - [ ExtendSelection sdel' | sdel' <- shrinkS sdel ] - ModifyCandidate peer bdel -> - [ ModifyCandidate peer bdel' | bdel' <- shrinkB bdel, bdel' /= 0 ] - NewCandidate peer bdel -> - [ NewCandidate peer bdel' | bdel' <- shrinkB bdel, bdel' /= 0 ] - ReadGsmState -> - [] - ReadMarker -> - [] - StartIdling{} -> - [] - TimePasses dur -> - [ TimePasses dur' | dur' <- shrink dur, 0 < dur' ] - where - shrinkB (B x) = [ B x' | x' <- shrink x ] - shrinkS (S x) = [ S x' | x' <- shrink x ] + Disconnect{} -> + [] + ExtendSelection sdel -> + [ExtendSelection sdel' | sdel' <- shrinkS sdel] + ModifyCandidate peer bdel -> + [ModifyCandidate peer bdel' | bdel' <- shrinkB bdel, bdel' /= 0] + NewCandidate peer bdel -> + [NewCandidate peer bdel' | bdel' <- shrinkB bdel, bdel' /= 0] + ReadGsmState -> + [] + ReadMarker -> + [] + StartIdling{} -> + [] + TimePasses dur -> + [TimePasses dur' | dur' <- shrink dur, 0 < dur'] + where + shrinkB (B x) = [B x' | x' <- shrink x] + shrinkS (S x) = [S x' | x' <- shrink x] mock :: Model r -> Command r -> Response r mock model = \case - Disconnect{} -> - Unit - ExtendSelection{} -> - Unit - ModifyCandidate{} -> - Unit - NewCandidate{} -> - Unit - ReadGsmState -> - ReadThisGsmState s - ReadMarker -> - ReadThisMarker $ toMarker s - StartIdling{} -> - Unit - TimePasses{} -> - Unit - where - s = toGsmState $ mState model + Disconnect{} -> + Unit + ExtendSelection{} -> + Unit + ModifyCandidate{} -> + Unit + NewCandidate{} -> + Unit + ReadGsmState -> + ReadThisGsmState s + ReadMarker -> + ReadThisMarker $ toMarker s + StartIdling{} -> + Unit + TimePasses{} -> + Unit + where + s = toGsmState $ mState model ----- -- | A block count newtype B = B Int - deriving stock (Eq, Ord, Generic, Read, Show) - deriving newtype (Enum, Num) - deriving anyclass (TD.ToExpr) + 
deriving stock (Eq, Ord, Generic, Read, Show) + deriving newtype (Enum, Num) + deriving anyclass TD.ToExpr -- | A slot count newtype S = S Int - deriving stock (Eq, Ord, Generic, Read, Show) - deriving newtype (Enum, Num) - deriving anyclass (TD.ToExpr) + deriving stock (Eq, Ord, Generic, Read, Show) + deriving newtype (Enum, Num) + deriving anyclass TD.ToExpr data UpstreamPeer = Amara | Bao | Cait | Dhani | Eric - deriving stock (Bounded, Enum, Eq, Ord, Generic, Read, Show) + deriving stock (Bounded, Enum, Eq, Ord, Generic, Read, Show) deriving anyclass (TD.ToExpr, QC.CoArbitrary, QC.Function) -- | The cumulative growth relative to whatever length the initial selection -- was and the slot relative to the start of the test (which is assumed to be -- the exact onset of some slot) data Selection = Selection !B !S - deriving stock (Eq, Ord, Generic, Show) - deriving anyclass (TD.ToExpr) + deriving stock (Eq, Ord, Generic, Show) + deriving anyclass TD.ToExpr -- | The age of the candidate is irrelevant, only its length matters newtype Candidate = Candidate B - deriving stock (Eq, Ord, Generic, Show) - deriving anyclass (TD.ToExpr) + deriving stock (Eq, Ord, Generic, Show) + deriving anyclass TD.ToExpr data MarkerState = Present | Absent - deriving stock (Eq, Ord, Generic, Read, Show) - deriving anyclass (TD.ToExpr) + deriving stock (Eq, Ord, Generic, Read, Show) + deriving anyclass TD.ToExpr newtype WhetherPrevTimePasses = WhetherPrevTimePasses Bool - deriving stock (Eq, Ord, Generic, Show) - deriving anyclass (TD.ToExpr) + deriving stock (Eq, Ord, Generic, Show) + deriving anyclass TD.ToExpr -data ModelState = - ModelPreSyncing - | - ModelSyncing - | +data ModelState + = ModelPreSyncing + | ModelSyncing + | -- | when the model most recently transitioned to 'GSM.CaughtUp'. ModelCaughtUp !SI.Time - -- ^ when the model most recently transitioned to 'GSM.CaughtUp'. 
- deriving stock (Eq, Ord, Generic, Show) - deriving anyclass (TD.ToExpr) + deriving stock (Eq, Ord, Generic, Show) + deriving anyclass TD.ToExpr ----- -- | Interesting events to record /within the model/ -- -- TODO some less superficial ones (eg even just combinations of these) -data Notable = +data Notable + = -- | there was a "big" 'TimesPasses' command BigDurN - -- ^ there was a "big" 'TimesPasses' command - | + | -- | the node transitioned from Syncing to CaughtUp CaughtUpN - -- ^ the node transitioned from Syncing to CaughtUp - | + | -- | the node transitioned from CaughtUp to PreSyncing FellBehindN - -- ^ the node transitioned from CaughtUp to PreSyncing - | + | -- | the node transition from Syncing to PreSyncing SyncingToPreSyncingN - -- ^ the node transition from Syncing to PreSyncing - | + | -- | the node transition from PreSyncing to Syncing PreSyncingToSyncingN - -- ^ the node transition from PreSyncing to Syncing - | - FlickerN - -- ^ the node transitioned from CaughtUp to PreSyncing to Syncing and back + | -- | the node transitioned from CaughtUp to PreSyncing to Syncing and back -- to CaughtUp "instantly" - | - NotThrashingN - -- ^ the anti-thrashing would have allowed 'FellBehindN', but the selection + FlickerN + | -- | the anti-thrashing would have allowed 'FellBehindN', but the selection -- wasn't old enough - | - TooOldN - -- ^ the selection was old enough for 'FellBehindN', but the anti-thrashing + NotThrashingN + | -- | the selection was old enough for 'FellBehindN', but the anti-thrashing -- prevented it + TooOldN deriving (Eq, Ord, Show) instance TD.ToExpr Notable where toExpr = TD.defaultExprViaShow @@ -593,41 +535,41 @@ instance TD.ToExpr Notable where toExpr = TD.defaultExprViaShow deriving instance Read LedgerStateJudgement instance QC.Arbitrary LedgerStateJudgement where - arbitrary = elements [TooOld, YoungEnough] - shrink = \case - TooOld -> [YoungEnough] - YoungEnough -> [] + arbitrary = elements [TooOld, YoungEnough] + shrink = \case 
+ TooOld -> [YoungEnough] + YoungEnough -> [] instance QC.Arbitrary MarkerState where - arbitrary = elements [Absent, Present] - shrink = \case - Absent -> [Present] - Present -> [] + arbitrary = elements [Absent, Present] + shrink = \case + Absent -> [Present] + Present -> [] ----- candidateOverSelection :: - Selection - -> Candidate - -> GSM.CandidateVersusSelection + Selection -> + Candidate -> + GSM.CandidateVersusSelection candidateOverSelection (Selection b _s) (Candidate b') = - -- TODO this ignores CandidateDoesNotIntersect, which seems harmless, but - -- I'm not quite sure - GSM.WhetherCandidateIsBetter (b < b') + -- TODO this ignores CandidateDoesNotIntersect, which seems harmless, but + -- I'm not quite sure + GSM.WhetherCandidateIsBetter (b < b') ----- toGsmState :: ModelState -> GSM.GsmState toGsmState = \case - ModelPreSyncing -> GSM.PreSyncing - ModelSyncing -> GSM.Syncing - ModelCaughtUp{} -> GSM.CaughtUp + ModelPreSyncing -> GSM.PreSyncing + ModelSyncing -> GSM.Syncing + ModelCaughtUp{} -> GSM.CaughtUp toMarker :: GSM.GsmState -> MarkerState toMarker = \case - GSM.PreSyncing -> Absent - GSM.Syncing -> Absent - GSM.CaughtUp -> Present + GSM.PreSyncing -> Absent + GSM.Syncing -> Absent + GSM.CaughtUp -> Present ----- @@ -638,30 +580,28 @@ onset :: Selection -> SI.Time onset (Selection _b (S s)) = SI.Time $ fromIntegral s ageLimit :: Num a => a -ageLimit = 10 -- seconds +ageLimit = 10 -- seconds thrashLimit :: Num a => a -thrashLimit = 8 -- seconds +thrashLimit = 8 -- seconds selectionIsBehind :: Model r -> Bool selectionIsBehind model = - any (\(Candidate b') -> b' > b) cands - where - Model { - mCandidates = cands - , - mSelection = Selection b _s - } = model + any (\(Candidate b') -> b' > b) cands + where + Model + { mCandidates = cands + , mSelection = Selection b _s + } = model selectionIsNotEarly :: Model r -> Bool selectionIsNotEarly model = - onset sel <= clk - where - Model { - mClock = clk - , - mSelection = sel - } = model + onset sel <= 
clk + where + Model + { mClock = clk + , mSelection = sel + } = model -- | Checks that a 'TimePasses' command does not end exactly when a timeout -- could fire and that a 'ExtendSelection' does not incur a timeout that would @@ -671,35 +611,32 @@ selectionIsNotEarly model = -- world. boringDur :: Model r -> Int -> QSM.Logic boringDur model dur = - boringSelection QSM..&& boringState - where - Model { - mClock = clk - , - mSelection = sel - , - mState = st - } = model - - -- the first time the node would transition to PreSyncing - expiry timestamp = expiryAge `max` expiryThrashing timestamp - expiryAge = SI.addTime ageLimit (onset sel) - expiryThrashing timestamp = SI.addTime thrashLimit timestamp - - clk' = SI.addTime (0.1 * fromIntegral dur) clk - - boringSelection = "boringDur selection" `atom` (clk' /= expiryAge) - - boringState = case st of - ModelPreSyncing -> QSM.Top - ModelSyncing -> QSM.Top - ModelCaughtUp timestamp -> - let gap = clk' `SI.diffTime` expiry timestamp - n = - mod - (diffTimeToPicoseconds gap) - (secondsToPicoseconds thrashLimit) - in - "boringDur state" `atom` (gap < 0 || 0 /= n) - - secondsToPicoseconds x = x * 10 ^ (12 :: Int) + boringSelection QSM..&& boringState + where + Model + { mClock = clk + , mSelection = sel + , mState = st + } = model + + -- the first time the node would transition to PreSyncing + expiry timestamp = expiryAge `max` expiryThrashing timestamp + expiryAge = SI.addTime ageLimit (onset sel) + expiryThrashing timestamp = SI.addTime thrashLimit timestamp + + clk' = SI.addTime (0.1 * fromIntegral dur) clk + + boringSelection = "boringDur selection" `atom` (clk' /= expiryAge) + + boringState = case st of + ModelPreSyncing -> QSM.Top + ModelSyncing -> QSM.Top + ModelCaughtUp timestamp -> + let gap = clk' `SI.diffTime` expiry timestamp + n = + mod + (diffTimeToPicoseconds gap) + (secondsToPicoseconds thrashLimit) + in "boringDur state" `atom` (gap < 0 || 0 /= n) + + secondsToPicoseconds x = x * 10 ^ (12 :: Int) diff --git 
a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup.hs index 632249c89c..e1d90c16a3 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup.hs @@ -5,41 +5,50 @@ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.Consensus.Genesis.Setup ( - module Test.Consensus.Genesis.Setup.GenChains +module Test.Consensus.Genesis.Setup + ( module Test.Consensus.Genesis.Setup.GenChains , forAllGenesisTest , runGenesisTest , runGenesisTest' ) where -import Control.Exception (throw) -import Control.Monad.Class.MonadAsync - (AsyncCancelled (AsyncCancelled)) -import Control.Monad.IOSim (IOSim, runSimStrictShutdown) -import Control.Tracer (debugTracer, traceWith) -import Data.Maybe (mapMaybe) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncClientException (..)) -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.IOLike (Exception, fromException) -import Ouroboros.Network.Driver.Limits - (ProtocolLimitFailure (ExceededTimeLimit)) -import Test.Consensus.Genesis.Setup.Classifiers (Classifiers (..), - ResultClassifiers (..), ScheduleClassifiers (..), - classifiers, resultClassifiers, scheduleClassifiers) -import Test.Consensus.Genesis.Setup.GenChains -import Test.Consensus.PeerSimulator.Run -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PeerSimulator.Trace (traceLinesWith, - tracerTestBlock) -import Test.Consensus.PointSchedule -import Test.QuickCheck -import Test.Util.Orphans.IOLike () -import Test.Util.QuickCheck (forAllGenRunShrinkCheck) -import Test.Util.TestBlock (TestBlock) -import Test.Util.Tracer (recordingTracerM) -import Text.Printf (printf) - +import Control.Exception (throw) +import Control.Monad.Class.MonadAsync + ( AsyncCancelled (AsyncCancelled) + ) 
+import Control.Monad.IOSim (IOSim, runSimStrictShutdown) +import Control.Tracer (debugTracer, traceWith) +import Data.Maybe (mapMaybe) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientException (..) + ) +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.IOLike (Exception, fromException) +import Ouroboros.Network.Driver.Limits + ( ProtocolLimitFailure (ExceededTimeLimit) + ) +import Test.Consensus.Genesis.Setup.Classifiers + ( Classifiers (..) + , ResultClassifiers (..) + , ScheduleClassifiers (..) + , classifiers + , resultClassifiers + , scheduleClassifiers + ) +import Test.Consensus.Genesis.Setup.GenChains +import Test.Consensus.PeerSimulator.Run +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PeerSimulator.Trace + ( traceLinesWith + , tracerTestBlock + ) +import Test.Consensus.PointSchedule +import Test.QuickCheck +import Test.Util.Orphans.IOLike () +import Test.Util.QuickCheck (forAllGenRunShrinkCheck) +import Test.Util.TestBlock (TestBlock) +import Test.Util.Tracer (recordingTracerM) +import Text.Printf (printf) -- | Like 'runSimStrictShutdown' but fail when the main thread terminates if -- there are other threads still running or blocked. If one is trying to follow @@ -47,7 +56,7 @@ import Text.Printf (printf) runSimStrictShutdownOrThrow :: forall a. (forall s. IOSim s a) -> a runSimStrictShutdownOrThrow action = case runSimStrictShutdown action of - Left e -> throw e + Left e -> throw e Right x -> x -- | Runs the given 'GenesisTest' and 'PointSchedule' and evaluates the given @@ -67,7 +76,7 @@ runGenesisTest schedulerConfig genesisTest = traceWith tracer (condense rgtrStateView) rgtrTrace <- unlines <$> getTrace - pure $ RunGenesisTestResult {rgtrTrace, rgtrStateView} + pure $ RunGenesisTestResult{rgtrTrace, rgtrStateView} -- | Variant of 'runGenesisTest' that also takes a property on the final -- 'StateView' and returns a QuickCheck property. 
The trace is printed in case @@ -79,10 +88,10 @@ runGenesisTest' :: (StateView TestBlock -> prop) -> Property runGenesisTest' schedulerConfig genesisTest makeProperty = - counterexample rgtrTrace $ makeProperty rgtrStateView - where - RunGenesisTestResult{rgtrTrace, rgtrStateView} = - runGenesisTest schedulerConfig genesisTest + counterexample rgtrTrace $ makeProperty rgtrStateView + where + RunGenesisTestResult{rgtrTrace, rgtrStateView} = + runGenesisTest schedulerConfig genesisTest -- | All-in-one helper that generates a 'GenesisTest' and a 'Peers -- PeerSchedule', runs them with 'runGenesisTest', check whether the given @@ -100,37 +109,44 @@ forAllGenesisTest generator schedulerConfig shrinker mkProperty = resCls = resultClassifiers genesisTest result schCls = scheduleClassifiers genesisTest stateView = rgtrStateView result - in classify (allAdversariesSelectable cls) "All adversaries have more than k blocks after intersection" $ - classify (allAdversariesForecastable cls) "All adversaries have at least 1 forecastable block after intersection" $ - classify (allAdversariesKPlus1InForecast cls) "All adversaries have k+1 blocks in forecast window after intersection" $ - classify (genesisWindowAfterIntersection cls) "Full genesis window after intersection" $ - classify (adversaryRollback schCls) "An adversary did a rollback" $ - classify (honestRollback schCls) "The honest peer did a rollback" $ - classify (allAdversariesEmpty schCls) "All adversaries have empty schedules" $ - classify (allAdversariesTrivial schCls) "All adversaries have trivial schedules" $ - tabulate "Adversaries killed by LoP" [printf "%.1f%%" $ adversariesKilledByLoP resCls] $ - tabulate "Adversaries killed by GDD" [printf "%.1f%%" $ adversariesKilledByGDD resCls] $ - tabulate "Adversaries killed by Timeout" [printf "%.1f%%" $ adversariesKilledByTimeout resCls] $ - tabulate "Surviving adversaries" [printf "%.1f%%" $ adversariesSurvived resCls] $ - counterexample (rgtrTrace result) $ - mkProperty 
genesisTest stateView .&&. hasOnlyExpectedExceptions stateView - where - runner = runGenesisTest schedulerConfig - shrinker' gt = shrinker gt . rgtrStateView - hasOnlyExpectedExceptions StateView{svPeerSimulatorResults} = - conjoin $ isExpectedException <$> mapMaybe - (pscrToException . pseResult) - svPeerSimulatorResults - isExpectedException exn - | Just EmptyBucket <- e = true - | Just DensityTooLow <- e = true - | Just (ExceededTimeLimit _) <- e = true - | Just AsyncCancelled <- e = true - | Just CandidateTooSparse{} <- e = true - | otherwise = counterexample - ("Encountered unexpected exception: " ++ show exn) - False - where - e :: (Exception e) => Maybe e - e = fromException exn - true = property True + in classify (allAdversariesSelectable cls) "All adversaries have more than k blocks after intersection" + $ classify + (allAdversariesForecastable cls) + "All adversaries have at least 1 forecastable block after intersection" + $ classify + (allAdversariesKPlus1InForecast cls) + "All adversaries have k+1 blocks in forecast window after intersection" + $ classify (genesisWindowAfterIntersection cls) "Full genesis window after intersection" + $ classify (adversaryRollback schCls) "An adversary did a rollback" + $ classify (honestRollback schCls) "The honest peer did a rollback" + $ classify (allAdversariesEmpty schCls) "All adversaries have empty schedules" + $ classify (allAdversariesTrivial schCls) "All adversaries have trivial schedules" + $ tabulate "Adversaries killed by LoP" [printf "%.1f%%" $ adversariesKilledByLoP resCls] + $ tabulate "Adversaries killed by GDD" [printf "%.1f%%" $ adversariesKilledByGDD resCls] + $ tabulate "Adversaries killed by Timeout" [printf "%.1f%%" $ adversariesKilledByTimeout resCls] + $ tabulate "Surviving adversaries" [printf "%.1f%%" $ adversariesSurvived resCls] + $ counterexample (rgtrTrace result) + $ mkProperty genesisTest stateView .&&. 
hasOnlyExpectedExceptions stateView + where + runner = runGenesisTest schedulerConfig + shrinker' gt = shrinker gt . rgtrStateView + hasOnlyExpectedExceptions StateView{svPeerSimulatorResults} = + conjoin $ + isExpectedException + <$> mapMaybe + (pscrToException . pseResult) + svPeerSimulatorResults + isExpectedException exn + | Just EmptyBucket <- e = true + | Just DensityTooLow <- e = true + | Just (ExceededTimeLimit _) <- e = true + | Just AsyncCancelled <- e = true + | Just CandidateTooSparse{} <- e = true + | otherwise = + counterexample + ("Encountered unexpected exception: " ++ show exn) + False + where + e :: Exception e => Maybe e + e = fromException exn + true = property True diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/Classifiers.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/Classifiers.hs index 76be99cf96..c775060983 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/Classifiers.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/Classifiers.hs @@ -4,8 +4,8 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} -module Test.Consensus.Genesis.Setup.Classifiers ( - Classifiers (..) +module Test.Consensus.Genesis.Setup.Classifiers + ( Classifiers (..) , ResultClassifiers (..) , ScheduleClassifiers (..) 
, classifiers @@ -14,137 +14,155 @@ module Test.Consensus.Genesis.Setup.Classifiers ( , simpleHash ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Cardano.Slotting.Slot (WithOrigin (..)) -import Data.List (sortOn, tails) -import qualified Data.List.NonEmpty as NonEmpty -import qualified Data.Map as Map -import Data.Maybe (mapMaybe) -import Data.Word (Word64) -import Ouroboros.Consensus.Block (ChainHash (..), HeaderHash, - blockSlot, succWithOrigin) -import Ouroboros.Consensus.Block.Abstract (SlotNo (SlotNo), - withOrigin) -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncClientException (DensityTooLow, EmptyBucket)) -import Ouroboros.Consensus.Util.IOLike (SomeException, fromException) -import Ouroboros.Network.AnchoredFragment (anchor, anchorToSlotNo, - headSlot) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Driver.Limits - (ProtocolLimitFailure (ExceededTimeLimit)) -import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..)) -import Test.Consensus.Network.AnchoredFragment.Extras (slotLength) -import Test.Consensus.PeerSimulator.StateView - (PeerSimulatorResult (..), StateView (..), pscrToException) -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Peers (PeerId (..), Peers (..)) -import Test.Consensus.PointSchedule.SinglePeer (SchedulePoint (..)) -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (TestBlock, TestHash (TestHash), - isAncestorOf) +import Cardano.Ledger.BaseTypes (unNonZero) +import Cardano.Slotting.Slot (WithOrigin (..)) +import Data.List (sortOn, tails) +import Data.List.NonEmpty qualified as NonEmpty +import Data.Map qualified as Map +import Data.Maybe (mapMaybe) +import Data.Word (Word64) +import Ouroboros.Consensus.Block + ( ChainHash (..) 
+ , HeaderHash + , blockSlot + , succWithOrigin + ) +import Ouroboros.Consensus.Block.Abstract + ( SlotNo (SlotNo) + , withOrigin + ) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientException (DensityTooLow, EmptyBucket) + ) +import Ouroboros.Consensus.Util.IOLike (SomeException, fromException) +import Ouroboros.Network.AnchoredFragment + ( anchor + , anchorToSlotNo + , headSlot + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Driver.Limits + ( ProtocolLimitFailure (ExceededTimeLimit) + ) +import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..)) +import Test.Consensus.Network.AnchoredFragment.Extras (slotLength) +import Test.Consensus.PeerSimulator.StateView + ( PeerSimulatorResult (..) + , StateView (..) + , pscrToException + ) +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers (PeerId (..), Peers (..)) +import Test.Consensus.PointSchedule.SinglePeer (SchedulePoint (..)) +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock + ( TestBlock + , TestHash (TestHash) + , isAncestorOf + ) -- | Interesting categories to classify test inputs -data Classifiers = - Classifiers { - -- | There are more than k blocks in at least one alternative chain after the intersection - existsSelectableAdversary :: Bool, - -- | There are more than k blocks in all alternative chains after the - -- intersection. Note that this is always guaranteed for the honest chain. - allAdversariesSelectable :: Bool, - -- | There is always at least one block per sliding forecast window in all - -- alternative chains. Note that this is always guaranteed for the honest - -- chain. - allAdversariesForecastable :: Bool, - -- | All adversaries have at least @k+1@ block in the forecast window the - -- follows their intersection with the trunk. 
Note that the generator always - -- enforces that the trunk wins in all _Genesis_ windows after the - -- intersection. In particular, if @sgen = sfor@, then the trunk will have - -- at least @k+2@. - allAdversariesKPlus1InForecast :: Bool, - -- | There are at least scg slots after the intersection on both the honest - -- and the alternative chain - -- - -- Knowing if there is a Genesis window after the intersection is important because - -- otherwise the Genesis node has no chance to advance the immutable tip past - -- the Limit on Eagerness. - -- - genesisWindowAfterIntersection :: Bool, - -- | The honest chain's slot count is greater than or equal to the Genesis window size. - longerThanGenesisWindow :: Bool +data Classifiers + = Classifiers + { existsSelectableAdversary :: Bool + -- ^ There are more than k blocks in at least one alternative chain after the intersection + , allAdversariesSelectable :: Bool + -- ^ There are more than k blocks in all alternative chains after the + -- intersection. Note that this is always guaranteed for the honest chain. + , allAdversariesForecastable :: Bool + -- ^ There is always at least one block per sliding forecast window in all + -- alternative chains. Note that this is always guaranteed for the honest + -- chain. + , allAdversariesKPlus1InForecast :: Bool + -- ^ All adversaries have at least @k+1@ block in the forecast window the + -- follows their intersection with the trunk. Note that the generator always + -- enforces that the trunk wins in all _Genesis_ windows after the + -- intersection. In particular, if @sgen = sfor@, then the trunk will have + -- at least @k+2@. + , genesisWindowAfterIntersection :: Bool + -- ^ There are at least scg slots after the intersection on both the honest + -- and the alternative chain + -- + -- Knowing if there is a Genesis window after the intersection is important because + -- otherwise the Genesis node has no chance to advance the immutable tip past + -- the Limit on Eagerness. 
+ , longerThanGenesisWindow :: Bool + -- ^ The honest chain's slot count is greater than or equal to the Genesis window size. } classifiers :: AF.HasHeader blk => GenesisTest blk schedule -> Classifiers -classifiers GenesisTest {gtBlockTree, gtSecurityParam = SecurityParam k, gtGenesisWindow = GenesisWindow scg} = - Classifiers { - existsSelectableAdversary, - allAdversariesSelectable, - allAdversariesForecastable, - allAdversariesKPlus1InForecast, - genesisWindowAfterIntersection, - longerThanGenesisWindow - } - where - longerThanGenesisWindow = AF.headSlot goodChain >= At (fromIntegral scg) +classifiers GenesisTest{gtBlockTree, gtSecurityParam = SecurityParam k, gtGenesisWindow = GenesisWindow scg} = + Classifiers + { existsSelectableAdversary + , allAdversariesSelectable + , allAdversariesForecastable + , allAdversariesKPlus1InForecast + , genesisWindowAfterIntersection + , longerThanGenesisWindow + } + where + longerThanGenesisWindow = AF.headSlot goodChain >= At (fromIntegral scg) - genesisWindowAfterIntersection = - any fragmentHasGenesis branches + genesisWindowAfterIntersection = + any fragmentHasGenesis branches - fragmentHasGenesis btb = - let - frag = btbSuffix btb - SlotNo intersection = withOrigin 0 id (anchorToSlotNo (anchor frag)) - in isSelectable btb && slotLength frag > fromIntegral scg && goodTipSlot - intersection > scg + fragmentHasGenesis btb = + let + frag = btbSuffix btb + SlotNo intersection = withOrigin 0 id (anchorToSlotNo (anchor frag)) + in + isSelectable btb && slotLength frag > fromIntegral scg && goodTipSlot - intersection > scg - existsSelectableAdversary = - any isSelectable branches + existsSelectableAdversary = + any isSelectable branches - allAdversariesSelectable = - all isSelectable branches + allAdversariesSelectable = + all isSelectable branches - isSelectable bt = AF.length (btbSuffix bt) > fromIntegral (unNonZero k) + isSelectable bt = AF.length (btbSuffix bt) > fromIntegral (unNonZero k) - allAdversariesForecastable = - 
all isForecastable branches + allAdversariesForecastable = + all isForecastable branches - isForecastable bt = - -- FIXME: We are using `scg` here but what we really mean is `sfor`. - -- Distinguish `scg` vs. `sgen` vs. `sfor` and use the latter here. - -- NOTE: We only care about the difference between slot numbers so it is - -- not a problem to add @1@ to all of them. However, we do care VERY MUCH - -- that this list includes the anchor. - let slotNos = (succWithOrigin $ anchorToSlotNo $ anchor $ btbFull bt) - : (map ((+1) . blockSlot) $ AF.toOldestFirst $ btbFull bt) in - all (\(SlotNo prev, SlotNo next) -> next - prev <= scg) (zip slotNos (drop 1 slotNos)) + isForecastable bt = + -- FIXME: We are using `scg` here but what we really mean is `sfor`. + -- Distinguish `scg` vs. `sgen` vs. `sfor` and use the latter here. + -- NOTE: We only care about the difference between slot numbers so it is + -- not a problem to add @1@ to all of them. However, we do care VERY MUCH + -- that this list includes the anchor. + let slotNos = + (succWithOrigin $ anchorToSlotNo $ anchor $ btbFull bt) + : (map ((+ 1) . blockSlot) $ AF.toOldestFirst $ btbFull bt) + in all (\(SlotNo prev, SlotNo next) -> next - prev <= scg) (zip slotNos (drop 1 slotNos)) - allAdversariesKPlus1InForecast = - all hasKPlus1InForecast branches + allAdversariesKPlus1InForecast = + all hasKPlus1InForecast branches - hasKPlus1InForecast BlockTreeBranch{btbSuffix} = - -- FIXME: We are using `scg` here but what we really mean is `sfor`. - -- Distinguish `scg` vs. `sgen` vs. `sfor` and use the latter here. - let forecastSlot = succWithOrigin (anchorToSlotNo $ anchor btbSuffix) + SlotNo scg - forecastBlocks = AF.takeWhileOldest (\b -> blockSlot b < forecastSlot) btbSuffix - in AF.length forecastBlocks >= fromIntegral (unNonZero k) + 1 + hasKPlus1InForecast BlockTreeBranch{btbSuffix} = + -- FIXME: We are using `scg` here but what we really mean is `sfor`. + -- Distinguish `scg` vs. `sgen` vs. 
`sfor` and use the latter here. + let forecastSlot = succWithOrigin (anchorToSlotNo $ anchor btbSuffix) + SlotNo scg + forecastBlocks = AF.takeWhileOldest (\b -> blockSlot b < forecastSlot) btbSuffix + in AF.length forecastBlocks >= fromIntegral (unNonZero k) + 1 - SlotNo goodTipSlot = withOrigin 0 id (headSlot goodChain) + SlotNo goodTipSlot = withOrigin 0 id (headSlot goodChain) - branches = btBranches gtBlockTree + branches = btBranches gtBlockTree - goodChain = btTrunk gtBlockTree + goodChain = btTrunk gtBlockTree -- | Interesting categories to classify test results -data ResultClassifiers = - ResultClassifiers{ - -- | Percentage of adversaries that were killed by receiving an EmptyBucket exception from the LoP - adversariesKilledByLoP :: Double, - -- | Percentage of adversaries that were disconnected because their fragment was not dense enough - adversariesKilledByGDD :: Double, - -- | Percentage of adversaries that were disconnected by network-level timeouts - adversariesKilledByTimeout :: Double, - -- | Percentage of adversaries that weren't killed - adversariesSurvived :: Double +data ResultClassifiers + = ResultClassifiers + { adversariesKilledByLoP :: Double + -- ^ Percentage of adversaries that were killed by receiving an EmptyBucket exception from the LoP + , adversariesKilledByGDD :: Double + -- ^ Percentage of adversaries that were disconnected because their fragment was not dense enough + , adversariesKilledByTimeout :: Double + -- ^ Percentage of adversaries that were disconnected by network-level timeouts + , adversariesSurvived :: Double + -- ^ Percentage of adversaries that weren't killed } -- | Returned when there were no adversaries @@ -154,60 +172,74 @@ nullResultClassifier = ResultClassifiers 0 0 0 0 resultClassifiers :: GenesisTestFull blk -> RunGenesisTestResult -> ResultClassifiers resultClassifiers GenesisTest{gtSchedule} RunGenesisTestResult{rgtrStateView} = if adversariesCount > 0 - then ResultClassifiers { - adversariesKilledByLoP = 
100 * adversariesKilledByLoPC / adversariesCount, - adversariesKilledByGDD = 100 * adversariesKilledByGDDC / adversariesCount, - adversariesKilledByTimeout = 100 * adversariesKilledByTimeoutC / adversariesCount, - adversariesSurvived = 100 * adversariesSurvivedC / adversariesCount - } + then + ResultClassifiers + { adversariesKilledByLoP = 100 * adversariesKilledByLoPC / adversariesCount + , adversariesKilledByGDD = 100 * adversariesKilledByGDDC / adversariesCount + , adversariesKilledByTimeout = 100 * adversariesKilledByTimeoutC / adversariesCount + , adversariesSurvived = 100 * adversariesSurvivedC / adversariesCount + } else nullResultClassifier - where - StateView{svPeerSimulatorResults} = rgtrStateView + where + StateView{svPeerSimulatorResults} = rgtrStateView - adversaries :: [PeerId] - adversaries = fmap AdversarialPeer $ Map.keys $ adversarialPeers $ psSchedule gtSchedule + adversaries :: [PeerId] + adversaries = fmap AdversarialPeer $ Map.keys $ adversarialPeers $ psSchedule gtSchedule - adversariesCount = fromIntegral $ length adversaries + adversariesCount = fromIntegral $ length adversaries - adversariesExceptions :: [(PeerId, SomeException)] - adversariesExceptions = mapMaybe - (\PeerSimulatorResult{psePeerId, pseResult} -> case psePeerId of - HonestPeer _ -> Nothing - pid -> (pid,) <$> pscrToException pseResult + adversariesExceptions :: [(PeerId, SomeException)] + adversariesExceptions = + mapMaybe + ( \PeerSimulatorResult{psePeerId, pseResult} -> case psePeerId of + HonestPeer _ -> Nothing + pid -> (pid,) <$> pscrToException pseResult ) svPeerSimulatorResults - adversariesSurvivedC = fromIntegral $ length $ filter - (\pid -> not $ pid `elem` map fst adversariesExceptions) - adversaries - - adversariesKilledByLoPC = fromIntegral $ length $ filter - (\(_, exn) -> fromException exn == Just EmptyBucket) - adversariesExceptions - - adversariesKilledByGDDC = fromIntegral $ length $ filter - (\(_, exn) -> fromException exn == Just DensityTooLow) - 
adversariesExceptions - - adversariesKilledByTimeoutC = fromIntegral $ length $ filter - (\(_, exn) -> case fromException exn of - Just (ExceededTimeLimit _) -> True - _ -> False - ) - adversariesExceptions - -data ScheduleClassifiers = - ScheduleClassifiers{ - -- | There is an adversary that did a rollback - adversaryRollback :: Bool, - -- | The honest peer did a rollback - honestRollback :: Bool, - -- | All adversaries have an empty schedule: the only way to disconnect them are - -- network timeouts. - allAdversariesEmpty :: Bool, - -- | All adversaries have trivial schedules: they only have an initial state, and - -- do nothing afterwards. - allAdversariesTrivial :: Bool + adversariesSurvivedC = + fromIntegral $ + length $ + filter + (\pid -> not $ pid `elem` map fst adversariesExceptions) + adversaries + + adversariesKilledByLoPC = + fromIntegral $ + length $ + filter + (\(_, exn) -> fromException exn == Just EmptyBucket) + adversariesExceptions + + adversariesKilledByGDDC = + fromIntegral $ + length $ + filter + (\(_, exn) -> fromException exn == Just DensityTooLow) + adversariesExceptions + + adversariesKilledByTimeoutC = + fromIntegral $ + length $ + filter + ( \(_, exn) -> case fromException exn of + Just (ExceededTimeLimit _) -> True + _ -> False + ) + adversariesExceptions + +data ScheduleClassifiers + = ScheduleClassifiers + { adversaryRollback :: Bool + -- ^ There is an adversary that did a rollback + , honestRollback :: Bool + -- ^ The honest peer did a rollback + , allAdversariesEmpty :: Bool + -- ^ All adversaries have an empty schedule: the only way to disconnect them are + -- network timeouts. + , allAdversariesTrivial :: Bool + -- ^ All adversaries have trivial schedules: they only have an initial state, and + -- do nothing afterwards. 
} scheduleClassifiers :: GenesisTestFull TestBlock -> ScheduleClassifiers @@ -218,50 +250,53 @@ scheduleClassifiers GenesisTest{gtSchedule = schedule} = , allAdversariesEmpty , allAdversariesTrivial } - where - hasRollback :: PeerSchedule TestBlock -> Bool - hasRollback peerSch' = - any (not . isSorted) [tips, headers, blocks] - where - peerSch = sortOn fst peerSch' - isSorted l = and [x `ancestor` y | (x:y:_) <- tails l] - ancestor Origin Origin = True - ancestor Origin (At _) = True - ancestor (At _) Origin = False - ancestor (At p1) (At p2) = p1 `isAncestorOf` p2 - tips = mapMaybe - (\(_, point) -> case point of + where + hasRollback :: PeerSchedule TestBlock -> Bool + hasRollback peerSch' = + any (not . isSorted) [tips, headers, blocks] + where + peerSch = sortOn fst peerSch' + isSorted l = and [x `ancestor` y | (x : y : _) <- tails l] + ancestor Origin Origin = True + ancestor Origin (At _) = True + ancestor (At _) Origin = False + ancestor (At p1) (At p2) = p1 `isAncestorOf` p2 + tips = + mapMaybe + ( \(_, point) -> case point of ScheduleTipPoint blk -> Just blk - _ -> Nothing - ) - peerSch - headers = mapMaybe - (\(_, point) -> case point of + _ -> Nothing + ) + peerSch + headers = + mapMaybe + ( \(_, point) -> case point of ScheduleHeaderPoint blk -> Just blk - _ -> Nothing - ) - peerSch - blocks = mapMaybe - (\(_, point) -> case point of + _ -> Nothing + ) + peerSch + blocks = + mapMaybe + ( \(_, point) -> case point of ScheduleBlockPoint blk -> Just blk - _ -> Nothing - ) - peerSch + _ -> Nothing + ) + peerSch - rollbacks :: Peers Bool - rollbacks = hasRollback <$> psSchedule schedule + rollbacks :: Peers Bool + rollbacks = hasRollback <$> psSchedule schedule - adversaryRollback = any id $ adversarialPeers rollbacks - honestRollback = any id $ honestPeers rollbacks + adversaryRollback = any id $ adversarialPeers rollbacks + honestRollback = any id $ honestPeers rollbacks - allAdversariesEmpty = all id $ adversarialPeers $ null <$> psSchedule schedule + 
allAdversariesEmpty = all id $ adversarialPeers $ null <$> psSchedule schedule - isTrivial :: PeerSchedule TestBlock -> Bool - isTrivial = \case - [] -> True - (t0, _):points -> all ((== t0) . fst) points + isTrivial :: PeerSchedule TestBlock -> Bool + isTrivial = \case + [] -> True + (t0, _) : points -> all ((== t0) . fst) points - allAdversariesTrivial = all id $ adversarialPeers $ isTrivial <$> psSchedule schedule + allAdversariesTrivial = all id $ adversarialPeers $ isTrivial <$> psSchedule schedule simpleHash :: HeaderHash block ~ TestHash => @@ -269,4 +304,4 @@ simpleHash :: [Word64] simpleHash = \case BlockHash (TestHash h) -> reverse (NonEmpty.toList h) - GenesisHash -> [] + GenesisHash -> [] diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/GenChains.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/GenChains.hs index 02b813c45e..ddae6d5f89 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/GenChains.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Setup/GenChains.hs @@ -5,47 +5,54 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.Consensus.Genesis.Setup.GenChains ( - GenesisTest (..) +module Test.Consensus.Genesis.Setup.GenChains + ( GenesisTest (..) 
, genChains , genChainsWithExtraHonestPeers ) where -import Cardano.Ledger.BaseTypes (nonZeroOr) -import Cardano.Slotting.Time (slotLengthFromSec) -import Control.Monad (replicateM) -import qualified Control.Monad.Except as Exn -import Data.List as List (foldl') -import Data.Proxy (Proxy (..)) -import Data.Time.Clock (DiffTime) -import qualified Data.Vector.Unboxed as Vector -import Data.Word (Word8) -import Ouroboros.Consensus.Block.Abstract hiding (Header) -import Ouroboros.Consensus.Protocol.Abstract - (SecurityParam (SecurityParam)) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Protocol.ChainSync.Codec - (ChainSyncTimeout (..)) -import Ouroboros.Network.Protocol.Limits (shortWait) -import qualified Test.Consensus.BlockTree as BT -import Test.Consensus.PointSchedule -import qualified Test.Ouroboros.Consensus.ChainGenerator.Adversarial as A -import Test.Ouroboros.Consensus.ChainGenerator.Adversarial - (genPrefixBlockCount) -import Test.Ouroboros.Consensus.ChainGenerator.Counting - (Count (Count), getVector) -import qualified Test.Ouroboros.Consensus.ChainGenerator.Honest as H -import Test.Ouroboros.Consensus.ChainGenerator.Honest - (ChainSchema (ChainSchema), HonestRecipe (..)) -import Test.Ouroboros.Consensus.ChainGenerator.Params -import qualified Test.Ouroboros.Consensus.ChainGenerator.Slot as S -import Test.Ouroboros.Consensus.ChainGenerator.Slot (S) -import qualified Test.QuickCheck as QC -import Test.QuickCheck.Extras (unsafeMapSuchThatJust) -import Test.QuickCheck.Random (QCGen) -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock hiding (blockTree) +import Cardano.Ledger.BaseTypes (nonZeroOr) +import Cardano.Slotting.Time (slotLengthFromSec) +import Control.Monad (replicateM) +import Control.Monad.Except qualified as Exn +import Data.List as List (foldl') +import Data.Proxy (Proxy (..)) +import Data.Time.Clock (DiffTime) +import 
Data.Vector.Unboxed qualified as Vector +import Data.Word (Word8) +import Ouroboros.Consensus.Block.Abstract hiding (Header) +import Ouroboros.Consensus.Protocol.Abstract + ( SecurityParam (SecurityParam) + ) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Protocol.ChainSync.Codec + ( ChainSyncTimeout (..) + ) +import Ouroboros.Network.Protocol.Limits (shortWait) +import Test.Consensus.BlockTree qualified as BT +import Test.Consensus.PointSchedule +import Test.Ouroboros.Consensus.ChainGenerator.Adversarial + ( genPrefixBlockCount + ) +import Test.Ouroboros.Consensus.ChainGenerator.Adversarial qualified as A +import Test.Ouroboros.Consensus.ChainGenerator.Counting + ( Count (Count) + , getVector + ) +import Test.Ouroboros.Consensus.ChainGenerator.Honest + ( ChainSchema (ChainSchema) + , HonestRecipe (..) + ) +import Test.Ouroboros.Consensus.ChainGenerator.Honest qualified as H +import Test.Ouroboros.Consensus.ChainGenerator.Params +import Test.Ouroboros.Consensus.ChainGenerator.Slot (S) +import Test.Ouroboros.Consensus.ChainGenerator.Slot qualified as S +import Test.QuickCheck qualified as QC +import Test.QuickCheck.Extras (unsafeMapSuchThatJust) +import Test.QuickCheck.Random (QCGen) +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock hiding (blockTree) -- | Random generator for an honest chain recipe and schema. genHonestChainSchema :: QC.Gen (Asc, H.HonestRecipe, H.SomeHonestChainSchema) @@ -55,7 +62,7 @@ genHonestChainSchema = do H.SomeCheckedHonestRecipe Proxy Proxy honestRecipe' <- case Exn.runExcept $ H.checkHonestRecipe honestRecipe of - Left exn -> error $ "impossible! " <> show (honestRecipe, exn) + Left exn -> error $ "impossible! 
" <> show (honestRecipe, exn) Right honestRecipe' -> pure honestRecipe' (seed :: QCGen) <- QC.arbitrary let schema = H.uniformTheHonestChain (Just asc) honestRecipe' seed @@ -75,20 +82,20 @@ genAlternativeChainSchema (testRecipeH, arHonest) = (seedPrefix :: QCGen) <- QC.arbitrary let arPrefix = genPrefixBlockCount testRecipeH seedPrefix arHonest - let testRecipeA = A.AdversarialRecipe { - A.arPrefix, - A.arParams = (kcp, scg, delta), - A.arHonest - } + let testRecipeA = + A.AdversarialRecipe + { A.arPrefix + , A.arParams = (kcp, scg, delta) + , A.arHonest + } alternativeAsc <- ascFromBits <$> QC.choose (1 :: Word8, maxBound - 1) case Exn.runExcept $ A.checkAdversarialRecipe testRecipeA of Left e -> case e of A.NoSuchAdversarialBlock -> pure Nothing - A.NoSuchCompetitor -> error $ "impossible! " <> show e - A.NoSuchIntersection -> error $ "impossible! " <> show e - + A.NoSuchCompetitor -> error $ "impossible! " <> show e + A.NoSuchIntersection -> error $ "impossible! " <> show e Right (A.SomeCheckedAdversarialRecipe _ testRecipeA'') -> do let Count prefixCount = arPrefix (seed :: QCGen) <- QC.arbitrary @@ -96,7 +103,7 @@ genAlternativeChainSchema (testRecipeH, arHonest) = pure $ Just (prefixCount, Vector.toList (getVector v)) genChains :: QC.Gen Word -> QC.Gen (GenesisTest TestBlock ()) -genChains = genChainsWithExtraHonestPeers (pure 0) +genChains = genChainsWithExtraHonestPeers (pure 0) -- | Random generator for a block tree. The block tree contains one trunk (the -- “honest” chain) and as many branches as given as a parameter (the @@ -123,88 +130,94 @@ genChainsWithExtraHonestPeers genNumExtraHonest genNumForks = do numForks <- genNumForks gtExtraHonestPeers <- genNumExtraHonest - alternativeChainSchemas <- replicateM (fromIntegral numForks) (genAlternativeChainSchema (honestRecipe, honestChainSchema)) - pure $ GenesisTest { - gtSecurityParam = - SecurityParam $ - -- As long as `genKSD` generates a `k` that is > 0, this won't lead to an ErrorCall. 
- nonZeroOr (fromIntegral kcp) $ error "Generated Kcp was zero. Cannot construct a NonZero value for the SecurityParam.", - gtGenesisWindow = GenesisWindow (fromIntegral scg), - gtForecastRange = ForecastRange (fromIntegral scg), -- REVIEW: Do we want to generate those randomly? - gtDelay = delta, - gtSlotLength = slotLengthFromSec 20, - gtChainSyncTimeouts = chainSyncTimeouts, - gtBlockFetchTimeouts = blockFetchTimeouts, - gtLoPBucketParams = LoPBucketParams { lbpCapacity = 50, lbpRate = 10 }, - -- These values give little enough leeway (5s) so that some adversaries get disconnected - -- by the LoP during the stalling attack test. Maybe we should design a way to override - -- those values for individual tests? - -- Also, we might want to generate these randomly. - gtCSJParams = CSJParams $ fromIntegral scg, - gtBlockTree = List.foldl' (flip BT.addBranch') (BT.mkTrunk goodChain) $ zipWith (genAdversarialFragment goodBlocks) [1..] alternativeChainSchemas, - gtExtraHonestPeers, - gtSchedule = () - } - - where - genAdversarialFragment :: [TestBlock] -> Int -> (Int, [S]) -> AnchoredFragment TestBlock - genAdversarialFragment goodBlocks forkNo (prefixCount, slotsA) - = mkTestFragment (mkTestBlocks prefix slotsA forkNo) - where - -- blocks in the common prefix in reversed order - prefix = drop (length goodBlocks - prefixCount) goodBlocks - - mkTestFragment :: [TestBlock] -> AnchoredFragment TestBlock - mkTestFragment = - AF.fromNewestFirst AF.AnchorGenesis - - mkTestBlocks :: [TestBlock] -> [S] -> Int -> [TestBlock] - mkTestBlocks pre active forkNo = - fst (List.foldl' folder ([], 0) active) - where - folder (chain, inc) s | S.test S.notInverted s = (issue inc chain, 0) - | otherwise = (chain, inc + 1) - issue inc (h : t) = incSlot inc (successorBlock h) : h : t - issue inc [] | [] <- pre = [incSlot inc ((firstBlock (fromIntegral forkNo)) {tbSlot = 0})] - | h : t <- pre = incSlot inc (modifyFork (const (fromIntegral forkNo)) (successorBlock h)) : h : t - - incSlot :: 
SlotNo -> TestBlock -> TestBlock - incSlot n b = b { tbSlot = tbSlot b + n } + alternativeChainSchemas <- + replicateM (fromIntegral numForks) (genAlternativeChainSchema (honestRecipe, honestChainSchema)) + pure $ + GenesisTest + { gtSecurityParam = + SecurityParam $ + -- As long as `genKSD` generates a `k` that is > 0, this won't lead to an ErrorCall. + nonZeroOr (fromIntegral kcp) $ + error "Generated Kcp was zero. Cannot construct a NonZero value for the SecurityParam." + , gtGenesisWindow = GenesisWindow (fromIntegral scg) + , gtForecastRange = ForecastRange (fromIntegral scg) -- REVIEW: Do we want to generate those randomly? + , gtDelay = delta + , gtSlotLength = slotLengthFromSec 20 + , gtChainSyncTimeouts = chainSyncTimeouts + , gtBlockFetchTimeouts = blockFetchTimeouts + , gtLoPBucketParams = LoPBucketParams{lbpCapacity = 50, lbpRate = 10} + , -- These values give little enough leeway (5s) so that some adversaries get disconnected + -- by the LoP during the stalling attack test. Maybe we should design a way to override + -- those values for individual tests? + -- Also, we might want to generate these randomly. + gtCSJParams = CSJParams $ fromIntegral scg + , gtBlockTree = + List.foldl' (flip BT.addBranch') (BT.mkTrunk goodChain) $ + zipWith (genAdversarialFragment goodBlocks) [1 ..] 
alternativeChainSchemas + , gtExtraHonestPeers + , gtSchedule = () + } + where + genAdversarialFragment :: [TestBlock] -> Int -> (Int, [S]) -> AnchoredFragment TestBlock + genAdversarialFragment goodBlocks forkNo (prefixCount, slotsA) = + mkTestFragment (mkTestBlocks prefix slotsA forkNo) + where + -- blocks in the common prefix in reversed order + prefix = drop (length goodBlocks - prefixCount) goodBlocks + + mkTestFragment :: [TestBlock] -> AnchoredFragment TestBlock + mkTestFragment = + AF.fromNewestFirst AF.AnchorGenesis + + mkTestBlocks :: [TestBlock] -> [S] -> Int -> [TestBlock] + mkTestBlocks pre active forkNo = + fst (List.foldl' folder ([], 0) active) + where + folder (chain, inc) s + | S.test S.notInverted s = (issue inc chain, 0) + | otherwise = (chain, inc + 1) + issue inc (h : t) = incSlot inc (successorBlock h) : h : t + issue inc [] + | [] <- pre = [incSlot inc ((firstBlock (fromIntegral forkNo)){tbSlot = 0})] + | h : t <- pre = incSlot inc (modifyFork (const (fromIntegral forkNo)) (successorBlock h)) : h : t + + incSlot :: SlotNo -> TestBlock -> TestBlock + incSlot n b = b{tbSlot = tbSlot b + n} chainSyncTimeouts :: ChainSyncTimeout chainSyncTimeouts = ChainSyncTimeout - { canAwaitTimeout, - intersectTimeout, - mustReplyTimeout, - idleTimeout + { canAwaitTimeout + , intersectTimeout + , mustReplyTimeout + , idleTimeout } - where - canAwaitTimeout :: Maybe DiffTime - canAwaitTimeout = shortWait - intersectTimeout :: Maybe DiffTime - intersectTimeout = shortWait - idleTimeout :: Maybe DiffTime - -- | The default from 'Ouroboros.Consensus.Node.stdChainSyncTimeout' is - -- 3673s, which is virtually infinite, so let us make it actually infinite - -- for our test environment. 
- idleTimeout = Nothing - -- | The 'mustReplyTimeout' must be disabled in our context, because the - -- chains are finite, and therefore an honest peer can only serve it all, - -- then send 'MsgAwaitReply' (therefore entering 'StMustReply'), and then - -- stall forever, and it must not be killed for it. - -- - -- Note that this allows the adversaries to stall us forever in that same - -- situation. However, that peer is only allowed to send 'MsgAwaitReply' - -- when they have served their tip, which leaves them fully vulnerable to - -- the Genesis Density Disconnection (GDD) logic. A bug related to this - -- disabled timeout is in fact either a bug in the GDD or in the tests. - mustReplyTimeout :: Maybe DiffTime - mustReplyTimeout = Nothing + where + canAwaitTimeout :: Maybe DiffTime + canAwaitTimeout = shortWait + intersectTimeout :: Maybe DiffTime + intersectTimeout = shortWait + idleTimeout :: Maybe DiffTime + -- \| The default from 'Ouroboros.Consensus.Node.stdChainSyncTimeout' is + -- 3673s, which is virtually infinite, so let us make it actually infinite + -- for our test environment. + idleTimeout = Nothing + -- \| The 'mustReplyTimeout' must be disabled in our context, because the + -- chains are finite, and therefore an honest peer can only serve it all, + -- then send 'MsgAwaitReply' (therefore entering 'StMustReply'), and then + -- stall forever, and it must not be killed for it. + -- + -- Note that this allows the adversaries to stall us forever in that same + -- situation. However, that peer is only allowed to send 'MsgAwaitReply' + -- when they have served their tip, which leaves them fully vulnerable to + -- the Genesis Density Disconnection (GDD) logic. A bug related to this + -- disabled timeout is in fact either a bug in the GDD or in the tests. 
+ mustReplyTimeout :: Maybe DiffTime + mustReplyTimeout = Nothing blockFetchTimeouts :: BlockFetchTimeout blockFetchTimeouts = BlockFetchTimeout - { busyTimeout = Just 60, - streamingTimeout = Just 60 + { busyTimeout = Just 60 + , streamingTimeout = Just 60 } diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests.hs index 6c8776bb15..cd0224cb0e 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests.hs @@ -1,15 +1,17 @@ module Test.Consensus.Genesis.Tests (tests) where -import qualified Test.Consensus.Genesis.Tests.CSJ as CSJ -import qualified Test.Consensus.Genesis.Tests.DensityDisconnect as GDD -import qualified Test.Consensus.Genesis.Tests.LoE as LoE -import qualified Test.Consensus.Genesis.Tests.LongRangeAttack as LongRangeAttack -import qualified Test.Consensus.Genesis.Tests.LoP as LoP -import qualified Test.Consensus.Genesis.Tests.Uniform as Uniform -import Test.Tasty +import Test.Consensus.Genesis.Tests.CSJ qualified as CSJ +import Test.Consensus.Genesis.Tests.DensityDisconnect qualified as GDD +import Test.Consensus.Genesis.Tests.LoE qualified as LoE +import Test.Consensus.Genesis.Tests.LoP qualified as LoP +import Test.Consensus.Genesis.Tests.LongRangeAttack qualified as LongRangeAttack +import Test.Consensus.Genesis.Tests.Uniform qualified as Uniform +import Test.Tasty tests :: TestTree -tests = testGroup "Genesis tests" +tests = + testGroup + "Genesis tests" [ CSJ.tests , GDD.tests , LongRangeAttack.tests diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/CSJ.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/CSJ.hs index 1610e05399..629d776f9c 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/CSJ.hs 
+++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/CSJ.hs @@ -3,54 +3,67 @@ module Test.Consensus.Genesis.Tests.CSJ (tests) where -import Data.List (nub) -import qualified Data.Map.Strict as Map -import Data.Maybe (mapMaybe) -import Ouroboros.Consensus.Block (Header, blockSlot, succWithOrigin, - unSlotNo) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (TraceChainSyncClientEvent (..)) -import Ouroboros.Consensus.Util.Condense (PaddingDirection (..), - condenseListWithPadding) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Protocol.ChainSync.Codec - (ChainSyncTimeout (mustReplyTimeout), idleTimeout) -import Test.Consensus.BlockTree (BlockTree (..)) -import Test.Consensus.Genesis.Setup -import Test.Consensus.Genesis.Tests.Uniform (genUniformSchedulePoints) -import Test.Consensus.PeerSimulator.Run (SchedulerConfig (..), - defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView (StateView (..)) -import Test.Consensus.PeerSimulator.Trace (TraceEvent (..)) -import Test.Consensus.PointSchedule -import qualified Test.Consensus.PointSchedule.Peers as Peers -import Test.Consensus.PointSchedule.Peers (Peers (..), peers') -import Test.Consensus.PointSchedule.Shrinking (shrinkPeerSchedules) -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.IOLike () -import Test.Util.PartialAccessors -import Test.Util.TestBlock (TestBlock) -import Test.Util.TestEnv (adjustQuickCheckMaxSize, - adjustQuickCheckTests) +import Data.List (nub) +import Data.Map.Strict qualified as Map +import Data.Maybe (mapMaybe) +import Ouroboros.Consensus.Block + ( Header + , blockSlot + , succWithOrigin + , unSlotNo + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( TraceChainSyncClientEvent (..) + ) +import Ouroboros.Consensus.Util.Condense + ( PaddingDirection (..) 
+ , condenseListWithPadding + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Protocol.ChainSync.Codec + ( ChainSyncTimeout (mustReplyTimeout) + , idleTimeout + ) +import Test.Consensus.BlockTree (BlockTree (..)) +import Test.Consensus.Genesis.Setup +import Test.Consensus.Genesis.Tests.Uniform (genUniformSchedulePoints) +import Test.Consensus.PeerSimulator.Run + ( SchedulerConfig (..) + , defaultSchedulerConfig + ) +import Test.Consensus.PeerSimulator.StateView (StateView (..)) +import Test.Consensus.PeerSimulator.Trace (TraceEvent (..)) +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers (Peers (..), peers') +import Test.Consensus.PointSchedule.Peers qualified as Peers +import Test.Consensus.PointSchedule.Shrinking (shrinkPeerSchedules) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.IOLike () +import Test.Util.PartialAccessors +import Test.Util.TestBlock (TestBlock) +import Test.Util.TestEnv + ( adjustQuickCheckMaxSize + , adjustQuickCheckTests + ) tests :: TestTree tests = adjustQuickCheckTests (* 10) $ - adjustQuickCheckMaxSize (`div` 5) $ - testGroup - "CSJ" - [ testGroup - "Happy Path" - [ testProperty "honest peers are synchronised" $ prop_CSJ NoAdversaries OneScheduleForAllPeers, - testProperty "honest peers do their own thing" $ prop_CSJ NoAdversaries OneSchedulePerHonestPeer - ], - testGroup - "With some adversaries" - [ testProperty "honest peers are synchronised" $ prop_CSJ WithAdversaries OneScheduleForAllPeers, - testProperty "honest peers do their own thing" $ prop_CSJ WithAdversaries OneSchedulePerHonestPeer - ] - ] + adjustQuickCheckMaxSize (`div` 5) $ + testGroup + "CSJ" + [ testGroup + "Happy Path" + [ testProperty "honest peers are synchronised" $ prop_CSJ NoAdversaries OneScheduleForAllPeers + , testProperty "honest peers do their own thing" $ prop_CSJ NoAdversaries OneSchedulePerHonestPeer + ] + , testGroup + "With some adversaries" + [ testProperty 
"honest peers are synchronised" $ prop_CSJ WithAdversaries OneScheduleForAllPeers + , testProperty "honest peers do their own thing" $ prop_CSJ WithAdversaries OneSchedulePerHonestPeer + ] + ] -- | A flag to indicate if properties are tested with adversarial peers data WithAdversariesFlag = NoAdversaries | WithAdversaries @@ -80,34 +93,34 @@ data NumHonestSchedulesFlag = OneScheduleForAllPeers | OneSchedulePerHonestPeer -- jumpers takes its place and starts serving headers. This might lead to -- duplication of headers, but only in a window of @jumpSize@ slots near the tip -- of the chain. --- prop_CSJ :: WithAdversariesFlag -> NumHonestSchedulesFlag -> Property prop_CSJ adversariesFlag numHonestSchedules = do let genForks = case adversariesFlag of - NoAdversaries -> pure 0 - WithAdversaries -> choose (2, 4) + NoAdversaries -> pure 0 + WithAdversaries -> choose (2, 4) forAllGenesisTest ( disableBoringTimeouts <$> case numHonestSchedules of OneScheduleForAllPeers -> genChains genForks - `enrichedWith` genDuplicatedHonestSchedule + `enrichedWith` genDuplicatedHonestSchedule OneSchedulePerHonestPeer -> genChainsWithExtraHonestPeers (choose (2, 4)) genForks - `enrichedWith` genUniformSchedulePoints + `enrichedWith` genUniformSchedulePoints ) ( defaultSchedulerConfig - { scEnableCSJ = True - , scEnableLoE = True - , scEnableLoP = True - , scEnableChainSelStarvation = adversariesFlag == NoAdversaries - -- ^ NOTE: When there are adversaries and the ChainSel - -- starvation detection of BlockFetch is enabled, then our property does - -- not actually hold, because peer simulator-based tests have virtually - -- infinite CPU, and therefore ChainSel gets starved at every tick, which - -- makes us cycle the dynamos, which can lead to some extra headers being - -- downloaded. 
- } + { scEnableCSJ = True + , scEnableLoE = True + , scEnableLoP = True + , scEnableChainSelStarvation = adversariesFlag == NoAdversaries + } ) + -- \^ NOTE: When there are adversaries and the ChainSel + -- starvation detection of BlockFetch is enabled, then our property does + -- not actually hold, because peer simulator-based tests have virtually + -- infinite CPU, and therefore ChainSel gets starved at every tick, which + -- makes us cycle the dynamos, which can lead to some extra headers being + -- downloaded. + shrinkPeerSchedules ( \gt StateView{svTrace} -> let @@ -116,12 +129,12 @@ prop_CSJ adversariesFlag numHonestSchedules = do -- expect to see only once per header if CSJ works properly. headerHonestDownloadEvents = mapMaybe - (\case - TraceChainSyncClientEvent pid (TraceDownloadedHeader hdr) - | not (isNewerThanJumpSizeFromTip gt hdr) - , Peers.HonestPeer _ <- pid - -> Just (pid, hdr) - _ -> Nothing + ( \case + TraceChainSyncClientEvent pid (TraceDownloadedHeader hdr) + | not (isNewerThanJumpSizeFromTip gt hdr) + , Peers.HonestPeer _ <- pid -> + Just (pid, hdr) + _ -> Nothing ) svTrace -- We receive headers at most once from honest peer. The only @@ -131,54 +144,57 @@ prop_CSJ adversariesFlag numHonestSchedules = do -- promote objectors to dynamo to reuse their state, then we could -- make this bound tighter. 
receivedHeadersAtMostOnceFromHonestPeers = - length headerHonestDownloadEvents <= - length (nub $ snd <$> headerHonestDownloadEvents) + - (fromIntegral $ unSlotNo $ csjpJumpSize $ gtCSJParams gt) - in - tabulate "" + length headerHonestDownloadEvents + <= length (nub $ snd <$> headerHonestDownloadEvents) + + (fromIntegral $ unSlotNo $ csjpJumpSize $ gtCSJParams gt) + in + tabulate + "" [ if headerHonestDownloadEvents == [] then "All headers are within the last jump window" else "There exist headers that have to be downloaded exactly once" - ] $ - counterexample - ("Downloaded headers (except jumpSize slots near the tip):\n" ++ - ( unlines $ fmap (" " ++) $ zipWith - (\peer header -> peer ++ " | " ++ header) - (condenseListWithPadding PadRight $ fst <$> headerHonestDownloadEvents) - (condenseListWithPadding PadRight $ snd <$> headerHonestDownloadEvents) - ) - ) - receivedHeadersAtMostOnceFromHonestPeers + ] + $ counterexample + ( "Downloaded headers (except jumpSize slots near the tip):\n" + ++ ( unlines $ + fmap (" " ++) $ + zipWith + (\peer header -> peer ++ " | " ++ header) + (condenseListWithPadding PadRight $ fst <$> headerHonestDownloadEvents) + (condenseListWithPadding PadRight $ snd <$> headerHonestDownloadEvents) + ) + ) + receivedHeadersAtMostOnceFromHonestPeers ) - where - genDuplicatedHonestSchedule :: GenesisTest TestBlock () -> Gen (PointSchedule TestBlock) - genDuplicatedHonestSchedule gt@GenesisTest {gtExtraHonestPeers} = do - ps@PointSchedule {psSchedule = Peers {honestPeers, adversarialPeers}} <- genUniformSchedulePoints gt - pure $ ps { - psSchedule = - Peers.unionWithKey - (\_ _ _ -> error "should not happen") - ( peers' - (replicate (fromIntegral gtExtraHonestPeers + 1) (getHonestPeer honestPeers)) - [] - ) - (Peers Map.empty adversarialPeers) + where + genDuplicatedHonestSchedule :: GenesisTest TestBlock () -> Gen (PointSchedule TestBlock) + genDuplicatedHonestSchedule gt@GenesisTest{gtExtraHonestPeers} = do + ps@PointSchedule{psSchedule = 
Peers{honestPeers, adversarialPeers}} <- genUniformSchedulePoints gt + pure $ + ps + { psSchedule = + Peers.unionWithKey + (\_ _ _ -> error "should not happen") + ( peers' + (replicate (fromIntegral gtExtraHonestPeers + 1) (getHonestPeer honestPeers)) + [] + ) + (Peers Map.empty adversarialPeers) } - isNewerThanJumpSizeFromTip :: GenesisTestFull TestBlock -> Header TestBlock -> Bool - isNewerThanJumpSizeFromTip gt hdr = - let jumpSize = csjpJumpSize $ gtCSJParams gt - tipSlot = AF.headSlot $ btTrunk $ gtBlockTree gt - hdrSlot = blockSlot hdr - in - -- Sanity check: add @1 +@ after @>@ and watch the World burn. + isNewerThanJumpSizeFromTip :: GenesisTestFull TestBlock -> Header TestBlock -> Bool + isNewerThanJumpSizeFromTip gt hdr = + let jumpSize = csjpJumpSize $ gtCSJParams gt + tipSlot = AF.headSlot $ btTrunk $ gtBlockTree gt + hdrSlot = blockSlot hdr + in -- Sanity check: add @1 +@ after @>@ and watch the World burn. hdrSlot + jumpSize >= succWithOrigin tipSlot - disableBoringTimeouts gt = - gt - { gtChainSyncTimeouts = - (gtChainSyncTimeouts gt) - { mustReplyTimeout = Nothing, - idleTimeout = Nothing - } - } + disableBoringTimeouts gt = + gt + { gtChainSyncTimeouts = + (gtChainSyncTimeouts gt) + { mustReplyTimeout = Nothing + , idleTimeout = Nothing + } + } diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/DensityDisconnect.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/DensityDisconnect.hs index d97a7998f7..0706f163c6 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/DensityDisconnect.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/DensityDisconnect.hs @@ -6,79 +6,109 @@ module Test.Consensus.Genesis.Tests.DensityDisconnect (tests) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Cardano.Slotting.Slot (SlotNo (unSlotNo), WithOrigin (..)) -import Control.Exception (fromException) -import 
Control.Monad.Class.MonadTime.SI (Time (..)) -import Data.Bifunctor -import Data.Foldable (maximumBy, minimumBy, toList) -import Data.Function (on) -import Data.Functor (($>), (<&>)) -import Data.List (intercalate) -import Data.List.NonEmpty (nonEmpty) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe) -import Data.Maybe.Strict (StrictMaybe (..)) -import Data.Semigroup (Endo (..)) -import Data.Set (Set, (\\)) -import qualified Data.Set as Set -import Ouroboros.Consensus.Block (Point (GenesisPoint), - WithOrigin (NotOrigin), blockSlot, fromWithOrigin, - withOrigin) -import Ouroboros.Consensus.Block.Abstract (Header, getHeader) -import Ouroboros.Consensus.Config.SecurityParam - (SecurityParam (SecurityParam), maxRollbacks) -import Ouroboros.Consensus.Genesis.Governor (DensityBounds, - densityDisconnect, sharedCandidatePrefix) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncClientException (..), ChainSyncState (..)) -import Ouroboros.Consensus.Util.Condense (condense) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (HasHeader, Tip (TipGenesis), - tipFromHeader) -import Test.Consensus.BlockTree -import Test.Consensus.Genesis.Setup -import Test.Consensus.Genesis.Setup.Classifiers (classifiers, - genesisWindowAfterIntersection) -import Test.Consensus.PeerSimulator.Run - (SchedulerConfig (scEnableLoE), defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView - (PeerSimulatorComponent (..), StateView (..), - exceptionsByComponent) -import Test.Consensus.PeerSimulator.Trace (prettyDensityBounds) -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Peers -import Test.Consensus.PointSchedule.Shrinking - (shrinkByRemovingAdversaries) -import Test.Consensus.PointSchedule.SinglePeer (SchedulePoint (..), - 
scheduleBlockPoint, scheduleHeaderPoint, scheduleTipPoint) -import qualified Test.QuickCheck as QC -import Test.QuickCheck -import Test.QuickCheck.Extras (unsafeMapSuchThatJust) -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Header (attachSlotTimeToFragment) -import Test.Util.Orphans.IOLike () -import Test.Util.PartialAccessors -import Test.Util.TersePrinting (terseHFragment, terseHWTFragment, - terseHeader) -import Test.Util.TestBlock (TestBlock, singleNodeTestConfig) -import Test.Util.TestEnv (adjustQuickCheckMaxSize, - adjustQuickCheckTests) - +import Cardano.Ledger.BaseTypes (unNonZero) +import Cardano.Slotting.Slot (SlotNo (unSlotNo), WithOrigin (..)) +import Control.Exception (fromException) +import Control.Monad.Class.MonadTime.SI (Time (..)) +import Data.Bifunctor +import Data.Foldable (maximumBy, minimumBy, toList) +import Data.Function (on) +import Data.Functor (($>), (<&>)) +import Data.List (intercalate) +import Data.List.NonEmpty (nonEmpty) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe) +import Data.Maybe.Strict (StrictMaybe (..)) +import Data.Semigroup (Endo (..)) +import Data.Set (Set, (\\)) +import Data.Set qualified as Set +import Ouroboros.Consensus.Block + ( Point (GenesisPoint) + , WithOrigin (NotOrigin) + , blockSlot + , fromWithOrigin + , withOrigin + ) +import Ouroboros.Consensus.Block.Abstract (Header, getHeader) +import Ouroboros.Consensus.Config.SecurityParam + ( SecurityParam (SecurityParam) + , maxRollbacks + ) +import Ouroboros.Consensus.Genesis.Governor + ( DensityBounds + , densityDisconnect + , sharedCandidatePrefix + ) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientException (..) + , ChainSyncState (..) 
+ ) +import Ouroboros.Consensus.Util.Condense (condense) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block + ( HasHeader + , Tip (TipGenesis) + , tipFromHeader + ) +import Test.Consensus.BlockTree +import Test.Consensus.Genesis.Setup +import Test.Consensus.Genesis.Setup.Classifiers + ( classifiers + , genesisWindowAfterIntersection + ) +import Test.Consensus.PeerSimulator.Run + ( SchedulerConfig (scEnableLoE) + , defaultSchedulerConfig + ) +import Test.Consensus.PeerSimulator.StateView + ( PeerSimulatorComponent (..) + , StateView (..) + , exceptionsByComponent + ) +import Test.Consensus.PeerSimulator.Trace (prettyDensityBounds) +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers +import Test.Consensus.PointSchedule.Shrinking + ( shrinkByRemovingAdversaries + ) +import Test.Consensus.PointSchedule.SinglePeer + ( SchedulePoint (..) + , scheduleBlockPoint + , scheduleHeaderPoint + , scheduleTipPoint + ) +import Test.QuickCheck +import Test.QuickCheck qualified as QC +import Test.QuickCheck.Extras (unsafeMapSuchThatJust) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Header (attachSlotTimeToFragment) +import Test.Util.Orphans.IOLike () +import Test.Util.PartialAccessors +import Test.Util.TersePrinting + ( terseHFragment + , terseHWTFragment + , terseHeader + ) +import Test.Util.TestBlock (TestBlock, singleNodeTestConfig) +import Test.Util.TestEnv + ( adjustQuickCheckMaxSize + , adjustQuickCheckTests + ) tests :: TestTree tests = adjustQuickCheckTests (* 10) $ - adjustQuickCheckMaxSize (`div` 5) $ - testGroup "gdd" [ - testProperty "basic" prop_densityDisconnectStatic, - testProperty "monotonicity" prop_densityDisconnectMonotonic, - testProperty "re-triggers chain selection on disconnection" prop_densityDisconnectTriggersChainSel - ] + adjustQuickCheckMaxSize (`div` 5) $ + testGroup + "gdd" + [ testProperty "basic" 
prop_densityDisconnectStatic + , testProperty "monotonicity" prop_densityDisconnectMonotonic + , testProperty "re-triggers chain selection on disconnection" prop_densityDisconnectTriggersChainSel + ] branchTip :: AnchoredFragment TestBlock -> Tip TestBlock branchTip = @@ -87,13 +117,13 @@ branchTip = toHeaders :: AnchoredFragment TestBlock -> AnchoredFragment (Header TestBlock) toHeaders = AF.mapAnchoredFragment getHeader -data StaticCandidates = - StaticCandidates { - k :: SecurityParam, - sgen :: GenesisWindow, - suffixes :: [(PeerId, AnchoredFragment (HeaderWithTime TestBlock))], - tips :: Map PeerId (Tip TestBlock), - loeFrag :: AnchoredFragment (HeaderWithTime TestBlock) +data StaticCandidates + = StaticCandidates + { k :: SecurityParam + , sgen :: GenesisWindow + , suffixes :: [(PeerId, AnchoredFragment (HeaderWithTime TestBlock))] + , tips :: Map PeerId (Tip TestBlock) + , loeFrag :: AnchoredFragment (HeaderWithTime TestBlock) } deriving Show @@ -103,38 +133,40 @@ data StaticCandidates = -- Return a 'StaticCandidates' value for each of them, containing the candidate suffixes and LoE fragment computed by -- 'sharedCandidatePrefix' from the selection. staticCandidates :: GenesisTest TestBlock s -> [StaticCandidates] -staticCandidates GenesisTest {gtSecurityParam, gtGenesisWindow, gtBlockTree} = - one . attachTimeUsingTestConfig . toHeaders <$> selections - where - one curChain = - StaticCandidates { - k = gtSecurityParam, - sgen = gtGenesisWindow, - suffixes, - tips, - loeFrag +staticCandidates GenesisTest{gtSecurityParam, gtGenesisWindow, gtBlockTree} = + one . attachTimeUsingTestConfig . toHeaders <$> selections + where + one curChain = + StaticCandidates + { k = gtSecurityParam + , sgen = gtGenesisWindow + , suffixes + , tips + , loeFrag } - where - (loeFrag, suffixes) = - sharedCandidatePrefix - curChain - (second (attachTimeUsingTestConfig . 
toHeaders) - <$> candidates - ) + where + (loeFrag, suffixes) = + sharedCandidatePrefix + curChain + ( second (attachTimeUsingTestConfig . toHeaders) + <$> candidates + ) - selections = selection <$> branches + selections = selection <$> branches - selection branch = - AF.takeOldest (AF.length (btbPrefix branch) + fromIntegral (unNonZero $ maxRollbacks gtSecurityParam)) (btbFull branch) + selection branch = + AF.takeOldest + (AF.length (btbPrefix branch) + fromIntegral (unNonZero $ maxRollbacks gtSecurityParam)) + (btbFull branch) - tips = branchTip <$> Map.fromList candidates + tips = branchTip <$> Map.fromList candidates - candidates :: [(PeerId, AnchoredFragment TestBlock)] - candidates = zip (HonestPeer 1 : enumerateAdversaries) chains + candidates :: [(PeerId, AnchoredFragment TestBlock)] + candidates = zip (HonestPeer 1 : enumerateAdversaries) chains - chains = btTrunk gtBlockTree : (btbFull <$> branches) + chains = btTrunk gtBlockTree : (btbFull <$> branches) - branches = btBranches gtBlockTree + branches = btBranches gtBlockTree -- | Attach a relative slot time to a fragment of headers using the -- 'singleNodeTestConfig'. Since 'k' is not used for time conversions, @@ -149,50 +181,50 @@ attachTimeUsingTestConfig = attachSlotTimeToFragment singleNodeTestConfig -- intersections, and that it's not the honest peer. prop_densityDisconnectStatic :: Property prop_densityDisconnectStatic = - forAll gen $ \ StaticCandidates {k, sgen, suffixes, loeFrag} -> do + forAll gen $ \StaticCandidates{k, sgen, suffixes, loeFrag} -> do let (disconnect, _) = densityDisconnect sgen k (mkState <$> Map.fromList suffixes) suffixes loeFrag counterexample "it should disconnect some node" (not (null disconnect)) - .&&. 
- counterexample "it should not disconnect the honest peers" - (not $ any isHonestPeerId disconnect) - where - mkState :: AnchoredFragment (HeaderWithTime TestBlock) -> ChainSyncState TestBlock - mkState frag = - ChainSyncState { - csCandidate = frag, - csLatestSlot = SJust (AF.headSlot frag), - csIdling = False + .&&. counterexample + "it should not disconnect the honest peers" + (not $ any isHonestPeerId disconnect) + where + mkState :: AnchoredFragment (HeaderWithTime TestBlock) -> ChainSyncState TestBlock + mkState frag = + ChainSyncState + { csCandidate = frag + , csLatestSlot = SJust (AF.headSlot frag) + , csIdling = False } - gen = do - gt <- genChains (QC.choose (1, 4)) - elements (staticCandidates gt) - -data EvolvingPeer = - EvolvingPeer { - forkPrefix :: AnchoredFragment (Header TestBlock), - forkSuffix :: AnchoredFragment (Header TestBlock), - candidate :: AnchoredFragment (Header TestBlock), - suffix :: [Header TestBlock], - tip :: Tip TestBlock, - prefixSlots :: Int, - forkSlot :: WithOrigin SlotNo + gen = do + gt <- genChains (QC.choose (1, 4)) + elements (staticCandidates gt) + +data EvolvingPeer + = EvolvingPeer + { forkPrefix :: AnchoredFragment (Header TestBlock) + , forkSuffix :: AnchoredFragment (Header TestBlock) + , candidate :: AnchoredFragment (Header TestBlock) + , suffix :: [Header TestBlock] + , tip :: Tip TestBlock + , prefixSlots :: Int + , forkSlot :: WithOrigin SlotNo } deriving Show -data EvolvingPeers = - EvolvingPeers { - k :: SecurityParam, - sgen :: GenesisWindow, - peers :: Peers EvolvingPeer, - loeFrag :: AnchoredFragment (HeaderWithTime TestBlock), - fullTree :: BlockTree TestBlock +data EvolvingPeers + = EvolvingPeers + { k :: SecurityParam + , sgen :: GenesisWindow + , peers :: Peers EvolvingPeer + , loeFrag :: AnchoredFragment (HeaderWithTime TestBlock) + , fullTree :: BlockTree TestBlock } deriving Show -data Evolution = - Evolution { - peers :: Peers EvolvingPeer, - killed :: Set PeerId +data Evolution + = Evolution + { 
peers :: Peers EvolvingPeer + , killed :: Set PeerId } lastSlot :: @@ -203,72 +235,71 @@ lastSlot = fromIntegral . withOrigin 0 unSlotNo . AF.headSlot initCandidates :: GenesisTest TestBlock s -> EvolvingPeers -initCandidates GenesisTest {gtSecurityParam, gtGenesisWindow, gtBlockTree} = - EvolvingPeers { - k = gtSecurityParam, - sgen = gtGenesisWindow, - peers, - loeFrag = AF.Empty AF.AnchorGenesis, - fullTree = gtBlockTree - } - where - peers = peers' [peer trunk (AF.Empty (AF.headAnchor trunk)) (btTrunk gtBlockTree)] (branchPeer <$> branches) - - branchPeer branch = peer (btbPrefix branch) (btbSuffix branch) (btbFull branch) - - peer forkPrefix forkSuffix chain = - EvolvingPeer { - forkPrefix = toHeaders forkPrefix, - forkSuffix = toHeaders forkSuffix, - candidate = AF.Empty AF.AnchorGenesis, - suffix = AF.toOldestFirst headers, - tip = branchTip chain, - prefixSlots = lastSlot forkPrefix, - forkSlot = AF.lastSlot forkSuffix +initCandidates GenesisTest{gtSecurityParam, gtGenesisWindow, gtBlockTree} = + EvolvingPeers + { k = gtSecurityParam + , sgen = gtGenesisWindow + , peers + , loeFrag = AF.Empty AF.AnchorGenesis + , fullTree = gtBlockTree + } + where + peers = + peers' [peer trunk (AF.Empty (AF.headAnchor trunk)) (btTrunk gtBlockTree)] (branchPeer <$> branches) + + branchPeer branch = peer (btbPrefix branch) (btbSuffix branch) (btbFull branch) + + peer forkPrefix forkSuffix chain = + EvolvingPeer + { forkPrefix = toHeaders forkPrefix + , forkSuffix = toHeaders forkSuffix + , candidate = AF.Empty AF.AnchorGenesis + , suffix = AF.toOldestFirst headers + , tip = branchTip chain + , prefixSlots = lastSlot forkPrefix + , forkSlot = AF.lastSlot forkSuffix } - where - headers = toHeaders chain - - trunk = btTrunk gtBlockTree - - branches = btBranches gtBlockTree - -data UpdateEvent = UpdateEvent { - -- | The peer whose candidate was extended in this step - target :: PeerId - -- | The header appended to the candidate of 'target' - , added :: Header TestBlock - -- | 
Peers that have been disconnected in the current step - , killed :: Set PeerId - -- | The GDD data - , bounds :: [(PeerId, DensityBounds TestBlock)] - -- | The current chains - , tree :: BlockTree (Header TestBlock) - , loeFrag :: AnchoredFragment (HeaderWithTime TestBlock) + where + headers = toHeaders chain + + trunk = btTrunk gtBlockTree + + branches = btBranches gtBlockTree + +data UpdateEvent = UpdateEvent + { target :: PeerId + -- ^ The peer whose candidate was extended in this step + , added :: Header TestBlock + -- ^ The header appended to the candidate of 'target' + , killed :: Set PeerId + -- ^ Peers that have been disconnected in the current step + , bounds :: [(PeerId, DensityBounds TestBlock)] + -- ^ The GDD data + , tree :: BlockTree (Header TestBlock) + -- ^ The current chains + , loeFrag :: AnchoredFragment (HeaderWithTime TestBlock) , curChain :: AnchoredFragment (Header TestBlock) } snapshotTree :: Peers EvolvingPeer -> BlockTree (Header TestBlock) -snapshotTree Peers {honestPeers, adversarialPeers} = +snapshotTree Peers{honestPeers, adversarialPeers} = foldr addBranch' (mkTrunk (candidate (getHonestPeer honestPeers))) (candidate <$> adversarialPeers) prettyUpdateEvent :: UpdateEvent -> [String] -prettyUpdateEvent UpdateEvent {target, added, killed, bounds, tree, loeFrag, curChain} = - [ - "Extended " ++ condense target ++ " with " ++ terseHeader added, - " disconnect: " ++ show killed, - " LoE frag: " ++ terseHWTFragment loeFrag, - " selection: " ++ terseHFragment curChain +prettyUpdateEvent UpdateEvent{target, added, killed, bounds, tree, loeFrag, curChain} = + [ "Extended " ++ condense target ++ " with " ++ terseHeader added + , " disconnect: " ++ show killed + , " LoE frag: " ++ terseHWTFragment loeFrag + , " selection: " ++ terseHFragment curChain ] - ++ prettyDensityBounds bounds - ++ "" : prettyBlockTree tree + ++ prettyDensityBounds bounds + ++ "" + : prettyBlockTree tree -data MonotonicityResult = - HonestKilled - | - Nonmonotonic 
UpdateEvent - | - Finished +data MonotonicityResult + = HonestKilled + | Nonmonotonic UpdateEvent + | Finished -- | Check whether the honest peer was killed or a peer's new losing state -- violates monotonicity, i.e. if it was found to be losing before, it shouldn't @@ -292,38 +323,39 @@ updatePeers :: Set PeerId -> UpdateEvent -> Either (MonotonicityResult, Peers EvolvingPeer) Evolution -updatePeers (GenesisWindow sgen) peers killedBefore event@UpdateEvent {target, killed = killedNow} - | HonestPeer 1 `Set.member` killedNow - = Left (HonestKilled, peers) - | not (null violations) - = Left (Nonmonotonic event, peers) - | null remaining - = Left (Finished, peers) - | otherwise - = Right evo - where - -- The peers that were killed in an earlier step but not in the current one - violations = killedBefore \\ killedNow - - -- The new state if no violations were detected - evo@Evolution {peers = Peers {adversarialPeers = remaining}} - | targetExhausted - -- If the target is done, reset the set of killed peers, since other peers - -- may have lost only against the target. - -- Remove the target from the active peers. 
- = Evolution {peers = deletePeer target peers, killed = mempty} - | otherwise - -- Otherwise replace the killed peers with the current set - = Evolution {peers, killed = killedNow} - - -- Whether the extended peer is uninteresting for GDD from now on - targetExhausted = - -- Its fragment cannot be extended anymore, or - null suffix || +updatePeers (GenesisWindow sgen) peers killedBefore event@UpdateEvent{target, killed = killedNow} + | HonestPeer 1 `Set.member` killedNow = + Left (HonestKilled, peers) + | not (null violations) = + Left (Nonmonotonic event, peers) + | null remaining = + Left (Finished, peers) + | otherwise = + Right evo + where + -- The peers that were killed in an earlier step but not in the current one + violations = killedBefore \\ killedNow + + -- The new state if no violations were detected + evo@Evolution{peers = Peers{adversarialPeers = remaining}} + | targetExhausted = + -- If the target is done, reset the set of killed peers, since other peers + -- may have lost only against the target. + -- Remove the target from the active peers. + Evolution{peers = deletePeer target peers, killed = mempty} + | otherwise = + -- Otherwise replace the killed peers with the current set + Evolution{peers, killed = killedNow} + + -- Whether the extended peer is uninteresting for GDD from now on + targetExhausted = + -- Its fragment cannot be extended anymore, or + null suffix + || -- Its candidate is longer than a Genesis window lastSlot candidate - prefixSlots > fromIntegral sgen - Peer {value = EvolvingPeer {candidate, suffix, prefixSlots}} = getPeer target peers + Peer{value = EvolvingPeer{candidate, suffix, prefixSlots}} = getPeer target peers -- | Find the peer whose candidate has the earliest intersection. -- If no peer has reached its fork suffix yet, return the one with the highest slot. 
@@ -333,31 +365,35 @@ updatePeers (GenesisWindow sgen) peers killedBefore event@UpdateEvent {target, k firstBranch :: Peers EvolvingPeer -> Peer EvolvingPeer firstBranch peers = fromMaybe newest $ - minimumBy (compare `on` forkAnchor) <$> nonEmpty (filter hasForked (toList (adversarialPeers'' peers))) - where - newest = maximumBy (compare `on` (AF.headSlot . candidate . value)) (toList (honestPeers'' peers) ++ toList (adversarialPeers'' peers)) - forkAnchor = fromWithOrigin 0 . AF.anchorToSlotNo . AF.anchor . forkSuffix . value - hasForked Peer {value = EvolvingPeer {candidate, forkSlot}} = - AF.headSlot candidate >= forkSlot + minimumBy (compare `on` forkAnchor) + <$> nonEmpty (filter hasForked (toList (adversarialPeers'' peers))) + where + newest = + maximumBy + (compare `on` (AF.headSlot . candidate . value)) + (toList (honestPeers'' peers) ++ toList (adversarialPeers'' peers)) + forkAnchor = fromWithOrigin 0 . AF.anchorToSlotNo . AF.anchor . forkSuffix . value + hasForked Peer{value = EvolvingPeer{candidate, forkSlot}} = + AF.headSlot candidate >= forkSlot -- | Determine the immutable tip by computing the latest point before the fork intesection -- for all peers, and then taking the earliest among the results. immutableTip :: Peers EvolvingPeer -> AF.Point (Header TestBlock) immutableTip peers = minimum (lastHonest <$> toList (adversarialPeers'' peers)) - where - lastHonest Peer {value = EvolvingPeer {candidate, forkSlot = NotOrigin forkSlot}} = - AF.headPoint $ - AF.dropWhileNewest (\ b -> blockSlot b >= forkSlot) candidate - lastHonest _ = GenesisPoint + where + lastHonest Peer{value = EvolvingPeer{candidate, forkSlot = NotOrigin forkSlot}} = + AF.headPoint $ + AF.dropWhileNewest (\b -> blockSlot b >= forkSlot) candidate + lastHonest _ = GenesisPoint -- | Take one block off the peer's suffix and append it to the candidate fragment. -- -- Since we don't remove the honest peer when it's exhausted, this may be called with an empty suffix. 
movePeer :: EvolvingPeer -> (EvolvingPeer, Maybe (Header TestBlock)) movePeer = \case - peer@EvolvingPeer {candidate, suffix = h : t} -> - (peer {candidate = candidate AF.:> h, suffix = t}, Just h) + peer@EvolvingPeer{candidate, suffix = h : t} -> + (peer{candidate = candidate AF.:> h, suffix = t}, Just h) peer -> (peer, Nothing) -- | Repeatedly run the GDD, each time updating the candidate fragment of a @@ -375,103 +411,105 @@ movePeer = \case evolveBranches :: EvolvingPeers -> Gen (MonotonicityResult, EvolvingPeers, [UpdateEvent]) -evolveBranches EvolvingPeers {k, sgen, peers = initialPeers, fullTree} = - step [] Evolution {peers = initialPeers, killed = mempty} - where - step events Evolution {peers = ps, killed = killedBefore} = do - (target, nextPeers, added) <- unsafeMapSuchThatJust $ do - -- Select a random peer - pid <- elements ids - pure $ do - -- Add a block to the candidate. If the peer has no more blocks, - -- this returns 'Nothing' and the generator retries. - (nextPeers, added) <- sequence (updatePeer movePeer pid ps) - pure (pid, nextPeers, added) - let - -- Compute the selection. - curChain = selection (immutableTip ps) (firstBranch ps) - candidates = candidate . value <$> toMap nextPeers - states = - candidates <&> \ csCandidate -> - ChainSyncState { - csCandidate = attachTimeUsingTestConfig csCandidate, - csIdling = False, - csLatestSlot = SJust (AF.headSlot csCandidate) +evolveBranches EvolvingPeers{k, sgen, peers = initialPeers, fullTree} = + step [] Evolution{peers = initialPeers, killed = mempty} + where + step events Evolution{peers = ps, killed = killedBefore} = do + (target, nextPeers, added) <- unsafeMapSuchThatJust $ do + -- Select a random peer + pid <- elements ids + pure $ do + -- Add a block to the candidate. If the peer has no more blocks, + -- this returns 'Nothing' and the generator retries. + (nextPeers, added) <- sequence (updatePeer movePeer pid ps) + pure (pid, nextPeers, added) + let + -- Compute the selection. 
+ curChain = selection (immutableTip ps) (firstBranch ps) + candidates = candidate . value <$> toMap nextPeers + states = + candidates <&> \csCandidate -> + ChainSyncState + { csCandidate = attachTimeUsingTestConfig csCandidate + , csIdling = False + , csLatestSlot = SJust (AF.headSlot csCandidate) } - -- Run GDD. - (loeFrag, suffixes) = - sharedCandidatePrefix - (attachTimeUsingTestConfig curChain) - (Map.toList $ - fmap attachTimeUsingTestConfig candidates - ) - (killedNow, bounds) = first Set.fromList $ densityDisconnect sgen k states suffixes loeFrag - event = UpdateEvent { - target, - added, - killed = killedNow, - bounds, - tree = snapshotTree nextPeers, - loeFrag, - curChain + -- Run GDD. + (loeFrag, suffixes) = + sharedCandidatePrefix + (attachTimeUsingTestConfig curChain) + ( Map.toList $ + fmap attachTimeUsingTestConfig candidates + ) + (killedNow, bounds) = first Set.fromList $ densityDisconnect sgen k states suffixes loeFrag + event = + UpdateEvent + { target + , added + , killed = killedNow + , bounds + , tree = snapshotTree nextPeers + , loeFrag + , curChain } - newEvents = event : events - -- Check the termination condition and remove exhausted peers. - updated = updatePeers sgen nextPeers killedBefore event - either (pure . result newEvents loeFrag) (step newEvents) updated - where - result evs f (res, final) = (res, EvolvingPeers {k, sgen, peers = final, loeFrag = f, fullTree}, reverse evs) + newEvents = event : events + -- Check the termination condition and remove exhausted peers. + updated = updatePeers sgen nextPeers killedBefore event + either (pure . result newEvents loeFrag) (step newEvents) updated + where + result evs f (res, final) = (res, EvolvingPeers{k, sgen, peers = final, loeFrag = f, fullTree}, reverse evs) - -- Take k blocks after the immutable tip on the first fork. 
- selection imm Peer {value = EvolvingPeer {candidate}} = - case AF.splitAfterPoint candidate imm of - Just (_, s) -> AF.takeOldest (fromIntegral $ unNonZero k') s - Nothing -> error "immutable tip not on candidate" + -- Take k blocks after the immutable tip on the first fork. + selection imm Peer{value = EvolvingPeer{candidate}} = + case AF.splitAfterPoint candidate imm of + Just (_, s) -> AF.takeOldest (fromIntegral $ unNonZero k') s + Nothing -> error "immutable tip not on candidate" - ids = toList (getPeerIds ps) + ids = toList (getPeerIds ps) - SecurityParam k' = k + SecurityParam k' = k peerInfo :: EvolvingPeers -> [String] -peerInfo EvolvingPeers {k = SecurityParam k, sgen = GenesisWindow sgen, loeFrag} = - [ - "k: " <> show k, - "sgen: " <> show sgen, - "loeFrag: " <> terseHWTFragment loeFrag +peerInfo EvolvingPeers{k = SecurityParam k, sgen = GenesisWindow sgen, loeFrag} = + [ "k: " <> show k + , "sgen: " <> show sgen + , "loeFrag: " <> terseHWTFragment loeFrag ] -- | Tests that when GDD disconnects a peer, it continues to disconnect it when -- its candidate fragment is extended. prop_densityDisconnectMonotonic :: Property prop_densityDisconnectMonotonic = - forAllBlind gen $ \ (result, final, events) -> + forAllBlind gen $ \(result, final, events) -> appEndo (foldMap (Endo . counterexample) (peerInfo final)) $ - check final events result - where - check final events = \case - HonestKilled -> withEvents $ counterexample "Honest peer was killed" False - Nonmonotonic event -> do - let msg = "Peer went from losing to remaining" - withEvents $ counterexample (catLines (msg : prettyUpdateEvent event)) False - Finished -> property True - where - withEvents | debug = counterexample (catLines debugInfo) - | otherwise = id - - debugInfo = - "Event log:" : ((++ [""]) . 
prettyUpdateEvent =<< events) ++ - ["k: " ++ show k'] ++ - ("Full tree:" : prettyBlockTree (fullTree final) ++ [""]) - - EvolvingPeers {k = SecurityParam k'} = final - - catLines = intercalate "\n" - - gen = do - gt <- genChains (QC.choose (1, 4)) - evolveBranches (initCandidates gt) - - debug = True + check final events result + where + check final events = \case + HonestKilled -> withEvents $ counterexample "Honest peer was killed" False + Nonmonotonic event -> do + let msg = "Peer went from losing to remaining" + withEvents $ counterexample (catLines (msg : prettyUpdateEvent event)) False + Finished -> property True + where + withEvents + | debug = counterexample (catLines debugInfo) + | otherwise = id + + debugInfo = + "Event log:" + : ((++ [""]) . prettyUpdateEvent =<< events) + ++ ["k: " ++ show k'] + ++ ("Full tree:" : prettyBlockTree (fullTree final) ++ [""]) + + EvolvingPeers{k = SecurityParam k'} = final + + catLines = intercalate "\n" + + gen = do + gt <- genChains (QC.choose (1, 4)) + evolveBranches (initCandidates gt) + + debug = True -- | Tests that a GDD disconnection re-triggers chain selection, i.e. 
when the current -- selection is blocked by LoE, and the leashing adversary reveals it is not dense enough, @@ -480,65 +518,66 @@ prop_densityDisconnectTriggersChainSel :: Property prop_densityDisconnectTriggersChainSel = forAllGenesisTest ( do - gt@GenesisTest {gtBlockTree} <- genChains (pure 1) + gt@GenesisTest{gtBlockTree} <- genChains (pure 1) let ps = lowDensitySchedule gtBlockTree cls = classifiers gt if genesisWindowAfterIntersection cls then pure $ gt $> ps else discard ) - - (defaultSchedulerConfig {scEnableLoE = True}) - + (defaultSchedulerConfig{scEnableLoE = True}) shrinkByRemovingAdversaries - - ( \GenesisTest {gtBlockTree, gtSchedule} stateView@StateView {svTipBlock} -> + ( \GenesisTest{gtBlockTree, gtSchedule} stateView@StateView{svTipBlock} -> let othersCount = Map.size (adversarialPeers $ psSchedule gtSchedule) exnCorrect = case exceptionsByComponent ChainSyncClient stateView of - [fromException -> Just DensityTooLow] -> True + [fromException -> Just DensityTooLow] -> True [fromException -> Just CandidateTooSparse{}] -> True - [] | othersCount == 0 -> True - _ -> False + [] | othersCount == 0 -> True + _ -> False tipPointCorrect = Just (getTrunkTip gtBlockTree) == svTipBlock - in counterexample "Unexpected exceptions" exnCorrect - .&&. - counterexample "The tip of the final selection is not the expected one" tipPointCorrect + in + counterexample "Unexpected exceptions" exnCorrect + .&&. counterexample "The tip of the final selection is not the expected one" tipPointCorrect ) - - where - -- 1. The adversary advertises blocks up to the intersection. - -- 2. The honest node advertises all its chain, which is - -- long enough to be blocked by the LoE. - -- 3. The adversary gives a block after the genesis window, - -- which should allow the GDD to realize that the chain - -- is not dense enough, and that the whole of the honest - -- chain should be selected. 
- lowDensitySchedule :: HasHeader blk => BlockTree blk -> PointSchedule blk - lowDensitySchedule tree = - let trunkTip = getTrunkTip tree - branch = getOnlyBranch tree - intersect = case btbPrefix branch of - (AF.Empty _) -> Origin - (_ AF.:> tipBlock) -> At tipBlock - advTip = getOnlyBranchTip tree - in PointSchedule { - psSchedule = peers' - -- Eagerly serve the honest tree, but after the adversary has - -- advertised its chain up to the intersection. - [[(Time 0, scheduleTipPoint trunkTip), - (Time 0.5, scheduleHeaderPoint trunkTip), - (Time 0.5, scheduleBlockPoint trunkTip) - ]] - -- Advertise the alternate branch early, but wait for the honest - -- node to have served its chain before disclosing the alternate - -- branch is not dense enough. - [[(Time 0, scheduleTipPoint advTip), - (Time 0, ScheduleHeaderPoint intersect), - (Time 0, ScheduleBlockPoint intersect), - (Time 1, scheduleHeaderPoint advTip), - (Time 1, scheduleBlockPoint advTip) - ]], - psStartOrder = [], - psMinEndTime = Time 0 + where + -- 1. The adversary advertises blocks up to the intersection. + -- 2. The honest node advertises all its chain, which is + -- long enough to be blocked by the LoE. + -- 3. The adversary gives a block after the genesis window, + -- which should allow the GDD to realize that the chain + -- is not dense enough, and that the whole of the honest + -- chain should be selected. + lowDensitySchedule :: HasHeader blk => BlockTree blk -> PointSchedule blk + lowDensitySchedule tree = + let trunkTip = getTrunkTip tree + branch = getOnlyBranch tree + intersect = case btbPrefix branch of + (AF.Empty _) -> Origin + (_ AF.:> tipBlock) -> At tipBlock + advTip = getOnlyBranchTip tree + in PointSchedule + { psSchedule = + peers' + -- Eagerly serve the honest tree, but after the adversary has + -- advertised its chain up to the intersection. 
+ [ + [ (Time 0, scheduleTipPoint trunkTip) + , (Time 0.5, scheduleHeaderPoint trunkTip) + , (Time 0.5, scheduleBlockPoint trunkTip) + ] + ] + -- Advertise the alternate branch early, but wait for the honest + -- node to have served its chain before disclosing the alternate + -- branch is not dense enough. + [ + [ (Time 0, scheduleTipPoint advTip) + , (Time 0, ScheduleHeaderPoint intersect) + , (Time 0, ScheduleBlockPoint intersect) + , (Time 1, scheduleHeaderPoint advTip) + , (Time 1, scheduleBlockPoint advTip) + ] + ] + , psStartOrder = [] + , psMinEndTime = Time 0 } diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LoE.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LoE.hs index 8fb3957b9a..907c752a59 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LoE.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LoE.hs @@ -6,40 +6,47 @@ module Test.Consensus.Genesis.Tests.LoE (tests) where -import Data.Functor (($>)) -import Ouroboros.Consensus.Util.IOLike (Time (Time), fromException) -import Ouroboros.Network.AnchoredFragment (HasHeader (..)) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Driver.Limits - (ProtocolLimitFailure (ExceededTimeLimit)) -import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..)) -import Test.Consensus.Genesis.Setup -import Test.Consensus.PeerSimulator.Run (SchedulerConfig (..), - defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Peers (peers') -import Test.Consensus.PointSchedule.Shrinking (shrinkPeerSchedules) -import Test.Consensus.PointSchedule.SinglePeer (scheduleBlockPoint, - scheduleHeaderPoint, scheduleTipPoint) -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.IOLike () -import Test.Util.PartialAccessors -import 
Test.Util.TestEnv (adjustQuickCheckMaxSize, - adjustQuickCheckTests) +import Data.Functor (($>)) +import Ouroboros.Consensus.Util.IOLike (Time (Time), fromException) +import Ouroboros.Network.AnchoredFragment (HasHeader (..)) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Driver.Limits + ( ProtocolLimitFailure (ExceededTimeLimit) + ) +import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..)) +import Test.Consensus.Genesis.Setup +import Test.Consensus.PeerSimulator.Run + ( SchedulerConfig (..) + , defaultSchedulerConfig + ) +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers (peers') +import Test.Consensus.PointSchedule.Shrinking (shrinkPeerSchedules) +import Test.Consensus.PointSchedule.SinglePeer + ( scheduleBlockPoint + , scheduleHeaderPoint + , scheduleTipPoint + ) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.IOLike () +import Test.Util.PartialAccessors +import Test.Util.TestEnv + ( adjustQuickCheckMaxSize + , adjustQuickCheckTests + ) tests :: TestTree tests = adjustQuickCheckTests (* 10) $ - testGroup - "LoE" - [ - adjustQuickCheckMaxSize (`div` 5) $ - testProperty "adversary does not hit timeouts" (prop_adversaryHitsTimeouts False), - adjustQuickCheckMaxSize (`div` 5) $ - testProperty "adversary hits timeouts" (prop_adversaryHitsTimeouts True) - ] + testGroup + "LoE" + [ adjustQuickCheckMaxSize (`div` 5) $ + testProperty "adversary does not hit timeouts" (prop_adversaryHitsTimeouts False) + , adjustQuickCheckMaxSize (`div` 5) $ + testProperty "adversary hits timeouts" (prop_adversaryHitsTimeouts True) + ] -- | Tests that the selection advances in presence of the LoE when a peer is -- killed by something that is not LoE-aware, eg. the timeouts. 
This test @@ -59,56 +66,59 @@ prop_adversaryHitsTimeouts timeoutsEnabled = noShrinking $ forAllGenesisTest ( do - gt@GenesisTest {gtBlockTree} <- genChains (pure 1) + gt@GenesisTest{gtBlockTree} <- genChains (pure 1) let ps = delaySchedule gtBlockTree pure $ gt $> ps ) -- NOTE: Crucially, there must be timeouts for this test. ( defaultSchedulerConfig - { scEnableChainSyncTimeouts = timeoutsEnabled, - scEnableLoE = True, - scEnableLoP = False + { scEnableChainSyncTimeouts = timeoutsEnabled + , scEnableLoE = True + , scEnableLoP = False } ) shrinkPeerSchedules - ( \GenesisTest {gtBlockTree} stateView@StateView {svSelectedChain} -> - let -- The tip of the blocktree trunk. - treeTipPoint = AF.headPoint $ btTrunk gtBlockTree - -- The tip of the selection. - selectedTipPoint = AF.castPoint $ AF.headPoint svSelectedChain - -- If timeouts are enabled, then the adversary should have been - -- killed and the selection should be the whole trunk. - selectedCorrect = timeoutsEnabled == (treeTipPoint == selectedTipPoint) - -- If timeouts are enabled, then we expect exactly one - -- `ExceededTimeLimit` exception in the adversary's ChainSync. - exceptionsCorrect = case exceptionsByComponent ChainSyncClient stateView of - [] -> not timeoutsEnabled - [fromException -> Just (ExceededTimeLimit _)] -> timeoutsEnabled - _ -> False - in selectedCorrect && exceptionsCorrect + ( \GenesisTest{gtBlockTree} stateView@StateView{svSelectedChain} -> + let + -- The tip of the blocktree trunk. + treeTipPoint = AF.headPoint $ btTrunk gtBlockTree + -- The tip of the selection. + selectedTipPoint = AF.castPoint $ AF.headPoint svSelectedChain + -- If timeouts are enabled, then the adversary should have been + -- killed and the selection should be the whole trunk. + selectedCorrect = timeoutsEnabled == (treeTipPoint == selectedTipPoint) + -- If timeouts are enabled, then we expect exactly one + -- `ExceededTimeLimit` exception in the adversary's ChainSync. 
+ exceptionsCorrect = case exceptionsByComponent ChainSyncClient stateView of + [] -> not timeoutsEnabled + [fromException -> Just (ExceededTimeLimit _)] -> timeoutsEnabled + _ -> False + in + selectedCorrect && exceptionsCorrect ) - where - delaySchedule :: HasHeader blk => BlockTree blk -> PointSchedule blk - delaySchedule tree = - let trunkTip = getTrunkTip tree - branch = getOnlyBranch tree - intersectM = case btbPrefix branch of - (AF.Empty _) -> Nothing - (_ AF.:> tipBlock) -> Just tipBlock - branchTip = getOnlyBranchTip tree - psSchedule = peers' + where + delaySchedule :: HasHeader blk => BlockTree blk -> PointSchedule blk + delaySchedule tree = + let trunkTip = getTrunkTip tree + branch = getOnlyBranch tree + intersectM = case btbPrefix branch of + (AF.Empty _) -> Nothing + (_ AF.:> tipBlock) -> Just tipBlock + branchTip = getOnlyBranchTip tree + psSchedule = + peers' -- Eagerly serve the honest tree, but after the adversary has -- advertised its chain. [ (Time 0, scheduleTipPoint trunkTip) : case intersectM of Nothing -> - [ (Time 0.5, scheduleHeaderPoint trunkTip), - (Time 0.5, scheduleBlockPoint trunkTip) + [ (Time 0.5, scheduleHeaderPoint trunkTip) + , (Time 0.5, scheduleBlockPoint trunkTip) ] Just intersect -> - [ (Time 0.5, scheduleHeaderPoint intersect), - (Time 0.5, scheduleBlockPoint intersect), - (Time 5, scheduleHeaderPoint trunkTip), - (Time 5, scheduleBlockPoint trunkTip) + [ (Time 0.5, scheduleHeaderPoint intersect) + , (Time 0.5, scheduleBlockPoint intersect) + , (Time 5, scheduleHeaderPoint trunkTip) + , (Time 5, scheduleBlockPoint trunkTip) ] ] -- The one adversarial peer advertises and serves up to the @@ -118,10 +128,10 @@ prop_adversaryHitsTimeouts timeoutsEnabled = Nothing -> [] -- the alternate branch forks from `intersect` Just intersect -> - [ (Time 0, scheduleHeaderPoint intersect), - (Time 0, scheduleBlockPoint intersect) + [ (Time 0, scheduleHeaderPoint intersect) + , (Time 0, scheduleBlockPoint intersect) ] ] - -- We want to 
wait more than the short wait timeout - psMinEndTime = Time 11 - in PointSchedule {psSchedule, psStartOrder = [], psMinEndTime} + -- We want to wait more than the short wait timeout + psMinEndTime = Time 11 + in PointSchedule{psSchedule, psStartOrder = [], psMinEndTime} diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LoP.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LoP.hs index 56049d2a1e..98264f4c4c 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LoP.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LoP.hs @@ -6,58 +6,74 @@ module Test.Consensus.Genesis.Tests.LoP (tests) where -import Data.Functor (($>)) -import Data.Ratio ((%)) -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CSClient -import Ouroboros.Consensus.Util.IOLike (DiffTime, Time (Time), - fromException) -import Ouroboros.Consensus.Util.LeakyBucket - (secondsRationalToDiffTime) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - HasHeader) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..)) -import Test.Consensus.Genesis.Setup -import Test.Consensus.PeerSimulator.Run (SchedulerConfig (..), - defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Peers (peers', peersOnlyAdversary, - peersOnlyHonest) -import Test.Consensus.PointSchedule.Shrinking (shrinkPeerSchedules) -import Test.Consensus.PointSchedule.SinglePeer (scheduleBlockPoint, - scheduleHeaderPoint, scheduleTipPoint) -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.IOLike () -import Test.Util.PartialAccessors -import Test.Util.TestEnv (adjustQuickCheckMaxSize, - adjustQuickCheckTests) +import Data.Functor (($>)) +import Data.Ratio ((%)) +import 
Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CSClient +import Ouroboros.Consensus.Util.IOLike + ( DiffTime + , Time (Time) + , fromException + ) +import Ouroboros.Consensus.Util.LeakyBucket + ( secondsRationalToDiffTime + ) +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , HasHeader + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..)) +import Test.Consensus.Genesis.Setup +import Test.Consensus.PeerSimulator.Run + ( SchedulerConfig (..) + , defaultSchedulerConfig + ) +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers + ( peers' + , peersOnlyAdversary + , peersOnlyHonest + ) +import Test.Consensus.PointSchedule.Shrinking (shrinkPeerSchedules) +import Test.Consensus.PointSchedule.SinglePeer + ( scheduleBlockPoint + , scheduleHeaderPoint + , scheduleTipPoint + ) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.IOLike () +import Test.Util.PartialAccessors +import Test.Util.TestEnv + ( adjustQuickCheckMaxSize + , adjustQuickCheckTests + ) tests :: TestTree tests = adjustQuickCheckTests (* 10) $ - testGroup - "LoP" - [ -- \| NOTE: Running the test that must _not_ timeout (@prop_smoke False@) takes - -- significantly more time than the one that does. This is because the former - -- does all the computation (serving the headers, validating them, serving the - -- block, validating them) while the former does nothing, because it timeouts - -- before reaching the last tick of the point schedule. 
- adjustQuickCheckMaxSize (`div` 5) $
- testProperty "wait just enough" (prop_wait False),
- testProperty "wait too much" (prop_wait True),
- adjustQuickCheckMaxSize (`div` 5) $
- testProperty "wait behind forecast horizon" prop_waitBehindForecastHorizon,
- adjustQuickCheckMaxSize (`div` 5) $
- testProperty "serve just fast enough" (prop_serve False),
- adjustQuickCheckMaxSize (`div` 5) $
- testProperty "serve too slow" (prop_serve True),
- adjustQuickCheckMaxSize (`div` 5) $
- testProperty "delaying attack succeeds without LoP" (prop_delayAttack False),
- adjustQuickCheckMaxSize (`div` 5) $
- testProperty "delaying attack fails with LoP" (prop_delayAttack True)
- ]
+ testGroup
+ "LoP"
+ [ -- \| NOTE: Running the test that must _not_ timeout (@prop_wait False@) takes
+ -- significantly more time than the one that does. This is because the former
+ -- does all the computation (serving the headers, validating them, serving the
+ -- blocks, validating them) while the latter does nothing, because it times out
+ -- before reaching the last tick of the point schedule.
+ adjustQuickCheckMaxSize (`div` 5) $
+ testProperty "wait just enough" (prop_wait False)
+ , testProperty "wait too much" (prop_wait True)
+ , adjustQuickCheckMaxSize (`div` 5) $
+ testProperty "wait behind forecast horizon" prop_waitBehindForecastHorizon
+ , adjustQuickCheckMaxSize (`div` 5) $
+ testProperty "serve just fast enough" (prop_serve False)
+ , adjustQuickCheckMaxSize (`div` 5) $
+ testProperty "serve too slow" (prop_serve True)
+ , adjustQuickCheckMaxSize (`div` 5) $
+ testProperty "delaying attack succeeds without LoP" (prop_delayAttack False)
+ , adjustQuickCheckMaxSize (`div` 5) $
+ testProperty "delaying attack fails with LoP" (prop_delayAttack True)
+ ]

-- | Simple test in which we connect to only one peer, who advertises the tip of
-- the block tree trunk and then does nothing. 
If the given boolean, @@ -70,32 +86,32 @@ prop_wait :: Bool -> Property prop_wait mustTimeout = forAllGenesisTest ( do - gt@GenesisTest {gtBlockTree} <- genChains (pure 0) + gt@GenesisTest{gtBlockTree} <- genChains (pure 0) let ps = dullSchedule 10 (btTrunk gtBlockTree) - gt' = gt {gtLoPBucketParams = LoPBucketParams {lbpCapacity = 10, lbpRate = 1}} + gt' = gt{gtLoPBucketParams = LoPBucketParams{lbpCapacity = 10, lbpRate = 1}} pure $ gt' $> ps ) -- NOTE: Crucially, there must not be timeouts for this test. - (defaultSchedulerConfig {scEnableChainSyncTimeouts = False, scEnableLoP = True}) + (defaultSchedulerConfig{scEnableChainSyncTimeouts = False, scEnableLoP = True}) shrinkPeerSchedules ( \_ stateView -> case exceptionsByComponent ChainSyncClient stateView of - [] -> not mustTimeout + [] -> not mustTimeout [fromException -> Just CSClient.EmptyBucket] -> mustTimeout - _ -> False + _ -> False ) - where - dullSchedule :: (HasHeader blk) => DiffTime -> AnchoredFragment blk -> PointSchedule blk - dullSchedule _ (AF.Empty _) = error "requires a non-empty block tree" - dullSchedule timeout (_ AF.:> tipBlock) = - let offset :: DiffTime = if mustTimeout then 1 else -1 - in PointSchedule - { psSchedule = - (if mustTimeout then peersOnlyAdversary else peersOnlyHonest) + where + dullSchedule :: HasHeader blk => DiffTime -> AnchoredFragment blk -> PointSchedule blk + dullSchedule _ (AF.Empty _) = error "requires a non-empty block tree" + dullSchedule timeout (_ AF.:> tipBlock) = + let offset :: DiffTime = if mustTimeout then 1 else -1 + in PointSchedule + { psSchedule = + (if mustTimeout then peersOnlyAdversary else peersOnlyHonest) [(Time 0, scheduleTipPoint tipBlock)] - , psStartOrder = [] - , psMinEndTime = Time $ timeout + offset - } + , psStartOrder = [] + , psMinEndTime = Time $ timeout + offset + } -- | Simple test in which we connect to only one peer, who advertises the tip of -- the block tree trunk, serves all of its headers, and then does nothing. 
@@ -108,31 +124,32 @@ prop_waitBehindForecastHorizon :: Property prop_waitBehindForecastHorizon = forAllGenesisTest ( do - gt@GenesisTest {gtBlockTree} <- genChains (pure 0) + gt@GenesisTest{gtBlockTree} <- genChains (pure 0) let ps = dullSchedule (btTrunk gtBlockTree) - gt' = gt {gtLoPBucketParams = LoPBucketParams {lbpCapacity = 10, lbpRate = 1}} + gt' = gt{gtLoPBucketParams = LoPBucketParams{lbpCapacity = 10, lbpRate = 1}} pure $ gt' $> ps ) -- NOTE: Crucially, there must not be timeouts for this test. - (defaultSchedulerConfig {scEnableChainSyncTimeouts = False, scEnableLoP = True}) + (defaultSchedulerConfig{scEnableChainSyncTimeouts = False, scEnableLoP = True}) shrinkPeerSchedules ( \_ stateView -> case exceptionsByComponent ChainSyncClient stateView of [] -> True - _ -> False + _ -> False ) - where - dullSchedule :: (HasHeader blk) => AnchoredFragment blk -> PointSchedule blk - dullSchedule (AF.Empty _) = error "requires a non-empty block tree" - dullSchedule (_ AF.:> tipBlock) = - PointSchedule - { psSchedule = peersOnlyHonest $ + where + dullSchedule :: HasHeader blk => AnchoredFragment blk -> PointSchedule blk + dullSchedule (AF.Empty _) = error "requires a non-empty block tree" + dullSchedule (_ AF.:> tipBlock) = + PointSchedule + { psSchedule = + peersOnlyHonest $ [ (Time 0, scheduleTipPoint tipBlock) , (Time 0, scheduleHeaderPoint tipBlock) ] - , psStartOrder = [] - , psMinEndTime = Time 11 - } + , psStartOrder = [] + , psMinEndTime = Time 11 + } -- | Simple test where we serve all the chain at regular intervals, but just -- slow enough to lose against the LoP bucket. 
@@ -155,51 +172,53 @@ prop_serve :: Bool -> Property prop_serve mustTimeout = forAllGenesisTest ( do - gt@GenesisTest {gtBlockTree} <- genChains (pure 0) + gt@GenesisTest{gtBlockTree} <- genChains (pure 0) let lbpRate = borderlineRate (AF.length (btTrunk gtBlockTree)) ps = makeSchedule (btTrunk gtBlockTree) - gt' = gt {gtLoPBucketParams = LoPBucketParams {lbpCapacity, lbpRate}} + gt' = gt{gtLoPBucketParams = LoPBucketParams{lbpCapacity, lbpRate}} pure $ gt' $> ps ) -- NOTE: Crucially, there must not be timeouts for this test. - (defaultSchedulerConfig {scEnableChainSyncTimeouts = False, scEnableLoP = True}) + (defaultSchedulerConfig{scEnableChainSyncTimeouts = False, scEnableLoP = True}) shrinkPeerSchedules ( \_ stateView -> case exceptionsByComponent ChainSyncClient stateView of - [] -> not mustTimeout + [] -> not mustTimeout [fromException -> Just CSClient.EmptyBucket] -> mustTimeout - _ -> False + _ -> False ) - where - lbpCapacity :: Integer = 10 - timeBetweenBlocks :: Rational = 0.100 + where + lbpCapacity :: Integer = 10 + timeBetweenBlocks :: Rational = 0.100 - -- \| Rate that is almost the limit between surviving and succumbing to the - -- LoP bucket, given a number of blocks. One should not exactly use the - -- limit rate because it is unspecified what would happen in IOSim and it - -- would simply be flakey in IO. - borderlineRate :: (Integral n) => n -> Rational - borderlineRate numberOfBlocks = - (if mustTimeout then (105 % 100) else (95 % 100)) - * ((fromIntegral lbpCapacity + fromIntegral numberOfBlocks - 1) / (timeBetweenBlocks * fromIntegral numberOfBlocks)) + -- \| Rate that is almost the limit between surviving and succumbing to the + -- LoP bucket, given a number of blocks. One should not exactly use the + -- limit rate because it is unspecified what would happen in IOSim and it + -- would simply be flakey in IO. 
+ borderlineRate :: Integral n => n -> Rational
+ borderlineRate numberOfBlocks =
+ (if mustTimeout then (105 % 100) else (95 % 100))
+ * ( (fromIntegral lbpCapacity + fromIntegral numberOfBlocks - 1)
+ / (timeBetweenBlocks * fromIntegral numberOfBlocks)
+ )

- -- \| Make a schedule serving the given fragment with regularity, one block
- -- every 'timeBetweenBlocks'. NOTE: We must do something at @Time 0@
- -- otherwise the others times will be shifted such that the first one is 0.
- makeSchedule :: (HasHeader blk) => AnchoredFragment blk -> PointSchedule blk
- makeSchedule (AF.Empty _) = error "fragment must have at least one block"
- makeSchedule fragment@(_ AF.:> tipBlock) =
- PointSchedule {
- psSchedule =
- (if mustTimeout then peersOnlyAdversary else peersOnlyHonest) $
- (Time 0, scheduleTipPoint tipBlock)
- : ( flip concatMap (zip [1 ..] (AF.toOldestFirst fragment)) $ \(i, block) ->
- [ (Time (secondsRationalToDiffTime (i * timeBetweenBlocks)), scheduleHeaderPoint block),
- (Time (secondsRationalToDiffTime (i * timeBetweenBlocks)), scheduleBlockPoint block)
- ]
- ),
- psStartOrder = [],
- psMinEndTime = Time 0
+ -- \| Make a schedule serving the given fragment with regularity, one block
+ -- every 'timeBetweenBlocks'. NOTE: We must do something at @Time 0@
+ -- otherwise the other times will be shifted such that the first one is 0.
+ makeSchedule :: HasHeader blk => AnchoredFragment blk -> PointSchedule blk
+ makeSchedule (AF.Empty _) = error "fragment must have at least one block"
+ makeSchedule fragment@(_ AF.:> tipBlock) =
+ PointSchedule
+ { psSchedule =
+ (if mustTimeout then peersOnlyAdversary else peersOnlyHonest) $
+ (Time 0, scheduleTipPoint tipBlock)
+ : ( flip concatMap (zip [1 ..] 
(AF.toOldestFirst fragment)) $ \(i, block) -> + [ (Time (secondsRationalToDiffTime (i * timeBetweenBlocks)), scheduleHeaderPoint block) + , (Time (secondsRationalToDiffTime (i * timeBetweenBlocks)), scheduleBlockPoint block) + ] + ) + , psStartOrder = [] + , psMinEndTime = Time 0 } -- NOTE: Same as 'LoE.prop_adversaryHitsTimeouts' with LoP instead of timeouts. @@ -211,57 +230,60 @@ prop_delayAttack lopEnabled = noShrinking $ forAllGenesisTest ( do - gt@GenesisTest {gtBlockTree} <- genChains (pure 1) - let gt' = gt {gtLoPBucketParams = LoPBucketParams {lbpCapacity = 10, lbpRate = 1}} + gt@GenesisTest{gtBlockTree} <- genChains (pure 1) + let gt' = gt{gtLoPBucketParams = LoPBucketParams{lbpCapacity = 10, lbpRate = 1}} ps = delaySchedule gtBlockTree pure $ gt' $> ps ) -- NOTE: Crucially, there must not be timeouts for this test. ( defaultSchedulerConfig - { scEnableChainSyncTimeouts = False, - scEnableLoE = True, - scEnableLoP = lopEnabled + { scEnableChainSyncTimeouts = False + , scEnableLoE = True + , scEnableLoP = lopEnabled } ) shrinkPeerSchedules - ( \GenesisTest {gtBlockTree} stateView@StateView {svSelectedChain} -> - let -- The tip of the blocktree trunk. - treeTipPoint = AF.headPoint $ btTrunk gtBlockTree - -- The tip of the selection. - selectedTipPoint = AF.castPoint $ AF.headPoint svSelectedChain - -- If LoP is enabled, then the adversary should have been killed - -- and the selection should be the whole trunk. - selectedCorrect = lopEnabled == (treeTipPoint == selectedTipPoint) - -- If LoP is enabled, then we expect exactly one `EmptyBucket` - -- exception in the adversary's ChainSync. - exceptionsCorrect = case exceptionsByComponent ChainSyncClient stateView of - [] -> not lopEnabled - [fromException -> Just CSClient.EmptyBucket] -> lopEnabled - _ -> False - in selectedCorrect && exceptionsCorrect + ( \GenesisTest{gtBlockTree} stateView@StateView{svSelectedChain} -> + let + -- The tip of the blocktree trunk. 
+ treeTipPoint = AF.headPoint $ btTrunk gtBlockTree + -- The tip of the selection. + selectedTipPoint = AF.castPoint $ AF.headPoint svSelectedChain + -- If LoP is enabled, then the adversary should have been killed + -- and the selection should be the whole trunk. + selectedCorrect = lopEnabled == (treeTipPoint == selectedTipPoint) + -- If LoP is enabled, then we expect exactly one `EmptyBucket` + -- exception in the adversary's ChainSync. + exceptionsCorrect = case exceptionsByComponent ChainSyncClient stateView of + [] -> not lopEnabled + [fromException -> Just CSClient.EmptyBucket] -> lopEnabled + _ -> False + in + selectedCorrect && exceptionsCorrect ) - where - delaySchedule :: (HasHeader blk) => BlockTree blk -> PointSchedule blk - delaySchedule tree = - let trunkTip = getTrunkTip tree - branch = getOnlyBranch tree - intersectM = case btbPrefix branch of - (AF.Empty _) -> Nothing - (_ AF.:> tipBlock) -> Just tipBlock - branchTip = getOnlyBranchTip tree - psSchedule = peers' + where + delaySchedule :: HasHeader blk => BlockTree blk -> PointSchedule blk + delaySchedule tree = + let trunkTip = getTrunkTip tree + branch = getOnlyBranch tree + intersectM = case btbPrefix branch of + (AF.Empty _) -> Nothing + (_ AF.:> tipBlock) -> Just tipBlock + branchTip = getOnlyBranchTip tree + psSchedule = + peers' -- Eagerly serve the honest tree, but after the adversary has -- advertised its chain. 
[ (Time 0, scheduleTipPoint trunkTip) : case intersectM of Nothing -> - [ (Time 0.5, scheduleHeaderPoint trunkTip), - (Time 0.5, scheduleBlockPoint trunkTip) + [ (Time 0.5, scheduleHeaderPoint trunkTip) + , (Time 0.5, scheduleBlockPoint trunkTip) ] Just intersect -> - [ (Time 0.5, scheduleHeaderPoint intersect), - (Time 0.5, scheduleBlockPoint intersect), - (Time 5, scheduleHeaderPoint trunkTip), - (Time 5, scheduleBlockPoint trunkTip) + [ (Time 0.5, scheduleHeaderPoint intersect) + , (Time 0.5, scheduleBlockPoint intersect) + , (Time 5, scheduleHeaderPoint trunkTip) + , (Time 5, scheduleBlockPoint trunkTip) ] ] -- Advertise the alternate branch early, but don't serve it @@ -271,10 +293,10 @@ prop_delayAttack lopEnabled = Nothing -> [] -- the alternate branch forks from `intersect` Just intersect -> - [ (Time 0, scheduleHeaderPoint intersect), - (Time 0, scheduleBlockPoint intersect) + [ (Time 0, scheduleHeaderPoint intersect) + , (Time 0, scheduleBlockPoint intersect) ] ] - -- Wait for LoP bucket to empty - psMinEndTime = Time 11 - in PointSchedule {psSchedule, psStartOrder = [], psMinEndTime} + -- Wait for LoP bucket to empty + psMinEndTime = Time 11 + in PointSchedule{psSchedule, psStartOrder = [], psMinEndTime} diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LongRangeAttack.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LongRangeAttack.hs index 31ba9c078f..dbbdf026e6 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LongRangeAttack.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/LongRangeAttack.hs @@ -5,34 +5,37 @@ module Test.Consensus.Genesis.Tests.LongRangeAttack (tests) where -import Data.Functor (($>)) -import Ouroboros.Consensus.Block.Abstract (Header, HeaderHash) -import Ouroboros.Network.AnchoredFragment (headAnchor) -import qualified Ouroboros.Network.AnchoredFragment as AF -import 
Test.Consensus.Genesis.Setup -import Test.Consensus.Genesis.Setup.Classifiers - (allAdversariesForecastable, allAdversariesSelectable, - classifiers) -import Test.Consensus.PeerSimulator.Run (defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Shrinking (shrinkPeerSchedules) -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (TestBlock, unTestHash) -import Test.Util.TestEnv (adjustQuickCheckTests) +import Data.Functor (($>)) +import Ouroboros.Consensus.Block.Abstract (Header, HeaderHash) +import Ouroboros.Network.AnchoredFragment (headAnchor) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Test.Consensus.Genesis.Setup +import Test.Consensus.Genesis.Setup.Classifiers + ( allAdversariesForecastable + , allAdversariesSelectable + , classifiers + ) +import Test.Consensus.PeerSimulator.Run (defaultSchedulerConfig) +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Shrinking (shrinkPeerSchedules) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock (TestBlock, unTestHash) +import Test.Util.TestEnv (adjustQuickCheckTests) tests :: TestTree tests = - testGroup "long range attack" [ - -- NOTE: We want to keep this test to show that Praos is vulnerable to this - -- attack but Genesis is not. This requires to first fix it as mentioned - -- above. - -- - adjustQuickCheckTests (`div` 10) $ - testProperty "one adversary" prop_longRangeAttack - ] + testGroup + "long range attack" + [ -- NOTE: We want to keep this test to show that Praos is vulnerable to this + -- attack but Genesis is not. This requires to first fix it as mentioned + -- above. 
+ -- + adjustQuickCheckTests (`div` 10) $ + testProperty "one adversary" prop_longRangeAttack + ] -- | This test case features a long-range attack with one adversary. The honest -- peer serves the block tree trunk, while the adversary serves its own chain, @@ -45,32 +48,29 @@ prop_longRangeAttack = -- NOTE: `shrinkPeerSchedules` only makes sense for tests that expect the -- honest node to win. Hence the `noShrinking`. - noShrinking $ forAllGenesisTest + noShrinking $ + forAllGenesisTest + ( do + -- Create a block tree with @1@ alternative chain. + gt@GenesisTest{gtBlockTree} <- genChains (pure 1) + -- Create a 'longRangeAttack' schedule based on the generated chains. + ps <- stToGen (longRangeAttack gtBlockTree) + let cls = classifiers gt + if allAdversariesSelectable cls && allAdversariesForecastable cls + then pure $ gt $> ps + else discard + ) + defaultSchedulerConfig + shrinkPeerSchedules + -- NOTE: This is the expected behaviour of Praos to be reversed with + -- Genesis. But we are testing Praos for the moment. Do not forget to remove + -- `noShrinking` above when removing this negation. + (\_ -> not . isHonestTestFragH . svSelectedChain) + where + isHonestTestFragH :: AF.AnchoredFragment (Header TestBlock) -> Bool + isHonestTestFragH frag = case headAnchor frag of + AF.AnchorGenesis -> True + AF.Anchor _ hash _ -> isHonestTestHeaderHash hash - (do - -- Create a block tree with @1@ alternative chain. - gt@GenesisTest{gtBlockTree} <- genChains (pure 1) - -- Create a 'longRangeAttack' schedule based on the generated chains. - ps <- stToGen (longRangeAttack gtBlockTree) - let cls = classifiers gt - if allAdversariesSelectable cls && allAdversariesForecastable cls - then pure $ gt $> ps - else discard) - - defaultSchedulerConfig - - shrinkPeerSchedules - - -- NOTE: This is the expected behaviour of Praos to be reversed with - -- Genesis. But we are testing Praos for the moment. Do not forget to remove - -- `noShrinking` above when removing this negation. 
- (\_ -> not . isHonestTestFragH . svSelectedChain) - - where - isHonestTestFragH :: AF.AnchoredFragment (Header TestBlock) -> Bool - isHonestTestFragH frag = case headAnchor frag of - AF.AnchorGenesis -> True - AF.Anchor _ hash _ -> isHonestTestHeaderHash hash - - isHonestTestHeaderHash :: HeaderHash TestBlock -> Bool - isHonestTestHeaderHash = all (0 ==) . unTestHash + isHonestTestHeaderHash :: HeaderHash TestBlock -> Bool + isHonestTestHeaderHash = all (0 ==) . unTestHash diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/Uniform.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/Uniform.hs index 2c43aef6c5..ae191930d8 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/Uniform.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Genesis/Tests/Uniform.hs @@ -11,72 +11,85 @@ -- block tree with the right age (roughly @k@ blocks from the tip). Contrary to -- other tests cases (eg. long range attack), the schedules are not particularly -- biased towards a specific situation. 
-module Test.Consensus.Genesis.Tests.Uniform ( - genUniformSchedulePoints +module Test.Consensus.Genesis.Tests.Uniform + ( genUniformSchedulePoints , tests ) where -import Cardano.Slotting.Slot (SlotNo (SlotNo), WithOrigin (..)) -import Control.Monad (replicateM) -import Control.Monad.Class.MonadTime.SI (Time (..), addTime) -import qualified Data.IntSet as IntSet -import Data.List (intercalate, sort, uncons) -import qualified Data.List.NonEmpty as NE -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe, mapMaybe) -import Data.Word (Word64) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block.Abstract (WithOrigin (NotOrigin)) -import Ouroboros.Consensus.Util.Condense (condense) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (blockNo, blockSlot, unBlockNo) -import Ouroboros.Network.Protocol.ChainSync.Codec - (ChainSyncTimeout (..)) -import Ouroboros.Network.Protocol.Limits (shortWait) -import Test.Consensus.BlockTree (BlockTree (..), btbSuffix) -import Test.Consensus.Genesis.Setup -import Test.Consensus.Genesis.Setup.Classifiers -import Test.Consensus.PeerSimulator.ChainSync (chainSyncNoTimeouts) -import Test.Consensus.PeerSimulator.Run (SchedulerConfig (..), - defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Peers (Peers (..), getPeerIds, - isHonestPeerId, peers') -import Test.Consensus.PointSchedule.Shrinking - (shrinkByRemovingAdversaries, shrinkPeerSchedules) -import Test.Consensus.PointSchedule.SinglePeer - (SchedulePoint (ScheduleBlockPoint, ScheduleTipPoint)) -import Test.Ouroboros.Consensus.ChainGenerator.Params (Delta (Delta)) -import qualified Test.QuickCheck as QC -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.IOLike () -import Test.Util.PartialAccessors -import Test.Util.QuickCheck (le) -import Test.Util.TestBlock (TestBlock) -import 
Test.Util.TestEnv (adjustQuickCheckMaxSize, - adjustQuickCheckTests) -import Text.Printf (printf) +import Cardano.Slotting.Slot (SlotNo (SlotNo), WithOrigin (..)) +import Control.Monad (replicateM) +import Control.Monad.Class.MonadTime.SI (Time (..), addTime) +import Data.IntSet qualified as IntSet +import Data.List (intercalate, sort, uncons) +import Data.List.NonEmpty qualified as NE +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe, mapMaybe) +import Data.Word (Word64) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block.Abstract (WithOrigin (NotOrigin)) +import Ouroboros.Consensus.Util.Condense (condense) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (blockNo, blockSlot, unBlockNo) +import Ouroboros.Network.Protocol.ChainSync.Codec + ( ChainSyncTimeout (..) + ) +import Ouroboros.Network.Protocol.Limits (shortWait) +import Test.Consensus.BlockTree (BlockTree (..), btbSuffix) +import Test.Consensus.Genesis.Setup +import Test.Consensus.Genesis.Setup.Classifiers +import Test.Consensus.PeerSimulator.ChainSync (chainSyncNoTimeouts) +import Test.Consensus.PeerSimulator.Run + ( SchedulerConfig (..) + , defaultSchedulerConfig + ) +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers + ( Peers (..) 
+ , getPeerIds + , isHonestPeerId + , peers' + ) +import Test.Consensus.PointSchedule.Shrinking + ( shrinkByRemovingAdversaries + , shrinkPeerSchedules + ) +import Test.Consensus.PointSchedule.SinglePeer + ( SchedulePoint (ScheduleBlockPoint, ScheduleTipPoint) + ) +import Test.Ouroboros.Consensus.ChainGenerator.Params (Delta (Delta)) +import Test.QuickCheck +import Test.QuickCheck qualified as QC +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.IOLike () +import Test.Util.PartialAccessors +import Test.Util.QuickCheck (le) +import Test.Util.TestBlock (TestBlock) +import Test.Util.TestEnv + ( adjustQuickCheckMaxSize + , adjustQuickCheckTests + ) +import Text.Printf (printf) tests :: TestTree tests = adjustQuickCheckTests (* 10) $ - adjustQuickCheckMaxSize (`div` 5) $ - testGroup "uniform" [ - -- See Note [Leashing attacks] - testProperty "stalling leashing attack" prop_leashingAttackStalling, - testProperty "time limited leashing attack" prop_leashingAttackTimeLimited, - testProperty "serve adversarial branches" prop_serveAdversarialBranches, - testProperty "the LoE stalls the chain, but the immutable tip is honest" prop_loeStalling, - -- This is a crude way of ensuring that we don't get chains with more than 100 blocks, - -- because this test writes the immutable chain to disk and `instance Binary TestBlock` - -- chokes on long chains. 
- adjustQuickCheckMaxSize (const 10) $ - testProperty "the node is shut down and restarted after some time" prop_downtime, - testProperty "block fetch leashing attack" prop_blockFetchLeashingAttack - ] + adjustQuickCheckMaxSize (`div` 5) $ + testGroup + "uniform" + [ -- See Note [Leashing attacks] + testProperty "stalling leashing attack" prop_leashingAttackStalling + , testProperty "time limited leashing attack" prop_leashingAttackTimeLimited + , testProperty "serve adversarial branches" prop_serveAdversarialBranches + , testProperty "the LoE stalls the chain, but the immutable tip is honest" prop_loeStalling + , -- This is a crude way of ensuring that we don't get chains with more than 100 blocks, + -- because this test writes the immutable chain to disk and `instance Binary TestBlock` + -- chokes on long chains. + adjustQuickCheckMaxSize (const 10) $ + testProperty "the node is shut down and restarted after some time" prop_downtime + , testProperty "block fetch leashing attack" prop_blockFetchLeashingAttack + ] -- | The conjunction of -- @@ -89,81 +102,89 @@ theProperty :: Property theProperty genesisTest stateView@StateView{svSelectedChain} = classify genesisWindowAfterIntersection "Full genesis window after intersection" $ - classify (isOrigin immutableTipHash) "Immutable tip is Origin" $ - label disconnectedLabel $ - classify (advCount < length (btBranches gtBlockTree)) "Some adversaries performed rollbacks" $ - counterexample killedPeers $ - -- We require the honest chain to fit a Genesis window, because otherwise its tip may suggest - -- to the governor that the density is too low. 
- longerThanGenesisWindow ==> - conjoin [ - counterexample "Honest peers shouldn't be disconnected" (not $ any isHonestPeerId disconnected), - counterexample ("The immutable tip should be honest: " ++ show immutableTip) $ - property (isHonest immutableTipHash), - immutableTipIsRecent - ] - where - advCount = Map.size (adversarialPeers (psSchedule $ gtSchedule genesisTest)) - - immutableTipIsRecent = - counterexample ("The immutable tip is too old: " ++ show immutableTipAge) $ + classify (isOrigin immutableTipHash) "Immutable tip is Origin" $ + label disconnectedLabel $ + classify (advCount < length (btBranches gtBlockTree)) "Some adversaries performed rollbacks" $ + counterexample killedPeers $ + -- We require the honest chain to fit a Genesis window, because otherwise its tip may suggest + -- to the governor that the density is too low. + longerThanGenesisWindow ==> + conjoin + [ counterexample "Honest peers shouldn't be disconnected" (not $ any isHonestPeerId disconnected) + , counterexample ("The immutable tip should be honest: " ++ show immutableTip) $ + property (isHonest immutableTipHash) + , immutableTipIsRecent + ] + where + advCount = Map.size (adversarialPeers (psSchedule $ gtSchedule genesisTest)) + + immutableTipIsRecent = + counterexample ("The immutable tip is too old: " ++ show immutableTipAge) $ immutableTipAge `le` s + fromIntegral d + 1 - SlotNo immutableTipAge = case (honestTipSlot, immutableTipSlot) of - (At h, At i) -> h - i - (At h, Origin) -> h - _ -> 0 + SlotNo immutableTipAge = case (honestTipSlot, immutableTipSlot) of + (At h, At i) -> h - i + (At h, Origin) -> h + _ -> 0 - isOrigin = null + isOrigin = null - isHonest = all (0 ==) + isHonest = all (0 ==) - immutableTipHash = simpleHash (AF.anchorToHash immutableTip) + immutableTipHash = simpleHash (AF.anchorToHash immutableTip) - immutableTip = AF.anchor svSelectedChain + immutableTip = AF.anchor svSelectedChain - immutableTipSlot = AF.anchorToSlotNo (AF.anchor svSelectedChain) + 
immutableTipSlot = AF.anchorToSlotNo (AF.anchor svSelectedChain) - disconnectedLabel = - printf "disconnected %.1f%% of adversaries" disconnectedPercent + disconnectedLabel = + printf "disconnected %.1f%% of adversaries" disconnectedPercent - disconnected = collectDisconnectedPeers stateView + disconnected = collectDisconnectedPeers stateView - disconnectedPercent :: Double - disconnectedPercent = - 100 * fromIntegral (length disconnected) / fromIntegral advCount + disconnectedPercent :: Double + disconnectedPercent = + 100 * fromIntegral (length disconnected) / fromIntegral advCount - killedPeers = case disconnected of - [] -> "No peers were disconnected" - peers -> "Some peers were disconnected: " ++ intercalate ", " (condense <$> peers) + killedPeers = case disconnected of + [] -> "No peers were disconnected" + peers -> "Some peers were disconnected: " ++ intercalate ", " (condense <$> peers) - honestTipSlot = At $ blockSlot $ snd $ last $ mapMaybe fromBlockPoint $ getHonestPeer $ honestPeers $ psSchedule $ gtSchedule genesisTest + honestTipSlot = + At $ + blockSlot $ + snd $ + last $ + mapMaybe fromBlockPoint $ + getHonestPeer $ + honestPeers $ + psSchedule $ + gtSchedule genesisTest - GenesisTest {gtBlockTree, gtGenesisWindow = GenesisWindow s, gtDelay = Delta d} = genesisTest + GenesisTest{gtBlockTree, gtGenesisWindow = GenesisWindow s, gtDelay = Delta d} = genesisTest - Classifiers {genesisWindowAfterIntersection, longerThanGenesisWindow} = classifiers genesisTest + Classifiers{genesisWindowAfterIntersection, longerThanGenesisWindow} = classifiers genesisTest fromBlockPoint :: (Time, SchedulePoint blk) -> Maybe (Time, blk) fromBlockPoint (t, ScheduleBlockPoint (NotOrigin bp)) = Just (t, bp) -fromBlockPoint _ = Nothing +fromBlockPoint _ = Nothing -- | Tests that the immutable tip is not delayed and stays honest with the -- adversarial peers serving adversarial branches. 
prop_serveAdversarialBranches :: Property -prop_serveAdversarialBranches = forAllGenesisTest - +prop_serveAdversarialBranches = + forAllGenesisTest (genChains (QC.choose (1, 4)) `enrichedWith` genUniformSchedulePoints) - - (defaultSchedulerConfig - { scTraceState = False - , scTrace = False - , scEnableLoE = True - , scEnableCSJ = True - , scEnableLoP = False - , scEnableChainSyncTimeouts = False - , scEnableBlockFetchTimeouts = False - }) - + ( defaultSchedulerConfig + { scTraceState = False + , scTrace = False + , scEnableLoE = True + , scEnableCSJ = True + , scEnableLoP = False + , scEnableChainSyncTimeouts = False + , scEnableBlockFetchTimeouts = False + } + ) -- We cannot shrink by removing points from the adversarial schedules. -- Removing ticks could make an adversary unable to serve any blocks or headers. -- Because LoP and timeouts are disabled, this would cause the immutable tip @@ -173,13 +194,13 @@ prop_serveAdversarialBranches = forAllGenesisTest -- timeouts to expire. The leashing attack tests are testing the timeouts -- together with LoP. 
shrinkByRemovingAdversaries - theProperty genUniformSchedulePoints :: GenesisTest TestBlock () -> QC.Gen (PointSchedule TestBlock) genUniformSchedulePoints gt = stToGen (uniformPoints pointsGeneratorParams (gtBlockTree gt)) - where - pointsGeneratorParams = PointsGeneratorParams + where + pointsGeneratorParams = + PointsGeneratorParams { pgpExtraHonestPeers = fromIntegral $ gtExtraHonestPeers gt , pgpDowntime = NoDowntime } @@ -210,9 +231,7 @@ genUniformSchedulePoints gt = stToGen (uniformPoints pointsGeneratorParams (gtBl prop_leashingAttackStalling :: Property prop_leashingAttackStalling = forAllGenesisTest - (genChains (QC.choose (1, 4)) `enrichedWith` genLeashingSchedule) - defaultSchedulerConfig { scTrace = False , scEnableLoE = True @@ -220,36 +239,34 @@ prop_leashingAttackStalling = , scEnableCSJ = True , scEnableBlockFetchTimeouts = False } - shrinkPeerSchedules - theProperty - - where - -- | Produces schedules that might cause the node under test to stall. - -- - -- This is achieved by dropping random points from the schedule of each peer - -- and by adding sufficient time at the end of a test to allow LoP and - -- timeouts to disconnect adversaries. - genLeashingSchedule :: GenesisTest TestBlock () -> QC.Gen (PointSchedule TestBlock) - genLeashingSchedule genesisTest = do - ps@PointSchedule{psSchedule = sch} <- ensureScheduleDuration genesisTest <$> genUniformSchedulePoints genesisTest - advs <- mapM dropRandomPoints $ adversarialPeers sch - pure $ ps {psSchedule = sch {adversarialPeers = advs}} + where + -- \| Produces schedules that might cause the node under test to stall. + -- + -- This is achieved by dropping random points from the schedule of each peer + -- and by adding sufficient time at the end of a test to allow LoP and + -- timeouts to disconnect adversaries. 
+ genLeashingSchedule :: GenesisTest TestBlock () -> QC.Gen (PointSchedule TestBlock) + genLeashingSchedule genesisTest = do + ps@PointSchedule{psSchedule = sch} <- + ensureScheduleDuration genesisTest <$> genUniformSchedulePoints genesisTest + advs <- mapM dropRandomPoints $ adversarialPeers sch + pure $ ps{psSchedule = sch{adversarialPeers = advs}} dropRandomPoints :: [(Time, SchedulePoint blk)] -> QC.Gen [(Time, SchedulePoint blk)] dropRandomPoints ps = do - let lenps = length ps - dropsMax = max 1 $ lenps - 1 - dropCount <- QC.choose (div dropsMax 2, dropsMax) - let dedup = map NE.head . NE.group - is <- fmap (dedup . sort) $ replicateM dropCount $ QC.choose (0, lenps - 1) - pure $ dropElemsAt ps is - where - dropElemsAt :: [a] -> [Int] -> [a] - dropElemsAt xs is' = - let is = IntSet.fromList is' - in [x | (x, i) <- zip xs [0..], i `IntSet.notMember` is] + let lenps = length ps + dropsMax = max 1 $ lenps - 1 + dropCount <- QC.choose (div dropsMax 2, dropsMax) + let dedup = map NE.head . NE.group + is <- fmap (dedup . sort) $ replicateM dropCount $ QC.choose (0, lenps - 1) + pure $ dropElemsAt ps is + where + dropElemsAt :: [a] -> [Int] -> [a] + dropElemsAt xs is' = + let is = IntSet.fromList is' + in [x | (x, i) <- zip xs [0 ..], i `IntSet.notMember` is] -- | Test that the leashing attacks do not delay the immutable tip after. 
The -- immutable tip needs to be advanced enough when the honest peer has offered @@ -259,9 +276,7 @@ dropRandomPoints ps = do prop_leashingAttackTimeLimited :: Property prop_leashingAttackTimeLimited = forAllGenesisTest - (genChains (QC.choose (1, 4)) `enrichedWith` genTimeLimitedSchedule) - defaultSchedulerConfig { scTrace = False , scEnableLoE = True @@ -269,127 +284,124 @@ prop_leashingAttackTimeLimited = , scEnableCSJ = True , scEnableBlockFetchTimeouts = False } - shrinkPeerSchedules - theProperty - - where - -- | A schedule which doesn't run past the last event of the honest peer - genTimeLimitedSchedule :: GenesisTest TestBlock () -> QC.Gen (PointSchedule TestBlock) - genTimeLimitedSchedule genesisTest = do - Peers honests advs0 <- psSchedule <$> genUniformSchedulePoints genesisTest - let timeLimit = estimateTimeBound + where + -- \| A schedule which doesn't run past the last event of the honest peer + genTimeLimitedSchedule :: GenesisTest TestBlock () -> QC.Gen (PointSchedule TestBlock) + genTimeLimitedSchedule genesisTest = do + Peers honests advs0 <- psSchedule <$> genUniformSchedulePoints genesisTest + let timeLimit = + estimateTimeBound (gtChainSyncTimeouts genesisTest) (gtLoPBucketParams genesisTest) (getHonestPeer honests) (Map.elems advs0) - advs1 = fmap (takePointsUntil timeLimit) advs0 - advs <- mapM dropRandomPoints advs1 - pure $ PointSchedule + advs1 = fmap (takePointsUntil timeLimit) advs0 + advs <- mapM dropRandomPoints advs1 + pure $ + PointSchedule { psSchedule = Peers honests advs , psStartOrder = [] , psMinEndTime = addGracePeriodDelay (length advs) timeLimit } - takePointsUntil limit = takeWhile ((<= limit) . 
fst) - - estimateTimeBound - :: AF.HasHeader blk - => ChainSyncTimeout - -> LoPBucketParams - -> PeerSchedule blk - -> [PeerSchedule blk] - -> Time - estimateTimeBound cst LoPBucketParams{lbpCapacity, lbpRate} honest advs = - let firstTipPointTime = fst $ headCallStack (mapMaybe fromTipPoint honest) - lastBlockPoint = last (mapMaybe fromBlockPoint honest) - peerCount = fromIntegral $ length advs + 1 - maxBlockNo = fromIntegral $ maximum $ 0 : blockPointNos honest ++ concatMap blockPointNos advs - timeCapacity = fromRational $ (fromIntegral lbpCapacity) / lbpRate - timePerToken = fromRational $ 1 / lbpRate - intersectDiffTime = fromMaybe (error "no intersect timeout") (intersectTimeout cst) - -- Since the moment a peer offers the first tip, LoP should - -- start ticking for it. This can be no later than what the intersect - -- timeout allows for all peers. - -- - -- Additionally, the actual delay might be greater if the honest peer - -- has its last tick dispatched later. - -- - -- Adversarial peers might cause more ticks to be sent as well. We - -- bound it all by considering the highest block number that is ever - -- sent. - in addTime 1 $ max - (fst lastBlockPoint) - (addTime - (intersectDiffTime + timePerToken * maxBlockNo + timeCapacity * peerCount) - firstTipPointTime - ) - - blockPointNos :: AF.HasHeader blk => [(Time, SchedulePoint blk)] -> [Word64] - blockPointNos = - map (unBlockNo . blockNo . snd) . - mapMaybe fromBlockPoint - - fromTipPoint (t, ScheduleTipPoint bp) = Just (t, bp) - fromTipPoint _ = Nothing + takePointsUntil limit = takeWhile ((<= limit) . 
fst) + + estimateTimeBound :: + AF.HasHeader blk => + ChainSyncTimeout -> + LoPBucketParams -> + PeerSchedule blk -> + [PeerSchedule blk] -> + Time + estimateTimeBound cst LoPBucketParams{lbpCapacity, lbpRate} honest advs = + let firstTipPointTime = fst $ headCallStack (mapMaybe fromTipPoint honest) + lastBlockPoint = last (mapMaybe fromBlockPoint honest) + peerCount = fromIntegral $ length advs + 1 + maxBlockNo = fromIntegral $ maximum $ 0 : blockPointNos honest ++ concatMap blockPointNos advs + timeCapacity = fromRational $ (fromIntegral lbpCapacity) / lbpRate + timePerToken = fromRational $ 1 / lbpRate + intersectDiffTime = fromMaybe (error "no intersect timeout") (intersectTimeout cst) + in -- Since the moment a peer offers the first tip, LoP should + -- start ticking for it. This can be no later than what the intersect + -- timeout allows for all peers. + -- + -- Additionally, the actual delay might be greater if the honest peer + -- has its last tick dispatched later. + -- + -- Adversarial peers might cause more ticks to be sent as well. We + -- bound it all by considering the highest block number that is ever + -- sent. + addTime 1 $ + max + (fst lastBlockPoint) + ( addTime + (intersectDiffTime + timePerToken * maxBlockNo + timeCapacity * peerCount) + firstTipPointTime + ) + + blockPointNos :: AF.HasHeader blk => [(Time, SchedulePoint blk)] -> [Word64] + blockPointNos = + map (unBlockNo . blockNo . snd) + . mapMaybe fromBlockPoint + + fromTipPoint (t, ScheduleTipPoint bp) = Just (t, bp) + fromTipPoint _ = Nothing headCallStack :: HasCallStack => [a] -> a headCallStack = \case - x:_ -> x - _ -> error "headCallStack: empty list" + x : _ -> x + _ -> error "headCallStack: empty list" -- | Test that enabling the LoE causes the selection to remain at -- the first fork intersection (keeping the immutable tip honest). 
prop_loeStalling :: Property prop_loeStalling = forAllGenesisTest - - (do gt <- genChains (QC.choose (1, 4)) - `enrichedWith` - genUniformSchedulePoints - pure gt {gtChainSyncTimeouts = chainSyncNoTimeouts {canAwaitTimeout = shortWait}} + ( do + gt <- + genChains (QC.choose (1, 4)) + `enrichedWith` genUniformSchedulePoints + pure gt{gtChainSyncTimeouts = chainSyncNoTimeouts{canAwaitTimeout = shortWait}} ) - - defaultSchedulerConfig { - scEnableLoE = True, - scEnableCSJ = True, - scEnableBlockFetchTimeouts = False - } - + defaultSchedulerConfig + { scEnableLoE = True + , scEnableCSJ = True + , scEnableBlockFetchTimeouts = False + } shrinkPeerSchedules - prop - where - prop GenesisTest {gtBlockTree = BlockTree {btTrunk, btBranches}} StateView{svSelectedChain} = - classify (any (== selectionTip) allTips) "The selection is at a branch tip" $ + where + prop GenesisTest{gtBlockTree = BlockTree{btTrunk, btBranches}} StateView{svSelectedChain} = + classify (any (== selectionTip) allTips) "The selection is at a branch tip" $ classify (any anchorIsImmutableTip suffixes) "The immutable tip is at a fork intersection" $ - property (isHonest immutableTipHash) - where - anchorIsImmutableTip branch = simpleHash (AF.anchorToHash (AF.anchor branch)) == immutableTipHash + property (isHonest immutableTipHash) + where + anchorIsImmutableTip branch = simpleHash (AF.anchorToHash (AF.anchor branch)) == immutableTipHash - isHonest = all (0 ==) + isHonest = all (0 ==) - immutableTipHash = simpleHash (AF.anchorToHash immutableTip) + immutableTipHash = simpleHash (AF.anchorToHash immutableTip) - immutableTip = AF.anchor svSelectedChain + immutableTip = AF.anchor svSelectedChain - selectionTip = simpleHash (AF.headHash svSelectedChain) + selectionTip = simpleHash (AF.headHash svSelectedChain) - allTips = simpleHash . AF.headHash <$> (btTrunk : suffixes) + allTips = simpleHash . 
AF.headHash <$> (btTrunk : suffixes) - suffixes = btbSuffix <$> btBranches + suffixes = btbSuffix <$> btBranches -- | This test sets 'scDowntime', which instructs the scheduler to shut all components down whenever a tick's duration -- is greater than 11 seconds, and restarts it while only preserving the immutable DB after advancing the time. -- -- This ensures that a user may shut down their machine while syncing without additional vulnerabilities. prop_downtime :: Property -prop_downtime = forAllGenesisTest - - (genChains (QC.choose (1, 4)) `enrichedWith` \ gt -> - ensureScheduleDuration gt <$> stToGen (uniformPoints (pointsGeneratorParams gt) (gtBlockTree gt))) - +prop_downtime = + forAllGenesisTest + ( genChains (QC.choose (1, 4)) `enrichedWith` \gt -> + ensureScheduleDuration gt <$> stToGen (uniformPoints (pointsGeneratorParams gt) (gtBlockTree gt)) + ) defaultSchedulerConfig { scEnableLoE = True , scEnableLoP = True @@ -397,20 +409,20 @@ prop_downtime = forAllGenesisTest , scEnableCSJ = True , scEnableBlockFetchTimeouts = False } - shrinkPeerSchedules - - (\genesisTest stateView -> - counterexample (unlines - [ "TODO: Shutting down the node inserts delays in the simulation that" - , "are not reflected in the point schedule table. Reporting these delays" - , "correctly is still to be done." - ]) $ - theProperty genesisTest stateView + ( \genesisTest stateView -> + counterexample + ( unlines + [ "TODO: Shutting down the node inserts delays in the simulation that" + , "are not reflected in the point schedule table. Reporting these delays" + , "correctly is still to be done." 
+ ] + ) + $ theProperty genesisTest stateView ) - - where - pointsGeneratorParams gt = PointsGeneratorParams + where + pointsGeneratorParams gt = + PointsGeneratorParams { pgpExtraHonestPeers = fromIntegral (gtExtraHonestPeers gt) , pgpDowntime = DowntimeWithSecurityParam (gtSecurityParam gt) } @@ -425,45 +437,48 @@ prop_blockFetchLeashingAttack = forAllGenesisTest (genChains (pure 0) `enrichedWith` genBlockFetchLeashingSchedule) defaultSchedulerConfig - { scEnableLoE = True, - scEnableLoP = True, - scEnableCSJ = True, - scEnableBlockFetchTimeouts = False + { scEnableLoE = True + , scEnableLoP = True + , scEnableCSJ = True + , scEnableBlockFetchTimeouts = False } shrinkPeerSchedules theProperty - where - genBlockFetchLeashingSchedule :: GenesisTest TestBlock () -> QC.Gen (PointSchedule TestBlock) - genBlockFetchLeashingSchedule genesisTest = do - -- A schedule with several honest peers and no adversaries. We will then - -- keep one of those as honest and remove the block points from the - -- others, hence producing one honest peer and several adversaries. - PointSchedule {psSchedule} <- - stToGen $ - uniformPoints - (PointsGeneratorParams {pgpExtraHonestPeers = 1, pgpDowntime = NoDowntime}) - (gtBlockTree genesisTest) - peers <- QC.shuffle $ Map.elems $ honestPeers psSchedule - let (honest, adversaries) = fromMaybe (error "blockFetchLeashingAttack") $ uncons peers - adversaries' = map (filter (not . isBlockPoint . snd)) adversaries - psSchedule' = peers' [honest] adversaries' - -- Important to shuffle the order in which the peers start, otherwise the - -- honest peer starts first and systematically becomes dynamo. 
- psStartOrder <- shuffle $ getPeerIds psSchedule' - let maxTime = addGracePeriodDelay (length adversaries') $ maximum $ - Time 0 : [ pt | s <- honest : adversaries', (pt, _) <- take 1 (reverse s) ] - pure $ PointSchedule { - psSchedule = psSchedule', - psStartOrder, - -- Allow to run the blockfetch decision logic after the last tick + where + genBlockFetchLeashingSchedule :: GenesisTest TestBlock () -> QC.Gen (PointSchedule TestBlock) + genBlockFetchLeashingSchedule genesisTest = do + -- A schedule with several honest peers and no adversaries. We will then + -- keep one of those as honest and remove the block points from the + -- others, hence producing one honest peer and several adversaries. + PointSchedule{psSchedule} <- + stToGen $ + uniformPoints + (PointsGeneratorParams{pgpExtraHonestPeers = 1, pgpDowntime = NoDowntime}) + (gtBlockTree genesisTest) + peers <- QC.shuffle $ Map.elems $ honestPeers psSchedule + let (honest, adversaries) = fromMaybe (error "blockFetchLeashingAttack") $ uncons peers + adversaries' = map (filter (not . isBlockPoint . snd)) adversaries + psSchedule' = peers' [honest] adversaries' + -- Important to shuffle the order in which the peers start, otherwise the + -- honest peer starts first and systematically becomes dynamo. 
+ psStartOrder <- shuffle $ getPeerIds psSchedule' + let maxTime = + addGracePeriodDelay (length adversaries') $ + maximum $ + Time 0 : [pt | s <- honest : adversaries', (pt, _) <- take 1 (reverse s)] + pure $ + PointSchedule + { psSchedule = psSchedule' + , psStartOrder + , -- Allow to run the blockfetch decision logic after the last tick -- 11 is the grace period for unresponsive peers that should send -- blocks psMinEndTime = addTime 11 maxTime } - isBlockPoint :: SchedulePoint blk -> Bool - isBlockPoint (ScheduleBlockPoint _) = True - isBlockPoint _ = False + isBlockPoint :: SchedulePoint blk -> Bool + isBlockPoint (ScheduleBlockPoint _) = True + isBlockPoint _ = False -- | Add a delay at the end of tests to account for retention of blocks -- by adversarial peers in blockfetch. This delay is 10 seconds per diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator.hs index 014d286490..e0c2925e18 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator.hs @@ -14,355 +14,390 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.HardFork.Combinator (tests) where -import Cardano.Ledger.BaseTypes (nonZero, unNonZero) -import qualified Data.Map.Strict as Map -import Data.MemPack -import Data.SOP.BasicFunctors -import Data.SOP.Counting -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Index (Index (..), hcimap) -import Data.SOP.InPairs (RequiringBoth (..)) -import qualified Data.SOP.InPairs as InPairs -import Data.SOP.OptNP (OptNP (..)) -import Data.SOP.Strict -import qualified Data.SOP.Tails as Tails -import qualified Data.SOP.Telescope as Telescope -import Data.Void (Void, absurd) -import 
Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Condense () -import Ouroboros.Consensus.HardFork.Combinator.Serialisation -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Ouroboros.Consensus.HardFork.History (EraParams (..)) -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.LeaderSchedule - (LeaderSchedule (..), leaderScheduleFor) -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util.IndexedMemPack -import Ouroboros.Consensus.Util.Orphans () -import qualified Ouroboros.Network.Mock.Chain as Mock -import Quiet (Quiet (..)) -import Test.Consensus.HardFork.Combinator.A -import Test.Consensus.HardFork.Combinator.B -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import Test.ThreadNet.Network -import Test.ThreadNet.TxGen -import Test.ThreadNet.Util -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeRestarts -import Test.ThreadNet.Util.NodeToNodeVersion -import Test.ThreadNet.Util.NodeTopology -import Test.ThreadNet.Util.Seed -import Test.Util.HardFork.Future -import Test.Util.SanityCheck (prop_sanityChecks) -import Test.Util.Slots (NumSlots (..)) -import Test.Util.Time (dawnOfTime) +import Cardano.Ledger.BaseTypes (nonZero, unNonZero) +import Data.Map.Strict qualified as Map +import Data.MemPack +import 
Data.SOP.BasicFunctors +import Data.SOP.Counting +import Data.SOP.Functors (Flip (..)) +import Data.SOP.InPairs (RequiringBoth (..)) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Index (Index (..), hcimap) +import Data.SOP.OptNP (OptNP (..)) +import Data.SOP.Strict +import Data.SOP.Tails qualified as Tails +import Data.SOP.Telescope qualified as Telescope +import Data.Void (Void, absurd) +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.Condense () +import Ouroboros.Consensus.HardFork.Combinator.Serialisation +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.HardFork.History (EraParams (..)) +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.LeaderSchedule + ( LeaderSchedule (..) 
+ , leaderScheduleFor + ) +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util.IndexedMemPack +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Network.Mock.Chain qualified as Mock +import Quiet (Quiet (..)) +import Test.Consensus.HardFork.Combinator.A +import Test.Consensus.HardFork.Combinator.B +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Network +import Test.ThreadNet.TxGen +import Test.ThreadNet.Util +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeRestarts +import Test.ThreadNet.Util.NodeToNodeVersion +import Test.ThreadNet.Util.NodeTopology +import Test.ThreadNet.Util.Seed +import Test.Util.HardFork.Future +import Test.Util.SanityCheck (prop_sanityChecks) +import Test.Util.Slots (NumSlots (..)) +import Test.Util.Time (dawnOfTime) tests :: TestTree -tests = testGroup "Consensus" [ - testProperty "simple convergence" $ +tests = + testGroup + "Consensus" + [ testProperty "simple convergence" $ prop_simple_hfc_convergence ] data AB a = AB {getA, getB :: a} deriving (Foldable, Functor, Generic, Traversable) - deriving (Show) via (Quiet (AB a)) + deriving Show via (Quiet (AB a)) instance Applicative AB where pure x = AB x x AB af bf <*> AB a b = AB (af a) (bf b) -data TestSetup = TestSetup { - testSetupEpochSize :: AB EpochSize - -- ^ INVARIANT: @> 0@ - , testSetupK :: SecurityParam - , testSetupSeed :: Seed - , testSetupSlotLength :: AB SlotLength - , testSetupTxSlot :: SlotNo - } - deriving (Show) +data TestSetup = TestSetup + { testSetupEpochSize :: AB EpochSize + -- ^ INVARIANT: @> 0@ + , testSetupK :: SecurityParam + , testSetupSeed :: Seed + , testSetupSlotLength :: AB SlotLength + , testSetupTxSlot :: SlotNo + } + deriving Show instance Arbitrary TestSetup where arbitrary = do - testSetupEpochSize <- abM $ EpochSize <$> choose (1, 10) - testSetupK <- SecurityParam <$> choose (2, 10) `suchThatMap` nonZero - -- TODO 
why does k=1 cause the nodes to only forge in the first epoch? - testSetupTxSlot <- SlotNo <$> choose (0, 9) - - testSetupSeed <- arbitrary - testSetupSlotLength <- abM arbitrary - return TestSetup{..} - where - abM :: Monad m => m a -> m (AB a) - abM = sequence . pure + testSetupEpochSize <- abM $ EpochSize <$> choose (1, 10) + testSetupK <- SecurityParam <$> choose (2, 10) `suchThatMap` nonZero + -- TODO why does k=1 cause the nodes to only forge in the first epoch? + testSetupTxSlot <- SlotNo <$> choose (0, 9) + + testSetupSeed <- arbitrary + testSetupSlotLength <- abM arbitrary + return TestSetup{..} + where + abM :: Monad m => m a -> m (AB a) + abM = sequence . pure -- | The number of epochs in the A era testSetupEraSizeA :: TestSetup -> EraSize testSetupEraSizeA TestSetup{..} = - -- This function, as a specification, intentionally independently - -- reimplements the interpretation of the 'InitiateAtoB' transaction by the - -- A ledger. - EraSize $ succ lastEpochA - where - lastEpochA = lastSlotA `div` unEpochSize (getA testSetupEpochSize) - lastSlotA = - unSlotNo testSetupTxSlot + - stabilityWindowA testSetupK + - safeFromTipA testSetupK + -- This function, as a specification, intentionally independently + -- reimplements the interpretation of the 'InitiateAtoB' transaction by the + -- A ledger. 
+ EraSize $ succ lastEpochA + where + lastEpochA = lastSlotA `div` unEpochSize (getA testSetupEpochSize) + lastSlotA = + unSlotNo testSetupTxSlot + + stabilityWindowA testSetupK + + safeFromTipA testSetupK -- | Minimum number of slots needed to include exactly one epoch of the B era testSetupNumSlots :: TestSetup -> NumSlots testSetupNumSlots testSetup@TestSetup{..} = - -- this test doesn't need more than one B epoch - NumSlots $ eraSizeA * epoSizeA + epoSizeB - where - EraSize eraSizeA = testSetupEraSizeA testSetup - AB epoSizeA epoSizeB = unEpochSize <$> testSetupEpochSize + -- this test doesn't need more than one B epoch + NumSlots $ eraSizeA * epoSizeA + epoSizeB + where + EraSize eraSizeA = testSetupEraSizeA testSetup + AB epoSizeA epoSizeB = unEpochSize <$> testSetupEpochSize prop_simple_hfc_convergence :: TestSetup -> Property prop_simple_hfc_convergence testSetup@TestSetup{..} = - counterexample (show testConfig) $ + counterexample (show testConfig) $ counterexample ("eraSizeA: " <> show eraSizeA) $ - tabulate "epochs in era A" [labelEraSizeA] $ - prop_general args testOutput .&&. - prop_sanityChecks (topLevelConfig (CoreNodeId 0)) .&&. - prop_allExpectedBlocks - where - k :: SecurityParam - k = testSetupK - - eraParamsA, eraParamsB :: EraParams - AB eraParamsA eraParamsB = - EraParams - <$> testSetupEpochSize - <*> testSetupSlotLength - <*> AB (History.StandardSafeZone (safeFromTipA k)) - (safeZoneB k) - <*> pure (GenesisWindow ((unNonZero $ maxRollbacks k) * 2)) - - shape :: History.Shape '[BlockA, BlockB] - shape = History.Shape $ exactlyTwo eraParamsA eraParamsB - - leaderSchedule :: LeaderSchedule - leaderSchedule = roundRobinLeaderSchedule numCoreNodes numSlots - where - TestConfig{..} = testConfig - - args :: PropGeneralArgs TestBlock - args = PropGeneralArgs { - pgaBlockProperty = const $ property True - , pgaCountTxs = fromIntegral . length . 
extractTxs - , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = BlockNo 0 - , pgaFixedMaxForkLength = Nothing - , pgaFixedSchedule = Just leaderSchedule - , pgaSecurityParam = k - , pgaTestConfig = testConfig - , pgaTestConfigB = testConfigB - } - - testConfig :: TestConfig - testConfig = TestConfig { - numCoreNodes = ncn - , numSlots = testSetupNumSlots testSetup - , nodeTopology = meshNodeTopology ncn - , initSeed = testSetupSeed - } - where - ncn :: NumCoreNodes - ncn = NumCoreNodes 2 - - eraSizeA :: EraSize - eraSizeA = testSetupEraSizeA testSetup - - testConfigB :: TestConfigB TestBlock - testConfigB = TestConfigB { - forgeEbbEnv = Nothing - , future = - EraCons (eraSlotLength eraParamsA) - (eraEpochSize eraParamsA) - eraSizeA $ - EraFinal (eraSlotLength eraParamsB) - (eraEpochSize eraParamsB) - , messageDelay = noCalcMessageDelay - , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes - , nodeRestarts = noRestarts - , txGenExtra = () - , version = newestVersion (Proxy @TestBlock) - } - where - TestConfig{..} = testConfig - - testConfigMB :: Monad m => TestConfigMB m TestBlock - testConfigMB = TestConfigMB { - nodeInfo = \a -> plainTestNodeInitialization (protocolInfo a) - (return blockForging) - , mkRekeyM = Nothing - } - - labelEraSizeA :: String - labelEraSizeA = - if sz >= 10 then ">=10" else show sz - where - EraSize sz = eraSizeA - - protocolInfo :: CoreNodeId -> ProtocolInfo TestBlock - protocolInfo nid = ProtocolInfo { - pInfoConfig = - topLevelConfig nid - , pInfoInitLedger = ExtLedgerState { - ledgerState = HardForkLedgerState $ - initHardForkState - (Flip initLedgerState) - , headerState = genesisHeaderState $ - initHardForkState - (WrapChainDepState initChainDepState) + tabulate "epochs in era A" [labelEraSizeA] $ + prop_general args testOutput + .&&. prop_sanityChecks (topLevelConfig (CoreNodeId 0)) + .&&. 
prop_allExpectedBlocks + where + k :: SecurityParam + k = testSetupK + + eraParamsA, eraParamsB :: EraParams + AB eraParamsA eraParamsB = + EraParams + <$> testSetupEpochSize + <*> testSetupSlotLength + <*> AB + (History.StandardSafeZone (safeFromTipA k)) + (safeZoneB k) + <*> pure (GenesisWindow ((unNonZero $ maxRollbacks k) * 2)) + + shape :: History.Shape '[BlockA, BlockB] + shape = History.Shape $ exactlyTwo eraParamsA eraParamsB + + leaderSchedule :: LeaderSchedule + leaderSchedule = roundRobinLeaderSchedule numCoreNodes numSlots + where + TestConfig{..} = testConfig + + args :: PropGeneralArgs TestBlock + args = + PropGeneralArgs + { pgaBlockProperty = const $ property True + , pgaCountTxs = fromIntegral . length . extractTxs + , pgaExpectedCannotForge = noExpectedCannotForges + , pgaFirstBlockNo = BlockNo 0 + , pgaFixedMaxForkLength = Nothing + , pgaFixedSchedule = Just leaderSchedule + , pgaSecurityParam = k + , pgaTestConfig = testConfig + , pgaTestConfigB = testConfigB + } + + testConfig :: TestConfig + testConfig = + TestConfig + { numCoreNodes = ncn + , numSlots = testSetupNumSlots testSetup + , nodeTopology = meshNodeTopology ncn + , initSeed = testSetupSeed + } + where + ncn :: NumCoreNodes + ncn = NumCoreNodes 2 + + eraSizeA :: EraSize + eraSizeA = testSetupEraSizeA testSetup + + testConfigB :: TestConfigB TestBlock + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = + EraCons + (eraSlotLength eraParamsA) + (eraEpochSize eraParamsA) + eraSizeA + $ EraFinal + (eraSlotLength eraParamsB) + (eraEpochSize eraParamsB) + , messageDelay = noCalcMessageDelay + , nodeJoinPlan = trivialNodeJoinPlan numCoreNodes + , nodeRestarts = noRestarts + , txGenExtra = () + , version = newestVersion (Proxy @TestBlock) + } + where + TestConfig{..} = testConfig + + testConfigMB :: Monad m => TestConfigMB m TestBlock + testConfigMB = + TestConfigMB + { nodeInfo = \a -> + plainTestNodeInitialization + (protocolInfo a) + (return blockForging) + , mkRekeyM = 
Nothing + } + + labelEraSizeA :: String + labelEraSizeA = + if sz >= 10 then ">=10" else show sz + where + EraSize sz = eraSizeA + + protocolInfo :: CoreNodeId -> ProtocolInfo TestBlock + protocolInfo nid = + ProtocolInfo + { pInfoConfig = + topLevelConfig nid + , pInfoInitLedger = + ExtLedgerState + { ledgerState = + HardForkLedgerState $ + initHardForkState + (Flip initLedgerState) + , headerState = + genesisHeaderState $ + initHardForkState + (WrapChainDepState initChainDepState) } - } - - blockForging :: Monad m => [BlockForging m TestBlock] - blockForging = - [ hardForkBlockForging "Test" - $ OptCons blockForgingA - $ OptCons blockForgingB - $ OptNil - ] - - initLedgerState :: LedgerState BlockA ValuesMK - initLedgerState = LgrA { - lgrA_tip = GenesisPoint - , lgrA_transition = Nothing - } - - initChainDepState :: ChainDepState ProtocolA - initChainDepState = () - - topLevelConfig :: CoreNodeId -> TopLevelConfig TestBlock - topLevelConfig nid = TopLevelConfig { - topLevelConfigProtocol = HardForkConsensusConfig { - hardForkConsensusConfigK = k - , hardForkConsensusConfigShape = shape - , hardForkConsensusConfigPerEra = PerEraConsensusConfig $ - (WrapPartialConsensusConfig $ consensusConfigA nid) - :* (WrapPartialConsensusConfig $ consensusConfigB nid) - :* Nil + } + + blockForging :: Monad m => [BlockForging m TestBlock] + blockForging = + [ hardForkBlockForging "Test" $ + OptCons blockForgingA $ + OptCons blockForgingB $ + OptNil + ] + + initLedgerState :: LedgerState BlockA ValuesMK + initLedgerState = + LgrA + { lgrA_tip = GenesisPoint + , lgrA_transition = Nothing + } + + initChainDepState :: ChainDepState ProtocolA + initChainDepState = () + + topLevelConfig :: CoreNodeId -> TopLevelConfig TestBlock + topLevelConfig nid = + TopLevelConfig + { topLevelConfigProtocol = + HardForkConsensusConfig + { hardForkConsensusConfigK = k + , hardForkConsensusConfigShape = shape + , hardForkConsensusConfigPerEra = + PerEraConsensusConfig $ + (WrapPartialConsensusConfig 
$ consensusConfigA nid) + :* (WrapPartialConsensusConfig $ consensusConfigB nid) + :* Nil } - , topLevelConfigLedger = HardForkLedgerConfig { - hardForkLedgerConfigShape = shape - , hardForkLedgerConfigPerEra = PerEraLedgerConfig $ - (WrapPartialLedgerConfig $ ledgerConfigA nid) - :* (WrapPartialLedgerConfig $ ledgerConfigB nid) - :* Nil + , topLevelConfigLedger = + HardForkLedgerConfig + { hardForkLedgerConfigShape = shape + , hardForkLedgerConfigPerEra = + PerEraLedgerConfig $ + (WrapPartialLedgerConfig $ ledgerConfigA nid) + :* (WrapPartialLedgerConfig $ ledgerConfigB nid) + :* Nil } - , topLevelConfigBlock = HardForkBlockConfig { - hardForkBlockConfigPerEra = PerEraBlockConfig $ - blockConfigA nid - :* blockConfigB nid - :* Nil + , topLevelConfigBlock = + HardForkBlockConfig + { hardForkBlockConfigPerEra = + PerEraBlockConfig $ + blockConfigA nid + :* blockConfigB nid + :* Nil } - , topLevelConfigCodec = HardForkCodecConfig { - hardForkCodecConfigPerEra = PerEraCodecConfig $ - CCfgA - :* CCfgB - :* Nil + , topLevelConfigCodec = + HardForkCodecConfig + { hardForkCodecConfigPerEra = + PerEraCodecConfig $ + CCfgA + :* CCfgB + :* Nil } - , topLevelConfigStorage = HardForkStorageConfig { - hardForkStorageConfigPerEra = PerEraStorageConfig $ - SCfgA - :* SCfgB - :* Nil + , topLevelConfigStorage = + HardForkStorageConfig + { hardForkStorageConfigPerEra = + PerEraStorageConfig $ + SCfgA + :* SCfgB + :* Nil } - , topLevelConfigCheckpoints = emptyCheckpointsMap - } - - consensusConfigA :: CoreNodeId -> ConsensusConfig ProtocolA - consensusConfigA nid = CfgA { - cfgA_k = k - , cfgA_leadInSlots = leaderScheduleFor nid leaderSchedule - } - - consensusConfigB :: CoreNodeId -> ConsensusConfig ProtocolB - consensusConfigB nid = CfgB { - cfgB_k = k - , cfgB_leadInSlots = leaderScheduleFor nid leaderSchedule - } - - ledgerConfigA :: CoreNodeId -> PartialLedgerConfig BlockA - ledgerConfigA _nid = LCfgA { - lcfgA_k = k - , lcfgA_systemStart = SystemStart dawnOfTime -- required for 
RunNode - , lcfgA_forgeTxs = Map.fromList [ - (testSetupTxSlot, [TxA (TxIdA 0) InitiateAtoB]) + , topLevelConfigCheckpoints = emptyCheckpointsMap + } + + consensusConfigA :: CoreNodeId -> ConsensusConfig ProtocolA + consensusConfigA nid = + CfgA + { cfgA_k = k + , cfgA_leadInSlots = leaderScheduleFor nid leaderSchedule + } + + consensusConfigB :: CoreNodeId -> ConsensusConfig ProtocolB + consensusConfigB nid = + CfgB + { cfgB_k = k + , cfgB_leadInSlots = leaderScheduleFor nid leaderSchedule + } + + ledgerConfigA :: CoreNodeId -> PartialLedgerConfig BlockA + ledgerConfigA _nid = + LCfgA + { lcfgA_k = k + , lcfgA_systemStart = SystemStart dawnOfTime -- required for RunNode + , lcfgA_forgeTxs = + Map.fromList + [ (testSetupTxSlot, [TxA (TxIdA 0) InitiateAtoB]) ] - } - - ledgerConfigB :: CoreNodeId -> LedgerConfig BlockB - ledgerConfigB _nid = () - - blockConfigA :: CoreNodeId -> BlockConfig BlockA - blockConfigA _ = BCfgA - - blockConfigB :: CoreNodeId -> BlockConfig BlockB - blockConfigB _ = BCfgB - - testOutput :: TestOutput TestBlock - testOutput = runTestNetwork testConfig testConfigB testConfigMB - - prop_allExpectedBlocks :: Property - prop_allExpectedBlocks = - counterexample - ( "some final chain does not have " <> - show a <> " blocks from A and " <> - show b <> " blocks from B" - ) $ - counterexample (show $ Map.toList counts) $ - property $ all (== (a, b)) counts - where - TestConfig{..} = testConfig - NumSlots t = numSlots - - -- we expect one epoch from B and the rest from A - b = unEpochSize (getB testSetupEpochSize) - a = t - b - - -- counts of A blocks and of B blocks for each final chain - counts :: Map.Map NodeId (Word64, Word64) - counts = - (\c -> (chainLen isA c, chainLen isB c)) <$> testOutputNodes - where - TestOutput{..} = testOutput - - isA, isB :: TestBlock -> Bool - isA (HardForkBlock (OneEraBlock blk)) = index_NS blk == 0 - isB (HardForkBlock (OneEraBlock blk)) = index_NS blk == 1 - - chainLen :: (a -> Bool) -> NodeOutput a -> Word64 - 
chainLen p NodeOutput{..} = - fromIntegral - . length - . filter p - $ Mock.chainToList nodeOutputFinalChain + } + + ledgerConfigB :: CoreNodeId -> LedgerConfig BlockB + ledgerConfigB _nid = () + + blockConfigA :: CoreNodeId -> BlockConfig BlockA + blockConfigA _ = BCfgA + + blockConfigB :: CoreNodeId -> BlockConfig BlockB + blockConfigB _ = BCfgB + + testOutput :: TestOutput TestBlock + testOutput = runTestNetwork testConfig testConfigB testConfigMB + + prop_allExpectedBlocks :: Property + prop_allExpectedBlocks = + counterexample + ( "some final chain does not have " + <> show a + <> " blocks from A and " + <> show b + <> " blocks from B" + ) + $ counterexample (show $ Map.toList counts) + $ property + $ all (== (a, b)) counts + where + TestConfig{..} = testConfig + NumSlots t = numSlots + + -- we expect one epoch from B and the rest from A + b = unEpochSize (getB testSetupEpochSize) + a = t - b + + -- counts of A blocks and of B blocks for each final chain + counts :: Map.Map NodeId (Word64, Word64) + counts = + (\c -> (chainLen isA c, chainLen isB c)) <$> testOutputNodes + where + TestOutput{..} = testOutput + + isA, isB :: TestBlock -> Bool + isA (HardForkBlock (OneEraBlock blk)) = index_NS blk == 0 + isB (HardForkBlock (OneEraBlock blk)) = index_NS blk == 1 + + chainLen :: (a -> Bool) -> NodeOutput a -> Word64 + chainLen p NodeOutput{..} = + fromIntegral + . length + . 
filter p + $ Mock.chainToList nodeOutputFinalChain -- We ignore the mempool for these tests instance TxGen TestBlock where @@ -373,15 +408,15 @@ instance TxGen TestBlock where -------------------------------------------------------------------------------} instance HasCanonicalTxIn '[BlockA, BlockB] where - newtype instance CanonicalTxIn '[BlockA, BlockB] = BlockABTxIn { - getBlockABTxIn :: Void + newtype CanonicalTxIn '[BlockA, BlockB] = BlockABTxIn + { getBlockABTxIn :: Void } deriving stock (Show, Eq, Ord) deriving newtype (NoThunks, MemPack) - injectCanonicalTxIn IZ key = absurd key - injectCanonicalTxIn (IS IZ) key = absurd key - injectCanonicalTxIn (IS (IS idx')) _ = case idx' of {} + injectCanonicalTxIn IZ key = absurd key + injectCanonicalTxIn (IS IZ) key = absurd key + injectCanonicalTxIn (IS (IS idx')) _ = case idx' of {} ejectCanonicalTxIn _ key = absurd $ getBlockABTxIn key @@ -399,137 +434,145 @@ type TestBlock = HardForkBlock '[BlockA, BlockB] instance CanHardFork '[BlockA, BlockB] where type HardForkTxMeasure '[BlockA, BlockB] = IgnoringOverflow ByteSize32 - hardForkEraTranslation = EraTranslation { - translateLedgerState = PCons ledgerState_AtoB PNil - , translateLedgerTables = PCons ledgerTables_AtoB PNil + hardForkEraTranslation = + EraTranslation + { translateLedgerState = PCons ledgerState_AtoB PNil + , translateLedgerTables = PCons ledgerTables_AtoB PNil , translateChainDepState = PCons chainDepState_AtoB PNil - , crossEraForecast = PCons forecast_AtoB PNil + , crossEraForecast = PCons forecast_AtoB PNil } - hardForkChainSel = Tails.mk2 CompareBlockNo + hardForkChainSel = Tails.mk2 CompareBlockNo hardForkInjectTxs = InPairs.mk2 injectTx_AtoB hardForkInjTxMeasure = \case - ( Z (WrapTxMeasure x)) -> x + (Z (WrapTxMeasure x)) -> x S (Z (WrapTxMeasure x)) -> x versionN2N :: BlockNodeToNodeVersion TestBlock versionN2N = - HardForkNodeToNodeEnabled - maxBound - ( WrapNodeToNodeVersion () - :* WrapNodeToNodeVersion () - :* Nil - ) + 
HardForkNodeToNodeEnabled + maxBound + ( WrapNodeToNodeVersion () + :* WrapNodeToNodeVersion () + :* Nil + ) versionN2C :: BlockNodeToClientVersion TestBlock versionN2C = - HardForkNodeToClientEnabled - maxBound - ( EraNodeToClientEnabled () - :* EraNodeToClientEnabled () - :* Nil - ) + HardForkNodeToClientEnabled + maxBound + ( EraNodeToClientEnabled () + :* EraNodeToClientEnabled () + :* Nil + ) instance SupportedNetworkProtocolVersion TestBlock where - supportedNodeToNodeVersions _ = Map.singleton maxBound versionN2N + supportedNodeToNodeVersions _ = Map.singleton maxBound versionN2N supportedNodeToClientVersions _ = Map.singleton maxBound versionN2C latestReleasedNodeVersion = latestReleasedNodeVersionDefault instance SerialiseHFC '[BlockA, BlockB] - -- Use defaults + +-- Use defaults instance SerializeTablesWithHint (LedgerState (HardForkBlock '[BlockA, BlockB])) where encodeTablesWithHint = defaultEncodeTablesWithHint decodeTablesWithHint = defaultDecodeTablesWithHint -instance IndexedMemPack - (LedgerState (HardForkBlock '[BlockA, BlockB]) EmptyMK) - (DefaultHardForkTxOut '[BlockA, BlockB]) where +instance + IndexedMemPack + (LedgerState (HardForkBlock '[BlockA, BlockB]) EmptyMK) + (DefaultHardForkTxOut '[BlockA, BlockB]) + where indexedTypeName _ = typeName @(DefaultHardForkTxOut '[BlockA, BlockB]) indexedPackedByteCount _ txout = hcollapse $ - hcmap (Proxy @MemPackTxOut) - (K . packedByteCount . unwrapTxOut) - txout + hcmap + (Proxy @MemPackTxOut) + (K . packedByteCount . unwrapTxOut) + txout indexedPackM _ = - hcollapse . hcimap - (Proxy @MemPackTxOut) - (\_ (WrapTxOut txout) -> K $ do - packM txout - ) + hcollapse + . 
hcimap + (Proxy @MemPackTxOut) + ( \_ (WrapTxOut txout) -> K $ do + packM txout + ) indexedUnpackM (HardForkLedgerState (HardForkState idx)) = do hsequence' $ hcmap - (Proxy @MemPackTxOut) - (const $ Comp $ WrapTxOut <$> unpackM) - $ Telescope.tip idx + (Proxy @MemPackTxOut) + (const $ Comp $ WrapTxOut <$> unpackM) + $ Telescope.tip idx {------------------------------------------------------------------------------- Translation -------------------------------------------------------------------------------} ledgerState_AtoB :: - RequiringBoth - WrapLedgerConfig - TranslateLedgerState - BlockA - BlockB + RequiringBoth + WrapLedgerConfig + TranslateLedgerState + BlockA + BlockB ledgerState_AtoB = - InPairs.ignoringBoth - $ TranslateLedgerState { - translateLedgerStateWith = \_ LgrA{..} -> - LgrB { - lgrB_tip = castPoint lgrA_tip + InPairs.ignoringBoth $ + TranslateLedgerState + { translateLedgerStateWith = \_ LgrA{..} -> + LgrB + { lgrB_tip = castPoint lgrA_tip } - } + } ledgerTables_AtoB :: TranslateLedgerTables BlockA BlockB -ledgerTables_AtoB = TranslateLedgerTables { - translateTxInWith = id +ledgerTables_AtoB = + TranslateLedgerTables + { translateTxInWith = id , translateTxOutWith = id } chainDepState_AtoB :: - RequiringBoth - WrapConsensusConfig - (Translate WrapChainDepState) - BlockA - BlockB + RequiringBoth + WrapConsensusConfig + (Translate WrapChainDepState) + BlockA + BlockB chainDepState_AtoB = InPairs.ignoringBoth $ Translate $ \_ _ -> - WrapChainDepState () + WrapChainDepState () forecast_AtoB :: - RequiringBoth - WrapLedgerConfig - (CrossEraForecaster LedgerState WrapLedgerView) - BlockA - BlockB -forecast_AtoB = InPairs.ignoringBoth $ CrossEraForecaster $ \_ _ _ -> return $ + RequiringBoth + WrapLedgerConfig + (CrossEraForecaster LedgerState WrapLedgerView) + BlockA + BlockB +forecast_AtoB = InPairs.ignoringBoth $ CrossEraForecaster $ \_ _ _ -> + return $ WrapLedgerView () injectTx_AtoB :: - RequiringBoth - WrapLedgerConfig - (Product2 InjectTx 
InjectValidatedTx) - BlockA - BlockB + RequiringBoth + WrapLedgerConfig + (Product2 InjectTx InjectValidatedTx) + BlockA + BlockB injectTx_AtoB = - InPairs.ignoringBoth $ Pair2 cannotInjectTx cannotInjectValidatedTx + InPairs.ignoringBoth $ Pair2 cannotInjectTx cannotInjectValidatedTx {------------------------------------------------------------------------------- Query HF -------------------------------------------------------------------------------} instance BlockSupportsHFLedgerQuery '[BlockA, BlockB] where - answerBlockQueryHFLookup IZ _ q = case q of {} - answerBlockQueryHFLookup (IS IZ) _cfg q = case q of {} + answerBlockQueryHFLookup IZ _ q = case q of {} + answerBlockQueryHFLookup (IS IZ) _cfg q = case q of {} answerBlockQueryHFLookup (IS (IS idx)) _cfg _q = case idx of {} - answerBlockQueryHFTraverse IZ _cfg q = case q of {} - answerBlockQueryHFTraverse (IS IZ) _cfg q = case q of {} + answerBlockQueryHFTraverse IZ _cfg q = case q of {} + answerBlockQueryHFTraverse (IS IZ) _cfg q = case q of {} answerBlockQueryHFTraverse (IS (IS idx)) _cfg _q = case idx of {} - queryLedgerGetTraversingFilter IZ q = case q of {} - queryLedgerGetTraversingFilter (IS IZ) q = case q of {} + queryLedgerGetTraversingFilter IZ q = case q of {} + queryLedgerGetTraversingFilter (IS IZ) q = case q of {} queryLedgerGetTraversingFilter (IS (IS idx)) _q = case idx of {} diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator/A.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator/A.hs index c6f6d0c9d1..f3c20c46c7 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator/A.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator/A.hs @@ -14,18 +14,19 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Test.Consensus.HardFork.Combinator.A 
( - BlockA (..) +module Test.Consensus.HardFork.Combinator.A + ( BlockA (..) , ProtocolA , blockForgingA , safeFromTipA , stabilityWindowA + -- * Additional types , PartialLedgerConfigA (..) , TxPayloadA (..) + -- * Type family instances , BlockConfig (..) , CodecConfig (..) @@ -39,60 +40,65 @@ module Test.Consensus.HardFork.Combinator.A ( , TxId (..) ) where -import Cardano.Ledger.BaseTypes.NonZero -import Cardano.Slotting.EpochInfo -import Codec.Serialise -import Control.Monad (guard) -import qualified Data.Binary as B -import Data.ByteString as Strict -import qualified Data.ByteString.Lazy as Lazy -import qualified Data.ByteString.Short as SBS -import Data.Functor.Identity -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Void -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks, OnlyCheckWhnfNamed (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Condense -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common -import Ouroboros.Consensus.HardFork.History (Bound (..), - EraParams (..)) -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.CommonProtocolParams -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsPeerSelection -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import 
Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ImmutableDB (simpleChunkInfo) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util (repeatedlyM) -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.IndexedMemPack -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Network.Block (Serialised, unwrapCBORinCBOR, - wrapCBORinCBOR) -import Ouroboros.Network.Magic -import Test.Cardano.Slotting.Numeric () -import Test.Util.Time (dawnOfTime) +import Cardano.Ledger.BaseTypes.NonZero +import Cardano.Slotting.EpochInfo +import Codec.Serialise +import Control.Monad (guard) +import Data.Binary qualified as B +import Data.ByteString as Strict +import Data.ByteString.Lazy qualified as Lazy +import Data.ByteString.Short qualified as SBS +import Data.Functor.Identity +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Void +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks, OnlyCheckWhnfNamed (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.Condense +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common +import Ouroboros.Consensus.HardFork.History + ( Bound (..) + , EraParams (..) 
+ ) +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsPeerSelection +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ImmutableDB (simpleChunkInfo) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util (repeatedlyM) +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.IndexedMemPack +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Network.Block + ( Serialised + , unwrapCBORinCBOR + , wrapCBORinCBOR + ) +import Ouroboros.Network.Magic +import Test.Cardano.Slotting.Numeric () +import Test.Util.Time (dawnOfTime) {------------------------------------------------------------------------------- BlockA @@ -100,57 +106,57 @@ import Test.Util.Time (dawnOfTime) data ProtocolA -data instance ConsensusConfig ProtocolA = CfgA { - cfgA_k :: SecurityParam - , cfgA_leadInSlots :: Set SlotNo - } +data instance ConsensusConfig ProtocolA = CfgA + { cfgA_k :: SecurityParam + , cfgA_leadInSlots :: Set SlotNo + } deriving NoThunks via OnlyCheckWhnfNamed "CfgA" (ConsensusConfig ProtocolA) instance ConsensusProtocol ProtocolA where type ChainDepState ProtocolA = () - type LedgerView ProtocolA = () - type IsLeader ProtocolA = () - type CanBeLeader ProtocolA = () - type ValidateView ProtocolA = () + type LedgerView ProtocolA = () + type 
IsLeader ProtocolA = () + type CanBeLeader ProtocolA = () + type ValidateView ProtocolA = () type ValidationErr ProtocolA = Void checkIsLeader CfgA{..} () slot _ = - if slot `Set.member` cfgA_leadInSlots + if slot `Set.member` cfgA_leadInSlots then Just () else Nothing protocolSecurityParam = cfgA_k - tickChainDepState _ _ _ _ = TickedTrivial - updateChainDepState _ _ _ _ = return () + tickChainDepState _ _ _ _ = TickedTrivial + updateChainDepState _ _ _ _ = return () reupdateChainDepState _ _ _ _ = () -data BlockA = BlkA { - blkA_header :: Header BlockA - , blkA_body :: [GenTx BlockA] - } - deriving stock (Show, Eq, Generic) - deriving anyclass (Serialise) +data BlockA = BlkA + { blkA_header :: Header BlockA + , blkA_body :: [GenTx BlockA] + } + deriving stock (Show, Eq, Generic) + deriving anyclass Serialise deriving NoThunks via OnlyCheckWhnfNamed "BlkA" BlockA -data instance Header BlockA = HdrA { - hdrA_fields :: HeaderFields BlockA - , hdrA_prev :: ChainHash BlockA - } - deriving stock (Show, Eq, Generic) - deriving anyclass (Serialise) +data instance Header BlockA = HdrA + { hdrA_fields :: HeaderFields BlockA + , hdrA_prev :: ChainHash BlockA + } + deriving stock (Show, Eq, Generic) + deriving anyclass Serialise deriving NoThunks via OnlyCheckWhnfNamed "HdrA" (Header BlockA) instance GetHeader BlockA where - getHeader = blkA_header + getHeader = blkA_header blockMatchesHeader = \_ _ -> True -- We are not interested in integrity here - headerIsEBB = const Nothing + headerIsEBB = const Nothing data instance BlockConfig BlockA = BCfgA deriving (Generic, NoThunks) type instance BlockProtocol BlockA = ProtocolA -type instance HeaderHash BlockA = Strict.ByteString +type instance HeaderHash BlockA = Strict.ByteString data instance CodecConfig BlockA = CCfgA deriving (Generic, NoThunks) @@ -159,7 +165,7 @@ data instance StorageConfig BlockA = SCfgA deriving (Generic, NoThunks) instance ConfigSupportsNode BlockA where - getSystemStart _ = SystemStart dawnOfTime + 
getSystemStart _ = SystemStart dawnOfTime getNetworkMagic _ = NetworkMagic 0 instance StandardHash BlockA @@ -173,26 +179,26 @@ instance HasHeader (Header BlockA) where instance GetPrevHash BlockA where headerPrevHash = hdrA_prev -instance HasAnnTip BlockA where +instance HasAnnTip BlockA -instance BasicEnvelopeValidation BlockA where - -- Use defaults +instance BasicEnvelopeValidation BlockA -instance ValidateEnvelope BlockA where +-- Use defaults -data instance LedgerState BlockA mk = LgrA { - lgrA_tip :: Point BlockA +instance ValidateEnvelope BlockA - -- | The 'SlotNo' of the block containing the 'InitiateAtoB' transaction - , lgrA_transition :: Maybe SlotNo - } +data instance LedgerState BlockA mk = LgrA + { lgrA_tip :: Point BlockA + , lgrA_transition :: Maybe SlotNo + -- ^ The 'SlotNo' of the block containing the 'InitiateAtoB' transaction + } deriving (Show, Eq, Generic, Serialise) deriving NoThunks via OnlyCheckWhnfNamed "LgrA" (LedgerState BlockA mk) -- | Ticking has no state on the A ledger state -newtype instance Ticked (LedgerState BlockA) mk = TickedLedgerStateA { - getTickedLedgerStateA :: LedgerState BlockA mk - } +newtype instance Ticked (LedgerState BlockA) mk = TickedLedgerStateA + { getTickedLedgerStateA :: LedgerState BlockA mk + } deriving stock (Generic, Show, Eq) deriving NoThunks via OnlyCheckWhnfNamed "TickedLgrA" (Ticked (LedgerState BlockA) mk) @@ -200,32 +206,43 @@ newtype instance Ticked (LedgerState BlockA) mk = TickedLedgerStateA { Ledger Tables -------------------------------------------------------------------------------} -type instance TxIn (LedgerState BlockA) = Void +type instance TxIn (LedgerState BlockA) = Void type instance TxOut (LedgerState BlockA) = Void instance LedgerTablesAreTrivial (LedgerState BlockA) where convertMapKind (LgrA x y) = LgrA x y instance LedgerTablesAreTrivial (Ticked (LedgerState BlockA)) where convertMapKind (TickedLedgerStateA x) = TickedLedgerStateA (convertMapKind x) -deriving via Void - instance 
IndexedMemPack (LedgerState BlockA EmptyMK) Void -deriving via TrivialLedgerTables (LedgerState BlockA) - instance HasLedgerTables (LedgerState BlockA) -deriving via TrivialLedgerTables (Ticked (LedgerState BlockA)) - instance HasLedgerTables (Ticked (LedgerState BlockA)) -deriving via TrivialLedgerTables (LedgerState BlockA) - instance CanStowLedgerTables (LedgerState BlockA) -deriving via TrivialLedgerTables (LedgerState BlockA) - instance CanUpgradeLedgerTables (LedgerState BlockA) -deriving via TrivialLedgerTables (LedgerState BlockA) - instance SerializeTablesWithHint (LedgerState BlockA) - - -data PartialLedgerConfigA = LCfgA { - lcfgA_k :: SecurityParam - , lcfgA_systemStart :: SystemStart - , lcfgA_forgeTxs :: Map SlotNo [GenTx BlockA] - } +deriving via + Void + instance + IndexedMemPack (LedgerState BlockA EmptyMK) Void +deriving via + TrivialLedgerTables (LedgerState BlockA) + instance + HasLedgerTables (LedgerState BlockA) +deriving via + TrivialLedgerTables (Ticked (LedgerState BlockA)) + instance + HasLedgerTables (Ticked (LedgerState BlockA)) +deriving via + TrivialLedgerTables (LedgerState BlockA) + instance + CanStowLedgerTables (LedgerState BlockA) +deriving via + TrivialLedgerTables (LedgerState BlockA) + instance + CanUpgradeLedgerTables (LedgerState BlockA) +deriving via + TrivialLedgerTables (LedgerState BlockA) + instance + SerializeTablesWithHint (LedgerState BlockA) + +data PartialLedgerConfigA = LCfgA + { lcfgA_k :: SecurityParam + , lcfgA_systemStart :: SystemStart + , lcfgA_forgeTxs :: Map SlotNo [GenTx BlockA] + } deriving Show deriving NoThunks via OnlyCheckWhnfNamed "LCfgA" PartialLedgerConfigA deriving Generic @@ -240,7 +257,8 @@ instance (HasZero a, Serialise a) => Serialise (NonZero a) where Nothing -> fail "Expected non zero but found zero!" 
Just a' -> pure a' -type instance LedgerCfg (LedgerState BlockA) = +type instance + LedgerCfg (LedgerState BlockA) = (EpochInfo Identity, PartialLedgerConfigA) instance GetTip (LedgerState BlockA) where @@ -252,22 +270,24 @@ instance GetTip (Ticked (LedgerState BlockA)) where instance IsLedger (LedgerState BlockA) where type LedgerErr (LedgerState BlockA) = Void - type AuxLedgerEvent (LedgerState BlockA) = - VoidLedgerEvent (LedgerState BlockA) + type + AuxLedgerEvent (LedgerState BlockA) = + VoidLedgerEvent (LedgerState BlockA) - applyChainTickLedgerResult _ _ _ = pureLedgerResult - . TickedLedgerStateA - . noNewTickingDiffs + applyChainTickLedgerResult _ _ _ = + pureLedgerResult + . TickedLedgerStateA + . noNewTickingDiffs instance ApplyBlock (LedgerState BlockA) BlockA where applyBlockLedgerResultWithValidation _ _ cfg blk = - fmap (pureLedgerResult . convertMapKind . setTip) + fmap (pureLedgerResult . convertMapKind . setTip) . repeatedlyM - (fmap (convertMapKind . fst) .: applyTx cfg DoNotIntervene (blockSlot blk)) - (blkA_body blk) - where - setTip :: TickedLedgerState BlockA mk -> LedgerState BlockA mk - setTip (TickedLedgerStateA st) = st { lgrA_tip = blockPoint blk } + (fmap (convertMapKind . 
fst) .: applyTx cfg DoNotIntervene (blockSlot blk)) + (blkA_body blk) + where + setTip :: TickedLedgerState BlockA mk -> LedgerState BlockA mk + setTip (TickedLedgerStateA st) = st{lgrA_tip = blockPoint blk} applyBlockLedgerResult = defaultApplyBlockLedgerResult reapplyBlockLedgerResult = @@ -279,14 +299,14 @@ instance UpdateLedger BlockA instance CommonProtocolParams BlockA where maxHeaderSize _ = maxBound - maxTxSize _ = maxBound + maxTxSize _ = maxBound instance BlockSupportsProtocol BlockA where validateView _ _ = () instance LedgerSupportsProtocol BlockA where - protocolLedgerView _ _ = () - ledgerViewForecastAt _ = trivialForecast + protocolLedgerView _ _ = () + ledgerViewForecastAt _ = trivialForecast instance HasPartialConsensusConfig ProtocolA @@ -298,43 +318,47 @@ instance HasPartialLedgerConfig BlockA where data TxPayloadA = InitiateAtoB deriving (Show, Eq, Generic, NoThunks, Serialise) -type instance CannotForge BlockA = Void -type instance ForgeStateInfo BlockA = () +type instance CannotForge BlockA = Void +type instance ForgeStateInfo BlockA = () type instance ForgeStateUpdateError BlockA = Void - forgeBlockA :: - TopLevelConfig BlockA - -> BlockNo - -> SlotNo - -> TickedLedgerState BlockA mk - -> [GenTx BlockA] - -> IsLeader (BlockProtocol BlockA) - -> BlockA -forgeBlockA tlc bno sno (TickedLedgerStateA st) _txs _ = BlkA { - blkA_header = HdrA { - hdrA_fields = HeaderFields { - headerFieldHash = Lazy.toStrict . B.encode $ unSlotNo sno - , headerFieldSlot = sno - , headerFieldBlockNo = bno - } - , hdrA_prev = ledgerTipHash st - } + TopLevelConfig BlockA -> + BlockNo -> + SlotNo -> + TickedLedgerState BlockA mk -> + [GenTx BlockA] -> + IsLeader (BlockProtocol BlockA) -> + BlockA +forgeBlockA tlc bno sno (TickedLedgerStateA st) _txs _ = + BlkA + { blkA_header = + HdrA + { hdrA_fields = + HeaderFields + { headerFieldHash = Lazy.toStrict . 
B.encode $ unSlotNo sno + , headerFieldSlot = sno + , headerFieldBlockNo = bno + } + , hdrA_prev = ledgerTipHash st + } , blkA_body = Map.findWithDefault [] sno (lcfgA_forgeTxs ledgerConfig) } - where - ledgerConfig :: PartialLedgerConfig BlockA - ledgerConfig = snd $ configLedger tlc + where + ledgerConfig :: PartialLedgerConfig BlockA + ledgerConfig = snd $ configLedger tlc blockForgingA :: Monad m => BlockForging m BlockA -blockForgingA = BlockForging { - forgeLabel = "BlockA" - , canBeLeader = () - , updateForgeState = \_ _ _ -> return $ ForgeStateUpdated () - , checkCanForge = \_ _ _ _ _ -> return () - , forgeBlock = \cfg bno slot st txs proof -> return $ - forgeBlockA cfg bno slot st (fmap txForgetValidated txs) proof - } +blockForgingA = + BlockForging + { forgeLabel = "BlockA" + , canBeLeader = () + , updateForgeState = \_ _ _ -> return $ ForgeStateUpdated () + , checkCanForge = \_ _ _ _ _ -> return () + , forgeBlock = \cfg bno slot st txs proof -> + return $ + forgeBlockA cfg bno slot st (fmap txForgetValidated txs) proof + } -- | See 'Ouroboros.Consensus.HardFork.History.EraParams.safeFromTip' safeFromTipA :: SecurityParam -> Word64 @@ -346,25 +370,25 @@ safeFromTipA (SecurityParam k) = unNonZero k stabilityWindowA :: SecurityParam -> Word64 stabilityWindowA (SecurityParam k) = unNonZero k -data instance GenTx BlockA = TxA { - txA_id :: TxId (GenTx BlockA) - , txA_payload :: TxPayloadA - } +data instance GenTx BlockA = TxA + { txA_id :: TxId (GenTx BlockA) + , txA_payload :: TxPayloadA + } deriving (Show, Eq, Generic, Serialise) deriving NoThunks via OnlyCheckWhnfNamed "TxA" (GenTx BlockA) -newtype instance Validated (GenTx BlockA) = ValidatedGenTxA { forgetValidatedGenTxA :: GenTx BlockA } - deriving stock (Show) +newtype instance Validated (GenTx BlockA) = ValidatedGenTxA {forgetValidatedGenTxA :: GenTx BlockA} + deriving stock Show deriving newtype (Generic, Eq) - deriving anyclass (NoThunks) + deriving anyclass NoThunks type instance ApplyTxErr BlockA 
= Void instance LedgerSupportsMempool BlockA where applyTx _ _wti sno tx@(TxA _ payload) (TickedLedgerStateA st) = - case payload of - InitiateAtoB -> do - return (TickedLedgerStateA $ st { lgrA_transition = Just sno }, ValidatedGenTxA tx) + case payload of + InitiateAtoB -> do + return (TickedLedgerStateA $ st{lgrA_transition = Just sno}, ValidatedGenTxA tx) reapplyTx _ cfg slot tx st = attachAndApplyDiffs st . fst <$> applyTx cfg DoNotIntervene slot (forgetValidatedGenTxA tx) st @@ -375,11 +399,11 @@ instance LedgerSupportsMempool BlockA where instance TxLimits BlockA where type TxMeasure BlockA = IgnoringOverflow ByteSize32 - blockCapacityTxMeasure _cfg _st = IgnoringOverflow $ ByteSize32 $ 100 * 1024 -- arbitrary - txMeasure _cfg _st _tx = pure $ IgnoringOverflow $ ByteSize32 0 + blockCapacityTxMeasure _cfg _st = IgnoringOverflow $ ByteSize32 $ 100 * 1024 -- arbitrary + txMeasure _cfg _st _tx = pure $ IgnoringOverflow $ ByteSize32 0 newtype instance TxId (GenTx BlockA) = TxIdA Int - deriving stock (Show, Eq, Ord, Generic) + deriving stock (Show, Eq, Ord, Generic) deriving newtype (NoThunks, Serialise) instance HasTxId (GenTx BlockA) where @@ -392,7 +416,7 @@ instance ShowQuery (BlockQuery BlockA fp) where showResult qry = case qry of {} data instance BlockQuery BlockA fp result - deriving (Show) + deriving Show instance BlockSupportsLedgerQuery BlockA where answerPureBlockQuery _ qry = case qry of {} @@ -400,14 +424,13 @@ instance BlockSupportsLedgerQuery BlockA where answerBlockQueryTraverse _ qry = case qry of {} blockQueryIsSupportedOnVersion qry _ = case qry of {} - instance SameDepIndex2 (BlockQuery BlockA) where sameDepIndex2 qry _qry' = case qry of {} instance ConvertRawHash BlockA where - toRawHash _ = id + toRawHash _ = id fromRawHash _ = id - hashSize _ = 8 -- We use the SlotNo as the hash, which is Word64 + hashSize _ = 8 -- We use the SlotNo as the hash, which is Word64 data instance NestedCtxt_ BlockA f a where CtxtA :: NestedCtxt_ BlockA f (f 
BlockA) @@ -425,57 +448,61 @@ instance DecodeDisk BlockA (Lazy.ByteString -> Header BlockA) where decodeDisk _ = const <$> decode instance EncodeDiskDepIx (NestedCtxt Header) BlockA -instance EncodeDiskDep (NestedCtxt Header) BlockA +instance EncodeDiskDep (NestedCtxt Header) BlockA instance DecodeDiskDepIx (NestedCtxt Header) BlockA -instance DecodeDiskDep (NestedCtxt Header) BlockA +instance DecodeDiskDep (NestedCtxt Header) BlockA -instance HasNestedContent Header BlockA where - -- Use defaults +instance HasNestedContent Header BlockA + +-- Use defaults instance ReconstructNestedCtxt Header BlockA - -- Use defaults + +-- Use defaults instance LedgerSupportsPeerSelection BlockA where getPeers = const [] -data UpdateA = - ProposalSubmitted - | ProposalStable - deriving (Show, Eq) +data UpdateA + = ProposalSubmitted + | ProposalStable + deriving (Show, Eq) instance Condense UpdateA where condense = show instance InspectLedger BlockA where type LedgerWarning BlockA = Void - type LedgerUpdate BlockA = UpdateA + type LedgerUpdate BlockA = UpdateA inspectLedger cfg before after = - case (getConfirmationDepth before, getConfirmationDepth after) of - (Nothing, Just _) -> - return $ LedgerUpdate ProposalSubmitted - (Just (_, d), Just (_, d')) -> do - guard $ d < k && d' >= k - return $ LedgerUpdate ProposalStable - _otherwise -> - [] - where - k = stabilityWindowA (lcfgA_k (snd (configLedger cfg))) + case (getConfirmationDepth before, getConfirmationDepth after) of + (Nothing, Just _) -> + return $ LedgerUpdate ProposalSubmitted + (Just (_, d), Just (_, d')) -> do + guard $ d < k && d' >= k + return $ LedgerUpdate ProposalStable + _otherwise -> + [] + where + k = stabilityWindowA (lcfgA_k (snd (configLedger cfg))) getConfirmationDepth :: LedgerState BlockA mk -> Maybe (SlotNo, Word64) getConfirmationDepth st = do - confirmedInSlot <- lgrA_transition st - return $ case ledgerTipSlot st of - Origin -> error "impossible" - NotOrigin s -> if s < confirmedInSlot - then error 
"impossible" - else ( confirmedInSlot - , History.countSlots s confirmedInSlot - ) + confirmedInSlot <- lgrA_transition st + return $ case ledgerTipSlot st of + Origin -> error "impossible" + NotOrigin s -> + if s < confirmedInSlot + then error "impossible" + else + ( confirmedInSlot + , History.countSlots s confirmedInSlot + ) instance NodeInitStorage BlockA where - nodeCheckIntegrity _ _ = True + nodeCheckIntegrity _ _ = True -- Pick some chunk size nodeImmutableDbChunkInfo _ = simpleChunkInfo 10 @@ -483,58 +510,61 @@ instance NodeInitStorage BlockA where instance BlockSupportsMetrics BlockA where isSelfIssued = isSelfIssuedConstUnknown -deriving via SelectViewDiffusionPipelining BlockA - instance BlockSupportsDiffusionPipelining BlockA +deriving via + SelectViewDiffusionPipelining BlockA + instance + BlockSupportsDiffusionPipelining BlockA instance SingleEraBlock BlockA where singleEraInfo _ = SingleEraInfo "A" singleEraTransition cfg EraParams{..} eraStart st = do - (confirmedInSlot, confirmationDepth) <- getConfirmationDepth st - - -- The ledger must report the scheduled transition to the next era as soon - -- as the block containing this transaction is immutable (that is, at - -- least @k@ blocks have come after) -- this happens elsewhere in the - -- corresponding 'SingleEraBlock' instance. It must not report it sooner - -- than that because the consensus layer requires that conversions about - -- time (when successful) must not be subject to rollback. - guard $ confirmationDepth >= stabilityWindowA (lcfgA_k cfg) - - -- Consensus /also/ insists that as long as the transition to the next era - -- is not yet known (ie not yet determined by an immutable block), there - -- is a safe zone that extends past the tip of the ledger in which we - -- guarantee the next era will not begin. This means that we must have an - -- additional @safeFromTipA k@ blocks /after/ reporting the transition and - -- /before/ the start of the next era. 
- -- - -- Thus, we schedule the next era to begin with the first upcoming epoch - -- that starts /after/ we're guaranteed to see both the aforementioned @k@ - -- additional blocks and also a further @safeFromTipA k@ slots after the - -- last of those. - - let -- The last slot that must be in the current era - firstPossibleLastSlotThisEra = - History.addSlots - (stabilityWindowA k + safeFromTipA k) - confirmedInSlot - - -- The 'EpochNo' corresponding to 'firstPossibleLastSlotThisEra' - lastEpochThisEra = slotToEpoch firstPossibleLastSlotThisEra - - -- The first epoch that may be in the next era - -- (recall: eras are epoch-aligned) - firstEpochNextEra = succ lastEpochThisEra - - return firstEpochNextEra + (confirmedInSlot, confirmationDepth) <- getConfirmationDepth st + + -- The ledger must report the scheduled transition to the next era as soon + -- as the block containing this transaction is immutable (that is, at + -- least @k@ blocks have come after) -- this happens elsewhere in the + -- corresponding 'SingleEraBlock' instance. It must not report it sooner + -- than that because the consensus layer requires that conversions about + -- time (when successful) must not be subject to rollback. + guard $ confirmationDepth >= stabilityWindowA (lcfgA_k cfg) + + -- Consensus /also/ insists that as long as the transition to the next era + -- is not yet known (ie not yet determined by an immutable block), there + -- is a safe zone that extends past the tip of the ledger in which we + -- guarantee the next era will not begin. This means that we must have an + -- additional @safeFromTipA k@ blocks /after/ reporting the transition and + -- /before/ the start of the next era. + -- + -- Thus, we schedule the next era to begin with the first upcoming epoch + -- that starts /after/ we're guaranteed to see both the aforementioned @k@ + -- additional blocks and also a further @safeFromTipA k@ slots after the + -- last of those. 
+ + let + -- The last slot that must be in the current era + firstPossibleLastSlotThisEra = + History.addSlots + (stabilityWindowA k + safeFromTipA k) + confirmedInSlot + + -- The 'EpochNo' corresponding to 'firstPossibleLastSlotThisEra' + lastEpochThisEra = slotToEpoch firstPossibleLastSlotThisEra + + -- The first epoch that may be in the next era + -- (recall: eras are epoch-aligned) + firstEpochNextEra = succ lastEpochThisEra + + return firstEpochNextEra where - k = lcfgA_k cfg + k = lcfgA_k cfg - -- Slot conversion (valid for slots in this era only) - slotToEpoch :: SlotNo -> EpochNo - slotToEpoch s = - History.addEpochs - (History.countSlots s (boundSlot eraStart) `div` unEpochSize eraEpochSize) - (boundEpoch eraStart) + -- Slot conversion (valid for slots in this era only) + slotToEpoch :: SlotNo -> EpochNo + slotToEpoch s = + History.addEpochs + (History.countSlots s (boundSlot eraStart) `div` unEpochSize eraEpochSize) + (boundEpoch eraStart) instance HasTxs BlockA where extractTxs = blkA_body @@ -545,9 +575,9 @@ instance HasTxs BlockA where instance CondenseConstraints BlockA -instance Condense BlockA where condense = show -instance Condense (Header BlockA) where condense = show -instance Condense (GenTx BlockA) where condense = show +instance Condense BlockA where condense = show +instance Condense (Header BlockA) where condense = show +instance Condense (GenTx BlockA) where condense = show instance Condense (TxId (GenTx BlockA)) where condense = show {------------------------------------------------------------------------------- @@ -562,12 +592,12 @@ instance HasBinaryBlockInfo BlockA where -- > field1 -- > .. 
-- > fieldN - getBinaryBlockInfo BlkA{..} = BinaryBlockInfo { - headerOffset = 2 - , headerSize = fromIntegral $ Lazy.length (serialise blkA_header) + getBinaryBlockInfo BlkA{..} = + BinaryBlockInfo + { headerOffset = 2 + , headerSize = fromIntegral $ Lazy.length (serialise blkA_header) } - instance SerialiseNodeToClient BlockA PartialLedgerConfigA -- NOTE: we will never use BlockA as a SingleEraBlock, however in order to fulfill the @@ -579,11 +609,11 @@ instance SerialiseNodeToClient BlockA (EpochInfo Identity, PartialLedgerConfigA) encodeNodeToClient = error "BlockA being used as a SingleEraBlock" decodeNodeToClient = error "BlockA being used as a SingleEraBlock" -instance SerialiseConstraintsHFC BlockA -instance SerialiseDiskConstraints BlockA +instance SerialiseConstraintsHFC BlockA +instance SerialiseDiskConstraints BlockA instance SerialiseNodeToClientConstraints BlockA -instance SerialiseNodeToNodeConstraints BlockA where - estimateBlockSize = const 0 +instance SerialiseNodeToNodeConstraints BlockA where + estimateBlockSize = const 0 {------------------------------------------------------------------------------- SerialiseDiskConstraints @@ -620,7 +650,7 @@ instance SerialiseNodeToNode BlockA (GenTxId BlockA) -- Must be compatible with @(SerialisedHeader BlockA)@, which uses -- the @Serialise (SerialisedHeader BlockA)@ instance below instance SerialiseNodeToNode BlockA (Header BlockA) where - encodeNodeToNode _ _ = wrapCBORinCBOR encode + encodeNodeToNode _ _ = wrapCBORinCBOR encode decodeNodeToNode _ _ = unwrapCBORinCBOR (const <$> decode) instance Serialise (SerialisedHeader BlockA) where diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator/B.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator/B.hs index 0e43f6fbf8..b837b2869c 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator/B.hs +++ 
b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/HardFork/Combinator/B.hs @@ -14,14 +14,14 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Test.Consensus.HardFork.Combinator.B ( - BlockB (..) +module Test.Consensus.HardFork.Combinator.B + ( BlockB (..) , ProtocolB , blockForgingB , safeZoneB + -- * Type family instances , BlockConfig (..) , CodecConfig (..) @@ -35,50 +35,53 @@ module Test.Consensus.HardFork.Combinator.B ( , TxId (..) ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Codec.Serialise -import qualified Data.Binary as B -import qualified Data.ByteString as Strict -import qualified Data.ByteString.Lazy as Lazy -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Void -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks, OnlyCheckWhnfNamed (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.HardFork.Combinator.Condense -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.CommonProtocolParams -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsPeerSelection -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import 
Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ImmutableDB (simpleChunkInfo) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.IndexedMemPack -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Network.Block (Serialised, unwrapCBORinCBOR, - wrapCBORinCBOR) -import Ouroboros.Network.Magic -import Test.Cardano.Slotting.Numeric () -import Test.Util.Time (dawnOfTime) +import Cardano.Ledger.BaseTypes (unNonZero) +import Codec.Serialise +import Data.Binary qualified as B +import Data.ByteString qualified as Strict +import Data.ByteString.Lazy qualified as Lazy +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Void +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks, OnlyCheckWhnfNamed (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.Condense +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsPeerSelection +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Protocol.Abstract +import 
Ouroboros.Consensus.Storage.ImmutableDB (simpleChunkInfo) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.IndexedMemPack +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Network.Block + ( Serialised + , unwrapCBORinCBOR + , wrapCBORinCBOR + ) +import Ouroboros.Network.Magic +import Test.Cardano.Slotting.Numeric () +import Test.Util.Time (dawnOfTime) {------------------------------------------------------------------------------- BlockB @@ -86,56 +89,56 @@ import Test.Util.Time (dawnOfTime) data ProtocolB -data instance ConsensusConfig ProtocolB = CfgB { - cfgB_k :: SecurityParam - , cfgB_leadInSlots :: Set SlotNo - } +data instance ConsensusConfig ProtocolB = CfgB + { cfgB_k :: SecurityParam + , cfgB_leadInSlots :: Set SlotNo + } deriving NoThunks via OnlyCheckWhnfNamed "CfgB" (ConsensusConfig ProtocolB) instance ConsensusProtocol ProtocolB where type ChainDepState ProtocolB = () - type LedgerView ProtocolB = () - type IsLeader ProtocolB = () - type CanBeLeader ProtocolB = () - type ValidateView ProtocolB = () + type LedgerView ProtocolB = () + type IsLeader ProtocolB = () + type CanBeLeader ProtocolB = () + type ValidateView ProtocolB = () type ValidationErr ProtocolB = Void checkIsLeader CfgB{..} () slot _ = - if slot `Set.member` cfgB_leadInSlots + if slot `Set.member` cfgB_leadInSlots then Just () else Nothing protocolSecurityParam = cfgB_k - tickChainDepState _ _ _ _ = TickedTrivial - updateChainDepState _ _ _ _ = return () + tickChainDepState _ _ _ _ = TickedTrivial + updateChainDepState _ _ _ _ = return () reupdateChainDepState _ _ _ _ = () -data BlockB = BlkB { - blkB_header :: Header BlockB - } - deriving stock (Show, Eq, Generic) - deriving anyclass (Serialise) +data BlockB = BlkB + { blkB_header :: Header BlockB + } + deriving stock (Show, Eq, Generic) + deriving anyclass Serialise deriving NoThunks via OnlyCheckWhnfNamed 
"BlkB" BlockB -data instance Header BlockB = HdrB { - hdrB_fields :: HeaderFields BlockB - , hdrB_prev :: ChainHash BlockB - } - deriving stock (Show, Eq, Generic) - deriving anyclass (Serialise) +data instance Header BlockB = HdrB + { hdrB_fields :: HeaderFields BlockB + , hdrB_prev :: ChainHash BlockB + } + deriving stock (Show, Eq, Generic) + deriving anyclass Serialise deriving NoThunks via OnlyCheckWhnfNamed "HdrB" (Header BlockB) instance GetHeader BlockB where - getHeader = blkB_header + getHeader = blkB_header blockMatchesHeader = \_ _ -> True -- We are not interested in integrity here - headerIsEBB = const Nothing + headerIsEBB = const Nothing data instance BlockConfig BlockB = BCfgB deriving (Generic, NoThunks) type instance BlockProtocol BlockB = ProtocolB -type instance HeaderHash BlockB = Strict.ByteString +type instance HeaderHash BlockB = Strict.ByteString data instance CodecConfig BlockB = CCfgB deriving (Generic, NoThunks) @@ -144,7 +147,7 @@ data instance StorageConfig BlockB = SCfgB deriving (Generic, NoThunks) instance ConfigSupportsNode BlockB where - getSystemStart _ = SystemStart dawnOfTime + getSystemStart _ = SystemStart dawnOfTime getNetworkMagic _ = NetworkMagic 0 instance StandardHash BlockB @@ -158,16 +161,17 @@ instance HasHeader (Header BlockB) where instance GetPrevHash BlockB where headerPrevHash = hdrB_prev -instance HasAnnTip BlockB where +instance HasAnnTip BlockB -instance BasicEnvelopeValidation BlockB where - -- Use defaults +instance BasicEnvelopeValidation BlockB -instance ValidateEnvelope BlockB where +-- Use defaults -data instance LedgerState BlockB mk = LgrB { - lgrB_tip :: Point BlockB - } +instance ValidateEnvelope BlockB + +data instance LedgerState BlockB mk = LgrB + { lgrB_tip :: Point BlockB + } deriving (Show, Eq, Generic, Serialise) deriving NoThunks via OnlyCheckWhnfNamed "LgrB" (LedgerState BlockB mk) @@ -175,7 +179,7 @@ data instance LedgerState BlockB mk = LgrB { Ledger Tables 
-------------------------------------------------------------------------------} -type instance TxIn (LedgerState BlockB) = Void +type instance TxIn (LedgerState BlockB) = Void type instance TxOut (LedgerState BlockB) = Void instance LedgerTablesAreTrivial (LedgerState BlockB) where @@ -183,27 +187,39 @@ instance LedgerTablesAreTrivial (LedgerState BlockB) where instance LedgerTablesAreTrivial (Ticked (LedgerState BlockB)) where convertMapKind (TickedLedgerStateB x) = TickedLedgerStateB (convertMapKind x) -deriving via TrivialLedgerTables (LedgerState BlockB) - instance HasLedgerTables (LedgerState BlockB) -deriving via TrivialLedgerTables (Ticked (LedgerState BlockB)) - instance HasLedgerTables (Ticked (LedgerState BlockB)) -deriving via TrivialLedgerTables (LedgerState BlockB) - instance CanStowLedgerTables (LedgerState BlockB) -deriving via TrivialLedgerTables (LedgerState BlockB) - instance CanUpgradeLedgerTables (LedgerState BlockB) -deriving via TrivialLedgerTables (LedgerState BlockB) - instance SerializeTablesWithHint (LedgerState BlockB) -deriving via Void - instance IndexedMemPack (LedgerState BlockB EmptyMK) Void +deriving via + TrivialLedgerTables (LedgerState BlockB) + instance + HasLedgerTables (LedgerState BlockB) +deriving via + TrivialLedgerTables (Ticked (LedgerState BlockB)) + instance + HasLedgerTables (Ticked (LedgerState BlockB)) +deriving via + TrivialLedgerTables (LedgerState BlockB) + instance + CanStowLedgerTables (LedgerState BlockB) +deriving via + TrivialLedgerTables (LedgerState BlockB) + instance + CanUpgradeLedgerTables (LedgerState BlockB) +deriving via + TrivialLedgerTables (LedgerState BlockB) + instance + SerializeTablesWithHint (LedgerState BlockB) +deriving via + Void + instance + IndexedMemPack (LedgerState BlockB EmptyMK) Void type PartialLedgerCfgB = () type instance LedgerCfg (LedgerState BlockB) = PartialLedgerCfgB -- | Ticking has no state on the B ledger state -newtype instance Ticked (LedgerState BlockB) mk = 
TickedLedgerStateB { - getTickedLedgerStateB :: LedgerState BlockB mk - } +newtype instance Ticked (LedgerState BlockB) mk = TickedLedgerStateB + { getTickedLedgerStateB :: LedgerState BlockB mk + } deriving NoThunks via OnlyCheckWhnfNamed "TickedLgrB" (Ticked (LedgerState BlockB) mk) instance GetTip (LedgerState BlockB) where @@ -215,12 +231,14 @@ instance GetTip (Ticked (LedgerState BlockB)) where instance IsLedger (LedgerState BlockB) where type LedgerErr (LedgerState BlockB) = Void - type AuxLedgerEvent (LedgerState BlockB) = - VoidLedgerEvent (LedgerState BlockB) + type + AuxLedgerEvent (LedgerState BlockB) = + VoidLedgerEvent (LedgerState BlockB) - applyChainTickLedgerResult _ _ _ = pureLedgerResult - . TickedLedgerStateB - . noNewTickingDiffs + applyChainTickLedgerResult _ _ _ = + pureLedgerResult + . TickedLedgerStateB + . noNewTickingDiffs instance ApplyBlock (LedgerState BlockB) BlockB where applyBlockLedgerResultWithValidation = \_ _ _ b _ -> return $ pureLedgerResult $ LgrB (blockPoint b) @@ -233,51 +251,56 @@ instance UpdateLedger BlockB instance CommonProtocolParams BlockB where maxHeaderSize _ = maxBound - maxTxSize _ = maxBound + maxTxSize _ = maxBound instance BlockSupportsProtocol BlockB where validateView _ _ = () instance LedgerSupportsProtocol BlockB where - protocolLedgerView _ _ = () - ledgerViewForecastAt _ = trivialForecast + protocolLedgerView _ _ = () + ledgerViewForecastAt _ = trivialForecast instance HasPartialConsensusConfig ProtocolB instance HasPartialLedgerConfig BlockB -type instance CannotForge BlockB = Void -type instance ForgeStateInfo BlockB = () +type instance CannotForge BlockB = Void +type instance ForgeStateInfo BlockB = () type instance ForgeStateUpdateError BlockB = Void forgeBlockB :: - TopLevelConfig BlockB - -> BlockNo - -> SlotNo - -> TickedLedgerState BlockB mk - -> [GenTx BlockB] - -> IsLeader (BlockProtocol BlockB) - -> BlockB -forgeBlockB _ bno sno (TickedLedgerStateB st) _txs _ = BlkB { - blkB_header = HdrB { - 
hdrB_fields = HeaderFields { - headerFieldHash = Lazy.toStrict . B.encode $ unSlotNo sno - , headerFieldSlot = sno - , headerFieldBlockNo = bno - } - , hdrB_prev = ledgerTipHash st - } + TopLevelConfig BlockB -> + BlockNo -> + SlotNo -> + TickedLedgerState BlockB mk -> + [GenTx BlockB] -> + IsLeader (BlockProtocol BlockB) -> + BlockB +forgeBlockB _ bno sno (TickedLedgerStateB st) _txs _ = + BlkB + { blkB_header = + HdrB + { hdrB_fields = + HeaderFields + { headerFieldHash = Lazy.toStrict . B.encode $ unSlotNo sno + , headerFieldSlot = sno + , headerFieldBlockNo = bno + } + , hdrB_prev = ledgerTipHash st + } } blockForgingB :: Monad m => BlockForging m BlockB -blockForgingB = BlockForging { - forgeLabel = "BlockB" - , canBeLeader = () - , updateForgeState = \_ _ _ -> return $ ForgeStateUpdated () - , checkCanForge = \_ _ _ _ _ -> return () - , forgeBlock = \cfg bno slot st txs proof -> return $ - forgeBlockB cfg bno slot st (fmap txForgetValidated txs) proof - } +blockForgingB = + BlockForging + { forgeLabel = "BlockB" + , canBeLeader = () + , updateForgeState = \_ _ _ -> return $ ForgeStateUpdated () + , checkCanForge = \_ _ _ _ _ -> return () + , forgeBlock = \cfg bno slot st txs proof -> + return $ + forgeBlockB cfg bno slot st (fmap txForgetValidated txs) proof + } -- | A basic 'History.SafeZone' -- @@ -296,7 +319,7 @@ data instance Validated (GenTx BlockB) type instance ApplyTxErr BlockB = Void instance LedgerSupportsMempool BlockB where - applyTx = \_ _ _wti tx -> case tx of {} + applyTx = \_ _ _wti tx -> case tx of {} reapplyTx = \_ _ _ vtx -> case vtx of {} txForgetValidated = \case {} @@ -305,11 +328,11 @@ instance LedgerSupportsMempool BlockB where instance TxLimits BlockB where type TxMeasure BlockB = IgnoringOverflow ByteSize32 - blockCapacityTxMeasure _cfg _st = IgnoringOverflow $ ByteSize32 $ 100 * 1024 -- arbitrary - txMeasure _cfg _st _tx = pure $ IgnoringOverflow $ ByteSize32 0 + blockCapacityTxMeasure _cfg _st = IgnoringOverflow $ ByteSize32 $ 100 
* 1024 -- arbitrary + txMeasure _cfg _st _tx = pure $ IgnoringOverflow $ ByteSize32 0 data instance TxId (GenTx BlockB) - deriving stock (Show, Eq, Ord, Generic) + deriving stock (Show, Eq, Ord, Generic) deriving anyclass (NoThunks, Serialise) instance HasTxId (GenTx BlockB) where @@ -322,7 +345,7 @@ instance ShowQuery (BlockQuery BlockB fp) where showResult qry = case qry of {} data instance BlockQuery BlockB fp result - deriving (Show) + deriving Show instance BlockSupportsLedgerQuery BlockB where answerPureBlockQuery _ qry = case qry of {} @@ -334,9 +357,9 @@ instance SameDepIndex2 (BlockQuery BlockB) where sameDepIndex2 qry _qry' = case qry of {} instance ConvertRawHash BlockB where - toRawHash _ = id + toRawHash _ = id fromRawHash _ = id - hashSize _ = 8 -- We use the SlotNo as the hash, which is Word64 + hashSize _ = 8 -- We use the SlotNo as the hash, which is Word64 data instance NestedCtxt_ BlockB f a where CtxtB :: NestedCtxt_ BlockB f (f BlockB) @@ -354,25 +377,28 @@ instance DecodeDisk BlockB (Lazy.ByteString -> Header BlockB) where decodeDisk _ = const <$> decode instance EncodeDiskDepIx (NestedCtxt Header) BlockB -instance EncodeDiskDep (NestedCtxt Header) BlockB +instance EncodeDiskDep (NestedCtxt Header) BlockB instance DecodeDiskDepIx (NestedCtxt Header) BlockB -instance DecodeDiskDep (NestedCtxt Header) BlockB +instance DecodeDiskDep (NestedCtxt Header) BlockB -instance HasNestedContent Header BlockB where - -- Use defaults +instance HasNestedContent Header BlockB + +-- Use defaults instance ReconstructNestedCtxt Header BlockB - -- Use defaults -instance InspectLedger BlockB where - -- Use defaults +-- Use defaults + +instance InspectLedger BlockB + +-- Use defaults instance LedgerSupportsPeerSelection BlockB where getPeers = const [] instance NodeInitStorage BlockB where - nodeCheckIntegrity _ _ = True + nodeCheckIntegrity _ _ = True -- Pick some chunk size nodeImmutableDbChunkInfo _ = simpleChunkInfo 10 @@ -380,11 +406,13 @@ instance 
NodeInitStorage BlockB where instance BlockSupportsMetrics BlockB where isSelfIssued = isSelfIssuedConstUnknown -deriving via SelectViewDiffusionPipelining BlockB - instance BlockSupportsDiffusionPipelining BlockB +deriving via + SelectViewDiffusionPipelining BlockB + instance + BlockSupportsDiffusionPipelining BlockB instance SingleEraBlock BlockB where - singleEraInfo _ = SingleEraInfo "B" + singleEraInfo _ = SingleEraInfo "B" singleEraTransition = \_ _ _ _ -> Nothing instance HasTxs BlockB where @@ -396,9 +424,9 @@ instance HasTxs BlockB where instance CondenseConstraints BlockB -instance Condense BlockB where condense = show -instance Condense (Header BlockB) where condense = show -instance Condense (GenTx BlockB) where condense = show +instance Condense BlockB where condense = show +instance Condense (Header BlockB) where condense = show +instance Condense (GenTx BlockB) where condense = show instance Condense (TxId (GenTx BlockB)) where condense = show {------------------------------------------------------------------------------- @@ -413,16 +441,17 @@ instance HasBinaryBlockInfo BlockB where -- > field1 -- > .. 
-- > fieldN - getBinaryBlockInfo BlkB{..} = BinaryBlockInfo { - headerOffset = 2 - , headerSize = fromIntegral $ Lazy.length (serialise blkB_header) + getBinaryBlockInfo BlkB{..} = + BinaryBlockInfo + { headerOffset = 2 + , headerSize = fromIntegral $ Lazy.length (serialise blkB_header) } -instance SerialiseConstraintsHFC BlockB -instance SerialiseDiskConstraints BlockB +instance SerialiseConstraintsHFC BlockB +instance SerialiseDiskConstraints BlockB instance SerialiseNodeToClientConstraints BlockB -instance SerialiseNodeToNodeConstraints BlockB where - estimateBlockSize = const 0 +instance SerialiseNodeToNodeConstraints BlockB where + estimateBlockSize = const 0 {------------------------------------------------------------------------------- Serialisation @@ -459,7 +488,7 @@ instance SerialiseNodeToNode BlockB (GenTxId BlockB) -- Must be compatible with @(SerialisedHeader BlockB)@, which uses -- the @Serialise (SerialisedHeader BlockB)@ instance below instance SerialiseNodeToNode BlockB (Header BlockB) where - encodeNodeToNode _ _ = wrapCBORinCBOR encode + encodeNodeToNode _ _ = wrapCBORinCBOR encode decodeNodeToNode _ _ = unwrapCBORinCBOR (const <$> decode) instance Serialise (SerialisedHeader BlockB) where diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/IOSimQSM/Test/StateMachine/Sequential.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/IOSimQSM/Test/StateMachine/Sequential.hs index 613b754a97..1d9c4dd934 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/IOSimQSM/Test/StateMachine/Sequential.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/IOSimQSM/Test/StateMachine/Sequential.hs @@ -5,6 +5,9 @@ {-# LANGUAGE ScopedTypeVariables #-} ----------------------------------------------------------------------------- + +----------------------------------------------------------------------------- + -- | -- -- @@ -26,50 +29,65 @@ -- -- This module contains helpers 
for generating, shrinking, and checking -- sequential programs. --- ------------------------------------------------------------------------------ - module Test.Consensus.IOSimQSM.Test.StateMachine.Sequential (runCommands') where -import Control.Concurrent.Class.MonadSTM.TChan (TChan, newTChanIO, - tryReadTChan, writeTChan) -import Control.Exception (SomeAsyncException (..), SomeException, - displayException, fromException) -import Control.Monad (when) -import Control.Monad.Class.MonadSay -import Control.Monad.State.Strict (StateT, get, lift, put, runStateT) -import Data.Dynamic (Dynamic, toDyn) -import Data.Either (fromRight) -import Data.Maybe (fromMaybe) -import qualified Data.Set as S -import Ouroboros.Consensus.Util.IOLike (ExitCase (..), IOLike, - MonadCatch (..), atomically, catch, throwIO) -import Test.StateMachine.Logic -import Test.StateMachine.Types -import qualified Test.StateMachine.Types.Rank2 as Rank2 -import Test.StateMachine.Utils -import Text.Show.Pretty (ppShow) +import Control.Concurrent.Class.MonadSTM.TChan + ( TChan + , newTChanIO + , tryReadTChan + , writeTChan + ) +import Control.Exception + ( SomeAsyncException (..) + , SomeException + , displayException + , fromException + ) +import Control.Monad (when) +import Control.Monad.Class.MonadSay +import Control.Monad.State.Strict (StateT, get, lift, put, runStateT) +import Data.Dynamic (Dynamic, toDyn) +import Data.Either (fromRight) +import Data.Maybe (fromMaybe) +import Data.Set qualified as S +import Ouroboros.Consensus.Util.IOLike + ( ExitCase (..) + , IOLike + , MonadCatch (..) 
+ , atomically + , catch + , throwIO + ) +import Test.StateMachine.Logic +import Test.StateMachine.Types +import Test.StateMachine.Types.Rank2 qualified as Rank2 +import Test.StateMachine.Utils +import Text.Show.Pretty (ppShow) ------------------------------------------------------------------------ -runCommands' :: (Show (cmd Concrete), Show (resp Concrete)) - => (Rank2.Traversable cmd, Rank2.Foldable resp) - => (IOLike m, MonadSay m) - => m (StateMachine model cmd m resp) - -> Commands cmd resp - -> m (History cmd resp, model Concrete, Reason) +runCommands' :: + (Show (cmd Concrete), Show (resp Concrete)) => + (Rank2.Traversable cmd, Rank2.Foldable resp) => + (IOLike m, MonadSay m) => + m (StateMachine model cmd m resp) -> + Commands cmd resp -> + m (History cmd resp, model Concrete, Reason) runCommands' msm cmds = do hchan <- newTChanIO (reason, (_, _, _, model)) <- - fst <$> generalBracket - msm - (\sm' ec -> case ec of - ExitCaseSuccess (_, (_,_,_,model)) -> cleanup sm' model - _ -> getChanContents hchan >>= cleanup sm' . mkModel sm' . History - ) - (\sm'@StateMachine{ initModel } -> runStateT - (executeCommands sm' hchan (Pid 0) CheckEverything cmds) - (emptyEnvironment, initModel, newCounter, initModel)) + fst + <$> generalBracket + msm + ( \sm' ec -> case ec of + ExitCaseSuccess (_, (_, _, _, model)) -> cleanup sm' model + _ -> getChanContents hchan >>= cleanup sm' . mkModel sm' . History + ) + ( \sm'@StateMachine{initModel} -> + runStateT + (executeCommands sm' hchan (Pid 0) CheckEverything cmds) + (emptyEnvironment, initModel, newCounter, initModel) + ) hist <- getChanContents hchan return (History hist, model, reason) @@ -77,49 +95,52 @@ runCommands' msm cmds = do -- since it is used to cleanup resources, in parallel programs. 
getChanContents :: IOLike m => TChan m a -> m [a] getChanContents chan = reverse <$> atomically (go' []) - where - go' acc = do - mx <- tryReadTChan chan - case mx of - Just x -> go' (x : acc) - Nothing -> return acc + where + go' acc = do + mx <- tryReadTChan chan + case mx of + Just x -> go' (x : acc) + Nothing -> return acc data Check = CheckPrecondition | CheckEverything -executeCommands :: (Show (cmd Concrete), Show (resp Concrete)) - => (Rank2.Traversable cmd, Rank2.Foldable resp) - => (IOLike m, MonadSay m) - => StateMachine model cmd m resp - -> TChan m (Pid, HistoryEvent cmd resp) - -> Pid - -> Check - -> Commands cmd resp - -> StateT (Environment, model Symbolic, Counter, model Concrete) m Reason -executeCommands StateMachine {..} hchan pid check = +executeCommands :: + (Show (cmd Concrete), Show (resp Concrete)) => + (Rank2.Traversable cmd, Rank2.Foldable resp) => + (IOLike m, MonadSay m) => + StateMachine model cmd m resp -> + TChan m (Pid, HistoryEvent cmd resp) -> + Pid -> + Check -> + Commands cmd resp -> + StateT (Environment, model Symbolic, Counter, model Concrete) m Reason +executeCommands StateMachine{..} hchan pid check = go . 
unCommands - where - go [] = return Ok - go (Command scmd _ vars : cmds) = do - (env, smodel, counter, cmodel) <- get - case (check, logic (precondition smodel scmd)) of - (CheckPrecondition, VFalse ce) -> return (PreconditionFailed (show ce)) - (CheckEverything, VFalse ce) -> return (PreconditionFailed (show ce)) - _otherwise -> do - let ccmd = fromRight (error "executeCommands: impossible") (reify env scmd) - lift $ atomically (writeTChan hchan (pid, Invocation ccmd (S.fromList vars))) - !ecresp <- lift $ fmap Right (semantics ccmd) `catch` - \(err :: SomeException) -> do - when (isSomeAsyncException err) (say (displayException err) >> throwIO err) - return (Left (displayException err)) - case ecresp of - Left err -> do - lift $ atomically (writeTChan hchan (pid, Exception err)) - return $ ExceptionThrown err - Right cresp -> do - let cvars = getUsedConcrete cresp - if length vars /= length cvars + where + go [] = return Ok + go (Command scmd _ vars : cmds) = do + (env, smodel, counter, cmodel) <- get + case (check, logic (precondition smodel scmd)) of + (CheckPrecondition, VFalse ce) -> return (PreconditionFailed (show ce)) + (CheckEverything, VFalse ce) -> return (PreconditionFailed (show ce)) + _otherwise -> do + let ccmd = fromRight (error "executeCommands: impossible") (reify env scmd) + lift $ atomically (writeTChan hchan (pid, Invocation ccmd (S.fromList vars))) + !ecresp <- + lift $ + fmap Right (semantics ccmd) + `catch` \(err :: SomeException) -> do + when (isSomeAsyncException err) (say (displayException err) >> throwIO err) + return (Left (displayException err)) + case ecresp of + Left err -> do + lift $ atomically (writeTChan hchan (pid, Exception err)) + return $ ExceptionThrown err + Right cresp -> do + let cvars = getUsedConcrete cresp + if length vars /= length cvars then do let err = mockSemanticsMismatchError (ppShow ccmd) (ppShow vars) (ppShow cresp) (ppShow cvars) lift $ atomically (writeTChan hchan (pid, Response cresp)) @@ -131,40 +152,46 
@@ executeCommands StateMachine {..} hchan pid check = _otherwise -> case (check, logic (fromMaybe (const Top) invariant cmodel)) of (CheckEverything, VFalse ce') -> return (InvariantBroken (show ce')) - _otherwise -> do + _otherwise -> do let (sresp, counter') = runGenSym (mock smodel scmd) counter - put ( insertConcretes vars cvars env - , transition smodel scmd sresp - , counter' - , transition cmodel ccmd cresp - ) + put + ( insertConcretes vars cvars env + , transition smodel scmd sresp + , counter' + , transition cmodel ccmd cresp + ) go cmds - isSomeAsyncException :: SomeException -> Bool - isSomeAsyncException se = case fromException se of - Just (SomeAsyncException _) -> True - _ -> False + isSomeAsyncException :: SomeException -> Bool + isSomeAsyncException se = case fromException se of + Just (SomeAsyncException _) -> True + _ -> False - mockSemanticsMismatchError :: String -> String -> String -> String -> String - mockSemanticsMismatchError cmd svars cresp cvars = unlines + mockSemanticsMismatchError :: String -> String -> String -> String -> String + mockSemanticsMismatchError cmd svars cresp cvars = + unlines [ "" , "Mismatch between `mock` and `semantics`." 
, "" , "The definition of `mock` for the command:" , "" - , " ", cmd + , " " + , cmd , "" , "returns the following references:" , "" - , " ", svars + , " " + , svars , "" , "while the response from `semantics`:" , "" - , " ", cresp + , " " + , cresp , "" , "returns the following references:" , "" - , " ", cvars + , " " + , cvars , "" , "Continuing to execute commands at this point could result in scope" , "errors, because we might have commands that use references (returned" diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Network/AnchoredFragment/Extras.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Network/AnchoredFragment/Extras.hs index ae56621602..8cb8c8546f 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Network/AnchoredFragment/Extras.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Network/AnchoredFragment/Extras.hs @@ -1,16 +1,21 @@ -- | Functions to move to Ouroboros.Network.AnchoredFragment -module Test.Consensus.Network.AnchoredFragment.Extras ( - intersectWith +module Test.Consensus.Network.AnchoredFragment.Extras + ( intersectWith , slotLength ) where -import Cardano.Slotting.Slot (SlotNo (unSlotNo), withOrigin) -import Data.List (find) -import Data.Maybe (isJust) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - HasHeader, Point, anchor, anchorToSlotNo, headAnchor, - splitAfterPoint) - +import Cardano.Slotting.Slot (SlotNo (unSlotNo), withOrigin) +import Data.List (find) +import Data.Maybe (isJust) +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , HasHeader + , Point + , anchor + , anchorToSlotNo + , headAnchor + , splitAfterPoint + ) -- | Find the first point in the fragment intersectWith :: HasHeader b => AnchoredFragment b -> [Point b] -> Maybe (Point b) @@ -20,6 +25,7 @@ intersectWith fullFrag = find (isJust . splitAfterPoint fullFrag) -- 'length' which is the number of blocks in the fragment. 
slotLength :: HasHeader blk => AnchoredFragment blk -> Int slotLength fragment = - fromIntegral $ unSlotNo $ - withOrigin 0 id (anchorToSlotNo $ headAnchor fragment) - - withOrigin 0 id (anchorToSlotNo $ anchor fragment) + fromIntegral $ + unSlotNo $ + withOrigin 0 id (anchorToSlotNo $ headAnchor fragment) + - withOrigin 0 id (anchorToSlotNo $ anchor fragment) diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Node.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Node.hs index 1cf57eb8cf..0762c9c6f5 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Node.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/Node.hs @@ -19,48 +19,51 @@ -- -- This module contains a bunch of unit tests to make sure that these locks and -- markers are created correctly and behave as expected. --- module Test.Consensus.Node (tests) where -import Control.Monad.Class.MonadTimer.SI (MonadTimer) -import Control.Monad.IOSim (runSimOrThrow) -import Data.Bifunctor (second) -import Data.Functor ((<&>)) -import qualified Data.Map.Strict as Map -import Data.Time.Clock (secondsToDiffTime) -import Ouroboros.Consensus.Node.DbLock -import Ouroboros.Consensus.Node.DbMarker -import Ouroboros.Consensus.Util.FileLock (FileLock, ioFileLock) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Magic (NetworkMagic (..)) -import System.Directory (getTemporaryDirectory) -import System.FS.API.Types -import System.FS.Sim.FsTree (FsTree (..)) -import qualified System.FS.Sim.MockFS as Mock -import System.FS.Sim.MockFS (Files) -import System.FS.Sim.STM (runSimFS) -import System.IO.Temp (withTempDirectory) -import Test.Tasty -import Test.Tasty.HUnit -import Test.Tasty.QuickCheck -import Test.Util.FileLock -import Test.Util.QuickCheck (ge) +import Control.Monad.Class.MonadTimer.SI (MonadTimer) +import Control.Monad.IOSim (runSimOrThrow) +import Data.Bifunctor (second) +import Data.Functor ((<&>)) +import 
Data.Map.Strict qualified as Map +import Data.Time.Clock (secondsToDiffTime) +import Ouroboros.Consensus.Node.DbLock +import Ouroboros.Consensus.Node.DbMarker +import Ouroboros.Consensus.Util.FileLock (FileLock, ioFileLock) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Magic (NetworkMagic (..)) +import System.Directory (getTemporaryDirectory) +import System.FS.API.Types +import System.FS.Sim.FsTree (FsTree (..)) +import System.FS.Sim.MockFS (Files) +import System.FS.Sim.MockFS qualified as Mock +import System.FS.Sim.STM (runSimFS) +import System.IO.Temp (withTempDirectory) +import Test.Tasty +import Test.Tasty.HUnit +import Test.Tasty.QuickCheck +import Test.Util.FileLock +import Test.Util.QuickCheck (ge) tests :: TestTree -tests = testGroup "Node" - [ testGroup "checkDbMarker" - [ testCase "match" test_checkNetworkMagic_match - , testCase "mismatch" test_checkNetworkMagic_mismatch - , testCase "empty folder" test_checkNetworkMagic_empty_folder - , testCase "missing" test_checkNetworkMagic_missing - , testCase "corrupt" test_checkNetworkMagic_corrupt - , testCase "empty" test_checkNetworkMagic_empty - ] - , testGroup "lockDb" - [ testProperty "reacquire a released lock" prop_reacquire_lock - , testCase "acquire a held lock" test_acquire_held_lock - , testProperty "wait to acquire a held lock" prop_wait_to_acquire_lock - ] +tests = + testGroup + "Node" + [ testGroup + "checkDbMarker" + [ testCase "match" test_checkNetworkMagic_match + , testCase "mismatch" test_checkNetworkMagic_mismatch + , testCase "empty folder" test_checkNetworkMagic_empty_folder + , testCase "missing" test_checkNetworkMagic_missing + , testCase "corrupt" test_checkNetworkMagic_corrupt + , testCase "empty" test_checkNetworkMagic_empty + ] + , testGroup + "lockDb" + [ testProperty "reacquire a released lock" prop_reacquire_lock + , testCase "acquire a held lock" test_acquire_held_lock + , testProperty "wait to acquire a held lock" prop_wait_to_acquire_lock + ] ] 
{------------------------------------------------------------------------------- @@ -74,84 +77,99 @@ mountPoint :: MountPoint mountPoint = MountPoint "root" fullPath :: FilePath -fullPath = fsToFilePath - mountPoint (fsPathFromList [dbMarkerFile]) +fullPath = + fsToFilePath + mountPoint + (fsPathFromList [dbMarkerFile]) runCheck :: Files -> (Either DbMarkerError (), Files) runCheck files = runSimOrThrow $ do - fmap (second Mock.mockFiles) $ - runSimFS Mock.empty { Mock.mockFiles = files } $ \hasFS -> - checkDbMarker hasFS mountPoint expectedNetworkMagic + fmap (second Mock.mockFiles) $ + runSimFS Mock.empty{Mock.mockFiles = files} $ \hasFS -> + checkDbMarker hasFS mountPoint expectedNetworkMagic test_checkNetworkMagic_match :: Assertion test_checkNetworkMagic_match = res @?= Right () - where - fs = Folder $ Map.fromList - [ (dbMarkerFile, File $ dbMarkerContents expectedNetworkMagic) - , ("immutable", Folder mempty) - , ("ledger", Folder mempty) - , ("volatile", Folder mempty) - ] - (res, _) = runCheck fs + where + fs = + Folder $ + Map.fromList + [ (dbMarkerFile, File $ dbMarkerContents expectedNetworkMagic) + , ("immutable", Folder mempty) + , ("ledger", Folder mempty) + , ("volatile", Folder mempty) + ] + (res, _) = runCheck fs test_checkNetworkMagic_mismatch :: Assertion test_checkNetworkMagic_mismatch = res @?= Left e - where - fs = Folder $ Map.fromList - [ (dbMarkerFile, File $ dbMarkerContents actual) - , ("immutable", Folder mempty) - , ("ledger", Folder mempty) - , ("volatile", Folder mempty) - ] - (res, _) = runCheck fs - actual = NetworkMagic 10 - e = NetworkMagicMismatch + where + fs = + Folder $ + Map.fromList + [ (dbMarkerFile, File $ dbMarkerContents actual) + , ("immutable", Folder mempty) + , ("ledger", Folder mempty) + , ("volatile", Folder mempty) + ] + (res, _) = runCheck fs + actual = NetworkMagic 10 + e = + NetworkMagicMismatch fullPath actual expectedNetworkMagic test_checkNetworkMagic_empty_folder :: Assertion 
test_checkNetworkMagic_empty_folder = do - res @?= Right () - fs' @?= expectedFs' - where - fs = Folder mempty - (res, fs') = runCheck fs - expectedFs' = Folder $ Map.fromList - [ (dbMarkerFile, File $ dbMarkerContents expectedNetworkMagic) ] + res @?= Right () + fs' @?= expectedFs' + where + fs = Folder mempty + (res, fs') = runCheck fs + expectedFs' = + Folder $ + Map.fromList + [(dbMarkerFile, File $ dbMarkerContents expectedNetworkMagic)] test_checkNetworkMagic_missing :: Assertion test_checkNetworkMagic_missing = res @?= Left e - where - fs = Folder $ Map.fromList - [ ("passwords.txt", File "qwerty\n123456\n") - ] - (res, _) = runCheck fs - e = NoDbMarkerAndNotEmpty fullPath + where + fs = + Folder $ + Map.fromList + [ ("passwords.txt", File "qwerty\n123456\n") + ] + (res, _) = runCheck fs + e = NoDbMarkerAndNotEmpty fullPath test_checkNetworkMagic_corrupt :: Assertion test_checkNetworkMagic_corrupt = res @?= Left e - where - fs = Folder $ Map.fromList - [ (dbMarkerFile, File "garbage") - , ("immutable", Folder mempty) - , ("ledger", Folder mempty) - , ("volatile", Folder mempty) - ] - (res, _) = runCheck fs - e = CorruptDbMarker fullPath + where + fs = + Folder $ + Map.fromList + [ (dbMarkerFile, File "garbage") + , ("immutable", Folder mempty) + , ("ledger", Folder mempty) + , ("volatile", Folder mempty) + ] + (res, _) = runCheck fs + e = CorruptDbMarker fullPath test_checkNetworkMagic_empty :: Assertion test_checkNetworkMagic_empty = res @?= Left e - where - fs = Folder $ Map.fromList - [ (dbMarkerFile, File "") - , ("immutable", Folder mempty) - , ("ledger", Folder mempty) - , ("volatile", Folder mempty) - ] - (res, _) = runCheck fs - e = CorruptDbMarker fullPath + where + fs = + Folder $ + Map.fromList + [ (dbMarkerFile, File "") + , ("immutable", Folder mempty) + , ("ledger", Folder mempty) + , ("volatile", Folder mempty) + ] + (res, _) = runCheck fs + e = CorruptDbMarker fullPath 
{------------------------------------------------------------------------------- lockDb @@ -162,58 +180,60 @@ test_checkNetworkMagic_empty = res @?= Left e -- to fail. prop_reacquire_lock :: ReleaseDelay -> Property prop_reacquire_lock (ReleaseDelay releaseDelay) = - runSimOrThrow $ do - fileLock <- mockFileLock (Just releaseDelay) - -- Lock and unlock it - touchLock fileLock - - -- Lock and unlock it again, which might fail: - tryL (touchLock fileLock) <&> \case - -- If we failed to obtain the lock, it must be because the release - -- delay we simulate is greater than or equal to the timeout - Left _ -> label "timed out" $ releaseDelay `ge` timeout - Right () -> property True - where - timeout = secondsToDiffTime 2 - - touchLock :: (IOLike m, MonadTimer m) => FileLock m -> m () - touchLock fileLock = - withLockDB_ - fileLock - mountPoint - dbLockFsPath - timeout - (return ()) + runSimOrThrow $ do + fileLock <- mockFileLock (Just releaseDelay) + -- Lock and unlock it + touchLock fileLock + + -- Lock and unlock it again, which might fail: + tryL (touchLock fileLock) <&> \case + -- If we failed to obtain the lock, it must be because the release + -- delay we simulate is greater than or equal to the timeout + Left _ -> label "timed out" $ releaseDelay `ge` timeout + Right () -> property True + where + timeout = secondsToDiffTime 2 + + touchLock :: (IOLike m, MonadTimer m) => FileLock m -> m () + touchLock fileLock = + withLockDB_ + fileLock + mountPoint + dbLockFsPath + timeout + (return ()) -- | Test with a real lock that while holding the lock, we cannot reacquire -- it. 
test_acquire_held_lock :: Assertion test_acquire_held_lock = withTempDir $ \dbPath -> do - let dbMountPoint = MountPoint dbPath - - -- While holding the lock, try to acquire it again, which should fail - res <- - tryL $ withLock dbMountPoint (secondsToDiffTime 0) $ - tryL $ withLock dbMountPoint (millisecondsToDiffTime 10) $ - return () - - -- The outer 'Right' means that the first call to 'withLock' - -- succeeded, the inner 'Left' means that the second call to - -- 'touchLock' failed. - res @?= (Left (DbLocked (fsToFilePath dbMountPoint dbLockFsPath))) - where - withTempDir :: (FilePath -> IO a) -> IO a - withTempDir k = do - sysTmpDir <- getTemporaryDirectory - withTempDirectory sysTmpDir "ouroboros-network-test" k - - withLock :: MountPoint -> DiffTime -> IO a -> IO a - withLock dbMountPoint lockTimeout = - withLockDB_ - ioFileLock - dbMountPoint - dbLockFsPath - lockTimeout + let dbMountPoint = MountPoint dbPath + + -- While holding the lock, try to acquire it again, which should fail + res <- + tryL $ + withLock dbMountPoint (secondsToDiffTime 0) $ + tryL $ + withLock dbMountPoint (millisecondsToDiffTime 10) $ + return () + + -- The outer 'Right' means that the first call to 'withLock' + -- succeeded, the inner 'Left' means that the second call to + -- 'touchLock' failed. + res @?= (Left (DbLocked (fsToFilePath dbMountPoint dbLockFsPath))) + where + withTempDir :: (FilePath -> IO a) -> IO a + withTempDir k = do + sysTmpDir <- getTemporaryDirectory + withTempDirectory sysTmpDir "ouroboros-network-test" k + + withLock :: MountPoint -> DiffTime -> IO a -> IO a + withLock dbMountPoint lockTimeout = + withLockDB_ + ioFileLock + dbMountPoint + dbLockFsPath + lockTimeout tryL :: MonadCatch m => m a -> m (Either DbLocked a) tryL = try @@ -224,43 +244,43 @@ tryL = try -- A maximum delay of MAX can cope with any hold up of ACTUAL < MAX. -- -- Note that we exclude ACTUAL == MAX, as it is \"racy\". 
--- prop_wait_to_acquire_lock :: ActualAndMaxDelay -> Property -prop_wait_to_acquire_lock ActualAndMaxDelay { actualDelay, maxDelay } = - runSimOrThrow $ do - -- We don't simulate delayed releases because the test depends on - -- precise timing. - fileLock <- mockFileLock Nothing - - -- Hold the lock for 'actualDelay' and then signal we have released it - let bgThread = - -- The lock will not be held, so just use the default parameters - -- to acquire it - withLock fileLock dbLockTimeout $ - -- Hold the lock for ACTUAL - threadDelay actualDelay - - withAsync bgThread $ \asyncBgThread -> do - link asyncBgThread - -- Try to obtain the held lock, waiting MAX for it - -- - -- The test will fail when an exception is thrown below because it - -- timed out while waiting on the lock. - withLock fileLock maxDelay $ - return $ property True - where - withLock - :: (IOLike m, MonadTimer m) - => FileLock m - -> DiffTime - -> m a - -> m a - withLock fileLock timeout = - withLockDB_ - fileLock - mountPoint - dbLockFsPath - timeout +prop_wait_to_acquire_lock ActualAndMaxDelay{actualDelay, maxDelay} = + runSimOrThrow $ do + -- We don't simulate delayed releases because the test depends on + -- precise timing. + fileLock <- mockFileLock Nothing + + -- Hold the lock for 'actualDelay' and then signal we have released it + let bgThread = + -- The lock will not be held, so just use the default parameters + -- to acquire it + withLock fileLock dbLockTimeout $ + -- Hold the lock for ACTUAL + threadDelay actualDelay + + withAsync bgThread $ \asyncBgThread -> do + link asyncBgThread + -- Try to obtain the held lock, waiting MAX for it + -- + -- The test will fail when an exception is thrown below because it + -- timed out while waiting on the lock. 
+ withLock fileLock maxDelay $ + return $ + property True + where + withLock :: + (IOLike m, MonadTimer m) => + FileLock m -> + DiffTime -> + m a -> + m a + withLock fileLock timeout = + withLockDB_ + fileLock + mountPoint + dbLockFsPath + timeout {------------------------------------------------------------------------------- Generators @@ -277,29 +297,30 @@ instance Arbitrary ReleaseDelay where [ReleaseDelay (fromRational t') | t' <- shrink (toRational t)] -- | Invariant: @actualDelay < maxDelay@ -data ActualAndMaxDelay = ActualAndMaxDelay { - actualDelay :: DiffTime - , maxDelay :: DiffTime - } +data ActualAndMaxDelay = ActualAndMaxDelay + { actualDelay :: DiffTime + , maxDelay :: DiffTime + } deriving (Eq, Show) instance Arbitrary ActualAndMaxDelay where - arbitrary = do - maxDelayMs <- choose (1, 2000) - actualDelayMs <- choose (0, maxDelayMs - 1) - return ActualAndMaxDelay { - actualDelay = millisecondsToDiffTime actualDelayMs - , maxDelay = millisecondsToDiffTime maxDelayMs - } - - shrink (ActualAndMaxDelay actualDelay maxDelay) = - [ ActualAndMaxDelay actualDelay' maxDelay - | actualDelay' <- fromRational <$> shrink (toRational actualDelay) - ] <> - [ ActualAndMaxDelay actualDelay maxDelay - | maxDelay' <- fromRational <$> shrink (toRational maxDelay) - , actualDelay < maxDelay' - ] + arbitrary = do + maxDelayMs <- choose (1, 2000) + actualDelayMs <- choose (0, maxDelayMs - 1) + return + ActualAndMaxDelay + { actualDelay = millisecondsToDiffTime actualDelayMs + , maxDelay = millisecondsToDiffTime maxDelayMs + } + + shrink (ActualAndMaxDelay actualDelay maxDelay) = + [ ActualAndMaxDelay actualDelay' maxDelay + | actualDelay' <- fromRational <$> shrink (toRational actualDelay) + ] + <> [ ActualAndMaxDelay actualDelay maxDelay + | maxDelay' <- fromRational <$> shrink (toRational maxDelay) + , actualDelay < maxDelay' + ] millisecondsToDiffTime :: Integer -> DiffTime millisecondsToDiffTime = (/ 1000) . 
secondsToDiffTime diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/BlockFetch.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/BlockFetch.hs index b4a111d243..35803fa14a 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/BlockFetch.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/BlockFetch.hs @@ -7,107 +7,137 @@ {-# LANGUAGE TypeFamilies #-} -- | Functions that call to the BlockFetch API to start clients and servers -module Test.Consensus.PeerSimulator.BlockFetch ( - blockFetchNoTimeouts +module Test.Consensus.PeerSimulator.BlockFetch + ( blockFetchNoTimeouts , runBlockFetchClient , runBlockFetchServer , startBlockFetchLogic , startKeepAliveThread ) where -import Control.Monad (void) -import Control.Monad.Class.MonadTime -import Control.Monad.Class.MonadTimer.SI (MonadTimer) -import Control.ResourceRegistry -import Control.Tracer (Tracer, nullTracer, traceWith) -import Data.Functor.Contravariant ((>$<)) -import Network.TypedProtocol.Codec (ActiveState, AnyMessage, - StateToken, notActiveState) -import Ouroboros.Consensus.Block (HasHeader) -import Ouroboros.Consensus.Block.Abstract (Header, Point (..)) -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) -import qualified Ouroboros.Consensus.MiniProtocol.BlockFetch.ClientInterface as BlockFetchClientInterface -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncClientHandleCollection) -import Ouroboros.Consensus.Node.Genesis (GenesisConfig (..), - enableGenesisConfigDefault) -import Ouroboros.Consensus.Node.ProtocolInfo - (NumCoreNodes (NumCoreNodes)) -import Ouroboros.Consensus.Storage.ChainDB.API -import Ouroboros.Consensus.Util (ShowProxy) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.BlockFetch (BlockFetchConfiguration (..), - FetchClientRegistry, 
GenesisBlockFetchConfiguration (..), - blockFetchLogic, bracketFetchClient, - bracketKeepAliveClient) -import Ouroboros.Network.BlockFetch.Client (blockFetchClient) -import Ouroboros.Network.BlockFetch.ConsensusInterface - (FetchMode (..)) -import Ouroboros.Network.Channel (Channel) -import Ouroboros.Network.ControlMessage (ControlMessageSTM) -import Ouroboros.Network.Driver (runPeer) -import Ouroboros.Network.Driver.Limits - (ProtocolLimitFailure (ExceededSizeLimit, ExceededTimeLimit), - runPipelinedPeerWithLimits) -import Ouroboros.Network.NodeToNode.Version (NodeToNodeVersion) -import Ouroboros.Network.Protocol.BlockFetch.Codec - (byteLimitsBlockFetch, codecBlockFetchId) -import Ouroboros.Network.Protocol.BlockFetch.Server - (BlockFetchServer (..), blockFetchServerPeer) -import Ouroboros.Network.Protocol.BlockFetch.Type (BlockFetch (..), - SingBlockFetch (..)) -import Ouroboros.Network.Protocol.Limits (ProtocolSizeLimits (..), - ProtocolTimeLimits (..), waitForever) -import Test.Consensus.PeerSimulator.StateView - (PeerSimulatorComponentResult (..), - PeerSimulatorResult (..), - StateViewTracers (StateViewTracers, svtPeerSimulatorResultsTracer)) -import Test.Consensus.PeerSimulator.Trace - (TraceBlockFetchClientTerminationEvent (..), - TraceEvent (..)) -import Test.Consensus.PointSchedule (BlockFetchTimeout (..)) -import Test.Consensus.PointSchedule.Peers (PeerId) -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (BlockConfig (TestBlockConfig), TestBlock) +import Control.Monad (void) +import Control.Monad.Class.MonadTime +import Control.Monad.Class.MonadTimer.SI (MonadTimer) +import Control.ResourceRegistry +import Control.Tracer (Tracer, nullTracer, traceWith) +import Data.Functor.Contravariant ((>$<)) +import Network.TypedProtocol.Codec + ( ActiveState + , AnyMessage + , StateToken + , notActiveState + ) +import Ouroboros.Consensus.Block (HasHeader) +import Ouroboros.Consensus.Block.Abstract (Header, Point (..)) +import Ouroboros.Consensus.Config 
+import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) +import Ouroboros.Consensus.MiniProtocol.BlockFetch.ClientInterface qualified as BlockFetchClientInterface +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientHandleCollection + ) +import Ouroboros.Consensus.Node.Genesis + ( GenesisConfig (..) + , enableGenesisConfigDefault + ) +import Ouroboros.Consensus.Node.ProtocolInfo + ( NumCoreNodes (NumCoreNodes) + ) +import Ouroboros.Consensus.Storage.ChainDB.API +import Ouroboros.Consensus.Util (ShowProxy) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.BlockFetch + ( BlockFetchConfiguration (..) + , FetchClientRegistry + , GenesisBlockFetchConfiguration (..) + , blockFetchLogic + , bracketFetchClient + , bracketKeepAliveClient + ) +import Ouroboros.Network.BlockFetch.Client (blockFetchClient) +import Ouroboros.Network.BlockFetch.ConsensusInterface + ( FetchMode (..) + ) +import Ouroboros.Network.Channel (Channel) +import Ouroboros.Network.ControlMessage (ControlMessageSTM) +import Ouroboros.Network.Driver (runPeer) +import Ouroboros.Network.Driver.Limits + ( ProtocolLimitFailure (ExceededSizeLimit, ExceededTimeLimit) + , runPipelinedPeerWithLimits + ) +import Ouroboros.Network.NodeToNode.Version (NodeToNodeVersion) +import Ouroboros.Network.Protocol.BlockFetch.Codec + ( byteLimitsBlockFetch + , codecBlockFetchId + ) +import Ouroboros.Network.Protocol.BlockFetch.Server + ( BlockFetchServer (..) + , blockFetchServerPeer + ) +import Ouroboros.Network.Protocol.BlockFetch.Type + ( BlockFetch (..) + , SingBlockFetch (..) + ) +import Ouroboros.Network.Protocol.Limits + ( ProtocolSizeLimits (..) + , ProtocolTimeLimits (..) + , waitForever + ) +import Test.Consensus.PeerSimulator.StateView + ( PeerSimulatorComponentResult (..) + , PeerSimulatorResult (..) + , StateViewTracers (StateViewTracers, svtPeerSimulatorResultsTracer) + ) +import Test.Consensus.PeerSimulator.Trace + ( TraceBlockFetchClientTerminationEvent (..) 
+ , TraceEvent (..) + ) +import Test.Consensus.PointSchedule (BlockFetchTimeout (..)) +import Test.Consensus.PointSchedule.Peers (PeerId) +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock (BlockConfig (TestBlockConfig), TestBlock) startBlockFetchLogic :: - forall m. - (IOLike m, MonadTimer m) - => Bool -- ^ Whether to enable chain selection starvation - -> ResourceRegistry m - -> Tracer m (TraceEvent TestBlock) - -> ChainDB m TestBlock - -> FetchClientRegistry PeerId (HeaderWithTime TestBlock) TestBlock m - -> ChainSyncClientHandleCollection PeerId m TestBlock - -> m () + forall m. + (IOLike m, MonadTimer m) => + -- | Whether to enable chain selection starvation + Bool -> + ResourceRegistry m -> + Tracer m (TraceEvent TestBlock) -> + ChainDB m TestBlock -> + FetchClientRegistry PeerId (HeaderWithTime TestBlock) TestBlock m -> + ChainSyncClientHandleCollection PeerId m TestBlock -> + m () startBlockFetchLogic enableChainSelStarvation registry tracer chainDb fetchClientRegistry csHandlesCol = do - let blockFetchConsensusInterface = - BlockFetchClientInterface.mkBlockFetchConsensusInterface - nullTracer -- FIXME - (TestBlockConfig $ NumCoreNodes 0) -- Only needed when minting blocks - (BlockFetchClientInterface.defaultChainDbView chainDb) - csHandlesCol - -- The size of headers in bytes is irrelevant because our tests - -- do not serialize the blocks. - (\_hdr -> 1000) - -- This is a syncing test, so we use 'FetchModeGenesis'. - (pure FetchModeGenesis) - DiffusionPipeliningOn + let blockFetchConsensusInterface = + BlockFetchClientInterface.mkBlockFetchConsensusInterface + nullTracer -- FIXME + (TestBlockConfig $ NumCoreNodes 0) -- Only needed when minting blocks + (BlockFetchClientInterface.defaultChainDbView chainDb) + csHandlesCol + -- The size of headers in bytes is irrelevant because our tests + -- do not serialize the blocks. + (\_hdr -> 1000) + -- This is a syncing test, so we use 'FetchModeGenesis'. 
+ (pure FetchModeGenesis) + DiffusionPipeliningOn - bfcGenesisBFConfig = if enableChainSelStarvation - then GenesisBlockFetchConfiguration - { gbfcGracePeriod = - if enableChainSelStarvation then - 10 -- default value for cardano-node at the time of writing - else - 1000000 -- (more than 11 days) - } + bfcGenesisBFConfig = + if enableChainSelStarvation + then + GenesisBlockFetchConfiguration + { gbfcGracePeriod = + if enableChainSelStarvation + then + 10 -- default value for cardano-node at the time of writing + else + 1000000 -- (more than 11 days) + } else gcBlockFetchConfig enableGenesisConfigDefault - -- Values taken from - -- ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Network.hs - blockFetchCfg = BlockFetchConfiguration + -- Values taken from + -- ouroboros-consensus-diffusion/src/unstable-diffusion-testlib/Test/ThreadNet/Network.hs + blockFetchCfg = + BlockFetchConfiguration { bfcMaxConcurrencyBulkSync = 50 , bfcMaxConcurrencyDeadline = 50 -- unused because of @pure FetchModeBulkSync@ above , bfcMaxRequestsInflight = 10 @@ -117,65 +147,72 @@ startBlockFetchLogic enableChainSelStarvation registry tracer chainDb fetchClien , bfcGenesisBFConfig } - void $ forkLinkedThread registry "BlockFetchLogic" $ + void $ + forkLinkedThread registry "BlockFetchLogic" $ blockFetchLogic decisionTracer nullTracer blockFetchConsensusInterface fetchClientRegistry blockFetchCfg - where - decisionTracer = TraceOther . ("BlockFetchLogic | " ++) . show >$< tracer + where + decisionTracer = TraceOther . ("BlockFetchLogic | " ++) . show >$< tracer startKeepAliveThread :: - forall m peer blk hdr. - (Ord peer, IOLike m) - => ResourceRegistry m - -> FetchClientRegistry peer hdr blk m - -> peer - -> m () + forall m peer blk hdr. 
+ (Ord peer, IOLike m) => + ResourceRegistry m -> + FetchClientRegistry peer hdr blk m -> + peer -> + m () startKeepAliveThread registry fetchClientRegistry peerId = - void $ forkLinkedThread registry "KeepAlive" $ + void $ + forkLinkedThread registry "KeepAlive" $ bracketKeepAliveClient fetchClientRegistry peerId $ \_ -> atomically retry runBlockFetchClient :: - (IOLike m, MonadTime m, MonadTimer m, HasHeader blk, HasHeader (Header blk), ShowProxy blk) - => Tracer m (TraceEvent blk) - -> PeerId - -> BlockFetchTimeout - -> StateViewTracers blk m - -> FetchClientRegistry PeerId (HeaderWithTime blk) blk m - -> ControlMessageSTM m - -> Channel m (AnyMessage (BlockFetch blk (Point blk))) - -- ^ Send and receive message via the given 'Channel'. - -> m () -runBlockFetchClient tracer peerId blockFetchTimeouts StateViewTracers {svtPeerSimulatorResultsTracer} fetchClientRegistry controlMsgSTM channel = do - bracketFetchClient fetchClientRegistry ntnVersion peerId $ \clientCtx -> do - res <- - try $ - runPipelinedPeerWithLimits - nullTracer - codecBlockFetchId - blockFetchNoSizeLimits - (timeLimitsBlockFetch blockFetchTimeouts) - channel - (blockFetchClient ntnVersion controlMsgSTM nullTracer clientCtx) - case res of - Right ((), msgRes) -> traceWith svtPeerSimulatorResultsTracer $ - PeerSimulatorResult peerId $ SomeBlockFetchClientResult $ Right msgRes - Left exn -> do - traceWith svtPeerSimulatorResultsTracer $ - PeerSimulatorResult peerId $ SomeBlockFetchClientResult $ Left exn - case fromException exn of - Just (ExceededSizeLimit _) -> - traceWith tracer $ TraceBlockFetchClientTerminationEvent peerId TraceExceededSizeLimitBF - Just (ExceededTimeLimit _) -> - traceWith tracer $ TraceBlockFetchClientTerminationEvent peerId TraceExceededTimeLimitBF - Nothing -> pure () - where - ntnVersion :: NodeToNodeVersion - ntnVersion = maxBound + (IOLike m, MonadTime m, MonadTimer m, HasHeader blk, HasHeader (Header blk), ShowProxy blk) => + Tracer m (TraceEvent blk) -> + PeerId -> + 
BlockFetchTimeout -> + StateViewTracers blk m -> + FetchClientRegistry PeerId (HeaderWithTime blk) blk m -> + ControlMessageSTM m -> + -- | Send and receive message via the given 'Channel'. + Channel m (AnyMessage (BlockFetch blk (Point blk))) -> + m () +runBlockFetchClient tracer peerId blockFetchTimeouts StateViewTracers{svtPeerSimulatorResultsTracer} fetchClientRegistry controlMsgSTM channel = do + bracketFetchClient fetchClientRegistry ntnVersion peerId $ \clientCtx -> do + res <- + try $ + runPipelinedPeerWithLimits + nullTracer + codecBlockFetchId + blockFetchNoSizeLimits + (timeLimitsBlockFetch blockFetchTimeouts) + channel + (blockFetchClient ntnVersion controlMsgSTM nullTracer clientCtx) + case res of + Right ((), msgRes) -> + traceWith svtPeerSimulatorResultsTracer $ + PeerSimulatorResult peerId $ + SomeBlockFetchClientResult $ + Right msgRes + Left exn -> do + traceWith svtPeerSimulatorResultsTracer $ + PeerSimulatorResult peerId $ + SomeBlockFetchClientResult $ + Left exn + case fromException exn of + Just (ExceededSizeLimit _) -> + traceWith tracer $ TraceBlockFetchClientTerminationEvent peerId TraceExceededSizeLimitBF + Just (ExceededTimeLimit _) -> + traceWith tracer $ TraceBlockFetchClientTerminationEvent peerId TraceExceededTimeLimitBF + Nothing -> pure () + where + ntnVersion :: NodeToNodeVersion + ntnVersion = maxBound blockFetchNoSizeLimits :: ProtocolSizeLimits (BlockFetch block point) bytes blockFetchNoSizeLimits = byteLimitsBlockFetch (const 0) @@ -184,22 +221,24 @@ blockFetchNoSizeLimits = byteLimitsBlockFetch (const 0) -- @timeLimitsBlockFetch@ in 'Ouroboros.Network.Protocol.BlockFetch.Codec' but -- it does not allow customising the values as 'timeLimitsChainSync' does. -- REVIEW: Should this be upstreamed to `ouroboros-network-protocols`? -timeLimitsBlockFetch :: forall block point. BlockFetchTimeout -> ProtocolTimeLimits (BlockFetch block point) +timeLimitsBlockFetch :: + forall block point. 
BlockFetchTimeout -> ProtocolTimeLimits (BlockFetch block point) timeLimitsBlockFetch BlockFetchTimeout{busyTimeout, streamingTimeout} = ProtocolTimeLimits stateToLimit - where - stateToLimit :: forall (st :: BlockFetch block point). - ActiveState st => StateToken st-> Maybe DiffTime - stateToLimit SingBFIdle = waitForever - stateToLimit SingBFBusy = busyTimeout - stateToLimit SingBFStreaming = streamingTimeout - stateToLimit a@SingBFDone = notActiveState a + where + stateToLimit :: + forall (st :: BlockFetch block point). + ActiveState st => StateToken st -> Maybe DiffTime + stateToLimit SingBFIdle = waitForever + stateToLimit SingBFBusy = busyTimeout + stateToLimit SingBFStreaming = streamingTimeout + stateToLimit a@SingBFDone = notActiveState a blockFetchNoTimeouts :: BlockFetchTimeout blockFetchNoTimeouts = BlockFetchTimeout - { busyTimeout = Nothing, - streamingTimeout = Nothing + { busyTimeout = Nothing + , streamingTimeout = Nothing } runBlockFetchServer :: @@ -208,17 +247,22 @@ runBlockFetchServer :: PeerId -> StateViewTracers blk m -> BlockFetchServer blk (Point blk) m () -> + -- | Send and receive message via the given 'Channel'. Channel m (AnyMessage (BlockFetch blk (Point blk))) -> - -- ^ Send and receive message via the given 'Channel'. 
m () -runBlockFetchServer _tracer peerId StateViewTracers {svtPeerSimulatorResultsTracer} server channel = do +runBlockFetchServer _tracer peerId StateViewTracers{svtPeerSimulatorResultsTracer} server channel = do res <- try $ runPeer nullTracer codecBlockFetchId channel (blockFetchServerPeer server) case res of - Right ((), msgRes) -> traceWith svtPeerSimulatorResultsTracer $ - PeerSimulatorResult peerId $ SomeBlockFetchServerResult $ Right msgRes + Right ((), msgRes) -> + traceWith svtPeerSimulatorResultsTracer $ + PeerSimulatorResult peerId $ + SomeBlockFetchServerResult $ + Right msgRes Left exn -> do traceWith svtPeerSimulatorResultsTracer $ - PeerSimulatorResult peerId $ SomeBlockFetchServerResult $ Left exn + PeerSimulatorResult peerId $ + SomeBlockFetchServerResult $ + Left exn -- NOTE: here we are able to trace exceptions, as what is done in `runChainSyncClient` case fromException exn of (_ :: Maybe SomeException) -> pure () diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/CSJInvariants.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/CSJInvariants.hs index 97e8b50f29..03e67de3c4 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/CSJInvariants.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/CSJInvariants.hs @@ -7,20 +7,24 @@ -- crucial for CSJ to work properly. This watcher monitors the ChainSync -- handlers and throws a 'Violation' exception when an invariant stops holding. -- It is intended for testing purposes. 
-module Test.Consensus.PeerSimulator.CSJInvariants ( - Violation +module Test.Consensus.PeerSimulator.CSJInvariants + ( Violation , watcher ) where -import Control.Monad (forM_, when) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block (Point, StandardHash, castPoint) -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State as CSState -import Ouroboros.Consensus.Util.IOLike (Exception, MonadSTM (STM), - MonadThrow (throwIO), readTVar) -import Ouroboros.Consensus.Util.STM (Watcher (..)) +import Control.Monad (forM_, when) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block (Point, StandardHash, castPoint) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State qualified as CSState +import Ouroboros.Consensus.Util.IOLike + ( Exception + , MonadSTM (STM) + , MonadThrow (throwIO) + , readTVar + ) +import Ouroboros.Consensus.Util.STM (Watcher (..)) -------------------------------------------------------------------------------- -- Idealised view of the ChainSync client's state @@ -64,15 +68,15 @@ data JumperState blk allInvariants :: [Invariant peer blk] allInvariants = - [ thereIsAlwaysOneDynamoUnlessDisengaged, - thereIsAlwaysAtMostOneObjector + [ thereIsAlwaysOneDynamoUnlessDisengaged + , thereIsAlwaysAtMostOneObjector ] thereIsAlwaysOneDynamoUnlessDisengaged :: Invariant peer blk thereIsAlwaysOneDynamoUnlessDisengaged = Invariant - { name = "There is always one dynamo, unless all are disengaged", - check = \view -> + { name = "There is always one dynamo, unless all are disengaged" + , check = \view -> null (filter (not . 
isDisengaged) $ Map.elems view) || length (filter isDynamo $ Map.elems view) == 1 } @@ -80,8 +84,8 @@ thereIsAlwaysOneDynamoUnlessDisengaged = thereIsAlwaysAtMostOneObjector :: Invariant peer blk thereIsAlwaysAtMostOneObjector = Invariant - { name = "There is always at most one objector", - check = \view -> + { name = "There is always at most one objector" + , check = \view -> length (filter isObjector $ Map.elems view) <= 1 } @@ -90,16 +94,16 @@ thereIsAlwaysAtMostOneObjector = -------------------------------------------------------------------------------- isDynamo :: State blk -> Bool -isDynamo (Dynamo {}) = True -isDynamo _ = False +isDynamo (Dynamo{}) = True +isDynamo _ = False isObjector :: State blk -> Bool -isObjector (Objector {}) = True -isObjector _ = False +isObjector (Objector{}) = True +isObjector _ = False isDisengaged :: State blk -> Bool -isDisengaged (Disengaged {}) = True -isDisengaged _ = False +isDisengaged (Disengaged{}) = True +isDisengaged _ = False -------------------------------------------------------------------------------- -- Invariant enforcement implementation @@ -107,40 +111,39 @@ isDisengaged _ = False readAndView :: forall m peer blk. - ( MonadSTM m - ) => + MonadSTM m => STM m (Map peer (CSState.ChainSyncClientHandle m blk)) -> STM m (View peer blk) readAndView readHandles = traverse (fmap idealiseState . readTVar . CSState.cschJumping) =<< readHandles - where - -- Idealise the state of a ChainSync peer with respect to ChainSync jumping. - -- In particular, we get rid of non-comparable information such as the TVars - -- it may contain. 
- idealiseState :: CSState.ChainSyncJumpingState m blk -> State blk - idealiseState (CSState.Dynamo {}) = Dynamo - idealiseState (CSState.Objector _ point _) = Objector $ idealiseJumpInfo point - idealiseState (CSState.Disengaged _) = Disengaged - idealiseState (CSState.Jumper _ state) = Jumper $ idealiseJumperState state - -- Idealise the jumper state by stripping away everything that is more of a - -- technical necessity and not actually relevant for the invariants. - idealiseJumperState :: CSState.ChainSyncJumpingJumperState blk -> JumperState blk - idealiseJumperState (CSState.Happy _ lastAccepted) = Happy $ idealiseJumpInfo <$> lastAccepted - idealiseJumperState (CSState.LookingForIntersection lastAccepted firstRejected) = - LookingForIntersection (idealiseJumpInfo lastAccepted) (idealiseJumpInfo firstRejected) - idealiseJumperState (CSState.FoundIntersection _ lastAccepted firstRejected) = - FoundIntersection (idealiseJumpInfo lastAccepted) (castPoint firstRejected) - -- Jumpers actually carry a lot of information regarding the jump. From our - -- idealised point of view, we only care about the points where the jumpers - -- agree or disagree with the dynamo. - idealiseJumpInfo :: CSState.JumpInfo blk -> Point blk - idealiseJumpInfo = CSState.jMostRecentIntersection + where + -- Idealise the state of a ChainSync peer with respect to ChainSync jumping. + -- In particular, we get rid of non-comparable information such as the TVars + -- it may contain. + idealiseState :: CSState.ChainSyncJumpingState m blk -> State blk + idealiseState (CSState.Dynamo{}) = Dynamo + idealiseState (CSState.Objector _ point _) = Objector $ idealiseJumpInfo point + idealiseState (CSState.Disengaged _) = Disengaged + idealiseState (CSState.Jumper _ state) = Jumper $ idealiseJumperState state + -- Idealise the jumper state by stripping away everything that is more of a + -- technical necessity and not actually relevant for the invariants. 
+ idealiseJumperState :: CSState.ChainSyncJumpingJumperState blk -> JumperState blk + idealiseJumperState (CSState.Happy _ lastAccepted) = Happy $ idealiseJumpInfo <$> lastAccepted + idealiseJumperState (CSState.LookingForIntersection lastAccepted firstRejected) = + LookingForIntersection (idealiseJumpInfo lastAccepted) (idealiseJumpInfo firstRejected) + idealiseJumperState (CSState.FoundIntersection _ lastAccepted firstRejected) = + FoundIntersection (idealiseJumpInfo lastAccepted) (castPoint firstRejected) + -- Jumpers actually carry a lot of information regarding the jump. From our + -- idealised point of view, we only care about the points where the jumpers + -- agree or disagree with the dynamo. + idealiseJumpInfo :: CSState.JumpInfo blk -> Point blk + idealiseJumpInfo = CSState.jMostRecentIntersection -- | The type of an invariant. Basically a glorified pair of a name and a check -- function. data Invariant peer blk = Invariant - { name :: !String, - check :: !(View peer blk -> Bool) + { name :: !String + , check :: !(View peer blk -> Bool) } -- | An exception that is thrown when an invariant is violated. It carries the @@ -150,11 +153,11 @@ data Violation peer blk = Violation !String !(View peer blk) deriving (Eq, Show) instance - ( Typeable blk, - StandardHash blk, - Eq peer, - Show peer, - Typeable peer + ( Typeable blk + , StandardHash blk + , Eq peer + , Show peer + , Typeable peer ) => Exception (Violation peer blk) @@ -162,22 +165,22 @@ instance -- handles and monitors them for changes. When a change is detected, it runs all -- the invariants and throws 'Violation' if any of the invariants is violated. 
watcher :: - ( MonadSTM m, - MonadThrow m, - Eq peer, - Show peer, - Typeable peer, - Typeable blk, - StandardHash blk + ( MonadSTM m + , MonadThrow m + , Eq peer + , Show peer + , Typeable peer + , Typeable blk + , StandardHash blk ) => STM m (Map peer (CSState.ChainSyncClientHandle m blk)) -> Watcher m (View peer blk) (View peer blk) watcher handles = Watcher - { wFingerprint = id, - wInitial = Nothing, - wReader = readAndView handles, - wNotify = - forM_ allInvariants . \view Invariant {name, check} -> + { wFingerprint = id + , wInitial = Nothing + , wReader = readAndView handles + , wNotify = + forM_ allInvariants . \view Invariant{name, check} -> when (not $ check view) $ throwIO $ Violation name view } diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ChainSync.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ChainSync.hs index 9a6fac5558..6f60d063fc 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ChainSync.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ChainSync.hs @@ -3,65 +3,91 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.Consensus.PeerSimulator.ChainSync ( - chainSyncNoSizeLimits +module Test.Consensus.PeerSimulator.ChainSync + ( chainSyncNoSizeLimits , chainSyncNoTimeouts , runChainSyncClient , runChainSyncServer ) where -import Control.Exception (SomeException) -import Control.Monad.Class.MonadTimer.SI (MonadTimer) -import Control.Tracer (Tracer (Tracer), contramap, nullTracer, - traceWith) -import Data.Proxy (Proxy (..)) -import Network.TypedProtocol.Codec (AnyMessage) -import Ouroboros.Consensus.Block (Header, Point) -import Ouroboros.Consensus.BlockchainTime (RelativeTime (..)) -import Ouroboros.Consensus.Config (DiffusionPipeliningSupport (..), - TopLevelConfig (..)) -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) 
-import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (CSJConfig (..), ChainDbView, - ChainSyncClientHandleCollection, ChainSyncLoPBucketConfig, - ChainSyncStateView (..), Consensus, bracketChainSyncClient, - chainSyncClient) -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CSClient -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck as HistoricityCheck -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck as InFutureCheck -import Ouroboros.Consensus.Node.GsmState (GsmState (Syncing)) -import Ouroboros.Consensus.Util (ShowProxy) -import Ouroboros.Consensus.Util.IOLike (Exception (fromException), - IOLike, MonadCatch (try)) -import Ouroboros.Network.Block (Tip) -import Ouroboros.Network.Channel (Channel) -import Ouroboros.Network.ControlMessage (ControlMessage (..)) -import Ouroboros.Network.Driver (runPeer) -import Ouroboros.Network.Driver.Limits - (ProtocolLimitFailure (ExceededSizeLimit, ExceededTimeLimit), - runPipelinedPeerWithLimits) -import Ouroboros.Network.NodeToNode.Version (NodeToNodeVersion) -import Ouroboros.Network.Protocol.ChainSync.ClientPipelined - (ChainSyncClientPipelined, chainSyncClientPeerPipelined) -import Ouroboros.Network.Protocol.ChainSync.Codec - (ChainSyncTimeout (..), byteLimitsChainSync, - codecChainSyncId, timeLimitsChainSync) -import Ouroboros.Network.Protocol.ChainSync.PipelineDecision - (pipelineDecisionLowHighMark) -import Ouroboros.Network.Protocol.ChainSync.Server (ChainSyncServer, - chainSyncServerPeer) -import Ouroboros.Network.Protocol.ChainSync.Type (ChainSync) -import Ouroboros.Network.Protocol.Limits (ProtocolSizeLimits (..)) -import Test.Consensus.PeerSimulator.StateView - (PeerSimulatorComponentResult (..), - PeerSimulatorResult (..), - StateViewTracers (StateViewTracers, svtPeerSimulatorResultsTracer)) -import Test.Consensus.PeerSimulator.Trace - (TraceChainSyncClientTerminationEvent (..), - TraceEvent (..)) -import 
Test.Consensus.PointSchedule.Peers (PeerId) -import Test.Util.Orphans.IOLike () +import Control.Exception (SomeException) +import Control.Monad.Class.MonadTimer.SI (MonadTimer) +import Control.Tracer + ( Tracer (Tracer) + , contramap + , nullTracer + , traceWith + ) +import Data.Proxy (Proxy (..)) +import Network.TypedProtocol.Codec (AnyMessage) +import Ouroboros.Consensus.Block (Header, Point) +import Ouroboros.Consensus.BlockchainTime (RelativeTime (..)) +import Ouroboros.Consensus.Config + ( DiffusionPipeliningSupport (..) + , TopLevelConfig (..) + ) +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( CSJConfig (..) + , ChainDbView + , ChainSyncClientHandleCollection + , ChainSyncLoPBucketConfig + , ChainSyncStateView (..) + , Consensus + , bracketChainSyncClient + , chainSyncClient + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CSClient +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck qualified as HistoricityCheck +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck qualified as InFutureCheck +import Ouroboros.Consensus.Node.GsmState (GsmState (Syncing)) +import Ouroboros.Consensus.Util (ShowProxy) +import Ouroboros.Consensus.Util.IOLike + ( Exception (fromException) + , IOLike + , MonadCatch (try) + ) +import Ouroboros.Network.Block (Tip) +import Ouroboros.Network.Channel (Channel) +import Ouroboros.Network.ControlMessage (ControlMessage (..)) +import Ouroboros.Network.Driver (runPeer) +import Ouroboros.Network.Driver.Limits + ( ProtocolLimitFailure (ExceededSizeLimit, ExceededTimeLimit) + , runPipelinedPeerWithLimits + ) +import Ouroboros.Network.NodeToNode.Version (NodeToNodeVersion) +import Ouroboros.Network.Protocol.ChainSync.ClientPipelined + ( ChainSyncClientPipelined + , chainSyncClientPeerPipelined + ) +import Ouroboros.Network.Protocol.ChainSync.Codec + ( ChainSyncTimeout (..) 
+ , byteLimitsChainSync + , codecChainSyncId + , timeLimitsChainSync + ) +import Ouroboros.Network.Protocol.ChainSync.PipelineDecision + ( pipelineDecisionLowHighMark + ) +import Ouroboros.Network.Protocol.ChainSync.Server + ( ChainSyncServer + , chainSyncServerPeer + ) +import Ouroboros.Network.Protocol.ChainSync.Type (ChainSync) +import Ouroboros.Network.Protocol.Limits (ProtocolSizeLimits (..)) +import Test.Consensus.PeerSimulator.StateView + ( PeerSimulatorComponentResult (..) + , PeerSimulatorResult (..) + , StateViewTracers (StateViewTracers, svtPeerSimulatorResultsTracer) + ) +import Test.Consensus.PeerSimulator.Trace + ( TraceChainSyncClientTerminationEvent (..) + , TraceEvent (..) + ) +import Test.Consensus.PointSchedule.Peers (PeerId) +import Test.Util.Orphans.IOLike () -- | A basic ChainSync client. It wraps around 'chainSyncClient', but simplifies -- quite a few aspects. In particular, the size of the pipeline cannot exceed 20 @@ -76,46 +102,47 @@ basicChainSyncClient :: ChainSyncStateView m blk -> Consensus ChainSyncClientPipelined blk m basicChainSyncClient - peerId - tracer - cfg - chainDbView - csState = - chainSyncClient - CSClient.ConfigEnv { - CSClient.mkPipelineDecision0 = pipelineDecisionLowHighMark 10 20 - , CSClient.tracer = Tracer (traceWith tracer . TraceChainSyncClientEvent peerId) - , CSClient.cfg - , CSClient.chainDbView - , CSClient.someHeaderInFutureCheck = dummyHeaderInFutureCheck - -- Preventing historical MsgRollBack and MsgAwaitReply messages is - -- motivated by preventing additional load from CSJ-disengaged peers; we - -- do not care about this in these tests. 
- , CSClient.historicityCheck = HistoricityCheck.noCheck - , CSClient.getDiffusionPipeliningSupport = DiffusionPipeliningOn - } - CSClient.DynamicEnv { - CSClient.version = maxBound - , CSClient.controlMessageSTM = return Continue - , CSClient.headerMetricsTracer = nullTracer - , CSClient.setCandidate = csvSetCandidate csState - , CSClient.idling = csvIdling csState - , CSClient.loPBucket = csvLoPBucket csState - , CSClient.setLatestSlot = csvSetLatestSlot csState - , CSClient.jumping = csvJumping csState - } - where + peerId + tracer + cfg + chainDbView + csState = + chainSyncClient + CSClient.ConfigEnv + { CSClient.mkPipelineDecision0 = pipelineDecisionLowHighMark 10 20 + , CSClient.tracer = Tracer (traceWith tracer . TraceChainSyncClientEvent peerId) + , CSClient.cfg + , CSClient.chainDbView + , CSClient.someHeaderInFutureCheck = dummyHeaderInFutureCheck + , -- Preventing historical MsgRollBack and MsgAwaitReply messages is + -- motivated by preventing additional load from CSJ-disengaged peers; we + -- do not care about this in these tests. 
+ CSClient.historicityCheck = HistoricityCheck.noCheck + , CSClient.getDiffusionPipeliningSupport = DiffusionPipeliningOn + } + CSClient.DynamicEnv + { CSClient.version = maxBound + , CSClient.controlMessageSTM = return Continue + , CSClient.headerMetricsTracer = nullTracer + , CSClient.setCandidate = csvSetCandidate csState + , CSClient.idling = csvIdling csState + , CSClient.loPBucket = csvLoPBucket csState + , CSClient.setLatestSlot = csvSetLatestSlot csState + , CSClient.jumping = csvJumping csState + } + where dummyHeaderInFutureCheck :: InFutureCheck.SomeHeaderInFutureCheck m blk dummyHeaderInFutureCheck = - InFutureCheck.SomeHeaderInFutureCheck InFutureCheck.HeaderInFutureCheck - { InFutureCheck.proxyArrival = Proxy - , InFutureCheck.recordHeaderArrival = \_ -> pure () - , InFutureCheck.judgeHeaderArrival = \_ _ _ -> pure () - , InFutureCheck.handleHeaderArrival = \_ -> - -- We are not inspecting header slot time in the Genesis tests. - pure $ pure $ RelativeTime 0 - } + InFutureCheck.SomeHeaderInFutureCheck + InFutureCheck.HeaderInFutureCheck + { InFutureCheck.proxyArrival = Proxy + , InFutureCheck.recordHeaderArrival = \_ -> pure () + , InFutureCheck.judgeHeaderArrival = \_ _ _ -> pure () + , InFutureCheck.handleHeaderArrival = \_ -> + -- We are not inspecting header slot time in the Genesis tests. + pure $ pure $ RelativeTime 0 + } -- | Create and run a ChainSync client using 'bracketChainSyncClient' and -- 'basicChainSyncClient', synchronously. Exceptions are caught, sent to the @@ -125,20 +152,20 @@ runChainSyncClient :: Tracer m (TraceEvent blk) -> TopLevelConfig blk -> ChainDbView m blk -> + -- | The id of the peer to which the client connects. PeerId -> - -- ^ The id of the peer to which the client connects. + -- | Timeouts for this client. ChainSyncTimeout -> - -- ^ Timeouts for this client. + -- | Configuration for the LoP bucket. ChainSyncLoPBucketConfig -> - -- ^ Configuration for the LoP bucket. 
+ -- | Configuration for ChainSync Jumping CSJConfig -> - -- ^ Configuration for ChainSync Jumping + -- | Tracers used to record information for the future 'StateView'. StateViewTracers blk m -> - -- ^ Tracers used to record information for the future 'StateView'. - ChainSyncClientHandleCollection PeerId m blk -> - -- ^ A TVar containing a map of states for each peer. This + -- | A TVar containing a map of states for each peer. This -- function will (via 'bracketChainSyncClient') register and de-register a -- TVar for the state of the peer. + ChainSyncClientHandleCollection PeerId m blk -> Channel m (AnyMessage (ChainSync (Header blk) (Point blk) (Tip blk))) -> m () runChainSyncClient @@ -149,7 +176,7 @@ runChainSyncClient chainSyncTimeouts lopBucketConfig csjConfig - StateViewTracers {svtPeerSimulatorResultsTracer} + StateViewTracers{svtPeerSimulatorResultsTracer} varHandles channel = bracketChainSyncClient @@ -162,7 +189,8 @@ runChainSyncClient (maxBound :: NodeToNodeVersion) lopBucketConfig csjConfig - DiffusionPipeliningOn -- ^ TODO make this a parameter? + DiffusionPipeliningOn + -- \^ TODO make this a parameter? 
$ \csState -> do res <- try $ @@ -172,33 +200,40 @@ runChainSyncClient chainSyncNoSizeLimits (timeLimitsChainSync chainSyncTimeouts) channel - (chainSyncClientPeerPipelined - (basicChainSyncClient - peerId - tracer - cfg - chainDbView - csState)) + ( chainSyncClientPeerPipelined + ( basicChainSyncClient + peerId + tracer + cfg + chainDbView + csState + ) + ) case res of - Right res' -> traceWith svtPeerSimulatorResultsTracer $ - PeerSimulatorResult peerId $ SomeChainSyncClientResult $ Right res' + Right res' -> + traceWith svtPeerSimulatorResultsTracer $ + PeerSimulatorResult peerId $ + SomeChainSyncClientResult $ + Right res' Left exn -> traceException exn - where - traceException exn = do - traceWith svtPeerSimulatorResultsTracer $ - PeerSimulatorResult peerId $ SomeChainSyncClientResult $ Left exn - case fromException exn of - Just (ExceededSizeLimit _) -> - traceWith tracer $ TraceChainSyncClientTerminationEvent peerId TraceExceededSizeLimitCS - Just (ExceededTimeLimit _) -> - traceWith tracer $ TraceChainSyncClientTerminationEvent peerId TraceExceededTimeLimitCS - Nothing -> pure () - case fromException exn of - Just CSClient.DensityTooLow -> - traceWith tracer $ TraceChainSyncClientTerminationEvent peerId TraceTerminatedByGDDGovernor - Just CSClient.EmptyBucket -> - traceWith tracer $ TraceChainSyncClientTerminationEvent peerId TraceTerminatedByLoP - _ -> pure () + where + traceException exn = do + traceWith svtPeerSimulatorResultsTracer $ + PeerSimulatorResult peerId $ + SomeChainSyncClientResult $ + Left exn + case fromException exn of + Just (ExceededSizeLimit _) -> + traceWith tracer $ TraceChainSyncClientTerminationEvent peerId TraceExceededSizeLimitCS + Just (ExceededTimeLimit _) -> + traceWith tracer $ TraceChainSyncClientTerminationEvent peerId TraceExceededTimeLimitCS + Nothing -> pure () + case fromException exn of + Just CSClient.DensityTooLow -> + traceWith tracer $ TraceChainSyncClientTerminationEvent peerId TraceTerminatedByGDDGovernor + Just 
CSClient.EmptyBucket -> + traceWith tracer $ TraceChainSyncClientTerminationEvent peerId TraceTerminatedByLoP + _ -> pure () chainSyncNoSizeLimits :: ProtocolSizeLimits (ChainSync header point tip) bytes chainSyncNoSizeLimits = byteLimitsChainSync (const 0) @@ -206,10 +241,10 @@ chainSyncNoSizeLimits = byteLimitsChainSync (const 0) chainSyncNoTimeouts :: ChainSyncTimeout chainSyncNoTimeouts = ChainSyncTimeout - { canAwaitTimeout = Nothing, - intersectTimeout = Nothing, - mustReplyTimeout = Nothing, - idleTimeout = Nothing + { canAwaitTimeout = Nothing + , intersectTimeout = Nothing + , mustReplyTimeout = Nothing + , idleTimeout = Nothing } runChainSyncServer :: @@ -220,15 +255,20 @@ runChainSyncServer :: ChainSyncServer (Header blk) (Point blk) (Tip blk) m () -> Channel m (AnyMessage (ChainSync (Header blk) (Point blk) (Tip blk))) -> m () -runChainSyncServer tracer peerId StateViewTracers {svtPeerSimulatorResultsTracer} server channel = +runChainSyncServer tracer peerId StateViewTracers{svtPeerSimulatorResultsTracer} server channel = (try $ runPeer sendRecvTracer codecChainSyncId channel (chainSyncServerPeer server)) >>= \case - Right ((), msgRes) -> traceWith svtPeerSimulatorResultsTracer $ - PeerSimulatorResult peerId $ SomeChainSyncServerResult $ Right msgRes + Right ((), msgRes) -> + traceWith svtPeerSimulatorResultsTracer $ + PeerSimulatorResult peerId $ + SomeChainSyncServerResult $ + Right msgRes Left exn -> do traceWith svtPeerSimulatorResultsTracer $ - PeerSimulatorResult peerId $ SomeChainSyncServerResult $ Left exn + PeerSimulatorResult peerId $ + SomeChainSyncServerResult $ + Left exn -- NOTE: here we are able to trace exceptions, as what is done in `runChainSyncClient` case fromException exn of (_ :: Maybe SomeException) -> pure () - where - sendRecvTracer = Tracer $ traceWith tracer . TraceChainSyncSendRecvEvent peerId "Server" + where + sendRecvTracer = Tracer $ traceWith tracer . 
TraceChainSyncSendRecvEvent peerId "Server" diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Config.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Config.hs index 7ff200ab3d..16f0eb9b95 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Config.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Config.hs @@ -2,56 +2,71 @@ module Test.Consensus.PeerSimulator.Config (defaultCfg) where -import Cardano.Crypto.DSIGN (SignKeyDSIGN (..), VerKeyDSIGN (..)) -import Cardano.Slotting.Time (SlotLength, slotLengthFromSec) -import qualified Data.Map.Strict as Map -import Data.Maybe.Strict (StrictMaybe (..)) -import Ouroboros.Consensus.Config (SecurityParam, TopLevelConfig (..), - emptyCheckpointsMap) -import Ouroboros.Consensus.HardFork.History - (EraParams (eraGenesisWin)) -import qualified Ouroboros.Consensus.HardFork.History.EraParams as HardFork -import Ouroboros.Consensus.Ledger.SupportsProtocol (GenesisWindow) -import Ouroboros.Consensus.Node.ProtocolInfo - (NumCoreNodes (NumCoreNodes)) -import Ouroboros.Consensus.NodeId (CoreNodeId (CoreNodeId), - NodeId (CoreId)) -import Ouroboros.Consensus.Protocol.BFT - (BftParams (BftParams, bftNumNodes, bftSecurityParam), - ConsensusConfig (BftConfig, bftParams, bftSignKey, bftVerKeys)) -import Test.Consensus.PointSchedule (ForecastRange (ForecastRange)) -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (BlockConfig (TestBlockConfig), - CodecConfig (TestBlockCodecConfig), - StorageConfig (TestBlockStorageConfig), TestBlock, - TestBlockLedgerConfig (..)) +import Cardano.Crypto.DSIGN (SignKeyDSIGN (..), VerKeyDSIGN (..)) +import Cardano.Slotting.Time (SlotLength, slotLengthFromSec) +import Data.Map.Strict qualified as Map +import Data.Maybe.Strict (StrictMaybe (..)) +import Ouroboros.Consensus.Config + ( SecurityParam + , TopLevelConfig (..) 
+ , emptyCheckpointsMap + ) +import Ouroboros.Consensus.HardFork.History + ( EraParams (eraGenesisWin) + ) +import Ouroboros.Consensus.HardFork.History.EraParams qualified as HardFork +import Ouroboros.Consensus.Ledger.SupportsProtocol (GenesisWindow) +import Ouroboros.Consensus.Node.ProtocolInfo + ( NumCoreNodes (NumCoreNodes) + ) +import Ouroboros.Consensus.NodeId + ( CoreNodeId (CoreNodeId) + , NodeId (CoreId) + ) +import Ouroboros.Consensus.Protocol.BFT + ( BftParams (BftParams, bftNumNodes, bftSecurityParam) + , ConsensusConfig (BftConfig, bftParams, bftSignKey, bftVerKeys) + ) +import Test.Consensus.PointSchedule (ForecastRange (ForecastRange)) +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock + ( BlockConfig (TestBlockConfig) + , CodecConfig (TestBlockCodecConfig) + , StorageConfig (TestBlockStorageConfig) + , TestBlock + , TestBlockLedgerConfig (..) + ) -- REVIEW: this has not been deliberately chosen defaultCfg :: SecurityParam -> ForecastRange -> GenesisWindow -> TopLevelConfig TestBlock -defaultCfg secParam (ForecastRange sfor) sgen = TopLevelConfig { - topLevelConfigProtocol = BftConfig { - bftParams = BftParams { - bftSecurityParam = secParam - , bftNumNodes = NumCoreNodes 2 - } - , bftSignKey = SignKeyMockDSIGN 0 - , bftVerKeys = Map.fromList [ - (CoreId (CoreNodeId 0), VerKeyMockDSIGN 0) - , (CoreId (CoreNodeId 1), VerKeyMockDSIGN 1) - ] +defaultCfg secParam (ForecastRange sfor) sgen = + TopLevelConfig + { topLevelConfigProtocol = + BftConfig + { bftParams = + BftParams + { bftSecurityParam = secParam + , bftNumNodes = NumCoreNodes 2 + } + , bftSignKey = SignKeyMockDSIGN 0 + , bftVerKeys = + Map.fromList + [ (CoreId (CoreNodeId 0), VerKeyMockDSIGN 0) + , (CoreId (CoreNodeId 1), VerKeyMockDSIGN 1) + ] + } + , topLevelConfigLedger = TestBlockLedgerConfig eraParams (SJust (fromIntegral sfor)) + , topLevelConfigBlock = TestBlockConfig numCoreNodes + , topLevelConfigCodec = TestBlockCodecConfig + , topLevelConfigStorage = 
TestBlockStorageConfig + , topLevelConfigCheckpoints = emptyCheckpointsMap } - , topLevelConfigLedger = TestBlockLedgerConfig eraParams (SJust (fromIntegral sfor)) - , topLevelConfigBlock = TestBlockConfig numCoreNodes - , topLevelConfigCodec = TestBlockCodecConfig - , topLevelConfigStorage = TestBlockStorageConfig - , topLevelConfigCheckpoints = emptyCheckpointsMap - } - where - -- REVIEW: Make it 1s or a parameter? - slotLength :: SlotLength - slotLength = slotLengthFromSec 20 + where + -- REVIEW: Make it 1s or a parameter? + slotLength :: SlotLength + slotLength = slotLengthFromSec 20 - eraParams :: HardFork.EraParams - eraParams = (HardFork.defaultEraParams secParam slotLength) {eraGenesisWin = sgen} + eraParams :: HardFork.EraParams + eraParams = (HardFork.defaultEraParams secParam slotLength){eraGenesisWin = sgen} - numCoreNodes = NumCoreNodes 2 + numCoreNodes = NumCoreNodes 2 diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Handlers.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Handlers.hs index d289665db0..0d84d268d7 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Handlers.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Handlers.hs @@ -11,44 +11,66 @@ -- server mocks that the peer simulator uses, in -- "Test.Consensus.PeerSimulator.ScheduledChainSyncServer" and -- "Test.Consensus.PeerSimulator.ScheduledBlockFetchServer". 
-module Test.Consensus.PeerSimulator.Handlers ( - handlerBlockFetch +module Test.Consensus.PeerSimulator.Handlers + ( handlerBlockFetch , handlerFindIntersection , handlerRequestNext , handlerSendBlocks ) where -import Cardano.Slotting.Slot (WithOrigin (..)) -import Control.Monad.Trans (lift) -import Control.Monad.Writer.Strict (MonadWriter (tell), - WriterT (runWriterT)) -import Data.List (isSuffixOf) -import qualified Data.List.NonEmpty as NonEmpty -import Data.Maybe (fromJust, fromMaybe) -import Ouroboros.Consensus.Block (HasHeader, HeaderHash, - Point (GenesisPoint), blockHash, getHeader, withOrigin) -import Ouroboros.Consensus.Util.IOLike (IOLike, STM, StrictTVar, - readTVar, writeTVar) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (Tip (TipGenesis), blockPoint, - getTipPoint, tipFromHeader) -import Ouroboros.Network.BlockFetch.ClientState - (ChainRange (ChainRange)) -import qualified Test.Consensus.BlockTree as BT -import Test.Consensus.BlockTree (BlockTree) -import Test.Consensus.Network.AnchoredFragment.Extras (intersectWith) -import Test.Consensus.PeerSimulator.ScheduledBlockFetchServer - (BlockFetch (..), SendBlocks (..)) -import Test.Consensus.PeerSimulator.ScheduledChainSyncServer - (FindIntersect (..), - RequestNext (AwaitReply, RollBackward, RollForward)) -import Test.Consensus.PeerSimulator.Trace - (TraceScheduledBlockFetchServerEvent (..), - TraceScheduledChainSyncServerEvent (..)) -import Test.Consensus.PointSchedule.NodeState -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (TestBlock, TestHash (TestHash)) +import Cardano.Slotting.Slot (WithOrigin (..)) +import Control.Monad.Trans (lift) +import Control.Monad.Writer.Strict + ( MonadWriter (tell) + , WriterT (runWriterT) + ) +import Data.List (isSuffixOf) +import Data.List.NonEmpty qualified as NonEmpty +import Data.Maybe (fromJust, fromMaybe) +import Ouroboros.Consensus.Block + 
( HasHeader + , HeaderHash + , Point (GenesisPoint) + , blockHash + , getHeader + , withOrigin + ) +import Ouroboros.Consensus.Util.IOLike + ( IOLike + , STM + , StrictTVar + , readTVar + , writeTVar + ) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block + ( Tip (TipGenesis) + , blockPoint + , getTipPoint + , tipFromHeader + ) +import Ouroboros.Network.BlockFetch.ClientState + ( ChainRange (ChainRange) + ) +import Test.Consensus.BlockTree (BlockTree) +import Test.Consensus.BlockTree qualified as BT +import Test.Consensus.Network.AnchoredFragment.Extras (intersectWith) +import Test.Consensus.PeerSimulator.ScheduledBlockFetchServer + ( BlockFetch (..) + , SendBlocks (..) + ) +import Test.Consensus.PeerSimulator.ScheduledChainSyncServer + ( FindIntersect (..) + , RequestNext (AwaitReply, RollBackward, RollForward) + ) +import Test.Consensus.PeerSimulator.Trace + ( TraceScheduledBlockFetchServerEvent (..) + , TraceScheduledChainSyncServerEvent (..) + ) +import Test.Consensus.PointSchedule.NodeState +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock (TestBlock, TestHash (TestHash)) -- | More efficient implementation of a check used in some of the handlers, -- determining whether the first argument is on the chain that ends in the @@ -68,9 +90,9 @@ isAncestorOf :: Bool isAncestorOf (At ancestor) (At descendant) = isSuffixOf (NonEmpty.toList hashA) (NonEmpty.toList hashD) - where - TestHash hashA = blockHash ancestor - TestHash hashD = blockHash descendant + where + TestHash hashA = blockHash ancestor + TestHash hashD = blockHash descendant isAncestorOf (At _) Origin = False isAncestorOf Origin _ = True @@ -107,74 +129,78 @@ handlerFindIntersection currentIntersection blockTree clientPoints points = do -- - Anchor != intersection handlerRequestNext :: forall m. 
- (IOLike m) => + IOLike m => StrictTVar m (Point TestBlock) -> BlockTree TestBlock -> NodeState TestBlock -> - STM m (Maybe (RequestNext TestBlock), [TraceScheduledChainSyncServerEvent (NodeState TestBlock) TestBlock]) + STM + m + ( Maybe (RequestNext TestBlock) + , [TraceScheduledChainSyncServerEvent (NodeState TestBlock) TestBlock] + ) handlerRequestNext currentIntersection blockTree points = runWriterT $ do intersection <- lift $ readTVar currentIntersection trace $ TraceLastIntersection intersection withHeader intersection (nsHeader points) - where - withHeader :: - Point TestBlock -> - WithOrigin TestBlock -> - WriterT - [TraceScheduledChainSyncServerEvent (NodeState TestBlock) TestBlock] - (STM m) - (Maybe (RequestNext TestBlock)) - withHeader intersection h = - maybe noPathError analysePath (BT.findPath intersection hp blockTree) - where - hp = withOrigin GenesisPoint blockPoint h - - noPathError = error "serveHeader: intersection and headerPoint should always be in the block tree" - - analysePath = \case - -- If the anchor is the intersection (the source of the path-finding) but - -- the fragment is empty, then the intersection is exactly our header - -- point and there is nothing to do. If additionally the header point is - -- also the tip point or a descendent of it (because we served our whole - -- chain, or we are stalling as an adversarial behaviour), then we ask the - -- client to wait; otherwise we just do nothing. - (BT.PathAnchoredAtSource True, AF.Empty _) | isAncestorOf (nsTip points) (nsHeader points) -> do - trace TraceChainIsFullyServed - pure (Just AwaitReply) - (BT.PathAnchoredAtSource True, AF.Empty _) -> do - trace TraceIntersectionIsHeaderPoint - pure Nothing - -- If the anchor is the intersection and the fragment is non-empty, then - -- we have something to serve. 
- (BT.PathAnchoredAtSource True, fragmentAhead@(next AF.:< _)) -> do - trace $ TraceIntersectionIsStrictAncestorOfHeaderPoint fragmentAhead - lift $ writeTVar currentIntersection $ blockPoint next - pure $ Just (RollForward (getHeader next) (nsTipTip points)) - -- If the anchor is not the intersection but the fragment is empty, then - -- the intersection is further than the tip that we can serve. - (BT.PathAnchoredAtSource False, AF.Empty _) -> do - trace TraceIntersectionIsStrictDescendentOfHeaderPoint - -- REVIEW: The following is a hack that allows the honest peer to not - -- get disconnected when it falls behind. Why does a peer doing that not - -- get disconnected from? - -- - -- We decided to hold off on making this work with timeouts, so we'll return - -- Nothing here for now. - -- The consequence of this is that a slow peer will just block until it reaches - -- the fork intersection in its schedule. - -- pure (Just AwaitReply) - pure Nothing - -- If the anchor is not the intersection and the fragment is non-empty, - -- then we require a rollback - (BT.PathAnchoredAtSource False, fragment) -> do - let point = AF.anchorPoint fragment - lift $ writeTVar currentIntersection point - pure $ Just (RollBackward point tip') - - tip' = withOrigin TipGenesis tipFromHeader $ nsTip points - - trace = tell . pure + where + withHeader :: + Point TestBlock -> + WithOrigin TestBlock -> + WriterT + [TraceScheduledChainSyncServerEvent (NodeState TestBlock) TestBlock] + (STM m) + (Maybe (RequestNext TestBlock)) + withHeader intersection h = + maybe noPathError analysePath (BT.findPath intersection hp blockTree) + where + hp = withOrigin GenesisPoint blockPoint h + + noPathError = error "serveHeader: intersection and headerPoint should always be in the block tree" + + analysePath = \case + -- If the anchor is the intersection (the source of the path-finding) but + -- the fragment is empty, then the intersection is exactly our header + -- point and there is nothing to do. 
If additionally the header point is + -- also the tip point or a descendant of it (because we served our whole + -- chain, or we are stalling as an adversarial behaviour), then we ask the + -- client to wait; otherwise we just do nothing. + (BT.PathAnchoredAtSource True, AF.Empty _) | isAncestorOf (nsTip points) (nsHeader points) -> do + trace TraceChainIsFullyServed + pure (Just AwaitReply) + (BT.PathAnchoredAtSource True, AF.Empty _) -> do + trace TraceIntersectionIsHeaderPoint + pure Nothing + -- If the anchor is the intersection and the fragment is non-empty, then + -- we have something to serve. + (BT.PathAnchoredAtSource True, fragmentAhead@(next AF.:< _)) -> do + trace $ TraceIntersectionIsStrictAncestorOfHeaderPoint fragmentAhead + lift $ writeTVar currentIntersection $ blockPoint next + pure $ Just (RollForward (getHeader next) (nsTipTip points)) + -- If the anchor is not the intersection but the fragment is empty, then + -- the intersection is further than the tip that we can serve. + (BT.PathAnchoredAtSource False, AF.Empty _) -> do + trace TraceIntersectionIsStrictDescendentOfHeaderPoint + -- REVIEW: The following is a hack that allows the honest peer to not + -- get disconnected when it falls behind. Why does a peer doing that not + -- get disconnected from? + -- + -- We decided to hold off on making this work with timeouts, so we'll return + -- Nothing here for now. + -- The consequence of this is that a slow peer will just block until it reaches + -- the fork intersection in its schedule. + -- pure (Just AwaitReply) + pure Nothing + -- If the anchor is not the intersection and the fragment is non-empty, + -- then we require a rollback + (BT.PathAnchoredAtSource False, fragment) -> do + let point = AF.anchorPoint fragment + lift $ writeTVar currentIntersection point + pure $ Just (RollBackward point tip') + + tip' = withOrigin TipGenesis tipFromHeader $ nsTip points + + trace = tell .
pure -- REVIEW: We call this a lot, and I assume it creates some significant overhead. -- We should figure out a cheaper way to achieve what we're doing with the result. @@ -183,8 +209,8 @@ handlerRequestNext currentIntersection blockTree points = fragmentUpTo :: HasHeader blk => BlockTree blk -> String -> Point blk -> AnchoredFragment blk fragmentUpTo blockTree desc b = fromMaybe fatal (BT.findFragment b blockTree) - where - fatal = error ("BlockFetch: Could not find " ++ desc ++ " in the block tree") + where + fatal = error ("BlockFetch: Could not find " ++ desc ++ " in the block tree") -- | Handle the BlockFetch message (it actually has only one unnamed entry point). -- @@ -194,33 +220,33 @@ fragmentUpTo blockTree desc b = handlerBlockFetch :: forall m blk. (IOLike m, HasHeader blk) => + -- | The tree of blocks in this scenario -- aka. the “universe”. BlockTree blk -> - -- ^ The tree of blocks in this scenario -- aka. the “universe”. - ChainRange (Point blk) -> - -- ^ A requested range of blocks. If the client behaves correctly, they + -- | A requested range of blocks. If the client behaves correctly, they -- correspond to headers that have been sent before, and if the scheduled -- ChainSync server behaves correctly, then they are all in the block tree. - NodeState blk -> - -- ^ The current advertised points (tip point, header point and block point). + ChainRange (Point blk) -> + -- | The current advertised points (tip point, header point and block point). -- They are in the block tree. + NodeState blk -> STM m (Maybe (BlockFetch blk), [TraceScheduledBlockFetchServerEvent (NodeState blk) blk]) handlerBlockFetch blockTree (ChainRange from to) _ = runWriterT (serveFromBpFragment (AF.sliceRange chain from to)) - where - -- Check whether the requested range is contained in the fragment before the header point. - -- We may only initiate batch serving if the full range is available; if the server has only some of the blocks, it - -- must refuse. 
- serveFromBpFragment = \case - Just slice -> do - trace $ TraceStartingBatch slice - pure (Just (StartBatch (AF.toOldestFirst slice))) - Nothing -> do - trace $ TraceWaitingForRange from to - pure Nothing - - chain = fragmentUpTo blockTree "upper range bound" to - - trace = tell . pure + where + -- Check whether the requested range is contained in the fragment before the header point. + -- We may only initiate batch serving if the full range is available; if the server has only some of the blocks, it + -- must refuse. + serveFromBpFragment = \case + Just slice -> do + trace $ TraceStartingBatch slice + pure (Just (StartBatch (AF.toOldestFirst slice))) + Nothing -> do + trace $ TraceWaitingForRange from to + pure Nothing + + chain = fragmentUpTo blockTree "upper range bound" to + + trace = tell . pure {- If we cannot serve blocks from the block point chain (that is the chain on which @@ -302,53 +328,56 @@ The cases to consider follow: -} handlerSendBlocks :: - forall m . + forall m. IOLike m => [TestBlock] -> NodeState TestBlock -> - STM m (Maybe (SendBlocks TestBlock), [TraceScheduledBlockFetchServerEvent (NodeState TestBlock) TestBlock]) -handlerSendBlocks blocks NodeState {nsHeader, nsBlock} = + STM + m + ( Maybe (SendBlocks TestBlock) + , [TraceScheduledBlockFetchServerEvent (NodeState TestBlock) TestBlock] + ) +handlerSendBlocks blocks NodeState{nsHeader, nsBlock} = runWriterT (checkDone blocks) - where - checkDone = \case - [] -> do - trace TraceBatchIsDone - pure (Just BatchDone) - (next : future) -> - blocksLeft next future - - blocksLeft next future - | isAncestorOf (At next) nsBlock - || compensateForScheduleRollback next - = do - trace $ TraceSendingBlock next - pure (Just (SendBlock next future)) - - | otherwise - = do - trace TraceBlockPointIsBehind - pure Nothing - - -- Here we encode the conditions for the special situation mentioned above. - -- These use aliases for @withinFragmentBounds@ to illustrate what we're testing. 
- -- The names don't precisely match semantically, but it's difficult to understand the - -- circumstances otherwise. - -- - -- The involved points are BP, HP, and @next@, which is the block we're deciding whether to - -- send or not. - -- - -- Remember that at this point, we already know that we cannot send @next@ regularly, i.e. - -- @next@ is not on the chain leading up to BP. - -- The conditions in which we send @next@ to compensate for rollbacks are: - -- - -- * @next@ is not on the chain leading up to HP – HP moved to another chain, and - -- - -- * BP is in the same chain as HP and is not an ancestor of @next@ - BP also moved away from the chain of @next@. - -- - -- Precondition: @not (isAncestorOf (At next) bp)@ - compensateForScheduleRollback next = - not (isAncestorOf (At next) nsHeader) - && isAncestorOf nsBlock nsHeader - && not (isAncestorOf nsBlock (At next)) - - trace = tell . pure + where + checkDone = \case + [] -> do + trace TraceBatchIsDone + pure (Just BatchDone) + (next : future) -> + blocksLeft next future + + blocksLeft next future + | isAncestorOf (At next) nsBlock + || compensateForScheduleRollback next = + do + trace $ TraceSendingBlock next + pure (Just (SendBlock next future)) + | otherwise = + do + trace TraceBlockPointIsBehind + pure Nothing + + -- Here we encode the conditions for the special situation mentioned above. + -- These use aliases for @withinFragmentBounds@ to illustrate what we're testing. + -- The names don't precisely match semantically, but it's difficult to understand the + -- circumstances otherwise. + -- + -- The involved points are BP, HP, and @next@, which is the block we're deciding whether to + -- send or not. + -- + -- Remember that at this point, we already know that we cannot send @next@ regularly, i.e. + -- @next@ is not on the chain leading up to BP. 
+ -- The conditions in which we send @next@ to compensate for rollbacks are: + -- + -- \* @next@ is not on the chain leading up to HP – HP moved to another chain, and + -- + -- \* BP is in the same chain as HP and is not an ancestor of @next@ - BP also moved away from the chain of @next@. + -- + -- Precondition: @not (isAncestorOf (At next) bp)@ + compensateForScheduleRollback next = + not (isAncestorOf (At next) nsHeader) + && isAncestorOf nsBlock nsHeader + && not (isAncestorOf nsBlock (At next)) + + trace = tell . pure diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/NodeLifecycle.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/NodeLifecycle.hs index dc99e61c6f..2cf6ff0170 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/NodeLifecycle.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/NodeLifecycle.hs @@ -2,8 +2,8 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.Consensus.PeerSimulator.NodeLifecycle ( - LiveInterval (..) +module Test.Consensus.PeerSimulator.NodeLifecycle + ( LiveInterval (..) , LiveIntervalResult (..) , LiveNode (..) , LiveResources (..) 
@@ -13,106 +13,103 @@ module Test.Consensus.PeerSimulator.NodeLifecycle ( , restoreNode ) where -import Control.ResourceRegistry -import Control.Tracer (Tracer (..), traceWith) -import Data.Functor (void) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config (TopLevelConfig (..)) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncClientHandleCollection (..)) -import Ouroboros.Consensus.Storage.ChainDB.API -import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl as ChainDB -import Ouroboros.Consensus.Storage.ChainDB.Impl.Args (cdbsLoE, - updateTracer) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import qualified System.FS.Sim.MockFS as MockFS -import System.FS.Sim.MockFS (MockFS) -import Test.Consensus.PeerSimulator.Resources -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PeerSimulator.Trace -import Test.Consensus.PointSchedule.Peers (PeerId) -import Test.Util.ChainDB -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (TestBlock, testInitExtLedger) +import Control.ResourceRegistry +import Control.Tracer (Tracer (..), traceWith) +import Data.Functor (void) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config (TopLevelConfig (..)) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientHandleCollection (..) 
+ ) +import Ouroboros.Consensus.Storage.ChainDB.API +import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.Impl qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args + ( cdbsLoE + , updateTracer + ) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import System.FS.Sim.MockFS (MockFS) +import System.FS.Sim.MockFS qualified as MockFS +import Test.Consensus.PeerSimulator.Resources +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PeerSimulator.Trace +import Test.Consensus.PointSchedule.Peers (PeerId) +import Test.Util.ChainDB +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock (TestBlock, testInitExtLedger) -- | Resources used for a single live interval of the node, constructed when the -- node is started. -- When the node is shut down, 'lnCopyToImmDb' is used to persist the current -- chain. -data LiveNode blk m = LiveNode { - lnChainDb :: ChainDB m blk +data LiveNode blk m = LiveNode + { lnChainDb :: ChainDB m blk , lnStateViewTracers :: StateViewTracers blk m - , lnStateTracer :: Tracer m () - - -- | Write persistent ChainDB state (the immutable and volatile DBs, but not - -- the ledger and GSM state) to the VFS TVars to preserve it for the next - -- interval. - -- Returns the immutable tip's slot for tracing. - , lnCopyToImmDb :: m (WithOrigin SlotNo) - - -- | The set of peers that should be started. - -- Based on the simulation results at node shutdown, disconnected peers are - -- removed for the next live interval. - , lnPeers :: Set PeerId + , lnStateTracer :: Tracer m () + , lnCopyToImmDb :: m (WithOrigin SlotNo) + -- ^ Write persistent ChainDB state (the immutable and volatile DBs, but not + -- the ledger and GSM state) to the VFS TVars to preserve it for the next + -- interval. + -- Returns the immutable tip's slot for tracing. 
+ , lnPeers :: Set PeerId + -- ^ The set of peers that should be started. + -- Based on the simulation results at node shutdown, disconnected peers are + -- removed for the next live interval. } -- | Result of a node shutdown at the end of a live interval. -data LiveIntervalResult blk = LiveIntervalResult { - -- | Used to initialize the 'StateViewTracers' of the next run to preserve - -- earlier disconnections for the final result. - lirPeerResults :: [PeerSimulatorResult blk] - - -- | The remaining peers, computed by removing all peers present in - 'lrPeerResults' from the current state in 'lnPeers'. - , lirActive :: Set PeerId +data LiveIntervalResult blk = LiveIntervalResult + { lirPeerResults :: [PeerSimulatorResult blk] + -- ^ Used to initialize the 'StateViewTracers' of the next run to preserve + -- earlier disconnections for the final result. + , lirActive :: Set PeerId + -- ^ The remaining peers, computed by removing all peers present in + -- 'lirPeerResults' from the current state in 'lnPeers'. } -- | Resources used by the handlers 'lifecycleStart' and 'lifecycleStop' to -- shut down running components, construct tracers used for single intervals, -- and reset and persist state. -data LiveResources blk m = LiveResources { - lrRegistry :: ResourceRegistry m - , lrPeerSim :: PeerSimulatorResources m blk - , lrTracer :: Tracer m (TraceEvent blk) - , lrSTracer :: ChainDB m blk -> m (Tracer m ()) - , lrConfig :: TopLevelConfig blk - - -- | The chain DB state consists of several transient parts and the - -- immutable DB's virtual file system. - -- After 'lnCopyToImmDb' was executed, the latter will contain the final - -- state of an interval. - -- The rest is reset when the chain DB is recreated. - , lrCdb :: NodeDBs (StrictTMVar m MockFS) - - -- | The LoE fragment must be reset for each live interval.
- , lrLoEVar :: LoE (StrictTVar m (AnchoredFragment (HeaderWithTime blk))) +data LiveResources blk m = LiveResources + { lrRegistry :: ResourceRegistry m + , lrPeerSim :: PeerSimulatorResources m blk + , lrTracer :: Tracer m (TraceEvent blk) + , lrSTracer :: ChainDB m blk -> m (Tracer m ()) + , lrConfig :: TopLevelConfig blk + , lrCdb :: NodeDBs (StrictTMVar m MockFS) + -- ^ The chain DB state consists of several transient parts and the + -- immutable DB's virtual file system. + -- After 'lnCopyToImmDb' was executed, the latter will contain the final + -- state of an interval. + -- The rest is reset when the chain DB is recreated. + , lrLoEVar :: LoE (StrictTVar m (AnchoredFragment (HeaderWithTime blk))) + -- ^ The LoE fragment must be reset for each live interval. } -data LiveInterval blk m = LiveInterval { - liResources :: LiveResources blk m - , liResult :: LiveIntervalResult blk - , liNode :: LiveNode blk m +data LiveInterval blk m = LiveInterval + { liResources :: LiveResources blk m + , liResult :: LiveIntervalResult blk + , liNode :: LiveNode blk m } -- | Handlers for starting the node and shutting it down for each live interval, -- using the state of the previous run. -data NodeLifecycle blk m = NodeLifecycle { - -- | The minimum tick duration that triggers a node downtime. - -- If this is 'Nothing', downtimes are disabled. - nlMinDuration :: Maybe DiffTime - - -- | Start the node with prior state. - -- For the first start, this must be called with an empty 'lirPeerResults' - -- and the initial set of all peers in 'lirActive'. - , nlStart :: LiveIntervalResult blk -> m (LiveNode blk m) - , nlShutdown :: LiveNode blk m -> m (LiveIntervalResult blk) +data NodeLifecycle blk m = NodeLifecycle + { nlMinDuration :: Maybe DiffTime + -- ^ The minimum tick duration that triggers a node downtime. + -- If this is 'Nothing', downtimes are disabled. + , nlStart :: LiveIntervalResult blk -> m (LiveNode blk m) + -- ^ Start the node with prior state. 
+ -- For the first start, this must be called with an empty 'lirPeerResults' + -- and the initial set of all peers in 'lirActive'. + , nlShutdown :: LiveNode blk m -> m (LiveIntervalResult blk) } -- | Create a ChainDB and start a BlockRunner that operate on the peers' @@ -122,34 +119,42 @@ mkChainDb :: LiveResources TestBlock m -> m (ChainDB m TestBlock, m (WithOrigin SlotNo)) mkChainDb resources = do - atomically $ do - -- Reset only the non-persisted state of the ChainDB's file system mocks: - -- - GSM state and Ledger DB are discarded - -- - Immutable DB and Volatile DB are preserved for the next interval - void $ swapTMVar (nodeDBsGsm lrCdb) MockFS.empty - void $ swapTMVar (nodeDBsLgr lrCdb) MockFS.empty - chainDbArgs <- do - let args = updateTracer + atomically $ do + -- Reset only the non-persisted state of the ChainDB's file system mocks: + -- - GSM state and Ledger DB are discarded + -- - Immutable DB and Volatile DB are preserved for the next interval + void $ swapTMVar (nodeDBsGsm lrCdb) MockFS.empty + void $ swapTMVar (nodeDBsLgr lrCdb) MockFS.empty + chainDbArgs <- do + let args = + updateTracer (Tracer (traceWith lrTracer . TraceChainDBEvent)) - (fromMinimalChainDbArgs MinimalChainDbArgs { - mcdbTopLevelConfig = lrConfig - , mcdbChunkInfo = mkTestChunkInfo lrConfig - , mcdbInitLedger = testInitExtLedger - , mcdbRegistry = lrRegistry - , mcdbNodeDBs = lrCdb - }) - pure $ args { ChainDB.cdbsArgs = (ChainDB.cdbsArgs args) { - cdbsLoE = traverse readTVarIO lrLoEVar - } } - (_, (chainDB, internal)) <- allocate - lrRegistry - (\_ -> ChainDB.openDBInternal chainDbArgs False) - (ChainDB.closeDB . 
fst) - let ChainDB.Internal {intCopyToImmutableDB, intAddBlockRunner} = internal - void $ forkLinkedThread lrRegistry "AddBlockRunner" (void intAddBlockRunner) - pure (chainDB, intCopyToImmutableDB) - where - LiveResources {lrRegistry, lrTracer, lrConfig, lrCdb, lrLoEVar} = resources + ( fromMinimalChainDbArgs + MinimalChainDbArgs + { mcdbTopLevelConfig = lrConfig + , mcdbChunkInfo = mkTestChunkInfo lrConfig + , mcdbInitLedger = testInitExtLedger + , mcdbRegistry = lrRegistry + , mcdbNodeDBs = lrCdb + } + ) + pure $ + args + { ChainDB.cdbsArgs = + (ChainDB.cdbsArgs args) + { cdbsLoE = traverse readTVarIO lrLoEVar + } + } + (_, (chainDB, internal)) <- + allocate + lrRegistry + (\_ -> ChainDB.openDBInternal chainDbArgs False) + (ChainDB.closeDB . fst) + let ChainDB.Internal{intCopyToImmutableDB, intAddBlockRunner} = internal + void $ forkLinkedThread lrRegistry "AddBlockRunner" (void intAddBlockRunner) + pure (chainDB, intCopyToImmutableDB) + where + LiveResources{lrRegistry, lrTracer, lrConfig, lrCdb, lrLoEVar} = resources -- | Allocate all the resources that depend on the results of previous live -- intervals, the ChainDB and its persisted state. @@ -158,17 +163,18 @@ restoreNode :: LiveResources TestBlock m -> LiveIntervalResult TestBlock -> m (LiveNode TestBlock m) -restoreNode resources LiveIntervalResult {lirPeerResults, lirActive} = do +restoreNode resources LiveIntervalResult{lirPeerResults, lirActive} = do lnStateViewTracers <- stateViewTracersWithInitial lirPeerResults (lnChainDb, lnCopyToImmDb) <- mkChainDb resources lnStateTracer <- lrSTracer resources lnChainDb - pure LiveNode { - lnChainDb - , lnStateViewTracers - , lnStateTracer - , lnCopyToImmDb - , lnPeers = lirActive - } + pure + LiveNode + { lnChainDb + , lnStateViewTracers + , lnStateTracer + , lnCopyToImmDb + , lnPeers = lirActive + } -- | Allocate resources with 'restoreNode' and pass them to the callback that -- starts the node's threads. 
@@ -182,12 +188,12 @@ lifecycleStart :: lifecycleStart start liResources liResult = do trace (TraceSchedulerEvent TraceNodeStartupStart) liNode <- restoreNode liResources liResult - start LiveInterval {liResources, liResult, liNode} + start LiveInterval{liResources, liResult, liNode} chain <- atomically (ChainDB.getCurrentChain (lnChainDb liNode)) trace (TraceSchedulerEvent (TraceNodeStartupComplete chain)) pure liNode - where - trace = traceWith (lrTracer liResources) + where + trace = traceWith (lrTracer liResources) -- | Shut down the node by killing all its threads after extracting the -- persistent state used to restart the node later. @@ -196,7 +202,7 @@ lifecycleStop :: LiveResources blk m -> LiveNode blk m -> m (LiveIntervalResult blk) -lifecycleStop resources LiveNode {lnStateViewTracers, lnCopyToImmDb, lnPeers} = do +lifecycleStop resources LiveNode{lnStateViewTracers, lnCopyToImmDb, lnPeers} = do -- Trigger writing the immutable tip to the MockFS in our TVar for restoring in 'startNode' immutableTip <- lnCopyToImmDb trace (TraceSchedulerEvent (TraceNodeShutdownStart immutableTip)) @@ -211,14 +217,14 @@ lifecycleStop resources LiveNode {lnStateViewTracers, lnCopyToImmDb, lnPeers} = cschcRemoveAllHandles psrHandles case lrLoEVar of LoEEnabled var -> modifyTVar var (const (AF.Empty AF.AnchorGenesis)) - LoEDisabled -> pure () + LoEDisabled -> pure () trace (TraceSchedulerEvent TraceNodeShutdownComplete) - pure LiveIntervalResult {lirActive, lirPeerResults} - where - trace = traceWith lrTracer - LiveResources { - lrRegistry - , lrTracer - , lrPeerSim = PeerSimulatorResources {psrHandles} - , lrLoEVar - } = resources + pure LiveIntervalResult{lirActive, lirPeerResults} + where + trace = traceWith lrTracer + LiveResources + { lrRegistry + , lrTracer + , lrPeerSim = PeerSimulatorResources{psrHandles} + , lrLoEVar + } = resources diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Resources.hs 
b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Resources.hs index a594d9059c..d219498944 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Resources.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Resources.hs @@ -3,8 +3,8 @@ -- | Data types and resource allocating constructors for the concurrency -- primitives used by ChainSync and BlockFetch in the handlers that implement -- the block tree analysis specific to our peer simulator. -module Test.Consensus.PeerSimulator.Resources ( - BlockFetchResources (..) +module Test.Consensus.PeerSimulator.Resources + ( BlockFetchResources (..) , ChainSyncResources (..) , PeerResources (..) , PeerSimulatorResources (..) @@ -14,122 +14,126 @@ module Test.Consensus.PeerSimulator.Resources ( , makePeerSimulatorResources ) where -import Control.Concurrent.Class.MonadSTM.Strict (atomically, dupTChan, - newBroadcastTChan, readTChan, writeTChan) -import Control.Tracer (Tracer) -import Data.Foldable (toList) -import Data.List.NonEmpty (NonEmpty) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Traversable (for) -import Ouroboros.Consensus.Block (WithOrigin (Origin)) -import Ouroboros.Consensus.Block.Abstract (Header, Point (..)) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (ChainSyncClientHandleCollection, - newChainSyncClientHandleCollection) -import Ouroboros.Consensus.Util.IOLike (IOLike, MonadSTM (STM), - StrictTVar, readTVar, uncheckedNewTVarM, writeTVar) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (Tip (..)) -import Ouroboros.Network.Protocol.BlockFetch.Server (BlockFetchServer) -import Ouroboros.Network.Protocol.ChainSync.Server - (ChainSyncServer (..)) -import Test.Consensus.BlockTree (BlockTree) -import Test.Consensus.PeerSimulator.Handlers -import Test.Consensus.PeerSimulator.ScheduledBlockFetchServer - 
(BlockFetchServerHandlers (..), - runScheduledBlockFetchServer) -import Test.Consensus.PeerSimulator.ScheduledChainSyncServer -import Test.Consensus.PeerSimulator.Trace (TraceEvent) -import Test.Consensus.PointSchedule.NodeState -import Test.Consensus.PointSchedule.Peers (PeerId) -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (TestBlock) +import Control.Concurrent.Class.MonadSTM.Strict + ( atomically + , dupTChan + , newBroadcastTChan + , readTChan + , writeTChan + ) +import Control.Tracer (Tracer) +import Data.Foldable (toList) +import Data.List.NonEmpty (NonEmpty) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Traversable (for) +import Ouroboros.Consensus.Block (WithOrigin (Origin)) +import Ouroboros.Consensus.Block.Abstract (Header, Point (..)) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( ChainSyncClientHandleCollection + , newChainSyncClientHandleCollection + ) +import Ouroboros.Consensus.Util.IOLike + ( IOLike + , MonadSTM (STM) + , StrictTVar + , readTVar + , uncheckedNewTVarM + , writeTVar + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (Tip (..)) +import Ouroboros.Network.Protocol.BlockFetch.Server (BlockFetchServer) +import Ouroboros.Network.Protocol.ChainSync.Server + ( ChainSyncServer (..) + ) +import Test.Consensus.BlockTree (BlockTree) +import Test.Consensus.PeerSimulator.Handlers +import Test.Consensus.PeerSimulator.ScheduledBlockFetchServer + ( BlockFetchServerHandlers (..) + , runScheduledBlockFetchServer + ) +import Test.Consensus.PeerSimulator.ScheduledChainSyncServer +import Test.Consensus.PeerSimulator.Trace (TraceEvent) +import Test.Consensus.PointSchedule.NodeState +import Test.Consensus.PointSchedule.Peers (PeerId) +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock (TestBlock) -- | Resources used by both ChainSync and BlockFetch for a single peer. 
-data SharedResources m blk = - SharedResources { - -- | The name of the peer. - srPeerId :: PeerId, - - -- | The block tree in which the test is taking place. In combination to - 'csssCurrentIntersection' and the current point schedule tick, it allows - to define which blocks to serve to the client. - srBlockTree :: BlockTree blk, - - -- | The currently active schedule point. - -- - -- This is 'Maybe' because we cannot wait for the initial state otherwise. - srCurrentState :: StrictTVar m (Maybe (NodeState blk)), - - srTracer :: Tracer m (TraceEvent blk) +data SharedResources m blk + = SharedResources + { srPeerId :: PeerId + -- ^ The name of the peer. + , srBlockTree :: BlockTree blk + -- ^ The block tree in which the test is taking place. In combination with + -- 'csssCurrentIntersection' and the current point schedule tick, it allows + -- us to define which blocks to serve to the client. + , srCurrentState :: StrictTVar m (Maybe (NodeState blk)) + -- ^ The currently active schedule point. + -- + -- This is 'Maybe' because we cannot wait for the initial state otherwise. + , srTracer :: Tracer m (TraceEvent blk) } -- | The data used by the point scheduler to interact with the mocked protocol handler in -- "Test.Consensus.PeerSimulator.ScheduledChainSyncServer". -data ChainSyncResources m blk = - ChainSyncResources { - -- | The current known intersection with the chain of the client. - csrCurrentIntersection :: StrictTVar m (Point blk), - - -- | The final server passed to typed-protocols. - csrServer :: ChainSyncServer (Header blk) (Point blk) (Tip blk) m (), - - -- | This action blocks while this peer is inactive in the point schedule. - csrTickStarted :: STM m () +data ChainSyncResources m blk + = ChainSyncResources + { csrCurrentIntersection :: StrictTVar m (Point blk) + -- ^ The current known intersection with the chain of the client. + , csrServer :: ChainSyncServer (Header blk) (Point blk) (Tip blk) m () + -- ^ The final server passed to typed-protocols.
+ , csrTickStarted :: STM m () + -- ^ This action blocks while this peer is inactive in the point schedule. } -- | The data used by the point scheduler to interact with the mocked protocol handler in -- "Test.Consensus.PeerSimulator.BlockFetch". -data BlockFetchResources m blk = - BlockFetchResources { - -- | The final server passed to typed-protocols. - bfrServer :: BlockFetchServer blk (Point blk) m (), - - -- | This action blocks while this peer is inactive in the point schedule. - bfrTickStarted :: STM m () +data BlockFetchResources m blk + = BlockFetchResources + { bfrServer :: BlockFetchServer blk (Point blk) m () + -- ^ The final server passed to typed-protocols. + , bfrTickStarted :: STM m () + -- ^ This action blocks while this peer is inactive in the point schedule. } -- | The totality of resources used by a single peer in ChainSync and BlockFetch and by -- the scheduler to interact with it. -data PeerResources m blk = - PeerResources { - -- | Resources used by ChainSync and BlockFetch. - prShared :: SharedResources m blk, - - -- | Resources used by ChainSync only. - prChainSync :: ChainSyncResources m blk, - - -- | Resources used by BlockFetch only. - prBlockFetch :: BlockFetchResources m blk, - - -- | An action used by the scheduler to update the peer's advertised points and - -- resume processing for the ChainSync and BlockFetch servers. - prUpdateState :: NodeState blk -> STM m () +data PeerResources m blk + = PeerResources + { prShared :: SharedResources m blk + -- ^ Resources used by ChainSync and BlockFetch. + , prChainSync :: ChainSyncResources m blk + -- ^ Resources used by ChainSync only. + , prBlockFetch :: BlockFetchResources m blk + -- ^ Resources used by BlockFetch only. + , prUpdateState :: NodeState blk -> STM m () + -- ^ An action used by the scheduler to update the peer's advertised points and + -- resume processing for the ChainSync and BlockFetch servers. } -- | Resources for the peer simulator. 
-data PeerSimulatorResources m blk =
-  PeerSimulatorResources {
-    -- | Resources for individual peers.
-    psrPeers :: Map PeerId (PeerResources m blk),
-
-    -- | Handles to interact with the ChainSync client of each peer.
-    -- See 'ChainSyncClientHandle' for more details.
-    psrHandles :: ChainSyncClientHandleCollection PeerId m TestBlock
+data PeerSimulatorResources m blk
+  = PeerSimulatorResources
+  { psrPeers :: Map PeerId (PeerResources m blk)
+  -- ^ Resources for individual peers.
+  , psrHandles :: ChainSyncClientHandleCollection PeerId m TestBlock
+  -- ^ Handles to interact with the ChainSync client of each peer.
+  -- See 'ChainSyncClientHandle' for more details.
   }
 
 -- | Create 'ChainSyncServerHandlers' for our default implementation using 'NodeState'.
 makeChainSyncServerHandlers ::
-  (IOLike m) =>
+  IOLike m =>
   StrictTVar m (Point TestBlock) ->
   BlockTree TestBlock ->
   ChainSyncServerHandlers m (NodeState TestBlock) TestBlock
 makeChainSyncServerHandlers currentIntersection blockTree =
-  ChainSyncServerHandlers {
-    csshFindIntersection = handlerFindIntersection currentIntersection blockTree,
-    csshRequestNext = handlerRequestNext currentIntersection blockTree
-  }
+  ChainSyncServerHandlers
+    { csshFindIntersection = handlerFindIntersection currentIntersection blockTree
+    , csshRequestNext = handlerRequestNext currentIntersection blockTree
+    }
 
 -- | Create all the resources used exclusively by the ChainSync handlers, and
 -- the ChainSync protocol server that uses the handlers to interface with the
@@ -137,35 +141,40 @@ makeChainSyncServerHandlers currentIntersection blockTree =
 --
 -- TODO move server construction to Run?
makeChainSyncResources :: - (IOLike m) => + IOLike m => STM m () -> SharedResources m TestBlock -> m (ChainSyncResources m TestBlock) -makeChainSyncResources csrTickStarted SharedResources {srPeerId, srTracer, srBlockTree, srCurrentState} = do +makeChainSyncResources csrTickStarted SharedResources{srPeerId, srTracer, srBlockTree, srCurrentState} = do csrCurrentIntersection <- uncheckedNewTVarM $ AF.Point Origin let handlers = makeChainSyncServerHandlers csrCurrentIntersection srBlockTree csrServer = runScheduledChainSyncServer srPeerId csrTickStarted (readTVar srCurrentState) srTracer handlers - pure ChainSyncResources {csrTickStarted, csrServer, csrCurrentIntersection} + pure ChainSyncResources{csrTickStarted, csrServer, csrCurrentIntersection} makeBlockFetchResources :: IOLike m => STM m () -> SharedResources m TestBlock -> BlockFetchResources m TestBlock -makeBlockFetchResources bfrTickStarted SharedResources {srPeerId, srTracer, srBlockTree, srCurrentState} = - BlockFetchResources { - bfrTickStarted, - bfrServer - } - where - handlers = BlockFetchServerHandlers { - bfshBlockFetch = handlerBlockFetch srBlockTree, - bfshSendBlocks = handlerSendBlocks +makeBlockFetchResources bfrTickStarted SharedResources{srPeerId, srTracer, srBlockTree, srCurrentState} = + BlockFetchResources + { bfrTickStarted + , bfrServer } - bfrServer = - runScheduledBlockFetchServer srPeerId bfrTickStarted (readTVar srCurrentState) - srTracer handlers + where + handlers = + BlockFetchServerHandlers + { bfshBlockFetch = handlerBlockFetch srBlockTree + , bfshSendBlocks = handlerSendBlocks + } + bfrServer = + runScheduledBlockFetchServer + srPeerId + bfrTickStarted + (readTVar srCurrentState) + srTracer + handlers -- | Create the concurrency transactions for communicating the begin of a peer's -- tick and its new state to the ChainSync and BlockFetch servers. 
@@ -194,10 +203,10 @@ updateState srCurrentState = let newState points = do writeTVar srCurrentState =<< do - -- REVIEW: Is it ok to only unblock the peer when it is online? - -- So far we've handled Nothing in the ChainSync server by skipping the tick. - writeTChan publisher () - pure (Just points) + -- REVIEW: Is it ok to only unblock the peer when it is online? + -- So far we've handled Nothing in the ChainSync server by skipping the tick. + writeTChan publisher () + pure (Just points) pure (newState, readTChan consumer1, readTChan consumer2) -- | Create all concurrency resources and the ChainSync protocol server used @@ -218,10 +227,10 @@ makePeerResources :: makePeerResources srTracer srBlockTree srPeerId = do srCurrentState <- uncheckedNewTVarM Nothing (prUpdateState, csrTickStarted, bfrTickStarted) <- updateState srCurrentState - let prShared = SharedResources {srTracer, srBlockTree, srPeerId, srCurrentState} + let prShared = SharedResources{srTracer, srBlockTree, srPeerId, srCurrentState} prBlockFetch = makeBlockFetchResources bfrTickStarted prShared prChainSync <- makeChainSyncResources csrTickStarted prShared - pure PeerResources {prShared, prChainSync, prBlockFetch, prUpdateState} + pure PeerResources{prShared, prChainSync, prBlockFetch, prUpdateState} -- | Create resources for all given peers operating on the given block tree. 
makePeerSimulatorResources :: @@ -231,8 +240,8 @@ makePeerSimulatorResources :: NonEmpty PeerId -> m (PeerSimulatorResources m TestBlock) makePeerSimulatorResources tracer blockTree peers = do - resources <- for peers $ \ peerId -> do + resources <- for peers $ \peerId -> do peerResources <- makePeerResources tracer blockTree peerId pure (peerId, peerResources) psrHandles <- atomically newChainSyncClientHandleCollection - pure PeerSimulatorResources {psrPeers = Map.fromList $ toList resources, psrHandles} + pure PeerSimulatorResources{psrPeers = Map.fromList $ toList resources, psrHandles} diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Run.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Run.hs index 86f4710af3..6f37d133f7 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Run.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Run.hs @@ -4,139 +4,150 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} -module Test.Consensus.PeerSimulator.Run ( - SchedulerConfig (..) +module Test.Consensus.PeerSimulator.Run + ( SchedulerConfig (..) 
, debugScheduler , defaultSchedulerConfig , runPointSchedule ) where -import Control.Monad (foldM, forM, void, when) -import Control.Monad.Class.MonadTime (MonadTime) -import Control.Monad.Class.MonadTimer.SI (MonadTimer) -import Control.ResourceRegistry -import Control.Tracer (Tracer (..), nullTracer, traceWith) -import Data.Coerce (coerce) -import Data.Foldable (for_) -import Data.List (sort) -import qualified Data.List.NonEmpty as NonEmpty -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config (TopLevelConfig (..)) -import Ouroboros.Consensus.Genesis.Governor (gddWatcher) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime) -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (CSJConfig (..), CSJEnabledConfig (..), ChainDbView, - ChainSyncClientHandle, - ChainSyncClientHandleCollection (..), - ChainSyncLoPBucketConfig (..), - ChainSyncLoPBucketEnabledConfig (..), viewChainSyncState) -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CSClient -import qualified Ouroboros.Consensus.Node.GsmState as GSM -import Ouroboros.Consensus.Storage.ChainDB.API -import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB -import Ouroboros.Consensus.Util.Condense (Condense (..)) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.STM (forkLinkedWatcher) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.BlockFetch (FetchClientRegistry, - bracketSyncWithFetchClient, newFetchClientRegistry) -import Ouroboros.Network.Channel (createConnectedChannels) -import Ouroboros.Network.ControlMessage (ControlMessage (..), - ControlMessageSTM) -import Ouroboros.Network.Protocol.ChainSync.Codec -import Ouroboros.Network.Util.ShowProxy (ShowProxy) -import qualified 
Test.Consensus.PeerSimulator.BlockFetch as BlockFetch -import qualified Test.Consensus.PeerSimulator.ChainSync as ChainSync -import Test.Consensus.PeerSimulator.Config -import qualified Test.Consensus.PeerSimulator.CSJInvariants as CSJInvariants -import Test.Consensus.PeerSimulator.NodeLifecycle -import Test.Consensus.PeerSimulator.Resources -import Test.Consensus.PeerSimulator.StateDiagram - (peerSimStateDiagramSTMTracerDebug) -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PeerSimulator.Trace -import Test.Consensus.PointSchedule (BlockFetchTimeout, - CSJParams (..), GenesisTest (..), GenesisTestFull, - LoPBucketParams (..), PointSchedule (..), peersStates, - peersStatesRelative) -import Test.Consensus.PointSchedule.NodeState (NodeState) -import Test.Consensus.PointSchedule.Peers (Peer (..), PeerId, - getPeerIds) -import Test.Util.ChainDB -import Test.Util.Header (dropTimeFromFragment) -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (TestBlock) +import Control.Monad (foldM, forM, void, when) +import Control.Monad.Class.MonadTime (MonadTime) +import Control.Monad.Class.MonadTimer.SI (MonadTimer) +import Control.ResourceRegistry +import Control.Tracer (Tracer (..), nullTracer, traceWith) +import Data.Coerce (coerce) +import Data.Foldable (for_) +import Data.List (sort) +import Data.List.NonEmpty qualified as NonEmpty +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config (TopLevelConfig (..)) +import Ouroboros.Consensus.Genesis.Governor (gddWatcher) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime) +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( CSJConfig (..) + , CSJEnabledConfig (..) + , ChainDbView + , ChainSyncClientHandle + , ChainSyncClientHandleCollection (..) + , ChainSyncLoPBucketConfig (..) 
+ , ChainSyncLoPBucketEnabledConfig (..) + , viewChainSyncState + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CSClient +import Ouroboros.Consensus.Node.GsmState qualified as GSM +import Ouroboros.Consensus.Storage.ChainDB.API +import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB +import Ouroboros.Consensus.Util.Condense (Condense (..)) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.STM (forkLinkedWatcher) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.BlockFetch + ( FetchClientRegistry + , bracketSyncWithFetchClient + , newFetchClientRegistry + ) +import Ouroboros.Network.Channel (createConnectedChannels) +import Ouroboros.Network.ControlMessage + ( ControlMessage (..) + , ControlMessageSTM + ) +import Ouroboros.Network.Protocol.ChainSync.Codec +import Ouroboros.Network.Util.ShowProxy (ShowProxy) +import Test.Consensus.PeerSimulator.BlockFetch qualified as BlockFetch +import Test.Consensus.PeerSimulator.CSJInvariants qualified as CSJInvariants +import Test.Consensus.PeerSimulator.ChainSync qualified as ChainSync +import Test.Consensus.PeerSimulator.Config +import Test.Consensus.PeerSimulator.NodeLifecycle +import Test.Consensus.PeerSimulator.Resources +import Test.Consensus.PeerSimulator.StateDiagram + ( peerSimStateDiagramSTMTracerDebug + ) +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PeerSimulator.Trace +import Test.Consensus.PointSchedule + ( BlockFetchTimeout + , CSJParams (..) + , GenesisTest (..) + , GenesisTestFull + , LoPBucketParams (..) + , PointSchedule (..) + , peersStates + , peersStatesRelative + ) +import Test.Consensus.PointSchedule.NodeState (NodeState) +import Test.Consensus.PointSchedule.Peers + ( Peer (..) 
+ , PeerId + , getPeerIds + ) +import Test.Util.ChainDB +import Test.Util.Header (dropTimeFromFragment) +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock (TestBlock) -- | Behavior config for the scheduler. -data SchedulerConfig = - SchedulerConfig { - -- | Whether to enable timeouts for the ChainSync protocol. The value of - -- timeouts themselves is defined in 'GenesisTest'. - scEnableChainSyncTimeouts :: Bool - - -- | Whether to enable timeouts for the BlockFetch protocol. The value of - -- timeouts themselves is defined in 'GenesisTest'. - , scEnableBlockFetchTimeouts :: Bool - - -- | If 'True', 'Test.Consensus.Genesis.Setup.runTest' will print traces - -- to stderr. - -- - -- Use 'debugScheduler' to toggle it conveniently. - , scDebug :: Bool - - -- | Whether to trace when running the scheduler. - , scTrace :: Bool - - -- | Whether to trace only the current state of the candidates and selection, - -- which provides a less verbose view of the test progress. - , scTraceState :: Bool - - -- | Enable Limit on Eagerness (LoE) and the Genesis Density Disconnection - -- governor (GDD). - , scEnableLoE :: Bool - - -- | Whether to enable the LoP. The parameters of the LoP come from - -- 'GenesisTest'. - , scEnableLoP :: Bool - - -- | Enable node downtime if this is 'Just', using the value as minimum tick - -- duration to trigger it. - , scDowntime :: Maybe DiffTime - - -- | Enable the use of ChainSel starvation information in the block fetch - -- decision logic. It is never actually disabled, but rather the grace - -- period is made virtually infinite. - , scEnableChainSelStarvation :: Bool - - -- | Whether to enable ChainSync Jumping. The parameters come from - -- 'GenesisTest'. - , scEnableCSJ :: Bool +data SchedulerConfig + = SchedulerConfig + { scEnableChainSyncTimeouts :: Bool + -- ^ Whether to enable timeouts for the ChainSync protocol. The value of + -- timeouts themselves is defined in 'GenesisTest'. 
+ , scEnableBlockFetchTimeouts :: Bool + -- ^ Whether to enable timeouts for the BlockFetch protocol. The value of + -- timeouts themselves is defined in 'GenesisTest'. + , scDebug :: Bool + -- ^ If 'True', 'Test.Consensus.Genesis.Setup.runTest' will print traces + -- to stderr. + -- + -- Use 'debugScheduler' to toggle it conveniently. + , scTrace :: Bool + -- ^ Whether to trace when running the scheduler. + , scTraceState :: Bool + -- ^ Whether to trace only the current state of the candidates and selection, + -- which provides a less verbose view of the test progress. + , scEnableLoE :: Bool + -- ^ Enable Limit on Eagerness (LoE) and the Genesis Density Disconnection + -- governor (GDD). + , scEnableLoP :: Bool + -- ^ Whether to enable the LoP. The parameters of the LoP come from + -- 'GenesisTest'. + , scDowntime :: Maybe DiffTime + -- ^ Enable node downtime if this is 'Just', using the value as minimum tick + -- duration to trigger it. + , scEnableChainSelStarvation :: Bool + -- ^ Enable the use of ChainSel starvation information in the block fetch + -- decision logic. It is never actually disabled, but rather the grace + -- period is made virtually infinite. + , scEnableCSJ :: Bool + -- ^ Whether to enable ChainSync Jumping. The parameters come from + -- 'GenesisTest'. 
} -- | Default scheduler config defaultSchedulerConfig :: SchedulerConfig defaultSchedulerConfig = - SchedulerConfig { - scEnableChainSyncTimeouts = True, - scEnableBlockFetchTimeouts = True, - scDebug = False, - scTrace = True, - scTraceState = False, - scEnableLoE = False, - scEnableLoP = False, - scDowntime = Nothing, - scEnableChainSelStarvation = True, - scEnableCSJ = False - } + SchedulerConfig + { scEnableChainSyncTimeouts = True + , scEnableBlockFetchTimeouts = True + , scDebug = False + , scTrace = True + , scTraceState = False + , scEnableLoE = False + , scEnableLoP = False + , scDowntime = Nothing + , scEnableChainSelStarvation = True + , scEnableCSJ = False + } -- | Enable debug tracing during a scheduler test. debugScheduler :: SchedulerConfig -> SchedulerConfig -debugScheduler conf = conf { scDebug = True } +debugScheduler conf = conf{scDebug = True} -- | Run a ChainSync protocol for one peer, consisting of a server and client. -- @@ -167,23 +178,33 @@ startChainSyncConnectionThread cfg chainDbView fetchClientRegistry - SharedResources {srPeerId} - ChainSyncResources {csrServer} + SharedResources{srPeerId} + ChainSyncResources{csrServer} chainSyncTimeouts_ chainSyncLoPBucketConfig csjConfig tracers - varHandles - = do - (clientChannel, serverChannel) <- createConnectedChannels - clientThread <- - forkLinkedThread registry ("ChainSyncClient" <> condense srPeerId) $ - bracketSyncWithFetchClient fetchClientRegistry srPeerId $ - ChainSync.runChainSyncClient tracer cfg chainDbView srPeerId chainSyncTimeouts_ chainSyncLoPBucketConfig csjConfig tracers varHandles clientChannel - serverThread <- - forkLinkedThread registry ("ChainSyncServer" <> condense srPeerId) $ - ChainSync.runChainSyncServer tracer srPeerId tracers csrServer serverChannel - pure (clientThread, serverThread) + varHandles = + do + (clientChannel, serverChannel) <- createConnectedChannels + clientThread <- + forkLinkedThread registry ("ChainSyncClient" <> condense srPeerId) $ + 
bracketSyncWithFetchClient fetchClientRegistry srPeerId $ + ChainSync.runChainSyncClient + tracer + cfg + chainDbView + srPeerId + chainSyncTimeouts_ + chainSyncLoPBucketConfig + csjConfig + tracers + varHandles + clientChannel + serverThread <- + forkLinkedThread registry ("ChainSyncServer" <> condense srPeerId) $ + ChainSync.runChainSyncServer tracer srPeerId tracers csrServer serverChannel + pure (clientThread, serverThread) -- | Start the BlockFetch client, using the supplied 'FetchClientRegistry' to -- register it for synchronization with the ChainSync client. @@ -204,13 +225,20 @@ startBlockFetchConnectionThread tracers fetchClientRegistry controlMsgSTM - SharedResources {srPeerId} - BlockFetchResources {bfrServer} + SharedResources{srPeerId} + BlockFetchResources{bfrServer} blockFetchTimeouts = do (clientChannel, serverChannel) <- createConnectedChannels clientThread <- forkLinkedThread registry ("BlockFetchClient" <> condense srPeerId) $ - BlockFetch.runBlockFetchClient tracer srPeerId blockFetchTimeouts tracers fetchClientRegistry controlMsgSTM clientChannel + BlockFetch.runBlockFetchClient + tracer + srPeerId + blockFetchTimeouts + tracers + fetchClientRegistry + controlMsgSTM + clientChannel serverThread <- forkLinkedThread registry ("BlockFetchServer" <> condense srPeerId) $ BlockFetch.runBlockFetchServer tracer srPeerId tracers bfrServer serverChannel @@ -219,25 +247,25 @@ startBlockFetchConnectionThread -- | Wait for the given duration, but if the duration is longer than the minimum -- duration in the live cycle, shutdown the node and restart it after the delay. 
smartDelay :: - (MonadDelay m) => + MonadDelay m => NodeLifecycle blk m -> LiveNode blk m -> DiffTime -> m (LiveNode blk m) -smartDelay lifecycle@NodeLifecycle {nlStart, nlShutdown} node duration +smartDelay lifecycle@NodeLifecycle{nlStart, nlShutdown} node duration | itIsTimeToRestartTheNode lifecycle duration = do - results <- nlShutdown node - threadDelay duration - nlStart results + results <- nlShutdown node + threadDelay duration + nlStart results smartDelay _ node duration = do threadDelay duration pure node itIsTimeToRestartTheNode :: NodeLifecycle blk m -> DiffTime -> Bool -itIsTimeToRestartTheNode NodeLifecycle {nlMinDuration} duration = +itIsTimeToRestartTheNode NodeLifecycle{nlMinDuration} duration = case nlMinDuration of Just minInterval -> duration > minInterval - Nothing -> False + Nothing -> False -- | The 'Tick' contains a state update for a specific peer. -- If the peer has not terminated by protocol rules, this will update its TMVar @@ -245,7 +273,8 @@ itIsTimeToRestartTheNode NodeLifecycle {nlMinDuration} duration = -- for new instructions. -- -- TODO doc is outdated -dispatchTick :: forall m blk. +dispatchTick :: + forall m blk. (IOLike m, HasHeader (Header blk)) => Tracer m (TraceSchedulerEvent blk) -> STM m (Map PeerId (ChainSyncClientHandle m blk)) -> @@ -256,25 +285,26 @@ dispatchTick :: forall m blk. m (LiveNode blk m) dispatchTick tracer varHandles peers lifecycle node (number, (duration, Peer pid state)) = case peers Map.!? pid of - Just PeerResources {prUpdateState} -> do + Just PeerResources{prUpdateState} -> do traceNewTick atomically (prUpdateState state) newNode <- smartDelay lifecycle node duration traceWith (lnStateTracer newNode) () pure newNode Nothing -> error "“The impossible happened,” as GHC would say." - where - traceNewTick :: m () - traceNewTick = do - currentChain <- atomically $ ChainDB.getCurrentChain (lnChainDb node) - (csState, jumpingStates) <- atomically $ do - m <- varHandles - csState <- traverse (readTVar . 
CSClient.cschState) (m Map.!? pid) - jumpingStates <- forM (Map.toList m) $ \(peer, h) -> do - st <- readTVar (CSClient.cschJumping h) - pure (peer, st) - pure (csState, jumpingStates) - traceWith tracer $ TraceNewTick + where + traceNewTick :: m () + traceNewTick = do + currentChain <- atomically $ ChainDB.getCurrentChain (lnChainDb node) + (csState, jumpingStates) <- atomically $ do + m <- varHandles + csState <- traverse (readTVar . CSClient.cschState) (m Map.!? pid) + jumpingStates <- forM (Map.toList m) $ \(peer, h) -> do + st <- readTVar (CSClient.cschJumping h) + pure (peer, st) + pure (csState, jumpingStates) + traceWith tracer $ + TraceNewTick number duration (Peer pid state) @@ -295,15 +325,16 @@ runScheduler :: Map PeerId (PeerResources m blk) -> NodeLifecycle blk m -> m (ChainDB m blk, StateViewTracers blk m) -runScheduler tracer varHandles ps@PointSchedule{psMinEndTime} peers lifecycle@NodeLifecycle {nlStart} = do - node0 <- nlStart LiveIntervalResult {lirActive = Map.keysSet peers, lirPeerResults = []} +runScheduler tracer varHandles ps@PointSchedule{psMinEndTime} peers lifecycle@NodeLifecycle{nlStart} = do + node0 <- nlStart LiveIntervalResult{lirActive = Map.keysSet peers, lirPeerResults = []} traceWith tracer TraceBeginningOfTime - nodeEnd <- foldM tick node0 (zip [0..] (peersStatesRelative ps)) + nodeEnd <- foldM tick node0 (zip [0 ..] 
(peersStatesRelative ps)) let extraDelay = case take 1 $ reverse $ peersStates ps of - [(t, _)] -> if t < psMinEndTime + [(t, _)] -> + if t < psMinEndTime then Just $ diffTime psMinEndTime t else Nothing - _ -> Just $ coerce psMinEndTime + _ -> Just $ coerce psMinEndTime LiveNode{lnChainDb, lnStateViewTracers} <- case extraDelay of Just duration -> do @@ -311,14 +342,15 @@ runScheduler tracer varHandles ps@PointSchedule{psMinEndTime} peers lifecycle@No -- Give an opportunity to the node to finish whatever it was doing at -- shutdown when (itIsTimeToRestartTheNode lifecycle duration) $ - threadDelay $ coerce psMinEndTime + threadDelay $ + coerce psMinEndTime pure nodeEnd' Nothing -> pure nodeEnd traceWith tracer TraceEndOfTime pure (lnChainDb, lnStateViewTracers) - where - tick = dispatchTick tracer varHandles peers lifecycle + where + tick = dispatchTick tracer varHandles peers lifecycle -- | Create the shared resource for the LoE if the feature is enabled in the config. -- This is used by the ChainDB and the GDD governor. 
@@ -326,11 +358,11 @@ mkLoEVar :: IOLike m => SchedulerConfig -> m (LoE (StrictTVar m (AnchoredFragment (HeaderWithTime TestBlock)))) -mkLoEVar SchedulerConfig {scEnableLoE} - | scEnableLoE - = LoEEnabled <$> newTVarIO (AF.Empty AF.AnchorGenesis) - | otherwise - = pure LoEDisabled +mkLoEVar SchedulerConfig{scEnableLoE} + | scEnableLoE = + LoEEnabled <$> newTVarIO (AF.Empty AF.AnchorGenesis) + | otherwise = + pure LoEDisabled mkStateTracer :: IOLike m => @@ -339,18 +371,18 @@ mkStateTracer :: PeerSimulatorResources m TestBlock -> ChainDB m TestBlock -> m (Tracer m ()) -mkStateTracer schedulerConfig GenesisTest {gtBlockTree} PeerSimulatorResources {psrHandles, psrPeers} chainDb +mkStateTracer schedulerConfig GenesisTest{gtBlockTree} PeerSimulatorResources{psrHandles, psrPeers} chainDb | scTraceState schedulerConfig , let getCandidates = viewChainSyncState (cschcMap psrHandles) CSClient.csCandidate getCurrentChain = ChainDB.getCurrentChain chainDb - getPoints = traverse readTVar (srCurrentState . prShared <$> psrPeers) - = peerSimStateDiagramSTMTracerDebug + getPoints = traverse readTVar (srCurrentState . prShared <$> psrPeers) = + peerSimStateDiagramSTMTracerDebug gtBlockTree getCurrentChain (fmap (Map.map dropTimeFromFragment) getCandidates) getPoints - | otherwise - = pure nullTracer + | otherwise = + pure nullTracer -- | Start all threads for ChainSync, BlockFetch and GDD, using the resources -- for a single live interval. 
@@ -372,13 +404,13 @@ startNode schedulerConfig genesisTest interval = do let chainDbView = CSClient.defaultChainDbView lnChainDb activePeers = Map.toList $ Map.restrictKeys (psrPeers lrPeerSim) (lirActive liveResult) peersStartOrder = psStartOrder ++ sort [pid | (pid, _) <- activePeers, pid `notElem` psStartOrder] - activePeersOrdered = [ - peerResources - | pid <- peersStartOrder - , (pid', peerResources) <- activePeers - , pid == pid' - ] - for_ activePeersOrdered $ \PeerResources {prShared, prChainSync, prBlockFetch} -> do + activePeersOrdered = + [ peerResources + | pid <- peersStartOrder + , (pid', peerResources) <- activePeers + , pid == pid' + ] + for_ activePeersOrdered $ \PeerResources{prShared, prChainSync, prBlockFetch} -> do let pid = srPeerId prShared forkLinkedThread lrRegistry ("Peer overview " ++ show pid) $ -- The peerRegistry helps ensuring that if any thread fails, then @@ -387,29 +419,29 @@ startNode schedulerConfig genesisTest interval = do withRegistry $ \peerRegistry -> do (csClient, csServer) <- startChainSyncConnectionThread - peerRegistry - tracer - lrConfig - chainDbView - fetchClientRegistry - prShared - prChainSync - chainSyncTimeouts_ - chainSyncLoPBucketConfig - csjConfig - lnStateViewTracers - handles + peerRegistry + tracer + lrConfig + chainDbView + fetchClientRegistry + prShared + prChainSync + chainSyncTimeouts_ + chainSyncLoPBucketConfig + csjConfig + lnStateViewTracers + handles BlockFetch.startKeepAliveThread peerRegistry fetchClientRegistry pid (bfClient, bfServer) <- startBlockFetchConnectionThread - peerRegistry - tracer - lnStateViewTracers - fetchClientRegistry - (pure Continue) - prShared - prBlockFetch - blockFetchTimeouts_ + peerRegistry + tracer + lnStateViewTracers + fetchClientRegistry + (pure Continue) + prShared + prBlockFetch + blockFetchTimeouts_ waitAnyThread [csClient, csServer, bfClient, bfServer] -- The block fetch logic needs to be started after the block fetch clients -- otherwise, an internal assertion 
fails because getCandidates yields more @@ -422,65 +454,69 @@ startNode schedulerConfig genesisTest interval = do fetchClientRegistry handles - for_ lrLoEVar $ \ var -> do - forkLinkedWatcher lrRegistry "LoE updater background" $ - gddWatcher - lrConfig - (mkGDDTracerTestBlock lrTracer) - lnChainDb - 0.0 -- The rate limit makes simpler the calculations of how long tests - -- should run and still should produce interesting interleavings. - -- It is similar to the setting of bfcDecisionLoopInterval in - -- Test.Consensus.PeerSimulator.BlockFetch - (pure GSM.Syncing) -- TODO actually run GSM - (cschcMap handles) - var - - void $ forkLinkedWatcher lrRegistry "CSJ invariants watcher" $ - CSJInvariants.watcher (cschcMap handles) - where - LiveResources {lrRegistry, lrTracer, lrConfig, lrPeerSim, lrLoEVar} = resources - - LiveInterval { - liResources = resources - , liResult = liveResult - , liNode = LiveNode {lnChainDb, lnStateViewTracers} - } = interval - - GenesisTest - { gtChainSyncTimeouts - , gtBlockFetchTimeouts - , gtLoPBucketParams = LoPBucketParams { lbpCapacity, lbpRate } - , gtCSJParams = CSJParams { csjpJumpSize } - , gtSchedule = PointSchedule {psStartOrder} - } = genesisTest - - StateViewTracers{svtTraceTracer} = lnStateViewTracers - - -- FIXME: This type of configuration should move to `Trace.mkTracer`. - tracer = if scTrace schedulerConfig + for_ lrLoEVar $ \var -> do + forkLinkedWatcher lrRegistry "LoE updater background" $ + gddWatcher + lrConfig + (mkGDDTracerTestBlock lrTracer) + lnChainDb + 0.0 -- The rate limit makes simpler the calculations of how long tests + -- should run and still should produce interesting interleavings. 
+ -- It is similar to the setting of bfcDecisionLoopInterval in + -- Test.Consensus.PeerSimulator.BlockFetch + (pure GSM.Syncing) -- TODO actually run GSM + (cschcMap handles) + var + + void $ + forkLinkedWatcher lrRegistry "CSJ invariants watcher" $ + CSJInvariants.watcher (cschcMap handles) + where + LiveResources{lrRegistry, lrTracer, lrConfig, lrPeerSim, lrLoEVar} = resources + + LiveInterval + { liResources = resources + , liResult = liveResult + , liNode = LiveNode{lnChainDb, lnStateViewTracers} + } = interval + + GenesisTest + { gtChainSyncTimeouts + , gtBlockFetchTimeouts + , gtLoPBucketParams = LoPBucketParams{lbpCapacity, lbpRate} + , gtCSJParams = CSJParams{csjpJumpSize} + , gtSchedule = PointSchedule{psStartOrder} + } = genesisTest + + StateViewTracers{svtTraceTracer} = lnStateViewTracers + + -- FIXME: This type of configuration should move to `Trace.mkTracer`. + tracer = + if scTrace schedulerConfig then Tracer (\evt -> traceWith lrTracer evt >> traceWith svtTraceTracer evt) else svtTraceTracer - chainSyncTimeouts_ = - if scEnableChainSyncTimeouts schedulerConfig - then gtChainSyncTimeouts - else ChainSync.chainSyncNoTimeouts + chainSyncTimeouts_ = + if scEnableChainSyncTimeouts schedulerConfig + then gtChainSyncTimeouts + else ChainSync.chainSyncNoTimeouts - chainSyncLoPBucketConfig = - if scEnableLoP schedulerConfig - then ChainSyncLoPBucketEnabled ChainSyncLoPBucketEnabledConfig { csbcCapacity = lbpCapacity, csbcRate = lbpRate } - else ChainSyncLoPBucketDisabled + chainSyncLoPBucketConfig = + if scEnableLoP schedulerConfig + then + ChainSyncLoPBucketEnabled + ChainSyncLoPBucketEnabledConfig{csbcCapacity = lbpCapacity, csbcRate = lbpRate} + else ChainSyncLoPBucketDisabled - csjConfig = - if scEnableCSJ schedulerConfig - then CSJEnabled CSJEnabledConfig { csjcJumpSize = csjpJumpSize } - else CSJDisabled + csjConfig = + if scEnableCSJ schedulerConfig + then CSJEnabled CSJEnabledConfig{csjcJumpSize = csjpJumpSize} + else CSJDisabled - 
blockFetchTimeouts_ = - if scEnableBlockFetchTimeouts schedulerConfig - then gtBlockFetchTimeouts - else BlockFetch.blockFetchNoTimeouts + blockFetchTimeouts_ = + if scEnableBlockFetchTimeouts schedulerConfig + then gtBlockFetchTimeouts + else BlockFetch.blockFetchNoTimeouts -- | Set up all resources related to node start/shutdown. nodeLifecycle :: @@ -496,8 +532,8 @@ nodeLifecycle schedulerConfig genesisTest lrTracer lrRegistry lrPeerSim = do lrLoEVar <- mkLoEVar schedulerConfig let resources = - LiveResources { - lrRegistry + LiveResources + { lrRegistry , lrTracer , lrSTracer = mkStateTracer schedulerConfig genesisTest lrPeerSim , lrConfig @@ -505,19 +541,20 @@ nodeLifecycle schedulerConfig genesisTest lrTracer lrRegistry lrPeerSim = do , lrCdb , lrLoEVar } - pure NodeLifecycle { - nlMinDuration = scDowntime schedulerConfig - , nlStart = lifecycleStart (startNode schedulerConfig genesisTest) resources - , nlShutdown = lifecycleStop resources - } - where - lrConfig = defaultCfg k gtForecastRange gtGenesisWindow - - GenesisTest { - gtSecurityParam = k - , gtForecastRange - , gtGenesisWindow - } = genesisTest + pure + NodeLifecycle + { nlMinDuration = scDowntime schedulerConfig + , nlStart = lifecycleStart (startNode schedulerConfig genesisTest) resources + , nlShutdown = lifecycleStop resources + } + where + lrConfig = defaultCfg k gtForecastRange gtGenesisWindow + + GenesisTest + { gtSecurityParam = k + , gtForecastRange + , gtGenesisWindow + } = genesisTest -- | Construct STM resources, set up ChainSync and BlockFetch threads, and -- send all ticks in a 'PointSchedule' to all given peers in turn. 
@@ -530,21 +567,25 @@ runPointSchedule :: m (StateView TestBlock) runPointSchedule schedulerConfig genesisTest tracer0 = withRegistry $ \registry -> do - peerSim <- makePeerSimulatorResources tracer gtBlockTree (NonEmpty.fromList $ getPeerIds $ psSchedule gtSchedule) + peerSim <- + makePeerSimulatorResources + tracer + gtBlockTree + (NonEmpty.fromList $ getPeerIds $ psSchedule gtSchedule) lifecycle <- nodeLifecycle schedulerConfig genesisTest tracer registry peerSim - (chainDb, stateViewTracers) <- runScheduler - (Tracer $ traceWith tracer . TraceSchedulerEvent) - (cschcMap (psrHandles peerSim)) - gtSchedule - (psrPeers peerSim) - lifecycle + (chainDb, stateViewTracers) <- + runScheduler + (Tracer $ traceWith tracer . TraceSchedulerEvent) + (cschcMap (psrHandles peerSim)) + gtSchedule + (psrPeers peerSim) + lifecycle snapshotStateView stateViewTracers chainDb - where - - GenesisTest { - gtBlockTree - , gtSchedule - } = genesisTest - - -- FIXME: This type of configuration should move to `Trace.mkTracer`. - tracer = if scTrace schedulerConfig then tracer0 else nullTracer + where + GenesisTest + { gtBlockTree + , gtSchedule + } = genesisTest + + -- FIXME: This type of configuration should move to `Trace.mkTracer`. + tracer = if scTrace schedulerConfig then tracer0 else nullTracer diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledBlockFetchServer.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledBlockFetchServer.hs index 8b4da699af..cb523d570f 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledBlockFetchServer.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledBlockFetchServer.hs @@ -2,55 +2,62 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.Consensus.PeerSimulator.ScheduledBlockFetchServer ( - BlockFetch (..) 
+module Test.Consensus.PeerSimulator.ScheduledBlockFetchServer + ( BlockFetch (..) , BlockFetchServerHandlers (..) , ScheduledBlockFetchServer (..) , SendBlocks (..) , runScheduledBlockFetchServer ) where -import Control.Tracer -import Ouroboros.Consensus.Block (Point) -import Ouroboros.Consensus.Util.IOLike (IOLike, MonadSTM (STM)) -import Ouroboros.Network.BlockFetch.ClientState (ChainRange) -import Ouroboros.Network.Protocol.BlockFetch.Server -import Test.Consensus.PeerSimulator.ScheduledServer - (ScheduledServer (..), awaitOnlineState, runHandler) -import Test.Consensus.PeerSimulator.Trace -import Test.Consensus.PointSchedule.NodeState (NodeState) -import Test.Consensus.PointSchedule.Peers (PeerId) +import Control.Tracer +import Ouroboros.Consensus.Block (Point) +import Ouroboros.Consensus.Util.IOLike (IOLike, MonadSTM (STM)) +import Ouroboros.Network.BlockFetch.ClientState (ChainRange) +import Ouroboros.Network.Protocol.BlockFetch.Server +import Test.Consensus.PeerSimulator.ScheduledServer + ( ScheduledServer (..) + , awaitOnlineState + , runHandler + ) +import Test.Consensus.PeerSimulator.Trace +import Test.Consensus.PointSchedule.NodeState (NodeState) +import Test.Consensus.PointSchedule.Peers (PeerId) -- | Return values for the 'handlerSendBlocks'. -data SendBlocks blk = - SendBlock blk [blk] - | - BatchDone +data SendBlocks blk + = SendBlock blk [blk] + | BatchDone -- | Return values for the 'handlerBlockFetch'. -data BlockFetch blk = - StartBatch [blk] - -- ^ As a response to the client request, we should send the blocks in the - -- given batch. - | - NoBlocks - -- ^ Negative response to the client's request for blocks. +data BlockFetch blk + = -- | As a response to the client request, we should send the blocks in the + -- given batch. + StartBatch [blk] + | -- | Negative response to the client's request for blocks. + NoBlocks deriving (Eq, Show) -- | Handlers for the scheduled BlockFetch server. 
-data BlockFetchServerHandlers m state blk = - BlockFetchServerHandlers { - bfshBlockFetch :: ChainRange (Point blk) -> state -> STM m (Maybe (BlockFetch blk), [TraceScheduledBlockFetchServerEvent state blk]), - bfshSendBlocks :: [blk] -> state -> STM m (Maybe (SendBlocks blk), [TraceScheduledBlockFetchServerEvent state blk]) +data BlockFetchServerHandlers m state blk + = BlockFetchServerHandlers + { bfshBlockFetch :: + ChainRange (Point blk) -> + state -> + STM m (Maybe (BlockFetch blk), [TraceScheduledBlockFetchServerEvent state blk]) + , bfshSendBlocks :: + [blk] -> + state -> + STM m (Maybe (SendBlocks blk), [TraceScheduledBlockFetchServerEvent state blk]) } -- | Resources used by a scheduled BlockFetch server. This comprises a generic -- 'ScheduledServer' and BlockFetch-specific handlers. -data ScheduledBlockFetchServer m state blk = - ScheduledBlockFetchServer { - sbfsServer :: ScheduledServer m state blk, - sbfsTracer :: Tracer m (TraceScheduledBlockFetchServerEvent state blk), - sbfsHandlers :: BlockFetchServerHandlers m state blk +data ScheduledBlockFetchServer m state blk + = ScheduledBlockFetchServer + { sbfsServer :: ScheduledServer m state blk + , sbfsTracer :: Tracer m (TraceScheduledBlockFetchServerEvent state blk) + , sbfsHandlers :: BlockFetchServerHandlers m state blk } -- | Make a 'BlockFetchServer' able to run with the normal infrastructure from a @@ -60,27 +67,27 @@ scheduledBlockFetchServer :: IOLike m => ScheduledBlockFetchServer m state blk -> BlockFetchServer blk (Point blk) m () -scheduledBlockFetchServer ScheduledBlockFetchServer {sbfsServer, sbfsTracer, sbfsHandlers} = +scheduledBlockFetchServer ScheduledBlockFetchServer{sbfsServer, sbfsTracer, sbfsHandlers} = server - where - server = BlockFetchServer blockFetch () + where + server = BlockFetchServer blockFetch () - BlockFetchServerHandlers {bfshBlockFetch, bfshSendBlocks} = sbfsHandlers + BlockFetchServerHandlers{bfshBlockFetch, bfshSendBlocks} = sbfsHandlers - blockFetch range = - 
runHandler sbfsServer "BlockFetch" (bfshBlockFetch range) sbfsTracer $ \case - StartBatch blocks -> do - pure $ SendMsgStartBatch (sendBlocks blocks) - NoBlocks -> do - trace $ TraceNoBlocks - pure (SendMsgNoBlocks (server <$ awaitOnlineState sbfsServer)) + blockFetch range = + runHandler sbfsServer "BlockFetch" (bfshBlockFetch range) sbfsTracer $ \case + StartBatch blocks -> do + pure $ SendMsgStartBatch (sendBlocks blocks) + NoBlocks -> do + trace $ TraceNoBlocks + pure (SendMsgNoBlocks (server <$ awaitOnlineState sbfsServer)) - sendBlocks bs = - runHandler sbfsServer "SendBlocks" (bfshSendBlocks bs) sbfsTracer $ \case - SendBlock blk blks -> pure (SendMsgBlock blk (sendBlocks blks)) - BatchDone -> pure (SendMsgBatchDone (pure server)) + sendBlocks bs = + runHandler sbfsServer "SendBlocks" (bfshSendBlocks bs) sbfsTracer $ \case + SendBlock blk blks -> pure (SendMsgBlock blk (sendBlocks blks)) + BatchDone -> pure (SendMsgBatchDone (pure server)) - trace = traceWith sbfsTracer + trace = traceWith sbfsTracer -- | Construct a BlockFetch server for the peer simulator. -- @@ -94,13 +101,16 @@ runScheduledBlockFetchServer :: BlockFetchServerHandlers m (NodeState blk) blk -> BlockFetchServer blk (Point blk) m () runScheduledBlockFetchServer ssPeerId ssTickStarted ssCurrentState tracer sbfsHandlers = - scheduledBlockFetchServer ScheduledBlockFetchServer { - sbfsServer = ScheduledServer { - ssPeerId, - ssTickStarted, - ssCurrentState, - ssCommonTracer = Tracer (traceWith tracer . TraceScheduledBlockFetchServerEvent ssPeerId . TraceHandlerEventBF) - }, - sbfsTracer = Tracer (traceWith tracer . TraceScheduledBlockFetchServerEvent ssPeerId), - sbfsHandlers - } + scheduledBlockFetchServer + ScheduledBlockFetchServer + { sbfsServer = + ScheduledServer + { ssPeerId + , ssTickStarted + , ssCurrentState + , ssCommonTracer = + Tracer (traceWith tracer . TraceScheduledBlockFetchServerEvent ssPeerId . TraceHandlerEventBF) + } + , sbfsTracer = Tracer (traceWith tracer . 
TraceScheduledBlockFetchServerEvent ssPeerId) + , sbfsHandlers + } diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledChainSyncServer.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledChainSyncServer.hs index bc4d7b3916..f52aaa5ffd 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledChainSyncServer.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledChainSyncServer.hs @@ -4,65 +4,72 @@ -- | A ChainSync protocol server that allows external scheduling of its -- operations, while deferring the implementation of the message handler -- logic to a simplified, abstract interface provided as a parameter. -module Test.Consensus.PeerSimulator.ScheduledChainSyncServer ( - ChainSyncServerHandlers (..) +module Test.Consensus.PeerSimulator.ScheduledChainSyncServer + ( ChainSyncServerHandlers (..) , FindIntersect (..) , RequestNext (..) , ScheduledChainSyncServer (..) 
, runScheduledChainSyncServer ) where -import Control.Tracer (Tracer (Tracer), traceWith) -import Ouroboros.Consensus.Block (Header) -import Ouroboros.Consensus.Block.Abstract (Point (..)) -import Ouroboros.Consensus.Util.IOLike (IOLike, MonadSTM (STM)) -import Ouroboros.Network.Block (Tip (..)) -import Ouroboros.Network.Protocol.ChainSync.Server - (ChainSyncServer (..), - ServerStIdle (ServerStIdle, recvMsgDoneClient, recvMsgFindIntersect, recvMsgRequestNext), - ServerStIntersect (SendMsgIntersectFound, SendMsgIntersectNotFound), - ServerStNext (SendMsgRollBackward, SendMsgRollForward)) -import Test.Consensus.PeerSimulator.ScheduledServer - (ScheduledServer (..), awaitOnlineState, runHandler) -import Test.Consensus.PeerSimulator.Trace - (TraceEvent (TraceScheduledChainSyncServerEvent), - TraceScheduledChainSyncServerEvent (..)) -import Test.Consensus.PointSchedule.NodeState (NodeState) -import Test.Consensus.PointSchedule.Peers (PeerId) +import Control.Tracer (Tracer (Tracer), traceWith) +import Ouroboros.Consensus.Block (Header) +import Ouroboros.Consensus.Block.Abstract (Point (..)) +import Ouroboros.Consensus.Util.IOLike (IOLike, MonadSTM (STM)) +import Ouroboros.Network.Block (Tip (..)) +import Ouroboros.Network.Protocol.ChainSync.Server + ( ChainSyncServer (..) + , ServerStIdle (ServerStIdle, recvMsgDoneClient, recvMsgFindIntersect, recvMsgRequestNext) + , ServerStIntersect (SendMsgIntersectFound, SendMsgIntersectNotFound) + , ServerStNext (SendMsgRollBackward, SendMsgRollForward) + ) +import Test.Consensus.PeerSimulator.ScheduledServer + ( ScheduledServer (..) + , awaitOnlineState + , runHandler + ) +import Test.Consensus.PeerSimulator.Trace + ( TraceEvent (TraceScheduledChainSyncServerEvent) + , TraceScheduledChainSyncServerEvent (..) 
+ ) +import Test.Consensus.PointSchedule.NodeState (NodeState) +import Test.Consensus.PointSchedule.Peers (PeerId) -- | Pure representation of the messages produced by the handler for the @StNext@ -- protocol state of a ChainSync server. -data RequestNext blk = - RollForward (Header blk) (Tip blk) - | - RollBackward (Point blk) (Tip blk) - | - AwaitReply +data RequestNext blk + = RollForward (Header blk) (Tip blk) + | RollBackward (Point blk) (Tip blk) + | AwaitReply -- | Pure representation of the messages produced by the handler for the @StIntersect@ -- protocol state of a ChainSync server. -data FindIntersect blk = - IntersectFound (Point blk) (Tip blk) - | - IntersectNotFound (Tip blk) +data FindIntersect blk + = IntersectFound (Point blk) (Tip blk) + | IntersectNotFound (Tip blk) -- | Handlers for the request a ChainSync server might receive from a client. -- These take an abstract argument that corresponds to the state of a point -- schedule tick and return the simplified protocol message types. -- -- See 'runHandlerWithTrace' for the meaning of @[String]@. -data ChainSyncServerHandlers m state blk = - ChainSyncServerHandlers { - csshRequestNext :: state -> STM m (Maybe (RequestNext blk), [TraceScheduledChainSyncServerEvent state blk]), - csshFindIntersection :: [Point blk] -> state -> STM m (Maybe (FindIntersect blk), [TraceScheduledChainSyncServerEvent state blk]) +data ChainSyncServerHandlers m state blk + = ChainSyncServerHandlers + { csshRequestNext :: + state -> + STM m (Maybe (RequestNext blk), [TraceScheduledChainSyncServerEvent state blk]) + , csshFindIntersection :: + [Point blk] -> + state -> + STM m (Maybe (FindIntersect blk), [TraceScheduledChainSyncServerEvent state blk]) } -- | Resources used by a ChainSync server mock. 
-data ScheduledChainSyncServer m state blk = - ScheduledChainSyncServer { - scssServer :: ScheduledServer m state blk, - scssTracer :: Tracer m (TraceScheduledChainSyncServerEvent state blk), - scssHandlers :: ChainSyncServerHandlers m state blk +data ScheduledChainSyncServer m state blk + = ScheduledChainSyncServer + { scssServer :: ScheduledServer m state blk + , scssTracer :: Tracer m (TraceScheduledChainSyncServerEvent state blk) + , scssHandlers :: ChainSyncServerHandlers m state blk } -- | Declare a mock ChainSync protocol server in its typed-protocols encoding @@ -80,54 +87,57 @@ scheduledChainSyncServer :: IOLike m => ScheduledChainSyncServer m a blk -> ChainSyncServer (Header blk) (Point blk) (Tip blk) m () -scheduledChainSyncServer ScheduledChainSyncServer {scssHandlers, scssTracer, scssServer} = +scheduledChainSyncServer ScheduledChainSyncServer{scssHandlers, scssTracer, scssServer} = go - where - ChainSyncServerHandlers {csshRequestNext, csshFindIntersection} = scssHandlers + where + ChainSyncServerHandlers{csshRequestNext, csshFindIntersection} = scssHandlers - go = - ChainSyncServer $ pure ServerStIdle { - recvMsgRequestNext - , recvMsgFindIntersect - , recvMsgDoneClient - } + go = + ChainSyncServer $ + pure + ServerStIdle + { recvMsgRequestNext + , recvMsgFindIntersect + , recvMsgDoneClient + } - recvMsgRequestNext = - runHandler scssServer "MsgRequestNext" csshRequestNext scssTracer $ \case - RollForward header tip -> do - trace $ TraceRollForward header tip - pure $ Left $ SendMsgRollForward header tip go - RollBackward point tip -> do - trace $ TraceRollBackward point tip - pure $ Left $ SendMsgRollBackward point tip go - AwaitReply -> - pure $ Right $ do -- beginning of the continuation - restart >>= \case - -- If we get 'Right', then we still do not have anything to serve - -- and we loop; what 'Right' contains is the continuation starting - -- at 'do' above; by unwrapping the 'Right', we do not send - -- another AwaitReply message (which Typed 
Protocols does not - -- allow anyway). - Right cont -> cont - Left msg -> pure msg - where - -- Yield control back to the scheduler, then wait for the next state and - -- continue processing the client's current 'MsgRequestNext'. - restart = awaitOnlineState scssServer *> recvMsgRequestNext + recvMsgRequestNext = + runHandler scssServer "MsgRequestNext" csshRequestNext scssTracer $ \case + RollForward header tip -> do + trace $ TraceRollForward header tip + pure $ Left $ SendMsgRollForward header tip go + RollBackward point tip -> do + trace $ TraceRollBackward point tip + pure $ Left $ SendMsgRollBackward point tip go + AwaitReply -> + pure $ Right $ do + -- beginning of the continuation + restart >>= \case + -- If we get 'Right', then we still do not have anything to serve + -- and we loop; what 'Right' contains is the continuation starting + -- at 'do' above; by unwrapping the 'Right', we do not send + -- another AwaitReply message (which Typed Protocols does not + -- allow anyway). + Right cont -> cont + Left msg -> pure msg + where + -- Yield control back to the scheduler, then wait for the next state and + -- continue processing the client's current 'MsgRequestNext'. 
+ restart = awaitOnlineState scssServer *> recvMsgRequestNext - recvMsgFindIntersect pts = - runHandler scssServer "MsgFindIntersect" (csshFindIntersection pts) scssTracer $ \case - IntersectNotFound tip -> do - trace TraceIntersectionNotFound - pure $ SendMsgIntersectNotFound tip go - IntersectFound intersection tip -> do - trace $ TraceIntersectionFound intersection - pure $ SendMsgIntersectFound intersection tip go + recvMsgFindIntersect pts = + runHandler scssServer "MsgFindIntersect" (csshFindIntersection pts) scssTracer $ \case + IntersectNotFound tip -> do + trace TraceIntersectionNotFound + pure $ SendMsgIntersectNotFound tip go + IntersectFound intersection tip -> do + trace $ TraceIntersectionFound intersection + pure $ SendMsgIntersectFound intersection tip go - recvMsgDoneClient = - trace TraceClientIsDone + recvMsgDoneClient = + trace TraceClientIsDone - trace = traceWith scssTracer + trace = traceWith scssTracer -- | Construct a ChainSync server for the peer simulator. -- @@ -141,13 +151,16 @@ runScheduledChainSyncServer :: ChainSyncServerHandlers m (NodeState blk) blk -> ChainSyncServer (Header blk) (Point blk) (Tip blk) m () runScheduledChainSyncServer ssPeerId ssTickStarted ssCurrentState tracer scssHandlers = - scheduledChainSyncServer ScheduledChainSyncServer { - scssServer = ScheduledServer { - ssPeerId, - ssTickStarted, - ssCurrentState, - ssCommonTracer = Tracer (traceWith tracer . TraceScheduledChainSyncServerEvent ssPeerId . TraceHandlerEventCS) - }, - scssTracer = Tracer (traceWith tracer . TraceScheduledChainSyncServerEvent ssPeerId), - scssHandlers - } + scheduledChainSyncServer + ScheduledChainSyncServer + { scssServer = + ScheduledServer + { ssPeerId + , ssTickStarted + , ssCurrentState + , ssCommonTracer = + Tracer (traceWith tracer . TraceScheduledChainSyncServerEvent ssPeerId . TraceHandlerEventCS) + } + , scssTracer = Tracer (traceWith tracer . 
TraceScheduledChainSyncServerEvent ssPeerId) + , scssHandlers + } diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledServer.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledServer.hs index 726eba197f..5149b03cb7 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledServer.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/ScheduledServer.hs @@ -7,32 +7,35 @@ -- there is nothing new to process, or what needs to process requires a -- different state of the point schedule, the scheduled server goes back to -- sleep, awaiting another tick. -module Test.Consensus.PeerSimulator.ScheduledServer ( - ScheduledServer (..) +module Test.Consensus.PeerSimulator.ScheduledServer + ( ScheduledServer (..) , awaitOnlineState , ensureCurrentState , runHandler , runHandlerWithTrace ) where -import Control.Tracer (Tracer, traceWith) -import Data.Foldable (traverse_) -import Ouroboros.Consensus.Util.IOLike (IOLike, - MonadSTM (STM, atomically)) -import Test.Consensus.PeerSimulator.Trace - (TraceScheduledServerHandlerEvent (..)) -import Test.Consensus.PointSchedule.Peers (PeerId) +import Control.Tracer (Tracer, traceWith) +import Data.Foldable (traverse_) +import Ouroboros.Consensus.Util.IOLike + ( IOLike + , MonadSTM (STM, atomically) + ) +import Test.Consensus.PeerSimulator.Trace + ( TraceScheduledServerHandlerEvent (..) 
+ ) +import Test.Consensus.PointSchedule.Peers (PeerId) -data ScheduledServer m state blk = - ScheduledServer { - ssPeerId :: PeerId, - ssCurrentState :: STM m (Maybe state), - ssTickStarted :: STM m (), - ssCommonTracer :: Tracer m (TraceScheduledServerHandlerEvent state blk) +data ScheduledServer m state blk + = ScheduledServer + { ssPeerId :: PeerId + , ssCurrentState :: STM m (Maybe state) + , ssTickStarted :: STM m () + , ssCommonTracer :: Tracer m (TraceScheduledServerHandlerEvent state blk) } nextTickState :: IOLike m => ScheduledServer m state blk -> m (Maybe state) -nextTickState ScheduledServer {ssCurrentState, ssTickStarted} = +nextTickState ScheduledServer{ssCurrentState, ssTickStarted} = atomically (ssTickStarted >> ssCurrentState) retryOffline :: IOLike m => ScheduledServer m state blk -> Maybe state -> m state @@ -92,16 +95,16 @@ runHandler :: m h runHandler server@ScheduledServer{ssCommonTracer} handlerName handler handlerTracer dispatchMessage = run - where - run = do - currentState <- ensureCurrentState server - traceWith ssCommonTracer $ TraceHandling handlerName currentState - maybe restart done =<< runHandlerWithTrace handlerTracer (handler currentState) + where + run = do + currentState <- ensureCurrentState server + traceWith ssCommonTracer $ TraceHandling handlerName currentState + maybe restart done =<< runHandlerWithTrace handlerTracer (handler currentState) - restart = do - traceWith ssCommonTracer $ TraceRestarting handlerName - awaitOnlineState server *> run + restart = do + traceWith ssCommonTracer $ TraceRestarting handlerName + awaitOnlineState server *> run - done msg = do - traceWith ssCommonTracer $ TraceDoneHandling handlerName - dispatchMessage msg + done msg = do + traceWith ssCommonTracer $ TraceDoneHandling handlerName + dispatchMessage msg diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/StateDiagram.hs 
b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/StateDiagram.hs index 7e1fe33e1d..8093224b56 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/StateDiagram.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/StateDiagram.hs @@ -8,8 +8,8 @@ -- | A pretty-printer and tracer that shows the current peer simulator state in -- a block tree, highlighting the candidate fragments, selection, and forks in -- different colors, omitting uninteresting segments. -module Test.Consensus.PeerSimulator.StateDiagram ( - PeerSimState (..) +module Test.Consensus.PeerSimulator.StateDiagram + ( PeerSimState (..) , RenderConfig (..) , defaultRenderConfig , peerSimStateDiagram @@ -19,70 +19,92 @@ module Test.Consensus.PeerSimulator.StateDiagram ( , peerSimStateDiagramWith ) where -import Cardano.Slotting.Block (BlockNo (BlockNo)) -import Cardano.Slotting.Slot (SlotNo (SlotNo), WithOrigin (..), - fromWithOrigin, withOrigin) -import Control.Monad (guard) -import Control.Monad.State.Strict (State, gets, modify', runState, - state) -import Control.Tracer (Tracer (Tracer), debugTracer, traceWith) -import Data.Bifunctor (first) -import Data.Foldable as Foldable (foldl', foldr') -import Data.List (find, intersperse, mapAccumL, sort, transpose) -import Data.List.NonEmpty (NonEmpty ((:|)), nonEmpty, (<|)) -import qualified Data.List.NonEmpty as NonEmpty -import Data.Map (Map) -import Data.Map.Strict ((!?)) -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe, mapMaybe) -import Data.String (IsString (fromString)) -import Data.Vector (Vector) -import qualified Data.Vector as Vector -import qualified Data.Vector.Mutable as MV -import Data.Word (Word64) -import qualified Debug.Trace as Debug -import GHC.Exts (IsList (..)) -import Ouroboros.Consensus.Block (ChainHash (BlockHash), Header, - WithOrigin (NotOrigin), blockHash, blockNo, blockSlot, - getHeader) -import 
Ouroboros.Consensus.Util (eitherToMaybe) -import Ouroboros.Consensus.Util.Condense (Condense (..)) -import Ouroboros.Consensus.Util.IOLike (IOLike, MonadSTM (STM), - atomically, modifyTVar, readTVar, uncheckedNewTVarM) -import Ouroboros.Network.AnchoredFragment (anchor, anchorToSlotNo) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (HeaderHash) -import Test.Consensus.BlockTree (BlockTree (btBranches, btTrunk), - BlockTreeBranch (btbSuffix), prettyBlockTree) -import Test.Consensus.PointSchedule.NodeState (NodeState (..), - genesisNodeState) -import Test.Consensus.PointSchedule.Peers (PeerId (..)) -import Test.Util.TestBlock (TestBlock, TestHash (TestHash)) +import Cardano.Slotting.Block (BlockNo (BlockNo)) +import Cardano.Slotting.Slot + ( SlotNo (SlotNo) + , WithOrigin (..) + , fromWithOrigin + , withOrigin + ) +import Control.Monad (guard) +import Control.Monad.State.Strict + ( State + , gets + , modify' + , runState + , state + ) +import Control.Tracer (Tracer (Tracer), debugTracer, traceWith) +import Data.Bifunctor (first) +import Data.Foldable as Foldable (foldl', foldr') +import Data.List (find, intersperse, mapAccumL, sort, transpose) +import Data.List.NonEmpty (NonEmpty ((:|)), nonEmpty, (<|)) +import Data.List.NonEmpty qualified as NonEmpty +import Data.Map (Map) +import Data.Map.Strict ((!?)) +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe, mapMaybe) +import Data.String (IsString (fromString)) +import Data.Vector (Vector) +import Data.Vector qualified as Vector +import Data.Vector.Mutable qualified as MV +import Data.Word (Word64) +import Debug.Trace qualified as Debug +import GHC.Exts (IsList (..)) +import Ouroboros.Consensus.Block + ( ChainHash (BlockHash) + , Header + , WithOrigin (NotOrigin) + , blockHash + , blockNo + , blockSlot + , getHeader + ) +import Ouroboros.Consensus.Util (eitherToMaybe) +import Ouroboros.Consensus.Util.Condense (Condense (..)) +import 
Ouroboros.Consensus.Util.IOLike + ( IOLike + , MonadSTM (STM) + , atomically + , modifyTVar + , readTVar + , uncheckedNewTVarM + ) +import Ouroboros.Network.AnchoredFragment (anchor, anchorToSlotNo) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (HeaderHash) +import Test.Consensus.BlockTree + ( BlockTree (btBranches, btTrunk) + , BlockTreeBranch (btbSuffix) + , prettyBlockTree + ) +import Test.Consensus.PointSchedule.NodeState + ( NodeState (..) + , genesisNodeState + ) +import Test.Consensus.PointSchedule.Peers (PeerId (..)) +import Test.Util.TestBlock (TestBlock, TestHash (TestHash)) enableDebug :: Bool enableDebug = False debugRender :: String -> a -> a debugRender - | enableDebug - = Debug.trace - | otherwise - = const id + | enableDebug = + Debug.trace + | otherwise = + const id ---------------------------------------------------------------------------------------------------- -- Colors ---------------------------------------------------------------------------------------------------- -data SGR = - Color Word64 - | - BgColor Word64 - | - Bold - | - Reset - | - Keep +data SGR + = Color Word64 + | BgColor Word64 + | Bold + | Reset + | Keep renderSgr :: [SGR] -> String renderSgr = @@ -92,17 +114,14 @@ renderSgr = Bold -> sgr "1" Reset -> sgr "0" Keep -> "" - where - sgr x = "\ESC[" ++ x ++ "m" - -data Col = - ColAspect (NonEmpty Aspect) Col - | - Col [SGR] Col - | - ColString String - | - ColCat [Col] + where + sgr x = "\ESC[" ++ x ++ "m" + +data Col + = ColAspect (NonEmpty Aspect) Col + | Col [SGR] Col + | ColString String + | ColCat [Col] instance IsString Col where fromString = ColString @@ -127,28 +146,26 @@ colLength = \case ColString s -> length s ColCat cs -> sum (colLength <$> cs) -data Colors = - Colors { - candidates :: [Word64], - selection :: Maybe Word64, - slotNumber :: Word64, - cache :: Map PeerId Word64, - stack :: [[SGR]] +data Colors + = Colors + { candidates :: [Word64] + , selection :: Maybe 
Word64 + , slotNumber :: Word64 + , cache :: Map PeerId Word64 + , stack :: [[SGR]] } candidateColor :: PeerId -> Colors -> (Maybe Word64, Colors) -candidateColor pid s@Colors {candidates, cache} - | Just c <- cached - = (Just c, s) - - | h : t <- filter unused candidates - = (Just h, s {candidates = t, cache = Map.insert pid h cache}) - - | otherwise - = (Nothing, s) - where - cached = cache !? pid - unused c = not (elem c cache) +candidateColor pid s@Colors{candidates, cache} + | Just c <- cached = + (Just c, s) + | h : t <- filter unused candidates = + (Just h, s{candidates = t, cache = Map.insert pid h cache}) + | otherwise = + (Nothing, s) + where + cached = cache !? pid + unused c = not (elem c cache) getColor :: Bool -> Aspect -> State Colors (Maybe [SGR]) getColor bg = \case @@ -163,53 +180,54 @@ getColor bg = \case pure (Just [mkColor c]) TipPoint pid -> peerColor pid - where - peerColor pid = - fmap (pure . mkColor) <$> state (candidateColor pid) - mkColor | bg = BgColor - | otherwise = Color + where + peerColor pid = + fmap (pure . 
mkColor) <$> state (candidateColor pid) + mkColor + | bg = BgColor + | otherwise = Color getColors :: NonEmpty Aspect -> State Colors [SGR] getColors aspects = do (main, rest) <- findColor False (NonEmpty.toList aspects) (bg, _) <- findColor True rest pure (main ++ bg) - where - findColor bg (h : t) = - getColor bg h >>= \case - Just c -> pure (c, t) - Nothing -> findColor bg t - findColor _ [] = pure ([], []) + where + findColor bg (h : t) = + getColor bg h >>= \case + Just c -> pure (c, t) + Nothing -> findColor bg t + findColor _ [] = pure ([], []) renderCol :: Col -> State Colors String renderCol col = spin col - where - spin = \case - ColAspect aspects sub -> do - sgr <- getColors aspects - withSgr sgr sub - Col sgr sub -> - withSgr sgr sub - ColString s -> pure s - ColCat cols -> concat <$> traverse spin cols - - withSgr sgr sub = do - pre <- push sgr - s <- spin sub - pop - pure (renderSgr sgr ++ s ++ renderSgr pre) - - push sgr = - state $ \case - s@Colors {stack = []} -> ([Reset], s {stack = [sgr, [Reset]]}) - s@Colors {stack = h : t} -> ([Reset], s {stack = sgr : h : t}) - - pop = modify' $ \ s@Colors {stack} -> s {stack = drop 1 stack} + where + spin = \case + ColAspect aspects sub -> do + sgr <- getColors aspects + withSgr sgr sub + Col sgr sub -> + withSgr sgr sub + ColString s -> pure s + ColCat cols -> concat <$> traverse spin cols + + withSgr sgr sub = do + pre <- push sgr + s <- spin sub + pop + pure (renderSgr sgr ++ s ++ renderSgr pre) + + push sgr = + state $ \case + s@Colors{stack = []} -> ([Reset], s{stack = [sgr, [Reset]]}) + s@Colors{stack = h : t} -> ([Reset], s{stack = sgr : h : t}) + + pop = modify' $ \s@Colors{stack} -> s{stack = drop 1 stack} runCol :: [Word64] -> Maybe Word64 -> Word64 -> Map PeerId Word64 -> State Colors a -> (a, Colors) runCol cand selection slotNumber cache s = - runState s Colors {candidates = cand, selection, slotNumber, cache, stack = []} + runState s Colors{candidates = cand, selection, slotNumber, cache, stack 
= []} ---------------------------------------------------------------------------------------------------- -- Slots @@ -221,26 +239,22 @@ slotInt (SlotNo s) = fromIntegral s blockInt :: BlockNo -> Int blockInt (BlockNo s) = fromIntegral s -data Range = - Range { - from :: Int, - to :: Int +data Range + = Range + { from :: Int + , to :: Int } deriving (Show, Eq, Ord) instance Condense Range where condense (Range from to) = "[" ++ condense from ++ "," ++ condense to ++ "]" -data Aspect = - Fork - | - Selection - | - Candidate PeerId - | - SlotNumber - | - TipPoint PeerId +data Aspect + = Fork + | Selection + | Candidate PeerId + | SlotNumber + | TipPoint PeerId deriving (Eq, Show, Ord) instance Condense Aspect where @@ -251,28 +265,24 @@ instance Condense Aspect where SlotNumber -> "n" TipPoint _ -> "t" -data AspectEdge = - EdgeLeft - | - EdgeRight - | - NoEdge - deriving (Show) - -data SlotAspect = - SlotAspect { - slotAspect :: Aspect, - edge :: AspectEdge +data AspectEdge + = EdgeLeft + | EdgeRight + | NoEdge + deriving Show + +data SlotAspect + = SlotAspect + { slotAspect :: Aspect + , edge :: AspectEdge } - deriving (Show) + deriving Show -data SlotCapacity = - SlotOutside - | - SlotBlock Int - | - SlotEmpty - deriving (Show) +data SlotCapacity + = SlotOutside + | SlotBlock Int + | SlotEmpty + deriving Show instance Condense SlotCapacity where condense = \case @@ -280,75 +290,73 @@ instance Condense SlotCapacity where SlotBlock n -> "-" ++ condense n SlotEmpty -> "" -data Slot = - Slot { - num :: WithOrigin Int, - capacity :: SlotCapacity, - aspects :: [SlotAspect] +data Slot + = Slot + { num :: WithOrigin Int + , capacity :: SlotCapacity + , aspects :: [SlotAspect] } - deriving (Show) + deriving Show instance Condense Slot where - condense Slot {num, capacity} = + condense Slot{num, capacity} = sn ++ condense capacity - where - sn = case num of - Origin -> "G" - At n -> show n + where + sn = case num of + Origin -> "G" + At n -> show n 
 ----------------------------------------------------------------------------------------------------
 -- Slots vectors
 ----------------------------------------------------------------------------------------------------

-data BranchSlots =
-  BranchSlots {
-    frag :: AF.AnchoredFragment (Header TestBlock),
-    slots :: Vector Slot,
-    cands :: [PeerId],
-    forkNo :: Word64
+data BranchSlots
+  = BranchSlots
+  { frag :: AF.AnchoredFragment (Header TestBlock)
+  , slots :: Vector Slot
+  , cands :: [PeerId]
+  , forkNo :: Word64
   }
-  deriving (Show)
+  deriving Show

 addAspect :: Aspect -> Range -> Bool -> Vector Slot -> Vector Slot
 addAspect slotAspect (Range l u) overFork slots =
   debugRender (show (l, u, slotAspect)) $
-  debugRender (condense (Vector.toList (ins <$> sub))) $
-  Vector.update slots (ins <$> sub)
-  where
-    ins (i, slot) =
-      (i, slot {aspects = newAspect : aspects slot})
-      where
-        newAspect = SlotAspect {slotAspect, edge = mkEdge i}
+    debugRender (condense (Vector.toList (ins <$> sub))) $
+      Vector.update slots (ins <$> sub)
+ where
+  ins (i, slot) =
+    (i, slot{aspects = newAspect : aspects slot})
+   where
+    newAspect = SlotAspect{slotAspect, edge = mkEdge i}

-    mkEdge i | i == l && not overFork = EdgeLeft
-             | i == u = EdgeRight
-             | otherwise = NoEdge
+  mkEdge i
+    | i == l && not overFork = EdgeLeft
+    | i == u = EdgeRight
+    | otherwise = NoEdge

-    sub = Vector.slice l count (Vector.indexed slots)
+  sub = Vector.slice l count (Vector.indexed slots)

-    count = u - l + 1
+  count = u - l + 1

 initSlots :: Int -> Range -> AF.AnchoredFragment TestBlock -> Vector Slot
 initSlots lastSlot (Range l u) blocks =
   Vector.fromList (snd (mapAccumL step (AF.toOldestFirst blocks) [-1 .. lastSlot]))
-  where
-    step bs cur
-      | cur == -1
-      = (bs, Slot {num = Origin, capacity = SlotOutside, aspects = []})
-
-      | b : rest <- bs
-      , s <- slotInt (blockSlot b)
-      , s == cur
-      = (rest, mkSlot cur (SlotBlock (blockInt (blockNo b))))
-
-      | cur >= l - 1 && cur <= u
-      = (bs, mkSlot cur SlotEmpty)
-
-      | otherwise
-      = (bs, mkSlot cur SlotOutside)
-
-    mkSlot num capacity =
-      Slot {num = At num, capacity, aspects = []}
+ where
+  step bs cur
+    | cur == -1 =
+        (bs, Slot{num = Origin, capacity = SlotOutside, aspects = []})
+    | b : rest <- bs
+    , s <- slotInt (blockSlot b)
+    , s == cur =
+        (rest, mkSlot cur (SlotBlock (blockInt (blockNo b))))
+    | cur >= l - 1 && cur <= u =
+        (bs, mkSlot cur SlotEmpty)
+    | otherwise =
+        (bs, mkSlot cur SlotOutside)
+
+  mkSlot num capacity =
+    Slot{num = At num, capacity, aspects = []}

 hashForkNo :: HeaderHash TestBlock -> Word64
 hashForkNo (TestHash h) =
@@ -361,45 +369,48 @@ blockForkNo = \case
 initBranch :: Int -> Range -> AF.AnchoredFragment TestBlock -> BranchSlots
 initBranch lastSlot fragRange fragment =
-  BranchSlots {
-    frag = AF.mapAnchoredFragment getHeader fragment,
-    slots = initSlots lastSlot fragRange fragment,
-    cands = [],
-    forkNo = blockForkNo (AF.headHash fragment)
-  }
-
-data TreeSlots =
-  TreeSlots {
-    lastSlot :: Int,
-    branches :: [BranchSlots]
+  BranchSlots
+    { frag = AF.mapAnchoredFragment getHeader fragment
+    , slots = initSlots lastSlot fragRange fragment
+    , cands = []
+    , forkNo = blockForkNo (AF.headHash fragment)
+    }
+
+data TreeSlots
+  = TreeSlots
+  { lastSlot :: Int
+  , branches :: [BranchSlots]
   }
-  deriving (Show)
+  deriving Show

 initTree :: BlockTree TestBlock -> TreeSlots
 initTree blockTree =
-  TreeSlots {lastSlot, branches = trunk : branches}
-  where
-    trunk = initFR trunkRange
+  TreeSlots{lastSlot, branches = trunk : branches}
+ where
+  trunk = initFR trunkRange

-    branches = initFR <$> branchRanges
+  branches = initFR <$> branchRanges

-    initFR = uncurry (initBranch lastSlot)
+  initFR = uncurry (initBranch lastSlot)

-    lastSlot = foldr' (max . (to . fst)) 0 (trunkRange : branchRanges)
+  lastSlot = foldr' (max . (to . fst)) 0 (trunkRange : branchRanges)

-    trunkRange = withRange (btTrunk blockTree)
+  trunkRange = withRange (btTrunk blockTree)

-    branchRanges = withRange . btbSuffix <$> btBranches blockTree
+  branchRanges = withRange . btbSuffix <$> btBranches blockTree

-    withRange f = (mkRange f, f)
+  withRange f = (mkRange f, f)

-    mkRange f =
-      Range l u
-      where
-        l = withOrigin 0 slotInt (AF.lastSlot f)
-        u = withOrigin 0 slotInt (AF.headSlot f)
+  mkRange f =
+    Range l u
+   where
+    l = withOrigin 0 slotInt (AF.lastSlot f)
+    u = withOrigin 0 slotInt (AF.headSlot f)

-commonRange :: AF.AnchoredFragment (Header TestBlock) -> AF.AnchoredFragment (Header TestBlock) -> Maybe (Range, Bool)
+commonRange ::
+  AF.AnchoredFragment (Header TestBlock) ->
+  AF.AnchoredFragment (Header TestBlock) ->
+  Maybe (Range, Bool)
 commonRange branch segment = do
   (preB, preS, _, _) <- AF.intersect branch segment
   lower <- findLower (AF.toNewestFirst preB) (AF.toNewestFirst preS)
@@ -414,23 +425,25 @@ commonRange branch segment = do
       overFork = asS < asB && aB == anchor branch
   guard (u >= l)
   pure (Range (slotInt l + (if overFork then 0 else 1)) (slotInt u + 1), overFork)
-  where
-    findLower preB preS =
-      Foldable.foldl' step Nothing (zip preB preS)
-    step prev (b1, b2) | b1 == b2 = Just b1
-                       | otherwise = prev
+ where
+  findLower preB preS =
+    Foldable.foldl' step Nothing (zip preB preS)
+  step prev (b1, b2)
+    | b1 == b2 = Just b1
+    | otherwise = prev

 addFragRange :: Aspect -> AF.AnchoredFragment (Header TestBlock) -> TreeSlots -> TreeSlots
-addFragRange aspect selection TreeSlots {lastSlot, branches} =
-  TreeSlots {lastSlot, branches = forBranch <$> branches}
-  where
-    forBranch branch@BranchSlots {frag, slots, cands} =
-      case commonRange frag selection of
-        Just (range, overFork) -> branch {slots = addAspect aspect range overFork slots, cands = addCandidate cands}
-        _ -> branch
-
-    addCandidate old | Candidate peerId <- aspect = peerId : old
-                     | otherwise = old
+addFragRange aspect selection TreeSlots{lastSlot, branches} =
+  TreeSlots{lastSlot, branches = forBranch <$> branches}
+ where
+  forBranch branch@BranchSlots{frag, slots, cands} =
+    case commonRange frag selection of
+      Just (range, overFork) -> branch{slots = addAspect aspect range overFork slots, cands = addCandidate cands}
+      _ -> branch
+
+  addCandidate old
+    | Candidate peerId <- aspect = peerId : old
+    | otherwise = old

 addCandidateRange :: TreeSlots -> (PeerId, AF.AnchoredFragment (Header TestBlock)) -> TreeSlots
 addCandidateRange treeSlots (pid, candidate) =
@@ -438,88 +451,83 @@ addCandidateRange treeSlots (pid, candidate) =
 updateSlot :: Int -> (Slot -> Slot) -> Vector Slot -> Vector Slot
 updateSlot i f =
-  Vector.modify (\ mv -> MV.modify mv f i)
+  Vector.modify (\mv -> MV.modify mv f i)

 addForks :: TreeSlots -> TreeSlots
-addForks treeSlots@TreeSlots {branches} =
-  treeSlots {branches = addFork <$> branches}
-  where
-    addFork fr@BranchSlots {frag, slots, forkNo}
-      | forkNo == 0
-      = fr
-      | otherwise
-      = fr {slots = updateSlot s update slots}
-      where
-        update slot =
-          slot {
-            capacity = SlotEmpty,
-            aspects = SlotAspect {slotAspect = Fork, edge = NoEdge} : aspects slot
-          }
-        s = slotInt (withOrigin 0 (+ 1) (anchorToSlotNo (anchor frag)))
+addForks treeSlots@TreeSlots{branches} =
+  treeSlots{branches = addFork <$> branches}
+ where
+  addFork fr@BranchSlots{frag, slots, forkNo}
+    | forkNo == 0 =
+        fr
+    | otherwise =
+        fr{slots = updateSlot s update slots}
+   where
+    update slot =
+      slot
+        { capacity = SlotEmpty
+        , aspects = SlotAspect{slotAspect = Fork, edge = NoEdge} : aspects slot
+        }
+    s = slotInt (withOrigin 0 (+ 1) (anchorToSlotNo (anchor frag)))

 addTipPoint :: PeerId -> WithOrigin TestBlock -> TreeSlots -> TreeSlots
-addTipPoint pid (NotOrigin b) TreeSlots {lastSlot, branches} =
-  TreeSlots {lastSlot, branches = tryBranch <$> branches}
-  where
-    tryBranch branch@BranchSlots {forkNo, slots}
-      | tipForkNo == forkNo
-      = branch {slots = updateSlot (slotInt (blockSlot b + 1)) update slots}
-      | otherwise
-      = branch
-      where
-        update slot =
-          slot {aspects = SlotAspect {slotAspect = TipPoint pid, edge = NoEdge} : aspects slot}
-
-        tipForkNo = hashForkNo (blockHash b)
-
+addTipPoint pid (NotOrigin b) TreeSlots{lastSlot, branches} =
+  TreeSlots{lastSlot, branches = tryBranch <$> branches}
+ where
+  tryBranch branch@BranchSlots{forkNo, slots}
+    | tipForkNo == forkNo =
+        branch{slots = updateSlot (slotInt (blockSlot b + 1)) update slots}
+    | otherwise =
+        branch
+   where
+    update slot =
+      slot{aspects = SlotAspect{slotAspect = TipPoint pid, edge = NoEdge} : aspects slot}
+
+    tipForkNo = hashForkNo (blockHash b)
 addTipPoint _ _ treeSlots = treeSlots

 addPoints :: Map PeerId (NodeState TestBlock) -> TreeSlots -> TreeSlots
 addPoints peerPoints treeSlots =
   Foldable.foldl' step treeSlots (Map.toList peerPoints)
-  where
-    step z (pid, ap) = addTipPoint pid (nsTip ap) z
+ where
+  step z (pid, ap) = addTipPoint pid (nsTip ap) z

 ----------------------------------------------------------------------------------------------------
 -- Cells
 ----------------------------------------------------------------------------------------------------

-data CellSort =
-  CellHere (NonEmpty Aspect)
-  |
-  CellOther
-  deriving (Show)
+data CellSort
+  = CellHere (NonEmpty Aspect)
+  | CellOther
+  deriving Show

 instance Condense CellSort where
   condense = \case
     CellHere a -> "h" ++ condense (toList a)
     CellOther -> "o"

-data FragCell =
-  FragCell {
-    fcLabel :: Maybe String,
-    fcSort :: CellSort,
-    fcLineAspects :: [Aspect]
+data FragCell
+  = FragCell
+  { fcLabel :: Maybe String
+  , fcSort :: CellSort
+  , fcLineAspects :: [Aspect]
   }
-  deriving (Show)
+  deriving Show

 instance Condense FragCell where
   condense (FragCell l s a) =
     lb ++ " " ++ condense s ++ condense a
-    where
-      lb = case l of
-        Just x -> x
-        Nothing -> "-"
-
-data Cell =
-  Cell FragCell
-  |
-  CellEmpty
-  |
-  CellSlotNo (WithOrigin Int)
-  |
-  CellPeers [PeerId]
-  deriving (Show)
+   where
+    lb = case l of
+      Just x -> x
+      Nothing -> "-"
+
+data Cell
+  = Cell FragCell
+  | CellEmpty
+  | CellSlotNo (WithOrigin Int)
+  | CellPeers [PeerId]
+  deriving Show

 instance Condense Cell where
   condense = \case
@@ -535,12 +543,12 @@ mainAspects =
 lineAspects :: [SlotAspect] -> [Aspect]
 lineAspects =
   sort . mapMaybe check
-  where
-    check SlotAspect {edge, slotAspect}
-      | EdgeLeft <- edge
-      = Nothing
-      | otherwise
-      = Just slotAspect
+ where
+  check SlotAspect{edge, slotAspect}
+    | EdgeLeft <- edge =
+        Nothing
+    | otherwise =
+        Just slotAspect

 prependList :: [a] -> NonEmpty a -> NonEmpty a
 prependList = \case
@@ -548,50 +556,47 @@ prependList = \case
   h : t -> ((h :| t) <>)

 branchCells :: BranchSlots -> NonEmpty Cell
-branchCells BranchSlots {cands, slots} =
+branchCells BranchSlots{cands, slots} =
   prependList (fragCell <$> Vector.toList slots) (pure peers)
-  where
-    fragCell Slot {capacity, aspects}
-      | SlotOutside <- capacity
-      = CellEmpty
-
-      | otherwise
-      , cellSort <- maybe CellOther CellHere (mainAspects aspects)
-      = Cell (FragCell (content capacity) cellSort (lineAspects aspects))
+ where
+  fragCell Slot{capacity, aspects}
+    | SlotOutside <- capacity =
+        CellEmpty
+    | otherwise
+    , cellSort <- maybe CellOther CellHere (mainAspects aspects) =
+        Cell (FragCell (content capacity) cellSort (lineAspects aspects))

-    content = \case
-      SlotBlock num -> Just (show num)
-      _ -> Nothing
+  content = \case
+    SlotBlock num -> Just (show num)
+    _ -> Nothing

-    peers = CellPeers cands
+  peers = CellPeers cands

 slotNoCells :: Int -> NonEmpty Cell
 slotNoCells lastSlot =
   CellSlotNo Origin :| (CellSlotNo . At <$> [0 .. lastSlot]) ++ [CellEmpty]

 treeCells :: TreeSlots -> NonEmpty (NonEmpty Cell)
-treeCells TreeSlots {lastSlot, branches} =
+treeCells TreeSlots{lastSlot, branches} =
   slotNoCells lastSlot :| (branchCells <$> branches)

 ----------------------------------------------------------------------------------------------------
 -- Render cells
 ----------------------------------------------------------------------------------------------------

-newtype SlotWidth =
-  SlotWidth Int
+newtype SlotWidth
+  = SlotWidth Int
   deriving (Eq, Show, Ord, Num)

-data RenderSlot =
-  SlotEllipsis Int
-  |
-  RenderSlot Int SlotWidth (NonEmpty Cell)
-  deriving (Show)
+data RenderSlot
+  = SlotEllipsis Int
+  | RenderSlot Int SlotWidth (NonEmpty Cell)
+  deriving Show

-data RenderCell =
-  CellEllipsis
-  |
-  RenderCell SlotWidth Cell
-  deriving (Show)
+data RenderCell
+  = CellEllipsis
+  | RenderCell SlotWidth Cell
+  deriving Show

 instance Condense RenderCell where
   condense = \case
@@ -601,106 +606,106 @@ instance Condense RenderCell where
 slotWidth :: NonEmpty Cell -> SlotWidth
 slotWidth =
   maximum . fmap cellWidth
-  where
-    cellWidth = \case
-      Cell FragCell {fcLabel = Just label, fcSort} -> SlotWidth (length label) + sortWidth fcSort
-      CellPeers peerIds -> SlotWidth (sum (labelWidth <$> peerIds))
-      _ -> 1
+ where
+  cellWidth = \case
+    Cell FragCell{fcLabel = Just label, fcSort} -> SlotWidth (length label) + sortWidth fcSort
+    CellPeers peerIds -> SlotWidth (sum (labelWidth <$> peerIds))
+    _ -> 1

-    labelWidth pid = 2 + length (show pid)
+  labelWidth pid = 2 + length (show pid)

-    sortWidth = \case
-      CellHere as -> sum (pointWidth <$> as)
-      _ -> 0
+  sortWidth = \case
+    CellHere as -> sum (pointWidth <$> as)
+    _ -> 0

-    pointWidth = \case
-      TipPoint _ -> 1
-      _ -> 0
+  pointWidth = \case
+    TipPoint _ -> 1
+    _ -> 0

 contiguous :: [(Int, Bool, a)] -> [[(Int, a)]]
 contiguous ((i0, _, a0) : rest) =
   result (Foldable.foldl' step (pure (i0, a0), []) rest)
-  where
-    result (cur, res) = reverse (reverse (toList cur) : res)
-
-    step (cur@((prev, _) :| _), res) (i, force, a)
-      | i == prev + 1 || force
-      = ((i, a) <| cur, res)
-      | otherwise
-      = (pure (i, a), reverse (toList cur) : res)
+ where
+  result (cur, res) = reverse (reverse (toList cur) : res)
+
+  step (cur@((prev, _) :| _), res) (i, force, a)
+    | i == prev + 1 || force =
+        ((i, a) <| cur, res)
+    | otherwise =
+        (pure (i, a), reverse (toList cur) : res)
 contiguous [] = []

 cellSlots :: Int -> [(Int, Bool, NonEmpty Cell)] -> [RenderSlot]
 cellSlots branches =
   concat . intersperse [SlotEllipsis branches] . fmap (fmap (uncurry withMaxSize)) . contiguous
-  where
-    withMaxSize slot cells = RenderSlot slot (slotWidth cells) cells
+ where
+  withMaxSize slot cells = RenderSlot slot (slotWidth cells) cells

 pruneCells :: NonEmpty (NonEmpty Cell) -> [RenderSlot]
 pruneCells branches =
   -- debugRender (unlines (condense <$> branches)) $
-  cellSlots (length branches) (mapMaybe cellSlot (zip slotRange (NonEmpty.toList (NonEmpty.transpose branches))))
-  where
-    cellSlot :: (WithOrigin Int, NonEmpty Cell) -> Maybe (Int, Bool, NonEmpty Cell)
-    cellSlot (num, frags)
-      | let noEll = any forceNoEllipsis frags
-      , noEll || any essential frags
-      = keep num noEll frags
-      | otherwise
-      = Nothing
-
-    keep num noEll frags = Just (fromWithOrigin (-1) num, noEll, frags)
-
-    essential = \case
-      Cell FragCell {fcSort = CellHere _} -> True
-      _ -> False
-
-    forceNoEllipsis = \case
-      CellPeers _ -> True
-      _ -> False
-
-    slotRange = Origin : (At <$> [0 ..])
+  cellSlots
+    (length branches)
+    (mapMaybe cellSlot (zip slotRange (NonEmpty.toList (NonEmpty.transpose branches))))
+ where
+  cellSlot :: (WithOrigin Int, NonEmpty Cell) -> Maybe (Int, Bool, NonEmpty Cell)
+  cellSlot (num, frags)
+    | let noEll = any forceNoEllipsis frags
+    , noEll || any essential frags =
+        keep num noEll frags
+    | otherwise =
+        Nothing
+
+  keep num noEll frags = Just (fromWithOrigin (-1) num, noEll, frags)
+
+  essential = \case
+    Cell FragCell{fcSort = CellHere _} -> True
+    _ -> False
+
+  forceNoEllipsis = \case
+    CellPeers _ -> True
+    _ -> False
+
+  slotRange = Origin : (At <$> [0 ..])

 ----------------------------------------------------------------------------------------------------
 -- Render
 ----------------------------------------------------------------------------------------------------

-data RenderConfig =
-  RenderConfig {
-    lineWidth :: Int,
-    ellipsis :: String,
-    slotDistance :: Int,
-    boringChar :: Char,
-    candidateChar :: Char,
-    selectionChar :: Char,
-    forkChar :: Char,
-    candidateColors :: [Word64],
-    cachedPeers :: Map PeerId Word64,
-    selectionColor :: Maybe Word64,
-    slotNumberColor :: Word64
+data RenderConfig
+  = RenderConfig
+  { lineWidth :: Int
+  , ellipsis :: String
+  , slotDistance :: Int
+  , boringChar :: Char
+  , candidateChar :: Char
+  , selectionChar :: Char
+  , forkChar :: Char
+  , candidateColors :: [Word64]
+  , cachedPeers :: Map PeerId Word64
+  , selectionColor :: Maybe Word64
+  , slotNumberColor :: Word64
   }

 padCell :: RenderConfig -> Char -> SlotWidth -> String -> String
-padCell RenderConfig {slotDistance} padChar (SlotWidth w) s
+padCell RenderConfig{slotDistance} padChar (SlotWidth w) s
   | pad <= 0 = s
-  |otherwise = replicate pad padChar ++ s
-  where
-    pad = w - length s + slotDistance
+  | otherwise = replicate pad padChar ++ s
+ where
+  pad = w - length s + slotDistance

 lineChar :: RenderConfig -> [Aspect] -> Char
 lineChar config aspects
-  | elem Selection aspects
-  = selectionChar config
-
-  | any isCandidate aspects
-  = candidateChar config
-
-  | otherwise
-  = boringChar config
-  where
-    isCandidate = \case
-      Candidate _ -> True
-      _ -> False
+  | elem Selection aspects =
+      selectionChar config
+  | any isCandidate aspects =
+      candidateChar config
+  | otherwise =
+      boringChar config
+ where
+  isCandidate = \case
+    Candidate _ -> True
+    _ -> False

 colorAspects :: [Aspect] -> [Aspect]
 colorAspects =
@@ -711,15 +716,16 @@ colorAspects =
     _ -> True

 renderLine :: RenderConfig -> SlotWidth -> [Aspect] -> Int -> Col
-renderLine config@RenderConfig {slotDistance, forkChar} (SlotWidth width) aspects labelWidth =
+renderLine config@RenderConfig{slotDistance, forkChar} (SlotWidth width) aspects labelWidth =
   case colorAspects aspects of
     [] -> ColString lineString
     colors -> ColCat [ColAspect (pure color) (ColString [c]) | (c, color) <- zip lineString (cycle colors)]
-  where
-    lineString | elem Fork aspects = replicate (lineWidth - 1) ' ' ++ [forkChar]
-               | otherwise = replicate lineWidth lc
-    lineWidth = max 0 (width - labelWidth + slotDistance)
-    lc = lineChar config aspects
+ where
+  lineString
+    | elem Fork aspects = replicate (lineWidth - 1) ' ' ++ [forkChar]
+    | otherwise = replicate lineWidth lc
+  lineWidth = max 0 (width - labelWidth + slotDistance)
+  lc = lineChar config aspects

 labelColor :: CellSort -> Maybe (NonEmpty Aspect)
 labelColor = \case
@@ -732,46 +738,46 @@ renderSpecifiedLabel label srt =
   case labelColor srt of
     Nothing -> text
     Just as -> ColAspect as text
-  where
-    text = ColString label
+ where
+  text = ColString label

 renderLabel :: Maybe String -> CellSort -> Col
 renderLabel label srt
-  | Just specified <- label
-  = renderSpecifiedLabel specified srt
-  | otherwise
-  = ""
+  | Just specified <- label =
+      renderSpecifiedLabel specified srt
+  | otherwise =
+      ""

 renderPoint :: CellSort -> Col
 renderPoint = \case
   CellHere aspects -> mconcat (mapMaybe pointMarker (toList aspects))
   _ -> ""
-  where
-    pointMarker = \case
-      TipPoint pid -> Just (ColAspect (pure (TipPoint pid)) "↑")
-      _ -> Nothing
+ where
+  pointMarker = \case
+    TipPoint pid -> Just (ColAspect (pure (TipPoint pid)) "↑")
+    _ -> Nothing

 renderFragCell :: RenderConfig -> SlotWidth -> FragCell -> Col
-renderFragCell config width FragCell {fcLabel, fcSort, fcLineAspects} =
+renderFragCell config width FragCell{fcLabel, fcSort, fcLineAspects} =
   renderLine config width fcLineAspects (colLength label) <> label
-  where
-    label = renderLabel fcLabel fcSort <> renderPoint fcSort
+ where
+  label = renderLabel fcLabel fcSort <> renderPoint fcSort

 renderSlotNo :: RenderConfig -> SlotWidth -> WithOrigin Int -> Col
 renderSlotNo config width num =
   ColAspect (pure SlotNumber) (ColString (padCell config ' ' width label))
-  where
-    label = case num of
-      Origin -> "G"
-      At s -> show s
+ where
+  label = case num of
+    Origin -> "G"
+    At s -> show s

 renderPeers :: [PeerId] -> Col
 renderPeers peers =
   ColCat [ColAspect (pure (Candidate p)) (ColString (" " ++ show p)) | p <- peers]

 renderCell :: RenderConfig -> RenderCell -> Col
-renderCell config@RenderConfig {ellipsis} = \case
+renderCell config@RenderConfig{ellipsis} = \case
   RenderCell width (Cell cell) -> renderFragCell config width cell
   RenderCell width (CellEmpty) -> ColString (padCell config ' ' width "")
   RenderCell width (CellSlotNo n) -> renderSlotNo config width n
@@ -781,7 +787,7 @@ renderCell config@RenderConfig {ellipsis} = \case
 renderBranch :: RenderConfig -> [RenderCell] -> Col
 renderBranch config cells =
   debugRender (condense cells) $
-  foldMap (renderCell config) cells
+    foldMap (renderCell config) cells

 -- | Use w + 2 because we want the effective width, which includes the line segment.
 renderSlotWidth :: Int -> RenderSlot -> Int
@@ -790,24 +796,24 @@ renderSlotWidth ellipsisWidth = \case
   RenderSlot _ (SlotWidth w) _ -> w + 2

 breakLines :: RenderConfig -> [RenderSlot] -> [[RenderSlot]]
-breakLines RenderConfig {lineWidth, ellipsis} =
+breakLines RenderConfig{lineWidth, ellipsis} =
   result . Foldable.foldl' step (0, [], [])
-  where
-    result (_, cur, res) = reverse (reverse cur : res)
-    step (w, cur, res) slot
-      | new <= lineWidth = (new, slot : cur, res)
-      | otherwise = (curW, [slot], reverse cur : res)
-      where
-        new = w + curW
-        curW = renderSlotWidth (length ellipsis) slot
+ where
+  result (_, cur, res) = reverse (reverse cur : res)
+  step (w, cur, res) slot
+    | new <= lineWidth = (new, slot : cur, res)
+    | otherwise = (curW, [slot], reverse cur : res)
+   where
+    new = w + curW
+    curW = renderSlotWidth (length ellipsis) slot

 renderCells :: [RenderSlot] -> [[RenderCell]]
 renderCells =
   transpose . fmap toCells
-  where
-    toCells = \case
-      RenderSlot _ width cells -> RenderCell width <$> toList cells
-      SlotEllipsis n -> replicate n CellEllipsis
+ where
+  toCells = \case
+    RenderSlot _ width cells -> RenderCell width <$> toList cells
+    SlotEllipsis n -> replicate n CellEllipsis

 renderSlotSequence :: RenderConfig -> [RenderSlot] -> [Col]
 renderSlotSequence config =
@@ -818,20 +824,28 @@ renderSlots config slots =
   renderSlotSequence config <$> breakLines config slots

 renderColBlocks :: RenderConfig -> [[Col]] -> ([String], Colors)
-renderColBlocks RenderConfig {candidateColors, selectionColor, slotNumberColor, cachedPeers} cols =
-  first (fmap unlines) (runCol candidateColors selectionColor slotNumberColor cachedPeers (traverse (traverse renderCol) cols))
+renderColBlocks RenderConfig{candidateColors, selectionColor, slotNumberColor, cachedPeers} cols =
+  first
+    (fmap unlines)
+    ( runCol
+        candidateColors
+        selectionColor
+        slotNumberColor
+        cachedPeers
+        (traverse (traverse renderCol) cols)
+    )

 ----------------------------------------------------------------------------------------------------
 -- API
 ----------------------------------------------------------------------------------------------------

 -- | All inputs for the state diagram printer.
-data PeerSimState =
-  PeerSimState {
-    pssBlockTree :: BlockTree TestBlock,
-    pssSelection :: AF.AnchoredFragment (Header TestBlock),
-    pssCandidates :: Map PeerId (AF.AnchoredFragment (Header TestBlock)),
-    pssPoints :: Map PeerId (NodeState TestBlock)
+data PeerSimState
+  = PeerSimState
+  { pssBlockTree :: BlockTree TestBlock
+  , pssSelection :: AF.AnchoredFragment (Header TestBlock)
+  , pssCandidates :: Map PeerId (AF.AnchoredFragment (Header TestBlock))
+  , pssPoints :: Map PeerId (NodeState TestBlock)
   }

 -- TODO add an aspect for the last block of each branch?
@@ -840,36 +854,36 @@ data PeerSimState =
 -- the candidate fragments, selection, and forks in different colors, omitting
 -- uninteresting segments.
 peerSimStateDiagramWith :: RenderConfig -> PeerSimState -> (String, Map PeerId Word64)
-peerSimStateDiagramWith config PeerSimState {pssBlockTree, pssSelection, pssCandidates, pssPoints} =
+peerSimStateDiagramWith config PeerSimState{pssBlockTree, pssSelection, pssCandidates, pssPoints} =
   debugRender (unlines (prettyBlockTree pssBlockTree)) $
-  (unlines blocks, cache)
-  where
-    (blocks, Colors {cache}) = renderColBlocks config (renderSlots config frags)
+    (unlines blocks, cache)
+ where
+  (blocks, Colors{cache}) = renderColBlocks config (renderSlots config frags)

-    frags =
-      pruneCells $
+  frags =
+    pruneCells $
       treeCells $
-      addPoints pssPoints $
-      addForks $
-      flip (Foldable.foldl' addCandidateRange) (Map.toList pssCandidates) $
-      addFragRange Selection pssSelection $
-      initTree pssBlockTree
+        addPoints pssPoints $
+          addForks $
+            flip (Foldable.foldl' addCandidateRange) (Map.toList pssCandidates) $
+              addFragRange Selection pssSelection $
+                initTree pssBlockTree

 defaultRenderConfig :: RenderConfig
 defaultRenderConfig =
-  RenderConfig {
-    lineWidth = 80,
-    ellipsis = " .. ",
-    slotDistance = 2,
-    boringChar = '·',
-    candidateChar = '-',
-    selectionChar = '*',
-    forkChar = '`',
-    candidateColors = [164, 113, 142, 81, 33],
-    cachedPeers = mempty,
-    selectionColor = Just 123,
-    slotNumberColor = 166
-  }
+  RenderConfig
+    { lineWidth = 80
+    , ellipsis = " .. "
+    , slotDistance = 2
+    , boringChar = '·'
+    , candidateChar = '-'
+    , selectionChar = '*'
+    , forkChar = '`'
+    , candidateColors = [164, 113, 142, 81, 33]
+    , cachedPeers = mempty
+    , selectionColor = Just 123
+    , slotNumberColor = 166
+    }

 peerSimStateDiagram :: PeerSimState -> String
 peerSimStateDiagram =
@@ -906,8 +920,8 @@ peerSimStateDiagramSTMTracer stringTracer pssBlockTree selectionVar candidatesVa
     pssCandidates <- candidatesVar
     pssPoints <- fmap (fromMaybe genesisNodeState) <$> pointsVar
     cachedPeers <- readTVar peerCache
-    pure (PeerSimState {pssBlockTree, pssSelection, pssCandidates, pssPoints}, cachedPeers)
-  let (blocks, newPeers) = peerSimStateDiagramWith (defaultRenderConfig {cachedPeers}) s
+    pure (PeerSimState{pssBlockTree, pssSelection, pssCandidates, pssPoints}, cachedPeers)
+  let (blocks, newPeers) = peerSimStateDiagramWith (defaultRenderConfig{cachedPeers}) s
   atomically (modifyTVar peerCache (newPeers <>))
   traceWith stringTracer blocks
diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/StateView.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/StateView.hs
index 2b6404df8e..ca5d3e5200 100644
--- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/StateView.hs
+++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/StateView.hs
@@ -4,8 +4,8 @@
 {-# LANGUAGE NamedFieldPuns #-}
 {-# LANGUAGE UndecidableInstances #-}

-module Test.Consensus.PeerSimulator.StateView (
-    PeerSimulatorComponent (..)
+module Test.Consensus.PeerSimulator.StateView
+  ( PeerSimulatorComponent (..)
   , PeerSimulatorComponentResult (..)
   , PeerSimulatorResult (..)
   , StateView (..)
@@ -18,38 +18,48 @@ module Test.Consensus.PeerSimulator.StateView (
   , stateViewTracersWithInitial
   ) where

-import           Control.Tracer (Tracer, traceWith)
-import           Data.Containers.ListUtils (nubOrd)
-import           Data.Foldable (for_)
-import           Data.List (sort)
-import           Data.Maybe (mapMaybe)
-import           Network.TypedProtocol.Codec (AnyMessage)
-import           Ouroboros.Consensus.Block (Header, Point)
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CSClient
-import           Ouroboros.Consensus.Storage.ChainDB (ChainDB)
-import qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB
-import           Ouroboros.Consensus.Util.Condense (Condense (..),
-                     CondenseList (..), PaddingDirection (..), padListWith)
-import           Ouroboros.Consensus.Util.IOLike (IOLike, SomeException,
-                     atomically)
-import           Ouroboros.Network.AnchoredFragment (AnchoredFragment)
-import           Ouroboros.Network.Block (StandardHash, Tip)
-import           Ouroboros.Network.Protocol.BlockFetch.Type (BlockFetch)
-import           Ouroboros.Network.Protocol.ChainSync.Type (ChainSync)
-import           Test.Consensus.PeerSimulator.Trace (TraceEvent)
-import           Test.Consensus.PointSchedule.Peers (PeerId)
-import           Test.Util.TersePrinting (terseBlock, terseHFragment,
-                     terseMaybe)
-import           Test.Util.TestBlock (TestBlock)
-import           Test.Util.Tracer (recordingTracerTVar)
+import Control.Tracer (Tracer, traceWith)
+import Data.Containers.ListUtils (nubOrd)
+import Data.Foldable (for_)
+import Data.List (sort)
+import Data.Maybe (mapMaybe)
+import Network.TypedProtocol.Codec (AnyMessage)
+import Ouroboros.Consensus.Block (Header, Point)
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CSClient
+import Ouroboros.Consensus.Storage.ChainDB (ChainDB)
+import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB
+import Ouroboros.Consensus.Util.Condense
+  ( Condense (..)
+  , CondenseList (..)
+  , PaddingDirection (..)
+  , padListWith
+  )
+import Ouroboros.Consensus.Util.IOLike
+  ( IOLike
+  , SomeException
+  , atomically
+  )
+import Ouroboros.Network.AnchoredFragment (AnchoredFragment)
+import Ouroboros.Network.Block (StandardHash, Tip)
+import Ouroboros.Network.Protocol.BlockFetch.Type (BlockFetch)
+import Ouroboros.Network.Protocol.ChainSync.Type (ChainSync)
+import Test.Consensus.PeerSimulator.Trace (TraceEvent)
+import Test.Consensus.PointSchedule.Peers (PeerId)
+import Test.Util.TersePrinting
+  ( terseBlock
+  , terseHFragment
+  , terseMaybe
+  )
+import Test.Util.TestBlock (TestBlock)
+import Test.Util.Tracer (recordingTracerTVar)

 -- | A record to associate an exception thrown by a thread
 -- running a component of the peer simulator with the peer
 -- that it was running for.
 data PeerSimulatorResult blk = PeerSimulatorResult
-    { psePeerId :: PeerId
-    , pseResult :: PeerSimulatorComponentResult blk
-    }
+  { psePeerId :: PeerId
+  , pseResult :: PeerSimulatorComponentResult blk
+  }
   deriving (Eq, Ord)

 data PeerSimulatorComponent
@@ -61,40 +71,40 @@ data PeerSimulatorComponent

 data PeerSimulatorComponentResult blk
   = SomeChainSyncClientResult
-    ( Either
-        SomeException
-        ( CSClient.ChainSyncClientResult
-        , Maybe (ChainSyncResult blk)
+      ( Either
+          SomeException
+          ( CSClient.ChainSyncClientResult
+          , Maybe (ChainSyncResult blk)
+          )
       )
-    )
   | SomeChainSyncServerResult
-    ( Either
-        SomeException
-        (Maybe (ChainSyncResult blk))
-    )
+      ( Either
+          SomeException
+          (Maybe (ChainSyncResult blk))
+      )
   | SomeBlockFetchClientResult
-    ( Either
-        SomeException
-        (Maybe (BlockFetchResult blk))
-    )
+      ( Either
+          SomeException
+          (Maybe (BlockFetchResult blk))
+      )
   | SomeBlockFetchServerResult
-    ( Either
-        SomeException
-        (Maybe (BlockFetchResult blk))
-    )
+      ( Either
+          SomeException
+          (Maybe (BlockFetchResult blk))
+      )

 toComponent :: PeerSimulatorComponentResult blk -> PeerSimulatorComponent
-toComponent (SomeChainSyncClientResult _)  = ChainSyncClient
-toComponent (SomeChainSyncServerResult _)  = ChainSyncServer
+toComponent (SomeChainSyncClientResult _) = ChainSyncClient
+toComponent (SomeChainSyncServerResult _) = ChainSyncServer
 toComponent (SomeBlockFetchClientResult _) = BlockFetchClient
 toComponent (SomeBlockFetchServerResult _) = BlockFetchServer

 pscrToException :: PeerSimulatorComponentResult blk -> Maybe SomeException
-pscrToException (SomeChainSyncClientResult (Left exn))  = Just exn
-pscrToException (SomeChainSyncServerResult (Left exn))  = Just exn
+pscrToException (SomeChainSyncClientResult (Left exn)) = Just exn
+pscrToException (SomeChainSyncServerResult (Left exn)) = Just exn
 pscrToException (SomeBlockFetchClientResult (Left exn)) = Just exn
 pscrToException (SomeBlockFetchServerResult (Left exn)) = Just exn
-pscrToException _                                       = Nothing
+pscrToException _ = Nothing

 instance Eq (PeerSimulatorComponentResult blk) where
   (==) a b = toComponent a == toComponent b
@@ -120,7 +130,7 @@ instance (StandardHash blk, Show blk, Show (Header blk)) => Condense (PeerSimula
   condense (SomeBlockFetchServerResult (Right res)) =
     "(BlockFetchServer - Success) : " ++ show res

-type ChainSyncResult blk  = AnyMessage (ChainSync (Header blk) (Point blk) (Tip blk))
+type ChainSyncResult blk = AnyMessage (ChainSync (Header blk) (Point blk) (Tip blk))
 type BlockFetchResult blk = AnyMessage (BlockFetch blk (Point blk))

 instance (StandardHash blk, Show blk, Show (Header blk)) => Condense (PeerSimulatorResult blk) where
@@ -139,36 +149,42 @@ instance (StandardHash blk, Show blk, Show (Header blk)) => CondenseList (PeerSi
 -- (for instance the fragment that is selected by the ChainDB) but also
 -- information about the mocked peers (for instance the exceptions raised in the
 -- mocked ChainSync server threads).
-data StateView blk = StateView {
-    svSelectedChain :: AnchoredFragment (Header blk),
-    svPeerSimulatorResults :: [PeerSimulatorResult blk],
-    -- | This field holds the most recent point in the selection (incl. anchor)
-    -- for which we have a full block (not just a header).
-    svTipBlock :: Maybe blk,
-    -- | List of all TraceEvent that have been sent during the simulation.
-    svTrace :: [TraceEvent blk]
+data StateView blk = StateView
+  { svSelectedChain :: AnchoredFragment (Header blk)
+  , svPeerSimulatorResults :: [PeerSimulatorResult blk]
+  , svTipBlock :: Maybe blk
+  -- ^ This field holds the most recent point in the selection (incl. anchor)
+  -- for which we have a full block (not just a header).
+  , svTrace :: [TraceEvent blk]
+  -- ^ List of all TraceEvent that have been sent during the simulation.
   }

 instance Condense (StateView TestBlock) where
-  condense StateView {svSelectedChain, svPeerSimulatorResults, svTipBlock} =
-    "SelectedChain: " ++ terseHFragment svSelectedChain ++ "\n"
-    ++ "TipBlock: " ++ terseMaybe terseBlock svTipBlock ++ "\n"
-    ++ "PeerSimulatorResults:\n" ++ unlines (fmap ("  - " ++) $ condenseList $ sort svPeerSimulatorResults)
+  condense StateView{svSelectedChain, svPeerSimulatorResults, svTipBlock} =
+    "SelectedChain: "
+      ++ terseHFragment svSelectedChain
+      ++ "\n"
+      ++ "TipBlock: "
+      ++ terseMaybe terseBlock svTipBlock
+      ++ "\n"
+      ++ "PeerSimulatorResults:\n"
+      ++ unlines (fmap ("  - " ++) $ condenseList $ sort svPeerSimulatorResults)

 -- | Return the list of peer ids for all peers whose ChainSync thread or
 -- BlockFetch thread was terminated.
 collectDisconnectedPeers :: StateView blk -> [PeerId]
-collectDisconnectedPeers stateView = nubOrd $
-  map psePeerId (svPeerSimulatorResults stateView)
+collectDisconnectedPeers stateView =
+  nubOrd $
+    map psePeerId (svPeerSimulatorResults stateView)

 -- | State view tracers are a lightweight mechanism to record information that
 -- can later be used to produce a state view. This mechanism relies on
 -- contra-tracers which we already use in a pervasives way.
-data StateViewTracers blk m = StateViewTracers {
-    svtPeerSimulatorResultsTracer :: Tracer m (PeerSimulatorResult blk)
-  , svtGetPeerSimulatorResults :: m [PeerSimulatorResult blk]
-  , svtTraceTracer :: Tracer m (TraceEvent blk)
-  , svtGetTracerTrace :: m [TraceEvent blk]
+data StateViewTracers blk m = StateViewTracers
+  { svtPeerSimulatorResultsTracer :: Tracer m (PeerSimulatorResult blk)
+  , svtGetPeerSimulatorResults :: m [PeerSimulatorResult blk]
+  , svtTraceTracer :: Tracer m (TraceEvent blk)
+  , svtGetTracerTrace :: m [TraceEvent blk]
   }

 -- | Helper to get exceptions from a StateView.
@@ -178,21 +194,21 @@ exceptionsByComponent ::
   [SomeException]
 exceptionsByComponent component StateView{svPeerSimulatorResults} =
   mapMaybe (matchComponent component) $ pseResult <$> svPeerSimulatorResults
-  where
-    matchComponent :: PeerSimulatorComponent -> PeerSimulatorComponentResult blk -> Maybe SomeException
-    matchComponent = \case
-      ChainSyncClient -> \case
-        SomeChainSyncClientResult (Left exn) -> Just exn
-        _ -> Nothing
-      ChainSyncServer -> \case
-        SomeChainSyncServerResult (Left exn) -> Just exn
-        _ -> Nothing
-      BlockFetchClient -> \case
-        SomeBlockFetchClientResult (Left exn) -> Just exn
-        _ -> Nothing
-      BlockFetchServer -> \case
-        SomeBlockFetchServerResult (Left exn) -> Just exn
-        _ -> Nothing
+ where
+  matchComponent :: PeerSimulatorComponent -> PeerSimulatorComponentResult blk -> Maybe SomeException
+  matchComponent = \case
+    ChainSyncClient -> \case
+      SomeChainSyncClientResult (Left exn) -> Just exn
+      _ -> Nothing
+    ChainSyncServer -> \case
+      SomeChainSyncServerResult (Left exn) -> Just exn
+      _ -> Nothing
+    BlockFetchClient -> \case
+      SomeBlockFetchClientResult (Left exn) -> Just exn
+      _ -> Nothing
+    BlockFetchServer -> \case
+      SomeBlockFetchServerResult (Left exn) -> Just exn
+      _ -> Nothing

 -- | Make default state view tracers. The tracers are all freshly initialised
 -- and contain no information.
@@ -202,12 +218,13 @@ defaultStateViewTracers ::
 defaultStateViewTracers = do
   (svtPeerSimulatorResultsTracer, svtGetPeerSimulatorResults) <- recordingTracerTVar
   (svtTraceTracer, svtGetTracerTrace) <- recordingTracerTVar
-  pure StateViewTracers
-    { svtPeerSimulatorResultsTracer
-    , svtGetPeerSimulatorResults
-    , svtTraceTracer
-    , svtGetTracerTrace
-    }
+  pure
+    StateViewTracers
+      { svtPeerSimulatorResultsTracer
+      , svtGetPeerSimulatorResults
+      , svtTraceTracer
+      , svtGetTracerTrace
+      }

 -- | Call 'defaultStateViewTracers' and add the provided results.
 stateViewTracersWithInitial ::
@@ -232,4 +249,4 @@ snapshotStateView StateViewTracers{svtGetPeerSimulatorResults, svtGetTracerTrace
   svTrace <- svtGetTracerTrace
   svSelectedChain <- atomically $ ChainDB.getCurrentChain chainDb
   svTipBlock <- ChainDB.getTipBlock chainDb
-  pure StateView {svSelectedChain, svPeerSimulatorResults, svTipBlock, svTrace}
+  pure StateView{svSelectedChain, svPeerSimulatorResults, svTipBlock, svTrace}
diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests.hs
index bcd9f888e7..0eef161dc7 100644
--- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests.hs
+++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests.hs
@@ -1,13 +1,15 @@
 module Test.Consensus.PeerSimulator.Tests (tests) where

-import qualified Test.Consensus.PeerSimulator.Tests.LinkedThreads as LinkedThreads
-import qualified Test.Consensus.PeerSimulator.Tests.Rollback as Rollback
-import qualified Test.Consensus.PeerSimulator.Tests.Timeouts as Timeouts
-import           Test.Tasty
+import Test.Consensus.PeerSimulator.Tests.LinkedThreads qualified as LinkedThreads
+import Test.Consensus.PeerSimulator.Tests.Rollback qualified as Rollback
+import Test.Consensus.PeerSimulator.Tests.Timeouts qualified as Timeouts
+import Test.Tasty

 tests :: TestTree
-tests =
testGroup "PeerSimulator" [ - Rollback.tests, - Timeouts.tests, - LinkedThreads.tests - ] +tests = + testGroup + "PeerSimulator" + [ Rollback.tests + , Timeouts.tests + , LinkedThreads.tests + ] diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/LinkedThreads.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/LinkedThreads.hs index c5c2cad189..8887a10ce9 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/LinkedThreads.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/LinkedThreads.hs @@ -6,28 +6,32 @@ -- contains a collection of smoke tests to make sure of that. module Test.Consensus.PeerSimulator.Tests.LinkedThreads (tests) where -import Control.Monad.Class.MonadAsync (AsyncCancelled (..)) -import Control.Monad.Class.MonadTime.SI (Time (Time)) -import Data.Functor (($>)) -import Ouroboros.Consensus.Util.IOLike (fromException) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Driver.Limits - (ProtocolLimitFailure (ExceededTimeLimit)) -import Ouroboros.Network.Protocol.ChainSync.Codec (mustReplyTimeout) -import Test.Consensus.BlockTree (BlockTree (..)) -import Test.Consensus.Genesis.Setup -import Test.Consensus.PeerSimulator.Run - (SchedulerConfig (scEnableChainSyncTimeouts), - defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Peers (peersOnlyHonest) -import Test.Consensus.PointSchedule.SinglePeer (scheduleHeaderPoint, - scheduleTipPoint) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.IOLike () +import Control.Monad.Class.MonadAsync (AsyncCancelled (..)) +import Control.Monad.Class.MonadTime.SI (Time (Time)) +import Data.Functor (($>)) +import Ouroboros.Consensus.Util.IOLike (fromException) +import 
Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Driver.Limits + ( ProtocolLimitFailure (ExceededTimeLimit) + ) +import Ouroboros.Network.Protocol.ChainSync.Codec (mustReplyTimeout) +import Test.Consensus.BlockTree (BlockTree (..)) +import Test.Consensus.Genesis.Setup +import Test.Consensus.PeerSimulator.Run + ( SchedulerConfig (scEnableChainSyncTimeouts) + , defaultSchedulerConfig + ) +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers (peersOnlyHonest) +import Test.Consensus.PointSchedule.SinglePeer + ( scheduleHeaderPoint + , scheduleTipPoint + ) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.IOLike () tests :: TestTree tests = testProperty "ChainSync kills BlockFetch" prop_chainSyncKillsBlockFetch @@ -40,44 +44,43 @@ tests = testProperty "ChainSync kills BlockFetch" prop_chainSyncKillsBlockFetch prop_chainSyncKillsBlockFetch :: Property prop_chainSyncKillsBlockFetch = do forAllGenesisTest - (do gt@GenesisTest{gtBlockTree} <- genChains (pure 0) + ( do + gt@GenesisTest{gtBlockTree} <- genChains (pure 0) pure $ enableMustReplyTimeout $ gt $> dullSchedule (btTrunk gtBlockTree) ) - - defaultSchedulerConfig {scEnableChainSyncTimeouts = True} - + defaultSchedulerConfig{scEnableChainSyncTimeouts = True} -- No shrinking because the schedule is tiny and hand-crafted (\_ _ -> []) - - ( \_ stateView@StateView {svTipBlock} -> + ( \_ stateView@StateView{svTipBlock} -> svTipBlock == Nothing && case exceptionsByComponent ChainSyncClient stateView of [fromException -> Just (ExceededTimeLimit _)] -> True - _ -> False + _ -> False && case exceptionsByComponent BlockFetchClient stateView of [fromException -> Just AsyncCancelled] -> True - _ -> False + _ -> False && case exceptionsByComponent ChainSyncServer stateView of [fromException -> Just AsyncCancelled] -> True - _ -> False + _ -> False && case exceptionsByComponent 
BlockFetchServer stateView of [fromException -> Just AsyncCancelled] -> True - _ -> False + _ -> False ) - where - timeout = 10 + where + timeout = 10 - dullSchedule :: AF.AnchoredFragment blk -> PointSchedule blk - dullSchedule trunk = - let (firstBlock, secondBlock) = case AF.toOldestFirst trunk of - b1 : b2 : _ -> (b1, b2) - _ -> error "block tree must have two blocks" - psSchedule = peersOnlyHonest $ - [ (Time 0, scheduleTipPoint secondBlock), - (Time 0, scheduleHeaderPoint firstBlock) + dullSchedule :: AF.AnchoredFragment blk -> PointSchedule blk + dullSchedule trunk = + let (firstBlock, secondBlock) = case AF.toOldestFirst trunk of + b1 : b2 : _ -> (b1, b2) + _ -> error "block tree must have two blocks" + psSchedule = + peersOnlyHonest $ + [ (Time 0, scheduleTipPoint secondBlock) + , (Time 0, scheduleHeaderPoint firstBlock) ] - psMinEndTime = Time $ timeout + 1 - in PointSchedule {psSchedule, psStartOrder = [], psMinEndTime} + psMinEndTime = Time $ timeout + 1 + in PointSchedule{psSchedule, psStartOrder = [], psMinEndTime} - enableMustReplyTimeout :: GenesisTest blk schedule -> GenesisTest blk schedule - enableMustReplyTimeout gt = gt { gtChainSyncTimeouts = (gtChainSyncTimeouts gt) { mustReplyTimeout = Just timeout } } + enableMustReplyTimeout :: GenesisTest blk schedule -> GenesisTest blk schedule + enableMustReplyTimeout gt = gt{gtChainSyncTimeouts = (gtChainSyncTimeouts gt){mustReplyTimeout = Just timeout}} diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/Rollback.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/Rollback.hs index c75732ecce..24a0e52391 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/Rollback.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/Rollback.hs @@ -5,39 +5,48 @@ module Test.Consensus.PeerSimulator.Tests.Rollback (tests) where -import 
Cardano.Ledger.BaseTypes (unNonZero) -import Control.Monad.Class.MonadTime.SI (Time (Time)) -import Ouroboros.Consensus.Block (ChainHash (..), Header) -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - toOldestFirst) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..)) -import Test.Consensus.Genesis.Setup -import Test.Consensus.Genesis.Setup.Classifiers - (Classifiers (allAdversariesKPlus1InForecast), - allAdversariesForecastable, classifiers) -import Test.Consensus.PeerSimulator.Run (defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Peers (peersOnlyHonest) -import Test.Consensus.PointSchedule.SinglePeer (SchedulePoint (..), - scheduleBlockPoint, scheduleHeaderPoint, scheduleTipPoint) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.IOLike () -import Test.Util.TestBlock (TestBlock, unTestHash) -import Test.Util.TestEnv (adjustQuickCheckTests) +import Cardano.Ledger.BaseTypes (unNonZero) +import Control.Monad.Class.MonadTime.SI (Time (Time)) +import Ouroboros.Consensus.Block (ChainHash (..), Header) +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , toOldestFirst + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..)) +import Test.Consensus.Genesis.Setup +import Test.Consensus.Genesis.Setup.Classifiers + ( Classifiers (allAdversariesKPlus1InForecast) + , allAdversariesForecastable + , classifiers + ) +import Test.Consensus.PeerSimulator.Run (defaultSchedulerConfig) +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers (peersOnlyHonest) +import 
Test.Consensus.PointSchedule.SinglePeer + ( SchedulePoint (..) + , scheduleBlockPoint + , scheduleHeaderPoint + , scheduleTipPoint + ) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.IOLike () +import Test.Util.TestBlock (TestBlock, unTestHash) +import Test.Util.TestEnv (adjustQuickCheckTests) tests :: TestTree -tests = testGroup "rollback" [ - adjustQuickCheckTests (`div` 2) $ - testProperty "can rollback" prop_rollback - , - adjustQuickCheckTests (`div` 2) $ - testProperty "cannot rollback" prop_cannotRollback - ] +tests = + testGroup + "rollback" + [ adjustQuickCheckTests (`div` 2) $ + testProperty "can rollback" prop_rollback + , adjustQuickCheckTests (`div` 2) $ + testProperty "cannot rollback" prop_cannotRollback + ] -- | @prop_rollback@ tests that the selection of the node under test -- changes branches when sent a rollback to a block no older than 'k' blocks @@ -45,22 +54,23 @@ tests = testGroup "rollback" [ prop_rollback :: Property prop_rollback = do forAllGenesisTest - - (do + ( do -- Create a block tree with @1@ alternative chain, such that we can rollback -- from the trunk to that chain. gt@GenesisTest{gtSecurityParam, gtBlockTree} <- genChains (pure 1) -- TODO: Trim block tree, the rollback schedule does not use all of it let cls = classifiers gt if allAdversariesForecastable cls && allAdversariesKPlus1InForecast cls - then pure gt {gtSchedule = rollbackSchedule (fromIntegral (unNonZero $ maxRollbacks gtSecurityParam)) gtBlockTree} - else discard) - + then + pure + gt + { gtSchedule = rollbackSchedule (fromIntegral (unNonZero $ maxRollbacks gtSecurityParam)) gtBlockTree + } + else discard + ) defaultSchedulerConfig - -- No shrinking because the schedule is tiny and hand-crafted (\_ _ -> []) - (\_ -> not . hashOnTrunk . AF.headHash . 
svSelectedChain) -- @prop_cannotRollback@ tests that the selection of the node under test *does @@ -69,15 +79,17 @@ prop_rollback = do prop_cannotRollback :: Property prop_cannotRollback = forAllGenesisTest - - (do gt@GenesisTest{gtSecurityParam, gtBlockTree} <- genChains (pure 1) - pure gt {gtSchedule = rollbackSchedule (fromIntegral (unNonZero $ maxRollbacks gtSecurityParam) + 1) gtBlockTree}) - + ( do + gt@GenesisTest{gtSecurityParam, gtBlockTree} <- genChains (pure 1) + pure + gt + { gtSchedule = + rollbackSchedule (fromIntegral (unNonZero $ maxRollbacks gtSecurityParam) + 1) gtBlockTree + } + ) defaultSchedulerConfig - -- No shrinking because the schedule is tiny and hand-crafted (\_ _ -> []) - (\_ -> hashOnTrunk . AF.headHash . svSelectedChain) -- | A schedule that advertises all the points of the trunk up until the nth @@ -87,28 +99,29 @@ prop_cannotRollback = -- PRECONDITION: Block tree with at least one alternative chain. rollbackSchedule :: AF.HasHeader blk => Int -> BlockTree blk -> PointSchedule blk rollbackSchedule n blockTree = - let branch = case btBranches blockTree of - [b] -> b - _ -> error "The block tree must have exactly one alternative branch" - trunkSuffix = AF.takeOldest n (btbTrunkSuffix branch) - schedulePoints = concat + let branch = case btBranches blockTree of + [b] -> b + _ -> error "The block tree must have exactly one alternative branch" + trunkSuffix = AF.takeOldest n (btbTrunkSuffix branch) + schedulePoints = + concat [ banalSchedulePoints (btbPrefix branch) , banalSchedulePoints trunkSuffix , banalSchedulePoints (btbSuffix branch) ] - in PointSchedule { - psSchedule = peersOnlyHonest $ zip (map (Time . (/30)) [0..]) schedulePoints, - psStartOrder = [], - psMinEndTime = Time 0 - } - where - banalSchedulePoints :: AnchoredFragment blk -> [SchedulePoint blk] - banalSchedulePoints = concatMap banalSchedulePoints' . 
toOldestFirst - banalSchedulePoints' :: blk -> [SchedulePoint blk] - banalSchedulePoints' block = [scheduleTipPoint block, scheduleHeaderPoint block, scheduleBlockPoint block] + in PointSchedule + { psSchedule = peersOnlyHonest $ zip (map (Time . (/ 30)) [0 ..]) schedulePoints + , psStartOrder = [] + , psMinEndTime = Time 0 + } + where + banalSchedulePoints :: AnchoredFragment blk -> [SchedulePoint blk] + banalSchedulePoints = concatMap banalSchedulePoints' . toOldestFirst + banalSchedulePoints' :: blk -> [SchedulePoint blk] + banalSchedulePoints' block = [scheduleTipPoint block, scheduleHeaderPoint block, scheduleBlockPoint block] -- | Given a hash, checks whether it is on the trunk of the block tree, that is -- if it only contains zeroes. hashOnTrunk :: ChainHash (Header TestBlock) -> Bool -hashOnTrunk GenesisHash = True +hashOnTrunk GenesisHash = True hashOnTrunk (BlockHash hash) = all (== 0) $ unTestHash hash diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/Timeouts.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/Timeouts.hs index 5d45137f09..6dc2b83453 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/Timeouts.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Tests/Timeouts.hs @@ -4,76 +4,85 @@ module Test.Consensus.PeerSimulator.Tests.Timeouts (tests) where -import Data.Functor (($>)) -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.IOLike (DiffTime, Time (Time), - fromException) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Driver.Limits - (ProtocolLimitFailure (ExceededTimeLimit)) -import Ouroboros.Network.Protocol.ChainSync.Codec (mustReplyTimeout) -import Test.Consensus.BlockTree (btTrunk) -import Test.Consensus.Genesis.Setup -import Test.Consensus.PeerSimulator.Run - (SchedulerConfig (scEnableChainSyncTimeouts), - 
defaultSchedulerConfig) -import Test.Consensus.PeerSimulator.StateView -import Test.Consensus.PointSchedule -import Test.Consensus.PointSchedule.Peers (peersOnlyAdversary, - peersOnlyHonest) -import Test.Consensus.PointSchedule.SinglePeer (scheduleBlockPoint, - scheduleHeaderPoint, scheduleTipPoint) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.IOLike () -import Test.Util.TestEnv (adjustQuickCheckTests) +import Data.Functor (($>)) +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.IOLike + ( DiffTime + , Time (Time) + , fromException + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Driver.Limits + ( ProtocolLimitFailure (ExceededTimeLimit) + ) +import Ouroboros.Network.Protocol.ChainSync.Codec (mustReplyTimeout) +import Test.Consensus.BlockTree (btTrunk) +import Test.Consensus.Genesis.Setup +import Test.Consensus.PeerSimulator.Run + ( SchedulerConfig (scEnableChainSyncTimeouts) + , defaultSchedulerConfig + ) +import Test.Consensus.PeerSimulator.StateView +import Test.Consensus.PointSchedule +import Test.Consensus.PointSchedule.Peers + ( peersOnlyAdversary + , peersOnlyHonest + ) +import Test.Consensus.PointSchedule.SinglePeer + ( scheduleBlockPoint + , scheduleHeaderPoint + , scheduleTipPoint + ) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.IOLike () +import Test.Util.TestEnv (adjustQuickCheckTests) tests :: TestTree -tests = testGroup "timeouts" [ - adjustQuickCheckTests (`div` 10) $ testProperty "does time out" (prop_timeouts True), - adjustQuickCheckTests (`div` 10) $ testProperty "does not time out" (prop_timeouts False) - ] +tests = + testGroup + "timeouts" + [ adjustQuickCheckTests (`div` 10) $ testProperty "does time out" (prop_timeouts True) + , adjustQuickCheckTests (`div` 10) $ testProperty "does not time out" (prop_timeouts False) + ] prop_timeouts :: Bool -> Property prop_timeouts 
mustTimeout = do forAllGenesisTest - - (do gt@GenesisTest{gtBlockTree} <- genChains (pure 0) + ( do + gt@GenesisTest{gtBlockTree} <- genChains (pure 0) pure $ enableMustReplyTimeout $ gt $> dullSchedule (btTrunk gtBlockTree) ) - - defaultSchedulerConfig {scEnableChainSyncTimeouts = True} - + defaultSchedulerConfig{scEnableChainSyncTimeouts = True} -- Here we can't shrink because we exploit the properties of the point schedule to wait -- at the end of the test for the adversaries to get disconnected, by adding an extra point. -- If this point gets removed by the shrinker, we lose that property and the test becomes useless. (\_ _ -> []) - - (\_ stateView -> - case exceptionsByComponent ChainSyncClient stateView of - [] -> - counterexample ("result: " ++ condense (svSelectedChain stateView)) (not mustTimeout) - [fromException -> Just (ExceededTimeLimit _)] -> property mustTimeout - exns -> - counterexample ("exceptions: " ++ show exns) False + ( \_ stateView -> + case exceptionsByComponent ChainSyncClient stateView of + [] -> + counterexample ("result: " ++ condense (svSelectedChain stateView)) (not mustTimeout) + [fromException -> Just (ExceededTimeLimit _)] -> property mustTimeout + exns -> + counterexample ("exceptions: " ++ show exns) False ) + where + timeout = 10 - where - timeout = 10 - - dullSchedule :: AF.HasHeader blk => AF.AnchoredFragment blk -> PointSchedule blk - dullSchedule (AF.Empty _) = error "requires a non-empty block tree" - dullSchedule (_ AF.:> tipBlock) = - let offset :: DiffTime = if mustTimeout then 1 else -1 - psSchedule = (if mustTimeout then peersOnlyAdversary else peersOnlyHonest) $ [ - (Time 0, scheduleTipPoint tipBlock), - (Time 0, scheduleHeaderPoint tipBlock), - (Time 0, scheduleBlockPoint tipBlock) + dullSchedule :: AF.HasHeader blk => AF.AnchoredFragment blk -> PointSchedule blk + dullSchedule (AF.Empty _) = error "requires a non-empty block tree" + dullSchedule (_ AF.:> tipBlock) = + let offset :: DiffTime = if mustTimeout then 1 
else -1 + psSchedule = + (if mustTimeout then peersOnlyAdversary else peersOnlyHonest) $ + [ (Time 0, scheduleTipPoint tipBlock) + , (Time 0, scheduleHeaderPoint tipBlock) + , (Time 0, scheduleBlockPoint tipBlock) ] - -- This keeps the test running long enough to pass the timeout by 'offset'. - psMinEndTime = Time $ timeout + offset - in PointSchedule {psSchedule, psStartOrder = [], psMinEndTime} + -- This keeps the test running long enough to pass the timeout by 'offset'. + psMinEndTime = Time $ timeout + offset + in PointSchedule{psSchedule, psStartOrder = [], psMinEndTime} - enableMustReplyTimeout :: GenesisTest blk schedule -> GenesisTest blk schedule - enableMustReplyTimeout gt = gt { gtChainSyncTimeouts = (gtChainSyncTimeouts gt) { mustReplyTimeout = Just timeout } } + enableMustReplyTimeout :: GenesisTest blk schedule -> GenesisTest blk schedule + enableMustReplyTimeout gt = gt{gtChainSyncTimeouts = (gtChainSyncTimeouts gt){mustReplyTimeout = Just timeout}} diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Trace.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Trace.hs index a53eb98a02..3370a39bd4 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Trace.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PeerSimulator/Trace.hs @@ -6,8 +6,8 @@ {-# LANGUAGE TypeFamilies #-} -- | Helpers for tracing used by the peer simulator. -module Test.Consensus.PeerSimulator.Trace ( - TraceBlockFetchClientTerminationEvent (..) +module Test.Consensus.PeerSimulator.Trace + ( TraceBlockFetchClientTerminationEvent (..) , TraceChainSyncClientTerminationEvent (..) , TraceEvent (..) , TraceScheduledBlockFetchServerEvent (..) 
@@ -20,49 +20,84 @@ module Test.Consensus.PeerSimulator.Trace ( , tracerTestBlock ) where -import Control.Tracer (Tracer (Tracer), contramap, traceWith) -import Data.Bifunctor (second) -import Data.List (intersperse) -import qualified Data.List.NonEmpty as NE -import Data.Time.Clock (DiffTime, diffTimeToPicoseconds) -import Network.TypedProtocol.Codec (AnyMessage (..)) -import Ouroboros.Consensus.Block (GenesisWindow (..), Header, Point, - WithOrigin (NotOrigin, Origin), succWithOrigin) -import Ouroboros.Consensus.Genesis.Governor (DensityBounds (..), - GDDDebugInfo (..), TraceGDDEvent (..)) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client - (TraceChainSyncClientEvent (..)) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping - (Instruction (..), JumpInstruction (..), JumpResult (..), - TraceCsjReason (..), TraceEventCsj (..), - TraceEventDbf (..)) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State - (ChainSyncJumpingJumperState (..), - ChainSyncJumpingState (..), DynamoInitState (..), - JumpInfo (..)) -import Ouroboros.Consensus.Storage.ChainDB.API (LoE (..)) -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl as ChainDB -import Ouroboros.Consensus.Storage.ChainDB.Impl.Types - (TraceAddBlockEvent (..)) -import Ouroboros.Consensus.Util.Condense (condense) -import Ouroboros.Consensus.Util.Enclose -import Ouroboros.Consensus.Util.IOLike (IOLike, MonadMonotonicTime, - Time (Time), atomically, getMonotonicTime, readTVarIO, - uncheckedNewTVarM, writeTVar) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - headPoint) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (SlotNo (SlotNo), Tip, castPoint) -import Ouroboros.Network.Driver.Simple (TraceSendRecv (..)) -import Ouroboros.Network.Protocol.ChainSync.Type (ChainSync, - Message (..)) -import Test.Consensus.PointSchedule.NodeState (NodeState) -import Test.Consensus.PointSchedule.Peers (Peer (Peer), PeerId) -import 
Test.Util.TersePrinting (terseAnchor, terseBlock, - terseFragment, terseHFragment, terseHeader, tersePoint, - terseRealPoint, terseTip, terseWithOrigin) -import Test.Util.TestBlock (TestBlock) -import Text.Printf (printf) +import Control.Tracer (Tracer (Tracer), contramap, traceWith) +import Data.Bifunctor (second) +import Data.List (intersperse) +import Data.List.NonEmpty qualified as NE +import Data.Time.Clock (DiffTime, diffTimeToPicoseconds) +import Network.TypedProtocol.Codec (AnyMessage (..)) +import Ouroboros.Consensus.Block + ( GenesisWindow (..) + , Header + , Point + , WithOrigin (NotOrigin, Origin) + , succWithOrigin + ) +import Ouroboros.Consensus.Genesis.Governor + ( DensityBounds (..) + , GDDDebugInfo (..) + , TraceGDDEvent (..) + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client + ( TraceChainSyncClientEvent (..) + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping + ( Instruction (..) + , JumpInstruction (..) + , JumpResult (..) + , TraceCsjReason (..) + , TraceEventCsj (..) + , TraceEventDbf (..) + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State + ( ChainSyncJumpingJumperState (..) + , ChainSyncJumpingState (..) + , DynamoInitState (..) + , JumpInfo (..) + ) +import Ouroboros.Consensus.Storage.ChainDB.API (LoE (..)) +import Ouroboros.Consensus.Storage.ChainDB.Impl qualified as ChainDB +import Ouroboros.Consensus.Storage.ChainDB.Impl.Types + ( TraceAddBlockEvent (..) 
+ ) +import Ouroboros.Consensus.Util.Condense (condense) +import Ouroboros.Consensus.Util.Enclose +import Ouroboros.Consensus.Util.IOLike + ( IOLike + , MonadMonotonicTime + , Time (Time) + , atomically + , getMonotonicTime + , readTVarIO + , uncheckedNewTVarM + , writeTVar + ) +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , headPoint + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (SlotNo (SlotNo), Tip, castPoint) +import Ouroboros.Network.Driver.Simple (TraceSendRecv (..)) +import Ouroboros.Network.Protocol.ChainSync.Type + ( ChainSync + , Message (..) + ) +import Test.Consensus.PointSchedule.NodeState (NodeState) +import Test.Consensus.PointSchedule.Peers (Peer (Peer), PeerId) +import Test.Util.TersePrinting + ( terseAnchor + , terseBlock + , terseFragment + , terseHFragment + , terseHeader + , tersePoint + , terseRealPoint + , terseTip + , terseWithOrigin + ) +import Test.Util.TestBlock (TestBlock) +import Text.Printf (printf) -- * Trace events for the peer simulator @@ -137,7 +172,10 @@ data TraceEvent blk | TraceChainSyncClientTerminationEvent PeerId TraceChainSyncClientTerminationEvent | TraceBlockFetchClientTerminationEvent PeerId TraceBlockFetchClientTerminationEvent | TraceGenesisDDEvent (TraceGDDEvent PeerId blk) - | TraceChainSyncSendRecvEvent PeerId String (TraceSendRecv (ChainSync (Header blk) (Point blk) (Tip blk))) + | TraceChainSyncSendRecvEvent + PeerId + String + (TraceSendRecv (ChainSync (Header blk) (Point blk) (Tip blk))) | TraceDbfEvent (TraceEventDbf PeerId) | TraceCsjEvent PeerId (TraceEventCsj PeerId blk) | TraceOther String @@ -145,7 +183,7 @@ data TraceEvent blk -- * 'TestBlock'-specific tracers for the peer simulator tracerTestBlock :: - (IOLike m) => + IOLike m => Tracer m String -> m (Tracer m (TraceEvent TestBlock)) tracerTestBlock tracer0 = do @@ -162,9 +200,10 @@ tracerTestBlock tracer0 = do time <- getMonotonicTime tickTime <- readTVarIO tickTimeVar let timeHeader 
= prettyTime time ++ " " - prefix = if time /= tickTime - then timeHeader - else replicate (length timeHeader) ' ' + prefix = + if time /= tickTime + then timeHeader + else replicate (length timeHeader) ' ' traceWith tracer0 $ concat $ intersperse "\n" $ map (prefix ++) $ lines msg pure $ Tracer $ traceEventTestBlockWith setTickTime tracer0 tracer @@ -174,106 +213,112 @@ mkGDDTracerTestBlock :: mkGDDTracerTestBlock = contramap TraceGenesisDDEvent traceEventTestBlockWith :: - (MonadMonotonicTime m) => + MonadMonotonicTime m => (Time -> m ()) -> - Tracer m String -> - -- ^ Underlying, non-time- and tick-aware tracer. To be used only with lines + -- | Underlying, non-time- and tick-aware tracer. To be used only with lines -- that should not be prefixed by time. Tracer m String -> - -- ^ Normal, time- and tick-aware tracer. Should be used by default. + -- | Normal, time- and tick-aware tracer. Should be used by default. + Tracer m String -> TraceEvent TestBlock -> m () traceEventTestBlockWith setTickTime tracer0 tracer = \case - TraceSchedulerEvent traceEvent -> traceSchedulerEventTestBlockWith setTickTime tracer0 tracer traceEvent - TraceScheduledChainSyncServerEvent peerId traceEvent -> traceScheduledChainSyncServerEventTestBlockWith tracer peerId traceEvent - TraceScheduledBlockFetchServerEvent peerId traceEvent -> traceScheduledBlockFetchServerEventTestBlockWith tracer peerId traceEvent - TraceChainDBEvent traceEvent -> traceChainDBEventTestBlockWith tracer traceEvent - TraceChainSyncClientEvent peerId traceEvent -> traceChainSyncClientEventTestBlockWith peerId tracer traceEvent - TraceChainSyncClientTerminationEvent peerId traceEvent -> traceChainSyncClientTerminationEventTestBlockWith peerId tracer traceEvent - TraceBlockFetchClientTerminationEvent peerId traceEvent -> traceBlockFetchClientTerminationEventTestBlockWith peerId tracer traceEvent - TraceGenesisDDEvent gddEvent -> traceWith tracer (terseGDDEvent gddEvent) - TraceChainSyncSendRecvEvent peerId peerType 
traceEvent -> traceChainSyncSendRecvEventTestBlockWith peerId peerType tracer traceEvent - TraceDbfEvent traceEvent -> traceDbjEventWith tracer traceEvent - TraceCsjEvent peerId traceEvent -> traceCsjEventWith peerId tracer traceEvent - TraceOther msg -> traceWith tracer msg + TraceSchedulerEvent traceEvent -> traceSchedulerEventTestBlockWith setTickTime tracer0 tracer traceEvent + TraceScheduledChainSyncServerEvent peerId traceEvent -> traceScheduledChainSyncServerEventTestBlockWith tracer peerId traceEvent + TraceScheduledBlockFetchServerEvent peerId traceEvent -> traceScheduledBlockFetchServerEventTestBlockWith tracer peerId traceEvent + TraceChainDBEvent traceEvent -> traceChainDBEventTestBlockWith tracer traceEvent + TraceChainSyncClientEvent peerId traceEvent -> traceChainSyncClientEventTestBlockWith peerId tracer traceEvent + TraceChainSyncClientTerminationEvent peerId traceEvent -> traceChainSyncClientTerminationEventTestBlockWith peerId tracer traceEvent + TraceBlockFetchClientTerminationEvent peerId traceEvent -> traceBlockFetchClientTerminationEventTestBlockWith peerId tracer traceEvent + TraceGenesisDDEvent gddEvent -> traceWith tracer (terseGDDEvent gddEvent) + TraceChainSyncSendRecvEvent peerId peerType traceEvent -> traceChainSyncSendRecvEventTestBlockWith peerId peerType tracer traceEvent + TraceDbfEvent traceEvent -> traceDbjEventWith tracer traceEvent + TraceCsjEvent peerId traceEvent -> traceCsjEventWith peerId tracer traceEvent + TraceOther msg -> traceWith tracer msg traceSchedulerEventTestBlockWith :: - (MonadMonotonicTime m) => + MonadMonotonicTime m => (Time -> m ()) -> Tracer m String -> Tracer m String -> TraceSchedulerEvent TestBlock -> m () traceSchedulerEventTestBlockWith setTickTime tracer0 tracer = \case - TraceBeginningOfTime -> - traceWith tracer0 "Running point schedule ..." 
- TraceEndOfTime -> - traceLinesWith tracer0 - [ "╶──────────────────────────────────────────────────────────────────────────────╴", - "Finished running point schedule" - ] - TraceExtraDelay delay -> do - time <- getMonotonicTime - traceLinesWith tracer0 - [ "┌──────────────────────────────────────────────────────────────────────────────┐", - "└─ " ++ prettyTime time, - "Waiting an extra delay to keep the simulation running for: " ++ prettyTime (Time delay) - ] - TraceNewTick number duration (Peer pid state) currentChain mCandidateFrag jumpingStates -> do - time <- getMonotonicTime - setTickTime time - traceLinesWith tracer0 - [ "┌──────────────────────────────────────────────────────────────────────────────┐", - "└─ " ++ prettyTime time, - "Tick:", - " number: " ++ show number, - " duration: " ++ show duration, - " peer: " ++ condense pid, - " state: " ++ condense state, - " current chain: " ++ terseHFragment currentChain, - " candidate fragment: " ++ maybe "Nothing" terseHFragment mCandidateFrag, - " jumping states:\n" ++ traceJumpingStates jumpingStates + TraceBeginningOfTime -> + traceWith tracer0 "Running point schedule ..." 
+ TraceEndOfTime -> + traceLinesWith + tracer0 + [ "╶──────────────────────────────────────────────────────────────────────────────╴" + , "Finished running point schedule" + ] + TraceExtraDelay delay -> do + time <- getMonotonicTime + traceLinesWith + tracer0 + [ "┌──────────────────────────────────────────────────────────────────────────────┐" + , "└─ " ++ prettyTime time + , "Waiting an extra delay to keep the simulation running for: " ++ prettyTime (Time delay) + ] + TraceNewTick number duration (Peer pid state) currentChain mCandidateFrag jumpingStates -> do + time <- getMonotonicTime + setTickTime time + traceLinesWith + tracer0 + [ "┌──────────────────────────────────────────────────────────────────────────────┐" + , "└─ " ++ prettyTime time + , "Tick:" + , " number: " ++ show number + , " duration: " ++ show duration + , " peer: " ++ condense pid + , " state: " ++ condense state + , " current chain: " ++ terseHFragment currentChain + , " candidate fragment: " ++ maybe "Nothing" terseHFragment mCandidateFrag + , " jumping states:\n" ++ traceJumpingStates jumpingStates + ] + TraceNodeShutdownStart immTip -> + traceWith tracer (" Initiating node shutdown with immutable tip at slot " ++ condense immTip) + TraceNodeShutdownComplete -> + traceWith tracer " Node shutdown complete" + TraceNodeStartupStart -> + traceWith tracer " Initiating node startup" + TraceNodeStartupComplete selection -> + traceWith tracer (" Node startup complete with selection " ++ terseHFragment selection) + where + traceJumpingStates :: [(PeerId, ChainSyncJumpingState m TestBlock)] -> String + traceJumpingStates = unlines . 
map (\(pid, state) -> " " ++ condense pid ++ ": " ++ traceJumpingState state) + + traceJumpingState :: ChainSyncJumpingState m TestBlock -> String + traceJumpingState = \case + Dynamo initState lastJump -> + let showInitState = case initState of + DynamoStarting ji -> "(DynamoStarting " ++ terseJumpInfo ji ++ ")" + DynamoStarted -> "DynamoStarted" + in unwords ["Dynamo", showInitState, terseWithOrigin show lastJump] + Objector initState goodJumpInfo badPoint -> + unwords + [ "Objector" + , show initState + , terseJumpInfo goodJumpInfo + , tersePoint (castPoint badPoint) ] - TraceNodeShutdownStart immTip -> - traceWith tracer (" Initiating node shutdown with immutable tip at slot " ++ condense immTip) - TraceNodeShutdownComplete -> - traceWith tracer " Node shutdown complete" - TraceNodeStartupStart -> - traceWith tracer " Initiating node startup" - TraceNodeStartupComplete selection -> - traceWith tracer (" Node startup complete with selection " ++ terseHFragment selection) - - where - traceJumpingStates :: [(PeerId, ChainSyncJumpingState m TestBlock)] -> String - traceJumpingStates = unlines . 
map (\(pid, state) -> " " ++ condense pid ++ ": " ++ traceJumpingState state) - - traceJumpingState :: ChainSyncJumpingState m TestBlock -> String - traceJumpingState = \case - Dynamo initState lastJump -> - let showInitState = case initState of - DynamoStarting ji -> "(DynamoStarting " ++ terseJumpInfo ji ++ ")" - DynamoStarted -> "DynamoStarted" - in unwords ["Dynamo", showInitState, terseWithOrigin show lastJump] - Objector initState goodJumpInfo badPoint -> unwords - [ "Objector" - , show initState - , terseJumpInfo goodJumpInfo - , tersePoint (castPoint badPoint) - ] - Disengaged initState -> "Disengaged " ++ show initState - Jumper _ st -> "Jumper _ " ++ traceJumperState st - - traceJumperState :: ChainSyncJumpingJumperState TestBlock -> String - traceJumperState = \case - Happy initState mGoodJumpInfo -> - "Happy " ++ show initState ++ " " ++ maybe "Nothing" terseJumpInfo mGoodJumpInfo - FoundIntersection initState goodJumpInfo point -> unwords + Disengaged initState -> "Disengaged " ++ show initState + Jumper _ st -> "Jumper _ " ++ traceJumperState st + + traceJumperState :: ChainSyncJumpingJumperState TestBlock -> String + traceJumperState = \case + Happy initState mGoodJumpInfo -> + "Happy " ++ show initState ++ " " ++ maybe "Nothing" terseJumpInfo mGoodJumpInfo + FoundIntersection initState goodJumpInfo point -> + unwords [ "(FoundIntersection" , show initState , terseJumpInfo goodJumpInfo - , tersePoint $ castPoint point, ")" + , tersePoint $ castPoint point + , ")" ] - LookingForIntersection goodJumpInfo badJumpInfo -> unwords + LookingForIntersection goodJumpInfo badJumpInfo -> + unwords ["(LookingForIntersection", terseJumpInfo goodJumpInfo, terseJumpInfo badJumpInfo, ")"] traceScheduledServerHandlerEventTestBlockWith :: @@ -282,18 +327,18 @@ traceScheduledServerHandlerEventTestBlockWith :: TraceScheduledServerHandlerEvent (NodeState TestBlock) TestBlock -> m () traceScheduledServerHandlerEventTestBlockWith tracer unit = \case - TraceHandling handler 
state -> - traceLines - [ "handling " ++ handler, - " state is " ++ condense state - ] - TraceRestarting _-> - trace " cannot serve at this point; waiting for node state and starting again" - TraceDoneHandling handler -> - trace $ "done handling " ++ handler - where - trace = traceUnitWith tracer unit - traceLines = traceUnitLinesWith tracer unit + TraceHandling handler state -> + traceLines + [ "handling " ++ handler + , " state is " ++ condense state + ] + TraceRestarting _ -> + trace " cannot serve at this point; waiting for node state and starting again" + TraceDoneHandling handler -> + trace $ "done handling " ++ handler + where + trace = traceUnitWith tracer unit + traceLines = traceUnitLinesWith tracer unit traceScheduledChainSyncServerEventTestBlockWith :: Tracer m String -> @@ -301,40 +346,40 @@ traceScheduledChainSyncServerEventTestBlockWith :: TraceScheduledChainSyncServerEvent (NodeState TestBlock) TestBlock -> m () traceScheduledChainSyncServerEventTestBlockWith tracer peerId = \case - TraceHandlerEventCS traceEvent -> traceScheduledServerHandlerEventTestBlockWith tracer unit traceEvent - TraceLastIntersection point -> - trace $ " last intersection is " ++ tersePoint point - TraceClientIsDone -> - trace "received MsgDoneClient" - TraceIntersectionNotFound -> - trace " no intersection found" - TraceIntersectionFound point -> - trace $ " intersection found: " ++ tersePoint point - TraceRollForward header tip -> - traceLines [ - " gotta serve " ++ terseHeader header, - " tip is " ++ terseTip tip + TraceHandlerEventCS traceEvent -> traceScheduledServerHandlerEventTestBlockWith tracer unit traceEvent + TraceLastIntersection point -> + trace $ " last intersection is " ++ tersePoint point + TraceClientIsDone -> + trace "received MsgDoneClient" + TraceIntersectionNotFound -> + trace " no intersection found" + TraceIntersectionFound point -> + trace $ " intersection found: " ++ tersePoint point + TraceRollForward header tip -> + traceLines + [ " gotta serve " 
++ terseHeader header + , " tip is " ++ terseTip tip ] - TraceRollBackward point tip -> - traceLines [ - " gotta roll back to " ++ tersePoint point, - " new tip is " ++ terseTip tip + TraceRollBackward point tip -> + traceLines + [ " gotta roll back to " ++ tersePoint point + , " new tip is " ++ terseTip tip ] - TraceChainIsFullyServed -> - trace " chain has been fully served" - TraceIntersectionIsHeaderPoint -> - trace " intersection is exactly our header point" - TraceIntersectionIsStrictAncestorOfHeaderPoint fragment -> - traceLines - [ " intersection is before our header point", - " fragment ahead: " ++ terseFragment fragment - ] - TraceIntersectionIsStrictDescendentOfHeaderPoint -> - trace " intersection is further than our header point" - where - unit = "ChainSyncServer " ++ condense peerId - trace = traceUnitWith tracer unit - traceLines = traceUnitLinesWith tracer unit + TraceChainIsFullyServed -> + trace " chain has been fully served" + TraceIntersectionIsHeaderPoint -> + trace " intersection is exactly our header point" + TraceIntersectionIsStrictAncestorOfHeaderPoint fragment -> + traceLines + [ " intersection is before our header point" + , " fragment ahead: " ++ terseFragment fragment + ] + TraceIntersectionIsStrictDescendentOfHeaderPoint -> + trace " intersection is further than our header point" + where + unit = "ChainSyncServer " ++ condense peerId + trace = traceUnitWith tracer unit + traceLines = traceUnitLinesWith tracer unit traceScheduledBlockFetchServerEventTestBlockWith :: Tracer m String -> @@ -342,54 +387,54 @@ traceScheduledBlockFetchServerEventTestBlockWith :: TraceScheduledBlockFetchServerEvent (NodeState TestBlock) TestBlock -> m () traceScheduledBlockFetchServerEventTestBlockWith tracer peerId = \case - TraceHandlerEventBF traceEvent -> traceScheduledServerHandlerEventTestBlockWith tracer unit traceEvent - TraceNoBlocks -> - trace " no blocks available" - TraceStartingBatch fragment -> - trace $ "Starting batch for slice " ++ 
terseFragment fragment - TraceWaitingForRange pointFrom pointTo -> - trace $ "Waiting for next tick for range: " ++ tersePoint pointFrom ++ " -> " ++ tersePoint pointTo - TraceSendingBlock block -> - trace $ "Sending " ++ terseBlock block - TraceBatchIsDone -> - trace "Batch is done" - TraceBlockPointIsBehind -> - trace "BP is behind" - where - unit = "BlockFetchServer " ++ condense peerId - trace = traceUnitWith tracer unit + TraceHandlerEventBF traceEvent -> traceScheduledServerHandlerEventTestBlockWith tracer unit traceEvent + TraceNoBlocks -> + trace " no blocks available" + TraceStartingBatch fragment -> + trace $ "Starting batch for slice " ++ terseFragment fragment + TraceWaitingForRange pointFrom pointTo -> + trace $ "Waiting for next tick for range: " ++ tersePoint pointFrom ++ " -> " ++ tersePoint pointTo + TraceSendingBlock block -> + trace $ "Sending " ++ terseBlock block + TraceBatchIsDone -> + trace "Batch is done" + TraceBlockPointIsBehind -> + trace "BP is behind" + where + unit = "BlockFetchServer " ++ condense peerId + trace = traceUnitWith tracer unit traceChainDBEventTestBlockWith :: - (Monad m) => + Monad m => Tracer m String -> ChainDB.TraceEvent TestBlock -> m () traceChainDBEventTestBlockWith tracer = \case - ChainDB.TraceAddBlockEvent event -> - case event of - AddedToCurrentChain _ _ _ newFragment -> - trace $ "Added to current chain; now: " ++ terseHFragment newFragment - SwitchedToAFork _ _ _ newFragment -> - trace $ "Switched to a fork; now: " ++ terseHFragment newFragment - StoreButDontChange point -> - trace $ "Did not select block due to LoE: " ++ terseRealPoint point - IgnoreBlockOlderThanK point -> - trace $ "Ignored block older than k: " ++ terseRealPoint point - ChainSelectionLoEDebug curChain (LoEEnabled loeFrag0) -> do - trace $ "Current chain: " ++ terseHFragment curChain - trace $ "LoE fragment: " ++ terseHFragment loeFrag0 - ChainSelectionLoEDebug _ LoEDisabled -> - pure () - AddedReprocessLoEBlocksToQueue -> - trace $ 
"Requested ChainSel run" - _ -> pure () - ChainDB.TraceChainSelStarvationEvent (ChainDB.ChainSelStarvation RisingEdge) -> - trace "ChainSel starvation started" - ChainDB.TraceChainSelStarvationEvent (ChainDB.ChainSelStarvation (FallingEdgeWith pt)) -> - trace $ "ChainSel starvation ended thanks to " ++ terseRealPoint pt - _ -> pure () - where - trace = traceUnitWith tracer "ChainDB" + ChainDB.TraceAddBlockEvent event -> + case event of + AddedToCurrentChain _ _ _ newFragment -> + trace $ "Added to current chain; now: " ++ terseHFragment newFragment + SwitchedToAFork _ _ _ newFragment -> + trace $ "Switched to a fork; now: " ++ terseHFragment newFragment + StoreButDontChange point -> + trace $ "Did not select block due to LoE: " ++ terseRealPoint point + IgnoreBlockOlderThanK point -> + trace $ "Ignored block older than k: " ++ terseRealPoint point + ChainSelectionLoEDebug curChain (LoEEnabled loeFrag0) -> do + trace $ "Current chain: " ++ terseHFragment curChain + trace $ "LoE fragment: " ++ terseHFragment loeFrag0 + ChainSelectionLoEDebug _ LoEDisabled -> + pure () + AddedReprocessLoEBlocksToQueue -> + trace $ "Requested ChainSel run" + _ -> pure () + ChainDB.TraceChainSelStarvationEvent (ChainDB.ChainSelStarvation RisingEdge) -> + trace "ChainSel starvation started" + ChainDB.TraceChainSelStarvationEvent (ChainDB.ChainSelStarvation (FallingEdgeWith pt)) -> + trace $ "ChainSel starvation ended thanks to " ++ terseRealPoint pt + _ -> pure () + where + trace = traceUnitWith tracer "ChainDB" traceChainSyncClientEventTestBlockWith :: PeerId -> @@ -397,52 +442,54 @@ traceChainSyncClientEventTestBlockWith :: TraceChainSyncClientEvent TestBlock -> m () traceChainSyncClientEventTestBlockWith pid tracer = \case - TraceRolledBack point -> - trace $ "Rolled back to: " ++ tersePoint point - TraceFoundIntersection point _ourTip _theirTip -> - trace $ "Found intersection at: " ++ tersePoint point - TraceWaitingBeyondForecastHorizon slot -> - trace $ "Waiting for " ++ show slot 
++ " beyond forecast horizon" - TraceAccessingForecastHorizon slot -> - trace $ "Accessing " ++ show slot ++ ", previously beyond forecast horizon" - TraceValidatedHeader header -> - trace $ "Validated header: " ++ terseHeader header - TraceDownloadedHeader header -> - trace $ "Downloaded header: " ++ terseHeader header - TraceGaveLoPToken didGive header bestBlockNo -> - trace $ - (if didGive then "Gave" else "Did not give") - ++ " LoP token to " ++ terseHeader header - ++ " compared to " ++ show bestBlockNo - TraceException exception -> - trace $ "Threw an exception: " ++ show exception - TraceTermination result -> - trace $ "Terminated with result: " ++ show result - TraceOfferJump point -> - trace $ "Offering jump to " ++ tersePoint point - TraceJumpResult (AcceptedJump (JumpTo ji)) -> - trace $ "Accepted jump to " ++ terseJumpInfo ji - TraceJumpResult (RejectedJump (JumpTo ji)) -> - trace $ "Rejected jump to " ++ terseJumpInfo ji - TraceJumpResult (AcceptedJump (JumpToGoodPoint ji)) -> - trace $ "Accepted jump to good point: " ++ terseJumpInfo ji - TraceJumpResult (RejectedJump (JumpToGoodPoint ji)) -> - trace $ "Rejected jump to good point: " ++ terseJumpInfo ji - TraceJumpingWaitingForNextInstruction -> - trace "Waiting for next instruction from the jumping governor" - TraceJumpingInstructionIs instr -> - trace $ "Received instruction: " ++ showInstr instr - TraceDrainingThePipe n -> - trace $ "Draining the pipe, remaining messages: " ++ show n - where - trace = traceUnitWith tracer ("ChainSyncClient " ++ condense pid) - - showInstr :: Instruction TestBlock -> String - showInstr = \case - JumpInstruction (JumpTo ji) -> "JumpTo " ++ terseJumpInfo ji - JumpInstruction (JumpToGoodPoint ji) -> "JumpToGoodPoint " ++ terseJumpInfo ji - RunNormally -> "RunNormally" - Restart -> "Restart" + TraceRolledBack point -> + trace $ "Rolled back to: " ++ tersePoint point + TraceFoundIntersection point _ourTip _theirTip -> + trace $ "Found intersection at: " ++ tersePoint 
point + TraceWaitingBeyondForecastHorizon slot -> + trace $ "Waiting for " ++ show slot ++ " beyond forecast horizon" + TraceAccessingForecastHorizon slot -> + trace $ "Accessing " ++ show slot ++ ", previously beyond forecast horizon" + TraceValidatedHeader header -> + trace $ "Validated header: " ++ terseHeader header + TraceDownloadedHeader header -> + trace $ "Downloaded header: " ++ terseHeader header + TraceGaveLoPToken didGive header bestBlockNo -> + trace $ + (if didGive then "Gave" else "Did not give") + ++ " LoP token to " + ++ terseHeader header + ++ " compared to " + ++ show bestBlockNo + TraceException exception -> + trace $ "Threw an exception: " ++ show exception + TraceTermination result -> + trace $ "Terminated with result: " ++ show result + TraceOfferJump point -> + trace $ "Offering jump to " ++ tersePoint point + TraceJumpResult (AcceptedJump (JumpTo ji)) -> + trace $ "Accepted jump to " ++ terseJumpInfo ji + TraceJumpResult (RejectedJump (JumpTo ji)) -> + trace $ "Rejected jump to " ++ terseJumpInfo ji + TraceJumpResult (AcceptedJump (JumpToGoodPoint ji)) -> + trace $ "Accepted jump to good point: " ++ terseJumpInfo ji + TraceJumpResult (RejectedJump (JumpToGoodPoint ji)) -> + trace $ "Rejected jump to good point: " ++ terseJumpInfo ji + TraceJumpingWaitingForNextInstruction -> + trace "Waiting for next instruction from the jumping governor" + TraceJumpingInstructionIs instr -> + trace $ "Received instruction: " ++ showInstr instr + TraceDrainingThePipe n -> + trace $ "Draining the pipe, remaining messages: " ++ show n + where + trace = traceUnitWith tracer ("ChainSyncClient " ++ condense pid) + + showInstr :: Instruction TestBlock -> String + showInstr = \case + JumpInstruction (JumpTo ji) -> "JumpTo " ++ terseJumpInfo ji + JumpInstruction (JumpToGoodPoint ji) -> "JumpToGoodPoint " ++ terseJumpInfo ji + RunNormally -> "RunNormally" + Restart -> "Restart" terseJumpInfo :: JumpInfo TestBlock -> String terseJumpInfo ji = tersePoint (castPoint $ 
headPoint $ jTheirFragment ji) @@ -453,16 +500,16 @@ traceChainSyncClientTerminationEventTestBlockWith :: TraceChainSyncClientTerminationEvent -> m () traceChainSyncClientTerminationEventTestBlockWith pid tracer = \case - TraceExceededSizeLimitCS -> - trace "Terminated because of size limit exceeded." - TraceExceededTimeLimitCS -> - trace "Terminated because of time limit exceeded." - TraceTerminatedByGDDGovernor -> - trace "Terminated by the GDD governor." - TraceTerminatedByLoP -> - trace "Terminated by the limit on patience." - where - trace = traceUnitWith tracer ("ChainSyncClient " ++ condense pid) + TraceExceededSizeLimitCS -> + trace "Terminated because of size limit exceeded." + TraceExceededTimeLimitCS -> + trace "Terminated because of time limit exceeded." + TraceTerminatedByGDDGovernor -> + trace "Terminated by the GDD governor." + TraceTerminatedByLoP -> + trace "Terminated by the limit on patience." + where + trace = traceUnitWith tracer ("ChainSyncClient " ++ condense pid) traceBlockFetchClientTerminationEventTestBlockWith :: PeerId -> @@ -470,12 +517,12 @@ traceBlockFetchClientTerminationEventTestBlockWith :: TraceBlockFetchClientTerminationEvent -> m () traceBlockFetchClientTerminationEventTestBlockWith pid tracer = \case - TraceExceededSizeLimitBF -> - trace "Terminated because of size limit exceeded." - TraceExceededTimeLimitBF -> - trace "Terminated because of time limit exceeded." - where - trace = traceUnitWith tracer ("BlockFetchClient " ++ condense pid) + TraceExceededSizeLimitBF -> + trace "Terminated because of size limit exceeded." + TraceExceededTimeLimitBF -> + trace "Terminated because of time limit exceeded." + where + trace = traceUnitWith tracer ("BlockFetchClient " ++ condense pid) -- | Trace all the SendRecv events of the ChainSync mini-protocol. 
traceChainSyncSendRecvEventTestBlockWith :: @@ -486,29 +533,32 @@ traceChainSyncSendRecvEventTestBlockWith :: TraceSendRecv (ChainSync (Header TestBlock) (Point TestBlock) (Tip TestBlock)) -> m () traceChainSyncSendRecvEventTestBlockWith pid ptp tracer = \case - TraceSendMsg amsg -> traceMsg "send" amsg - TraceRecvMsg amsg -> traceMsg "recv" amsg - where - -- This can be very verbose and is only useful in rare situations, so it - -- does nothing by default. - -- trace = traceUnitWith tracer ("ChainSync " ++ condense pid) . ((ptp ++ " ") ++) - trace = (\_ _ _ -> const (pure ())) pid ptp tracer - traceMsg kd amsg = trace $ kd ++ " " ++ case amsg of - AnyMessage msg -> case msg of - MsgRequestNext -> "MsgRequestNext" - MsgAwaitReply -> "MsgAwaitReply" - MsgRollForward header tip -> "MsgRollForward " ++ terseHeader header ++ " " ++ terseTip tip - MsgRollBackward point tip -> "MsgRollBackward " ++ tersePoint point ++ " " ++ terseTip tip - MsgFindIntersect points -> "MsgFindIntersect [" ++ unwords (map tersePoint points) ++ "]" - MsgIntersectFound point tip -> "MsgIntersectFound " ++ tersePoint point ++ " " ++ terseTip tip - MsgIntersectNotFound tip -> "MsgIntersectNotFound " ++ terseTip tip - MsgDone -> "MsgDone" + TraceSendMsg amsg -> traceMsg "send" amsg + TraceRecvMsg amsg -> traceMsg "recv" amsg + where + -- This can be very verbose and is only useful in rare situations, so it + -- does nothing by default. + -- trace = traceUnitWith tracer ("ChainSync " ++ condense pid) . 
((ptp ++ " ") ++) + trace = (\_ _ _ -> const (pure ())) pid ptp tracer + traceMsg kd amsg = + trace $ + kd ++ " " ++ case amsg of + AnyMessage msg -> case msg of + MsgRequestNext -> "MsgRequestNext" + MsgAwaitReply -> "MsgAwaitReply" + MsgRollForward header tip -> "MsgRollForward " ++ terseHeader header ++ " " ++ terseTip tip + MsgRollBackward point tip -> "MsgRollBackward " ++ tersePoint point ++ " " ++ terseTip tip + MsgFindIntersect points -> "MsgFindIntersect [" ++ unwords (map tersePoint points) ++ "]" + MsgIntersectFound point tip -> "MsgIntersectFound " ++ tersePoint point ++ " " ++ terseTip tip + MsgIntersectNotFound tip -> "MsgIntersectNotFound " ++ terseTip tip + MsgDone -> "MsgDone" traceDbjEventWith :: Tracer m String -> TraceEventDbf PeerId -> m () -traceDbjEventWith tracer = traceWith tracer . \case +traceDbjEventWith tracer = + traceWith tracer . \case RotatedDynamo old new -> "Rotated dynamo from " ++ condense old ++ " to " ++ condense new traceCsjEventWith :: @@ -516,95 +566,114 @@ traceCsjEventWith :: Tracer m String -> TraceEventCsj PeerId TestBlock -> m () -traceCsjEventWith peer tracer = f . \case +traceCsjEventWith peer tracer = + f . 
\case BecomingObjector mbOld -> "is now the Objector" ++ replacing mbOld BlockedOnJump -> "is a happy Jumper blocked on the next CSJ instruction" InitializedAsDynamo -> "initialized as the Dynamo" NoLongerDynamo mbNew reason -> g reason ++ " and so is no longer the Dynamo" ++ replacedBy mbNew NoLongerObjector mbNew reason -> g reason ++ " and so is no longer the Objector" ++ replacedBy mbNew SentJumpInstruction p -> "instructed Jumpers to " ++ tersePoint p - where - f = traceUnitWith tracer ("CSJ " ++ condense peer) + where + f = traceUnitWith tracer ("CSJ " ++ condense peer) - g = \case - BecauseCsjDisconnect -> "disconnected" - BecauseCsjDisengage -> "disengaged" + g = \case + BecauseCsjDisconnect -> "disconnected" + BecauseCsjDisengage -> "disengaged" - replacedBy = \case - Nothing -> "" - Just new -> ", replaced by: " ++ condense new + replacedBy = \case + Nothing -> "" + Just new -> ", replaced by: " ++ condense new - replacing = \case - Nothing -> "" - Just old -> ", replacing: " ++ condense old + replacing = \case + Nothing -> "" + Just old -> ", replacing: " ++ condense old prettyDensityBounds :: [(PeerId, DensityBounds TestBlock)] -> [String] prettyDensityBounds bounds = showPeers (second showBounds <$> bounds) - where - showBounds DensityBounds {clippedFragment, offersMoreThanK, lowerBound, upperBound, hasBlockAfter, latestSlot, idling} = - show lowerBound ++ "/" ++ show upperBound ++ "[" ++ more ++ "], " ++ - lastPoint ++ "latest: " ++ showLatestSlot latestSlot ++ block ++ showIdling - where - more = if offersMoreThanK then "+" else " " - - block = if hasBlockAfter then ", has header after sgen" else " " - - -- Note: At some point, I changed this to use @headPoint@ erroneously, so to be clear about what this signifies: - -- The first point after the anchor (which is returned by @lastPoint@, clearly) is used for the condition that - -- the density comparison should not be applied to two peers if they share any headers after the LoE fragment. 
- lastPoint = - "point: " ++ - tersePoint (castPoint @(Header TestBlock) @TestBlock (AF.lastPoint clippedFragment)) ++ - ", " - - showLatestSlot = \case - Origin -> "unknown" - NotOrigin (SlotNo slot) -> show slot - - showIdling | idling = ", idling" - | otherwise = "" + where + showBounds + DensityBounds + { clippedFragment + , offersMoreThanK + , lowerBound + , upperBound + , hasBlockAfter + , latestSlot + , idling + } = + show lowerBound + ++ "/" + ++ show upperBound + ++ "[" + ++ more + ++ "], " + ++ lastPoint + ++ "latest: " + ++ showLatestSlot latestSlot + ++ block + ++ showIdling + where + more = if offersMoreThanK then "+" else " " + + block = if hasBlockAfter then ", has header after sgen" else " " + + -- Note: At some point, I changed this to use @headPoint@ erroneously, so to be clear about what this signifies: + -- The first point after the anchor (which is returned by @lastPoint@, clearly) is used for the condition that + -- the density comparison should not be applied to two peers if they share any headers after the LoE fragment. 
+ lastPoint = + "point: " + ++ tersePoint (castPoint @(Header TestBlock) @TestBlock (AF.lastPoint clippedFragment)) + ++ ", " + + showLatestSlot = \case + Origin -> "unknown" + NotOrigin (SlotNo slot) -> show slot + + showIdling + | idling = ", idling" + | otherwise = "" showPeers :: [(PeerId, String)] -> [String] -showPeers = map (\ (peer, v) -> " " ++ condense peer ++ ": " ++ v) +showPeers = map (\(peer, v) -> " " ++ condense peer ++ ": " ++ v) -- * Other utilities terseGDDEvent :: TraceGDDEvent PeerId TestBlock -> String terseGDDEvent = \case TraceGDDDisconnected peers -> "GDD | Disconnected " <> show (NE.toList peers) - TraceGDDDebug GDDDebugInfo { - sgen = GenesisWindow sgen - , curChain, bounds - , candidates - , candidateSuffixes - , losingPeers - , loeHead - } -> - unlines $ [ - "GDD | Window: " ++ window sgen loeHead, - " Selection: " ++ terseHFragment curChain, - " Candidates:" - ] ++ - showPeers (second (tersePoint . castPoint . AF.headPoint) <$> candidates) ++ - [ - " Candidate suffixes (bounds):" - ] ++ - showPeers (second (terseHFragment . clippedFragment) <$> bounds) ++ - [" Density bounds:"] ++ - prettyDensityBounds bounds ++ - [" New candidate tips:"] ++ - showPeers (second (tersePoint . castPoint . AF.headPoint) <$> candidateSuffixes) ++ - [ - " Losing peers: " ++ show losingPeers, - " Setting loeFrag: " ++ terseAnchor (AF.castAnchor loeHead) - ] - where - - window sgen loeHead = - show winStart ++ " -> " ++ show winEnd - where - winEnd = winStart + sgen - 1 - SlotNo winStart = succWithOrigin (AF.anchorToSlotNo loeHead) + TraceGDDDebug + GDDDebugInfo + { sgen = GenesisWindow sgen + , curChain + , bounds + , candidates + , candidateSuffixes + , losingPeers + , loeHead + } -> + unlines $ + [ "GDD | Window: " ++ window sgen loeHead + , " Selection: " ++ terseHFragment curChain + , " Candidates:" + ] + ++ showPeers (second (tersePoint . castPoint . 
AF.headPoint) <$> candidates) + ++ [ " Candidate suffixes (bounds):" + ] + ++ showPeers (second (terseHFragment . clippedFragment) <$> bounds) + ++ [" Density bounds:"] + ++ prettyDensityBounds bounds + ++ [" New candidate tips:"] + ++ showPeers (second (tersePoint . castPoint . AF.headPoint) <$> candidateSuffixes) + ++ [ " Losing peers: " ++ show losingPeers + , " Setting loeFrag: " ++ terseAnchor (AF.castAnchor loeHead) + ] + where + window sgen loeHead = + show winStart ++ " -> " ++ show winEnd + where + winEnd = winStart + sgen - 1 + SlotNo winStart = succWithOrigin (AF.anchorToSlotNo loeHead) prettyTime :: Time -> String prettyTime (Time time) = @@ -612,7 +681,7 @@ prettyTime (Time time) = milliseconds = ps `quot` 1_000_000_000 seconds = milliseconds `quot` 1_000 minutes = seconds `quot` 60 - in printf "%02d:%02d.%03d" minutes (seconds `rem` 60) (milliseconds `rem` 1_000) + in printf "%02d:%02d.%03d" minutes (seconds `rem` 60) (milliseconds `rem` 1_000) traceLinesWith :: Tracer m String -> diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule.hs index 399d9a3397..7f114ef751 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule.hs @@ -20,8 +20,8 @@ -- The state in the current tick determines the actions that the peer is allowed to perform, -- and once it fulfills the state's criteria, it yields control back to the scheduler, -- who then activates the next tick's peer. -module Test.Consensus.PointSchedule ( - BlockFetchTimeout (..) +module Test.Consensus.PointSchedule + ( BlockFetchTimeout (..) , CSJParams (..) , DowntimeParams (..) , ForecastRange (..) 
@@ -47,58 +47,85 @@ module Test.Consensus.PointSchedule ( , uniformPoints ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Cardano.Slotting.Time (SlotLength) -import Control.Monad (replicateM) -import Control.Monad.Class.MonadTime.SI (Time (Time), addTime, - diffTime) -import Control.Monad.ST (ST) -import Data.Bifunctor (first) -import Data.Functor (($>)) -import Data.List (mapAccumL, partition, scanl') -import qualified Data.Map.Strict as Map -import Data.Maybe (catMaybes, fromMaybe, mapMaybe) -import Data.Time (DiffTime) -import Data.Word (Word64) -import Ouroboros.Consensus.Block.Abstract (withOriginToMaybe) -import Ouroboros.Consensus.Ledger.SupportsProtocol - (GenesisWindow (..)) -import Ouroboros.Consensus.Network.NodeToNode (ChainSyncTimeout (..)) -import Ouroboros.Consensus.Protocol.Abstract - (SecurityParam (SecurityParam), maxRollbacks) -import Ouroboros.Consensus.Util.Condense (CondenseList (..), - PaddingDirection (..), condenseListWithPadding) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (SlotNo (..), blockSlot) -import Ouroboros.Network.Point (withOrigin) -import qualified System.Random.Stateful as Random -import System.Random.Stateful (STGenM, StatefulGen, runSTGen_) -import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..), - allFragments, prettyBlockTree) -import Test.Consensus.PeerSimulator.StateView (StateView) -import Test.Consensus.PointSchedule.NodeState (NodeState (..), - genesisNodeState) -import Test.Consensus.PointSchedule.Peers (Peer (..), PeerId, - Peers (..), getPeerIds, peers', peersList) -import Test.Consensus.PointSchedule.SinglePeer - (IsTrunk (IsBranch, IsTrunk), PeerScheduleParams (..), - SchedulePoint (..), defaultPeerScheduleParams, mergeOn, - peerScheduleFromTipPoints, schedulePointToBlock) -import Test.Consensus.PointSchedule.SinglePeer.Indices - (uniformRMDiffTime) -import Test.Ouroboros.Consensus.ChainGenerator.Params (Delta (Delta)) -import 
Test.QuickCheck (Gen, arbitrary) -import Test.QuickCheck.Random (QCGen) -import Test.Util.TersePrinting (terseFragment) -import Test.Util.TestBlock (TestBlock) -import Text.Printf (printf) +import Cardano.Ledger.BaseTypes (unNonZero) +import Cardano.Slotting.Time (SlotLength) +import Control.Monad (replicateM) +import Control.Monad.Class.MonadTime.SI + ( Time (Time) + , addTime + , diffTime + ) +import Control.Monad.ST (ST) +import Data.Bifunctor (first) +import Data.Functor (($>)) +import Data.List (mapAccumL, partition, scanl') +import Data.Map.Strict qualified as Map +import Data.Maybe (catMaybes, fromMaybe, mapMaybe) +import Data.Time (DiffTime) +import Data.Word (Word64) +import Ouroboros.Consensus.Block.Abstract (withOriginToMaybe) +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( GenesisWindow (..) + ) +import Ouroboros.Consensus.Network.NodeToNode (ChainSyncTimeout (..)) +import Ouroboros.Consensus.Protocol.Abstract + ( SecurityParam (SecurityParam) + , maxRollbacks + ) +import Ouroboros.Consensus.Util.Condense + ( CondenseList (..) + , PaddingDirection (..) + , condenseListWithPadding + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (SlotNo (..), blockSlot) +import Ouroboros.Network.Point (withOrigin) +import System.Random.Stateful (STGenM, StatefulGen, runSTGen_) +import System.Random.Stateful qualified as Random +import Test.Consensus.BlockTree + ( BlockTree (..) + , BlockTreeBranch (..) + , allFragments + , prettyBlockTree + ) +import Test.Consensus.PeerSimulator.StateView (StateView) +import Test.Consensus.PointSchedule.NodeState + ( NodeState (..) + , genesisNodeState + ) +import Test.Consensus.PointSchedule.Peers + ( Peer (..) + , PeerId + , Peers (..) + , getPeerIds + , peers' + , peersList + ) +import Test.Consensus.PointSchedule.SinglePeer + ( IsTrunk (IsBranch, IsTrunk) + , PeerScheduleParams (..) + , SchedulePoint (..) 
+ , defaultPeerScheduleParams + , mergeOn + , peerScheduleFromTipPoints + , schedulePointToBlock + ) +import Test.Consensus.PointSchedule.SinglePeer.Indices + ( uniformRMDiffTime + ) +import Test.Ouroboros.Consensus.ChainGenerator.Params (Delta (Delta)) +import Test.QuickCheck (Gen, arbitrary) +import Test.QuickCheck.Random (QCGen) +import Test.Util.TersePrinting (terseFragment) +import Test.Util.TestBlock (TestBlock) +import Text.Printf (printf) prettyPointSchedule :: forall blk. - (CondenseList (NodeState blk)) => + CondenseList (NodeState blk) => PointSchedule blk -> [String] -prettyPointSchedule ps@PointSchedule {psStartOrder, psMinEndTime} = +prettyPointSchedule ps@PointSchedule{psStartOrder, psMinEndTime} = [] ++ [ "psSchedule =" ] @@ -110,15 +137,15 @@ prettyPointSchedule ps@PointSchedule {psStartOrder, psMinEndTime} = (showDT . fst . snd <$> numberedPeersStates) (condenseList $ (snd . snd) <$> numberedPeersStates) ) - ++ [ "psStartOrder = " ++ show psStartOrder, - "psMinEndTime = " ++ show psMinEndTime + ++ [ "psStartOrder = " ++ show psStartOrder + , "psMinEndTime = " ++ show psMinEndTime ] - where - numberedPeersStates :: [(Int, (Time, Peer (NodeState blk)))] - numberedPeersStates = zip [0 ..] (peersStates ps) + where + numberedPeersStates :: [(Int, (Time, Peer (NodeState blk)))] + numberedPeersStates = zip [0 ..] 
(peersStates ps) - showDT :: Time -> String - showDT (Time dt) = printf "%.6f" (realToFrac dt :: Double) + showDT :: Time -> String + showDT (Time dt) = printf "%.6f" (realToFrac dt :: Double) ---------------------------------------------------------------------------------------------------- -- Conversion to 'PointSchedule' @@ -135,15 +162,15 @@ prettyPointSchedule ps@PointSchedule {psStartOrder, psMinEndTime} = -- -- TODO Remove dropping the first state in favor of better GDD logic peerStates :: Peer (PeerSchedule blk) -> [(Time, Peer (NodeState blk))] -peerStates Peer {name, value = schedulePoints} = +peerStates Peer{name, value = schedulePoints} = drop 1 (zip (Time 0 : times) (Peer name <$> scanl' modPoint genesisNodeState points)) - where - modPoint z = \case - ScheduleTipPoint nsTip -> z {nsTip} - ScheduleHeaderPoint nsHeader -> z {nsHeader} - ScheduleBlockPoint nsBlock -> z {nsBlock} + where + modPoint z = \case + ScheduleTipPoint nsTip -> z{nsTip} + ScheduleHeaderPoint nsHeader -> z{nsHeader} + ScheduleBlockPoint nsBlock -> z{nsBlock} - (times, points) = unzip schedulePoints + (times, points) = unzip schedulePoints -- | Convert several @SinglePeer@ schedules to a common 'NodeState' schedule. -- @@ -157,7 +184,7 @@ peersStates PointSchedule{psSchedule} = peersStatesRelative :: PointSchedule blk -> [(DiffTime, Peer (NodeState blk))] peersStatesRelative peers = let (starts, states) = unzip $ peersStates peers - durations = snd (mapAccumL (\ prev start -> (start, diffTime start prev)) (Time 0) (drop 1 starts)) ++ [0.1] + durations = snd (mapAccumL (\prev start -> (start, diffTime start prev)) (Time 0) (drop 1 starts)) ++ [0.1] in zip durations states type PeerSchedule blk = [(Time, SchedulePoint blk)] @@ -166,18 +193,18 @@ type PeerSchedule blk = [(Time, SchedulePoint blk)] peerScheduleBlocks :: (PeerSchedule blk) -> [blk] peerScheduleBlocks = mapMaybe (withOriginToMaybe . schedulePointToBlock . 
snd) -data PointSchedule blk = PointSchedule { - -- | The actual point schedule - psSchedule :: Peers (PeerSchedule blk), - -- | The order in which the peers start and connect to the node under test. - -- The peers that are absent from 'psSchedule' are ignored; the peers from - -- 'psSchedule' that are absent of 'psStartOrder' are started in the end in - -- the order of 'PeerId'. - psStartOrder :: [PeerId], - -- | Minimum duration for the simulation of this point schedule. - -- If no point in the schedule is larger than 'psMinEndTime', - -- the simulation will still run until this time is reached. - psMinEndTime :: Time +data PointSchedule blk = PointSchedule + { psSchedule :: Peers (PeerSchedule blk) + -- ^ The actual point schedule + , psStartOrder :: [PeerId] + -- ^ The order in which the peers start and connect to the node under test. + -- The peers that are absent from 'psSchedule' are ignored; the peers from + -- 'psSchedule' that are absent of 'psStartOrder' are started in the end in + -- the order of 'PeerId'. + , psMinEndTime :: Time + -- ^ Minimum duration for the simulation of this point schedule. + -- If no point in the schedule is larger than 'psMinEndTime', + -- the simulation will still run until this time is reached. } -- | List of all blocks appearing in the schedules. 
@@ -199,25 +226,32 @@ longRangeAttack :: BlockTree blk -> g -> m (PointSchedule blk) -longRangeAttack BlockTree {btTrunk, btBranches = [branch]} g = do +longRangeAttack BlockTree{btTrunk, btBranches = [branch]} g = do honest <- peerScheduleFromTipPoints g honParams [(IsTrunk, [AF.length btTrunk - 1])] btTrunk [] - adv <- peerScheduleFromTipPoints g advParams [(IsBranch, [AF.length (btbFull branch) - 1])] btTrunk [btbFull branch] - pure $ shiftPointSchedule $ PointSchedule { - psSchedule = peers' [honest] [adv], - psStartOrder = [], - psMinEndTime = Time 0 - } - where - honParams = defaultPeerScheduleParams {pspHeaderDelayInterval = (0.3, 0.4)} - advParams = defaultPeerScheduleParams {pspTipDelayInterval = (0, 0.1)} - + adv <- + peerScheduleFromTipPoints + g + advParams + [(IsBranch, [AF.length (btbFull branch) - 1])] + btTrunk + [btbFull branch] + pure $ + shiftPointSchedule $ + PointSchedule + { psSchedule = peers' [honest] [adv] + , psStartOrder = [] + , psMinEndTime = Time 0 + } + where + honParams = defaultPeerScheduleParams{pspHeaderDelayInterval = (0.3, 0.4)} + advParams = defaultPeerScheduleParams{pspTipDelayInterval = (0, 0.1)} longRangeAttack _ _ = error "longRangeAttack can only deal with single adversary" -data PointsGeneratorParams = PointsGeneratorParams { - pgpExtraHonestPeers :: Int, - pgpDowntime :: DowntimeParams -} +data PointsGeneratorParams = PointsGeneratorParams + { pgpExtraHonestPeers :: Int + , pgpDowntime :: DowntimeParams + } data DowntimeParams = NoDowntime | DowntimeWithSecurityParam SecurityParam @@ -227,9 +261,9 @@ uniformPoints :: BlockTree blk -> g -> m (PointSchedule blk) -uniformPoints PointsGeneratorParams {pgpExtraHonestPeers, pgpDowntime} bt = +uniformPoints PointsGeneratorParams{pgpExtraHonestPeers, pgpDowntime} bt = fmap shiftPointSchedule . 
case pgpDowntime of - NoDowntime -> + NoDowntime -> uniformPointsWithExtraHonestPeers pgpExtraHonestPeers bt DowntimeWithSecurityParam k -> uniformPointsWithExtraHonestPeersAndDowntime pgpExtraHonestPeers k bt @@ -243,24 +277,22 @@ uniformPoints PointsGeneratorParams {pgpExtraHonestPeers, pgpDowntime} bt = -- is not possible if the adversary's first tip point is delayed by 20 or -- more seconds due to being in a later slot. shiftPointSchedule :: PointSchedule blk -> PointSchedule blk -shiftPointSchedule s = s {psSchedule = shiftPeerSchedule <$> psSchedule s} - where - shiftPeerSchedule :: PeerSchedule blk -> PeerSchedule blk - shiftPeerSchedule times = map (first shiftTime) times - where - shiftTime :: Time -> Time - shiftTime t = addTime (- firstTipOffset) t - - firstTipOffset :: DiffTime - firstTipOffset = case times of [] -> 0; ((Time dt, _) : _) -> dt +shiftPointSchedule s = s{psSchedule = shiftPeerSchedule <$> psSchedule s} + where + shiftPeerSchedule :: PeerSchedule blk -> PeerSchedule blk + shiftPeerSchedule times = map (first shiftTime) times + where + shiftTime :: Time -> Time + shiftTime t = addTime (-firstTipOffset) t + firstTipOffset :: DiffTime + firstTipOffset = case times of [] -> 0; ((Time dt, _) : _) -> dt -- | Generate a schedule in which the trunk is served by @pgpExtraHonestPeers + 1@ peers, -- and extra branches are served by one peer each, using a single tip point, -- without specifically assigned delay intervals like in 'newLongRangeAttack'. -- -- Include rollbacks in a percentage of adversaries, in which case that peer uses two branchs. --- uniformPointsWithExtraHonestPeers :: forall g m blk. (StatefulGen g m, AF.HasHeader blk) => @@ -269,24 +301,25 @@ uniformPointsWithExtraHonestPeers :: g -> m (PointSchedule blk) uniformPointsWithExtraHonestPeers - extraHonestPeers - BlockTree {btTrunk, btBranches} - g - = do - honestTip0 <- firstTip btTrunk - honests <- replicateM (extraHonestPeers + 1) $ - mkSchedule [(IsTrunk, [honestTip0 .. 
AF.length btTrunk - 1])] [] - advs <- takeBranches btBranches - let psSchedule = peers' honests advs - psStartOrder <- shuffle (getPeerIds psSchedule) - pure $ PointSchedule {psSchedule, psStartOrder, psMinEndTime = Time 0} - where + extraHonestPeers + BlockTree{btTrunk, btBranches} + g = + do + honestTip0 <- firstTip btTrunk + honests <- + replicateM (extraHonestPeers + 1) $ + mkSchedule [(IsTrunk, [honestTip0 .. AF.length btTrunk - 1])] [] + advs <- takeBranches btBranches + let psSchedule = peers' honests advs + psStartOrder <- shuffle (getPeerIds psSchedule) + pure $ PointSchedule{psSchedule, psStartOrder, psMinEndTime = Time 0} + where takeBranches = \case - [] -> pure [] - [b] -> pure <$> withoutRollback b - b1 : b2 : branches -> do - a <- Random.uniformDouble01M g - if a < rollbackProb + [] -> pure [] + [b] -> pure <$> withoutRollback b + b1 : b2 : branches -> do + a <- Random.uniformDouble01M g + if a < rollbackProb then do this <- withRollback b1 b2 rest <- takeBranches branches @@ -313,11 +346,11 @@ uniformPointsWithExtraHonestPeers tip0 <- firstTip (btbFull branch) let (pre, post) = partition (< firstSuffixBlock) [tip0 .. 
lastBlock] pure ((if null pre then [] else [(IsTrunk, pre)]) ++ [(IsBranch, (shift <$> post))]) - where - shift i = i - firstSuffixBlock - firstSuffixBlock = lastBlock - AF.length (btbSuffix branch) + 1 - lastBlock = AF.length full - 1 - full = btbFull branch + where + shift i = i - firstSuffixBlock + firstSuffixBlock = lastBlock - AF.length (btbSuffix branch) + 1 + lastBlock = AF.length full - 1 + full = btbFull branch firstTip frag = pure (AF.length frag - 1) @@ -326,7 +359,11 @@ uniformPointsWithExtraHonestPeers tipU <- uniformRMDiffTime (1, 2) g headerL <- uniformRMDiffTime (0.018, 0.03) g headerU <- uniformRMDiffTime (0.021, 0.04) g - pure defaultPeerScheduleParams {pspTipDelayInterval = (tipL, tipU), pspHeaderDelayInterval = (headerL, headerU)} + pure + defaultPeerScheduleParams + { pspTipDelayInterval = (tipL, tipU) + , pspHeaderDelayInterval = (headerL, headerU) + } rollbackProb = 0.2 @@ -336,26 +373,27 @@ uniformPointsWithExtraHonestPeers shuffle xs = do i <- Random.uniformRM (0, length xs - 1) g let x = xs !! i - xs' = take i xs ++ drop (i+1) xs + xs' = take i xs ++ drop (i + 1) xs (x :) <$> shuffle xs' minusClamp :: (Ord a, Num a) => a -> a -> a -minusClamp a b | a <= b = 0 - | otherwise = a - b +minusClamp a b + | a <= b = 0 + | otherwise = a - b -zipPadN :: forall a . [[a]] -> [[Maybe a]] +zipPadN :: forall a. [[a]] -> [[Maybe a]] zipPadN = spin [] - where - spin acc as - | all null as - = reverse acc - | let (h, t) = unzip (takeNext <$> as) - = spin (h : acc) t + where + spin acc as + | all null as = + reverse acc + | let (h, t) = unzip (takeNext <$> as) = + spin (h : acc) t - takeNext = \case - [] -> (Nothing, []) - h : t -> (Just h, t) + takeNext = \case + [] -> (Nothing, []) + h : t -> (Just h, t) isTip :: SchedulePoint blk -> Bool isTip = \case @@ -369,20 +407,23 @@ tipTimes = bumpTips :: [Time] -> [(Time, SchedulePoint blk)] -> [(Time, SchedulePoint blk)] bumpTips tips = snd . 
mapAccumL step tips - where - step (t0 : tn) (_, p) - | isTip p - = (tn, (t0, p)) - step ts a = (ts, a) - -syncTips :: [[(Time, SchedulePoint blk)]] -> [[(Time, SchedulePoint blk)]] -> ([[(Time, SchedulePoint blk)]], [[(Time, SchedulePoint blk)]]) + where + step (t0 : tn) (_, p) + | isTip p = + (tn, (t0, p)) + step ts a = (ts, a) + +syncTips :: + [[(Time, SchedulePoint blk)]] -> + [[(Time, SchedulePoint blk)]] -> + ([[(Time, SchedulePoint blk)]], [[(Time, SchedulePoint blk)]]) syncTips honests advs = (bump <$> honests, bump <$> advs) - where - bump = bumpTips earliestTips - earliestTips = chooseEarliest <$> zipPadN (tipTimes <$> scheds) - scheds = honests <> advs - chooseEarliest times = minimum (fromMaybe (Time 0) <$> times) + where + bump = bumpTips earliestTips + earliestTips = chooseEarliest <$> zipPadN (tipTimes <$> scheds) + scheds = honests <> advs + chooseEarliest times = minimum (fromMaybe (Time 0) <$> times) -- | This is a variant of 'uniformPointsWithExtraHonestPeers' that uses multiple tip points, used to simulate node downtimes. -- Ultimately, this should be replaced by a redesign of the peer schedule generator that is aware of node liveness @@ -403,31 +444,36 @@ uniformPointsWithExtraHonestPeersAndDowntime :: g -> m (PointSchedule blk) uniformPointsWithExtraHonestPeersAndDowntime - extraHonestPeers - (SecurityParam k) - BlockTree {btTrunk, btBranches} - g - = do - let - kSlot = withOrigin 0 (fromIntegral . unSlotNo) (AF.headSlot (AF.takeOldest (fromIntegral $ unNonZero k) btTrunk)) - midSlot = (AF.length btTrunk) `div` 2 - lowerBound = max kSlot midSlot - pauseSlot <- SlotNo . 
fromIntegral <$> Random.uniformRM (lowerBound, AF.length btTrunk - 1) g - honestTip0 <- firstTip pauseSlot btTrunk - honests <- replicateM (extraHonestPeers + 1) $ - mkSchedule [(IsTrunk, [honestTip0, minusClamp (AF.length btTrunk) 1])] [] - advs <- takeBranches pauseSlot btBranches - let (honests', advs') = syncTips honests advs - psSchedule = peers' honests' advs' - psStartOrder <- shuffle $ getPeerIds psSchedule - pure $ PointSchedule {psSchedule, psStartOrder, psMinEndTime = Time 0} - where + extraHonestPeers + (SecurityParam k) + BlockTree{btTrunk, btBranches} + g = + do + let + kSlot = + withOrigin + 0 + (fromIntegral . unSlotNo) + (AF.headSlot (AF.takeOldest (fromIntegral $ unNonZero k) btTrunk)) + midSlot = (AF.length btTrunk) `div` 2 + lowerBound = max kSlot midSlot + pauseSlot <- SlotNo . fromIntegral <$> Random.uniformRM (lowerBound, AF.length btTrunk - 1) g + honestTip0 <- firstTip pauseSlot btTrunk + honests <- + replicateM (extraHonestPeers + 1) $ + mkSchedule [(IsTrunk, [honestTip0, minusClamp (AF.length btTrunk) 1])] [] + advs <- takeBranches pauseSlot btBranches + let (honests', advs') = syncTips honests advs + psSchedule = peers' honests' advs' + psStartOrder <- shuffle $ getPeerIds psSchedule + pure $ PointSchedule{psSchedule, psStartOrder, psMinEndTime = Time 0} + where takeBranches pause = \case - [] -> pure [] - [b] -> pure <$> withoutRollback pause b - b1 : b2 : branches -> do - a <- Random.uniformDouble01M g - if a < rollbackProb + [] -> pure [] + [b] -> pure <$> withoutRollback pause b + b1 : b2 : branches -> do + a <- Random.uniformDouble01M g + if a < rollbackProb then do this <- withRollback pause b1 b2 rest <- takeBranches pause branches @@ -452,18 +498,18 @@ uniformPointsWithExtraHonestPeersAndDowntime mkTips pause branch | AF.length full == 0 = - error "empty branch" + error "empty branch" | otherwise = do - tip0 <- firstTip pause (btbFull branch) - let (pre, post) = partition (< firstSuffixBlock) [tip0, fullLen - 1] - pure ((if null 
pre then [] else [(IsTrunk, pre)]) ++ [(IsBranch, shift <$> post)]) - where - shift i = i - firstSuffixBlock - firstSuffixBlock = fullLen - AF.length (btbSuffix branch) - fullLen = AF.length full - full = btbFull branch + tip0 <- firstTip pause (btbFull branch) + let (pre, post) = partition (< firstSuffixBlock) [tip0, fullLen - 1] + pure ((if null pre then [] else [(IsTrunk, pre)]) ++ [(IsBranch, shift <$> post)]) + where + shift i = i - firstSuffixBlock + firstSuffixBlock = fullLen - AF.length (btbSuffix branch) + fullLen = AF.length full + full = btbFull branch - firstTip pause frag = pure (minusClamp (AF.length (AF.dropWhileNewest (\ b -> blockSlot b > pause) frag)) 1) + firstTip pause frag = pure (minusClamp (AF.length (AF.dropWhileNewest (\b -> blockSlot b > pause) frag)) 1) mkParams = do -- These values appear to be large enough to create pauses of 100 seconds and more. @@ -471,7 +517,11 @@ uniformPointsWithExtraHonestPeersAndDowntime tipU <- uniformRMDiffTime (1, 2) g headerL <- uniformRMDiffTime (0.018, 0.03) g headerU <- uniformRMDiffTime (0.021, 0.04) g - pure defaultPeerScheduleParams {pspTipDelayInterval = (tipL, tipU), pspHeaderDelayInterval = (headerL, headerU)} + pure + defaultPeerScheduleParams + { pspTipDelayInterval = (tipL, tipU) + , pspHeaderDelayInterval = (headerL, headerU) + } rollbackProb = 0.2 @@ -481,19 +531,19 @@ uniformPointsWithExtraHonestPeersAndDowntime shuffle xs = do i <- Random.uniformRM (0, length xs - 1) g let x = xs !! 
i - xs' = take i xs ++ drop (i+1) xs + xs' = take i xs ++ drop (i + 1) xs (x :) <$> shuffle xs' -newtype ForecastRange = ForecastRange { unForecastRange :: Word64 } - deriving (Show) +newtype ForecastRange = ForecastRange {unForecastRange :: Word64} + deriving Show -data LoPBucketParams = LoPBucketParams { - lbpCapacity :: Integer, - lbpRate :: Rational +data LoPBucketParams = LoPBucketParams + { lbpCapacity :: Integer + , lbpRate :: Rational } -data CSJParams = CSJParams { - csjpJumpSize :: SlotNo +data CSJParams = CSJParams + { csjpJumpSize :: SlotNo } deriving Show @@ -501,38 +551,38 @@ data CSJParams = CSJParams { -- server has agency are specified. REVIEW: Should it be upstreamed to -- ouroboros-network-protocols? data BlockFetchTimeout = BlockFetchTimeout - { busyTimeout :: Maybe DiffTime, - streamingTimeout :: Maybe DiffTime + { busyTimeout :: Maybe DiffTime + , streamingTimeout :: Maybe DiffTime } -- | All the data used by point schedule tests. data GenesisTest blk schedule = GenesisTest - { gtSecurityParam :: SecurityParam, - gtGenesisWindow :: GenesisWindow, - gtForecastRange :: ForecastRange, -- REVIEW: Do we want to allow infinite forecast ranges? - gtDelay :: Delta, - gtBlockTree :: BlockTree blk, - gtChainSyncTimeouts :: ChainSyncTimeout, - gtBlockFetchTimeouts :: BlockFetchTimeout, - gtLoPBucketParams :: LoPBucketParams, - gtCSJParams :: CSJParams, - gtSlotLength :: SlotLength, - -- | The number of extra honest peers we want in the test. - -- It is stored here for convenience, and because it may affect schedule and block tree generation. - -- - -- There will be at most one adversarial peer per alternative branch in the block tree - -- (exactly one per branch if no adversary does a rollback), - -- and @1 + gtExtraHonestPeers@ honest peers. 
- gtExtraHonestPeers :: Word, - gtSchedule :: schedule + { gtSecurityParam :: SecurityParam + , gtGenesisWindow :: GenesisWindow + , gtForecastRange :: ForecastRange -- REVIEW: Do we want to allow infinite forecast ranges? + , gtDelay :: Delta + , gtBlockTree :: BlockTree blk + , gtChainSyncTimeouts :: ChainSyncTimeout + , gtBlockFetchTimeouts :: BlockFetchTimeout + , gtLoPBucketParams :: LoPBucketParams + , gtCSJParams :: CSJParams + , gtSlotLength :: SlotLength + , gtExtraHonestPeers :: Word + -- ^ The number of extra honest peers we want in the test. + -- It is stored here for convenience, and because it may affect schedule and block tree generation. + -- + -- There will be at most one adversarial peer per alternative branch in the block tree + -- (exactly one per branch if no adversary does a rollback), + -- and @1 + gtExtraHonestPeers@ honest peers. + , gtSchedule :: schedule } type GenesisTestFull blk = GenesisTest blk (PointSchedule blk) -- | All the data describing the result of a test data RunGenesisTestResult = RunGenesisTestResult - { rgtrTrace :: String, - rgtrStateView :: StateView TestBlock + { rgtrTrace :: String + , rgtrStateView :: StateView TestBlock } prettyGenesisTest :: (schedule -> [String]) -> GenesisTest TestBlock schedule -> [String] @@ -554,37 +604,42 @@ prettyGenesisTest prettySchedule genesisTest = , " streaming = " ++ show streamingTimeout , " gtLoPBucketParams: " , " lbpCapacity = " ++ show lbpCapacity ++ " tokens" - , " lbpRate = " ++ show lbpRate ++ " ≅ " ++ printf "%.2f" (fromRational lbpRate :: Float) ++ " tokens per second" + , " lbpRate = " + ++ show lbpRate + ++ " ≅ " + ++ printf "%.2f" (fromRational lbpRate :: Float) + ++ " tokens per second" , " gtBlockTree:" - ] ++ map ((" " ++) . terseFragment) (allFragments gtBlockTree) + ] + ++ map ((" " ++) . 
terseFragment) (allFragments gtBlockTree) ++ map (" " ++) (prettyBlockTree gtBlockTree) ++ [" gtSchedule:"] ++ map (" " ++) (prettySchedule gtSchedule) - where - GenesisTest { - gtSecurityParam - , gtGenesisWindow - , gtForecastRange - , gtDelay = Delta delta - , gtBlockTree - , gtChainSyncTimeouts = - ChainSyncTimeout{canAwaitTimeout, intersectTimeout, mustReplyTimeout, idleTimeout} - , gtBlockFetchTimeouts = BlockFetchTimeout{busyTimeout, streamingTimeout} - , gtLoPBucketParams = LoPBucketParams{lbpCapacity, lbpRate} - , gtSlotLength - , gtCSJParams - , gtSchedule - } = genesisTest + where + GenesisTest + { gtSecurityParam + , gtGenesisWindow + , gtForecastRange + , gtDelay = Delta delta + , gtBlockTree + , gtChainSyncTimeouts = + ChainSyncTimeout{canAwaitTimeout, intersectTimeout, mustReplyTimeout, idleTimeout} + , gtBlockFetchTimeouts = BlockFetchTimeout{busyTimeout, streamingTimeout} + , gtLoPBucketParams = LoPBucketParams{lbpCapacity, lbpRate} + , gtSlotLength + , gtCSJParams + , gtSchedule + } = genesisTest instance Functor (GenesisTest blk) where - fmap f gt@GenesisTest{gtSchedule} = gt {gtSchedule = f gtSchedule} + fmap f gt@GenesisTest{gtSchedule} = gt{gtSchedule = f gtSchedule} enrichedWith :: (Functor f, Monad m) => m (f a) -> (f a -> m b) -> m (f b) enrichedWith mfa convert = mfa >>= \fa -> (fa $>) <$> convert fa -- | Wrap a 'ST' generator in 'Gen'. stToGen :: - (forall s . STGenM QCGen s -> ST s a) -> + (forall s. 
STGenM QCGen s -> ST s a) -> Gen a stToGen gen = do seed :: QCGen <- arbitrary @@ -592,21 +647,27 @@ stToGen gen = do ensureScheduleDuration :: GenesisTest blk a -> PointSchedule blk -> PointSchedule blk ensureScheduleDuration gt PointSchedule{psSchedule, psStartOrder, psMinEndTime} = - PointSchedule - { psSchedule - , psStartOrder - , psMinEndTime = max psMinEndTime (Time endingDelay) - } - where - endingDelay = - let cst = gtChainSyncTimeouts gt - bft = gtBlockFetchTimeouts gt - bfGracePeriodDelay = fromIntegral adversaryCount * 10 - in 1 + bfGracePeriodDelay + fromIntegral peerCount * maximum (0 : catMaybes - [ canAwaitTimeout cst - , intersectTimeout cst - , busyTimeout bft - , streamingTimeout bft - ]) - peerCount = length (peersList psSchedule) - adversaryCount = Map.size (adversarialPeers psSchedule) + PointSchedule + { psSchedule + , psStartOrder + , psMinEndTime = max psMinEndTime (Time endingDelay) + } + where + endingDelay = + let cst = gtChainSyncTimeouts gt + bft = gtBlockFetchTimeouts gt + bfGracePeriodDelay = fromIntegral adversaryCount * 10 + in 1 + + bfGracePeriodDelay + + fromIntegral peerCount + * maximum + ( 0 + : catMaybes + [ canAwaitTimeout cst + , intersectTimeout cst + , busyTimeout bft + , streamingTimeout bft + ] + ) + peerCount = length (peersList psSchedule) + adversaryCount = Map.size (adversarialPeers psSchedule) diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/NodeState.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/NodeState.hs index 0aa86f137a..da5efbcc9f 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/NodeState.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/NodeState.hs @@ -1,27 +1,31 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE NamedFieldPuns #-} -module Test.Consensus.PointSchedule.NodeState ( - NodeState (..) 
+module Test.Consensus.PointSchedule.NodeState + ( NodeState (..) , genesisNodeState , nsTipTip ) where -import Ouroboros.Consensus.Block.Abstract (WithOrigin (..)) -import Ouroboros.Consensus.Util.Condense (Condense (..), - CondenseList (..), PaddingDirection (..), padListWith) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (Tip (..), tipFromHeader) -import Ouroboros.Network.Point (withOrigin) -import Test.Util.TersePrinting (terseBlock, terseWithOrigin) -import Test.Util.TestBlock (TestBlock) +import Ouroboros.Consensus.Block.Abstract (WithOrigin (..)) +import Ouroboros.Consensus.Util.Condense + ( Condense (..) + , CondenseList (..) + , PaddingDirection (..) + , padListWith + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (Tip (..), tipFromHeader) +import Ouroboros.Network.Point (withOrigin) +import Test.Util.TersePrinting (terseBlock, terseWithOrigin) +import Test.Util.TestBlock (TestBlock) -- | The state of a peer at a given point in time. -data NodeState blk = - NodeState { - nsTip :: WithOrigin blk, - nsHeader :: WithOrigin blk, - nsBlock :: WithOrigin blk +data NodeState blk + = NodeState + { nsTip :: WithOrigin blk + , nsHeader :: WithOrigin blk + , nsBlock :: WithOrigin blk } deriving (Eq, Show) @@ -29,18 +33,24 @@ nsTipTip :: AF.HasHeader blk => NodeState blk -> Tip blk nsTipTip = withOrigin TipGenesis tipFromHeader . 
nsTip instance Condense (NodeState TestBlock) where - condense NodeState {nsTip, nsHeader, nsBlock} = - "TP " ++ terseWithOrigin terseBlock nsTip ++ - " | HP " ++ terseWithOrigin terseBlock nsHeader ++ - " | BP " ++ terseWithOrigin terseBlock nsBlock + condense NodeState{nsTip, nsHeader, nsBlock} = + "TP " + ++ terseWithOrigin terseBlock nsTip + ++ " | HP " + ++ terseWithOrigin terseBlock nsHeader + ++ " | BP " + ++ terseWithOrigin terseBlock nsBlock instance CondenseList (NodeState TestBlock) where condenseList points = zipWith3 - (\tip header block -> - "TP " ++ tip ++ - " | HP " ++ header ++ - " | BP " ++ block + ( \tip header block -> + "TP " + ++ tip + ++ " | HP " + ++ header + ++ " | BP " + ++ block ) (padListWith PadRight $ map (terseWithOrigin terseBlock . nsTip) points) (padListWith PadRight $ map (terseWithOrigin terseBlock . nsHeader) points) @@ -48,10 +58,8 @@ instance CondenseList (NodeState TestBlock) where genesisNodeState :: NodeState blk genesisNodeState = - NodeState { - nsTip = Origin, - nsHeader = Origin, - nsBlock = Origin - } - - + NodeState + { nsTip = Origin + , nsHeader = Origin + , nsBlock = Origin + } diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Peers.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Peers.hs index d8a31e8125..6d1a87e39f 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Peers.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Peers.hs @@ -10,9 +10,8 @@ -- | This module contains the definition of point schedule _peers_ as well as -- all kind of utilities to manipulate them. - -module Test.Consensus.PointSchedule.Peers ( - Peer (..) +module Test.Consensus.PointSchedule.Peers + ( Peer (..) , PeerId (..) , Peers (..) 
, adversarialPeers' @@ -40,15 +39,18 @@ module Test.Consensus.PointSchedule.Peers ( , updatePeer ) where -import Data.Hashable (Hashable) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.String (IsString (fromString)) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Util.Condense (Condense (..), - CondenseList (..), PaddingDirection (..), - condenseListWithPadding) +import Data.Hashable (Hashable) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.String (IsString (fromString)) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Util.Condense + ( Condense (..) + , CondenseList (..) + , PaddingDirection (..) + , condenseListWithPadding + ) -- | Identifier used to index maps and specify which peer is active during a tick. data PeerId @@ -58,11 +60,11 @@ data PeerId instance IsString PeerId where fromString s = case words s of - ["honest"] -> HonestPeer 1 - ["honest", n] -> HonestPeer (read n) - ["adversary"] -> AdversarialPeer 1 + ["honest"] -> HonestPeer 1 + ["honest", n] -> HonestPeer (read n) + ["adversary"] -> AdversarialPeer 1 ["adversary", n] -> AdversarialPeer (read n) - _ -> error $ "fromString: invalid PeerId: " ++ s + _ -> error $ "fromString: invalid PeerId: " ++ s instance Condense PeerId where condense = \case @@ -77,15 +79,15 @@ instance CondenseList PeerId where instance Hashable PeerId -- | General-purpose functor associated with a peer. 
-data Peer a = - Peer { - name :: PeerId, - value :: a +data Peer a + = Peer + { name :: PeerId + , value :: a } deriving (Eq, Show) instance Functor Peer where - fmap f Peer {name, value} = Peer {name, value = f value} + fmap f Peer{name, value} = Peer{name, value = f value} instance Foldable Peer where foldr step z (Peer _ a) = step a z @@ -95,7 +97,7 @@ instance Traversable Peer where Peer name <$> fa instance Condense a => Condense (Peer a) where - condense Peer {name, value} = condense name ++ ": " ++ condense value + condense Peer{name, value} = condense name ++ ": " ++ condense value instance CondenseList a => CondenseList (Peer a) where condenseList peers = @@ -106,8 +108,8 @@ instance CondenseList a => CondenseList (Peer a) where -- | General-purpose functor for a set of peers. data Peers a = Peers - { honestPeers :: Map Int a, - adversarialPeers :: Map Int a + { honestPeers :: Map Int a + , adversarialPeers :: Map Int a } deriving (Eq, Show) @@ -130,51 +132,51 @@ adversarialPeers'' :: Peers a -> Map PeerId (Peer a) adversarialPeers'' = Map.mapWithKey Peer . adversarialPeers' instance Functor Peers where - fmap f Peers {honestPeers, adversarialPeers} = + fmap f Peers{honestPeers, adversarialPeers} = Peers - { honestPeers = f <$> honestPeers, - adversarialPeers = f <$> adversarialPeers + { honestPeers = f <$> honestPeers + , adversarialPeers = f <$> adversarialPeers } instance Foldable Peers where - foldMap f Peers {honestPeers, adversarialPeers} = + foldMap f Peers{honestPeers, adversarialPeers} = foldMap f honestPeers <> foldMap f adversarialPeers -- | A set of peers with only one honest peer carrying the given value. 
peersOnlyHonest :: a -> Peers a peersOnlyHonest value = Peers - { honestPeers = Map.singleton 1 value, - adversarialPeers = Map.empty + { honestPeers = Map.singleton 1 value + , adversarialPeers = Map.empty } peersOnlyAdversary :: a -> Peers a peersOnlyAdversary value = Peers - { adversarialPeers = Map.singleton 1 value, - honestPeers = Map.empty + { adversarialPeers = Map.singleton 1 value + , honestPeers = Map.empty } -- | Extract all 'PeerId's. getPeerIds :: Peers a -> [PeerId] -getPeerIds Peers {honestPeers, adversarialPeers} = +getPeerIds Peers{honestPeers, adversarialPeers} = (HonestPeer <$> Map.keys honestPeers) ++ (AdversarialPeer <$> Map.keys adversarialPeers) getPeer :: PeerId -> Peers a -> Peer a -getPeer (HonestPeer n) Peers {honestPeers} = Peer (HonestPeer n) (honestPeers Map.! n) -getPeer (AdversarialPeer n) Peers {adversarialPeers} = Peer (AdversarialPeer n) (adversarialPeers Map.! n) +getPeer (HonestPeer n) Peers{honestPeers} = Peer (HonestPeer n) (honestPeers Map.! n) +getPeer (AdversarialPeer n) Peers{adversarialPeers} = Peer (AdversarialPeer n) (adversarialPeers Map.! n) updatePeer :: (a -> (a, b)) -> PeerId -> Peers a -> (Peers a, b) -updatePeer f (HonestPeer n) Peers {honestPeers, adversarialPeers} = +updatePeer f (HonestPeer n) Peers{honestPeers, adversarialPeers} = let (a, b) = f (honestPeers Map.! n) - in (Peers {honestPeers = Map.insert n a honestPeers, adversarialPeers}, b) -updatePeer f (AdversarialPeer n) Peers {honestPeers, adversarialPeers} = + in (Peers{honestPeers = Map.insert n a honestPeers, adversarialPeers}, b) +updatePeer f (AdversarialPeer n) Peers{honestPeers, adversarialPeers} = let (a, b) = f (adversarialPeers Map.! n) - in (Peers {honestPeers, adversarialPeers = Map.insert n a adversarialPeers}, b) + in (Peers{honestPeers, adversarialPeers = Map.insert n a adversarialPeers}, b) -- | Convert 'Peers' to a list of 'Peer'. 
peersList :: Peers a -> [Peer a] -peersList Peers {honestPeers, adversarialPeers} = +peersList Peers{honestPeers, adversarialPeers} = Map.foldrWithKey (\k v -> (Peer (HonestPeer k) v :)) ( Map.foldrWithKey @@ -191,8 +193,8 @@ enumerateAdversaries = AdversarialPeer <$> [1 ..] peers' :: [a] -> [a] -> Peers a peers' hs as = Peers - { honestPeers = Map.fromList $ zip [1 ..] hs, - adversarialPeers = Map.fromList $ zip [1 ..] as + { honestPeers = Map.fromList $ zip [1 ..] hs + , adversarialPeers = Map.fromList $ zip [1 ..] as } -- | Make a 'Peers' structure from individual 'Peer's. @@ -200,24 +202,25 @@ peersFromPeerList :: [Peer a] -> Peers a peersFromPeerList peers = let (hs, as) = partitionPeers peers in Peers - { honestPeers = Map.fromList hs, - adversarialPeers = Map.fromList as + { honestPeers = Map.fromList hs + , adversarialPeers = Map.fromList as } - where - partitionPeers :: [Peer a] -> ([(Int, a)], [(Int, a)]) - partitionPeers = - foldl - ( \(hs, as) (Peer pid v) -> case pid of - HonestPeer n -> ((n, v) : hs, as) - AdversarialPeer n -> (hs, (n, v) : as) - ) - ([], []) + where + partitionPeers :: [Peer a] -> ([(Int, a)], [(Int, a)]) + partitionPeers = + foldl + ( \(hs, as) (Peer pid v) -> case pid of + HonestPeer n -> ((n, v) : hs, as) + AdversarialPeer n -> (hs, (n, v) : as) + ) + ([], []) unionWithKey :: (PeerId -> a -> a -> a) -> Peers a -> Peers a -> Peers a unionWithKey f peers1 peers2 = Peers - { honestPeers = Map.unionWithKey (f . HonestPeer) (honestPeers peers1) (honestPeers peers2), - adversarialPeers = Map.unionWithKey (f . AdversarialPeer) (adversarialPeers peers1) (adversarialPeers peers2) + { honestPeers = Map.unionWithKey (f . HonestPeer) (honestPeers peers1) (honestPeers peers2) + , adversarialPeers = + Map.unionWithKey (f . AdversarialPeer) (adversarialPeers peers1) (adversarialPeers peers2) } -- | Make a 'Peers' structure from a list of peer ids and a default value. 
@@ -230,7 +233,7 @@ peersFromPeerIdList' = flip peersFromPeerIdList () -- | Same as 'toMap' but the map contains unwrapped values. toMap' :: Peers a -> Map PeerId a -toMap' Peers {honestPeers, adversarialPeers} = +toMap' Peers{honestPeers, adversarialPeers} = Map.union (Map.mapKeysMonotonic HonestPeer honestPeers) (Map.mapKeysMonotonic AdversarialPeer adversarialPeers) @@ -249,28 +252,28 @@ fromMap' peers = ) peers in Peers - { honestPeers = Map.mapKeysMonotonic unHonestPeer honestPeers, - adversarialPeers = Map.mapKeysMonotonic unAdversarialPeer adversarialPeers + { honestPeers = Map.mapKeysMonotonic unHonestPeer honestPeers + , adversarialPeers = Map.mapKeysMonotonic unAdversarialPeer adversarialPeers } - where - unHonestPeer (HonestPeer n) = n - unHonestPeer _ = error "unHonestPeer: not a honest peer" - unAdversarialPeer (AdversarialPeer n) = n - unAdversarialPeer _ = error "unAdversarialPeer: not an adversarial peer" + where + unHonestPeer (HonestPeer n) = n + unHonestPeer _ = error "unHonestPeer: not a honest peer" + unAdversarialPeer (AdversarialPeer n) = n + unAdversarialPeer _ = error "unAdversarialPeer: not an adversarial peer" fromMap :: Map PeerId (Peer a) -> Peers a fromMap = fromMap' . 
Map.map value deletePeer :: PeerId -> Peers a -> Peers a -deletePeer (HonestPeer n) Peers {honestPeers, adversarialPeers} = - Peers {honestPeers = Map.delete n honestPeers, adversarialPeers} -deletePeer (AdversarialPeer n) Peers {honestPeers, adversarialPeers} = - Peers {honestPeers, adversarialPeers = Map.delete n adversarialPeers} +deletePeer (HonestPeer n) Peers{honestPeers, adversarialPeers} = + Peers{honestPeers = Map.delete n honestPeers, adversarialPeers} +deletePeer (AdversarialPeer n) Peers{honestPeers, adversarialPeers} = + Peers{honestPeers, adversarialPeers = Map.delete n adversarialPeers} isHonestPeerId :: PeerId -> Bool isHonestPeerId (HonestPeer _) = True -isHonestPeerId _ = False +isHonestPeerId _ = False isAdversarialPeerId :: PeerId -> Bool isAdversarialPeerId (AdversarialPeer _) = True -isAdversarialPeerId _ = False +isAdversarialPeerId _ = False diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Shrinking.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Shrinking.hs index 7443b0a50b..152aae02ad 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Shrinking.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Shrinking.hs @@ -1,34 +1,52 @@ {-# LANGUAGE LambdaCase #-} {-# LANGUAGE NamedFieldPuns #-} -module Test.Consensus.PointSchedule.Shrinking ( - -- | Exported only for testing (that is, checking the properties of the function) +module Test.Consensus.PointSchedule.Shrinking + ( -- | Exported only for testing (that is, checking the properties of the function) shrinkByRemovingAdversaries , shrinkHonestPeer , shrinkHonestPeers , shrinkPeerSchedules ) where -import Control.Monad.Class.MonadTime.SI (DiffTime, Time, addTime, - diffTime) -import Data.Containers.ListUtils (nubOrd) -import Data.Foldable (toList) -import Data.Functor ((<&>)) -import qualified Data.Map.Strict as Map -import Data.Maybe 
(mapMaybe) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - AnchoredSeq (Empty), takeWhileOldest) -import Test.Consensus.BlockTree (BlockTree (..), BlockTreeBranch (..), - addBranch', mkTrunk) -import Test.Consensus.PeerSimulator.StateView (StateView) -import Test.Consensus.PointSchedule (GenesisTest (..), - GenesisTestFull, PeerSchedule, PointSchedule (..), - peerSchedulesBlocks) -import Test.Consensus.PointSchedule.Peers (Peers (..)) -import Test.Consensus.PointSchedule.SinglePeer (SchedulePoint (..)) -import Test.QuickCheck (shrinkList) -import Test.Util.TestBlock (TestBlock, isAncestorOf, - isStrictAncestorOf) +import Control.Monad.Class.MonadTime.SI + ( DiffTime + , Time + , addTime + , diffTime + ) +import Data.Containers.ListUtils (nubOrd) +import Data.Foldable (toList) +import Data.Functor ((<&>)) +import Data.Map.Strict qualified as Map +import Data.Maybe (mapMaybe) +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , AnchoredSeq (Empty) + , takeWhileOldest + ) +import Test.Consensus.BlockTree + ( BlockTree (..) + , BlockTreeBranch (..) + , addBranch' + , mkTrunk + ) +import Test.Consensus.PeerSimulator.StateView (StateView) +import Test.Consensus.PointSchedule + ( GenesisTest (..) + , GenesisTestFull + , PeerSchedule + , PointSchedule (..) + , peerSchedulesBlocks + ) +import Test.Consensus.PointSchedule.Peers (Peers (..)) +import Test.Consensus.PointSchedule.SinglePeer (SchedulePoint (..)) +import Test.QuickCheck (shrinkList) +import Test.Util.TestBlock + ( TestBlock + , isAncestorOf + , isStrictAncestorOf + ) -- | Shrink a 'PointSchedule'. We use a different logic to shrink honest and -- adversarial peers. 
For adversarial peers, we just remove arbitrary points, @@ -41,31 +59,34 @@ shrinkPeerSchedules :: StateView TestBlock -> [GenesisTestFull TestBlock] shrinkPeerSchedules genesisTest@GenesisTest{gtBlockTree, gtSchedule} _stateView = - let PointSchedule {psSchedule, psStartOrder} = gtSchedule + let PointSchedule{psSchedule, psStartOrder} = gtSchedule simulationDuration = duration gtSchedule trimmedBlockTree sch = trimBlockTree' sch gtBlockTree shrunkAdversarialPeers = shrinkAdversarialPeers shrinkAdversarialPeer psSchedule <&> \shrunkSchedule -> genesisTest - { gtSchedule = PointSchedule - { psSchedule = shrunkSchedule - , psStartOrder - , psMinEndTime = simulationDuration - } + { gtSchedule = + PointSchedule + { psSchedule = shrunkSchedule + , psStartOrder + , psMinEndTime = simulationDuration + } , gtBlockTree = trimmedBlockTree shrunkSchedule } shrunkHonestPeers = shrinkHonestPeers psSchedule - -- No need to update the tree here, shrinking the honest peers never discards blocks - <&> \shrunkSchedule -> genesisTest - { gtSchedule = PointSchedule - { psSchedule = shrunkSchedule - , psStartOrder - , psMinEndTime = simulationDuration - } - } + -- No need to update the tree here, shrinking the honest peers never discards blocks + <&> \shrunkSchedule -> + genesisTest + { gtSchedule = + PointSchedule + { psSchedule = shrunkSchedule + , psStartOrder + , psMinEndTime = simulationDuration + } + } in shrunkAdversarialPeers ++ shrunkHonestPeers -- | Shrink a 'PointSchedule' by removing adversaries. 
This does not affect @@ -80,18 +101,20 @@ shrinkByRemovingAdversaries genesisTest@GenesisTest{gtSchedule, gtBlockTree} _st let trimmedBlockTree = trimBlockTree' shrunkSchedule gtBlockTree simulationDuration = duration gtSchedule - in genesisTest - { gtSchedule = PointSchedule - { psSchedule = shrunkSchedule - , psStartOrder = psStartOrder gtSchedule - , psMinEndTime = simulationDuration + in + genesisTest + { gtSchedule = + PointSchedule + { psSchedule = shrunkSchedule + , psStartOrder = psStartOrder gtSchedule + , psMinEndTime = simulationDuration + } + , gtBlockTree = trimmedBlockTree } - , gtBlockTree = trimmedBlockTree - } duration :: PointSchedule blk -> Time -duration PointSchedule {psSchedule, psMinEndTime} = - maximum $ psMinEndTime : [ t | sch <- toList psSchedule, (t, _) <- take 1 (reverse sch) ] +duration PointSchedule{psSchedule, psMinEndTime} = + maximum $ psMinEndTime : [t | sch <- toList psSchedule, (t, _) <- take 1 (reverse sch)] -- | Shrink a 'PeerSchedule' by removing ticks from it. The other ticks are kept -- unchanged. @@ -101,7 +124,7 @@ shrinkAdversarialPeer = shrinkList (const []) -- | Shrink the 'others' field of a 'Peers' structure by attempting to remove -- peers or by shrinking their values using the given shrinking function. shrinkAdversarialPeers :: (a -> [a]) -> Peers a -> [Peers a] -shrinkAdversarialPeers shrink Peers {honestPeers, adversarialPeers} = +shrinkAdversarialPeers shrink Peers{honestPeers, adversarialPeers} = map (Peers honestPeers . Map.fromList) $ shrinkList (traverse shrink) $ Map.toList adversarialPeers @@ -117,25 +140,27 @@ shrinkAdversarialPeers shrink Peers {honestPeers, adversarialPeers} = -- trigger disconnections when the timeout for MsgAwaitReply is reached. In those cases, -- it is probably more pertinent to disable this timeout in tests than to disable shrinking. 
shrinkHonestPeers :: Peers (PeerSchedule blk) -> [Peers (PeerSchedule blk)] -shrinkHonestPeers Peers {honestPeers, adversarialPeers} = do +shrinkHonestPeers Peers{honestPeers, adversarialPeers} = do (k, honestSch) <- Map.toList honestPeers shrunk <- shrinkHonestPeer honestSch - pure $ Peers - { honestPeers = Map.insert k shrunk honestPeers - , adversarialPeers - } + pure $ + Peers + { honestPeers = Map.insert k shrunk honestPeers + , adversarialPeers + } shrinkHonestPeer :: PeerSchedule blk -> [PeerSchedule blk] shrinkHonestPeer sch = mapMaybe (speedUpTheSchedule sch) splits - where - -- | A list of non-zero time intervals between successive points of the honest schedule - splits :: [(Time, DiffTime)] - splits = mapMaybe - (\((t1, _), (t2, _)) -> - if t1 == t2 - then Nothing - else Just (t1, diffTime t2 t1) + where + -- \| A list of non-zero time intervals between successive points of the honest schedule + splits :: [(Time, DiffTime)] + splits = + mapMaybe + ( \((t1, _), (t2, _)) -> + if t1 == t2 + then Nothing + else Just (t1, diffTime t2 t1) ) (zip sch (drop 1 sch)) @@ -148,19 +173,20 @@ shrinkHonestPeer sch = speedUpTheSchedule :: PeerSchedule blk -> (Time, DiffTime) -> Maybe (PeerSchedule blk) speedUpTheSchedule sch (at, speedUpBy) = if stillValid then Just $ beforeSplit ++ spedUpSchedule else Nothing - where - (beforeSplit, afterSplit) = span ((< at) . fst) sch - threshold = addTime speedUpBy at - spedUpSchedule = mapMaybe + where + (beforeSplit, afterSplit) = span ((< at) . 
fst) sch + threshold = addTime speedUpBy at + spedUpSchedule = + mapMaybe (\(t, p) -> if t < threshold then Nothing else Just (addTime (-speedUpBy) t, p)) afterSplit - stillValid = - (hasTP spedUpSchedule == hasTP afterSplit) + stillValid = + (hasTP spedUpSchedule == hasTP afterSplit) && (hasHP spedUpSchedule == hasHP afterSplit) && (hasBP spedUpSchedule == hasBP afterSplit) - hasTP = any (\case (_, ScheduleTipPoint _) -> True; _ -> False) - hasHP = any (\case (_, ScheduleHeaderPoint _) -> True; _ -> False) - hasBP = any (\case (_, ScheduleBlockPoint _) -> True; _ -> False) + hasTP = any (\case (_, ScheduleTipPoint _) -> True; _ -> False) + hasHP = any (\case (_, ScheduleHeaderPoint _) -> True; _ -> False) + hasBP = any (\case (_, ScheduleBlockPoint _) -> True; _ -> False) -- | Remove blocks from the given block tree that are not necessary for the -- given peer schedules. If entire branches are unused, they are removed. If the @@ -172,22 +198,22 @@ trimBlockTree' = keepOnlyAncestorsOf . peerSchedulesBlocks -- that contains ancestors of the given blocks. keepOnlyAncestorsOf :: [TestBlock] -> BlockTree TestBlock -> BlockTree TestBlock keepOnlyAncestorsOf blocks bt = - let leaves = blocksWithoutDescendents blocks - trunk = keepOnlyAncestorsOf' leaves (btTrunk bt) - branches = mapMaybe (fragmentToMaybe . keepOnlyAncestorsOf' leaves . btbSuffix) (btBranches bt) - in foldr addBranch' (mkTrunk trunk) branches - where - fragmentToMaybe (Empty _) = Nothing - fragmentToMaybe fragment = Just fragment + let leaves = blocksWithoutDescendents blocks + trunk = keepOnlyAncestorsOf' leaves (btTrunk bt) + branches = mapMaybe (fragmentToMaybe . keepOnlyAncestorsOf' leaves . btbSuffix) (btBranches bt) + in foldr addBranch' (mkTrunk trunk) branches + where + fragmentToMaybe (Empty _) = Nothing + fragmentToMaybe fragment = Just fragment - -- | Given some blocks and a fragment, keep only the prefix of the fragment - -- that contains ancestors of the given blocks. 
- keepOnlyAncestorsOf' :: [TestBlock] -> AnchoredFragment TestBlock -> AnchoredFragment TestBlock - keepOnlyAncestorsOf' leaves = takeWhileOldest (\block -> (block `isAncestorOf`) `any` leaves) + -- \| Given some blocks and a fragment, keep only the prefix of the fragment + -- that contains ancestors of the given blocks. + keepOnlyAncestorsOf' :: [TestBlock] -> AnchoredFragment TestBlock -> AnchoredFragment TestBlock + keepOnlyAncestorsOf' leaves = takeWhileOldest (\block -> (block `isAncestorOf`) `any` leaves) - -- | Return a subset of the given blocks containing only the ones that do - -- not have any other descendents in the set. - blocksWithoutDescendents :: [TestBlock] -> [TestBlock] - blocksWithoutDescendents bs = - let bs' = nubOrd bs - in [ b | b <- bs', not ((b `isStrictAncestorOf`) `any` bs') ] + -- \| Return a subset of the given blocks containing only the ones that do + -- not have any other descendents in the set. + blocksWithoutDescendents :: [TestBlock] -> [TestBlock] + blocksWithoutDescendents bs = + let bs' = nubOrd bs + in [b | b <- bs', not ((b `isStrictAncestorOf`) `any` bs')] diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Shrinking/Tests.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Shrinking/Tests.hs index 5fb46a6356..42a92d231c 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Shrinking/Tests.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Shrinking/Tests.hs @@ -5,29 +5,34 @@ -- | Test properties of the shrinking functions module Test.Consensus.PointSchedule.Shrinking.Tests (tests) where -import Data.Foldable (toList) -import Data.Map (keys) -import Data.Maybe (mapMaybe) -import Ouroboros.Consensus.Util (lastMaybe) -import Test.Consensus.Genesis.Setup (genChains) -import Test.Consensus.Genesis.Tests.Uniform (genUniformSchedulePoints) -import Test.Consensus.PointSchedule 
(PeerSchedule, PointSchedule (..), - prettyPointSchedule) -import Test.Consensus.PointSchedule.Peers (Peers (..)) -import Test.Consensus.PointSchedule.Shrinking (shrinkHonestPeers) -import Test.Consensus.PointSchedule.SinglePeer (SchedulePoint (..)) -import Test.QuickCheck (Property, conjoin, counterexample) -import Test.Tasty -import Test.Tasty.QuickCheck (choose, forAllBlind, testProperty) -import Test.Util.TestBlock (TestBlock) +import Data.Foldable (toList) +import Data.Map (keys) +import Data.Maybe (mapMaybe) +import Ouroboros.Consensus.Util (lastMaybe) +import Test.Consensus.Genesis.Setup (genChains) +import Test.Consensus.Genesis.Tests.Uniform (genUniformSchedulePoints) +import Test.Consensus.PointSchedule + ( PeerSchedule + , PointSchedule (..) + , prettyPointSchedule + ) +import Test.Consensus.PointSchedule.Peers (Peers (..)) +import Test.Consensus.PointSchedule.Shrinking (shrinkHonestPeers) +import Test.Consensus.PointSchedule.SinglePeer (SchedulePoint (..)) +import Test.QuickCheck (Property, conjoin, counterexample) +import Test.Tasty +import Test.Tasty.QuickCheck (choose, forAllBlind, testProperty) +import Test.Util.TestBlock (TestBlock) tests :: TestTree tests = - testGroup "shrinking functions" - [ testGroup "honest peer shrinking" - [ testProperty "actually shortens the schedule" prop_shortens - , testProperty "preserves the final state all peers" prop_preservesFinalStates - ] + testGroup + "shrinking functions" + [ testGroup + "honest peer shrinking" + [ testProperty "actually shortens the schedule" prop_shortens + , testProperty "preserves the final state all peers" prop_preservesFinalStates + ] ] prop_shortens :: Property @@ -47,46 +52,54 @@ samePeers sch1 sch2 = isShorterThan :: Peers (PeerSchedule blk) -> Peers (PeerSchedule blk) -> Bool isShorterThan original shrunk = samePeers original shrunk - && (or $ zipWith - (\oldSch newSch -> (fst <$> lastMaybe newSch) < (fst <$> lastMaybe oldSch)) - (toList original) - (toList shrunk) - ) + && ( or $ + 
zipWith + (\oldSch newSch -> (fst <$> lastMaybe newSch) < (fst <$> lastMaybe oldSch)) + (toList original) + (toList shrunk) + ) doesNotChangeFinalState :: Eq blk => Peers (PeerSchedule blk) -> Peers (PeerSchedule blk) -> Bool doesNotChangeFinalState original shrunk = samePeers original shrunk - && (and $ zipWith - (\oldSch newSch -> - lastTP oldSch == lastTP newSch && - lastHP oldSch == lastHP newSch && - lastBP oldSch == lastBP newSch - ) - (toList original) - (toList shrunk) - ) - where - lastTP :: PeerSchedule blk -> Maybe (SchedulePoint blk) - lastTP sch = lastMaybe $ mapMaybe (\case (_, p@(ScheduleTipPoint _)) -> Just p ; _ -> Nothing) sch - lastHP :: PeerSchedule blk -> Maybe (SchedulePoint blk) - lastHP sch = lastMaybe $ mapMaybe (\case (_, p@(ScheduleHeaderPoint _)) -> Just p ; _ -> Nothing) sch - lastBP :: PeerSchedule blk -> Maybe (SchedulePoint blk) - lastBP sch = lastMaybe $ mapMaybe (\case (_, p@(ScheduleBlockPoint _)) -> Just p ; _ -> Nothing) sch + && ( and $ + zipWith + ( \oldSch newSch -> + lastTP oldSch == lastTP newSch + && lastHP oldSch == lastHP newSch + && lastBP oldSch == lastBP newSch + ) + (toList original) + (toList shrunk) + ) + where + lastTP :: PeerSchedule blk -> Maybe (SchedulePoint blk) + lastTP sch = lastMaybe $ mapMaybe (\case (_, p@(ScheduleTipPoint _)) -> Just p; _ -> Nothing) sch + lastHP :: PeerSchedule blk -> Maybe (SchedulePoint blk) + lastHP sch = lastMaybe $ mapMaybe (\case (_, p@(ScheduleHeaderPoint _)) -> Just p; _ -> Nothing) sch + lastBP :: PeerSchedule blk -> Maybe (SchedulePoint blk) + lastBP sch = lastMaybe $ mapMaybe (\case (_, p@(ScheduleBlockPoint _)) -> Just p; _ -> Nothing) sch -checkShrinkProperty :: (Peers (PeerSchedule TestBlock) -> Peers (PeerSchedule TestBlock) -> Bool) -> Property +checkShrinkProperty :: + (Peers (PeerSchedule TestBlock) -> Peers (PeerSchedule TestBlock) -> Bool) -> Property checkShrinkProperty prop = forAllBlind (genChains (choose (1, 4)) >>= genUniformSchedulePoints) - 
(\sch@PointSchedule{psSchedule, psStartOrder, psMinEndTime} -> - conjoin $ map - (\shrunk -> - counterexample - ( "Original schedule:\n" - ++ unlines (map (" " ++) $ prettyPointSchedule sch) - ++ "\nShrunk schedule:\n" - ++ unlines (map (" " ++) $ prettyPointSchedule $ PointSchedule {psSchedule = shrunk, psStartOrder, psMinEndTime}) - ) - (prop psSchedule shrunk) - ) - (shrinkHonestPeers psSchedule) + ( \sch@PointSchedule{psSchedule, psStartOrder, psMinEndTime} -> + conjoin $ + map + ( \shrunk -> + counterexample + ( "Original schedule:\n" + ++ unlines (map (" " ++) $ prettyPointSchedule sch) + ++ "\nShrunk schedule:\n" + ++ unlines + ( map (" " ++) $ + prettyPointSchedule $ + PointSchedule{psSchedule = shrunk, psStartOrder, psMinEndTime} + ) + ) + (prop psSchedule shrunk) + ) + (shrinkHonestPeers psSchedule) ) diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/SinglePeer.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/SinglePeer.hs index 5e37282e6a..46dac85778 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/SinglePeer.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/SinglePeer.hs @@ -79,15 +79,15 @@ -- > +--------+----------------+----------------+----------------+ -- > | 3.1s | E | E | E | -- > +--------+----------------+----------------+----------------+ --- -module Test.Consensus.PointSchedule.SinglePeer ( - IsTrunk (..) +module Test.Consensus.PointSchedule.SinglePeer + ( IsTrunk (..) , PeerScheduleParams (..) , SchedulePoint (..) 
, defaultPeerScheduleParams , peerScheduleFromTipPoints , schedulePointToBlock , singleJumpPeerSchedule + -- * Exposed for testing , mergeOn , scheduleBlockPoint @@ -96,19 +96,22 @@ module Test.Consensus.PointSchedule.SinglePeer ( , zipMany ) where -import Cardano.Slotting.Slot (WithOrigin (At, Origin), withOrigin) -import Control.Arrow (second) -import Control.Monad.Class.MonadTime.SI (Time) -import Data.List (mapAccumL) -import Data.Time.Clock (DiffTime) -import Data.Vector (Vector) -import qualified Data.Vector as Vector -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (BlockNo (unBlockNo), blockSlot) -import qualified System.Random.Stateful as R (StatefulGen) -import Test.Consensus.PointSchedule.SinglePeer.Indices - (HeaderPointSchedule (hpsBranch, hpsTrunk), - headerPointSchedule, singleJumpTipPoints, tipPointSchedule) +import Cardano.Slotting.Slot (WithOrigin (At, Origin), withOrigin) +import Control.Arrow (second) +import Control.Monad.Class.MonadTime.SI (Time) +import Data.List (mapAccumL) +import Data.Time.Clock (DiffTime) +import Data.Vector (Vector) +import Data.Vector qualified as Vector +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (BlockNo (unBlockNo), blockSlot) +import System.Random.Stateful qualified as R (StatefulGen) +import Test.Consensus.PointSchedule.SinglePeer.Indices + ( HeaderPointSchedule (hpsBranch, hpsTrunk) + , headerPointSchedule + , singleJumpTipPoints + , tipPointSchedule + ) -- | A point in the schedule of a single peer. data SchedulePoint blk @@ -127,9 +130,9 @@ scheduleBlockPoint :: blk -> SchedulePoint blk scheduleBlockPoint = ScheduleBlockPoint . 
At schedulePointToBlock :: SchedulePoint blk -> WithOrigin blk -schedulePointToBlock (ScheduleTipPoint b) = b +schedulePointToBlock (ScheduleTipPoint b) = b schedulePointToBlock (ScheduleHeaderPoint b) = b -schedulePointToBlock (ScheduleBlockPoint b) = b +schedulePointToBlock (ScheduleBlockPoint b) = b -- | Parameters for generating a schedule for a single peer. -- @@ -138,76 +141,77 @@ schedulePointToBlock (ScheduleBlockPoint b) = b -- and block points are sent eventually, but the points are delayed according -- to these parameters. data PeerScheduleParams = PeerScheduleParams - { pspSlotLength :: DiffTime - -- | Each of these pairs specifies a range of delays for a point. The - -- actual delay is chosen uniformly at random from the range. - -- - -- For tip points, the delay is relative to the slot of the tip point. - , pspTipDelayInterval :: (DiffTime, DiffTime) - -- | For header points, the delay is relative to the previous header point - -- or the tip point that advertises the existence of the header (whichever - -- happened most recently). + { pspSlotLength :: DiffTime + , pspTipDelayInterval :: (DiffTime, DiffTime) + -- ^ Each of these pairs specifies a range of delays for a point. The + -- actual delay is chosen uniformly at random from the range. + -- + -- For tip points, the delay is relative to the slot of the tip point. , pspHeaderDelayInterval :: (DiffTime, DiffTime) - -- | For block points, the delay is relative to the previous block point or - -- the header point that advertises the existence of the block (whichever - -- happened most recently). - , pspBlockDelayInterval :: (DiffTime, DiffTime) + -- ^ For header points, the delay is relative to the previous header point + -- or the tip point that advertises the existence of the header (whichever + -- happened most recently). 
+ , pspBlockDelayInterval :: (DiffTime, DiffTime) + -- ^ For block points, the delay is relative to the previous block point or + -- the header point that advertises the existence of the block (whichever + -- happened most recently). } - deriving (Show) + deriving Show defaultPeerScheduleParams :: PeerScheduleParams -defaultPeerScheduleParams = PeerScheduleParams - { pspSlotLength = 20 - , pspTipDelayInterval = (0, 1) - , pspHeaderDelayInterval = (0.018, 0.021) - , pspBlockDelayInterval = (0.050, 0.055) - } +defaultPeerScheduleParams = + PeerScheduleParams + { pspSlotLength = 20 + , pspTipDelayInterval = (0, 1) + , pspHeaderDelayInterval = (0.018, 0.021) + , pspBlockDelayInterval = (0.050, 0.055) + } -- | Generate a schedule for a single peer that jumps once to the middle of a -- sequence of blocks. -- -- See 'peerScheduleFromTipPoints' for generation of schedules with rollbacks -singleJumpPeerSchedule - :: (R.StatefulGen g m, AF.HasHeader blk) - => g - -> PeerScheduleParams - -> AF.AnchoredFragment blk - -> m [(Time, SchedulePoint blk)] +singleJumpPeerSchedule :: + (R.StatefulGen g m, AF.HasHeader blk) => + g -> + PeerScheduleParams -> + AF.AnchoredFragment blk -> + m [(Time, SchedulePoint blk)] singleJumpPeerSchedule g psp chain = do - let chainv = Vector.fromList $ AF.toOldestFirst chain - (tps, hps, bps) <- singleJumpRawPeerSchedule g psp chainv - let tipPoints = map (second scheduleTipPoint) tps - headerPoints = map (second scheduleHeaderPoint) hps - blockPoints = map (second scheduleBlockPoint) bps - -- merge the schedules - pure $ - mergeOn fst tipPoints $ + let chainv = Vector.fromList $ AF.toOldestFirst chain + (tps, hps, bps) <- singleJumpRawPeerSchedule g psp chainv + let tipPoints = map (second scheduleTipPoint) tps + headerPoints = map (second scheduleHeaderPoint) hps + blockPoints = map (second scheduleBlockPoint) bps + -- merge the schedules + pure $ + mergeOn fst tipPoints $ mergeOn fst headerPoints blockPoints -singleJumpRawPeerSchedule - :: 
(R.StatefulGen g m, AF.HasHeader b) - => g - -> PeerScheduleParams - -> Vector b - -> m ([(Time, b)], [(Time, b)], [(Time, b)]) +singleJumpRawPeerSchedule :: + (R.StatefulGen g m, AF.HasHeader b) => + g -> + PeerScheduleParams -> + Vector b -> + m ([(Time, b)], [(Time, b)], [(Time, b)]) singleJumpRawPeerSchedule g psp chainv = do - -- generate the tip points - ixs <- singleJumpTipPoints g 0 (Vector.length chainv - 1) - let tipPointBlks = map (chainv Vector.!) ixs - tipPointSlots = map blockSlot tipPointBlks - -- generate the tip point schedule - ts <- tipPointSchedule g (pspSlotLength psp) (pspTipDelayInterval psp) tipPointSlots - -- generate the header point schedule - hpss <- headerPointSchedule g (pspHeaderDelayInterval psp) [(Nothing, zip ts ixs)] - let hps = concatMap hpsTrunk hpss - -- generate the block point schedule - bpss <- headerPointSchedule g (pspBlockDelayInterval psp) [(Nothing, hps)] - -- collect the blocks for each schedule - let bps = concatMap hpsTrunk bpss - tipPointTips = zip ts tipPointBlks - hpsHeaders = map (second (chainv Vector.!)) hps - bpsBlks = map (second (chainv Vector.!)) bps - pure (tipPointTips, hpsHeaders, bpsBlks) + -- generate the tip points + ixs <- singleJumpTipPoints g 0 (Vector.length chainv - 1) + let tipPointBlks = map (chainv Vector.!) 
ixs + tipPointSlots = map blockSlot tipPointBlks + -- generate the tip point schedule + ts <- tipPointSchedule g (pspSlotLength psp) (pspTipDelayInterval psp) tipPointSlots + -- generate the header point schedule + hpss <- headerPointSchedule g (pspHeaderDelayInterval psp) [(Nothing, zip ts ixs)] + let hps = concatMap hpsTrunk hpss + -- generate the block point schedule + bpss <- headerPointSchedule g (pspBlockDelayInterval psp) [(Nothing, hps)] + -- collect the blocks for each schedule + let bps = concatMap hpsTrunk bpss + tipPointTips = zip ts tipPointBlks + hpsHeaders = map (second (chainv Vector.!)) hps + bpsBlks = map (second (chainv Vector.!)) bps + pure (tipPointTips, hpsHeaders, bpsBlks) data IsTrunk = IsTrunk | IsBranch deriving (Eq, Show) @@ -224,115 +228,114 @@ data IsTrunk = IsTrunk | IsBranch -- order of their intersections with the honest chain. Each fragment is anchored -- at the intersection, and therefore their first block must be the first block -- after the intersection. --- -peerScheduleFromTipPoints - :: (R.StatefulGen g m, AF.HasHeader blk) - => g - -> PeerScheduleParams - -> [(IsTrunk, [Int])] - -> AF.AnchoredFragment blk - -> [AF.AnchoredFragment blk] - -> m [(Time, SchedulePoint blk)] +peerScheduleFromTipPoints :: + (R.StatefulGen g m, AF.HasHeader blk) => + g -> + PeerScheduleParams -> + [(IsTrunk, [Int])] -> + AF.AnchoredFragment blk -> + [AF.AnchoredFragment blk] -> + m [(Time, SchedulePoint blk)] peerScheduleFromTipPoints g psp tipPoints trunk0 branches0 = do - let trunk0v = Vector.fromList $ AF.toOldestFirst trunk0 - -- NOTE: Is this still correct? Shouldn't it be `withOrigin 0 (+1)`? - firstTrunkBlockNo = withOrigin 1 (+1) $ AF.anchorBlockNo trunk0 - branches0v = map (Vector.fromList . 
AF.toOldestFirst) branches0 - anchorBlockIndices = - [ fromIntegral $ unBlockNo $ fragmentAnchorBlockNo b - firstTrunkBlockNo - | b <- branches0 - ] - isTrunks = map fst tipPoints - intersections = intersperseTrunkFragments anchorBlockIndices isTrunks - (tps, hps, bps) <- rawPeerScheduleFromTipPoints g psp tipPoints trunk0v branches0v intersections - let tipPoints' = map (second scheduleTipPoint) tps - headerPoints = map (second scheduleHeaderPoint) hps - blockPoints = map (second scheduleBlockPoint) bps - -- merge the schedules - pure $ - mergeOn fst tipPoints' $ + let trunk0v = Vector.fromList $ AF.toOldestFirst trunk0 + -- NOTE: Is this still correct? Shouldn't it be `withOrigin 0 (+1)`? + firstTrunkBlockNo = withOrigin 1 (+ 1) $ AF.anchorBlockNo trunk0 + branches0v = map (Vector.fromList . AF.toOldestFirst) branches0 + anchorBlockIndices = + [ fromIntegral $ unBlockNo $ fragmentAnchorBlockNo b - firstTrunkBlockNo + | b <- branches0 + ] + isTrunks = map fst tipPoints + intersections = intersperseTrunkFragments anchorBlockIndices isTrunks + (tps, hps, bps) <- rawPeerScheduleFromTipPoints g psp tipPoints trunk0v branches0v intersections + let tipPoints' = map (second scheduleTipPoint) tps + headerPoints = map (second scheduleHeaderPoint) hps + blockPoints = map (second scheduleBlockPoint) bps + -- merge the schedules + pure $ + mergeOn fst tipPoints' $ mergeOn fst headerPoints blockPoints - where - fragmentAnchorBlockNo :: AF.AnchoredFragment blk -> BlockNo - fragmentAnchorBlockNo f = case AF.anchorBlockNo f of - At s -> s - Origin -> 0 + where + fragmentAnchorBlockNo :: AF.AnchoredFragment blk -> BlockNo + fragmentAnchorBlockNo f = case AF.anchorBlockNo f of + At s -> s + Origin -> 0 - intersperseTrunkFragments :: [Int] -> [IsTrunk] -> [Maybe Int] - intersperseTrunkFragments [] [] = [] - intersperseTrunkFragments iis (IsTrunk:isTrunks) = Nothing : intersperseTrunkFragments iis isTrunks - intersperseTrunkFragments (i:is) (IsBranch:isTrunks) = Just i : 
intersperseTrunkFragments is isTrunks - intersperseTrunkFragments _ [] = error "intersperseTrunkFragments: not enough isTrunk flags" - intersperseTrunkFragments [] _ = error "intersperseTrunkFragments: not enough intersections" + intersperseTrunkFragments :: [Int] -> [IsTrunk] -> [Maybe Int] + intersperseTrunkFragments [] [] = [] + intersperseTrunkFragments iis (IsTrunk : isTrunks) = Nothing : intersperseTrunkFragments iis isTrunks + intersperseTrunkFragments (i : is) (IsBranch : isTrunks) = Just i : intersperseTrunkFragments is isTrunks + intersperseTrunkFragments _ [] = error "intersperseTrunkFragments: not enough isTrunk flags" + intersperseTrunkFragments [] _ = error "intersperseTrunkFragments: not enough intersections" -rawPeerScheduleFromTipPoints - :: (R.StatefulGen g m, AF.HasHeader b) - => g - -> PeerScheduleParams - -> [(IsTrunk, [Int])] - -> Vector b - -> [Vector b] - -> [Maybe Int] - -> m ([(Time, b)], [(Time, b)], [(Time, b)]) +rawPeerScheduleFromTipPoints :: + (R.StatefulGen g m, AF.HasHeader b) => + g -> + PeerScheduleParams -> + [(IsTrunk, [Int])] -> + Vector b -> + [Vector b] -> + [Maybe Int] -> + m ([(Time, b)], [(Time, b)], [(Time, b)]) rawPeerScheduleFromTipPoints g psp tipPoints trunk0v branches0v intersections = do - let (isTrunks, tpIxs) = unzip tipPoints - pairedVectors = pairVectorsWithChunks trunk0v branches0v isTrunks - tipPointBlks = concat $ zipWith indicesToBlocks pairedVectors tpIxs - tipPointSlots = map blockSlot tipPointBlks - -- generate the tip point schedule - ts <- tipPointSchedule g (pspSlotLength psp) (pspTipDelayInterval psp) tipPointSlots - -- generate the header point schedule - let tpSchedules = zipMany ts tpIxs - hpss <- headerPointSchedule g (pspHeaderDelayInterval psp) $ zip intersections tpSchedules - -- generate the block point schedule - let (hpsPerBranch, vs) = unzip $ filter (not . null . 
snd .fst) $ concat - [ [((Nothing, hpsTrunk hps), trunk0v), ((mi, hpsBranch hps), v)] - | (mi, hps, v) <- zip3 intersections hpss pairedVectors - ] - bpss <- headerPointSchedule g (pspBlockDelayInterval psp) hpsPerBranch - let tipPointTips = zip ts tipPointBlks - hpsHeaders = concat $ zipWith (scheduleIndicesToBlocks trunk0v) pairedVectors hpss - bpsBlks = concat $ zipWith (scheduleIndicesToBlocks trunk0v) vs bpss - pure (tipPointTips, hpsHeaders, bpsBlks) + let (isTrunks, tpIxs) = unzip tipPoints + pairedVectors = pairVectorsWithChunks trunk0v branches0v isTrunks + tipPointBlks = concat $ zipWith indicesToBlocks pairedVectors tpIxs + tipPointSlots = map blockSlot tipPointBlks + -- generate the tip point schedule + ts <- tipPointSchedule g (pspSlotLength psp) (pspTipDelayInterval psp) tipPointSlots + -- generate the header point schedule + let tpSchedules = zipMany ts tpIxs + hpss <- headerPointSchedule g (pspHeaderDelayInterval psp) $ zip intersections tpSchedules + -- generate the block point schedule + let (hpsPerBranch, vs) = + unzip $ + filter (not . null . snd . fst) $ + concat + [ [((Nothing, hpsTrunk hps), trunk0v), ((mi, hpsBranch hps), v)] + | (mi, hps, v) <- zip3 intersections hpss pairedVectors + ] + bpss <- headerPointSchedule g (pspBlockDelayInterval psp) hpsPerBranch + let tipPointTips = zip ts tipPointBlks + hpsHeaders = concat $ zipWith (scheduleIndicesToBlocks trunk0v) pairedVectors hpss + bpsBlks = concat $ zipWith (scheduleIndicesToBlocks trunk0v) vs bpss + pure (tipPointTips, hpsHeaders, bpsBlks) + where + pairVectorsWithChunks :: + Vector b -> + [Vector b] -> + [IsTrunk] -> + [Vector b] + pairVectorsWithChunks trunk branches = + snd . 
mapAccumL pairVectors branches + where + pairVectors brs IsTrunk = (brs, trunk) + pairVectors (br : brs) IsBranch = (brs, br) + pairVectors [] IsBranch = error "not enough branches" - where - pairVectorsWithChunks - :: Vector b - -> [Vector b] - -> [IsTrunk] - -> [Vector b] - pairVectorsWithChunks trunk branches = - snd . mapAccumL pairVectors branches - where - pairVectors brs IsTrunk = (brs, trunk) - pairVectors (br:brs) IsBranch = (brs, br) - pairVectors [] IsBranch = error "not enough branches" - - -- | Replaces block indices with the actual blocks - scheduleIndicesToBlocks :: Vector b -> Vector b -> HeaderPointSchedule -> [(Time, b)] - scheduleIndicesToBlocks trunk v hps = - map (second (trunk Vector.!)) (hpsTrunk hps) - ++ map (second (v Vector.!)) (hpsBranch hps) - - indicesToBlocks :: Vector b -> [Int] -> [b] - indicesToBlocks v ixs = map (v Vector.!) ixs + -- \| Replaces block indices with the actual blocks + scheduleIndicesToBlocks :: Vector b -> Vector b -> HeaderPointSchedule -> [(Time, b)] + scheduleIndicesToBlocks trunk v hps = + map (second (trunk Vector.!)) (hpsTrunk hps) + ++ map (second (v Vector.!)) (hpsBranch hps) + indicesToBlocks :: Vector b -> [Int] -> [b] + indicesToBlocks v ixs = map (v Vector.!) ixs -- | Merge two sorted lists. -- -- PRECONDITION: The lists are sorted. --- mergeOn :: Ord b => (a -> b) -> [a] -> [a] -> [a] mergeOn _f [] ys = ys mergeOn _f xs [] = xs -mergeOn f xxs@(x:xs) yys@(y:ys) = - if f x <= f y - then x : mergeOn f xs yys - else y : mergeOn f xxs ys +mergeOn f xxs@(x : xs) yys@(y : ys) = + if f x <= f y + then x : mergeOn f xs yys + else y : mergeOn f xxs ys zipMany :: [a] -> [[b]] -> [[(a, b)]] zipMany xs0 = snd . 
mapAccumL (go []) xs0 - where - go acc xs [] = (xs, reverse acc) - go _acc [] _ys = error "zipMany: lengths don't match" - go acc (x:xs) (y:ys) = go ((x, y) : acc) xs ys + where + go acc xs [] = (xs, reverse acc) + go _acc [] _ys = error "zipMany: lengths don't match" + go acc (x : xs) (y : ys) = go ((x, y) : acc) xs ys diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/SinglePeer/Indices.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/SinglePeer/Indices.hs index c67f632e57..82d9f5e9a0 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/SinglePeer/Indices.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/SinglePeer/Indices.hs @@ -13,9 +13,8 @@ -- generated with 'headerPointSchedule'. Finally, the block points can be -- generated with 'headerPointSchedule' as well. See the implementation of -- 'Test.Consensus.PointSchedule.Random.singleJumpPeerSchedule' for an example. --- -module Test.Consensus.PointSchedule.SinglePeer.Indices ( - HeaderPointSchedule (..) +module Test.Consensus.PointSchedule.SinglePeer.Indices + ( HeaderPointSchedule (..) 
, headerPointSchedule , rollbacksTipPoints , singleJumpTipPoints @@ -23,15 +22,17 @@ module Test.Consensus.PointSchedule.SinglePeer.Indices ( , uniformRMDiffTime ) where -import Control.Monad (forM, replicateM) -import Control.Monad.Class.MonadTime.SI (Time (Time), addTime) -import Data.List (sort) -import Data.Time.Clock (DiffTime, diffTimeToPicoseconds, - picosecondsToDiffTime) -import GHC.Stack (HasCallStack) -import Ouroboros.Network.Block (SlotNo (SlotNo)) -import qualified System.Random.Stateful as R - +import Control.Monad (forM, replicateM) +import Control.Monad.Class.MonadTime.SI (Time (Time), addTime) +import Data.List (sort) +import Data.Time.Clock + ( DiffTime + , diffTimeToPicoseconds + , picosecondsToDiffTime + ) +import GHC.Stack (HasCallStack) +import Ouroboros.Network.Block (SlotNo (SlotNo)) +import System.Random.Stateful qualified as R -- | @singleJumpTipPoints g m n@ generates a list of tip points for a single peer -- serving a single branch between block indices @m@ and @n@. The schedule is a @@ -50,13 +51,12 @@ import qualified System.Random.Stateful as R -- > all (<=n) v && -- > not (hasDuplicates v) -- > } --- singleJumpTipPoints :: R.StatefulGen g m => g -> Int -> Int -> m [Int] singleJumpTipPoints _g m n - | n < m = pure [] + | n < m = pure [] singleJumpTipPoints g m n = do - jump <- R.uniformRM (m, n) g - pure [jump..n] + jump <- R.uniformRM (m, n) g + pure [jump .. n] -- | @rollbacksTipPoints k bs g@ generates a schedule for a single peer -- serving from multiple alternative branches. 
The schedule is a list of block @@ -83,13 +83,11 @@ singleJumpTipPoints g m n = do -- > and [all ( length v == length bs -- > } --- -rollbacksTipPoints - :: R.StatefulGen g m => g -> Int -> [Int] -> m [[Int]] +rollbacksTipPoints :: + R.StatefulGen g m => g -> Int -> [Int] -> m [[Int]] rollbacksTipPoints g k = mapM walkBranch - where - walkBranch bn = singleJumpTipPoints g 0 (min (k-1) (bn - 1)) - + where + walkBranch bn = singleJumpTipPoints g 0 (min (k - 1) (bn - 1)) -- | @tipPointSchedule g slotLengh msgDelayInterval slots@ attaches times to a -- sequence of tip points. These times are the times at which the tip points @@ -127,83 +125,85 @@ rollbacksTipPoints g k = mapM walkBranch -- > } -- > -> {slots:[SlotNo] | all (0<=) slots} -- > -> m {v:[DiffTime] | isSorted v && length v == length slots} --- -tipPointSchedule - :: forall g m. R.StatefulGen g m => g -> DiffTime -> (DiffTime, DiffTime) -> [SlotNo] -> m [Time] +tipPointSchedule :: + forall g m. R.StatefulGen g m => g -> DiffTime -> (DiffTime, DiffTime) -> [SlotNo] -> m [Time] tipPointSchedule _g slotLength (a, b) _slots - | slotLength <= b = error "tipPointSchedule: slotLength <= maximum delay" - | b < a = error "tipPointSchedule: empty delay interval" + | slotLength <= b = error "tipPointSchedule: slotLength <= maximum delay" + | b < a = error "tipPointSchedule: empty delay interval" tipPointSchedule g slotLength msgDelayInterval slots = do - let -- pairs of times corresponding to the start and end of each interval - -- between tip points - slotTimes = map slotTime slots - timePairs = zip slotTimes $ (drop 1 slotTimes) ++ [addTime 1 (last slotTimes)] - go timePairs - where - go :: [(Time, Time)] -> m [Time] - go [] = pure [] - go xs = do - -- While the slots are increasing, assign a time to each point - -- by choosing a random time in the delay interval after the - -- slot start - let (pointSeq, newBranch) = span (\(a, b) -> a < b) xs - times <- forM pointSeq $ \(s, _) -> do - delay <-
uniformRMDiffTime msgDelayInterval g - pure $ addTime delay s - (times', xss) <- case newBranch of - [] -> pure ([], []) - ((seqLast, _) : branches) -> do - delay <- uniformRMDiffTime msgDelayInterval g - let lastTime = addTime delay seqLast - (times', xss) <- handleDelayedTipPoints lastTime branches - pure (lastTime : times', xss) - -- When the slots are not increasing, we must be doing a rollback. - -- We might have tip points in past slots. - times'' <- go xss - pure $ times ++ times' ++ times'' + let + -- pairs of times corresponding to the start and end of each interval + -- between tip points + slotTimes = map slotTime slots + timePairs = zip slotTimes $ (drop 1 slotTimes) ++ [addTime 1 (last slotTimes)] + go timePairs + where + go :: [(Time, Time)] -> m [Time] + go [] = pure [] + go xs = do + -- While the slots are increasing, assign a time to each point + -- by choosing a random time in the delay interval after the + -- slot start + let (pointSeq, newBranch) = span (\(a, b) -> a < b) xs + times <- forM pointSeq $ \(s, _) -> do + delay <- uniformRMDiffTime msgDelayInterval g + pure $ addTime delay s + (times', xss) <- case newBranch of + [] -> pure ([], []) + ((seqLast, _) : branches) -> do + delay <- uniformRMDiffTime msgDelayInterval g + let lastTime = addTime delay seqLast + (times', xss) <- handleDelayedTipPoints lastTime branches + pure (lastTime : times', xss) + -- When the slots are not increasing, we must be doing a rollback. + -- We might have tip points in past slots. + times'' <- go xss + pure $ times ++ times' ++ times'' - -- | The amount of time taken by the given number of slots. - slotsDiffTime :: Int -> DiffTime - slotsDiffTime s = fromIntegral s * slotLength + -- \| The amount of time taken by the given number of slots. + slotsDiffTime :: Int -> DiffTime + slotsDiffTime s = fromIntegral s * slotLength - -- | The time at the start of the slot. 
- slotTime :: SlotNo -> Time - slotTime (SlotNo s) = Time (slotsDiffTime (fromIntegral s)) + -- \| The time at the start of the slot. + slotTime :: SlotNo -> Time + slotTime (SlotNo s) = Time (slotsDiffTime (fromIntegral s)) - -- | Assign times to tip points in past slots. A past slot is - -- any earlier slot than the first parameter. - -- - -- Yields the assigned times and the remaining tip points which - -- aren't in the past. - handleDelayedTipPoints :: Time -> [(Time, Time)] -> m ([Time], [(Time, Time)]) - handleDelayedTipPoints lastTime xss = do - let (pointSeq, newBranch) = span (\(a, _) -> addTime (fst msgDelayInterval) a <= lastTime) xss - nseq = length pointSeq - -- The first point in xss that is not in the past - firstLater = case newBranch of - -- If there is no later point, pick an arbitrary later time interval - -- to sample from - [] -> addTime (slotsDiffTime (toEnum nseq)) lastTime - ((a, _) : _) -> addTime (fst msgDelayInterval) a - times <- replicateM nseq (uniformRMTime (lastTime, firstLater) g) - pure (sort times, newBranch) + -- \| Assign times to tip points in past slots. A past slot is + -- any earlier slot than the first parameter. + -- + -- Yields the assigned times and the remaining tip points which + -- aren't in the past. + handleDelayedTipPoints :: Time -> [(Time, Time)] -> m ([Time], [(Time, Time)]) + handleDelayedTipPoints lastTime xss = do + let (pointSeq, newBranch) = span (\(a, _) -> addTime (fst msgDelayInterval) a <= lastTime) xss + nseq = length pointSeq + -- The first point in xss that is not in the past + firstLater = case newBranch of + -- If there is no later point, pick an arbitrary later time interval + -- to sample from + [] -> addTime (slotsDiffTime (toEnum nseq)) lastTime + ((a, _) : _) -> addTime (fst msgDelayInterval) a + times <- replicateM nseq (uniformRMTime (lastTime, firstLater) g) + pure (sort times, newBranch) -- | Uniformely choose a relative 'DiffTime' in the given range. 
uniformRMDiffTime :: R.StatefulGen g m => (DiffTime, DiffTime) -> g -> m DiffTime uniformRMDiffTime (a, b) g = - picosecondsToDiffTime <$> - R.uniformRM (diffTimeToPicoseconds a, diffTimeToPicoseconds b) g + picosecondsToDiffTime + <$> R.uniformRM (diffTimeToPicoseconds a, diffTimeToPicoseconds b) g -- | Uniformely choose an absolute 'Time' in the given range. uniformRMTime :: R.StatefulGen g m => (Time, Time) -> g -> m Time uniformRMTime (Time a, Time b) g = Time <$> uniformRMDiffTime (a, b) g -data HeaderPointSchedule = HeaderPointSchedule { - hpsTrunk :: [(Time, Int)] -- ^ header points up to the intersection - , hpsBranch :: [(Time, Int)] -- ^ header points after the intersection - -- indices are relative to the branch +data HeaderPointSchedule = HeaderPointSchedule + { hpsTrunk :: [(Time, Int)] + -- ^ header points up to the intersection + , hpsBranch :: [(Time, Int)] + -- ^ header points after the intersection + -- indices are relative to the branch } - deriving (Show) + deriving Show -- | @headerPointSchedule g msgDelayInterval tpSchedule@ generates a -- schedule of header points for a single peer. @@ -248,84 +248,87 @@ data HeaderPointSchedule = HeaderPointSchedule { -- > all (\hps -> not (hasDuplicates (map snd (hpsTrunk hps)))) v && -- > all (\hps -> not (hasDuplicates (map snd (hpsBranch hps)))) v -- > } -headerPointSchedule - :: forall g m. (HasCallStack, R.StatefulGen g m) - => g - -> (DiffTime, DiffTime) - -> [(Maybe Int, [(Time, Int)])] - -> m [HeaderPointSchedule] +headerPointSchedule :: + forall g m. + (HasCallStack, R.StatefulGen g m) => + g -> + (DiffTime, DiffTime) -> + [(Maybe Int, [(Time, Int)])] -> + m [HeaderPointSchedule] headerPointSchedule g msgDelayInterval xs = - let -- Pair each branch with the maximum time at which its header points - -- should be offered - xs' = zip xs $ map (Just . fst . headCallStack . 
snd) (drop 1 xs) ++ [Nothing] - in snd <$> mapAccumM genHPBranchSchedule (Time 0, 0) xs' - - where - -- | @genHPBranchSchedule (tNext, trunkNextHp) ((mi, tps), mtMax)@ generates - -- a schedule for a single branch. - -- - -- @tNext@ is the time at which the next header point should be offered. - -- - -- @trunkNextHp@ is the index of the next header point that was offered - -- from the trunk. - -- - -- @mi@ is the index of the intersection block with the trunk. Nothing - -- means this group has tip points from the trunk. - -- - -- @tps@ is the list of tip point indices relative to the branch. - -- - -- @mtMax@ is the maximum time at which the last header point can be - -- offered. 'Nothing' stands for infinity. - -- - -- Returns the time at which the last header point was offered, the next - -- header point to offer and the schedule for the branch. - genHPBranchSchedule - :: (Time, Int) - -> ((Maybe Int, [(Time, Int)]), Maybe Time) - -> m ((Time, Int), HeaderPointSchedule) - genHPBranchSchedule (tNext, trunkNextHp) ((_mi, []), _mtMax) = - pure ((tNext, trunkNextHp), HeaderPointSchedule [] []) - genHPBranchSchedule (tNext, trunkNextHp) ((Nothing, tps), mtMax) = do - (p, tsTrunk) <- mapAccumM (generatePerTipPointTimes mtMax) (tNext, trunkNextHp) tps - pure (p, HeaderPointSchedule (concat tsTrunk) []) - genHPBranchSchedule (tNext, trunkNextHp) ((Just iLast, tps@((firstTipTime, _):_)), mtMax) = do - ((tNext', trunkNextHp'), tsTrunk) <- generatePerTipPointTimes mtMax (tNext, trunkNextHp) (firstTipTime, iLast) - ((tNext'', _), tsBranch) <- mapAccumM (generatePerTipPointTimes mtMax) (tNext', 0) tps - pure ((tNext'', trunkNextHp'), HeaderPointSchedule tsTrunk (concat tsBranch)) + let + -- Pair each branch with the maximum time at which its header points + -- should be offered + xs' = zip xs $ map (Just . fst . headCallStack . 
snd) (drop 1 xs) ++ [Nothing] + in + snd <$> mapAccumM genHPBranchSchedule (Time 0, 0) xs' + where + -- \| @genHPBranchSchedule (tNext, trunkNextHp) ((mi, tps), mtMax)@ generates + -- a schedule for a single branch. + -- + -- @tNext@ is the time at which the next header point should be offered. + -- + -- @trunkNextHp@ is the index of the next header point that was offered + -- from the trunk. + -- + -- @mi@ is the index of the intersection block with the trunk. Nothing + -- means this group has tip points from the trunk. + -- + -- @tps@ is the list of tip point indices relative to the branch. + -- + -- @mtMax@ is the maximum time at which the last header point can be + -- offered. 'Nothing' stands for infinity. + -- + -- Returns the time at which the last header point was offered, the next + -- header point to offer and the schedule for the branch. + genHPBranchSchedule :: + (Time, Int) -> + ((Maybe Int, [(Time, Int)]), Maybe Time) -> + m ((Time, Int), HeaderPointSchedule) + genHPBranchSchedule (tNext, trunkNextHp) ((_mi, []), _mtMax) = + pure ((tNext, trunkNextHp), HeaderPointSchedule [] []) + genHPBranchSchedule (tNext, trunkNextHp) ((Nothing, tps), mtMax) = do + (p, tsTrunk) <- mapAccumM (generatePerTipPointTimes mtMax) (tNext, trunkNextHp) tps + pure (p, HeaderPointSchedule (concat tsTrunk) []) + genHPBranchSchedule (tNext, trunkNextHp) ((Just iLast, tps@((firstTipTime, _) : _)), mtMax) = do + ((tNext', trunkNextHp'), tsTrunk) <- + generatePerTipPointTimes mtMax (tNext, trunkNextHp) (firstTipTime, iLast) + ((tNext'', _), tsBranch) <- mapAccumM (generatePerTipPointTimes mtMax) (tNext', 0) tps + pure ((tNext'', trunkNextHp'), HeaderPointSchedule tsTrunk (concat tsBranch)) - -- | @generatePerTipPointTimes mtMax (tNext, nextHp) (tTip, tp)@ schedules the header - -- points from @nextHp@ to @tp@ in ascending order starting from the maximum - -- of @tNext@ and @tTip + t@ where t is sampled from @msgDelayInterval@. 
- -- - -- Less header points are scheduled if they would be scheduled after @mtMax@. - -- - -- The delay of each tipPoint is sampled from @msgDelayInterval@. - -- - generatePerTipPointTimes - :: Maybe Time - -> (Time, Int) - -> (Time, Int) - -> m ((Time, Int), [(Time, Int)]) - generatePerTipPointTimes mtMax (tNext0, nextHp0) (tTip, tp) = do - t <- uniformRMDiffTime msgDelayInterval g - go (max tNext0 (addTime t tTip)) nextHp0 [] - where - go :: Time -> Int -> [(Time, Int)] -> m ((Time, Int), [(Time, Int)]) - go tNext nextHp acc = do - if maybe False (tNext >) mtMax || nextHp > tp then - pure ((tNext, nextHp), reverse acc) - else do - t <- (`addTime` tNext) <$> uniformRMDiffTime msgDelayInterval g - go t (nextHp+1) ((tNext, nextHp) : acc) + -- \| @generatePerTipPointTimes mtMax (tNext, nextHp) (tTip, tp)@ schedules the header + -- points from @nextHp@ to @tp@ in ascending order starting from the maximum + -- of @tNext@ and @tTip + t@ where t is sampled from @msgDelayInterval@. + -- + -- Fewer header points are scheduled if they would be scheduled after @mtMax@. + -- + -- The delay of each tipPoint is sampled from @msgDelayInterval@.
+ generatePerTipPointTimes :: + Maybe Time -> + (Time, Int) -> + (Time, Int) -> + m ((Time, Int), [(Time, Int)]) + generatePerTipPointTimes mtMax (tNext0, nextHp0) (tTip, tp) = do + t <- uniformRMDiffTime msgDelayInterval g + go (max tNext0 (addTime t tTip)) nextHp0 [] + where + go :: Time -> Int -> [(Time, Int)] -> m ((Time, Int), [(Time, Int)]) + go tNext nextHp acc = do + if maybe False (tNext >) mtMax || nextHp > tp + then + pure ((tNext, nextHp), reverse acc) + else do + t <- (`addTime` tNext) <$> uniformRMDiffTime msgDelayInterval g + go t (nextHp + 1) ((tNext, nextHp) : acc) mapAccumM :: Monad m => (s -> x -> m (s, y)) -> s -> [x] -> m (s, [y]) mapAccumM _ acc [] = pure (acc, []) -mapAccumM f acc (x:xs) = do - (acc', y) <- f acc x - (acc'', ys) <- mapAccumM f acc' xs - pure (acc'', y:ys) +mapAccumM f acc (x : xs) = do + (acc', y) <- f acc x + (acc'', ys) <- mapAccumM f acc' xs + pure (acc'', y : ys) headCallStack :: HasCallStack => [a] -> a headCallStack = \case - x:_ -> x - _ -> error "headCallStack: empty list" + x : _ -> x + _ -> error "headCallStack: empty list" diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Tests.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Tests.hs index a24dc6c975..dcc101de2d 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Tests.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Consensus/PointSchedule/Tests.hs @@ -1,67 +1,77 @@ {-# LANGUAGE LambdaCase #-} {-# LANGUAGE TypeApplications #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Consensus.PointSchedule.Tests (tests) where -import Cardano.Slotting.Slot (SlotNo (..), WithOrigin (..), - withOrigin) -import Control.Monad (forM, replicateM) -import Control.Monad.Class.MonadTime.SI (Time (Time)) -import Data.Bifunctor (second) -import Data.Coerce (coerce) -import Data.List as List (foldl', group, isSuffixOf, partition, sort) -import qualified 
Data.List.NonEmpty as NonEmpty -import Data.Maybe (isNothing) -import Data.Time.Clock (DiffTime, diffTimeToPicoseconds, - picosecondsToDiffTime) -import GHC.Stack (HasCallStack) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (blockHash) -import System.Random.Stateful (runSTGen_) -import Test.Consensus.PointSchedule.SinglePeer -import Test.Consensus.PointSchedule.SinglePeer.Indices -import qualified Test.QuickCheck as QC -import Test.QuickCheck -import Test.QuickCheck.Random -import Test.Tasty -import Test.Tasty.QuickCheck -import qualified Test.Util.QuickCheck as QC -import Test.Util.TersePrinting (terseBlock, terseWithOrigin) -import Test.Util.TestBlock (TestBlock, TestHash (unTestHash), - firstBlock, modifyFork, successorBlock, tbSlot) -import Test.Util.TestEnv +import Cardano.Slotting.Slot + ( SlotNo (..) + , WithOrigin (..) + , withOrigin + ) +import Control.Monad (forM, replicateM) +import Control.Monad.Class.MonadTime.SI (Time (Time)) +import Data.Bifunctor (second) +import Data.Coerce (coerce) +import Data.List as List (foldl', group, isSuffixOf, partition, sort) +import Data.List.NonEmpty qualified as NonEmpty +import Data.Maybe (isNothing) +import Data.Time.Clock + ( DiffTime + , diffTimeToPicoseconds + , picosecondsToDiffTime + ) +import GHC.Stack (HasCallStack) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (blockHash) +import System.Random.Stateful (runSTGen_) +import Test.Consensus.PointSchedule.SinglePeer +import Test.Consensus.PointSchedule.SinglePeer.Indices +import Test.QuickCheck +import Test.QuickCheck qualified as QC +import Test.QuickCheck.Random +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.QuickCheck qualified as QC +import Test.Util.TersePrinting (terseBlock, terseWithOrigin) +import Test.Util.TestBlock + ( TestBlock + , TestHash (unTestHash) + , firstBlock + , modifyFork + , successorBlock + , tbSlot + ) +import Test.Util.TestEnv 
tests :: TestTree tests = - adjustQuickCheckTests (* 100) $ + adjustQuickCheckTests (* 100) $ adjustOption (\(QuickCheckMaxSize n) -> QuickCheckMaxSize (n `div` 10)) $ - testGroup "PointSchedule" - [ testProperty "zipMany" prop_zipMany - , testProperty "singleJumpTipPoints" prop_singleJumpTipPoints - , testProperty "tipPointSchedule" prop_tipPointSchedule - , testProperty "headerPointSchedule" prop_headerPointSchedule - , testProperty "peerScheduleFromTipPoints" prop_peerScheduleFromTipPoints - ] + testGroup + "PointSchedule" + [ testProperty "zipMany" prop_zipMany + , testProperty "singleJumpTipPoints" prop_singleJumpTipPoints + , testProperty "tipPointSchedule" prop_tipPointSchedule + , testProperty "headerPointSchedule" prop_headerPointSchedule + , testProperty "peerScheduleFromTipPoints" prop_peerScheduleFromTipPoints + ] prop_zipMany :: [[Int]] -> QC.Property prop_zipMany xss = - let xs :: [Int] - xs = concatMap (map (+1)) xss - ys :: [[(Int, Int)]] - ys = zipMany xs xss - in - length xss QC.=== length ys - QC..&&. - map (map snd) ys QC.=== xss - QC..&&. - concatMap (map fst) ys QC.=== xs + let xs :: [Int] + xs = concatMap (map (+ 1)) xss + ys :: [[(Int, Int)]] + ys = zipMany xs xss + in length xss QC.=== length ys + QC..&&. map (map snd) ys QC.=== xss + QC..&&. concatMap (map fst) ys QC.=== xs data SingleJumpTipPointsInput = SingleJumpTipPointsInput { sjtpMin :: Int , sjtpMax :: Int - } deriving (Show) + } + deriving Show instance QC.Arbitrary SingleJumpTipPointsInput where arbitrary = do @@ -71,25 +81,24 @@ instance QC.Arbitrary SingleJumpTipPointsInput where prop_singleJumpTipPoints :: QCGen -> SingleJumpTipPointsInput -> QC.Property prop_singleJumpTipPoints seed (SingleJumpTipPointsInput m n) = - runSTGen_ seed $ \g -> do - xs <- singleJumpTipPoints g m n - pure $ isSorted QC.le xs - QC..&&. - (QC.counterexample ("length xs = " ++ show (length xs)) $ - length xs `QC.le` n - m + 1 - ) - QC..&&. 
- (QC.counterexample ("head xs = " ++ show (headCallStack xs)) $ - headCallStack xs `QC.le` n - QC..&&. - m `QC.le` headCallStack xs - ) + runSTGen_ seed $ \g -> do + xs <- singleJumpTipPoints g m n + pure $ + isSorted QC.le xs + QC..&&. ( QC.counterexample ("length xs = " ++ show (length xs)) $ + length xs `QC.le` n - m + 1 + ) + QC..&&. ( QC.counterexample ("head xs = " ++ show (headCallStack xs)) $ + headCallStack xs `QC.le` n + QC..&&. m `QC.le` headCallStack xs + ) data TipPointScheduleInput = TipPointScheduleInput - { tpsSlotLength :: DiffTime + { tpsSlotLength :: DiffTime , tpsMsgInterval :: (DiffTime, DiffTime) - , tpsSlots :: [SlotNo] - } deriving (Show) + , tpsSlots :: [SlotNo] + } + deriving Show instance QC.Arbitrary TipPointScheduleInput where arbitrary = do @@ -101,20 +110,20 @@ instance QC.Arbitrary TipPointScheduleInput where prop_tipPointSchedule :: QCGen -> TipPointScheduleInput -> QC.Property prop_tipPointSchedule seed (TipPointScheduleInput slotLength msgInterval slots) = - runSTGen_ seed $ \g -> do - ts <- tipPointSchedule g slotLength msgInterval slots - pure $ - (QC.counterexample ("length slots = " ++ show (length slots)) $ - QC.counterexample ("length ts = " ++ show (length ts)) $ - length slots QC.=== length ts - ) - QC..&&. - isSorted QC.le ts + runSTGen_ seed $ \g -> do + ts <- tipPointSchedule g slotLength msgInterval slots + pure $ + ( QC.counterexample ("length slots = " ++ show (length slots)) $ + QC.counterexample ("length ts = " ++ show (length ts)) $ + length slots QC.=== length ts + ) + QC..&&. 
isSorted QC.le ts data HeaderPointScheduleInput = HeaderPointScheduleInput { hpsMsgInterval :: (DiffTime, DiffTime) - , hpsTipPoints :: [(Maybe Int, [(Time, Int)])] - } deriving (Show) + , hpsTipPoints :: [(Maybe Int, [(Time, Int)])] + } + deriving Show instance QC.Arbitrary HeaderPointScheduleInput where arbitrary = do @@ -131,63 +140,67 @@ instance QC.Arbitrary HeaderPointScheduleInput where prop_headerPointSchedule :: QCGen -> HeaderPointScheduleInput -> QC.Property prop_headerPointSchedule g (HeaderPointScheduleInput msgDelayInterval xs) = - runSTGen_ g $ \g' -> do - hpss <- headerPointSchedule g' msgDelayInterval xs - pure $ - (QC.counterexample ("length xs = " ++ show (length xs)) $ - QC.counterexample ("length hpss = " ++ show (length hpss)) $ + runSTGen_ g $ \g' -> do + hpss <- headerPointSchedule g' msgDelayInterval xs + pure $ + ( QC.counterexample ("length xs = " ++ show (length xs)) $ + QC.counterexample ("length hpss = " ++ show (length hpss)) $ length xs QC.=== length hpss - ) - QC..&&. - (QC.counterexample ("header points are sorted in each branch") $ - foldr (QC..&&.) (QC.property True) - [ QC.counterexample ("branch = " ++ show hps) $ - isSorted QC.lt (map snd trunk) QC..&&. isSorted QC.lt (map snd branch) - | hps@(HeaderPointSchedule trunk branch) <- hpss - ] - ) - QC..&&. - (QC.counterexample ("times are sorted accross branches") $ - QC.counterexample ("branches = " ++ show hpss) $ - isSorted QC.le $ concat - [ map fst trunk ++ map fst branch - | HeaderPointSchedule trunk branch <- hpss - ] - ) - QC..&&. - (QC.counterexample ("trunk header points are sorted accross branches") $ - QC.counterexample ("branches = " ++ show hpss) $ - isSorted QC.lt $ concat - [ map snd trunk | HeaderPointSchedule trunk _ <- hpss ] - ) - QC..&&. - (QC.counterexample "branch header points follow tip points" $ - QC.counterexample ("branches = " ++ show hpss) $ - foldr (QC..&&.) 
(QC.property True) $ - zipWith (\hps x -> - case x of - (Just _, b) -> headerPointsFollowTipPoints leMaybe (hpsBranch hps) b - _ -> QC.property True - ) hpss xs - ) - where - leMaybe :: Ord a => a -> a -> Maybe Ordering - leMaybe a b = Just $ compare a b - -data PeerScheduleFromTipPointsInput = PeerScheduleFromTipPointsInput - PeerScheduleParams - [(IsTrunk, [Int])] - (AF.AnchoredFragment TestBlock) - [AF.AnchoredFragment TestBlock] + ) + QC..&&. ( QC.counterexample ("header points are sorted in each branch") $ + foldr + (QC..&&.) + (QC.property True) + [ QC.counterexample ("branch = " ++ show hps) $ + isSorted QC.lt (map snd trunk) QC..&&. isSorted QC.lt (map snd branch) + | hps@(HeaderPointSchedule trunk branch) <- hpss + ] + ) + QC..&&. ( QC.counterexample ("times are sorted across branches") $ + QC.counterexample ("branches = " ++ show hpss) $ + isSorted QC.le $ + concat + [ map fst trunk ++ map fst branch + | HeaderPointSchedule trunk branch <- hpss + ] + ) + QC..&&. ( QC.counterexample ("trunk header points are sorted across branches") $ + QC.counterexample ("branches = " ++ show hpss) $ + isSorted QC.lt $ + concat + [map snd trunk | HeaderPointSchedule trunk _ <- hpss] + ) + QC..&&. ( QC.counterexample "branch header points follow tip points" $ + QC.counterexample ("branches = " ++ show hpss) $ + foldr (QC..&&.)
(QC.property True) $ + zipWith + ( \hps x -> + case x of + (Just _, b) -> headerPointsFollowTipPoints leMaybe (hpsBranch hps) b + _ -> QC.property True + ) + hpss + xs + ) + where + leMaybe :: Ord a => a -> a -> Maybe Ordering + leMaybe a b = Just $ compare a b + +data PeerScheduleFromTipPointsInput + = PeerScheduleFromTipPointsInput + PeerScheduleParams + [(IsTrunk, [Int])] + (AF.AnchoredFragment TestBlock) + [AF.AnchoredFragment TestBlock] instance Show PeerScheduleFromTipPointsInput where show (PeerScheduleFromTipPointsInput psp tps trunk branches) = unlines [ "PeerScheduleFromTipPointsInput" - , " params = " ++ show psp + , " params = " ++ show psp , " tipPoints = " ++ show tps , " trunk = " ++ show (AF.toOldestFirst trunk) - , " branches = " ++ show [ (AF.anchorBlockNo b, AF.toOldestFirst b) | b <- branches ] + , " branches = " ++ show [(AF.anchorBlockNo b, AF.toOldestFirst b) | b <- branches] ] instance QC.Arbitrary PeerScheduleFromTipPointsInput where @@ -203,25 +216,27 @@ instance QC.Arbitrary PeerScheduleFromTipPointsInput where tsi = zip isTrunks intersections -- The maximum block number in the tip points and the intersections. maxBlock = - maximum $ concat [ b | (IsTrunk, b) <- tstps ] ++ - [ i | (IsBranch, i) <- tsi ] + maximum $ + concat [b | (IsTrunk, b) <- tstps] + ++ [i | (IsBranch, i) <- tsi] trunkSlots <- map SlotNo <$> genSortedVectorWithoutDuplicates (maxBlock + 1) - let branchesTipPoints = [ b | (IsBranch, b) <- tstps ] + let branchesTipPoints = [b | (IsBranch, b) <- tstps] branchesSlots <- forM branchesTipPoints $ \b -> do let maxBranchBlock = maximum b map SlotNo <$> genSortedVectorWithoutDuplicates (maxBranchBlock + 1) let trunk = mkFragment Origin trunkSlots 0 - branchIntersections = [ i | (IsBranch, i) <- tsi ] + branchIntersections = [i | (IsBranch, i) <- tsi] branches = [ genAdversarialFragment trunk fNo i branchSlots - | (fNo, branchSlots, i) <- zip3 [1..] branchesSlots branchIntersections + | (fNo, branchSlots, i) <- zip3 [1 ..] 
branchesSlots branchIntersections ] - psp = PeerScheduleParams - { pspSlotLength = slotLength - , pspTipDelayInterval = tipDelayInterval - , pspHeaderDelayInterval = headerDelayInterval - , pspBlockDelayInterval = blockDelayInterval - } + psp = + PeerScheduleParams + { pspSlotLength = slotLength + , pspTipDelayInterval = tipDelayInterval + , pspHeaderDelayInterval = headerDelayInterval + , pspBlockDelayInterval = blockDelayInterval + } pure $ PeerScheduleFromTipPointsInput psp tstps trunk branches @@ -230,162 +245,170 @@ instance QC.Arbitrary IsTrunk where prop_peerScheduleFromTipPoints :: QCGen -> PeerScheduleFromTipPointsInput -> QC.Property prop_peerScheduleFromTipPoints seed (PeerScheduleFromTipPointsInput psp tps trunk branches) = - runSTGen_ seed $ \g -> do - ss <- peerScheduleFromTipPoints g psp tps trunk branches - let (tps', (hps, _bps)) = - partition (isHeaderPoint . snd) <$> partition (isTipPoint . snd) ss - pure $ - (QC.counterexample ("hps = " ++ show (map (second showPoint) hps)) $ - QC.counterexample ("tps' = " ++ show (map (second showPoint) tps')) $ - headerPointsFollowTipPoints isAncestorBlock' - (map (second schedulePointToBlock) hps) - (map (second schedulePointToBlock) tps') - ) - QC..&&. - (QC.counterexample ("schedule = " ++ show (map (second showPoint) ss)) $ - isSorted QC.le (map fst ss)) - QC..&&. - (QC.counterexample ("schedule = " ++ show (map (second showPoint) ss)) $ - QC.counterexample ("header points don't decrease or repeat") $ - noReturnToAncestors (filter isHeaderPoint $ map snd ss) - ) - QC..&&. 
- (QC.counterexample ("schedule = " ++ show (map (second showPoint) ss)) $ - QC.counterexample ("block points don't decrease or repeat") $ - noReturnToAncestors (filter isBlockPoint $ map snd ss) - ) - where - showPoint :: SchedulePoint TestBlock -> String - showPoint (ScheduleTipPoint b) = "TP " ++ terseWithOrigin terseBlock b - showPoint (ScheduleHeaderPoint b) = "HP " ++ terseWithOrigin terseBlock b - showPoint (ScheduleBlockPoint b) = "BP " ++ terseWithOrigin terseBlock b - - isTipPoint :: SchedulePoint blk -> Bool - isTipPoint (ScheduleTipPoint _) = True - isTipPoint _ = False - - isHeaderPoint :: SchedulePoint blk -> Bool - isHeaderPoint (ScheduleHeaderPoint _) = True - isHeaderPoint _ = False - - isBlockPoint :: SchedulePoint blk -> Bool - isBlockPoint (ScheduleBlockPoint _) = True - isBlockPoint _ = False + runSTGen_ seed $ \g -> do + ss <- peerScheduleFromTipPoints g psp tps trunk branches + let (tps', (hps, _bps)) = + partition (isHeaderPoint . snd) <$> partition (isTipPoint . snd) ss + pure $ + ( QC.counterexample ("hps = " ++ show (map (second showPoint) hps)) $ + QC.counterexample ("tps' = " ++ show (map (second showPoint) tps')) $ + headerPointsFollowTipPoints + isAncestorBlock' + (map (second schedulePointToBlock) hps) + (map (second schedulePointToBlock) tps') + ) + QC..&&. ( QC.counterexample ("schedule = " ++ show (map (second showPoint) ss)) $ + isSorted QC.le (map fst ss) + ) + QC..&&. ( QC.counterexample ("schedule = " ++ show (map (second showPoint) ss)) $ + QC.counterexample ("header points don't decrease or repeat") $ + noReturnToAncestors (filter isHeaderPoint $ map snd ss) + ) + QC..&&. 
( QC.counterexample ("schedule = " ++ show (map (second showPoint) ss)) $ + QC.counterexample ("block points don't decrease or repeat") $ + noReturnToAncestors (filter isBlockPoint $ map snd ss) + ) + where + showPoint :: SchedulePoint TestBlock -> String + showPoint (ScheduleTipPoint b) = "TP " ++ terseWithOrigin terseBlock b + showPoint (ScheduleHeaderPoint b) = "HP " ++ terseWithOrigin terseBlock b + showPoint (ScheduleBlockPoint b) = "BP " ++ terseWithOrigin terseBlock b + + isTipPoint :: SchedulePoint blk -> Bool + isTipPoint (ScheduleTipPoint _) = True + isTipPoint _ = False + + isHeaderPoint :: SchedulePoint blk -> Bool + isHeaderPoint (ScheduleHeaderPoint _) = True + isHeaderPoint _ = False + + isBlockPoint :: SchedulePoint blk -> Bool + isBlockPoint (ScheduleBlockPoint _) = True + isBlockPoint _ = False isAncestorBlock :: TestBlock -> TestBlock -> Maybe Ordering isAncestorBlock b0 b1 = - if isSuffixOf - (NonEmpty.toList (unTestHash (blockHash b0))) - (NonEmpty.toList (unTestHash (blockHash b1))) - then if blockHash b0 == blockHash b1 - then Just EQ - else Just LT + if isSuffixOf + (NonEmpty.toList (unTestHash (blockHash b0))) + (NonEmpty.toList (unTestHash (blockHash b1))) + then + if blockHash b0 == blockHash b1 + then Just EQ + else Just LT else Nothing isAncestorBlock' :: WithOrigin TestBlock -> WithOrigin TestBlock -> Maybe Ordering -isAncestorBlock' Origin Origin = Just EQ -isAncestorBlock' Origin _ = Just LT -isAncestorBlock' _ Origin = Just GT +isAncestorBlock' Origin Origin = Just EQ +isAncestorBlock' Origin _ = Just LT +isAncestorBlock' _ Origin = Just GT isAncestorBlock' (At b0) (At b1) = isAncestorBlock b0 b1 noReturnToAncestors :: [SchedulePoint TestBlock] -> QC.Property noReturnToAncestors = go [] - where - go _ [] = QC.property True - go ancestors (p : ss) = - let b = schedulePointToBlock p - in foldr (QC..&&.) (QC.property True) - (map (isNotAncestorOf' b) ancestors) - QC..&&. 
- go (b : ancestors) ss - - isNotAncestorOf' :: WithOrigin TestBlock -> WithOrigin TestBlock -> QC.Property - isNotAncestorOf' b0 b1 = - QC.counterexample ("return to ancestor: " ++ terseWithOrigin terseBlock b0 ++ " -> " ++ terseWithOrigin terseBlock b1) $ - QC.property $ isNothing $ isAncestorBlock' b0 b1 + where + go _ [] = QC.property True + go ancestors (p : ss) = + let b = schedulePointToBlock p + in foldr + (QC..&&.) + (QC.property True) + (map (isNotAncestorOf' b) ancestors) + QC..&&. go (b : ancestors) ss + + isNotAncestorOf' :: WithOrigin TestBlock -> WithOrigin TestBlock -> QC.Property + isNotAncestorOf' b0 b1 = + QC.counterexample + ("return to ancestor: " ++ terseWithOrigin terseBlock b0 ++ " -> " ++ terseWithOrigin terseBlock b1) + $ QC.property + $ isNothing + $ isAncestorBlock' b0 b1 genTimeInterval :: DiffTime -> QC.Gen (DiffTime, DiffTime) genTimeInterval trange = do - a <- chooseDiffTime (1, trange) - b <- chooseDiffTime (1, trange) - pure (min a b, max a b) + a <- chooseDiffTime (1, trange) + b <- chooseDiffTime (1, trange) + pure (min a b, max a b) genTipPoints :: QC.Gen [[Int]] genTipPoints = do - branchCount <- QC.choose (1, 5) - xss <- QC.vector branchCount - pure $ map (dedupSorted . sort . map QC.getNonNegative . QC.getNonEmpty) xss + branchCount <- QC.choose (1, 5) + xss <- QC.vector branchCount + pure $ map (dedupSorted . sort . map QC.getNonNegative . QC.getNonEmpty) xss -- | @genIntersections n@ generates a list of @n@ intersections as block numbers. genIntersections :: Int -> QC.Gen [Int] genIntersections n = - -- Intersection with the genesis block is represented by @Just (-1)@. - map (\x -> x - 1) . sort . map QC.getNonNegative <$> QC.vector n + -- Intersection with the genesis block is represented by @Just (-1)@. + map (\x -> x - 1) . sort . map QC.getNonNegative <$> QC.vector n isSorted :: Show a => (a -> a -> QC.Property) -> [a] -> QC.Property isSorted cmp xs = - QC.counterexample ("isSorted " ++ show xs) $ - foldr (QC..&&.) 
(QC.property True) - [ cmp a b | (a, b) <- zip xs (drop 1 xs) ] + QC.counterexample ("isSorted " ++ show xs) $ + foldr + (QC..&&.) + (QC.property True) + [cmp a b | (a, b) <- zip xs (drop 1 xs)] chooseDiffTime :: (DiffTime, DiffTime) -> QC.Gen DiffTime chooseDiffTime (a, b) = do - let aInt = diffTimeToPicoseconds a - bInt = diffTimeToPicoseconds b - picosecondsToDiffTime <$> QC.chooseInteger (aInt, bInt) + let aInt = diffTimeToPicoseconds a + bInt = diffTimeToPicoseconds b + picosecondsToDiffTime <$> QC.chooseInteger (aInt, bInt) dedupSorted :: Eq a => [a] -> [a] dedupSorted = map headCallStack . group headCallStack :: HasCallStack => [a] -> a headCallStack = \case - x:_ -> x - _ -> error "headCallStack: empty list" + x : _ -> x + _ -> error "headCallStack: empty list" -headerPointsFollowTipPoints :: Show a => (a -> a -> Maybe Ordering) -> [(Time, a)] -> [(Time, a)] -> QC.Property +headerPointsFollowTipPoints :: + Show a => (a -> a -> Maybe Ordering) -> [(Time, a)] -> [(Time, a)] -> QC.Property headerPointsFollowTipPoints _ [] [] = QC.property True headerPointsFollowTipPoints isAncestor ((t0, i0) : ss) ((t1, i1) : ps) = - QC.counterexample "schedule times follow tip points" (QC.ge t0 t1) - QC..&&. - (case isAncestor i0 i1 of - Just LT -> headerPointsFollowTipPoints isAncestor ss ((t1, i1) : ps) - Just EQ -> headerPointsFollowTipPoints isAncestor ss ps - _ -> headerPointsFollowTipPoints isAncestor ((t0, i0) : ss) ps - ) + QC.counterexample "schedule times follow tip points" (QC.ge t0 t1) + QC..&&. ( case isAncestor i0 i1 of + Just LT -> headerPointsFollowTipPoints isAncestor ss ((t1, i1) : ps) + Just EQ -> headerPointsFollowTipPoints isAncestor ss ps + _ -> headerPointsFollowTipPoints isAncestor ((t0, i0) : ss) ps + ) headerPointsFollowTipPoints _ [] _ps = --- There can be unscheduled header points if they would be produced so --- late that they would come after the tip point has moved to another branch. 
--- --- QC.counterexample ("schedule times are sufficient for: " ++ show ps) $ --- QC.property False - QC.property True + -- There can be unscheduled header points if they would be produced so + -- late that they would come after the tip point has moved to another branch. + -- + -- QC.counterexample ("schedule times are sufficient for: " ++ show ps) $ + -- QC.property False + QC.property True headerPointsFollowTipPoints _ ss [] = - QC.counterexample ("schedule times finish after last tip point: " ++ show ss) $ - QC.property False + QC.counterexample ("schedule times finish after last tip point: " ++ show ss) $ + QC.property False -- | @genAdversarialFragment goodBlocks forkNo prefixCount slotsA@ generates -- a fragment for a chain that forks off the given chain. -genAdversarialFragment :: AF.AnchoredFragment TestBlock -> Int -> Int -> [SlotNo] -> AF.AnchoredFragment TestBlock -genAdversarialFragment goodBlocks forkNo prefixCount slotsA - = mkFragment intersectionBlock slotsA forkNo - where - -- blocks in the common prefix in reversed order - intersectionBlock = case AF.head $ AF.takeOldest (prefixCount + 1) goodBlocks of - Left _ -> Origin - Right b -> At b +genAdversarialFragment :: + AF.AnchoredFragment TestBlock -> Int -> Int -> [SlotNo] -> AF.AnchoredFragment TestBlock +genAdversarialFragment goodBlocks forkNo prefixCount slotsA = + mkFragment intersectionBlock slotsA forkNo + where + -- blocks in the common prefix in reversed order + intersectionBlock = case AF.head $ AF.takeOldest (prefixCount + 1) goodBlocks of + Left _ -> Origin + Right b -> At b -- | @mkFragment pre active forkNo@ generates a list of blocks at the given slots. 
mkFragment :: WithOrigin TestBlock -> [SlotNo] -> Int -> AF.AnchoredFragment TestBlock mkFragment pre active forkNo = AF.fromNewestFirst anchor $ List.foldl' issue [] active - where - anchor = withOrigin AF.AnchorGenesis AF.anchorFromBlock pre - issue (h : t) s = (successorBlock h) {tbSlot = s} : h : t - issue [] s | Origin <- pre = [(firstBlock (fromIntegral forkNo)) {tbSlot = s}] - | At h <- pre = [(modifyFork (const (fromIntegral forkNo)) (successorBlock h)) {tbSlot = s}] + where + anchor = withOrigin AF.AnchorGenesis AF.anchorFromBlock pre + issue (h : t) s = (successorBlock h){tbSlot = s} : h : t + issue [] s + | Origin <- pre = [(firstBlock (fromIntegral forkNo)){tbSlot = s}] + | At h <- pre = [(modifyFork (const (fromIntegral forkNo)) (successorBlock h)){tbSlot = s}] -- | @genVectorWithoutDuplicates n@ generates a vector of length @n@ -- without duplicates. genSortedVectorWithoutDuplicates :: (QC.Arbitrary a, Num a, Ord a) => Int -> QC.Gen [a] genSortedVectorWithoutDuplicates n = do - x0 <- QC.arbitrary - scanl (+) x0 . map ((+1) . QC.getNonNegative) <$> QC.vector (n - 1) + x0 <- QC.arbitrary + scanl (+) x0 . map ((+ 1) . QC.getNonNegative) <$> QC.vector (n - 1) diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Util/PartialAccessors.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Util/PartialAccessors.hs index 470e3f43df..13241e8b43 100644 --- a/ouroboros-consensus-diffusion/test/consensus-test/Test/Util/PartialAccessors.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Util/PartialAccessors.hs @@ -4,32 +4,32 @@ -- Those functions are partial, and are designed to only be used in tests. -- We know they won't fail there, because we generated the structures -- with the correct properties. 
-module Test.Util.PartialAccessors ( - getHonestPeer +module Test.Util.PartialAccessors + ( getHonestPeer , getOnlyBranch , getOnlyBranchTip , getTrunkTip ) where -import qualified Data.Map as Map -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (HasHeader) -import Test.Consensus.BlockTree +import Data.Map qualified as Map +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (HasHeader) +import Test.Consensus.BlockTree getOnlyBranch :: BlockTree blk -> BlockTreeBranch blk -getOnlyBranch BlockTree {btBranches} = case btBranches of +getOnlyBranch BlockTree{btBranches} = case btBranches of [branch] -> branch - _ -> error "tree must have exactly one alternate branch" + _ -> error "tree must have exactly one alternate branch" getTrunkTip :: HasHeader blk => BlockTree blk -> blk getTrunkTip tree = case btTrunk tree of - (AF.Empty _) -> error "tree must have at least one block" + (AF.Empty _) -> error "tree must have at least one block" (_ AF.:> tipBlock) -> tipBlock getOnlyBranchTip :: HasHeader blk => BlockTree blk -> blk -getOnlyBranchTip BlockTree {btBranches} = case btBranches of +getOnlyBranchTip BlockTree{btBranches} = case btBranches of [branch] -> case btbFull branch of - (AF.Empty _) -> error "alternate branch must have at least one block" + (AF.Empty _) -> error "alternate branch must have at least one block" (_ AF.:> tipBlock) -> tipBlock _ -> error "tree must have exactly one alternate branch" @@ -39,4 +39,4 @@ getHonestPeer honests = then error "there must be exactly one honest peer" else case Map.lookup 1 honests of Nothing -> error "the only honest peer must have id 1" - Just p -> p + Just p -> p diff --git a/ouroboros-consensus-diffusion/test/consensus-test/Test/Util/TersePrinting.hs b/ouroboros-consensus-diffusion/test/consensus-test/Test/Util/TersePrinting.hs index 180ae0dd0b..b3a2d108e0 100644 --- 
a/ouroboros-consensus-diffusion/test/consensus-test/Test/Util/TersePrinting.hs +++ b/ouroboros-consensus-diffusion/test/consensus-test/Test/Util/TersePrinting.hs @@ -3,8 +3,8 @@ -- | Helpers for printing various objects in a terse way. Terse printing is -- similar to that provided by the 'Condense' typeclass except it can be -- sometimes even more compact and it is very specific to tests. -module Test.Util.TersePrinting ( - terseAnchor +module Test.Util.TersePrinting + ( terseAnchor , terseBlock , terseFragment , terseHFragment @@ -17,21 +17,37 @@ module Test.Util.TersePrinting ( , terseWithOrigin ) where -import Cardano.Slotting.Block (BlockNo (BlockNo)) -import Data.List (intercalate) -import Data.List.NonEmpty (NonEmpty ((:|)), toList) -import qualified Data.List.NonEmpty as NE -import Ouroboros.Consensus.Block (Header, - Point (BlockPoint, GenesisPoint), RealPoint, - SlotNo (SlotNo), blockHash, blockNo, blockSlot, - realPointToPoint) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) -import Ouroboros.Network.AnchoredFragment (Anchor, AnchoredFragment, - anchor, anchorToPoint, mapAnchoredFragment, toOldestFirst) -import Ouroboros.Network.Block (Tip (..)) -import Ouroboros.Network.Point (WithOrigin (..)) -import Test.Util.TestBlock (Header (TestHeader), TestBlock, - TestHash (TestHash), unTestHash) +import Cardano.Slotting.Block (BlockNo (BlockNo)) +import Data.List (intercalate) +import Data.List.NonEmpty (NonEmpty ((:|)), toList) +import Data.List.NonEmpty qualified as NE +import Ouroboros.Consensus.Block + ( Header + , Point (BlockPoint, GenesisPoint) + , RealPoint + , SlotNo (SlotNo) + , blockHash + , blockNo + , blockSlot + , realPointToPoint + ) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) +import Ouroboros.Network.AnchoredFragment + ( Anchor + , AnchoredFragment + , anchor + , anchorToPoint + , mapAnchoredFragment + , toOldestFirst + ) +import Ouroboros.Network.Block (Tip (..)) +import Ouroboros.Network.Point 
(WithOrigin (..)) +import Test.Util.TestBlock + ( Header (TestHeader) + , TestBlock + , TestHash (TestHash) + , unTestHash + ) -- | Run-length encoding of a list. This groups consecutive duplicate elements, -- counting them. Only the first element of the equality is kept. For instance: @@ -47,23 +63,23 @@ runLengthEncoding xs = [(length ys, NE.head ys) | ys <- NE.group xs] -- for other functions. terseBlockSlotHash :: BlockNo -> SlotNo -> TestHash -> String terseBlockSlotHash (BlockNo bno) (SlotNo sno) (TestHash hash) = - show bno ++ "-" ++ show sno ++ renderHash - where - renderHash = case runLengthEncoding (reverse (toList hash)) of - [(_, 0)] -> "" - hashGrouped -> "[" ++ intercalate "," (map renderGroup hashGrouped) ++ "]" - renderGroup (1, e) = show e - renderGroup (n, e) = show n ++ "x" ++ show e + show bno ++ "-" ++ show sno ++ renderHash + where + renderHash = case runLengthEncoding (reverse (toList hash)) of + [(_, 0)] -> "" + hashGrouped -> "[" ++ intercalate "," (map renderGroup hashGrouped) ++ "]" + renderGroup (1, e) = show e + renderGroup (n, e) = show n ++ "x" ++ show e -- | Same as 'terseBlockSlotHash' except only the last element of the hash -- shows, if it is non-zero. This makes sense when showing a fragment. terseBlockSlotHash' :: BlockNo -> SlotNo -> TestHash -> String terseBlockSlotHash' (BlockNo bno) (SlotNo sno) (TestHash hash) = - show bno ++ "-" ++ show sno ++ renderHashSuffix hash - where - renderHashSuffix (forkNo :| _) - | forkNo == 0 = "" - | otherwise = "[" ++ show forkNo ++ "]" + show bno ++ "-" ++ show sno ++ renderHashSuffix hash + where + renderHashSuffix (forkNo :| _) + | forkNo == 0 = "" + | otherwise = "[" ++ show forkNo ++ "]" -- | Print a 'TestBlock' as @block-slot[hash]@. @hash@ only shows if there is a -- non-zero element in it. When it shows, it shows in a compact form. For @@ -95,13 +111,13 @@ terseAnchor = tersePoint . anchorToPoint -- | Same as 'tersePoint' for tips. 
terseTip :: Tip TestBlock -> String -terseTip TipGenesis = "G" +terseTip TipGenesis = "G" terseTip (Tip sno hash bno) = terseBlockSlotHash bno sno hash -- | Given a printer for elements of type @a@, prints a @WithOrigin a@ in a -- terse way. Origin shows as @G@. terseWithOrigin :: (a -> String) -> WithOrigin a -> String -terseWithOrigin _ Origin = "G" +terseWithOrigin _ Origin = "G" terseWithOrigin terseA (At a) = terseA a -- | Print a fragment of 'TestBlock' in a terse way. This shows as @anchor | @@ -110,22 +126,21 @@ terseWithOrigin terseA (At a) = terseA a -- shows and only when it is non-zero. terseFragment :: AnchoredFragment TestBlock -> String terseFragment fragment = - terseAnchor (anchor fragment) ++ renderBlocks - where - renderBlocks = case toOldestFirst fragment of - [] -> "" - blocks -> " | " ++ unwords (map terseBlock' blocks) + terseAnchor (anchor fragment) ++ renderBlocks + where + renderBlocks = case toOldestFirst fragment of + [] -> "" + blocks -> " | " ++ unwords (map terseBlock' blocks) -- | Same as 'terseFragment' for fragments of headers. terseHFragment :: AnchoredFragment (Header TestBlock) -> String terseHFragment = terseFragment . mapAnchoredFragment (\(TestHeader block) -> block) -- | Same as 'terseFragment' for fragments of headers with time. --- terseHWTFragment :: AnchoredFragment (HeaderWithTime TestBlock) -> String terseHWTFragment = terseHFragment . mapAnchoredFragment hwtHeader -- | Same as 'terseWithOrigin' for 'Maybe'. 
terseMaybe :: (a -> String) -> Maybe a -> String -terseMaybe _ Nothing = "X" +terseMaybe _ Nothing = "X" terseMaybe terseA (Just a) = terseA a diff --git a/ouroboros-consensus-diffusion/test/infra-test/Main.hs b/ouroboros-consensus-diffusion/test/infra-test/Main.hs index 4e2acf41d3..77f4c45f26 100644 --- a/ouroboros-consensus-diffusion/test/infra-test/Main.hs +++ b/ouroboros-consensus-diffusion/test/infra-test/Main.hs @@ -1,15 +1,18 @@ module Main (main) where -import Test.Tasty -import qualified Test.ThreadNet.Util.Tests (tests) -import Test.Util.TestEnv (defaultMainWithTestEnv, - defaultTestEnvConfig) +import Test.Tasty +import Test.ThreadNet.Util.Tests qualified (tests) +import Test.Util.TestEnv + ( defaultMainWithTestEnv + , defaultTestEnvConfig + ) main :: IO () main = defaultMainWithTestEnv defaultTestEnvConfig tests tests :: TestTree tests = - testGroup "test-infra" - [ Test.ThreadNet.Util.Tests.tests - ] + testGroup + "test-infra" + [ Test.ThreadNet.Util.Tests.tests + ] diff --git a/ouroboros-consensus-diffusion/test/infra-test/Test/ThreadNet/Util/Tests.hs b/ouroboros-consensus-diffusion/test/infra-test/Test/ThreadNet/Util/Tests.hs index 3ed0408a26..b83adec4a7 100644 --- a/ouroboros-consensus-diffusion/test/infra-test/Test/ThreadNet/Util/Tests.hs +++ b/ouroboros-consensus-diffusion/test/infra-test/Test/ThreadNet/Util/Tests.hs @@ -1,30 +1,32 @@ {-# LANGUAGE DataKinds #-} {-# LANGUAGE TypeApplications #-} + module Test.ThreadNet.Util.Tests (tests) where -import Cardano.Ledger.BaseTypes (knownNonZeroBounded) -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..)) -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.Util -import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) +import Cardano.Ledger.BaseTypes (knownNonZeroBounded) +import Ouroboros.Consensus.Config.SecurityParam +import 
Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..)) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.Util +import Test.ThreadNet.Util.NodeJoinPlan (trivialNodeJoinPlan) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) tests :: TestTree -tests = testGroup "Test.ThreadNet.Util.Tests" $ +tests = + testGroup "Test.ThreadNet.Util.Tests" $ [ testProperty "0 = determineForkLength roundRobinLeaderSchedule" $ - prop_roundRobin_forkLength securityParam + prop_roundRobin_forkLength securityParam ] - where - securityParam = SecurityParam $ knownNonZeroBounded @5 + where + securityParam = SecurityParam $ knownNonZeroBounded @5 -- | A round-robin schedule should reach consensus prop_roundRobin_forkLength :: - SecurityParam -> NumCoreNodes -> NumSlots -> Property + SecurityParam -> NumCoreNodes -> NumSlots -> Property prop_roundRobin_forkLength k numCoreNodes numSlots = determineForkLength k nodeJoinPlan schedule === NumBlocks 0 - where - nodeJoinPlan = trivialNodeJoinPlan numCoreNodes - schedule = roundRobinLeaderSchedule numCoreNodes numSlots + where + nodeJoinPlan = trivialNodeJoinPlan numCoreNodes + schedule = roundRobinLeaderSchedule numCoreNodes numSlots diff --git a/ouroboros-consensus-diffusion/test/mock-test/Main.hs b/ouroboros-consensus-diffusion/test/mock-test/Main.hs index 08a2a0d21e..4f823c8984 100644 --- a/ouroboros-consensus-diffusion/test/mock-test/Main.hs +++ b/ouroboros-consensus-diffusion/test/mock-test/Main.hs @@ -1,25 +1,28 @@ module Main (main) where -import qualified Test.Consensus.Ledger.Mock (tests) -import qualified Test.Consensus.Ledger.Mock.LedgerTables (tests) -import Test.Tasty -import qualified Test.ThreadNet.BFT (tests) -import qualified Test.ThreadNet.LeaderSchedule (tests) -import qualified Test.ThreadNet.PBFT (tests) -import qualified Test.ThreadNet.Praos (tests) -import Test.Util.TestEnv (defaultMainWithTestEnv, - defaultTestEnvConfig) +import Test.Consensus.Ledger.Mock qualified 
(tests) +import Test.Consensus.Ledger.Mock.LedgerTables qualified (tests) +import Test.Tasty +import Test.ThreadNet.BFT qualified (tests) +import Test.ThreadNet.LeaderSchedule qualified (tests) +import Test.ThreadNet.PBFT qualified (tests) +import Test.ThreadNet.Praos qualified (tests) +import Test.Util.TestEnv + ( defaultMainWithTestEnv + , defaultTestEnvConfig + ) main :: IO () main = defaultMainWithTestEnv defaultTestEnvConfig tests tests :: TestTree tests = - testGroup "ouroboros-consensus" - [ Test.Consensus.Ledger.Mock.tests - , Test.Consensus.Ledger.Mock.LedgerTables.tests - , Test.ThreadNet.BFT.tests - , Test.ThreadNet.LeaderSchedule.tests - , Test.ThreadNet.PBFT.tests - , Test.ThreadNet.Praos.tests - ] + testGroup + "ouroboros-consensus" + [ Test.Consensus.Ledger.Mock.tests + , Test.Consensus.Ledger.Mock.LedgerTables.tests + , Test.ThreadNet.BFT.tests + , Test.ThreadNet.LeaderSchedule.tests + , Test.ThreadNet.PBFT.tests + , Test.ThreadNet.Praos.tests + ] diff --git a/ouroboros-consensus-diffusion/test/mock-test/Test/Consensus/Ledger/Mock.hs b/ouroboros-consensus-diffusion/test/mock-test/Test/Consensus/Ledger/Mock.hs index ebaa4bfbeb..8fb523ba49 100644 --- a/ouroboros-consensus-diffusion/test/mock-test/Test/Consensus/Ledger/Mock.hs +++ b/ouroboros-consensus-diffusion/test/mock-test/Test/Consensus/Ledger/Mock.hs @@ -5,34 +5,40 @@ module Test.Consensus.Ledger.Mock (tests) where -import Codec.CBOR.Write (toLazyByteString) -import Codec.Serialise (Serialise, encode) -import qualified Data.ByteString.Lazy as Lazy -import qualified Data.ByteString.Short as Short -import Data.Proxy -import Data.Typeable -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Mock.Ledger.Block -import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) -import Test.Consensus.Ledger.Mock.Generators () -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck +import Codec.CBOR.Write (toLazyByteString) +import Codec.Serialise (Serialise, encode) +import 
Data.ByteString.Lazy qualified as Lazy +import Data.ByteString.Short qualified as Short +import Data.Proxy +import Data.Typeable +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Mock.Ledger.Block +import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..)) +import Test.Consensus.Ledger.Mock.Generators () +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck tests :: TestTree -tests = testGroup "Mock" +tests = + testGroup + "Mock" [ props (Proxy @SimpleStandardCrypto) "SimpleStandardCrypto" - , props (Proxy @SimpleMockCrypto) "SimpleMockCrypto" + , props (Proxy @SimpleMockCrypto) "SimpleMockCrypto" ] - where - props :: forall c proxy. - ( SimpleCrypto c - , Arbitrary (HeaderHash (SimpleBlock c ())) - ) - => proxy c -> String -> TestTree - props _ title = testGroup title + where + props :: + forall c proxy. + ( SimpleCrypto c + , Arbitrary (HeaderHash (SimpleBlock c ())) + ) => + proxy c -> String -> TestTree + props _ title = + testGroup + title [ testProperty "BinaryBlockInfo sanity check" (prop_simpleBlockBinaryBlockInfo @c @()) - , testGroup "ConvertRawHash sanity check" + , testGroup + "ConvertRawHash sanity check" [ testProperty "ConvertRawHash roundtrip" (prop_simpleBlock_roundtrip_ConvertRawHash @c @()) , testProperty "hashSize sanity check" (prop_simpleBlock_hashSize @c @()) ] @@ -42,42 +48,45 @@ tests = testGroup "Mock" BinaryBlockInfo -------------------------------------------------------------------------------} -prop_simpleBlockBinaryBlockInfo - :: (SimpleCrypto c, Serialise ext, Typeable ext) - => SimpleBlock c ext -> Property +prop_simpleBlockBinaryBlockInfo :: + (SimpleCrypto c, Serialise ext, Typeable ext) => + SimpleBlock c ext -> Property prop_simpleBlockBinaryBlockInfo blk = - serialisedHeader === extractedHeader - where - BinaryBlockInfo { headerOffset, headerSize } = - simpleBlockBinaryBlockInfo blk + serialisedHeader === extractedHeader + where + BinaryBlockInfo{headerOffset, headerSize} = + 
simpleBlockBinaryBlockInfo blk - extractedHeader :: Lazy.ByteString - extractedHeader = - Lazy.take (fromIntegral headerSize) $ - Lazy.drop (fromIntegral headerOffset) $ + extractedHeader :: Lazy.ByteString + extractedHeader = + Lazy.take (fromIntegral headerSize) $ + Lazy.drop (fromIntegral headerOffset) $ toLazyByteString (encode blk) - serialisedHeader :: Lazy.ByteString - serialisedHeader = toLazyByteString $ + serialisedHeader :: Lazy.ByteString + serialisedHeader = + toLazyByteString $ encodeSimpleHeader encode (getHeader blk) {------------------------------------------------------------------------------- ConvertRawHash -------------------------------------------------------------------------------} -prop_simpleBlock_roundtrip_ConvertRawHash - :: forall c ext. SimpleCrypto c - => HeaderHash (SimpleBlock c ext) -> Property +prop_simpleBlock_roundtrip_ConvertRawHash :: + forall c ext. + SimpleCrypto c => + HeaderHash (SimpleBlock c ext) -> Property prop_simpleBlock_roundtrip_ConvertRawHash h = - h === fromShortRawHash p (toShortRawHash p h) - where - p = Proxy @(SimpleBlock c ext) + h === fromShortRawHash p (toShortRawHash p h) + where + p = Proxy @(SimpleBlock c ext) -prop_simpleBlock_hashSize - :: forall c ext. SimpleCrypto c - => HeaderHash (SimpleBlock c ext) -> Property +prop_simpleBlock_hashSize :: + forall c ext. 
+ SimpleCrypto c => + HeaderHash (SimpleBlock c ext) -> Property prop_simpleBlock_hashSize h = - counterexample ("rawHash: " ++ show (toShortRawHash p h)) - $ hashSize p === fromIntegral (Short.length (toShortRawHash p h)) - where - p = Proxy @(SimpleBlock c ext) + counterexample ("rawHash: " ++ show (toShortRawHash p h)) $ + hashSize p === fromIntegral (Short.length (toShortRawHash p h)) + where + p = Proxy @(SimpleBlock c ext) diff --git a/ouroboros-consensus-diffusion/test/mock-test/Test/Consensus/Ledger/Mock/LedgerTables.hs b/ouroboros-consensus-diffusion/test/mock-test/Test/Consensus/Ledger/Mock/LedgerTables.hs index 7a195f25d1..1ccc399dfe 100644 --- a/ouroboros-consensus-diffusion/test/mock-test/Test/Consensus/Ledger/Mock/LedgerTables.hs +++ b/ouroboros-consensus-diffusion/test/mock-test/Test/Consensus/Ledger/Mock/LedgerTables.hs @@ -4,17 +4,19 @@ module Test.Consensus.Ledger.Mock.LedgerTables (tests) where -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Protocol.PBFT -import Test.Consensus.Ledger.Mock.Generators () -import Test.LedgerTables -import Test.Tasty -import Test.Tasty.QuickCheck +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Protocol.PBFT +import Test.Consensus.Ledger.Mock.Generators () +import Test.LedgerTables +import Test.Tasty +import Test.Tasty.QuickCheck type Block = SimpleBlock SimpleMockCrypto (SimplePBftExt SimpleMockCrypto PBftMockCrypto) tests :: TestTree -tests = testGroup "LedgerTables" - [ testProperty "Stowable laws" (prop_stowable_laws @Block) - , testProperty "HasLedgerTables laws" (prop_hasledgertables_laws @Block) - ] +tests = + testGroup + "LedgerTables" + [ testProperty "Stowable laws" (prop_stowable_laws @Block) + , testProperty "HasLedgerTables laws" (prop_hasledgertables_laws @Block) + ] diff --git a/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/BFT.hs b/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/BFT.hs index 3fffdc5ba2..4ad385bfcb 100644 --- 
a/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/BFT.hs +++ b/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/BFT.hs @@ -5,108 +5,117 @@ module Test.ThreadNet.BFT (tests) where -import Cardano.Ledger.BaseTypes (nonZero, unNonZero) -import Data.Constraint -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config.SecurityParam -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Mock.Node () -import Ouroboros.Consensus.Mock.Node.BFT -import Ouroboros.Consensus.Mock.Node.Serialisation -import Test.Consensus.Ledger.Mock.Generators () -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import Test.ThreadNet.TxGen.Mock () -import Test.ThreadNet.Util -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeRestarts -import Test.ThreadNet.Util.NodeToNodeVersion -import Test.ThreadNet.Util.SimpleBlock -import Test.Util.HardFork.Future (singleEraFuture) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Serialisation.Roundtrip -import Test.Util.Serialisation.SomeResult () +import Cardano.Ledger.BaseTypes (nonZero, unNonZero) +import Data.Constraint +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Mock.Node () +import Ouroboros.Consensus.Mock.Node.BFT +import Ouroboros.Consensus.Mock.Node.Serialisation +import Test.Consensus.Ledger.Mock.Generators () +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.TxGen.Mock () +import Test.ThreadNet.Util +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeRestarts +import 
Test.ThreadNet.Util.NodeToNodeVersion +import Test.ThreadNet.Util.SimpleBlock +import Test.Util.HardFork.Future (singleEraFuture) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Roundtrip +import Test.Util.Serialisation.SomeResult () data TestSetup = TestSetup - { setupK :: SecurityParam - , setupTestConfig :: TestConfig + { setupK :: SecurityParam + , setupTestConfig :: TestConfig , setupNodeJoinPlan :: NodeJoinPlan } - deriving (Show) + deriving Show instance Arbitrary TestSetup where arbitrary = do - -- TODO k > 1 as a workaround for Issue #1511. - k <- SecurityParam <$> choose (2, 10) `suchThatMap` nonZero + -- TODO k > 1 as a workaround for Issue #1511. + k <- SecurityParam <$> choose (2, 10) `suchThatMap` nonZero - testConfig <- arbitrary - let TestConfig{numCoreNodes, numSlots} = testConfig + testConfig <- arbitrary + let TestConfig{numCoreNodes, numSlots} = testConfig - nodeJoinPlan <- genNodeJoinPlan numCoreNodes numSlots + nodeJoinPlan <- genNodeJoinPlan numCoreNodes numSlots - pure $ TestSetup k testConfig nodeJoinPlan + pure $ TestSetup k testConfig nodeJoinPlan - -- TODO shrink +-- TODO shrink tests :: TestTree -tests = testGroup "BFT" $ +tests = + testGroup "BFT" $ [ roundtrip_all SimpleCodecConfig dictNestedHdr , testProperty "simple convergence" $ \setup -> prop_simple_bft_convergence setup ] - where - dictNestedHdr :: forall a. NestedCtxt_ MockBftBlock Header a -> Dict (Eq a, Show a) - dictNestedHdr CtxtMock = Dict + where + dictNestedHdr :: forall a. 
NestedCtxt_ MockBftBlock Header a -> Dict (Eq a, Show a) + dictNestedHdr CtxtMock = Dict prop_simple_bft_convergence :: TestSetup -> Property -prop_simple_bft_convergence TestSetup - { setupK = k - , setupTestConfig = testConfig - , setupNodeJoinPlan = nodeJoinPlan - } = - prop_general PropGeneralArgs - { pgaBlockProperty = prop_validSimpleBlock - , pgaCountTxs = countSimpleGenTxs - , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = 0 - , pgaFixedMaxForkLength = Nothing - , pgaFixedSchedule = - Just $ roundRobinLeaderSchedule numCoreNodes numSlots - , pgaSecurityParam = k - , pgaTestConfig = testConfig - , pgaTestConfigB = testConfigB - } +prop_simple_bft_convergence + TestSetup + { setupK = k + , setupTestConfig = testConfig + , setupNodeJoinPlan = nodeJoinPlan + } = + prop_general + PropGeneralArgs + { pgaBlockProperty = prop_validSimpleBlock + , pgaCountTxs = countSimpleGenTxs + , pgaExpectedCannotForge = noExpectedCannotForges + , pgaFirstBlockNo = 0 + , pgaFixedMaxForkLength = Nothing + , pgaFixedSchedule = + Just $ roundRobinLeaderSchedule numCoreNodes numSlots + , pgaSecurityParam = k + , pgaTestConfig = testConfig + , pgaTestConfigB = testConfigB + } testOutput - where + where TestConfig{numCoreNodes, numSlots} = testConfig slotLength = slotLengthFromSec 20 - testConfigB = TestConfigB - { forgeEbbEnv = Nothing - , future = singleEraFuture - slotLength - (EpochSize $ unNonZero (maxRollbacks k) * 10) - -- The mock ledger doesn't really care, and neither does BFT. We + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = + singleEraFuture + slotLength + (EpochSize $ unNonZero (maxRollbacks k) * 10) + , -- The mock ledger doesn't really care, and neither does BFT. We -- stick with the common @k * 10@ size for now. 
- , messageDelay = noCalcMessageDelay - , nodeJoinPlan - , nodeRestarts = noRestarts - , txGenExtra = () - , version = newestVersion (Proxy @MockBftBlock) - } + messageDelay = noCalcMessageDelay + , nodeJoinPlan + , nodeRestarts = noRestarts + , txGenExtra = () + , version = newestVersion (Proxy @MockBftBlock) + } testOutput = - runTestNetwork testConfig testConfigB TestConfigMB - { nodeInfo = \nid -> - plainTestNodeInitialization - (protocolInfoBft + runTestNetwork + testConfig + testConfigB + TestConfigMB + { nodeInfo = \nid -> + plainTestNodeInitialization + ( protocolInfoBft numCoreNodes nid k - (HardFork.defaultEraParams k slotLength)) - (pure $ blockForgingBft nid) - , mkRekeyM = Nothing - } + (HardFork.defaultEraParams k slotLength) + ) + (pure $ blockForgingBft nid) + , mkRekeyM = Nothing + } diff --git a/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/LeaderSchedule.hs b/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/LeaderSchedule.hs index 1d325347cb..7dfb661d39 100644 --- a/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/LeaderSchedule.hs +++ b/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/LeaderSchedule.hs @@ -3,62 +3,63 @@ module Test.ThreadNet.LeaderSchedule (tests) where -import Cardano.Ledger.BaseTypes (nonZero) -import Control.Monad (replicateM) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config.SecurityParam -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Mock.Node () -import Ouroboros.Consensus.Mock.Node.PraosRule -import Ouroboros.Consensus.Mock.Protocol.LeaderSchedule -import Ouroboros.Consensus.Mock.Protocol.Praos -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import 
Test.ThreadNet.General -import Test.ThreadNet.TxGen.Mock () -import Test.ThreadNet.Util -import Test.ThreadNet.Util.HasCreator.Mock () -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeRestarts -import Test.ThreadNet.Util.NodeToNodeVersion -import Test.ThreadNet.Util.SimpleBlock -import Test.Util.HardFork.Future (singleEraFuture) -import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (..)) +import Cardano.Ledger.BaseTypes (nonZero) +import Control.Monad (replicateM) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Mock.Node () +import Ouroboros.Consensus.Mock.Node.PraosRule +import Ouroboros.Consensus.Mock.Protocol.LeaderSchedule +import Ouroboros.Consensus.Mock.Protocol.Praos +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.TxGen.Mock () +import Test.ThreadNet.Util +import Test.ThreadNet.Util.HasCreator.Mock () +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeRestarts +import Test.ThreadNet.Util.NodeToNodeVersion +import Test.ThreadNet.Util.SimpleBlock +import Test.Util.HardFork.Future (singleEraFuture) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (..)) data TestSetup = TestSetup - { setupK :: SecurityParam - , setupTestConfig :: TestConfig - , setupEpochSize :: EpochSize - -- ^ Note: we don't think this value actually matters, since this test - -- overrides the leader schedule. 
- , setupNodeJoinPlan :: NodeJoinPlan + { setupK :: SecurityParam + , setupTestConfig :: TestConfig + , setupEpochSize :: EpochSize + -- ^ Note: we don't think this value actually matters, since this test + -- overrides the leader schedule. + , setupNodeJoinPlan :: NodeJoinPlan , setupLeaderSchedule :: LeaderSchedule - , setupSlotLength :: SlotLength + , setupSlotLength :: SlotLength } - deriving (Show) + deriving Show instance Arbitrary TestSetup where arbitrary = do - -- TODO k > 1 as a workaround for Issue #1511. - k <- SecurityParam <$> choose (2, 10) `suchThatMap` nonZero - epochSize <- EpochSize <$> choose (1, 10) - slotLength <- arbitrary + -- TODO k > 1 as a workaround for Issue #1511. + k <- SecurityParam <$> choose (2, 10) `suchThatMap` nonZero + epochSize <- EpochSize <$> choose (1, 10) + slotLength <- arbitrary - testConfig <- arbitrary - let TestConfig{numCoreNodes, numSlots} = testConfig + testConfig <- arbitrary + let TestConfig{numCoreNodes, numSlots} = testConfig - nodeJoinPlan <- genNodeJoinPlan numCoreNodes numSlots - leaderSchedule <- genLeaderSchedule k numSlots numCoreNodes nodeJoinPlan + nodeJoinPlan <- genNodeJoinPlan numCoreNodes numSlots + leaderSchedule <- genLeaderSchedule k numSlots numCoreNodes nodeJoinPlan - pure $ TestSetup + pure $ + TestSetup k testConfig epochSize @@ -66,112 +67,124 @@ instance Arbitrary TestSetup where leaderSchedule slotLength - -- TODO shrink +-- TODO shrink tests :: TestTree -tests = testGroup "LeaderSchedule" +tests = + testGroup + "LeaderSchedule" [ testProperty "simple convergence" $ \setup -> prop_simple_leader_schedule_convergence setup ] prop_simple_leader_schedule_convergence :: TestSetup -> Property -prop_simple_leader_schedule_convergence TestSetup - { setupK = k - , setupTestConfig = testConfig - , setupEpochSize = epochSize - , setupNodeJoinPlan = nodeJoinPlan - , setupLeaderSchedule = schedule - , setupSlotLength = slotLength - } = +prop_simple_leader_schedule_convergence + TestSetup + { setupK = k 
+ , setupTestConfig = testConfig + , setupEpochSize = epochSize + , setupNodeJoinPlan = nodeJoinPlan + , setupLeaderSchedule = schedule + , setupSlotLength = slotLength + } = counterexample (tracesToDot testOutputNodes) $ - prop_general PropGeneralArgs - { pgaBlockProperty = prop_validSimpleBlock - , pgaCountTxs = countSimpleGenTxs - , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = 0 - , pgaFixedMaxForkLength = Nothing - , pgaFixedSchedule = Just schedule - , pgaSecurityParam = k - , pgaTestConfig = testConfig - , pgaTestConfigB = testConfigB - } - testOutput - where + prop_general + PropGeneralArgs + { pgaBlockProperty = prop_validSimpleBlock + , pgaCountTxs = countSimpleGenTxs + , pgaExpectedCannotForge = noExpectedCannotForges + , pgaFirstBlockNo = 0 + , pgaFixedMaxForkLength = Nothing + , pgaFixedSchedule = Just schedule + , pgaSecurityParam = k + , pgaTestConfig = testConfig + , pgaTestConfigB = testConfigB + } + testOutput + where TestConfig{numCoreNodes} = testConfig - testConfigB = TestConfigB - { forgeEbbEnv = Nothing - , future = singleEraFuture slotLength epochSize - , messageDelay = noCalcMessageDelay - , nodeJoinPlan - , nodeRestarts = noRestarts - , txGenExtra = () - , version = newestVersion (Proxy @MockPraosRuleBlock) - } + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = singleEraFuture slotLength epochSize + , messageDelay = noCalcMessageDelay + , nodeJoinPlan + , nodeRestarts = noRestarts + , txGenExtra = () + , version = newestVersion (Proxy @MockPraosRuleBlock) + } -- this is entirely ignored because of the 'WithLeaderSchedule' combinator dummyF = 0.5 testOutput@TestOutput{testOutputNodes} = - runTestNetwork testConfig testConfigB TestConfigMB - { nodeInfo = \nid -> - plainTestNodeInitialization - (protocolInfoPraosRule + runTestNetwork + testConfig + testConfigB + TestConfigMB + { nodeInfo = \nid -> + plainTestNodeInitialization + ( protocolInfoPraosRule numCoreNodes nid PraosParams - { 
praosSecurityParam = k - , praosSlotsPerEpoch = unEpochSize epochSize - , praosLeaderF = dummyF - } + { praosSecurityParam = k + , praosSlotsPerEpoch = unEpochSize epochSize + , praosLeaderF = dummyF + } (HardFork.defaultEraParams k slotLength) schedule - emptyPraosEvolvingStake) - (pure blockForgingPraosRule) - , mkRekeyM = Nothing - } + emptyPraosEvolvingStake + ) + (pure blockForgingPraosRule) + , mkRekeyM = Nothing + } {------------------------------------------------------------------------------- Dependent generation and shrinking of leader schedules -------------------------------------------------------------------------------} -genLeaderSchedule :: SecurityParam - -> NumSlots - -> NumCoreNodes - -> NodeJoinPlan - -> Gen LeaderSchedule +genLeaderSchedule :: + SecurityParam -> + NumSlots -> + NumCoreNodes -> + NodeJoinPlan -> + Gen LeaderSchedule genLeaderSchedule k (NumSlots numSlots) numCoreNodes nodeJoinPlan = - flip suchThat (consensusExpected k nodeJoinPlan) $ do - leaders <- replicateM (fromIntegral numSlots) $ frequency - [ ( 4, pick 0) - , ( 2, pick 1) - , ( 1, pick 2) - , ( 1, pick 3) - ] - return $ LeaderSchedule $ Map.fromList $ zip [0..] leaders - where - pick :: Int -> Gen [CoreNodeId] - pick = go (enumCoreNodes numCoreNodes) - where - go :: [CoreNodeId] -> Int -> Gen [CoreNodeId] - go [] _ = return [] - go _ 0 = return [] - go nids n = do - nid <- elements nids - xs <- go (filter (/= nid) nids) (n - 1) - return $ nid : xs + flip suchThat (consensusExpected k nodeJoinPlan) $ do + leaders <- + replicateM (fromIntegral numSlots) $ + frequency + [ (4, pick 0) + , (2, pick 1) + , (1, pick 2) + , (1, pick 3) + ] + return $ LeaderSchedule $ Map.fromList $ zip [0 ..] 
leaders + where + pick :: Int -> Gen [CoreNodeId] + pick = go (enumCoreNodes numCoreNodes) + where + go :: [CoreNodeId] -> Int -> Gen [CoreNodeId] + go [] _ = return [] + go _ 0 = return [] + go nids n = do + nid <- elements nids + xs <- go (filter (/= nid) nids) (n - 1) + return $ nid : xs _shrinkLeaderSchedule :: NumSlots -> LeaderSchedule -> [LeaderSchedule] _shrinkLeaderSchedule (NumSlots numSlots) (LeaderSchedule m) = - [ LeaderSchedule m' - | slot <- [0 .. fromIntegral numSlots - 1] - , m' <- reduceSlot slot m - ] - where - reduceSlot :: SlotNo -> Map SlotNo [CoreNodeId] -> [Map SlotNo [CoreNodeId]] - reduceSlot s m' = [Map.insert s xs m' | xs <- reduceList $ m' Map.! s] - - reduceList :: [a] -> [[a]] - reduceList [] = [] - reduceList [_] = [] - reduceList (x : xs) = xs : map (x :) (reduceList xs) + [ LeaderSchedule m' + | slot <- [0 .. fromIntegral numSlots - 1] + , m' <- reduceSlot slot m + ] + where + reduceSlot :: SlotNo -> Map SlotNo [CoreNodeId] -> [Map SlotNo [CoreNodeId]] + reduceSlot s m' = [Map.insert s xs m' | xs <- reduceList $ m' Map.! 
s] + + reduceList :: [a] -> [[a]] + reduceList [] = [] + reduceList [_] = [] + reduceList (x : xs) = xs : map (x :) (reduceList xs) diff --git a/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/PBFT.hs b/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/PBFT.hs index 4d8a4aa5ee..8901ce0ede 100644 --- a/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/PBFT.hs +++ b/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/PBFT.hs @@ -4,105 +4,113 @@ module Test.ThreadNet.PBFT (tests) where -import Cardano.Ledger.BaseTypes (nonZero, unNonZero) -import qualified Data.Map.Strict as Map -import qualified Data.Set as Set -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config.SecurityParam -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.Mock.Ledger.Block -import Ouroboros.Consensus.Mock.Ledger.Block.PBFT -import Ouroboros.Consensus.Mock.Node () -import Ouroboros.Consensus.Mock.Node.PBFT (MockPBftBlock, - blockForgingMockPBFT, protocolInfoMockPBFT) -import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..)) -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.PBFT -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util.Condense (condense) -import Ouroboros.Network.Mock.Chain (foldChain) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import Test.ThreadNet.Network -import qualified Test.ThreadNet.Ref.PBFT as Ref -import Test.ThreadNet.TxGen.Mock () -import Test.ThreadNet.Util -import Test.ThreadNet.Util.HasCreator.Mock () -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeRestarts -import Test.ThreadNet.Util.NodeToNodeVersion -import Test.ThreadNet.Util.SimpleBlock -import Test.Util.HardFork.Future (singleEraFuture) -import Test.Util.Orphans.Arbitrary () +import Cardano.Ledger.BaseTypes (nonZero, unNonZero) 
+import Data.Map.Strict qualified as Map +import Data.Set qualified as Set +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.Mock.Ledger.Block +import Ouroboros.Consensus.Mock.Ledger.Block.PBFT +import Ouroboros.Consensus.Mock.Node () +import Ouroboros.Consensus.Mock.Node.PBFT + ( MockPBftBlock + , blockForgingMockPBFT + , protocolInfoMockPBFT + ) +import Ouroboros.Consensus.Node.ProtocolInfo (NumCoreNodes (..)) +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util.Condense (condense) +import Ouroboros.Network.Mock.Chain (foldChain) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.Network +import Test.ThreadNet.Ref.PBFT qualified as Ref +import Test.ThreadNet.TxGen.Mock () +import Test.ThreadNet.Util +import Test.ThreadNet.Util.HasCreator.Mock () +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeRestarts +import Test.ThreadNet.Util.NodeToNodeVersion +import Test.ThreadNet.Util.SimpleBlock +import Test.Util.HardFork.Future (singleEraFuture) +import Test.Util.Orphans.Arbitrary () data TestSetup = TestSetup - { setupK :: SecurityParam - , setupTestConfig :: TestConfig + { setupK :: SecurityParam + , setupTestConfig :: TestConfig , setupNodeJoinPlan :: NodeJoinPlan } - deriving (Show) + deriving Show instance Arbitrary TestSetup where arbitrary = do - k <- SecurityParam <$> choose (1, 10) `suchThatMap` nonZero + k <- SecurityParam <$> choose (1, 10) `suchThatMap` nonZero - testConfig <- arbitrary - let TestConfig{numCoreNodes, numSlots} = testConfig + testConfig <- arbitrary + let TestConfig{numCoreNodes, numSlots} = testConfig - nodeJoinPlan <- genNodeJoinPlan numCoreNodes numSlots - pure $ TestSetup 
k testConfig nodeJoinPlan + nodeJoinPlan <- genNodeJoinPlan numCoreNodes numSlots + pure $ TestSetup k testConfig nodeJoinPlan - -- TODO shrink +-- TODO shrink tests :: TestTree -tests = testGroup "PBFT" $ +tests = + testGroup "PBFT" $ [ testProperty "simple convergence" $ \setup -> prop_simple_pbft_convergence setup ] prop_simple_pbft_convergence :: TestSetup -> Property -prop_simple_pbft_convergence TestSetup - { setupK = k - , setupTestConfig = testConfig - , setupNodeJoinPlan = nodeJoinPlan - } = +prop_simple_pbft_convergence + TestSetup + { setupK = k + , setupTestConfig = testConfig + , setupNodeJoinPlan = nodeJoinPlan + } = tabulate "Ref.PBFT result" [Ref.resultConstrName refResult] $ - prop_asSimulated .&&. - prop_general PropGeneralArgs - { pgaBlockProperty = prop_validSimpleBlock - , pgaCountTxs = countSimpleGenTxs - , pgaExpectedCannotForge = expectedCannotForge numCoreNodes - , pgaFirstBlockNo = 0 - , pgaFixedMaxForkLength = - Just $ NumBlocks $ case refResult of - Ref.Forked{} -> 1 - _ -> 0 - , pgaFixedSchedule = - Just $ roundRobinLeaderSchedule numCoreNodes numSlots - , pgaSecurityParam = k - , pgaTestConfig = testConfig - , pgaTestConfigB = testConfigB - } - testOutput - where + prop_asSimulated + .&&. 
prop_general + PropGeneralArgs + { pgaBlockProperty = prop_validSimpleBlock + , pgaCountTxs = countSimpleGenTxs + , pgaExpectedCannotForge = expectedCannotForge numCoreNodes + , pgaFirstBlockNo = 0 + , pgaFixedMaxForkLength = + Just $ NumBlocks $ case refResult of + Ref.Forked{} -> 1 + _ -> 0 + , pgaFixedSchedule = + Just $ roundRobinLeaderSchedule numCoreNodes numSlots + , pgaSecurityParam = k + , pgaTestConfig = testConfig + , pgaTestConfigB = testConfigB + } + testOutput + where TestConfig{numCoreNodes, numSlots} = testConfig slotLength = slotLengthFromSec 1 - testConfigB = TestConfigB - { forgeEbbEnv = Nothing - , future = singleEraFuture - slotLength - (EpochSize $ unNonZero (maxRollbacks k) * 10) - -- The mock ledger doesn't really care, and neither does PBFT. We + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = + singleEraFuture + slotLength + (EpochSize $ unNonZero (maxRollbacks k) * 10) + , -- The mock ledger doesn't really care, and neither does PBFT. We -- stick with the common @k * 10@ size for now. 
- , messageDelay = noCalcMessageDelay - , nodeJoinPlan - , nodeRestarts = noRestarts - , txGenExtra = () - , version = newestVersion (Proxy @MockPBftBlock) - } + messageDelay = noCalcMessageDelay + , nodeJoinPlan + , nodeRestarts = noRestarts + , txGenExtra = () + , version = newestVersion (Proxy @MockPBftBlock) + } NumCoreNodes nn = numCoreNodes @@ -110,61 +118,66 @@ prop_simple_pbft_convergence TestSetup params = PBftParams k numCoreNodes sigThd testOutput = - runTestNetwork testConfig testConfigB TestConfigMB - { nodeInfo = \nid -> plainTestNodeInitialization - (protocolInfoMockPBFT - params - (HardFork.defaultEraParams k slotLength)) - (pure $ blockForgingMockPBFT nid) - , mkRekeyM = Nothing - } + runTestNetwork + testConfig + testConfigB + TestConfigMB + { nodeInfo = \nid -> + plainTestNodeInitialization + ( protocolInfoMockPBFT + params + (HardFork.defaultEraParams k slotLength) + ) + (pure $ blockForgingMockPBFT nid) + , mkRekeyM = Nothing + } refResult :: Ref.Result refResult = Ref.simulate params nodeJoinPlan numSlots prop_asSimulated :: Property prop_asSimulated = - counterexample ("Unexpected Nominal slots:") $ + counterexample ("Unexpected Nominal slots:") $ conjoin $ - [ counterexample ("In final chain of " <> condense nid) $ - counterexample ("actual: " <> condense actualSlots) $ - case refResult of - Ref.Forked _ m -> - let expectedSlotss = - case Map.lookup cid m of - Nothing -> error "node missing from Ref.Forked" - Just ss -> map (:[]) $ Set.toList ss - in - counterexample - ("expected: one of " <> condense expectedSlotss) $ - actualSlots `elem` expectedSlotss - Ref.Nondeterministic -> property True -- TODO improve? - Ref.Outcomes outcomes -> - let expectedSlots = - [ s | (Ref.Nominal, s) <- zip outcomes [0..] 
] - in - counterexample ("expected: " <> condense expectedSlots) $ - actualSlots == expectedSlots - | (nid@(CoreId cid), no) <- Map.toList testOutputNodes - , let actualSlots = actualSlotsOf no - ] - where - TestOutput{testOutputNodes} = testOutput - - actualSlotsOf NodeOutput{nodeOutputFinalChain} = - foldChain snoc id nodeOutputFinalChain [] :: [SlotNo] - where - snoc acc blk = acc . (blockSlot blk :) - -type Blk = SimpleBlock SimpleMockCrypto - (SimplePBftExt SimpleMockCrypto PBftMockCrypto) + [ counterexample ("In final chain of " <> condense nid) $ + counterexample ("actual: " <> condense actualSlots) $ + case refResult of + Ref.Forked _ m -> + let expectedSlotss = + case Map.lookup cid m of + Nothing -> error "node missing from Ref.Forked" + Just ss -> map (: []) $ Set.toList ss + in counterexample + ("expected: one of " <> condense expectedSlotss) + $ actualSlots `elem` expectedSlotss + Ref.Nondeterministic -> property True -- TODO improve? + Ref.Outcomes outcomes -> + let expectedSlots = + [s | (Ref.Nominal, s) <- zip outcomes [0 ..]] + in counterexample ("expected: " <> condense expectedSlots) $ + actualSlots == expectedSlots + | (nid@(CoreId cid), no) <- Map.toList testOutputNodes + , let actualSlots = actualSlotsOf no + ] + where + TestOutput{testOutputNodes} = testOutput + + actualSlotsOf NodeOutput{nodeOutputFinalChain} = + foldChain snoc id nodeOutputFinalChain [] :: [SlotNo] + where + snoc acc blk = acc . 
(blockSlot blk :) + +type Blk = + SimpleBlock + SimpleMockCrypto + (SimplePBftExt SimpleMockCrypto PBftMockCrypto) expectedCannotForge :: - NumCoreNodes - -> SlotNo - -> NodeId - -> WrapCannotForge Blk - -> Bool + NumCoreNodes -> + SlotNo -> + NodeId -> + WrapCannotForge Blk -> + Bool expectedCannotForge _ _ _ = \case - WrapCannotForge PBftCannotForgeThresholdExceeded{} -> True - _ -> False + WrapCannotForge PBftCannotForgeThresholdExceeded{} -> True + _ -> False diff --git a/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/Praos.hs b/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/Praos.hs index eec86b15ff..80b1498df6 100644 --- a/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/Praos.hs +++ b/ouroboros-consensus-diffusion/test/mock-test/Test/ThreadNet/Praos.hs @@ -3,85 +3,91 @@ module Test.ThreadNet.Praos (tests) where -import Cardano.Ledger.BaseTypes (nonZero) -import Control.Monad (replicateM) -import qualified Data.Map.Strict as Map -import Data.Word (Word64) -import Numeric.Natural (Natural) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config.SecurityParam -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Mock.Node () -import Ouroboros.Consensus.Mock.Node.Praos (MockPraosBlock, - blockForgingPraos, protocolInfoPraos) -import Ouroboros.Consensus.Mock.Protocol.Praos -import Ouroboros.Consensus.Node.ProtocolInfo - (NumCoreNodes (NumCoreNodes), enumCoreNodes) -import Test.QuickCheck -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.ThreadNet.General -import Test.ThreadNet.TxGen.Mock () -import Test.ThreadNet.Util -import Test.ThreadNet.Util.HasCreator.Mock () -import Test.ThreadNet.Util.NodeJoinPlan -import Test.ThreadNet.Util.NodeRestarts -import Test.ThreadNet.Util.NodeToNodeVersion -import Test.ThreadNet.Util.SimpleBlock -import Test.Util.HardFork.Future (singleEraFuture) 
-import Test.Util.Orphans.Arbitrary () -import Test.Util.Slots (NumSlots (unNumSlots)) +import Cardano.Ledger.BaseTypes (nonZero) +import Control.Monad (replicateM) +import Data.Map.Strict qualified as Map +import Data.Word (Word64) +import Numeric.Natural (Natural) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Mock.Node () +import Ouroboros.Consensus.Mock.Node.Praos + ( MockPraosBlock + , blockForgingPraos + , protocolInfoPraos + ) +import Ouroboros.Consensus.Mock.Protocol.Praos +import Ouroboros.Consensus.Node.ProtocolInfo + ( NumCoreNodes (NumCoreNodes) + , enumCoreNodes + ) +import Test.QuickCheck +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.ThreadNet.General +import Test.ThreadNet.TxGen.Mock () +import Test.ThreadNet.Util +import Test.ThreadNet.Util.HasCreator.Mock () +import Test.ThreadNet.Util.NodeJoinPlan +import Test.ThreadNet.Util.NodeRestarts +import Test.ThreadNet.Util.NodeToNodeVersion +import Test.ThreadNet.Util.SimpleBlock +import Test.Util.HardFork.Future (singleEraFuture) +import Test.Util.Orphans.Arbitrary () +import Test.Util.Slots (NumSlots (unNumSlots)) data TestSetup = TestSetup - { setupEpochSize :: EpochSize - , setupInitialNonce :: Natural - -- ^ the initial Shelley 'praosInitialEta' - -- - -- This test varies it too ensure it explores different leader schedules. - , setupK :: SecurityParam - , setupNodeJoinPlan :: NodeJoinPlan - , setupSlotLength :: SlotLength - , setupTestConfig :: TestConfig + { setupEpochSize :: EpochSize + , setupInitialNonce :: Natural + -- ^ the initial Shelley 'praosInitialEta' + -- + -- This test varies it too ensure it explores different leader schedules. 
+ , setupK :: SecurityParam + , setupNodeJoinPlan :: NodeJoinPlan + , setupSlotLength :: SlotLength + , setupTestConfig :: TestConfig , setupEvolvingStake :: PraosEvolvingStake } - deriving (Show) + deriving Show genEvolvingStake :: EpochSize -> TestConfig -> Gen PraosEvolvingStake -genEvolvingStake epochSize TestConfig {numSlots, numCoreNodes} = do - chosenEpochs <- sublistOf [0..EpochNo $ max 1 maxEpochs - 1] - let l = fromIntegral maxEpochs - stakeDists <- replicateM l genStakeDist - return . PraosEvolvingStake . Map.fromList $ zip chosenEpochs stakeDists - where - maxEpochs = unNumSlots numSlots `div` unEpochSize epochSize - relativeStake ts nid stk = (nid, fromIntegral stk / ts) - genStakeDist = do - stakes <- vector (fromIntegral x) `suchThat` any (> 0) :: Gen [Amount] - let totalStake = fromIntegral $ sum stakes - return - . StakeDist - . Map.fromList - $ zipWith (relativeStake totalStake) (enumCoreNodes numCoreNodes) stakes - NumCoreNodes x = numCoreNodes +genEvolvingStake epochSize TestConfig{numSlots, numCoreNodes} = do + chosenEpochs <- sublistOf [0 .. EpochNo $ max 1 maxEpochs - 1] + let l = fromIntegral maxEpochs + stakeDists <- replicateM l genStakeDist + return . PraosEvolvingStake . Map.fromList $ zip chosenEpochs stakeDists + where + maxEpochs = unNumSlots numSlots `div` unEpochSize epochSize + relativeStake ts nid stk = (nid, fromIntegral stk / ts) + genStakeDist = do + stakes <- vector (fromIntegral x) `suchThat` any (> 0) :: Gen [Amount] + let totalStake = fromIntegral $ sum stakes + return + . StakeDist + . Map.fromList + $ zipWith (relativeStake totalStake) (enumCoreNodes numCoreNodes) stakes + NumCoreNodes x = numCoreNodes instance Arbitrary TestSetup where arbitrary = do - -- TODO k > 1 as a workaround for Issue #1511. - k <- SecurityParam <$> choose (2, 10) `suchThatMap` nonZero - epochSize <- EpochSize <$> choose (1, 10) - slotLength <- arbitrary + -- TODO k > 1 as a workaround for Issue #1511. 
+ k <- SecurityParam <$> choose (2, 10) `suchThatMap` nonZero + epochSize <- EpochSize <$> choose (1, 10) + slotLength <- arbitrary - initialNonce <- fromIntegral <$> choose (0, maxBound :: Word64) + initialNonce <- fromIntegral <$> choose (0, maxBound :: Word64) - testConfig <- arbitrary - let TestConfig{numCoreNodes, numSlots} = testConfig + testConfig <- arbitrary + let TestConfig{numCoreNodes, numSlots} = testConfig - nodeJoinPlan <- genNodeJoinPlan numCoreNodes numSlots - evolvingStake <- genEvolvingStake epochSize testConfig + nodeJoinPlan <- genNodeJoinPlan numCoreNodes numSlots + evolvingStake <- genEvolvingStake epochSize testConfig - pure $ TestSetup + pure $ + TestSetup epochSize initialNonce k @@ -90,68 +96,80 @@ instance Arbitrary TestSetup where testConfig evolvingStake - -- TODO shrink +-- TODO shrink tests :: TestTree -tests = testGroup "Praos" +tests = + testGroup + "Praos" [ testProperty "simple convergence" $ \setup -> prop_simple_praos_convergence setup ] prop_simple_praos_convergence :: TestSetup -> Property -prop_simple_praos_convergence TestSetup - { setupEpochSize = epochSize - , setupK = k - , setupInitialNonce - , setupNodeJoinPlan = nodeJoinPlan - , setupSlotLength = slotLength - , setupTestConfig = testConfig - , setupEvolvingStake = evolvingStake - } = +prop_simple_praos_convergence + TestSetup + { setupEpochSize = epochSize + , setupK = k + , setupInitialNonce + , setupNodeJoinPlan = nodeJoinPlan + , setupSlotLength = slotLength + , setupTestConfig = testConfig + , setupEvolvingStake = evolvingStake + } = counterexample (tracesToDot testOutputNodes) $ - prop_general PropGeneralArgs - { pgaBlockProperty = prop_validSimpleBlock - , pgaCountTxs = countSimpleGenTxs - , pgaExpectedCannotForge = noExpectedCannotForges - , pgaFirstBlockNo = 0 - , pgaFixedMaxForkLength = Nothing - , pgaFixedSchedule = Nothing - , pgaSecurityParam = k - , pgaTestConfig = testConfig - , pgaTestConfigB = testConfigB - } - testOutput - where - testConfigB = 
TestConfigB - { forgeEbbEnv = Nothing - , future = singleEraFuture slotLength epochSize - , messageDelay = noCalcMessageDelay - , nodeJoinPlan - , nodeRestarts = noRestarts - , txGenExtra = () - , version = newestVersion (Proxy @MockPraosBlock) - } + prop_general + PropGeneralArgs + { pgaBlockProperty = prop_validSimpleBlock + , pgaCountTxs = countSimpleGenTxs + , pgaExpectedCannotForge = noExpectedCannotForges + , pgaFirstBlockNo = 0 + , pgaFixedMaxForkLength = Nothing + , pgaFixedSchedule = Nothing + , pgaSecurityParam = k + , pgaTestConfig = testConfig + , pgaTestConfigB = testConfigB + } + testOutput + where + testConfigB = + TestConfigB + { forgeEbbEnv = Nothing + , future = singleEraFuture slotLength epochSize + , messageDelay = noCalcMessageDelay + , nodeJoinPlan + , nodeRestarts = noRestarts + , txGenExtra = () + , version = newestVersion (Proxy @MockPraosBlock) + } - params = PraosParams - { praosSecurityParam = k - , praosSlotsPerEpoch = unEpochSize epochSize - , praosLeaderF = 0.5 - } + params = + PraosParams + { praosSecurityParam = k + , praosSlotsPerEpoch = unEpochSize epochSize + , praosLeaderF = 0.5 + } TestConfig{numCoreNodes} = testConfig testOutput@TestOutput{testOutputNodes} = - runTestNetwork testConfig testConfigB TestConfigMB - { nodeInfo = \nid -> plainTestNodeInitialization - (protocolInfoPraos - numCoreNodes - nid - params - (HardFork.defaultEraParams - k - slotLength) - setupInitialNonce - evolvingStake) - (blockForgingPraos numCoreNodes nid) - , mkRekeyM = Nothing - } + runTestNetwork + testConfig + testConfigB + TestConfigMB + { nodeInfo = \nid -> + plainTestNodeInitialization + ( protocolInfoPraos + numCoreNodes + nid + params + ( HardFork.defaultEraParams + k + slotLength + ) + setupInitialNonce + evolvingStake + ) + (blockForgingPraos numCoreNodes nid) + , mkRekeyM = Nothing + } diff --git a/ouroboros-consensus-protocol/ouroboros-consensus-protocol.cabal b/ouroboros-consensus-protocol/ouroboros-consensus-protocol.cabal index 
1a1906df4f..26018f5c4e 100644 --- a/ouroboros-consensus-protocol/ouroboros-consensus-protocol.cabal +++ b/ouroboros-consensus-protocol/ouroboros-consensus-protocol.cabal @@ -28,6 +28,7 @@ flag asserts common common-lib default-language: Haskell2010 + default-extensions: ImportQualifiedPost ghc-options: -Wall -Wcompat @@ -39,6 +40,7 @@ common common-lib -Wmissing-export-lists -Wunused-packages -Wno-unticked-promoted-constructors + -Wprepositive-qualified-module if flag(asserts) ghc-options: -fno-ignore-asserts diff --git a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Ledger/HotKey.hs b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Ledger/HotKey.hs index c729d20944..8c16f152fa 100644 --- a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Ledger/HotKey.hs +++ b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Ledger/HotKey.hs @@ -8,14 +8,16 @@ -- | Hot key -- -- Intended for qualified import -module Ouroboros.Consensus.Protocol.Ledger.HotKey ( - -- * KES Info +module Ouroboros.Consensus.Protocol.Ledger.HotKey + ( -- * KES Info KESEvolution , KESInfo (..) , kesAbsolutePeriod + -- * KES Status , KESStatus (..) , kesStatus + -- * Hot Key , HotKey (..) , KESEvolutionError (..) 
@@ -24,15 +26,15 @@ module Ouroboros.Consensus.Protocol.Ledger.HotKey ( , sign ) where -import qualified Cardano.Crypto.KES as KES -import qualified Cardano.Crypto.KES as Relative (Period) -import Cardano.Protocol.Crypto (Crypto (..)) -import qualified Cardano.Protocol.TPraos.OCert as Absolute (KESPeriod (..)) -import Data.Word (Word64) -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block.Forging (UpdateInfo (..)) -import Ouroboros.Consensus.Util.IOLike +import Cardano.Crypto.KES qualified as KES +import Cardano.Crypto.KES qualified as Relative (Period) +import Cardano.Protocol.Crypto (Crypto (..)) +import Cardano.Protocol.TPraos.OCert qualified as Absolute (KESPeriod (..)) +import Data.Word (Word64) +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block.Forging (UpdateInfo (..)) +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- KES Info @@ -42,45 +44,48 @@ import Ouroboros.Consensus.Util.IOLike -- avoid confusion with absolute periods. type KESEvolution = Relative.Period -data KESInfo = KESInfo { - kesStartPeriod :: !Absolute.KESPeriod - , kesEndPeriod :: !Absolute.KESPeriod - -- ^ Currently derived from 'TPraosParams': - -- > kesEndPeriod = kesStartPeriod + tpraosMaxKESEvo - , kesEvolution :: !KESEvolution - -- ^ Current evolution or /relative period/. - -- - -- Invariant: - -- > kesStartPeriod + kesEvolution in [kesStartPeriod, kesEndPeriod) - } +data KESInfo = KESInfo + { kesStartPeriod :: !Absolute.KESPeriod + , kesEndPeriod :: !Absolute.KESPeriod + -- ^ Currently derived from 'TPraosParams': + -- > kesEndPeriod = kesStartPeriod + tpraosMaxKESEvo + , kesEvolution :: !KESEvolution + -- ^ Current evolution or /relative period/. 
+ -- + -- Invariant: + -- > kesStartPeriod + kesEvolution in [kesStartPeriod, kesEndPeriod) + } deriving (Show, Generic, NoThunks) -- | Return the absolute KES period kesAbsolutePeriod :: KESInfo -> Absolute.KESPeriod -kesAbsolutePeriod KESInfo { kesStartPeriod, kesEvolution } = - Absolute.KESPeriod $ start + kesEvolution - where - Absolute.KESPeriod start = kesStartPeriod +kesAbsolutePeriod KESInfo{kesStartPeriod, kesEvolution} = + Absolute.KESPeriod $ start + kesEvolution + where + Absolute.KESPeriod start = kesStartPeriod {------------------------------------------------------------------------------- KES Status -------------------------------------------------------------------------------} -data KESStatus = - -- | The given period is before the start period of the KES key. +data KESStatus + = -- | The given period is before the start period of the KES key. BeforeKESStart - Absolute.KESPeriod -- ^ Given period - Absolute.KESPeriod -- ^ Start period of the KES key - - -- | The given period is in the range of the KES key. - | InKESRange - KESEvolution -- ^ Relative period or evolution corresponding to the - -- given absolute period - - -- | The given period is after the end period of the KES key. - | AfterKESEnd - Absolute.KESPeriod -- ^ Given period - Absolute.KESPeriod -- ^ End period of the KES key + -- | Given period + Absolute.KESPeriod + -- | Start period of the KES key + Absolute.KESPeriod + | -- | The given period is in the range of the KES key. + InKESRange + -- | Relative period or evolution corresponding to the + -- given absolute period + KESEvolution + | -- | The given period is after the end period of the KES key. + AfterKESEnd + -- | Given period + Absolute.KESPeriod + -- | End period of the KES key + Absolute.KESPeriod -- | Return the evolution of the given KES period, /when/ it falls within the -- range of the 'HotKey' (@[hkStart, hkEnd)@). 
@@ -88,11 +93,13 @@ data KESStatus = -- Note that the upper bound is exclusive, the spec says: -- > c0 <= kesPeriod s < c0 + MaxKESEvo kesStatus :: KESInfo -> Absolute.KESPeriod -> KESStatus -kesStatus KESInfo { kesStartPeriod = lo'@(Absolute.KESPeriod lo) - , kesEndPeriod = hi'@(Absolute.KESPeriod hi) - } - cur'@(Absolute.KESPeriod cur) - | cur < lo = BeforeKESStart cur' lo' +kesStatus + KESInfo + { kesStartPeriod = lo'@(Absolute.KESPeriod lo) + , kesEndPeriod = hi'@(Absolute.KESPeriod hi) + } + cur'@(Absolute.KESPeriod cur) + | cur < lo = BeforeKESStart cur' lo' | cur >= hi = AfterKESEnd cur' hi' | otherwise = InKESRange (cur - lo) @@ -101,104 +108,112 @@ kesStatus KESInfo { kesStartPeriod = lo'@(Absolute.KESPeriod lo) -------------------------------------------------------------------------------} -- | Failed to evolve the KES key. -data KESEvolutionError = - -- | The KES key could not be evolved to the target period. +data KESEvolutionError + = -- | The KES key could not be evolved to the target period. KESCouldNotEvolve KESInfo + -- | Target period outside the range of the current KES key. Typically + -- the current KES period according to the wallclock slot. Absolute.KESPeriod - -- ^ Target period outside the range of the current KES key. Typically - -- the current KES period according to the wallclock slot. - - -- | The KES key was already poisoned. - | KESKeyAlreadyPoisoned + | -- | The KES key was already poisoned. + KESKeyAlreadyPoisoned KESInfo + -- | Target period outside the range of the current KES key. Typically + -- the current KES period according to the wallclock slot. Absolute.KESPeriod - -- ^ Target period outside the range of the current KES key. Typically - -- the current KES period according to the wallclock slot. - deriving (Show) + deriving Show -- | Result of evolving the KES key. type KESEvolutionInfo = UpdateInfo KESInfo KESEvolutionError -- | API to interact with the key. 
-data HotKey c m = HotKey { - -- | Evolve the KES signing key to the given absolute KES period. - -- - -- When the key cannot evolve anymore, we poison it. - evolve :: Absolute.KESPeriod -> m KESEvolutionInfo - -- | Return 'KESInfo' of the signing key. - , getInfo :: m KESInfo - -- | Return 'True' when the signing key is poisoned because it expired. - , isPoisoned :: m Bool - -- | Sign the given @toSign@ with the current signing key. - -- - -- PRECONDITION: the key is not poisoned. - -- - -- POSTCONDITION: the signature is in normal form. - , sign_ :: forall toSign. (KES.Signable (KES c) toSign, HasCallStack) - => toSign -> m (KES.SignedKES (KES c) toSign) - } +data HotKey c m = HotKey + { evolve :: Absolute.KESPeriod -> m KESEvolutionInfo + -- ^ Evolve the KES signing key to the given absolute KES period. + -- + -- When the key cannot evolve anymore, we poison it. + , getInfo :: m KESInfo + -- ^ Return 'KESInfo' of the signing key. + , isPoisoned :: m Bool + -- ^ Return 'True' when the signing key is poisoned because it expired. + , sign_ :: + forall toSign. + (KES.Signable (KES c) toSign, HasCallStack) => + toSign -> m (KES.SignedKES (KES c) toSign) + -- ^ Sign the given @toSign@ with the current signing key. + -- + -- PRECONDITION: the key is not poisoned. + -- + -- POSTCONDITION: the signature is in normal form. + } sign :: - (KES.Signable (KES c) toSign, HasCallStack) - => HotKey c m - -> toSign -> m (KES.SignedKES (KES c) toSign) + (KES.Signable (KES c) toSign, HasCallStack) => + HotKey c m -> + toSign -> + m (KES.SignedKES (KES c) toSign) sign = sign_ -- | The actual KES key, unless it expired, in which case it is replaced by -- \"poison\". 
-data KESKey c = - KESKey !(KES.UnsoundPureSignKeyKES (KES c)) +data KESKey c + = KESKey !(KES.UnsoundPureSignKeyKES (KES c)) | KESKeyPoisoned - deriving (Generic) + deriving Generic instance Crypto c => NoThunks (KESKey c) kesKeyIsPoisoned :: KESKey c -> Bool kesKeyIsPoisoned KESKeyPoisoned = True -kesKeyIsPoisoned (KESKey _) = False +kesKeyIsPoisoned (KESKey _) = False -data KESState c = KESState { - kesStateInfo :: !KESInfo - , kesStateKey :: !(KESKey c) - } - deriving (Generic) +data KESState c = KESState + { kesStateInfo :: !KESInfo + , kesStateKey :: !(KESKey c) + } + deriving Generic instance Crypto c => NoThunks (KESState c) mkHotKey :: - forall m c. (Crypto c, IOLike m) - => KES.UnsoundPureSignKeyKES (KES c) - -> Absolute.KESPeriod -- ^ Start period - -> Word64 -- ^ Max KES evolutions - -> m (HotKey c m) + forall m c. + (Crypto c, IOLike m) => + KES.UnsoundPureSignKeyKES (KES c) -> + -- | Start period + Absolute.KESPeriod -> + -- | Max KES evolutions + Word64 -> + m (HotKey c m) mkHotKey initKey startPeriod@(Absolute.KESPeriod start) maxKESEvolutions = do - varKESState <- newMVar initKESState - return HotKey { - evolve = evolveKey varKESState - , getInfo = kesStateInfo <$> readMVar varKESState + varKESState <- newMVar initKESState + return + HotKey + { evolve = evolveKey varKESState + , getInfo = kesStateInfo <$> readMVar varKESState , isPoisoned = kesKeyIsPoisoned . 
kesStateKey <$> readMVar varKESState - , sign_ = \toSign -> do - KESState { kesStateInfo, kesStateKey } <- readMVar varKESState + , sign_ = \toSign -> do + KESState{kesStateInfo, kesStateKey} <- readMVar varKESState case kesStateKey of KESKeyPoisoned -> error "trying to sign with a poisoned key" - KESKey key -> do + KESKey key -> do let evolution = kesEvolution kesStateInfo - signed = KES.unsoundPureSignedKES () evolution toSign key + signed = KES.unsoundPureSignedKES () evolution toSign key -- Force the signature to WHNF (for 'SignedKES', WHNF implies -- NF) so that we don't have any thunks holding on to a key that -- might be destructively updated when evolved. evaluate signed } - where - initKESState :: KESState c - initKESState = KESState { - kesStateInfo = KESInfo { - kesStartPeriod = startPeriod - , kesEndPeriod = Absolute.KESPeriod (start + fromIntegral maxKESEvolutions) - -- We always start from 0 as the key hasn't evolved yet. - , kesEvolution = 0 - } + where + initKESState :: KESState c + initKESState = + KESState + { kesStateInfo = + KESInfo + { kesStartPeriod = startPeriod + , kesEndPeriod = Absolute.KESPeriod (start + fromIntegral maxKESEvolutions) + , -- We always start from 0 as the key hasn't evolved yet. + kesEvolution = 0 + } , kesStateKey = KESKey initKey } @@ -216,65 +231,60 @@ mkHotKey initKey startPeriod@(Absolute.KESPeriod start) maxKESEvolutions = do -- -- When the key is poisoned, we always return 'UpdateFailed'. evolveKey :: - forall m c. (Crypto c, IOLike m) - => StrictMVar m (KESState c) -> Absolute.KESPeriod -> m KESEvolutionInfo + forall m c. 
+ (Crypto c, IOLike m) => + StrictMVar m (KESState c) -> Absolute.KESPeriod -> m KESEvolutionInfo evolveKey varKESState targetPeriod = modifyMVar varKESState $ \kesState -> do - let info = kesStateInfo kesState - -- We mask the evolution process because if we got interrupted after - -- calling 'forgetSignKeyKES', which destructively updates the current - -- signing key, we would leave an erased key in the state, which might - -- cause a segfault when used afterwards. - uninterruptibleMask_ $ case kesStateKey kesState of - - KESKeyPoisoned -> - let err = KESKeyAlreadyPoisoned info targetPeriod - in return (kesState, UpdateFailed err) - - KESKey key -> case kesStatus info targetPeriod of - -- When the absolute period is before the start period, we can't - -- update the key. 'checkCanForge' will say we can't forge because the - -- key is not valid yet. - BeforeKESStart {} -> + let info = kesStateInfo kesState + -- We mask the evolution process because if we got interrupted after + -- calling 'forgetSignKeyKES', which destructively updates the current + -- signing key, we would leave an erased key in the state, which might + -- cause a segfault when used afterwards. + uninterruptibleMask_ $ case kesStateKey kesState of + KESKeyPoisoned -> + let err = KESKeyAlreadyPoisoned info targetPeriod + in return (kesState, UpdateFailed err) + KESKey key -> case kesStatus info targetPeriod of + -- When the absolute period is before the start period, we can't + -- update the key. 'checkCanForge' will say we can't forge because the + -- key is not valid yet. + BeforeKESStart{} -> + return (kesState, Updated info) + -- When the absolute period is after the end period, we can't evolve + -- anymore and poison the expired key. 
+ AfterKESEnd{} -> + let err = KESCouldNotEvolve info targetPeriod + in return (poisonState kesState, UpdateFailed err) + InKESRange targetEvolution + -- No evolving needed + | targetEvolution <= kesEvolution info -> return (kesState, Updated info) + -- Evolving needed + | otherwise -> + (\s' -> (s', Updated (kesStateInfo s'))) + <$> go targetEvolution info key + where + poisonState :: KESState c -> KESState c + poisonState kesState = kesState{kesStateKey = KESKeyPoisoned} - -- When the absolute period is after the end period, we can't evolve - -- anymore and poison the expired key. - AfterKESEnd {} -> - let err = KESCouldNotEvolve info targetPeriod - in return (poisonState kesState, UpdateFailed err) - - InKESRange targetEvolution - -- No evolving needed - | targetEvolution <= kesEvolution info - -> return (kesState, Updated info) - - -- Evolving needed - | otherwise - -> (\s' -> (s', Updated (kesStateInfo s'))) <$> - go targetEvolution info key - - where - poisonState :: KESState c -> KESState c - poisonState kesState = kesState { kesStateKey = KESKeyPoisoned } - - -- | PRECONDITION: - -- - -- > targetEvolution >= curEvolution - go :: KESEvolution -> KESInfo -> KES.UnsoundPureSignKeyKES (KES c) -> m (KESState c) - go targetEvolution info key - | targetEvolution <= curEvolution - = return $ KESState { kesStateInfo = info, kesStateKey = KESKey key } - | otherwise - = case KES.unsoundPureUpdateKES () key curEvolution of + -- \| PRECONDITION: + -- + -- > targetEvolution >= curEvolution + go :: KESEvolution -> KESInfo -> KES.UnsoundPureSignKeyKES (KES c) -> m (KESState c) + go targetEvolution info key + | targetEvolution <= curEvolution = + return $ KESState{kesStateInfo = info, kesStateKey = KESKey key} + | otherwise = + case KES.unsoundPureUpdateKES () key curEvolution of -- This cannot happen - Nothing -> error "Could not update KES key" + Nothing -> error "Could not update KES key" Just !key' -> do -- Clear the memory associated with the old key -- FIXME: Secure 
forgetting is not available through the unsound KES API, -- but we must restore this invocation when moving to the new mlocked KES -- API. -- forgetSignKeyKES key - let info' = info { kesEvolution = curEvolution + 1 } + let info' = info{kesEvolution = curEvolution + 1} go targetEvolution info' key' - where - curEvolution = kesEvolution info + where + curEvolution = kesEvolution info diff --git a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Ledger/Util.hs b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Ledger/Util.hs index 1778ad06e6..026210bb82 100644 --- a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Ledger/Util.hs +++ b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Ledger/Util.hs @@ -2,48 +2,50 @@ -- -- In particular, various things we need for integration with the @delegation@ -- package from cardano-ledger-specs. -module Ouroboros.Consensus.Protocol.Ledger.Util ( - firstSlotOfEpochOfSlot +module Ouroboros.Consensus.Protocol.Ledger.Util + ( firstSlotOfEpochOfSlot , isNewEpoch ) where -import Cardano.Slotting.EpochInfo -import Data.Functor.Identity (Identity (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.History.Util (addEpochs, - countSlots) +import Cardano.Slotting.EpochInfo +import Data.Functor.Identity (Identity (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.History.Util + ( addEpochs + , countSlots + ) -- | Verify whether a slot represents a change to a new epoch with regard to -- some other slot. -- -- PRECONDITION: the two slots must be in the same era. 
isNewEpoch :: - EpochInfo Identity - -> WithOrigin SlotNo - -- ^ Slot we are comparing a new epoch against - -> SlotNo - -- ^ Slot we want to check - -> Bool + EpochInfo Identity -> + -- | Slot we are comparing a new epoch against + WithOrigin SlotNo -> + -- | Slot we want to check + SlotNo -> + Bool isNewEpoch ei reference newSlot = runIdentity $ do - oldEpoch <- case reference of - Origin -> return $ EpochNo 0 - NotOrigin s -> epochInfoEpoch ei s - epochSize <- epochInfoSize ei oldEpoch - firstSlot <- epochInfoFirst ei oldEpoch + oldEpoch <- case reference of + Origin -> return $ EpochNo 0 + NotOrigin s -> epochInfoEpoch ei s + epochSize <- epochInfoSize ei oldEpoch + firstSlot <- epochInfoFirst ei oldEpoch - let epochsAfter = (countSlots newSlot firstSlot) `div` unEpochSize epochSize - newEpoch = addEpochs epochsAfter oldEpoch - -- Note that we don't call: - -- > epochInfoEpoch ei newSlot - -- because the 'EpochInfo' might have limited range. The precondition - -- justifies the calculation that we do here. - pure $ newEpoch > oldEpoch + let epochsAfter = (countSlots newSlot firstSlot) `div` unEpochSize epochSize + newEpoch = addEpochs epochsAfter oldEpoch + -- Note that we don't call: + -- > epochInfoEpoch ei newSlot + -- because the 'EpochInfo' might have limited range. The precondition + -- justifies the calculation that we do here. + pure $ newEpoch > oldEpoch -- | Return the first slot in the epoch of the given slot. 
firstSlotOfEpochOfSlot :: - EpochInfo Identity - -> SlotNo - -> SlotNo + EpochInfo Identity -> + SlotNo -> + SlotNo firstSlotOfEpochOfSlot ei slot = runIdentity $ do - epoch <- epochInfoEpoch ei slot - epochInfoFirst ei epoch + epoch <- epochInfoEpoch ei slot + epochInfoFirst ei epoch diff --git a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos.hs b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos.hs index 9198fa982e..154bba0353 100644 --- a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos.hs +++ b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos.hs @@ -14,8 +14,8 @@ {-# LANGUAGE UndecidableSuperClasses #-} {-# LANGUAGE ViewPatterns #-} -module Ouroboros.Consensus.Protocol.Praos ( - ConsensusConfig (..) +module Ouroboros.Consensus.Protocol.Praos + ( ConsensusConfig (..) , Praos , PraosCannotForge (..) , PraosCrypto @@ -28,79 +28,108 @@ module Ouroboros.Consensus.Protocol.Praos ( , Ticked (..) 
, forgePraosFields , praosCheckCanForge + -- * For testing purposes , doValidateKESSignature , doValidateVRFSignature ) where -import Cardano.Binary (FromCBOR (..), ToCBOR (..), enforceSize) -import qualified Cardano.Crypto.DSIGN as DSIGN -import qualified Cardano.Crypto.Hash as Hash -import qualified Cardano.Crypto.KES as KES -import qualified Cardano.Crypto.VRF as VRF -import Cardano.Ledger.BaseTypes (ActiveSlotCoeff, Nonce, (⭒)) -import qualified Cardano.Ledger.BaseTypes as SL -import qualified Cardano.Ledger.Chain as SL -import Cardano.Ledger.Hashes (HASH) -import Cardano.Ledger.Keys (DSIGN, KeyHash, KeyRole (BlockIssuer), - VKey (VKey), coerceKeyRole, hashKey) -import qualified Cardano.Ledger.Keys as SL -import Cardano.Ledger.Slot (Duration (Duration), (+*)) -import qualified Cardano.Ledger.State as SL -import Cardano.Protocol.Crypto (Crypto, KES, StandardCrypto, VRF) -import qualified Cardano.Protocol.TPraos.API as SL -import Cardano.Protocol.TPraos.BHeader (BoundedNatural (bvValue), - checkLeaderNatValue, prevHashToNonce) -import Cardano.Protocol.TPraos.OCert (KESPeriod (KESPeriod), - OCert (OCert), OCertSignable) -import qualified Cardano.Protocol.TPraos.OCert as OCert -import qualified Cardano.Protocol.TPraos.Rules.Prtcl as SL -import qualified Cardano.Protocol.TPraos.Rules.Tickn as SL -import Cardano.Slotting.EpochInfo (EpochInfo, epochInfoEpoch, - epochInfoFirst, hoistEpochInfo) -import Cardano.Slotting.Slot (EpochNo (EpochNo), SlotNo (SlotNo), - WithOrigin, unSlotNo) -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (Serialise (decode, encode)) -import Control.Exception (throw) -import Control.Monad (unless) -import Control.Monad.Except (Except, runExcept, throwError) -import Data.Coerce (coerce) -import Data.Functor.Identity (runIdentity) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Proxy (Proxy (Proxy)) -import qualified Data.Set as Set -import Data.Word (Word64) -import GHC.Generics (Generic) 
-import NoThunks.Class (NoThunks) -import Numeric.Natural (Natural) -import Ouroboros.Consensus.Block (WithOrigin (NotOrigin)) -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) -import qualified Ouroboros.Consensus.Protocol.Ledger.HotKey as HotKey -import Ouroboros.Consensus.Protocol.Ledger.Util (isNewEpoch) -import Ouroboros.Consensus.Protocol.Praos.Common -import Ouroboros.Consensus.Protocol.Praos.Header (HeaderBody) -import qualified Ouroboros.Consensus.Protocol.Praos.Views as Views -import Ouroboros.Consensus.Protocol.Praos.VRF (InputVRF, mkInputVRF, - vrfLeaderValue, vrfNonceValue) -import Ouroboros.Consensus.Protocol.TPraos - (ConsensusConfig (TPraosConfig, tpraosEpochInfo, tpraosParams), - TPraos, - TPraosState (tpraosStateChainDepState, tpraosStateLastSlot)) -import Ouroboros.Consensus.Ticked (Ticked) -import Ouroboros.Consensus.Util.Versioned (VersionDecoder (Decode), - decodeVersion, encodeVersion) +import Cardano.Binary (FromCBOR (..), ToCBOR (..), enforceSize) +import Cardano.Crypto.DSIGN qualified as DSIGN +import Cardano.Crypto.Hash qualified as Hash +import Cardano.Crypto.KES qualified as KES +import Cardano.Crypto.VRF qualified as VRF +import Cardano.Ledger.BaseTypes (ActiveSlotCoeff, Nonce, (⭒)) +import Cardano.Ledger.BaseTypes qualified as SL +import Cardano.Ledger.Chain qualified as SL +import Cardano.Ledger.Hashes (HASH) +import Cardano.Ledger.Keys + ( DSIGN + , KeyHash + , KeyRole (BlockIssuer) + , VKey (VKey) + , coerceKeyRole + , hashKey + ) +import Cardano.Ledger.Keys qualified as SL +import Cardano.Ledger.Slot (Duration (Duration), (+*)) +import Cardano.Ledger.State qualified as SL +import Cardano.Protocol.Crypto (Crypto, KES, StandardCrypto, VRF) +import Cardano.Protocol.TPraos.API qualified as SL +import Cardano.Protocol.TPraos.BHeader + ( BoundedNatural (bvValue) + , checkLeaderNatValue + , prevHashToNonce + ) +import 
Cardano.Protocol.TPraos.OCert + ( KESPeriod (KESPeriod) + , OCert (OCert) + , OCertSignable + ) +import Cardano.Protocol.TPraos.OCert qualified as OCert +import Cardano.Protocol.TPraos.Rules.Prtcl qualified as SL +import Cardano.Protocol.TPraos.Rules.Tickn qualified as SL +import Cardano.Slotting.EpochInfo + ( EpochInfo + , epochInfoEpoch + , epochInfoFirst + , hoistEpochInfo + ) +import Cardano.Slotting.Slot + ( EpochNo (EpochNo) + , SlotNo (SlotNo) + , WithOrigin + , unSlotNo + ) +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (Serialise (decode, encode)) +import Control.Exception (throw) +import Control.Monad (unless) +import Control.Monad.Except (Except, runExcept, throwError) +import Data.Coerce (coerce) +import Data.Functor.Identity (runIdentity) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Proxy (Proxy (Proxy)) +import Data.Set qualified as Set +import Data.Word (Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Numeric.Natural (Natural) +import Ouroboros.Consensus.Block (WithOrigin (NotOrigin)) +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) +import Ouroboros.Consensus.Protocol.Ledger.HotKey qualified as HotKey +import Ouroboros.Consensus.Protocol.Ledger.Util (isNewEpoch) +import Ouroboros.Consensus.Protocol.Praos.Common +import Ouroboros.Consensus.Protocol.Praos.Header (HeaderBody) +import Ouroboros.Consensus.Protocol.Praos.VRF + ( InputVRF + , mkInputVRF + , vrfLeaderValue + , vrfNonceValue + ) +import Ouroboros.Consensus.Protocol.Praos.Views qualified as Views +import Ouroboros.Consensus.Protocol.TPraos + ( ConsensusConfig (TPraosConfig, tpraosEpochInfo, tpraosParams) + , TPraos + , TPraosState (tpraosStateChainDepState, tpraosStateLastSlot) + ) +import Ouroboros.Consensus.Ticked (Ticked) +import Ouroboros.Consensus.Util.Versioned + ( 
VersionDecoder (Decode) + , decodeVersion + , encodeVersion + ) data Praos c class - ( Crypto c, - DSIGN.Signable DSIGN (OCertSignable c), - KES.Signable (KES c) (HeaderBody c), - VRF.Signable (VRF c) InputVRF + ( Crypto c + , DSIGN.Signable DSIGN (OCertSignable c) + , KES.Signable (KES c) (HeaderBody c) + , VRF.Signable (VRF c) InputVRF ) => PraosCrypto c @@ -111,10 +140,10 @@ instance PraosCrypto StandardCrypto -------------------------------------------------------------------------------} data PraosFields c toSign = PraosFields - { praosSignature :: KES.SignedKES (KES c) toSign, - praosToSign :: toSign + { praosSignature :: KES.SignedKES (KES c) toSign + , praosToSign :: toSign } - deriving (Generic) + deriving Generic deriving instance (NoThunks toSign, PraosCrypto c) => @@ -127,26 +156,26 @@ deriving instance -- | Fields arising from praos execution which must be included in -- the block signature. data PraosToSign c = PraosToSign - { -- | Verification key for the issuer of this block. - praosToSignIssuerVK :: SL.VKey 'SL.BlockIssuer, - praosToSignVrfVK :: VRF.VerKeyVRF (VRF c), - -- | Verifiable random value. This is used both to prove the issuer is - -- eligible to issue a block, and to contribute to the evolving nonce. - praosToSignVrfRes :: VRF.CertifiedVRF (VRF c) InputVRF, - -- | Lightweight delegation certificate mapping the cold (DSIGN) key to - -- the online KES key. - praosToSignOCert :: OCert.OCert c + { praosToSignIssuerVK :: SL.VKey 'SL.BlockIssuer + -- ^ Verification key for the issuer of this block. + , praosToSignVrfVK :: VRF.VerKeyVRF (VRF c) + , praosToSignVrfRes :: VRF.CertifiedVRF (VRF c) InputVRF + -- ^ Verifiable random value. This is used both to prove the issuer is + -- eligible to issue a block, and to contribute to the evolving nonce. + , praosToSignOCert :: OCert.OCert c + -- ^ Lightweight delegation certificate mapping the cold (DSIGN) key to + -- the online KES key. 
} - deriving (Generic) + deriving Generic instance PraosCrypto c => NoThunks (PraosToSign c) deriving instance PraosCrypto c => Show (PraosToSign c) forgePraosFields :: - ( PraosCrypto c, - KES.Signable (KES c) toSign, - Monad m + ( PraosCrypto c + , KES.Signable (KES c) toSign + , Monad m ) => HotKey c m -> CanBeLeader (Praos c) -> @@ -156,28 +185,28 @@ forgePraosFields :: forgePraosFields hotKey PraosCanBeLeader - { praosCanBeLeaderColdVerKey, - praosCanBeLeaderSignKeyVRF, - praosCanBeLeaderOpCert + { praosCanBeLeaderColdVerKey + , praosCanBeLeaderSignKeyVRF + , praosCanBeLeaderOpCert } - PraosIsLeader {praosIsLeaderVrfRes} + PraosIsLeader{praosIsLeaderVrfRes} mkToSign = do signature <- HotKey.sign hotKey toSign return PraosFields - { praosSignature = signature, - praosToSign = toSign + { praosSignature = signature + , praosToSign = toSign + } + where + toSign = mkToSign signedFields + + signedFields = + PraosToSign + { praosToSignIssuerVK = praosCanBeLeaderColdVerKey + , praosToSignVrfVK = VRF.deriveVerKeyVRF praosCanBeLeaderSignKeyVRF + , praosToSignVrfRes = praosIsLeaderVrfRes + , praosToSignOCert = praosCanBeLeaderOpCert } - where - toSign = mkToSign signedFields - - signedFields = - PraosToSign - { praosToSignIssuerVK = praosCanBeLeaderColdVerKey, - praosToSignVrfVK = VRF.deriveVerKeyVRF praosCanBeLeaderSignKeyVRF, - praosToSignVrfRes = praosIsLeaderVrfRes, - praosToSignOCert = praosCanBeLeaderOpCert - } {------------------------------------------------------------------------------- Protocol proper @@ -185,26 +214,26 @@ forgePraosFields -- | Praos parameters that are node independent data PraosParams = PraosParams - { -- | See 'Globals.slotsPerKESPeriod'. - praosSlotsPerKESPeriod :: !Word64, - -- | Active slots coefficient. This parameter represents the proportion - -- of slots in which blocks should be issued. This can be interpreted as - -- the probability that a party holding all the stake will be elected as - -- leader for a given slot. 
- praosLeaderF :: !SL.ActiveSlotCoeff, - -- | See 'Globals.securityParameter'. - praosSecurityParam :: !SecurityParam, - -- | Maximum number of KES iterations, see 'Globals.maxKESEvo'. - praosMaxKESEvo :: !Word64, - -- | All blocks invalid after this protocol version, see - -- 'Globals.maxMajorPV'. - praosMaxMajorPV :: !MaxMajorProtVer, - -- | The number of slots before the start of an epoch where the - -- corresponding epoch nonce is snapshotted. This has to be at least one - -- stability window such that the nonce is stable at the beginning of the - -- epoch. Ouroboros Genesis requires this to be even larger, see - -- 'SL.computeRandomnessStabilisationWindow'. - praosRandomnessStabilisationWindow :: !Word64 + { praosSlotsPerKESPeriod :: !Word64 + -- ^ See 'Globals.slotsPerKESPeriod'. + , praosLeaderF :: !SL.ActiveSlotCoeff + -- ^ Active slots coefficient. This parameter represents the proportion + -- of slots in which blocks should be issued. This can be interpreted as + -- the probability that a party holding all the stake will be elected as + -- leader for a given slot. + , praosSecurityParam :: !SecurityParam + -- ^ See 'Globals.securityParameter'. + , praosMaxKESEvo :: !Word64 + -- ^ Maximum number of KES iterations, see 'Globals.maxKESEvo'. + , praosMaxMajorPV :: !MaxMajorProtVer + -- ^ All blocks invalid after this protocol version, see + -- 'Globals.maxMajorPV'. + , praosRandomnessStabilisationWindow :: !Word64 + -- ^ The number of slots before the start of an epoch where the + -- corresponding epoch nonce is snapshotted. This has to be at least one + -- stability window such that the nonce is stable at the beginning of the + -- epoch. Ouroboros Genesis requires this to be even larger, see + -- 'SL.computeRandomnessStabilisationWindow'. 
} deriving (Generic, NoThunks) @@ -213,19 +242,19 @@ data PraosParams = PraosParams newtype PraosIsLeader c = PraosIsLeader { praosIsLeaderVrfRes :: VRF.CertifiedVRF (VRF c) InputVRF } - deriving (Generic) + deriving Generic instance PraosCrypto c => NoThunks (PraosIsLeader c) -- | Static configuration data instance ConsensusConfig (Praos c) = PraosConfig - { praosParams :: !PraosParams, - praosEpochInfo :: !(EpochInfo (Except History.PastHorizonException)) - -- it's useful for this record to be EpochInfo and one other thing, - -- because the one other thing can then be used as the - -- PartialConsensConfig in the HFC instance. + { praosParams :: !PraosParams + , praosEpochInfo :: !(EpochInfo (Except History.PastHorizonException)) + -- it's useful for this record to be EpochInfo and one other thing, + -- because the one other thing can then be used as the + -- PartialConsensConfig in the HFC instance. } - deriving (Generic) + deriving Generic instance PraosCrypto c => NoThunks (ConsensusConfig (Praos c)) @@ -241,20 +270,20 @@ type PraosValidateView c = Views.HeaderView c -- as a series of nonces which get updated in different ways over the course of -- an epoch. 
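The comment above describes `PraosState` as a set of nonces updated over the course of an epoch; the epoch-boundary turnover that `tickChainDepState` performs further down in this patch can be sketched with toy types. Assumptions: `Nonce` is modeled as a `String` and the combination operator `⭒` as `(<>)`; the real types are hash-based.

```haskell
-- Toy model of the Praos nonce bookkeeping. Field comments name the
-- corresponding PraosState fields.
data St = St
  { candidateNonce      :: String -- praosStateCandidateNonce
  , epochNonce          :: String -- praosStateEpochNonce
  , labNonce            :: String -- praosStateLabNonce
  , lastEpochBlockNonce :: String -- praosStateLastEpochBlockNonce
  }
  deriving (Show, Eq)

-- On an epoch boundary the epoch nonce is fixed from the candidate nonce
-- combined with the last-epoch-block nonce, and the latter is refreshed
-- from the LAB nonce; on any other tick the state is unchanged.
tick :: Bool -> St -> St
tick newEpoch st
  | newEpoch =
      st
        { epochNonce = candidateNonce st <> lastEpochBlockNonce st
        , lastEpochBlockNonce = labNonce st
        }
  | otherwise = st

main :: IO ()
main = do
  let st = St{candidateNonce = "c", epochNonce = "e", labNonce = "l", lastEpochBlockNonce = "b"}
  print (epochNonce (tick True st))          -- "cb"
  print (lastEpochBlockNonce (tick True st)) -- "l"
  print (tick False st == st)                -- True
```

This is only the ticking half of the story; the evolving and candidate nonces are advanced per applied header by `reupdateChainDepState`, also reformatted in this patch.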
data PraosState = PraosState - { praosStateLastSlot :: !(WithOrigin SlotNo), - -- | Operation Certificate counters - praosStateOCertCounters :: !(Map (KeyHash 'BlockIssuer) Word64), - -- | Evolving nonce - praosStateEvolvingNonce :: !Nonce, - -- | Candidate nonce - praosStateCandidateNonce :: !Nonce, - -- | Epoch nonce - praosStateEpochNonce :: !Nonce, - -- | Nonce constructed from the hash of the previous block - praosStateLabNonce :: !Nonce, - -- | Nonce corresponding to the LAB nonce of the last block of the previous - -- epoch - praosStateLastEpochBlockNonce :: !Nonce + { praosStateLastSlot :: !(WithOrigin SlotNo) + , praosStateOCertCounters :: !(Map (KeyHash 'BlockIssuer) Word64) + -- ^ Operation Certificate counters + , praosStateEvolvingNonce :: !Nonce + -- ^ Evolving nonce + , praosStateCandidateNonce :: !Nonce + -- ^ Candidate nonce + , praosStateEpochNonce :: !Nonce + -- ^ Epoch nonce + , praosStateLabNonce :: !Nonce + -- ^ Nonce constructed from the hash of the previous block + , praosStateLastEpochBlockNonce :: !Nonce + -- ^ Nonce corresponding to the LAB nonce of the last block of the previous + -- epoch } deriving (Generic, Show, Eq) @@ -269,44 +298,44 @@ instance FromCBOR PraosState where instance Serialise PraosState where encode PraosState - { praosStateLastSlot, - praosStateOCertCounters, - praosStateEvolvingNonce, - praosStateCandidateNonce, - praosStateEpochNonce, - praosStateLabNonce, - praosStateLastEpochBlockNonce + { praosStateLastSlot + , praosStateOCertCounters + , praosStateEvolvingNonce + , praosStateCandidateNonce + , praosStateEpochNonce + , praosStateLabNonce + , praosStateLastEpochBlockNonce } = encodeVersion 0 $ mconcat - [ CBOR.encodeListLen 7, - toCBOR praosStateLastSlot, - toCBOR praosStateOCertCounters, - toCBOR praosStateEvolvingNonce, - toCBOR praosStateCandidateNonce, - toCBOR praosStateEpochNonce, - toCBOR praosStateLabNonce, - toCBOR praosStateLastEpochBlockNonce + [ CBOR.encodeListLen 7 + , toCBOR praosStateLastSlot + , 
toCBOR praosStateOCertCounters + , toCBOR praosStateEvolvingNonce + , toCBOR praosStateCandidateNonce + , toCBOR praosStateEpochNonce + , toCBOR praosStateLabNonce + , toCBOR praosStateLastEpochBlockNonce ] decode = decodeVersion [(0, Decode decodePraosState)] - where - decodePraosState = do - enforceSize "PraosState" 7 - PraosState - <$> fromCBOR - <*> fromCBOR - <*> fromCBOR - <*> fromCBOR - <*> fromCBOR - <*> fromCBOR - <*> fromCBOR + where + decodePraosState = do + enforceSize "PraosState" 7 + PraosState + <$> fromCBOR + <*> fromCBOR + <*> fromCBOR + <*> fromCBOR + <*> fromCBOR + <*> fromCBOR + <*> fromCBOR data instance Ticked PraosState = TickedPraosState - { tickedPraosStateChainDepState :: PraosState, - tickedPraosStateLedgerView :: Views.LedgerView + { tickedPraosStateChainDepState :: PraosState + , tickedPraosStateLedgerView :: Views.LedgerView } -- | Errors which we might encounter @@ -348,7 +377,7 @@ data PraosValidationErr c !String -- error message given by Consensus Layer | NoCounterForKeyHashOCERT !(KeyHash 'BlockIssuer) -- stake pool key hash - deriving (Generic) + deriving Generic deriving instance PraosCrypto c => Eq (PraosValidationErr c) @@ -370,8 +399,8 @@ instance PraosCrypto c => ConsensusProtocol (Praos c) where checkIsLeader cfg PraosCanBeLeader - { praosCanBeLeaderSignKeyVRF, - praosCanBeLeaderColdVerKey + { praosCanBeLeaderSignKeyVRF + , praosCanBeLeaderColdVerKey } slot cs = @@ -382,14 +411,14 @@ instance PraosCrypto c => ConsensusProtocol (Praos c) where { praosIsLeaderVrfRes = coerce rho } else Nothing - where - chainState = tickedPraosStateChainDepState cs - lv = tickedPraosStateLedgerView cs - eta0 = praosStateEpochNonce chainState - vkhCold = SL.hashKey praosCanBeLeaderColdVerKey - rho' = mkInputVRF slot eta0 + where + chainState = tickedPraosStateChainDepState cs + lv = tickedPraosStateLedgerView cs + eta0 = praosStateEpochNonce chainState + vkhCold = SL.hashKey praosCanBeLeaderColdVerKey + rho' = mkInputVRF slot eta0 - rho = 
VRF.evalCertified () rho' praosCanBeLeaderSignKeyVRF + rho = VRF.evalCertified () rho' praosCanBeLeaderSignKeyVRF -- Updating the chain dependent state for Praos. -- @@ -400,30 +429,30 @@ instance PraosCrypto c => ConsensusProtocol (Praos c) where -- - Update the "last block of previous epoch" nonce to the nonce derived from -- the last applied block. tickChainDepState - PraosConfig {praosEpochInfo} + PraosConfig{praosEpochInfo} lv slot st = TickedPraosState - { tickedPraosStateChainDepState = st', - tickedPraosStateLedgerView = lv + { tickedPraosStateChainDepState = st' + , tickedPraosStateLedgerView = lv } - where - newEpoch = - isNewEpoch - (History.toPureEpochInfo praosEpochInfo) - (praosStateLastSlot st) - slot - st' = - if newEpoch - then - st - { praosStateEpochNonce = - praosStateCandidateNonce st - ⭒ praosStateLastEpochBlockNonce st, - praosStateLastEpochBlockNonce = praosStateLabNonce st - } - else st + where + newEpoch = + isNewEpoch + (History.toPureEpochInfo praosEpochInfo) + (praosStateLastSlot st) + slot + st' = + if newEpoch + then + st + { praosStateEpochNonce = + praosStateCandidateNonce st + ⭒ praosStateLastEpochBlockNonce st + , praosStateLastEpochBlockNonce = praosStateLabNonce st + } + else st -- Validate and update the chain dependent state as a result of processing a -- new header. @@ -435,7 +464,7 @@ instance PraosCrypto c => ConsensusProtocol (Praos c) where -- updateChainDepState cfg@( PraosConfig - PraosParams {praosLeaderF} + PraosParams{praosLeaderF} _ ) b @@ -449,9 +478,9 @@ instance PraosCrypto c => ConsensusProtocol (Praos c) where validateVRFSignature (praosStateEpochNonce cs) lv praosLeaderF b -- Finally, we apply the changes from this header to the chain state. 
pure $ reupdateChainDepState cfg b slot tcs - where - lv = tickedPraosStateLedgerView tcs - cs = tickedPraosStateChainDepState tcs + where + lv = tickedPraosStateLedgerView tcs + cs = tickedPraosStateChainDepState tcs -- Re-update the chain dependent state as a result of processing a header. -- @@ -462,37 +491,37 @@ instance PraosCrypto c => ConsensusProtocol (Praos c) where -- - Update the operational certificate counter. reupdateChainDepState _cfg@( PraosConfig - PraosParams {praosRandomnessStabilisationWindow} + PraosParams{praosRandomnessStabilisationWindow} ei ) b slot tcs = cs - { praosStateLastSlot = NotOrigin slot, - praosStateLabNonce = prevHashToNonce (Views.hvPrevHash b), - praosStateEvolvingNonce = newEvolvingNonce, - praosStateCandidateNonce = + { praosStateLastSlot = NotOrigin slot + , praosStateLabNonce = prevHashToNonce (Views.hvPrevHash b) + , praosStateEvolvingNonce = newEvolvingNonce + , praosStateCandidateNonce = if slot +* Duration praosRandomnessStabilisationWindow < firstSlotNextEpoch then newEvolvingNonce - else praosStateCandidateNonce cs, - praosStateOCertCounters = + else praosStateCandidateNonce cs + , praosStateOCertCounters = Map.insert hk n $ praosStateOCertCounters cs } - where - epochInfoWithErr = - hoistEpochInfo - (either throw pure . runExcept) - ei - firstSlotNextEpoch = runIdentity $ do - EpochNo currentEpochNo <- epochInfoEpoch epochInfoWithErr slot - let nextEpoch = EpochNo $ currentEpochNo + 1 - epochInfoFirst epochInfoWithErr nextEpoch - cs = tickedPraosStateChainDepState tcs - eta = vrfNonceValue (Proxy @c) $ Views.hvVrfRes b - newEvolvingNonce = praosStateEvolvingNonce cs ⭒ eta - OCert _ n _ _ = Views.hvOCert b - hk = hashKey $ Views.hvVK b + where + epochInfoWithErr = + hoistEpochInfo + (either throw pure . 
runExcept) + ei + firstSlotNextEpoch = runIdentity $ do + EpochNo currentEpochNo <- epochInfoEpoch epochInfoWithErr slot + let nextEpoch = EpochNo $ currentEpochNo + 1 + epochInfoFirst epochInfoWithErr nextEpoch + cs = tickedPraosStateChainDepState tcs + eta = vrfNonceValue (Proxy @c) $ Views.hvVrfRes b + newEvolvingNonce = praosStateEvolvingNonce cs ⭒ eta + OCert _ n _ _ = Views.hvOCert b + hk = hashKey $ Views.hvVK b -- | Check whether this node meets the leader threshold to issue a block. meetsLeaderThreshold :: @@ -503,24 +532,23 @@ meetsLeaderThreshold :: VRF.CertifiedVRF (VRF c) InputVRF -> Bool meetsLeaderThreshold - PraosConfig {praosParams} - Views.LedgerView {Views.lvPoolDistr} + PraosConfig{praosParams} + Views.LedgerView{Views.lvPoolDistr} keyHash rho = checkLeaderNatValue (vrfLeaderValue (Proxy @c) rho) r (praosLeaderF praosParams) - where - SL.PoolDistr poolDistr _totalActiveStake = lvPoolDistr - r = - maybe 0 SL.individualPoolStake $ - Map.lookup keyHash poolDistr + where + SL.PoolDistr poolDistr _totalActiveStake = lvPoolDistr + r = + maybe 0 SL.individualPoolStake $ + Map.lookup keyHash poolDistr validateVRFSignature :: forall c. - ( PraosCrypto c - ) => + PraosCrypto c => Nonce -> Views.LedgerView -> ActiveSlotCoeff -> @@ -545,8 +573,9 @@ doValidateVRFSignature eta0 pd f b = do Just (SL.IndividualPoolStake sigma _totalPoolStake vrfHK) -> do let vrfHKStake = SL.fromVRFVerKeyHash vrfHK vrfHKBlock = VRF.hashVerKeyVRF vrfK - vrfHKStake == vrfHKBlock - ?! VRFKeyWrongVRFKey hk vrfHKStake vrfHKBlock + vrfHKStake + == vrfHKBlock + ?! VRFKeyWrongVRFKey hk vrfHKStake vrfHKBlock VRF.verifyCertified () vrfK @@ -555,12 +584,12 @@ doValidateVRFSignature eta0 pd f b = do ?! VRFKeyBadProof slot eta0 vrfCert checkLeaderNatValue vrfLeaderVal sigma f ?! VRFLeaderValueTooBig (bvValue vrfLeaderVal) sigma f - where - hk = coerceKeyRole . hashKey . 
Views.hvVK $ b - vrfK = Views.hvVrfVK b - vrfCert = Views.hvVrfRes b - vrfLeaderVal = vrfLeaderValue (Proxy @c) vrfCert - slot = Views.hvSlotNo b + where + hk = coerceKeyRole . hashKey . Views.hvVK $ b + vrfK = Views.hvVrfVK b + vrfCert = Views.hvVrfRes b + vrfLeaderVal = vrfLeaderValue (Proxy @c) vrfCert + slot = Views.hvSlotNo b validateKESSignature :: PraosCrypto c => @@ -571,9 +600,9 @@ validateKESSignature :: Except (PraosValidationErr c) () validateKESSignature _cfg@( PraosConfig - PraosParams{praosMaxKESEvo, praosSlotsPerKESPeriod} - _ei - ) + PraosParams{praosMaxKESEvo, praosSlotsPerKESPeriod} + _ei + ) Views.LedgerView{Views.lvPoolDistr = SL.PoolDistr lvPoolDistr _totalActiveStake} ocertCounters = doValidateKESSignature praosMaxKESEvo praosSlotsPerKESPeriod lvPoolDistr ocertCounters @@ -597,10 +626,10 @@ doValidateKESSignature praosMaxKESEvo praosSlotsPerKESPeriod stakeDistribution o -- this is required to prevent an arithmetic underflow, in the case of kp_ < -- c0_ we get the above `KESBeforeStartOCERT` failure in the transition. - DSIGN.verifySignedDSIGN () vkcold (OCert.ocertToSignable oc) tau ?!: - InvalidSignatureOCERT n c0 - KES.verifySignedKES () vk_hot t (Views.hvSigned b) (Views.hvSignature b) ?!: - InvalidKesSignatureOCERT kp_ c0_ t praosMaxKESEvo + DSIGN.verifySignedDSIGN () vkcold (OCert.ocertToSignable oc) tau + ?!: InvalidSignatureOCERT n c0 + KES.verifySignedKES () vk_hot t (Views.hvSigned b) (Views.hvSignature b) + ?!: InvalidKesSignatureOCERT kp_ c0_ t praosMaxKESEvo case currentIssueNo of Nothing -> do @@ -608,22 +637,22 @@ doValidateKESSignature praosMaxKESEvo praosSlotsPerKESPeriod stakeDistribution o Just m -> do m <= n ?! CounterTooSmallOCERT m n n <= m + 1 ?! 
CounterOverIncrementedOCERT m n - where - oc@(OCert vk_hot n c0@(KESPeriod c0_) tau) = Views.hvOCert b - (VKey vkcold) = Views.hvVK b - SlotNo s = Views.hvSlotNo b - hk = hashKey $ Views.hvVK b - kp@(KESPeriod kp_) = - if praosSlotsPerKESPeriod == 0 - then error "kesPeriod: slots per KES period was set to zero" - else KESPeriod . fromIntegral $ s `div` praosSlotsPerKESPeriod - - currentIssueNo :: Maybe Word64 - currentIssueNo - | Map.member hk ocertCounters = Map.lookup hk ocertCounters - | Set.member (coerceKeyRole hk) (Map.keysSet stakeDistribution) = - Just 0 - | otherwise = Nothing + where + oc@(OCert vk_hot n c0@(KESPeriod c0_) tau) = Views.hvOCert b + (VKey vkcold) = Views.hvVK b + SlotNo s = Views.hvSlotNo b + hk = hashKey $ Views.hvVK b + kp@(KESPeriod kp_) = + if praosSlotsPerKESPeriod == 0 + then error "kesPeriod: slots per KES period was set to zero" + else KESPeriod . fromIntegral $ s `div` praosSlotsPerKESPeriod + + currentIssueNo :: Maybe Word64 + currentIssueNo + | Map.member hk ocertCounters = Map.lookup hk ocertCounters + | Set.member (coerceKeyRole hk) (Map.keysSet stakeDistribution) = + Just 0 + | otherwise = Nothing {------------------------------------------------------------------------------- CannotForge @@ -640,12 +669,12 @@ data PraosCannotForge c -- end period of the key, is caught when trying to update the key in -- 'updateForgeState'. PraosCannotForgeKeyNotUsableYet - !OCert.KESPeriod - -- ^ Current KES period according to the wallclock slot, i.e., the KES + -- | Current KES period according to the wallclock slot, i.e., the KES -- period in which we want to use the key. !OCert.KESPeriod - -- ^ Start KES period of the KES key. - deriving (Generic) + -- | Start KES period of the KES key. 
+ !OCert.KESPeriod + deriving Generic deriving instance PraosCrypto c => Show (PraosCannotForge c) @@ -655,22 +684,21 @@ praosCheckCanForge :: HotKey.KESInfo -> Either (PraosCannotForge c) () praosCheckCanForge - PraosConfig {praosParams} + PraosConfig{praosParams} curSlot kesInfo - | let startPeriod = HotKey.kesStartPeriod kesInfo, - startPeriod > wallclockPeriod = - throwError $ PraosCannotForgeKeyNotUsableYet wallclockPeriod startPeriod + | let startPeriod = HotKey.kesStartPeriod kesInfo + , startPeriod > wallclockPeriod = + throwError $ PraosCannotForgeKeyNotUsableYet wallclockPeriod startPeriod | otherwise = - return () - where - -- The current wallclock KES period - wallclockPeriod :: OCert.KESPeriod - wallclockPeriod = - OCert.KESPeriod $ - fromIntegral $ - unSlotNo curSlot `div` praosSlotsPerKESPeriod praosParams - + return () + where + -- The current wallclock KES period + wallclockPeriod :: OCert.KESPeriod + wallclockPeriod = + OCert.KESPeriod $ + fromIntegral $ + unSlotNo curSlot `div` praosSlotsPerKESPeriod praosParams {------------------------------------------------------------------------------- PraosProtocolSupportsNode @@ -680,28 +708,28 @@ instance PraosCrypto c => PraosProtocolSupportsNode (Praos c) where type PraosProtocolSupportsNodeCrypto (Praos c) = c getPraosNonces _prx cdst = - PraosNonces { - candidateNonce = praosStateCandidateNonce - , epochNonce = praosStateEpochNonce - , evolvingNonce = praosStateEvolvingNonce - , labNonce = praosStateLabNonce - , previousLabNonce = praosStateLastEpochBlockNonce - } - where - PraosState { - praosStateCandidateNonce - , praosStateEpochNonce - , praosStateEvolvingNonce - , praosStateLabNonce - , praosStateLastEpochBlockNonce - } = cdst + PraosNonces + { candidateNonce = praosStateCandidateNonce + , epochNonce = praosStateEpochNonce + , evolvingNonce = praosStateEvolvingNonce + , labNonce = praosStateLabNonce + , previousLabNonce = praosStateLastEpochBlockNonce + } + where + PraosState + { 
praosStateCandidateNonce + , praosStateEpochNonce + , praosStateEvolvingNonce + , praosStateLabNonce + , praosStateLastEpochBlockNonce + } = cdst getOpCertCounters _prx cdst = - praosStateOCertCounters - where - PraosState { - praosStateOCertCounters - } = cdst + praosStateOCertCounters + where + PraosState + { praosStateOCertCounters + } = cdst {------------------------------------------------------------------------------- Translation from transitional Praos @@ -713,32 +741,30 @@ instance PraosCrypto c => PraosProtocolSupportsNode (Praos c) where -- - They share the same ADDRHASH algorithm -- - They share the same DSIGN verification keys -- - They share the same VRF verification keys -instance - TranslateProto (TPraos c) (Praos c) - where - translateLedgerView _ SL.LedgerView {SL.lvPoolDistr, SL.lvChainChecks} = - Views.LedgerView - { Views.lvPoolDistr = lvPoolDistr, - Views.lvMaxHeaderSize = SL.ccMaxBHSize lvChainChecks, - Views.lvMaxBodySize = SL.ccMaxBBSize lvChainChecks, - Views.lvProtocolVersion = SL.ccProtocolVersion lvChainChecks - } +instance TranslateProto (TPraos c) (Praos c) where + translateLedgerView _ SL.LedgerView{SL.lvPoolDistr, SL.lvChainChecks} = + Views.LedgerView + { Views.lvPoolDistr = lvPoolDistr + , Views.lvMaxHeaderSize = SL.ccMaxBHSize lvChainChecks + , Views.lvMaxBodySize = SL.ccMaxBBSize lvChainChecks + , Views.lvProtocolVersion = SL.ccProtocolVersion lvChainChecks + } translateChainDepState _ tpState = PraosState - { praosStateLastSlot = tpraosStateLastSlot tpState, - praosStateOCertCounters = Map.mapKeysMonotonic coerce certCounters, - praosStateEvolvingNonce = evolvingNonce, - praosStateCandidateNonce = candidateNonce, - praosStateEpochNonce = SL.ticknStateEpochNonce csTickn, - praosStateLabNonce = csLabNonce, - praosStateLastEpochBlockNonce = SL.ticknStatePrevHashNonce csTickn + { praosStateLastSlot = tpraosStateLastSlot tpState + , praosStateOCertCounters = Map.mapKeysMonotonic coerce certCounters + , praosStateEvolvingNonce = 
evolvingNonce + , praosStateCandidateNonce = candidateNonce + , praosStateEpochNonce = SL.ticknStateEpochNonce csTickn + , praosStateLabNonce = csLabNonce + , praosStateLastEpochBlockNonce = SL.ticknStatePrevHashNonce csTickn } - where - SL.ChainDepState {SL.csProtocol, SL.csTickn, SL.csLabNonce} = - tpraosStateChainDepState tpState - SL.PrtclState certCounters evolvingNonce candidateNonce = - csProtocol + where + SL.ChainDepState{SL.csProtocol, SL.csTickn, SL.csLabNonce} = + tpraosStateChainDepState tpState + SL.PrtclState certCounters evolvingNonce candidateNonce = + csProtocol {------------------------------------------------------------------------------- Util diff --git a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Common.hs b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Common.hs index 210457ea89..70ed7245e1 100644 --- a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Common.hs +++ b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Common.hs @@ -7,32 +7,33 @@ {-# LANGUAGE UndecidableInstances #-} -- | Various things common to iterations of the Praos protocol. -module Ouroboros.Consensus.Protocol.Praos.Common ( - MaxMajorProtVer (..) +module Ouroboros.Consensus.Protocol.Praos.Common + ( MaxMajorProtVer (..) , PraosCanBeLeader (..) , PraosChainSelectView (..) , VRFTiebreakerFlavor (..) + -- * node support , PraosNonces (..) , PraosProtocolSupportsNode (..) 
) where -import qualified Cardano.Crypto.VRF as VRF -import Cardano.Ledger.BaseTypes (Nonce) -import qualified Cardano.Ledger.BaseTypes as SL -import Cardano.Ledger.Keys (KeyHash, KeyRole (BlockIssuer)) -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Protocol.Crypto (Crypto, VRF) -import qualified Cardano.Protocol.TPraos.OCert as OCert -import Cardano.Slotting.Block (BlockNo) -import Cardano.Slotting.Slot (SlotNo) -import Data.Function (on) -import Data.Map.Strict (Map) -import Data.Ord (Down (Down)) -import Data.Word (Word64) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Protocol.Abstract +import Cardano.Crypto.VRF qualified as VRF +import Cardano.Ledger.BaseTypes (Nonce) +import Cardano.Ledger.BaseTypes qualified as SL +import Cardano.Ledger.Keys (KeyHash, KeyRole (BlockIssuer)) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Protocol.Crypto (Crypto, VRF) +import Cardano.Protocol.TPraos.OCert qualified as OCert +import Cardano.Slotting.Block (BlockNo) +import Cardano.Slotting.Slot (SlotNo) +import Data.Function (on) +import Data.Map.Strict (Map) +import Data.Ord (Down (Down)) +import Data.Word (Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Protocol.Abstract -- | The maximum major protocol version. -- @@ -62,17 +63,17 @@ newtype MaxMajorProtVer = MaxMajorProtVer -- | View of the tip of a header fragment for chain selection. data PraosChainSelectView c = PraosChainSelectView - { csvChainLength :: BlockNo, - csvSlotNo :: SlotNo, - csvIssuer :: SL.VKey 'SL.BlockIssuer, - csvIssueNo :: Word64, - csvTieBreakVRF :: VRF.OutputVRF (VRF c) + { csvChainLength :: BlockNo + , csvSlotNo :: SlotNo + , csvIssuer :: SL.VKey 'SL.BlockIssuer + , csvIssueNo :: Word64 + , csvTieBreakVRF :: VRF.OutputVRF (VRF c) } deriving (Show, Eq, Generic, NoThunks) -- | When to compare the VRF tiebreakers. 
-data VRFTiebreakerFlavor = - -- | Always compare the VRF tiebreakers. This is the behavior of all eras +data VRFTiebreakerFlavor + = -- | Always compare the VRF tiebreakers. This is the behavior of all eras -- before Conway. Once mainnet has transitioned to Conway, we can remove -- this option. (The honest /historical/ Ouroboros chain cannot rely on -- tiebreakers to win, so /retroactively/ disabling the tiebreaker won't @@ -96,45 +97,45 @@ data VRFTiebreakerFlavor = -- pools. RestrictedVRFTiebreaker SlotNo deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -- Used to implement the 'Ord' and 'ChainOrder' instances for Praos. comparePraos :: - VRFTiebreakerFlavor - -> PraosChainSelectView c - -> PraosChainSelectView c - -> Ordering + VRFTiebreakerFlavor -> + PraosChainSelectView c -> + PraosChainSelectView c -> + Ordering comparePraos tiebreakerFlavor = - (compare `on` csvChainLength) + (compare `on` csvChainLength) <> when' issueNoArmed (compare `on` csvIssueNo) <> when' vrfArmed (compare `on` Down . csvTieBreakVRF) - where - -- When the predicate @p@ returns 'True', use the given comparison function, - -- otherwise, no preference. - when' :: - (a -> a -> Bool) - -> (a -> a -> Ordering) - -> (a -> a -> Ordering) - when' p comp a1 a2 = - if p a1 a2 then comp a1 a2 else EQ + where + -- When the predicate @p@ returns 'True', use the given comparison function, + -- otherwise, no preference. + when' :: + (a -> a -> Bool) -> + (a -> a -> Ordering) -> + (a -> a -> Ordering) + when' p comp a1 a2 = + if p a1 a2 then comp a1 a2 else EQ - -- Only compare the issue numbers when the issuers and slots are identical. - -- Note that this case implies the VRFs also coincide. - issueNoArmed v1 v2 = - csvSlotNo v1 == csvSlotNo v2 - && csvIssuer v1 == csvIssuer v2 + -- Only compare the issue numbers when the issuers and slots are identical. + -- Note that this case implies the VRFs also coincide. 
+ issueNoArmed v1 v2 = + csvSlotNo v1 == csvSlotNo v2 + && csvIssuer v1 == csvIssuer v2 - -- Whether to do a VRF comparison. - vrfArmed v1 v2 = case tiebreakerFlavor of - UnrestrictedVRFTiebreaker -> True - RestrictedVRFTiebreaker maxDist -> - slotDist (csvSlotNo v1) (csvSlotNo v2) <= maxDist + -- Whether to do a VRF comparison. + vrfArmed v1 v2 = case tiebreakerFlavor of + UnrestrictedVRFTiebreaker -> True + RestrictedVRFTiebreaker maxDist -> + slotDist (csvSlotNo v1) (csvSlotNo v2) <= maxDist - slotDist :: SlotNo -> SlotNo -> SlotNo - slotDist s t - -- slot numbers are unsigned, so have to take care with subtraction - | s >= t = s - t - | otherwise = t - s + slotDist :: SlotNo -> SlotNo -> SlotNo + slotDist s t + -- slot numbers are unsigned, so have to take care with subtraction + | s >= t = s - t + | otherwise = t - s -- | We order between chains as follows: -- @@ -244,27 +245,27 @@ instance Crypto c => ChainOrder (PraosChainSelectView c) where preferCandidate cfg ours cand = comparePraos cfg ours cand == LT data PraosCanBeLeader c = PraosCanBeLeader - { -- | Certificate delegating rights from the stake pool cold key (or - -- genesis stakeholder delegate cold key) to the online KES key. - praosCanBeLeaderOpCert :: !(OCert.OCert c), - -- | Stake pool cold key or genesis stakeholder delegate cold key. - praosCanBeLeaderColdVerKey :: !(SL.VKey 'SL.BlockIssuer), - praosCanBeLeaderSignKeyVRF :: !(VRF.SignKeyVRF (VRF c)) + { praosCanBeLeaderOpCert :: !(OCert.OCert c) + -- ^ Certificate delegating rights from the stake pool cold key (or + -- genesis stakeholder delegate cold key) to the online KES key. + , praosCanBeLeaderColdVerKey :: !(SL.VKey 'SL.BlockIssuer) + -- ^ Stake pool cold key or genesis stakeholder delegate cold key. 
+ , praosCanBeLeaderSignKeyVRF :: !(VRF.SignKeyVRF (VRF c)) } - deriving (Generic) + deriving Generic instance Crypto c => NoThunks (PraosCanBeLeader c) -- | See 'PraosProtocolSupportsNode' -data PraosNonces = PraosNonces { - candidateNonce :: !Nonce - , epochNonce :: !Nonce - , evolvingNonce :: !Nonce - -- | Nonce constructed from the hash of the Last Applied Block - , labNonce :: !Nonce - -- | Nonce corresponding to the LAB nonce of the last block of the previous - -- epoch +data PraosNonces = PraosNonces + { candidateNonce :: !Nonce + , epochNonce :: !Nonce + , evolvingNonce :: !Nonce + , labNonce :: !Nonce + -- ^ Nonce constructed from the hash of the Last Applied Block , previousLabNonce :: !Nonce + -- ^ Nonce corresponding to the LAB nonce of the last block of the previous + -- epoch } -- | The node has Praos-aware code that inspects nonces in order to support diff --git a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Header.hs b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Header.hs index cc4cc22a55..b7ba33d2b4 100644 --- a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Header.hs +++ b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Header.hs @@ -20,66 +20,81 @@ -- header (in 'Ouroboros.Consensus.Protocol.Praos.Views') which extracts just -- the fields needed for the Praos protocol. This also allows us to hide the -- more detailed construction of the header. -module Ouroboros.Consensus.Protocol.Praos.Header ( - Header (Header, headerBody, headerSig) +module Ouroboros.Consensus.Protocol.Praos.Header + ( Header (Header, headerBody, headerSig) , HeaderBody (..) 
, headerHash , headerSize ) where -import qualified Cardano.Crypto.Hash as Hash -import qualified Cardano.Crypto.KES as KES -import Cardano.Crypto.Util - (SignableRepresentation (getSignableRepresentation)) -import qualified Cardano.Crypto.VRF as VRF -import Cardano.Ledger.BaseTypes (ProtVer (pvMajor)) -import Cardano.Ledger.Binary (Annotator (..), DecCBOR (decCBOR), - EncCBOR (..), ToCBOR (..), encodedSigKESSizeExpr, - serialize', unCBORGroup, withSlice) -import Cardano.Ledger.Binary.Coders -import Cardano.Ledger.Binary.Crypto (decodeSignedKES, decodeVerKeyVRF, - encodeSignedKES, encodeVerKeyVRF) -import qualified Cardano.Ledger.Binary.Plain as Plain -import Cardano.Ledger.Hashes (EraIndependentBlockBody, - EraIndependentBlockHeader, HASH) -import Cardano.Ledger.Keys (KeyRole (BlockIssuer), VKey) -import Cardano.Protocol.Crypto (Crypto, KES, VRF) -import Cardano.Protocol.TPraos.BHeader (PrevHash) -import Cardano.Protocol.TPraos.OCert (OCert) -import Cardano.Slotting.Block (BlockNo) -import Cardano.Slotting.Slot (SlotNo) -import qualified Data.ByteString as BS -import qualified Data.ByteString.Lazy as BSL -import Data.Word (Word32) -import GHC.Generics (Generic) -import NoThunks.Class (AllowThunksIn (..), NoThunks (..)) -import Ouroboros.Consensus.Protocol.Praos.VRF (InputVRF) +import Cardano.Crypto.Hash qualified as Hash +import Cardano.Crypto.KES qualified as KES +import Cardano.Crypto.Util + ( SignableRepresentation (getSignableRepresentation) + ) +import Cardano.Crypto.VRF qualified as VRF +import Cardano.Ledger.BaseTypes (ProtVer (pvMajor)) +import Cardano.Ledger.Binary + ( Annotator (..) + , DecCBOR (decCBOR) + , EncCBOR (..) + , ToCBOR (..) 
+ , encodedSigKESSizeExpr + , serialize' + , unCBORGroup + , withSlice + ) +import Cardano.Ledger.Binary.Coders +import Cardano.Ledger.Binary.Crypto + ( decodeSignedKES + , decodeVerKeyVRF + , encodeSignedKES + , encodeVerKeyVRF + ) +import Cardano.Ledger.Binary.Plain qualified as Plain +import Cardano.Ledger.Hashes + ( EraIndependentBlockBody + , EraIndependentBlockHeader + , HASH + ) +import Cardano.Ledger.Keys (KeyRole (BlockIssuer), VKey) +import Cardano.Protocol.Crypto (Crypto, KES, VRF) +import Cardano.Protocol.TPraos.BHeader (PrevHash) +import Cardano.Protocol.TPraos.OCert (OCert) +import Cardano.Slotting.Block (BlockNo) +import Cardano.Slotting.Slot (SlotNo) +import Data.ByteString qualified as BS +import Data.ByteString.Lazy qualified as BSL +import Data.Word (Word32) +import GHC.Generics (Generic) +import NoThunks.Class (AllowThunksIn (..), NoThunks (..)) +import Ouroboros.Consensus.Protocol.Praos.VRF (InputVRF) -- | The body of the header is the part which gets hashed to form the hash -- chain. 
data HeaderBody crypto = HeaderBody - { -- | block number - hbBlockNo :: !BlockNo, - -- | block slot - hbSlotNo :: !SlotNo, - -- | Hash of the previous block header - hbPrev :: !PrevHash, - -- | verification key of block issuer - hbVk :: !(VKey 'BlockIssuer), - -- | VRF verification key for block issuer - hbVrfVk :: !(VRF.VerKeyVRF (VRF crypto)), - -- | Certified VRF value - hbVrfRes :: !(VRF.CertifiedVRF (VRF crypto) InputVRF), - -- | Size of the block body - hbBodySize :: !Word32, - -- | Hash of block body - hbBodyHash :: !(Hash.Hash HASH EraIndependentBlockBody), - -- | operational certificate - hbOCert :: !(OCert crypto), - -- | protocol version - hbProtVer :: !ProtVer + { hbBlockNo :: !BlockNo + -- ^ block number + , hbSlotNo :: !SlotNo + -- ^ block slot + , hbPrev :: !PrevHash + -- ^ Hash of the previous block header + , hbVk :: !(VKey 'BlockIssuer) + -- ^ verification key of block issuer + , hbVrfVk :: !(VRF.VerKeyVRF (VRF crypto)) + -- ^ VRF verification key for block issuer + , hbVrfRes :: !(VRF.CertifiedVRF (VRF crypto) InputVRF) + -- ^ Certified VRF value + , hbBodySize :: !Word32 + -- ^ Size of the block body + , hbBodyHash :: !(Hash.Hash HASH EraIndependentBlockBody) + -- ^ Hash of block body + , hbOCert :: !(OCert crypto) + -- ^ operational certificate + , hbProtVer :: !ProtVer + -- ^ protocol version } - deriving (Generic) + deriving Generic deriving instance Crypto crypto => Show (HeaderBody crypto) @@ -96,19 +111,21 @@ instance NoThunks (HeaderBody crypto) data HeaderRaw crypto = HeaderRaw - { headerRawBody :: !(HeaderBody crypto), - headerRawSig :: !(KES.SignedKES (KES crypto) (HeaderBody crypto)) + { headerRawBody :: !(HeaderBody crypto) + , headerRawSig :: !(KES.SignedKES (KES crypto) (HeaderBody crypto)) } deriving (Show, Generic) instance Crypto c => Eq (HeaderRaw c) where - h1 == h2 = headerRawSig h1 == headerRawSig h2 - && headerRawBody h1 == headerRawBody h2 + h1 == h2 = + headerRawSig h1 == headerRawSig h2 + && headerRawBody h1 == 
headerRawBody h2 -- | Checks the binary representation first. instance Crypto c => Eq (Header c) where - h1 == h2 = headerBytes h1 == headerBytes h2 - && headerRaw h1 == headerRaw h2 + h1 == h2 = + headerBytes h1 == headerBytes h2 + && headerRaw h1 == headerRaw h2 instance Crypto crypto => @@ -116,20 +133,20 @@ instance -- | Full header type, carrying its own memoised bytes. data Header crypto = HeaderConstr - { headerRaw :: !(HeaderRaw crypto) + { headerRaw :: !(HeaderRaw crypto) , headerBytes :: BS.ByteString -- lazy on purpose, constructed on demand } deriving (Show, Generic) - deriving (NoThunks) via AllowThunksIn '["headerBytes"] (Header crypto) + deriving NoThunks via AllowThunksIn '["headerBytes"] (Header crypto) pattern Header :: Crypto crypto => HeaderBody crypto -> KES.SignedKES (KES crypto) (HeaderBody crypto) -> Header crypto -pattern Header {headerBody, headerSig} <- - HeaderConstr { - headerRaw = +pattern Header{headerBody, headerSig} <- + HeaderConstr + { headerRaw = HeaderRaw { headerRawBody = headerBody , headerRawSig = headerSig @@ -137,14 +154,15 @@ pattern Header {headerBody, headerSig} <- } where Header body sig = - let header = HeaderRaw - { headerRawBody = body - , headerRawSig = sig + let header = + HeaderRaw + { headerRawBody = body + , headerRawSig = sig + } + in HeaderConstr + { headerRaw = header + , headerBytes = serialize' (pvMajor (hbProtVer body)) header } - in HeaderConstr - { headerRaw = header - , headerBytes = serialize' (pvMajor (hbProtVer body)) header - } {-# COMPLETE Header #-} @@ -166,16 +184,16 @@ headerHash = Hash.castHash . 
Hash.hashWithSerialiser toCBOR instance Crypto crypto => EncCBOR (HeaderBody crypto) where encCBOR HeaderBody - { hbBlockNo, - hbSlotNo, - hbPrev, - hbVk, - hbVrfVk, - hbVrfRes, - hbBodySize, - hbBodyHash, - hbOCert, - hbProtVer + { hbBlockNo + , hbSlotNo + , hbPrev + , hbVk + , hbVrfVk + , hbVrfRes + , hbBodySize + , hbBodyHash + , hbOCert + , hbProtVer } = encode $ Rec HeaderBody diff --git a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/VRF.hs b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/VRF.hs index 5b397703b8..99a0f06784 100644 --- a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/VRF.hs +++ b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/VRF.hs @@ -11,36 +11,48 @@ -- | This module implements VRF range extension as described in -- https://iohk.io/en/research/library/papers/on-uc-secure-range-extension-and-batch-verification-for-ecvrf/ -module Ouroboros.Consensus.Protocol.Praos.VRF ( - InputVRF +module Ouroboros.Consensus.Protocol.Praos.VRF + ( InputVRF , VRFUsage (..) 
, mkInputVRF , vrfLeaderValue , vrfNonceValue ) where -import Cardano.Binary (ToCBOR) -import Cardano.Crypto.Hash (Blake2b_256, Hash, castHash, hashToBytes, - hashWith, sizeHash) -import qualified Cardano.Crypto.Hash as Hash -import Cardano.Crypto.Util - (SignableRepresentation (getSignableRepresentation), - bytesToNatural) -import Cardano.Crypto.VRF (CertifiedVRF (certifiedOutput), - OutputVRF (..), getOutputVRFBytes) -import Cardano.Ledger.BaseTypes (Nonce (NeutralNonce, Nonce)) -import Cardano.Ledger.Binary (runByteBuilder) -import Cardano.Ledger.Hashes (HASH) -import Cardano.Ledger.Slot (SlotNo (SlotNo)) -import Cardano.Protocol.Crypto (Crypto (VRF)) -import Cardano.Protocol.TPraos.BHeader (BoundedNatural, - assertBoundedNatural) -import qualified Data.ByteString.Builder as BS -import qualified Data.ByteString.Builder.Extra as BS -import Data.Proxy (Proxy (Proxy)) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Numeric.Natural (Natural) +import Cardano.Binary (ToCBOR) +import Cardano.Crypto.Hash + ( Blake2b_256 + , Hash + , castHash + , hashToBytes + , hashWith + , sizeHash + ) +import Cardano.Crypto.Hash qualified as Hash +import Cardano.Crypto.Util + ( SignableRepresentation (getSignableRepresentation) + , bytesToNatural + ) +import Cardano.Crypto.VRF + ( CertifiedVRF (certifiedOutput) + , OutputVRF (..) + , getOutputVRFBytes + ) +import Cardano.Ledger.BaseTypes (Nonce (NeutralNonce, Nonce)) +import Cardano.Ledger.Binary (runByteBuilder) +import Cardano.Ledger.Hashes (HASH) +import Cardano.Ledger.Slot (SlotNo (SlotNo)) +import Cardano.Protocol.Crypto (Crypto (VRF)) +import Cardano.Protocol.TPraos.BHeader + ( BoundedNatural + , assertBoundedNatural + ) +import Data.ByteString.Builder qualified as BS +import Data.ByteString.Builder.Extra qualified as BS +import Data.Proxy (Proxy (Proxy)) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Numeric.Natural (Natural) -- | Input to the verifiable random function. 
Consists of the hash of the slot -- and the epoch nonce. @@ -65,7 +77,7 @@ mkInputVRF (SlotNo slot) eNonce = $ BS.word64BE slot <> ( case eNonce of NeutralNonce -> mempty - Nonce h -> BS.byteStringCopy (Hash.hashToBytes h) + Nonce h -> BS.byteStringCopy (Hash.hashToBytes h) ) -- | Indicate the usage of the VRF result. @@ -95,7 +107,7 @@ hashVRF _ use certVRF = let vrfOutputAsBytes = getOutputVRFBytes $ certifiedOutput certVRF in case use of SVRFLeader -> castHash $ hashWith id $ "L" <> vrfOutputAsBytes - SVRFNonce -> castHash $ hashWith id $ "N" <> vrfOutputAsBytes + SVRFNonce -> castHash $ hashWith id $ "N" <> vrfOutputAsBytes -- | Range-extend a VRF output to be used for leader checks from the relevant -- hash. See section 4.1 of the linked paper for details. diff --git a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Views.hs b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Views.hs index e2d75f595b..d72a5a7dc3 100644 --- a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Views.hs +++ b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/Praos/Views.hs @@ -1,51 +1,51 @@ {-# LANGUAGE DataKinds #-} -module Ouroboros.Consensus.Protocol.Praos.Views ( - HeaderView (..) +module Ouroboros.Consensus.Protocol.Praos.Views + ( HeaderView (..) , LedgerView (..) 
) where -import Cardano.Crypto.KES (SignedKES) -import Cardano.Crypto.VRF (CertifiedVRF, VRFAlgorithm (VerKeyVRF)) -import Cardano.Ledger.BaseTypes (ProtVer) -import Cardano.Ledger.Keys (KeyRole (BlockIssuer), VKey) -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Protocol.Crypto (KES, VRF) -import Cardano.Protocol.TPraos.BHeader (PrevHash) -import Cardano.Protocol.TPraos.OCert (OCert) -import Cardano.Slotting.Slot (SlotNo) -import Data.Word (Word16, Word32) -import Ouroboros.Consensus.Protocol.Praos.Header (HeaderBody) -import Ouroboros.Consensus.Protocol.Praos.VRF (InputVRF) +import Cardano.Crypto.KES (SignedKES) +import Cardano.Crypto.VRF (CertifiedVRF, VRFAlgorithm (VerKeyVRF)) +import Cardano.Ledger.BaseTypes (ProtVer) +import Cardano.Ledger.Keys (KeyRole (BlockIssuer), VKey) +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Protocol.Crypto (KES, VRF) +import Cardano.Protocol.TPraos.BHeader (PrevHash) +import Cardano.Protocol.TPraos.OCert (OCert) +import Cardano.Slotting.Slot (SlotNo) +import Data.Word (Word16, Word32) +import Ouroboros.Consensus.Protocol.Praos.Header (HeaderBody) +import Ouroboros.Consensus.Protocol.Praos.VRF (InputVRF) -- | View of the block header required by the Praos protocol. 
data HeaderView crypto = HeaderView - { -- | Hash of the previous block - hvPrevHash :: !PrevHash, - -- | verification key of block issuer - hvVK :: !(VKey 'BlockIssuer), - -- | VRF verification key for block issuer - hvVrfVK :: !(VerKeyVRF (VRF crypto)), - -- | VRF result - hvVrfRes :: !(CertifiedVRF (VRF crypto) InputVRF), - -- | operational certificate - hvOCert :: !(OCert crypto), - -- | Slot - hvSlotNo :: !SlotNo, - -- | Header which must be signed - hvSigned :: !(HeaderBody crypto), - -- | KES Signature of the header - hvSignature :: !(SignedKES (KES crypto) (HeaderBody crypto)) + { hvPrevHash :: !PrevHash + -- ^ Hash of the previous block + , hvVK :: !(VKey 'BlockIssuer) + -- ^ verification key of block issuer + , hvVrfVK :: !(VerKeyVRF (VRF crypto)) + -- ^ VRF verification key for block issuer + , hvVrfRes :: !(CertifiedVRF (VRF crypto) InputVRF) + -- ^ VRF result + , hvOCert :: !(OCert crypto) + -- ^ operational certificate + , hvSlotNo :: !SlotNo + -- ^ Slot + , hvSigned :: !(HeaderBody crypto) + -- ^ Header which must be signed + , hvSignature :: !(SignedKES (KES crypto) (HeaderBody crypto)) + -- ^ KES Signature of the header } data LedgerView = LedgerView - { -- | Stake distribution - lvPoolDistr :: SL.PoolDistr, - -- | Maximum header size - lvMaxHeaderSize :: !Word16, - -- | Maximum block body size - lvMaxBodySize :: !Word32, - -- | Current protocol version - lvProtocolVersion :: !ProtVer + { lvPoolDistr :: SL.PoolDistr + -- ^ Stake distribution + , lvMaxHeaderSize :: !Word16 + -- ^ Maximum header size + , lvMaxBodySize :: !Word32 + -- ^ Maximum block body size + , lvProtocolVersion :: !ProtVer + -- ^ Current protocol version } - deriving (Show) + deriving Show diff --git a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/TPraos.hs b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/TPraos.hs index fab03fedc1..1612bfe39e 100644 --- 
a/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/TPraos.hs +++ b/ouroboros-consensus-protocol/src/ouroboros-consensus-protocol/Ouroboros/Consensus/Protocol/TPraos.hs @@ -14,8 +14,8 @@ -- -- Transitional praos allows for the overlaying of Praos with an overlay -- schedule determining slots to be produced by BFT -module Ouroboros.Consensus.Protocol.TPraos ( - MaxMajorProtVer (..) +module Ouroboros.Consensus.Protocol.TPraos + ( MaxMajorProtVer (..) , PraosChainSelectView (..) , TPraos , TPraosFields (..) @@ -27,126 +27,137 @@ module Ouroboros.Consensus.Protocol.TPraos ( , forgeTPraosFields , mkShelleyGlobals , mkTPraosParams + -- * Crypto , SL.PraosCrypto , StandardCrypto + -- * CannotForge , TPraosCannotForge (..) , tpraosCheckCanForge + -- * Type instances , ConsensusConfig (..) , Ticked (..) ) where -import Cardano.Binary (FromCBOR (..), ToCBOR (..), enforceSize) -import qualified Cardano.Crypto.Hash as Hash -import qualified Cardano.Crypto.KES as KES -import qualified Cardano.Crypto.VRF as VRF -import qualified Cardano.Ledger.BaseTypes as SL (ActiveSlotCoeff, Seed) -import Cardano.Ledger.BaseTypes.NonZero (nonZeroOr, unNonZero) -import Cardano.Ledger.Hashes (HASH) -import qualified Cardano.Ledger.Keys as SL -import qualified Cardano.Ledger.Shelley.API as SL -import Cardano.Protocol.Crypto (KES, StandardCrypto, VRF) -import qualified Cardano.Protocol.TPraos.API as SL -import qualified Cardano.Protocol.TPraos.BHeader as SL -import qualified Cardano.Protocol.TPraos.OCert as Absolute (KESPeriod (..)) -import qualified Cardano.Protocol.TPraos.OCert as SL -import qualified Cardano.Protocol.TPraos.Rules.Overlay as SL -import qualified Cardano.Protocol.TPraos.Rules.Prtcl as SL -import qualified Cardano.Protocol.TPraos.Rules.Tickn as SL -import Cardano.Slotting.EpochInfo -import Cardano.Slotting.Time (SystemStart (..)) -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (Serialise (..)) -import 
Control.Monad.Except (Except, runExcept, throwError, - withExceptT) -import Data.Coerce (coerce) -import qualified Data.Map.Strict as Map -import qualified Data.Text as T (pack) -import Data.Word (Word64) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks (..)) -import Numeric.Natural (Natural) -import Ouroboros.Consensus.Block -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) -import qualified Ouroboros.Consensus.Protocol.Ledger.HotKey as HotKey -import Ouroboros.Consensus.Protocol.Ledger.Util -import Ouroboros.Consensus.Protocol.Praos.Common -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.Versioned +import Cardano.Binary (FromCBOR (..), ToCBOR (..), enforceSize) +import Cardano.Crypto.Hash qualified as Hash +import Cardano.Crypto.KES qualified as KES +import Cardano.Crypto.VRF qualified as VRF +import Cardano.Ledger.BaseTypes qualified as SL (ActiveSlotCoeff, Seed) +import Cardano.Ledger.BaseTypes.NonZero (nonZeroOr, unNonZero) +import Cardano.Ledger.Hashes (HASH) +import Cardano.Ledger.Keys qualified as SL +import Cardano.Ledger.Shelley.API qualified as SL +import Cardano.Protocol.Crypto (KES, StandardCrypto, VRF) +import Cardano.Protocol.TPraos.API qualified as SL +import Cardano.Protocol.TPraos.BHeader qualified as SL +import Cardano.Protocol.TPraos.OCert qualified as Absolute (KESPeriod (..)) +import Cardano.Protocol.TPraos.OCert qualified as SL +import Cardano.Protocol.TPraos.Rules.Overlay qualified as SL +import Cardano.Protocol.TPraos.Rules.Prtcl qualified as SL +import Cardano.Protocol.TPraos.Rules.Tickn qualified as SL +import Cardano.Slotting.EpochInfo +import Cardano.Slotting.Time (SystemStart (..)) +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (Serialise (..)) +import Control.Monad.Except + ( Except + , runExcept + , throwError + , 
withExceptT + ) +import Data.Coerce (coerce) +import Data.Map.Strict qualified as Map +import Data.Text qualified as T (pack) +import Data.Word (Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks (..)) +import Numeric.Natural (Natural) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.Ledger.HotKey (HotKey) +import Ouroboros.Consensus.Protocol.Ledger.HotKey qualified as HotKey +import Ouroboros.Consensus.Protocol.Ledger.Util +import Ouroboros.Consensus.Protocol.Praos.Common +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.Versioned {------------------------------------------------------------------------------- Fields required by TPraos in the header -------------------------------------------------------------------------------} -data TPraosFields c toSign = TPraosFields { - tpraosSignature :: KES.SignedKES (KES c) toSign - , tpraosToSign :: toSign - } - deriving (Generic) +data TPraosFields c toSign = TPraosFields + { tpraosSignature :: KES.SignedKES (KES c) toSign + , tpraosToSign :: toSign + } + deriving Generic -deriving instance (NoThunks toSign, SL.PraosCrypto c) - => NoThunks (TPraosFields c toSign) -deriving instance (Show toSign, SL.PraosCrypto c) - => Show (TPraosFields c toSign) +deriving instance + (NoThunks toSign, SL.PraosCrypto c) => + NoThunks (TPraosFields c toSign) +deriving instance + (Show toSign, SL.PraosCrypto c) => + Show (TPraosFields c toSign) -- | Fields arising from transitional praos execution which must be included in -- the block signature. -data TPraosToSign c = TPraosToSign { - -- | Verification key for the issuer of this block. 
- -- - -- Note that unlike in Classic/BFT where we have a key for the genesis - -- delegate on whose behalf we are issuing this block, this key - -- corresponds to the stake pool/core node actually forging the block. - tpraosToSignIssuerVK :: SL.VKey 'SL.BlockIssuer - , tpraosToSignVrfVK :: VRF.VerKeyVRF (VRF c) - -- | Verifiable result containing the updated nonce value. - , tpraosToSignEta :: VRF.CertifiedVRF (VRF c) SL.Nonce - -- | Verifiable proof of the leader value, used to determine whether the - -- node has the right to issue a block in this slot. - -- - -- We include a value here even for blocks forged under the BFT - -- schedule. It is not required that such a value be verifiable (though - -- by default it will be verifiably correct, but unused.) - , tpraosToSignLeader :: VRF.CertifiedVRF (VRF c) Natural - -- | Lightweight delegation certificate mapping the cold (DSIGN) key to - -- the online KES key. - , tpraosToSignOCert :: SL.OCert c - } - deriving (Generic) +data TPraosToSign c = TPraosToSign + { tpraosToSignIssuerVK :: SL.VKey 'SL.BlockIssuer + -- ^ Verification key for the issuer of this block. + -- + -- Note that unlike in Classic/BFT where we have a key for the genesis + -- delegate on whose behalf we are issuing this block, this key + -- corresponds to the stake pool/core node actually forging the block. + , tpraosToSignVrfVK :: VRF.VerKeyVRF (VRF c) + , tpraosToSignEta :: VRF.CertifiedVRF (VRF c) SL.Nonce + -- ^ Verifiable result containing the updated nonce value. + , tpraosToSignLeader :: VRF.CertifiedVRF (VRF c) Natural + -- ^ Verifiable proof of the leader value, used to determine whether the + -- node has the right to issue a block in this slot. + -- + -- We include a value here even for blocks forged under the BFT + -- schedule. It is not required that such a value be verifiable (though + -- by default it will be verifiably correct, but unused.) 
+ , tpraosToSignOCert :: SL.OCert c + -- ^ Lightweight delegation certificate mapping the cold (DSIGN) key to + -- the online KES key. + } + deriving Generic instance SL.PraosCrypto c => NoThunks (TPraosToSign c) deriving instance SL.PraosCrypto c => Show (TPraosToSign c) forgeTPraosFields :: - ( SL.PraosCrypto c - , KES.Signable (KES c) toSign - , Monad m - ) - => HotKey c m - -> CanBeLeader (TPraos c) - -> IsLeader (TPraos c) - -> (TPraosToSign c -> toSign) - -> m (TPraosFields c toSign) + ( SL.PraosCrypto c + , KES.Signable (KES c) toSign + , Monad m + ) => + HotKey c m -> + CanBeLeader (TPraos c) -> + IsLeader (TPraos c) -> + (TPraosToSign c -> toSign) -> + m (TPraosFields c toSign) forgeTPraosFields hotKey PraosCanBeLeader{..} TPraosIsLeader{..} mkToSign = do - signature <- HotKey.sign hotKey toSign - return TPraosFields { - tpraosSignature = signature - , tpraosToSign = toSign + signature <- HotKey.sign hotKey toSign + return + TPraosFields + { tpraosSignature = signature + , tpraosToSign = toSign } - where - toSign = mkToSign signedFields - - signedFields = TPraosToSign { - tpraosToSignIssuerVK = praosCanBeLeaderColdVerKey - , tpraosToSignVrfVK = VRF.deriveVerKeyVRF praosCanBeLeaderSignKeyVRF - , tpraosToSignEta = tpraosIsLeaderEta - , tpraosToSignLeader = tpraosIsLeaderProof - , tpraosToSignOCert = praosCanBeLeaderOpCert + where + toSign = mkToSign signedFields + + signedFields = + TPraosToSign + { tpraosToSignIssuerVK = praosCanBeLeaderColdVerKey + , tpraosToSignVrfVK = VRF.deriveVerKeyVRF praosCanBeLeaderSignKeyVRF + , tpraosToSignEta = tpraosIsLeaderEta + , tpraosToSignLeader = tpraosIsLeaderProof + , tpraosToSignOCert = praosCanBeLeaderOpCert } -- | Because we are using the executable spec, rather than implementing the @@ -161,87 +172,87 @@ type TPraosValidateView c = SL.BHeader c data TPraos c -- | TPraos parameters that are node independent -data TPraosParams = TPraosParams { - -- | See 'Globals.slotsPerKESPeriod'. 
- tpraosSlotsPerKESPeriod :: !Word64 - -- | Active slots coefficient. This parameter represents the proportion - -- of slots in which blocks should be issued. This can be interpreted as - -- the probability that a party holding all the stake will be elected as - -- leader for a given slot. - , tpraosLeaderF :: !SL.ActiveSlotCoeff - -- | See 'Globals.securityParameter'. - , tpraosSecurityParam :: !SecurityParam - -- | Maximum number of KES iterations, see 'Globals.maxKESEvo'. - , tpraosMaxKESEvo :: !Word64 - -- | Quorum for update system votes and MIR certificates, see - -- 'Globals.quorum'. - , tpraosQuorum :: !Word64 - -- | All blocks invalid after this protocol version, see - -- 'Globals.maxMajorPV'. - , tpraosMaxMajorPV :: !MaxMajorProtVer - -- | Maximum number of lovelace in the system, see - -- 'Globals.maxLovelaceSupply'. - , tpraosMaxLovelaceSupply :: !Word64 - -- | Testnet or mainnet? - , tpraosNetworkId :: !SL.Network - -- | Initial nonce used for the TPraos protocol state. Typically this is - -- derived from the hash of the Shelley genesis config JSON file, but - -- different values may be used for testing purposes. - -- - -- NOTE: this is only used when translating the Byron 'ChainDepState' to - -- the Shelley 'ChainDepState', at which point we'll need access to the - -- initial nonce at runtime. TODO #2326. - , tpraosInitialNonce :: !SL.Nonce - -- | The system start, as projected from the chain's genesis block. - , tpraosSystemStart :: !SystemStart - } +data TPraosParams = TPraosParams + { tpraosSlotsPerKESPeriod :: !Word64 + -- ^ See 'Globals.slotsPerKESPeriod'. + , tpraosLeaderF :: !SL.ActiveSlotCoeff + -- ^ Active slots coefficient. This parameter represents the proportion + -- of slots in which blocks should be issued. This can be interpreted as + -- the probability that a party holding all the stake will be elected as + -- leader for a given slot. + , tpraosSecurityParam :: !SecurityParam + -- ^ See 'Globals.securityParameter'. 
+ , tpraosMaxKESEvo :: !Word64 + -- ^ Maximum number of KES iterations, see 'Globals.maxKESEvo'. + , tpraosQuorum :: !Word64 + -- ^ Quorum for update system votes and MIR certificates, see + -- 'Globals.quorum'. + , tpraosMaxMajorPV :: !MaxMajorProtVer + -- ^ All blocks invalid after this protocol version, see + -- 'Globals.maxMajorPV'. + , tpraosMaxLovelaceSupply :: !Word64 + -- ^ Maximum number of lovelace in the system, see + -- 'Globals.maxLovelaceSupply'. + , tpraosNetworkId :: !SL.Network + -- ^ Testnet or mainnet? + , tpraosInitialNonce :: !SL.Nonce + -- ^ Initial nonce used for the TPraos protocol state. Typically this is + -- derived from the hash of the Shelley genesis config JSON file, but + -- different values may be used for testing purposes. + -- + -- NOTE: this is only used when translating the Byron 'ChainDepState' to + -- the Shelley 'ChainDepState', at which point we'll need access to the + -- initial nonce at runtime. TODO #2326. + , tpraosSystemStart :: !SystemStart + -- ^ The system start, as projected from the chain's genesis block. 
+ } deriving (Generic, NoThunks) mkTPraosParams :: - MaxMajorProtVer - -> SL.Nonce -- ^ Initial nonce - -> SL.ShelleyGenesis - -> TPraosParams -mkTPraosParams maxMajorPV initialNonce genesis = TPraosParams { - tpraosSlotsPerKESPeriod = SL.sgSlotsPerKESPeriod genesis - , tpraosLeaderF = SL.sgActiveSlotCoeff genesis - , tpraosMaxKESEvo = SL.sgMaxKESEvolutions genesis - , tpraosQuorum = SL.sgUpdateQuorum genesis + MaxMajorProtVer -> + -- | Initial nonce + SL.Nonce -> + SL.ShelleyGenesis -> + TPraosParams +mkTPraosParams maxMajorPV initialNonce genesis = + TPraosParams + { tpraosSlotsPerKESPeriod = SL.sgSlotsPerKESPeriod genesis + , tpraosLeaderF = SL.sgActiveSlotCoeff genesis + , tpraosMaxKESEvo = SL.sgMaxKESEvolutions genesis + , tpraosQuorum = SL.sgUpdateQuorum genesis , tpraosMaxLovelaceSupply = SL.sgMaxLovelaceSupply genesis - , tpraosNetworkId = SL.sgNetworkId genesis - , tpraosSecurityParam = securityParam - , tpraosMaxMajorPV = maxMajorPV - , tpraosInitialNonce = initialNonce - , tpraosSystemStart = systemStart + , tpraosNetworkId = SL.sgNetworkId genesis + , tpraosSecurityParam = securityParam + , tpraosMaxMajorPV = maxMajorPV + , tpraosInitialNonce = initialNonce + , tpraosSystemStart = systemStart } - where - securityParam = SecurityParam $ SL.sgSecurityParam genesis - systemStart = SystemStart $ SL.sgSystemStart genesis + where + securityParam = SecurityParam $ SL.sgSecurityParam genesis + systemStart = SystemStart $ SL.sgSystemStart genesis -- | Assembled proof that the issuer has the right to issue a block in the -- selected slot. 
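The `TPraosParams` hunk above is representative of what this patch does to records throughout: it converts stylish-haskell's trailing-comma layout with `-- |` comments *before* each field into fourmolu's leading-comma layout with `-- ^` comments *after* each field. A minimal standalone sketch of the target style (the `Params` record and its field names are hypothetical, not from the codebase):

```haskell
import Data.Word (Word64)

-- Hypothetical record in the fourmolu layout adopted above:
-- leading commas, strict fields, and '-- ^' post-comments.
data Params = Params
  { paramSlotsPerPeriod :: !Word64
  -- ^ Slots per KES period
  , paramQuorum :: !Word64
  -- ^ Update-system vote quorum
  }
  deriving (Show, Eq)
```

Both layouts attach the same documentation to each field in Haddock; the `-- ^` post-comment form simply lets the formatter keep a field and its comment together when it reflows the record.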
-data TPraosIsLeader c = TPraosIsLeader { - tpraosIsLeaderEta :: VRF.CertifiedVRF (VRF c) SL.Nonce - , tpraosIsLeaderProof :: VRF.CertifiedVRF (VRF c) Natural - -- | When in the overlay schedule (otherwise 'Nothing'), return the hash - -- of the VRF verification key in the overlay schedule - , tpraosIsLeaderGenVRFHash :: Maybe (Hash.Hash HASH (VRF.VerKeyVRF (VRF c))) - } - deriving (Generic) +data TPraosIsLeader c = TPraosIsLeader + { tpraosIsLeaderEta :: VRF.CertifiedVRF (VRF c) SL.Nonce + , tpraosIsLeaderProof :: VRF.CertifiedVRF (VRF c) Natural + , tpraosIsLeaderGenVRFHash :: Maybe (Hash.Hash HASH (VRF.VerKeyVRF (VRF c))) + -- ^ When in the overlay schedule (otherwise 'Nothing'), return the hash + -- of the VRF verification key in the overlay schedule + } + deriving Generic instance SL.PraosCrypto c => NoThunks (TPraosIsLeader c) -- | Static configuration -data instance ConsensusConfig (TPraos c) = TPraosConfig { - tpraosParams :: !TPraosParams - , tpraosEpochInfo :: !(EpochInfo (Except History.PastHorizonException)) - - -- it's useful for this record to be EpochInfo and one other thing, - -- because the one other thing can then be used as the - -- PartialConsensConfig in the HFC instance. - - } - deriving (Generic) +data instance ConsensusConfig (TPraos c) = TPraosConfig + { tpraosParams :: !TPraosParams + , tpraosEpochInfo :: !(EpochInfo (Except History.PastHorizonException)) + -- it's useful for this record to be EpochInfo and one other thing, + -- because the one other thing can then be used as the + -- PartialConsensConfig in the HFC instance. + } + deriving Generic instance SL.PraosCrypto c => NoThunks (ConsensusConfig (TPraos c)) @@ -249,10 +260,10 @@ instance SL.PraosCrypto c => NoThunks (ConsensusConfig (TPraos c)) -- -- In addition to the 'ChainDepState' provided by the ledger, we track the slot -- number of the last applied header. 
-data TPraosState = TPraosState { - tpraosStateLastSlot :: !(WithOrigin SlotNo) - , tpraosStateChainDepState :: !SL.ChainDepState - } +data TPraosState = TPraosState + { tpraosStateLastSlot :: !(WithOrigin SlotNo) + , tpraosStateChainDepState :: !SL.ChainDepState + } deriving (Generic, Show, Eq) instance NoThunks TPraosState @@ -269,165 +280,173 @@ instance FromCBOR TPraosState where instance Serialise TPraosState where encode (TPraosState slot chainDepState) = - encodeVersion serialisationFormatVersion1 $ mconcat [ - CBOR.encodeListLen 2 - , toCBOR slot - , toCBOR chainDepState - ] - - decode = decodeVersion + encodeVersion serialisationFormatVersion1 $ + mconcat + [ CBOR.encodeListLen 2 + , toCBOR slot + , toCBOR chainDepState + ] + + decode = + decodeVersion [(serialisationFormatVersion1, Decode decodeTPraosState1)] - where - decodeTPraosState1 = do - enforceSize "TPraosState" 2 - TPraosState <$> fromCBOR <*> fromCBOR - -data instance Ticked TPraosState = TickedChainDepState { - tickedTPraosStateChainDepState :: SL.ChainDepState - , tickedTPraosStateLedgerView :: SL.LedgerView - } + where + decodeTPraosState1 = do + enforceSize "TPraosState" 2 + TPraosState <$> fromCBOR <*> fromCBOR + +data instance Ticked TPraosState = TickedChainDepState + { tickedTPraosStateChainDepState :: SL.ChainDepState + , tickedTPraosStateLedgerView :: SL.LedgerView + } instance SL.PraosCrypto c => ConsensusProtocol (TPraos c) where type ChainDepState (TPraos c) = TPraosState - type IsLeader (TPraos c) = TPraosIsLeader c - type CanBeLeader (TPraos c) = PraosCanBeLeader c - type SelectView (TPraos c) = PraosChainSelectView c - type LedgerView (TPraos c) = SL.LedgerView + type IsLeader (TPraos c) = TPraosIsLeader c + type CanBeLeader (TPraos c) = PraosCanBeLeader c + type SelectView (TPraos c) = PraosChainSelectView c + type LedgerView (TPraos c) = SL.LedgerView type ValidationErr (TPraos c) = SL.ChainTransitionError c - type ValidateView (TPraos c) = TPraosValidateView c + type 
ValidateView (TPraos c) = TPraosValidateView c protocolSecurityParam = tpraosSecurityParam . tpraosParams checkIsLeader cfg PraosCanBeLeader{..} slot cs = do - -- First, check whether we're in the overlay schedule - case SL.lookupInOverlaySchedule firstSlot gkeys d asc slot of - -- Slot isn't in the overlay schedule, so we're in Praos - Nothing - | meetsLeaderThreshold cfg lv (SL.coerceKeyRole vkhCold) y - -> Just TPraosIsLeader { - tpraosIsLeaderEta = coerce rho - , tpraosIsLeaderProof = coerce y - , tpraosIsLeaderGenVRFHash = Nothing - } - | otherwise - -> Nothing - - -- This is a non-active slot; nobody may produce a block - Just SL.NonActiveSlot -> Nothing - - -- The given genesis key has authority to produce a block in this - -- slot. Check whether we're its delegate. - Just (SL.ActiveSlot gkhash) -> case Map.lookup gkhash dlgMap of + -- First, check whether we're in the overlay schedule + case SL.lookupInOverlaySchedule firstSlot gkeys d asc slot of + -- Slot isn't in the overlay schedule, so we're in Praos + Nothing + | meetsLeaderThreshold cfg lv (SL.coerceKeyRole vkhCold) y -> + Just + TPraosIsLeader + { tpraosIsLeaderEta = coerce rho + , tpraosIsLeaderProof = coerce y + , tpraosIsLeaderGenVRFHash = Nothing + } + | otherwise -> Nothing - -> error "unknown genesis key in overlay schedule" - Just (SL.GenDelegPair dlgHash genDlgVRFHash) - | SL.coerceKeyRole dlgHash == vkhCold - -> Just TPraosIsLeader { - tpraosIsLeaderEta = coerce rho - -- Note that this leader value is not checked for slots in - -- the overlay schedule, so we could set it to whatever we - -- want. We evaluate it as normal for simplicity's sake. 
- , tpraosIsLeaderProof = coerce y - , tpraosIsLeaderGenVRFHash = Just $ SL.fromVRFVerKeyHash genDlgVRFHash - } - | otherwise - -> Nothing - where - chainState = tickedTPraosStateChainDepState cs - lv = tickedTPraosStateLedgerView cs - d = SL.lvD lv - asc = tpraosLeaderF $ tpraosParams cfg - firstSlot = - firstSlotOfEpochOfSlot - (History.toPureEpochInfo $ tpraosEpochInfo cfg) - slot - gkeys = Map.keysSet dlgMap - eta0 = SL.ticknStateEpochNonce $ SL.csTickn chainState - vkhCold = SL.hashKey praosCanBeLeaderColdVerKey - rho' = SL.mkSeed SL.seedEta slot eta0 - y' = SL.mkSeed SL.seedL slot eta0 - - rho = VRF.evalCertified () rho' praosCanBeLeaderSignKeyVRF - y = VRF.evalCertified () y' praosCanBeLeaderSignKeyVRF - - SL.GenDelegs dlgMap = SL.lvGenDelegs lv - - tickChainDepState cfg@TPraosConfig{..} - lv - slot - (TPraosState lastSlot st) = - TickedChainDepState { - tickedTPraosStateChainDepState = st' - , tickedTPraosStateLedgerView = lv + -- This is a non-active slot; nobody may produce a block + Just SL.NonActiveSlot -> Nothing + -- The given genesis key has authority to produce a block in this + -- slot. Check whether we're its delegate. + Just (SL.ActiveSlot gkhash) -> case Map.lookup gkhash dlgMap of + Nothing -> + error "unknown genesis key in overlay schedule" + Just (SL.GenDelegPair dlgHash genDlgVRFHash) + | SL.coerceKeyRole dlgHash == vkhCold -> + Just + TPraosIsLeader + { tpraosIsLeaderEta = coerce rho + , -- Note that this leader value is not checked for slots in + -- the overlay schedule, so we could set it to whatever we + -- want. We evaluate it as normal for simplicity's sake. 
+ tpraosIsLeaderProof = coerce y + , tpraosIsLeaderGenVRFHash = Just $ SL.fromVRFVerKeyHash genDlgVRFHash + } + | otherwise -> + Nothing + where + chainState = tickedTPraosStateChainDepState cs + lv = tickedTPraosStateLedgerView cs + d = SL.lvD lv + asc = tpraosLeaderF $ tpraosParams cfg + firstSlot = + firstSlotOfEpochOfSlot + (History.toPureEpochInfo $ tpraosEpochInfo cfg) + slot + gkeys = Map.keysSet dlgMap + eta0 = SL.ticknStateEpochNonce $ SL.csTickn chainState + vkhCold = SL.hashKey praosCanBeLeaderColdVerKey + rho' = SL.mkSeed SL.seedEta slot eta0 + y' = SL.mkSeed SL.seedL slot eta0 + + rho = VRF.evalCertified () rho' praosCanBeLeaderSignKeyVRF + y = VRF.evalCertified () y' praosCanBeLeaderSignKeyVRF + + SL.GenDelegs dlgMap = SL.lvGenDelegs lv + + tickChainDepState + cfg@TPraosConfig{..} + lv + slot + (TPraosState lastSlot st) = + TickedChainDepState + { tickedTPraosStateChainDepState = st' + , tickedTPraosStateLedgerView = lv } - where - st' = SL.tickChainDepState - (mkShelleyGlobals cfg) - lv - ( isNewEpoch - (History.toPureEpochInfo tpraosEpochInfo) - lastSlot - slot - ) - st + where + st' = + SL.tickChainDepState + (mkShelleyGlobals cfg) + lv + ( isNewEpoch + (History.toPureEpochInfo tpraosEpochInfo) + lastSlot + slot + ) + st updateChainDepState cfg b slot cs = - TPraosState (NotOrigin slot) <$> - SL.updateChainDepState - (mkShelleyGlobals cfg) - (tickedTPraosStateLedgerView cs) - b - (tickedTPraosStateChainDepState cs) + TPraosState (NotOrigin slot) + <$> SL.updateChainDepState + (mkShelleyGlobals cfg) + (tickedTPraosStateLedgerView cs) + b + (tickedTPraosStateChainDepState cs) reupdateChainDepState cfg b slot cs = - TPraosState (NotOrigin slot) $ - SL.reupdateChainDepState - (mkShelleyGlobals cfg) - (tickedTPraosStateLedgerView cs) - b - (tickedTPraosStateChainDepState cs) + TPraosState (NotOrigin slot) $ + SL.reupdateChainDepState + (mkShelleyGlobals cfg) + (tickedTPraosStateLedgerView cs) + b + (tickedTPraosStateChainDepState cs) mkShelleyGlobals :: 
ConsensusConfig (TPraos c) -> SL.Globals -mkShelleyGlobals TPraosConfig{..} = SL.Globals { - epochInfo = +mkShelleyGlobals TPraosConfig{..} = + SL.Globals + { epochInfo = hoistEpochInfo (runExcept . withExceptT (T.pack . show)) tpraosEpochInfo - , slotsPerKESPeriod = tpraosSlotsPerKESPeriod - , stabilityWindow = SL.computeStabilityWindow k tpraosLeaderF + , slotsPerKESPeriod = tpraosSlotsPerKESPeriod + , stabilityWindow = SL.computeStabilityWindow k tpraosLeaderF , randomnessStabilisationWindow = SL.computeRandomnessStabilisationWindow k tpraosLeaderF - , securityParameter = nonZeroOr k $ error "The security parameter cannot be zero." - , maxKESEvo = tpraosMaxKESEvo - , quorum = tpraosQuorum - , maxLovelaceSupply = tpraosMaxLovelaceSupply - , activeSlotCoeff = tpraosLeaderF - , networkId = tpraosNetworkId - , systemStart = tpraosSystemStart + , securityParameter = nonZeroOr k $ error "The security parameter cannot be zero." + , maxKESEvo = tpraosMaxKESEvo + , quorum = tpraosQuorum + , maxLovelaceSupply = tpraosMaxLovelaceSupply + , activeSlotCoeff = tpraosLeaderF + , networkId = tpraosNetworkId + , systemStart = tpraosSystemStart } - where - k = unNonZero $ maxRollbacks tpraosSecurityParam - TPraosParams{..} = tpraosParams + where + k = unNonZero $ maxRollbacks tpraosSecurityParam + TPraosParams{..} = tpraosParams -- | Check whether this node meets the leader threshold to issue a block. meetsLeaderThreshold :: - forall c. SL.PraosCrypto c - => ConsensusConfig (TPraos c) - -> LedgerView (TPraos c) - -> SL.KeyHash 'SL.StakePool - -> VRF.CertifiedVRF (VRF c) SL.Seed - -> Bool -meetsLeaderThreshold TPraosConfig { tpraosParams } - SL.LedgerView { lvPoolDistr } - keyHash - certNat = + forall c. 
+ SL.PraosCrypto c => + ConsensusConfig (TPraos c) -> + LedgerView (TPraos c) -> + SL.KeyHash 'SL.StakePool -> + VRF.CertifiedVRF (VRF c) SL.Seed -> + Bool +meetsLeaderThreshold + TPraosConfig{tpraosParams} + SL.LedgerView{lvPoolDistr} + keyHash + certNat = SL.checkLeaderValue (VRF.certifiedOutput certNat) r (tpraosLeaderF tpraosParams) - where + where SL.PoolDistr poolDistr _totalActiveStake = lvPoolDistr - r = maybe 0 SL.individualPoolStake - $ Map.lookup keyHash poolDistr + r = + maybe 0 SL.individualPoolStake $ + Map.lookup keyHash poolDistr {------------------------------------------------------------------------------- CannotForge @@ -435,8 +454,8 @@ meetsLeaderThreshold TPraosConfig { tpraosParams } -- | Expresses that, whilst we believe ourselves to be a leader for this slot, -- we are nonetheless unable to forge a block. -data TPraosCannotForge c = - -- | The KES key in our operational certificate can't be used because the +data TPraosCannotForge c + = -- | The KES key in our operational certificate can't be used because the -- current (wall clock) period is before the start period of the key. -- current KES period. -- @@ -444,47 +463,49 @@ data TPraosCannotForge c = -- end period of the key, is caught when trying to update the key in -- 'updateForgeState'. TPraosCannotForgeKeyNotUsableYet - !Absolute.KESPeriod - -- ^ Current KES period according to the wallclock slot, i.e., the KES + -- | Current KES period according to the wallclock slot, i.e., the KES -- period in which we want to use the key. !Absolute.KESPeriod - -- ^ Start KES period of the KES key. - - -- | We are a genesis delegate, but our VRF key (second argument) does not + -- | Start KES period of the KES key. + !Absolute.KESPeriod + | -- | We are a genesis delegate, but our VRF key (second argument) does not -- match the registered key for that delegate (first argument). 
- | TPraosCannotForgeWrongVRF + TPraosCannotForgeWrongVRF !(Hash.Hash HASH (VRF.VerKeyVRF (VRF c))) !(Hash.Hash HASH (VRF.VerKeyVRF (VRF c))) - deriving (Generic) + deriving Generic deriving instance SL.PraosCrypto c => Show (TPraosCannotForge c) tpraosCheckCanForge :: - ConsensusConfig (TPraos c) - -> Hash.Hash HASH (VRF.VerKeyVRF (VRF c)) - -- ^ Precomputed hash of the VRF verification key - -> SlotNo - -> IsLeader (TPraos c) - -> HotKey.KESInfo - -> Either (TPraosCannotForge c) () -tpraosCheckCanForge TPraosConfig { tpraosParams } - forgingVRFHash - curSlot - TPraosIsLeader { tpraosIsLeaderGenVRFHash } - kesInfo - | let startPeriod = HotKey.kesStartPeriod kesInfo - , startPeriod > wallclockPeriod - = throwError $ TPraosCannotForgeKeyNotUsableYet wallclockPeriod startPeriod - | Just genVRFHash <- tpraosIsLeaderGenVRFHash - , genVRFHash /= forgingVRFHash - = throwError $ TPraosCannotForgeWrongVRF genVRFHash forgingVRFHash - | otherwise - = return () - where + ConsensusConfig (TPraos c) -> + -- | Precomputed hash of the VRF verification key + Hash.Hash HASH (VRF.VerKeyVRF (VRF c)) -> + SlotNo -> + IsLeader (TPraos c) -> + HotKey.KESInfo -> + Either (TPraosCannotForge c) () +tpraosCheckCanForge + TPraosConfig{tpraosParams} + forgingVRFHash + curSlot + TPraosIsLeader{tpraosIsLeaderGenVRFHash} + kesInfo + | let startPeriod = HotKey.kesStartPeriod kesInfo + , startPeriod > wallclockPeriod = + throwError $ TPraosCannotForgeKeyNotUsableYet wallclockPeriod startPeriod + | Just genVRFHash <- tpraosIsLeaderGenVRFHash + , genVRFHash /= forgingVRFHash = + throwError $ TPraosCannotForgeWrongVRF genVRFHash forgingVRFHash + | otherwise = + return () + where -- The current wallclock KES period wallclockPeriod :: Absolute.KESPeriod - wallclockPeriod = Absolute.KESPeriod $ fromIntegral $ - unSlotNo curSlot `div` tpraosSlotsPerKESPeriod tpraosParams + wallclockPeriod = + Absolute.KESPeriod $ + fromIntegral $ + unSlotNo curSlot `div` tpraosSlotsPerKESPeriod tpraosParams 
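The `where`-clause of `tpraosCheckCanForge` above derives the wallclock KES period by integer division of the slot number. That arithmetic can be exercised in isolation; the plain `Word64` arguments below (in place of the ledger's `SlotNo`/`KESPeriod` wrappers) and the 129600 slots-per-period figure are illustrative assumptions:

```haskell
import Data.Word (Word64)

-- | KES period containing a given wallclock slot, mirroring the
-- 'unSlotNo curSlot `div` tpraosSlotsPerKESPeriod' computation above.
kesPeriodOf :: Word64 -> Word64 -> Word64
kesPeriodOf slotsPerKESPeriod slot = slot `div` slotsPerKESPeriod

-- | Mirrors the first guard of tpraosCheckCanForge: a key whose start
-- period lies after the current wallclock period is not usable yet.
keyNotUsableYet :: Word64 -> Word64 -> Bool
keyNotUsableYet startPeriod wallclockPeriod = startPeriod > wallclockPeriod
```

For example, with 129600 slots per period, slot 259200 falls in period 2, so a key whose start period is 3 would be rejected via `TPraosCannotForgeKeyNotUsableYet`.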
{------------------------------------------------------------------------------- PraosProtocolSupportsNode @@ -494,41 +515,41 @@ instance SL.PraosCrypto c => PraosProtocolSupportsNode (TPraos c) where type PraosProtocolSupportsNodeCrypto (TPraos c) = c getPraosNonces _prx cdst = - PraosNonces { - candidateNonce - , epochNonce = ticknStateEpochNonce - , evolvingNonce - , labNonce = csLabNonce - , previousLabNonce = ticknStatePrevHashNonce - } - where - TPraosState { tpraosStateChainDepState } = cdst - SL.ChainDepState { - SL.csLabNonce - , SL.csProtocol - , SL.csTickn - } = tpraosStateChainDepState - SL.PrtclState - _opcertCounters - evolvingNonce - candidateNonce - = csProtocol - SL.TicknState { - ticknStateEpochNonce - , ticknStatePrevHashNonce - } = csTickn + PraosNonces + { candidateNonce + , epochNonce = ticknStateEpochNonce + , evolvingNonce + , labNonce = csLabNonce + , previousLabNonce = ticknStatePrevHashNonce + } + where + TPraosState{tpraosStateChainDepState} = cdst + SL.ChainDepState + { SL.csLabNonce + , SL.csProtocol + , SL.csTickn + } = tpraosStateChainDepState + SL.PrtclState + _opcertCounters + evolvingNonce + candidateNonce = + csProtocol + SL.TicknState + { ticknStateEpochNonce + , ticknStatePrevHashNonce + } = csTickn getOpCertCounters _prx cdst = opcertCounters - where - TPraosState { tpraosStateChainDepState } = cdst - SL.ChainDepState { - SL.csProtocol - } = tpraosStateChainDepState - SL.PrtclState - opcertCounters - _evolvingNonce - _candidateNonce - = csProtocol + where + TPraosState{tpraosStateChainDepState} = cdst + SL.ChainDepState + { SL.csProtocol + } = tpraosStateChainDepState + SL.PrtclState + opcertCounters + _evolvingNonce + _candidateNonce = + csProtocol {------------------------------------------------------------------------------- Condense diff --git a/ouroboros-consensus-protocol/src/unstable-protocol-testlib/Test/Consensus/Protocol/Serialisation/Generators.hs 
b/ouroboros-consensus-protocol/src/unstable-protocol-testlib/Test/Consensus/Protocol/Serialisation/Generators.hs index eb9b7a27ca..034c9e24cc 100644 --- a/ouroboros-consensus-protocol/src/unstable-protocol-testlib/Test/Consensus/Protocol/Serialisation/Generators.hs +++ b/ouroboros-consensus-protocol/src/unstable-protocol-testlib/Test/Consensus/Protocol/Serialisation/Generators.hs @@ -1,28 +1,33 @@ {-# LANGUAGE ScopedTypeVariables #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Generators suitable for serialisation. Note that these are not guaranteed -- to be semantically correct at all, only structurally correct. module Test.Consensus.Protocol.Serialisation.Generators () where -import Cardano.Crypto.KES (unsoundPureSignedKES) -import Cardano.Crypto.VRF (evalCertified) -import Cardano.Protocol.TPraos.BHeader (HashHeader, PrevHash (..)) -import Cardano.Protocol.TPraos.OCert (KESPeriod (KESPeriod), - OCert (OCert)) -import Cardano.Slotting.Block (BlockNo (BlockNo)) -import Cardano.Slotting.Slot (SlotNo (SlotNo), - WithOrigin (At, Origin)) -import Ouroboros.Consensus.Protocol.Praos (PraosState (PraosState)) -import qualified Ouroboros.Consensus.Protocol.Praos as Praos -import Ouroboros.Consensus.Protocol.Praos.Header (Header (Header), - HeaderBody (HeaderBody)) -import Ouroboros.Consensus.Protocol.Praos.VRF (InputVRF, mkInputVRF) -import Test.Cardano.Ledger.Shelley.Serialisation.EraIndepGenerators () -import Test.Crypto.KES () -import Test.Crypto.VRF () -import Test.QuickCheck (Arbitrary (..), Gen, choose, oneof) +import Cardano.Crypto.KES (unsoundPureSignedKES) +import Cardano.Crypto.VRF (evalCertified) +import Cardano.Protocol.TPraos.BHeader (HashHeader, PrevHash (..)) +import Cardano.Protocol.TPraos.OCert + ( KESPeriod (KESPeriod) + , OCert (OCert) + ) +import Cardano.Slotting.Block (BlockNo (BlockNo)) +import Cardano.Slotting.Slot + ( SlotNo (SlotNo) + , WithOrigin (At, Origin) + ) +import Ouroboros.Consensus.Protocol.Praos (PraosState (PraosState)) +import 
Ouroboros.Consensus.Protocol.Praos qualified as Praos +import Ouroboros.Consensus.Protocol.Praos.Header + ( Header (Header) + , HeaderBody (HeaderBody) + ) +import Ouroboros.Consensus.Protocol.Praos.VRF (InputVRF, mkInputVRF) +import Test.Cardano.Ledger.Shelley.Serialisation.EraIndepGenerators () +import Test.Crypto.KES () +import Test.Crypto.VRF () +import Test.QuickCheck (Arbitrary (..), Gen, choose, oneof) instance Arbitrary InputVRF where arbitrary = mkInputVRF <$> arbitrary <*> arbitrary @@ -44,8 +49,8 @@ instance Praos.PraosCrypto c => Arbitrary (HeaderBody c) where <$> (BlockNo <$> choose (1, 10)) <*> (SlotNo <$> choose (1, 10)) <*> oneof - [ pure GenesisHash, - BlockHash <$> (arbitrary :: Gen HashHeader) + [ pure GenesisHash + , BlockHash <$> (arbitrary :: Gen HashHeader) ] <*> arbitrary <*> arbitrary @@ -64,14 +69,15 @@ instance Praos.PraosCrypto c => Arbitrary (Header c) where pure $ Header hBody hSig instance Arbitrary PraosState where - arbitrary = PraosState - <$> oneof [ - pure Origin, - At <$> (SlotNo <$> choose (1, 10)) - ] - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary - <*> arbitrary + arbitrary = + PraosState + <$> oneof + [ pure Origin + , At <$> (SlotNo <$> choose (1, 10)) + ] + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary + <*> arbitrary diff --git a/ouroboros-consensus-protocol/src/unstable-protocol-testlib/Test/Ouroboros/Consensus/Protocol/Praos/Header.hs b/ouroboros-consensus-protocol/src/unstable-protocol-testlib/Test/Ouroboros/Consensus/Protocol/Praos/Header.hs index c7d38ff6bd..a66d842561 100644 --- a/ouroboros-consensus-protocol/src/unstable-protocol-testlib/Test/Ouroboros/Consensus/Protocol/Praos/Header.hs +++ b/ouroboros-consensus-protocol/src/unstable-protocol-testlib/Test/Ouroboros/Consensus/Protocol/Praos/Header.hs @@ -8,8 +8,8 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE TypeApplications #-} -module Test.Ouroboros.Consensus.Protocol.Praos.Header ( - 
GeneratorContext (..) +module Test.Ouroboros.Consensus.Protocol.Praos.Header + ( GeneratorContext (..) , MutatedHeader (..) , Mutation (..) , Sample (..) @@ -20,56 +20,106 @@ module Test.Ouroboros.Consensus.Protocol.Praos.Header ( , generateSamples ) where -import Cardano.Crypto.DSIGN - (DSIGNAlgorithm (SignKeyDSIGN, genKeyDSIGN, rawSerialiseSignKeyDSIGN), - Ed25519DSIGN, deriveVerKeyDSIGN, - rawDeserialiseSignKeyDSIGN) -import Cardano.Crypto.Hash (Blake2b_256, Hash, hashFromBytes, - hashToBytes, hashWith) -import qualified Cardano.Crypto.KES as KES -import Cardano.Crypto.Seed (mkSeedFromBytes) -import Cardano.Crypto.VRF (deriveVerKeyVRF, hashVerKeyVRF, - rawDeserialiseSignKeyVRF, rawSerialiseSignKeyVRF) -import qualified Cardano.Crypto.VRF as VRF -import Cardano.Crypto.VRF.Praos (skToBatchCompat) -import qualified Cardano.Crypto.VRF.Praos as VRF -import Cardano.Ledger.BaseTypes (ActiveSlotCoeff, Nonce (..), - PositiveUnitInterval, ProtVer (..), Version, activeSlotVal, - boundRational, mkActiveSlotCoeff, natVersion) -import Cardano.Ledger.Binary (MaxVersion, decCBOR, - decodeFullAnnotator, serialize') -import Cardano.Ledger.Keys (KeyHash, KeyRole (BlockIssuer), VKey (..), - hashKey, signedDSIGN) -import Cardano.Protocol.TPraos.BHeader (HashHeader (..), - PrevHash (..), checkLeaderNatValue) -import Cardano.Protocol.TPraos.OCert (KESPeriod (..), OCert (..), - OCertSignable (..)) -import Cardano.Slotting.Block (BlockNo (..)) -import Cardano.Slotting.Slot (SlotNo (..)) -import Data.Aeson (defaultOptions, (.:), (.=)) -import qualified Data.Aeson as Json -import Data.Bifunctor (second) -import Data.ByteString (ByteString) -import qualified Data.ByteString as BS -import qualified Data.ByteString.Base16 as Base16 -import qualified Data.ByteString.Lazy as LBS -import Data.Coerce (coerce) -import Data.Foldable (toList) -import qualified Data.Map as Map -import Data.Maybe (fromJust, fromMaybe) -import Data.Proxy (Proxy (..)) -import Data.Ratio ((%)) -import Data.Text.Encoding 
(decodeUtf8, encodeUtf8) -import Data.Word (Word64) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Protocol.Praos (PraosValidationErr (..)) -import Ouroboros.Consensus.Protocol.Praos.Header (Header, - HeaderBody (..), pattern Header) -import Ouroboros.Consensus.Protocol.Praos.VRF (InputVRF, mkInputVRF, - vrfLeaderValue) -import Ouroboros.Consensus.Protocol.TPraos (StandardCrypto) -import Test.QuickCheck (Gen, arbitrary, choose, frequency, generate, - getPositive, resize, sized, suchThat, vectorOf) +import Cardano.Crypto.DSIGN + ( DSIGNAlgorithm (SignKeyDSIGN, genKeyDSIGN, rawSerialiseSignKeyDSIGN) + , Ed25519DSIGN + , deriveVerKeyDSIGN + , rawDeserialiseSignKeyDSIGN + ) +import Cardano.Crypto.Hash + ( Blake2b_256 + , Hash + , hashFromBytes + , hashToBytes + , hashWith + ) +import Cardano.Crypto.KES qualified as KES +import Cardano.Crypto.Seed (mkSeedFromBytes) +import Cardano.Crypto.VRF + ( deriveVerKeyVRF + , hashVerKeyVRF + , rawDeserialiseSignKeyVRF + , rawSerialiseSignKeyVRF + ) +import Cardano.Crypto.VRF qualified as VRF +import Cardano.Crypto.VRF.Praos (skToBatchCompat) +import Cardano.Crypto.VRF.Praos qualified as VRF +import Cardano.Ledger.BaseTypes + ( ActiveSlotCoeff + , Nonce (..) + , PositiveUnitInterval + , ProtVer (..) + , Version + , activeSlotVal + , boundRational + , mkActiveSlotCoeff + , natVersion + ) +import Cardano.Ledger.Binary + ( MaxVersion + , decCBOR + , decodeFullAnnotator + , serialize' + ) +import Cardano.Ledger.Keys + ( KeyHash + , KeyRole (BlockIssuer) + , VKey (..) + , hashKey + , signedDSIGN + ) +import Cardano.Protocol.TPraos.BHeader + ( HashHeader (..) + , PrevHash (..) + , checkLeaderNatValue + ) +import Cardano.Protocol.TPraos.OCert + ( KESPeriod (..) + , OCert (..) + , OCertSignable (..) 
+ ) +import Cardano.Slotting.Block (BlockNo (..)) +import Cardano.Slotting.Slot (SlotNo (..)) +import Data.Aeson (defaultOptions, (.:), (.=)) +import Data.Aeson qualified as Json +import Data.Bifunctor (second) +import Data.ByteString (ByteString) +import Data.ByteString qualified as BS +import Data.ByteString.Base16 qualified as Base16 +import Data.ByteString.Lazy qualified as LBS +import Data.Coerce (coerce) +import Data.Foldable (toList) +import Data.Map qualified as Map +import Data.Maybe (fromJust, fromMaybe) +import Data.Proxy (Proxy (..)) +import Data.Ratio ((%)) +import Data.Text.Encoding (decodeUtf8, encodeUtf8) +import Data.Word (Word64) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Protocol.Praos (PraosValidationErr (..)) +import Ouroboros.Consensus.Protocol.Praos.Header + ( Header + , HeaderBody (..) + , pattern Header + ) +import Ouroboros.Consensus.Protocol.Praos.VRF + ( InputVRF + , mkInputVRF + , vrfLeaderValue + ) +import Ouroboros.Consensus.Protocol.TPraos (StandardCrypto) +import Test.QuickCheck + ( Gen + , arbitrary + , choose + , frequency + , generate + , getPositive + , resize + , sized + , suchThat + , vectorOf + ) -- * Test Vectors @@ -81,176 +131,179 @@ testVersion :: Version testVersion = natVersion @MaxVersion newtype Sample = Sample {sample :: [(GeneratorContext, MutatedHeader)]} - deriving (Show, Eq) + deriving (Show, Eq) instance Json.ToJSON Sample where - toJSON Sample{sample} = Json.toJSON sample + toJSON Sample{sample} = Json.toJSON sample instance Json.FromJSON Sample where - parseJSON = Json.withArray "Sample" $ \arr -> do - Sample . toList <$> traverse Json.parseJSON arr + parseJSON = Json.withArray "Sample" $ \arr -> do + Sample . 
toList <$> traverse Json.parseJSON arr genSample :: Gen Sample genSample = do - context <- genContext - sample <- sized $ \n -> vectorOf n $ genMutatedHeader context - pure $ Sample{sample} + context <- genContext + sample <- sized $ \n -> vectorOf n $ genMutatedHeader context + pure $ Sample{sample} genMutatedHeader :: GeneratorContext -> Gen (GeneratorContext, MutatedHeader) genMutatedHeader context = do - header <- genHeader context - mutation <- genMutation header - mutate context header mutation + header <- genHeader context + mutation <- genMutation header + mutate context header mutation -mutate :: GeneratorContext -> Header StandardCrypto -> Mutation -> Gen (GeneratorContext, MutatedHeader) +mutate :: + GeneratorContext -> Header StandardCrypto -> Mutation -> Gen (GeneratorContext, MutatedHeader) mutate context header mutation = - second (\h -> MutatedHeader{header = h, mutation}) <$> mutated - where - mutated = - case mutation of - NoMutation -> pure (context, header) - MutateKESKey -> do - let Header body _ = header - newKESSignKey <- newKESSigningKey <$> gen32Bytes - KESPeriod kesPeriod <- genValidKESPeriod (hbSlotNo body) praosSlotsPerKESPeriod - let sig' = KES.unsoundPureSignKES () kesPeriod body newKESSignKey - pure (context, Header body (KES.SignedKES sig')) - MutateColdKey -> do - let Header body _ = header - newColdSignKey <- genKeyDSIGN . 
mkSeedFromBytes <$> gen32Bytes - (hbOCert, KESPeriod kesPeriod) <- genCert (hbSlotNo body) context{coldSignKey = newColdSignKey} - let newBody = body{hbOCert} - let sig' = KES.unsoundPureSignKES () kesPeriod newBody kesSignKey - pure (context, Header newBody (KES.SignedKES sig')) - MutateKESPeriod -> do - let Header body _ = header - KESPeriod kesPeriod' <- genKESPeriodAfterLimit (hbSlotNo body) praosSlotsPerKESPeriod - let newKESPeriod = KESPeriod kesPeriod' - let oldOCert@OCert{ocertVkHot, ocertN} = hbOCert body - let newBody = - body - { hbOCert = - oldOCert - { ocertKESPeriod = newKESPeriod - , ocertSigma = signedDSIGN coldSignKey $ OCertSignable ocertVkHot ocertN newKESPeriod - } - } - let sig' = KES.unsoundPureSignKES () kesPeriod' newBody kesSignKey - pure (context, Header newBody (KES.SignedKES sig')) - MutateKESPeriodBefore -> do - let Header body _ = header - OCert{ocertKESPeriod = KESPeriod kesPeriod} = hbOCert body - newSlotNo <- genSlotAfterKESPeriod (fromIntegral kesPeriod) praosMaxKESEvo praosSlotsPerKESPeriod - let rho' = mkInputVRF newSlotNo nonce - period' = unSlotNo newSlotNo `div` praosSlotsPerKESPeriod - hbVrfRes = VRF.evalCertified () rho' vrfSignKey - newBody = body{hbSlotNo = newSlotNo, hbVrfRes} - sig' = KES.unsoundPureSignKES () (fromIntegral period' - kesPeriod) newBody kesSignKey - pure (context, Header newBody (KES.SignedKES sig')) - MutateCounterOver1 -> do - let poolId = coerce $ hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey - Header body _ = header - OCert{ocertN} = hbOCert body - newCounter <- choose (0, ocertN - 2) - let context' = context{ocertCounters = Map.insert poolId newCounter (ocertCounters context)} - pure (context', header) - MutateCounterUnder -> do - let poolId = coerce $ hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey - oldCounter = fromMaybe 0 $ Map.lookup poolId (ocertCounters context) - newCounter <- arbitrary `suchThat` (> oldCounter) - let context' = context{ocertCounters = Map.insert poolId newCounter 
(ocertCounters context)} - pure (context', header) - GeneratorContext{praosSlotsPerKESPeriod, praosMaxKESEvo, kesSignKey, vrfSignKey, coldSignKey, nonce} = context + second (\h -> MutatedHeader{header = h, mutation}) <$> mutated + where + mutated = + case mutation of + NoMutation -> pure (context, header) + MutateKESKey -> do + let Header body _ = header + newKESSignKey <- newKESSigningKey <$> gen32Bytes + KESPeriod kesPeriod <- genValidKESPeriod (hbSlotNo body) praosSlotsPerKESPeriod + let sig' = KES.unsoundPureSignKES () kesPeriod body newKESSignKey + pure (context, Header body (KES.SignedKES sig')) + MutateColdKey -> do + let Header body _ = header + newColdSignKey <- genKeyDSIGN . mkSeedFromBytes <$> gen32Bytes + (hbOCert, KESPeriod kesPeriod) <- genCert (hbSlotNo body) context{coldSignKey = newColdSignKey} + let newBody = body{hbOCert} + let sig' = KES.unsoundPureSignKES () kesPeriod newBody kesSignKey + pure (context, Header newBody (KES.SignedKES sig')) + MutateKESPeriod -> do + let Header body _ = header + KESPeriod kesPeriod' <- genKESPeriodAfterLimit (hbSlotNo body) praosSlotsPerKESPeriod + let newKESPeriod = KESPeriod kesPeriod' + let oldOCert@OCert{ocertVkHot, ocertN} = hbOCert body + let newBody = + body + { hbOCert = + oldOCert + { ocertKESPeriod = newKESPeriod + , ocertSigma = signedDSIGN coldSignKey $ OCertSignable ocertVkHot ocertN newKESPeriod + } + } + let sig' = KES.unsoundPureSignKES () kesPeriod' newBody kesSignKey + pure (context, Header newBody (KES.SignedKES sig')) + MutateKESPeriodBefore -> do + let Header body _ = header + OCert{ocertKESPeriod = KESPeriod kesPeriod} = hbOCert body + newSlotNo <- genSlotAfterKESPeriod (fromIntegral kesPeriod) praosMaxKESEvo praosSlotsPerKESPeriod + let rho' = mkInputVRF newSlotNo nonce + period' = unSlotNo newSlotNo `div` praosSlotsPerKESPeriod + hbVrfRes = VRF.evalCertified () rho' vrfSignKey + newBody = body{hbSlotNo = newSlotNo, hbVrfRes} + sig' = KES.unsoundPureSignKES () (fromIntegral period' - 
kesPeriod) newBody kesSignKey + pure (context, Header newBody (KES.SignedKES sig')) + MutateCounterOver1 -> do + let poolId = coerce $ hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey + Header body _ = header + OCert{ocertN} = hbOCert body + newCounter <- choose (0, ocertN - 2) + let context' = context{ocertCounters = Map.insert poolId newCounter (ocertCounters context)} + pure (context', header) + MutateCounterUnder -> do + let poolId = coerce $ hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey + oldCounter = fromMaybe 0 $ Map.lookup poolId (ocertCounters context) + newCounter <- arbitrary `suchThat` (> oldCounter) + let context' = context{ocertCounters = Map.insert poolId newCounter (ocertCounters context)} + pure (context', header) + GeneratorContext{praosSlotsPerKESPeriod, praosMaxKESEvo, kesSignKey, vrfSignKey, coldSignKey, nonce} = context data Mutation - = -- | No mutation - NoMutation - | -- | Mutate the KES key, ie. sign the header with a different KES key. - MutateKESKey - | -- | Mutate the cold key, ie. sign the operational certificate with a different cold key. - MutateColdKey - | -- | Mutate the KES period in the operational certificate to be - -- after the start of the KES period. - MutateKESPeriod - | -- | Mutate KES period to be before the current KES period - MutateKESPeriodBefore - | -- | Mutate certificate counter to be greater than expected - MutateCounterOver1 - | -- | Mutate certificate counter to be lower than expected - MutateCounterUnder - deriving (Eq, Show, Generic) + = -- | No mutation + NoMutation + | -- | Mutate the KES key, ie. sign the header with a different KES key. + MutateKESKey + | -- | Mutate the cold key, ie. sign the operational certificate with a different cold key. + MutateColdKey + | -- | Mutate the KES period in the operational certificate to be + -- after the start of the KES period. 
+ MutateKESPeriod + | -- | Mutate KES period to be before the current KES period + MutateKESPeriodBefore + | -- | Mutate certificate counter to be greater than expected + MutateCounterOver1 + | -- | Mutate certificate counter to be lower than expected + MutateCounterUnder + deriving (Eq, Show, Generic) instance Json.ToJSON Mutation where - toEncoding = Json.genericToEncoding defaultOptions + toEncoding = Json.genericToEncoding defaultOptions instance Json.FromJSON Mutation expectedError :: Mutation -> PraosValidationErr StandardCrypto -> Bool expectedError = \case - NoMutation -> const False - MutateKESKey -> \case - InvalidKesSignatureOCERT{} -> True - _ -> False - MutateColdKey -> \case - InvalidSignatureOCERT{} -> True - _ -> False - MutateKESPeriod -> \case - KESBeforeStartOCERT{} -> True - _ -> False - MutateKESPeriodBefore -> \case - KESAfterEndOCERT{} -> True - _ -> False - MutateCounterOver1 -> \case - CounterOverIncrementedOCERT{} -> True - _ -> False - MutateCounterUnder -> \case - CounterTooSmallOCERT{} -> True - _ -> False + NoMutation -> const False + MutateKESKey -> \case + InvalidKesSignatureOCERT{} -> True + _ -> False + MutateColdKey -> \case + InvalidSignatureOCERT{} -> True + _ -> False + MutateKESPeriod -> \case + KESBeforeStartOCERT{} -> True + _ -> False + MutateKESPeriodBefore -> \case + KESAfterEndOCERT{} -> True + _ -> False + MutateCounterOver1 -> \case + CounterOverIncrementedOCERT{} -> True + _ -> False + MutateCounterUnder -> \case + CounterTooSmallOCERT{} -> True + _ -> False genMutation :: Header StandardCrypto -> Gen Mutation genMutation header = - frequency $ - [ (4, pure NoMutation) - , (1, pure MutateKESKey) - , (1, pure MutateColdKey) - , (1, pure MutateKESPeriod) - , (1, pure MutateKESPeriodBefore) - , (1, pure MutateCounterUnder) - ] - <> maybeCounterOver1 - where - Header body _ = header - OCert{ocertN} = hbOCert body - maybeCounterOver1 = - if ocertN > 10 - then [(1, pure MutateCounterOver1)] - else [] + frequency $ + [ (4, 
pure NoMutation) + , (1, pure MutateKESKey) + , (1, pure MutateColdKey) + , (1, pure MutateKESPeriod) + , (1, pure MutateKESPeriodBefore) + , (1, pure MutateCounterUnder) + ] + <> maybeCounterOver1 + where + Header body _ = header + OCert{ocertN} = hbOCert body + maybeCounterOver1 = + if ocertN > 10 + then [(1, pure MutateCounterOver1)] + else [] data MutatedHeader = MutatedHeader - { header :: !(Header StandardCrypto) - , mutation :: !Mutation - } - deriving (Show, Eq) + { header :: !(Header StandardCrypto) + , mutation :: !Mutation + } + deriving (Show, Eq) instance Json.ToJSON MutatedHeader where - toJSON MutatedHeader{header, mutation} = - Json.object - [ "header" .= cborHeader - , "mutation" .= mutation - ] - where - cborHeader = decodeUtf8 . Base16.encode $ serialize' testVersion header + toJSON MutatedHeader{header, mutation} = + Json.object + [ "header" .= cborHeader + , "mutation" .= mutation + ] + where + cborHeader = decodeUtf8 . Base16.encode $ serialize' testVersion header instance Json.FromJSON MutatedHeader where - parseJSON = Json.withObject "MutatedHeader" $ \obj -> do - cborHeader <- obj .: "header" - mutation <- obj .: "mutation" - header <- parseHeader cborHeader - pure MutatedHeader{header, mutation} - where - parseHeader cborHeader = do - let headerBytes = Base16.decodeLenient (encodeUtf8 cborHeader) - either (fail . show) pure $ decodeFullAnnotator @(Header StandardCrypto) testVersion "Header" decCBOR $ LBS.fromStrict headerBytes + parseJSON = Json.withObject "MutatedHeader" $ \obj -> do + cborHeader <- obj .: "header" + mutation <- obj .: "mutation" + header <- parseHeader cborHeader + pure MutatedHeader{header, mutation} + where + parseHeader cborHeader = do + let headerBytes = Base16.decodeLenient (encodeUtf8 cborHeader) + either (fail . 
show) pure $ + decodeFullAnnotator @(Header StandardCrypto) testVersion "Header" decCBOR $ + LBS.fromStrict headerBytes -- * Generators type KESKey = KES.UnsoundPureSignKeyKES (KES.Sum6KES Ed25519DSIGN Blake2b_256) @@ -262,191 +315,195 @@ newKESSigningKey :: ByteString -> KESKey newKESSigningKey = KES.unsoundPureGenKeyKES . mkSeedFromBytes data GeneratorContext = GeneratorContext - { praosSlotsPerKESPeriod :: !Word64 - , praosMaxKESEvo :: !Word64 - , kesSignKey :: !KESKey - , coldSignKey :: !(SignKeyDSIGN Ed25519DSIGN) - , vrfSignKey :: !(VRF.SignKeyVRF VRF.PraosVRF) - , nonce :: !Nonce - , ocertCounters :: !(Map.Map (KeyHash BlockIssuer) Word64) - , activeSlotCoeff :: !ActiveSlotCoeff - } - deriving (Show) + { praosSlotsPerKESPeriod :: !Word64 + , praosMaxKESEvo :: !Word64 + , kesSignKey :: !KESKey + , coldSignKey :: !(SignKeyDSIGN Ed25519DSIGN) + , vrfSignKey :: !(VRF.SignKeyVRF VRF.PraosVRF) + , nonce :: !Nonce + , ocertCounters :: !(Map.Map (KeyHash BlockIssuer) Word64) + , activeSlotCoeff :: !ActiveSlotCoeff + } + deriving Show instance Eq GeneratorContext where - a == b = - praosSlotsPerKESPeriod a == praosSlotsPerKESPeriod b - && praosMaxKESEvo a == praosMaxKESEvo b - && serialize' testVersion (KES.encodeUnsoundPureSignKeyKES (kesSignKey a)) == - serialize' testVersion (KES.encodeUnsoundPureSignKeyKES (kesSignKey b)) - && coldSignKey a == coldSignKey b - && vrfSignKey a == vrfSignKey b - && nonce a == nonce b + a == b = + praosSlotsPerKESPeriod a == praosSlotsPerKESPeriod b + && praosMaxKESEvo a == praosMaxKESEvo b + && serialize' testVersion (KES.encodeUnsoundPureSignKeyKES (kesSignKey a)) + == serialize' testVersion (KES.encodeUnsoundPureSignKeyKES (kesSignKey b)) + && coldSignKey a == coldSignKey b + && vrfSignKey a == vrfSignKey b + && nonce a == nonce b instance Json.ToJSON GeneratorContext where - toJSON GeneratorContext{..} = - Json.object - [ "praosSlotsPerKESPeriod" .= praosSlotsPerKESPeriod - , "praosMaxKESEvo" .= praosMaxKESEvo - , "kesSignKey" .= 
rawKesSignKey - , "coldSignKey" .= rawColdSignKey - , "vrfSignKey" .= rawVrfSignKey - , "vrfVKeyHash" .= rawVrVKeyHash - , "nonce" .= rawNonce - , "ocertCounters" .= ocertCounters - , "activeSlotCoeff" .= activeSlotVal activeSlotCoeff - ] - where - rawKesSignKey = decodeUtf8 . Base16.encode $ KES.rawSerialiseUnsoundPureSignKeyKES kesSignKey - rawColdSignKey = decodeUtf8 . Base16.encode $ rawSerialiseSignKeyDSIGN coldSignKey - rawVrfSignKey = decodeUtf8 . Base16.encode $ rawSerialiseSignKeyVRF $ skToBatchCompat vrfSignKey - rawVrVKeyHash = decodeUtf8 . Base16.encode $ hashToBytes $ hashVerKeyVRF @_ @Blake2b_256 $ deriveVerKeyVRF vrfSignKey - rawNonce = case nonce of - NeutralNonce -> decodeUtf8 . Base16.encode $ BS.replicate 32 0 - Nonce hashNonce -> decodeUtf8 . Base16.encode $ hashToBytes hashNonce + toJSON GeneratorContext{..} = + Json.object + [ "praosSlotsPerKESPeriod" .= praosSlotsPerKESPeriod + , "praosMaxKESEvo" .= praosMaxKESEvo + , "kesSignKey" .= rawKesSignKey + , "coldSignKey" .= rawColdSignKey + , "vrfSignKey" .= rawVrfSignKey + , "vrfVKeyHash" .= rawVrVKeyHash + , "nonce" .= rawNonce + , "ocertCounters" .= ocertCounters + , "activeSlotCoeff" .= activeSlotVal activeSlotCoeff + ] + where + rawKesSignKey = decodeUtf8 . Base16.encode $ KES.rawSerialiseUnsoundPureSignKeyKES kesSignKey + rawColdSignKey = decodeUtf8 . Base16.encode $ rawSerialiseSignKeyDSIGN coldSignKey + rawVrfSignKey = decodeUtf8 . Base16.encode $ rawSerialiseSignKeyVRF $ skToBatchCompat vrfSignKey + rawVrVKeyHash = + decodeUtf8 . Base16.encode $ + hashToBytes $ + hashVerKeyVRF @_ @Blake2b_256 $ + deriveVerKeyVRF vrfSignKey + rawNonce = case nonce of + NeutralNonce -> decodeUtf8 . Base16.encode $ BS.replicate 32 0 + Nonce hashNonce -> decodeUtf8 . 
Base16.encode $ hashToBytes hashNonce instance Json.FromJSON GeneratorContext where - parseJSON = Json.withObject "GeneratorContext" $ \obj -> do - praosSlotsPerKESPeriod <- obj .: "praosSlotsPerKESPeriod" - praosMaxKESEvo <- obj .: "praosMaxKESEvo" - rawKesSignKey <- obj .: "kesSignKey" - rawColdSignKey <- obj .: "coldSignKey" - rawVrfSignKey <- obj .: "vrfSignKey" - cborNonce <- obj .: "nonce" - ocertCounters <- obj .: "ocertCounters" - kesSignKey <- parseKesSignKey rawKesSignKey - coldSignKey <- parseColdSignKey rawColdSignKey - vrfSignKey <- parseVrfSignKey rawVrfSignKey - nonce <- parseNonce cborNonce - activeSlotCoeff <- mkActiveSlotCoeff <$> obj .: "activeSlotCoeff" - pure GeneratorContext{..} - where - parseNonce rawNonce = - case Base16.decode (encodeUtf8 rawNonce) of - Left _ -> pure NeutralNonce - Right nonceBytes -> Nonce <$> maybe (fail "invalid bytes for hash") pure (hashFromBytes nonceBytes) - parseColdSignKey rawKey = do - case Base16.decode (encodeUtf8 rawKey) of - Left err -> fail err - Right keyBytes -> - case rawDeserialiseSignKeyDSIGN keyBytes of - Nothing -> fail $ "Invalid cold key bytes: " <> show rawKey - Just key -> pure key - parseKesSignKey rawKey = do - case Base16.decode (encodeUtf8 rawKey) of - Left err -> fail err - Right keyBytes -> - case KES.rawDeserialiseUnsoundPureSignKeyKES keyBytes of - Nothing -> fail $ "Invalid KES key bytes: " <> show rawKey - Just key -> pure key - parseVrfSignKey rawKey = do - case Base16.decode (encodeUtf8 rawKey) of - Left err -> fail err - Right keyBytes -> - case rawDeserialiseSignKeyVRF keyBytes of - Nothing -> fail $ "Invalid VRF key bytes: " <> show rawKey - Just key -> pure key + parseJSON = Json.withObject "GeneratorContext" $ \obj -> do + praosSlotsPerKESPeriod <- obj .: "praosSlotsPerKESPeriod" + praosMaxKESEvo <- obj .: "praosMaxKESEvo" + rawKesSignKey <- obj .: "kesSignKey" + rawColdSignKey <- obj .: "coldSignKey" + rawVrfSignKey <- obj .: "vrfSignKey" + cborNonce <- obj .: "nonce" + 
ocertCounters <- obj .: "ocertCounters" + kesSignKey <- parseKesSignKey rawKesSignKey + coldSignKey <- parseColdSignKey rawColdSignKey + vrfSignKey <- parseVrfSignKey rawVrfSignKey + nonce <- parseNonce cborNonce + activeSlotCoeff <- mkActiveSlotCoeff <$> obj .: "activeSlotCoeff" + pure GeneratorContext{..} + where + parseNonce rawNonce = + case Base16.decode (encodeUtf8 rawNonce) of + Left _ -> pure NeutralNonce + Right nonceBytes -> Nonce <$> maybe (fail "invalid bytes for hash") pure (hashFromBytes nonceBytes) + parseColdSignKey rawKey = do + case Base16.decode (encodeUtf8 rawKey) of + Left err -> fail err + Right keyBytes -> + case rawDeserialiseSignKeyDSIGN keyBytes of + Nothing -> fail $ "Invalid cold key bytes: " <> show rawKey + Just key -> pure key + parseKesSignKey rawKey = do + case Base16.decode (encodeUtf8 rawKey) of + Left err -> fail err + Right keyBytes -> + case KES.rawDeserialiseUnsoundPureSignKeyKES keyBytes of + Nothing -> fail $ "Invalid KES key bytes: " <> show rawKey + Just key -> pure key + parseVrfSignKey rawKey = do + case Base16.decode (encodeUtf8 rawKey) of + Left err -> fail err + Right keyBytes -> + case rawDeserialiseSignKeyVRF keyBytes of + Nothing -> fail $ "Invalid VRF key bytes: " <> show rawKey + Just key -> pure key genContext :: Gen GeneratorContext genContext = do - praosSlotsPerKESPeriod <- choose (100, 10000) - praosMaxKESEvo <- choose (10, 1000) - ocertCounter <- choose (10, 100) - kesSignKey <- newKESSigningKey <$> gen32Bytes - coldSignKey <- genKeyDSIGN . 
mkSeedFromBytes <$> gen32Bytes - vrfSignKey <- fst <$> newVRFSigningKey <$> gen32Bytes - nonce <- Nonce <$> genHash - let poolId = coerce $ hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey - ocertCounters = Map.fromList [(poolId, ocertCounter)] - activeSlotCoeff <- genActiveSlotCoeff - pure $ GeneratorContext{..} + praosSlotsPerKESPeriod <- choose (100, 10000) + praosMaxKESEvo <- choose (10, 1000) + ocertCounter <- choose (10, 100) + kesSignKey <- newKESSigningKey <$> gen32Bytes + coldSignKey <- genKeyDSIGN . mkSeedFromBytes <$> gen32Bytes + vrfSignKey <- fst <$> newVRFSigningKey <$> gen32Bytes + nonce <- Nonce <$> genHash + let poolId = coerce $ hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey + ocertCounters = Map.fromList [(poolId, ocertCounter)] + activeSlotCoeff <- genActiveSlotCoeff + pure $ GeneratorContext{..} genActiveSlotCoeff :: Gen ActiveSlotCoeff genActiveSlotCoeff = do - choose (1, 100) >>= \n -> pure $ activeSlotCoeff (n % 100) - where - activeSlotCoeff = mkActiveSlotCoeff . fromJust . boundRational @PositiveUnitInterval - -{- | Generate a well-formed header - -The header is signed with the KES key, and all the signing keys -generated for the purpose of producing the header are returned. --} + choose (1, 100) >>= \n -> pure $ activeSlotCoeff (n % 100) + where + activeSlotCoeff = mkActiveSlotCoeff . fromJust . boundRational @PositiveUnitInterval + +-- | Generate a well-formed header +-- +-- The header is signed with the KES key, and all the signing keys +-- generated for the purpose of producing the header are returned. 
 genHeader :: GeneratorContext -> Gen (Header StandardCrypto)
 genHeader context = do
-    (body, KESPeriod kesPeriod) <- genHeaderBody context
-    let sign = KES.SignedKES $ KES.unsoundPureSignKES () kesPeriod body kesSignKey
-    pure $ (Header body sign)
-  where
-    GeneratorContext{kesSignKey} = context
+  (body, KESPeriod kesPeriod) <- genHeaderBody context
+  let sign = KES.SignedKES $ KES.unsoundPureSignKES () kesPeriod body kesSignKey
+  pure $ (Header body sign)
+ where
+  GeneratorContext{kesSignKey} = context

 genHeaderBody :: GeneratorContext -> Gen (HeaderBody StandardCrypto, KESPeriod)
 genHeaderBody context = do
-    hbBlockNo <- BlockNo <$> arbitrary
-    (hbSlotNo, hbVrfRes, hbVrfVk) <- genLeadingSlot context
-    hbPrev <- BlockHash . HashHeader <$> genHash
-    let hbVk = VKey $ deriveVerKeyDSIGN coldSignKey
-    hbBodySize <- choose (1000, 90000)
-    hbBodyHash <- genHash
-    (hbOCert, kesPeriod) <- genCert hbSlotNo context
-    let hbProtVer = protocolVersionZero
-        headerBody = HeaderBody{..}
-    pure $ (headerBody, kesPeriod)
-  where
-    GeneratorContext{coldSignKey} = context
-
-genLeadingSlot :: GeneratorContext -> Gen (SlotNo, VRF.CertifiedVRF VRF.PraosVRF InputVRF, VRF.VerKeyVRF VRF.PraosVRF)
+  hbBlockNo <- BlockNo <$> arbitrary
+  (hbSlotNo, hbVrfRes, hbVrfVk) <- genLeadingSlot context
+  hbPrev <- BlockHash . HashHeader <$> genHash
+  let hbVk = VKey $ deriveVerKeyDSIGN coldSignKey
+  hbBodySize <- choose (1000, 90000)
+  hbBodyHash <- genHash
+  (hbOCert, kesPeriod) <- genCert hbSlotNo context
+  let hbProtVer = protocolVersionZero
+      headerBody = HeaderBody{..}
+  pure $ (headerBody, kesPeriod)
+ where
+  GeneratorContext{coldSignKey} = context
+
+genLeadingSlot ::
+  GeneratorContext -> Gen (SlotNo, VRF.CertifiedVRF VRF.PraosVRF InputVRF, VRF.VerKeyVRF VRF.PraosVRF)
 genLeadingSlot context = do
-    slotNo <- SlotNo . getPositive <$> arbitrary `suchThat` isLeader
-    let rho' = mkInputVRF slotNo nonce
-        hbVrfRes = VRF.evalCertified () rho' vrfSignKey
-        hbVrfVk = deriveVerKeyVRF vrfSignKey
-    pure (slotNo, hbVrfRes, hbVrfVk)
-  where
-    isLeader n =
-      let slotNo = SlotNo . getPositive $ n
-          rho' = mkInputVRF slotNo nonce
-          certified = VRF.evalCertified () rho' vrfSignKey
-      in checkLeaderNatValue (vrfLeaderValue (Proxy @StandardCrypto) certified) sigma activeSlotCoeff
-    sigma = 1
-    GeneratorContext{vrfSignKey, nonce, activeSlotCoeff} = context
+  slotNo <- SlotNo . getPositive <$> arbitrary `suchThat` isLeader
+  let rho' = mkInputVRF slotNo nonce
+      hbVrfRes = VRF.evalCertified () rho' vrfSignKey
+      hbVrfVk = deriveVerKeyVRF vrfSignKey
+  pure (slotNo, hbVrfRes, hbVrfVk)
+ where
+  isLeader n =
+    let slotNo = SlotNo . getPositive $ n
+        rho' = mkInputVRF slotNo nonce
+        certified = VRF.evalCertified () rho' vrfSignKey
+     in checkLeaderNatValue (vrfLeaderValue (Proxy @StandardCrypto) certified) sigma activeSlotCoeff
+  sigma = 1
+  GeneratorContext{vrfSignKey, nonce, activeSlotCoeff} = context

 protocolVersionZero :: ProtVer
 protocolVersionZero = ProtVer versionZero 0
-  where
-    versionZero :: Version
-    versionZero = natVersion @0
+ where
+  versionZero :: Version
+  versionZero = natVersion @0

 genCert :: SlotNo -> GeneratorContext -> Gen (OCert StandardCrypto, KESPeriod)
 genCert slotNo context = do
-    let ocertVkHot = KES.unsoundPureDeriveVerKeyKES kesSignKey
-        poolId = coerce $ hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey
-        ocertN = fromMaybe 0 $ Map.lookup poolId ocertCounters
-    ocertKESPeriod <- genValidKESPeriod slotNo praosSlotsPerKESPeriod
-    let ocertSigma = signedDSIGN coldSignKey $ OCertSignable ocertVkHot ocertN ocertKESPeriod
-    pure (OCert{..}, ocertKESPeriod)
-  where
-    GeneratorContext{kesSignKey, praosSlotsPerKESPeriod, coldSignKey, ocertCounters} = context
+  let ocertVkHot = KES.unsoundPureDeriveVerKeyKES kesSignKey
+      poolId = coerce $ hashKey $ VKey $ deriveVerKeyDSIGN coldSignKey
+      ocertN = fromMaybe 0 $ Map.lookup poolId ocertCounters
+  ocertKESPeriod <- genValidKESPeriod slotNo praosSlotsPerKESPeriod
+  let ocertSigma = signedDSIGN coldSignKey $ OCertSignable ocertVkHot ocertN ocertKESPeriod
+  pure (OCert{..}, ocertKESPeriod)
+ where
+  GeneratorContext{kesSignKey, praosSlotsPerKESPeriod, coldSignKey, ocertCounters} = context

 genValidKESPeriod :: SlotNo -> Word64 -> Gen KESPeriod
 genValidKESPeriod slotNo praosSlotsPerKESPeriod =
-    pure $ KESPeriod $ fromIntegral $ unSlotNo slotNo `div` praosSlotsPerKESPeriod
+  pure $ KESPeriod $ fromIntegral $ unSlotNo slotNo `div` praosSlotsPerKESPeriod

 genKESPeriodAfterLimit :: SlotNo -> Word64 -> Gen KESPeriod
 genKESPeriodAfterLimit slotNo praosSlotsPerKESPeriod =
-    KESPeriod . fromIntegral <$> arbitrary `suchThat` (> currentKESPeriod)
-  where
-    currentKESPeriod = unSlotNo slotNo `div` praosSlotsPerKESPeriod
+  KESPeriod . fromIntegral <$> arbitrary `suchThat` (> currentKESPeriod)
+ where
+  currentKESPeriod = unSlotNo slotNo `div` praosSlotsPerKESPeriod

 genSlotAfterKESPeriod :: Word64 -> Word64 -> Word64 -> Gen SlotNo
 genSlotAfterKESPeriod ocertKESPeriod praosMaxKESEvo praosSlotsPerKESPeriod = do
-    -- kp_ < c0_ + praosMaxKESEvo
-    -- ! =>
-    -- kp >= c0_ + praosMaxKESEvo
-    -- c0 <= kp - praosMaxKESEvo
-    SlotNo <$> arbitrary `suchThat` (> threshold)
-  where
-    threshold = (ocertKESPeriod + praosMaxKESEvo + 1) * praosSlotsPerKESPeriod
+  -- kp_ < c0_ + praosMaxKESEvo
+  -- ! =>
+  -- kp >= c0_ + praosMaxKESEvo
+  -- c0 <= kp - praosMaxKESEvo
+  SlotNo <$> arbitrary `suchThat` (> threshold)
+ where
+  threshold = (ocertKESPeriod + praosMaxKESEvo + 1) * praosSlotsPerKESPeriod

 genHash :: Gen (Hash Blake2b_256 a)
 genHash = coerce . hashWith id <$> gen32Bytes
diff --git a/ouroboros-consensus-protocol/test/protocol-test/Main.hs b/ouroboros-consensus-protocol/test/protocol-test/Main.hs
index be4b967fea..876a3d2225 100644
--- a/ouroboros-consensus-protocol/test/protocol-test/Main.hs
+++ b/ouroboros-consensus-protocol/test/protocol-test/Main.hs
@@ -1,14 +1,15 @@
 module Main (main) where

-import qualified Test.Consensus.Protocol.Praos.SelectView
-import           Test.Tasty
-import           Test.Util.TestEnv
+import Test.Consensus.Protocol.Praos.SelectView qualified
+import Test.Tasty
+import Test.Util.TestEnv

 main :: IO ()
 main = defaultMainWithTestEnv defaultTestEnvConfig tests

 tests :: TestTree
 tests =
-  testGroup "protocol"
-  [ Test.Consensus.Protocol.Praos.SelectView.tests
-  ]
+  testGroup
+    "protocol"
+    [ Test.Consensus.Protocol.Praos.SelectView.tests
+    ]
diff --git a/ouroboros-consensus-protocol/test/protocol-test/Test/Consensus/Protocol/Praos/SelectView.hs b/ouroboros-consensus-protocol/test/protocol-test/Test/Consensus/Protocol/Praos/SelectView.hs
index b268bd14ea..19618966c4 100644
--- a/ouroboros-consensus-protocol/test/protocol-test/Test/Consensus/Protocol/Praos/SelectView.hs
+++ b/ouroboros-consensus-protocol/test/protocol-test/Test/Consensus/Protocol/Praos/SelectView.hs
@@ -4,80 +4,84 @@
 {-# LANGUAGE NamedFieldPuns #-}
 {-# LANGUAGE ScopedTypeVariables #-}
 {-# LANGUAGE TypeApplications #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 module Test.Consensus.Protocol.Praos.SelectView (tests) where

-import qualified Cardano.Crypto.Hash as Crypto
-import qualified Cardano.Crypto.Util as Crypto
-import           Cardano.Crypto.VRF (OutputVRF, mkTestOutputVRF)
-import qualified Cardano.Ledger.Keys as SL
-import           Cardano.Protocol.Crypto (Crypto (..), StandardCrypto)
-import           Codec.Serialise (encode)
-import           Control.Monad
-import           Data.Containers.ListUtils (nubOrdOn)
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Protocol.Praos.Common
-import           Test.Cardano.Ledger.Binary.Arbitrary ()
-import           Test.Ouroboros.Consensus.Protocol
-import           Test.QuickCheck.Gen (Gen (..))
-import           Test.QuickCheck.Random (mkQCGen)
-import           Test.Tasty
-import           Test.Tasty.QuickCheck
-import           Test.Util.TestEnv
+import Cardano.Crypto.Hash qualified as Crypto
+import Cardano.Crypto.Util qualified as Crypto
+import Cardano.Crypto.VRF (OutputVRF, mkTestOutputVRF)
+import Cardano.Ledger.Keys qualified as SL
+import Cardano.Protocol.Crypto (Crypto (..), StandardCrypto)
+import Codec.Serialise (encode)
+import Control.Monad
+import Data.Containers.ListUtils (nubOrdOn)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Protocol.Praos.Common
+import Test.Cardano.Ledger.Binary.Arbitrary ()
+import Test.Ouroboros.Consensus.Protocol
+import Test.QuickCheck.Gen (Gen (..))
+import Test.QuickCheck.Random (mkQCGen)
+import Test.Tasty
+import Test.Tasty.QuickCheck
+import Test.Util.TestEnv

 tests :: TestTree
-tests = testGroup "PraosChainSelectView"
-    [ adjustQuickCheckTests (* 50)
+tests =
+  testGroup
+    "PraosChainSelectView"
+    [ adjustQuickCheckTests (* 50)
       -- Use a small max size by default in order to have a decent chance to
       -- trigger the actual tiebreaker cases.
-      $ adjustQuickCheckMaxSize (`div` 10)
-      $ tests_chainOrder (Proxy @(PraosChainSelectView StandardCrypto))
+      $
+        adjustQuickCheckMaxSize (`div` 10) $
+          tests_chainOrder (Proxy @(PraosChainSelectView StandardCrypto))
    ]

 instance Crypto c => Arbitrary (PraosChainSelectView c) where
   arbitrary = do
-      size <- fromIntegral <$> getSize
-      csvChainLength <- BlockNo <$> choose (1, size)
-      csvSlotNo <- SlotNo <$> choose (1, size)
-      csvIssuer <- elements knownIssuers
-      csvIssueNo <- choose (1, 10)
-      pure PraosChainSelectView {
-          csvChainLength
+    size <- fromIntegral <$> getSize
+    csvChainLength <- BlockNo <$> choose (1, size)
+    csvSlotNo <- SlotNo <$> choose (1, size)
+    csvIssuer <- elements knownIssuers
+    csvIssueNo <- choose (1, 10)
+    pure
+      PraosChainSelectView
+        { csvChainLength
         , csvSlotNo
         , csvIssuer
         , csvIssueNo
         , csvTieBreakVRF = mkVRFFor csvIssuer csvSlotNo
         }
    where
-      -- We want to draw from the same small set of issuer identities in order to
-      -- have a chance to explore cases where the issuers of two 'SelectView's
-      -- are identical.
-      knownIssuers :: [SL.VKey SL.BlockIssuer]
-      knownIssuers =
-          nubOrdOn SL.hashKey
-        $ unGen (replicateM numIssuers (SL.VKey <$> arbitrary)) randomSeed 100
-        where
-          randomSeed = mkQCGen 4 -- chosen by fair dice roll
-          numIssuers = 10
+    -- We want to draw from the same small set of issuer identities in order to
+    -- have a chance to explore cases where the issuers of two 'SelectView's
+    -- are identical.
+    knownIssuers :: [SL.VKey SL.BlockIssuer]
+    knownIssuers =
+      nubOrdOn SL.hashKey $
+        unGen (replicateM numIssuers (SL.VKey <$> arbitrary)) randomSeed 100
+     where
+      randomSeed = mkQCGen 4 -- chosen by fair dice roll
+      numIssuers = 10

-      -- The header VRF is a deterministic function of the issuer VRF key, the
-      -- slot and the epoch nonce. Additionally, for any particular chain, the
-      -- slot determines the epoch nonce.
-      mkVRFFor :: SL.VKey SL.BlockIssuer -> SlotNo -> OutputVRF (VRF c)
-      mkVRFFor issuer slot =
-          mkTestOutputVRF
-        $ Crypto.bytesToNatural
-        $ Crypto.hashToBytes
-        $ Crypto.xor (Crypto.castHash issuerHash)
-        $ Crypto.hashWithSerialiser encode slot
-        where
-          SL.KeyHash issuerHash = SL.hashKey issuer
+    -- The header VRF is a deterministic function of the issuer VRF key, the
+    -- slot and the epoch nonce. Additionally, for any particular chain, the
+    -- slot determines the epoch nonce.
+    mkVRFFor :: SL.VKey SL.BlockIssuer -> SlotNo -> OutputVRF (VRF c)
+    mkVRFFor issuer slot =
+      mkTestOutputVRF $
+        Crypto.bytesToNatural $
+          Crypto.hashToBytes $
+            Crypto.xor (Crypto.castHash issuerHash) $
+              Crypto.hashWithSerialiser encode slot
+     where
+      SL.KeyHash issuerHash = SL.hashKey issuer

 -- | 'ChainOrderConfig' 'PraosChainSelectView'
 instance Arbitrary VRFTiebreakerFlavor where
-  arbitrary = oneof
+  arbitrary =
+    oneof
       [ pure UnrestrictedVRFTiebreaker
       , do
           size <- max 1 . fromIntegral <$> getSize
@@ -85,7 +89,7 @@ instance Arbitrary VRFTiebreakerFlavor where
     ]

   shrink = \case
-    UnrestrictedVRFTiebreaker       -> []
-    RestrictedVRFTiebreaker maxDist ->
-        UnrestrictedVRFTiebreaker
+    UnrestrictedVRFTiebreaker -> []
+    RestrictedVRFTiebreaker maxDist ->
+      UnrestrictedVRFTiebreaker
         : (RestrictedVRFTiebreaker <$> shrink maxDist)
diff --git a/ouroboros-consensus/bench/ChainSync-client-bench/Bench/Consensus/ChainSyncClient/Driver.hs b/ouroboros-consensus/bench/ChainSync-client-bench/Bench/Consensus/ChainSyncClient/Driver.hs
index 45ba484878..eccda50677 100644
--- a/ouroboros-consensus/bench/ChainSync-client-bench/Bench/Consensus/ChainSyncClient/Driver.hs
+++ b/ouroboros-consensus/bench/ChainSync-client-bench/Bench/Consensus/ChainSyncClient/Driver.hs
@@ -3,15 +3,15 @@

 module Bench.Consensus.ChainSyncClient.Driver (mainWith) where

-import           Control.Monad (when)
-import qualified Data.Array.IO as UA
-import           Data.Int (Int64)
-import           Data.Time.Clock (diffTimeToPicoseconds)
-import qualified Data.Time.Clock.System as SysTime
-import qualified Data.Time.Clock.TAI as TAI
-import           Data.Word (Word32)
-import           System.Environment (getArgs)
-import           Text.Read (readMaybe)
+import Control.Monad (when)
+import Data.Array.IO qualified as UA
+import Data.Int (Int64)
+import Data.Time.Clock (diffTimeToPicoseconds)
+import Data.Time.Clock.System qualified as SysTime
+import Data.Time.Clock.TAI qualified as TAI
+import Data.Word (Word32)
+import System.Environment (getArgs)
+import Text.Read (readMaybe)

 -- | The argument to the function under test
 --
@@ -21,32 +21,33 @@ newtype X = X Int64
 {-# INLINE mainWith #-}
 mainWith :: (Int64 -> IO ()) -> IO ()
 mainWith fut = do
-    xx <- getArgs >>= \case
-      [loStr, hiStr]
-        | Just lo <- readMaybe loStr
-        , Just hi <- readMaybe hiStr
-        -> pure (X lo, X hi)
-      _ -> fail "Pass min and max index as arguments $1 and $2"
-
-    let zz@(!lo, !hi) = mkMeasurementIndexInterval xx
-
-    -- all the extraneous allocation happens up front
-    --
-    -- TODO except the getSystemTime FFI overhead. I haven't find a
-    -- platform-agnostic package for reading the clock into a pre-allocated
-    -- buffer (eg via Storable)
-    varStarts <- newTimeArray_ zz
-    varStops <- newTimeArray_ zz
-
-    let go !z = do
-          recordTimeArray varStarts z
-          let X i = fst $ coords z in fut i
-          recordTimeArray varStops z
-          when (z < hi) $ go (incrMeasurementIndex z)
-    go lo
-
-    -- and all the rendering overhead happens after the fact
-    render varStarts varStops zz
+  xx <-
+    getArgs >>= \case
+      [loStr, hiStr]
+        | Just lo <- readMaybe loStr
+        , Just hi <- readMaybe hiStr ->
+            pure (X lo, X hi)
+      _ -> fail "Pass min and max index as arguments $1 and $2"
+
+  let zz@(!lo, !hi) = mkMeasurementIndexInterval xx
+
+  -- all the extraneous allocation happens up front
+  --
+  -- TODO except the getSystemTime FFI overhead. I haven't found a
+  -- platform-agnostic package for reading the clock into a pre-allocated
+  -- buffer (eg via Storable)
+  varStarts <- newTimeArray_ zz
+  varStops <- newTimeArray_ zz
+
+  let go !z = do
+        recordTimeArray varStarts z
+        let X i = fst $ coords z in fut i
+        recordTimeArray varStops z
+        when (z < hi) $ go (incrMeasurementIndex z)
+  go lo
+
+  -- and all the rendering overhead happens after the fact
+  render varStarts varStops zz

 -----

@@ -65,18 +66,18 @@ samplesPerX = 1000
 -- samples of each 'X' in the given interval
 mkMeasurementIndexInterval :: (X, X) -> (MeasurementIndex, MeasurementIndex)
 mkMeasurementIndexInterval (X lo, X hi) =
-    ( MeasurementIndex (firstSample lo)
-    , MeasurementIndex (firstSample (hi + 1) - 1)
-    )
-  where
-    firstSample x = samplesPerX * x
+  ( MeasurementIndex (firstSample lo)
+  , MeasurementIndex (firstSample (hi + 1) - 1)
+  )
+ where
+  firstSample x = samplesPerX * x

 -- | The inverse of the mapping that underlies 'mkMeasurementIndexInterval'
 coords :: MeasurementIndex -> (X, RepetitionIndex)
 coords (MeasurementIndex z) =
-    (X q, RepetitionIndex r)
-  where
-    (q, r) = z `quotRem` samplesPerX
+  (X q, RepetitionIndex r)
+ where
+  (q, r) = z `quotRem` samplesPerX

 -- | Increment
 incrMeasurementIndex :: MeasurementIndex -> MeasurementIndex
@@ -87,15 +88,15 @@ incrMeasurementIndex (MeasurementIndex z) = MeasurementIndex $ z + 1
 -- | Dump all measurements to the screen, along with their coordinates
 render :: TimeArray -> TimeArray -> (MeasurementIndex, MeasurementIndex) -> IO ()
 render varStarts varStops zz = do
-    putStrLn "# Uuid RelativeSample Index Nanoseconds"
-    mapM_ each (UA.range zz)
-  where
-    each z = do
-      let (X x, RepetitionIndex y) = coords z
-      start <- readTimeArray varStarts z
-      stop <- readTimeArray varStops z
-      let dur = stop `diffPico` start
-      putStrLn $ unwords [let MeasurementIndex i = z in show i, show y, show x, show dur]
+  putStrLn "# Uuid RelativeSample Index Nanoseconds"
+  mapM_ each (UA.range zz)
+ where
+  each z = do
+    let (X x, RepetitionIndex y) = coords z
+    start <- readTimeArray varStarts z
+    stop <- readTimeArray varStops z
+    let dur = stop `diffPico` start
+    putStrLn $ unwords [let MeasurementIndex i = z in show i, show y, show x, show dur]

 -----

@@ -107,37 +108,38 @@ data TimeArray = TimeArray !(UA.IOUArray MeasurementIndex Int64) !(UA.IOUArray M
 newTimeArray_ :: (MeasurementIndex, MeasurementIndex) -> IO TimeArray
 newTimeArray_ zz = do
-    seconds <- UA.newArray_ zz
-    nanos <- UA.newArray_ zz
-    pure $ TimeArray seconds nanos
+  seconds <- UA.newArray_ zz
+  nanos <- UA.newArray_ zz
+  pure $ TimeArray seconds nanos

 {-# INLINE recordTimeArray #-}
 recordTimeArray :: TimeArray -> MeasurementIndex -> IO ()
 recordTimeArray tarr = \z -> do
-    tm <- SysTime.getSystemTime
-    let SysTime.MkSystemTime {
-            SysTime.systemSeconds = a
-          , SysTime.systemNanoseconds = b
-          } = tm
-    UA.writeArray seconds z a
-    UA.writeArray nanos z b
-  where
-    TimeArray seconds nanos = tarr
+  tm <- SysTime.getSystemTime
+  let SysTime.MkSystemTime
+        { SysTime.systemSeconds = a
+        , SysTime.systemNanoseconds = b
+        } = tm
+  UA.writeArray seconds z a
+  UA.writeArray nanos z b
+ where
+  TimeArray seconds nanos = tarr

 readTimeArray :: TimeArray -> MeasurementIndex -> IO SysTime.SystemTime
 readTimeArray tarr = \z -> do
-    a <- UA.readArray seconds z
-    b <- UA.readArray nanos z
-    pure SysTime.MkSystemTime {
-        SysTime.systemSeconds = a
+  a <- UA.readArray seconds z
+  b <- UA.readArray nanos z
+  pure
+    SysTime.MkSystemTime
+      { SysTime.systemSeconds = a
       , SysTime.systemNanoseconds = b
       }
-  where
-    TimeArray seconds nanos = tarr
+ where
+  TimeArray seconds nanos = tarr

 diffPico :: SysTime.SystemTime -> SysTime.SystemTime -> Integer
 diffPico stop start =
-    diffTimeToPicoseconds $ stop' `TAI.diffAbsoluteTime` start'
-  where
-    stop' = SysTime.systemToTAITime stop
-    start' = SysTime.systemToTAITime start
+  diffTimeToPicoseconds $ stop' `TAI.diffAbsoluteTime` start'
+ where
+  stop' = SysTime.systemToTAITime stop
+  start' = SysTime.systemToTAITime start
diff --git a/ouroboros-consensus/bench/ChainSync-client-bench/Main.hs b/ouroboros-consensus/bench/ChainSync-client-bench/Main.hs
index a63ed264a0..4b9bebe844 100644
--- a/ouroboros-consensus/bench/ChainSync-client-bench/Main.hs
+++ b/ouroboros-consensus/bench/ChainSync-client-bench/Main.hs
@@ -9,200 +9,216 @@
 -- blocks instead of Test Blocks
 module Main (main) where

-import           Bench.Consensus.ChainSyncClient.Driver (mainWith)
-import           Cardano.Crypto.DSIGN.Mock
-import           Cardano.Ledger.BaseTypes (knownNonZeroBounded)
-import           Control.Monad (void)
-import           Control.ResourceRegistry
-import           Control.Tracer (contramap, debugTracer, nullTracer)
-import           Data.IORef (newIORef, readIORef, writeIORef)
-import qualified Data.List.NonEmpty as NE
-import qualified Data.Map.Strict as Map
-import           Main.Utf8 (withStdTerminalHandles)
-import           Network.TypedProtocol.Channel
-import           Network.TypedProtocol.Driver.Simple
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.BlockchainTime
-import           Ouroboros.Consensus.Config
-import qualified Ouroboros.Consensus.HardFork.History as HardFork
-import qualified Ouroboros.Consensus.HeaderStateHistory as HeaderStateHistory
-import qualified Ouroboros.Consensus.HeaderValidation as HV
-import qualified Ouroboros.Consensus.Ledger.Extended as Extended
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CSClient
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck as HistoricityCheck
-import           Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck
-                     (clockSkewInSeconds)
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck as InFutureCheck
-import           Ouroboros.Consensus.MiniProtocol.ChainSync.Server
-                     (chainSyncServerForFollower)
-import           Ouroboros.Consensus.Node.NetworkProtocolVersion
-                     (NodeToNodeVersion)
-import           Ouroboros.Consensus.Node.ProtocolInfo
-import           Ouroboros.Consensus.NodeId
-import           Ouroboros.Consensus.Protocol.BFT
-import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB
-import           Ouroboros.Consensus.Util.IOLike
-import           Ouroboros.Consensus.Util.STM (Fingerprint (..),
-                     WithFingerprint (..))
-import           Ouroboros.Consensus.Util.Time (secondsToNominalDiffTime)
-import           Ouroboros.Network.AnchoredFragment (AnchoredFragment)
-import qualified Ouroboros.Network.AnchoredFragment as AF
-import           Ouroboros.Network.Block (ChainUpdate (AddBlock, RollBack),
-                     Tip (TipGenesis), tipFromHeader)
-import           Ouroboros.Network.ControlMessage (ControlMessage (Continue))
-import qualified Ouroboros.Network.Mock.Chain as Chain
-import           Ouroboros.Network.Protocol.ChainSync.ClientPipelined
-import           Ouroboros.Network.Protocol.ChainSync.Codec (codecChainSyncId)
-import           Ouroboros.Network.Protocol.ChainSync.PipelineDecision
-                     (pipelineDecisionLowHighMark)
-import           Ouroboros.Network.Protocol.ChainSync.Server
-import           Test.Util.Orphans.Arbitrary ()
-import           Test.Util.Orphans.IOLike ()
-import qualified Test.Util.TestBlock as TB
+import Bench.Consensus.ChainSyncClient.Driver (mainWith)
+import Cardano.Crypto.DSIGN.Mock
+import Cardano.Ledger.BaseTypes (knownNonZeroBounded)
+import Control.Monad (void)
+import Control.ResourceRegistry
+import Control.Tracer (contramap, debugTracer, nullTracer)
+import Data.IORef (newIORef, readIORef, writeIORef)
+import Data.List.NonEmpty qualified as NE
+import Data.Map.Strict qualified as Map
+import Main.Utf8 (withStdTerminalHandles)
+import Network.TypedProtocol.Channel
+import Network.TypedProtocol.Driver.Simple
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.BlockchainTime
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.HardFork.History qualified as HardFork
+import Ouroboros.Consensus.HeaderStateHistory qualified as HeaderStateHistory
+import Ouroboros.Consensus.HeaderValidation qualified as HV
+import Ouroboros.Consensus.Ledger.Extended qualified as Extended
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CSClient
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck qualified as HistoricityCheck
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck
+  ( clockSkewInSeconds
+  )
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck qualified as InFutureCheck
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Server
+  ( chainSyncServerForFollower
+  )
+import Ouroboros.Consensus.Node.NetworkProtocolVersion
+  ( NodeToNodeVersion
+  )
+import Ouroboros.Consensus.Node.ProtocolInfo
+import Ouroboros.Consensus.NodeId
+import Ouroboros.Consensus.Protocol.BFT
+import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Consensus.Util.STM
+  ( Fingerprint (..)
+  , WithFingerprint (..)
+  )
+import Ouroboros.Consensus.Util.Time (secondsToNominalDiffTime)
+import Ouroboros.Network.AnchoredFragment (AnchoredFragment)
+import Ouroboros.Network.AnchoredFragment qualified as AF
+import Ouroboros.Network.Block
+  ( ChainUpdate (AddBlock, RollBack)
+  , Tip (TipGenesis)
+  , tipFromHeader
+  )
+import Ouroboros.Network.ControlMessage (ControlMessage (Continue))
+import Ouroboros.Network.Mock.Chain qualified as Chain
+import Ouroboros.Network.Protocol.ChainSync.ClientPipelined
+import Ouroboros.Network.Protocol.ChainSync.Codec (codecChainSyncId)
+import Ouroboros.Network.Protocol.ChainSync.PipelineDecision
+  ( pipelineDecisionLowHighMark
+  )
+import Ouroboros.Network.Protocol.ChainSync.Server
+import Test.Util.Orphans.Arbitrary ()
+import Test.Util.Orphans.IOLike ()
+import Test.Util.TestBlock qualified as TB

 type B = TB.TestBlock
 type H = Header B

 main :: IO ()
 main = withStdTerminalHandles $ mainWith $ \n -> do
-    varCandidate <- newTVarIO $ AF.Empty AF.AnchorGenesis
+  varCandidate <- newTVarIO $ AF.Empty AF.AnchorGenesis

-    varServerTip <- newTVarIO TipGenesis
-    follower <- mkFollower varServerTip
+  varServerTip <- newTVarIO TipGenesis
+  follower <- mkFollower varServerTip

-    oneBenchRun
-      varCandidate
-      varServerTip
-      follower
-      (fromIntegral n)
+  oneBenchRun
+    varCandidate
+    varServerTip
+    follower
+    (fromIntegral n)

 {-# INLINE oneBenchRun #-}
 oneBenchRun ::
-     StrictTVar IO (AnchoredFragment (HV.HeaderWithTime B))
-  -> StrictTVar IO (Tip B)
-  -> ChainDB.Follower IO B (ChainDB.WithPoint B H)
-  -> Int
-  -> IO ()
+  StrictTVar IO (AnchoredFragment (HV.HeaderWithTime B)) ->
+  StrictTVar IO (Tip B) ->
+  ChainDB.Follower IO B (ChainDB.WithPoint B H) ->
+  Int ->
+  IO ()
 oneBenchRun
-    varCandidate
-    varServerTip
-    follower
-    n
-  =
+  varCandidate
+  varServerTip
+  follower
+  n =
     withRegistry $ \registry -> do
-      (clientChannel, serverChannel) <- createConnectedChannels
-      void
-        $ forkLinkedThread registry "ChainSyncServer"
-        $ runPeer nullTracer codecChainSyncId serverChannel
-        $ chainSyncServerPeer server
-      void
-        $ forkLinkedThread registry "ChainSyncClient"
-        $ void
-        $ runPipelinedPeer nullTracer codecChainSyncId clientChannel
-        $ chainSyncClientPeerPipelined client
-
-      atomically $ do
-        candidate <- readTVar varCandidate
-        check $ case pointHash $ AF.headPoint candidate of
-          BlockHash (TB.TestHash ne) -> fromIntegral n < NE.head ne
-          _ -> False
-  where
+      (clientChannel, serverChannel) <- createConnectedChannels
+      void $
+        forkLinkedThread registry "ChainSyncServer" $
+          runPeer nullTracer codecChainSyncId serverChannel $
+            chainSyncServerPeer server
+      void $
+        forkLinkedThread registry "ChainSyncClient" $
+          void $
+            runPipelinedPeer nullTracer codecChainSyncId clientChannel $
+              chainSyncClientPeerPipelined client
+
+      atomically $ do
+        candidate <- readTVar varCandidate
+        check $ case pointHash $ AF.headPoint candidate of
+          BlockHash (TB.TestHash ne) -> fromIntegral n < NE.head ne
+          _ -> False
+   where
    -- This test is designed so that the initial ledger state suffices for
    -- everything the ChainSync client needs to do.
    chainDbView :: CSClient.ChainDbView IO B
-    chainDbView = CSClient.ChainDbView {
-        CSClient.getCurrentChain = pure $ AF.Empty AF.AnchorGenesis
-      , CSClient.getHeaderStateHistory =
-          pure
-        $ HeaderStateHistory.fromChain
-            topConfig
-            (oracularLedgerDB GenesisPoint)
-            Chain.Genesis
-      , CSClient.getIsInvalidBlock = pure invalidBlock
-      , CSClient.getPastLedger = pure . Just . oracularLedgerDB
-      }
+    chainDbView =
+      CSClient.ChainDbView
+        { CSClient.getCurrentChain = pure $ AF.Empty AF.AnchorGenesis
+        , CSClient.getHeaderStateHistory =
+            pure $
+              HeaderStateHistory.fromChain
+                topConfig
+                (oracularLedgerDB GenesisPoint)
+                Chain.Genesis
+        , CSClient.getIsInvalidBlock = pure invalidBlock
+        , CSClient.getPastLedger = pure . Just . oracularLedgerDB
+        }

    headerInFutureCheck ::
-       InFutureCheck.SomeHeaderInFutureCheck IO B
+      InFutureCheck.SomeHeaderInFutureCheck IO B
    headerInFutureCheck =
-       InFutureCheck.realHeaderInFutureCheck
-         (clockSkewInSeconds 0)
-         inTheYearOneBillion
+      InFutureCheck.realHeaderInFutureCheck
+        (clockSkewInSeconds 0)
+        inTheYearOneBillion

    client :: CSClient.Consensus ChainSyncClientPipelined B IO
    client =
-       CSClient.chainSyncClient
-         CSClient.ConfigEnv {
-             CSClient.chainDbView
-           , CSClient.cfg = topConfig
-           , CSClient.tracer = nullTracer `asTypeOf` contramap show debugTracer
-           , CSClient.someHeaderInFutureCheck = headerInFutureCheck
-           , CSClient.historicityCheck = HistoricityCheck.noCheck
-           , CSClient.mkPipelineDecision0 =
-               pipelineDecisionLowHighMark 10 20
-           , CSClient.getDiffusionPipeliningSupport =
-               DiffusionPipeliningOn
-           }
-         CSClient.DynamicEnv {
-             CSClient.version = maxBound :: NodeToNodeVersion
-           , CSClient.controlMessageSTM = return Continue
-           , CSClient.headerMetricsTracer = nullTracer
-           , CSClient.setCandidate = writeTVar varCandidate
-           , CSClient.setLatestSlot = \_ -> pure ()
-           , CSClient.idling = CSClient.noIdling
-           , CSClient.loPBucket = CSClient.noLoPBucket
-           , CSClient.jumping = CSClient.noJumping
-           }
+      CSClient.chainSyncClient
+        CSClient.ConfigEnv
+          { CSClient.chainDbView
+          , CSClient.cfg = topConfig
+          , CSClient.tracer = nullTracer `asTypeOf` contramap show debugTracer
+          , CSClient.someHeaderInFutureCheck = headerInFutureCheck
+          , CSClient.historicityCheck = HistoricityCheck.noCheck
+          , CSClient.mkPipelineDecision0 =
+              pipelineDecisionLowHighMark 10 20
+          , CSClient.getDiffusionPipeliningSupport =
+              DiffusionPipeliningOn
+          }
+        CSClient.DynamicEnv
+          { CSClient.version = maxBound :: NodeToNodeVersion
+          , CSClient.controlMessageSTM = return Continue
+          , CSClient.headerMetricsTracer = nullTracer
+          , CSClient.setCandidate = writeTVar varCandidate
+          , CSClient.setLatestSlot = \_ -> pure ()
+          , CSClient.idling = CSClient.noIdling
+          , CSClient.loPBucket = CSClient.noLoPBucket
+          , CSClient.jumping = CSClient.noJumping
+          }

    server :: ChainSyncServer H (Point B) (Tip B) IO ()
    server =
-       chainSyncServerForFollower
-         nullTracer
-         (readTVar varServerTip)
-         follower
+      chainSyncServerForFollower
+        nullTracer
+        (readTVar varServerTip)
+        follower

 -----

 -- | No invalid blocks in this benchmark
 invalidBlock ::
-    WithFingerprint
-      (HeaderHash blk -> Maybe (Extended.ExtValidationError blk))
+  WithFingerprint
+    (HeaderHash blk -> Maybe (Extended.ExtValidationError blk))
 invalidBlock =
-    WithFingerprint isInvalidBlock fp
-  where
-    isInvalidBlock _hash = Nothing
-    fp = Fingerprint $ fromIntegral (0 :: Int)
+  WithFingerprint isInvalidBlock fp
+ where
+  isInvalidBlock _hash = Nothing
+  fp = Fingerprint $ fromIntegral (0 :: Int)

 -- | Ignore time in this benchmark
 --
 -- This is clock fixed at billion years after the start of the chain. That
 -- should trivialize the in-future check: no header will be from the future.
 inTheYearOneBillion :: SystemTime IO
-inTheYearOneBillion = SystemTime {
-      systemTimeWait = pure ()
-    , systemTimeCurrent = pure $ RelativeTime $
-        secondsToNominalDiffTime $
-          86400 -- seconds in a day
-          * 365 -- days in a year
-          * 1e9
-    }
+inTheYearOneBillion =
+  SystemTime
+    { systemTimeWait = pure ()
+    , systemTimeCurrent =
+        pure $
+          RelativeTime $
+            secondsToNominalDiffTime $
+              86400 -- seconds in a day
+                * 365 -- days in a year
+                * 1e9
+    }

 oracularLedgerDB :: Point B -> Extended.ExtLedgerState B mk
 oracularLedgerDB p =
-    Extended.ExtLedgerState {
-        Extended.headerState = HV.HeaderState {
-            HV.headerStateTip = case pointToWithOriginRealPoint p of
-              Origin       -> Origin
-              NotOrigin rp -> NotOrigin $ HV.AnnTip {
-                    HV.annTipSlotNo = realPointSlot rp
-                  , HV.annTipInfo = realPointHash rp
-                  , HV.annTipBlockNo =
+  Extended.ExtLedgerState
+    { Extended.headerState =
+        HV.HeaderState
+          { HV.headerStateTip = case pointToWithOriginRealPoint p of
+              Origin -> Origin
+              NotOrigin rp ->
+                NotOrigin $
+                  HV.AnnTip
+                    { HV.annTipSlotNo = realPointSlot rp
+                    , HV.annTipInfo = realPointHash rp
+                    , HV.annTipBlockNo =
                         testBlockHashBlockNo (realPointHash rp)
-                  }
+                    }
           , HV.headerStateChainDep = ()
           }
-      , Extended.ledgerState = TB.TestLedger {
-          TB.lastAppliedPoint = p
+    , Extended.ledgerState =
+        TB.TestLedger
+          { TB.lastAppliedPoint = p
           , TB.payloadDependentState = TB.EmptyPLDS
           }
       }
@@ -221,10 +237,10 @@ securityParam = SecurityParam $ knownNonZeroBounded @5

 initialChain :: NE.NonEmpty B
 initialChain =
-    NE.fromList
-  $ take kInt
-  $ iterate TB.successorBlock
-  $ TB.firstBlock 0
+  NE.fromList $
+    take kInt $
+      iterate TB.successorBlock $
+        TB.firstBlock 0

 -----

@@ -238,32 +254,36 @@ numCoreNodes :: NumCoreNodes
 numCoreNodes = NumCoreNodes 2

 topConfig :: TopLevelConfig B
-topConfig = TopLevelConfig {
-      topLevelConfigProtocol = BftConfig {
-          bftParams = BftParams {
-              bftSecurityParam = securityParam
-            , bftNumNodes = numCoreNodes
-            }
-        , bftSignKey = SignKeyMockDSIGN 0
-        , bftVerKeys = Map.fromList [
-              (CoreId (CoreNodeId 0), VerKeyMockDSIGN 0)
-            , (CoreId (CoreNodeId 1), VerKeyMockDSIGN 1)
-            ]
-        }
-    , topLevelConfigLedger = TB.testBlockLedgerConfigFrom eraParams
-    , topLevelConfigBlock = TB.TestBlockConfig numCoreNodes
-    , topLevelConfigCodec = TB.TestBlockCodecConfig
-    , topLevelConfigStorage = TB.TestBlockStorageConfig
-    , topLevelConfigCheckpoints = emptyCheckpointsMap
-    }
-  where
-    eraParams :: HardFork.EraParams
-    eraParams = HardFork.defaultEraParams securityParam slotLength
+topConfig =
+  TopLevelConfig
+    { topLevelConfigProtocol =
+        BftConfig
+          { bftParams =
+              BftParams
+                { bftSecurityParam = securityParam
+                , bftNumNodes = numCoreNodes
+                }
+          , bftSignKey = SignKeyMockDSIGN 0
+          , bftVerKeys =
+              Map.fromList
+                [ (CoreId (CoreNodeId 0), VerKeyMockDSIGN 0)
+                , (CoreId (CoreNodeId 1), VerKeyMockDSIGN 1)
+                ]
+          }
+    , topLevelConfigLedger = TB.testBlockLedgerConfigFrom eraParams
+    , topLevelConfigBlock = TB.TestBlockConfig numCoreNodes
+    , topLevelConfigCodec = TB.TestBlockCodecConfig
+    , topLevelConfigStorage = TB.TestBlockStorageConfig
+    , topLevelConfigCheckpoints = emptyCheckpointsMap
+    }
+ where
+  eraParams :: HardFork.EraParams
+  eraParams = HardFork.defaultEraParams securityParam slotLength

 -----

-data FollowerState =
-    Resting !(RealPoint B)
+data FollowerState
+  = Resting !(RealPoint B)
   | Switching !(Point B) !(NE.NonEmpty B)
   | Switched !(NE.NonEmpty B)
@@ -273,44 +293,45 @@ data FollowerState =
 -- INVARIANT: the chains are never longer than 'kInt' and are 'kInt' long
 -- infinitely often.
 mkFollower ::
-     StrictTVar IO (Tip B)
-  -> IO (ChainDB.Follower IO B (ChainDB.WithPoint B H))
+  StrictTVar IO (Tip B) ->
+  IO (ChainDB.Follower IO B (ChainDB.WithPoint B H))
 mkFollower varTip = do
-    varState <- newIORef $ Resting $ blockRealPoint $ NE.last initialChain
-
-    let wrap blk = ChainDB.WithPoint (getHeader blk) (blockPoint blk)
-
-    let next = readIORef varState >>= \case
-          Switching ipoint blks -> do
-            writeIORef varState $ Switched blks
-            atomically $ writeTVar varTip $ tipFromHeader $ NE.last blks
-            pure $ RollBack ipoint
-          Switched blks -> do
-            let blk = NE.head blks
-            writeIORef varState $ case NE.nonEmpty (NE.tail blks) of
-              Nothing -> Resting $ blockRealPoint blk
-              Just blks' -> Switched blks'
-            atomically $ writeTVar varTip $ tipFromHeader $ NE.last blks
-            pure $ AddBlock $ wrap blk
-          Resting rp -> do
-            let (ipoint, blks) = TB.updateToNextNumeral rp
-            writeIORef varState $ Switched blks
-            atomically $ writeTVar varTip $ tipFromHeader $ NE.last blks
-            pure $ RollBack ipoint
-
-    pure ChainDB.Follower {
-        ChainDB.followerClose = pure ()
-      , ChainDB.followerInstruction = Just <$> next
+  varState <- newIORef $ Resting $ blockRealPoint $ NE.last initialChain
+
+  let wrap blk = ChainDB.WithPoint (getHeader blk) (blockPoint blk)
+
+  let next =
+        readIORef varState >>= \case
+          Switching ipoint blks -> do
+            writeIORef varState $ Switched blks
+            atomically $ writeTVar varTip $ tipFromHeader $ NE.last blks
+            pure $ RollBack ipoint
+          Switched blks -> do
+            let blk = NE.head blks
+            writeIORef varState $ case NE.nonEmpty (NE.tail blks) of
+              Nothing -> Resting $ blockRealPoint blk
+              Just blks' -> Switched blks'
+            atomically $ writeTVar varTip $ tipFromHeader $ NE.last blks
+            pure $ AddBlock $ wrap blk
+          Resting rp -> do
+            let (ipoint, blks) = TB.updateToNextNumeral rp
+            writeIORef varState $ Switched blks
+            atomically $ writeTVar varTip $ tipFromHeader $ NE.last blks
+            pure $ RollBack ipoint
+
+  pure
+    ChainDB.Follower
+      { ChainDB.followerClose = pure ()
+      , ChainDB.followerInstruction = Just <$> next
       , ChainDB.followerInstructionBlocking = next
-      , ChainDB.followerForward = \case
-          GenesisPoint : _ -> do
-            writeIORef varState $ Switching GenesisPoint initialChain
-            pure $ Just GenesisPoint
-
-          ps -> error $ "impossible! " <> unlines (map show ps)
-          -- The client begins with an empty local chain, so this is the
-          -- only possible input at the handshake.
-          --
-          -- Moreover, no chain is longer than k, so the there can never
-          -- another FindIntersect.
+      , ChainDB.followerForward = \case
+          GenesisPoint : _ -> do
+            writeIORef varState $ Switching GenesisPoint initialChain
+            pure $ Just GenesisPoint
+          ps -> error $ "impossible! " <> unlines (map show ps)
+          -- The client begins with an empty local chain, so this is the
+          -- only possible input at the handshake.
+          --
+          -- Moreover, no chain is longer than k, so there can never be
+          -- another FindIntersect.
       }
diff --git a/ouroboros-consensus/bench/mempool-bench/Bench/Consensus/Mempool.hs b/ouroboros-consensus/bench/mempool-bench/Bench/Consensus/Mempool.hs
index 7563a31887..5506373dcd 100644
--- a/ouroboros-consensus/bench/mempool-bench/Bench/Consensus/Mempool.hs
+++ b/ouroboros-consensus/bench/mempool-bench/Bench/Consensus/Mempool.hs
@@ -6,45 +6,47 @@
 {-# LANGUAGE StandaloneDeriving #-}
 {-# LANGUAGE UndecidableInstances #-}

-module Bench.Consensus.Mempool (
-    -- * Commands
+module Bench.Consensus.Mempool
+  ( -- * Commands
     MempoolCmd (..)
+ -- ** Queries on commands , getCmdTx , getCmdTxId , getCmdsTxIds , getCmdsTxs + -- * Commands execution , run ) where -import Bench.Consensus.Mempool.TestBlock () -import Control.DeepSeq (NFData) -import Control.Monad (void) -import Data.Foldable (traverse_) -import Data.Maybe (mapMaybe) -import GHC.Generics (Generic) -import qualified Ouroboros.Consensus.Ledger.SupportsMempool as Ledger -import Ouroboros.Consensus.Mempool.API (AddTxOnBehalfOf (..)) -import qualified Test.Consensus.Mempool.Mocked as Mocked -import Test.Consensus.Mempool.Mocked (MockedMempool) +import Bench.Consensus.Mempool.TestBlock () +import Control.DeepSeq (NFData) +import Control.Monad (void) +import Data.Foldable (traverse_) +import Data.Maybe (mapMaybe) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Ledger.SupportsMempool qualified as Ledger +import Ouroboros.Consensus.Mempool.API (AddTxOnBehalfOf (..)) +import Test.Consensus.Mempool.Mocked (MockedMempool) +import Test.Consensus.Mempool.Mocked qualified as Mocked {------------------------------------------------------------------------------- Commands -------------------------------------------------------------------------------} -data MempoolCmd blk = - -- | Add a transaction. +data MempoolCmd blk + = -- | Add a transaction. -- -- NB: if the mempool is full, the execution of this command will block -- until the mempool has at least one byte free. As a consequence, if these -- commands are run sequentially the benchmarks or tests will deadlock if -- the maximum mempool capacity is reached.
AddTx (Ledger.GenTx blk) - deriving (Generic) + deriving Generic deriving stock instance Show (Ledger.GenTx blk) => Show (MempoolCmd blk) -deriving anyclass instance (NFData (Ledger.GenTx blk)) => NFData (MempoolCmd blk) +deriving anyclass instance NFData (Ledger.GenTx blk) => NFData (MempoolCmd blk) getCmdTx :: MempoolCmd blk -> Maybe (Ledger.GenTx blk) getCmdTx (AddTx tx) = Just tx @@ -53,13 +55,13 @@ getCmdsTxs :: [MempoolCmd blk] -> [Ledger.GenTx blk] getCmdsTxs = mapMaybe getCmdTx getCmdTxId :: - Ledger.HasTxId (Ledger.GenTx blk) - => MempoolCmd blk -> Maybe (Ledger.TxId (Ledger.GenTx blk)) + Ledger.HasTxId (Ledger.GenTx blk) => + MempoolCmd blk -> Maybe (Ledger.TxId (Ledger.GenTx blk)) getCmdTxId = fmap Ledger.txId . getCmdTx getCmdsTxIds :: - Ledger.HasTxId (Ledger.GenTx blk) - => [MempoolCmd blk] -> [Ledger.TxId (Ledger.GenTx blk)] + Ledger.HasTxId (Ledger.GenTx blk) => + [MempoolCmd blk] -> [Ledger.TxId (Ledger.GenTx blk)] getCmdsTxIds = mapMaybe getCmdTxId {------------------------------------------------------------------------------- @@ -69,12 +71,12 @@ getCmdsTxIds = mapMaybe getCmdTxId -- TODO: the interpretation of running the command should be defined elsewhere, -- and tested by state-machine tests.
run :: - Monad m - => MockedMempool m blk -> [MempoolCmd blk] -> m () + Monad m => + MockedMempool m blk -> [MempoolCmd blk] -> m () run mempool = traverse_ (runCmd mempool) runCmd :: - Monad m - => MockedMempool m blk -> MempoolCmd blk -> m () + Monad m => + MockedMempool m blk -> MempoolCmd blk -> m () runCmd mempool = \case - AddTx tx -> void $ Mocked.addTx mempool AddTxForRemotePeer tx -- TODO: we might want to benchmark the 'Intervene' case + AddTx tx -> void $ Mocked.addTx mempool AddTxForRemotePeer tx -- TODO: we might want to benchmark the 'Intervene' case diff --git a/ouroboros-consensus/bench/mempool-bench/Bench/Consensus/Mempool/TestBlock.hs b/ouroboros-consensus/bench/mempool-bench/Bench/Consensus/Mempool/TestBlock.hs index 52ea68d857..f25d195ef1 100644 --- a/ouroboros-consensus/bench/mempool-bench/Bench/Consensus/Mempool/TestBlock.hs +++ b/ouroboros-consensus/bench/mempool-bench/Bench/Consensus/Mempool/TestBlock.hs @@ -10,15 +10,16 @@ {-# LANGUAGE TupleSections #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Bench.Consensus.Mempool.TestBlock ( - -- * Test block +module Bench.Consensus.Mempool.TestBlock + ( -- * Test block TestBlock + -- * Initial parameters , initialLedgerState , sampleLedgerConfig + -- * Transactions , Token (Token) , Tx (Tx) @@ -26,28 +27,28 @@ module Bench.Consensus.Mempool.TestBlock ( , txSize ) where -import Cardano.Ledger.BaseTypes (knownNonZeroBounded) -import qualified Cardano.Slotting.Time as Time -import Codec.Serialise (Serialise (..)) -import Control.DeepSeq (NFData) -import Control.Monad.Trans.Except (except) -import qualified Data.Map.Strict as Map -import Data.MemPack -import Data.Set (Set) -import qualified Data.Set as Set -import Data.TreeDiff (ToExpr) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import qualified Ouroboros.Consensus.Block as Block -import Ouroboros.Consensus.Config.SecurityParam as Consensus -import qualified 
Ouroboros.Consensus.HardFork.History as HardFork -import qualified Ouroboros.Consensus.Ledger.Basics as Ledger -import qualified Ouroboros.Consensus.Ledger.SupportsMempool as Ledger -import Ouroboros.Consensus.Ledger.Tables -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff -import qualified Ouroboros.Consensus.Ledger.Tables.Utils as Ledger -import Ouroboros.Consensus.Util.IndexedMemPack (IndexedMemPack (..)) -import Test.Util.TestBlock hiding (TestBlock) +import Cardano.Ledger.BaseTypes (knownNonZeroBounded) +import Cardano.Slotting.Time qualified as Time +import Codec.Serialise (Serialise (..)) +import Control.DeepSeq (NFData) +import Control.Monad.Trans.Except (except) +import Data.Map.Strict qualified as Map +import Data.MemPack +import Data.Set (Set) +import Data.Set qualified as Set +import Data.TreeDiff (ToExpr) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block qualified as Block +import Ouroboros.Consensus.Config.SecurityParam as Consensus +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.Ledger.Basics qualified as Ledger +import Ouroboros.Consensus.Ledger.SupportsMempool qualified as Ledger +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff +import Ouroboros.Consensus.Ledger.Tables.Utils qualified as Ledger +import Ouroboros.Consensus.Util.IndexedMemPack (IndexedMemPack (..)) +import Test.Util.TestBlock hiding (TestBlock) {------------------------------------------------------------------------------- MempoolTestBlock @@ -55,95 +56,107 @@ import Test.Util.TestBlock hiding (TestBlock) type TestBlock = TestBlockWith Tx -data Tx = Tx { - consumed :: !(Set Token) +data Tx = Tx + { consumed :: !(Set Token) , produced :: !(Set Token) } deriving stock (Eq, Ord, Generic, Show) deriving anyclass (NoThunks, NFData) -newtype Token = Token { unToken :: Int } +newtype Token = Token {unToken :: Int} deriving 
stock (Show, Eq, Ord, Generic) deriving newtype (MemPack, Num, Enum) deriving anyclass (NoThunks, ToExpr, Serialise, NFData) mkTx :: - [Token] - -- ^ Consumed - -> [Token] - -- ^ Produced - -> Ledger.GenTx TestBlock -mkTx cons prod = TestBlockGenTx $ Tx { consumed = Set.fromList cons - , produced = Set.fromList prod - } + -- | Consumed + [Token] -> + -- | Produced + [Token] -> + Ledger.GenTx TestBlock +mkTx cons prod = + TestBlockGenTx $ + Tx + { consumed = Set.fromList cons + , produced = Set.fromList prod + } {------------------------------------------------------------------------------- Initial parameters -------------------------------------------------------------------------------} initialLedgerState :: LedgerState (TestBlockWith Tx) ValuesMK -initialLedgerState = TestLedger { - lastAppliedPoint = Block.GenesisPoint - , payloadDependentState = TestPLDS { - getTestPLDS = ValuesMK Map.empty - } +initialLedgerState = + TestLedger + { lastAppliedPoint = Block.GenesisPoint + , payloadDependentState = + TestPLDS + { getTestPLDS = ValuesMK Map.empty + } } sampleLedgerConfig :: Ledger.LedgerConfig TestBlock -sampleLedgerConfig = testBlockLedgerConfigFrom $ - HardFork.defaultEraParams (Consensus.SecurityParam $ knownNonZeroBounded @10) (Time.slotLengthFromSec 2) +sampleLedgerConfig = + testBlockLedgerConfigFrom $ + HardFork.defaultEraParams + (Consensus.SecurityParam $ knownNonZeroBounded @10) + (Time.slotLengthFromSec 2) {------------------------------------------------------------------------------- Payload semantics -------------------------------------------------------------------------------} -data TestLedgerState = TestLedgerState { - availableTokens :: !(Set Token) +data TestLedgerState = TestLedgerState + { availableTokens :: !(Set Token) } deriving stock (Generic, Eq, Show) deriving anyclass (NoThunks, ToExpr, Serialise) -data TxApplicationError = - -- | The transaction could not be applied due to the given unavailable tokens. 
- TxApplicationError { unavailable :: Set Token } +data TxApplicationError + = -- | The transaction could not be applied due to the given unavailable tokens. + TxApplicationError {unavailable :: Set Token} deriving stock (Generic, Eq, Show) deriving anyclass (NoThunks, ToExpr, Serialise) instance PayloadSemantics Tx where - newtype instance PayloadDependentState Tx mk = TestPLDS { - getTestPLDS :: mk Token () + newtype PayloadDependentState Tx mk = TestPLDS + { getTestPLDS :: mk Token () } deriving stock Generic type PayloadDependentError Tx = TxApplicationError applyPayload plds tx = - let - notFound = Set.filter (not . (`Map.member` tokMap)) consumed - in if Set.null notFound + let + notFound = Set.filter (not . (`Map.member` tokMap)) consumed + in + if Set.null notFound then Right $ TestPLDS (Ledger.rawAttachAndApplyDiffs toks fullDiff) - else Left $ TxApplicationError notFound - where - TestPLDS toks@(ValuesMK tokMap) = plds - Tx {consumed, produced} = tx + else Left $ TxApplicationError notFound + where + TestPLDS toks@(ValuesMK tokMap) = plds + Tx{consumed, produced} = tx - consumedDiff, producedDiff :: Diff.Diff Token () - consumedDiff = Diff.fromListDeletes [(t, ()) | t <- Set.toList consumed] - producedDiff = Diff.fromListInserts [(t, ()) | t <- Set.toList produced] + consumedDiff, producedDiff :: Diff.Diff Token () + consumedDiff = Diff.fromListDeletes [(t, ()) | t <- Set.toList consumed] + producedDiff = Diff.fromListInserts [(t, ()) | t <- Set.toList produced] - fullDiff :: DiffMK Token () - fullDiff = DiffMK $ consumedDiff <> producedDiff + fullDiff :: DiffMK Token () + fullDiff = DiffMK $ consumedDiff <> producedDiff getPayloadKeySets tx = LedgerTables $ KeysMK consumed - where - Tx {consumed} = tx - -deriving stock instance EqMK mk - => Eq (PayloadDependentState Tx mk) -deriving stock instance ShowMK mk - => Show (PayloadDependentState Tx mk) -deriving anyclass instance NoThunksMK mk - => NoThunks (PayloadDependentState Tx mk) + where + Tx{consumed} 
= tx + +deriving stock instance + EqMK mk => + Eq (PayloadDependentState Tx mk) +deriving stock instance + ShowMK mk => + Show (PayloadDependentState Tx mk) +deriving anyclass instance + NoThunksMK mk => + NoThunks (PayloadDependentState Tx mk) instance Serialise (PayloadDependentState Tx EmptyMK) where encode = error "Mempool bench TestBlock unused: encode" @@ -161,29 +174,32 @@ data instance Block.StorageConfig TestBlock = TestBlockStorageConfig Ledger tables -------------------------------------------------------------------------------} -type instance TxIn (LedgerState TestBlock) = Token +type instance TxIn (LedgerState TestBlock) = Token type instance TxOut (LedgerState TestBlock) = () instance HasLedgerTables (LedgerState TestBlock) where projectLedgerTables st = LedgerTables $ getTestPLDS $ payloadDependentState st - withLedgerTables st table = st { - payloadDependentState = plds { - getTestPLDS = Ledger.getLedgerTables table - } + withLedgerTables st table = + st + { payloadDependentState = + plds + { getTestPLDS = Ledger.getLedgerTables table + } } - where - TestLedger { payloadDependentState = plds } = st + where + TestLedger{payloadDependentState = plds} = st instance HasLedgerTables (Ticked (LedgerState TestBlock)) where - projectLedgerTables (TickedTestLedger st) = Ledger.castLedgerTables $ - Ledger.projectLedgerTables st + projectLedgerTables (TickedTestLedger st) = + Ledger.castLedgerTables $ + Ledger.projectLedgerTables st withLedgerTables (TickedTestLedger st) tables = TickedTestLedger $ Ledger.withLedgerTables st $ Ledger.castLedgerTables tables instance CanStowLedgerTables (LedgerState TestBlock) where - stowLedgerTables = error "Mempool bench TestBlock unused: stowLedgerTables" - unstowLedgerTables = error "Mempool bench TestBlock unused: unstowLedgerTables" + stowLedgerTables = error "Mempool bench TestBlock unused: stowLedgerTables" + unstowLedgerTables = error "Mempool bench TestBlock unused: unstowLedgerTables" instance IndexedMemPack 
(LedgerState TestBlock EmptyMK) () where indexedTypeName _ = typeName @() @@ -195,26 +211,29 @@ instance IndexedMemPack (LedgerState TestBlock EmptyMK) () where Mempool support -------------------------------------------------------------------------------} -newtype instance Ledger.GenTx TestBlock = TestBlockGenTx { unGenTx :: Tx } - deriving stock (Generic) +newtype instance Ledger.GenTx TestBlock = TestBlockGenTx {unGenTx :: Tx} + deriving stock Generic deriving newtype (Show, NoThunks, Eq, Ord, NFData) -- | For the mempool tests and benchmarks it is not imporant that we calculate -- the actual size of the transaction in bytes. txSize :: Ledger.GenTx TestBlock -> Ledger.ByteSize32 txSize (TestBlockGenTx tx) = - Ledger.ByteSize32 - $ fromIntegral - $ 1 + length (consumed tx) + length (produced tx) + Ledger.ByteSize32 $ + fromIntegral $ + 1 + length (consumed tx) + length (produced tx) instance Ledger.LedgerSupportsMempool TestBlock where applyTx _cfg _shouldIntervene _slot (TestBlockGenTx tx) tickedSt = - except $ fmap ((, ValidatedGenTx (TestBlockGenTx tx)) . Ledger.trackingToDiffs) - $ applyDirectlyToPayloadDependentState tickedSt tx + except $ + fmap ((,ValidatedGenTx (TestBlockGenTx tx)) . Ledger.trackingToDiffs) $ + applyDirectlyToPayloadDependentState tickedSt tx reapplyTx _ cfg slot (ValidatedGenTx genTx) tickedSt = - Ledger.attachAndApplyDiffs tickedSt . fst <$> Ledger.applyTx cfg Ledger.DoNotIntervene slot genTx tickedSt - -- FIXME: it is ok to use 'DoNotIntervene' here? + Ledger.attachAndApplyDiffs tickedSt . fst + <$> Ledger.applyTx cfg Ledger.DoNotIntervene slot genTx tickedSt + + -- FIXME: is it ok to use 'DoNotIntervene' here? txForgetValidated (ValidatedGenTx tx) = tx @@ -231,16 +250,16 @@ instance Ledger.TxLimits TestBlock where txMeasure _cfg _st = pure . Ledger.IgnoringOverflow .
txSize newtype instance Ledger.TxId (Ledger.GenTx TestBlock) = TestBlockTxId Tx - deriving stock (Generic) + deriving stock Generic deriving newtype (Show, Ord, Eq) - deriving anyclass (NoThunks) + deriving anyclass NoThunks instance Ledger.HasTxId (Ledger.GenTx TestBlock) where txId (TestBlockGenTx tx) = TestBlockTxId tx -newtype instance Ledger.Validated (Ledger.GenTx TestBlock) = - ValidatedGenTx (Ledger.GenTx TestBlock) - deriving stock (Generic) +newtype instance Ledger.Validated (Ledger.GenTx TestBlock) + = ValidatedGenTx (Ledger.GenTx TestBlock) + deriving stock Generic deriving newtype (Show, NoThunks) type instance Ledger.ApplyTxErr TestBlock = TxApplicationError diff --git a/ouroboros-consensus/bench/mempool-bench/Main.hs b/ouroboros-consensus/bench/mempool-bench/Main.hs index a61c61058a..33c4e41c91 100644 --- a/ouroboros-consensus/bench/mempool-bench/Main.hs +++ b/ouroboros-consensus/bench/mempool-bench/Main.hs @@ -6,128 +6,136 @@ module Main (main) where -import Bench.Consensus.Mempool -import Bench.Consensus.Mempool.TestBlock (TestBlock) -import qualified Bench.Consensus.Mempool.TestBlock as TestBlock -import Control.Arrow (first) -import Control.DeepSeq -import Control.Monad (unless) -import qualified Control.Tracer as Tracer -import Data.Aeson -import qualified Data.ByteString.Lazy as BL -import qualified Data.Csv as Csv -import Data.Maybe (fromMaybe) -import qualified Data.Text as Text -import qualified Data.Text.Read as Text.Read -import Main.Utf8 (withStdTerminalHandles) -import Ouroboros.Consensus.Ledger.SupportsMempool (ByteSize32) -import qualified Ouroboros.Consensus.Mempool.Capacity as Mempool -import System.Exit (die, exitFailure) -import qualified Test.Consensus.Mempool.Mocked as Mocked -import Test.Consensus.Mempool.Mocked (MockedMempool) -import Test.Tasty (withResource) -import Test.Tasty.Bench (CsvPath (CsvPath), bench, benchIngredients, - bgroup, whnfIO) -import Test.Tasty.HUnit (testCase, (@?=)) -import Test.Tasty.Options 
(changeOption) -import Test.Tasty.Runners (parseOptions, tryIngredients) +import Bench.Consensus.Mempool +import Bench.Consensus.Mempool.TestBlock (TestBlock) +import Bench.Consensus.Mempool.TestBlock qualified as TestBlock +import Control.Arrow (first) +import Control.DeepSeq +import Control.Monad (unless) +import Control.Tracer qualified as Tracer +import Data.Aeson +import Data.ByteString.Lazy qualified as BL +import Data.Csv qualified as Csv +import Data.Maybe (fromMaybe) +import Data.Text qualified as Text +import Data.Text.Read qualified as Text.Read +import Main.Utf8 (withStdTerminalHandles) +import Ouroboros.Consensus.Ledger.SupportsMempool (ByteSize32) +import Ouroboros.Consensus.Mempool.Capacity qualified as Mempool +import System.Exit (die, exitFailure) +import Test.Consensus.Mempool.Mocked (MockedMempool) +import Test.Consensus.Mempool.Mocked qualified as Mocked +import Test.Tasty (withResource) +import Test.Tasty.Bench + ( CsvPath (CsvPath) + , bench + , benchIngredients + , bgroup + , whnfIO + ) +import Test.Tasty.HUnit (testCase, (@?=)) +import Test.Tasty.Options (changeOption) +import Test.Tasty.Runners (parseOptions, tryIngredients) main :: IO () main = withStdTerminalHandles $ do - let csvFilePath = "mempool-benchmarks.csv" - runBenchmarks csvFilePath - rawValues <- parseBenchmarkResults csvFilePath - convertCsvRowsToJsonObjects rawValues "mempool-benchmarks.json" - where - runBenchmarks csvFilePath = do - opts <- parseOptions benchIngredients benchmarkJustAddingTransactions - let opts' = changeOption (Just . fromMaybe (CsvPath csvFilePath)) opts - case tryIngredients benchIngredients opts' benchmarkJustAddingTransactions of - Nothing -> exitFailure - Just runIngredient -> do - success <- runIngredient - unless success exitFailure - where - benchmarkJustAddingTransactions = - bgroup "Just adding" $ - fmap benchAddNTxs [10_000, 20_000] - where - benchAddNTxs n = - withResource - (pure $!! 
- let cmds = mkNTryAddTxs n - sz = foldMap TestBlock.txSize $ getCmdsTxs cmds - in (cmds, sz) - ) - (\_ -> pure ()) - (\getCmds -> do - bgroup (show n <> " transactions") [ - bench "setup mempool" $ whnfIO $ do - (_cmds, capacity) <- getCmds - openMempoolWithCapacity capacity - , bench "setup mempool + benchmark" $ whnfIO $ do - (cmds, capacity) <- getCmds - mempool <- openMempoolWithCapacity capacity - run mempool cmds - , testCase "test" $ do - (cmds, capacity) <- getCmds - mempool <- openMempoolWithCapacity capacity - testAddCmds mempool cmds - , testCase "cmds length" $ do - (cmds, _capacity) <- getCmds - length cmds @?= n - ] - ) - where - testAddCmds mempool cmds = do + let csvFilePath = "mempool-benchmarks.csv" + runBenchmarks csvFilePath + rawValues <- parseBenchmarkResults csvFilePath + convertCsvRowsToJsonObjects rawValues "mempool-benchmarks.json" + where + runBenchmarks csvFilePath = do + opts <- parseOptions benchIngredients benchmarkJustAddingTransactions + let opts' = changeOption (Just . fromMaybe (CsvPath csvFilePath)) opts + case tryIngredients benchIngredients opts' benchmarkJustAddingTransactions of + Nothing -> exitFailure + Just runIngredient -> do + success <- runIngredient + unless success exitFailure + where + benchmarkJustAddingTransactions = + bgroup "Just adding" $ + fmap benchAddNTxs [10_000, 20_000] + where + benchAddNTxs n = + withResource + ( pure $!! 
+ let cmds = mkNTryAddTxs n + sz = foldMap TestBlock.txSize $ getCmdsTxs cmds + in (cmds, sz) + ) + (\_ -> pure ()) + ( \getCmds -> do + bgroup + (show n <> " transactions") + [ bench "setup mempool" $ whnfIO $ do + (_cmds, capacity) <- getCmds + openMempoolWithCapacity capacity + , bench "setup mempool + benchmark" $ whnfIO $ do + (cmds, capacity) <- getCmds + mempool <- openMempoolWithCapacity capacity run mempool cmds - mempoolTxs <- Mocked.getTxs mempool - mempoolTxs @?= getCmdsTxs cmds + , testCase "test" $ do + (cmds, capacity) <- getCmds + mempool <- openMempoolWithCapacity capacity + testAddCmds mempool cmds + , testCase "cmds length" $ do + (cmds, _capacity) <- getCmds + length cmds @?= n + ] + ) + where + testAddCmds mempool cmds = do + run mempool cmds + mempoolTxs <- Mocked.getTxs mempool + mempoolTxs @?= getCmdsTxs cmds - parseBenchmarkResults csvFilePath = do - csvData <- BL.readFile csvFilePath - case Csv.decode Csv.HasHeader csvData of - Left err -> die err - Right rows -> pure rows + parseBenchmarkResults csvFilePath = do + csvData <- BL.readFile csvFilePath + case Csv.decode Csv.HasHeader csvData of + Left err -> die err + Right rows -> pure rows - -- Output the mempool benchmark results as a JSON file, which conforms to - -- the input expected by - -- https://github.com/benchmark-action/github-action-benchmark - convertCsvRowsToJsonObjects rows outFilePath = - encodeFile outFilePath $ fmap convertRowToJsonObject rows - where - convertRowToJsonObject (name:mean:_) = - object [ "name" .= adjustName name - , "value" .= adjustedMean - , "unit" .= unit - ] - where - adjustName = Text.replace "." " " - . 
Text.replace ".benchmark" "" + -- Output the mempool benchmark results as a JSON file, which conforms to + -- the input expected by + -- https://github.com/benchmark-action/github-action-benchmark + convertCsvRowsToJsonObjects rows outFilePath = + encodeFile outFilePath $ fmap convertRowToJsonObject rows + where + convertRowToJsonObject (name : mean : _) = + object + [ "name" .= adjustName name + , "value" .= adjustedMean + , "unit" .= unit + ] + where + adjustName = + Text.replace "." " " + . Text.replace ".benchmark" "" - adjustedMean :: Integer - (adjustedMean, unit) = first round - $ convertPicosecondsWithUnit - $ fromInteger - $ textToInt mean - where - textToInt = either error fst . Text.Read.decimal + adjustedMean :: Integer + (adjustedMean, unit) = + first round $ + convertPicosecondsWithUnit $ + fromInteger $ + textToInt mean + where + textToInt = either error fst . Text.Read.decimal - -- Convert a number of picoseconds to the largest time unit that - -- makes the conversion greater or equal than one. - convertPicosecondsWithUnit :: Double -> (Double, String) - convertPicosecondsWithUnit n - | numberOfDigits <= 4 = (n , "picoseconds" ) - | 4 <= numberOfDigits && numberOfDigits < 7 = (n / 1e3 , "nanoseconds" ) - | 7 <= numberOfDigits && numberOfDigits < 10 = (n / 1e6 , "microseconds") - | 10 <= numberOfDigits && numberOfDigits < 13 = (n / 1e9 , "milliseconds") - | 13 <= numberOfDigits = (n / 1e12, "seconds" ) - where - numberOfDigits :: Int - numberOfDigits = floor (logBase 10 n) + 1 - convertPicosecondsWithUnit _ = error "All the cases should be covered by the conditions above" - - convertRowToJsonObject _ = error "Wrong format" + -- Convert a number of picoseconds to the largest time unit that + -- makes the conversion greater or equal than one. 
+ convertPicosecondsWithUnit :: Double -> (Double, String) + convertPicosecondsWithUnit n + | numberOfDigits <= 4 = (n, "picoseconds") + | 4 <= numberOfDigits && numberOfDigits < 7 = (n / 1e3, "nanoseconds") + | 7 <= numberOfDigits && numberOfDigits < 10 = (n / 1e6, "microseconds") + | 10 <= numberOfDigits && numberOfDigits < 13 = (n / 1e9, "milliseconds") + | 13 <= numberOfDigits = (n / 1e12, "seconds") + where + numberOfDigits :: Int + numberOfDigits = floor (logBase 10 n) + 1 + convertPicosecondsWithUnit _ = error "All the cases should be covered by the conditions above" + convertRowToJsonObject _ = error "Wrong format" {------------------------------------------------------------------------------- Adding TestBlock transactions to a mempool @@ -135,17 +143,21 @@ main = withStdTerminalHandles $ do openMempoolWithCapacity :: ByteSize32 -> IO (MockedMempool IO TestBlock) openMempoolWithCapacity capacity = - Mocked.openMockedMempool (Mempool.mkCapacityBytesOverride capacity) - Tracer.nullTracer - Mocked.MempoolAndModelParams { - Mocked.immpInitialState = TestBlock.initialLedgerState - , Mocked.immpLedgerConfig = TestBlock.sampleLedgerConfig - } + Mocked.openMockedMempool + (Mempool.mkCapacityBytesOverride capacity) + Tracer.nullTracer + Mocked.MempoolAndModelParams + { Mocked.immpInitialState = TestBlock.initialLedgerState + , Mocked.immpLedgerConfig = TestBlock.sampleLedgerConfig + } mkNTryAddTxs :: Int -> [MempoolCmd TestBlock.TestBlock] mkNTryAddTxs 0 = [] -mkNTryAddTxs n = [AddTx (TestBlock.mkTx [] [TestBlock.Token 0])] - <> fmap (AddTx . mkSimpleTx) (zip [0 .. n - 2] [1 .. n - 1]) - where - mkSimpleTx (x, y) = TestBlock.mkTx [TestBlock.Token (fromIntegral x)] - [TestBlock.Token (fromIntegral y)] +mkNTryAddTxs n = + [AddTx (TestBlock.mkTx [] [TestBlock.Token 0])] + <> fmap (AddTx . mkSimpleTx) (zip [0 .. n - 2] [1 .. 
n - 1]) + where + mkSimpleTx (x, y) = + TestBlock.mkTx + [TestBlock.Token (fromIntegral x)] + [TestBlock.Token (fromIntegral y)] diff --git a/ouroboros-consensus/ouroboros-consensus.cabal b/ouroboros-consensus/ouroboros-consensus.cabal index eea42167be..42b984b436 100644 --- a/ouroboros-consensus/ouroboros-consensus.cabal +++ b/ouroboros-consensus/ouroboros-consensus.cabal @@ -30,6 +30,7 @@ flag asserts common common-lib default-language: Haskell2010 + default-extensions: ImportQualifiedPost ghc-options: -Wall -Wcompat @@ -41,6 +42,7 @@ common common-lib -Wmissing-export-lists -Wunused-packages -Wno-unticked-promoted-constructors + -Wprepositive-qualified-module if flag(asserts) ghc-options: -fno-ignore-asserts diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block.hs index e9cccfdc24..0ee718be4a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block.hs @@ -1,12 +1,12 @@ -- | The consensus layer's abstract view of blocks module Ouroboros.Consensus.Block (module X) where -import Ouroboros.Consensus.Block.Abstract as X -import Ouroboros.Consensus.Block.EBB as X -import Ouroboros.Consensus.Block.Forging as X -import Ouroboros.Consensus.Block.NestedContent as X -import Ouroboros.Consensus.Block.RealPoint as X -import Ouroboros.Consensus.Block.SupportsDiffusionPipelining as X -import Ouroboros.Consensus.Block.SupportsMetrics as X -import Ouroboros.Consensus.Block.SupportsProtocol as X -import Ouroboros.Consensus.Block.SupportsSanityCheck as X +import Ouroboros.Consensus.Block.Abstract as X +import Ouroboros.Consensus.Block.EBB as X +import Ouroboros.Consensus.Block.Forging as X +import Ouroboros.Consensus.Block.NestedContent as X +import Ouroboros.Consensus.Block.RealPoint as X +import Ouroboros.Consensus.Block.SupportsDiffusionPipelining as X +import 
Ouroboros.Consensus.Block.SupportsMetrics as X +import Ouroboros.Consensus.Block.SupportsProtocol as X +import Ouroboros.Consensus.Block.SupportsSanityCheck as X diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/Abstract.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/Abstract.hs index 89ce61ff5e..4c61d3d503 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/Abstract.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/Abstract.hs @@ -6,16 +6,19 @@ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Block.Abstract ( - -- * Protocol +module Ouroboros.Consensus.Block.Abstract + ( -- * Protocol BlockProtocol + -- * Configuration , BlockConfig , CodecConfig , StorageConfig + -- * Previous hash , GetPrevHash (..) , blockPrevHash + -- * Working with headers , GetHeader (..) , GetHeader1 (..) @@ -26,14 +29,18 @@ module Ouroboros.Consensus.Block.Abstract ( , headerHash , headerPoint , headerToIsEBB + -- * Raw hash , ConvertRawHash (..) , decodeRawHash , encodeRawHash + -- * Utilities for working with WithOrigin , succWithOrigin + -- * Ouroboros Genesis window , GenesisWindow (..) + -- * Re-export basic definitions from @ouroboros-network@ , ChainHash (..) , HasHeader (..) @@ -50,6 +57,7 @@ module Ouroboros.Consensus.Block.Abstract ( , castPoint , pointHash , pointSlot + -- * Re-export basic definitions from @cardano-base@ , BlockNo (..) , EpochNo (..) 
@@ -62,27 +70,48 @@ module Ouroboros.Consensus.Block.Abstract ( , withOriginToMaybe ) where -import Cardano.Slotting.Block (BlockNo (..)) -import Cardano.Slotting.Slot (EpochNo (..), EpochSize (..), - SlotNo (..), WithOrigin (Origin), fromWithOrigin, - withOrigin, withOriginFromMaybe, withOriginToMaybe) -import qualified Cardano.Slotting.Slot as Cardano -import qualified Codec.Serialise as Serialise -import Codec.Serialise.Decoding (Decoder) -import Codec.Serialise.Encoding (Encoding) -import qualified Data.ByteString as Strict -import Data.ByteString.Short (ShortByteString) -import qualified Data.ByteString.Short as Short -import Data.Kind (Type) -import Data.Maybe (isJust) -import Data.Word (Word32, Word64) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block.EBB -import Ouroboros.Network.Block (ChainHash (..), HasHeader (..), - HeaderFields (..), HeaderHash, Point, StandardHash, - blockHash, blockNo, blockPoint, blockSlot, castHash, - castHeaderFields, castPoint, pattern BlockPoint, - pattern GenesisPoint, pointHash, pointSlot) +import Cardano.Slotting.Block (BlockNo (..)) +import Cardano.Slotting.Slot + ( EpochNo (..) + , EpochSize (..) + , SlotNo (..) + , WithOrigin (Origin) + , fromWithOrigin + , withOrigin + , withOriginFromMaybe + , withOriginToMaybe + ) +import Cardano.Slotting.Slot qualified as Cardano +import Codec.Serialise qualified as Serialise +import Codec.Serialise.Decoding (Decoder) +import Codec.Serialise.Encoding (Encoding) +import Data.ByteString qualified as Strict +import Data.ByteString.Short (ShortByteString) +import Data.ByteString.Short qualified as Short +import Data.Kind (Type) +import Data.Maybe (isJust) +import Data.Word (Word32, Word64) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block.EBB +import Ouroboros.Network.Block + ( ChainHash (..) + , HasHeader (..) + , HeaderFields (..) 
+ , HeaderHash + , Point + , StandardHash + , blockHash + , blockNo + , blockPoint + , blockSlot + , castHash + , castHeaderFields + , castPoint + , pointHash + , pointSlot + , pattern BlockPoint + , pattern GenesisPoint + ) {------------------------------------------------------------------------------- Protocol @@ -127,7 +156,7 @@ blockPrevHash = castHash . headerPrevHash . getHeader data family Header blk :: Type class HasHeader (Header blk) => GetHeader blk where - getHeader :: blk -> Header blk + getHeader :: blk -> Header blk -- | Check whether the header is the header of the block. -- @@ -137,7 +166,7 @@ class HasHeader (Header blk) => GetHeader blk where -- | When the given header is the header of an Epoch Boundary Block, returns -- its epoch number. - headerIsEBB :: Header blk -> Maybe EpochNo + headerIsEBB :: Header blk -> Maybe EpochNo headerToIsEBB :: GetHeader blk => Header blk -> IsEBB headerToIsEBB = toIsEBB . isJust . headerIsEBB @@ -224,16 +253,20 @@ class ConvertRawHash blk where -- | The size of the hash in number of bytes hashSize :: proxy blk -> Word32 - {-# MINIMAL hashSize - , (toRawHash | toShortRawHash) - , (fromRawHash | fromShortRawHash) #-} + {-# MINIMAL + hashSize + , (toRawHash | toShortRawHash) + , (fromRawHash | fromShortRawHash) + #-} -encodeRawHash :: ConvertRawHash blk - => proxy blk -> HeaderHash blk -> Encoding +encodeRawHash :: + ConvertRawHash blk => + proxy blk -> HeaderHash blk -> Encoding encodeRawHash p = Serialise.encode . toShortRawHash p -decodeRawHash :: ConvertRawHash blk - => proxy blk -> forall s. Decoder s (HeaderHash blk) +decodeRawHash :: + ConvertRawHash blk => + proxy blk -> forall s. Decoder s (HeaderHash blk) decodeRawHash p = fromShortRawHash p <$> Serialise.decode {------------------------------------------------------------------------------- @@ -264,6 +297,6 @@ succWithOrigin = withOrigin minBound succ -- consider when deciding whether to disconnect from a peer. 
It has to be
 -- smaller or equal to the stability window. For instance, for Shelley-based
 -- eras, this will be equal to a stability window, that is @3k/f@.
-newtype GenesisWindow = GenesisWindow { unGenesisWindow :: Word64 }
+newtype GenesisWindow = GenesisWindow {unGenesisWindow :: Word64}
 deriving stock (Show, Eq, Ord)
 deriving newtype (NoThunks, Num)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/EBB.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/EBB.hs
index 72fb1daa0d..9820165301 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/EBB.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/EBB.hs
@@ -2,16 +2,16 @@
 {-# LANGUAGE DeriveGeneric #-}
 -- | Generic infrastructure for working with EBBs
-module Ouroboros.Consensus.Block.EBB (
- IsEBB (..)
+module Ouroboros.Consensus.Block.EBB
+ ( IsEBB (..)
 , fromIsEBB
 , toIsEBB
 ) where
-import Codec.Serialise (Serialise (..))
-import GHC.Generics (Generic)
-import NoThunks.Class (NoThunks)
-import Ouroboros.Consensus.Util.Condense
+import Codec.Serialise (Serialise (..))
+import GHC.Generics (Generic)
+import NoThunks.Class (NoThunks)
+import Ouroboros.Consensus.Util.Condense
 {-------------------------------------------------------------------------------
   EBBs
@@ -44,5 +44,5 @@ toIsEBB :: Bool -> IsEBB
 toIsEBB b = if b then IsEBB else IsNotEBB
 fromIsEBB :: IsEBB -> Bool
-fromIsEBB IsEBB = True
+fromIsEBB IsEBB = True
 fromIsEBB IsNotEBB = False
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/Forging.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/Forging.hs
index b5f6522913..ec05ba26a4 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/Forging.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/Forging.hs
@@ -8,8 +8,8 @@
 {-# LANGUAGE TypeOperators #-}
 {-# LANGUAGE UndecidableInstances
#-}
-module Ouroboros.Consensus.Block.Forging (
- BlockForging (..)
+module Ouroboros.Consensus.Block.Forging
+ ( BlockForging (..)
 , CannotForge
 , ForgeStateInfo
 , ForgeStateUpdateError
@@ -18,20 +18,21 @@ module Ouroboros.Consensus.Block.Forging (
 , castForgeStateUpdateInfo
 , checkShouldForge
 , forgeStateUpdateInfoFromUpdateInfo
+ -- * 'UpdateInfo'
 , UpdateInfo (..)
 ) where
-import Control.Tracer (Tracer, traceWith)
-import Data.Kind (Type)
-import Data.Text (Text)
-import GHC.Stack
-import Ouroboros.Consensus.Block.Abstract
-import Ouroboros.Consensus.Config
-import Ouroboros.Consensus.Ledger.Abstract
-import Ouroboros.Consensus.Ledger.SupportsMempool
-import Ouroboros.Consensus.Protocol.Abstract
-import Ouroboros.Consensus.Ticked
+import Control.Tracer (Tracer, traceWith)
+import Data.Kind (Type)
+import Data.Text (Text)
+import GHC.Stack
+import Ouroboros.Consensus.Block.Abstract
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.SupportsMempool
+import Ouroboros.Consensus.Protocol.Abstract
+import Ouroboros.Consensus.Ticked
 -- | Information about why we /cannot/ forge a block, although we are a leader
 --
@@ -50,185 +51,180 @@ type family ForgeStateUpdateError blk :: Type
 -- | The result of 'updateForgeState'.
 --
 -- Note: the forge state itself is implicit and not reflected in the types.
-data ForgeStateUpdateInfo blk =
- ForgeStateUpdated (ForgeStateInfo blk)
- -- ^ NB The update might have not changed the forge state.
- | ForgeStateUpdateFailed (ForgeStateUpdateError blk)
- | ForgeStateUpdateSuppressed
- -- ^ A node was prevented from forging for an artificial reason, such as
+data ForgeStateUpdateInfo blk
+ = -- | NB The update might have not changed the forge state.
+ ForgeStateUpdated (ForgeStateInfo blk)
+ | ForgeStateUpdateFailed (ForgeStateUpdateError blk)
+ | -- | A node was prevented from forging for an artificial reason, such as
 -- testing, benchmarking, etc.
It's /artificial/ in that this constructor
 -- should never occur in a production deployment.
+ ForgeStateUpdateSuppressed
-deriving instance (Show (ForgeStateInfo blk), Show (ForgeStateUpdateError blk))
- => Show (ForgeStateUpdateInfo blk)
+deriving instance
+ (Show (ForgeStateInfo blk), Show (ForgeStateUpdateError blk)) =>
+ Show (ForgeStateUpdateInfo blk)
 castForgeStateUpdateInfo ::
- ( ForgeStateInfo blk ~ ForgeStateInfo blk'
- , ForgeStateUpdateError blk ~ ForgeStateUpdateError blk'
- )
- => ForgeStateUpdateInfo blk -> ForgeStateUpdateInfo blk'
+ ( ForgeStateInfo blk ~ ForgeStateInfo blk'
+ , ForgeStateUpdateError blk ~ ForgeStateUpdateError blk'
+ ) =>
+ ForgeStateUpdateInfo blk -> ForgeStateUpdateInfo blk'
 castForgeStateUpdateInfo = \case
- ForgeStateUpdated x -> ForgeStateUpdated x
- ForgeStateUpdateFailed x -> ForgeStateUpdateFailed x
- ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed
+ ForgeStateUpdated x -> ForgeStateUpdated x
+ ForgeStateUpdateFailed x -> ForgeStateUpdateFailed x
+ ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed
 -- | Stateful wrapper around block production
 --
 -- NOTE: do not refer to the consensus or ledger config in the closure of this
 -- record because they might contain an @EpochInfo Identity@, which will be
 -- incorrect when used as part of the hard fork combinator.
-data BlockForging m blk = BlockForging {
- -- | Identifier used in the trace messages produced for this
- -- 'BlockForging' record.
- --
- -- Useful when the node is running with multiple sets of credentials.
- forgeLabel :: Text
-
- -- | Proof that the node can be a leader
- --
- -- NOTE: the other fields of this record may refer to this value (or a
- -- value derived from it) in their closure, which means one should not
- -- override this field independently from the others.
- , canBeLeader :: CanBeLeader (BlockProtocol blk)
-
- -- | Update the forge state.
- --
- -- When the node can be a leader, this will be called at the start of
- -- each slot, right before calling 'checkCanForge'.
- --
- -- When 'Updated' is returned, we trace the 'ForgeStateInfo'.
- --
- -- When 'UpdateFailed' is returned, we trace the 'ForgeStateUpdateError'
- -- and don't call 'checkCanForge'.
- , updateForgeState ::
- TopLevelConfig blk
- -> SlotNo
- -> Ticked (ChainDepState (BlockProtocol blk))
- -> m (ForgeStateUpdateInfo blk)
-
- -- | After checking that the node indeed is a leader ('checkIsLeader'
- -- returned 'Just') and successfully updating the forge state
- -- ('updateForgeState' did not return 'UpdateFailed'), do another check
- -- to see whether we can actually forge a block.
- --
- -- When 'CannotForge' is returned, we don't call 'forgeBlock'.
- , checkCanForge ::
- TopLevelConfig blk
- -> SlotNo
- -> Ticked (ChainDepState (BlockProtocol blk))
- -> IsLeader (BlockProtocol blk)
- -> ForgeStateInfo blk -- Proof that 'updateForgeState' did not fail
- -> Either (CannotForge blk) ()
-
- -- | Forge a block
- --
- -- The function is passed the prefix of the mempool that will fit within
- -- a valid block; this is a set of transactions that is guaranteed to be
- -- consistent with the ledger state (also provided as an argument) and
- -- with each other (when applied in order). All of them should be
- -- included in the forged block, since the mempool ensures they can fit.
- --
- -- NOTE: do not refer to the consensus or ledger config in the closure,
- -- because they might contain an @EpochInfo Identity@, which will be
- -- incorrect when used as part of the hard fork combinator. Use the
- -- given 'TopLevelConfig' instead, as it is guaranteed to be correct
- -- even when used as part of the hard fork combinator.
- --
- -- PRECONDITION: 'checkCanForge' returned @Right ()@.
- , forgeBlock ::
- TopLevelConfig blk
- -> BlockNo -- Current block number
- -> SlotNo -- Current slot number
- -> TickedLedgerState blk EmptyMK -- Current ledger state
- -> [Validated (GenTx blk)] -- Transactions to include
- -> IsLeader (BlockProtocol blk) -- Proof we are leader
- -> m blk
- }
-
-data ShouldForge blk =
- -- | Before check whether we are a leader in this slot, we tried to update
+data BlockForging m blk = BlockForging
+ { forgeLabel :: Text
+ -- ^ Identifier used in the trace messages produced for this
+ -- 'BlockForging' record.
+ --
+ -- Useful when the node is running with multiple sets of credentials.
+ , canBeLeader :: CanBeLeader (BlockProtocol blk)
+ -- ^ Proof that the node can be a leader
+ --
+ -- NOTE: the other fields of this record may refer to this value (or a
+ -- value derived from it) in their closure, which means one should not
+ -- override this field independently from the others.
+ , updateForgeState ::
+ TopLevelConfig blk ->
+ SlotNo ->
+ Ticked (ChainDepState (BlockProtocol blk)) ->
+ m (ForgeStateUpdateInfo blk)
+ -- ^ Update the forge state.
+ --
+ -- When the node can be a leader, this will be called at the start of
+ -- each slot, right before calling 'checkCanForge'.
+ --
+ -- When 'Updated' is returned, we trace the 'ForgeStateInfo'.
+ --
+ -- When 'UpdateFailed' is returned, we trace the 'ForgeStateUpdateError'
+ -- and don't call 'checkCanForge'.
+ , checkCanForge ::
+ TopLevelConfig blk ->
+ SlotNo ->
+ Ticked (ChainDepState (BlockProtocol blk)) ->
+ IsLeader (BlockProtocol blk) ->
+ ForgeStateInfo blk -> -- Proof that 'updateForgeState' did not fail
+ Either (CannotForge blk) ()
+ -- ^ After checking that the node indeed is a leader ('checkIsLeader'
+ -- returned 'Just') and successfully updating the forge state
+ -- ('updateForgeState' did not return 'UpdateFailed'), do another check
+ -- to see whether we can actually forge a block.
+ --
+ -- When 'CannotForge' is returned, we don't call 'forgeBlock'.
+ , forgeBlock ::
+ TopLevelConfig blk ->
+ BlockNo -> -- Current block number
+ SlotNo -> -- Current slot number
+ TickedLedgerState blk EmptyMK -> -- Current ledger state
+ [Validated (GenTx blk)] -> -- Transactions to include
+ IsLeader (BlockProtocol blk) -> -- Proof we are leader
+ m blk
+ -- ^ Forge a block
+ --
+ -- The function is passed the prefix of the mempool that will fit within
+ -- a valid block; this is a set of transactions that is guaranteed to be
+ -- consistent with the ledger state (also provided as an argument) and
+ -- with each other (when applied in order). All of them should be
+ -- included in the forged block, since the mempool ensures they can fit.
+ --
+ -- NOTE: do not refer to the consensus or ledger config in the closure,
+ -- because they might contain an @EpochInfo Identity@, which will be
+ -- incorrect when used as part of the hard fork combinator. Use the
+ -- given 'TopLevelConfig' instead, as it is guaranteed to be correct
+ -- even when used as part of the hard fork combinator.
+ --
+ -- PRECONDITION: 'checkCanForge' returned @Right ()@.
+ }
+
+data ShouldForge blk
+ = -- | Before check whether we are a leader in this slot, we tried to update
 -- our forge state ('updateForgeState'), but it failed. We will not check
 -- whether we are leader and will thus not forge a block either.
 --
 -- E.g., we could not evolve our KES key.
 ForgeStateUpdateError (ForgeStateUpdateError blk)
-
- -- | We are a leader in this slot, but we cannot forge for a certain
+ | -- | We are a leader in this slot, but we cannot forge for a certain
 -- reason.
 --
 -- E.g., our KES key is not yet valid in this slot or we are not the
 -- current delegate of the genesis key we have a delegation certificate
 -- from.
- | CannotForge (CannotForge blk)
-
- -- | We are not a leader in this slot
- | NotLeader
-
- -- | We are a leader in this slot and we should forge a block.
- | ShouldForge (IsLeader (BlockProtocol blk))
+ CannotForge (CannotForge blk)
+ | -- | We are not a leader in this slot
+ NotLeader
+ | -- | We are a leader in this slot and we should forge a block.
+ ShouldForge (IsLeader (BlockProtocol blk))
 checkShouldForge ::
- forall m blk.
- ( Monad m
- , ConsensusProtocol (BlockProtocol blk)
- , HasCallStack
- )
- => BlockForging m blk
- -> Tracer m (ForgeStateInfo blk)
- -> TopLevelConfig blk
- -> SlotNo
- -> Ticked (ChainDepState (BlockProtocol blk))
- -> m (ShouldForge blk)
-checkShouldForge BlockForging{..}
- forgeStateInfoTracer
- cfg
- slot
- tickedChainDepState =
+ forall m blk.
+ ( Monad m
+ , ConsensusProtocol (BlockProtocol blk)
+ , HasCallStack
+ ) =>
+ BlockForging m blk ->
+ Tracer m (ForgeStateInfo blk) ->
+ TopLevelConfig blk ->
+ SlotNo ->
+ Ticked (ChainDepState (BlockProtocol blk)) ->
+ m (ShouldForge blk)
+checkShouldForge
+ BlockForging{..}
+ forgeStateInfoTracer
+ cfg
+ slot
+ tickedChainDepState =
 updateForgeState cfg slot tickedChainDepState >>= \updateInfo ->
 case updateInfo of
- ForgeStateUpdated info -> handleUpdated info
- ForgeStateUpdateFailed err -> return $ ForgeStateUpdateError err
- ForgeStateUpdateSuppressed -> return NotLeader
- where
+ ForgeStateUpdated info -> handleUpdated info
+ ForgeStateUpdateFailed err -> return $ ForgeStateUpdateError err
+ ForgeStateUpdateSuppressed -> return NotLeader
+ where
 mbIsLeader :: Maybe (IsLeader (BlockProtocol blk))
 mbIsLeader =
- -- WARNING: It is critical that we do not depend on the 'BlockForging'
- -- record for the implementation of 'checkIsLeader'. Doing so would
- -- make composing multiple 'BlockForging' values responsible for also
- -- composing the 'checkIsLeader' checks, but that should be the
- -- responsibility of the 'ConsensusProtocol' instance for the
- -- composition of those blocks.
- checkIsLeader
- (configConsensus cfg)
- canBeLeader
- slot
- tickedChainDepState
+ -- WARNING: It is critical that we do not depend on the 'BlockForging'
+ -- record for the implementation of 'checkIsLeader'. Doing so would
+ -- make composing multiple 'BlockForging' values responsible for also
+ -- composing the 'checkIsLeader' checks, but that should be the
+ -- responsibility of the 'ConsensusProtocol' instance for the
+ -- composition of those blocks.
+ checkIsLeader
+ (configConsensus cfg)
+ canBeLeader
+ slot
+ tickedChainDepState
 handleUpdated :: ForgeStateInfo blk -> m (ShouldForge blk)
 handleUpdated info = do
- traceWith forgeStateInfoTracer info
- return $ case mbIsLeader of
- Nothing -> NotLeader
- Just isLeader ->
- case checkCanForge cfg slot tickedChainDepState isLeader info of
- Left cannotForge -> CannotForge cannotForge
- Right () -> ShouldForge isLeader
+ traceWith forgeStateInfoTracer info
+ return $ case mbIsLeader of
+ Nothing -> NotLeader
+ Just isLeader ->
+ case checkCanForge cfg slot tickedChainDepState isLeader info of
+ Left cannotForge -> CannotForge cannotForge
+ Right () -> ShouldForge isLeader
 {-------------------------------------------------------------------------------
   UpdateInfo
 -------------------------------------------------------------------------------}
 -- | The result of updating something, e.g., the forge state.
-data UpdateInfo updated failed =
- -- | NOTE: The update may have induced no change.
+data UpdateInfo updated failed
+ = -- | NOTE: The update may have induced no change.
Updated updated
 | UpdateFailed failed
- deriving (Show)
+ deriving Show
 -- | Embed 'UpdateInfo' into 'ForgeStateUpdateInfo'
 forgeStateUpdateInfoFromUpdateInfo ::
- UpdateInfo (ForgeStateInfo blk) (ForgeStateUpdateError blk)
- -> ForgeStateUpdateInfo blk
+ UpdateInfo (ForgeStateInfo blk) (ForgeStateUpdateError blk) ->
+ ForgeStateUpdateInfo blk
 forgeStateUpdateInfoFromUpdateInfo = \case
- Updated info -> ForgeStateUpdated info
- UpdateFailed err -> ForgeStateUpdateFailed err
+ Updated info -> ForgeStateUpdated info
+ UpdateFailed err -> ForgeStateUpdateFailed err
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/NestedContent.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/NestedContent.hs
index 9b76af6a46..9d02d2bdfd 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/NestedContent.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/NestedContent.hs
@@ -11,31 +11,34 @@
 {-# LANGUAGE TypeOperators #-}
 {-# LANGUAGE UndecidableInstances #-}
-module Ouroboros.Consensus.Block.NestedContent (
- -- * Block contents
+module Ouroboros.Consensus.Block.NestedContent
+ ( -- * Block contents
 HasNestedContent (..)
 , NestedCtxt_
 , curriedNest
+ -- * Flip type arguments
 , NestedCtxt (..)
 , castNestedCtxt
 , mapNestedCtxt
+ -- * Existentials
 , castSomeNestedCtxt
 , mapSomeNestedCtxt
+ -- * Convenience re-exports
 , module Ouroboros.Consensus.Util.DepPair
 , SomeSecond (..)
 ) where
-import Data.Kind (Type)
-import Data.Maybe (isJust)
-import Data.Proxy
-import Data.Type.Equality
-import Data.Typeable (Typeable)
-import NoThunks.Class (InspectHeap (..), NoThunks)
-import Ouroboros.Consensus.Util (SomeSecond (..))
-import Ouroboros.Consensus.Util.DepPair
+import Data.Kind (Type)
+import Data.Maybe (isJust)
+import Data.Proxy
+import Data.Type.Equality
+import Data.Typeable (Typeable)
+import NoThunks.Class (InspectHeap (..), NoThunks)
+import Ouroboros.Consensus.Util (SomeSecond (..))
+import Ouroboros.Consensus.Util.DepPair
 {-------------------------------------------------------------------------------
   Block contents
@@ -72,24 +75,29 @@ import Ouroboros.Consensus.Util.DepPair
 -- bytes from the block into the type indicated by that @NestedCtxt@.
 --
 -- TODO: We could perhaps define this independent of blocks in 'DepPair'.
-class ( forall a. Show (NestedCtxt_ blk f a)
- , SameDepIndex (NestedCtxt_ blk f)
- ) => HasNestedContent f blk where
+class
+ ( forall a.
Show (NestedCtxt_ blk f a)
+ , SameDepIndex (NestedCtxt_ blk f)
+ ) =>
+ HasNestedContent f blk
+ where
 unnest :: f blk -> DepPair (NestedCtxt f blk)
- nest :: DepPair (NestedCtxt f blk) -> f blk
+ nest :: DepPair (NestedCtxt f blk) -> f blk
 -- Defaults when there is only a single type
- default unnest :: ( TrivialDependency (NestedCtxt f blk)
- , TrivialIndex (NestedCtxt f blk) ~ f blk
- )
- => f blk -> DepPair (NestedCtxt f blk)
+ default unnest ::
+ ( TrivialDependency (NestedCtxt f blk)
+ , TrivialIndex (NestedCtxt f blk) ~ f blk
+ ) =>
+ f blk -> DepPair (NestedCtxt f blk)
 unnest = DepPair indexIsTrivial
- default nest :: ( TrivialDependency (NestedCtxt f blk)
- , TrivialIndex (NestedCtxt f blk) ~ f blk
- )
- => DepPair (NestedCtxt f blk) -> f blk
+ default nest ::
+ ( TrivialDependency (NestedCtxt f blk)
+ , TrivialIndex (NestedCtxt f blk) ~ f blk
+ ) =>
+ DepPair (NestedCtxt f blk) -> f blk
 nest (DepPair x y) = fromTrivialDependency x y
 curriedNest :: HasNestedContent f blk => NestedCtxt f blk a -> a -> f blk
@@ -109,38 +117,46 @@ data family NestedCtxt_ blk :: (Type -> Type) -> (Type -> Type)
 -- 'NestedCtxt' must be indexed on @blk@: it is the block that determines this
 -- type. However, we often want to partially apply the second argument (the
 -- functor), leaving the block type not yet defined.
-newtype NestedCtxt f blk a = NestedCtxt {
- flipNestedCtxt :: NestedCtxt_ blk f a
- }
-
-deriving instance Show (NestedCtxt_ blk f a)
- => Show (NestedCtxt f blk a)
-
-instance SameDepIndex (NestedCtxt_ blk f)
- => SameDepIndex (NestedCtxt f blk) where
+newtype NestedCtxt f blk a = NestedCtxt
+ { flipNestedCtxt :: NestedCtxt_ blk f a
+ }
+
+deriving instance
+ Show (NestedCtxt_ blk f a) =>
+ Show (NestedCtxt f blk a)
+
+instance
+ SameDepIndex (NestedCtxt_ blk f) =>
+ SameDepIndex (NestedCtxt f blk)
+ where
 sameDepIndex (NestedCtxt ctxt) (NestedCtxt ctxt') =
- sameDepIndex ctxt ctxt'
+ sameDepIndex ctxt ctxt'
-instance TrivialDependency (NestedCtxt_ blk f)
- => TrivialDependency (NestedCtxt f blk) where
+instance
+ TrivialDependency (NestedCtxt_ blk f) =>
+ TrivialDependency (NestedCtxt f blk)
+ where
 type TrivialIndex (NestedCtxt f blk) = TrivialIndex (NestedCtxt_ blk f)
 hasSingleIndex (NestedCtxt ctxt) (NestedCtxt ctxt') =
- hasSingleIndex ctxt ctxt'
+ hasSingleIndex ctxt ctxt'
 indexIsTrivial =
- NestedCtxt indexIsTrivial
+ NestedCtxt indexIsTrivial
-castNestedCtxt :: (NestedCtxt_ blk f a -> NestedCtxt_ blk' f a)
- -> NestedCtxt f blk a
- -> NestedCtxt f blk' a
+castNestedCtxt ::
+ (NestedCtxt_ blk f a -> NestedCtxt_ blk' f a) ->
+ NestedCtxt f blk a ->
+ NestedCtxt f blk' a
 castNestedCtxt coerce (NestedCtxt ctxt) = NestedCtxt (coerce ctxt)
-mapNestedCtxt :: (NestedCtxt_ blk f a -> NestedCtxt_ blk' f' a')
- -> NestedCtxt f blk a
- -> NestedCtxt f' blk' a'
+mapNestedCtxt ::
+ (NestedCtxt_ blk f a -> NestedCtxt_ blk' f' a') ->
+ NestedCtxt f blk a ->
+ NestedCtxt f' blk' a'
 mapNestedCtxt f (NestedCtxt ctxt) = NestedCtxt (f ctxt)
-deriving instance (HasNestedContent f blk, forall a. Show (g a))
- => Show (GenDepPair g (NestedCtxt f blk))
+deriving instance
+ (HasNestedContent f blk, forall a.
Show (g a)) =>
+ Show (GenDepPair g (NestedCtxt f blk))
 {-------------------------------------------------------------------------------
   Existentials
@@ -155,19 +171,25 @@ deriving instance HasNestedContent f blk => Show (SomeSecond (NestedCtxt f) blk)
 -- However, this constraint would have to be propagated all the way up, which is
 -- rather verbose and annoying (standalone deriving has to be used), hence we
 -- use 'InspectHeap' for convenience.
-deriving via InspectHeap (SomeSecond (NestedCtxt f) blk)
- instance (Typeable f, Typeable blk) => NoThunks (SomeSecond (NestedCtxt f) blk)
-
-instance SameDepIndex (NestedCtxt_ blk f)
- => Eq (SomeSecond (NestedCtxt f) blk) where
+deriving via
+ InspectHeap (SomeSecond (NestedCtxt f) blk)
+ instance
+ (Typeable f, Typeable blk) => NoThunks (SomeSecond (NestedCtxt f) blk)
+
+instance
+ SameDepIndex (NestedCtxt_ blk f) =>
+ Eq (SomeSecond (NestedCtxt f) blk)
+ where
 SomeSecond ctxt == SomeSecond ctxt' = isJust (sameDepIndex ctxt ctxt')
-castSomeNestedCtxt :: (forall a. NestedCtxt_ blk f a -> NestedCtxt_ blk' f a)
- -> SomeSecond (NestedCtxt f) blk
- -> SomeSecond (NestedCtxt f) blk'
+castSomeNestedCtxt ::
+ (forall a. NestedCtxt_ blk f a -> NestedCtxt_ blk' f a) ->
+ SomeSecond (NestedCtxt f) blk ->
+ SomeSecond (NestedCtxt f) blk'
 castSomeNestedCtxt coerce (SomeSecond ctxt) = SomeSecond (castNestedCtxt coerce ctxt)
-mapSomeNestedCtxt :: (forall a. NestedCtxt_ blk f a -> NestedCtxt_ blk' f' a)
- -> SomeSecond (NestedCtxt f) blk
- -> SomeSecond (NestedCtxt f') blk'
+mapSomeNestedCtxt ::
+ (forall a.
NestedCtxt_ blk f a -> NestedCtxt_ blk' f' a) ->
+ SomeSecond (NestedCtxt f) blk ->
+ SomeSecond (NestedCtxt f') blk'
 mapSomeNestedCtxt f (SomeSecond ctxt) = SomeSecond (mapNestedCtxt f ctxt)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/RealPoint.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/RealPoint.hs
index 1c28ab3b52..94e2b9515d 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/RealPoint.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/RealPoint.hs
@@ -7,11 +7,12 @@
 {-# LANGUAGE TypeApplications #-}
 {-# LANGUAGE UndecidableInstances #-}
-module Ouroboros.Consensus.Block.RealPoint (
- -- * Non-genesis points
+module Ouroboros.Consensus.Block.RealPoint
+ ( -- * Non-genesis points
 RealPoint (..)
 , decodeRealPoint
 , encodeRealPoint
+ -- * Derived
 , blockRealPoint
 , castRealPoint
@@ -23,17 +24,17 @@ module Ouroboros.Consensus.Block.RealPoint (
 , withOriginRealPointToPoint
 ) where
-import Cardano.Binary (enforceSize)
-import Codec.CBOR.Decoding (Decoder)
-import Codec.CBOR.Encoding (Encoding, encodeListLen)
-import Codec.Serialise (decode, encode)
-import Data.Coerce
-import Data.Proxy
-import Data.Typeable (Typeable, typeRep)
-import GHC.Generics
-import NoThunks.Class (NoThunks (..))
-import Ouroboros.Consensus.Block.Abstract
-import Ouroboros.Consensus.Util.Condense
+import Cardano.Binary (enforceSize)
+import Codec.CBOR.Decoding (Decoder)
+import Codec.CBOR.Encoding (Encoding, encodeListLen)
+import Codec.Serialise (decode, encode)
+import Data.Coerce
+import Data.Proxy
+import Data.Typeable (Typeable, typeRep)
+import GHC.Generics
+import NoThunks.Class (NoThunks (..))
+import Ouroboros.Consensus.Block.Abstract
+import Ouroboros.Consensus.Util.Condense
 {-------------------------------------------------------------------------------
   Non-genesis point
@@ -41,34 +42,39 @@ import Ouroboros.Consensus.Util.Condense
 -- | Point of an actual
block (i.e., not genesis)
 data RealPoint blk = RealPoint !SlotNo !(HeaderHash blk)
- deriving (Generic)
+ deriving Generic
 -- TODO: The Ord instance should go
 --
-deriving instance StandardHash blk => Eq (RealPoint blk)
-deriving instance StandardHash blk => Ord (RealPoint blk)
+deriving instance StandardHash blk => Eq (RealPoint blk)
+deriving instance StandardHash blk => Ord (RealPoint blk)
 deriving instance StandardHash blk => Show (RealPoint blk)
-instance (StandardHash blk, Typeable blk)
- => NoThunks (RealPoint blk) where
+instance
+ (StandardHash blk, Typeable blk) =>
+ NoThunks (RealPoint blk)
+ where
 showTypeOf _ = show $ typeRep (Proxy @(RealPoint blk))
 instance Condense (HeaderHash blk) => Condense (RealPoint blk) where
 condense (RealPoint s h) =
 "(Point " <> condense s <> ", " <> condense h <> ")"
-encodeRealPoint :: (HeaderHash blk -> Encoding)
- -> (RealPoint blk -> Encoding)
+encodeRealPoint ::
+ (HeaderHash blk -> Encoding) ->
+ (RealPoint blk -> Encoding)
+encodeRealPoint encodeHash (RealPoint s h) =
+ mconcat
+ [ encodeListLen 2
 , encode s
 , encodeHash h
 ]
-decodeRealPoint :: (forall s. Decoder s (HeaderHash blk))
- -> (forall s. Decoder s (RealPoint blk))
+decodeRealPoint ::
+ (forall s. Decoder s (HeaderHash blk)) ->
+ (forall s. Decoder s (RealPoint blk))
 decodeRealPoint decodeHash = do
- enforceSize "RealPoint" 2
- RealPoint <$> decode <*> decodeHash
+ enforceSize "RealPoint" 2
+ RealPoint <$> decode <*> decodeHash
 {-------------------------------------------------------------------------------
   Derived
@@ -82,33 +88,35 @@ realPointHash (RealPoint _ h) = h
 blockRealPoint :: HasHeader blk => blk -> RealPoint blk
 blockRealPoint blk = RealPoint s h
- where
- HeaderFields { headerFieldSlot = s, headerFieldHash = h } = getHeaderFields blk
+ where
+ HeaderFields{headerFieldSlot = s, headerFieldHash = h} = getHeaderFields blk
 headerRealPoint ::
- forall blk.
HasHeader (Header blk)
- => Header blk
- -> RealPoint blk
+ forall blk.
+ HasHeader (Header blk) =>
+ Header blk ->
+ RealPoint blk
 headerRealPoint hdr = RealPoint s h
- where
- HeaderFields { headerFieldSlot = s, headerFieldHash = h } = hf
+ where
+ HeaderFields{headerFieldSlot = s, headerFieldHash = h} = hf
- hf :: HeaderFields (Header blk)
- hf = getHeaderFields hdr
+ hf :: HeaderFields (Header blk)
+ hf = getHeaderFields hdr
 realPointToPoint :: RealPoint blk -> Point blk
 realPointToPoint (RealPoint s h) = BlockPoint s h
 withOriginRealPointToPoint :: WithOrigin (RealPoint blk) -> Point blk
-withOriginRealPointToPoint Origin = GenesisPoint
+withOriginRealPointToPoint Origin = GenesisPoint
 withOriginRealPointToPoint (NotOrigin p) = realPointToPoint p
 pointToWithOriginRealPoint :: Point blk -> WithOrigin (RealPoint blk)
-pointToWithOriginRealPoint GenesisPoint = Origin
+pointToWithOriginRealPoint GenesisPoint = Origin
 pointToWithOriginRealPoint (BlockPoint s h) = NotOrigin $ RealPoint s h
 castRealPoint ::
- forall blk blk'. Coercible (HeaderHash blk) (HeaderHash blk')
- => RealPoint blk
- -> RealPoint blk'
+ forall blk blk'.
+ Coercible (HeaderHash blk) (HeaderHash blk') =>
+ RealPoint blk ->
+ RealPoint blk'
 castRealPoint (RealPoint s h) = RealPoint s (coerce h)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsDiffusionPipelining.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsDiffusionPipelining.hs
index 2fe83be6dc..c5c4076211 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsDiffusionPipelining.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsDiffusionPipelining.hs
@@ -9,29 +9,33 @@
 {-# LANGUAGE UndecidableInstances #-}
 -- | See 'BlockSupportsDiffusionPipelining'.
-module Ouroboros.Consensus.Block.SupportsDiffusionPipelining (
- BlockSupportsDiffusionPipelining (..)
+module Ouroboros.Consensus.Block.SupportsDiffusionPipelining
+ ( BlockSupportsDiffusionPipelining (..)
 , updateTentativeHeaderState
+ -- * @DerivingVia@ helpers
+ -- ** 'DisableDiffusionPipelining'
 , DisableDiffusionPipelining (..)
+ -- ** 'SelectViewDiffusionPipelining'
 , SelectViewDiffusionPipelining (..)
 , SelectViewTentativeState (..)
+ -- * Data family instances
 , BlockConfig (..)
 , Header (..)
 ) where
-import Control.Monad (guard)
-import Data.Coerce
-import Data.Kind
-import Data.Proxy
-import GHC.Generics (Generic)
-import NoThunks.Class
-import Ouroboros.Consensus.Block.Abstract
-import Ouroboros.Consensus.Block.SupportsProtocol
-import Ouroboros.Consensus.Protocol.Abstract
+import Control.Monad (guard)
+import Data.Coerce
+import Data.Kind
+import Data.Proxy
+import GHC.Generics (Generic)
+import NoThunks.Class
+import Ouroboros.Consensus.Block.Abstract
+import Ouroboros.Consensus.Block.SupportsProtocol
+import Ouroboros.Consensus.Protocol.Abstract
 -- | Block functionality required to support __Block Diffusion Pipelining via
 -- Delayed Validation__ (DPvDV).
@@ -139,10 +143,12 @@ import Ouroboros.Consensus.Protocol.Abstract
 -- anymore. It /is/ acceptable if DPvDV is less effective in scenarios involving
 -- an adversary with a very large amount of resources (like stake).
 class
- ( Show (TentativeHeaderState blk)
+ ( Show (TentativeHeaderState blk)
 , NoThunks (TentativeHeaderState blk)
- , Show (TentativeHeaderView blk)
- ) => BlockSupportsDiffusionPipelining blk where
+ , Show (TentativeHeaderView blk)
+ ) =>
+ BlockSupportsDiffusionPipelining blk
+ where
 -- | State that is maintained to judge whether a header can be pipelined. It
 -- can be thought of as a summary of all past trap tentative headers.
 type TentativeHeaderState blk :: Type
@@ -156,9 +162,9 @@ class
 -- | See 'TentativeHeaderView'.
tentativeHeaderView ::
- BlockConfig blk
- -> Header blk
- -> TentativeHeaderView blk
+ BlockConfig blk ->
+ Header blk ->
+ TentativeHeaderView blk
 -- | Apply a 'TentativeHeaderView' to the 'TentativeHeaderState'. This returns
 -- @'Just' st@ to indicate that the underlying header can be pipelined, and
@@ -168,25 +174,26 @@ class
 --
 -- Also see 'updateTentativeHeaderState'.
 applyTentativeHeaderView ::
- Proxy blk
- -> TentativeHeaderView blk
- -- ^ Extracted using 'tentativeHeaderView' from a (valid) header whose
- -- block body is either not yet known to be valid, or definitely invalid.
- -> TentativeHeaderState blk
- -- ^ The most recent 'TentativeHeaderState' in this particular context.
- -> Maybe (TentativeHeaderState blk)
- -- ^ The new 'TentativeHeaderState' in case the header satisfies the
- -- pipelining criterion and is a trap header.
+ Proxy blk ->
+ -- | Extracted using 'tentativeHeaderView' from a (valid) header whose
+ -- block body is either not yet known to be valid, or definitely invalid.
+ TentativeHeaderView blk ->
+ -- | The most recent 'TentativeHeaderState' in this particular context.
+ TentativeHeaderState blk ->
+ -- | The new 'TentativeHeaderState' in case the header satisfies the
+ -- pipelining criterion and is a trap header.
+ Maybe (TentativeHeaderState blk)
 -- | Composition of 'tentativeHeaderView' and 'applyTentativeHeaderView'.
 updateTentativeHeaderState ::
- forall blk. BlockSupportsDiffusionPipelining blk
- => BlockConfig blk
- -> Header blk
- -> TentativeHeaderState blk
- -> Maybe (TentativeHeaderState blk)
+ forall blk.
+ BlockSupportsDiffusionPipelining blk =>
+ BlockConfig blk ->
+ Header blk ->
+ TentativeHeaderState blk ->
+ Maybe (TentativeHeaderState blk)
 updateTentativeHeaderState bcfg hdr =
- applyTentativeHeaderView (Proxy @blk) (tentativeHeaderView bcfg hdr)
+ applyTentativeHeaderView (Proxy @blk) (tentativeHeaderView bcfg hdr)
 {-------------------------------------------------------------------------------
   DerivingVia helpers
@@ -199,11 +206,11 @@ updateTentativeHeaderState bcfg hdr =
 -- > instance BlockSupportsDiffusionPipelining MyBlock
 newtype DisableDiffusionPipelining blk = DisableDiffusionPipelining blk
-newtype instance Header (DisableDiffusionPipelining blk) =
- DisableDiffusionPipeliningHeader (Header blk)
+newtype instance Header (DisableDiffusionPipelining blk)
+ = DisableDiffusionPipeliningHeader (Header blk)
-newtype instance BlockConfig (DisableDiffusionPipelining blk) =
- DisableDiffusionPipeliningBlockConfig (BlockConfig blk)
+newtype instance BlockConfig (DisableDiffusionPipelining blk)
+ = DisableDiffusionPipeliningBlockConfig (BlockConfig blk)
 instance BlockSupportsDiffusionPipelining (DisableDiffusionPipelining blk) where
 type TentativeHeaderState _ = ()
@@ -235,38 +242,42 @@ instance BlockSupportsDiffusionPipelining (DisableDiffusionPipelining blk) where
 -- header (and later also a valid block) every time they are elected.
newtype SelectViewDiffusionPipelining blk = SelectViewDiffusionPipelining blk -newtype instance Header (SelectViewDiffusionPipelining blk) = - SelectViewDiffusionPipeliningHeader (Header blk) +newtype instance Header (SelectViewDiffusionPipelining blk) + = SelectViewDiffusionPipeliningHeader (Header blk) -newtype instance BlockConfig (SelectViewDiffusionPipelining blk) = - SelectViewDiffusionPipeliningBlockConfig (BlockConfig blk) +newtype instance BlockConfig (SelectViewDiffusionPipelining blk) + = SelectViewDiffusionPipeliningBlockConfig (BlockConfig blk) -- | @'TentativeHeaderState' ('SelectViewDiffusionPipelining' blk) ~ 'SelectViewTentativeState' ('BlockProtocol' blk)@ -data SelectViewTentativeState proto = - LastInvalidSelectView !(SelectView proto) +data SelectViewTentativeState proto + = LastInvalidSelectView !(SelectView proto) | NoLastInvalidSelectView - deriving stock (Generic) + deriving stock Generic -deriving stock instance ConsensusProtocol proto => Show (SelectViewTentativeState proto) -deriving stock instance ConsensusProtocol proto => Eq (SelectViewTentativeState proto) +deriving stock instance ConsensusProtocol proto => Show (SelectViewTentativeState proto) +deriving stock instance ConsensusProtocol proto => Eq (SelectViewTentativeState proto) deriving anyclass instance ConsensusProtocol proto => NoThunks (SelectViewTentativeState proto) instance ( BlockSupportsProtocol blk , Show (SelectView (BlockProtocol blk)) - ) => BlockSupportsDiffusionPipelining (SelectViewDiffusionPipelining blk) where - type TentativeHeaderState (SelectViewDiffusionPipelining blk) = - SelectViewTentativeState (BlockProtocol blk) + ) => + BlockSupportsDiffusionPipelining (SelectViewDiffusionPipelining blk) + where + type + TentativeHeaderState (SelectViewDiffusionPipelining blk) = + SelectViewTentativeState (BlockProtocol blk) - type TentativeHeaderView (SelectViewDiffusionPipelining blk) = - SelectView (BlockProtocol blk) + type + TentativeHeaderView 
(SelectViewDiffusionPipelining blk) = + SelectView (BlockProtocol blk) initialTentativeHeaderState _ = NoLastInvalidSelectView tentativeHeaderView = coerce selectView applyTentativeHeaderView _ sv' st = do - case st of - NoLastInvalidSelectView -> pure () - LastInvalidSelectView sv -> guard $ sv < sv' - pure $ LastInvalidSelectView sv' + case st of + NoLastInvalidSelectView -> pure () + LastInvalidSelectView sv -> guard $ sv < sv' + pure $ LastInvalidSelectView sv' diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsMetrics.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsMetrics.hs index c9ece3eb39..f8c0a4055b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsMetrics.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsMetrics.hs @@ -1,22 +1,22 @@ -- | See 'BlockSupportsMetrics'. -module Ouroboros.Consensus.Block.SupportsMetrics ( - BlockSupportsMetrics (..) +module Ouroboros.Consensus.Block.SupportsMetrics + ( BlockSupportsMetrics (..) , WhetherSelfIssued (..) 
   , isSelfIssuedConstUnknown
   ) where
 
-import           Ouroboros.Consensus.Block.Abstract
+import Ouroboros.Consensus.Block.Abstract
 
 {-------------------------------------------------------------------------------
   Supported blocks
-------------------------------------------------------------------------------}
 
 -- | Whether a block was issued by a stakeholder currently forging on this node
-data WhetherSelfIssued =
-    IsSelfIssued
+data WhetherSelfIssued
+  = IsSelfIssued
   | IsNotSelfIssued
-    -- | We are unable to determine
-  | UnknownSelfIssued
+  | -- | We are unable to determine
+    UnknownSelfIssued
   deriving (Show, Eq)
 
 -- | Evidence that a block supports the metrics needed for business requirements
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsProtocol.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsProtocol.hs
index 40ddcdd738..500befe99b 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsProtocol.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsProtocol.hs
@@ -5,42 +5,48 @@
 module Ouroboros.Consensus.Block.SupportsProtocol (BlockSupportsProtocol (..)) where
 
-import           NoThunks.Class (NoThunks)
-import           Ouroboros.Consensus.Block.Abstract
-import           Ouroboros.Consensus.Protocol.Abstract
+import NoThunks.Class (NoThunks)
+import Ouroboros.Consensus.Block.Abstract
+import Ouroboros.Consensus.Protocol.Abstract
 
 {-------------------------------------------------------------------------------
   Supported blocks
-------------------------------------------------------------------------------}
 
 -- | Evidence that a block supports its protocol
-class ( GetHeader blk
-      , GetPrevHash blk
-      , ConsensusProtocol (BlockProtocol blk)
-      , NoThunks (Header blk)
-      , NoThunks (BlockConfig blk)
-      , NoThunks (CodecConfig blk)
-      , NoThunks (StorageConfig blk)
-      ) => BlockSupportsProtocol blk where
-  validateView :: BlockConfig blk
-               -> Header blk
-               -> 
ValidateView (BlockProtocol blk) - - selectView :: BlockConfig blk - -> Header blk -> SelectView (BlockProtocol blk) - +class + ( GetHeader blk + , GetPrevHash blk + , ConsensusProtocol (BlockProtocol blk) + , NoThunks (Header blk) + , NoThunks (BlockConfig blk) + , NoThunks (CodecConfig blk) + , NoThunks (StorageConfig blk) + ) => + BlockSupportsProtocol blk + where + validateView :: + BlockConfig blk -> + Header blk -> + ValidateView (BlockProtocol blk) + + selectView :: + BlockConfig blk -> + Header blk -> + SelectView (BlockProtocol blk) -- Default chain selection just looks at longest chains - default selectView :: SelectView (BlockProtocol blk) ~ BlockNo - => BlockConfig blk - -> Header blk -> SelectView (BlockProtocol blk) + default selectView :: + SelectView (BlockProtocol blk) ~ BlockNo => + BlockConfig blk -> + Header blk -> + SelectView (BlockProtocol blk) selectView _ = blockNo projectChainOrderConfig :: - BlockConfig blk - -> ChainOrderConfig (SelectView (BlockProtocol blk)) - + BlockConfig blk -> + ChainOrderConfig (SelectView (BlockProtocol blk)) default projectChainOrderConfig :: - ChainOrderConfig (SelectView (BlockProtocol blk)) ~ () - => BlockConfig blk - -> ChainOrderConfig (SelectView (BlockProtocol blk)) + ChainOrderConfig (SelectView (BlockProtocol blk)) ~ () => + BlockConfig blk -> + ChainOrderConfig (SelectView (BlockProtocol blk)) projectChainOrderConfig _ = () diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsSanityCheck.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsSanityCheck.hs index 90eb7fe748..454b9b42f8 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsSanityCheck.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Block/SupportsSanityCheck.hs @@ -1,4 +1,5 @@ {-# LANGUAGE LambdaCase #-} + -- | This module adds support for sanity checking consensus configuration -- on node startup. 
These checks should primarily look for unusual -- configuration choices that may point to an accidentally-misconfigured node @@ -9,61 +10,61 @@ -- situations when intentionally configuring a node "weirdly" can be useful, -- and so the user should be able to opt out of the sanity checks at their -- own peril. -module Ouroboros.Consensus.Block.SupportsSanityCheck ( - BlockSupportsSanityCheck (..) +module Ouroboros.Consensus.Block.SupportsSanityCheck + ( BlockSupportsSanityCheck (..) , SanityCheckIssue (..) , checkSecurityParamConsistency , sanityCheckConfig ) where -import Control.Exception -import Data.List.NonEmpty (NonEmpty (..)) -import qualified Data.List.NonEmpty as NonEmpty -import Data.Maybe (catMaybes) -import Ouroboros.Consensus.Config (TopLevelConfig) -import Ouroboros.Consensus.Config.SecurityParam +import Control.Exception +import Data.List.NonEmpty (NonEmpty (..)) +import Data.List.NonEmpty qualified as NonEmpty +import Data.Maybe (catMaybes) +import Ouroboros.Consensus.Config (TopLevelConfig) +import Ouroboros.Consensus.Config.SecurityParam -- | An issue found in the 'TopLevelConfig' for a block. See 'displayException' -- for human-readable descriptions of each of these cases, especially when -- presenting these to users. data SanityCheckIssue - -- | Configuration contains multiple security parameters. This may cause - -- strange behaviour around era boundaries. - = InconsistentSecurityParam (NonEmpty SecurityParam) + = -- | Configuration contains multiple security parameters. This may cause + -- strange behaviour around era boundaries. 
+ InconsistentSecurityParam (NonEmpty SecurityParam) deriving (Show, Eq) instance Exception SanityCheckIssue where displayException = \case - InconsistentSecurityParam ks -> mconcat - [ "InconsistentSecurityParam: " - , "SecurityParams (K) were found to be inconsistent between constituent " - , "eras of a HardForkBlock: " - , show (NonEmpty.toList ks) - ] + InconsistentSecurityParam ks -> + mconcat + [ "InconsistentSecurityParam: " + , "SecurityParams (K) were found to be inconsistent between constituent " + , "eras of a HardForkBlock: " + , show (NonEmpty.toList ks) + ] -- | 'BlockSupportsSanityCheck' provides evidence that a block can be sanity -- checked for common issues on node startup. 'sanityCheckConfig', which runs -- performs each check and returns a list with each 'SanityCheckIssue' found, -- should be preferred over using these methods directly. class BlockSupportsSanityCheck blk where - -- | Generate a 'NonEmpty' list of security parameters for a given block type. -- For individual eras' block types, this is simply a singleton list -- containing the chosen 'SecurityParam', but combined block types (i.e. -- the 'HardForkCombinator') will return all of their constituent eras' -- configurations' security parameters. - configAllSecurityParams - :: TopLevelConfig blk - -> NonEmpty SecurityParam + configAllSecurityParams :: + TopLevelConfig blk -> + NonEmpty SecurityParam -- | Check a 'TopLevelConfig' for any inconsistency in constituent choices for -- 'SecurityParam' (colloquially @k@). For a block type to be considered -- "sane" in this regard, its configuration's security parameter as well as -- all of its childrens' configurations (if applicable) should be the same. 
-checkSecurityParamConsistency
-  :: BlockSupportsSanityCheck blk
-  => TopLevelConfig blk
-  -> Maybe SanityCheckIssue
+checkSecurityParamConsistency ::
+  BlockSupportsSanityCheck blk =>
+  TopLevelConfig blk ->
+  Maybe SanityCheckIssue
 checkSecurityParamConsistency cfg = do
   let allParams = configAllSecurityParams cfg
   if allSame allParams
@@ -74,9 +75,9 @@ allSame :: Eq a => NonEmpty a -> Bool
 allSame (x :| xs) = all (x ==) xs
 
 -- | Run all supported sanity checks on a given 'TopLevelConfig'.
-sanityCheckConfig
-  :: BlockSupportsSanityCheck blk
-  => TopLevelConfig blk
-  -> [SanityCheckIssue]
+sanityCheckConfig ::
+  BlockSupportsSanityCheck blk =>
+  TopLevelConfig blk ->
+  [SanityCheckIssue]
 sanityCheckConfig cfg = catMaybes [checkSecurityParamConsistency cfg]
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime.hs
index 0cdbfeba6a..d99a9d9a13 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime.hs
@@ -1,8 +1,8 @@
 module Ouroboros.Consensus.BlockchainTime (module X) where
 
-import           Ouroboros.Consensus.BlockchainTime.API as X
-import           Ouroboros.Consensus.BlockchainTime.WallClock.Default as X
-import           Ouroboros.Consensus.BlockchainTime.WallClock.HardFork as X
-import           Ouroboros.Consensus.BlockchainTime.WallClock.Simple as X
-import           Ouroboros.Consensus.BlockchainTime.WallClock.Types as X
-import           Ouroboros.Consensus.BlockchainTime.WallClock.Util as X
+import Ouroboros.Consensus.BlockchainTime.API as X
+import Ouroboros.Consensus.BlockchainTime.WallClock.Default as X
+import Ouroboros.Consensus.BlockchainTime.WallClock.HardFork as X
+import Ouroboros.Consensus.BlockchainTime.WallClock.Simple as X
+import Ouroboros.Consensus.BlockchainTime.WallClock.Types as X
+import Ouroboros.Consensus.BlockchainTime.WallClock.Util as X
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/API.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/API.hs
index bd667b4ddd..1d5f3b8f7e 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/API.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/API.hs
@@ -4,17 +4,17 @@
 {-# LANGUAGE DerivingVia #-}
 {-# LANGUAGE ScopedTypeVariables #-}
 
-module Ouroboros.Consensus.BlockchainTime.API (
-    BlockchainTime (..)
+module Ouroboros.Consensus.BlockchainTime.API
+  ( BlockchainTime (..)
   , CurrentSlot (..)
   , knownSlotWatcher
   ) where
 
-import           GHC.Generics (Generic)
-import           NoThunks.Class (OnlyCheckWhnfNamed (..))
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Util.IOLike
-import           Ouroboros.Consensus.Util.STM (Watcher (..))
+import GHC.Generics (Generic)
+import NoThunks.Class (OnlyCheckWhnfNamed (..))
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Consensus.Util.STM (Watcher (..))
 
 {-------------------------------------------------------------------------------
   API
@@ -25,25 +25,25 @@ import Ouroboros.Consensus.Util.STM (Watcher (..))
 -- | Blockchain time
 --
 -- When we run the blockchain, there is a single, global time. We abstract over
 -- this here to allow to query this time (in terms of the current slot), and
 -- execute an action each time we advance a slot.
-data BlockchainTime m = BlockchainTime { - -- | Get current slot - getCurrentSlot :: STM m CurrentSlot - } - deriving NoThunks - via OnlyCheckWhnfNamed "BlockchainTime" (BlockchainTime m) +data BlockchainTime m = BlockchainTime + { getCurrentSlot :: STM m CurrentSlot + -- ^ Get current slot + } + deriving + NoThunks + via OnlyCheckWhnfNamed "BlockchainTime" (BlockchainTime m) -data CurrentSlot = - -- | The current slot is known +data CurrentSlot + = -- | The current slot is known CurrentSlot !SlotNo - - -- | The current slot is not yet known + | -- | The current slot is not yet known -- -- This only happens when the tip of the ledger is so far behind that we -- lack the information necessary to translate the current 'UTCTime' into a -- 'SlotNo'. This should only be the case during syncing. - | CurrentSlotUnknown - deriving stock (Generic, Show) - deriving anyclass (NoThunks) + CurrentSlotUnknown + deriving stock (Generic, Show) + deriving anyclass NoThunks {------------------------------------------------------------------------------- Derived functionality @@ -54,21 +54,24 @@ data CurrentSlot = -- The action will not be called until the current slot becomes known -- (if the tip of our ledger is too far away from the current wallclock time, -- we may not know what the current 'SlotNo' is). -knownSlotWatcher :: forall m. IOLike m - => BlockchainTime m - -> (SlotNo -> m ()) -- ^ Action to execute - -> Watcher m SlotNo SlotNo +knownSlotWatcher :: + forall m. 
+ IOLike m => + BlockchainTime m -> + -- | Action to execute + (SlotNo -> m ()) -> + Watcher m SlotNo SlotNo knownSlotWatcher btime notify = - Watcher { - wFingerprint = id - , wInitial = Nothing - , wNotify = notify - , wReader = getCurrentSlot' - } - where - getCurrentSlot' :: STM m SlotNo - getCurrentSlot' = do - mSlot <- getCurrentSlot btime - case mSlot of - CurrentSlotUnknown -> retry - CurrentSlot s -> return s + Watcher + { wFingerprint = id + , wInitial = Nothing + , wNotify = notify + , wReader = getCurrentSlot' + } + where + getCurrentSlot' :: STM m SlotNo + getCurrentSlot' = do + mSlot <- getCurrentSlot btime + case mSlot of + CurrentSlotUnknown -> retry + CurrentSlot s -> return s diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Default.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Default.hs index bb3028ab71..8863d07f5a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Default.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Default.hs @@ -1,31 +1,34 @@ module Ouroboros.Consensus.BlockchainTime.WallClock.Default (defaultSystemTime) where -import Control.Monad -import Control.Monad.Class.MonadTime.SI (MonadTime (..)) -import Control.Tracer -import Data.Time (UTCTime, diffUTCTime) -import Ouroboros.Consensus.BlockchainTime.WallClock.Types -import Ouroboros.Consensus.BlockchainTime.WallClock.Util -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Time +import Control.Monad +import Control.Monad.Class.MonadTime.SI (MonadTime (..)) +import Control.Tracer +import Data.Time (UTCTime, diffUTCTime) +import Ouroboros.Consensus.BlockchainTime.WallClock.Types +import Ouroboros.Consensus.BlockchainTime.WallClock.Util +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Time -defaultSystemTime :: (MonadTime m, MonadDelay m) - 
=> SystemStart - -> Tracer m (TraceBlockchainTimeEvent UTCTime) - -> SystemTime m -defaultSystemTime start tracer = SystemTime { - systemTimeCurrent = toRelativeTime start <$> getCurrentTime - , systemTimeWait = waitForSystemStart start tracer +defaultSystemTime :: + (MonadTime m, MonadDelay m) => + SystemStart -> + Tracer m (TraceBlockchainTimeEvent UTCTime) -> + SystemTime m +defaultSystemTime start tracer = + SystemTime + { systemTimeCurrent = toRelativeTime start <$> getCurrentTime + , systemTimeWait = waitForSystemStart start tracer } -- | Wait until system start if necessary -waitForSystemStart :: (MonadTime m, MonadDelay m) - => SystemStart - -> Tracer m (TraceBlockchainTimeEvent UTCTime) - -> m () +waitForSystemStart :: + (MonadTime m, MonadDelay m) => + SystemStart -> + Tracer m (TraceBlockchainTimeEvent UTCTime) -> + m () waitForSystemStart start tracer = do - now <- getCurrentTime - when (getSystemStart start > now) $ do - let delay = getSystemStart start `diffUTCTime` now - traceWith tracer $ TraceStartTimeInTheFuture start delay - threadDelay (nominalDelay delay) + now <- getCurrentTime + when (getSystemStart start > now) $ do + let delay = getSystemStart start `diffUTCTime` now + traceWith tracer $ TraceStartTimeInTheFuture start delay + threadDelay (nominalDelay delay) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/HardFork.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/HardFork.hs index 27aa27a745..767c00d338 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/HardFork.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/HardFork.hs @@ -1,26 +1,26 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} -module Ouroboros.Consensus.BlockchainTime.WallClock.HardFork ( - BackoffDelay (..) 
+module Ouroboros.Consensus.BlockchainTime.WallClock.HardFork + ( BackoffDelay (..) , HardForkBlockchainTimeArgs (..) , hardForkBlockchainTime ) where -import Control.Monad -import Control.ResourceRegistry -import Control.Tracer -import Data.Time (NominalDiffTime) -import Data.Void -import GHC.Stack -import Ouroboros.Consensus.BlockchainTime.API -import Ouroboros.Consensus.BlockchainTime.WallClock.Types -import Ouroboros.Consensus.BlockchainTime.WallClock.Util -import Ouroboros.Consensus.HardFork.Abstract -import qualified Ouroboros.Consensus.HardFork.History as HF -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Time +import Control.Monad +import Control.ResourceRegistry +import Control.Tracer +import Data.Time (NominalDiffTime) +import Data.Void +import GHC.Stack +import Ouroboros.Consensus.BlockchainTime.API +import Ouroboros.Consensus.BlockchainTime.WallClock.Types +import Ouroboros.Consensus.BlockchainTime.WallClock.Util +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HardFork.History qualified as HF +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Time -- | A backoff delay -- @@ -44,126 +44,133 @@ import Ouroboros.Consensus.Util.Time newtype BackoffDelay = BackoffDelay NominalDiffTime data HardForkBlockchainTimeArgs m blk = HardForkBlockchainTimeArgs - { hfbtBackoffDelay :: m BackoffDelay - -- ^ See 'BackoffDelay' + { hfbtBackoffDelay :: m BackoffDelay + -- ^ See 'BackoffDelay' , hfbtGetLedgerState :: STM m (LedgerState blk EmptyMK) - , hfbtLedgerConfig :: LedgerConfig blk - , hfbtRegistry :: ResourceRegistry m - , hfbtSystemTime :: SystemTime m - , hfbtTracer :: Tracer m (TraceBlockchainTimeEvent RelativeTime) + , hfbtLedgerConfig :: LedgerConfig blk + , hfbtRegistry :: ResourceRegistry m + , hfbtSystemTime :: SystemTime m + , hfbtTracer :: Tracer m (TraceBlockchainTimeEvent RelativeTime) , 
hfbtMaxClockRewind :: NominalDiffTime - -- ^ Maximum time the clock can be rewound without throwing a fatal - -- 'SystemClockMovedBack' exception. - -- - -- When the slot length is short, e.g., Praos' 1s compared to PBFT's 20s, - -- the chances of an NTP sync causing the clock to go back to the previous - -- slot increase. - -- - -- We allow the system clock to rewind up to 'hfbtMaxClockRewind', tracing a - -- 'TraceSystemClockMovedBack' message in such cases. Note that the current - -- slot *never decreases*, we just wait a bit longer in the same slot. + -- ^ Maximum time the clock can be rewound without throwing a fatal + -- 'SystemClockMovedBack' exception. + -- + -- When the slot length is short, e.g., Praos' 1s compared to PBFT's 20s, + -- the chances of an NTP sync causing the clock to go back to the previous + -- slot increase. + -- + -- We allow the system clock to rewind up to 'hfbtMaxClockRewind', tracing a + -- 'TraceSystemClockMovedBack' message in such cases. Note that the current + -- slot *never decreases*, we just wait a bit longer in the same slot. } -- | 'BlockchainTime' instance with support for the hard fork history -hardForkBlockchainTime :: forall m blk. - ( IOLike m - , HasHardForkHistory blk - , HasCallStack - ) - => HardForkBlockchainTimeArgs m blk - -> m (BlockchainTime m) +hardForkBlockchainTime :: + forall m blk. 
+ ( IOLike m + , HasHardForkHistory blk + , HasCallStack + ) => + HardForkBlockchainTimeArgs m blk -> + m (BlockchainTime m) hardForkBlockchainTime args = do - run <- HF.runWithCachedSummary (summarize <$> getLedgerState) - systemTimeWait + run <- HF.runWithCachedSummary (summarize <$> getLedgerState) + systemTimeWait - (firstSlot, now, firstDelay) <- getCurrentSlot' tracer time run backoffDelay - slotVar <- newTVarIO firstSlot - void $ forkLinkedThread registry "hardForkBlockchainTime" $ - loop run slotVar firstSlot now firstDelay + (firstSlot, now, firstDelay) <- getCurrentSlot' tracer time run backoffDelay + slotVar <- newTVarIO firstSlot + void $ + forkLinkedThread registry "hardForkBlockchainTime" $ + loop run slotVar firstSlot now firstDelay - return $ BlockchainTime { - getCurrentSlot = readTVar slotVar + return $ + BlockchainTime + { getCurrentSlot = readTVar slotVar } - where - HardForkBlockchainTimeArgs - { hfbtBackoffDelay = backoffDelay - , hfbtGetLedgerState = getLedgerState - , hfbtLedgerConfig = cfg - , hfbtRegistry = registry - , hfbtSystemTime = time@SystemTime{..} - , hfbtTracer = tracer - , hfbtMaxClockRewind = maxClockRewind - } = args + where + HardForkBlockchainTimeArgs + { hfbtBackoffDelay = backoffDelay + , hfbtGetLedgerState = getLedgerState + , hfbtLedgerConfig = cfg + , hfbtRegistry = registry + , hfbtSystemTime = time@SystemTime{..} + , hfbtTracer = tracer + , hfbtMaxClockRewind = maxClockRewind + } = args - summarize :: LedgerState blk EmptyMK -> HF.Summary (HardForkIndices blk) - summarize st = hardForkSummary cfg st + summarize :: LedgerState blk EmptyMK -> HF.Summary (HardForkIndices blk) + summarize st = hardForkSummary cfg st - loop :: HF.RunWithCachedSummary xs m - -> StrictTVar m CurrentSlot - -> CurrentSlot -- Previous slot - -> RelativeTime -- Current time - -> NominalDiffTime -- Time to wait until next slot - -> m Void - loop run slotVar = go - where - go :: CurrentSlot -> RelativeTime -> NominalDiffTime -> m Void - go 
prevSlot prevTime delay = do - threadDelay (nominalDelay delay) - (newSlot, newTime, newDelay) <- getCurrentSlot' tracer time run backoffDelay - newSlot' <- checkValidClockChange (prevSlot, prevTime) (newSlot, newTime) - atomically $ writeTVar slotVar newSlot' - go newSlot' newTime newDelay + loop :: + HF.RunWithCachedSummary xs m -> + StrictTVar m CurrentSlot -> + CurrentSlot -> -- Previous slot + RelativeTime -> -- Current time + NominalDiffTime -> -- Time to wait until next slot + m Void + loop run slotVar = go + where + go :: CurrentSlot -> RelativeTime -> NominalDiffTime -> m Void + go prevSlot prevTime delay = do + threadDelay (nominalDelay delay) + (newSlot, newTime, newDelay) <- getCurrentSlot' tracer time run backoffDelay + newSlot' <- checkValidClockChange (prevSlot, prevTime) (newSlot, newTime) + atomically $ writeTVar slotVar newSlot' + go newSlot' newTime newDelay - checkValidClockChange :: - (CurrentSlot, RelativeTime) - -> (CurrentSlot, RelativeTime) - -> m CurrentSlot - checkValidClockChange (prevSlot, prevTime) (newSlot, newTime) = - case (prevSlot, newSlot) of - (CurrentSlotUnknown, CurrentSlot _) - -- Unknown-to-known typically happens when syncing catches up far - -- enough that we can now know what the current slot is. - -> return newSlot - (CurrentSlot _, CurrentSlotUnknown) - -- Known-to-unknown can happen when the ledger is no longer being - -- updated and time marches on past the end of the safe zone. - -> return newSlot - (CurrentSlotUnknown, CurrentSlotUnknown) - -> return newSlot - (CurrentSlot m, CurrentSlot n) - -- Normally we expect @n == m + 1@, but if the system is under heavy - -- load, we might miss a slot. - | m < n - -> return newSlot - -- We could have @n == m@ or @n < m@ only if the user's system clock - -- was adjusted (say by an NTP process). 
We only allow a limited - -- rewinding of the clock, but never rewind the slot number - | m >= n - , prevTime `diffRelTime` newTime <= maxClockRewind - -> do traceWith tracer $ TraceSystemClockMovedBack prevTime newTime - return prevSlot - | otherwise - -> throwIO $ SystemClockMovedBack m n + checkValidClockChange :: + (CurrentSlot, RelativeTime) -> + (CurrentSlot, RelativeTime) -> + m CurrentSlot + checkValidClockChange (prevSlot, prevTime) (newSlot, newTime) = + case (prevSlot, newSlot) of + (CurrentSlotUnknown, CurrentSlot _) -> + -- Unknown-to-known typically happens when syncing catches up far + -- enough that we can now know what the current slot is. + return newSlot + (CurrentSlot _, CurrentSlotUnknown) -> + -- Known-to-unknown can happen when the ledger is no longer being + -- updated and time marches on past the end of the safe zone. + return newSlot + (CurrentSlotUnknown, CurrentSlotUnknown) -> + return newSlot + (CurrentSlot m, CurrentSlot n) + -- Normally we expect @n == m + 1@, but if the system is under heavy + -- load, we might miss a slot. + | m < n -> + return newSlot + -- We could have @n == m@ or @n < m@ only if the user's system clock + -- was adjusted (say by an NTP process). We only allow a limited + -- rewinding of the clock, but never rewind the slot number + | m >= n + , prevTime `diffRelTime` newTime <= maxClockRewind -> + do + traceWith tracer $ TraceSystemClockMovedBack prevTime newTime + return prevSlot + | otherwise -> + throwIO $ SystemClockMovedBack m n {------------------------------------------------------------------------------- Auxiliary -------------------------------------------------------------------------------} -- | Get current slot, current time, and the delay until the next slot. -getCurrentSlot' :: forall m xs. 
IOLike m - => Tracer m (TraceBlockchainTimeEvent RelativeTime) - -> SystemTime m - -> HF.RunWithCachedSummary xs m - -> m BackoffDelay - -> m (CurrentSlot, RelativeTime, NominalDiffTime) +getCurrentSlot' :: + forall m xs. + IOLike m => + Tracer m (TraceBlockchainTimeEvent RelativeTime) -> + SystemTime m -> + HF.RunWithCachedSummary xs m -> + m BackoffDelay -> + m (CurrentSlot, RelativeTime, NominalDiffTime) getCurrentSlot' tracer SystemTime{..} run getBackoffDelay = do - now <- systemTimeCurrent - mSlot <- atomically $ HF.cachedRunQuery run $ HF.wallclockToSlot now - case mSlot of - Left ex -> do - -- give up for now and backoff; see 'BackoffDelay' - traceWith tracer $ TraceCurrentSlotUnknown now ex - BackoffDelay delay <- getBackoffDelay - return (CurrentSlotUnknown, now, delay) - Right (slot, _inSlot, timeLeft) -> do - return (CurrentSlot slot, now, timeLeft) + now <- systemTimeCurrent + mSlot <- atomically $ HF.cachedRunQuery run $ HF.wallclockToSlot now + case mSlot of + Left ex -> do + -- give up for now and backoff; see 'BackoffDelay' + traceWith tracer $ TraceCurrentSlotUnknown now ex + BackoffDelay delay <- getBackoffDelay + return (CurrentSlotUnknown, now, delay) + Right (slot, _inSlot, timeLeft) -> do + return (CurrentSlot slot, now, timeLeft) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Simple.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Simple.hs index 27f16e68f9..46208ad4a4 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Simple.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Simple.hs @@ -2,62 +2,69 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} -module Ouroboros.Consensus.BlockchainTime.WallClock.Simple ( - simpleBlockchainTime +module Ouroboros.Consensus.BlockchainTime.WallClock.Simple + ( simpleBlockchainTime + -- * 
Low-level API (exported primarily for testing) , getWallClockSlot , waitUntilNextSlot ) where -import Control.Monad -import Control.ResourceRegistry -import Data.Bifunctor -import Data.Fixed (divMod') -import Data.Time (NominalDiffTime) -import Data.Void -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime.API -import Ouroboros.Consensus.BlockchainTime.WallClock.Types -import Ouroboros.Consensus.BlockchainTime.WallClock.Util -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Time +import Control.Monad +import Control.ResourceRegistry +import Data.Bifunctor +import Data.Fixed (divMod') +import Data.Time (NominalDiffTime) +import Data.Void +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime.API +import Ouroboros.Consensus.BlockchainTime.WallClock.Types +import Ouroboros.Consensus.BlockchainTime.WallClock.Util +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Time -- | Real blockchain time -- -- WARNING: if the start time is in the future, 'simpleBlockchainTime' will -- block until the start time has come. -simpleBlockchainTime :: forall m. IOLike m - => ResourceRegistry m - -> SystemTime m - -> SlotLength - -> NominalDiffTime -- ^ Max clock rewind - -> m (BlockchainTime m) +simpleBlockchainTime :: + forall m. 
+ IOLike m => + ResourceRegistry m -> + SystemTime m -> + SlotLength -> + -- | Max clock rewind + NominalDiffTime -> + m (BlockchainTime m) simpleBlockchainTime registry time slotLen maxClockRewind = do - systemTimeWait time + systemTimeWait time - -- Fork thread that continuously updates the current slot - firstSlot <- fst <$> getWallClockSlot time slotLen - slotVar <- newTVarIO firstSlot - void $ forkLinkedThread registry "simpleBlockchainTime" $ - loop slotVar firstSlot + -- Fork thread that continuously updates the current slot + firstSlot <- fst <$> getWallClockSlot time slotLen + slotVar <- newTVarIO firstSlot + void $ + forkLinkedThread registry "simpleBlockchainTime" $ + loop slotVar firstSlot - -- The API is now a simple STM one - return BlockchainTime { - getCurrentSlot = CurrentSlot <$> readTVar slotVar + -- The API is now a simple STM one + return + BlockchainTime + { getCurrentSlot = CurrentSlot <$> readTVar slotVar } - where - -- In each iteration of the loop, we recompute how long to wait until - -- the next slot. This minimizes clock skew. - loop :: StrictTVar m SlotNo - -> SlotNo - -> m Void - loop slotVar = go - where - go :: SlotNo -> m Void - go current = do - next <- waitUntilNextSlot time slotLen maxClockRewind current - atomically $ writeTVar slotVar next - go next + where + -- In each iteration of the loop, we recompute how long to wait until + -- the next slot. This minimizes clock skew. 
+ loop :: + StrictTVar m SlotNo -> + SlotNo -> + m Void + loop slotVar = go + where + go :: SlotNo -> m Void + go current = do + next <- waitUntilNextSlot time slotLen maxClockRewind current + atomically $ writeTVar slotVar next + go next {------------------------------------------------------------------------------- Pure calculations @@ -65,25 +72,26 @@ simpleBlockchainTime registry time slotLen maxClockRewind = do slotFromUTCTime :: SlotLength -> RelativeTime -> (SlotNo, NominalDiffTime) slotFromUTCTime slotLen (RelativeTime now) = - first SlotNo $ now `divMod'` getSlotLength slotLen + first SlotNo $ now `divMod'` getSlotLength slotLen delayUntilNextSlot :: SlotLength -> RelativeTime -> NominalDiffTime delayUntilNextSlot slotLen now = - getSlotLength slotLen - timeSpent - where - (_curSlot, timeSpent) = slotFromUTCTime slotLen now + getSlotLength slotLen - timeSpent + where + (_curSlot, timeSpent) = slotFromUTCTime slotLen now {------------------------------------------------------------------------------- Stateful wrappers around the pure calculations -------------------------------------------------------------------------------} -- | Get current slot and time spent in that slot -getWallClockSlot :: IOLike m - => SystemTime m - -> SlotLength - -> m (SlotNo, NominalDiffTime) +getWallClockSlot :: + IOLike m => + SystemTime m -> + SlotLength -> + m (SlotNo, NominalDiffTime) getWallClockSlot SystemTime{..} slotLen = - slotFromUTCTime slotLen <$> systemTimeCurrent + slotFromUTCTime slotLen <$> systemTimeCurrent -- | Wait until the next slot -- @@ -96,43 +104,47 @@ getWallClockSlot SystemTime{..} slotLen = -- -- If the clock changed back less than the max clock rewind parameter, we stay -- in the same slot for longer and don't throw an exception. 
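The pure calculations above (slotFromUTCTime, delayUntilNextSlot) are plain division with remainder. As an illustrative, standalone sketch — `slotOf` and `delayToNext` are hypothetical names, and `Rational` stands in for `NominalDiffTime` — the arithmetic is:

```haskell
import Data.Fixed (divMod')

-- Illustrative stand-in for slotFromUTCTime: dividing the time elapsed
-- since system start by the slot length yields the current slot number
-- and the time already spent in that slot.
slotOf :: Rational -> Rational -> (Integer, Rational)
slotOf slotLen now = now `divMod'` slotLen

-- Illustrative stand-in for delayUntilNextSlot: what remains of the
-- current slot is the slot length minus the time already spent in it.
delayToNext :: Rational -> Rational -> Rational
delayToNext slotLen now = slotLen - snd (slotOf slotLen now)
```

For example, with a 2-second slot length and 7 seconds elapsed, we are 1 second into slot 3, and the next slot starts in 1 second.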
-waitUntilNextSlot :: IOLike m - => SystemTime m - -> SlotLength - -> NominalDiffTime -- ^ Max clock rewind - -> SlotNo -- ^ Current slot number - -> m SlotNo +waitUntilNextSlot :: + IOLike m => + SystemTime m -> + SlotLength -> + -- | Max clock rewind + NominalDiffTime -> + -- | Current slot number + SlotNo -> + m SlotNo waitUntilNextSlot time@SystemTime{..} slotLen maxClockRewind oldCurrent = do - now <- systemTimeCurrent - - let delay = delayUntilNextSlot slotLen now - threadDelay (nominalDelay delay) - - -- At this point we expect to be in 'nextSlot', but the actual now-current - -- slot might be different: - -- - -- o If the system is under heavy load, we might have missed some slots. If - -- this is the case, that's okay, and we just report the actual - -- now-current slot as the next slot. - -- o If the system clock is adjusted back a tiny bit (maybe due to an NTP - -- client running on the system), it's possible that we are still in the - -- /old/ current slot. If this happens, we just wait again; nothing bad - -- has happened, we just stay in one slot for longer. - -- o If the system clock is adjusted back more than that, we might be in a - -- slot number /before/ the old current slot. In that case, if the - -- adjustment is <= the max rewind parameter, we allow it, but stay in the - -- same slot. Just like the previous case, we will stay in one slot for - -- longer. - -- o If the system clock is adjusted back more than the max rewind - -- parameter, we throw an exception (see discussion above). 
- - afterDelay <- systemTimeCurrent - let (newCurrent, _timeInNewCurrent) = slotFromUTCTime slotLen afterDelay - - if | newCurrent > oldCurrent -> - return newCurrent - | newCurrent <= oldCurrent, - now `diffRelTime` afterDelay <= maxClockRewind -> - waitUntilNextSlot time slotLen maxClockRewind oldCurrent - | otherwise -> - throwIO $ SystemClockMovedBack oldCurrent newCurrent + now <- systemTimeCurrent + + let delay = delayUntilNextSlot slotLen now + threadDelay (nominalDelay delay) + + -- At this point we expect to be in 'nextSlot', but the actual now-current + -- slot might be different: + -- + -- o If the system is under heavy load, we might have missed some slots. If + -- this is the case, that's okay, and we just report the actual + -- now-current slot as the next slot. + -- o If the system clock is adjusted back a tiny bit (maybe due to an NTP + -- client running on the system), it's possible that we are still in the + -- /old/ current slot. If this happens, we just wait again; nothing bad + -- has happened, we just stay in one slot for longer. + -- o If the system clock is adjusted back more than that, we might be in a + -- slot number /before/ the old current slot. In that case, if the + -- adjustment is <= the max rewind parameter, we allow it, but stay in the + -- same slot. Just like the previous case, we will stay in one slot for + -- longer. + -- o If the system clock is adjusted back more than the max rewind + -- parameter, we throw an exception (see discussion above). 
+ + afterDelay <- systemTimeCurrent + let (newCurrent, _timeInNewCurrent) = slotFromUTCTime slotLen afterDelay + + if + | newCurrent > oldCurrent -> + return newCurrent + | newCurrent <= oldCurrent + , now `diffRelTime` afterDelay <= maxClockRewind -> + waitUntilNextSlot time slotLen maxClockRewind oldCurrent + | otherwise -> + throwIO $ SystemClockMovedBack oldCurrent newCurrent diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Types.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Types.hs index 71a0fb7d8d..28105dd672 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Types.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Types.hs @@ -1,32 +1,37 @@ {-# LANGUAGE DataKinds #-} {-# LANGUAGE DerivingVia #-} -module Ouroboros.Consensus.BlockchainTime.WallClock.Types ( - -- * System time +module Ouroboros.Consensus.BlockchainTime.WallClock.Types + ( -- * System time SystemStart (..) + -- * Relative time , RelativeTime (..) , addRelTime , diffRelTime , fromRelativeTime , toRelativeTime + -- * Get current time (as 'RelativeTime') , SystemTime (..) + -- * Slot length , getSlotLength , mkSlotLength + -- ** Conversions , slotLengthFromMillisec , slotLengthFromSec , slotLengthToMillisec , slotLengthToSec + -- ** opaque , SlotLength ) where -import Cardano.Slotting.Time -import Data.Time.Clock (NominalDiffTime) -import NoThunks.Class (NoThunks, OnlyCheckWhnfNamed (..)) +import Cardano.Slotting.Time +import Data.Time.Clock (NominalDiffTime) +import NoThunks.Class (NoThunks, OnlyCheckWhnfNamed (..)) addRelTime :: NominalDiffTime -> RelativeTime -> RelativeTime addRelTime = addRelativeTime @@ -41,18 +46,17 @@ diffRelTime = diffRelativeTime -- | System time -- -- Slots are counted from the system start. 
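The case analysis in `waitUntilNextSlot` above can be distilled into a pure decision function. This is a sketch with illustrative names (not part of these modules), using `Integer` for slot numbers and `Rational` for the rewind amounts:

```haskell
-- Hypothetical distillation of the three outcomes after the delay.
data NextSlotDecision
  = ReportSlot Integer -- we advanced (possibly having skipped slots)
  | WaitAgain          -- small clock rewind: stay in the slot, retry
  | ClockMovedBack     -- rewind beyond the limit: fatal
  deriving (Eq, Show)

decide ::
  Integer ->  -- old current slot
  Integer ->  -- slot observed after the delay
  Rational -> -- how far the clock moved back
  Rational -> -- max clock rewind
  NextSlotDecision
decide oldSlot newSlot rewind maxRewind
  | newSlot > oldSlot   = ReportSlot newSlot
  | rewind <= maxRewind = WaitAgain
  | otherwise           = ClockMovedBack
```

Note that in the real code the rewind amount is measured as `now \`diffRelTime\` afterDelay`, and the fatal case throws `SystemClockMovedBack`.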
-data SystemTime m = SystemTime { - -- | Get current time (as a 'RelativeTime') - -- - -- For real deployment, this will take the current 'UTCTime' and then - -- subtract the 'SystemStart' (see 'defaultSystemTime'). Tests don't - -- bother with a 'UTCTime' and just work entirely in 'RelativeTime'. - systemTimeCurrent :: m RelativeTime - - -- | Wait for 'SystemStart' - -- - -- For the real deployment, this waits for the current 'UTCTime' - -- to reach 'SystemStart'. In tests this does nothing. - , systemTimeWait :: m () - } +data SystemTime m = SystemTime + { systemTimeCurrent :: m RelativeTime + -- ^ Get current time (as a 'RelativeTime') + -- + -- For real deployment, this will take the current 'UTCTime' and then + -- subtract the 'SystemStart' (see 'defaultSystemTime'). Tests don't + -- bother with a 'UTCTime' and just work entirely in 'RelativeTime'. + , systemTimeWait :: m () + -- ^ Wait for 'SystemStart' + -- + -- For the real deployment, this waits for the current 'UTCTime' + -- to reach 'SystemStart'. In tests this does nothing. + } deriving NoThunks via OnlyCheckWhnfNamed "SystemTime" (SystemTime m) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Util.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Util.hs index 3edd4a81ec..5e4485af35 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Util.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/BlockchainTime/WallClock/Util.hs @@ -1,19 +1,21 @@ {-# LANGUAGE DeriveFunctor #-} -- | Support for defining 'BlockchainTime' instances -module Ouroboros.Consensus.BlockchainTime.WallClock.Util ( - -- * Tracing +module Ouroboros.Consensus.BlockchainTime.WallClock.Util + ( -- * Tracing TraceBlockchainTimeEvent (..) + -- * Exceptions , SystemClockMovedBackException (..) 
) where -import Control.Exception (Exception) -import Data.Time (NominalDiffTime) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime.WallClock.Types - (SystemStart) -import Ouroboros.Consensus.HardFork.History (PastHorizonException) +import Control.Exception (Exception) +import Data.Time (NominalDiffTime) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime.WallClock.Types + ( SystemStart + ) +import Ouroboros.Consensus.HardFork.History (PastHorizonException) {------------------------------------------------------------------------------- Tracing @@ -23,13 +25,12 @@ import Ouroboros.Consensus.HardFork.History (PastHorizonException) -- -- The @t@ parameter can be instantiated by the time, e.g., @UTCTime@ or -- @RelativeTime@. -data TraceBlockchainTimeEvent t = - -- | The start time of the blockchain time is in the future +data TraceBlockchainTimeEvent t + = -- | The start time of the blockchain time is in the future -- -- We have to block (for 'NominalDiffTime') until that time comes. TraceStartTimeInTheFuture SystemStart NominalDiffTime - - -- | Current slot is not yet known + | -- | Current slot is not yet known -- -- This happens when the tip of our current chain is so far in the past that -- we cannot translate the current wallclock to a slot number, typically @@ -42,9 +43,8 @@ data TraceBlockchainTimeEvent t = -- bounds between which we /can/ do conversions. The distance between the -- current time and the upper bound should rapidly decrease with consecutive -- 'TraceCurrentSlotUnknown' messages during syncing. - | TraceCurrentSlotUnknown t PastHorizonException - - -- | The system clock moved back an acceptable time span, e.g., because of + TraceCurrentSlotUnknown t PastHorizonException + | -- | The system clock moved back an acceptable time span, e.g., because of -- an NTP sync. 
-- -- The system clock moved back such that the new current slot would be @@ -55,18 +55,18 @@ data TraceBlockchainTimeEvent t = -- -- When the system clock moved back more than the configured limit, we shut -- down with a fatal exception. - | TraceSystemClockMovedBack t t + TraceSystemClockMovedBack t t deriving (Show, Functor) {------------------------------------------------------------------------------- Exceptions -------------------------------------------------------------------------------} -data SystemClockMovedBackException = - -- | The system clock got moved back so far that the slot number decreased +data SystemClockMovedBackException + = -- | The system clock got moved back so far that the slot number decreased -- -- We record the slot number before and after the change. SystemClockMovedBack SlotNo SlotNo - deriving (Show) + deriving Show instance Exception SystemClockMovedBackException diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config.hs index a73890bd48..acb0147f90 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config.hs @@ -6,94 +6,99 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Config ( - -- * The top-level node configuration +module Ouroboros.Consensus.Config + ( -- * The top-level node configuration TopLevelConfig (..) , castTopLevelConfig , mkTopLevelConfig + -- ** Checkpoints map , CheckpointsMap (..) , DiffusionPipeliningSupport (..)
, castCheckpointsMap , emptyCheckpointsMap + -- ** Derived extraction functions , configBlock , configCodec , configConsensus , configLedger , configStorage + -- ** Additional convenience functions , configSecurityParam + -- * Re-exports , module Ouroboros.Consensus.Config.SecurityParam ) where -import Data.Coerce -import Data.Map.Strict (Map) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Protocol.Abstract +import Data.Coerce +import Data.Map.Strict (Map) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Protocol.Abstract {------------------------------------------------------------------------------- Top-level config -------------------------------------------------------------------------------} -- | The top-level node configuration -data TopLevelConfig blk = TopLevelConfig { - topLevelConfigProtocol :: !(ConsensusConfig (BlockProtocol blk)) - , topLevelConfigLedger :: !(LedgerConfig blk) - , topLevelConfigBlock :: !(BlockConfig blk) - , topLevelConfigCodec :: !(CodecConfig blk) - , topLevelConfigStorage :: !(StorageConfig blk) - , topLevelConfigCheckpoints :: !(CheckpointsMap blk) - } - deriving (Generic) - -instance ( ConsensusProtocol (BlockProtocol blk) - , NoThunks (LedgerConfig blk) - , NoThunks (BlockConfig blk) - , NoThunks (CodecConfig blk) - , NoThunks (StorageConfig blk) - , NoThunks (HeaderHash blk) - ) => NoThunks (TopLevelConfig blk) +data TopLevelConfig blk = TopLevelConfig + { topLevelConfigProtocol :: !(ConsensusConfig (BlockProtocol blk)) + , topLevelConfigLedger :: !(LedgerConfig blk) + , topLevelConfigBlock :: !(BlockConfig blk) + , topLevelConfigCodec :: !(CodecConfig blk) + , 
topLevelConfigStorage :: !(StorageConfig blk) + , topLevelConfigCheckpoints :: !(CheckpointsMap blk) + } + deriving Generic + +instance + ( ConsensusProtocol (BlockProtocol blk) + , NoThunks (LedgerConfig blk) + , NoThunks (BlockConfig blk) + , NoThunks (CodecConfig blk) + , NoThunks (StorageConfig blk) + , NoThunks (HeaderHash blk) + ) => + NoThunks (TopLevelConfig blk) -- | Checkpoints are block hashes that are expected to be present in the honest -- historical chain. -- -- Each checkpoint is associated with a 'BlockNo', and any block with a -- 'BlockNo' in the checkpoints map is expected to have the corresponding hash. --- -newtype CheckpointsMap blk = CheckpointsMap { - unCheckpointsMap :: Map BlockNo (HeaderHash blk) - } +newtype CheckpointsMap blk = CheckpointsMap + { unCheckpointsMap :: Map BlockNo (HeaderHash blk) + } deriving (Generic, Monoid, Semigroup) -- | Configure consensus layer how to handle some cases of invalid data -- when processing mini protocol communication in the presence of diffusion -- pipelining. 
See also 'Ouroboros.Consensus.Block.BlockSupportsDiffusionPipelining' --- data DiffusionPipeliningSupport = DiffusionPipeliningOn | DiffusionPipeliningOff - deriving (Show) + deriving Show -instance ( NoThunks (HeaderHash blk) - ) => NoThunks (CheckpointsMap blk) +instance + NoThunks (HeaderHash blk) => + NoThunks (CheckpointsMap blk) emptyCheckpointsMap :: CheckpointsMap blk emptyCheckpointsMap = mempty mkTopLevelConfig :: - ConsensusConfig (BlockProtocol blk) - -> LedgerConfig blk - -> BlockConfig blk - -> CodecConfig blk - -> StorageConfig blk - -> CheckpointsMap blk - -> TopLevelConfig blk + ConsensusConfig (BlockProtocol blk) -> + LedgerConfig blk -> + BlockConfig blk -> + CodecConfig blk -> + StorageConfig blk -> + CheckpointsMap blk -> + TopLevelConfig blk mkTopLevelConfig prtclCfg ledgerCfg blockCfg codecCfg storageCfg checkpointsMap = - TopLevelConfig prtclCfg ledgerCfg blockCfg codecCfg storageCfg checkpointsMap + TopLevelConfig prtclCfg ledgerCfg blockCfg codecCfg storageCfg checkpointsMap configConsensus :: TopLevelConfig blk -> ConsensusConfig (BlockProtocol blk) configConsensus = topLevelConfigProtocol @@ -101,39 +106,42 @@ configConsensus = topLevelConfigProtocol configLedger :: TopLevelConfig blk -> LedgerConfig blk configLedger = topLevelConfigLedger -configBlock :: TopLevelConfig blk -> BlockConfig blk +configBlock :: TopLevelConfig blk -> BlockConfig blk configBlock = topLevelConfigBlock -configCodec :: TopLevelConfig blk -> CodecConfig blk +configCodec :: TopLevelConfig blk -> CodecConfig blk configCodec = topLevelConfigCodec -configStorage :: TopLevelConfig blk -> StorageConfig blk +configStorage :: TopLevelConfig blk -> StorageConfig blk configStorage = topLevelConfigStorage -configSecurityParam :: ConsensusProtocol (BlockProtocol blk) - => TopLevelConfig blk -> SecurityParam +configSecurityParam :: + ConsensusProtocol (BlockProtocol blk) => + TopLevelConfig blk -> SecurityParam configSecurityParam = protocolSecurityParam . 
configConsensus castTopLevelConfig :: - ( Coercible (ConsensusConfig (BlockProtocol blk)) - (ConsensusConfig (BlockProtocol blk')) - , LedgerConfig blk ~ LedgerConfig blk' - , Coercible (BlockConfig blk) (BlockConfig blk') - , Coercible (CodecConfig blk) (CodecConfig blk') - , Coercible (StorageConfig blk) (StorageConfig blk') - , Coercible (HeaderHash blk) (HeaderHash blk') - ) - => TopLevelConfig blk -> TopLevelConfig blk' -castTopLevelConfig TopLevelConfig{..} = TopLevelConfig{ - topLevelConfigProtocol = coerce topLevelConfigProtocol - , topLevelConfigLedger = topLevelConfigLedger - , topLevelConfigBlock = coerce topLevelConfigBlock - , topLevelConfigCodec = coerce topLevelConfigCodec - , topLevelConfigStorage = coerce topLevelConfigStorage + ( Coercible + (ConsensusConfig (BlockProtocol blk)) + (ConsensusConfig (BlockProtocol blk')) + , LedgerConfig blk ~ LedgerConfig blk' + , Coercible (BlockConfig blk) (BlockConfig blk') + , Coercible (CodecConfig blk) (CodecConfig blk') + , Coercible (StorageConfig blk) (StorageConfig blk') + , Coercible (HeaderHash blk) (HeaderHash blk') + ) => + TopLevelConfig blk -> TopLevelConfig blk' +castTopLevelConfig TopLevelConfig{..} = + TopLevelConfig + { topLevelConfigProtocol = coerce topLevelConfigProtocol + , topLevelConfigLedger = topLevelConfigLedger + , topLevelConfigBlock = coerce topLevelConfigBlock + , topLevelConfigCodec = coerce topLevelConfigCodec + , topLevelConfigStorage = coerce topLevelConfigStorage , topLevelConfigCheckpoints = coerce topLevelConfigCheckpoints } castCheckpointsMap :: - Coercible (HeaderHash blk) (HeaderHash blk') - => CheckpointsMap blk -> CheckpointsMap blk' + Coercible (HeaderHash blk) (HeaderHash blk') => + CheckpointsMap blk -> CheckpointsMap blk' castCheckpointsMap = coerce diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config/SecurityParam.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config/SecurityParam.hs index f02d14b94b..bebe022e8d 
100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config/SecurityParam.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config/SecurityParam.hs @@ -1,17 +1,16 @@ {-# LANGUAGE DeriveGeneric #-} {-# LANGUAGE DerivingVia #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.Config.SecurityParam (SecurityParam (..)) where -import Cardano.Binary -import Cardano.Ledger.BaseTypes.NonZero -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Quiet +import Cardano.Binary +import Cardano.Ledger.BaseTypes.NonZero +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Quiet -- | Protocol security parameter -- @@ -22,6 +21,6 @@ import Quiet -- -- NOTE: This talks about the number of /blocks/ we can roll back, not -- the number of /slots/. -newtype SecurityParam = SecurityParam { maxRollbacks :: NonZero Word64 } +newtype SecurityParam = SecurityParam {maxRollbacks :: NonZero Word64} deriving (Eq, Generic, NoThunks, ToCBOR, FromCBOR) deriving Show via Quiet SecurityParam diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config/SupportsNode.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config/SupportsNode.hs index ab519fdc77..004cc5e631 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config/SupportsNode.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Config/SupportsNode.hs @@ -1,11 +1,11 @@ module Ouroboros.Consensus.Config.SupportsNode (ConfigSupportsNode (..)) where -import Ouroboros.Consensus.Block.Abstract (BlockConfig) -import Ouroboros.Consensus.BlockchainTime (SystemStart) -import Ouroboros.Network.Magic (NetworkMagic) +import Ouroboros.Consensus.Block.Abstract (BlockConfig) +import Ouroboros.Consensus.BlockchainTime (SystemStart) +import Ouroboros.Network.Magic (NetworkMagic) -- | The 
'BlockConfig' needs to contain some information in order to support -- running a node. class ConfigSupportsNode blk where - getSystemStart :: BlockConfig blk -> SystemStart + getSystemStart :: BlockConfig blk -> SystemStart getNetworkMagic :: BlockConfig blk -> NetworkMagic diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Forecast.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Forecast.hs index 2e6f4550b0..b63be9fb7d 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Forecast.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Forecast.hs @@ -1,35 +1,36 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE NamedFieldPuns #-} -module Ouroboros.Consensus.Forecast ( - Forecast (..) +module Ouroboros.Consensus.Forecast + ( Forecast (..) , OutsideForecastRange (..) , constantForecastInRange , constantForecastOf , mapForecast , trivialForecast + -- * Utilities for constructing forecasts , crossEraForecastBound ) where -import Control.Exception (Exception) -import Control.Monad (guard) -import Control.Monad.Except (Except, throwError) -import Data.Word (Word64) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.History.Util (addSlots) -import Ouroboros.Consensus.Ledger.Basics (GetTip, getTipSlot) - -data Forecast a = Forecast { - forecastAt :: WithOrigin SlotNo +import Control.Exception (Exception) +import Control.Monad (guard) +import Control.Monad.Except (Except, throwError) +import Data.Word (Word64) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.History.Util (addSlots) +import Ouroboros.Consensus.Ledger.Basics (GetTip, getTipSlot) - -- Precondition: @At s >= forecastAt@ - , forecastFor :: SlotNo -> Except OutsideForecastRange a - } +data Forecast a = Forecast + { forecastAt :: WithOrigin SlotNo + , -- Precondition: @At s >= forecastAt@ + forecastFor :: SlotNo -> Except OutsideForecastRange a + } mapForecast :: (a -> b) -> Forecast a 
-> Forecast b -mapForecast f (Forecast at for) = Forecast{ - forecastAt = at +mapForecast f (Forecast at for) = + Forecast + { forecastAt = at , forecastFor = fmap f . for } @@ -52,36 +53,37 @@ constantForecastOf = constantForecastInRange Nothing -- This is primarily useful for tests; the forecast range is finite, and we -- do still check the precondition, to catch any bugs. constantForecastInRange :: Maybe SlotNo -> a -> WithOrigin SlotNo -> Forecast a -constantForecastInRange range' a at = Forecast { - forecastAt = at +constantForecastInRange range' a at = + Forecast + { forecastAt = at , forecastFor = forecastForWithRange range' } - where - forecastForWithRange Nothing = \for -> - if NotOrigin for >= at - then return a - else error "constantForecastOf: precondition violated" - forecastForWithRange (Just range) = \for -> - let outsideForecastMaxFor = succWithOrigin at + range - in if for >= outsideForecastMaxFor - then throwError $ OutsideForecastRange { - outsideForecastAt = at + where + forecastForWithRange Nothing = \for -> + if NotOrigin for >= at + then return a + else error "constantForecastOf: precondition violated" + forecastForWithRange (Just range) = \for -> + let outsideForecastMaxFor = succWithOrigin at + range + in if for >= outsideForecastMaxFor + then + throwError $ + OutsideForecastRange + { outsideForecastAt = at , outsideForecastMaxFor , outsideForecastFor = for } - else forecastForWithRange Nothing for - -data OutsideForecastRange = - OutsideForecastRange { - -- | The slot for which the forecast was obtained - outsideForecastAt :: !(WithOrigin SlotNo) - - -- | Exclusive upper bound on the range of the forecast - , outsideForecastMaxFor :: !SlotNo - - -- | The slot for which we requested a value - , outsideForecastFor :: !SlotNo - } + else forecastForWithRange Nothing for + +data OutsideForecastRange + = OutsideForecastRange + { outsideForecastAt :: !(WithOrigin SlotNo) + -- ^ The slot for which the forecast was obtained + , 
outsideForecastMaxFor :: !SlotNo + -- ^ Exclusive upper bound on the range of the forecast + , outsideForecastFor :: !SlotNo + -- ^ The slot for which we requested a value + } deriving (Show, Eq) instance Exception OutsideForecastRange @@ -170,25 +172,29 @@ instance Exception OutsideForecastRange -- 'SlotNo' the first block in the next era can; their @minimum@ will serve as -- an exclusive upper bound for the forecast range. crossEraForecastBound :: - WithOrigin SlotNo -- ^ Current tip (the slot the forecast is at) - -> SlotNo -- ^ Slot at which the transition to the next era happens - -> Word64 -- ^ Max lookeahead in the current era - -> Word64 -- ^ Max lookeahead in the next era - -> SlotNo + -- | Current tip (the slot the forecast is at) + WithOrigin SlotNo -> + -- | Slot at which the transition to the next era happens + SlotNo -> + -- | Max lookahead in the current era + Word64 -> + -- | Max lookahead in the next era + Word64 -> + SlotNo crossEraForecastBound currentTip transitionSlot currentLookahead nextLookahead = - maybe boundFromNextEra (min boundFromNextEra) boundFromCurrentEra - where - tipSucc :: SlotNo - tipSucc = succWithOrigin currentTip - - -- Upper bound arising from blocks in the current era - -- - -- 'Nothing' if there are no more blocks in this era - boundFromCurrentEra :: Maybe SlotNo - boundFromCurrentEra = do - guard (tipSucc < transitionSlot) - return $ addSlots currentLookahead tipSucc - - -- Upper bound arising from blocks in the next era - boundFromNextEra :: SlotNo - boundFromNextEra = addSlots nextLookahead transitionSlot + maybe boundFromNextEra (min boundFromNextEra) boundFromCurrentEra + where + tipSucc :: SlotNo + tipSucc = succWithOrigin currentTip + + -- Upper bound arising from blocks in the current era + -- + -- 'Nothing' if there are no more blocks in this era + boundFromCurrentEra :: Maybe SlotNo + boundFromCurrentEra = do + guard (tipSucc < transitionSlot) + return $ addSlots currentLookahead tipSucc + + -- Upper bound
arising from blocks in the next era + boundFromNextEra :: SlotNo + boundFromNextEra = addSlots nextLookahead transitionSlot diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/Diff.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/Diff.hs index 3f292248b2..ea767dcdda 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/Diff.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/Diff.hs @@ -9,17 +9,21 @@ -- -- > import Ouroboros.Consensus.Fragment.Diff (ChainDiff (..)) -- > import qualified Ouroboros.Consensus.Fragment.Diff as Diff -module Ouroboros.Consensus.Fragment.Diff ( - ChainDiff (..) +module Ouroboros.Consensus.Fragment.Diff + ( ChainDiff (..) + -- * Queries , getAnchorPoint , getTip , rollbackExceedsSuffix + -- * Constructors , diff , extend + -- * Application , apply + -- * Manipulation , Ouroboros.Consensus.Fragment.Diff.map , append @@ -28,15 +32,16 @@ module Ouroboros.Consensus.Fragment.Diff ( , truncate ) where -import Data.Word (Word64) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - AnchoredSeq (..)) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Prelude hiding (mapM, truncate) -import qualified Prelude - +import Data.Word (Word64) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , AnchoredSeq (..) + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Prelude hiding (mapM, truncate) +import Prelude qualified -- | A diff of a chain (fragment). -- @@ -47,13 +52,13 @@ import qualified Prelude -- back. In other words, applying a 'ChainDiff' can result in a chain shorter -- than the chain to which the diff was applied. 
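Applying a `ChainDiff` is "roll back, then extend": drop the newest `getRollback` blocks from the current chain, then append the suffix. A hypothetical list-based model of `apply` — plain lists stand in for `AnchoredFragment`, newest block last, and the anchor-matching check that makes the real `AF.join` partial is omitted:

```haskell
-- Drop the newest nbRollback blocks, then append the new suffix.
-- The result can be shorter than the input chain, as the doc comment
-- above notes.
applyDiff :: Int -> [a] -> [a] -> [a]
applyDiff nbRollback suffix chain =
  take (length chain - nbRollback) chain ++ suffix
```

For example, rolling back 2 blocks of `[1,2,3,4]` and adding suffix `[10,11]` yields `[1,2,10,11]`, one block shorter would result from a longer rollback than suffix.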
data ChainDiff b = ChainDiff - { getRollback :: !Word64 - -- ^ The number of blocks/headers to roll back the current chain - , getSuffix :: !(AnchoredFragment b) - -- ^ The new blocks/headers to add after rolling back the current chain. - } + { getRollback :: !Word64 + -- ^ The number of blocks/headers to roll back the current chain + , getSuffix :: !(AnchoredFragment b) + -- ^ The new blocks/headers to add after rolling back the current chain. + } -deriving instance (StandardHash b, Eq b) => Eq (ChainDiff b) +deriving instance (StandardHash b, Eq b) => Eq (ChainDiff b) deriving instance (StandardHash b, Show b) => Show (ChainDiff b) {------------------------------------------------------------------------------- @@ -73,7 +78,7 @@ getAnchorPoint = castPoint . AF.anchorPoint . getSuffix -- greater than the length of the new elements in the suffix to add. rollbackExceedsSuffix :: HasHeader b => ChainDiff b -> Bool rollbackExceedsSuffix (ChainDiff nbRollback suffix) = - nbRollback > fromIntegral (AF.length suffix) + nbRollback > fromIntegral (AF.length suffix) {------------------------------------------------------------------------------- Constructors @@ -91,16 +96,18 @@ extend = ChainDiff 0 -- PRECONDITION: the candidate fragment must intersect with the current chain -- fragment. 
 diff ::
-     (HasHeader b, HasHeader b', HeaderHash b ~ HeaderHash b', HasCallStack)
-  => AnchoredFragment b' -- ^ Current chain
-  -> AnchoredFragment b -- ^ Candidate chain
-  -> ChainDiff b
+  (HasHeader b, HasHeader b', HeaderHash b ~ HeaderHash b', HasCallStack) =>
+  -- | Current chain
+  AnchoredFragment b' ->
+  -- | Candidate chain
+  AnchoredFragment b ->
+  ChainDiff b
 diff curChain candChain = case AF.intersect curChain candChain of
-    Just (_curChainPrefix, _candPrefix, curChainSuffix, candSuffix)
-      -> ChainDiff
-           (fromIntegral (AF.length curChainSuffix))
-           candSuffix
+  Just (_curChainPrefix, _candPrefix, curChainSuffix, candSuffix) ->
+    ChainDiff
+      (fromIntegral (AF.length curChainSuffix))
+      candSuffix
     -- Precondition violated.
     _ -> error "candidate fragment doesn't intersect with current chain"
@@ -119,12 +126,12 @@ diff curChain candChain =
 -- The returned fragment will have the same anchor point as the given
 -- fragment.
 apply ::
-     HasHeader b
-  => AnchoredFragment b
-  -> ChainDiff b
-  -> Maybe (AnchoredFragment b)
+  HasHeader b =>
+  AnchoredFragment b ->
+  ChainDiff b ->
+  Maybe (AnchoredFragment b)
 apply curChain (ChainDiff nbRollback suffix) =
-    AF.join (AF.dropNewest (fromIntegral nbRollback) curChain) suffix
+  AF.join (AF.dropNewest (fromIntegral nbRollback) curChain) suffix
 
 {-------------------------------------------------------------------------------
   Manipulation
@@ -145,50 +152,50 @@ append (ChainDiff nbRollback suffix) b = (ChainDiff nbRollback (suffix :> b))
 -- If the length of the truncated suffix is shorter than the rollback,
 -- 'Nothing' is returned.
 truncate ::
-     (HasHeader b, HasCallStack)
-  => Point b
-  -> ChainDiff b
-  -> ChainDiff b
+  (HasHeader b, HasCallStack) =>
+  Point b ->
+  ChainDiff b ->
+  ChainDiff b
 truncate pt (ChainDiff nbRollback suffix)
-    | Just suffix' <- AF.rollback (castPoint pt) suffix
-    = ChainDiff nbRollback suffix'
-    | otherwise
-    = error $ "rollback point not on the candidate suffix: " <> show pt
+  | Just suffix' <- AF.rollback (castPoint pt) suffix =
+      ChainDiff nbRollback suffix'
+  | otherwise =
+      error $ "rollback point not on the candidate suffix: " <> show pt
 
 -- | Return the longest prefix of the suffix matching the given predicate,
 -- starting from the left, i.e., the \"oldest\" blocks.
 --
 -- If the new suffix is shorter than the diff's rollback, return 'Nothing'.
 takeWhileOldest ::
-     HasHeader b
-  => (b -> Bool)
-  -> ChainDiff b
-  -> ChainDiff b
+  HasHeader b =>
+  (b -> Bool) ->
+  ChainDiff b ->
+  ChainDiff b
 takeWhileOldest accept (ChainDiff nbRollback suffix) =
-    ChainDiff nbRollback (AF.takeWhileOldest accept suffix)
+  ChainDiff nbRollback (AF.takeWhileOldest accept suffix)
 
 map ::
-     forall a b.
-     ( HasHeader b
-     , HeaderHash a ~ HeaderHash b
-     )
-  => (a -> b)
-  -> ChainDiff a
-  -> ChainDiff b
+  forall a b.
+  ( HasHeader b
+  , HeaderHash a ~ HeaderHash b
+  ) =>
+  (a -> b) ->
+  ChainDiff a ->
+  ChainDiff b
 map f (ChainDiff rollback suffix) =
-    ChainDiff rollback
-  $ AF.mapAnchoredFragment f suffix
+  ChainDiff rollback $
+    AF.mapAnchoredFragment f suffix
 
 mapM ::
-     forall a b m.
-     ( HasHeader b
-     , HeaderHash a ~ HeaderHash b
-     , Monad m
-     )
-  => (a -> m b)
-  -> ChainDiff a
-  -> m (ChainDiff b)
+  forall a b m.
+  ( HasHeader b
+  , HeaderHash a ~ HeaderHash b
+  , Monad m
+  ) =>
+  (a -> m b) ->
+  ChainDiff a ->
+  m (ChainDiff b)
 mapM f (ChainDiff rollback suffix) =
-    ChainDiff rollback
-  . AF.fromOldestFirst (AF.castAnchor (AF.anchor suffix))
-  <$> Prelude.mapM f (AF.toOldestFirst suffix)
+  ChainDiff rollback
+    . AF.fromOldestFirst (AF.castAnchor (AF.anchor suffix))
+    <$> Prelude.mapM f (AF.toOldestFirst suffix)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/Validated.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/Validated.hs
index ffeeeeb57c..34a73a752b 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/Validated.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/Validated.hs
@@ -10,22 +10,23 @@
 --
 -- > import Ouroboros.Consensus.Fragment.Validated (ValidatedFragment)
 -- > import qualified Ouroboros.Consensus.Fragment.Validated as VF
-module Ouroboros.Consensus.Fragment.Validated (
-    ValidatedFragment (ValidatedFragment)
+module Ouroboros.Consensus.Fragment.Validated
+  ( ValidatedFragment (ValidatedFragment)
   , validatedFragment
   , validatedLedger
   , validatedTip
+    -- * Monadic
   , newM
   ) where
 
-import           GHC.Stack
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Ledger.Abstract
-import           Ouroboros.Consensus.Util.Assert
-import           Ouroboros.Consensus.Util.IOLike hiding (invariant)
-import           Ouroboros.Network.AnchoredFragment (AnchoredFragment)
-import qualified Ouroboros.Network.AnchoredFragment as AF
+import GHC.Stack
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Util.Assert
+import Ouroboros.Consensus.Util.IOLike hiding (invariant)
+import Ouroboros.Network.AnchoredFragment (AnchoredFragment)
+import Ouroboros.Network.AnchoredFragment qualified as AF
 
 -- | Validated chain fragment along with the ledger state after validation
 --
@@ -35,20 +36,19 @@ import qualified Ouroboros.Network.AnchoredFragment as AF
 --
 -- The invariant is only checked on construction, maintaining it afterwards is
 -- up to the user.
-data ValidatedFragment b l = UnsafeValidatedFragment {
-      -- | Chain fragment
-      validatedFragment :: !(AnchoredFragment b)
-
-      -- | Ledger after validation
-    , validatedLedger :: !l
-    }
+data ValidatedFragment b l = UnsafeValidatedFragment
+  { validatedFragment :: !(AnchoredFragment b)
+  -- ^ Chain fragment
+  , validatedLedger :: !l
+  -- ^ Ledger after validation
+  }
   deriving (Functor, Foldable, Traversable)
 
 {-# COMPLETE ValidatedFragment #-}
 
 pattern ValidatedFragment ::
-     (GetTip l, HasHeader b, HeaderHash b ~ HeaderHash l, HasCallStack)
-  => AnchoredFragment b -> l mk -> ValidatedFragment b (l mk)
+  (GetTip l, HasHeader b, HeaderHash b ~ HeaderHash l, HasCallStack) =>
+  AnchoredFragment b -> l mk -> ValidatedFragment b (l mk)
 pattern ValidatedFragment f l <- UnsafeValidatedFragment f l
   where
     ValidatedFragment f l = new f l
@@ -57,76 +57,80 @@ validatedTip :: HasHeader b => ValidatedFragment b l -> Point b
 validatedTip = AF.headPoint . validatedFragment
 
 invariant ::
-     forall l mk b.
-     (GetTip l , HasHeader b, HeaderHash b ~ HeaderHash l)
-  => ValidatedFragment b (l mk)
-  -> Either String ()
+  forall l mk b.
+  (GetTip l, HasHeader b, HeaderHash b ~ HeaderHash l) =>
+  ValidatedFragment b (l mk) ->
+  Either String ()
 invariant (ValidatedFragment fragment ledger) =
   pointInvariant (getTip ledger :: Point l) fragment
 
 pointInvariant ::
-     forall l b. (HeaderHash b ~ HeaderHash l, HasHeader b)
-  => Point l
-  -> AnchoredFragment b
-  -> Either String ()
+  forall l b.
+  (HeaderHash b ~ HeaderHash l, HasHeader b) =>
+  Point l ->
+  AnchoredFragment b ->
+  Either String ()
 pointInvariant ledgerTip0 fragment
-    | ledgerTip /= headPoint
-    = Left $ concat [
-          "ledger tip "
-        , show ledgerTip
-        , " /= head point "
-        , show headPoint
-        ]
-    | otherwise
-    = Right ()
-  where
-    ledgerTip, headPoint :: Point b
-    ledgerTip = castPoint ledgerTip0
-    headPoint = castPoint $ AF.headPoint fragment
+  | ledgerTip /= headPoint =
+      Left $
+        concat
+          [ "ledger tip "
+          , show ledgerTip
+          , " /= head point "
+          , show headPoint
+          ]
+  | otherwise =
+      Right ()
+ where
+  ledgerTip, headPoint :: Point b
+  ledgerTip = castPoint ledgerTip0
+  headPoint = castPoint $ AF.headPoint fragment
 
 -- | Constructor for 'ValidatedFragment' that checks the invariant
 new ::
-     forall l mk b.
-     (GetTip l, HasHeader b, HeaderHash b ~ HeaderHash l, HasCallStack)
-  => AnchoredFragment b
-  -> l mk
-  -> ValidatedFragment b (l mk)
+  forall l mk b.
+  (GetTip l, HasHeader b, HeaderHash b ~ HeaderHash l, HasCallStack) =>
+  AnchoredFragment b ->
+  l mk ->
+  ValidatedFragment b (l mk)
 new fragment ledger =
-    assertWithMsg (invariant validated) $
-      validated
-  where
-    validated :: ValidatedFragment b (l mk)
-    validated = UnsafeValidatedFragment {
-        validatedFragment = fragment
-      , validatedLedger = ledger
-      }
+  assertWithMsg (invariant validated) $
+    validated
+ where
+  validated :: ValidatedFragment b (l mk)
+  validated =
+    UnsafeValidatedFragment
+      { validatedFragment = fragment
+      , validatedLedger = ledger
+      }
 
 {-------------------------------------------------------------------------------
   Monadic
-------------------------------------------------------------------------------}
 
 invariantM ::
-     forall m l b.
-     (MonadSTM m, GetTipSTM m l, HasHeader b, HeaderHash b ~ HeaderHash l)
-  => ValidatedFragment b l
-  -> m (Either String ())
+  forall m l b.
+  (MonadSTM m, GetTipSTM m l, HasHeader b, HeaderHash b ~ HeaderHash l) =>
+  ValidatedFragment b l ->
+  m (Either String ())
 invariantM (UnsafeValidatedFragment fragment ledger) = do
-    ledgerTip <- getTipM ledger
-    pure $ pointInvariant ledgerTip fragment
+  ledgerTip <- getTipM ledger
+  pure $ pointInvariant ledgerTip fragment
 
 -- | Constructor for 'ValidatedFragment' that checks the invariant
 newM ::
-     forall m l b.
-     (MonadSTM m, GetTipSTM m l, HasHeader b, HeaderHash b ~ HeaderHash l, HasCallStack)
-  => AnchoredFragment b
-  -> l
-  -> m (ValidatedFragment b l)
+  forall m l b.
+  (MonadSTM m, GetTipSTM m l, HasHeader b, HeaderHash b ~ HeaderHash l, HasCallStack) =>
+  AnchoredFragment b ->
+  l ->
+  m (ValidatedFragment b l)
 newM fragment ledger = do
-    msg <- invariantM validated
-    pure $ assertWithMsg msg validated
-  where
-    validated :: ValidatedFragment b l
-    validated = UnsafeValidatedFragment {
-        validatedFragment = fragment
-      , validatedLedger = ledger
-      }
+  msg <- invariantM validated
+  pure $ assertWithMsg msg validated
+ where
+  validated :: ValidatedFragment b l
+  validated =
+    UnsafeValidatedFragment
+      { validatedFragment = fragment
+      , validatedLedger = ledger
+      }
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/ValidatedDiff.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/ValidatedDiff.hs
index 1f31dfaec3..44e90febf4 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/ValidatedDiff.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Fragment/ValidatedDiff.hs
@@ -8,28 +8,29 @@
 --
 -- > import Ouroboros.Consensus.Fragment.ValidatedDiff (ValidatedChainDiff (..))
 -- > import qualified Ouroboros.Consensus.Fragment.ValidatedDiff as ValidatedDiff
-module Ouroboros.Consensus.Fragment.ValidatedDiff (
-    ValidatedChainDiff (ValidatedChainDiff)
+module Ouroboros.Consensus.Fragment.ValidatedDiff
+  ( ValidatedChainDiff (ValidatedChainDiff)
   , getChainDiff
   , getLedger
   , new
   , rollbackExceedsSuffix
   , toValidatedFragment
+    -- * Monadic
   , newM
   , toValidatedFragmentM
   ) where
 
-import           Control.Monad.Except (throwError)
-import           GHC.Stack (HasCallStack)
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Fragment.Diff (ChainDiff)
-import qualified Ouroboros.Consensus.Fragment.Diff as Diff
-import           Ouroboros.Consensus.Fragment.Validated (ValidatedFragment)
-import qualified Ouroboros.Consensus.Fragment.Validated as VF
-import           Ouroboros.Consensus.Ledger.Abstract
-import           Ouroboros.Consensus.Util.Assert
-import           Ouroboros.Consensus.Util.IOLike (MonadSTM (..))
+import Control.Monad.Except (throwError)
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Fragment.Diff (ChainDiff)
+import Ouroboros.Consensus.Fragment.Diff qualified as Diff
+import Ouroboros.Consensus.Fragment.Validated (ValidatedFragment)
+import Ouroboros.Consensus.Fragment.Validated qualified as VF
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Util.Assert
+import Ouroboros.Consensus.Util.IOLike (MonadSTM (..))
 
 -- | A 'ChainDiff' along with the ledger state after validation.
 --
@@ -40,15 +41,16 @@ import Ouroboros.Consensus.Util.IOLike (MonadSTM (..))
 -- The invariant is only checked on construction, maintaining it afterwards is
 -- up to the user.
 data ValidatedChainDiff b l = UnsafeValidatedChainDiff
-    { getChainDiff :: ChainDiff b
-    , getLedger :: l
-    }
+  { getChainDiff :: ChainDiff b
+  , getLedger :: l
+  }
 
 -- | Allow for pattern matching on a 'ValidatedChainDiff' without exposing the
 -- (unsafe) constructor. Use 'new' to construct a 'ValidatedChainDiff'.
-pattern ValidatedChainDiff
-  :: ChainDiff b -> l -> ValidatedChainDiff b l
+pattern ValidatedChainDiff ::
+  ChainDiff b -> l -> ValidatedChainDiff b l
 pattern ValidatedChainDiff d l <- UnsafeValidatedChainDiff d l
+
 {-# COMPLETE ValidatedChainDiff #-}
 
 -- | Create a 'ValidatedChainDiff'.
@@ -57,38 +59,42 @@ pattern ValidatedChainDiff d l <- UnsafeValidatedChainDiff d l
 --
 -- > getTip chainDiff == ledgerTipPoint ledger
 new ::
-     forall b l mk. (GetTip l, HasHeader b, HeaderHash l ~ HeaderHash b, HasCallStack)
-  => ChainDiff b
-  -> l mk
-  -> ValidatedChainDiff b (l mk)
+  forall b l mk.
+  (GetTip l, HasHeader b, HeaderHash l ~ HeaderHash b, HasCallStack) =>
+  ChainDiff b ->
+  l mk ->
+  ValidatedChainDiff b (l mk)
 new chainDiff ledger =
-    assertWithMsg (pointInvariant (getTip ledger) chainDiff) $
+  assertWithMsg (pointInvariant (getTip ledger) chainDiff) $
     UnsafeValidatedChainDiff chainDiff ledger
 
 pointInvariant ::
-     forall l b. (HeaderHash b ~ HeaderHash l, HasHeader b)
-  => Point l
-  -> ChainDiff b
-  -> Either String ()
+  forall l b.
+  (HeaderHash b ~ HeaderHash l, HasHeader b) =>
+  Point l ->
+  ChainDiff b ->
+  Either String ()
 pointInvariant ledgerTip0 chainDiff = precondition
-  where
-    chainDiffTip, ledgerTip :: Point b
-    chainDiffTip = castPoint $ Diff.getTip chainDiff
-    ledgerTip = castPoint ledgerTip0
-    precondition
-      | chainDiffTip == ledgerTip
-      = return ()
-      | otherwise
-      = throwError $
-          "tip of ChainDiff doesn't match ledger: " <>
-          show chainDiffTip <> " /= " <> show ledgerTip
+ where
+  chainDiffTip, ledgerTip :: Point b
+  chainDiffTip = castPoint $ Diff.getTip chainDiff
+  ledgerTip = castPoint ledgerTip0
+  precondition
+    | chainDiffTip == ledgerTip =
+        return ()
+    | otherwise =
+        throwError $
+          "tip of ChainDiff doesn't match ledger: "
+            <> show chainDiffTip
+            <> " /= "
+            <> show ledgerTip
 
 toValidatedFragment ::
-     (GetTip l, HasHeader b, HeaderHash l ~ HeaderHash b, HasCallStack)
-  => ValidatedChainDiff b (l mk)
-  -> ValidatedFragment b (l mk)
+  (GetTip l, HasHeader b, HeaderHash l ~ HeaderHash b, HasCallStack) =>
+  ValidatedChainDiff b (l mk) ->
+  ValidatedFragment b (l mk)
 toValidatedFragment (UnsafeValidatedChainDiff cs l) =
-    VF.ValidatedFragment (Diff.getSuffix cs) l
+  VF.ValidatedFragment (Diff.getSuffix cs) l
 
 rollbackExceedsSuffix :: HasHeader b => ValidatedChainDiff b l -> Bool
 rollbackExceedsSuffix = Diff.rollbackExceedsSuffix . getChainDiff
@@ -98,23 +104,30 @@ rollbackExceedsSuffix = Diff.rollbackExceedsSuffix . getChainDiff
 -------------------------------------------------------------------------------}
 
 newM ::
-     forall m b l. (
-       MonadSTM m, GetTipSTM m l, HasHeader b, HeaderHash l ~ HeaderHash b
-     , HasCallStack
-     )
-  => ChainDiff b
-  -> l
-  -> m (ValidatedChainDiff b l)
+  forall m b l.
+  ( MonadSTM m
+  , GetTipSTM m l
+  , HasHeader b
+  , HeaderHash l ~ HeaderHash b
+  , HasCallStack
+  ) =>
+  ChainDiff b ->
+  l ->
+  m (ValidatedChainDiff b l)
 newM chainDiff ledger = do
-    ledgerTip <- getTipM ledger
-    pure $ assertWithMsg (pointInvariant ledgerTip chainDiff)
-         $ UnsafeValidatedChainDiff chainDiff ledger
+  ledgerTip <- getTipM ledger
+  pure $
+    assertWithMsg (pointInvariant ledgerTip chainDiff) $
+      UnsafeValidatedChainDiff chainDiff ledger
 
 toValidatedFragmentM ::
-     ( MonadSTM m, GetTipSTM m l, HasHeader b, HeaderHash l ~ HeaderHash b
-     , HasCallStack
-     )
-  => ValidatedChainDiff b l
-  -> m (ValidatedFragment b l)
+  ( MonadSTM m
+  , GetTipSTM m l
+  , HasHeader b
+  , HeaderHash l ~ HeaderHash b
+  , HasCallStack
+  ) =>
+  ValidatedChainDiff b l ->
+  m (ValidatedFragment b l)
 toValidatedFragmentM (UnsafeValidatedChainDiff cs l) =
-    VF.newM (Diff.getSuffix cs) l
+  VF.newM (Diff.getSuffix cs) l
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Genesis/Governor.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Genesis/Governor.hs
index 6a388c6010..4dba0433fd 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Genesis/Governor.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Genesis/Governor.hs
@@ -22,9 +22,8 @@
 --
 -- Whenever GDD disconnects peers, and as a result the youngest header present
 -- in all candidate fragments changes, the chain selection is updated.
---
-module Ouroboros.Consensus.Genesis.Governor (
-    DensityBounds (..)
+module Ouroboros.Consensus.Genesis.Governor
+  ( DensityBounds (..)
   , GDDDebugInfo (..)
   , GDDStateView (..)
   , TraceGDDEvent (..)
@@ -33,46 +32,58 @@ module Ouroboros.Consensus.Genesis.Governor (
   , sharedCandidatePrefix
   ) where
 
-import           Cardano.Ledger.BaseTypes (unNonZero)
-import           Control.Monad (guard, void, when)
-import           Control.Tracer (Tracer, traceWith)
-import           Data.Bifunctor (second)
-import           Data.Containers.ListUtils (nubOrd)
-import           Data.Foldable (for_, toList)
-import           Data.Functor.Compose (Compose (..))
-import           Data.List.NonEmpty (NonEmpty)
-import qualified Data.List.NonEmpty as NE
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           Data.Maybe (maybeToList)
-import           Data.Maybe.Strict (StrictMaybe)
-import           Data.Typeable (Typeable)
-import           Data.Word (Word64)
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Config (TopLevelConfig, configLedger,
-                     configSecurityParam)
-import           Ouroboros.Consensus.Config.SecurityParam
-                     (SecurityParam (SecurityParam))
-import           Ouroboros.Consensus.HardFork.Abstract (HasHardForkHistory (..))
-import           Ouroboros.Consensus.HardFork.History.Qry (qryFromExpr,
-                     runQuery, slotToGenesisWindow)
-import           Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..))
-import           Ouroboros.Consensus.Ledger.Basics (EmptyMK)
-import           Ouroboros.Consensus.Ledger.Extended (ExtLedgerState,
-                     ledgerState)
-import           Ouroboros.Consensus.Ledger.SupportsProtocol
-                     (LedgerSupportsProtocol)
-import           Ouroboros.Consensus.MiniProtocol.ChainSync.Client
-                     (ChainSyncClientHandle (..), ChainSyncState (..))
-import           Ouroboros.Consensus.Node.GsmState
-import           Ouroboros.Consensus.Storage.ChainDB.API (ChainDB)
-import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB
-import           Ouroboros.Consensus.Util (eitherToMaybe, whenJust)
-import           Ouroboros.Consensus.Util.AnchoredFragment (stripCommonPrefix)
-import           Ouroboros.Consensus.Util.IOLike
-import           Ouroboros.Consensus.Util.STM (Watcher (..))
-import           Ouroboros.Network.AnchoredFragment (AnchoredFragment)
-import qualified Ouroboros.Network.AnchoredFragment as AF
+import Cardano.Ledger.BaseTypes (unNonZero)
+import Control.Monad (guard, void, when)
+import Control.Tracer (Tracer, traceWith)
+import Data.Bifunctor (second)
+import Data.Containers.ListUtils (nubOrd)
+import Data.Foldable (for_, toList)
+import Data.Functor.Compose (Compose (..))
+import Data.List.NonEmpty (NonEmpty)
+import Data.List.NonEmpty qualified as NE
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Maybe (maybeToList)
+import Data.Maybe.Strict (StrictMaybe)
+import Data.Typeable (Typeable)
+import Data.Word (Word64)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Config
+  ( TopLevelConfig
+  , configLedger
+  , configSecurityParam
+  )
+import Ouroboros.Consensus.Config.SecurityParam
+  ( SecurityParam (SecurityParam)
+  )
+import Ouroboros.Consensus.HardFork.Abstract (HasHardForkHistory (..))
+import Ouroboros.Consensus.HardFork.History.Qry
+  ( qryFromExpr
+  , runQuery
+  , slotToGenesisWindow
+  )
+import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..))
+import Ouroboros.Consensus.Ledger.Basics (EmptyMK)
+import Ouroboros.Consensus.Ledger.Extended
+  ( ExtLedgerState
+  , ledgerState
+  )
+import Ouroboros.Consensus.Ledger.SupportsProtocol
+  ( LedgerSupportsProtocol
+  )
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client
+  ( ChainSyncClientHandle (..)
+  , ChainSyncState (..)
+  )
+import Ouroboros.Consensus.Node.GsmState
+import Ouroboros.Consensus.Storage.ChainDB.API (ChainDB)
+import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB
+import Ouroboros.Consensus.Util (eitherToMaybe, whenJust)
+import Ouroboros.Consensus.Util.AnchoredFragment (stripCommonPrefix)
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Consensus.Util.STM (Watcher (..))
+import Ouroboros.Network.AnchoredFragment (AnchoredFragment)
+import Ouroboros.Network.AnchoredFragment qualified as AF
 
 -- | A 'Watcher' that evaluates the GDD rule whenever necessary, writing the LoE
 -- fragment to @varLoEFrag@, and then triggering ChainSel to reprocess all
@@ -81,93 +92,97 @@ import qualified Ouroboros.Network.AnchoredFragment as AF
 -- Evaluating the GDD rule might cause peers to be disconnected if they have
 -- sparser chains than the best chain.
 gddWatcher ::
-     forall m blk peer.
-     ( IOLike m
-     , Ord peer
-     , LedgerSupportsProtocol blk
-     , HasHardForkHistory blk
-     )
-  => TopLevelConfig blk
-  -> Tracer m (TraceGDDEvent peer blk)
-  -> ChainDB m blk
-  -> DiffTime -- ^ How often to evaluate GDD. 0 means as soon as possible.
-              -- Otherwise, no faster than once every T seconds, where T is
-              -- the provided value.
-  -> STM m GsmState
-  -> STM m (Map peer (ChainSyncClientHandle m blk))
-     -- ^ The ChainSync handles. We trigger the GDD whenever our 'GsmState'
-     -- changes, and when 'Syncing', whenever any of the candidate fragments
-     -- changes. Also, we use this to disconnect from peers with insufficient
-     -- densities.
-  -> StrictTVar m (AnchoredFragment (HeaderWithTime blk))
-     -- ^ The LoE fragment. It starts at a (recent) immutable tip and ends at
-     -- the common intersection of the candidate fragments.
-  -> Watcher m
-       (GsmState, GDDStateView m blk peer)
-       (Map peer (StrictMaybe (WithOrigin SlotNo), Bool))
+  forall m blk peer.
+  ( IOLike m
+  , Ord peer
+  , LedgerSupportsProtocol blk
+  , HasHardForkHistory blk
+  ) =>
+  TopLevelConfig blk ->
+  Tracer m (TraceGDDEvent peer blk) ->
+  ChainDB m blk ->
+  -- | How often to evaluate GDD. 0 means as soon as possible.
+  -- Otherwise, no faster than once every T seconds, where T is
+  -- the provided value.
+  DiffTime ->
+  STM m GsmState ->
+  -- | The ChainSync handles. We trigger the GDD whenever our 'GsmState'
+  -- changes, and when 'Syncing', whenever any of the candidate fragments
+  -- changes. Also, we use this to disconnect from peers with insufficient
+  -- densities.
+  STM m (Map peer (ChainSyncClientHandle m blk)) ->
+  -- | The LoE fragment. It starts at a (recent) immutable tip and ends at
+  -- the common intersection of the candidate fragments.
+  StrictTVar m (AnchoredFragment (HeaderWithTime blk)) ->
+  Watcher
+    m
+    (GsmState, GDDStateView m blk peer)
+    (Map peer (StrictMaybe (WithOrigin SlotNo), Bool))
 gddWatcher cfg tracer chainDb rateLimit getGsmState getHandles varLoEFrag =
-    Watcher {
-        wInitial = Nothing
-      , wReader = (,) <$> getGsmState <*> getGDDStateView
-      , wFingerprint
-      , wNotify
-      }
-  where
-    getGDDStateView :: STM m (GDDStateView m blk peer)
-    getGDDStateView = do
-      curChain <- ChainDB.getCurrentChainWithTime chainDb
-      immutableLedgerSt <- ChainDB.getImmutableLedger chainDb
-      handles <- getHandles
-      states <- traverse (readTVar . cschState) handles
-      pure GDDStateView {
-          gddCtxCurChain = curChain
-        , gddCtxImmutableLedgerSt = immutableLedgerSt
-        , gddCtxKillActions = Map.map cschGDDKill handles
-        , gddCtxStates = states
-        }
+  Watcher
+    { wInitial = Nothing
+    , wReader = (,) <$> getGsmState <*> getGDDStateView
+    , wFingerprint
+    , wNotify
+    }
+ where
+  getGDDStateView :: STM m (GDDStateView m blk peer)
+  getGDDStateView = do
+    curChain <- ChainDB.getCurrentChainWithTime chainDb
+    immutableLedgerSt <- ChainDB.getImmutableLedger chainDb
+    handles <- getHandles
+    states <- traverse (readTVar . cschState) handles
+    pure
+      GDDStateView
        { gddCtxCurChain = curChain
        , gddCtxImmutableLedgerSt = immutableLedgerSt
        , gddCtxKillActions = Map.map cschGDDKill handles
        , gddCtxStates = states
        }
 
-    wFingerprint ::
-         (GsmState, GDDStateView m blk peer)
-      -> Map peer (StrictMaybe (WithOrigin SlotNo), Bool)
-    wFingerprint (gsmState, GDDStateView{gddCtxStates}) = case gsmState of
-        -- When we are in 'PreSyncing' (HAA not satisfied) or are caught up, we
-        -- don't have to run the GDD on changes to the candidate fragments.
-        -- (Maybe we want to do it in 'PreSycing'?)
-        PreSyncing -> Map.empty
-        CaughtUp -> Map.empty
-        -- When syncing, wake up regularly while headers are sent.
-        -- Watching csLatestSlot ensures that GDD is woken up when a peer is
-        -- sending headers even if they are after the forecast horizon. Note
-        -- that there can be some delay between the header being validated and
-        -- it becoming visible to GDD. It will be visible only when csLatestSlot
-        -- changes again or when csIdling changes, which is guaranteed to happen
-        -- eventually.
-        Syncing ->
-          Map.map (\css -> (csLatestSlot css, csIdling css)) gddCtxStates
-
-    wNotify :: (GsmState, GDDStateView m blk peer) -> m ()
-    wNotify (_gsmState, stateView) = do
-      t0 <- getMonotonicTime
-      loeFrag <- evaluateGDD cfg tracer stateView
-      oldLoEFrag <- atomically $ swapTVar varLoEFrag loeFrag
-      -- The chain selection only depends on the LoE tip, so there
-      -- is no point in retriggering it if the LoE tip hasn't changed.
-      when (AF.headHash oldLoEFrag /= AF.headHash loeFrag) $
-        void $ ChainDB.triggerChainSelectionAsync chainDb
-      tf <- getMonotonicTime
-      -- We limit the rate at which GDD is evaluated, otherwise it would
-      -- be called every time a new header is validated.
-      threadDelay $ rateLimit - diffTime tf t0
+  wFingerprint ::
+    (GsmState, GDDStateView m blk peer) ->
+    Map peer (StrictMaybe (WithOrigin SlotNo), Bool)
+  wFingerprint (gsmState, GDDStateView{gddCtxStates}) = case gsmState of
+    -- When we are in 'PreSyncing' (HAA not satisfied) or are caught up, we
+    -- don't have to run the GDD on changes to the candidate fragments.
+    -- (Maybe we want to do it in 'PreSyncing'?)
+    PreSyncing -> Map.empty
+    CaughtUp -> Map.empty
+    -- When syncing, wake up regularly while headers are sent.
+    -- Watching csLatestSlot ensures that GDD is woken up when a peer is
+    -- sending headers even if they are after the forecast horizon. Note
+    -- that there can be some delay between the header being validated and
+    -- it becoming visible to GDD. It will be visible only when csLatestSlot
+    -- changes again or when csIdling changes, which is guaranteed to happen
+    -- eventually.
+    Syncing ->
+      Map.map (\css -> (csLatestSlot css, csIdling css)) gddCtxStates
+
+  wNotify :: (GsmState, GDDStateView m blk peer) -> m ()
+  wNotify (_gsmState, stateView) = do
+    t0 <- getMonotonicTime
+    loeFrag <- evaluateGDD cfg tracer stateView
+    oldLoEFrag <- atomically $ swapTVar varLoEFrag loeFrag
+    -- The chain selection only depends on the LoE tip, so there
+    -- is no point in retriggering it if the LoE tip hasn't changed.
+    when (AF.headHash oldLoEFrag /= AF.headHash loeFrag) $
+      void $
+        ChainDB.triggerChainSelectionAsync chainDb
+    tf <- getMonotonicTime
+    -- We limit the rate at which GDD is evaluated, otherwise it would
+    -- be called every time a new header is validated.
+    threadDelay $ rateLimit - diffTime tf t0
 
 -- | Pure snapshot of the dynamic data the GDD operates on.
-data GDDStateView m blk peer = GDDStateView {
-    -- | The current chain selection
-    gddCtxCurChain :: AnchoredFragment (HeaderWithTime blk)
-    -- | The current ledger state
+data GDDStateView m blk peer = GDDStateView
+  { gddCtxCurChain :: AnchoredFragment (HeaderWithTime blk)
+  -- ^ The current chain selection
   , gddCtxImmutableLedgerSt :: ExtLedgerState blk EmptyMK
-    -- | Callbacks to disconnect from peers
-  , gddCtxKillActions :: Map peer (m ())
-  , gddCtxStates :: Map peer (ChainSyncState blk)
+  -- ^ The current ledger state
+  , gddCtxKillActions :: Map peer (m ())
+  -- ^ Callbacks to disconnect from peers
+  , gddCtxStates :: Map peer (ChainSyncState blk)
   }
 
 -- | Disconnect peers that lose density comparisons and recompute the LoE fragment.
@@ -177,76 +192,77 @@ data GDDStateView m blk peer = GDDStateView {
 -- disconnected.
 --
 -- Yields the new LoE fragment.
---
 evaluateGDD ::
-     forall m blk peer.
-     ( IOLike m
-     , Ord peer
-     , LedgerSupportsProtocol blk
-     , HasHardForkHistory blk
-     )
-  => TopLevelConfig blk
-  -> Tracer m (TraceGDDEvent peer blk)
-  -> GDDStateView m blk peer
-  -> m (AnchoredFragment (HeaderWithTime blk))
+  forall m blk peer.
+  ( IOLike m
+  , Ord peer
+  , LedgerSupportsProtocol blk
+  , HasHardForkHistory blk
+  ) =>
+  TopLevelConfig blk ->
+  Tracer m (TraceGDDEvent peer blk) ->
+  GDDStateView m blk peer ->
+  m (AnchoredFragment (HeaderWithTime blk))
 evaluateGDD cfg tracer stateView = do
-    let GDDStateView {
-            gddCtxCurChain = curChain
-          , gddCtxImmutableLedgerSt = immutableLedgerSt
-          , gddCtxKillActions = killActions
-          , gddCtxStates = states
-          } = stateView
-
-        (loeFrag, candidateSuffixes) =
-          sharedCandidatePrefix curChain candidates
-        candidates = Map.toList (csCandidate <$> states)
-
-        msgen :: Maybe GenesisWindow
-        -- This could also use 'runWithCachedSummary' if deemed desirable.
-        msgen = eitherToMaybe $ runQuery qry summary
-          where
-            -- We use the Genesis window for the first slot /after/ the common
-            -- intersection. In particular, when the intersection is the last
-            -- slot of an era, we will use the Genesis window of the next era,
-            -- as all slots in the Genesis window reside in that next era.
-            slot = succWithOrigin $ AF.headSlot loeFrag
-            qry = qryFromExpr $ slotToGenesisWindow slot
-            summary =
-              hardForkSummary
-                (configLedger cfg)
-                -- Due to the cross-chain lemma (Property 17.3 in the Consensus
-                -- report) one could also use the ledger state at the tip of our
-                -- selection here (in which case this should never return
-                -- 'Nothing'), but this is subtle and maybe not desirable.
-                --
-                -- In any case, the immutable ledger state will also
-                -- /eventually/ catch up to the LoE tip, so @msgen@ won't be
-                -- 'Nothing' forever.
-                (ledgerState immutableLedgerSt)
-
-    whenJust msgen $ \sgen -> do
-      let
-        (losingPeers, bounds) =
-          densityDisconnect sgen (configSecurityParam cfg) states candidateSuffixes loeFrag
-        loeHead = AF.castAnchor $ AF.headAnchor loeFrag
-
-        dropTimes = map (second (AF.mapAnchoredFragment hwtHeader))
-
-      traceWith tracer $ TraceGDDDebug $ GDDDebugInfo
-        { sgen
-        , curChain = AF.mapAnchoredFragment hwtHeader curChain
-        , bounds
-        , candidates = dropTimes candidates
-        , candidateSuffixes = dropTimes candidateSuffixes
-        , losingPeers
-        , loeHead
-        }
+  let GDDStateView
+        { gddCtxCurChain = curChain
+        , gddCtxImmutableLedgerSt = immutableLedgerSt
+        , gddCtxKillActions = killActions
+        , gddCtxStates = states
+        } = stateView
+
+      (loeFrag, candidateSuffixes) =
+        sharedCandidatePrefix curChain candidates
+      candidates = Map.toList (csCandidate <$> states)
+
+      msgen :: Maybe GenesisWindow
+      -- This could also use 'runWithCachedSummary' if deemed desirable.
+      msgen = eitherToMaybe $ runQuery qry summary
+       where
+        -- We use the Genesis window for the first slot /after/ the common
+        -- intersection. In particular, when the intersection is the last
+        -- slot of an era, we will use the Genesis window of the next era,
+        -- as all slots in the Genesis window reside in that next era.
+        slot = succWithOrigin $ AF.headSlot loeFrag
+        qry = qryFromExpr $ slotToGenesisWindow slot
+        summary =
+          hardForkSummary
+            (configLedger cfg)
+            -- Due to the cross-chain lemma (Property 17.3 in the Consensus
+            -- report) one could also use the ledger state at the tip of our
+            -- selection here (in which case this should never return
+            -- 'Nothing'), but this is subtle and maybe not desirable.
+            --
+            -- In any case, the immutable ledger state will also
+            -- /eventually/ catch up to the LoE tip, so @msgen@ won't be
+            -- 'Nothing' forever.
+            (ledgerState immutableLedgerSt)
+
+  whenJust msgen $ \sgen -> do
+    let
+      (losingPeers, bounds) =
+        densityDisconnect sgen (configSecurityParam cfg) states candidateSuffixes loeFrag
+      loeHead = AF.castAnchor $ AF.headAnchor loeFrag
+
+      dropTimes = map (second (AF.mapAnchoredFragment hwtHeader))
+
+    traceWith tracer $
+      TraceGDDDebug $
+        GDDDebugInfo
+          { sgen
+          , curChain = AF.mapAnchoredFragment hwtHeader curChain
+          , bounds
+          , candidates = dropTimes candidates
+          , candidateSuffixes = dropTimes candidateSuffixes
+          , losingPeers
+          , loeHead
+          }
 
-      whenJust (NE.nonEmpty losingPeers) $ \losingPeersNE -> do
-        for_ losingPeersNE $ \peer -> killActions Map.! peer
-        traceWith tracer $ TraceGDDDisconnected losingPeersNE
+    whenJust (NE.nonEmpty losingPeers) $ \losingPeersNE -> do
+      for_ losingPeersNE $ \peer -> killActions Map.! peer
+      traceWith tracer $ TraceGDDDisconnected losingPeersNE
 
-    pure loeFrag
+  pure loeFrag
 
 -- | Compute the fragment @loeFrag@ between the immutable tip and the
 -- earliest intersection between @curChain@ and any of the @candidates@.
@@ -262,21 +278,21 @@ sharedCandidatePrefix ::
   (AnchoredFragment (HeaderWithTime blk), [(peer, AnchoredFragment (HeaderWithTime blk))])
 sharedCandidatePrefix curChain candidates =
   second getCompose $
-  stripCommonPrefix (AF.castAnchor $ AF.anchor curChain) $
-  Compose immutableTipSuffixes
-  where
-    immutableTip = AF.anchorPoint curChain
-
-    splitAfterImmutableTip (peer, frag) =
-      case AF.splitAfterPoint frag immutableTip of
-        -- When there is no intersection, we assume the candidate fragment is
-        -- empty and anchored at the immutable tip.
-        -- See Note [CSJ truncates the candidate fragments].
-        Nothing -> (peer, AF.takeOldest 0 curChain)
-        Just (_, suffix) -> (peer, suffix)
-
-    immutableTipSuffixes =
-      map splitAfterImmutableTip candidates
+    stripCommonPrefix (AF.castAnchor $ AF.anchor curChain) $
+      Compose immutableTipSuffixes
+ where
+  immutableTip = AF.anchorPoint curChain
+
+  splitAfterImmutableTip (peer, frag) =
+    case AF.splitAfterPoint frag immutableTip of
+      -- When there is no intersection, we assume the candidate fragment is
+      -- empty and anchored at the immutable tip.
+      -- See Note [CSJ truncates the candidate fragments].
+      Nothing -> (peer, AF.takeOldest 0 curChain)
+      Just (_, suffix) -> (peer, suffix)
+
+  immutableTipSuffixes =
+    map splitAfterImmutableTip candidates
 
 -- Note [CSJ truncates the candidate fragments]
 -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -304,15 +320,15 @@ sharedCandidatePrefix curChain candidates =
 -- peer if no such fragment can be established.
 --
-data DensityBounds blk =
-  DensityBounds {
-    clippedFragment :: AnchoredFragment (Header blk),
-    offersMoreThanK :: Bool,
-    lowerBound :: Word64,
-    upperBound :: Word64,
-    hasBlockAfter :: Bool,
-    latestSlot :: WithOrigin SlotNo,
-    idling :: Bool
+data DensityBounds blk
+  = DensityBounds
+  { clippedFragment :: AnchoredFragment (Header blk)
+  , offersMoreThanK :: Bool
+  , lowerBound :: Word64
+  , upperBound :: Word64
+  , hasBlockAfter :: Bool
+  , latestSlot :: WithOrigin SlotNo
+  , idling :: Bool
   }
 
 deriving stock instance (Show (Header blk), GetHeader blk) => Show (DensityBounds blk)
@@ -338,120 +354,131 @@ deriving stock instance (Show (Header blk), GetHeader blk) => Show (DensityBound
 -- the genesis window or later. Either of them should be disconnected, even if
 -- both of them are serving adversarial chains. See
 -- "Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping" for more details.
---
 densityDisconnect ::
-     ( Ord peer
-     , LedgerSupportsProtocol blk
-     )
-  => GenesisWindow
-  -> SecurityParam
-  -> Map peer (ChainSyncState blk)
-  -> [(peer, AnchoredFragment (HeaderWithTime blk))]
-  -> AnchoredFragment (HeaderWithTime blk)
-  -> ([peer], [(peer, DensityBounds blk)])
+  ( Ord peer
+  , LedgerSupportsProtocol blk
+  ) =>
+  GenesisWindow ->
+  SecurityParam ->
+  Map peer (ChainSyncState blk) ->
+  [(peer, AnchoredFragment (HeaderWithTime blk))] ->
+  AnchoredFragment (HeaderWithTime blk) ->
+  ([peer], [(peer, DensityBounds blk)])
 densityDisconnect (GenesisWindow sgen) (SecurityParam k) states candidateSuffixes loeFrag =
   (losingPeers, densityBounds)
-  where
-    densityBounds = do
-      (peer, candidateSuffix) <- candidateSuffixes
-      let (clippedFragment, _) =
-            AF.splitAtSlot firstSlotAfterGenesisWindow candidateSuffix
-      state <- maybeToList (states Map.!? peer)
-      -- Skip peers that haven't sent any headers yet.
-      -- They should be disconnected by timeouts instead.
-      latestSlot <- toList (csLatestSlot state)
-      let idling = csIdling state
-
-          -- Is there a block after the end of the Genesis window?
-          hasBlockAfter =
-            max (AF.headSlot candidateSuffix) latestSlot
+ where
+  densityBounds = do
+    (peer, candidateSuffix) <- candidateSuffixes
+    let (clippedFragment, _) =
+          AF.splitAtSlot firstSlotAfterGenesisWindow candidateSuffix
+    state <- maybeToList (states Map.!? peer)
+    -- Skip peers that haven't sent any headers yet.
+    -- They should be disconnected by timeouts instead.
+    latestSlot <- toList (csLatestSlot state)
+    let idling = csIdling state
+
+        -- Is there a block after the end of the Genesis window?
+        hasBlockAfter =
+          max (AF.headSlot candidateSuffix) latestSlot
             >= NotOrigin firstSlotAfterGenesisWindow
 
-          -- If the slot of the latest header we know of is _after_ the end of
-          -- the Genesis window (either because the candidate fragment extends
-          -- beyond it or because we are waiting to validate a header beyond the
-          -- forecast horizon that we already received), there can be no headers
-          -- in between and 'potentialSlots' is 0.
-          potentialSlots =
-            if hasBlockAfter then 0
+        -- If the slot of the latest header we know of is _after_ the end of
+        -- the Genesis window (either because the candidate fragment extends
+        -- beyond it or because we are waiting to validate a header beyond the
+        -- forecast horizon that we already received), there can be no headers
+        -- in between and 'potentialSlots' is 0.
+        potentialSlots =
+          if hasBlockAfter
+            then 0
             else unknownTrailingSlots
 
-          -- Number of trailing slots in the genesis window that could have
-          -- headers which haven't been sent yet
-          unknownTrailingSlots = unSlotNo $
+        -- Number of trailing slots in the genesis window that could have
+        -- headers which haven't been sent yet
+        unknownTrailingSlots =
+          unSlotNo $
            -- cannot underflow as the fragment is clipped to the genesis window
            firstSlotAfterGenesisWindow - succWithOrigin (AF.headSlot clippedFragment)
 
-          -- The number of blocks within the Genesis window we know with certainty
-          lowerBound = fromIntegral $ AF.length clippedFragment
-
-          upperBound = lowerBound + potentialSlots
-
-          -- The number of blocks we know to be on the candidate chain after
-          -- the intersection, not limited to the Genesis window.
-          totalBlockCount = fromIntegral (AF.length candidateSuffix)
-
-          -- Does the peer have more than k known blocks in _total_ after the intersection?
-          -- If not, it is not qualified to compete by density (yet).
-          offersMoreThanK = totalBlockCount > unNonZero k
-
-      pure (peer, DensityBounds { clippedFragment = AF.mapAnchoredFragment hwtHeader clippedFragment
-                                , offersMoreThanK
                                , lowerBound
                                , upperBound
                                , hasBlockAfter
                                , latestSlot
                                , idling})
-
-    losingPeers = nubOrd $ densityBounds >>= \
-      (peer0 , DensityBounds { clippedFragment = frag0
                             , lowerBound = lb0
                             , upperBound = ub0
                             , hasBlockAfter = hasBlockAfter0
                             , idling = idling0
                             }) -> do
-        (_peer1, DensityBounds {clippedFragment = frag1, offersMoreThanK, lowerBound = lb1 }) <-
-          densityBounds
-        -- Don't disconnect peer0 if it sent no headers after the intersection yet
-        -- and it is not idling.
- -- - -- See Note [Chain disagreement] - -- - guard $ idling0 || not (AF.null frag0) || hasBlockAfter0 - -- ensure that the two peer fragments don't share any - -- headers after the LoE - guard $ AF.lastPoint frag0 /= AF.lastPoint frag1 - -- peer1 offers more than k blocks or peer0 has sent all headers in the - -- genesis window after the intersection (idling or not) - -- - -- Checking for offersMoreThanK is important to avoid disconnecting - -- competing honest peers when the syncing node is nearly caught up. - guard $ offersMoreThanK || lb0 == ub0 - -- peer1 has the same or better density than peer0 - -- If peer0 is idling, we assume no more headers will be sent. - -- - -- Having the same density is enough to disconnect peer0, as the honest - -- chain is expected to have a strictly higher density than all of the - -- other chains. - -- - -- This matters to ChainSync jumping, where adversarial dynamo and - -- objector could offer chains of equal density. - guard $ lb1 >= (if idling0 then lb0 else ub0) - - -- We disconnect peer0 if there is at least another peer peer1 with a - -- chain which is at least as good, and peer0 is either idling or there is - -- no extension to peer0's chain that can make it better than peer1's, and - -- peer1's has more than k headers or peer0 has sent all its headers in - -- the genesis window anchored at the intersection. - -- - -- A chain is "as good as another" if it has at least as many headers in - -- the genesis window anchored at the intersection. - pure peer0 - - loeIntersectionSlot = AF.headSlot loeFrag - - firstSlotAfterGenesisWindow = - succWithOrigin loeIntersectionSlot + SlotNo sgen + -- The number of blocks within the Genesis window we know with certainty + lowerBound = fromIntegral $ AF.length clippedFragment + + upperBound = lowerBound + potentialSlots + + -- The number of blocks we know to be on the candidate chain after + -- the intersection, not limited to the Genesis window. 
+ totalBlockCount = fromIntegral (AF.length candidateSuffix) + + -- Does the peer have more than k known blocks in _total_ after the intersection? + -- If not, it is not qualified to compete by density (yet). + offersMoreThanK = totalBlockCount > unNonZero k + + pure + ( peer + , DensityBounds + { clippedFragment = AF.mapAnchoredFragment hwtHeader clippedFragment + , offersMoreThanK + , lowerBound + , upperBound + , hasBlockAfter + , latestSlot + , idling + } + ) + + losingPeers = + nubOrd $ + densityBounds + >>= \( peer0 + , DensityBounds + { clippedFragment = frag0 + , lowerBound = lb0 + , upperBound = ub0 + , hasBlockAfter = hasBlockAfter0 + , idling = idling0 + } + ) -> do + (_peer1, DensityBounds{clippedFragment = frag1, offersMoreThanK, lowerBound = lb1}) <- + densityBounds + -- Don't disconnect peer0 if it sent no headers after the intersection yet + -- and it is not idling. + -- + -- See Note [Chain disagreement] + -- + guard $ idling0 || not (AF.null frag0) || hasBlockAfter0 + -- ensure that the two peer fragments don't share any + -- headers after the LoE + guard $ AF.lastPoint frag0 /= AF.lastPoint frag1 + -- peer1 offers more than k blocks or peer0 has sent all headers in the + -- genesis window after the intersection (idling or not) + -- + -- Checking for offersMoreThanK is important to avoid disconnecting + -- competing honest peers when the syncing node is nearly caught up. + guard $ offersMoreThanK || lb0 == ub0 + -- peer1 has the same or better density than peer0 + -- If peer0 is idling, we assume no more headers will be sent. + -- + -- Having the same density is enough to disconnect peer0, as the honest + -- chain is expected to have a strictly higher density than all of the + -- other chains. + -- + -- This matters to ChainSync jumping, where adversarial dynamo and + -- objector could offer chains of equal density. 
+ guard $ lb1 >= (if idling0 then lb0 else ub0) + + -- We disconnect peer0 if there is at least another peer peer1 with a + -- chain which is at least as good, and peer0 is either idling or there is + -- no extension to peer0's chain that can make it better than peer1's, and + -- peer1's has more than k headers or peer0 has sent all its headers in + -- the genesis window anchored at the intersection. + -- + -- A chain is "as good as another" if it has at least as many headers in + -- the genesis window anchored at the intersection. + pure peer0 + + loeIntersectionSlot = AF.headSlot loeFrag + + firstSlotAfterGenesisWindow = + succWithOrigin loeIntersectionSlot + SlotNo sgen -- Note [Chain disagreement] -- ~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -476,27 +503,32 @@ densityDisconnect (GenesisWindow sgen) (SecurityParam k) states candidateSuffixe -- after the intersection. If both chains agree on the next header after -- the intersection, we don't disconnect peer1 either. -data GDDDebugInfo peer blk = - GDDDebugInfo { - bounds :: [(peer, DensityBounds blk)], - curChain :: AnchoredFragment (Header blk), - candidates :: [(peer, AnchoredFragment (Header blk))], - candidateSuffixes :: [(peer, AnchoredFragment (Header blk))], - losingPeers :: [peer], - loeHead :: AF.Anchor (Header blk), - sgen :: GenesisWindow +data GDDDebugInfo peer blk + = GDDDebugInfo + { bounds :: [(peer, DensityBounds blk)] + , curChain :: AnchoredFragment (Header blk) + , candidates :: [(peer, AnchoredFragment (Header blk))] + , candidateSuffixes :: [(peer, AnchoredFragment (Header blk))] + , losingPeers :: [peer] + , loeHead :: AF.Anchor (Header blk) + , sgen :: GenesisWindow } deriving stock instance - ( GetHeader blk, Show (Header blk), Show peer - ) => Show (GDDDebugInfo peer blk) - -data TraceGDDEvent peer blk = - -- | The GDD disconnected from the given peers due to insufficient density. 
+ ( GetHeader blk + , Show (Header blk) + , Show peer + ) => + Show (GDDDebugInfo peer blk) + +data TraceGDDEvent peer blk + = -- | The GDD disconnected from the given peers due to insufficient density. TraceGDDDisconnected (NonEmpty peer) - | - TraceGDDDebug (GDDDebugInfo peer blk) + | TraceGDDDebug (GDDDebugInfo peer blk) deriving stock instance - ( GetHeader blk, Show (Header blk), Show peer - ) => Show (TraceGDDEvent peer blk) + ( GetHeader blk + , Show (Header blk) + , Show peer + ) => + Show (TraceGDDEvent peer blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Abstract.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Abstract.hs index a2b6d069df..5b54cc9a06 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Abstract.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Abstract.hs @@ -2,14 +2,14 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.HardFork.Abstract ( - HasHardForkHistory (..) +module Ouroboros.Consensus.HardFork.Abstract + ( HasHardForkHistory (..) 
, neverForksHardForkSummary ) where -import Data.Kind (Type) -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.Ledger.Abstract +import Data.Kind (Type) +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.Ledger.Abstract class HasHardForkHistory blk where -- | Type level description of the hard fork shape @@ -20,7 +20,7 @@ class HasHardForkHistory blk where -- in the hard fork, e.g., we might have something like -- -- > '[ByronBlock, ShelleyBlock, GoguenBlock] - type family HardForkIndices blk :: [Type] + type HardForkIndices blk :: [Type] -- | Summary of the hard fork state -- @@ -48,9 +48,10 @@ class HasHardForkHistory blk where -- ledgers, then the 'LedgerConfig' here must indeed already contain timing -- information, and so this function becomes little more than a projection -- (indeed, in this case the 'LedgerState' should be irrelevant). - hardForkSummary :: LedgerConfig blk - -> LedgerState blk mk - -> HardFork.Summary (HardForkIndices blk) + hardForkSummary :: + LedgerConfig blk -> + LedgerState blk mk -> + HardFork.Summary (HardForkIndices blk) -- | Helper function that can be used to define 'hardForkSummary' -- @@ -60,11 +61,12 @@ class HasHardForkHistory blk where -- blocks such as 'ShelleyBlock' their own 'HasHardForkHistory' instance so that -- we can run them as independent ledgers (in addition to being run with the -- hard fork combinator). 
-neverForksHardForkSummary :: (LedgerConfig blk -> HardFork.EraParams) - -> LedgerConfig blk - -> LedgerState blk mk - -> HardFork.Summary '[blk] +neverForksHardForkSummary :: + (LedgerConfig blk -> HardFork.EraParams) -> + LedgerConfig blk -> + LedgerState blk mk -> + HardFork.Summary '[blk] neverForksHardForkSummary getParams cfg _st = - HardFork.neverForksSummary eraEpochSize eraSlotLength eraGenesisWin - where - HardFork.EraParams{..} = getParams cfg + HardFork.neverForksSummary eraEpochSize eraSlotLength eraGenesisWin + where + HardFork.EraParams{..} = getParams cfg diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator.hs index 843b294ffa..1d9495a624 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator.hs @@ -5,76 +5,108 @@ -- Intended for unqualified import module Ouroboros.Consensus.HardFork.Combinator (module X) where -import Data.SOP.Functors as X (Product2 (..)) -import Data.SOP.InPairs as X (InPairs (..)) -import Data.SOP.Match as X (Mismatch (..)) -import Data.SOP.Telescope as X (Telescope (..)) -import Ouroboros.Consensus.HardFork.Combinator.Abstract as X -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras as X - (MismatchEraInfo (..), OneEraApplyTxErr (..), - OneEraBlock (..), OneEraGenTx (..), OneEraGenTxId (..), - OneEraHash (..), OneEraHeader (..), OneEraTipInfo (..), - PerEraBlockConfig (..), PerEraCodecConfig (..), - PerEraConsensusConfig (..), PerEraLedgerConfig (..), - PerEraStorageConfig (..)) -import Ouroboros.Consensus.HardFork.Combinator.Basics as X -import Ouroboros.Consensus.HardFork.Combinator.Block as X -import Ouroboros.Consensus.HardFork.Combinator.Forging as X - (HardForkForgeStateInfo (..), hardForkBlockForging) -import 
Ouroboros.Consensus.HardFork.Combinator.Info as X -import Ouroboros.Consensus.HardFork.Combinator.InjectTxs as X - (InjectTx, InjectValidatedTx, cannotInjectTx, - cannotInjectValidatedTx, pattern InjectTx, - pattern InjectValidatedTx) -import Ouroboros.Consensus.HardFork.Combinator.Ledger as X -import Ouroboros.Consensus.HardFork.Combinator.Ledger.CommonProtocolParams as X () -import Ouroboros.Consensus.HardFork.Combinator.Ledger.PeerSelection as X () -import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query as X -import Ouroboros.Consensus.HardFork.Combinator.Mempool as X -import Ouroboros.Consensus.HardFork.Combinator.Node as X () -import Ouroboros.Consensus.HardFork.Combinator.Node.DiffusionPipelining as X () -import Ouroboros.Consensus.HardFork.Combinator.Node.InitStorage as X () -import Ouroboros.Consensus.HardFork.Combinator.Node.Metrics as X () -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig as X -import Ouroboros.Consensus.HardFork.Combinator.Protocol as X -import Ouroboros.Consensus.HardFork.Combinator.Protocol.ChainSel as X -import Ouroboros.Consensus.HardFork.Combinator.State as X - (HardForkState (..), initHardForkState) -import Ouroboros.Consensus.HardFork.Combinator.Translation as X +import Data.SOP.Functors as X (Product2 (..)) +import Data.SOP.InPairs as X (InPairs (..)) +import Data.SOP.Match as X (Mismatch (..)) +import Data.SOP.Telescope as X (Telescope (..)) +import Ouroboros.Consensus.HardFork.Combinator.Abstract as X +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras as X + ( MismatchEraInfo (..) + , OneEraApplyTxErr (..) + , OneEraBlock (..) + , OneEraGenTx (..) + , OneEraGenTxId (..) + , OneEraHash (..) + , OneEraHeader (..) + , OneEraTipInfo (..) + , PerEraBlockConfig (..) + , PerEraCodecConfig (..) + , PerEraConsensusConfig (..) + , PerEraLedgerConfig (..) + , PerEraStorageConfig (..) 
+ ) +import Ouroboros.Consensus.HardFork.Combinator.Basics as X +import Ouroboros.Consensus.HardFork.Combinator.Block as X +import Ouroboros.Consensus.HardFork.Combinator.Forging as X + ( HardForkForgeStateInfo (..) + , hardForkBlockForging + ) +import Ouroboros.Consensus.HardFork.Combinator.Info as X +import Ouroboros.Consensus.HardFork.Combinator.InjectTxs as X + ( InjectTx + , InjectValidatedTx + , cannotInjectTx + , cannotInjectValidatedTx + , pattern InjectTx + , pattern InjectValidatedTx + ) +import Ouroboros.Consensus.HardFork.Combinator.Ledger as X +import Ouroboros.Consensus.HardFork.Combinator.Ledger.CommonProtocolParams as X () +import Ouroboros.Consensus.HardFork.Combinator.Ledger.PeerSelection as X () +import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query as X +import Ouroboros.Consensus.HardFork.Combinator.Mempool as X +import Ouroboros.Consensus.HardFork.Combinator.Node as X () +import Ouroboros.Consensus.HardFork.Combinator.Node.DiffusionPipelining as X () +import Ouroboros.Consensus.HardFork.Combinator.Node.InitStorage as X () +import Ouroboros.Consensus.HardFork.Combinator.Node.Metrics as X () +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig as X +import Ouroboros.Consensus.HardFork.Combinator.Protocol as X +import Ouroboros.Consensus.HardFork.Combinator.Protocol.ChainSel as X +import Ouroboros.Consensus.HardFork.Combinator.State as X + ( HardForkState (..) + , initHardForkState + ) +import Ouroboros.Consensus.HardFork.Combinator.Translation as X -- Omitted from this export: -- + -- * "Ouroboros.Consensus.HardFork.Combinator.State" + -- This defines 'HardForkState', a wrapper around a 'Telescope'. We use this -- to define 'HardForkLedgerState', 'HardForkLedgerView' as well as -- 'HardForkChainDepState', but the type itself should mostly be internal to -- the hard fork combinator. We do export the constructor for it, as this may -- be required for serialisation code. 
-- + -- * "module Ouroboros.Consensus.HardFork.Combinator.State.Infra" + -- This module is only separate from @.State@ to avoid some cyclic module -- dependencies. Most modules internally to the HFC should import from -- @.State@ directly, and outside of the HFC not even @.State@ should be -- needed (see above). -- + -- * "Ouroboros.Consensus.HardFork.Combinator.Protocol.LedgerView" + -- This is internal to "Ouroboros.Consensus.HardFork.Combinator.Protocol" -- + -- * "Ouroboros.Consensus.HardFork.Combinator.Protocol.State" + -- This is internal to "Ouroboros.Consensus.HardFork.Combinator.Protocol" -- + -- * "Ouroboros.Consensus.HardFork.Combinator.Degenerate" + -- This defines 'DegenFork', which is useful as a test case that the hard -- fork combinator when applied to a single block results in a system -- that is equivalent to just using that single block directly. -- + -- * "Ouroboros.Consensus.HardFork.Combinator.Embed.Unary" + -- Mostly used in combination with 'DegenFork'. -- + -- * "Ouroboros.Consensus.HardFork.Combinator.Embed.Nary" + -- Used for injection into n-ary sums. Alternative to @Unary@. -- + -- * Most of @Ouroboros.Consensus.HardFork.Combinator.SingleEra.*@ + -- These types are primarily used internally to define the HFC types. -- In a few cases some of the HFC types /are/ types from the SingleEra -- module hierarchy directly; in such cases, we should export them from @@ -82,7 +114,9 @@ import Ouroboros.Consensus.HardFork.Combinator.Translation as X -- TODO: Currently we only do this for @SingleEra.Info@, but we might also -- need to do it for other types. -- + -- * Ouroboros.Consensus.HardFork.Combinator.Util.* + -- We omit most utility functions and types, which are for internal use. 
Some -- exceptions are the definition of InPairs, which will be required to define -- translations, and the definition of a Telescope, which might be needed to diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract.hs index c0137105f6..d10610f74f 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract.hs @@ -1,11 +1,12 @@ -module Ouroboros.Consensus.HardFork.Combinator.Abstract ( - module X +module Ouroboros.Consensus.HardFork.Combinator.Abstract + ( module X + -- * Re-exports , IsNonEmpty (..) , ProofNonEmpty (..) ) where -import Data.SOP.NonEmpty -import Ouroboros.Consensus.HardFork.Combinator.Abstract.CanHardFork as X -import Ouroboros.Consensus.HardFork.Combinator.Abstract.NoHardForks as X -import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock as X +import Data.SOP.NonEmpty +import Ouroboros.Consensus.HardFork.Combinator.Abstract.CanHardFork as X +import Ouroboros.Consensus.HardFork.Combinator.Abstract.NoHardForks as X +import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock as X diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/CanHardFork.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/CanHardFork.hs index 399f055cc5..7c416fa823 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/CanHardFork.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/CanHardFork.hs @@ -6,40 +6,43 @@ module Ouroboros.Consensus.HardFork.Combinator.Abstract.CanHardFork (CanHardFork (..)) where -import Data.Measure (Measure) -import Data.SOP.Constraint -import 
Data.SOP.Functors (Product2) -import Data.SOP.InPairs (InPairs, RequiringBoth) -import qualified Data.SOP.InPairs as InPairs -import Data.SOP.NonEmpty -import qualified Data.SOP.Strict as SOP -import Data.SOP.Tails (Tails) -import qualified Data.SOP.Tails as Tails -import Data.Typeable -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock -import Ouroboros.Consensus.HardFork.Combinator.InjectTxs -import Ouroboros.Consensus.HardFork.Combinator.Protocol.ChainSel -import Ouroboros.Consensus.HardFork.Combinator.Translation -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.Measure (Measure) +import Data.SOP.Constraint +import Data.SOP.Functors (Product2) +import Data.SOP.InPairs (InPairs, RequiringBoth) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.NonEmpty +import Data.SOP.Strict qualified as SOP +import Data.SOP.Tails (Tails) +import Data.SOP.Tails qualified as Tails +import Data.Typeable +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock +import Ouroboros.Consensus.HardFork.Combinator.InjectTxs +import Ouroboros.Consensus.HardFork.Combinator.Protocol.ChainSel +import Ouroboros.Consensus.HardFork.Combinator.Translation +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- CanHardFork -------------------------------------------------------------------------------} -class ( All SingleEraBlock xs - , All (Compose HasLedgerTables LedgerState) xs - , All (Compose HasTickedLedgerTables LedgerState) xs - , Typeable xs - , IsNonEmpty xs - , Measure (HardForkTxMeasure xs) - , HasByteSize (HardForkTxMeasure xs) - , NoThunks (HardForkTxMeasure xs) - , Show (HardForkTxMeasure xs) - , 
TxMeasureMetrics (HardForkTxMeasure xs) - ) => CanHardFork xs where +class + ( All SingleEraBlock xs + , All (Compose HasLedgerTables LedgerState) xs + , All (Compose HasTickedLedgerTables LedgerState) xs + , Typeable xs + , IsNonEmpty xs + , Measure (HardForkTxMeasure xs) + , HasByteSize (HardForkTxMeasure xs) + , NoThunks (HardForkTxMeasure xs) + , Show (HardForkTxMeasure xs) + , TxMeasureMetrics (HardForkTxMeasure xs) + ) => + CanHardFork xs + where -- | A measure that can accurately represent the 'TxMeasure' of any era. -- -- Usually, this can simply be the union of the sets of components of each @@ -48,8 +51,8 @@ class ( All SingleEraBlock xs type HardForkTxMeasure xs hardForkEraTranslation :: EraTranslation xs - hardForkChainSel :: Tails AcrossEraSelection xs - hardForkInjectTxs :: + hardForkChainSel :: Tails AcrossEraSelection xs + hardForkInjectTxs :: InPairs ( RequiringBoth WrapLedgerConfig @@ -68,7 +71,7 @@ instance SingleEraBlock blk => CanHardFork '[blk] where type HardForkTxMeasure '[blk] = TxMeasure blk hardForkEraTranslation = trivialEraTranslation - hardForkChainSel = Tails.mk1 - hardForkInjectTxs = InPairs.mk1 + hardForkChainSel = Tails.mk1 + hardForkInjectTxs = InPairs.mk1 hardForkInjTxMeasure (SOP.Z (WrapTxMeasure x)) = x diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/NoHardForks.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/NoHardForks.hs index 575e257307..4a64294101 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/NoHardForks.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/NoHardForks.hs @@ -1,16 +1,16 @@ -module Ouroboros.Consensus.HardFork.Combinator.Abstract.NoHardForks ( - ImmutableEraParams (..) +module Ouroboros.Consensus.HardFork.Combinator.Abstract.NoHardForks + ( ImmutableEraParams (..) , NoHardForks (..) 
, immutableEpochInfo ) where -import Cardano.Slotting.EpochInfo -import Data.Functor.Identity (runIdentity) -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.Ledger.Abstract +import Cardano.Slotting.EpochInfo +import Data.Functor.Identity (runIdentity) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.History as History +import Ouroboros.Consensus.Ledger.Abstract {------------------------------------------------------------------------------- Blocks that don't /have/ any transitions @@ -38,17 +38,20 @@ class (SingleEraBlock blk, ImmutableEraParams blk) => NoHardForks blk where -- | Construct partial ledger config from full ledger config -- -- See also 'toPartialConsensusConfig' - toPartialLedgerConfig :: proxy blk - -> LedgerConfig blk -> PartialLedgerConfig blk + toPartialLedgerConfig :: + proxy blk -> + LedgerConfig blk -> + PartialLedgerConfig blk -immutableEpochInfo :: (Monad m, ImmutableEraParams blk) - => TopLevelConfig blk - -> EpochInfo m +immutableEpochInfo :: + (Monad m, ImmutableEraParams blk) => + TopLevelConfig blk -> + EpochInfo m immutableEpochInfo cfg = - hoistEpochInfo (pure . runIdentity) - $ fixedEpochInfo - (History.eraEpochSize params) - (History.eraSlotLength params) - where - params :: EraParams - params = immutableEraParams cfg + hoistEpochInfo (pure . 
runIdentity) $ + fixedEpochInfo + (History.eraEpochSize params) + (History.eraSlotLength params) + where + params :: EraParams + params = immutableEraParams cfg diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/SingleEraBlock.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/SingleEraBlock.hs index 0dd63e45a4..17152e1361 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/SingleEraBlock.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Abstract/SingleEraBlock.hs @@ -6,11 +6,12 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock ( - -- * Single era block +module Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock + ( -- * Single era block SingleEraBlock (..) , proxySingle , singleEraTransition' + -- * Era index , EraIndex (..) 
, eraIndexEmpty @@ -21,70 +22,72 @@ module Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock ( , eraIndexZero ) where -import Codec.Serialise -import Data.Either (isRight) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Index -import Data.SOP.Match -import Data.SOP.Strict -import qualified Data.Text as Text -import Data.Void -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.HardFork.Combinator.Info -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.History (Bound, EraParams) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.CommonProtocolParams -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsPeerSelection -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util.Condense +import Codec.Serialise +import Data.Either (isRight) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Index +import Data.SOP.Match +import Data.SOP.Strict +import Data.Text qualified as Text +import Data.Void +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.HardFork.Combinator.Info +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.History (Bound, EraParams) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import 
Ouroboros.Consensus.Ledger.SupportsPeerSelection +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- SingleEraBlock -------------------------------------------------------------------------------} -- | Blocks from which we can assemble a hard fork -class ( LedgerSupportsProtocol blk - , InspectLedger blk - , LedgerSupportsMempool blk - , ConvertRawTxId (GenTx blk) - , BlockSupportsLedgerQuery blk - , HasPartialConsensusConfig (BlockProtocol blk) - , HasPartialLedgerConfig blk - , ConvertRawHash blk - , ReconstructNestedCtxt Header blk - , CommonProtocolParams blk - , LedgerSupportsPeerSelection blk - , ConfigSupportsNode blk - , NodeInitStorage blk - , BlockSupportsDiffusionPipelining blk - , BlockSupportsMetrics blk - , SerialiseNodeToClient blk (PartialLedgerConfig blk) - -- LedgerTables - , CanStowLedgerTables (LedgerState blk) - , HasLedgerTables (LedgerState blk) - , HasLedgerTables (Ticked (LedgerState blk)) - -- Instances required to support testing - , Eq (GenTx blk) - , Eq (Validated (GenTx blk)) - , Eq (ApplyTxErr blk) - , Show blk - , Show (Header blk) - , Show (CannotForge blk) - , Show (ForgeStateInfo blk) - , Show (ForgeStateUpdateError blk) - ) => SingleEraBlock blk where - +class + ( LedgerSupportsProtocol blk + , InspectLedger blk + , LedgerSupportsMempool blk + , ConvertRawTxId (GenTx blk) + , BlockSupportsLedgerQuery blk + , HasPartialConsensusConfig (BlockProtocol blk) + , HasPartialLedgerConfig blk + , ConvertRawHash blk + , ReconstructNestedCtxt Header blk + , CommonProtocolParams blk + , LedgerSupportsPeerSelection blk + , ConfigSupportsNode blk + , NodeInitStorage blk + , BlockSupportsDiffusionPipelining blk + , BlockSupportsMetrics blk + , 
SerialiseNodeToClient blk (PartialLedgerConfig blk) + , -- LedgerTables + CanStowLedgerTables (LedgerState blk) + , HasLedgerTables (LedgerState blk) + , HasLedgerTables (Ticked (LedgerState blk)) + , -- Instances required to support testing + Eq (GenTx blk) + , Eq (Validated (GenTx blk)) + , Eq (ApplyTxErr blk) + , Show blk + , Show (Header blk) + , Show (CannotForge blk) + , Show (ForgeStateInfo blk) + , Show (ForgeStateUpdateError blk) + ) => + SingleEraBlock blk + where -- | Era transition -- -- This should only report the transition point once it is stable (rollback @@ -93,65 +96,74 @@ class ( LedgerSupportsProtocol blk -- Since we need this to construct the 'HardForkSummary' (and hence the -- 'EpochInfo'), this takes the /partial/ config, not the full config -- (or we'd end up with a catch-22). - singleEraTransition :: PartialLedgerConfig blk - -> EraParams -- ^ Current era parameters - -> Bound -- ^ Start of this era - -> LedgerState blk mk - -> Maybe EpochNo + singleEraTransition :: + PartialLedgerConfig blk -> + -- | Current era parameters + EraParams -> + -- | Start of this era + Bound -> + LedgerState blk mk -> + Maybe EpochNo -- | Era information (for use in error messages) - singleEraInfo :: proxy blk -> SingleEraInfo blk + singleEraInfo :: proxy blk -> SingleEraInfo blk proxySingle :: Proxy SingleEraBlock proxySingle = Proxy -singleEraTransition' :: SingleEraBlock blk - => WrapPartialLedgerConfig blk - -> EraParams - -> Bound - -> LedgerState blk mk -> Maybe EpochNo +singleEraTransition' :: + SingleEraBlock blk => + WrapPartialLedgerConfig blk -> + EraParams -> + Bound -> + LedgerState blk mk -> + Maybe EpochNo singleEraTransition' = singleEraTransition . 
unwrapPartialLedgerConfig {------------------------------------------------------------------------------- Era index -------------------------------------------------------------------------------} -newtype EraIndex xs = EraIndex { - getEraIndex :: NS (K ()) xs - } +newtype EraIndex xs = EraIndex + { getEraIndex :: NS (K ()) xs + } instance Eq (EraIndex xs) where EraIndex era == EraIndex era' = isRight (matchNS era era') instance All SingleEraBlock xs => Show (EraIndex xs) where show = hcollapse . hcmap proxySingle getEraName . getEraIndex - where - getEraName :: forall blk. SingleEraBlock blk - => K () blk -> K String blk - getEraName _ = - K - . (\name -> "<" <> name <> ">") - . Text.unpack - . singleEraName - $ singleEraInfo (Proxy @blk) + where + getEraName :: + forall blk. + SingleEraBlock blk => + K () blk -> K String blk + getEraName _ = + K + . (\name -> "<" <> name <> ">") + . Text.unpack + . singleEraName + $ singleEraInfo (Proxy @blk) instance All SingleEraBlock xs => Condense (EraIndex xs) where condense = hcollapse . hcmap proxySingle getEraName . getEraIndex - where - getEraName :: forall blk. SingleEraBlock blk - => K () blk -> K String blk - getEraName _ = - K - . Text.unpack - . singleEraName - $ singleEraInfo (Proxy @blk) + where + getEraName :: + forall blk. + SingleEraBlock blk => + K () blk -> K String blk + getEraName _ = + K + . Text.unpack + . singleEraName + $ singleEraInfo (Proxy @blk) instance SListI xs => Serialise (EraIndex xs) where encode = encode . nsToIndex .
getEraIndex decode = do idx <- decode case nsFromIndex idx of - Nothing -> fail $ "EraIndex: invalid index " <> show idx + Nothing -> fail $ "EraIndex: invalid index " <> show idx Just eraIndex -> return (EraIndex eraIndex) eraIndexEmpty :: EraIndex '[] -> Void diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/AcrossEras.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/AcrossEras.hs index 6e5f55da74..4745cd0bcb 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/AcrossEras.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/AcrossEras.hs @@ -15,19 +15,20 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.AcrossEras ( - -- * Value for /each/ era +module Ouroboros.Consensus.HardFork.Combinator.AcrossEras + ( -- * Value for /each/ era PerEraBlockConfig (..) , PerEraChainOrderConfig (..) , PerEraCodecConfig (..) , PerEraConsensusConfig (..) , PerEraLedgerConfig (..) , PerEraStorageConfig (..) + -- * Values for /some/ eras , SomeErasCanBeLeader (..) + -- * Value for /one/ era , OneEraApplyTxErr (..) , OneEraBlock (..) @@ -51,57 +52,59 @@ module Ouroboros.Consensus.HardFork.Combinator.AcrossEras ( , OneEraValidateView (..) , OneEraValidatedGenTx (..) , OneEraValidationErr (..) + -- * Value for two /different/ eras , EraMismatch (..) , MismatchEraInfo (..) 
, mismatchFutureEra , mismatchOneEra , mkEraMismatch + -- * Utility , getSameValue , oneEraBlockHeader ) where -import Codec.Serialise (Serialise (..)) -import Control.Monad.Except (throwError) -import qualified Data.ByteString.Base16 as B16 -import qualified Data.ByteString.Char8 as BSC -import Data.ByteString.Short (ShortByteString) -import qualified Data.ByteString.Short as Short -import Data.Function (on) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Match (Mismatch) -import qualified Data.SOP.Match as Match -import Data.SOP.OptNP (NonEmptyOptNP) -import Data.SOP.Strict -import Data.Text (Text) -import Data.Void -import GHC.Generics (Generic) -import GHC.Stack -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.Info -import Ouroboros.Consensus.HardFork.Combinator.Lifting -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (allEqual) -import Ouroboros.Consensus.Util.Assert -import Ouroboros.Consensus.Util.Condense (Condense (..)) +import Codec.Serialise (Serialise (..)) +import Control.Monad.Except (throwError) +import Data.ByteString.Base16 qualified as B16 +import Data.ByteString.Char8 qualified as BSC +import Data.ByteString.Short (ShortByteString) +import Data.ByteString.Short qualified as Short +import Data.Function (on) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Match (Mismatch) +import Data.SOP.Match qualified as Match +import Data.SOP.OptNP (NonEmptyOptNP) +import Data.SOP.Strict +import Data.Text (Text) +import Data.Void +import GHC.Generics (Generic) +import GHC.Stack +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block.Abstract +import 
Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.Info +import Ouroboros.Consensus.HardFork.Combinator.Lifting +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (allEqual) +import Ouroboros.Consensus.Util.Assert +import Ouroboros.Consensus.Util.Condense (Condense (..)) {------------------------------------------------------------------------------- Value for /each/ era -------------------------------------------------------------------------------} -newtype PerEraBlockConfig xs = PerEraBlockConfig { getPerEraBlockConfig :: NP BlockConfig xs } -newtype PerEraChainOrderConfig xs = PerEraChainOrderConfig { getPerEraChainOrderConfig :: NP WrapChainOrderConfig xs } -newtype PerEraCodecConfig xs = PerEraCodecConfig { getPerEraCodecConfig :: NP CodecConfig xs } -newtype PerEraConsensusConfig xs = PerEraConsensusConfig { getPerEraConsensusConfig :: NP WrapPartialConsensusConfig xs } -newtype PerEraLedgerConfig xs = PerEraLedgerConfig { getPerEraLedgerConfig :: NP WrapPartialLedgerConfig xs } -newtype PerEraStorageConfig xs = PerEraStorageConfig { getPerEraStorageConfig :: NP StorageConfig xs } +newtype PerEraBlockConfig xs = PerEraBlockConfig {getPerEraBlockConfig :: NP BlockConfig xs} +newtype PerEraChainOrderConfig xs = PerEraChainOrderConfig {getPerEraChainOrderConfig :: NP WrapChainOrderConfig xs} +newtype PerEraCodecConfig xs = PerEraCodecConfig {getPerEraCodecConfig :: NP CodecConfig xs} +newtype PerEraConsensusConfig xs = PerEraConsensusConfig {getPerEraConsensusConfig :: NP WrapPartialConsensusConfig xs} +newtype PerEraLedgerConfig xs = PerEraLedgerConfig {getPerEraLedgerConfig :: NP WrapPartialLedgerConfig xs} +newtype PerEraStorageConfig xs = PerEraStorageConfig {getPerEraStorageConfig :: NP StorageConfig xs} 
{------------------------------------------------------------------------------- Values for /some/ eras @@ -111,33 +114,33 @@ newtype PerEraStorageConfig xs = PerEraStorageConfig { getPerEraStorageCon "Ouroboros.Consensus.HardFork.Combinator.Embed.Unary" -------------------------------------------------------------------------------} -newtype SomeErasCanBeLeader xs = SomeErasCanBeLeader { getSomeErasCanBeLeader :: NonEmptyOptNP WrapCanBeLeader xs } +newtype SomeErasCanBeLeader xs = SomeErasCanBeLeader {getSomeErasCanBeLeader :: NonEmptyOptNP WrapCanBeLeader xs} {------------------------------------------------------------------------------- Value for /one/ era -------------------------------------------------------------------------------} -newtype OneEraApplyTxErr xs = OneEraApplyTxErr { getOneEraApplyTxErr :: NS WrapApplyTxErr xs } -newtype OneEraBlock xs = OneEraBlock { getOneEraBlock :: NS I xs } -newtype OneEraCannotForge xs = OneEraCannotForge { getOneEraCannotForge :: NS WrapCannotForge xs } -newtype OneEraEnvelopeErr xs = OneEraEnvelopeErr { getOneEraEnvelopeErr :: NS WrapEnvelopeErr xs } -newtype OneEraForgeStateInfo xs = OneEraForgeStateInfo { getOneEraForgeStateInfo :: NS WrapForgeStateInfo xs } -newtype OneEraForgeStateUpdateError xs = OneEraForgeStateUpdateError { getOneEraForgeStateUpdateError :: NS WrapForgeStateUpdateError xs } -newtype OneEraGenTx xs = OneEraGenTx { getOneEraGenTx :: NS GenTx xs } -newtype OneEraGenTxId xs = OneEraGenTxId { getOneEraGenTxId :: NS WrapGenTxId xs } -newtype OneEraHeader xs = OneEraHeader { getOneEraHeader :: NS Header xs } -newtype OneEraIsLeader xs = OneEraIsLeader { getOneEraIsLeader :: NS WrapIsLeader xs } -newtype OneEraLedgerError xs = OneEraLedgerError { getOneEraLedgerError :: NS WrapLedgerErr xs } -newtype OneEraLedgerEvent xs = OneEraLedgerEvent { getOneEraLedgerEvent :: NS WrapLedgerEvent xs } -newtype OneEraLedgerUpdate xs = OneEraLedgerUpdate { getOneEraLedgerUpdate :: NS WrapLedgerUpdate xs } -newtype 
OneEraLedgerWarning xs = OneEraLedgerWarning { getOneEraLedgerWarning :: NS WrapLedgerWarning xs } -newtype OneEraSelectView xs = OneEraSelectView { getOneEraSelectView :: NS WrapSelectView xs } -newtype OneEraTentativeHeaderState xs = OneEraTentativeHeaderState { getOneEraTentativeHeaderState :: NS WrapTentativeHeaderState xs } -newtype OneEraTentativeHeaderView xs = OneEraTentativeHeaderView { getOneEraTentativeHeaderView :: NS WrapTentativeHeaderView xs } -newtype OneEraTipInfo xs = OneEraTipInfo { getOneEraTipInfo :: NS WrapTipInfo xs } -newtype OneEraValidateView xs = OneEraValidateView { getOneEraValidateView :: NS WrapValidateView xs } -newtype OneEraValidatedGenTx xs = OneEraValidatedGenTx { getOneEraValidatedGenTx :: NS WrapValidatedGenTx xs } -newtype OneEraValidationErr xs = OneEraValidationErr { getOneEraValidationErr :: NS WrapValidationErr xs } +newtype OneEraApplyTxErr xs = OneEraApplyTxErr {getOneEraApplyTxErr :: NS WrapApplyTxErr xs} +newtype OneEraBlock xs = OneEraBlock {getOneEraBlock :: NS I xs} +newtype OneEraCannotForge xs = OneEraCannotForge {getOneEraCannotForge :: NS WrapCannotForge xs} +newtype OneEraEnvelopeErr xs = OneEraEnvelopeErr {getOneEraEnvelopeErr :: NS WrapEnvelopeErr xs} +newtype OneEraForgeStateInfo xs = OneEraForgeStateInfo {getOneEraForgeStateInfo :: NS WrapForgeStateInfo xs} +newtype OneEraForgeStateUpdateError xs = OneEraForgeStateUpdateError {getOneEraForgeStateUpdateError :: NS WrapForgeStateUpdateError xs} +newtype OneEraGenTx xs = OneEraGenTx {getOneEraGenTx :: NS GenTx xs} +newtype OneEraGenTxId xs = OneEraGenTxId {getOneEraGenTxId :: NS WrapGenTxId xs} +newtype OneEraHeader xs = OneEraHeader {getOneEraHeader :: NS Header xs} +newtype OneEraIsLeader xs = OneEraIsLeader {getOneEraIsLeader :: NS WrapIsLeader xs} +newtype OneEraLedgerError xs = OneEraLedgerError {getOneEraLedgerError :: NS WrapLedgerErr xs} +newtype OneEraLedgerEvent xs = OneEraLedgerEvent {getOneEraLedgerEvent :: NS WrapLedgerEvent xs} +newtype 
OneEraLedgerUpdate xs = OneEraLedgerUpdate {getOneEraLedgerUpdate :: NS WrapLedgerUpdate xs} +newtype OneEraLedgerWarning xs = OneEraLedgerWarning {getOneEraLedgerWarning :: NS WrapLedgerWarning xs} +newtype OneEraSelectView xs = OneEraSelectView {getOneEraSelectView :: NS WrapSelectView xs} +newtype OneEraTentativeHeaderState xs = OneEraTentativeHeaderState {getOneEraTentativeHeaderState :: NS WrapTentativeHeaderState xs} +newtype OneEraTentativeHeaderView xs = OneEraTentativeHeaderView {getOneEraTentativeHeaderView :: NS WrapTentativeHeaderView xs} +newtype OneEraTipInfo xs = OneEraTipInfo {getOneEraTipInfo :: NS WrapTipInfo xs} +newtype OneEraValidateView xs = OneEraValidateView {getOneEraValidateView :: NS WrapValidateView xs} +newtype OneEraValidatedGenTx xs = OneEraValidatedGenTx {getOneEraValidatedGenTx :: NS WrapValidatedGenTx xs} +newtype OneEraValidationErr xs = OneEraValidationErr {getOneEraValidationErr :: NS WrapValidationErr xs} {------------------------------------------------------------------------------- Hash @@ -150,7 +153,7 @@ newtype OneEraValidationErr xs = OneEraValidationErr { getOneEra -- of the hash would necessarily have to increase, and that leads to trouble. -- So, the type parameter @xs@ here is merely a phantom one, and we just store -- the underlying raw hash. -newtype OneEraHash (xs :: [k]) = OneEraHash { getOneEraHash :: ShortByteString } +newtype OneEraHash (xs :: [k]) = OneEraHash {getOneEraHash :: ShortByteString} deriving newtype (Eq, Ord, NoThunks, Serialise) instance Show (OneEraHash xs) where @@ -180,22 +183,23 @@ instance CanHardFork xs => Ord (OneEraGenTxId xs) where Value for two /different/ eras -------------------------------------------------------------------------------} -newtype MismatchEraInfo xs = MismatchEraInfo { - -- | Era mismatch - -- - -- We have an era mismatch between the era of a block/header/tx/query - -- and the era of the current ledger. 
- getMismatchEraInfo :: Mismatch SingleEraInfo LedgerEraInfo xs - } +newtype MismatchEraInfo xs = MismatchEraInfo + { getMismatchEraInfo :: Mismatch SingleEraInfo LedgerEraInfo xs + -- ^ Era mismatch + -- + -- We have an era mismatch between the era of a block/header/tx/query + -- and the era of the current ledger. + } mismatchOneEra :: MismatchEraInfo '[b] -> Void mismatchOneEra = Match.mismatchOne . getMismatchEraInfo -- | A mismatch _must_ involve a future era -mismatchFutureEra :: SListI xs - => MismatchEraInfo (x ': xs) -> NS SingleEraInfo xs +mismatchFutureEra :: + SListI xs => + MismatchEraInfo (x ': xs) -> NS SingleEraInfo xs mismatchFutureEra = - either id (hmap getLedgerEraInfo) + either id (hmap getLedgerEraInfo) . Match.mismatchNotFirst . getMismatchEraInfo @@ -205,12 +209,12 @@ mismatchFutureEra = -- | Extra info for errors caused by applying a block, header, transaction, or -- query from one era to a ledger from a different era. -data EraMismatch = EraMismatch { - -- | Name of the era of the ledger ("Byron" or "Shelley"). - ledgerEraName :: !Text - -- | Era of the block, header, transaction, or query. - , otherEraName :: !Text - } +data EraMismatch = EraMismatch + { ledgerEraName :: !Text + -- ^ Name of the era of the ledger ("Byron" or "Shelley"). + , otherEraName :: !Text + -- ^ Era of the block, header, transaction, or query. + } deriving (Eq, Show, Generic) -- | When a transaction or block from a certain era was applied to a ledger @@ -220,24 +224,26 @@ data EraMismatch = EraMismatch { -- transaction/block and the name of the era of the ledger. mkEraMismatch :: SListI xs => MismatchEraInfo xs -> EraMismatch mkEraMismatch (MismatchEraInfo mismatch) = - go mismatch - where - go :: SListI xs => Mismatch SingleEraInfo LedgerEraInfo xs -> EraMismatch - go (Match.ML otherEra ledgerEra) = EraMismatch { - ledgerEraName = hcollapse $ hmap (K . 
ledgerName) ledgerEra - , otherEraName = otherName otherEra - } - go (Match.MR otherEra ledgerEra) = EraMismatch { - ledgerEraName = ledgerName ledgerEra - , otherEraName = hcollapse $ hmap (K . otherName) otherEra - } - go (Match.MS m) = go m - - ledgerName :: LedgerEraInfo blk -> Text - ledgerName = singleEraName . getLedgerEraInfo - - otherName :: SingleEraInfo blk -> Text - otherName = singleEraName + go mismatch + where + go :: SListI xs => Mismatch SingleEraInfo LedgerEraInfo xs -> EraMismatch + go (Match.ML otherEra ledgerEra) = + EraMismatch + { ledgerEraName = hcollapse $ hmap (K . ledgerName) ledgerEra + , otherEraName = otherName otherEra + } + go (Match.MR otherEra ledgerEra) = + EraMismatch + { ledgerEraName = ledgerName ledgerEra + , otherEraName = hcollapse $ hmap (K . otherName) otherEra + } + go (Match.MS m) = go m + + ledgerName :: LedgerEraInfo blk -> Text + ledgerName = singleEraName . getLedgerEraInfo + + otherName :: SingleEraInfo blk -> Text + otherName = singleEraName {------------------------------------------------------------------------------- Utility @@ -245,29 +251,30 @@ mkEraMismatch (MismatchEraInfo mismatch) = oneEraBlockHeader :: CanHardFork xs => OneEraBlock xs -> OneEraHeader xs oneEraBlockHeader = - OneEraHeader + OneEraHeader . hcmap proxySingle (getHeader . unI) . getOneEraBlock getSameValue :: - forall xs a. (IsNonEmpty xs, Eq a, SListI xs, HasCallStack) - => NP (K a) xs - -> a + forall xs a. 
+ (IsNonEmpty xs, Eq a, SListI xs, HasCallStack) => + NP (K a) xs -> + a getSameValue values = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - assertWithMsg allEqualCheck (unK (hd values)) - where - allEqualCheck :: Either String () - allEqualCheck - | allEqual (hcollapse values) - = return () - | otherwise - = throwError "differing values across hard fork" + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + assertWithMsg allEqualCheck (unK (hd values)) + where + allEqualCheck :: Either String () + allEqualCheck + | allEqual (hcollapse values) = + return () + | otherwise = + throwError "differing values across hard fork" oneEraGenTxIdRawHash :: CanHardFork xs => OneEraGenTxId xs -> ShortByteString oneEraGenTxIdRawHash = - hcollapse + hcollapse . hcmap proxySingle (K . toRawTxIdHash . unwrapGenTxId) . getOneEraGenTxId @@ -275,95 +282,169 @@ oneEraGenTxIdRawHash = NoThunks instances -------------------------------------------------------------------------------} -deriving via LiftNamedNP "PerEraBlockConfig" BlockConfig xs - instance CanHardFork xs => NoThunks (PerEraBlockConfig xs) - -deriving via LiftNamedNP "PerEraCodecConfig" CodecConfig xs - instance CanHardFork xs => NoThunks (PerEraCodecConfig xs) - -deriving via LiftNamedNP "PerEraConsensusConfig" WrapPartialConsensusConfig xs - instance CanHardFork xs => NoThunks (PerEraConsensusConfig xs) - -deriving via LiftNamedNP "PerEraLedgerConfig" WrapPartialLedgerConfig xs - instance CanHardFork xs => NoThunks (PerEraLedgerConfig xs) - -deriving via LiftNamedNP "PerEraStorageConfig" StorageConfig xs - instance CanHardFork xs => NoThunks (PerEraStorageConfig xs) - -deriving via LiftNamedNS "OneEraEnvelopeErr" WrapEnvelopeErr xs - instance CanHardFork xs => NoThunks (OneEraEnvelopeErr xs) - -deriving via LiftNamedNS "OneEraGenTx" GenTx xs - instance CanHardFork xs => NoThunks (OneEraGenTx xs) - -deriving via LiftNamedNS "OneEraGenTxId" WrapGenTxId xs - instance CanHardFork xs => NoThunks (OneEraGenTxId xs) 
- -deriving via LiftNamedNS "OneEraHeader" Header xs - instance CanHardFork xs => NoThunks (OneEraHeader xs) - -deriving via LiftNamedNS "OneEraLedgerError" WrapLedgerErr xs - instance CanHardFork xs => NoThunks (OneEraLedgerError xs) - -deriving via LiftNamedNS "OneEraSelectView" WrapSelectView xs - instance CanHardFork xs => NoThunks (OneEraSelectView xs) - -deriving via LiftNamedNS "OneEraTentativeHeaderState" WrapTentativeHeaderState xs - instance CanHardFork xs => NoThunks (OneEraTentativeHeaderState xs) - -deriving via LiftNamedNS "OneEraTipInfo" WrapTipInfo xs - instance CanHardFork xs => NoThunks (OneEraTipInfo xs) - -deriving via LiftNamedNS "OneEraValidated" WrapValidatedGenTx xs - instance CanHardFork xs => NoThunks (OneEraValidatedGenTx xs) - -deriving via LiftNamedNS "OneEraValidationErr" WrapValidationErr xs - instance CanHardFork xs => NoThunks (OneEraValidationErr xs) - -deriving via LiftNamedMismatch "MismatchEraInfo" SingleEraInfo LedgerEraInfo xs - instance CanHardFork xs => NoThunks (MismatchEraInfo xs) +deriving via + LiftNamedNP "PerEraBlockConfig" BlockConfig xs + instance + CanHardFork xs => NoThunks (PerEraBlockConfig xs) + +deriving via + LiftNamedNP "PerEraCodecConfig" CodecConfig xs + instance + CanHardFork xs => NoThunks (PerEraCodecConfig xs) + +deriving via + LiftNamedNP "PerEraConsensusConfig" WrapPartialConsensusConfig xs + instance + CanHardFork xs => NoThunks (PerEraConsensusConfig xs) + +deriving via + LiftNamedNP "PerEraLedgerConfig" WrapPartialLedgerConfig xs + instance + CanHardFork xs => NoThunks (PerEraLedgerConfig xs) + +deriving via + LiftNamedNP "PerEraStorageConfig" StorageConfig xs + instance + CanHardFork xs => NoThunks (PerEraStorageConfig xs) + +deriving via + LiftNamedNS "OneEraEnvelopeErr" WrapEnvelopeErr xs + instance + CanHardFork xs => NoThunks (OneEraEnvelopeErr xs) + +deriving via + LiftNamedNS "OneEraGenTx" GenTx xs + instance + CanHardFork xs => NoThunks (OneEraGenTx xs) + +deriving via + LiftNamedNS 
"OneEraGenTxId" WrapGenTxId xs + instance + CanHardFork xs => NoThunks (OneEraGenTxId xs) + +deriving via + LiftNamedNS "OneEraHeader" Header xs + instance + CanHardFork xs => NoThunks (OneEraHeader xs) + +deriving via + LiftNamedNS "OneEraLedgerError" WrapLedgerErr xs + instance + CanHardFork xs => NoThunks (OneEraLedgerError xs) + +deriving via + LiftNamedNS "OneEraSelectView" WrapSelectView xs + instance + CanHardFork xs => NoThunks (OneEraSelectView xs) + +deriving via + LiftNamedNS "OneEraTentativeHeaderState" WrapTentativeHeaderState xs + instance + CanHardFork xs => NoThunks (OneEraTentativeHeaderState xs) + +deriving via + LiftNamedNS "OneEraTipInfo" WrapTipInfo xs + instance + CanHardFork xs => NoThunks (OneEraTipInfo xs) + +deriving via + LiftNamedNS "OneEraValidated" WrapValidatedGenTx xs + instance + CanHardFork xs => NoThunks (OneEraValidatedGenTx xs) + +deriving via + LiftNamedNS "OneEraValidationErr" WrapValidationErr xs + instance + CanHardFork xs => NoThunks (OneEraValidationErr xs) + +deriving via + LiftNamedMismatch "MismatchEraInfo" SingleEraInfo LedgerEraInfo xs + instance + CanHardFork xs => NoThunks (MismatchEraInfo xs) {------------------------------------------------------------------------------- Other instances -------------------------------------------------------------------------------} -deriving via LiftNS WrapApplyTxErr xs instance CanHardFork xs => Eq (OneEraApplyTxErr xs) -deriving via LiftNS WrapEnvelopeErr xs instance CanHardFork xs => Eq (OneEraEnvelopeErr xs) -deriving via LiftNS GenTx xs instance CanHardFork xs => Eq (OneEraGenTx xs) -deriving via LiftNS WrapLedgerErr xs instance CanHardFork xs => Eq (OneEraLedgerError xs) -deriving via LiftNS WrapLedgerUpdate xs instance CanHardFork xs => Eq (OneEraLedgerUpdate xs) -deriving via LiftNS WrapLedgerWarning xs instance CanHardFork xs => Eq (OneEraLedgerWarning xs) -deriving via LiftNS WrapSelectView xs instance CanHardFork xs => Eq (OneEraSelectView xs) -deriving via LiftNS 
WrapTipInfo xs instance CanHardFork xs => Eq (OneEraTipInfo xs) +deriving via LiftNS WrapApplyTxErr xs instance CanHardFork xs => Eq (OneEraApplyTxErr xs) +deriving via LiftNS WrapEnvelopeErr xs instance CanHardFork xs => Eq (OneEraEnvelopeErr xs) +deriving via LiftNS GenTx xs instance CanHardFork xs => Eq (OneEraGenTx xs) +deriving via LiftNS WrapLedgerErr xs instance CanHardFork xs => Eq (OneEraLedgerError xs) +deriving via LiftNS WrapLedgerUpdate xs instance CanHardFork xs => Eq (OneEraLedgerUpdate xs) +deriving via LiftNS WrapLedgerWarning xs instance CanHardFork xs => Eq (OneEraLedgerWarning xs) +deriving via LiftNS WrapSelectView xs instance CanHardFork xs => Eq (OneEraSelectView xs) +deriving via LiftNS WrapTipInfo xs instance CanHardFork xs => Eq (OneEraTipInfo xs) deriving via LiftNS WrapValidatedGenTx xs instance CanHardFork xs => Eq (OneEraValidatedGenTx xs) -deriving via LiftNS WrapValidationErr xs instance CanHardFork xs => Eq (OneEraValidationErr xs) - -deriving via LiftNS WrapEnvelopeErr xs instance CanHardFork xs => Show (OneEraEnvelopeErr xs) -deriving via LiftNS WrapForgeStateInfo xs instance CanHardFork xs => Show (OneEraForgeStateInfo xs) -deriving via LiftNS WrapForgeStateUpdateError xs instance CanHardFork xs => Show (OneEraForgeStateUpdateError xs) -deriving via LiftNS WrapLedgerErr xs instance CanHardFork xs => Show (OneEraLedgerError xs) -deriving via LiftNS WrapLedgerUpdate xs instance CanHardFork xs => Show (OneEraLedgerUpdate xs) -deriving via LiftNS WrapLedgerWarning xs instance CanHardFork xs => Show (OneEraLedgerWarning xs) -deriving via LiftNS WrapTentativeHeaderState xs instance CanHardFork xs => Show (OneEraTentativeHeaderState xs) -deriving via LiftNS WrapTentativeHeaderView xs instance CanHardFork xs => Show (OneEraTentativeHeaderView xs) -deriving via LiftNS WrapTipInfo xs instance CanHardFork xs => Show (OneEraTipInfo xs) -deriving via LiftNS WrapValidatedGenTx xs instance CanHardFork xs => Show (OneEraValidatedGenTx xs) 
-deriving via LiftNS WrapValidationErr xs instance CanHardFork xs => Show (OneEraValidationErr xs) +deriving via LiftNS WrapValidationErr xs instance CanHardFork xs => Eq (OneEraValidationErr xs) + +deriving via + LiftNS WrapEnvelopeErr xs + instance + CanHardFork xs => Show (OneEraEnvelopeErr xs) +deriving via + LiftNS WrapForgeStateInfo xs + instance + CanHardFork xs => Show (OneEraForgeStateInfo xs) +deriving via + LiftNS WrapForgeStateUpdateError xs + instance + CanHardFork xs => Show (OneEraForgeStateUpdateError xs) +deriving via + LiftNS WrapLedgerErr xs + instance + CanHardFork xs => Show (OneEraLedgerError xs) +deriving via + LiftNS WrapLedgerUpdate xs + instance + CanHardFork xs => Show (OneEraLedgerUpdate xs) +deriving via + LiftNS WrapLedgerWarning xs + instance + CanHardFork xs => Show (OneEraLedgerWarning xs) +deriving via + LiftNS WrapTentativeHeaderState xs + instance + CanHardFork xs => Show (OneEraTentativeHeaderState xs) +deriving via + LiftNS WrapTentativeHeaderView xs + instance + CanHardFork xs => Show (OneEraTentativeHeaderView xs) +deriving via + LiftNS WrapTipInfo xs + instance + CanHardFork xs => Show (OneEraTipInfo xs) +deriving via + LiftNS WrapValidatedGenTx xs + instance + CanHardFork xs => Show (OneEraValidatedGenTx xs) +deriving via + LiftNS WrapValidationErr xs + instance + CanHardFork xs => Show (OneEraValidationErr xs) deriving instance Show (PartialLedgerConfig xs) => Show (WrapPartialLedgerConfig xs) -deriving via LiftNP WrapPartialLedgerConfig xs instance CanHardFork xs => Show (PerEraLedgerConfig xs) - -deriving via LiftMismatch SingleEraInfo LedgerEraInfo xs instance All SingleEraBlock xs => Eq (MismatchEraInfo xs) -deriving via LiftMismatch SingleEraInfo LedgerEraInfo xs instance All SingleEraBlock xs => Show (MismatchEraInfo xs) +deriving via + LiftNP WrapPartialLedgerConfig xs + instance + CanHardFork xs => Show (PerEraLedgerConfig xs) + +deriving via + LiftMismatch SingleEraInfo LedgerEraInfo xs + instance + All 
SingleEraBlock xs => Eq (MismatchEraInfo xs) +deriving via + LiftMismatch SingleEraInfo LedgerEraInfo xs + instance + All SingleEraBlock xs => Show (MismatchEraInfo xs) {------------------------------------------------------------------------------- Show instances used in tests only -------------------------------------------------------------------------------} -deriving via LiftNS WrapApplyTxErr xs instance CanHardFork xs => Show (OneEraApplyTxErr xs) -deriving via LiftNS I xs instance CanHardFork xs => Show (OneEraBlock xs) +deriving via LiftNS WrapApplyTxErr xs instance CanHardFork xs => Show (OneEraApplyTxErr xs) +deriving via LiftNS I xs instance CanHardFork xs => Show (OneEraBlock xs) deriving via LiftNS WrapCannotForge xs instance CanHardFork xs => Show (OneEraCannotForge xs) -deriving via LiftNS GenTx xs instance CanHardFork xs => Show (OneEraGenTx xs) -deriving via LiftNS WrapGenTxId xs instance CanHardFork xs => Show (OneEraGenTxId xs) -deriving via LiftNS Header xs instance CanHardFork xs => Show (OneEraHeader xs) -deriving via LiftNS WrapSelectView xs instance CanHardFork xs => Show (OneEraSelectView xs) +deriving via LiftNS GenTx xs instance CanHardFork xs => Show (OneEraGenTx xs) +deriving via LiftNS WrapGenTxId xs instance CanHardFork xs => Show (OneEraGenTxId xs) +deriving via LiftNS Header xs instance CanHardFork xs => Show (OneEraHeader xs) +deriving via LiftNS WrapSelectView xs instance CanHardFork xs => Show (OneEraSelectView xs) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Basics.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Basics.hs index ebe290a7ba..da5404b36a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Basics.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Basics.hs @@ -11,17 +11,19 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances 
#-} -module Ouroboros.Consensus.HardFork.Combinator.Basics ( - -- * Hard fork protocol, block, and ledger state +module Ouroboros.Consensus.HardFork.Combinator.Basics + ( -- * Hard fork protocol, block, and ledger state HardForkBlock (..) , HardForkProtocol , LedgerState (..) + -- * Config , BlockConfig (..) , CodecConfig (..) , ConsensusConfig (..) , HardForkLedgerConfig (..) , StorageConfig (..) + -- ** Functions on config , completeConsensusConfig' , completeConsensusConfig'' @@ -29,31 +31,32 @@ module Ouroboros.Consensus.HardFork.Combinator.Basics ( , completeLedgerConfig'' , distribLedgerConfig , distribTopLevelConfig + -- ** Convenience re-exports , EpochInfo , Except ) where -import Cardano.Slotting.EpochInfo -import Data.Kind (Type) -import Data.SOP.Constraint -import Data.SOP.Functors -import Data.SOP.Strict -import Data.Typeable -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.Combinator.State.Instances () -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (ShowProxy) +import Cardano.Slotting.EpochInfo +import Data.Kind (Type) +import Data.SOP.Constraint +import Data.SOP.Functors +import Data.SOP.Strict +import Data.Typeable +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import 
Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.Combinator.State.Instances () +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (ShowProxy) {------------------------------------------------------------------------------- Hard fork protocol, block, and ledger state @@ -61,84 +64,85 @@ import Ouroboros.Consensus.Util (ShowProxy) data HardForkProtocol (xs :: [Type]) -newtype HardForkBlock xs = HardForkBlock { - getHardForkBlock :: OneEraBlock xs - } - deriving (Show) +newtype HardForkBlock xs = HardForkBlock + { getHardForkBlock :: OneEraBlock xs + } + deriving Show -instance Typeable xs => ShowProxy (HardForkBlock xs) where +instance Typeable xs => ShowProxy (HardForkBlock xs) type instance BlockProtocol (HardForkBlock xs) = HardForkProtocol xs -type instance HeaderHash (HardForkBlock xs) = OneEraHash xs - -newtype instance LedgerState (HardForkBlock xs) mk = HardForkLedgerState { - hardForkLedgerStatePerEra :: HardForkState (Flip LedgerState mk) xs - } - -deriving stock instance (ShowMK mk, CanHardFork xs) - => Show (LedgerState (HardForkBlock xs) mk) -deriving stock instance (EqMK mk, CanHardFork xs) - => Eq (LedgerState (HardForkBlock xs) mk) -deriving newtype instance (NoThunksMK mk, CanHardFork xs) - => NoThunks (LedgerState (HardForkBlock xs) mk) +type instance HeaderHash (HardForkBlock xs) = OneEraHash xs + +newtype instance LedgerState (HardForkBlock xs) mk = HardForkLedgerState + { hardForkLedgerStatePerEra :: HardForkState (Flip LedgerState mk) xs + } + +deriving stock instance + (ShowMK mk, CanHardFork xs) => + Show (LedgerState (HardForkBlock xs) mk) +deriving stock instance + (EqMK mk, CanHardFork xs) => + Eq (LedgerState (HardForkBlock xs) mk) +deriving newtype instance
+ (NoThunksMK mk, CanHardFork xs) => + NoThunks (LedgerState (HardForkBlock xs) mk) {------------------------------------------------------------------------------- Protocol config -------------------------------------------------------------------------------} -data instance ConsensusConfig (HardForkProtocol xs) = HardForkConsensusConfig { - -- | The value of @k@ cannot change at hard fork boundaries - hardForkConsensusConfigK :: !(SecurityParam) - - -- | The shape of the hard fork - -- - -- We require this in the consensus config because consensus might need - -- access to 'EpochInfo', and in order to compute that, we need the - -- 'EraParams' of all eras. - , hardForkConsensusConfigShape :: !(History.Shape xs) - - -- | Config for each era - , hardForkConsensusConfigPerEra :: !(PerEraConsensusConfig xs) - } - deriving stock (Generic) - deriving anyclass (NoThunks) +data instance ConsensusConfig (HardForkProtocol xs) = HardForkConsensusConfig + { hardForkConsensusConfigK :: !(SecurityParam) + -- ^ The value of @k@ cannot change at hard fork boundaries + , hardForkConsensusConfigShape :: !(History.Shape xs) + -- ^ The shape of the hard fork + -- + -- We require this in the consensus config because consensus might need + -- access to 'EpochInfo', and in order to compute that, we need the + -- 'EraParams' of all eras.
+ , hardForkConsensusConfigPerEra :: !(PerEraConsensusConfig xs) + -- ^ Config for each era + } + deriving stock Generic + deriving anyclass NoThunks {------------------------------------------------------------------------------- Block config -------------------------------------------------------------------------------} -newtype instance BlockConfig (HardForkBlock xs) = HardForkBlockConfig { - hardForkBlockConfigPerEra :: PerEraBlockConfig xs - } - deriving newtype (NoThunks) +newtype instance BlockConfig (HardForkBlock xs) = HardForkBlockConfig + { hardForkBlockConfigPerEra :: PerEraBlockConfig xs + } + deriving newtype NoThunks {------------------------------------------------------------------------------- Codec config -------------------------------------------------------------------------------} -newtype instance CodecConfig (HardForkBlock xs) = HardForkCodecConfig { - hardForkCodecConfigPerEra :: PerEraCodecConfig xs - } - deriving newtype (NoThunks) +newtype instance CodecConfig (HardForkBlock xs) = HardForkCodecConfig + { hardForkCodecConfigPerEra :: PerEraCodecConfig xs + } + deriving newtype NoThunks {------------------------------------------------------------------------------- Storage config -------------------------------------------------------------------------------} -newtype instance StorageConfig (HardForkBlock xs) = HardForkStorageConfig { - hardForkStorageConfigPerEra :: PerEraStorageConfig xs - } - deriving newtype (NoThunks) +newtype instance StorageConfig (HardForkBlock xs) = HardForkStorageConfig + { hardForkStorageConfigPerEra :: PerEraStorageConfig xs + } + deriving newtype NoThunks {------------------------------------------------------------------------------- Ledger config -------------------------------------------------------------------------------} -data HardForkLedgerConfig xs = HardForkLedgerConfig { - hardForkLedgerConfigShape :: !(History.Shape xs) - , hardForkLedgerConfigPerEra :: !(PerEraLedgerConfig xs) - } - deriving
(Generic) +data HardForkLedgerConfig xs = HardForkLedgerConfig + { hardForkLedgerConfigShape :: !(History.Shape xs) + , hardForkLedgerConfigPerEra :: !(PerEraLedgerConfig xs) + } + deriving Generic deriving instance Show (PerEraLedgerConfig xs) => Show (HardForkLedgerConfig xs) instance CanHardFork xs => NoThunks (HardForkLedgerConfig xs) @@ -149,85 +153,94 @@ type instance LedgerCfg (LedgerState (HardForkBlock xs)) = HardForkLedgerConfig Operations on config -------------------------------------------------------------------------------} -completeLedgerConfig' :: forall blk. - HasPartialLedgerConfig blk - => EpochInfo (Except PastHorizonException) - -> WrapPartialLedgerConfig blk - -> LedgerConfig blk +completeLedgerConfig' :: + forall blk. + HasPartialLedgerConfig blk => + EpochInfo (Except PastHorizonException) -> + WrapPartialLedgerConfig blk -> + LedgerConfig blk completeLedgerConfig' ei = - completeLedgerConfig (Proxy @blk) ei + completeLedgerConfig (Proxy @blk) ei . unwrapPartialLedgerConfig -completeLedgerConfig'' :: forall blk. - HasPartialLedgerConfig blk - => EpochInfo (Except PastHorizonException) - -> WrapPartialLedgerConfig blk - -> WrapLedgerConfig blk +completeLedgerConfig'' :: + forall blk. + HasPartialLedgerConfig blk => + EpochInfo (Except PastHorizonException) -> + WrapPartialLedgerConfig blk -> + WrapLedgerConfig blk completeLedgerConfig'' ei = - WrapLedgerConfig + WrapLedgerConfig . completeLedgerConfig (Proxy @blk) ei . unwrapPartialLedgerConfig -completeConsensusConfig' :: forall blk. - HasPartialConsensusConfig (BlockProtocol blk) - => EpochInfo (Except PastHorizonException) - -> WrapPartialConsensusConfig blk - -> ConsensusConfig (BlockProtocol blk) +completeConsensusConfig' :: + forall blk.
+ HasPartialConsensusConfig (BlockProtocol blk) => + EpochInfo (Except PastHorizonException) -> + WrapPartialConsensusConfig blk -> + ConsensusConfig (BlockProtocol blk) completeConsensusConfig' ei = - completeConsensusConfig (Proxy @(BlockProtocol blk)) ei + completeConsensusConfig (Proxy @(BlockProtocol blk)) ei . unwrapPartialConsensusConfig -completeConsensusConfig'' :: forall blk. - HasPartialConsensusConfig (BlockProtocol blk) - => EpochInfo (Except PastHorizonException) - -> WrapPartialConsensusConfig blk - -> WrapConsensusConfig blk +completeConsensusConfig'' :: + forall blk. + HasPartialConsensusConfig (BlockProtocol blk) => + EpochInfo (Except PastHorizonException) -> + WrapPartialConsensusConfig blk -> + WrapConsensusConfig blk completeConsensusConfig'' ei = - WrapConsensusConfig + WrapConsensusConfig . completeConsensusConfig (Proxy @(BlockProtocol blk)) ei . unwrapPartialConsensusConfig distribLedgerConfig :: - CanHardFork xs - => EpochInfo (Except PastHorizonException) - -> LedgerConfig (HardForkBlock xs) - -> NP WrapLedgerConfig xs + CanHardFork xs => + EpochInfo (Except PastHorizonException) -> + LedgerConfig (HardForkBlock xs) -> + NP WrapLedgerConfig xs distribLedgerConfig ei cfg = - hcmap - proxySingle - (completeLedgerConfig'' ei) - (getPerEraLedgerConfig $ hardForkLedgerConfigPerEra cfg) - -distribTopLevelConfig :: All SingleEraBlock xs - => EpochInfo (Except PastHorizonException) - -> TopLevelConfig (HardForkBlock xs) - -> NP TopLevelConfig xs + hcmap + proxySingle + (completeLedgerConfig'' ei) + (getPerEraLedgerConfig $ hardForkLedgerConfigPerEra cfg) + +distribTopLevelConfig :: + All SingleEraBlock xs => + EpochInfo (Except PastHorizonException) -> + TopLevelConfig (HardForkBlock xs) -> + NP TopLevelConfig xs distribTopLevelConfig ei tlc = - hcpure proxySingle - (fn_5 (\cfgConsensus cfgLedger cfgBlock cfgCodec cfgStorage -> - mkTopLevelConfig - (completeConsensusConfig' ei cfgConsensus) - (completeLedgerConfig' ei cfgLedger) - cfgBlock - 
cfgCodec - cfgStorage - -- topLevelConfigCheckpoints is only used in validateEnvelope, - -- where it comes from the TopLevelConfig of the HardForkBlock. - -- - -- The checkpoints of the underlying blocks are not used. - emptyCheckpointsMap)) - `hap` - (getPerEraConsensusConfig $ - hardForkConsensusConfigPerEra (configConsensus tlc)) - `hap` - (getPerEraLedgerConfig $ - hardForkLedgerConfigPerEra (configLedger tlc)) - `hap` - (getPerEraBlockConfig $ - hardForkBlockConfigPerEra (configBlock tlc)) - `hap` - (getPerEraCodecConfig $ - hardForkCodecConfigPerEra (configCodec tlc)) - `hap` - (getPerEraStorageConfig $ - hardForkStorageConfigPerEra (configStorage tlc)) + hcpure + proxySingle + ( fn_5 + ( \cfgConsensus cfgLedger cfgBlock cfgCodec cfgStorage -> + mkTopLevelConfig + (completeConsensusConfig' ei cfgConsensus) + (completeLedgerConfig' ei cfgLedger) + cfgBlock + cfgCodec + cfgStorage + -- topLevelConfigCheckpoints is only used in validateEnvelope, + -- where it comes from the TopLevelConfig of the HardForkBlock. + -- + -- The checkpoints of the underlying blocks are not used. 
+ emptyCheckpointsMap + ) + ) + `hap` ( getPerEraConsensusConfig $ + hardForkConsensusConfigPerEra (configConsensus tlc) + ) + `hap` ( getPerEraLedgerConfig $ + hardForkLedgerConfigPerEra (configLedger tlc) + ) + `hap` ( getPerEraBlockConfig $ + hardForkBlockConfigPerEra (configBlock tlc) + ) + `hap` ( getPerEraCodecConfig $ + hardForkCodecConfigPerEra (configCodec tlc) + ) + `hap` ( getPerEraStorageConfig $ + hardForkStorageConfigPerEra (configStorage tlc) + ) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Block.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Block.hs index f4ad5dd5bf..dff72cca8c 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Block.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Block.hs @@ -12,64 +12,64 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.Block ( - -- * Type family instances +module Ouroboros.Consensus.HardFork.Combinator.Block + ( -- * Type family instances Header (..) , NestedCtxt_ (..) 
+ -- * AnnTip , distribAnnTip , undistribAnnTip ) where -import Data.Function (on) -import Data.Functor.Product -import Data.Kind (Type) -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Index -import qualified Data.SOP.Match as Match -import Data.SOP.Strict -import Data.Typeable (Typeable) -import Data.Word -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (ShowProxy, (.:)) +import Data.Function (on) +import Data.Functor.Product +import Data.Kind (Type) +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Index +import Data.SOP.Match qualified as Match +import Data.SOP.Strict +import Data.Typeable (Typeable) +import Data.Word +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (ShowProxy, (.:)) {------------------------------------------------------------------------------- GetHeader -------------------------------------------------------------------------------} -newtype instance Header (HardForkBlock xs) = HardForkHeader { - getHardForkHeader :: OneEraHeader xs - } +newtype instance Header (HardForkBlock xs) = HardForkHeader + { getHardForkHeader :: OneEraHeader xs + } deriving (Show, NoThunks) -instance Typeable xs => ShowProxy (Header (HardForkBlock xs)) where +instance Typeable xs => ShowProxy (Header (HardForkBlock xs)) instance CanHardFork xs => GetHeader (HardForkBlock xs) where getHeader = 
HardForkHeader . oneEraBlockHeader . getHardForkBlock blockMatchesHeader = \hdr blk -> - case Match.matchNS - (getOneEraHeader (getHardForkHeader hdr)) - (getOneEraBlock (getHardForkBlock blk)) of - Left _ -> False - Right hdrAndBlk -> - hcollapse $ hcliftA proxySingle matchesSingle hdrAndBlk - where - matchesSingle :: GetHeader blk => Product Header I blk -> K Bool blk - matchesSingle (Pair hdr (I blk)) = K (blockMatchesHeader hdr blk) + case Match.matchNS + (getOneEraHeader (getHardForkHeader hdr)) + (getOneEraBlock (getHardForkBlock blk)) of + Left _ -> False + Right hdrAndBlk -> + hcollapse $ hcliftA proxySingle matchesSingle hdrAndBlk + where + matchesSingle :: GetHeader blk => Product Header I blk -> K Bool blk + matchesSingle (Pair hdr (I blk)) = K (blockMatchesHeader hdr blk) headerIsEBB = - hcollapse + hcollapse . hcmap proxySingle (K . headerIsEBB) . getOneEraHeader . getHardForkHeader @@ -85,92 +85,101 @@ instance CanHardFork xs => HasHeader (HardForkBlock xs) where instance CanHardFork xs => HasHeader (Header (HardForkBlock xs)) where getHeaderFields = - hcollapse + hcollapse . hcmap proxySingle (K . getOne) . getOneEraHeader . getHardForkHeader - where - getOne :: forall blk. SingleEraBlock blk - => Header blk -> HeaderFields (Header (HardForkBlock xs)) - getOne hdr = HeaderFields { - headerFieldHash = OneEraHash $ - toShortRawHash (Proxy @blk) headerFieldHash - , headerFieldSlot = headerFieldSlot - , headerFieldBlockNo = headerFieldBlockNo - } - where - HeaderFields{..} = getHeaderFields hdr + where + getOne :: + forall blk. 
+ SingleEraBlock blk => + Header blk -> HeaderFields (Header (HardForkBlock xs)) + getOne hdr = + HeaderFields + { headerFieldHash = + OneEraHash $ + toShortRawHash (Proxy @blk) headerFieldHash + , headerFieldSlot = headerFieldSlot + , headerFieldBlockNo = headerFieldBlockNo + } + where + HeaderFields{..} = getHeaderFields hdr instance CanHardFork xs => GetPrevHash (HardForkBlock xs) where headerPrevHash = - hcollapse + hcollapse . hcmap proxySingle (K . getOnePrev) . getOneEraHeader . getHardForkHeader - where - getOnePrev :: forall blk. SingleEraBlock blk - => Header blk -> ChainHash (HardForkBlock xs) - getOnePrev hdr = - case headerPrevHash hdr of - GenesisHash -> GenesisHash - BlockHash h -> BlockHash (OneEraHash $ toShortRawHash (Proxy @blk) h) + where + getOnePrev :: + forall blk. + SingleEraBlock blk => + Header blk -> ChainHash (HardForkBlock xs) + getOnePrev hdr = + case headerPrevHash hdr of + GenesisHash -> GenesisHash + BlockHash h -> BlockHash (OneEraHash $ toShortRawHash (Proxy @blk) h) {------------------------------------------------------------------------------- NestedContent -------------------------------------------------------------------------------} data instance NestedCtxt_ (HardForkBlock xs) :: (Type -> Type) -> (Type -> Type) where - NCZ :: !(NestedCtxt_ x f a) -> NestedCtxt_ (HardForkBlock (x ': xs)) f a - NCS :: !(NestedCtxt_ (HardForkBlock xs) f a) -> NestedCtxt_ (HardForkBlock (x ': xs)) f a + NCZ :: !(NestedCtxt_ x f a) -> NestedCtxt_ (HardForkBlock (x ': xs)) f a + NCS :: !(NestedCtxt_ (HardForkBlock xs) f a) -> NestedCtxt_ (HardForkBlock (x ': xs)) f a deriving instance All SingleEraBlock xs => Show (NestedCtxt_ (HardForkBlock xs) Header a) instance CanHardFork xs => SameDepIndex (NestedCtxt_ (HardForkBlock xs) Header) where sameDepIndex = go - where - go :: All SingleEraBlock xs' - => NestedCtxt_ (HardForkBlock xs') Header a - -> NestedCtxt_ (HardForkBlock xs') Header b - -> Maybe (a :~: b) - go (NCZ ctxt) (NCZ ctxt') = 
sameDepIndex ctxt ctxt' - go (NCS ctxt) (NCS ctxt') = go ctxt ctxt' - go _ _ = Nothing + where + go :: + All SingleEraBlock xs' => + NestedCtxt_ (HardForkBlock xs') Header a -> + NestedCtxt_ (HardForkBlock xs') Header b -> + Maybe (a :~: b) + go (NCZ ctxt) (NCZ ctxt') = sameDepIndex ctxt ctxt' + go (NCS ctxt) (NCS ctxt') = go ctxt ctxt' + go _ _ = Nothing instance CanHardFork xs => HasNestedContent Header (HardForkBlock xs) where unnest = - go . getOneEraHeader . getHardForkHeader - where - go :: All SingleEraBlock xs' - => NS Header xs' -> DepPair (NestedCtxt Header (HardForkBlock xs')) - go (Z x) = case unnest x of - DepPair (NestedCtxt ctxt) x' -> - DepPair (NestedCtxt (NCZ ctxt)) x' - go (S x) = case go x of - DepPair (NestedCtxt ctxt) x' -> - DepPair (NestedCtxt (NCS ctxt)) x' + go . getOneEraHeader . getHardForkHeader + where + go :: + All SingleEraBlock xs' => + NS Header xs' -> DepPair (NestedCtxt Header (HardForkBlock xs')) + go (Z x) = case unnest x of + DepPair (NestedCtxt ctxt) x' -> + DepPair (NestedCtxt (NCZ ctxt)) x' + go (S x) = case go x of + DepPair (NestedCtxt ctxt) x' -> + DepPair (NestedCtxt (NCS ctxt)) x' nest = \(DepPair ctxt hdr) -> - HardForkHeader . OneEraHeader $ go ctxt hdr - where - go :: All SingleEraBlock xs' - => NestedCtxt Header (HardForkBlock xs') a -> a -> NS Header xs' - go (NestedCtxt (NCZ ctxt)) x = Z (nest (DepPair (NestedCtxt ctxt) x)) - go (NestedCtxt (NCS ctxt)) x = S (go (NestedCtxt ctxt) x) + HardForkHeader . 
OneEraHeader $ go ctxt hdr + where + go :: + All SingleEraBlock xs' => + NestedCtxt Header (HardForkBlock xs') a -> a -> NS Header xs' + go (NestedCtxt (NCZ ctxt)) x = Z (nest (DepPair (NestedCtxt ctxt) x)) + go (NestedCtxt (NCS ctxt)) x = S (go (NestedCtxt ctxt) x) {------------------------------------------------------------------------------- ConvertRawHash -------------------------------------------------------------------------------} instance CanHardFork xs => ConvertRawHash (HardForkBlock xs) where - toShortRawHash _ = getOneEraHash + toShortRawHash _ = getOneEraHash fromShortRawHash _ = OneEraHash - hashSize _ = getSameValue hashSizes - where - hashSizes :: NP (K Word32) xs - hashSizes = hcpure proxySingle hashSizeOne + hashSize _ = getSameValue hashSizes + where + hashSizes :: NP (K Word32) xs + hashSizes = hcpure proxySingle hashSizeOne - hashSizeOne :: forall blk. SingleEraBlock blk => K Word32 blk - hashSizeOne = K $ hashSize (Proxy @blk) + hashSizeOne :: forall blk. SingleEraBlock blk => K Word32 blk + hashSizeOne = K $ hashSize (Proxy @blk) {------------------------------------------------------------------------------- HasAnnTip @@ -180,41 +189,47 @@ instance CanHardFork xs => HasAnnTip (HardForkBlock xs) where type TipInfo (HardForkBlock xs) = OneEraTipInfo xs getTipInfo = - OneEraTipInfo + OneEraTipInfo . hcmap proxySingle (WrapTipInfo . getTipInfo) . getOneEraHeader . getHardForkHeader tipInfoHash _ = - hcollapse + hcollapse . hcmap proxySingle (K . tipInfoOne) . getOneEraTipInfo - where - tipInfoOne :: forall blk. SingleEraBlock blk - => WrapTipInfo blk -> OneEraHash xs - tipInfoOne = OneEraHash - . toShortRawHash (Proxy @blk) - . tipInfoHash (Proxy @blk) - . unwrapTipInfo + where + tipInfoOne :: + forall blk. + SingleEraBlock blk => + WrapTipInfo blk -> OneEraHash xs + tipInfoOne = + OneEraHash + . toShortRawHash (Proxy @blk) + . tipInfoHash (Proxy @blk) + . 
unwrapTipInfo distribAnnTip :: SListI xs => AnnTip (HardForkBlock xs) -> NS AnnTip xs distribAnnTip AnnTip{..} = - hmap distrib (getOneEraTipInfo annTipInfo) - where - distrib :: WrapTipInfo blk -> AnnTip blk - distrib (WrapTipInfo info) = - AnnTip annTipSlotNo annTipBlockNo info + hmap distrib (getOneEraTipInfo annTipInfo) + where + distrib :: WrapTipInfo blk -> AnnTip blk + distrib (WrapTipInfo info) = + AnnTip annTipSlotNo annTipBlockNo info undistribAnnTip :: forall xs. SListI xs => NS AnnTip xs -> AnnTip (HardForkBlock xs) undistribAnnTip = hcollapse . himap undistrib - where - undistrib :: Index xs blk - -> AnnTip blk - -> K (AnnTip (HardForkBlock xs)) blk - undistrib index AnnTip{..} = K $ - AnnTip annTipSlotNo - annTipBlockNo - (OneEraTipInfo . injectNS index . WrapTipInfo $ annTipInfo) + where + undistrib :: + Index xs blk -> + AnnTip blk -> + K (AnnTip (HardForkBlock xs)) blk + undistrib index AnnTip{..} = + K $ + AnnTip + annTipSlotNo + annTipBlockNo + (OneEraTipInfo . injectNS index . WrapTipInfo $ annTipInfo) {------------------------------------------------------------------------------- BasicEnvelopeValidation @@ -222,40 +237,46 @@ undistribAnnTip = hcollapse . himap undistrib instance CanHardFork xs => BasicEnvelopeValidation (HardForkBlock xs) where expectedFirstBlockNo _ = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty p _ -> expectedFirstBlockNo p + case isNonEmpty (Proxy @xs) of + ProofNonEmpty p _ -> expectedFirstBlockNo p minimumPossibleSlotNo _ = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty p _ -> minimumPossibleSlotNo p + case isNonEmpty (Proxy @xs) of + ProofNonEmpty p _ -> minimumPossibleSlotNo p -- TODO: If the block is from a different era as the current tip, we just -- expect @succ b@. This may not be sufficient: if we ever transition /to/ -- an era with EBBs, this is not correct. 
expectedNextBlockNo _ (OneEraTipInfo oldTip) (OneEraTipInfo newBlock) b = - case Match.matchNS oldTip newBlock of - Right matched -> hcollapse $ hcmap proxySingle aux matched - Left _mismatch -> succ b - where - aux :: forall blk. SingleEraBlock blk - => Product WrapTipInfo WrapTipInfo blk - -> K BlockNo blk - aux (Pair (WrapTipInfo old) (WrapTipInfo new)) = K $ - expectedNextBlockNo (Proxy @blk) old new b + case Match.matchNS oldTip newBlock of + Right matched -> hcollapse $ hcmap proxySingle aux matched + Left _mismatch -> succ b + where + aux :: + forall blk. + SingleEraBlock blk => + Product WrapTipInfo WrapTipInfo blk -> + K BlockNo blk + aux (Pair (WrapTipInfo old) (WrapTipInfo new)) = + K $ + expectedNextBlockNo (Proxy @blk) old new b -- TODO: If the block is from a different era as the current tip, we just -- expect @succ s@. This may not be sufficient: if we ever transition /to/ -- an era with EBBs, this is not correct. minimumNextSlotNo _ (OneEraTipInfo oldTip) (OneEraTipInfo newBlock) s = - case Match.matchNS oldTip newBlock of - Right matched -> hcollapse $ hcmap proxySingle aux matched - Left _mismatch -> succ s - where - aux :: forall blk. SingleEraBlock blk - => Product WrapTipInfo WrapTipInfo blk - -> K SlotNo blk - aux (Pair (WrapTipInfo old) (WrapTipInfo new)) = K $ - minimumNextSlotNo (Proxy @blk) old new s + case Match.matchNS oldTip newBlock of + Right matched -> hcollapse $ hcmap proxySingle aux matched + Left _mismatch -> succ s + where + aux :: + forall blk. 
+ SingleEraBlock blk => + Product WrapTipInfo WrapTipInfo blk -> + K SlotNo blk + aux (Pair (WrapTipInfo old) (WrapTipInfo new)) = + K $ + minimumNextSlotNo (Proxy @blk) old new s {------------------------------------------------------------------------------- Other instances (primarily for the benefit of tests) @@ -263,20 +284,23 @@ instance CanHardFork xs => BasicEnvelopeValidation (HardForkBlock xs) where instance All Eq xs => Eq (HardForkBlock xs) where (==) = (aux .: Match.matchNS) `on` (getOneEraBlock . getHardForkBlock) - where - aux :: Either (Match.Mismatch I I xs) (NS (Product I I) xs) -> Bool - aux (Left _) = False - aux (Right m) = hcollapse $ - hcmap (Proxy @Eq) (\(Pair x y) -> K $ x == y) m + where + aux :: Either (Match.Mismatch I I xs) (NS (Product I I) xs) -> Bool + aux (Left _) = False + aux (Right m) = + hcollapse $ + hcmap (Proxy @Eq) (\(Pair x y) -> K $ x == y) m instance All (Compose Eq Header) xs => Eq (Header (HardForkBlock xs)) where (==) = (aux .: Match.matchNS) `on` (getOneEraHeader . 
getHardForkHeader) - where - aux :: Either (Match.Mismatch Header Header xs) (NS (Product Header Header) xs) - -> Bool - aux (Left _) = False - aux (Right m) = hcollapse $ - hcmap - (Proxy @(Compose Eq Header)) - (\(Pair x y) -> K $ x == y) - m + where + aux :: + Either (Match.Mismatch Header Header xs) (NS (Product Header Header) xs) -> + Bool + aux (Left _) = False + aux (Right m) = + hcollapse $ + hcmap + (Proxy @(Compose Eq Header)) + (\(Pair x y) -> K $ x == y) + m diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Compat.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Compat.hs index a2d77d836c..196a4c2397 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Compat.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Compat.hs @@ -6,31 +6,37 @@ {-# LANGUAGE StandaloneKindSignatures #-} {-# LANGUAGE TypeOperators #-} -module Ouroboros.Consensus.HardFork.Combinator.Compat ( - HardForkCompatQuery (..) +module Ouroboros.Consensus.HardFork.Combinator.Compat + ( HardForkCompatQuery (..)
+ -- * Convenience constructors , compatGetEraStart , compatGetInterpreter , compatIfCurrent + -- * Wrappers , forwardCompatQuery , singleEraCompatQuery ) where -import Data.Kind (Type) -import Data.SOP.BasicFunctors -import Data.SOP.NonEmpty -import Data.SOP.Strict -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query -import qualified Ouroboros.Consensus.HardFork.History.Qry as Qry -import Ouroboros.Consensus.HardFork.History.Summary (Bound, Summary, - initBound, neverForksSummary) -import Ouroboros.Consensus.Ledger.Query +import Data.Kind (Type) +import Data.SOP.BasicFunctors +import Data.SOP.NonEmpty +import Data.SOP.Strict +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query +import Ouroboros.Consensus.HardFork.History.Qry qualified as Qry +import Ouroboros.Consensus.HardFork.History.Summary + ( Bound + , Summary + , initBound + , neverForksSummary + ) +import Ouroboros.Consensus.Ledger.Query {------------------------------------------------------------------------------- Query language @@ -41,17 +47,15 @@ import Ouroboros.Consensus.Ledger.Query type HardForkCompatQuery :: Type -> QueryFootprint -> Type -> Type data HardForkCompatQuery blk fp result where CompatIfCurrent :: - BlockQuery blk fp result - -> HardForkCompatQuery blk fp result - + BlockQuery blk fp result -> + HardForkCompatQuery blk fp result CompatAnytime :: - QueryAnytime result - -> EraIndex (HardForkIndices blk) - -> HardForkCompatQuery blk QFNoTables result - + QueryAnytime result -> + 
EraIndex (HardForkIndices blk) -> + HardForkCompatQuery blk QFNoTables result CompatHardFork :: - QueryHardFork (HardForkIndices blk) result - -> HardForkCompatQuery blk QFNoTables result + QueryHardFork (HardForkIndices blk) result -> + HardForkCompatQuery blk QFNoTables result {------------------------------------------------------------------------------- Convenience constructors for 'HardForkCompatQuery' @@ -59,21 +63,21 @@ data HardForkCompatQuery blk fp result where -- | Submit query to underlying ledger compatIfCurrent :: - BlockQuery fp blk result - -> HardForkCompatQuery fp blk result + BlockQuery fp blk result -> + HardForkCompatQuery fp blk result compatIfCurrent = CompatIfCurrent -- | Get the start of the specified era, if known compatGetEraStart :: - EraIndex (HardForkIndices blk) - -> HardForkCompatQuery blk QFNoTables (Maybe Bound) + EraIndex (HardForkIndices blk) -> + HardForkCompatQuery blk QFNoTables (Maybe Bound) compatGetEraStart = CompatAnytime GetEraStart -- | Get an interpreter for history queries -- -- I.e., this can be used for slot/epoch/time conversions. compatGetInterpreter :: - HardForkCompatQuery blk QFNoTables (Qry.Interpreter (HardForkIndices blk)) + HardForkCompatQuery blk QFNoTables (Qry.Interpreter (HardForkIndices blk)) compatGetInterpreter = CompatHardFork GetInterpreter {------------------------------------------------------------------------------- @@ -83,43 +87,45 @@ compatGetInterpreter = CompatHardFork GetInterpreter -- | Wrapper used when connecting to a server that's running the HFC with -- at least two eras forwardCompatQuery :: - forall m x xs fp. IsNonEmpty xs - => (forall result. BlockQuery (HardForkBlock (x ': xs)) fp result -> m result) - -- ^ Submit a query through the LocalStateQuery protocol. - -> (forall result. HardForkCompatQuery (HardForkBlock (x ': xs)) fp result -> m result) + forall m x xs fp. + IsNonEmpty xs => + -- | Submit a query through the LocalStateQuery protocol. + (forall result. 
BlockQuery (HardForkBlock (x ': xs)) fp result -> m result) -> + (forall result. HardForkCompatQuery (HardForkBlock (x ': xs)) fp result -> m result) forwardCompatQuery f = go - where - go :: HardForkCompatQuery (HardForkBlock (x ': xs)) fp result -> m result - go (CompatIfCurrent qry) = f qry - go (CompatAnytime qry ix) = f (QueryAnytime qry ix) - go (CompatHardFork qry) = f (QueryHardFork qry) + where + go :: HardForkCompatQuery (HardForkBlock (x ': xs)) fp result -> m result + go (CompatIfCurrent qry) = f qry + go (CompatAnytime qry ix) = f (QueryAnytime qry ix) + go (CompatHardFork qry) = f (QueryHardFork qry) -- | Wrapper used when connecting to a server that's not using the HFC, or -- is using the HFC but with a single era only. singleEraCompatQuery :: - forall m blk era fp. (Monad m, HardForkIndices blk ~ '[era]) - => EpochSize - -> SlotLength - -> GenesisWindow - -> (forall result. BlockQuery blk fp result -> m result) - -- ^ Submit a query through the LocalStateQuery protocol. - -> (forall result. HardForkCompatQuery blk fp result -> m result) + forall m blk era fp. + (Monad m, HardForkIndices blk ~ '[era]) => + EpochSize -> + SlotLength -> + GenesisWindow -> + -- | Submit a query through the LocalStateQuery protocol. + (forall result. BlockQuery blk fp result -> m result) -> + (forall result. 
HardForkCompatQuery blk fp result -> m result) singleEraCompatQuery epochSize slotLen genesisWindow f = go - where - go :: HardForkCompatQuery blk fp result -> m result - go (CompatIfCurrent qry) = f qry - go (CompatAnytime qry ix) = const (goAnytime qry) (trivialIndex ix) - go (CompatHardFork qry) = goHardFork qry + where + go :: HardForkCompatQuery blk fp result -> m result + go (CompatIfCurrent qry) = f qry + go (CompatAnytime qry ix) = const (goAnytime qry) (trivialIndex ix) + go (CompatHardFork qry) = goHardFork qry - goAnytime :: QueryAnytime result -> m result - goAnytime GetEraStart = return $ Just initBound + goAnytime :: QueryAnytime result -> m result + goAnytime GetEraStart = return $ Just initBound - goHardFork :: QueryHardFork '[era] result -> m result - goHardFork GetInterpreter = return $ Qry.mkInterpreter summary - goHardFork GetCurrentEra = return eraIndexZero + goHardFork :: QueryHardFork '[era] result -> m result + goHardFork GetInterpreter = return $ Qry.mkInterpreter summary + goHardFork GetCurrentEra = return eraIndexZero - summary :: Summary '[era] - summary = neverForksSummary epochSize slotLen genesisWindow + summary :: Summary '[era] + summary = neverForksSummary epochSize slotLen genesisWindow - trivialIndex :: EraIndex '[era] -> () - trivialIndex (EraIndex (Z (K ()))) = () + trivialIndex :: EraIndex '[era] -> () + trivialIndex (EraIndex (Z (K ()))) = () diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Condense.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Condense.hs index afed7eafb5..8839651554 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Condense.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Condense.hs @@ -3,7 +3,6 @@ {-# LANGUAGE QuantifiedConstraints #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC 
-Wno-orphans #-} -- | Condense instances @@ -14,33 +13,36 @@ -- NOTE: No guarantees are made about what these condense instances look like. module Ouroboros.Consensus.HardFork.Combinator.Condense (CondenseConstraints) where -import Data.Coerce -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Strict -import Ouroboros.Consensus.HardFork.Combinator -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util.Condense +import Data.Coerce +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Strict +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- Infrastructure -------------------------------------------------------------------------------} -class ( Condense blk - , Condense (Header blk) - , Condense (GenTx blk) - , Condense (GenTxId blk) - ) => CondenseConstraints blk +class + ( Condense blk + , Condense (Header blk) + , Condense (GenTx blk) + , Condense (GenTxId blk) + ) => + CondenseConstraints blk pCondense :: Proxy CondenseConstraints pCondense = Proxy -defaultCondenseNS :: ( All CondenseConstraints xs - , forall blk. CondenseConstraints blk => Condense (f blk) - ) - => Proxy f -> NS f xs -> String +defaultCondenseNS :: + ( All CondenseConstraints xs + , forall blk. CondenseConstraints blk => Condense (f blk) + ) => + Proxy f -> NS f xs -> String defaultCondenseNS _ = hcollapse . hcmap pCondense (K .
condense) {------------------------------------------------------------------------------- diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Degenerate.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Degenerate.hs index 289f92ee50..b5b1d37c23 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Degenerate.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Degenerate.hs @@ -9,11 +9,10 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE ViewPatterns #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.Degenerate ( - -- * Pattern synonyms +module Ouroboros.Consensus.HardFork.Combinator.Degenerate + ( -- * Pattern synonyms BlockConfig (DegenBlockConfig) , BlockQuery (DegenQuery) , CodecConfig (DegenCodecConfig) @@ -32,146 +31,152 @@ module Ouroboros.Consensus.HardFork.Combinator.Degenerate ( , TxId (DegenGenTxId) ) where -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Strict -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator.Abstract.NoHardForks -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Embed.Unary -import Ouroboros.Consensus.HardFork.Combinator.Ledger -import Ouroboros.Consensus.HardFork.Combinator.Ledger.CommonProtocolParams () -import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query -import Ouroboros.Consensus.HardFork.Combinator.Mempool -import Ouroboros.Consensus.HardFork.Combinator.Node () -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk () -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToClient () -import 
Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToNode () -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.SOP.Functors (Flip (..)) +import Data.SOP.Strict +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator.Abstract.NoHardForks +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Embed.Unary +import Ouroboros.Consensus.HardFork.Combinator.Ledger +import Ouroboros.Consensus.HardFork.Combinator.Ledger.CommonProtocolParams () +import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query +import Ouroboros.Consensus.HardFork.Combinator.Mempool +import Ouroboros.Consensus.HardFork.Combinator.Node () +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk () +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToClient () +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToNode () +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- Simple patterns -------------------------------------------------------------------------------} -{-# COMPLETE DegenApplyTxErr #-} -{-# COMPLETE DegenBlock #-} -{-# COMPLETE DegenBlockConfig #-} -{-# COMPLETE DegenCodecConfig #-} -{-# COMPLETE DegenGenTx #-} -{-# COMPLETE DegenGenTxId #-} -{-# COMPLETE DegenHeader #-} -{-# COMPLETE DegenLedgerError #-} -{-# COMPLETE DegenLedgerState #-} +{-# COMPLETE DegenApplyTxErr #-} +{-# COMPLETE DegenBlock #-} +{-# 
COMPLETE DegenBlockConfig #-} +{-# COMPLETE DegenCodecConfig #-} +{-# COMPLETE DegenGenTx #-} +{-# COMPLETE DegenGenTxId #-} +{-# COMPLETE DegenHeader #-} +{-# COMPLETE DegenLedgerError #-} +{-# COMPLETE DegenLedgerState #-} {-# COMPLETE DegenOtherHeaderEnvelopeError #-} -{-# COMPLETE DegenQuery #-} -{-# COMPLETE DegenQueryResult #-} -{-# COMPLETE DegenTipInfo #-} +{-# COMPLETE DegenQuery #-} +{-# COMPLETE DegenQueryResult #-} +{-# COMPLETE DegenTipInfo #-} pattern DegenBlock :: - forall b. NoHardForks b - => b - -> HardForkBlock '[b] + forall b. + NoHardForks b => + b -> + HardForkBlock '[b] pattern DegenBlock x <- (project' (Proxy @(I b)) -> x) where DegenBlock x = inject' (Proxy @(I b)) x pattern DegenHeader :: - NoHardForks b - => Header b - -> Header (HardForkBlock '[b]) + NoHardForks b => + Header b -> + Header (HardForkBlock '[b]) pattern DegenHeader x <- (project -> x) where DegenHeader x = inject x pattern DegenGenTx :: - NoHardForks b - => GenTx b - -> GenTx (HardForkBlock '[b]) + NoHardForks b => + GenTx b -> + GenTx (HardForkBlock '[b]) pattern DegenGenTx x <- (project -> x) where DegenGenTx x = inject x pattern DegenGenTxId :: - forall b. NoHardForks b - => GenTxId b - -> GenTxId (HardForkBlock '[b]) + forall b. + NoHardForks b => + GenTxId b -> + GenTxId (HardForkBlock '[b]) pattern DegenGenTxId x <- (project' (Proxy @(WrapGenTxId b)) -> x) where DegenGenTxId x = inject' (Proxy @(WrapGenTxId b)) x pattern DegenApplyTxErr :: - forall b. NoHardForks b - => ApplyTxErr b - -> HardForkApplyTxErr '[b] -- ApplyTxErr (HardForkBlock '[b]) + forall b. + NoHardForks b => + ApplyTxErr b -> + HardForkApplyTxErr '[b] -- ApplyTxErr (HardForkBlock '[b]) pattern DegenApplyTxErr x <- (project' (Proxy @(WrapApplyTxErr b)) -> x) where DegenApplyTxErr x = inject' (Proxy @(WrapApplyTxErr b)) x pattern DegenLedgerError :: - forall b. NoHardForks b - => LedgerError b - -> HardForkLedgerError '[b] -- LedgerError (HardForkBlock '[b]) + forall b. 
+ NoHardForks b => + LedgerError b -> + HardForkLedgerError '[b] -- LedgerError (HardForkBlock '[b]) pattern DegenLedgerError x <- (project' (Proxy @(WrapLedgerErr b)) -> x) where DegenLedgerError x = inject' (Proxy @(WrapLedgerErr b)) x pattern DegenOtherHeaderEnvelopeError :: - forall b. NoHardForks b - => OtherHeaderEnvelopeError b - -> HardForkEnvelopeErr '[b] -- OtherHeaderEnvelopeError (HardForkBlock '[b]) + forall b. + NoHardForks b => + OtherHeaderEnvelopeError b -> + HardForkEnvelopeErr '[b] -- OtherHeaderEnvelopeError (HardForkBlock '[b]) pattern DegenOtherHeaderEnvelopeError x <- (project' (Proxy @(WrapEnvelopeErr b)) -> x) where DegenOtherHeaderEnvelopeError x = inject' (Proxy @(WrapEnvelopeErr b)) x pattern DegenTipInfo :: - forall b. NoHardForks b - => TipInfo b - -> OneEraTipInfo '[b] -- TipInfo (HardForkBlock '[b]) + forall b. + NoHardForks b => + TipInfo b -> + OneEraTipInfo '[b] -- TipInfo (HardForkBlock '[b]) pattern DegenTipInfo x <- (project' (Proxy @(WrapTipInfo b)) -> x) where DegenTipInfo x = inject' (Proxy @(WrapTipInfo b)) x pattern DegenQuery :: - () - => HardForkQueryResult '[b] result ~ a - => BlockQuery b fp result - -> BlockQuery (HardForkBlock '[b]) fp a + () => + HardForkQueryResult '[b] result ~ a => + BlockQuery b fp result -> + BlockQuery (HardForkBlock '[b]) fp a pattern DegenQuery x <- (projQuery' -> ProjHardForkQuery x) where DegenQuery x = injQuery x pattern DegenQueryResult :: - result - -> HardForkQueryResult '[b] result + result -> + HardForkQueryResult '[b] result pattern DegenQueryResult x <- (projQueryResult -> x) where DegenQueryResult x = injQueryResult x pattern DegenCodecConfig :: - NoHardForks b - => CodecConfig b - -> CodecConfig (HardForkBlock '[b]) + NoHardForks b => + CodecConfig b -> + CodecConfig (HardForkBlock '[b]) pattern DegenCodecConfig x <- (project -> x) where DegenCodecConfig x = inject x pattern DegenBlockConfig :: - NoHardForks b - => BlockConfig b - -> BlockConfig (HardForkBlock '[b]) + NoHardForks 
b => + BlockConfig b -> + BlockConfig (HardForkBlock '[b]) pattern DegenBlockConfig x <- (project -> x) where DegenBlockConfig x = inject x pattern DegenLedgerState :: - NoHardForks b - => LedgerState b mk - -> LedgerState (HardForkBlock '[b]) mk + NoHardForks b => + LedgerState b mk -> + LedgerState (HardForkBlock '[b]) mk pattern DegenLedgerState x <- (unFlip . project . Flip -> x) where DegenLedgerState x = unFlip $ inject $ Flip x @@ -185,35 +190,37 @@ pattern DegenLedgerState x <- (unFlip . project . Flip -> x) -------------------------------------------------------------------------------} {-# COMPLETE DegenConsensusConfig #-} -{-# COMPLETE DegenLedgerConfig #-} -{-# COMPLETE DegenTopLevelConfig #-} +{-# COMPLETE DegenLedgerConfig #-} +{-# COMPLETE DegenTopLevelConfig #-} pattern DegenConsensusConfig :: - PartialConsensusConfig (BlockProtocol b) - -> ConsensusConfig (BlockProtocol (HardForkBlock '[b])) + PartialConsensusConfig (BlockProtocol b) -> + ConsensusConfig (BlockProtocol (HardForkBlock '[b])) pattern DegenConsensusConfig x <- - HardForkConsensusConfig { - hardForkConsensusConfigPerEra = PerEraConsensusConfig - ( WrapPartialConsensusConfig x - :* Nil + HardForkConsensusConfig + { hardForkConsensusConfigPerEra = + PerEraConsensusConfig + ( WrapPartialConsensusConfig x + :* Nil ) - } + } pattern DegenLedgerConfig :: - PartialLedgerConfig b - -> HardForkLedgerConfig '[b] -- LedgerConfig (HardForkBlock '[b]) + PartialLedgerConfig b -> + HardForkLedgerConfig '[b] -- LedgerConfig (HardForkBlock '[b]) pattern DegenLedgerConfig x <- - HardForkLedgerConfig { - hardForkLedgerConfigPerEra = PerEraLedgerConfig - ( WrapPartialLedgerConfig x - :* Nil + HardForkLedgerConfig + { hardForkLedgerConfigPerEra = + PerEraLedgerConfig + ( WrapPartialLedgerConfig x + :* Nil ) - } + } pattern DegenTopLevelConfig :: - NoHardForks b - => TopLevelConfig b - -> TopLevelConfig (HardForkBlock '[b]) + NoHardForks b => + TopLevelConfig b -> + TopLevelConfig (HardForkBlock '[b]) 
pattern DegenTopLevelConfig x <- (project -> x) where DegenTopLevelConfig x = inject x diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Binary.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Binary.hs index 97ab5adc7a..9c67eff2c8 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Binary.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Binary.hs @@ -6,116 +6,134 @@ module Ouroboros.Consensus.HardFork.Combinator.Embed.Binary (protocolInfoBinary) where -import Control.Exception (assert) -import Data.Align (alignWith) -import Data.SOP.Counting (exactlyTwo) -import Data.SOP.Functors (Flip (..)) -import Data.SOP.OptNP (OptNP (..)) -import Data.SOP.Strict (NP (..)) -import Data.These (These (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Basics (LedgerConfig) -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Protocol.Abstract (protocolSecurityParam) -import Ouroboros.Consensus.TypeFamilyWrappers +import Control.Exception (assert) +import Data.Align (alignWith) +import Data.SOP.Counting (exactlyTwo) +import Data.SOP.Functors (Flip (..)) +import Data.SOP.OptNP (OptNP (..)) +import Data.SOP.Strict (NP (..)) +import Data.These (These (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Basics (LedgerConfig) +import Ouroboros.Consensus.Ledger.Extended +import 
Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Protocol.Abstract (protocolSecurityParam) +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- ProtocolInfo -------------------------------------------------------------------------------} protocolInfoBinary :: - forall m blk1 blk2. - (CanHardFork '[blk1, blk2], Monad m) - -- First era - => ProtocolInfo blk1 - -> m [BlockForging m blk1] - -> History.EraParams - -> (ConsensusConfig (BlockProtocol blk1) -> PartialConsensusConfig (BlockProtocol blk1)) - -> (LedgerConfig blk1 -> PartialLedgerConfig blk1) - -- Second era - -> ProtocolInfo blk2 - -> m [BlockForging m blk2] - -> History.EraParams - -> (ConsensusConfig (BlockProtocol blk2) -> PartialConsensusConfig (BlockProtocol blk2)) - -> (LedgerConfig blk2 -> PartialLedgerConfig blk2) - -> ( ProtocolInfo (HardForkBlock '[blk1, blk2]) - , m [BlockForging m (HardForkBlock '[blk1, blk2])] - ) -protocolInfoBinary protocolInfo1 blockForging1 eraParams1 toPartialConsensusConfig1 toPartialLedgerConfig1 - protocolInfo2 blockForging2 eraParams2 toPartialConsensusConfig2 toPartialLedgerConfig2 = - ( ProtocolInfo { - pInfoConfig = TopLevelConfig { - topLevelConfigProtocol = HardForkConsensusConfig { - hardForkConsensusConfigK = k - , hardForkConsensusConfigShape = shape - , hardForkConsensusConfigPerEra = PerEraConsensusConfig - ( WrapPartialConsensusConfig (toPartialConsensusConfig1 consensusConfig1) - :* WrapPartialConsensusConfig (toPartialConsensusConfig2 consensusConfig2) - :* Nil - ) + forall m blk1 blk2. 
+ (CanHardFork '[blk1, blk2], Monad m) => + -- First era + ProtocolInfo blk1 -> + m [BlockForging m blk1] -> + History.EraParams -> + (ConsensusConfig (BlockProtocol blk1) -> PartialConsensusConfig (BlockProtocol blk1)) -> + (LedgerConfig blk1 -> PartialLedgerConfig blk1) -> + -- Second era + ProtocolInfo blk2 -> + m [BlockForging m blk2] -> + History.EraParams -> + (ConsensusConfig (BlockProtocol blk2) -> PartialConsensusConfig (BlockProtocol blk2)) -> + (LedgerConfig blk2 -> PartialLedgerConfig blk2) -> + ( ProtocolInfo (HardForkBlock '[blk1, blk2]) + , m [BlockForging m (HardForkBlock '[blk1, blk2])] + ) +protocolInfoBinary + protocolInfo1 + blockForging1 + eraParams1 + toPartialConsensusConfig1 + toPartialLedgerConfig1 + protocolInfo2 + blockForging2 + eraParams2 + toPartialConsensusConfig2 + toPartialLedgerConfig2 = + ( ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = + HardForkConsensusConfig + { hardForkConsensusConfigK = k + , hardForkConsensusConfigShape = shape + , hardForkConsensusConfigPerEra = + PerEraConsensusConfig + ( WrapPartialConsensusConfig (toPartialConsensusConfig1 consensusConfig1) + :* WrapPartialConsensusConfig (toPartialConsensusConfig2 consensusConfig2) + :* Nil + ) + } + , topLevelConfigLedger = + HardForkLedgerConfig + { hardForkLedgerConfigShape = shape + , hardForkLedgerConfigPerEra = + PerEraLedgerConfig + ( WrapPartialLedgerConfig (toPartialLedgerConfig1 ledgerConfig1) + :* WrapPartialLedgerConfig (toPartialLedgerConfig2 ledgerConfig2) + :* Nil + ) + } + , topLevelConfigBlock = + HardForkBlockConfig $ + PerEraBlockConfig $ + (blockConfig1 :* blockConfig2 :* Nil) + , topLevelConfigCodec = + HardForkCodecConfig $ + PerEraCodecConfig $ + (codecConfig1 :* codecConfig2 :* Nil) + , topLevelConfigStorage = + HardForkStorageConfig $ + PerEraStorageConfig $ + (storageConfig1 :* storageConfig2 :* Nil) + , topLevelConfigCheckpoints = emptyCheckpointsMap } - , topLevelConfigLedger = HardForkLedgerConfig { - 
hardForkLedgerConfigShape = shape - , hardForkLedgerConfigPerEra = PerEraLedgerConfig - ( WrapPartialLedgerConfig (toPartialLedgerConfig1 ledgerConfig1) - :* WrapPartialLedgerConfig (toPartialLedgerConfig2 ledgerConfig2) - :* Nil - ) + , pInfoInitLedger = + ExtLedgerState + { ledgerState = + HardForkLedgerState $ + initHardForkState (Flip initLedgerState1) + , headerState = + genesisHeaderState $ + initHardForkState $ + WrapChainDepState $ + headerStateChainDep initHeaderState1 } - , topLevelConfigBlock = - HardForkBlockConfig $ - PerEraBlockConfig $ - (blockConfig1 :* blockConfig2 :* Nil) - , topLevelConfigCodec = - HardForkCodecConfig $ - PerEraCodecConfig $ - (codecConfig1 :* codecConfig2 :* Nil) - , topLevelConfigStorage = - HardForkStorageConfig $ - PerEraStorageConfig $ - (storageConfig1 :* storageConfig2 :* Nil) - , topLevelConfigCheckpoints = emptyCheckpointsMap - } - , pInfoInitLedger = ExtLedgerState { - ledgerState = - HardForkLedgerState $ - initHardForkState (Flip initLedgerState1) - , headerState = - genesisHeaderState $ - initHardForkState $ - WrapChainDepState $ - headerStateChainDep initHeaderState1 - } - } + } , alignWith alignBlockForging <$> blockForging1 <*> blockForging2 ) - where - ProtocolInfo { - pInfoConfig = TopLevelConfig { - topLevelConfigProtocol = consensusConfig1 - , topLevelConfigLedger = ledgerConfig1 - , topLevelConfigBlock = blockConfig1 - , topLevelConfigCodec = codecConfig1 - , topLevelConfigStorage = storageConfig1 + where + ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = consensusConfig1 + , topLevelConfigLedger = ledgerConfig1 + , topLevelConfigBlock = blockConfig1 + , topLevelConfigCodec = codecConfig1 + , topLevelConfigStorage = storageConfig1 } - , pInfoInitLedger = ExtLedgerState { - ledgerState = initLedgerState1 + , pInfoInitLedger = + ExtLedgerState + { ledgerState = initLedgerState1 , headerState = initHeaderState1 } } = protocolInfo1 - ProtocolInfo { - pInfoConfig = TopLevelConfig { - 
topLevelConfigProtocol = consensusConfig2 - , topLevelConfigLedger = ledgerConfig2 - , topLevelConfigBlock = blockConfig2 - , topLevelConfigCodec = codecConfig2 - , topLevelConfigStorage = storageConfig2 + ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = consensusConfig2 + , topLevelConfigLedger = ledgerConfig2 + , topLevelConfigBlock = blockConfig2 + , topLevelConfigCodec = codecConfig2 + , topLevelConfigStorage = storageConfig2 } } = protocolInfo2 @@ -128,8 +146,8 @@ protocolInfoBinary protocolInfo1 blockForging1 eraParams1 toPartialConsensusConf shape = History.Shape $ exactlyTwo eraParams1 eraParams2 alignBlockForging :: - These (BlockForging m blk1) (BlockForging m blk2) - -> BlockForging m (HardForkBlock '[blk1, blk2]) + These (BlockForging m blk1) (BlockForging m blk2) -> + BlockForging m (HardForkBlock '[blk1, blk2]) alignBlockForging = \case This bf1 -> hardForkBlockForging diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Nary.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Nary.hs index 1074e8ad7b..9229295867 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Nary.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Nary.hs @@ -7,45 +7,51 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeOperators #-} -module Ouroboros.Consensus.HardFork.Combinator.Embed.Nary ( - Inject (..) +module Ouroboros.Consensus.HardFork.Combinator.Embed.Nary + ( Inject (..) 
, InjectionIndex (InjectionIndex) , inject' + -- * Defaults , injectHardForkState , injectNestedCtxt_ , injectQuery + -- * Initial 'ExtLedgerState' , injectInitialExtLedgerState + -- * Convenience , forgetInjectionIndex , oracularInjectionIndex ) where -import Data.Bifunctor (first) -import Data.Coerce (Coercible, coerce) -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Counting (Exactly (..)) -import Data.SOP.Dict (Dict (..)) -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Index -import qualified Data.SOP.InPairs as InPairs -import Data.SOP.Strict -import qualified Data.SOP.Telescope as Telescope -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation (AnnTip, HeaderState (..), - genesisHeaderState) -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState (..)) -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.Bifunctor (first) +import Data.Coerce (Coercible, coerce) +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Counting (Exactly (..)) +import Data.SOP.Dict (Dict (..)) +import Data.SOP.Functors (Flip (..)) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Index +import Data.SOP.Strict +import Data.SOP.Telescope qualified as Telescope +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation + ( AnnTip + , HeaderState 
(..) + , genesisHeaderState + ) +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState (..)) +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- Injection for a single block into a HardForkBlock @@ -53,21 +59,22 @@ import Ouroboros.Consensus.TypeFamilyWrappers class Inject f where inject :: - forall x xs. (CanHardFork xs, HasCanonicalTxIn xs, HasHardForkTxOut xs) - => InjectionIndex xs x - -> f x - -> f (HardForkBlock xs) + forall x xs. + (CanHardFork xs, HasCanonicalTxIn xs, HasHardForkTxOut xs) => + InjectionIndex xs x -> + f x -> + f (HardForkBlock xs) inject' :: - forall f a b x xs. - ( Inject f - , CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - , Coercible a (f x) - , Coercible b (f (HardForkBlock xs)) - ) - => Proxy f -> InjectionIndex xs x -> a -> b + forall f a b x xs. + ( Inject f + , CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + , Coercible a (f x) + , Coercible b (f (HardForkBlock xs)) + ) => + Proxy f -> InjectionIndex xs x -> a -> b inject' _ iidx = coerce . inject @f iidx . coerce {------------------------------------------------------------------------------- @@ -77,8 +84,8 @@ inject' _ iidx = coerce . inject @f iidx . coerce -- | This data type is isomorphic to an 'Index' that additionally provides a -- 'History.Bound' for every era up to and including that index, but none of -- the subsequent eras. 
-newtype InjectionIndex xs x = - InjectionIndex (Telescope (K History.Bound) (State.Current ((:~:) x)) xs) +newtype InjectionIndex xs x + = InjectionIndex (Telescope (K History.Bound) (State.Current ((:~:) x)) xs) -- | Many instances of 'Inject' do not need the 'History.Bound's, eg those that -- do not construct 'HardForkState's @@ -94,15 +101,15 @@ forgetInjectionIndex (InjectionIndex tele) = -- INVARIANT: the result is completely independent of the 'history.Bound's for -- eras /after/ the given 'Index'. oracularInjectionIndex :: - SListI xs - => Exactly xs History.Bound - -> Index xs x - -> InjectionIndex xs x + SListI xs => + Exactly xs History.Bound -> + Index xs x -> + InjectionIndex xs x oracularInjectionIndex (Exactly np) (Index idx) = InjectionIndex $ Telescope.bihzipWith (\x (K ()) -> x) - (\(K start) Refl -> State.Current { currentStart = start, currentState = Refl }) + (\(K start) Refl -> State.Current{currentStart = start, currentState = Refl}) np $ Telescope.fromTip idx @@ -113,53 +120,54 @@ oracularInjectionIndex (Exactly np) (Index idx) = -- next. firstBound :: InjectionIndex xs x -> History.Bound firstBound (InjectionIndex tele) = case tele of - TZ State.Current {currentStart = start} -> start - TS (K start) _tele' -> start + TZ State.Current{currentStart = start} -> start + TS (K start) _tele' -> start {------------------------------------------------------------------------------- Defaults (to ease implementation) -------------------------------------------------------------------------------} injectNestedCtxt_ :: - forall f x xs a. - Index xs x - -> NestedCtxt_ x f a - -> NestedCtxt_ (HardForkBlock xs) f a + forall f x xs a. + Index xs x -> + NestedCtxt_ x f a -> + NestedCtxt_ (HardForkBlock xs) f a injectNestedCtxt_ idx nc = case idx of - IZ -> NCZ nc - IS idx' -> NCS (injectNestedCtxt_ idx' nc) + IZ -> NCZ nc + IS idx' -> NCS (injectNestedCtxt_ idx' nc) injectQuery :: - forall x xs result fp. 
- Index xs x - -> BlockQuery x fp result - -> QueryIfCurrent xs fp result + forall x xs result fp. + Index xs x -> + BlockQuery x fp result -> + QueryIfCurrent xs fp result injectQuery idx q = case idx of - IZ -> QZ q - IS idx' -> QS (injectQuery idx' q) + IZ -> QZ q + IS idx' -> QS (injectQuery idx' q) injectHardForkState :: - forall f x xs. - InjectionIndex xs x - -> f x - -> HardForkState f xs + forall f x xs. + InjectionIndex xs x -> + f x -> + HardForkState f xs injectHardForkState iidx x = - HardForkState $ go iidx - where - go :: - InjectionIndex xs' x - -> Telescope (K State.Past) (State.Current f) xs' - go (InjectionIndex (TZ current@State.Current { currentState = Refl })) = - TZ - current { State.currentState = x } - go (InjectionIndex (TS (K start) tele)) = - TS - (K State.Past { - pastStart = start - , pastEnd = firstBound (InjectionIndex tele) + HardForkState $ go iidx + where + go :: + InjectionIndex xs' x -> + Telescope (K State.Past) (State.Current f) xs' + go (InjectionIndex (TZ current@State.Current{currentState = Refl})) = + TZ + current{State.currentState = x} + go (InjectionIndex (TS (K start) tele)) = + TS + ( K + State.Past + { pastStart = start + , pastEnd = firstBound (InjectionIndex tele) } - ) - (go (InjectionIndex tele)) + ) + (go (InjectionIndex tele)) {------------------------------------------------------------------------------- Instances @@ -176,22 +184,22 @@ instance Inject Header where instance Inject SerialisedHeader where inject iidx = - serialisedHeaderFromPair + serialisedHeaderFromPair . first (mapSomeNestedCtxt (injectNestedCtxt_ idx)) . serialisedHeaderToPair - where - idx = forgetInjectionIndex iidx + where + idx = forgetInjectionIndex iidx instance Inject WrapHeaderHash where inject (iidx :: InjectionIndex xs x) = case dictIndexAll (Proxy @SingleEraBlock) idx of Dict -> - WrapHeaderHash - . OneEraHash - . toShortRawHash (Proxy @x) - . unwrapHeaderHash - where - idx = forgetInjectionIndex iidx + WrapHeaderHash + . 
OneEraHash + . toShortRawHash (Proxy @x) + . unwrapHeaderHash + where + idx = forgetInjectionIndex iidx instance Inject GenTx where inject = injectNS' (Proxy @GenTx) . forgetInjectionIndex @@ -201,41 +209,45 @@ instance Inject WrapGenTxId where instance Inject WrapApplyTxErr where inject = - ( (WrapApplyTxErr . HardForkApplyTxErrFromEra) - .: injectNS' (Proxy @WrapApplyTxErr) - ) - . forgetInjectionIndex + ( (WrapApplyTxErr . HardForkApplyTxErrFromEra) + .: injectNS' (Proxy @WrapApplyTxErr) + ) + . forgetInjectionIndex instance Inject (SomeBlockQuery :.: BlockQuery) where inject iidx (Comp (SomeBlockQuery q)) = - Comp $ SomeBlockQuery (QueryIfCurrent (injectQuery idx q)) - where - idx = forgetInjectionIndex iidx + Comp $ SomeBlockQuery (QueryIfCurrent (injectQuery idx q)) + where + idx = forgetInjectionIndex iidx instance Inject AnnTip where inject = - (undistribAnnTip .: injectNS' (Proxy @AnnTip)) . forgetInjectionIndex + (undistribAnnTip .: injectNS' (Proxy @AnnTip)) . forgetInjectionIndex instance Inject (Flip LedgerState mk) where inject iidx = - Flip . HardForkLedgerState . injectHardForkState iidx + Flip . HardForkLedgerState . 
injectHardForkState iidx instance Inject WrapChainDepState where inject = coerce .: injectHardForkState instance Inject HeaderState where - inject iidx HeaderState {..} = HeaderState { - headerStateTip = inject iidx <$> headerStateTip - , headerStateChainDep = unwrapChainDepState - $ inject iidx - $ WrapChainDepState headerStateChainDep + inject iidx HeaderState{..} = + HeaderState + { headerStateTip = inject iidx <$> headerStateTip + , headerStateChainDep = + unwrapChainDepState $ + inject iidx $ + WrapChainDepState headerStateChainDep } instance Inject (Flip ExtLedgerState mk) where - inject iidx (Flip ExtLedgerState {..}) = Flip $ ExtLedgerState { - ledgerState = unFlip $ inject iidx (Flip ledgerState) - , headerState = inject iidx headerState - } + inject iidx (Flip ExtLedgerState{..}) = + Flip $ + ExtLedgerState + { ledgerState = unFlip $ inject iidx (Flip ledgerState) + , headerState = inject iidx headerState + } {------------------------------------------------------------------------------- Initial ExtLedgerState @@ -255,55 +267,59 @@ instance Inject (Flip ExtLedgerState mk) where -- Note: this function is an /alternative/ to the 'Inject' class above. It does -- not rely on that class. injectInitialExtLedgerState :: - forall x xs. (CanHardFork (x ': xs), HasLedgerTables (LedgerState (HardForkBlock (x : xs)))) - => TopLevelConfig (HardForkBlock (x ': xs)) - -> ExtLedgerState x ValuesMK - -> ExtLedgerState (HardForkBlock (x ': xs)) ValuesMK + forall x xs. 
+ (CanHardFork (x ': xs), HasLedgerTables (LedgerState (HardForkBlock (x : xs)))) => + TopLevelConfig (HardForkBlock (x ': xs)) -> + ExtLedgerState x ValuesMK -> + ExtLedgerState (HardForkBlock (x ': xs)) ValuesMK injectInitialExtLedgerState cfg extLedgerState0 = - ExtLedgerState { - ledgerState = targetEraLedgerState - , headerState = targetEraHeaderState - } - where - cfgs :: NP TopLevelConfig (x ': xs) - cfgs = - distribTopLevelConfig - (State.epochInfoLedger - (configLedger cfg) - (hardForkLedgerStatePerEra targetEraLedgerState)) - cfg - - targetEraLedgerState :: LedgerState (HardForkBlock (x ': xs)) ValuesMK - targetEraLedgerState = applyDiffs st st' - where - st :: LedgerState (HardForkBlock (x ': xs)) ValuesMK - st = HardForkLedgerState . initHardForkState . Flip . ledgerState $ extLedgerState0 - st' = HardForkLedgerState - -- We can immediately extend it to the right slot, executing any - -- scheduled hard forks in the first slot - (State.extendToSlot - (configLedger cfg) - (SlotNo 0) - (initHardForkState $ Flip $ forgetLedgerTables $ ledgerState extLedgerState0)) - - - firstEraChainDepState :: HardForkChainDepState (x ': xs) - firstEraChainDepState = - initHardForkState $ - WrapChainDepState $ - headerStateChainDep $ - headerState extLedgerState0 - - targetEraChainDepState :: HardForkChainDepState (x ': xs) - targetEraChainDepState = - -- Align the 'ChainDepState' with the ledger state of the target era. - State.align - (InPairs.requiringBoth - (hmap (WrapConsensusConfig . 
configConsensus) cfgs) - (translateChainDepState hardForkEraTranslation)) - (hpure (fn_2 (\_ st -> st))) + ExtLedgerState + { ledgerState = targetEraLedgerState + , headerState = targetEraHeaderState + } + where + cfgs :: NP TopLevelConfig (x ': xs) + cfgs = + distribTopLevelConfig + ( State.epochInfoLedger + (configLedger cfg) (hardForkLedgerStatePerEra targetEraLedgerState) - firstEraChainDepState + ) + cfg + + targetEraLedgerState :: LedgerState (HardForkBlock (x ': xs)) ValuesMK + targetEraLedgerState = applyDiffs st st' + where + st :: LedgerState (HardForkBlock (x ': xs)) ValuesMK + st = HardForkLedgerState . initHardForkState . Flip . ledgerState $ extLedgerState0 + st' = + HardForkLedgerState + -- We can immediately extend it to the right slot, executing any + -- scheduled hard forks in the first slot + ( State.extendToSlot + (configLedger cfg) + (SlotNo 0) + (initHardForkState $ Flip $ forgetLedgerTables $ ledgerState extLedgerState0) + ) + + firstEraChainDepState :: HardForkChainDepState (x ': xs) + firstEraChainDepState = + initHardForkState $ + WrapChainDepState $ + headerStateChainDep $ + headerState extLedgerState0 + + targetEraChainDepState :: HardForkChainDepState (x ': xs) + targetEraChainDepState = + -- Align the 'ChainDepState' with the ledger state of the target era. + State.align + ( InPairs.requiringBoth + (hmap (WrapConsensusConfig . 
configConsensus) cfgs) + (translateChainDepState hardForkEraTranslation) + ) + (hpure (fn_2 (\_ st -> st))) + (hardForkLedgerStatePerEra targetEraLedgerState) + firstEraChainDepState - targetEraHeaderState :: HeaderState (HardForkBlock (x ': xs)) - targetEraHeaderState = genesisHeaderState targetEraChainDepState + targetEraHeaderState :: HeaderState (HardForkBlock (x ': xs)) + targetEraHeaderState = genesisHeaderState targetEraChainDepState diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Unary.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Unary.hs index 772cd86fdb..4fe381d287 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Unary.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Embed/Unary.hs @@ -16,10 +16,11 @@ {-# LANGUAGE TypeOperators #-} -- | Witness isomorphism between @b@ and @HardForkBlock '[b]@ -module Ouroboros.Consensus.HardFork.Combinator.Embed.Unary ( - Isomorphic (..) +module Ouroboros.Consensus.HardFork.Combinator.Embed.Unary + ( Isomorphic (..) , inject' , project' + -- * Dependent types , ProjHardForkQuery (..) , injNestedCtxt @@ -29,49 +30,50 @@ module Ouroboros.Consensus.HardFork.Combinator.Embed.Unary ( , projQuery , projQuery' , projQueryResult + -- * Convenience exports , I (..) , Proxy (..) 
) where -import Cardano.Slotting.EpochInfo -import Data.Bifunctor (first) -import Data.Coerce -import Data.Kind (Constraint, Type) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Functors -import qualified Data.SOP.OptNP as OptNP -import Data.SOP.Strict -import qualified Data.SOP.Telescope as Telescope -import Data.Type.Equality -import Data.Void -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.HardFork.Combinator.Forging -import Ouroboros.Consensus.HardFork.Combinator.Ledger -import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query -import Ouroboros.Consensus.HardFork.Combinator.Mempool -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.Combinator.Protocol -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB) -import qualified Ouroboros.Consensus.Storage.ChainDB.Init as InitChainDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.TypeFamilyWrappers +import Cardano.Slotting.EpochInfo +import Data.Bifunctor (first) +import Data.Coerce +import Data.Kind (Constraint, Type) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Functors +import Data.SOP.OptNP qualified as OptNP 
+import Data.SOP.Strict +import Data.SOP.Telescope qualified as Telescope +import Data.Type.Equality +import Data.Void +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.HardFork.Combinator.Forging +import Ouroboros.Consensus.HardFork.Combinator.Ledger +import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query +import Ouroboros.Consensus.HardFork.Combinator.Mempool +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.Combinator.Protocol +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB) +import Ouroboros.Consensus.Storage.ChainDB.Init qualified as InitChainDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- Projection/injection for a single block into degenerate HardForkBlock @@ -79,29 +81,31 @@ import Ouroboros.Consensus.TypeFamilyWrappers class Isomorphic f where project :: NoHardForks blk => f (HardForkBlock '[blk]) -> f blk - inject :: NoHardForks blk => f blk -> f (HardForkBlock '[blk]) - -project' :: forall proxy f x y blk. 
( - Isomorphic f - , NoHardForks blk - , Coercible x (f (HardForkBlock '[blk])) - , Coercible y (f blk) - ) - => proxy (f blk) -> x -> y + inject :: NoHardForks blk => f blk -> f (HardForkBlock '[blk]) + +project' :: + forall proxy f x y blk. + ( Isomorphic f + , NoHardForks blk + , Coercible x (f (HardForkBlock '[blk])) + , Coercible y (f blk) + ) => + proxy (f blk) -> x -> y project' _ = - (coerce :: f blk -> y) + (coerce :: f blk -> y) . project . (coerce :: x -> f (HardForkBlock '[blk])) -inject' :: forall proxy f x y blk. ( - Isomorphic f - , NoHardForks blk - , Coercible x (f blk) - , Coercible y (f (HardForkBlock '[blk])) - ) - => proxy (f blk) -> x -> y +inject' :: + forall proxy f x y blk. + ( Isomorphic f + , NoHardForks blk + , Coercible x (f blk) + , Coercible y (f (HardForkBlock '[blk])) + ) => + proxy (f blk) -> x -> y inject' _ = - (coerce :: f (HardForkBlock '[blk]) -> y) + (coerce :: f (HardForkBlock '[blk]) -> y) . inject . (coerce :: x -> f blk) @@ -115,17 +119,19 @@ newtype IsomorphicUnary h f a = IsomorphicUnary (f a) instance ( IsSOPLike h , forall blk. Coercible (f (HardForkBlock '[blk])) (h f '[blk]) - ) => Isomorphic (IsomorphicUnary h f) where + ) => + Isomorphic (IsomorphicUnary h f) + where project :: - forall blk. - IsomorphicUnary h f (HardForkBlock '[blk]) - -> IsomorphicUnary h f blk + forall blk. + IsomorphicUnary h f (HardForkBlock '[blk]) -> + IsomorphicUnary h f blk project = coerce (fromSOPLike :: h f '[blk] -> f blk) inject :: - forall blk. - IsomorphicUnary h f blk - -> IsomorphicUnary h f (HardForkBlock '[blk]) + forall blk. + IsomorphicUnary h f blk -> + IsomorphicUnary h f (HardForkBlock '[blk]) inject = coerce (toSOPLike :: f blk -> h f '[blk]) type IsSOPLike :: ((k -> Type) -> [k] -> Type) -> Constraint @@ -151,57 +157,76 @@ instance IsSOPLike HardForkState where instance Isomorphic ((->) a) where project f = coerce (project @I) . f - inject f = coerce (inject @I) . f + inject f = coerce (inject @I) . 
f {------------------------------------------------------------------------------- Simple instances -------------------------------------------------------------------------------} -deriving via IsomorphicUnary NP BlockConfig instance Isomorphic BlockConfig -deriving via IsomorphicUnary NP CodecConfig instance Isomorphic CodecConfig +deriving via IsomorphicUnary NP BlockConfig instance Isomorphic BlockConfig +deriving via IsomorphicUnary NP CodecConfig instance Isomorphic CodecConfig deriving via IsomorphicUnary NP StorageConfig instance Isomorphic StorageConfig -deriving via IsomorphicUnary NS GenTx instance Isomorphic GenTx -deriving via IsomorphicUnary NS Header instance Isomorphic Header -deriving via IsomorphicUnary NS I instance Isomorphic I -deriving via IsomorphicUnary NS WrapCannotForge instance Isomorphic WrapCannotForge -deriving via IsomorphicUnary NS WrapForgeStateUpdateError instance Isomorphic WrapForgeStateUpdateError -deriving via IsomorphicUnary NS WrapGenTxId instance Isomorphic WrapGenTxId -deriving via IsomorphicUnary NS WrapIsLeader instance Isomorphic WrapIsLeader -deriving via IsomorphicUnary NS WrapTipInfo instance Isomorphic WrapTipInfo -deriving via IsomorphicUnary NS WrapValidatedGenTx instance Isomorphic WrapValidatedGenTx - -deriving via IsomorphicUnary HardForkState (Flip LedgerState mk) instance Isomorphic (Flip LedgerState mk) -deriving via IsomorphicUnary HardForkState WrapChainDepState instance Isomorphic WrapChainDepState +deriving via IsomorphicUnary NS GenTx instance Isomorphic GenTx +deriving via IsomorphicUnary NS Header instance Isomorphic Header +deriving via IsomorphicUnary NS I instance Isomorphic I +deriving via IsomorphicUnary NS WrapCannotForge instance Isomorphic WrapCannotForge +deriving via + IsomorphicUnary NS WrapForgeStateUpdateError + instance + Isomorphic WrapForgeStateUpdateError +deriving via IsomorphicUnary NS WrapGenTxId instance Isomorphic WrapGenTxId +deriving via IsomorphicUnary NS WrapIsLeader instance 
Isomorphic WrapIsLeader +deriving via IsomorphicUnary NS WrapTipInfo instance Isomorphic WrapTipInfo +deriving via IsomorphicUnary NS WrapValidatedGenTx instance Isomorphic WrapValidatedGenTx + +deriving via + IsomorphicUnary HardForkState (Flip LedgerState mk) + instance + Isomorphic (Flip LedgerState mk) +deriving via + IsomorphicUnary HardForkState WrapChainDepState + instance + Isomorphic WrapChainDepState {------------------------------------------------------------------------------- Hash -------------------------------------------------------------------------------} instance Isomorphic WrapHeaderHash where - project :: forall blk. ConvertRawHash blk - => WrapHeaderHash (HardForkBlock '[blk]) -> WrapHeaderHash blk + project :: + forall blk. + ConvertRawHash blk => + WrapHeaderHash (HardForkBlock '[blk]) -> WrapHeaderHash blk project = - WrapHeaderHash - . fromShortRawHash (Proxy @blk) . getOneEraHash + WrapHeaderHash + . fromShortRawHash (Proxy @blk) + . getOneEraHash . unwrapHeaderHash - inject :: forall blk. ConvertRawHash blk - => WrapHeaderHash blk -> WrapHeaderHash (HardForkBlock '[blk]) + inject :: + forall blk. + ConvertRawHash blk => + WrapHeaderHash blk -> WrapHeaderHash (HardForkBlock '[blk]) inject = - WrapHeaderHash - . OneEraHash . toShortRawHash (Proxy @blk) + WrapHeaderHash + . OneEraHash + . toShortRawHash (Proxy @blk) . unwrapHeaderHash instance Isomorphic ChainHash where - project :: forall blk. NoHardForks blk - => ChainHash (HardForkBlock '[blk]) -> ChainHash blk - project GenesisHash = GenesisHash + project :: + forall blk. + NoHardForks blk => + ChainHash (HardForkBlock '[blk]) -> ChainHash blk + project GenesisHash = GenesisHash project (BlockHash h) = BlockHash (project' (Proxy @(WrapHeaderHash blk)) h) - inject :: forall blk. NoHardForks blk - => ChainHash blk -> ChainHash (HardForkBlock '[blk]) - inject GenesisHash = GenesisHash + inject :: + forall blk. 
+ NoHardForks blk => + ChainHash blk -> ChainHash (HardForkBlock '[blk]) + inject GenesisHash = GenesisHash inject (BlockHash h) = BlockHash (inject' (Proxy @(WrapHeaderHash blk)) h) {------------------------------------------------------------------------------- @@ -214,112 +239,132 @@ instance Isomorphic ChainHash where -- we need the 'EraParams' for their injections, which we can only derive if -- we have the top-level config. instance Isomorphic TopLevelConfig where - project :: forall blk. NoHardForks blk - => TopLevelConfig (HardForkBlock '[blk]) -> TopLevelConfig blk + project :: + forall blk. + NoHardForks blk => + TopLevelConfig (HardForkBlock '[blk]) -> TopLevelConfig blk project tlc = - mkTopLevelConfig - (auxConsensus $ configConsensus tlc) - (auxLedger $ configLedger tlc) - (project $ configBlock tlc) - (project $ configCodec tlc) - (project $ configStorage tlc) - emptyCheckpointsMap - where - ei :: EpochInfo (Except PastHorizonException) - ei = immutableEpochInfo $ project tlc - - auxLedger :: LedgerConfig (HardForkBlock '[blk]) -> LedgerConfig blk - auxLedger = - completeLedgerConfig (Proxy @blk) ei - . unwrapPartialLedgerConfig - . hd - . getPerEraLedgerConfig - . hardForkLedgerConfigPerEra - - auxConsensus :: ConsensusConfig (BlockProtocol (HardForkBlock '[blk])) - -> ConsensusConfig (BlockProtocol blk) - auxConsensus = - completeConsensusConfig (Proxy @(BlockProtocol blk)) ei - . unwrapPartialConsensusConfig - . hd - . getPerEraConsensusConfig - . hardForkConsensusConfigPerEra - - inject :: forall blk. 
NoHardForks blk - => TopLevelConfig blk -> TopLevelConfig (HardForkBlock '[blk]) + mkTopLevelConfig + (auxConsensus $ configConsensus tlc) + (auxLedger $ configLedger tlc) + (project $ configBlock tlc) + (project $ configCodec tlc) + (project $ configStorage tlc) + emptyCheckpointsMap + where + ei :: EpochInfo (Except PastHorizonException) + ei = immutableEpochInfo $ project tlc + + auxLedger :: LedgerConfig (HardForkBlock '[blk]) -> LedgerConfig blk + auxLedger = + completeLedgerConfig (Proxy @blk) ei + . unwrapPartialLedgerConfig + . hd + . getPerEraLedgerConfig + . hardForkLedgerConfigPerEra + + auxConsensus :: + ConsensusConfig (BlockProtocol (HardForkBlock '[blk])) -> + ConsensusConfig (BlockProtocol blk) + auxConsensus = + completeConsensusConfig (Proxy @(BlockProtocol blk)) ei + . unwrapPartialConsensusConfig + . hd + . getPerEraConsensusConfig + . hardForkConsensusConfigPerEra + + inject :: + forall blk. + NoHardForks blk => + TopLevelConfig blk -> TopLevelConfig (HardForkBlock '[blk]) inject tlc = - mkTopLevelConfig - (auxConsensus $ configConsensus tlc) - (auxLedger $ configLedger tlc) - (inject $ configBlock tlc) - (inject $ configCodec tlc) - (inject $ configStorage tlc) - emptyCheckpointsMap - where - eraParams = immutableEraParams tlc - - auxLedger :: LedgerConfig blk -> LedgerConfig (HardForkBlock '[blk]) - auxLedger cfg = HardForkLedgerConfig { - hardForkLedgerConfigShape = History.singletonShape eraParams - , hardForkLedgerConfigPerEra = PerEraLedgerConfig $ - WrapPartialLedgerConfig (toPartialLedgerConfig (Proxy @blk) cfg ) - :* Nil - } - - auxConsensus :: ConsensusConfig (BlockProtocol blk) - -> ConsensusConfig (BlockProtocol (HardForkBlock '[blk])) - auxConsensus cfg = HardForkConsensusConfig { - hardForkConsensusConfigK = protocolSecurityParam cfg - , hardForkConsensusConfigShape = History.singletonShape eraParams - , hardForkConsensusConfigPerEra = PerEraConsensusConfig $ - WrapPartialConsensusConfig (toPartialConsensusConfig (Proxy 
@(BlockProtocol blk)) cfg) - :* Nil - } + mkTopLevelConfig + (auxConsensus $ configConsensus tlc) + (auxLedger $ configLedger tlc) + (inject $ configBlock tlc) + (inject $ configCodec tlc) + (inject $ configStorage tlc) + emptyCheckpointsMap + where + eraParams = immutableEraParams tlc + + auxLedger :: LedgerConfig blk -> LedgerConfig (HardForkBlock '[blk]) + auxLedger cfg = + HardForkLedgerConfig + { hardForkLedgerConfigShape = History.singletonShape eraParams + , hardForkLedgerConfigPerEra = + PerEraLedgerConfig $ + WrapPartialLedgerConfig (toPartialLedgerConfig (Proxy @blk) cfg) + :* Nil + } + + auxConsensus :: + ConsensusConfig (BlockProtocol blk) -> + ConsensusConfig (BlockProtocol (HardForkBlock '[blk])) + auxConsensus cfg = + HardForkConsensusConfig + { hardForkConsensusConfigK = protocolSecurityParam cfg + , hardForkConsensusConfigShape = History.singletonShape eraParams + , hardForkConsensusConfigPerEra = + PerEraConsensusConfig $ + WrapPartialConsensusConfig (toPartialConsensusConfig (Proxy @(BlockProtocol blk)) cfg) + :* Nil + } {------------------------------------------------------------------------------- Various kinds of records -------------------------------------------------------------------------------} instance Isomorphic HeaderState where - project :: forall blk. NoHardForks blk - => HeaderState (HardForkBlock '[blk]) -> HeaderState blk - project HeaderState{..} = HeaderState { - headerStateTip = project <$> headerStateTip + project :: + forall blk. + NoHardForks blk => + HeaderState (HardForkBlock '[blk]) -> HeaderState blk + project HeaderState{..} = + HeaderState + { headerStateTip = project <$> headerStateTip , headerStateChainDep = project' (Proxy @(WrapChainDepState blk)) headerStateChainDep } - inject :: forall blk. NoHardForks blk - => HeaderState blk -> HeaderState (HardForkBlock '[blk]) - inject HeaderState{..} = HeaderState { - headerStateTip = inject <$> headerStateTip + inject :: + forall blk. 
+ NoHardForks blk => + HeaderState blk -> HeaderState (HardForkBlock '[blk]) + inject HeaderState{..} = + HeaderState + { headerStateTip = inject <$> headerStateTip , headerStateChainDep = inject' (Proxy @(WrapChainDepState blk)) headerStateChainDep } instance Isomorphic (FlipTickedLedgerState mk) where project = - State.currentState + State.currentState . Telescope.fromTZ . getHardForkState . tickedHardForkLedgerStatePerEra . getFlipTickedLedgerState inject = - FlipTickedLedgerState + FlipTickedLedgerState . TickedHardForkLedgerState TransitionImpossible . HardForkState . Telescope.TZ . State.Current History.initBound instance Isomorphic (Flip ExtLedgerState mk) where - project (Flip ExtLedgerState{..}) = Flip $ ExtLedgerState { - ledgerState = unFlip $ project $ Flip ledgerState - , headerState = project headerState - } - - inject (Flip ExtLedgerState{..}) = Flip $ ExtLedgerState { - ledgerState = unFlip $ inject $ Flip ledgerState - , headerState = inject headerState - } + project (Flip ExtLedgerState{..}) = + Flip $ + ExtLedgerState + { ledgerState = unFlip $ project $ Flip ledgerState + , headerState = project headerState + } + + inject (Flip ExtLedgerState{..}) = + Flip $ + ExtLedgerState + { ledgerState = unFlip $ inject $ Flip ledgerState + , headerState = inject headerState + } instance Isomorphic AnnTip where project :: forall blk. NoHardForks blk => AnnTip (HardForkBlock '[blk]) -> AnnTip blk @@ -328,149 +373,171 @@ instance Isomorphic AnnTip where inject (AnnTip s b nfo) = AnnTip s b (OneEraTipInfo (Z (WrapTipInfo nfo))) instance Functor m => Isomorphic (InitChainDB m) where - project :: forall blk. NoHardForks blk - => InitChainDB m (HardForkBlock '[blk]) -> InitChainDB m blk + project :: + forall blk. + NoHardForks blk => + InitChainDB m (HardForkBlock '[blk]) -> InitChainDB m blk project = InitChainDB.map (inject' (Proxy @(I blk))) (unFlip . project . Flip) - inject :: forall blk. 
NoHardForks blk - => InitChainDB m blk -> InitChainDB m (HardForkBlock '[blk]) + inject :: + forall blk. + NoHardForks blk => + InitChainDB m blk -> InitChainDB m (HardForkBlock '[blk]) inject = InitChainDB.map (project' (Proxy @(I blk))) (unFlip . inject . Flip) instance Isomorphic ProtocolClientInfo where - project ProtocolClientInfo{..} = ProtocolClientInfo { - pClientInfoCodecConfig = project pClientInfoCodecConfig + project ProtocolClientInfo{..} = + ProtocolClientInfo + { pClientInfoCodecConfig = project pClientInfoCodecConfig } - inject ProtocolClientInfo{..} = ProtocolClientInfo { - pClientInfoCodecConfig = inject pClientInfoCodecConfig + inject ProtocolClientInfo{..} = + ProtocolClientInfo + { pClientInfoCodecConfig = inject pClientInfoCodecConfig } instance Isomorphic ForgeStateUpdateInfo where - project :: forall blk. NoHardForks blk - => ForgeStateUpdateInfo (HardForkBlock '[blk]) -> ForgeStateUpdateInfo blk + project :: + forall blk. + NoHardForks blk => + ForgeStateUpdateInfo (HardForkBlock '[blk]) -> ForgeStateUpdateInfo blk project forgeStateUpdateInfo = - case forgeStateUpdateInfo of - ForgeStateUpdated forgeStateInfo -> - ForgeStateUpdated - (project' (Proxy @(WrapForgeStateInfo blk)) forgeStateInfo) - ForgeStateUpdateFailed forgeStateUpdateError -> - ForgeStateUpdateFailed - (project' (Proxy @(WrapForgeStateUpdateError blk)) forgeStateUpdateError) - ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed - - inject :: forall blk. NoHardForks blk - => ForgeStateUpdateInfo blk -> ForgeStateUpdateInfo (HardForkBlock '[blk]) + case forgeStateUpdateInfo of + ForgeStateUpdated forgeStateInfo -> + ForgeStateUpdated + (project' (Proxy @(WrapForgeStateInfo blk)) forgeStateInfo) + ForgeStateUpdateFailed forgeStateUpdateError -> + ForgeStateUpdateFailed + (project' (Proxy @(WrapForgeStateUpdateError blk)) forgeStateUpdateError) + ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed + + inject :: + forall blk. 
+ NoHardForks blk => + ForgeStateUpdateInfo blk -> ForgeStateUpdateInfo (HardForkBlock '[blk]) inject forgeStateUpdateInfo = - case forgeStateUpdateInfo of - ForgeStateUpdated forgeStateInfo -> - ForgeStateUpdated - (inject' (Proxy @(WrapForgeStateInfo blk)) forgeStateInfo) - ForgeStateUpdateFailed forgeStateUpdateError -> - ForgeStateUpdateFailed - (inject' (Proxy @(WrapForgeStateUpdateError blk)) forgeStateUpdateError) - ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed + case forgeStateUpdateInfo of + ForgeStateUpdated forgeStateInfo -> + ForgeStateUpdated + (inject' (Proxy @(WrapForgeStateInfo blk)) forgeStateInfo) + ForgeStateUpdateFailed forgeStateUpdateError -> + ForgeStateUpdateFailed + (inject' (Proxy @(WrapForgeStateUpdateError blk)) forgeStateUpdateError) + ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed instance Functor m => Isomorphic (BlockForging m) where - project :: forall blk. NoHardForks blk - => BlockForging m (HardForkBlock '[blk]) -> BlockForging m blk - project BlockForging {..} = BlockForging { - forgeLabel = forgeLabel - , canBeLeader = project' (Proxy @(WrapCanBeLeader blk)) canBeLeader + project :: + forall blk. 
+ NoHardForks blk => + BlockForging m (HardForkBlock '[blk]) -> BlockForging m blk + project BlockForging{..} = + BlockForging + { forgeLabel = forgeLabel + , canBeLeader = project' (Proxy @(WrapCanBeLeader blk)) canBeLeader , updateForgeState = \cfg sno tickedChainDepSt -> - project <$> - updateForgeState - (inject cfg) - sno - (injTickedChainDepSt - (immutableEpochInfo cfg) - tickedChainDepSt) - , checkCanForge = \cfg sno tickedChainDepSt isLeader forgeStateInfo -> - first (project' (Proxy @(WrapCannotForge blk))) $ - checkCanForge - (inject cfg) - sno - (injTickedChainDepSt - (immutableEpochInfo cfg) - tickedChainDepSt) - (inject' (Proxy @(WrapIsLeader blk)) isLeader) - (inject' (Proxy @(WrapForgeStateInfo blk)) forgeStateInfo) - - , forgeBlock = \cfg bno sno tickedLgrSt txs isLeader -> - project' (Proxy @(I blk)) <$> - forgeBlock - (inject cfg) - bno - sno - (getFlipTickedLedgerState (inject (FlipTickedLedgerState tickedLgrSt))) - (inject' (Proxy @(WrapValidatedGenTx blk)) <$> txs) - (inject' (Proxy @(WrapIsLeader blk)) isLeader) + project + <$> updateForgeState + (inject cfg) + sno + ( injTickedChainDepSt + (immutableEpochInfo cfg) + tickedChainDepSt + ) + , checkCanForge = \cfg sno tickedChainDepSt isLeader forgeStateInfo -> + first (project' (Proxy @(WrapCannotForge blk))) $ + checkCanForge + (inject cfg) + sno + ( injTickedChainDepSt + (immutableEpochInfo cfg) + tickedChainDepSt + ) + (inject' (Proxy @(WrapIsLeader blk)) isLeader) + (inject' (Proxy @(WrapForgeStateInfo blk)) forgeStateInfo) + , forgeBlock = \cfg bno sno tickedLgrSt txs isLeader -> + project' (Proxy @(I blk)) + <$> forgeBlock + (inject cfg) + bno + sno + (getFlipTickedLedgerState (inject (FlipTickedLedgerState tickedLgrSt))) + (inject' (Proxy @(WrapValidatedGenTx blk)) <$> txs) + (inject' (Proxy @(WrapIsLeader blk)) isLeader) } - where - injTickedChainDepSt :: - EpochInfo (Except PastHorizonException) - -> Ticked (ChainDepState (BlockProtocol blk)) - -> Ticked (ChainDepState 
(HardForkProtocol '[blk])) - injTickedChainDepSt ei = - (`TickedHardForkChainDepState` ei) - . HardForkState - . Telescope.TZ - . State.Current History.initBound - . Comp - . WrapTickedChainDepState - - inject :: forall blk. NoHardForks blk - => BlockForging m blk -> BlockForging m (HardForkBlock '[blk]) - inject BlockForging {..} = BlockForging { - forgeLabel = forgeLabel - , canBeLeader = inject' (Proxy @(WrapCanBeLeader blk)) canBeLeader + where + injTickedChainDepSt :: + EpochInfo (Except PastHorizonException) -> + Ticked (ChainDepState (BlockProtocol blk)) -> + Ticked (ChainDepState (HardForkProtocol '[blk])) + injTickedChainDepSt ei = + (`TickedHardForkChainDepState` ei) + . HardForkState + . Telescope.TZ + . State.Current History.initBound + . Comp + . WrapTickedChainDepState + + inject :: + forall blk. + NoHardForks blk => + BlockForging m blk -> BlockForging m (HardForkBlock '[blk]) + inject BlockForging{..} = + BlockForging + { forgeLabel = forgeLabel + , canBeLeader = inject' (Proxy @(WrapCanBeLeader blk)) canBeLeader , updateForgeState = \cfg sno tickedChainDepSt -> - inject <$> - updateForgeState - (project cfg) - sno - (projTickedChainDepSt tickedChainDepSt) - , checkCanForge = \cfg sno tickedChainDepSt isLeader forgeStateInfo -> - first (inject' (Proxy @(WrapCannotForge blk))) $ - checkCanForge - (project cfg) - sno - (projTickedChainDepSt tickedChainDepSt) - (project' (Proxy @(WrapIsLeader blk)) isLeader) - (project' (Proxy @(WrapForgeStateInfo blk)) forgeStateInfo) - - , forgeBlock = \cfg bno sno tickedLgrSt txs isLeader -> - inject' (Proxy @(I blk)) <$> - forgeBlock - (project cfg) - bno - sno - (getFlipTickedLedgerState (project (FlipTickedLedgerState tickedLgrSt))) - (project' (Proxy @(WrapValidatedGenTx blk)) <$> txs) - (project' (Proxy @(WrapIsLeader blk)) isLeader) + inject + <$> updateForgeState + (project cfg) + sno + (projTickedChainDepSt tickedChainDepSt) + , checkCanForge = \cfg sno tickedChainDepSt isLeader forgeStateInfo -> + first 
(inject' (Proxy @(WrapCannotForge blk))) $ + checkCanForge + (project cfg) + sno + (projTickedChainDepSt tickedChainDepSt) + (project' (Proxy @(WrapIsLeader blk)) isLeader) + (project' (Proxy @(WrapForgeStateInfo blk)) forgeStateInfo) + , forgeBlock = \cfg bno sno tickedLgrSt txs isLeader -> + inject' (Proxy @(I blk)) + <$> forgeBlock + (project cfg) + bno + sno + (getFlipTickedLedgerState (project (FlipTickedLedgerState tickedLgrSt))) + (project' (Proxy @(WrapValidatedGenTx blk)) <$> txs) + (project' (Proxy @(WrapIsLeader blk)) isLeader) } - where - projTickedChainDepSt :: - Ticked (ChainDepState (HardForkProtocol '[blk])) - -> Ticked (ChainDepState (BlockProtocol blk)) - projTickedChainDepSt = - unwrapTickedChainDepState - . unComp - . State.fromTZ - . tickedHardForkChainDepStatePerEra + where + projTickedChainDepSt :: + Ticked (ChainDepState (HardForkProtocol '[blk])) -> + Ticked (ChainDepState (BlockProtocol blk)) + projTickedChainDepSt = + unwrapTickedChainDepState + . unComp + . State.fromTZ + . tickedHardForkChainDepStatePerEra instance Isomorphic ProtocolInfo where - project :: forall blk. NoHardForks blk - => ProtocolInfo (HardForkBlock '[blk]) -> ProtocolInfo blk - project ProtocolInfo {..} = ProtocolInfo { - pInfoConfig = project pInfoConfig - , pInfoInitLedger = unFlip $ project $ Flip pInfoInitLedger + project :: + forall blk. + NoHardForks blk => + ProtocolInfo (HardForkBlock '[blk]) -> ProtocolInfo blk + project ProtocolInfo{..} = + ProtocolInfo + { pInfoConfig = project pInfoConfig + , pInfoInitLedger = unFlip $ project $ Flip pInfoInitLedger } - inject :: forall blk. NoHardForks blk - => ProtocolInfo blk -> ProtocolInfo (HardForkBlock '[blk]) - inject ProtocolInfo {..} = ProtocolInfo { - pInfoConfig = inject pInfoConfig - , pInfoInitLedger = unFlip $ inject $ Flip pInfoInitLedger + inject :: + forall blk. 
+ NoHardForks blk => + ProtocolInfo blk -> ProtocolInfo (HardForkBlock '[blk]) + inject ProtocolInfo{..} = + ProtocolInfo + { pInfoConfig = inject pInfoConfig + , pInfoInitLedger = unFlip $ inject $ Flip pInfoInitLedger } {------------------------------------------------------------------------------- @@ -479,72 +546,75 @@ instance Isomorphic ProtocolInfo where instance Isomorphic WrapApplyTxErr where project = aux . unwrapApplyTxErr - where - aux :: ApplyTxErr (HardForkBlock '[blk]) -> WrapApplyTxErr blk - aux (HardForkApplyTxErrFromEra err) = unZ $ getOneEraApplyTxErr err - aux (HardForkApplyTxErrWrongEra err) = absurd $ mismatchOneEra err + where + aux :: ApplyTxErr (HardForkBlock '[blk]) -> WrapApplyTxErr blk + aux (HardForkApplyTxErrFromEra err) = unZ $ getOneEraApplyTxErr err + aux (HardForkApplyTxErrWrongEra err) = absurd $ mismatchOneEra err inject = WrapApplyTxErr . aux - where - aux :: WrapApplyTxErr blk -> ApplyTxErr (HardForkBlock '[blk]) - aux = HardForkApplyTxErrFromEra . OneEraApplyTxErr . Z + where + aux :: WrapApplyTxErr blk -> ApplyTxErr (HardForkBlock '[blk]) + aux = HardForkApplyTxErrFromEra . OneEraApplyTxErr . Z instance Isomorphic WrapEnvelopeErr where project = aux . unwrapEnvelopeErr - where - aux :: OtherHeaderEnvelopeError (HardForkBlock '[blk]) - -> WrapEnvelopeErr blk - aux (HardForkEnvelopeErrFromEra err) = unZ $ getOneEraEnvelopeErr err - aux (HardForkEnvelopeErrWrongEra err) = absurd $ mismatchOneEra err + where + aux :: + OtherHeaderEnvelopeError (HardForkBlock '[blk]) -> + WrapEnvelopeErr blk + aux (HardForkEnvelopeErrFromEra err) = unZ $ getOneEraEnvelopeErr err + aux (HardForkEnvelopeErrWrongEra err) = absurd $ mismatchOneEra err inject = WrapEnvelopeErr . aux - where - aux :: WrapEnvelopeErr b - -> OtherHeaderEnvelopeError (HardForkBlock '[b]) - aux = HardForkEnvelopeErrFromEra . OneEraEnvelopeErr . Z + where + aux :: + WrapEnvelopeErr b -> + OtherHeaderEnvelopeError (HardForkBlock '[b]) + aux = HardForkEnvelopeErrFromEra . 
OneEraEnvelopeErr . Z instance Isomorphic WrapCanBeLeader where project = OptNP.fromSingleton . getSomeErasCanBeLeader . unwrapCanBeLeader - inject = WrapCanBeLeader . SomeErasCanBeLeader . OptNP.singleton + inject = WrapCanBeLeader . SomeErasCanBeLeader . OptNP.singleton instance Isomorphic WrapForgeStateInfo where project (WrapForgeStateInfo forgeStateInfo) = - case forgeStateInfo of - CurrentEraForgeStateUpdated info -> unZ $ getOneEraForgeStateInfo info - inject = - WrapForgeStateInfo + case forgeStateInfo of + CurrentEraForgeStateUpdated info -> unZ $ getOneEraForgeStateInfo info + inject = + WrapForgeStateInfo . CurrentEraForgeStateUpdated . OneEraForgeStateInfo . Z instance Isomorphic WrapLedgerView where project = State.fromTZ . hardForkLedgerViewPerEra . unwrapLedgerView - inject = WrapLedgerView - . HardForkLedgerView TransitionImpossible - . HardForkState - . Telescope.TZ - . Current History.initBound + inject = + WrapLedgerView + . HardForkLedgerView TransitionImpossible + . HardForkState + . Telescope.TZ + . Current History.initBound instance Isomorphic (SomeSecond (NestedCtxt f)) where project (SomeSecond ctxt) = SomeSecond $ projNestedCtxt ctxt - inject (SomeSecond ctxt) = SomeSecond $ injNestedCtxt ctxt + inject (SomeSecond ctxt) = SomeSecond $ injNestedCtxt ctxt instance Isomorphic WrapLedgerErr where project = WrapLedgerErr . aux . unwrapLedgerErr - where - aux :: HardForkLedgerError '[blk] -> LedgerErr (LedgerState blk) - aux (HardForkLedgerErrorFromEra err) = - unwrapLedgerErr - . unZ - . getOneEraLedgerError - $ err - aux (HardForkLedgerErrorWrongEra err) = - absurd $ mismatchOneEra err + where + aux :: HardForkLedgerError '[blk] -> LedgerErr (LedgerState blk) + aux (HardForkLedgerErrorFromEra err) = + unwrapLedgerErr + . unZ + . getOneEraLedgerError + $ err + aux (HardForkLedgerErrorWrongEra err) = + absurd $ mismatchOneEra err inject = WrapLedgerErr . aux . 
unwrapLedgerErr - where - aux :: LedgerErr (LedgerState blk) -> HardForkLedgerError '[blk] - aux = HardForkLedgerErrorFromEra . OneEraLedgerError . Z . WrapLedgerErr + where + aux :: LedgerErr (LedgerState blk) -> HardForkLedgerError '[blk] + aux = HardForkLedgerErrorFromEra . OneEraLedgerError . Z . WrapLedgerErr {------------------------------------------------------------------------------- Serialised @@ -552,12 +622,12 @@ instance Isomorphic WrapLedgerErr where instance Isomorphic SerialisedHeader where project = - SerialisedHeaderFromDepPair + SerialisedHeaderFromDepPair . depPairFirst projNestedCtxt . serialisedHeaderToDepPair inject = - SerialisedHeaderFromDepPair + SerialisedHeaderFromDepPair . depPairFirst injNestedCtxt . serialisedHeaderToDepPair @@ -570,42 +640,47 @@ instance Isomorphic SerialisedHeader where -- | Project 'BlockQuery' -- -- Not an instance of 'Isomorphic' because the types change. -projQuery :: BlockQuery (HardForkBlock '[b]) fp result - -> (forall result'. - (result :~: HardForkQueryResult '[b] result') - -> BlockQuery b fp result' - -> a) - -> a +projQuery :: + BlockQuery (HardForkBlock '[b]) fp result -> + ( forall result'. + (result :~: HardForkQueryResult '[b] result') -> + BlockQuery b fp result' -> + a + ) -> + a projQuery qry k = - getHardForkQuery - qry - (\Refl -> k Refl . aux) - (\Refl prfNonEmpty _ _ -> case prfNonEmpty of {}) - (\Refl prfNonEmpty _ -> case prfNonEmpty of {}) - where - aux :: QueryIfCurrent '[b] fp result -> BlockQuery b fp result - aux (QZ q) = q - aux (QS q) = case q of {} - -projQuery' :: BlockQuery (HardForkBlock '[b]) fp result - -> ProjHardForkQuery fp b result + getHardForkQuery + qry + (\Refl -> k Refl . 
aux) + (\Refl prfNonEmpty _ _ -> case prfNonEmpty of {}) + (\Refl prfNonEmpty _ -> case prfNonEmpty of {}) + where + aux :: QueryIfCurrent '[b] fp result -> BlockQuery b fp result + aux (QZ q) = q + aux (QS q) = case q of {} + +projQuery' :: + BlockQuery (HardForkBlock '[b]) fp result -> + ProjHardForkQuery fp b result projQuery' qry = projQuery qry $ \Refl -> ProjHardForkQuery type ProjHardForkQuery :: QueryFootprint -> Type -> Type -> Type data ProjHardForkQuery fp b res where ProjHardForkQuery :: - BlockQuery b fp result' - -> ProjHardForkQuery fp b (HardForkQueryResult '[b] result') + BlockQuery b fp result' -> + ProjHardForkQuery fp b (HardForkQueryResult '[b] result') -- | Inject 'BlockQuery' -- -- Not an instance of 'Isomorphic' because the types change. -injQuery :: forall fp b result. BlockQuery b fp result - -> BlockQuery (HardForkBlock '[b]) fp (HardForkQueryResult '[b] result) +injQuery :: + forall fp b result. + BlockQuery b fp result -> + BlockQuery (HardForkBlock '[b]) fp (HardForkQueryResult '[b] result) injQuery = QueryIfCurrent . QZ projQueryResult :: HardForkQueryResult '[b] result -> result -projQueryResult (Left err) = absurd $ mismatchOneEra err +projQueryResult (Left err) = absurd $ mismatchOneEra err projQueryResult (Right result) = result injQueryResult :: result -> HardForkQueryResult '[b] result @@ -613,9 +688,9 @@ injQueryResult = Right projNestedCtxt :: NestedCtxt f (HardForkBlock '[blk]) a -> NestedCtxt f blk a projNestedCtxt = NestedCtxt . aux . flipNestedCtxt - where - aux :: NestedCtxt_ (HardForkBlock '[blk]) f a -> NestedCtxt_ blk f a - aux (NCZ ctxt) = ctxt + where + aux :: NestedCtxt_ (HardForkBlock '[blk]) f a -> NestedCtxt_ blk f a + aux (NCZ ctxt) = ctxt injNestedCtxt :: NestedCtxt f blk a -> NestedCtxt f (HardForkBlock '[blk]) a injNestedCtxt = NestedCtxt . NCZ . 
flipNestedCtxt diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Forging.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Forging.hs index 4ffcdc5037..5e2302a69c 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Forging.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Forging.hs @@ -5,40 +5,39 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} +{-# OPTIONS_GHC -Wno-orphans #-} -{-# OPTIONS_GHC -Wno-orphans #-} - -module Ouroboros.Consensus.HardFork.Combinator.Forging ( - HardForkCannotForge +module Ouroboros.Consensus.HardFork.Combinator.Forging + ( HardForkCannotForge , HardForkForgeStateInfo (..) , HardForkForgeStateUpdateError , hardForkBlockForging ) where -import Data.Functor.Product -import Data.Maybe (fromMaybe) -import Data.SOP.BasicFunctors -import Data.SOP.Functors (Product2 (..)) -import Data.SOP.Index -import Data.SOP.InPairs (InPairs) -import qualified Data.SOP.InPairs as InPairs -import qualified Data.SOP.Match as Match -import Data.SOP.OptNP (NonEmptyOptNP, OptNP, ViewOptNP (..)) -import qualified Data.SOP.OptNP as OptNP -import Data.SOP.Strict -import Data.Text (Text) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.InjectTxs -import Ouroboros.Consensus.HardFork.Combinator.Ledger -import Ouroboros.Consensus.HardFork.Combinator.Mempool -import Ouroboros.Consensus.HardFork.Combinator.Protocol -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.Functor.Product +import 
Data.Maybe (fromMaybe) +import Data.SOP.BasicFunctors +import Data.SOP.Functors (Product2 (..)) +import Data.SOP.InPairs (InPairs) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Index +import Data.SOP.Match qualified as Match +import Data.SOP.OptNP (NonEmptyOptNP, OptNP, ViewOptNP (..)) +import Data.SOP.OptNP qualified as OptNP +import Data.SOP.Strict +import Data.Text (Text) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.InjectTxs +import Ouroboros.Consensus.HardFork.Combinator.Ledger +import Ouroboros.Consensus.HardFork.Combinator.Mempool +import Ouroboros.Consensus.HardFork.Combinator.Protocol +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.TypeFamilyWrappers -- | If we cannot forge, it's because the current era could not forge type HardForkCannotForge xs = OneEraCannotForge xs @@ -54,15 +53,14 @@ type instance CannotForge (HardForkBlock xs) = HardForkCannotForge xs -- -- TODO #2766: expire past 'ForgeState' data HardForkForgeStateInfo xs where - -- | There is no 'BlockForging' record for the current era. - CurrentEraLacksBlockForging :: - EraIndex (x ': y ': xs) - -> HardForkForgeStateInfo (x ': y ': xs) - - -- | The 'ForgeState' of the current era was updated. - CurrentEraForgeStateUpdated :: - OneEraForgeStateInfo xs - -> HardForkForgeStateInfo xs + -- | There is no 'BlockForging' record for the current era. + CurrentEraLacksBlockForging :: + EraIndex (x ': y ': xs) -> + HardForkForgeStateInfo (x ': y ': xs) + -- | The 'ForgeState' of the current era was updated. 
+ CurrentEraForgeStateUpdated :: + OneEraForgeStateInfo xs -> + HardForkForgeStateInfo xs deriving instance CanHardFork xs => Show (HardForkForgeStateInfo xs) @@ -72,127 +70,133 @@ type instance ForgeStateInfo (HardForkBlock xs) = HardForkForgeStateInfo xs -- and thus 'ForgeStateUpdateError'. type HardForkForgeStateUpdateError xs = OneEraForgeStateUpdateError xs -type instance ForgeStateUpdateError (HardForkBlock xs) = - HardForkForgeStateUpdateError xs +type instance + ForgeStateUpdateError (HardForkBlock xs) = + HardForkForgeStateUpdateError xs hardForkBlockForging :: - forall m xs. (CanHardFork xs, Monad m) - => Text - -- ^ Used as the 'forgeLabel', the labels of the given 'BlockForging's will - -- be ignored. - -> NonEmptyOptNP (BlockForging m) xs - -> BlockForging m (HardForkBlock xs) + forall m xs. + (CanHardFork xs, Monad m) => + -- | Used as the 'forgeLabel', the labels of the given 'BlockForging's will + -- be ignored. + Text -> + NonEmptyOptNP (BlockForging m) xs -> + BlockForging m (HardForkBlock xs) hardForkBlockForging label blockForging = - BlockForging { - forgeLabel = label - , canBeLeader = hardForkCanBeLeader blockForging - , updateForgeState = hardForkUpdateForgeState blockForging - , checkCanForge = hardForkCheckCanForge blockForging - , forgeBlock = hardForkForgeBlock blockForging - } + BlockForging + { forgeLabel = label + , canBeLeader = hardForkCanBeLeader blockForging + , updateForgeState = hardForkUpdateForgeState blockForging + , checkCanForge = hardForkCheckCanForge blockForging + , forgeBlock = hardForkForgeBlock blockForging + } hardForkCanBeLeader :: - CanHardFork xs - => NonEmptyOptNP (BlockForging m) xs -> HardForkCanBeLeader xs + CanHardFork xs => + NonEmptyOptNP (BlockForging m) xs -> HardForkCanBeLeader xs hardForkCanBeLeader = - SomeErasCanBeLeader + SomeErasCanBeLeader . hmap (WrapCanBeLeader . canBeLeader) -- | POSTCONDITION: the returned 'ForgeStateUpdateInfo' is from the same era as -- the ticked 'ChainDepState'. 
hardForkUpdateForgeState :: - forall m xs. (CanHardFork xs, Monad m) - => NonEmptyOptNP (BlockForging m) xs - -> TopLevelConfig (HardForkBlock xs) - -> SlotNo - -> Ticked (HardForkChainDepState xs) - -> m (ForgeStateUpdateInfo (HardForkBlock xs)) -hardForkUpdateForgeState blockForging - cfg - curSlot - (TickedHardForkChainDepState chainDepState ei) = + forall m xs. + (CanHardFork xs, Monad m) => + NonEmptyOptNP (BlockForging m) xs -> + TopLevelConfig (HardForkBlock xs) -> + SlotNo -> + Ticked (HardForkChainDepState xs) -> + m (ForgeStateUpdateInfo (HardForkBlock xs)) +hardForkUpdateForgeState + blockForging + cfg + curSlot + (TickedHardForkChainDepState chainDepState ei) = case OptNP.view blockForging of OptNP_ExactlyOne blockForging' -> - injectSingle <$> - updateForgeState + injectSingle + <$> updateForgeState blockForging' (hd (distribTopLevelConfig ei cfg)) curSlot (unwrapTickedChainDepState . unComp . State.fromTZ $ chainDepState) OptNP_AtLeastTwo -> - fmap undistrib - $ hsequence' - $ hzipWith3 + fmap undistrib + $ hsequence' + $ hzipWith3 aux (OptNP.toNP blockForging) (distribTopLevelConfig ei cfg) - $ State.tip chainDepState - where + $ State.tip chainDepState + where injectSingle :: - xs ~ '[blk] - => ForgeStateUpdateInfo blk - -> ForgeStateUpdateInfo (HardForkBlock '[blk]) + xs ~ '[blk] => + ForgeStateUpdateInfo blk -> + ForgeStateUpdateInfo (HardForkBlock '[blk]) injectSingle forgeStateUpdateInfo = - case forgeStateUpdateInfo of - ForgeStateUpdated info -> ForgeStateUpdated $ injInfo index info - ForgeStateUpdateFailed err -> ForgeStateUpdateFailed $ injUpdateError index err - ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed - where - index :: Index '[blk] blk - index = IZ + case forgeStateUpdateInfo of + ForgeStateUpdated info -> ForgeStateUpdated $ injInfo index info + ForgeStateUpdateFailed err -> ForgeStateUpdateFailed $ injUpdateError index err + ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed + where + index :: Index '[blk] blk + 
index = IZ aux :: - (Maybe :.: BlockForging m) blk - -> TopLevelConfig blk - -> (Ticked :.: WrapChainDepState) blk - -> (m :.: (Maybe :.: ForgeStateUpdateInfo)) blk + (Maybe :.: BlockForging m) blk -> + TopLevelConfig blk -> + (Ticked :.: WrapChainDepState) blk -> + (m :.: (Maybe :.: ForgeStateUpdateInfo)) blk aux (Comp mBlockForging) cfg' (Comp chainDepState') = - Comp $ fmap Comp $ case mBlockForging of - Nothing -> return Nothing - Just blockForging' -> Just <$> - updateForgeState + Comp $ fmap Comp $ case mBlockForging of + Nothing -> return Nothing + Just blockForging' -> + Just + <$> updateForgeState blockForging' cfg' curSlot (unwrapTickedChainDepState chainDepState') injInfo :: - Index xs blk - -> ForgeStateInfo blk - -> ForgeStateInfo (HardForkBlock xs) + Index xs blk -> + ForgeStateInfo blk -> + ForgeStateInfo (HardForkBlock xs) injInfo index = - CurrentEraForgeStateUpdated + CurrentEraForgeStateUpdated . OneEraForgeStateInfo . injectNS index . WrapForgeStateInfo injUpdateError :: - Index xs blk - -> ForgeStateUpdateError blk - -> ForgeStateUpdateError (HardForkBlock xs) + Index xs blk -> + ForgeStateUpdateError blk -> + ForgeStateUpdateError (HardForkBlock xs) injUpdateError index = - OneEraForgeStateUpdateError + OneEraForgeStateUpdateError . injectNS index . WrapForgeStateUpdateError undistrib :: - xs ~ (x ': y ': zs) - => NS (Maybe :.: ForgeStateUpdateInfo) xs - -> ForgeStateUpdateInfo (HardForkBlock xs) + xs ~ (x ': y ': zs) => + NS (Maybe :.: ForgeStateUpdateInfo) xs -> + ForgeStateUpdateInfo (HardForkBlock xs) undistrib = hcollapse . himap inj - where - inj :: forall blk. 
- Index xs blk - -> (Maybe :.: ForgeStateUpdateInfo) blk - -> K (ForgeStateUpdateInfo (HardForkBlock xs)) blk - inj index (Comp mForgeStateUpdateInfo) = - K $ case mForgeStateUpdateInfo of - Nothing -> ForgeStateUpdated $ CurrentEraLacksBlockForging $ eraIndexFromIndex index - Just forgeStateUpdateInfo -> - case forgeStateUpdateInfo of - ForgeStateUpdated info -> ForgeStateUpdated $ injInfo index info - ForgeStateUpdateFailed err -> ForgeStateUpdateFailed $ injUpdateError index err - ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed + where + inj :: + forall blk. + Index xs blk -> + (Maybe :.: ForgeStateUpdateInfo) blk -> + K (ForgeStateUpdateInfo (HardForkBlock xs)) blk + inj index (Comp mForgeStateUpdateInfo) = + K $ case mForgeStateUpdateInfo of + Nothing -> ForgeStateUpdated $ CurrentEraLacksBlockForging $ eraIndexFromIndex index + Just forgeStateUpdateInfo -> + case forgeStateUpdateInfo of + ForgeStateUpdated info -> ForgeStateUpdated $ injInfo index info + ForgeStateUpdateFailed err -> ForgeStateUpdateFailed $ injUpdateError index err + ForgeStateUpdateSuppressed -> ForgeStateUpdateSuppressed -- | PRECONDITION: the ticked 'ChainDepState', the 'HardForkIsLeader', and the -- 'HardForkStateInfo' are all from the same era, and we must have a @@ -201,20 +205,22 @@ hardForkUpdateForgeState blockForging -- This follows from the postconditions of 'check' and -- 'hardForkUpdateForgeState'. hardForkCheckCanForge :: - forall m xs empty. CanHardFork xs - => OptNP empty (BlockForging m) xs - -> TopLevelConfig (HardForkBlock xs) - -> SlotNo - -> Ticked (HardForkChainDepState xs) - -> HardForkIsLeader xs - -> HardForkForgeStateInfo xs - -> Either (HardForkCannotForge xs) () -hardForkCheckCanForge blockForging - cfg - curSlot - (TickedHardForkChainDepState chainDepState ei) - isLeader - forgeStateInfo = + forall m xs empty. 
+ CanHardFork xs => + OptNP empty (BlockForging m) xs -> + TopLevelConfig (HardForkBlock xs) -> + SlotNo -> + Ticked (HardForkChainDepState xs) -> + HardForkIsLeader xs -> + HardForkForgeStateInfo xs -> + Either (HardForkCannotForge xs) () +hardForkCheckCanForge + blockForging + cfg + curSlot + (TickedHardForkChainDepState chainDepState ei) + isLeader + forgeStateInfo = distrib $ hizipWith3 checkOne @@ -223,58 +229,64 @@ hardForkCheckCanForge blockForging -- We know all three NSs must be from the same era, because they were -- all produced from the same 'BlockForging'. Unfortunately, we can't -- enforce it statically. - ( Match.mustMatchNS "ForgeStateInfo" forgeStateInfo' - $ Match.mustMatchNS "IsLeader" (getOneEraIsLeader isLeader) - $ State.tip chainDepState + ( Match.mustMatchNS "ForgeStateInfo" forgeStateInfo' $ + Match.mustMatchNS "IsLeader" (getOneEraIsLeader isLeader) $ + State.tip chainDepState ) - where + where distrib :: - NS (Maybe :.: WrapCannotForge) xs - -> Either (HardForkCannotForge xs) () + NS (Maybe :.: WrapCannotForge) xs -> + Either (HardForkCannotForge xs) () distrib = maybe (Right ()) (Left . OneEraCannotForge) . 
hsequence' missingBlockForgingImpossible :: EraIndex xs -> String missingBlockForgingImpossible eraIndex = - "impossible: current era lacks block forging but we have an IsLeader proof " + "impossible: current era lacks block forging but we have an IsLeader proof " <> show eraIndex forgeStateInfo' :: NS WrapForgeStateInfo xs forgeStateInfo' = case forgeStateInfo of - CurrentEraForgeStateUpdated info -> getOneEraForgeStateInfo info + CurrentEraForgeStateUpdated info -> getOneEraForgeStateInfo info CurrentEraLacksBlockForging eraIndex -> error $ missingBlockForgingImpossible eraIndex checkOne :: - Index xs blk - -> TopLevelConfig blk - -> (Maybe :.: BlockForging m) blk - -> Product - WrapForgeStateInfo - (Product - WrapIsLeader - (Ticked :.: WrapChainDepState)) - blk - -> (Maybe :.: WrapCannotForge) blk - -- ^ We use @Maybe x@ instead of @Either x ()@ because the former can - -- be partially applied. - checkOne index - cfg' - (Comp mBlockForging') - (Pair - (WrapForgeStateInfo forgeStateInfo'') - (Pair - (WrapIsLeader isLeader') - (Comp tickedChainDepState))) = - Comp $ either (Just . WrapCannotForge) (const Nothing) $ - checkCanForge - (fromMaybe - (error (missingBlockForgingImpossible (eraIndexFromIndex index))) - mBlockForging') - cfg' - curSlot - (unwrapTickedChainDepState tickedChainDepState) - isLeader' - forgeStateInfo'' + Index xs blk -> + TopLevelConfig blk -> + (Maybe :.: BlockForging m) blk -> + Product + WrapForgeStateInfo + ( Product + WrapIsLeader + (Ticked :.: WrapChainDepState) + ) + blk -> + (Maybe :.: WrapCannotForge) blk + -- \^ We use @Maybe x@ instead of @Either x ()@ because the former can + -- be partially applied. + checkOne + index + cfg' + (Comp mBlockForging') + ( Pair + (WrapForgeStateInfo forgeStateInfo'') + ( Pair + (WrapIsLeader isLeader') + (Comp tickedChainDepState) + ) + ) = + Comp $ + either (Just . 
WrapCannotForge) (const Nothing) $ + checkCanForge + ( fromMaybe + (error (missingBlockForgingImpossible (eraIndexFromIndex index))) + mBlockForging' + ) + cfg' + curSlot + (unwrapTickedChainDepState tickedChainDepState) + isLeader' + forgeStateInfo'' -- | PRECONDITION: the ticked 'LedgerState' and 'HardForkIsLeader' are from the -- same era, and we must have a 'BlockForging' for that era. @@ -282,92 +294,99 @@ hardForkCheckCanForge blockForging -- This follows from the postcondition of 'check' and the fact that the ticked -- 'ChainDepState' and ticked 'LedgerState' are from the same era. hardForkForgeBlock :: - forall m xs empty. (CanHardFork xs, Monad m) - => OptNP empty (BlockForging m) xs - -> TopLevelConfig (HardForkBlock xs) - -> BlockNo - -> SlotNo - -> TickedLedgerState (HardForkBlock xs) EmptyMK - -> [Validated (GenTx (HardForkBlock xs))] - -> HardForkIsLeader xs - -> m (HardForkBlock xs) -hardForkForgeBlock blockForging - cfg - bno - sno - (TickedHardForkLedgerState transition ledgerState) - txs - isLeader = - fmap (HardForkBlock . OneEraBlock) + forall m xs empty. + (CanHardFork xs, Monad m) => + OptNP empty (BlockForging m) xs -> + TopLevelConfig (HardForkBlock xs) -> + BlockNo -> + SlotNo -> + TickedLedgerState (HardForkBlock xs) EmptyMK -> + [Validated (GenTx (HardForkBlock xs))] -> + HardForkIsLeader xs -> + m (HardForkBlock xs) +hardForkForgeBlock + blockForging + cfg + bno + sno + (TickedHardForkLedgerState transition ledgerState) + txs + isLeader = + fmap (HardForkBlock . OneEraBlock) $ hsequence $ hizipWith3 - forgeBlockOne - cfgs - (OptNP.toNP blockForging) + forgeBlockOne + cfgs + (OptNP.toNP blockForging) $ injectValidatedTxs (map (getOneEraValidatedGenTx . getHardForkValidatedGenTx) txs) -- We know both NSs must be from the same era, because they were all -- produced from the same 'BlockForging'. Unfortunately, we can't enforce -- it statically. 
$ Match.mustMatchNS - "IsLeader" - (getOneEraIsLeader isLeader) - (State.tip ledgerState) - where + "IsLeader" + (getOneEraIsLeader isLeader) + (State.tip ledgerState) + where cfgs = distribTopLevelConfig ei cfg - ei = State.epochInfoPrecomputedTransitionInfo - (hardForkLedgerConfigShape (configLedger cfg)) - transition - ledgerState + ei = + State.epochInfoPrecomputedTransitionInfo + (hardForkLedgerConfigShape (configLedger cfg)) + transition + ledgerState missingBlockForgingImpossible :: EraIndex xs -> String missingBlockForgingImpossible eraIndex = - "impossible: current era lacks block forging but we have an IsLeader proof " + "impossible: current era lacks block forging but we have an IsLeader proof " <> show eraIndex injectValidatedTxs :: - [NS WrapValidatedGenTx xs] - -> NS f xs - -> NS (Product f ([] :.: WrapValidatedGenTx)) xs + [NS WrapValidatedGenTx xs] -> + NS f xs -> + NS (Product f ([] :.: WrapValidatedGenTx)) xs injectValidatedTxs = noMismatches .: flip (matchValidatedTxsNS injTxs) - where - injTxs :: InPairs InjectValidatedTx xs - injTxs = - InPairs.hmap (\(Pair2 _ x) -> x) - $ InPairs.requiringBoth - (hmap (WrapLedgerConfig . configLedger) cfgs) - hardForkInjectTxs + where + injTxs :: InPairs InjectValidatedTx xs + injTxs = + InPairs.hmap (\(Pair2 _ x) -> x) $ + InPairs.requiringBoth + (hmap (WrapLedgerConfig . configLedger) cfgs) + hardForkInjectTxs - -- | We know the transactions must be valid w.r.t. the given ledger - -- state, the Mempool maintains that invariant. That means they are - -- either from the same era, or can be injected into that era. - noMismatches :: - ([Match.Mismatch WrapValidatedGenTx f xs], NS (Product f ([] :.: WrapValidatedGenTx)) xs) - -> NS (Product f ([] :.: WrapValidatedGenTx)) xs - noMismatches ([], xs) = xs - noMismatches (_errs, _) = error "unexpected unmatchable transactions" + -- \| We know the transactions must be valid w.r.t. the given ledger + -- state, the Mempool maintains that invariant. 
That means they are + -- either from the same era, or can be injected into that era. + noMismatches :: + ([Match.Mismatch WrapValidatedGenTx f xs], NS (Product f ([] :.: WrapValidatedGenTx)) xs) -> + NS (Product f ([] :.: WrapValidatedGenTx)) xs + noMismatches ([], xs) = xs + noMismatches (_errs, _) = error "unexpected unmatchable transactions" - -- | Unwraps all the layers needed for SOP and call 'forgeBlock'. + -- \| Unwraps all the layers needed for SOP and call 'forgeBlock'. forgeBlockOne :: - Index xs blk - -> TopLevelConfig blk - -> (Maybe :.: BlockForging m) blk - -> Product - (Product - WrapIsLeader - (FlipTickedLedgerState EmptyMK)) - ([] :.: WrapValidatedGenTx) - blk - -> m blk - forgeBlockOne index - cfg' - (Comp mBlockForging') - (Pair - (Pair (WrapIsLeader isLeader') (FlipTickedLedgerState ledgerState')) - (Comp txs')) = + Index xs blk -> + TopLevelConfig blk -> + (Maybe :.: BlockForging m) blk -> + Product + ( Product + WrapIsLeader + (FlipTickedLedgerState EmptyMK) + ) + ([] :.: WrapValidatedGenTx) + blk -> + m blk + forgeBlockOne + index + cfg' + (Comp mBlockForging') + ( Pair + (Pair (WrapIsLeader isLeader') (FlipTickedLedgerState ledgerState')) + (Comp txs') + ) = forgeBlock - (fromMaybe + ( fromMaybe (error (missingBlockForgingImpossible (eraIndexFromIndex index))) - mBlockForging') + mBlockForging' + ) cfg' bno sno diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Info.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Info.hs index 78328e5d3c..fdb3ef9103 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Info.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Info.hs @@ -3,34 +3,34 @@ {-# LANGUAGE DerivingStrategies #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} -module Ouroboros.Consensus.HardFork.Combinator.Info ( - -- * Era info +module 
Ouroboros.Consensus.HardFork.Combinator.Info + ( -- * Era info LedgerEraInfo (..) , SingleEraInfo (..) ) where -import Codec.Serialise (Serialise) -import Data.Text (Text) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks (..)) +import Codec.Serialise (Serialise) +import Data.Text (Text) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks (..)) {------------------------------------------------------------------------------- Era info -------------------------------------------------------------------------------} -- | Information about an era (mostly for type errors) -data SingleEraInfo blk = SingleEraInfo { - singleEraName :: !Text - } - deriving stock (Generic, Eq, Show) +data SingleEraInfo blk = SingleEraInfo + { singleEraName :: !Text + } + deriving stock (Generic, Eq, Show) deriving anyclass (NoThunks, Serialise) -- | Additional newtype wrapper around 'SingleEraInfo' -- -- This is primarily useful for use in error messages: it marks which era -- info came from the ledger, and which came from a tx/block/header/etc. 
-newtype LedgerEraInfo blk = LedgerEraInfo { - getLedgerEraInfo :: SingleEraInfo blk - } - deriving stock (Eq, Show) +newtype LedgerEraInfo blk = LedgerEraInfo + { getLedgerEraInfo :: SingleEraInfo blk + } + deriving stock (Eq, Show) deriving newtype (NoThunks, Serialise) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/InjectTxs.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/InjectTxs.hs index 463a613d10..6903767194 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/InjectTxs.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/InjectTxs.hs @@ -6,17 +6,19 @@ {-# LANGUAGE TypeOperators #-} -- | Injecting a transaction from one block type to another -module Ouroboros.Consensus.HardFork.Combinator.InjectTxs ( - -- * Polymorphic +module Ouroboros.Consensus.HardFork.Combinator.InjectTxs + ( -- * Polymorphic InjectPolyTx (..) 
, cannotInjectPolyTx , matchPolyTx , matchPolyTxsTele + -- * Unvalidated transactions , InjectTx , cannotInjectTx , matchTx , pattern InjectTx + -- * Validated transactions , InjectValidatedTx , cannotInjectValidatedTx @@ -25,19 +27,19 @@ module Ouroboros.Consensus.HardFork.Combinator.InjectTxs ( , pattern InjectValidatedTx ) where -import Data.Bifunctor -import Data.Functor.Product -import Data.SOP.BasicFunctors -import Data.SOP.InPairs (InPairs (..)) -import Data.SOP.Match -import Data.SOP.Sing -import Data.SOP.Strict -import Data.SOP.Telescope (Telescope (..)) -import qualified Data.SOP.Telescope as Telescope -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (pairFst) +import Data.Bifunctor +import Data.Functor.Product +import Data.SOP.BasicFunctors +import Data.SOP.InPairs (InPairs (..)) +import Data.SOP.Match +import Data.SOP.Sing +import Data.SOP.Strict +import Data.SOP.Telescope (Telescope (..)) +import Data.SOP.Telescope qualified as Telescope +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (pairFst) {------------------------------------------------------------------------------- Polymorphic definitions @@ -46,9 +48,9 @@ import Ouroboros.Consensus.Util (pairFst) -- | @tx@ is either 'GenTx' or 'WrapValidatedGenTx' -- -- See 'InjectTx' and 'InjectValidatedTx', respectively. 
-data InjectPolyTx tx blk blk' = InjectPolyTx { - injectTxWith :: tx blk -> Maybe (tx blk') - } +data InjectPolyTx tx blk blk' = InjectPolyTx + { injectTxWith :: tx blk -> Maybe (tx blk') + } -- | The injection that always fails cannotInjectPolyTx :: InjectPolyTx tx blk blk' @@ -56,68 +58,76 @@ cannotInjectPolyTx = InjectPolyTx $ const Nothing -- | Match transaction with a telescope, attempting to inject where possible matchPolyTx' :: - InPairs (InjectPolyTx tx) xs - -> NS tx xs - -> Telescope g f xs - -> Either (Mismatch tx f xs) - (Telescope g (Product tx f) xs) + InPairs (InjectPolyTx tx) xs -> + NS tx xs -> + Telescope g f xs -> + Either + (Mismatch tx f xs) + (Telescope g (Product tx f) xs) matchPolyTx' = go - where - go :: InPairs (InjectPolyTx tx) xs - -> NS tx xs - -> Telescope g f xs - -> Either (Mismatch tx f xs) - (Telescope g (Product tx f) xs) - go _ (Z x) (TZ f) = Right $ TZ (Pair x f) - go (PCons _ is) (S x) (TS g f) = bimap MS (TS g) $ go is x f - go _ (S x) (TZ f) = Left $ MR x f - go (PCons i is) (Z x) (TS g f) = - case injectTxWith i x of - Nothing -> Left $ ML x (Telescope.tip f) - Just x' -> bimap MS (TS g) $ go is (Z x') f + where + go :: + InPairs (InjectPolyTx tx) xs -> + NS tx xs -> + Telescope g f xs -> + Either + (Mismatch tx f xs) + (Telescope g (Product tx f) xs) + go _ (Z x) (TZ f) = Right $ TZ (Pair x f) + go (PCons _ is) (S x) (TS g f) = bimap MS (TS g) $ go is x f + go _ (S x) (TZ f) = Left $ MR x f + go (PCons i is) (Z x) (TS g f) = + case injectTxWith i x of + Nothing -> Left $ ML x (Telescope.tip f) + Just x' -> bimap MS (TS g) $ go is (Z x') f matchPolyTx :: - SListI xs - => InPairs (InjectPolyTx tx) xs - -> NS tx xs - -> HardForkState f xs - -> Either (Mismatch tx (Current f) xs) - (HardForkState (Product tx f) xs) + SListI xs => + InPairs (InjectPolyTx tx) xs -> + NS tx xs -> + HardForkState f xs -> + Either + (Mismatch tx (Current f) xs) + (HardForkState (Product tx f) xs) matchPolyTx is tx = - fmap (HardForkState . 
hmap distrib) + fmap (HardForkState . hmap distrib) . matchPolyTx' is tx . getHardForkState - where - distrib :: Product tx (Current f) blk -> Current (Product tx f) blk - distrib (Pair tx' Current{..}) = Current { - currentStart = currentStart - , currentState = Pair tx' currentState - } + where + distrib :: Product tx (Current f) blk -> Current (Product tx f) blk + distrib (Pair tx' Current{..}) = + Current + { currentStart = currentStart + , currentState = Pair tx' currentState + } -- | Match a list of transactions with an 'Telescope', attempting to inject -- where possible matchPolyTxsTele :: - forall tx g f xs. SListI xs - => InPairs (InjectPolyTx tx) xs - -> Telescope g f xs - -> [NS tx xs] - -> ( [(NS tx xs, Mismatch tx f xs)] - , Telescope g (Product f ([] :.: tx)) xs - ) + forall tx g f xs. + SListI xs => + InPairs (InjectPolyTx tx) xs -> + Telescope g f xs -> + [NS tx xs] -> + ( [(NS tx xs, Mismatch tx f xs)] + , Telescope g (Product f ([] :.: tx)) xs + ) matchPolyTxsTele is ns = go - where - go :: [NS tx xs] - -> ([(NS tx xs, Mismatch tx f xs)], Telescope g (Product f ([] :.: tx)) xs) - go [] = ([], hmap (`Pair` Comp []) ns) - go (tx:txs) = - let (mismatched, matched) = go txs - in case matchPolyTx' is tx matched of - Left err -> ((tx, hmap pairFst err) : mismatched, matched) - Right matched' -> (mismatched, insert matched') - - insert :: Telescope g (Product tx (Product f ([] :.: tx))) xs - -> Telescope g (Product f ([] :.: tx)) xs - insert = hmap (\(Pair tx (Pair f (Comp txs))) -> Pair f (Comp (tx:txs))) + where + go :: + [NS tx xs] -> + ([(NS tx xs, Mismatch tx f xs)], Telescope g (Product f ([] :.: tx)) xs) + go [] = ([], hmap (`Pair` Comp []) ns) + go (tx : txs) = + let (mismatched, matched) = go txs + in case matchPolyTx' is tx matched of + Left err -> ((tx, hmap pairFst err) : mismatched, matched) + Right matched' -> (mismatched, insert matched') + + insert :: + Telescope g (Product tx (Product f ([] :.: tx))) xs -> + Telescope g (Product f ([] 
:.: tx)) xs + insert = hmap (\(Pair tx (Pair f (Comp txs))) -> Pair f (Comp (tx : txs))) {------------------------------------------------------------------------------- Monomorphic aliases @@ -135,12 +145,13 @@ cannotInjectTx = cannotInjectPolyTx -- | 'matchPolyTx' at type 'InjectTx' matchTx :: - SListI xs - => InPairs InjectTx xs - -> NS GenTx xs - -> HardForkState f xs - -> Either (Mismatch GenTx (Current f) xs) - (HardForkState (Product GenTx f) xs) + SListI xs => + InPairs InjectTx xs -> + NS GenTx xs -> + HardForkState f xs -> + Either + (Mismatch GenTx (Current f) xs) + (HardForkState (Product GenTx f) xs) matchTx = matchPolyTx ----- @@ -149,8 +160,8 @@ type InjectValidatedTx = InjectPolyTx WrapValidatedGenTx -- | 'InjectPolyTx' at type 'InjectValidatedTx' pattern InjectValidatedTx :: - (WrapValidatedGenTx blk -> Maybe (WrapValidatedGenTx blk')) - -> InjectValidatedTx blk blk' + (WrapValidatedGenTx blk -> Maybe (WrapValidatedGenTx blk')) -> + InjectValidatedTx blk blk' pattern InjectValidatedTx f = InjectPolyTx f -- | 'cannotInjectPolyTx' at type 'InjectValidatedTx' @@ -159,19 +170,21 @@ cannotInjectValidatedTx = cannotInjectPolyTx -- | 'matchPolyTx' at type 'InjectValidatedTx' matchValidatedTx :: - SListI xs - => InPairs InjectValidatedTx xs - -> NS WrapValidatedGenTx xs - -> HardForkState f xs - -> Either (Mismatch WrapValidatedGenTx (Current f) xs) - (HardForkState (Product WrapValidatedGenTx f) xs) + SListI xs => + InPairs InjectValidatedTx xs -> + NS WrapValidatedGenTx xs -> + HardForkState f xs -> + Either + (Mismatch WrapValidatedGenTx (Current f) xs) + (HardForkState (Product WrapValidatedGenTx f) xs) matchValidatedTx = matchPolyTx -- | 'matchPolyTxsNS' at type 'InjectValidatedTx' matchValidatedTxsNS :: - forall f xs. SListI xs - => InPairs InjectValidatedTx xs - -> NS f xs - -> [NS WrapValidatedGenTx xs] - -> ([Mismatch WrapValidatedGenTx f xs], NS (Product f ([] :.: WrapValidatedGenTx)) xs) + forall f xs. 
+ SListI xs => + InPairs InjectValidatedTx xs -> + NS f xs -> + [NS WrapValidatedGenTx xs] -> + ([Mismatch WrapValidatedGenTx f xs], NS (Product f ([] :.: WrapValidatedGenTx)) xs) matchValidatedTxsNS ips ns txs = bimap (map snd) Telescope.tip $ matchPolyTxsTele ips (Telescope.fromTip ns) txs diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger.hs index a6c70a2b2b..be9f01ad67 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger.hs @@ -18,25 +18,29 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.Ledger ( - HardForkEnvelopeErr (..) +module Ouroboros.Consensus.HardFork.Combinator.Ledger + ( HardForkEnvelopeErr (..) , HardForkLedgerError (..) , HardForkLedgerUpdate (..) , HardForkLedgerWarning (..) + -- * Type family instances , FlipTickedLedgerState (..) , Ticked (..) + -- * Low-level API (exported for the benefit of testing) , AnnForecast (..) , mkHardForkForecast + -- * Ledger tables , ejectLedgerTables , injectLedgerTables + -- ** HardForkTxIn , HasCanonicalTxIn (..) + -- ** HardForkTxOut , DefaultHardForkTxOut , HasHardForkTxOut (..) 
@@ -45,59 +49,62 @@ module Ouroboros.Consensus.HardFork.Combinator.Ledger ( , injectHardForkTxOutDefault ) where -import Control.Monad (guard) -import Control.Monad.Except (throwError, withExcept) -import qualified Control.State.Transition.Extended as STS -import Data.Functor ((<&>)) -import Data.Functor.Product -import Data.Kind (Type) -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe, isJust) -import Data.MemPack -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Counting (getExactly) -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Index -import Data.SOP.InPairs (InPairs (..)) -import qualified Data.SOP.InPairs as InPairs -import qualified Data.SOP.Match as Match -import Data.SOP.Strict -import Data.SOP.Tails (Tails) -import qualified Data.SOP.Tails as Tails -import Data.SOP.Telescope (Telescope (..)) -import qualified Data.SOP.Telescope as Telescope -import Data.Typeable -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.HardFork.Combinator.Info -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.Combinator.Protocol () -import Ouroboros.Consensus.HardFork.Combinator.Protocol.LedgerView -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Ouroboros.Consensus.HardFork.Combinator.Translation -import Ouroboros.Consensus.HardFork.History (Bound (..), EraParams, - SafeZone (..)) -import qualified Ouroboros.Consensus.HardFork.History as History 
-import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.IndexedMemPack (IndexedMemPack) +import Control.Monad (guard) +import Control.Monad.Except (throwError, withExcept) +import Control.State.Transition.Extended qualified as STS +import Data.Functor ((<&>)) +import Data.Functor.Product +import Data.Kind (Type) +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe, isJust) +import Data.MemPack +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Counting (getExactly) +import Data.SOP.Functors (Flip (..)) +import Data.SOP.InPairs (InPairs (..)) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Index +import Data.SOP.Match qualified as Match +import Data.SOP.Strict +import Data.SOP.Tails (Tails) +import Data.SOP.Tails qualified as Tails +import Data.SOP.Telescope (Telescope (..)) +import Data.SOP.Telescope qualified as Telescope +import Data.Typeable +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.HardFork.Combinator.Info +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.Combinator.Protocol () +import Ouroboros.Consensus.HardFork.Combinator.Protocol.LedgerView +import 
Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.HardFork.Combinator.Translation +import Ouroboros.Consensus.HardFork.History + ( Bound (..) + , EraParams + , SafeZone (..) + ) +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.IndexedMemPack (IndexedMemPack) -- $setup -- >>> import Image.LaTeX.Render @@ -110,12 +117,11 @@ import Ouroboros.Consensus.Util.IndexedMemPack (IndexedMemPack) Errors -------------------------------------------------------------------------------} -data HardForkLedgerError xs = - -- | Validation error from one of the eras +data HardForkLedgerError xs + = -- | Validation error from one of the eras HardForkLedgerErrorFromEra (OneEraLedgerError xs) - - -- | We tried to apply a block from the wrong era - | HardForkLedgerErrorWrongEra (MismatchEraInfo xs) + | -- | We tried to apply a block from the wrong era + HardForkLedgerErrorWrongEra (MismatchEraInfo xs) deriving (Generic, Show, Eq, NoThunks) {------------------------------------------------------------------------------- @@ -123,73 +129,77 @@ data HardForkLedgerError xs = -------------------------------------------------------------------------------} instance CanHardFork xs => GetTip (LedgerState (HardForkBlock xs)) where - getTip = castPoint - . State.getTip (castPoint . getTip . unFlip) - . hardForkLedgerStatePerEra + getTip = + castPoint + . State.getTip (castPoint . getTip . unFlip) + . 
hardForkLedgerStatePerEra instance CanHardFork xs => GetTip (Ticked (LedgerState (HardForkBlock xs))) where - getTip = castPoint - . State.getTip (castPoint . getTip . getFlipTickedLedgerState) - . tickedHardForkLedgerStatePerEra + getTip = + castPoint + . State.getTip (castPoint . getTip . getFlipTickedLedgerState) + . tickedHardForkLedgerStatePerEra {------------------------------------------------------------------------------- Ticking -------------------------------------------------------------------------------} -newtype FlipTickedLedgerState mk blk = FlipTickedLedgerState { - getFlipTickedLedgerState :: Ticked (LedgerState blk) mk +newtype FlipTickedLedgerState mk blk = FlipTickedLedgerState + { getFlipTickedLedgerState :: Ticked (LedgerState blk) mk } -data instance Ticked (LedgerState (HardForkBlock xs)) mk = - TickedHardForkLedgerState { - tickedHardForkLedgerStateTransition :: !TransitionInfo - , tickedHardForkLedgerStatePerEra :: - !(HardForkState (FlipTickedLedgerState mk) xs) - } +data instance Ticked (LedgerState (HardForkBlock xs)) mk + = TickedHardForkLedgerState + { tickedHardForkLedgerStateTransition :: !TransitionInfo + , tickedHardForkLedgerStatePerEra :: + !(HardForkState (FlipTickedLedgerState mk) xs) + } instance CanHardFork xs => IsLedger (LedgerState (HardForkBlock xs)) where - type LedgerErr (LedgerState (HardForkBlock xs)) = HardForkLedgerError xs + type LedgerErr (LedgerState (HardForkBlock xs)) = HardForkLedgerError xs type AuxLedgerEvent (LedgerState (HardForkBlock xs)) = OneEraLedgerEvent xs applyChainTickLedgerResult evs cfg@HardForkLedgerConfig{..} slot (HardForkLedgerState st) = - sequenceHardForkState - (hcizipWith + sequenceHardForkState + ( hcizipWith proxySingle (tickOne ei slot evs) cfgs - extended) <&> \l' -> - TickedHardForkLedgerState { - tickedHardForkLedgerStateTransition = - -- We are bundling a 'TransitionInfo' with a /ticked/ ledger state, - -- but /derive/ that 'TransitionInfo' from the /unticked/ (albeit - -- 
extended) state. That requires justification. Three cases: - -- - -- o 'TransitionUnknown'. If the transition is unknown, then it - -- cannot become known due to ticking. In this case, we record - -- the tip of the ledger, which ticking also does not modify - -- (this is an explicit postcondition of 'applyChainTick'). - -- o 'TransitionKnown'. If the transition to the next epoch is - -- already known, then ticking does not change that information. - -- It can't be the case that the 'SlotNo' we're ticking to is - -- /in/ that next era, because if was, then 'extendToSlot' would - -- have extended the telescope further. - -- (This does mean however that it is important to use the - -- /extended/ ledger state, not the original, to determine the - -- 'TransitionInfo'.) - -- o 'TransitionImpossible'. This has two subcases: either we are - -- in the final era, in which case ticking certainly won't be able - -- to change that, or we're forecasting, which is simply not - -- applicable here. - State.mostRecentTransitionInfo cfg extended - , tickedHardForkLedgerStatePerEra = l' - } - where - cfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - ei = State.epochInfoLedger cfg st + extended + ) + <&> \l' -> + TickedHardForkLedgerState + { tickedHardForkLedgerStateTransition = + -- We are bundling a 'TransitionInfo' with a /ticked/ ledger state, + -- but /derive/ that 'TransitionInfo' from the /unticked/ (albeit + -- extended) state. That requires justification. Three cases: + -- + -- o 'TransitionUnknown'. If the transition is unknown, then it + -- cannot become known due to ticking. In this case, we record + -- the tip of the ledger, which ticking also does not modify + -- (this is an explicit postcondition of 'applyChainTick'). + -- o 'TransitionKnown'. If the transition to the next epoch is + -- already known, then ticking does not change that information. 
+                        --     It can't be the case that the 'SlotNo' we're ticking to is
+                        --     /in/ that next era, because it was, then 'extendToSlot' would
+                        --     have extended the telescope further.
+                        --     (This does mean however that it is important to use the
+                        --     /extended/ ledger state, not the original, to determine the
+                        --     'TransitionInfo'.)
+                        --  o 'TransitionImpossible'. This has two subcases: either we are
+                        --     in the final era, in which case ticking certainly won't be able
+                        --     to change that, or we're forecasting, which is simply not
+                        --     applicable here.
+                        State.mostRecentTransitionInfo cfg extended
+                    , tickedHardForkLedgerStatePerEra = l'
+                    }
+   where
+    cfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra
+    ei = State.epochInfoLedger cfg st

-      extended :: HardForkState (Flip LedgerState DiffMK) xs
-      extended = State.extendToSlot cfg slot st
+    extended :: HardForkState (Flip LedgerState DiffMK) xs
+    extended = State.extendToSlot cfg slot st

 -- | Ticking outside of era transitions for now does not generate differences
 -- now that we only have the UTxO table, but we need the same type regardless of
@@ -198,21 +208,24 @@ instance CanHardFork xs => IsLedger (LedgerState (HardForkBlock xs)) where
 -- This function ticks the ledger state using the particular block function, and
 -- prepends the diffs that might have been created if this tick crossed an era
 -- boundary. 
-tickOne :: (SListI xs, SingleEraBlock blk) - => EpochInfo (Except PastHorizonException) - -> SlotNo - -> ComputeLedgerEvents - -> Index xs blk - -> WrapPartialLedgerConfig blk - -> (Flip LedgerState DiffMK) blk - -> ( LedgerResult (LedgerState (HardForkBlock xs)) - :.: FlipTickedLedgerState DiffMK - ) blk +tickOne :: + (SListI xs, SingleEraBlock blk) => + EpochInfo (Except PastHorizonException) -> + SlotNo -> + ComputeLedgerEvents -> + Index xs blk -> + WrapPartialLedgerConfig blk -> + (Flip LedgerState DiffMK) blk -> + ( LedgerResult (LedgerState (HardForkBlock xs)) + :.: FlipTickedLedgerState DiffMK + ) + blk tickOne ei slot evs sopIdx partialCfg st = - Comp - . fmap ( FlipTickedLedgerState - . prependDiffs (unFlip st) - ) + Comp + . fmap + ( FlipTickedLedgerState + . prependDiffs (unFlip st) + ) . embedLedgerResult (injectLedgerEvent sopIdx) . applyChainTickLedgerResult evs (completeLedgerConfig' ei partialCfg) slot . forgetLedgerTables @@ -223,76 +236,87 @@ tickOne ei slot evs sopIdx partialCfg st = ApplyBlock -------------------------------------------------------------------------------} -instance ( CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - ) - => ApplyBlock (LedgerState (HardForkBlock xs)) (HardForkBlock xs) where - - applyBlockLedgerResultWithValidation doValidate opts cfg - (HardForkBlock (OneEraBlock block)) - (TickedHardForkLedgerState transition st) = +instance + ( CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + ) => + ApplyBlock (LedgerState (HardForkBlock xs)) (HardForkBlock xs) + where + applyBlockLedgerResultWithValidation + doValidate + opts + cfg + (HardForkBlock (OneEraBlock block)) + (TickedHardForkLedgerState transition st) = case State.match block st of Left mismatch -> -- Block from the wrong era (note that 'applyChainTick' will already -- have initiated the transition to the next era if appropriate). - throwError - $ HardForkLedgerErrorWrongEra . 
MismatchEraInfo - $ Match.bihcmap proxySingle singleEraInfo ledgerInfo mismatch + throwError $ + HardForkLedgerErrorWrongEra . MismatchEraInfo $ + Match.bihcmap proxySingle singleEraInfo ledgerInfo mismatch Right matched -> - fmap (fmap HardForkLedgerState . sequenceHardForkState) - $ hsequence' - $ hcizipWith proxySingle (apply doValidate opts) cfgs matched - where + fmap (fmap HardForkLedgerState . sequenceHardForkState) $ + hsequence' $ + hcizipWith proxySingle (apply doValidate opts) cfgs matched + where cfgs = distribLedgerConfig ei cfg - ei = State.epochInfoPrecomputedTransitionInfo - (hardForkLedgerConfigShape cfg) - transition - st + ei = + State.epochInfoPrecomputedTransitionInfo + (hardForkLedgerConfigShape cfg) + transition + st applyBlockLedgerResult = defaultApplyBlockLedgerResult - reapplyBlockLedgerResult = defaultReapplyBlockLedgerResult (\_ -> - -- We already applied this block to this ledger state, - -- so it can't be from the wrong era - error "reapplyBlockLedgerResult: can't be from other era" + reapplyBlockLedgerResult = + defaultReapplyBlockLedgerResult + ( \_ -> + -- We already applied this block to this ledger state, + -- so it can't be from the wrong era + error "reapplyBlockLedgerResult: can't be from other era" ) getBlockKeySets (HardForkBlock (OneEraBlock ns)) = - hcollapse - $ hcimap proxySingle f ns - where - f :: - SingleEraBlock x - => Index xs x - -> I x - -> K (LedgerTables (LedgerState (HardForkBlock xs)) KeysMK) x - f idx (I blk) = K $ injectLedgerTables idx $ getBlockKeySets blk - -apply :: (SListI xs, SingleEraBlock blk) - => STS.ValidationPolicy - -> ComputeLedgerEvents - -> Index xs blk - -> WrapLedgerConfig blk - -> Product I (FlipTickedLedgerState ValuesMK) blk - -> ( Except (HardForkLedgerError xs) - :.: LedgerResult (LedgerState (HardForkBlock xs)) - :.: Flip LedgerState DiffMK - ) blk + hcollapse $ + hcimap proxySingle f ns + where + f :: + SingleEraBlock x => + Index xs x -> + I x -> + K (LedgerTables (LedgerState 
(HardForkBlock xs)) KeysMK) x + f idx (I blk) = K $ injectLedgerTables idx $ getBlockKeySets blk + +apply :: + (SListI xs, SingleEraBlock blk) => + STS.ValidationPolicy -> + ComputeLedgerEvents -> + Index xs blk -> + WrapLedgerConfig blk -> + Product I (FlipTickedLedgerState ValuesMK) blk -> + ( Except (HardForkLedgerError xs) + :.: LedgerResult (LedgerState (HardForkBlock xs)) + :.: Flip LedgerState DiffMK + ) + blk apply doValidate opts index (WrapLedgerConfig cfg) (Pair (I block) (FlipTickedLedgerState st)) = - Comp - $ withExcept (injectLedgerError index) - $ fmap (Comp . fmap Flip . embedLedgerResult (injectLedgerEvent index)) - $ applyBlockLedgerResultWithValidation doValidate opts cfg block st + Comp $ + withExcept (injectLedgerError index) $ + fmap (Comp . fmap Flip . embedLedgerResult (injectLedgerEvent index)) $ + applyBlockLedgerResultWithValidation doValidate opts cfg block st {------------------------------------------------------------------------------- UpdateLedger -------------------------------------------------------------------------------} -instance ( CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - ) => UpdateLedger (HardForkBlock xs) +instance + ( CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + ) => + UpdateLedger (HardForkBlock xs) {------------------------------------------------------------------------------- HasHardForkHistory @@ -301,104 +325,118 @@ instance ( CanHardFork xs instance All SingleEraBlock xs => HasHardForkHistory (HardForkBlock xs) where type HardForkIndices (HardForkBlock xs) = xs - hardForkSummary cfg = State.reconstructSummaryLedger cfg - . hardForkLedgerStatePerEra + hardForkSummary cfg = + State.reconstructSummaryLedger cfg + . 
hardForkLedgerStatePerEra {------------------------------------------------------------------------------- HeaderValidation -------------------------------------------------------------------------------} -data HardForkEnvelopeErr xs = - -- | Validation error from one of the eras +data HardForkEnvelopeErr xs + = -- | Validation error from one of the eras HardForkEnvelopeErrFromEra (OneEraEnvelopeErr xs) - - -- | We tried to apply a block from the wrong era - | HardForkEnvelopeErrWrongEra (MismatchEraInfo xs) + | -- | We tried to apply a block from the wrong era + HardForkEnvelopeErrWrongEra (MismatchEraInfo xs) deriving (Eq, Show, Generic, NoThunks) instance CanHardFork xs => ValidateEnvelope (HardForkBlock xs) where type OtherHeaderEnvelopeError (HardForkBlock xs) = HardForkEnvelopeErr xs - additionalEnvelopeChecks tlc - (HardForkLedgerView transition hardForkView) = - \(HardForkHeader (OneEraHeader hdr)) -> - case Match.matchNS hdr (State.tip hardForkView) of - Left mismatch -> - throwError $ - HardForkEnvelopeErrWrongEra . MismatchEraInfo $ - Match.bihcmap proxySingle singleEraInfo ledgerViewInfo mismatch - Right matched -> - hcollapse $ hcizipWith proxySingle aux cfgs matched - where + additionalEnvelopeChecks + tlc + (HardForkLedgerView transition hardForkView) = + \(HardForkHeader (OneEraHeader hdr)) -> + case Match.matchNS hdr (State.tip hardForkView) of + Left mismatch -> + throwError $ + HardForkEnvelopeErrWrongEra . MismatchEraInfo $ + Match.bihcmap proxySingle singleEraInfo ledgerViewInfo mismatch + Right matched -> + hcollapse $ hcizipWith proxySingle aux cfgs matched + where ei :: EpochInfo (Except PastHorizonException) - ei = State.epochInfoPrecomputedTransitionInfo - (hardForkLedgerConfigShape $ configLedger tlc) - transition - hardForkView + ei = + State.epochInfoPrecomputedTransitionInfo + (hardForkLedgerConfigShape $ configLedger tlc) + transition + hardForkView cfgs :: NP TopLevelConfig xs cfgs = distribTopLevelConfig ei tlc - aux :: forall blk. 
SingleEraBlock blk - => Index xs blk - -> TopLevelConfig blk - -> Product Header WrapLedgerView blk - -> K (Except (HardForkEnvelopeErr xs) ()) blk - aux index cfg (Pair hdr view) = K $ + aux :: + forall blk. + SingleEraBlock blk => + Index xs blk -> + TopLevelConfig blk -> + Product Header WrapLedgerView blk -> + K (Except (HardForkEnvelopeErr xs) ()) blk + aux index cfg (Pair hdr view) = + K $ withExcept injErr' $ additionalEnvelopeChecks cfg (unwrapLedgerView view) hdr - where - injErr' :: OtherHeaderEnvelopeError blk -> HardForkEnvelopeErr xs - injErr' = HardForkEnvelopeErrFromEra - . OneEraEnvelopeErr - . injectNS index - . WrapEnvelopeErr + where + injErr' :: OtherHeaderEnvelopeError blk -> HardForkEnvelopeErr xs + injErr' = + HardForkEnvelopeErrFromEra + . OneEraEnvelopeErr + . injectNS index + . WrapEnvelopeErr {------------------------------------------------------------------------------- LedgerSupportsProtocol -------------------------------------------------------------------------------} -instance ( CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - ) => LedgerSupportsProtocol (HardForkBlock xs) where - protocolLedgerView HardForkLedgerConfig{..} - (TickedHardForkLedgerState transition ticked) = - HardForkLedgerView { - hardForkLedgerViewTransition = transition - , hardForkLedgerViewPerEra = +instance + ( CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + ) => + LedgerSupportsProtocol (HardForkBlock xs) + where + protocolLedgerView + HardForkLedgerConfig{..} + (TickedHardForkLedgerState transition ticked) = + HardForkLedgerView + { hardForkLedgerViewTransition = transition + , hardForkLedgerViewPerEra = hczipWith proxySingle viewOne cfgs ticked } - where + where cfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - ei = State.epochInfoPrecomputedTransitionInfo - hardForkLedgerConfigShape - transition - ticked - - viewOne :: SingleEraBlock blk - => WrapPartialLedgerConfig blk - -> FlipTickedLedgerState mk blk - -> 
WrapLedgerView blk + ei = + State.epochInfoPrecomputedTransitionInfo + hardForkLedgerConfigShape + transition + ticked + + viewOne :: + SingleEraBlock blk => + WrapPartialLedgerConfig blk -> + FlipTickedLedgerState mk blk -> + WrapLedgerView blk viewOne cfg (FlipTickedLedgerState st) = - WrapLedgerView $ - protocolLedgerView (completeLedgerConfig' ei cfg) st + WrapLedgerView $ + protocolLedgerView (completeLedgerConfig' ei cfg) st - ledgerViewForecastAt ledgerCfg@HardForkLedgerConfig{..} - (HardForkLedgerState ledgerSt) = + ledgerViewForecastAt + ledgerCfg@HardForkLedgerConfig{..} + (HardForkLedgerState ledgerSt) = mkHardForkForecast (InPairs.requiringBoth cfgs $ crossEraForecast hardForkEraTranslation) annForecast - where - ei = State.epochInfoLedger ledgerCfg ledgerSt + where + ei = State.epochInfoLedger ledgerCfg ledgerSt pcfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs + cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs annForecast :: HardForkState (AnnForecast LedgerState WrapLedgerView) xs - annForecast = HardForkState $ + annForecast = + HardForkState $ hczipWith3 proxySingle forecastOne @@ -407,408 +445,432 @@ instance ( CanHardFork xs (getHardForkState ledgerSt) forecastOne :: - forall blk mk. SingleEraBlock blk - => WrapPartialLedgerConfig blk - -> K EraParams blk - -> Current (Flip LedgerState mk) blk - -> Current (AnnForecast LedgerState WrapLedgerView) blk - forecastOne cfg (K params) (Current start (Flip st)) = Current { - currentStart = start - , currentState = AnnForecast { - annForecast = mapForecast WrapLedgerView $ - ledgerViewForecastAt cfg' st - , annForecastState = forgetLedgerTables st - , annForecastTip = ledgerTipSlot st - , annForecastEnd = History.mkUpperBound params start <$> - singleEraTransition' cfg params start st - } + forall blk mk. 
+ SingleEraBlock blk => + WrapPartialLedgerConfig blk -> + K EraParams blk -> + Current (Flip LedgerState mk) blk -> + Current (AnnForecast LedgerState WrapLedgerView) blk + forecastOne cfg (K params) (Current start (Flip st)) = + Current + { currentStart = start + , currentState = + AnnForecast + { annForecast = + mapForecast WrapLedgerView $ + ledgerViewForecastAt cfg' st + , annForecastState = forgetLedgerTables st + , annForecastTip = ledgerTipSlot st + , annForecastEnd = + History.mkUpperBound params start + <$> singleEraTransition' cfg params start st + } } - where - cfg' :: LedgerConfig blk - cfg' = completeLedgerConfig' ei cfg + where + cfg' :: LedgerConfig blk + cfg' = completeLedgerConfig' ei cfg {------------------------------------------------------------------------------- Annotated forecasts -------------------------------------------------------------------------------} -- | Forecast annotated with details about the ledger it was derived from -data AnnForecast state view blk = AnnForecast { - annForecast :: Forecast (view blk) - , annForecastState :: state blk EmptyMK - , annForecastTip :: WithOrigin SlotNo - , annForecastEnd :: Maybe Bound - } +data AnnForecast state view blk = AnnForecast + { annForecast :: Forecast (view blk) + , annForecastState :: state blk EmptyMK + , annForecastTip :: WithOrigin SlotNo + , annForecastEnd :: Maybe Bound + } -- | Change a telescope of a forecast into a forecast of a telescope mkHardForkForecast :: - forall state view xs. - SListI xs - => InPairs (CrossEraForecaster state view) xs - -> HardForkState (AnnForecast state view) xs - -> Forecast (HardForkLedgerView_ view xs) -mkHardForkForecast translations st = Forecast { - forecastAt = hcollapse (hmap (K . forecastAt . annForecast) st) + forall state view xs. 
+ SListI xs => + InPairs (CrossEraForecaster state view) xs -> + HardForkState (AnnForecast state view) xs -> + Forecast (HardForkLedgerView_ view xs) +mkHardForkForecast translations st = + Forecast + { forecastAt = hcollapse (hmap (K . forecastAt . annForecast) st) , forecastFor = \sno -> go sno translations (getHardForkState st) } - where - go :: SlotNo - -> InPairs (CrossEraForecaster state view) xs' - -> Telescope (K Past) (Current (AnnForecast state view)) xs' - -> Except OutsideForecastRange (HardForkLedgerView_ view xs') - go sno pairs (TZ cur) = oneForecast sno pairs cur - go sno (PCons _ ts) (TS past rest) = shiftView past <$> go sno ts rest + where + go :: + SlotNo -> + InPairs (CrossEraForecaster state view) xs' -> + Telescope (K Past) (Current (AnnForecast state view)) xs' -> + Except OutsideForecastRange (HardForkLedgerView_ view xs') + go sno pairs (TZ cur) = oneForecast sno pairs cur + go sno (PCons _ ts) (TS past rest) = shiftView past <$> go sno ts rest oneForecast :: - forall state view blk blks. - SlotNo - -> InPairs (CrossEraForecaster state view) (blk : blks) - -- ^ this function uses at most the first translation - -> Current (AnnForecast state view) blk - -> Except OutsideForecastRange (HardForkLedgerView_ view (blk : blks)) + forall state view blk blks. 
+ SlotNo -> + -- | this function uses at most the first translation + InPairs (CrossEraForecaster state view) (blk : blks) -> + Current (AnnForecast state view) blk -> + Except OutsideForecastRange (HardForkLedgerView_ view (blk : blks)) oneForecast sno pairs (Current start AnnForecast{..}) = - case annForecastEnd of - Nothing -> endUnknown <$> forecastFor annForecast sno - Just end -> - if sno < boundSlot end + case annForecastEnd of + Nothing -> endUnknown <$> forecastFor annForecast sno + Just end -> + if sno < boundSlot end then beforeKnownEnd end <$> forecastFor annForecast sno else case pairs of PCons translate _ -> - afterKnownEnd end - <$> crossEraForecastWith translate end sno annForecastState - PNil -> + afterKnownEnd end + <$> crossEraForecastWith translate end sno annForecastState + PNil -> -- The requested slot is after the last era the code knows about. - throwError OutsideForecastRange { - outsideForecastAt = forecastAt annForecast - , outsideForecastMaxFor = boundSlot end - , outsideForecastFor = sno - } - where - endUnknown :: - f blk - -> HardForkLedgerView_ f (blk : blks) - endUnknown view = HardForkLedgerView { - hardForkLedgerViewTransition = - TransitionUnknown annForecastTip - , hardForkLedgerViewPerEra = HardForkState $ + throwError + OutsideForecastRange + { outsideForecastAt = forecastAt annForecast + , outsideForecastMaxFor = boundSlot end + , outsideForecastFor = sno + } + where + endUnknown :: + f blk -> + HardForkLedgerView_ f (blk : blks) + endUnknown view = + HardForkLedgerView + { hardForkLedgerViewTransition = + TransitionUnknown annForecastTip + , hardForkLedgerViewPerEra = + HardForkState $ TZ (Current start view) - } + } - beforeKnownEnd :: - Bound - -> f blk - -> HardForkLedgerView_ f (blk : blks) - beforeKnownEnd end view = HardForkLedgerView { - hardForkLedgerViewTransition = - TransitionKnown (boundEpoch end) - , hardForkLedgerViewPerEra = HardForkState $ + beforeKnownEnd :: + Bound -> + f blk -> + HardForkLedgerView_ f (blk 
: blks) + beforeKnownEnd end view = + HardForkLedgerView + { hardForkLedgerViewTransition = + TransitionKnown (boundEpoch end) + , hardForkLedgerViewPerEra = + HardForkState $ TZ (Current start view) - } + } - afterKnownEnd :: - Bound - -> f blk' - -> HardForkLedgerView_ f (blk : blk' : blks') - afterKnownEnd end view = HardForkLedgerView { - hardForkLedgerViewTransition = - -- We assume that we only ever have to translate to the /next/ era - -- (as opposed to /any/ subsequent era) - TransitionImpossible - , hardForkLedgerViewPerEra = HardForkState $ + afterKnownEnd :: + Bound -> + f blk' -> + HardForkLedgerView_ f (blk : blk' : blks') + afterKnownEnd end view = + HardForkLedgerView + { hardForkLedgerViewTransition = + -- We assume that we only ever have to translate to the /next/ era + -- (as opposed to /any/ subsequent era) + TransitionImpossible + , hardForkLedgerViewPerEra = + HardForkState $ TS (K (Past start end)) $ - TZ (Current end view) - } + TZ (Current end view) + } -shiftView :: K Past blk - -> HardForkLedgerView_ f blks - -> HardForkLedgerView_ f (blk : blks) -shiftView past HardForkLedgerView{..} = HardForkLedgerView { - hardForkLedgerViewTransition = hardForkLedgerViewTransition +shiftView :: + K Past blk -> + HardForkLedgerView_ f blks -> + HardForkLedgerView_ f (blk : blks) +shiftView past HardForkLedgerView{..} = + HardForkLedgerView + { hardForkLedgerViewTransition = hardForkLedgerViewTransition , hardForkLedgerViewPerEra = - HardForkState - . TS past - . getHardForkState - $ hardForkLedgerViewPerEra + HardForkState + . TS past + . 
getHardForkState + $ hardForkLedgerViewPerEra } {------------------------------------------------------------------------------- Inspection -------------------------------------------------------------------------------} -data HardForkLedgerWarning xs = - -- | Warning from the underlying era +data HardForkLedgerWarning xs + = -- | Warning from the underlying era HardForkWarningInEra (OneEraLedgerWarning xs) - - -- | The transition to the next era does not match the 'EraParams' + | -- | The transition to the next era does not match the 'EraParams' -- -- The 'EraParams' can specify a lower bound on when the transition to the -- next era will happen. If the actual transition, when confirmed, is -- /before/ this lower bound, the node is misconfigured and will likely -- not work correctly. This should be taken care of as soon as possible -- (before the transition happens). - | HardForkWarningTransitionMismatch (EraIndex xs) EraParams EpochNo - - -- | Transition in the final era + HardForkWarningTransitionMismatch (EraIndex xs) EraParams EpochNo + | -- | Transition in the final era -- -- The final era should never confirm any transitions. For clarity, we also -- record the index of that final era. - | HardForkWarningTransitionInFinalEra (EraIndex xs) EpochNo - - -- | An already-confirmed transition got un-confirmed - | HardForkWarningTransitionUnconfirmed (EraIndex xs) - - -- | An already-confirmed transition got changed + HardForkWarningTransitionInFinalEra (EraIndex xs) EpochNo + | -- | An already-confirmed transition got un-confirmed + HardForkWarningTransitionUnconfirmed (EraIndex xs) + | -- | An already-confirmed transition got changed -- -- We record the indices of the era we are transitioning from and to, -- as well as the old and new 'EpochNo' of that transition, in that order. 
- | HardForkWarningTransitionReconfirmed (EraIndex xs) (EraIndex xs) EpochNo EpochNo - -data HardForkLedgerUpdate xs = - HardForkUpdateInEra (OneEraLedgerUpdate xs) - - -- | Hard fork transition got confirmed - | HardForkUpdateTransitionConfirmed (EraIndex xs) (EraIndex xs) EpochNo + HardForkWarningTransitionReconfirmed (EraIndex xs) (EraIndex xs) EpochNo EpochNo - -- | Hard fork transition happened +data HardForkLedgerUpdate xs + = HardForkUpdateInEra (OneEraLedgerUpdate xs) + | -- | Hard fork transition got confirmed + HardForkUpdateTransitionConfirmed (EraIndex xs) (EraIndex xs) EpochNo + | -- | Hard fork transition happened -- -- We record the 'EpochNo' at the start of the era after the transition - | HardForkUpdateTransitionDone (EraIndex xs) (EraIndex xs) EpochNo - - -- | The hard fork transition rolled back - | HardForkUpdateTransitionRolledBack (EraIndex xs) (EraIndex xs) + HardForkUpdateTransitionDone (EraIndex xs) (EraIndex xs) EpochNo + | -- | The hard fork transition rolled back + HardForkUpdateTransitionRolledBack (EraIndex xs) (EraIndex xs) deriving instance CanHardFork xs => Show (HardForkLedgerWarning xs) -deriving instance CanHardFork xs => Eq (HardForkLedgerWarning xs) +deriving instance CanHardFork xs => Eq (HardForkLedgerWarning xs) deriving instance CanHardFork xs => Show (HardForkLedgerUpdate xs) -deriving instance CanHardFork xs => Eq (HardForkLedgerUpdate xs) +deriving instance CanHardFork xs => Eq (HardForkLedgerUpdate xs) instance CanHardFork xs => Condense (HardForkLedgerUpdate xs) where condense (HardForkUpdateInEra (OneEraLedgerUpdate update)) = - hcollapse $ hcmap proxySingle (K . condense . unwrapLedgerUpdate) update + hcollapse $ hcmap proxySingle (K . condense . 
unwrapLedgerUpdate) update condense (HardForkUpdateTransitionConfirmed ix ix' t) = - "confirmed " ++ condense (ix, ix', t) + "confirmed " ++ condense (ix, ix', t) condense (HardForkUpdateTransitionDone ix ix' e) = - "done " ++ condense (ix, ix', e) + "done " ++ condense (ix, ix', e) condense (HardForkUpdateTransitionRolledBack ix ix') = - "rolled back " ++ condense (ix, ix') + "rolled back " ++ condense (ix, ix') instance CanHardFork xs => InspectLedger (HardForkBlock xs) where type LedgerWarning (HardForkBlock xs) = HardForkLedgerWarning xs - type LedgerUpdate (HardForkBlock xs) = HardForkLedgerUpdate xs + type LedgerUpdate (HardForkBlock xs) = HardForkLedgerUpdate xs - inspectLedger cfg - (HardForkLedgerState before) - (HardForkLedgerState after) = + inspectLedger + cfg + (HardForkLedgerState before) + (HardForkLedgerState after) = inspectHardForkLedger pcfgs (getExactly shape) cfgs (Telescope.tip (getHardForkState before)) (Telescope.tip (getHardForkState after)) - where + where HardForkLedgerConfig{..} = configLedger cfg pcfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra shape = History.getShape hardForkLedgerConfigShape - cfgs = distribTopLevelConfig ei cfg - ei = State.epochInfoLedger (configLedger cfg) after + cfgs = distribTopLevelConfig ei cfg + ei = State.epochInfoLedger (configLedger cfg) after inspectHardForkLedger :: - CanHardFork xs - => NP WrapPartialLedgerConfig xs - -> NP (K EraParams) xs - -> NP TopLevelConfig xs - -> NS (Current (Flip LedgerState mk1)) xs - -> NS (Current (Flip LedgerState mk2)) xs - -> [LedgerEvent (HardForkBlock xs)] + CanHardFork xs => + NP WrapPartialLedgerConfig xs -> + NP (K EraParams) xs -> + NP TopLevelConfig xs -> + NS (Current (Flip LedgerState mk1)) xs -> + NS (Current (Flip LedgerState mk2)) xs -> + [LedgerEvent (HardForkBlock xs)] inspectHardForkLedger = go - where - go :: All SingleEraBlock xs - => NP WrapPartialLedgerConfig xs - -> NP (K EraParams) xs - -> NP TopLevelConfig xs - -> NS (Current (Flip 
LedgerState mk1)) xs - -> NS (Current (Flip LedgerState mk2)) xs - -> [LedgerEvent (HardForkBlock xs)] - - go (pc :* _) (K ps :* pss) (c :* _) (Z before) (Z after) = concat [ - map liftEvent $ - inspectLedger - c - (unFlip $ currentState before) - (unFlip $ currentState after) - - , case (pss, confirmedBefore, confirmedAfter) of - (_, Nothing, Nothing) -> - [] - (_, Just _, Nothing) -> - -- TODO: This should be a warning, but this can currently happen - -- in Byron. - [] - -- return $ LedgerWarning $ - -- HardForkWarningTransitionUnconfirmed eraIndexZero - (Nil, Nothing, Just transition) -> - return $ LedgerWarning $ + where + go :: + All SingleEraBlock xs => + NP WrapPartialLedgerConfig xs -> + NP (K EraParams) xs -> + NP TopLevelConfig xs -> + NS (Current (Flip LedgerState mk1)) xs -> + NS (Current (Flip LedgerState mk2)) xs -> + [LedgerEvent (HardForkBlock xs)] + + go (pc :* _) (K ps :* pss) (c :* _) (Z before) (Z after) = + concat + [ map liftEvent $ + inspectLedger + c + (unFlip $ currentState before) + (unFlip $ currentState after) + , case (pss, confirmedBefore, confirmedAfter) of + (_, Nothing, Nothing) -> + [] + (_, Just _, Nothing) -> + -- TODO: This should be a warning, but this can currently happen + -- in Byron. 
+ [] + -- return $ LedgerWarning $ + -- HardForkWarningTransitionUnconfirmed eraIndexZero + (Nil, Nothing, Just transition) -> + return $ + LedgerWarning $ HardForkWarningTransitionInFinalEra eraIndexZero transition - (Nil, Just transition, Just transition') -> do - -- Only warn if the transition has changed - guard (transition /= transition') - return $ LedgerWarning $ + (Nil, Just transition, Just transition') -> do + -- Only warn if the transition has changed + guard (transition /= transition') + return $ + LedgerWarning $ HardForkWarningTransitionInFinalEra eraIndexZero transition - ((:*){}, Nothing, Just transition) -> - return $ - if validLowerBound (History.eraSafeZone ps) - then LedgerUpdate $ - HardForkUpdateTransitionConfirmed - eraIndexZero - (eraIndexSucc eraIndexZero) - transition - else LedgerWarning $ - HardForkWarningTransitionMismatch - eraIndexZero - ps - transition - ((:*){}, Just transition, Just transition') -> do - guard (transition /= transition') - return $ LedgerWarning $ + ((:*){}, Nothing, Just transition) -> + return $ + if validLowerBound (History.eraSafeZone ps) + then + LedgerUpdate $ + HardForkUpdateTransitionConfirmed + eraIndexZero + (eraIndexSucc eraIndexZero) + transition + else + LedgerWarning $ + HardForkWarningTransitionMismatch + eraIndexZero + ps + transition + ((:*){}, Just transition, Just transition') -> do + guard (transition /= transition') + return $ + LedgerWarning $ HardForkWarningTransitionReconfirmed eraIndexZero (eraIndexSucc eraIndexZero) transition transition' - ] - where - confirmedBefore, confirmedAfter :: Maybe EpochNo - confirmedBefore = singleEraTransition - (unwrapPartialLedgerConfig pc) - ps - (currentStart before) - (unFlip $ currentState before) - confirmedAfter = singleEraTransition - (unwrapPartialLedgerConfig pc) - ps - (currentStart after) - (unFlip $ currentState after) - - go Nil _ _ before _ = - case before of {} - go (_ :* pcs) (_ :* pss) (_ :* cs) (S before) (S after) = - map shiftEvent $ go 
pcs pss cs before after - go _ _ _ (Z _) (S after) = - return $ - LedgerUpdate $ - HardForkUpdateTransitionDone - eraIndexZero - (eraIndexSucc $ eraIndexFromNS after) - (hcollapse $ hmap (K . boundEpoch . currentStart) after) - go _ _ _ (S before) (Z _) = - return $ - LedgerUpdate $ - HardForkUpdateTransitionRolledBack - (eraIndexSucc $ eraIndexFromNS before) - eraIndexZero - - validLowerBound :: SafeZone -> Bool - validLowerBound (StandardSafeZone _) = True - validLowerBound UnsafeIndefiniteSafeZone = False + ] + where + confirmedBefore, confirmedAfter :: Maybe EpochNo + confirmedBefore = + singleEraTransition + (unwrapPartialLedgerConfig pc) + ps + (currentStart before) + (unFlip $ currentState before) + confirmedAfter = + singleEraTransition + (unwrapPartialLedgerConfig pc) + ps + (currentStart after) + (unFlip $ currentState after) + go Nil _ _ before _ = + case before of {} + go (_ :* pcs) (_ :* pss) (_ :* cs) (S before) (S after) = + map shiftEvent $ go pcs pss cs before after + go _ _ _ (Z _) (S after) = + return $ + LedgerUpdate $ + HardForkUpdateTransitionDone + eraIndexZero + (eraIndexSucc $ eraIndexFromNS after) + (hcollapse $ hmap (K . boundEpoch . 
currentStart) after) + go _ _ _ (S before) (Z _) = + return $ + LedgerUpdate $ + HardForkUpdateTransitionRolledBack + (eraIndexSucc $ eraIndexFromNS before) + eraIndexZero + + validLowerBound :: SafeZone -> Bool + validLowerBound (StandardSafeZone _) = True + validLowerBound UnsafeIndefiniteSafeZone = False {------------------------------------------------------------------------------- Internal auxiliary: lifting and shifting events -------------------------------------------------------------------------------} -liftEvent :: LedgerEvent x - -> LedgerEvent (HardForkBlock (x ': xs)) +liftEvent :: + LedgerEvent x -> + LedgerEvent (HardForkBlock (x ': xs)) liftEvent (LedgerWarning warning) = LedgerWarning $ liftWarning warning -liftEvent (LedgerUpdate update) = LedgerUpdate $ liftUpdate update +liftEvent (LedgerUpdate update) = LedgerUpdate $ liftUpdate update liftWarning :: LedgerWarning x -> HardForkLedgerWarning (x ': xs) liftWarning = - HardForkWarningInEra + HardForkWarningInEra . OneEraLedgerWarning . Z . WrapLedgerWarning liftUpdate :: LedgerUpdate x -> HardForkLedgerUpdate (x ': xs) liftUpdate = - HardForkUpdateInEra + HardForkUpdateInEra . OneEraLedgerUpdate . Z . 
WrapLedgerUpdate -shiftEvent :: LedgerEvent (HardForkBlock xs) - -> LedgerEvent (HardForkBlock (x ': xs)) +shiftEvent :: + LedgerEvent (HardForkBlock xs) -> + LedgerEvent (HardForkBlock (x ': xs)) shiftEvent (LedgerWarning warning) = LedgerWarning $ shiftWarning warning -shiftEvent (LedgerUpdate update) = LedgerUpdate $ shiftUpdate update +shiftEvent (LedgerUpdate update) = LedgerUpdate $ shiftUpdate update shiftWarning :: HardForkLedgerWarning xs -> HardForkLedgerWarning (x ': xs) shiftWarning = go - where - go (HardForkWarningInEra (OneEraLedgerWarning warning)) = - HardForkWarningInEra - (OneEraLedgerWarning (S warning)) - go (HardForkWarningTransitionMismatch ix ps t) = - HardForkWarningTransitionMismatch - (eraIndexSucc ix) - ps - t - go (HardForkWarningTransitionInFinalEra ix t) = - HardForkWarningTransitionInFinalEra - (eraIndexSucc ix) - t - go (HardForkWarningTransitionUnconfirmed ix) = - HardForkWarningTransitionUnconfirmed - (eraIndexSucc ix) - go (HardForkWarningTransitionReconfirmed ix ix' t t') = - HardForkWarningTransitionReconfirmed - (eraIndexSucc ix) - (eraIndexSucc ix') - t - t' + where + go (HardForkWarningInEra (OneEraLedgerWarning warning)) = + HardForkWarningInEra + (OneEraLedgerWarning (S warning)) + go (HardForkWarningTransitionMismatch ix ps t) = + HardForkWarningTransitionMismatch + (eraIndexSucc ix) + ps + t + go (HardForkWarningTransitionInFinalEra ix t) = + HardForkWarningTransitionInFinalEra + (eraIndexSucc ix) + t + go (HardForkWarningTransitionUnconfirmed ix) = + HardForkWarningTransitionUnconfirmed + (eraIndexSucc ix) + go (HardForkWarningTransitionReconfirmed ix ix' t t') = + HardForkWarningTransitionReconfirmed + (eraIndexSucc ix) + (eraIndexSucc ix') + t + t' shiftUpdate :: HardForkLedgerUpdate xs -> HardForkLedgerUpdate (x ': xs) shiftUpdate = go - where - go :: HardForkLedgerUpdate xs -> HardForkLedgerUpdate (x ': xs) - go (HardForkUpdateInEra (OneEraLedgerUpdate update)) = - HardForkUpdateInEra - (OneEraLedgerUpdate (S 
update)) - go (HardForkUpdateTransitionConfirmed ix ix' t) = - HardForkUpdateTransitionConfirmed - (eraIndexSucc ix) - (eraIndexSucc ix') - t - go (HardForkUpdateTransitionDone ix ix' e) = - HardForkUpdateTransitionDone - (eraIndexSucc ix) - (eraIndexSucc ix') - e - go (HardForkUpdateTransitionRolledBack ix ix') = - HardForkUpdateTransitionRolledBack - (eraIndexSucc ix) - (eraIndexSucc ix') + where + go :: HardForkLedgerUpdate xs -> HardForkLedgerUpdate (x ': xs) + go (HardForkUpdateInEra (OneEraLedgerUpdate update)) = + HardForkUpdateInEra + (OneEraLedgerUpdate (S update)) + go (HardForkUpdateTransitionConfirmed ix ix' t) = + HardForkUpdateTransitionConfirmed + (eraIndexSucc ix) + (eraIndexSucc ix') + t + go (HardForkUpdateTransitionDone ix ix' e) = + HardForkUpdateTransitionDone + (eraIndexSucc ix) + (eraIndexSucc ix') + e + go (HardForkUpdateTransitionRolledBack ix ix') = + HardForkUpdateTransitionRolledBack + (eraIndexSucc ix) + (eraIndexSucc ix') {------------------------------------------------------------------------------- Auxiliary -------------------------------------------------------------------------------} -ledgerInfo :: forall blk mk. SingleEraBlock blk - => Current (FlipTickedLedgerState mk) blk -> LedgerEraInfo blk +ledgerInfo :: + forall blk mk. + SingleEraBlock blk => + Current (FlipTickedLedgerState mk) blk -> LedgerEraInfo blk ledgerInfo _ = LedgerEraInfo $ singleEraInfo (Proxy @blk) -ledgerViewInfo :: forall blk f. SingleEraBlock blk - => f blk -> LedgerEraInfo blk +ledgerViewInfo :: + forall blk f. + SingleEraBlock blk => + f blk -> LedgerEraInfo blk ledgerViewInfo _ = LedgerEraInfo $ singleEraInfo (Proxy @blk) injectLedgerError :: SListI xs => Index xs blk -> LedgerError blk -> HardForkLedgerError xs injectLedgerError index = - HardForkLedgerErrorFromEra + HardForkLedgerErrorFromEra . OneEraLedgerError . injectNS index . 
WrapLedgerErr -injectLedgerEvent :: SListI xs => Index xs blk -> AuxLedgerEvent (LedgerState blk) -> OneEraLedgerEvent xs +injectLedgerEvent :: + SListI xs => Index xs blk -> AuxLedgerEvent (LedgerState blk) -> OneEraLedgerEvent xs injectLedgerEvent index = - OneEraLedgerEvent + OneEraLedgerEvent . injectNS index . WrapLedgerEvent @@ -819,149 +881,167 @@ injectLedgerEvent index = -- | Warning: 'projectLedgerTables' and 'withLedgerTables' are prohibitively -- expensive when using big tables or when used multiple times. See the 'TxOut' -- instance for the 'HardForkBlock' for more information. -instance ( CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - ) => HasLedgerTables (LedgerState (HardForkBlock xs)) where +instance + ( CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + ) => + HasLedgerTables (LedgerState (HardForkBlock xs)) + where projectLedgerTables :: - forall mk. (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) - => LedgerState (HardForkBlock xs) mk - -> LedgerTables (LedgerState (HardForkBlock xs)) mk - projectLedgerTables (HardForkLedgerState st) = hcollapse $ + forall mk. + (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) => + LedgerState (HardForkBlock xs) mk -> + LedgerTables (LedgerState (HardForkBlock xs)) mk + projectLedgerTables (HardForkLedgerState st) = + hcollapse $ hcimap (Proxy @(Compose HasLedgerTables LedgerState)) projectOne st - where - projectOne :: - Compose HasLedgerTables LedgerState x - => Index xs x - -> Flip LedgerState mk x - -> K (LedgerTables (LedgerState (HardForkBlock xs)) mk) x - projectOne i l = - K - $ injectLedgerTables i - $ projectLedgerTables - $ unFlip l + where + projectOne :: + Compose HasLedgerTables LedgerState x => + Index xs x -> + Flip LedgerState mk x -> + K (LedgerTables (LedgerState (HardForkBlock xs)) mk) x + projectOne i l = + K $ + injectLedgerTables i $ + projectLedgerTables $ + unFlip l withLedgerTables :: - forall mk any. 
(CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) - => LedgerState (HardForkBlock xs) any - -> LedgerTables (LedgerState (HardForkBlock xs)) mk - -> LedgerState (HardForkBlock xs) mk - withLedgerTables (HardForkLedgerState st) tables = HardForkLedgerState $ + forall mk any. + (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) => + LedgerState (HardForkBlock xs) any -> + LedgerTables (LedgerState (HardForkBlock xs)) mk -> + LedgerState (HardForkBlock xs) mk + withLedgerTables (HardForkLedgerState st) tables = + HardForkLedgerState $ hcimap (Proxy @(Compose HasLedgerTables LedgerState)) withLedgerTablesOne st - where - withLedgerTablesOne :: - Compose HasLedgerTables LedgerState x - => Index xs x - -> Flip LedgerState any x - -> Flip LedgerState mk x - withLedgerTablesOne i l = - Flip - $ withLedgerTables (unFlip l) - $ ejectLedgerTables i tables - -instance ( CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - ) => HasLedgerTables (Ticked (LedgerState (HardForkBlock xs))) where + where + withLedgerTablesOne :: + Compose HasLedgerTables LedgerState x => + Index xs x -> + Flip LedgerState any x -> + Flip LedgerState mk x + withLedgerTablesOne i l = + Flip $ + withLedgerTables (unFlip l) $ + ejectLedgerTables i tables + +instance + ( CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + ) => + HasLedgerTables (Ticked (LedgerState (HardForkBlock xs))) + where projectLedgerTables :: - forall mk. (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) - => Ticked (LedgerState (HardForkBlock xs)) mk - -> LedgerTables (Ticked (LedgerState (HardForkBlock xs))) mk - projectLedgerTables st = hcollapse $ + forall mk. 
+ (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) => + Ticked (LedgerState (HardForkBlock xs)) mk -> + LedgerTables (Ticked (LedgerState (HardForkBlock xs))) mk + projectLedgerTables st = + hcollapse $ hcimap (Proxy @(Compose HasTickedLedgerTables LedgerState)) projectOne (tickedHardForkLedgerStatePerEra st) - where - projectOne :: - Compose HasTickedLedgerTables LedgerState x - => Index xs x - -> FlipTickedLedgerState mk x - -> K (LedgerTables (Ticked (LedgerState (HardForkBlock xs))) mk) x - projectOne i l = - K - $ castLedgerTables - $ injectLedgerTables i - $ castLedgerTables - $ projectLedgerTables - $ getFlipTickedLedgerState l + where + projectOne :: + Compose HasTickedLedgerTables LedgerState x => + Index xs x -> + FlipTickedLedgerState mk x -> + K (LedgerTables (Ticked (LedgerState (HardForkBlock xs))) mk) x + projectOne i l = + K $ + castLedgerTables $ + injectLedgerTables i $ + castLedgerTables $ + projectLedgerTables $ + getFlipTickedLedgerState l withLedgerTables :: - forall mk any. (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) - => Ticked (LedgerState (HardForkBlock xs)) any - -> LedgerTables (Ticked (LedgerState (HardForkBlock xs))) mk - -> Ticked (LedgerState (HardForkBlock xs)) mk - withLedgerTables st tables = st { - tickedHardForkLedgerStatePerEra = + forall mk any. 
+ (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) => + Ticked (LedgerState (HardForkBlock xs)) any -> + LedgerTables (Ticked (LedgerState (HardForkBlock xs))) mk -> + Ticked (LedgerState (HardForkBlock xs)) mk + withLedgerTables st tables = + st + { tickedHardForkLedgerStatePerEra = hcimap (Proxy @(Compose HasTickedLedgerTables LedgerState)) withLedgerTablesOne (tickedHardForkLedgerStatePerEra st) } - where - withLedgerTablesOne :: - Compose HasTickedLedgerTables LedgerState x - => Index xs x - -> FlipTickedLedgerState any x - -> FlipTickedLedgerState mk x - withLedgerTablesOne i l = - FlipTickedLedgerState - $ withLedgerTables (getFlipTickedLedgerState l) - $ castLedgerTables - $ ejectLedgerTables i (castLedgerTables tables) - -instance All (Compose CanStowLedgerTables LedgerState) xs - => CanStowLedgerTables (LedgerState (HardForkBlock xs)) where + where + withLedgerTablesOne :: + Compose HasTickedLedgerTables LedgerState x => + Index xs x -> + FlipTickedLedgerState any x -> + FlipTickedLedgerState mk x + withLedgerTablesOne i l = + FlipTickedLedgerState $ + withLedgerTables (getFlipTickedLedgerState l) $ + castLedgerTables $ + ejectLedgerTables i (castLedgerTables tables) + +instance + All (Compose CanStowLedgerTables LedgerState) xs => + CanStowLedgerTables (LedgerState (HardForkBlock xs)) + where stowLedgerTables :: - LedgerState (HardForkBlock xs) ValuesMK - -> LedgerState (HardForkBlock xs) EmptyMK - stowLedgerTables (HardForkLedgerState st) = HardForkLedgerState $ + LedgerState (HardForkBlock xs) ValuesMK -> + LedgerState (HardForkBlock xs) EmptyMK + stowLedgerTables (HardForkLedgerState st) = + HardForkLedgerState $ hcmap (Proxy @(Compose CanStowLedgerTables LedgerState)) stowOne st - where - stowOne :: - Compose CanStowLedgerTables LedgerState x - => Flip LedgerState ValuesMK x - -> Flip LedgerState EmptyMK x - stowOne = Flip . stowLedgerTables . 
unFlip + where + stowOne :: + Compose CanStowLedgerTables LedgerState x => + Flip LedgerState ValuesMK x -> + Flip LedgerState EmptyMK x + stowOne = Flip . stowLedgerTables . unFlip unstowLedgerTables :: - LedgerState (HardForkBlock xs) EmptyMK - -> LedgerState (HardForkBlock xs) ValuesMK - unstowLedgerTables (HardForkLedgerState st) = HardForkLedgerState $ + LedgerState (HardForkBlock xs) EmptyMK -> + LedgerState (HardForkBlock xs) ValuesMK + unstowLedgerTables (HardForkLedgerState st) = + HardForkLedgerState $ hcmap (Proxy @(Compose CanStowLedgerTables LedgerState)) unstowOne st - where - unstowOne :: - Compose CanStowLedgerTables LedgerState x - => Flip LedgerState EmptyMK x - -> Flip LedgerState ValuesMK x - unstowOne = Flip . unstowLedgerTables . unFlip + where + unstowOne :: + Compose CanStowLedgerTables LedgerState x => + Flip LedgerState EmptyMK x -> + Flip LedgerState ValuesMK x + unstowOne = Flip . unstowLedgerTables . unFlip injectLedgerTables :: - forall xs x mk. ( - CanMapKeysMK mk - , CanMapMK mk - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - ) - => Index xs x - -> LedgerTables (LedgerState x ) mk - -> LedgerTables (LedgerState (HardForkBlock xs)) mk + forall xs x mk. + ( CanMapKeysMK mk + , CanMapMK mk + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + ) => + Index xs x -> + LedgerTables (LedgerState x) mk -> + LedgerTables (LedgerState (HardForkBlock xs)) mk injectLedgerTables idx = - bimapLedgerTables (injectCanonicalTxIn idx) (injectHardForkTxOut idx) + bimapLedgerTables (injectCanonicalTxIn idx) (injectHardForkTxOut idx) ejectLedgerTables :: - forall xs x mk. ( - CanMapKeysMK mk - , Ord (TxIn (LedgerState x)) - , HasCanonicalTxIn xs - , CanMapMK mk - , HasHardForkTxOut xs - ) - => Index xs x - -> LedgerTables (LedgerState (HardForkBlock xs)) mk - -> LedgerTables (LedgerState x ) mk + forall xs x mk. 
+ ( CanMapKeysMK mk + , Ord (TxIn (LedgerState x)) + , HasCanonicalTxIn xs + , CanMapMK mk + , HasHardForkTxOut xs + ) => + Index xs x -> + LedgerTables (LedgerState (HardForkBlock xs)) mk -> + LedgerTables (LedgerState x) mk ejectLedgerTables idx = - bimapLedgerTables (ejectCanonicalTxIn idx) (ejectHardForkTxOut idx) + bimapLedgerTables (ejectCanonicalTxIn idx) (ejectHardForkTxOut idx) {------------------------------------------------------------------------------- HardForkTxIn @@ -969,7 +1049,7 @@ ejectLedgerTables idx = -- | Must be the 'CannonicalTxIn' type, but this will probably change in the -- future to @NS 'WrapTxIn' xs@. See 'HasCanonicalTxIn'. -type instance TxIn (LedgerState (HardForkBlock xs)) = CanonicalTxIn xs +type instance TxIn (LedgerState (HardForkBlock xs)) = CanonicalTxIn xs -- | Canonical TxIn -- @@ -980,24 +1060,27 @@ type instance TxIn (LedgerState (HardForkBlock xs)) = CanonicalTxIn xs -- has only one associated 'TxIn' type as a stop-gap, but Ledger will provide a -- serialization function into something more efficient. type HasCanonicalTxIn :: [Type] -> Constraint -class ( Show (CanonicalTxIn xs) - , Ord (CanonicalTxIn xs) - , NoThunks (CanonicalTxIn xs) - , MemPack (CanonicalTxIn xs) - ) => HasCanonicalTxIn xs where - data family CanonicalTxIn (xs :: [Type]) :: Type +class + ( Show (CanonicalTxIn xs) + , Ord (CanonicalTxIn xs) + , NoThunks (CanonicalTxIn xs) + , MemPack (CanonicalTxIn xs) + ) => + HasCanonicalTxIn xs + where + data CanonicalTxIn (xs :: [Type]) :: Type -- | Inject an era-specific 'TxIn' into a 'TxIn' for a 'HardForkBlock'. injectCanonicalTxIn :: - Index xs x - -> TxIn (LedgerState x) - -> CanonicalTxIn xs + Index xs x -> + TxIn (LedgerState x) -> + CanonicalTxIn xs -- | Distribute a 'TxIn' for a 'HardForkBlock' to an era-specific 'TxIn'. 
ejectCanonicalTxIn :: - Index xs x - -> CanonicalTxIn xs - -> TxIn (LedgerState x) + Index xs x -> + CanonicalTxIn xs -> + TxIn (LedgerState x) {------------------------------------------------------------------------------- HardForkTxOut @@ -1081,17 +1164,20 @@ type instance TxOut (LedgerState (HardForkBlock xs)) = HardForkTxOut xs -- >>> :} type DefaultHardForkTxOut xs = NS WrapTxOut xs -class ( Show (HardForkTxOut xs) - , Eq (HardForkTxOut xs) - , NoThunks (HardForkTxOut xs) - , IndexedMemPack (LedgerState (HardForkBlock xs) EmptyMK) (HardForkTxOut xs) - , SerializeTablesWithHint (LedgerState (HardForkBlock xs)) - ) => HasHardForkTxOut xs where +class + ( Show (HardForkTxOut xs) + , Eq (HardForkTxOut xs) + , NoThunks (HardForkTxOut xs) + , IndexedMemPack (LedgerState (HardForkBlock xs) EmptyMK) (HardForkTxOut xs) + , SerializeTablesWithHint (LedgerState (HardForkBlock xs)) + ) => + HasHardForkTxOut xs + where type HardForkTxOut xs :: Type type HardForkTxOut xs = DefaultHardForkTxOut xs injectHardForkTxOut :: Index xs x -> TxOut (LedgerState x) -> HardForkTxOut xs - ejectHardForkTxOut :: Index xs x -> HardForkTxOut xs -> TxOut (LedgerState x) + ejectHardForkTxOut :: Index xs x -> HardForkTxOut xs -> TxOut (LedgerState x) -- | This method is a null-arity method in a typeclass to make it a CAF, such -- that we only compute it once, then it is cached for the duration of the @@ -1100,7 +1186,7 @@ class ( Show (HardForkTxOut xs) -- -- This particular method is useful when our HardForkBlock uses -- DefaultHardForkTxOut, so that we can implement inject and project. 
- txOutEjections :: NP (K (NS WrapTxOut xs) -.-> WrapTxOut) xs + txOutEjections :: NP (K (NS WrapTxOut xs) -.-> WrapTxOut) xs default txOutEjections :: CanHardFork xs => NP (K (NS WrapTxOut xs) -.-> WrapTxOut) xs txOutEjections = composeTxOutTranslations $ ipTranslateTxOut hardForkEraTranslation @@ -1111,106 +1197,114 @@ class ( Show (HardForkTxOut xs) txOutTranslations :: Tails (InPairs.Fn2 WrapTxOut) xs default txOutTranslations :: CanHardFork xs => Tails (InPairs.Fn2 WrapTxOut) xs txOutTranslations = - Tails.inPairsToTails - $ InPairs.hmap - (\translator -> InPairs.Fn2 $ WrapTxOut . translateTxOutWith translator . unwrapTxOut) - (translateLedgerTables (hardForkEraTranslation @xs)) - -instance (CanHardFork xs, HasHardForkTxOut xs) - => CanUpgradeLedgerTables (LedgerState (HardForkBlock xs)) where + Tails.inPairsToTails $ + InPairs.hmap + (\translator -> InPairs.Fn2 $ WrapTxOut . translateTxOutWith translator . unwrapTxOut) + (translateLedgerTables (hardForkEraTranslation @xs)) + +instance + (CanHardFork xs, HasHardForkTxOut xs) => + CanUpgradeLedgerTables (LedgerState (HardForkBlock xs)) + where upgradeTables (HardForkLedgerState (HardForkState hs0)) (HardForkLedgerState (HardForkState hs1)) orig@(LedgerTables (ValuesMK vs)) = if isJust $ Match.telescopesMismatch hs0 hs1 - then LedgerTables $ ValuesMK $ extendTables (hmap (const (K ())) t1) vs - else orig - where - t1 = Telescope.tip hs1 + then LedgerTables $ ValuesMK $ extendTables (hmap (const (K ())) t1) vs + else orig + where + t1 = Telescope.tip hs1 extendTables :: - forall xs. - (CanHardFork xs, HasHardForkTxOut xs) - => NS (K ()) xs - -> Map.Map - (TxIn (LedgerState (HardForkBlock xs))) - (TxOut (LedgerState (HardForkBlock xs))) - -> Map.Map - (TxIn (LedgerState (HardForkBlock xs))) - (TxOut (LedgerState (HardForkBlock xs))) + forall xs. 
+ (CanHardFork xs, HasHardForkTxOut xs) => + NS (K ()) xs -> + Map.Map + (TxIn (LedgerState (HardForkBlock xs))) + (TxOut (LedgerState (HardForkBlock xs))) -> + Map.Map + (TxIn (LedgerState (HardForkBlock xs))) + (TxOut (LedgerState (HardForkBlock xs))) extendTables st = - Map.map - (\txout -> - hcollapse - $ hcimap + Map.map + ( \txout -> + hcollapse $ + hcimap proxySingle - (\idxTarget (K ()) -> - K - . injectHardForkTxOut idxTarget - . ejectHardForkTxOut idxTarget - $ txout) + ( \idxTarget (K ()) -> + K + . injectHardForkTxOut idxTarget + . ejectHardForkTxOut idxTarget + $ txout + ) st - ) - + ) injectHardForkTxOutDefault :: - SListI xs - => Index xs x - -> TxOut (LedgerState x) - -> DefaultHardForkTxOut xs + SListI xs => + Index xs x -> + TxOut (LedgerState x) -> + DefaultHardForkTxOut xs injectHardForkTxOutDefault idx = injectNS idx . WrapTxOut ejectHardForkTxOutDefault :: - SListI xs - => HasHardForkTxOut xs - => Index xs x - -> DefaultHardForkTxOut xs - -> TxOut (LedgerState x) + SListI xs => + HasHardForkTxOut xs => + Index xs x -> + DefaultHardForkTxOut xs -> + TxOut (LedgerState x) ejectHardForkTxOutDefault idx = - unwrapTxOut - . apFn (projectNP idx txOutEjections) - . K + unwrapTxOut + . apFn (projectNP idx txOutEjections) + . K composeTxOutTranslations :: - SListI xs - => InPairs TranslateTxOut xs - -> NP (K (NS WrapTxOut xs) -.-> WrapTxOut) xs + SListI xs => + InPairs TranslateTxOut xs -> + NP (K (NS WrapTxOut xs) -.-> WrapTxOut) xs composeTxOutTranslations = \case - PNil -> - fn (unZ . unK) :* Nil - PCons (TranslateTxOut t) ts -> - fn ( eitherNS - id - (error "composeTranslations: anachrony") - . unK - ) + PNil -> + fn (unZ . unK) :* Nil + PCons (TranslateTxOut t) ts -> + fn + ( eitherNS + id + (error "composeTranslations: anachrony") + . unK + ) :* hmap - (\innerf -> fn $ + ( \innerf -> + fn $ apFn innerf - . K - . eitherNS - (Z . WrapTxOut . t . unwrapTxOut) - id - . 
unK) - (composeTxOutTranslations ts) - where - eitherNS :: (f x -> c) -> (NS f xs -> c) -> NS f (x ': xs) -> c - eitherNS l r = \case - Z x -> l x - S x -> r x + . K + . eitherNS + (Z . WrapTxOut . t . unwrapTxOut) + id + . unK + ) + (composeTxOutTranslations ts) + where + eitherNS :: (f x -> c) -> (NS f xs -> c) -> NS f (x ': xs) -> c + eitherNS l r = \case + Z x -> l x + S x -> r x class MemPack (TxOut (LedgerState x)) => MemPackTxOut x instance MemPack (TxOut (LedgerState x)) => MemPackTxOut x -instance (All MemPackTxOut xs, Typeable xs) - => MemPack (DefaultHardForkTxOut xs) where +instance + (All MemPackTxOut xs, Typeable xs) => + MemPack (DefaultHardForkTxOut xs) + where packM = - hcollapse . hcimap - (Proxy @MemPackTxOut) - (\idx (WrapTxOut txout) -> K $ do - packM (toWord8 idx) - packM txout - ) + hcollapse + . hcimap + (Proxy @MemPackTxOut) + ( \idx (WrapTxOut txout) -> K $ do + packM (toWord8 idx) + packM txout + ) packedByteCount txout = 1 + hcollapse (hcmap (Proxy @MemPackTxOut) (K . packedByteCount . 
unwrapTxOut) txout) @@ -1219,6 +1313,6 @@ instance (All MemPackTxOut xs, Typeable xs) idx <- unpackM hsequence' $ hcmap - (Proxy @MemPackTxOut) - (const $ Comp $ WrapTxOut <$> unpackM) - $ fromMaybe (error "Unknown tag") (nsFromIndex idx) + (Proxy @MemPackTxOut) + (const $ Comp $ WrapTxOut <$> unpackM) + $ fromMaybe (error "Unknown tag") (nsFromIndex idx) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/CommonProtocolParams.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/CommonProtocolParams.hs index aa385e19eb..ff542343f9 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/CommonProtocolParams.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/CommonProtocolParams.hs @@ -1,34 +1,39 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE RankNTypes #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.HardFork.Combinator.Ledger.CommonProtocolParams () where -import Data.SOP.BasicFunctors -import Data.SOP.Functors -import Data.SOP.Strict -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Ledger - (HasCanonicalTxIn, HasHardForkTxOut (..)) -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Data.SOP.BasicFunctors +import Data.SOP.Functors +import Data.SOP.Strict +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Ledger + ( HasCanonicalTxIn + , HasHardForkTxOut (..) 
+ ) +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.Ledger.CommonProtocolParams -instance ( CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - ) => CommonProtocolParams (HardForkBlock xs) where +instance + ( CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + ) => + CommonProtocolParams (HardForkBlock xs) + where maxHeaderSize = askCurrentLedger maxHeaderSize - maxTxSize = askCurrentLedger maxTxSize + maxTxSize = askCurrentLedger maxTxSize askCurrentLedger :: - CanHardFork xs - => (forall blk. CommonProtocolParams blk => LedgerState blk mk -> a) - -> LedgerState (HardForkBlock xs) mk -> a + CanHardFork xs => + (forall blk. CommonProtocolParams blk => LedgerState blk mk -> a) -> + LedgerState (HardForkBlock xs) mk -> + a askCurrentLedger f = - hcollapse + hcollapse . hcmap proxySingle (K . f . unFlip) . State.tip . hardForkLedgerStatePerEra diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/PeerSelection.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/PeerSelection.hs index e21c909bff..15ba1166eb 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/PeerSelection.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/PeerSelection.hs @@ -2,18 +2,18 @@ module Ouroboros.Consensus.HardFork.Combinator.Ledger.PeerSelection () where -import Data.SOP.BasicFunctors -import Data.SOP.Functors -import Data.SOP.Strict -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Ledger () -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.Ledger.SupportsPeerSelection +import Data.SOP.BasicFunctors +import Data.SOP.Functors +import Data.SOP.Strict +import 
Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Ledger () +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.Ledger.SupportsPeerSelection instance CanHardFork xs => LedgerSupportsPeerSelection (HardForkBlock xs) where getPeers = - hcollapse + hcollapse . hcmap proxySingle (K . getPeers . unFlip) . State.tip . hardForkLedgerStatePerEra diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/Query.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/Query.hs index dfe37d5fb2..b52ef80d02 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/Query.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Ledger/Query.hs @@ -16,11 +16,10 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.Ledger.Query ( - BlockQuery (..) +module Ouroboros.Consensus.HardFork.Combinator.Ledger.Query + ( BlockQuery (..) , BlockSupportsHFLedgerQuery (..) , HardForkNodeToClientVersion (..) 
, HardForkQueryResult @@ -35,118 +34,127 @@ module Ouroboros.Consensus.HardFork.Combinator.Ledger.Query ( , hardForkQueryInfo ) where -import Cardano.Binary (enforceSize) -import Codec.CBOR.Decoding (Decoder) -import qualified Codec.CBOR.Decoding as Dec -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.CBOR.Encoding as Enc -import Codec.Serialise (Serialise (..)) -import Data.Bifunctor -import Data.Functor.Product -import Data.Kind (Type) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Counting (getExactly) -import Data.SOP.Functors (Flip (..)) -import Data.SOP.Index -import Data.SOP.Match (Mismatch (..), mustMatchNS) -import Data.SOP.Strict -import Data.Type.Equality -import Data.Typeable (Typeable) -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Abstract (hardForkSummary) -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.HardFork.Combinator.Info -import Ouroboros.Consensus.HardFork.Combinator.Ledger () -import Ouroboros.Consensus.HardFork.Combinator.NetworkVersion -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.Combinator.State (Current (..), - Past (..), Situated (..)) -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.HardFork.History (Bound (..), EraParams, - Shape (..)) -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Node.Serialisation (Some (..)) -import Ouroboros.Consensus.Storage.LedgerDB -import 
Ouroboros.Consensus.TypeFamilyWrappers (WrapChainDepState (..), - WrapTxOut) -import Ouroboros.Consensus.Util (ShowProxy) -import Ouroboros.Consensus.Util.IOLike (MonadSTM (atomically)) +import Cardano.Binary (enforceSize) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Decoding qualified as Dec +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Encoding qualified as Enc +import Codec.Serialise (Serialise (..)) +import Data.Bifunctor +import Data.Functor.Product +import Data.Kind (Type) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Counting (getExactly) +import Data.SOP.Functors (Flip (..)) +import Data.SOP.Index +import Data.SOP.Match (Mismatch (..), mustMatchNS) +import Data.SOP.Strict +import Data.Type.Equality +import Data.Typeable (Typeable) +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Abstract (hardForkSummary) +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.HardFork.Combinator.Info +import Ouroboros.Consensus.HardFork.Combinator.Ledger () +import Ouroboros.Consensus.HardFork.Combinator.NetworkVersion +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.Combinator.State + ( Current (..) + , Past (..) + , Situated (..) + ) +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.HardFork.History + ( Bound (..) + , EraParams + , Shape (..) 
+ ) +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Node.Serialisation (Some (..)) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.TypeFamilyWrappers + ( WrapChainDepState (..) + , WrapTxOut + ) +import Ouroboros.Consensus.Util (ShowProxy) +import Ouroboros.Consensus.Util.IOLike (MonadSTM (atomically)) type HardForkQueryResult xs = Either (MismatchEraInfo xs) data instance BlockQuery (HardForkBlock xs) footprint result where -- | Answer a query about an era if it is the current one. QueryIfCurrent :: - QueryIfCurrent xs footprint result - -> BlockQuery (HardForkBlock xs) footprint (HardForkQueryResult xs result) - + QueryIfCurrent xs footprint result -> + BlockQuery (HardForkBlock xs) footprint (HardForkQueryResult xs result) -- | Answer a query about an era from /any/ era. -- -- NOTE: we don't allow this when there is only a single era, so that the -- HFC applied to a single era is still isomorphic to the single era. QueryAnytime :: - IsNonEmpty xs - => QueryAnytime result - -> EraIndex (x ': xs) - -> BlockQuery (HardForkBlock (x ': xs)) QFNoTables result - + IsNonEmpty xs => + QueryAnytime result -> + EraIndex (x ': xs) -> + BlockQuery (HardForkBlock (x ': xs)) QFNoTables result -- | Answer a query about the hard fork combinator -- -- NOTE: we don't allow this when there is only a single era, so that the -- HFC applied to a single era is still isomorphic to the single era. 
QueryHardFork :: - IsNonEmpty xs - => QueryHardFork (x ': xs) result - -> BlockQuery (HardForkBlock (x ': xs)) QFNoTables result + IsNonEmpty xs => + QueryHardFork (x ': xs) result -> + BlockQuery (HardForkBlock (x ': xs)) QFNoTables result -- | Queries that use ledger tables usually can be implemented faster if we work -- with the hard fork tables rather than projecting everything to the -- appropriate era before we process the query. This class should be used to -- implement how these queries that have a footprint which is not @QFNoTables@ -- are answered. -class ( All (Compose NoThunks WrapTxOut) xs - , All (Compose Show WrapTxOut) xs - , All (Compose Eq WrapTxOut) xs - , All (Compose HasTickedLedgerTables LedgerState) xs - , All (Compose HasLedgerTables LedgerState) xs - ) => BlockSupportsHFLedgerQuery xs where +class + ( All (Compose NoThunks WrapTxOut) xs + , All (Compose Show WrapTxOut) xs + , All (Compose Eq WrapTxOut) xs + , All (Compose HasTickedLedgerTables LedgerState) xs + , All (Compose HasLedgerTables LedgerState) xs + ) => + BlockSupportsHFLedgerQuery xs + where answerBlockQueryHFLookup :: - All SingleEraBlock xs - => Monad m - => Index xs x - -> ExtLedgerCfg x - -> BlockQuery x QFLookupTables result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m result + All SingleEraBlock xs => + Monad m => + Index xs x -> + ExtLedgerCfg x -> + BlockQuery x QFLookupTables result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m result answerBlockQueryHFTraverse :: - All SingleEraBlock xs - => Monad m - => Index xs x - -> ExtLedgerCfg x - -> BlockQuery x QFTraverseTables result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m result + All SingleEraBlock xs => + Monad m => + Index xs x -> + ExtLedgerCfg x -> + BlockQuery x QFTraverseTables result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m result -- | The @QFTraverseTables@ queries consist of some filter on the @TxOut@. 
This class -- provides that filter so that @answerBlockQueryHFAll@ can be implemented -- in an abstract manner depending on this function. queryLedgerGetTraversingFilter :: - Index xs x - -> BlockQuery x QFTraverseTables result - -> TxOut (LedgerState (HardForkBlock xs)) - -> Bool + Index xs x -> + BlockQuery x QFTraverseTables result -> + TxOut (LedgerState (HardForkBlock xs)) -> + Bool {------------------------------------------------------------------------------- Instances @@ -156,19 +164,22 @@ class ( All (Compose NoThunks WrapTxOut) xs -- Show ------ -instance Typeable xs => ShowProxy (BlockQuery (HardForkBlock xs)) where +instance Typeable xs => ShowProxy (BlockQuery (HardForkBlock xs)) + -- Use default implementation deriving instance All SingleEraBlock xs => Show (BlockQuery (HardForkBlock xs) footprint result) -instance All SingleEraBlock xs - => ShowQuery (BlockQuery (HardForkBlock xs) footprint) where - showResult (QueryAnytime qry _) result = showResult qry result - showResult (QueryHardFork qry) result = showResult qry result - showResult (QueryIfCurrent qry) mResult = - case mResult of - Left err -> show err - Right result -> showResult qry result +instance + All SingleEraBlock xs => + ShowQuery (BlockQuery (HardForkBlock xs) footprint) + where + showResult (QueryAnytime qry _) result = showResult qry result + showResult (QueryHardFork qry) result = showResult qry result + showResult (QueryIfCurrent qry) mResult = + case mResult of + Left err -> show err + Right result -> showResult qry result ------ -- Eq @@ -176,31 +187,33 @@ instance All SingleEraBlock xs instance All SingleEraBlock xs => SameDepIndex2 (BlockQuery (HardForkBlock xs)) where sameDepIndex2 (QueryIfCurrent qry) (QueryIfCurrent qry') = - (\Refl -> Refl) <$> sameDepIndex2 qry qry' - sameDepIndex2 (QueryIfCurrent {}) _ = - Nothing + (\Refl -> Refl) <$> sameDepIndex2 qry qry' + sameDepIndex2 (QueryIfCurrent{}) _ = + Nothing sameDepIndex2 (QueryAnytime qry era) (QueryAnytime qry' era') - | 
era == era' - = (\Refl -> Refl) <$> sameDepIndex qry qry' - | otherwise - = Nothing - sameDepIndex2(QueryAnytime {}) _ = - Nothing + | era == era' = + (\Refl -> Refl) <$> sameDepIndex qry qry' + | otherwise = + Nothing + sameDepIndex2 (QueryAnytime{}) _ = + Nothing sameDepIndex2 (QueryHardFork qry) (QueryHardFork qry') = - (\Refl -> Refl) <$> sameDepIndex qry qry' - sameDepIndex2 (QueryHardFork {}) _ = - Nothing + (\Refl -> Refl) <$> sameDepIndex qry qry' + sameDepIndex2 (QueryHardFork{}) _ = + Nothing {------------------------------------------------------------------------------- Query Ledger -------------------------------------------------------------------------------} -instance ( All SingleEraBlock xs - , BlockSupportsHFLedgerQuery xs - , All BlockSupportsLedgerQuery xs - , CanHardFork xs - ) - => BlockSupportsLedgerQuery (HardForkBlock xs) where +instance + ( All SingleEraBlock xs + , BlockSupportsHFLedgerQuery xs + , All BlockSupportsLedgerQuery xs + , CanHardFork xs + ) => + BlockSupportsLedgerQuery (HardForkBlock xs) + where answerPureBlockQuery (ExtLedgerCfg cfg) query @@ -222,104 +235,112 @@ instance ( All SingleEraBlock xs lcfg queryHardFork st - where + where cfgs = hmap ExtLedgerCfg $ distribTopLevelConfig ei cfg lcfg = configLedger cfg - ei = State.epochInfoLedger lcfg hardForkState + ei = State.epochInfoLedger lcfg hardForkState - answerBlockQueryLookup cfg (QueryIfCurrent q) = - answerBlockQueryHelper interpretQueryIfCurrentLookup cfg q + answerBlockQueryLookup cfg (QueryIfCurrent q) = + answerBlockQueryHelper interpretQueryIfCurrentLookup cfg q answerBlockQueryTraverse cfg (QueryIfCurrent q) = - answerBlockQueryHelper interpretQueryIfCurrentTraverse cfg q + answerBlockQueryHelper interpretQueryIfCurrentTraverse cfg q blockQueryIsSupportedOnVersion q (HardForkNodeToClientDisabled x) = case q of QueryIfCurrent (QZ q') -> blockQueryIsSupportedOnVersion q' x - QueryIfCurrent{} -> False - QueryAnytime{} -> False - QueryHardFork {} -> False + 
QueryIfCurrent{} -> False + QueryAnytime{} -> False + QueryHardFork{} -> False blockQueryIsSupportedOnVersion q (HardForkNodeToClientEnabled _hfv npversions) = case q of QueryIfCurrent qc -> go qc npversions - QueryAnytime{} -> True - QueryHardFork{} -> True + QueryAnytime{} -> True + QueryHardFork{} -> True where - go :: forall ys fp result. All BlockSupportsLedgerQuery ys => QueryIfCurrent ys fp result -> NP EraNodeToClientVersion ys -> Bool - go (QZ _) (EraNodeToClientDisabled :* _) = False - go (QZ x) (EraNodeToClientEnabled v :* _) = blockQueryIsSupportedOnVersion x v - go (QS x) (_ :* n) = go x n + go :: + forall ys fp result. + All BlockSupportsLedgerQuery ys => + QueryIfCurrent ys fp result -> NP EraNodeToClientVersion ys -> Bool + go (QZ _) (EraNodeToClientDisabled :* _) = False + go (QZ x) (EraNodeToClientEnabled v :* _) = blockQueryIsSupportedOnVersion x v + go (QS x) (_ :* n) = go x n -- | NOT EXPORTED, for footprints other than 'QFNoTables' answerBlockQueryHelper :: - (MonadSTM m, BlockSupportsHFLedgerQuery xs, CanHardFork xs) - => ( NP ExtLedgerCfg xs - -> QueryIfCurrent xs footprint result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m (HardForkQueryResult xs result) - ) - -> ExtLedgerCfg (HardForkBlock xs) - -> QueryIfCurrent xs footprint result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m (HardForkQueryResult xs result) + (MonadSTM m, BlockSupportsHFLedgerQuery xs, CanHardFork xs) => + ( NP ExtLedgerCfg xs -> + QueryIfCurrent xs footprint result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m (HardForkQueryResult xs result) + ) -> + ExtLedgerCfg (HardForkBlock xs) -> + QueryIfCurrent xs footprint result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m (HardForkQueryResult xs result) answerBlockQueryHelper f (ExtLedgerCfg cfg) qry forker = do - hardForkState <- hardForkLedgerStatePerEra . 
ledgerState <$> atomically (roforkerGetLedgerState forker) - let ei = State.epochInfoLedger lcfg hardForkState + hardForkState <- + hardForkLedgerStatePerEra . ledgerState <$> atomically (roforkerGetLedgerState forker) + let ei = State.epochInfoLedger lcfg hardForkState cfgs = hmap ExtLedgerCfg $ distribTopLevelConfig ei cfg f cfgs qry forker - where + where lcfg = configLedger cfg -- | Precondition: the 'ledgerState' and 'headerState' should be from the same -- era. In practice, this is _always_ the case, unless the 'ExtLedgerState' was -- manually crafted. distribExtLedgerState :: - All SingleEraBlock xs - => ExtLedgerState (HardForkBlock xs) mk -> NS (Flip ExtLedgerState mk) xs + All SingleEraBlock xs => + ExtLedgerState (HardForkBlock xs) mk -> NS (Flip ExtLedgerState mk) xs distribExtLedgerState (ExtLedgerState ledgerState headerState) = - hmap (\(Pair hst lst) -> Flip $ ExtLedgerState (unFlip lst) hst) $ - mustMatchNS - "HeaderState" - (distribHeaderState headerState) - (State.tip (hardForkLedgerStatePerEra ledgerState)) + hmap (\(Pair hst lst) -> Flip $ ExtLedgerState (unFlip lst) hst) $ + mustMatchNS + "HeaderState" + (distribHeaderState headerState) + (State.tip (hardForkLedgerStatePerEra ledgerState)) -- | Precondition: the 'headerStateTip' and 'headerStateChainDep' should be from -- the same era. In practice, this is _always_ the case, unless the -- 'HeaderState' was manually crafted. distribHeaderState :: - All SingleEraBlock xs - => HeaderState (HardForkBlock xs) -> NS HeaderState xs + All SingleEraBlock xs => + HeaderState (HardForkBlock xs) -> NS HeaderState xs distribHeaderState (HeaderState tip chainDepState) = - case tip of - Origin -> - hmap (HeaderState Origin . 
unwrapChainDepState) (State.tip chainDepState) - NotOrigin annTip -> - hmap - (\(Pair t cds) -> HeaderState (NotOrigin t) (unwrapChainDepState cds)) - (mustMatchNS "AnnTip" (distribAnnTip annTip) (State.tip chainDepState)) - -getHardForkQuery :: BlockQuery (HardForkBlock xs) footprint result - -> (forall result'. - result :~: HardForkQueryResult xs result' - -> QueryIfCurrent xs footprint result' - -> r) - -> (forall x' xs'. - xs :~: x' ': xs' - -> ProofNonEmpty xs' - -> QueryAnytime result - -> EraIndex xs - -> r) - -> (forall x' xs'. - xs :~: x' ': xs' - -> ProofNonEmpty xs' - -> QueryHardFork xs result - -> r) - -> r + case tip of + Origin -> + hmap (HeaderState Origin . unwrapChainDepState) (State.tip chainDepState) + NotOrigin annTip -> + hmap + (\(Pair t cds) -> HeaderState (NotOrigin t) (unwrapChainDepState cds)) + (mustMatchNS "AnnTip" (distribAnnTip annTip) (State.tip chainDepState)) + +getHardForkQuery :: + BlockQuery (HardForkBlock xs) footprint result -> + ( forall result'. + result :~: HardForkQueryResult xs result' -> + QueryIfCurrent xs footprint result' -> + r + ) -> + ( forall x' xs'. + xs :~: x' ': xs' -> + ProofNonEmpty xs' -> + QueryAnytime result -> + EraIndex xs -> + r + ) -> + ( forall x' xs'. 
+ xs :~: x' ': xs' -> + ProofNonEmpty xs' -> + QueryHardFork xs result -> + r + ) -> + r getHardForkQuery q k1 k2 k3 = case q of - QueryIfCurrent qry -> k1 Refl qry - QueryAnytime qry era -> k2 Refl (isNonEmpty Proxy) qry era - QueryHardFork qry -> k3 Refl (isNonEmpty Proxy) qry + QueryIfCurrent qry -> k1 Refl qry + QueryAnytime qry era -> k2 Refl (isNonEmpty Proxy) qry era + QueryHardFork qry -> k3 Refl (isNonEmpty Proxy) qry {------------------------------------------------------------------------------- Current era queries @@ -327,7 +348,7 @@ getHardForkQuery q k1 k2 k3 = case q of type QueryIfCurrent :: [Type] -> QueryFootprint -> Type -> Type data QueryIfCurrent xs footprint result where - QZ :: BlockQuery x footprint result -> QueryIfCurrent (x ': xs) footprint result + QZ :: BlockQuery x footprint result -> QueryIfCurrent (x ': xs) footprint result QS :: QueryIfCurrent xs footprint result -> QueryIfCurrent (x ': xs) footprint result deriving instance All SingleEraBlock xs => Show (QueryIfCurrent xs footprint result) @@ -339,75 +360,81 @@ instance All SingleEraBlock xs => ShowQuery (QueryIfCurrent xs footprint) where instance All SingleEraBlock xs => SameDepIndex2 (QueryIfCurrent xs) where sameDepIndex2 (QZ qry) (QZ qry') = sameDepIndex2 qry qry' sameDepIndex2 (QS qry) (QS qry') = sameDepIndex2 qry qry' - sameDepIndex2 _ _ = Nothing + sameDepIndex2 _ _ = Nothing interpretQueryIfCurrent :: - forall result xs. All SingleEraBlock xs - => NP ExtLedgerCfg xs - -> QueryIfCurrent xs QFNoTables result - -> NS (Flip ExtLedgerState EmptyMK) xs - -> HardForkQueryResult xs result + forall result xs. 
+ All SingleEraBlock xs => + NP ExtLedgerCfg xs -> + QueryIfCurrent xs QFNoTables result -> + NS (Flip ExtLedgerState EmptyMK) xs -> + HardForkQueryResult xs result interpretQueryIfCurrent = go - where - go :: All SingleEraBlock xs' - => NP ExtLedgerCfg xs' - -> QueryIfCurrent xs' QFNoTables result - -> NS (Flip ExtLedgerState EmptyMK) xs' - -> HardForkQueryResult xs' result - go (c :* _) (QZ qry) (Z (Flip st)) = - Right $ answerPureBlockQuery c qry st - go (_ :* cs) (QS qry) (S st) = - first shiftMismatch $ go cs qry st - go _ (QZ qry) (S st) = - Left $ MismatchEraInfo $ ML (queryInfo qry) (hcmap proxySingle (ledgerInfo . unFlip) st) - go _ (QS qry) (Z (Flip st)) = - Left $ MismatchEraInfo $ MR (hardForkQueryInfo qry) (ledgerInfo st) + where + go :: + All SingleEraBlock xs' => + NP ExtLedgerCfg xs' -> + QueryIfCurrent xs' QFNoTables result -> + NS (Flip ExtLedgerState EmptyMK) xs' -> + HardForkQueryResult xs' result + go (c :* _) (QZ qry) (Z (Flip st)) = + Right $ answerPureBlockQuery c qry st + go (_ :* cs) (QS qry) (S st) = + first shiftMismatch $ go cs qry st + go _ (QZ qry) (S st) = + Left $ MismatchEraInfo $ ML (queryInfo qry) (hcmap proxySingle (ledgerInfo . unFlip) st) + go _ (QS qry) (Z (Flip st)) = + Left $ MismatchEraInfo $ MR (hardForkQueryInfo qry) (ledgerInfo st) interpretQueryIfCurrentLookup :: - forall result xs m. (MonadSTM m, BlockSupportsHFLedgerQuery xs, CanHardFork xs) - => NP ExtLedgerCfg xs - -> QueryIfCurrent xs QFLookupTables result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m (HardForkQueryResult xs result) + forall result xs m. 
+ (MonadSTM m, BlockSupportsHFLedgerQuery xs, CanHardFork xs) => + NP ExtLedgerCfg xs -> + QueryIfCurrent xs QFLookupTables result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m (HardForkQueryResult xs result) interpretQueryIfCurrentLookup cfg q forker = do - st <- distribExtLedgerState <$> atomically (roforkerGetLedgerState forker) - go indices cfg q st - where - go :: All SingleEraBlock xs' - => NP (Index xs) xs' - -> NP ExtLedgerCfg xs' - -> QueryIfCurrent xs' QFLookupTables result - -> NS (Flip ExtLedgerState EmptyMK) xs' - -> m (HardForkQueryResult xs' result) - go (idx :* _) (c :* _) (QZ qry) _ = - Right <$> answerBlockQueryHFLookup idx c qry forker - go (_ :* idx) (_ :* cs) (QS qry) (S st) = - first shiftMismatch <$> go idx cs qry st - go _ _ (QS qry) (Z (Flip st)) = - pure $ Left $ MismatchEraInfo $ MR (hardForkQueryInfo qry) (ledgerInfo st) + st <- distribExtLedgerState <$> atomically (roforkerGetLedgerState forker) + go indices cfg q st + where + go :: + All SingleEraBlock xs' => + NP (Index xs) xs' -> + NP ExtLedgerCfg xs' -> + QueryIfCurrent xs' QFLookupTables result -> + NS (Flip ExtLedgerState EmptyMK) xs' -> + m (HardForkQueryResult xs' result) + go (idx :* _) (c :* _) (QZ qry) _ = + Right <$> answerBlockQueryHFLookup idx c qry forker + go (_ :* idx) (_ :* cs) (QS qry) (S st) = + first shiftMismatch <$> go idx cs qry st + go _ _ (QS qry) (Z (Flip st)) = + pure $ Left $ MismatchEraInfo $ MR (hardForkQueryInfo qry) (ledgerInfo st) interpretQueryIfCurrentTraverse :: - forall result xs m. (MonadSTM m, BlockSupportsHFLedgerQuery xs, CanHardFork xs) - => NP ExtLedgerCfg xs - -> QueryIfCurrent xs QFTraverseTables result - -> ReadOnlyForker' m (HardForkBlock xs) - -> m (HardForkQueryResult xs result) + forall result xs m. 
+ (MonadSTM m, BlockSupportsHFLedgerQuery xs, CanHardFork xs) => + NP ExtLedgerCfg xs -> + QueryIfCurrent xs QFTraverseTables result -> + ReadOnlyForker' m (HardForkBlock xs) -> + m (HardForkQueryResult xs result) interpretQueryIfCurrentTraverse cfg q forker = do - st <- distribExtLedgerState <$> atomically (roforkerGetLedgerState forker) - go indices cfg q st - where - go :: All SingleEraBlock xs' - => NP (Index xs) xs' - -> NP ExtLedgerCfg xs' - -> QueryIfCurrent xs' QFTraverseTables result - -> NS (Flip ExtLedgerState EmptyMK) xs' - -> m (HardForkQueryResult xs' result) - go (idx :* _) (c :* _) (QZ qry) _ = - Right <$> answerBlockQueryHFTraverse idx c qry forker - go (_ :* idx) (_ :* cs) (QS qry) (S st) = - first shiftMismatch <$> go idx cs qry st - go _ _ (QS qry) (Z (Flip st)) = - pure $ Left $ MismatchEraInfo $ MR (hardForkQueryInfo qry) (ledgerInfo st) + st <- distribExtLedgerState <$> atomically (roforkerGetLedgerState forker) + go indices cfg q st + where + go :: + All SingleEraBlock xs' => + NP (Index xs) xs' -> + NP ExtLedgerCfg xs' -> + QueryIfCurrent xs' QFTraverseTables result -> + NS (Flip ExtLedgerState EmptyMK) xs' -> + m (HardForkQueryResult xs' result) + go (idx :* _) (c :* _) (QZ qry) _ = + Right <$> answerBlockQueryHFTraverse idx c qry forker + go (_ :* idx) (_ :* cs) (QS qry) (S st) = + first shiftMismatch <$> go idx cs qry st + go _ _ (QS qry) (Z (Flip st)) = + pure $ Left $ MismatchEraInfo $ MR (hardForkQueryInfo qry) (ledgerInfo st) {------------------------------------------------------------------------------- Any era queries @@ -422,44 +449,46 @@ instance ShowQuery QueryAnytime where showResult GetEraStart = show instance SameDepIndex QueryAnytime where - sameDepIndex GetEraStart GetEraStart = Just Refl + sameDepIndex GetEraStart GetEraStart = Just Refl interpretQueryAnytime :: - forall result xs mk. 
All SingleEraBlock xs - => HardForkLedgerConfig xs - -> QueryAnytime result - -> EraIndex xs - -> State.HardForkState (Flip LedgerState mk) xs - -> result + forall result xs mk. + All SingleEraBlock xs => + HardForkLedgerConfig xs -> + QueryAnytime result -> + EraIndex xs -> + State.HardForkState (Flip LedgerState mk) xs -> + result interpretQueryAnytime cfg query (EraIndex era) st = - answerQueryAnytime cfg query (State.situate era st) + answerQueryAnytime cfg query (State.situate era st) answerQueryAnytime :: - All SingleEraBlock xs - => HardForkLedgerConfig xs - -> QueryAnytime result - -> Situated h (Flip LedgerState mk) xs - -> result + All SingleEraBlock xs => + HardForkLedgerConfig xs -> + QueryAnytime result -> + Situated h (Flip LedgerState mk) xs -> + result answerQueryAnytime HardForkLedgerConfig{..} = - go cfgs (getExactly (getShape hardForkLedgerConfigShape)) - where - cfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - - go :: All SingleEraBlock xs' - => NP WrapPartialLedgerConfig xs' - -> NP (K EraParams) xs' - -> QueryAnytime result - -> Situated h (Flip LedgerState mk) xs' - -> result - go Nil _ _ ctxt = case ctxt of {} - go (c :* cs) (K ps :* pss) GetEraStart ctxt = case ctxt of - SituatedShift ctxt' -> go cs pss GetEraStart ctxt' - SituatedFuture _ _ -> Nothing - SituatedPast past _ -> Just $ pastStart $ unK past - SituatedCurrent cur _ -> Just $ currentStart cur - SituatedNext cur _ -> - History.mkUpperBound ps (currentStart cur) <$> - singleEraTransition + go cfgs (getExactly (getShape hardForkLedgerConfigShape)) + where + cfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra + + go :: + All SingleEraBlock xs' => + NP WrapPartialLedgerConfig xs' -> + NP (K EraParams) xs' -> + QueryAnytime result -> + Situated h (Flip LedgerState mk) xs' -> + result + go Nil _ _ ctxt = case ctxt of {} + go (c :* cs) (K ps :* pss) GetEraStart ctxt = case ctxt of + SituatedShift ctxt' -> go cs pss GetEraStart ctxt' + SituatedFuture _ _ -> Nothing + 
SituatedPast past _ -> Just $ pastStart $ unK past + SituatedCurrent cur _ -> Just $ currentStart cur + SituatedNext cur _ -> + History.mkUpperBound ps (currentStart cur) + <$> singleEraTransition (unwrapPartialLedgerConfig c) ps (currentStart cur) @@ -471,44 +500,45 @@ answerQueryAnytime HardForkLedgerConfig{..} = data QueryHardFork xs result where GetInterpreter :: QueryHardFork xs (History.Interpreter xs) - GetCurrentEra :: QueryHardFork xs (EraIndex xs) + GetCurrentEra :: QueryHardFork xs (EraIndex xs) deriving instance Show (QueryHardFork xs result) instance All SingleEraBlock xs => ShowQuery (QueryHardFork xs) where showResult GetInterpreter = show - showResult GetCurrentEra = show + showResult GetCurrentEra = show instance SameDepIndex (QueryHardFork xs) where sameDepIndex GetInterpreter GetInterpreter = - Just Refl + Just Refl sameDepIndex GetInterpreter _ = - Nothing + Nothing sameDepIndex GetCurrentEra GetCurrentEra = - Just Refl + Just Refl sameDepIndex GetCurrentEra _ = - Nothing + Nothing interpretQueryHardFork :: - All SingleEraBlock xs - => HardForkLedgerConfig xs - -> QueryHardFork xs result - -> LedgerState (HardForkBlock xs) mk - -> result + All SingleEraBlock xs => + HardForkLedgerConfig xs -> + QueryHardFork xs result -> + LedgerState (HardForkBlock xs) mk -> + result interpretQueryHardFork cfg query st = - case query of - GetInterpreter -> - History.mkInterpreter $ hardForkSummary cfg st - GetCurrentEra -> - eraIndexFromNS $ State.tip $ hardForkLedgerStatePerEra st + case query of + GetInterpreter -> + History.mkInterpreter $ hardForkSummary cfg st + GetCurrentEra -> + eraIndexFromNS $ State.tip $ hardForkLedgerStatePerEra st {------------------------------------------------------------------------------- Serialisation -------------------------------------------------------------------------------} instance Serialise (Some QueryAnytime) where - encode (Some GetEraStart) = mconcat [ - Enc.encodeListLen 1 + encode (Some GetEraStart) = + mconcat + 
[ Enc.encodeListLen 1 , Enc.encodeWord8 0 ] @@ -526,40 +556,46 @@ decodeQueryAnytimeResult :: QueryAnytime result -> forall s. Decoder s result decodeQueryAnytimeResult GetEraStart = decode encodeQueryHardForkResult :: - SListI xs - => QueryHardFork xs result -> result -> Encoding + SListI xs => + QueryHardFork xs result -> result -> Encoding encodeQueryHardForkResult = \case - GetInterpreter -> encode - GetCurrentEra -> encode + GetInterpreter -> encode + GetCurrentEra -> encode decodeQueryHardForkResult :: - SListI xs - => QueryHardFork xs result -> forall s. Decoder s result + SListI xs => + QueryHardFork xs result -> forall s. Decoder s result decodeQueryHardForkResult = \case - GetInterpreter -> decode - GetCurrentEra -> decode + GetInterpreter -> decode + GetCurrentEra -> decode {------------------------------------------------------------------------------- Auxiliary -------------------------------------------------------------------------------} -ledgerInfo :: forall blk mk. SingleEraBlock blk - => ExtLedgerState blk mk - -> LedgerEraInfo blk +ledgerInfo :: + forall blk mk. + SingleEraBlock blk => + ExtLedgerState blk mk -> + LedgerEraInfo blk ledgerInfo _ = LedgerEraInfo $ singleEraInfo (Proxy @blk) -queryInfo :: forall blk query (footprint :: QueryFootprint) result. SingleEraBlock blk - => query blk footprint result -> SingleEraInfo blk +queryInfo :: + forall blk query (footprint :: QueryFootprint) result. 
+ SingleEraBlock blk => + query blk footprint result -> SingleEraInfo blk queryInfo _ = singleEraInfo (Proxy @blk) -hardForkQueryInfo :: All SingleEraBlock xs - => QueryIfCurrent xs footprint result -> NS SingleEraInfo xs +hardForkQueryInfo :: + All SingleEraBlock xs => + QueryIfCurrent xs footprint result -> NS SingleEraInfo xs hardForkQueryInfo = go - where - go :: All SingleEraBlock xs' - => QueryIfCurrent xs' footprint result -> NS SingleEraInfo xs' - go (QZ qry) = Z (queryInfo qry) - go (QS qry) = S (go qry) + where + go :: + All SingleEraBlock xs' => + QueryIfCurrent xs' footprint result -> NS SingleEraInfo xs' + go (QZ qry) = Z (queryInfo qry) + go (QS qry) = S (go qry) shiftMismatch :: MismatchEraInfo xs -> MismatchEraInfo (x ': xs) shiftMismatch = MismatchEraInfo . MS . getMismatchEraInfo diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Lifting.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Lifting.hs index f61f4856a8..9761750786 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Lifting.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Lifting.hs @@ -7,8 +7,8 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.HardFork.Combinator.Lifting ( - LiftMismatch (..) +module Ouroboros.Consensus.HardFork.Combinator.Lifting + ( LiftMismatch (..) , LiftNP (..) , LiftNS (..) , LiftNamedMismatch (..) @@ -19,36 +19,40 @@ module Ouroboros.Consensus.HardFork.Combinator.Lifting ( , LiftTelescope (..) 
) where -import Data.List (intercalate) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Dict (Dict (..), all_NP, unAll_NP) -import Data.SOP.Match (Mismatch) -import Data.SOP.OptNP (OptNP (..)) -import Data.SOP.Sing -import Data.SOP.Strict -import Data.SOP.Telescope (Telescope) -import Data.Typeable -import GHC.TypeLits -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Data.List (intercalate) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Dict (Dict (..), all_NP, unAll_NP) +import Data.SOP.Match (Mismatch) +import Data.SOP.OptNP (OptNP (..)) +import Data.SOP.Sing +import Data.SOP.Strict +import Data.SOP.Telescope (Telescope) +import Data.Typeable +import GHC.TypeLits +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.HardFork.Combinator.Abstract {------------------------------------------------------------------------------- Auxiliary -------------------------------------------------------------------------------} -proofAll :: SListI xs - => (forall x . Dict c x -> Dict d x) - -> Dict (All c) xs -> Dict (All d) xs +proofAll :: + SListI xs => + (forall x. Dict c x -> Dict d x) -> + Dict (All c) xs -> + Dict (All d) xs proofAll f dict = all_NP (hmap f (unAll_NP dict)) -proofLift :: (SingleEraBlock x => c (f x)) - => Dict SingleEraBlock x -> Dict (Compose c f) x +proofLift :: + (SingleEraBlock x => c (f x)) => + Dict SingleEraBlock x -> Dict (Compose c f) x proofLift Dict = Dict -liftEras :: (All SingleEraBlock xs, forall x. SingleEraBlock x => c (f x)) - => Proxy xs -> Proxy c -> Proxy f -> Dict (All (Compose c f)) xs +liftEras :: + (All SingleEraBlock xs, forall x. 
SingleEraBlock x => c (f x)) => + Proxy xs -> Proxy c -> Proxy f -> Dict (All (Compose c f)) xs liftEras _ _ _ = proofAll proofLift Dict {------------------------------------------------------------------------------- @@ -57,27 +61,34 @@ liftEras _ _ _ = proofAll proofLift Dict newtype LiftNS f xs = LiftNS (NS f xs) -instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Eq (f x)) - => Eq (LiftNS f xs) where +instance + (All SingleEraBlock xs, forall x. SingleEraBlock x => Eq (f x)) => + Eq (LiftNS f xs) + where LiftNS x == LiftNS y = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - x == y - } + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + x == y -instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Ord (f x)) - => Ord (LiftNS f xs) where +instance + (All SingleEraBlock xs, forall x. SingleEraBlock x => Ord (f x)) => + Ord (LiftNS f xs) + where LiftNS x `compare` LiftNS y = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @f) of { Dict -> - x `compare` y - }} - -instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Show (f x)) - => Show (LiftNS f xs) where + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @f) of + Dict -> + x `compare` y + +instance + (All SingleEraBlock xs, forall x. SingleEraBlock x => Show (f x)) => + Show (LiftNS f xs) + where show (LiftNS x) = - case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of { Dict -> - show x - } + case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of + Dict -> + show x {------------------------------------------------------------------------------- LiftNP @@ -85,27 +96,34 @@ instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Show (f x)) newtype LiftNP f xs = LiftNP (NP f xs) -instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Eq (f x)) - => Eq (LiftNP f xs) where +instance + (All SingleEraBlock xs, forall x. 
SingleEraBlock x => Eq (f x)) => + Eq (LiftNP f xs) + where LiftNP x == LiftNP y = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - x == y - } + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + x == y -instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Ord (f x)) - => Ord (LiftNP f xs) where +instance + (All SingleEraBlock xs, forall x. SingleEraBlock x => Ord (f x)) => + Ord (LiftNP f xs) + where LiftNP x `compare` LiftNP y = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @f) of { Dict -> - x `compare` y - }} - -instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Show (f x)) - => Show (LiftNP f xs) where + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @f) of + Dict -> + x `compare` y + +instance + (All SingleEraBlock xs, forall x. SingleEraBlock x => Show (f x)) => + Show (LiftNP f xs) + where show (LiftNP x) = - case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of { Dict -> - show x - } + case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of + Dict -> + show x {------------------------------------------------------------------------------- LiftOptNP @@ -113,19 +131,23 @@ instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Show (f x)) newtype LiftOptNP empty f xs = LiftOptNP (OptNP empty f xs) -instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Eq (f x)) - => Eq (LiftOptNP empty f xs) where +instance + (All SingleEraBlock xs, forall x. SingleEraBlock x => Eq (f x)) => + Eq (LiftOptNP empty f xs) + where LiftOptNP x == LiftOptNP y = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - x == y - } + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + x == y -instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Show (f x)) - => Show (LiftOptNP empty f xs) where +instance + (All SingleEraBlock xs, forall x. 
SingleEraBlock x => Show (f x)) => + Show (LiftOptNP empty f xs) + where show (LiftOptNP x) = - case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of { Dict -> - show x - } + case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of + Dict -> + show x {------------------------------------------------------------------------------- LiftTelescope @@ -133,37 +155,51 @@ instance (All SingleEraBlock xs, forall x. SingleEraBlock x => Show (f x)) newtype LiftTelescope g f xs = LiftTelescope (Telescope g f xs) -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => Eq (g x) - , forall x. SingleEraBlock x => Eq (f x) - ) => Eq (LiftTelescope g f xs) where +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => Eq (g x) + , forall x. SingleEraBlock x => Eq (f x) + ) => + Eq (LiftTelescope g f xs) + where LiftTelescope x == LiftTelescope y = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @g) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - x == y - }} - -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => Ord (f x) - , forall x. SingleEraBlock x => Ord (g x) - ) => Ord (LiftTelescope g f xs) where + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @g) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + x == y + +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => Ord (f x) + , forall x. SingleEraBlock x => Ord (g x) + ) => + Ord (LiftTelescope g f xs) + where compare (LiftTelescope x) (LiftTelescope y) = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @g) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @g) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @f) of { Dict -> - compare x y - }}}} - -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => Show (g x) - , forall x. 
SingleEraBlock x => Show (f x) - ) => Show (LiftTelescope g f xs) where + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @g) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @g) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @f) of + Dict -> + compare x y + +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => Show (g x) + , forall x. SingleEraBlock x => Show (f x) + ) => + Show (LiftTelescope g f xs) + where show (LiftTelescope x) = - case liftEras (Proxy @xs) (Proxy @Show) (Proxy @g) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of { Dict -> - show x - }} + case liftEras (Proxy @xs) (Proxy @Show) (Proxy @g) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of + Dict -> + show x {------------------------------------------------------------------------------- LiftMismatch @@ -171,37 +207,51 @@ instance ( All SingleEraBlock xs newtype LiftMismatch f g xs = LiftMismatch (Mismatch f g xs) -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => Eq (f x) - , forall x. SingleEraBlock x => Eq (g x) - ) => Eq (LiftMismatch f g xs) where +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => Eq (f x) + , forall x. SingleEraBlock x => Eq (g x) + ) => + Eq (LiftMismatch f g xs) + where LiftMismatch x == LiftMismatch y = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @g) of { Dict -> - x == y - }} - -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => Ord (f x) - , forall x. SingleEraBlock x => Ord (g x) - ) => Ord (LiftMismatch f g xs) where + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @g) of + Dict -> + x == y + +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => Ord (f x) + , forall x. 
SingleEraBlock x => Ord (g x) + ) => + Ord (LiftMismatch f g xs) + where compare (LiftMismatch x) (LiftMismatch y) = - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @g) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @g) of { Dict -> - compare x y - }}}} - -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => Show (f x) - , forall x. SingleEraBlock x => Show (g x) - ) => Show (LiftMismatch f g xs) where + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Eq) (Proxy @g) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Ord) (Proxy @g) of + Dict -> + compare x y + +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => Show (f x) + , forall x. SingleEraBlock x => Show (g x) + ) => + Show (LiftMismatch f g xs) + where show (LiftMismatch x) = - case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @Show) (Proxy @g) of { Dict -> - show x - }} + case liftEras (Proxy @xs) (Proxy @Show) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @Show) (Proxy @g) of + Dict -> + show x {------------------------------------------------------------------------------- LiftNamedNS @@ -209,16 +259,19 @@ instance ( All SingleEraBlock xs newtype LiftNamedNS (name :: Symbol) f xs = LiftNamedNS (NS f xs) -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => NoThunks (f x) - , KnownSymbol name - ) => NoThunks (LiftNamedNS name f xs) where +instance + ( All SingleEraBlock xs + , forall x. 
SingleEraBlock x => NoThunks (f x) + , KnownSymbol name + ) => + NoThunks (LiftNamedNS name f xs) + where showTypeOf _ = symbolVal (Proxy @name) ++ " " ++ showBlockTypes (sList :: SList xs) wNoThunks ctxt (LiftNamedNS x) = - case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @f) of { Dict -> - wNoThunks ctxt x - } + case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @f) of + Dict -> + wNoThunks ctxt x {------------------------------------------------------------------------------- LiftNamedNP @@ -226,16 +279,19 @@ instance ( All SingleEraBlock xs newtype LiftNamedNP (name :: Symbol) f xs = LiftNamedNP (NP f xs) -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => NoThunks (f x) - , KnownSymbol name - ) => NoThunks (LiftNamedNP name f xs) where +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => NoThunks (f x) + , KnownSymbol name + ) => + NoThunks (LiftNamedNP name f xs) + where showTypeOf _ = symbolVal (Proxy @name) ++ " " ++ showBlockTypes (sList :: SList xs) wNoThunks ctxt (LiftNamedNP x) = - case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @f) of { Dict -> - wNoThunks ctxt x - } + case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @f) of + Dict -> + wNoThunks ctxt x {------------------------------------------------------------------------------- LiftNamedTelescope @@ -243,18 +299,22 @@ instance ( All SingleEraBlock xs newtype LiftNamedTelescope (name :: Symbol) f g xs = LiftNamedTelescope (Telescope f g xs) -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => NoThunks (f x) - , forall x. SingleEraBlock x => NoThunks (g x) - , KnownSymbol name - ) => NoThunks (LiftNamedTelescope name f g xs) where +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => NoThunks (f x) + , forall x. 
SingleEraBlock x => NoThunks (g x) + , KnownSymbol name + ) => + NoThunks (LiftNamedTelescope name f g xs) + where showTypeOf _ = symbolVal (Proxy @name) ++ " " ++ showBlockTypes (sList :: SList xs) wNoThunks ctxt (LiftNamedTelescope x) = - case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @g) of { Dict -> - wNoThunks ctxt x - }} + case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @g) of + Dict -> + wNoThunks ctxt x {------------------------------------------------------------------------------- LiftNamedTelescope @@ -262,18 +322,22 @@ instance ( All SingleEraBlock xs newtype LiftNamedMismatch (name :: Symbol) f g xs = LiftNamedMismatch (Mismatch f g xs) -instance ( All SingleEraBlock xs - , forall x. SingleEraBlock x => NoThunks (f x) - , forall x. SingleEraBlock x => NoThunks (g x) - , KnownSymbol name - ) => NoThunks (LiftNamedMismatch name f g xs) where +instance + ( All SingleEraBlock xs + , forall x. SingleEraBlock x => NoThunks (f x) + , forall x. SingleEraBlock x => NoThunks (g x) + , KnownSymbol name + ) => + NoThunks (LiftNamedMismatch name f g xs) + where showTypeOf _ = symbolVal (Proxy @name) ++ " " ++ showBlockTypes (sList :: SList xs) wNoThunks ctxt (LiftNamedMismatch x) = - case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @f) of { Dict -> - case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @g) of { Dict -> - wNoThunks ctxt x - }} + case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @f) of + Dict -> + case liftEras (Proxy @xs) (Proxy @NoThunks) (Proxy @g) of + Dict -> + wNoThunks ctxt x {------------------------------------------------------------------------------- Auxiliary @@ -281,11 +345,11 @@ instance ( All SingleEraBlock xs showBlockTypes :: All SingleEraBlock xs => SList xs -> String showBlockTypes = - (\names -> "[" ++ intercalate "," names ++ "]") . hcollapse . 
go - where - go :: All SingleEraBlock xs' => SList xs' -> NP (K String) xs' - go SNil = Nil - go SCons = typeRep' :* go sList - - typeRep' :: forall blk. SingleEraBlock blk => K String blk - typeRep' = K . show $ typeRep (Proxy @blk) + (\names -> "[" ++ intercalate "," names ++ "]") . hcollapse . go + where + go :: All SingleEraBlock xs' => SList xs' -> NP (K String) xs' + go SNil = Nil + go SCons = typeRep' :* go sList + + typeRep' :: forall blk. SingleEraBlock blk => K String blk + typeRep' = K . show $ typeRep (Proxy @blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Mempool.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Mempool.hs index 46eacc01d0..acf660ce77 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Mempool.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Mempool.hs @@ -15,11 +15,10 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.Mempool ( - GenTx (..) +module Ouroboros.Consensus.HardFork.Combinator.Mempool + ( GenTx (..) , HardForkApplyTxErr (..) , TxId (..) , Validated (..) 
@@ -27,75 +26,76 @@ module Ouroboros.Consensus.HardFork.Combinator.Mempool ( , hardForkApplyTxErrToEither ) where -import Control.Arrow (first, (+++)) -import Control.Monad.Except -import Data.Functor.Identity -import Data.Functor.Product -import Data.Kind (Type) -import qualified Data.Measure as Measure -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Functors -import Data.SOP.Index -import Data.SOP.InPairs (InPairs) -import qualified Data.SOP.InPairs as InPairs -import qualified Data.SOP.Match as Match -import Data.SOP.Strict -import qualified Data.SOP.Telescope as Tele -import Data.Typeable (Typeable) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Info -import Ouroboros.Consensus.HardFork.Combinator.InjectTxs -import Ouroboros.Consensus.HardFork.Combinator.Ledger -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util - -data HardForkApplyTxErr xs = - -- | Validation error from one of the eras +import Control.Arrow (first, (+++)) +import Control.Monad.Except +import Data.Functor.Identity +import Data.Functor.Product +import Data.Kind (Type) +import Data.Measure qualified as Measure +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Functors +import Data.SOP.InPairs (InPairs) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Index +import Data.SOP.Match qualified as Match +import Data.SOP.Strict +import Data.SOP.Telescope qualified as Tele +import Data.Typeable (Typeable) +import 
GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Info +import Ouroboros.Consensus.HardFork.Combinator.InjectTxs +import Ouroboros.Consensus.HardFork.Combinator.Ledger +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util + +data HardForkApplyTxErr xs + = -- | Validation error from one of the eras HardForkApplyTxErrFromEra !(OneEraApplyTxErr xs) + | -- | We tried to apply a block from the wrong era + HardForkApplyTxErrWrongEra !(MismatchEraInfo xs) + deriving Generic - -- | We tried to apply a block from the wrong era - | HardForkApplyTxErrWrongEra !(MismatchEraInfo xs) - deriving (Generic) - -instance Typeable xs => ShowProxy (HardForkApplyTxErr xs) where +instance Typeable xs => ShowProxy (HardForkApplyTxErr xs) -hardForkApplyTxErrToEither :: HardForkApplyTxErr xs - -> Either (MismatchEraInfo xs) (OneEraApplyTxErr xs) -hardForkApplyTxErrToEither (HardForkApplyTxErrFromEra err) = Right err -hardForkApplyTxErrToEither (HardForkApplyTxErrWrongEra err) = Left err +hardForkApplyTxErrToEither :: + HardForkApplyTxErr xs -> + Either (MismatchEraInfo xs) (OneEraApplyTxErr xs) +hardForkApplyTxErrToEither (HardForkApplyTxErrFromEra err) = Right err +hardForkApplyTxErrToEither (HardForkApplyTxErrWrongEra err) = Left err -hardForkApplyTxErrFromEither :: Either (MismatchEraInfo xs) (OneEraApplyTxErr xs) - -> HardForkApplyTxErr xs -hardForkApplyTxErrFromEither (Right err) = HardForkApplyTxErrFromEra err -hardForkApplyTxErrFromEither (Left err) = HardForkApplyTxErrWrongEra 
err +hardForkApplyTxErrFromEither :: + Either (MismatchEraInfo xs) (OneEraApplyTxErr xs) -> + HardForkApplyTxErr xs +hardForkApplyTxErrFromEither (Right err) = HardForkApplyTxErrFromEra err +hardForkApplyTxErrFromEither (Left err) = HardForkApplyTxErrWrongEra err deriving stock instance CanHardFork xs => Show (HardForkApplyTxErr xs) deriving stock instance CanHardFork xs => Eq (HardForkApplyTxErr xs) -newtype instance GenTx (HardForkBlock xs) = HardForkGenTx { - getHardForkGenTx :: OneEraGenTx xs - } +newtype instance GenTx (HardForkBlock xs) = HardForkGenTx + { getHardForkGenTx :: OneEraGenTx xs + } deriving (Eq, Generic, Show) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -newtype instance Validated (GenTx (HardForkBlock xs)) = HardForkValidatedGenTx { - getHardForkValidatedGenTx :: OneEraValidatedGenTx xs - } +newtype instance Validated (GenTx (HardForkBlock xs)) = HardForkValidatedGenTx + { getHardForkValidatedGenTx :: OneEraValidatedGenTx xs + } deriving (Eq, Generic, Show) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -instance Typeable xs => ShowProxy (GenTx (HardForkBlock xs)) where +instance Typeable xs => ShowProxy (GenTx (HardForkBlock xs)) type instance ApplyTxErr (HardForkBlock xs) = HardForkApplyTxErr xs @@ -116,51 +116,60 @@ type DecomposedReapplyTxsResult extra xs = (,,) [Invalidated (HardForkBlock xs)] [(Validated (GenTx (HardForkBlock xs)), extra)] - :.: - FlipTickedLedgerState TrackingMK - -instance ( CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - ) => LedgerSupportsMempool (HardForkBlock xs) where + :.: FlipTickedLedgerState TrackingMK + +instance + ( CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + ) => + LedgerSupportsMempool (HardForkBlock xs) + where applyTx = applyHelper ModeApply ComputeDiffs reapplyTx doDiffs cfg slot vtx tls = - fst + fst <$> applyHelper - ModeReapply - doDiffs - cfg - DoNotIntervene - slot - (WrapValidatedGenTx vtx) - tls - - reapplyTxs :: forall extra. 
- ComputeDiffs - -> LedgerConfig (HardForkBlock xs) - -> SlotNo -- ^ Slot number of the block containing the tx - -> [(Validated (GenTx (HardForkBlock xs)), extra)] - -> TickedLedgerState (HardForkBlock xs) ValuesMK - -> ReapplyTxsResult extra (HardForkBlock xs) + ModeReapply + doDiffs + cfg + DoNotIntervene + slot + (WrapValidatedGenTx vtx) + tls + + reapplyTxs :: + forall extra. + ComputeDiffs -> + LedgerConfig (HardForkBlock xs) -> + SlotNo -> + -- \^ Slot number of the block containing the tx + [(Validated (GenTx (HardForkBlock xs)), extra)] -> + TickedLedgerState (HardForkBlock xs) ValuesMK -> + ReapplyTxsResult extra (HardForkBlock xs) reapplyTxs doDiffs HardForkLedgerConfig{..} slot vtxs (TickedHardForkLedgerState transition hardForkState) = - (\(err, val, st') -> - ReapplyTxsResult (mismatched' ++ err) val (TickedHardForkLedgerState transition st')) - . hsequence' - $ hcizipWith proxySingle modeApplyCurrent cfgs (State.HardForkState $ hmap flipCurrentAndProduct matched) - - where + ( \(err, val, st') -> + ReapplyTxsResult (mismatched' ++ err) val (TickedHardForkLedgerState transition st') + ) + . hsequence' + $ hcizipWith + proxySingle + modeApplyCurrent + cfgs + (State.HardForkState $ hmap flipCurrentAndProduct matched) + where pcfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs - ei = State.epochInfoPrecomputedTransitionInfo - hardForkLedgerConfigShape - transition - hardForkState + cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs + ei = + State.epochInfoPrecomputedTransitionInfo + hardForkLedgerConfigShape + transition + hardForkState flipCurrentAndProduct (Pair (State.Current c s) b) = State.Current c (Pair s b) @@ -168,63 +177,80 @@ instance ( CanHardFork xs (mismatched, matched) = matchPolyTxsTele -- How to translate txs to later eras - (InPairs.hmap - (\(Pair2 _ (InjectPolyTx w)) -> InjectPolyTx (\(Comp (ex, tx)) -> Comp . 
(ex,) <$> w tx)) - (InPairs.requiringBoth cfgs hardForkInjectTxs) + ( InPairs.hmap + (\(Pair2 _ (InjectPolyTx w)) -> InjectPolyTx (\(Comp (ex, tx)) -> Comp . (ex,) <$> w tx)) + (InPairs.requiringBoth cfgs hardForkInjectTxs) ) (State.getHardForkState hardForkState) - (map - (\(tx, extra) -> hmap (Comp . (extra,)) . getOneEraValidatedGenTx . getHardForkValidatedGenTx $ tx) - vtxs + ( map + (\(tx, extra) -> hmap (Comp . (extra,)) . getOneEraValidatedGenTx . getHardForkValidatedGenTx $ tx) + vtxs ) mismatched' :: [Invalidated (HardForkBlock xs)] mismatched' = - map (\x -> flip Invalidated ( HardForkApplyTxErrWrongEra - $ MismatchEraInfo - $ Match.bihcmap proxySingle singleEraInfo ledgerInfo - $ snd x) - . HardForkValidatedGenTx - . OneEraValidatedGenTx - . hmap (snd . unComp) - . fst - $ x) - mismatched - - modeApplyCurrent :: forall blk. - SingleEraBlock blk - => Index xs blk - -> WrapLedgerConfig blk - -> Product - (FlipTickedLedgerState ValuesMK) - ([] :.: (,) extra :.: WrapValidatedGenTx) blk - -> DecomposedReapplyTxsResult extra xs blk + map + ( \x -> + flip + Invalidated + ( HardForkApplyTxErrWrongEra $ + MismatchEraInfo $ + Match.bihcmap proxySingle singleEraInfo ledgerInfo $ + snd x + ) + . HardForkValidatedGenTx + . OneEraValidatedGenTx + . hmap (snd . unComp) + . fst + $ x + ) + mismatched + + modeApplyCurrent :: + forall blk. + SingleEraBlock blk => + Index xs blk -> + WrapLedgerConfig blk -> + Product + (FlipTickedLedgerState ValuesMK) + ([] :.: (,) extra :.: WrapValidatedGenTx) + blk -> + DecomposedReapplyTxsResult extra xs blk modeApplyCurrent index cfg (Pair (FlipTickedLedgerState st) txs) = let ReapplyTxsResult err val st' = - reapplyTxs doDiffs (unwrapLedgerConfig cfg) slot [ (unwrapValidatedGenTx tx, tk) | (Comp (tk,tx)) <- unComp txs ] st - in Comp - ( [ injectValidatedGenTx index (getInvalidated x) `Invalidated` injectApplyTxErr index (getReason x) | x <- err ] - , map (first (HardForkValidatedGenTx . OneEraValidatedGenTx . injectNS index . 
WrapValidatedGenTx)) val - , FlipTickedLedgerState st' - ) + reapplyTxs + doDiffs + (unwrapLedgerConfig cfg) + slot + [(unwrapValidatedGenTx tx, tk) | (Comp (tk, tx)) <- unComp txs] + st + in Comp + ( [ injectValidatedGenTx index (getInvalidated x) `Invalidated` injectApplyTxErr index (getReason x) + | x <- err + ] + , map + (first (HardForkValidatedGenTx . OneEraValidatedGenTx . injectNS index . WrapValidatedGenTx)) + val + , FlipTickedLedgerState st' + ) txForgetValidated = - HardForkGenTx + HardForkGenTx . OneEraGenTx . hcmap proxySingle (txForgetValidated . unwrapValidatedGenTx) . getOneEraValidatedGenTx . getHardForkValidatedGenTx getTransactionKeySets (HardForkGenTx (OneEraGenTx ns)) = - hcollapse - $ hcimap proxySingle f ns - where - f :: - SingleEraBlock x - => Index xs x - -> GenTx x - -> K (LedgerTables (LedgerState (HardForkBlock xs)) KeysMK) x - f idx tx = K $ injectLedgerTables idx $ getTransactionKeySets tx + hcollapse $ + hcimap proxySingle f ns + where + f :: + SingleEraBlock x => + Index xs x -> + GenTx x -> + K (LedgerTables (LedgerState (HardForkBlock xs)) KeysMK) x + f idx tx = K $ injectLedgerTables idx $ getTransactionKeySets tx -- This optimization is worthwile because we can save the projection and -- injection of ledger tables. @@ -235,21 +261,25 @@ instance ( CanHardFork xs -- TMVar, it is interesting to hold it for as short of a time as possible. 
prependMempoolDiffs (TickedHardForkLedgerState _ (State.HardForkState st1)) - (TickedHardForkLedgerState tr (State.HardForkState st2)) - = TickedHardForkLedgerState - tr + (TickedHardForkLedgerState tr (State.HardForkState st2)) = + TickedHardForkLedgerState + tr $ State.HardForkState $ runIdentity - (Tele.alignExtend - (InPairs.hpure (error "When prepending mempool diffs we used to un-aligned states, this should be impossible!")) - (hcpure proxySingle $ fn_2 $ \(State.Current _ a) (State.Current start b) -> State.Current start $ - FlipTickedLedgerState - $ prependMempoolDiffs - (getFlipTickedLedgerState a) - (getFlipTickedLedgerState b) - ) - st1 - st2) + ( Tele.alignExtend + ( InPairs.hpure + (error "When prepending mempool diffs we used to un-aligned states, this should be impossible!") + ) + ( hcpure proxySingle $ fn_2 $ \(State.Current _ a) (State.Current start b) -> + State.Current start $ + FlipTickedLedgerState $ + prependMempoolDiffs + (getFlipTickedLedgerState a) + (getFlipTickedLedgerState b) + ) + st1 + st2 + ) -- This optimization is worthwile because we can save the projection and -- injection of ledger tables. @@ -259,94 +289,100 @@ instance ( CanHardFork xs -- make adoption of new transactions faster. As adding a transaction takes a -- TMVar, it is interesting to hold it for as short of a time as possible. 
applyMempoolDiffs - vals keys (TickedHardForkLedgerState tr (State.HardForkState st)) = - TickedHardForkLedgerState tr $ State.HardForkState $ hcimap - proxySingle - (\idx (State.Current start (FlipTickedLedgerState a)) -> - State.Current start $ FlipTickedLedgerState - $ applyMempoolDiffs - (ejectLedgerTables idx vals) - (ejectLedgerTables idx keys) a ) - st - + vals + keys + (TickedHardForkLedgerState tr (State.HardForkState st)) = + TickedHardForkLedgerState tr $ + State.HardForkState $ + hcimap + proxySingle + ( \idx (State.Current start (FlipTickedLedgerState a)) -> + State.Current start $ + FlipTickedLedgerState $ + applyMempoolDiffs + (ejectLedgerTables idx vals) + (ejectLedgerTables idx keys) + a + ) + st instance CanHardFork xs => TxLimits (HardForkBlock xs) where type TxMeasure (HardForkBlock xs) = HardForkTxMeasure xs blockCapacityTxMeasure HardForkLedgerConfig{..} - (TickedHardForkLedgerState transition hardForkState) - = - hcollapse - $ hcizipWith proxySingle aux pcfgs hardForkState - where + (TickedHardForkLedgerState transition hardForkState) = + hcollapse $ + hcizipWith proxySingle aux pcfgs hardForkState + where pcfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - ei = State.epochInfoPrecomputedTransitionInfo - hardForkLedgerConfigShape - transition - hardForkState + ei = + State.epochInfoPrecomputedTransitionInfo + hardForkLedgerConfigShape + transition + hardForkState aux :: - SingleEraBlock blk - => Index xs blk - -> WrapPartialLedgerConfig blk - -> FlipTickedLedgerState mk blk - -> K (HardForkTxMeasure xs) blk + SingleEraBlock blk => + Index xs blk -> + WrapPartialLedgerConfig blk -> + FlipTickedLedgerState mk blk -> + K (HardForkTxMeasure xs) blk aux idx pcfg st' = - K - $ hardForkInjTxMeasure . injectNS idx . WrapTxMeasure - $ blockCapacityTxMeasure - (completeLedgerConfig' ei pcfg) - (getFlipTickedLedgerState st') + K $ + hardForkInjTxMeasure . injectNS idx . 
WrapTxMeasure $ + blockCapacityTxMeasure + (completeLedgerConfig' ei pcfg) + (getFlipTickedLedgerState st') txMeasure HardForkLedgerConfig{..} (TickedHardForkLedgerState transition hardForkState) - tx - = - case matchTx injs (unwrapTx tx) hardForkState of - Left{} -> pure Measure.zero -- safe b/c the tx will be found invalid - Right pair -> hcollapse $ hcizipWith proxySingle aux cfgs pair - where + tx = + case matchTx injs (unwrapTx tx) hardForkState of + Left{} -> pure Measure.zero -- safe b/c the tx will be found invalid + Right pair -> hcollapse $ hcizipWith proxySingle aux cfgs pair + where pcfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - ei = State.epochInfoPrecomputedTransitionInfo - hardForkLedgerConfigShape - transition - hardForkState - cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs + ei = + State.epochInfoPrecomputedTransitionInfo + hardForkLedgerConfigShape + transition + hardForkState + cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs unwrapTx = getOneEraGenTx . getHardForkGenTx injs :: InPairs (InjectPolyTx GenTx) xs injs = - InPairs.hmap (\(Pair2 injTx _injValidatedTx) -> injTx) - $ InPairs.requiringBoth cfgs hardForkInjectTxs - - aux :: forall blk. - SingleEraBlock blk - => Index xs blk - -> WrapLedgerConfig blk - -> (Product GenTx (FlipTickedLedgerState ValuesMK)) blk - -> K (Except (HardForkApplyTxErr xs) (HardForkTxMeasure xs)) blk + InPairs.hmap (\(Pair2 injTx _injValidatedTx) -> injTx) $ + InPairs.requiringBoth cfgs hardForkInjectTxs + + aux :: + forall blk. + SingleEraBlock blk => + Index xs blk -> + WrapLedgerConfig blk -> + (Product GenTx (FlipTickedLedgerState ValuesMK)) blk -> + K (Except (HardForkApplyTxErr xs) (HardForkTxMeasure xs)) blk aux idx cfg (Pair tx' st') = - K - $ mapExcept - ( ( HardForkApplyTxErrFromEra - . OneEraApplyTxErr - . injectNS idx - . WrapApplyTxErr - ) - +++ - (hardForkInjTxMeasure . injectNS idx . WrapTxMeasure) + K + $ mapExcept + ( ( HardForkApplyTxErrFromEra + . 
OneEraApplyTxErr + . injectNS idx + . WrapApplyTxErr + ) + +++ (hardForkInjTxMeasure . injectNS idx . WrapTxMeasure) ) - $ txMeasure + $ txMeasure (unwrapLedgerConfig cfg) (getFlipTickedLedgerState st') tx' -- | A private type used only to clarify the parameterization of 'applyHelper' data ApplyHelperMode :: (Type -> Type) -> Type where - ModeApply :: ApplyHelperMode GenTx + ModeApply :: ApplyHelperMode GenTx ModeReapply :: ApplyHelperMode WrapValidatedGenTx -- | 'applyHelper' has to return one of these, depending on the apply mode used. @@ -355,8 +391,8 @@ type family ApplyMK k where ApplyMK (ApplyHelperMode WrapValidatedGenTx) = TrackingMK -- | A private type used only to clarify the definition of 'applyHelper' -data ApplyResult xs txIn blk = ApplyResult { - arState :: Ticked (LedgerState blk) (ApplyMK (ApplyHelperMode txIn)) +data ApplyResult xs txIn blk = ApplyResult + { arState :: Ticked (LedgerState blk) (ApplyMK (ApplyHelperMode txIn)) , arValidatedTx :: Validated (GenTx (HardForkBlock xs)) } @@ -364,30 +400,34 @@ data ApplyResult xs txIn blk = ApplyResult { -- -- The @txIn@ variable is 'GenTx' or 'WrapValidatedGenTx', respectively. See -- 'ApplyHelperMode'. -applyHelper :: forall xs txIn. CanHardFork xs - => ApplyHelperMode txIn - -> ComputeDiffs - -> LedgerConfig (HardForkBlock xs) - -> WhetherToIntervene - -> SlotNo - -> txIn (HardForkBlock xs) - -> TickedLedgerState (HardForkBlock xs) ValuesMK - -> Except - (HardForkApplyTxErr xs) - ( TickedLedgerState (HardForkBlock xs) (ApplyMK (ApplyHelperMode txIn)) - , Validated (GenTx (HardForkBlock xs)) - ) -applyHelper mode - doDiffs - HardForkLedgerConfig{..} - wti - slot - tx - (TickedHardForkLedgerState transition hardForkState) = +applyHelper :: + forall xs txIn. 
+ CanHardFork xs => + ApplyHelperMode txIn -> + ComputeDiffs -> + LedgerConfig (HardForkBlock xs) -> + WhetherToIntervene -> + SlotNo -> + txIn (HardForkBlock xs) -> + TickedLedgerState (HardForkBlock xs) ValuesMK -> + Except + (HardForkApplyTxErr xs) + ( TickedLedgerState (HardForkBlock xs) (ApplyMK (ApplyHelperMode txIn)) + , Validated (GenTx (HardForkBlock xs)) + ) +applyHelper + mode + doDiffs + HardForkLedgerConfig{..} + wti + slot + tx + (TickedHardForkLedgerState transition hardForkState) = case matchPolyTx injs (modeGetTx tx) hardForkState of Left mismatch -> - throwError $ HardForkApplyTxErrWrongEra . MismatchEraInfo $ - Match.bihcmap proxySingle singleEraInfo ledgerInfo mismatch + throwError $ + HardForkApplyTxErrWrongEra . MismatchEraInfo $ + Match.bihcmap proxySingle singleEraInfo ledgerInfo mismatch Right matched -> -- We are updating the ticked ledger state by applying a transaction, -- but for the HFC that ledger state contains a bundled @@ -407,8 +447,8 @@ applyHelper mode -- applicable here. do result <- - hsequence' - $ hcizipWith proxySingle modeApplyCurrent cfgs matched + hsequence' $ + hcizipWith proxySingle modeApplyCurrent cfgs matched let _ = result :: State.HardForkState (ApplyResult xs txIn) xs st' :: State.HardForkState (FlipTickedLedgerState (ApplyMK (ApplyHelperMode txIn))) xs @@ -418,78 +458,87 @@ applyHelper mode vtx = hcollapse $ (K . 
arValidatedTx) `hmap` result return (TickedHardForkLedgerState transition st', vtx) - where + where pcfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs - ei = State.epochInfoPrecomputedTransitionInfo - hardForkLedgerConfigShape - transition - hardForkState + cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs + ei = + State.epochInfoPrecomputedTransitionInfo + hardForkLedgerConfigShape + transition + hardForkState injs :: InPairs (InjectPolyTx txIn) xs injs = - InPairs.hmap - modeGetInjection - (InPairs.requiringBoth cfgs hardForkInjectTxs) + InPairs.hmap + modeGetInjection + (InPairs.requiringBoth cfgs hardForkInjectTxs) modeGetTx :: txIn (HardForkBlock xs) -> NS txIn xs modeGetTx = case mode of - ModeApply -> - getOneEraGenTx - . getHardForkGenTx - ModeReapply -> - getOneEraValidatedGenTx - . getHardForkValidatedGenTx - . unwrapValidatedGenTx - - modeGetInjection :: forall blk1 blk2. - Product2 InjectTx InjectValidatedTx blk1 blk2 - -> InjectPolyTx txIn blk1 blk2 + ModeApply -> + getOneEraGenTx + . getHardForkGenTx + ModeReapply -> + getOneEraValidatedGenTx + . getHardForkValidatedGenTx + . unwrapValidatedGenTx + + modeGetInjection :: + forall blk1 blk2. + Product2 InjectTx InjectValidatedTx blk1 blk2 -> + InjectPolyTx txIn blk1 blk2 modeGetInjection (Pair2 injTx injValidatedTx) = case mode of - ModeApply -> injTx - ModeReapply -> injValidatedTx - - modeApplyCurrent :: forall blk. - SingleEraBlock blk - => Index xs blk - -> WrapLedgerConfig blk - -> Product txIn (FlipTickedLedgerState ValuesMK) blk - -> ( Except (HardForkApplyTxErr xs) - :.: ApplyResult xs txIn - ) blk + ModeApply -> injTx + ModeReapply -> injValidatedTx + + modeApplyCurrent :: + forall blk. 
+ SingleEraBlock blk => + Index xs blk -> + WrapLedgerConfig blk -> + Product txIn (FlipTickedLedgerState ValuesMK) blk -> + ( Except (HardForkApplyTxErr xs) + :.: ApplyResult xs txIn + ) + blk modeApplyCurrent index cfg (Pair tx' (FlipTickedLedgerState st)) = - Comp - $ withExcept (injectApplyTxErr index) - $ do + Comp $ + withExcept (injectApplyTxErr index) $ + do let lcfg = unwrapLedgerConfig cfg case mode of - ModeApply -> do + ModeApply -> do (st', vtx) <- applyTx lcfg wti slot tx' st - pure ApplyResult { - arValidatedTx = injectValidatedGenTx index vtx - , arState = st' - } + pure + ApplyResult + { arValidatedTx = injectValidatedGenTx index vtx + , arState = st' + } ModeReapply -> do - let vtx' = unwrapValidatedGenTx tx' - st' <- reapplyTx doDiffs lcfg slot vtx' st - -- provide the given transaction, which was already validated - pure ApplyResult { - arValidatedTx = injectValidatedGenTx index vtx' - , arState = st' + let vtx' = unwrapValidatedGenTx tx' + st' <- reapplyTx doDiffs lcfg slot vtx' st + -- provide the given transaction, which was already validated + pure + ApplyResult + { arValidatedTx = injectValidatedGenTx index vtx' + , arState = st' } -newtype instance TxId (GenTx (HardForkBlock xs)) = HardForkGenTxId { - getHardForkGenTxId :: OneEraGenTxId xs - } +newtype instance TxId (GenTx (HardForkBlock xs)) = HardForkGenTxId + { getHardForkGenTxId :: OneEraGenTxId xs + } deriving (Eq, Generic, Ord, Show) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -instance Typeable xs => ShowProxy (TxId (GenTx (HardForkBlock xs))) where +instance Typeable xs => ShowProxy (TxId (GenTx (HardForkBlock xs))) instance CanHardFork xs => HasTxId (GenTx (HardForkBlock xs)) where - txId = HardForkGenTxId . OneEraGenTxId - . hcmap proxySingle (WrapGenTxId . txId) - . getOneEraGenTx . getHardForkGenTx + txId = + HardForkGenTxId + . OneEraGenTxId + . hcmap proxySingle (WrapGenTxId . txId) + . getOneEraGenTx + . 
getHardForkGenTx {------------------------------------------------------------------------------- HasTxs @@ -499,36 +548,39 @@ instance CanHardFork xs => HasTxId (GenTx (HardForkBlock xs)) where instance All HasTxs xs => HasTxs (HardForkBlock xs) where extractTxs = - hcollapse + hcollapse . hcimap (Proxy @HasTxs) aux . getOneEraBlock . getHardForkBlock - where - aux :: - HasTxs blk - => Index xs blk - -> I blk - -> K [GenTx (HardForkBlock xs)] blk - aux index = K . map (injectNS' (Proxy @GenTx) index) . extractTxs . unI + where + aux :: + HasTxs blk => + Index xs blk -> + I blk -> + K [GenTx (HardForkBlock xs)] blk + aux index = K . map (injectNS' (Proxy @GenTx) index) . extractTxs . unI {------------------------------------------------------------------------------- Auxiliary -------------------------------------------------------------------------------} -ledgerInfo :: forall blk mk. SingleEraBlock blk - => State.Current (FlipTickedLedgerState mk) blk -> LedgerEraInfo blk +ledgerInfo :: + forall blk mk. + SingleEraBlock blk => + State.Current (FlipTickedLedgerState mk) blk -> LedgerEraInfo blk ledgerInfo _ = LedgerEraInfo $ singleEraInfo (Proxy @blk) injectApplyTxErr :: SListI xs => Index xs blk -> ApplyTxErr blk -> HardForkApplyTxErr xs injectApplyTxErr index = - HardForkApplyTxErrFromEra + HardForkApplyTxErrFromEra . OneEraApplyTxErr . injectNS index . WrapApplyTxErr -injectValidatedGenTx :: SListI xs => Index xs blk -> Validated (GenTx blk) -> Validated (GenTx (HardForkBlock xs)) +injectValidatedGenTx :: + SListI xs => Index xs blk -> Validated (GenTx blk) -> Validated (GenTx (HardForkBlock xs)) injectValidatedGenTx index = - HardForkValidatedGenTx + HardForkValidatedGenTx . OneEraValidatedGenTx . injectNS index . 
WrapValidatedGenTx diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/NetworkVersion.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/NetworkVersion.hs index 858fa5dcde..13f883e656 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/NetworkVersion.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/NetworkVersion.hs @@ -5,13 +5,11 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Defines the different NTC and NTN versions for the HardFork Combinator. - -module Ouroboros.Consensus.HardFork.Combinator.NetworkVersion ( - EraNodeToClientVersion (..) +module Ouroboros.Consensus.HardFork.Combinator.NetworkVersion + ( EraNodeToClientVersion (..) , HardForkNodeToClientVersion (..) , HardForkNodeToNodeVersion (..) , HardForkSpecificNodeToClientVersion (..) @@ -20,11 +18,11 @@ module Ouroboros.Consensus.HardFork.Combinator.NetworkVersion ( , isHardForkNodeToNodeEnabled ) where -import Data.SOP.Constraint -import Data.SOP.Strict -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.SOP.Constraint +import Data.SOP.Strict +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- Versioning @@ -32,14 +30,14 @@ import Ouroboros.Consensus.TypeFamilyWrappers -- | Versioning of the specific additions made by the HFC to the @NodeToNode@ -- protocols, e.g., the era tag. 
-data HardForkSpecificNodeToNodeVersion = - HardForkSpecificNodeToNodeVersion1 +data HardForkSpecificNodeToNodeVersion + = HardForkSpecificNodeToNodeVersion1 deriving (Eq, Ord, Show, Enum, Bounded) -- | Versioning of the specific additions made by the HFC to the @NodeToClient@ -- protocols, e.g., the era tag or the hard-fork specific queries. -data HardForkSpecificNodeToClientVersion = - -- | Include the Genesis window in 'EraParams'. +data HardForkSpecificNodeToClientVersion + = -- | Include the Genesis window in 'EraParams'. HardForkSpecificNodeToClientVersion3 deriving (Eq, Ord, Show, Enum, Bounded) @@ -49,62 +47,71 @@ data HardForkNodeToNodeVersion xs where -- This means that only the first era (@x@) is supported, and moreover, is -- compatible with serialisation used if the HFC would not be present at all. HardForkNodeToNodeDisabled :: - BlockNodeToNodeVersion x - -> HardForkNodeToNodeVersion (x ': xs) - + BlockNodeToNodeVersion x -> + HardForkNodeToNodeVersion (x ': xs) -- | Enable the HFC -- -- Serialised values will always include tags inserted by the HFC to -- distinguish one era from another. We version the hard-fork specific parts -- with 'HardForkSpecificNodeToNodeVersion'. 
HardForkNodeToNodeEnabled :: - HardForkSpecificNodeToNodeVersion - -> NP WrapNodeToNodeVersion xs - -> HardForkNodeToNodeVersion xs + HardForkSpecificNodeToNodeVersion -> + NP WrapNodeToNodeVersion xs -> + HardForkNodeToNodeVersion xs data HardForkNodeToClientVersion xs where -- | Disable the HFC -- -- See 'HardForkNodeToNodeDisabled' HardForkNodeToClientDisabled :: - BlockNodeToClientVersion x - -> HardForkNodeToClientVersion (x ': xs) - + BlockNodeToClientVersion x -> + HardForkNodeToClientVersion (x ': xs) -- | Enable the HFC -- -- See 'HardForkNodeToNodeEnabled' HardForkNodeToClientEnabled :: - HardForkSpecificNodeToClientVersion - -> NP EraNodeToClientVersion xs - -> HardForkNodeToClientVersion xs + HardForkSpecificNodeToClientVersion -> + NP EraNodeToClientVersion xs -> + HardForkNodeToClientVersion xs -data EraNodeToClientVersion blk = - EraNodeToClientEnabled !(BlockNodeToClientVersion blk) +data EraNodeToClientVersion blk + = EraNodeToClientEnabled !(BlockNodeToClientVersion blk) | EraNodeToClientDisabled deriving instance Show (BlockNodeToClientVersion blk) => Show (EraNodeToClientVersion blk) deriving instance Eq (BlockNodeToClientVersion blk) => Eq (EraNodeToClientVersion blk) -deriving instance (All HasNetworkProtocolVersion xs, All (Compose Show WrapNodeToNodeVersion) xs) => Show (HardForkNodeToNodeVersion xs) -deriving instance (All HasNetworkProtocolVersion xs, All (Compose Show EraNodeToClientVersion) xs) => Show (HardForkNodeToClientVersion xs) - -deriving instance (All HasNetworkProtocolVersion xs, All (Compose Eq WrapNodeToNodeVersion) xs) => Eq (HardForkNodeToNodeVersion xs) -deriving instance (All HasNetworkProtocolVersion xs, All (Compose Eq EraNodeToClientVersion) xs) => Eq (HardForkNodeToClientVersion xs) - -instance ( All (Compose Show WrapNodeToNodeVersion) xs - , All (Compose Eq WrapNodeToNodeVersion) xs - , All (Compose Show EraNodeToClientVersion) xs - , All (Compose Eq EraNodeToClientVersion) xs - , All HasNetworkProtocolVersion xs) - 
=> HasNetworkProtocolVersion (HardForkBlock xs) where - type BlockNodeToNodeVersion (HardForkBlock xs) = HardForkNodeToNodeVersion xs +deriving instance + (All HasNetworkProtocolVersion xs, All (Compose Show WrapNodeToNodeVersion) xs) => + Show (HardForkNodeToNodeVersion xs) +deriving instance + (All HasNetworkProtocolVersion xs, All (Compose Show EraNodeToClientVersion) xs) => + Show (HardForkNodeToClientVersion xs) + +deriving instance + (All HasNetworkProtocolVersion xs, All (Compose Eq WrapNodeToNodeVersion) xs) => + Eq (HardForkNodeToNodeVersion xs) +deriving instance + (All HasNetworkProtocolVersion xs, All (Compose Eq EraNodeToClientVersion) xs) => + Eq (HardForkNodeToClientVersion xs) + +instance + ( All (Compose Show WrapNodeToNodeVersion) xs + , All (Compose Eq WrapNodeToNodeVersion) xs + , All (Compose Show EraNodeToClientVersion) xs + , All (Compose Eq EraNodeToClientVersion) xs + , All HasNetworkProtocolVersion xs + ) => + HasNetworkProtocolVersion (HardForkBlock xs) + where + type BlockNodeToNodeVersion (HardForkBlock xs) = HardForkNodeToNodeVersion xs type BlockNodeToClientVersion (HardForkBlock xs) = HardForkNodeToClientVersion xs isHardForkNodeToNodeEnabled :: HardForkNodeToNodeVersion xs -> Bool -isHardForkNodeToNodeEnabled HardForkNodeToNodeEnabled {} = True -isHardForkNodeToNodeEnabled _ = False +isHardForkNodeToNodeEnabled HardForkNodeToNodeEnabled{} = True +isHardForkNodeToNodeEnabled _ = False isHardForkNodeToClientEnabled :: HardForkNodeToClientVersion xs -> Bool -isHardForkNodeToClientEnabled HardForkNodeToClientEnabled {} = True -isHardForkNodeToClientEnabled _ = False +isHardForkNodeToClientEnabled HardForkNodeToClientEnabled{} = True +isHardForkNodeToClientEnabled _ = False diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node.hs index 068ea68fbf..fda1547659 100644 --- 
a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node.hs @@ -3,38 +3,37 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.HardFork.Combinator.Node () where -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Strict -import GHC.Stack -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Forging () -import Ouroboros.Consensus.HardFork.Combinator.Ledger -import Ouroboros.Consensus.HardFork.Combinator.Ledger.CommonProtocolParams () -import Ouroboros.Consensus.HardFork.Combinator.Ledger.PeerSelection () -import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query -import Ouroboros.Consensus.HardFork.Combinator.Node.DiffusionPipelining () -import Ouroboros.Consensus.HardFork.Combinator.Node.InitStorage () -import Ouroboros.Consensus.HardFork.Combinator.Node.Metrics () -import Ouroboros.Consensus.HardFork.Combinator.Node.SanityCheck () -import Ouroboros.Consensus.HardFork.Combinator.Serialisation -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Strict +import GHC.Stack +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Forging () +import Ouroboros.Consensus.HardFork.Combinator.Ledger +import Ouroboros.Consensus.HardFork.Combinator.Ledger.CommonProtocolParams () +import 
Ouroboros.Consensus.HardFork.Combinator.Ledger.PeerSelection () +import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query +import Ouroboros.Consensus.HardFork.Combinator.Node.DiffusionPipelining () +import Ouroboros.Consensus.HardFork.Combinator.Node.InitStorage () +import Ouroboros.Consensus.HardFork.Combinator.Node.Metrics () +import Ouroboros.Consensus.HardFork.Combinator.Node.SanityCheck () +import Ouroboros.Consensus.HardFork.Combinator.Serialisation +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run {------------------------------------------------------------------------------- ConfigSupportsNode -------------------------------------------------------------------------------} instance CanHardFork xs => ConfigSupportsNode (HardForkBlock xs) where - getSystemStart = getSameConfigValue getSystemStart + getSystemStart = getSameConfigValue getSystemStart getNetworkMagic = getSameConfigValue getNetworkMagic {------------------------------------------------------------------------------- @@ -42,27 +41,30 @@ instance CanHardFork xs => ConfigSupportsNode (HardForkBlock xs) where -------------------------------------------------------------------------------} getSameConfigValue :: - forall xs a. (CanHardFork xs, Eq a, HasCallStack) - => (forall blk. ConfigSupportsNode blk => BlockConfig blk -> a) - -> BlockConfig (HardForkBlock xs) - -> a + forall xs a. + (CanHardFork xs, Eq a, HasCallStack) => + (forall blk. ConfigSupportsNode blk => BlockConfig blk -> a) -> + BlockConfig (HardForkBlock xs) -> + a getSameConfigValue getValue blockConfig = getSameValue values - where - values :: NP (K a) xs - values = - hcmap (Proxy @SingleEraBlock) (K . getValue) - . getPerEraBlockConfig - . hardForkBlockConfigPerEra - $ blockConfig + where + values :: NP (K a) xs + values = + hcmap (Proxy @SingleEraBlock) (K . getValue) + . getPerEraBlockConfig + . 
hardForkBlockConfigPerEra + $ blockConfig {------------------------------------------------------------------------------- RunNode -------------------------------------------------------------------------------} -instance ( CanHardFork xs - , HasCanonicalTxIn xs - , HasHardForkTxOut xs - , BlockSupportsHFLedgerQuery xs - , SupportedNetworkProtocolVersion (HardForkBlock xs) - , SerialiseHFC xs - ) => RunNode (HardForkBlock xs) +instance + ( CanHardFork xs + , HasCanonicalTxIn xs + , HasHardForkTxOut xs + , BlockSupportsHFLedgerQuery xs + , SupportedNetworkProtocolVersion (HardForkBlock xs) + , SerialiseHFC xs + ) => + RunNode (HardForkBlock xs) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/DiffusionPipelining.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/DiffusionPipelining.hs index 42990d7765..04bd45488a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/DiffusionPipelining.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/DiffusionPipelining.hs @@ -3,24 +3,23 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.HardFork.Combinator.Node.DiffusionPipelining () where -import Data.Functor.Product -import Data.Proxy -import Data.SOP.BasicFunctors -import qualified Data.SOP.Match as Match -import Data.SOP.NonEmpty -import Data.SOP.Strict -import Ouroboros.Consensus.Block.SupportsDiffusionPipelining -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util +import Data.Functor.Product +import Data.Proxy +import 
Data.SOP.BasicFunctors +import Data.SOP.Match qualified as Match +import Data.SOP.NonEmpty +import Data.SOP.Strict +import Ouroboros.Consensus.Block.SupportsDiffusionPipelining +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util -- | The 'BlockSupportsDiffusionPipelining' instance for the HFC is -- compositional: @@ -66,55 +65,60 @@ instance CanHardFork xs => BlockSupportsDiffusionPipelining (HardForkBlock xs) w type TentativeHeaderView (HardForkBlock xs) = OneEraTentativeHeaderView xs initialTentativeHeaderState _ - | ProofNonEmpty proxyHead _ <- isNonEmpty (Proxy @xs) - = OneEraTentativeHeaderState $ Z $ WrapTentativeHeaderState - $ initialTentativeHeaderState proxyHead + | ProofNonEmpty proxyHead _ <- isNonEmpty (Proxy @xs) = + OneEraTentativeHeaderState $ + Z $ + WrapTentativeHeaderState $ + initialTentativeHeaderState proxyHead tentativeHeaderView (HardForkBlockConfig (PerEraBlockConfig bcfg)) (HardForkHeader (OneEraHeader hdr)) = - OneEraTentativeHeaderView - . hcliftA2 proxySingle (WrapTentativeHeaderView .: tentativeHeaderView) bcfg - $ hdr + OneEraTentativeHeaderView + . hcliftA2 proxySingle (WrapTentativeHeaderView .: tentativeHeaderView) bcfg + $ hdr applyTentativeHeaderView Proxy (OneEraTentativeHeaderView thv) (OneEraTentativeHeaderState st) = - fmap OneEraTentativeHeaderState - . sequence_NS' - . hcmap proxySingle updateSt - =<< case Match.matchNS thv st of - Right thvSt -> Just thvSt - Left mismatch -> fromMismatch mismatch - where + fmap OneEraTentativeHeaderState + . sequence_NS' + . hcmap proxySingle updateSt + =<< case Match.matchNS thv st of + Right thvSt -> Just thvSt + Left mismatch -> fromMismatch mismatch + where updateSt :: - forall blk. 
BlockSupportsDiffusionPipelining blk - => Product WrapTentativeHeaderView WrapTentativeHeaderState blk - -> (Maybe :.: WrapTentativeHeaderState) blk + forall blk. + BlockSupportsDiffusionPipelining blk => + Product WrapTentativeHeaderView WrapTentativeHeaderState blk -> + (Maybe :.: WrapTentativeHeaderState) blk updateSt (Pair (WrapTentativeHeaderView thv') (WrapTentativeHeaderState st')) = - Comp $ fmap WrapTentativeHeaderState - $ applyTentativeHeaderView (Proxy @blk) thv' st' + Comp $ + fmap WrapTentativeHeaderState $ + applyTentativeHeaderView (Proxy @blk) thv' st' -- If the mismatch indicates that the tentative header view is in a later -- era than the 'TentativeHeaderState', pair the view with the -- 'initialTentativeHeaderState' of its era. fromMismatch :: - Match.Mismatch WrapTentativeHeaderView WrapTentativeHeaderState xs - -> Maybe (NS (Product WrapTentativeHeaderView WrapTentativeHeaderState) xs) + Match.Mismatch WrapTentativeHeaderView WrapTentativeHeaderState xs -> + Maybe (NS (Product WrapTentativeHeaderView WrapTentativeHeaderState) xs) fromMismatch mismatch - | ProofNonEmpty _ _ <- isNonEmpty (Proxy @xs) - = case Match.mismatchNotFirst mismatch of - -- The @thv@ is in an earlier era compared to the @st@, so it does - -- not satisfy the HFC pipelining criterion. - Right _ -> Nothing - -- The @thv@ is in a later era compared to the @st@. - Left thv' -> Just $ hcmap proxySingle withInitialSt (S thv') - where - withInitialSt :: - forall blk. BlockSupportsDiffusionPipelining blk - => WrapTentativeHeaderView blk - -> Product WrapTentativeHeaderView WrapTentativeHeaderState blk - withInitialSt v = Pair v (WrapTentativeHeaderState initialSt) - where - initialSt = initialTentativeHeaderState (Proxy @blk) + | ProofNonEmpty _ _ <- isNonEmpty (Proxy @xs) = + case Match.mismatchNotFirst mismatch of + -- The @thv@ is in an earlier era compared to the @st@, so it does + -- not satisfy the HFC pipelining criterion. 
+ Right _ -> Nothing + -- The @thv@ is in a later era compared to the @st@. + Left thv' -> Just $ hcmap proxySingle withInitialSt (S thv') + where + withInitialSt :: + forall blk. + BlockSupportsDiffusionPipelining blk => + WrapTentativeHeaderView blk -> + Product WrapTentativeHeaderView WrapTentativeHeaderState blk + withInitialSt v = Pair v (WrapTentativeHeaderState initialSt) + where + initialSt = initialTentativeHeaderState (Proxy @blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/InitStorage.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/InitStorage.hs index a01e3bef0a..c5c4027b77 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/InitStorage.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/InitStorage.hs @@ -1,72 +1,75 @@ {-# LANGUAGE GADTs #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.HardFork.Combinator.Node.InitStorage () where -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Functors -import Data.SOP.Index -import Data.SOP.Strict -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.Ledger.Abstract (EmptyMK) -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB (..)) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Functors +import Data.SOP.Index +import Data.SOP.Strict +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import 
Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.Ledger.Abstract (EmptyMK) +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB (..)) instance CanHardFork xs => NodeInitStorage (HardForkBlock xs) where -- We use the chunk info from the first era nodeImmutableDbChunkInfo cfg = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - nodeImmutableDbChunkInfo - (hd cfgs) - where - cfgs = getPerEraStorageConfig (hardForkStorageConfigPerEra cfg) + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + nodeImmutableDbChunkInfo + (hd cfgs) + where + cfgs = getPerEraStorageConfig (hardForkStorageConfigPerEra cfg) -- Dispatch based on the era nodeCheckIntegrity cfg (HardForkBlock (OneEraBlock blk)) = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - hcollapse $ - hczipWith (Proxy @SingleEraBlock) aux cfgs blk - where - cfgs = getPerEraStorageConfig (hardForkStorageConfigPerEra cfg) + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + hcollapse $ + hczipWith (Proxy @SingleEraBlock) aux cfgs blk + where + cfgs = getPerEraStorageConfig (hardForkStorageConfigPerEra cfg) - aux :: NodeInitStorage blk => StorageConfig blk -> I blk -> K Bool blk - aux cfg' (I blk') = K $ nodeCheckIntegrity cfg' blk' + aux :: NodeInitStorage blk => StorageConfig blk -> I blk -> K Bool blk + aux cfg' (I blk') = K $ nodeCheckIntegrity cfg' blk' -- Call the 'nodeInitChainDB' of the era in which the current ledger is. -- -- In most cases, this will be the first era, except when one or more hard -- forks are statically scheduled at the first slot. 
nodeInitChainDB cfg (initChainDB :: InitChainDB m (HardForkBlock xs)) = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> do - currentLedger <- getCurrentLedger initChainDB - hcollapse $ - hcizipWith - proxySingle - aux - cfgs - (State.tip (hardForkLedgerStatePerEra currentLedger)) - where - cfgs = getPerEraStorageConfig (hardForkStorageConfigPerEra cfg) + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> do + currentLedger <- getCurrentLedger initChainDB + hcollapse $ + hcizipWith + proxySingle + aux + cfgs + (State.tip (hardForkLedgerStatePerEra currentLedger)) + where + cfgs = getPerEraStorageConfig (hardForkStorageConfigPerEra cfg) - aux :: - SingleEraBlock blk - => Index xs blk - -> StorageConfig blk - -> Flip LedgerState EmptyMK blk - -> K (m ()) blk - aux index cfg' (Flip currentLedger) = K $ - nodeInitChainDB cfg' InitChainDB { - addBlock = addBlock initChainDB - . injectNS' (Proxy @I) index + aux :: + SingleEraBlock blk => + Index xs blk -> + StorageConfig blk -> + Flip LedgerState EmptyMK blk -> + K (m ()) blk + aux index cfg' (Flip currentLedger) = + K $ + nodeInitChainDB + cfg' + InitChainDB + { addBlock = + addBlock initChainDB + . 
injectNS' (Proxy @I) index , getCurrentLedger = return currentLedger } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/Metrics.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/Metrics.hs index 3a5df365ce..d85a1810e8 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/Metrics.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/Metrics.hs @@ -2,20 +2,20 @@ module Ouroboros.Consensus.HardFork.Combinator.Node.Metrics () where -import Data.SOP.BasicFunctors -import Data.SOP.Strict -import Ouroboros.Consensus.Block.SupportsMetrics -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.Util +import Data.SOP.BasicFunctors +import Data.SOP.Strict +import Ouroboros.Consensus.Block.SupportsMetrics +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.Util instance CanHardFork xs => BlockSupportsMetrics (HardForkBlock xs) where isSelfIssued cfg hdr = - hcollapse - $ hczipWith - proxySingle - (K .: isSelfIssued) - (getPerEraBlockConfig $ hardForkBlockConfigPerEra cfg) - (getOneEraHeader $ getHardForkHeader hdr) + hcollapse $ + hczipWith + proxySingle + (K .: isSelfIssued) + (getPerEraBlockConfig $ hardForkBlockConfigPerEra cfg) + (getOneEraHeader $ getHardForkHeader hdr) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/SanityCheck.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/SanityCheck.hs index 
42a98a995a..da9ddfb128 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/SanityCheck.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Node/SanityCheck.hs @@ -3,34 +3,37 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# OPTIONS_GHC -Wno-orphans #-} + module Ouroboros.Consensus.HardFork.Combinator.Node.SanityCheck () where -import Data.List.NonEmpty (NonEmpty (..)) -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Strict -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.History.EpochInfo -import Ouroboros.Consensus.Protocol.Abstract +import Data.List.NonEmpty (NonEmpty (..)) +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Strict +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.History.EpochInfo +import Ouroboros.Consensus.Protocol.Abstract instance CanHardFork xs => BlockSupportsSanityCheck (HardForkBlock xs) where configAllSecurityParams tlc = - let configProtocol = topLevelConfigProtocol tlc in - hardForkConsensusConfigK configProtocol :| - perEraConsensusConfigSecurityParams (hardForkConsensusConfigPerEra configProtocol) + let configProtocol = topLevelConfigProtocol tlc + in hardForkConsensusConfigK configProtocol + :| perEraConsensusConfigSecurityParams (hardForkConsensusConfigPerEra configProtocol) 
-perEraConsensusConfigSecurityParams - :: All SingleEraBlock xs - => PerEraConsensusConfig xs -> [SecurityParam] +perEraConsensusConfigSecurityParams :: + All SingleEraBlock xs => + PerEraConsensusConfig xs -> [SecurityParam] perEraConsensusConfigSecurityParams (PerEraConsensusConfig xs) = unK $ hctraverse_ (Proxy @SingleEraBlock) go xs - where - go :: forall a . SingleEraBlock a - => WrapPartialConsensusConfig a -> K [SecurityParam] () - go (WrapPartialConsensusConfig c) = - K [ protocolSecurityParam (completeConsensusConfig (Proxy @(BlockProtocol a)) dummyEpochInfo c) ] + where + go :: + forall a. + SingleEraBlock a => + WrapPartialConsensusConfig a -> K [SecurityParam] () + go (WrapPartialConsensusConfig c) = + K [protocolSecurityParam (completeConsensusConfig (Proxy @(BlockProtocol a)) dummyEpochInfo c)] diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/PartialConfig.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/PartialConfig.hs index d0ee4cfde5..8ebc1380cc 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/PartialConfig.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/PartialConfig.hs @@ -9,32 +9,37 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.HardFork.Combinator.PartialConfig ( - HasPartialConsensusConfig (..) +module Ouroboros.Consensus.HardFork.Combinator.PartialConfig + ( HasPartialConsensusConfig (..) , HasPartialLedgerConfig (..) + -- * Newtype wrappers , WrapPartialConsensusConfig (..) , WrapPartialLedgerConfig (..) + -- * Convenience re-exports , EpochInfo (..) 
, Except , PastHorizonException ) where -import Cardano.Slotting.EpochInfo -import Control.Monad.Except (Except) -import Data.Kind (Type) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.History.Qry (PastHorizonException) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Protocol.Abstract +import Cardano.Slotting.EpochInfo +import Control.Monad.Except (Except) +import Data.Kind (Type) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.History.Qry (PastHorizonException) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Protocol.Abstract -- | Partial consensus config -class ( ConsensusProtocol p - , NoThunks (PartialConsensusConfig p) - ) => HasPartialConsensusConfig p where +class + ( ConsensusProtocol p + , NoThunks (PartialConsensusConfig p) + ) => + HasPartialConsensusConfig p + where type PartialConsensusConfig p :: Type type PartialConsensusConfig p = ConsensusConfig p @@ -42,35 +47,42 @@ class ( ConsensusProtocol p -- -- See comments for 'completeLedgerConfig' for some details about the -- 'EpochInfo'. 
- completeConsensusConfig :: proxy p - -> EpochInfo (Except PastHorizonException) - -> PartialConsensusConfig p -> ConsensusConfig p - - default completeConsensusConfig :: (PartialConsensusConfig p ~ ConsensusConfig p) - => proxy p - -> EpochInfo (Except PastHorizonException) - -> PartialConsensusConfig p -> ConsensusConfig p + completeConsensusConfig :: + proxy p -> + EpochInfo (Except PastHorizonException) -> + PartialConsensusConfig p -> + ConsensusConfig p + default completeConsensusConfig :: + PartialConsensusConfig p ~ ConsensusConfig p => + proxy p -> + EpochInfo (Except PastHorizonException) -> + PartialConsensusConfig p -> + ConsensusConfig p completeConsensusConfig _ _ = id -- | Construct partial consensus config from full consensus config -- -- NOTE: This is basically just losing 'EpochInfo', but that is constant -- anyway when we are dealing with a single era. - toPartialConsensusConfig :: proxy p - -> ConsensusConfig p - -> PartialConsensusConfig p - default toPartialConsensusConfig - :: (PartialConsensusConfig p ~ ConsensusConfig p) - => proxy p - -> ConsensusConfig p - -> PartialConsensusConfig p + toPartialConsensusConfig :: + proxy p -> + ConsensusConfig p -> + PartialConsensusConfig p + default toPartialConsensusConfig :: + PartialConsensusConfig p ~ ConsensusConfig p => + proxy p -> + ConsensusConfig p -> + PartialConsensusConfig p toPartialConsensusConfig _ = id -- | Partial ledger config -class ( UpdateLedger blk - , Show (PartialLedgerConfig blk) - , NoThunks (PartialLedgerConfig blk) - ) => HasPartialLedgerConfig blk where +class + ( UpdateLedger blk + , Show (PartialLedgerConfig blk) + , NoThunks (PartialLedgerConfig blk) + ) => + HasPartialLedgerConfig blk + where type PartialLedgerConfig blk :: Type type PartialLedgerConfig blk = LedgerConfig blk @@ -80,25 +92,33 @@ class ( UpdateLedger blk -- look past its horizon will result in a pure 'PastHorizonException'. 
-- The horizon is determined by the tip of the ledger /state/ (not view) -- from which the 'EpochInfo' is derived. - -- - completeLedgerConfig :: proxy blk - -> EpochInfo (Except PastHorizonException) - -> PartialLedgerConfig blk -> LedgerConfig blk - default completeLedgerConfig :: (PartialLedgerConfig blk ~ LedgerConfig blk) - => proxy blk - -> EpochInfo (Except PastHorizonException) - -> PartialLedgerConfig blk -> LedgerConfig blk + completeLedgerConfig :: + proxy blk -> + EpochInfo (Except PastHorizonException) -> + PartialLedgerConfig blk -> + LedgerConfig blk + default completeLedgerConfig :: + PartialLedgerConfig blk ~ LedgerConfig blk => + proxy blk -> + EpochInfo (Except PastHorizonException) -> + PartialLedgerConfig blk -> + LedgerConfig blk completeLedgerConfig _ _ = id {------------------------------------------------------------------------------- Newtype wrappers -------------------------------------------------------------------------------} -newtype WrapPartialLedgerConfig blk = WrapPartialLedgerConfig { unwrapPartialLedgerConfig :: PartialLedgerConfig blk } -newtype WrapPartialConsensusConfig blk = WrapPartialConsensusConfig { unwrapPartialConsensusConfig :: PartialConsensusConfig (BlockProtocol blk) } +newtype WrapPartialLedgerConfig blk = WrapPartialLedgerConfig + {unwrapPartialLedgerConfig :: PartialLedgerConfig blk} +newtype WrapPartialConsensusConfig blk = WrapPartialConsensusConfig + {unwrapPartialConsensusConfig :: PartialConsensusConfig (BlockProtocol blk)} -deriving instance NoThunks (PartialLedgerConfig blk) => NoThunks (WrapPartialLedgerConfig blk) -deriving instance NoThunks (PartialConsensusConfig (BlockProtocol blk)) => NoThunks (WrapPartialConsensusConfig blk) +deriving instance + NoThunks (PartialLedgerConfig blk) => NoThunks (WrapPartialLedgerConfig blk) +deriving instance + NoThunks (PartialConsensusConfig (BlockProtocol blk)) => NoThunks (WrapPartialConsensusConfig blk) -deriving newtype instance SerialiseNodeToClient blk 
(PartialLedgerConfig blk) - => SerialiseNodeToClient blk (WrapPartialLedgerConfig blk) +deriving newtype instance + SerialiseNodeToClient blk (PartialLedgerConfig blk) => + SerialiseNodeToClient blk (WrapPartialLedgerConfig blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol.hs index fa1dcaf582..e016f90d58 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol.hs @@ -11,72 +11,78 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.Protocol ( - HardForkSelectView (..) +module Ouroboros.Consensus.HardFork.Combinator.Protocol + ( HardForkSelectView (..) + -- * Re-exports to keep 'Protocol.State' an internal module , HardForkCanBeLeader , HardForkChainDepState , HardForkIsLeader , HardForkValidationErr (..) + -- * Re-exports to keep 'Protocol.LedgerView' an internal module , HardForkLedgerView , HardForkLedgerView_ (..) + -- * Type family instances , Ticked (..) 
) where -import Control.Monad.Except -import Data.Functor.Product -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Index -import Data.SOP.InPairs (InPairs (..)) -import qualified Data.SOP.InPairs as InPairs -import qualified Data.SOP.Match as Match -import qualified Data.SOP.OptNP as OptNP -import Data.SOP.Strict -import GHC.Generics (Generic) -import GHC.Stack -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.HardFork.Combinator.Info -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.Combinator.Protocol.ChainSel -import Ouroboros.Consensus.HardFork.Combinator.Protocol.LedgerView - (HardForkLedgerView, HardForkLedgerView_ (..)) -import Ouroboros.Consensus.HardFork.Combinator.State (HardForkState, - Translate (..)) -import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -import Ouroboros.Consensus.HardFork.Combinator.Translation as HFTranslation -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util ((.:)) +import Control.Monad.Except +import Data.Functor.Product +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.InPairs (InPairs (..)) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Index +import Data.SOP.Match qualified as Match +import Data.SOP.OptNP qualified as OptNP +import Data.SOP.Strict +import GHC.Generics (Generic) +import GHC.Stack +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import 
Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.HardFork.Combinator.Info +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.Combinator.Protocol.ChainSel +import Ouroboros.Consensus.HardFork.Combinator.Protocol.LedgerView + ( HardForkLedgerView + , HardForkLedgerView_ (..) + ) +import Ouroboros.Consensus.HardFork.Combinator.State + ( HardForkState + , Translate (..) + ) +import Ouroboros.Consensus.HardFork.Combinator.State qualified as State +import Ouroboros.Consensus.HardFork.Combinator.Translation as HFTranslation +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util ((.:)) {------------------------------------------------------------------------------- ChainSelection -------------------------------------------------------------------------------} -newtype HardForkSelectView xs = HardForkSelectView { - getHardForkSelectView :: WithBlockNo OneEraSelectView xs - } +newtype HardForkSelectView xs = HardForkSelectView + { getHardForkSelectView :: WithBlockNo OneEraSelectView xs + } deriving (Show, Eq) - deriving newtype (NoThunks) + deriving newtype NoThunks instance CanHardFork xs => Ord (HardForkSelectView xs) where compare (HardForkSelectView l) (HardForkSelectView r) = - acrossEraSelection - AcrossEraCompare - (hpure Proxy) - hardForkChainSel - (mapWithBlockNo getOneEraSelectView l) - (mapWithBlockNo getOneEraSelectView r) + acrossEraSelection + AcrossEraCompare + (hpure Proxy) + hardForkChainSel + (mapWithBlockNo getOneEraSelectView l) + (mapWithBlockNo getOneEraSelectView r) instance CanHardFork xs => ChainOrder (HardForkSelectView xs) where type ChainOrderConfig (HardForkSelectView xs) = PerEraChainOrderConfig xs @@ -93,11 +99,11 @@ instance CanHardFork xs => ChainOrder (HardForkSelectView xs) where (mapWithBlockNo getOneEraSelectView cand) mkHardForkSelectView :: - BlockNo - -> NS WrapSelectView xs - -> 
HardForkSelectView xs + BlockNo -> + NS WrapSelectView xs -> + HardForkSelectView xs mkHardForkSelectView bno view = - HardForkSelectView $ WithBlockNo bno (OneEraSelectView view) + HardForkSelectView $ WithBlockNo bno (OneEraSelectView view) {------------------------------------------------------------------------------- ConsensusProtocol @@ -108,17 +114,17 @@ type HardForkChainDepState xs = HardForkState WrapChainDepState xs instance CanHardFork xs => ConsensusProtocol (HardForkProtocol xs) where type ChainDepState (HardForkProtocol xs) = HardForkChainDepState xs type ValidationErr (HardForkProtocol xs) = HardForkValidationErr xs - type SelectView (HardForkProtocol xs) = HardForkSelectView xs - type LedgerView (HardForkProtocol xs) = HardForkLedgerView xs - type CanBeLeader (HardForkProtocol xs) = HardForkCanBeLeader xs - type IsLeader (HardForkProtocol xs) = HardForkIsLeader xs - type ValidateView (HardForkProtocol xs) = OneEraValidateView xs + type SelectView (HardForkProtocol xs) = HardForkSelectView xs + type LedgerView (HardForkProtocol xs) = HardForkLedgerView xs + type CanBeLeader (HardForkProtocol xs) = HardForkCanBeLeader xs + type IsLeader (HardForkProtocol xs) = HardForkIsLeader xs + type ValidateView (HardForkProtocol xs) = OneEraValidateView xs -- Operations on the state - tickChainDepState = tick - checkIsLeader = check - updateChainDepState = update + tickChainDepState = tick + checkIsLeader = check + updateChainDepState = update reupdateChainDepState = reupdate -- @@ -134,23 +140,23 @@ instance CanHardFork xs => ConsensusProtocol (HardForkProtocol xs) where instance CanHardFork xs => BlockSupportsProtocol (HardForkBlock xs) where validateView HardForkBlockConfig{..} = - OneEraValidateView + OneEraValidateView . hczipWith proxySingle (WrapValidateView .: validateView) cfgs . getOneEraHeader . 
getHardForkHeader - where - cfgs = getPerEraBlockConfig hardForkBlockConfigPerEra + where + cfgs = getPerEraBlockConfig hardForkBlockConfigPerEra selectView HardForkBlockConfig{..} hdr = - mkHardForkSelectView (blockNo hdr) + mkHardForkSelectView (blockNo hdr) . hczipWith proxySingle (WrapSelectView .: selectView) cfgs . getOneEraHeader $ getHardForkHeader hdr - where - cfgs = getPerEraBlockConfig hardForkBlockConfigPerEra + where + cfgs = getPerEraBlockConfig hardForkBlockConfigPerEra projectChainOrderConfig = - PerEraChainOrderConfig + PerEraChainOrderConfig . hcmap proxySingle (WrapChainOrderConfig . projectChainOrderConfig) . getPerEraBlockConfig . hardForkBlockConfigPerEra @@ -159,47 +165,52 @@ instance CanHardFork xs => BlockSupportsProtocol (HardForkBlock xs) where Ticking the chain dependent state -------------------------------------------------------------------------------} -data instance Ticked (HardForkChainDepState xs) = - TickedHardForkChainDepState { - tickedHardForkChainDepStatePerEra :: - HardForkState (Ticked :.: WrapChainDepState) xs - - -- | 'EpochInfo' constructed from the 'LedgerView' - , tickedHardForkChainDepStateEpochInfo :: - EpochInfo (Except PastHorizonException) +data instance Ticked (HardForkChainDepState xs) + = TickedHardForkChainDepState + { tickedHardForkChainDepStatePerEra :: + HardForkState (Ticked :.: WrapChainDepState) xs + , tickedHardForkChainDepStateEpochInfo :: + EpochInfo (Except PastHorizonException) + -- ^ 'EpochInfo' constructed from the 'LedgerView' + } + +tick :: + CanHardFork xs => + ConsensusConfig (HardForkProtocol xs) -> + HardForkLedgerView xs -> + SlotNo -> + HardForkChainDepState xs -> + Ticked (HardForkChainDepState xs) +tick + cfg@HardForkConsensusConfig{..} + (HardForkLedgerView transition ledgerView) + slot + chainDepState = + TickedHardForkChainDepState + { tickedHardForkChainDepStateEpochInfo = ei + , tickedHardForkChainDepStatePerEra = + State.align + (translateConsensus ei cfg) + (hcmap proxySingle 
(fn_2 . tickOne) cfgs) + ledgerView + chainDepState } - -tick :: CanHardFork xs - => ConsensusConfig (HardForkProtocol xs) - -> HardForkLedgerView xs - -> SlotNo - -> HardForkChainDepState xs - -> Ticked (HardForkChainDepState xs) -tick cfg@HardForkConsensusConfig{..} - (HardForkLedgerView transition ledgerView) - slot - chainDepState = TickedHardForkChainDepState { - tickedHardForkChainDepStateEpochInfo = ei - , tickedHardForkChainDepStatePerEra = - State.align - (translateConsensus ei cfg) - (hcmap proxySingle (fn_2 . tickOne) cfgs) - ledgerView - chainDepState - } - where + where cfgs = getPerEraConsensusConfig hardForkConsensusConfigPerEra - ei = State.epochInfoPrecomputedTransitionInfo - hardForkConsensusConfigShape - transition - ledgerView - - tickOne :: SingleEraBlock blk - => WrapPartialConsensusConfig blk - -> WrapLedgerView blk - -> WrapChainDepState blk - -> (Ticked :.: WrapChainDepState) blk - tickOne cfg' ledgerView' chainDepState' = Comp $ + ei = + State.epochInfoPrecomputedTransitionInfo + hardForkConsensusConfigShape + transition + ledgerView + + tickOne :: + SingleEraBlock blk => + WrapPartialConsensusConfig blk -> + WrapLedgerView blk -> + WrapChainDepState blk -> + (Ticked :.: WrapChainDepState) blk + tickOne cfg' ledgerView' chainDepState' = + Comp $ WrapTickedChainDepState $ tickChainDepState (completeConsensusConfig' ei cfg') @@ -224,16 +235,19 @@ type HardForkCanBeLeader xs = SomeErasCanBeLeader xs -- | POSTCONDITION: if the result is @Just isLeader@, then 'HardForkCanBeLeader' -- and the ticked 'ChainDepState' must be in the same era. The returned -- @isLeader@ will be from the same era. -check :: forall xs. 
(CanHardFork xs, HasCallStack) - => ConsensusConfig (HardForkProtocol xs) - -> HardForkCanBeLeader xs - -> SlotNo - -> Ticked (ChainDepState (HardForkProtocol xs)) - -> Maybe (HardForkIsLeader xs) -check HardForkConsensusConfig{..} - (SomeErasCanBeLeader canBeLeader) - slot - (TickedHardForkChainDepState chainDepState ei) = +check :: + forall xs. + (CanHardFork xs, HasCallStack) => + ConsensusConfig (HardForkProtocol xs) -> + HardForkCanBeLeader xs -> + SlotNo -> + Ticked (ChainDepState (HardForkProtocol xs)) -> + Maybe (HardForkIsLeader xs) +check + HardForkConsensusConfig{..} + (SomeErasCanBeLeader canBeLeader) + slot + (TickedHardForkChainDepState chainDepState ei) = undistrib $ hczipWith3 proxySingle @@ -241,150 +255,172 @@ check HardForkConsensusConfig{..} cfgs (OptNP.toNP canBeLeader) (State.tip chainDepState) - where + where cfgs = getPerEraConsensusConfig hardForkConsensusConfigPerEra checkOne :: - SingleEraBlock blk - => WrapPartialConsensusConfig blk - -> (Maybe :.: WrapCanBeLeader) blk - -> (Ticked :.: WrapChainDepState) blk - -> (Maybe :.: WrapIsLeader) blk + SingleEraBlock blk => + WrapPartialConsensusConfig blk -> + (Maybe :.: WrapCanBeLeader) blk -> + (Ticked :.: WrapChainDepState) blk -> + (Maybe :.: WrapIsLeader) blk checkOne cfg' (Comp mCanBeLeader) (Comp chainDepState') = Comp $ do - canBeLeader' <- mCanBeLeader - WrapIsLeader <$> - checkIsLeader - (completeConsensusConfig' ei cfg') - (unwrapCanBeLeader canBeLeader') - slot - (unwrapTickedChainDepState chainDepState') + canBeLeader' <- mCanBeLeader + WrapIsLeader + <$> checkIsLeader + (completeConsensusConfig' ei cfg') + (unwrapCanBeLeader canBeLeader') + slot + (unwrapTickedChainDepState chainDepState') undistrib :: NS (Maybe :.: WrapIsLeader) xs -> Maybe (HardForkIsLeader xs) undistrib = hcollapse . himap inj - where - inj :: Index xs blk - -> (Maybe :.: WrapIsLeader) blk - -> K (Maybe (HardForkIsLeader xs)) blk - inj index (Comp mIsLeader) = K $ - OneEraIsLeader . 
injectNS index <$> mIsLeader + where + inj :: + Index xs blk -> + (Maybe :.: WrapIsLeader) blk -> + K (Maybe (HardForkIsLeader xs)) blk + inj index (Comp mIsLeader) = + K $ + OneEraIsLeader . injectNS index <$> mIsLeader {------------------------------------------------------------------------------- Rolling forward and backward -------------------------------------------------------------------------------} -data HardForkValidationErr xs = - -- | Validation error from one of the eras +data HardForkValidationErr xs + = -- | Validation error from one of the eras HardForkValidationErrFromEra (OneEraValidationErr xs) - - -- | We tried to apply a block from the wrong era - | HardForkValidationErrWrongEra (MismatchEraInfo xs) - deriving (Generic) - -update :: forall xs. CanHardFork xs - => ConsensusConfig (HardForkProtocol xs) - -> OneEraValidateView xs - -> SlotNo - -> Ticked (HardForkChainDepState xs) - -> Except (HardForkValidationErr xs) (HardForkChainDepState xs) -update HardForkConsensusConfig{..} - (OneEraValidateView view) - slot - (TickedHardForkChainDepState chainDepState ei) = + | -- | We tried to apply a block from the wrong era + HardForkValidationErrWrongEra (MismatchEraInfo xs) + deriving Generic + +update :: + forall xs. + CanHardFork xs => + ConsensusConfig (HardForkProtocol xs) -> + OneEraValidateView xs -> + SlotNo -> + Ticked (HardForkChainDepState xs) -> + Except (HardForkValidationErr xs) (HardForkChainDepState xs) +update + HardForkConsensusConfig{..} + (OneEraValidateView view) + slot + (TickedHardForkChainDepState chainDepState ei) = case State.match view chainDepState of Left mismatch -> - throwError $ HardForkValidationErrWrongEra . MismatchEraInfo $ - Match.bihcmap - proxySingle - singleEraInfo - (LedgerEraInfo . chainDepStateInfo . State.currentState) - mismatch + throwError $ + HardForkValidationErrWrongEra . MismatchEraInfo $ + Match.bihcmap + proxySingle + singleEraInfo + (LedgerEraInfo . chainDepStateInfo . 
State.currentState) + mismatch Right matched -> - hsequence' - . hcizipWith proxySingle (updateEra ei slot) cfgs - $ matched - where + hsequence' + . hcizipWith proxySingle (updateEra ei slot) cfgs + $ matched + where cfgs = getPerEraConsensusConfig hardForkConsensusConfigPerEra -updateEra :: forall xs blk. (SListI xs, SingleEraBlock blk) - => EpochInfo (Except PastHorizonException) - -> SlotNo - -> Index xs blk - -> WrapPartialConsensusConfig blk - -> Product WrapValidateView (Ticked :.: WrapChainDepState) blk - -> (Except (HardForkValidationErr xs) :.: WrapChainDepState) blk -updateEra ei slot index cfg - (Pair view (Comp chainDepState)) = Comp $ - withExcept (injectValidationErr index) $ - fmap WrapChainDepState $ - updateChainDepState - (completeConsensusConfig' ei cfg) - (unwrapValidateView view) - slot - (unwrapTickedChainDepState chainDepState) - -reupdate :: forall xs. CanHardFork xs - => ConsensusConfig (HardForkProtocol xs) - -> OneEraValidateView xs - -> SlotNo - -> Ticked (HardForkChainDepState xs) - -> HardForkChainDepState xs -reupdate HardForkConsensusConfig{..} - (OneEraValidateView view) - slot - (TickedHardForkChainDepState chainDepState ei) = +updateEra :: + forall xs blk. + (SListI xs, SingleEraBlock blk) => + EpochInfo (Except PastHorizonException) -> + SlotNo -> + Index xs blk -> + WrapPartialConsensusConfig blk -> + Product WrapValidateView (Ticked :.: WrapChainDepState) blk -> + (Except (HardForkValidationErr xs) :.: WrapChainDepState) blk +updateEra + ei + slot + index + cfg + (Pair view (Comp chainDepState)) = + Comp $ + withExcept (injectValidationErr index) $ + fmap WrapChainDepState $ + updateChainDepState + (completeConsensusConfig' ei cfg) + (unwrapValidateView view) + slot + (unwrapTickedChainDepState chainDepState) + +reupdate :: + forall xs. 
+ CanHardFork xs => + ConsensusConfig (HardForkProtocol xs) -> + OneEraValidateView xs -> + SlotNo -> + Ticked (HardForkChainDepState xs) -> + HardForkChainDepState xs +reupdate + HardForkConsensusConfig{..} + (OneEraValidateView view) + slot + (TickedHardForkChainDepState chainDepState ei) = case State.match view chainDepState of Left mismatch -> - error $ show . HardForkValidationErrWrongEra . MismatchEraInfo $ - Match.bihcmap - proxySingle - singleEraInfo - (LedgerEraInfo . chainDepStateInfo . State.currentState) - mismatch + error $ + show . HardForkValidationErrWrongEra . MismatchEraInfo $ + Match.bihcmap + proxySingle + singleEraInfo + (LedgerEraInfo . chainDepStateInfo . State.currentState) + mismatch Right matched -> - hczipWith proxySingle (reupdateEra ei slot) cfgs - $ matched - where + hczipWith proxySingle (reupdateEra ei slot) cfgs $ + matched + where cfgs = getPerEraConsensusConfig hardForkConsensusConfigPerEra -reupdateEra :: SingleEraBlock blk - => EpochInfo (Except PastHorizonException) - -> SlotNo - -> WrapPartialConsensusConfig blk - -> Product WrapValidateView (Ticked :.: WrapChainDepState) blk - -> WrapChainDepState blk +reupdateEra :: + SingleEraBlock blk => + EpochInfo (Except PastHorizonException) -> + SlotNo -> + WrapPartialConsensusConfig blk -> + Product WrapValidateView (Ticked :.: WrapChainDepState) blk -> + WrapChainDepState blk reupdateEra ei slot cfg (Pair view (Comp chainDepState)) = - WrapChainDepState $ - reupdateChainDepState - (completeConsensusConfig' ei cfg) - (unwrapValidateView view) - slot - (unwrapTickedChainDepState chainDepState) + WrapChainDepState $ + reupdateChainDepState + (completeConsensusConfig' ei cfg) + (unwrapValidateView view) + slot + (unwrapTickedChainDepState chainDepState) {------------------------------------------------------------------------------- Auxiliary -------------------------------------------------------------------------------} -chainDepStateInfo :: forall blk. 
SingleEraBlock blk - => (Ticked :.: WrapChainDepState) blk -> SingleEraInfo blk +chainDepStateInfo :: + forall blk. + SingleEraBlock blk => + (Ticked :.: WrapChainDepState) blk -> SingleEraInfo blk chainDepStateInfo _ = singleEraInfo (Proxy @blk) -translateConsensus :: forall xs. CanHardFork xs - => EpochInfo (Except PastHorizonException) - -> ConsensusConfig (HardForkProtocol xs) - -> InPairs (Translate WrapChainDepState) xs +translateConsensus :: + forall xs. + CanHardFork xs => + EpochInfo (Except PastHorizonException) -> + ConsensusConfig (HardForkProtocol xs) -> + InPairs (Translate WrapChainDepState) xs translateConsensus ei HardForkConsensusConfig{..} = - InPairs.requiringBoth cfgs $ - HFTranslation.translateChainDepState hardForkEraTranslation - where - pcfgs = getPerEraConsensusConfig hardForkConsensusConfigPerEra - cfgs = hcmap proxySingle (completeConsensusConfig'' ei) pcfgs - -injectValidationErr :: SListI xs - => Index xs blk - -> ValidationErr (BlockProtocol blk) - -> HardForkValidationErr xs + InPairs.requiringBoth cfgs $ + HFTranslation.translateChainDepState hardForkEraTranslation + where + pcfgs = getPerEraConsensusConfig hardForkConsensusConfigPerEra + cfgs = hcmap proxySingle (completeConsensusConfig'' ei) pcfgs + +injectValidationErr :: + SListI xs => + Index xs blk -> + ValidationErr (BlockProtocol blk) -> + HardForkValidationErr xs injectValidationErr index = - HardForkValidationErrFromEra + HardForkValidationErrFromEra . OneEraValidationErr . injectNS index . 
WrapValidationErr @@ -393,6 +429,6 @@ injectValidationErr index = Instances -------------------------------------------------------------------------------} -deriving instance CanHardFork xs => Eq (HardForkValidationErr xs) -deriving instance CanHardFork xs => Show (HardForkValidationErr xs) +deriving instance CanHardFork xs => Eq (HardForkValidationErr xs) +deriving instance CanHardFork xs => Show (HardForkValidationErr xs) deriving instance CanHardFork xs => NoThunks (HardForkValidationErr xs) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol/ChainSel.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol/ChainSel.hs index 31bbcffc87..bf126e4fb1 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol/ChainSel.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol/ChainSel.hs @@ -9,24 +9,24 @@ {-# LANGUAGE TypeOperators #-} -- | Infrastructure for doing chain selection across eras -module Ouroboros.Consensus.HardFork.Combinator.Protocol.ChainSel ( - AcrossEraMode (..) +module Ouroboros.Consensus.HardFork.Combinator.Protocol.ChainSel + ( AcrossEraMode (..) , AcrossEraSelection (..) , WithBlockNo (..) 
, acrossEraSelection , mapWithBlockNo ) where -import Data.Kind (Type) -import Data.SOP.Constraint -import Data.SOP.Strict -import Data.SOP.Tails (Tails (..)) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.Kind (Type) +import Data.SOP.Constraint +import Data.SOP.Strict +import Data.SOP.Tails (Tails (..)) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- Configuration @@ -38,7 +38,6 @@ data AcrossEraSelection :: Type -> Type -> Type where -- This is a useful default when two eras run totally different consensus -- protocols, and we just want to choose the longer chain. CompareBlockNo :: AcrossEraSelection x y - -- | Two eras using the same 'SelectView'. In this case, we can just compare -- chains even across eras, as the chain ordering is fully captured by -- 'SelectView' and its 'ChainOrder' instance. @@ -48,115 +47,123 @@ data AcrossEraSelection :: Type -> Type -> Type where -- 'compareChains'. However, this choice is arbitrary; we could also make it -- configurable here. 
CompareSameSelectView :: - SelectView (BlockProtocol x) ~ SelectView (BlockProtocol y) - => AcrossEraSelection x y + SelectView (BlockProtocol x) ~ SelectView (BlockProtocol y) => + AcrossEraSelection x y {------------------------------------------------------------------------------- Compare two eras -------------------------------------------------------------------------------} - -- | GADT indicating whether we are lifting 'compare' or 'preferCandidate' to -- the HFC, together with the type of configuration we need for that and the -- result type. data AcrossEraMode cfg a where - AcrossEraCompare :: AcrossEraMode Proxy Ordering + AcrossEraCompare :: AcrossEraMode Proxy Ordering AcrossEraPreferCandidate :: AcrossEraMode WrapChainOrderConfig Bool applyAcrossEraMode :: - ChainOrder sv - => cfg blk - -> (WrapChainOrderConfig blk -> ChainOrderConfig sv) - -> AcrossEraMode cfg a - -> sv -> sv -> a + ChainOrder sv => + cfg blk -> + (WrapChainOrderConfig blk -> ChainOrderConfig sv) -> + AcrossEraMode cfg a -> + sv -> + sv -> + a applyAcrossEraMode cfg f = \case - AcrossEraCompare -> compare - AcrossEraPreferCandidate -> preferCandidate (f cfg) + AcrossEraCompare -> compare + AcrossEraPreferCandidate -> preferCandidate (f cfg) data FlipArgs = KeepArgs | FlipArgs acrossEras :: - forall blk blk' cfg a. SingleEraBlock blk - => FlipArgs - -> AcrossEraMode cfg a - -> cfg blk' - -- ^ The configuration corresponding to the later block/era, also see - -- 'CompareSameSelectView'. - -> WithBlockNo WrapSelectView blk - -> WithBlockNo WrapSelectView blk' - -> AcrossEraSelection blk blk' - -> a -acrossEras flipArgs mode cfg + forall blk blk' cfg a. + SingleEraBlock blk => + FlipArgs -> + AcrossEraMode cfg a -> + -- | The configuration corresponding to the later block/era, also see + -- 'CompareSameSelectView'. 
+ cfg blk' -> + WithBlockNo WrapSelectView blk -> + WithBlockNo WrapSelectView blk' -> + AcrossEraSelection blk blk' -> + a +acrossEras + flipArgs + mode + cfg (WithBlockNo bnoL (WrapSelectView l)) (WithBlockNo bnoR (WrapSelectView r)) = \case - CompareBlockNo -> maybeFlip cmp bnoL bnoR - where - cmp = applyAcrossEraMode cfg (const ()) mode + CompareBlockNo -> maybeFlip cmp bnoL bnoR + where + cmp = applyAcrossEraMode cfg (const ()) mode CompareSameSelectView -> maybeFlip cmp l r - where - cmp = applyAcrossEraMode cfg (unwrapChainOrderConfig) mode - where + where + cmp = applyAcrossEraMode cfg (unwrapChainOrderConfig) mode + where maybeFlip :: (b -> b -> a) -> b -> b -> a maybeFlip = case flipArgs of KeepArgs -> id FlipArgs -> flip acrossEraSelection :: - forall xs cfg a. - All SingleEraBlock xs - => AcrossEraMode cfg a - -> NP cfg xs - -> Tails AcrossEraSelection xs - -> WithBlockNo (NS WrapSelectView) xs - -> WithBlockNo (NS WrapSelectView) xs - -> a + forall xs cfg a. + All SingleEraBlock xs => + AcrossEraMode cfg a -> + NP cfg xs -> + Tails AcrossEraSelection xs -> + WithBlockNo (NS WrapSelectView) xs -> + WithBlockNo (NS WrapSelectView) xs -> + a acrossEraSelection mode = \cfg ffs l r -> - goBoth cfg ffs (distribBlockNo l, distribBlockNo r) - where - goBoth :: - All SingleEraBlock xs' - => NP cfg xs' - -> Tails AcrossEraSelection xs' - -> ( NS (WithBlockNo WrapSelectView) xs' - , NS (WithBlockNo WrapSelectView) xs' - ) - -> a - goBoth _ TNil = \(a, _) -> case a of {} - goBoth (cfg :* cfgs) (TCons fs ffs') = \case - (Z a, Z b) -> cmp (dropBlockNo a) (dropBlockNo b) - where - cmp = applyAcrossEraMode cfg unwrapChainOrderConfig mode - (Z a, S b) -> goOne KeepArgs a cfgs fs b - (S a, Z b) -> goOne FlipArgs b cfgs fs a - (S a, S b) -> goBoth cfgs ffs' (a, b) - - goOne :: - forall x xs'. 
(SingleEraBlock x, All SingleEraBlock xs') - => FlipArgs - -> WithBlockNo WrapSelectView x - -> NP cfg xs' - -> NP (AcrossEraSelection x) xs' - -> NS (WithBlockNo WrapSelectView) xs' - -> a - goOne flipArgs a = go - where - go :: forall xs''. All SingleEraBlock xs'' - => NP cfg xs'' - -> NP (AcrossEraSelection x) xs'' - -> NS (WithBlockNo WrapSelectView) xs'' - -> a - go _ Nil b = case b of {} - go (cfg :* _ ) (f :* _) (Z b) = acrossEras flipArgs mode cfg a b f - go (_ :* cfgs) (_ :* fs) (S b) = go cfgs fs b + goBoth cfg ffs (distribBlockNo l, distribBlockNo r) + where + goBoth :: + All SingleEraBlock xs' => + NP cfg xs' -> + Tails AcrossEraSelection xs' -> + ( NS (WithBlockNo WrapSelectView) xs' + , NS (WithBlockNo WrapSelectView) xs' + ) -> + a + goBoth _ TNil = \(a, _) -> case a of {} + goBoth (cfg :* cfgs) (TCons fs ffs') = \case + (Z a, Z b) -> cmp (dropBlockNo a) (dropBlockNo b) + where + cmp = applyAcrossEraMode cfg unwrapChainOrderConfig mode + (Z a, S b) -> goOne KeepArgs a cfgs fs b + (S a, Z b) -> goOne FlipArgs b cfgs fs a + (S a, S b) -> goBoth cfgs ffs' (a, b) + + goOne :: + forall x xs'. + (SingleEraBlock x, All SingleEraBlock xs') => + FlipArgs -> + WithBlockNo WrapSelectView x -> + NP cfg xs' -> + NP (AcrossEraSelection x) xs' -> + NS (WithBlockNo WrapSelectView) xs' -> + a + goOne flipArgs a = go + where + go :: + forall xs''. 
+ All SingleEraBlock xs'' => + NP cfg xs'' -> + NP (AcrossEraSelection x) xs'' -> + NS (WithBlockNo WrapSelectView) xs'' -> + a + go _ Nil b = case b of {} + go (cfg :* _) (f :* _) (Z b) = acrossEras flipArgs mode cfg a b f + go (_ :* cfgs) (_ :* fs) (S b) = go cfgs fs b {------------------------------------------------------------------------------- WithBlockNo -------------------------------------------------------------------------------} -data WithBlockNo (f :: k -> Type) (a :: k) = WithBlockNo { - getBlockNo :: BlockNo - , dropBlockNo :: f a - } +data WithBlockNo (f :: k -> Type) (a :: k) = WithBlockNo + { getBlockNo :: BlockNo + , dropBlockNo :: f a + } deriving (Show, Eq, Generic, NoThunks) mapWithBlockNo :: (f x -> g y) -> WithBlockNo f x -> WithBlockNo g y diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol/LedgerView.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol/LedgerView.hs index fa29f707c5..2e97448f9a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol/LedgerView.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Protocol/LedgerView.hs @@ -5,35 +5,35 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.HardFork.Combinator.Protocol.LedgerView ( - -- * Hard fork +module Ouroboros.Consensus.HardFork.Combinator.Protocol.LedgerView + ( -- * Hard fork HardForkLedgerView , HardForkLedgerView_ (..) + -- * Type family instances , Ticked (..) 
) where -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Dict (Dict (..), all_NP) -import Data.SOP.Strict -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.State.Instances () -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Dict (Dict (..), all_NP) +import Data.SOP.Strict +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.State.Instances () +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- HardForkLedgerView -------------------------------------------------------------------------------} -data HardForkLedgerView_ f xs = HardForkLedgerView { - -- | Information about the transition to the next era, if known - hardForkLedgerViewTransition :: !TransitionInfo - - -- | The underlying ledger view - , hardForkLedgerViewPerEra :: !(HardForkState f xs) - } +data HardForkLedgerView_ f xs = HardForkLedgerView + { hardForkLedgerViewTransition :: !TransitionInfo + -- ^ Information about the transition to the next era, if known + , hardForkLedgerViewPerEra :: !(HardForkState f xs) + -- ^ The underlying ledger view + } deriving instance CanHardFork xs => Show (HardForkLedgerView_ WrapLedgerView xs) @@ -45,14 +45,15 @@ type HardForkLedgerView = HardForkLedgerView_ WrapLedgerView instance (SListI xs, Show a) => Show (HardForkLedgerView_ (K a) xs) where show HardForkLedgerView{..} = - case (dictPast, dictCurrent) of - (Dict, Dict) -> show ( - hardForkLedgerViewTransition + case (dictPast, dictCurrent) of + (Dict, Dict) -> + show + ( hardForkLedgerViewTransition , getHardForkState hardForkLedgerViewPerEra ) - where 
- dictPast :: Dict (All (Compose Show (K Past))) xs - dictPast = all_NP $ hpure Dict + where + dictPast :: Dict (All (Compose Show (K Past))) xs + dictPast = all_NP $ hpure Dict - dictCurrent :: Dict (All (Compose Show (Current (K a)))) xs - dictCurrent = all_NP $ hpure Dict + dictCurrent :: Dict (All (Compose Show (Current (K a)))) xs + dictCurrent = all_NP $ hpure Dict diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation.hs index 578f6001c4..dc0ff352d6 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation.hs @@ -1,14 +1,17 @@ -- | Serialisation support for the HFC module Ouroboros.Consensus.HardFork.Combinator.Serialisation (module X) where -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common as X - (EraNodeToClientVersion (..), - HardForkNodeToClientVersion (..), - HardForkNodeToNodeVersion (..), - HardForkSpecificNodeToClientVersion (..), - HardForkSpecificNodeToNodeVersion (..), - SerialiseConstraintsHFC, SerialiseHFC (..), - isHardForkNodeToClientEnabled, isHardForkNodeToNodeEnabled) -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk as X () -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToClient as X () -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToNode as X () +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common as X + ( EraNodeToClientVersion (..) + , HardForkNodeToClientVersion (..) + , HardForkNodeToNodeVersion (..) + , HardForkSpecificNodeToClientVersion (..) + , HardForkSpecificNodeToNodeVersion (..) + , SerialiseConstraintsHFC + , SerialiseHFC (..) 
+ , isHardForkNodeToClientEnabled + , isHardForkNodeToNodeEnabled + ) +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk as X () +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToClient as X () +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToNode as X () diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/Common.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/Common.hs index a0b10c5cc5..636449a509 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/Common.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/Common.hs @@ -11,22 +11,23 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common ( - -- * Conditions required by the HFC to support serialisation +module Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common + ( -- * Conditions required by the HFC to support serialisation HardForkEncoderException (..) , SerialiseConstraintsHFC , SerialiseHFC (..) , disabledEraException , futureEraException , pSHFC + -- * Distinguish first era from the rest , FirstEra , LaterEra , isFirstEra , notFirstEra + -- * Versioning , EraNodeToClientVersion (..) , HardForkNodeToClientVersion (..) @@ -35,23 +36,29 @@ module Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common ( , HardForkSpecificNodeToNodeVersion (..) , isHardForkNodeToClientEnabled , isHardForkNodeToNodeEnabled + -- * Dealing with annotations , AnnDecoder (..) 
+ -- * Serialisation of telescopes , decodeTelescope , encodeTelescope + -- * Serialisation of sums , decodeAnnNS , decodeNS , encodeNS + -- * Dependent serialisation , decodeNested , decodeNestedCtxt , encodeNested , encodeNestedCtxt + -- * MismatchEraInfo , decodeEitherMismatch , encodeEitherMismatch + -- * Distributive properties , distribAnnTip , distribQueryIfCurrent @@ -59,47 +66,48 @@ module Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common ( , undistribAnnTip , undistribQueryIfCurrent , undistribSerialisedHeader + -- * Deriving-via support for tests , SerialiseNS (..) ) where -import Cardano.Binary (enforceSize) -import Codec.CBOR.Decoding (Decoder) -import qualified Codec.CBOR.Decoding as Dec -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.CBOR.Encoding as Enc -import Codec.Serialise (Serialise) -import qualified Codec.Serialise as Serialise -import Control.Exception (Exception, throw) -import qualified Data.ByteString.Lazy as Lazy -import Data.ByteString.Short (ShortByteString) -import qualified Data.ByteString.Short as Short -import Data.Kind (Type) -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Index -import qualified Data.SOP.Match as Match -import Data.SOP.Strict -import Data.SOP.Telescope (SimpleTelescope (..), Telescope (..)) -import qualified Data.SOP.Telescope as Telescope -import Data.Word -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.HardFork.Combinator.Info -import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query -import Ouroboros.Consensus.HardFork.Combinator.NetworkVersion -import Ouroboros.Consensus.HardFork.Combinator.State -import Ouroboros.Consensus.HardFork.Combinator.State.Instances -import Ouroboros.Consensus.Ledger.Query -import 
Ouroboros.Consensus.Ledger.Tables -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Network.Block (Serialised) +import Cardano.Binary (enforceSize) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Decoding qualified as Dec +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Encoding qualified as Enc +import Codec.Serialise (Serialise) +import Codec.Serialise qualified as Serialise +import Control.Exception (Exception, throw) +import Data.ByteString.Lazy qualified as Lazy +import Data.ByteString.Short (ShortByteString) +import Data.ByteString.Short qualified as Short +import Data.Kind (Type) +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Index +import Data.SOP.Match qualified as Match +import Data.SOP.Strict +import Data.SOP.Telescope (SimpleTelescope (..), Telescope (..)) +import Data.SOP.Telescope qualified as Telescope +import Data.Word +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.HardFork.Combinator.Info +import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query +import Ouroboros.Consensus.HardFork.Combinator.NetworkVersion +import Ouroboros.Consensus.HardFork.Combinator.State +import Ouroboros.Consensus.HardFork.Combinator.State.Instances +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Network.Block (Serialised) 
{------------------------------------------------------------------------------- Distinguish between the first era and all others @@ -111,34 +119,40 @@ type family FirstEra (xs :: [Type]) where type family LaterEra (xs :: [Type]) where LaterEra (x ': xs) = xs -isFirstEra :: forall f xs. All SingleEraBlock xs - => NS f xs - -> Either (NS SingleEraInfo (LaterEra xs)) (f (FirstEra xs)) +isFirstEra :: + forall f xs. + All SingleEraBlock xs => + NS f xs -> + Either (NS SingleEraInfo (LaterEra xs)) (f (FirstEra xs)) isFirstEra (Z x) = Right x isFirstEra (S x) = Left (hcmap proxySingle aux x) - where - aux :: forall blk. SingleEraBlock blk => f blk -> SingleEraInfo blk - aux _ = singleEraInfo (Proxy @blk) + where + aux :: forall blk. SingleEraBlock blk => f blk -> SingleEraInfo blk + aux _ = singleEraInfo (Proxy @blk) -- | Used to construct 'FutureEraException' -notFirstEra :: All SingleEraBlock xs - => NS f xs -- ^ 'NS' intended to be from a future era - -> NS SingleEraInfo xs +notFirstEra :: + All SingleEraBlock xs => + -- | 'NS' intended to be from a future era + NS f xs -> + NS SingleEraInfo xs notFirstEra = hcmap proxySingle aux - where - aux :: forall f blk. SingleEraBlock blk => f blk -> SingleEraInfo blk - aux _ = singleEraInfo (Proxy @blk) + where + aux :: forall f blk. 
SingleEraBlock blk => f blk -> SingleEraInfo blk + aux _ = singleEraInfo (Proxy @blk) {------------------------------------------------------------------------------- Conditions required by the HFC to support serialisation -------------------------------------------------------------------------------} -class ( SingleEraBlock blk - , SerialiseDiskConstraints blk - , SerialiseNodeToNodeConstraints blk - , SerialiseNodeToClientConstraints blk - , HasNetworkProtocolVersion blk - ) => SerialiseConstraintsHFC blk +class + ( SingleEraBlock blk + , SerialiseDiskConstraints blk + , SerialiseNodeToNodeConstraints blk + , SerialiseNodeToClientConstraints blk + , HasNetworkProtocolVersion blk + ) => + SerialiseConstraintsHFC blk pSHFC :: Proxy SerialiseConstraintsHFC pSHFC = Proxy @@ -168,111 +182,127 @@ pSHFC = Proxy -- sending blocks with the HFC disabled suggests that that tag is unexpected. -- This would then lead to problems with binary streaming, and we do not -- currently provide any provisions to resolve these. -class ( CanHardFork xs - , All SerialiseConstraintsHFC xs - -- Required for HasNetworkProtocolVersion - , All (Compose Show EraNodeToClientVersion) xs - , All (Compose Eq EraNodeToClientVersion) xs - , All (Compose Show WrapNodeToNodeVersion) xs - , All (Compose Eq WrapNodeToNodeVersion) xs - -- Required for 'encodeNestedCtxt'/'decodeNestedCtxt' - , All (EncodeDiskDepIx (NestedCtxt Header)) xs - , All (DecodeDiskDepIx (NestedCtxt Header)) xs - -- Required for 'getHfcBinaryBlockInfo' - , All HasBinaryBlockInfo xs - , All HasNetworkProtocolVersion xs - , All BlockSupportsLedgerQuery xs - -- LedgerTables on the HardForkBlock might not be compositionally - -- defined, but we need to require this instances for any instantiation. 
- , HasLedgerTables (LedgerState (HardForkBlock xs)) - , SerializeTablesWithHint (LedgerState (HardForkBlock xs)) - ) => SerialiseHFC xs where - - encodeDiskHfcBlock :: CodecConfig (HardForkBlock xs) - -> HardForkBlock xs -> Encoding +class + ( CanHardFork xs + , All SerialiseConstraintsHFC xs + , -- Required for HasNetworkProtocolVersion + All (Compose Show EraNodeToClientVersion) xs + , All (Compose Eq EraNodeToClientVersion) xs + , All (Compose Show WrapNodeToNodeVersion) xs + , All (Compose Eq WrapNodeToNodeVersion) xs + , -- Required for 'encodeNestedCtxt'/'decodeNestedCtxt' + All (EncodeDiskDepIx (NestedCtxt Header)) xs + , All (DecodeDiskDepIx (NestedCtxt Header)) xs + , -- Required for 'getHfcBinaryBlockInfo' + All HasBinaryBlockInfo xs + , All HasNetworkProtocolVersion xs + , All BlockSupportsLedgerQuery xs + , -- LedgerTables on the HardForkBlock might not be compositionally + -- defined, but we need to require this instances for any instantiation. + HasLedgerTables (LedgerState (HardForkBlock xs)) + , SerializeTablesWithHint (LedgerState (HardForkBlock xs)) + ) => + SerialiseHFC xs + where + encodeDiskHfcBlock :: + CodecConfig (HardForkBlock xs) -> + HardForkBlock xs -> + Encoding encodeDiskHfcBlock cfg = - encodeNS (hcmap pSHFC (fn . mapIK . encodeDisk) cfgs) + encodeNS (hcmap pSHFC (fn . mapIK . encodeDisk) cfgs) . (getOneEraBlock . getHardForkBlock) - where - cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) + where + cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) - decodeDiskHfcBlock :: CodecConfig (HardForkBlock xs) - -> forall s. Decoder s (Lazy.ByteString -> HardForkBlock xs) + decodeDiskHfcBlock :: + CodecConfig (HardForkBlock xs) -> + forall s. + Decoder s (Lazy.ByteString -> HardForkBlock xs) decodeDiskHfcBlock cfg = - (\f -> HardForkBlock . OneEraBlock . f) + (\f -> HardForkBlock . OneEraBlock . 
f) <$> decodeAnnNS (hcmap pSHFC aux cfgs) - where - cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) + where + cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) - aux :: SerialiseDiskConstraints blk - => CodecConfig blk -> AnnDecoder I blk - aux cfg' = AnnDecoder $ (I .) <$> decodeDisk cfg' + aux :: + SerialiseDiskConstraints blk => + CodecConfig blk -> AnnDecoder I blk + aux cfg' = AnnDecoder $ (I .) <$> decodeDisk cfg' -- | Used as the implementation of 'reconstructPrefixLen' for -- 'HardForkBlock'. reconstructHfcPrefixLen :: proxy (Header (HardForkBlock xs)) -> PrefixLen reconstructHfcPrefixLen _ = - -- We insert two bytes at the front - 2 `addPrefixLen` maximum (hcollapse perEra) - where - perEra :: NP (K PrefixLen) xs - perEra = hcpure proxySingle reconstructOne - - reconstructOne :: forall blk. SingleEraBlock blk - => K PrefixLen blk - reconstructOne = K $ reconstructPrefixLen (Proxy @(Header blk)) + -- We insert two bytes at the front + 2 `addPrefixLen` maximum (hcollapse perEra) + where + perEra :: NP (K PrefixLen) xs + perEra = hcpure proxySingle reconstructOne + + reconstructOne :: + forall blk. + SingleEraBlock blk => + K PrefixLen blk + reconstructOne = K $ reconstructPrefixLen (Proxy @(Header blk)) -- | Used as the implementation of 'reconstructNestedCtxt' for -- 'HardForkBlock'. 
reconstructHfcNestedCtxt :: - proxy (Header (HardForkBlock xs)) - -> ShortByteString -- ^ First bytes ('reconstructPrefixLen') of the block - -> SizeInBytes -- ^ Block size - -> SomeSecond (NestedCtxt Header) (HardForkBlock xs) + proxy (Header (HardForkBlock xs)) -> + -- | First bytes ('reconstructPrefixLen') of the block + ShortByteString -> + -- | Block size + SizeInBytes -> + SomeSecond (NestedCtxt Header) (HardForkBlock xs) reconstructHfcNestedCtxt _ prefix blockSize = - case nsFromIndex tag of - Nothing -> error $ "invalid HardForkBlock with tag: " <> show tag - Just ns -> injSomeSecond $ hcmap proxySingle reconstructOne ns - where - tag :: Word8 - tag = Short.index prefix 1 - - prefixOne :: ShortByteString - prefixOne = Short.pack . drop 2 . Short.unpack $ prefix - - reconstructOne :: forall blk. SingleEraBlock blk - => K () blk -> SomeSecond (NestedCtxt Header) blk - reconstructOne _ = - reconstructNestedCtxt (Proxy @(Header blk)) prefixOne blockSize - - injSomeSecond :: NS (SomeSecond (NestedCtxt Header)) xs' - -> SomeSecond (NestedCtxt Header) (HardForkBlock xs') - injSomeSecond (Z x) = case x of - SomeSecond (NestedCtxt y) -> SomeSecond (NestedCtxt (NCZ y)) - injSomeSecond (S x) = case injSomeSecond x of - SomeSecond (NestedCtxt y) -> SomeSecond (NestedCtxt (NCS y)) + case nsFromIndex tag of + Nothing -> error $ "invalid HardForkBlock with tag: " <> show tag + Just ns -> injSomeSecond $ hcmap proxySingle reconstructOne ns + where + tag :: Word8 + tag = Short.index prefix 1 + + prefixOne :: ShortByteString + prefixOne = Short.pack . drop 2 . Short.unpack $ prefix + + reconstructOne :: + forall blk. 
+ SingleEraBlock blk => + K () blk -> SomeSecond (NestedCtxt Header) blk + reconstructOne _ = + reconstructNestedCtxt (Proxy @(Header blk)) prefixOne blockSize + + injSomeSecond :: + NS (SomeSecond (NestedCtxt Header)) xs' -> + SomeSecond (NestedCtxt Header) (HardForkBlock xs') + injSomeSecond (Z x) = case x of + SomeSecond (NestedCtxt y) -> SomeSecond (NestedCtxt (NCZ y)) + injSomeSecond (S x) = case injSomeSecond x of + SomeSecond (NestedCtxt y) -> SomeSecond (NestedCtxt (NCS y)) -- | Used as the implementation of 'getBinaryBlockInfo' for -- 'HardForkBlock'. getHfcBinaryBlockInfo :: HardForkBlock xs -> BinaryBlockInfo getHfcBinaryBlockInfo (HardForkBlock (OneEraBlock bs)) = - hcollapse $ hcmap (Proxy @HasBinaryBlockInfo) aux bs - where - -- The header is unchanged, but the whole block is offset by 2 bytes - -- (list length and tag) - aux :: HasBinaryBlockInfo blk => I blk -> K BinaryBlockInfo blk - aux (I blk) = K $ BinaryBlockInfo { - headerOffset = headerOffset underlyingBlockInfo + 2 - , headerSize = headerSize underlyingBlockInfo + hcollapse $ hcmap (Proxy @HasBinaryBlockInfo) aux bs + where + -- The header is unchanged, but the whole block is offset by 2 bytes + -- (list length and tag) + aux :: HasBinaryBlockInfo blk => I blk -> K BinaryBlockInfo blk + aux (I blk) = + K $ + BinaryBlockInfo + { headerOffset = headerOffset underlyingBlockInfo + 2 + , headerSize = headerSize underlyingBlockInfo } - where - underlyingBlockInfo :: BinaryBlockInfo - underlyingBlockInfo = getBinaryBlockInfo blk + where + underlyingBlockInfo :: BinaryBlockInfo + underlyingBlockInfo = getBinaryBlockInfo blk -- | Used as the implementation of 'estimateBlockSize' for 'HardForkBlock'. estimateHfcBlockSize :: Header (HardForkBlock xs) -> SizeInBytes estimateHfcBlockSize = - (+ 2) -- Account for the era wrapper + (+ 2) -- Account for the era wrapper . hcollapse . hcmap (Proxy @SerialiseConstraintsHFC) (K . estimateBlockSize) . 
getOneEraHeader @@ -286,7 +316,6 @@ class ( CanHardFork xs data HardForkEncoderException where -- | HFC disabled, but we saw a value from an era other than the first HardForkEncoderFutureEra :: SingleEraInfo blk -> HardForkEncoderException - -- | HFC enabled, but we saw a value from a disabled era -- -- This is only thrown by the Node-to-Client codec. Two nodes' negotiated @@ -298,10 +327,8 @@ data HardForkEncoderException where -- -- See 'HardForkNodeToClientEnabled' for the use case. HardForkEncoderDisabledEra :: SingleEraInfo blk -> HardForkEncoderException - -- | HFC disabled, but we saw a query that is only supported by the HFC HardForkEncoderQueryHfcDisabled :: HardForkEncoderException - -- | HFC enabled, but we saw a HFC query that is not supported by the -- HFC-specific version used HardForkEncoderQueryWrongVersion :: HardForkEncoderException @@ -310,177 +337,203 @@ deriving instance Show HardForkEncoderException instance Exception HardForkEncoderException futureEraException :: - SListI xs - => NS SingleEraInfo xs - -> HardForkEncoderException + SListI xs => + NS SingleEraInfo xs -> + HardForkEncoderException futureEraException = hcollapse . hmap (K . HardForkEncoderFutureEra) disabledEraException :: - forall blk. SingleEraBlock blk - => Proxy blk - -> HardForkEncoderException + forall blk. + SingleEraBlock blk => + Proxy blk -> + HardForkEncoderException disabledEraException = HardForkEncoderDisabledEra . singleEraInfo {------------------------------------------------------------------------------- Dealing with annotations -------------------------------------------------------------------------------} -data AnnDecoder f blk = AnnDecoder { - annDecoder :: forall s. Decoder s (Lazy.ByteString -> f blk) - } +data AnnDecoder f blk = AnnDecoder + { annDecoder :: forall s. 
Decoder s (Lazy.ByteString -> f blk) + } {------------------------------------------------------------------------------- Serialisation of telescopes -------------------------------------------------------------------------------} -encodeTelescope :: SListI xs - => NP (f -.-> K Encoding) xs -> HardForkState f xs -> Encoding -encodeTelescope es (HardForkState st) = mconcat [ - Enc.encodeListLen (1 + fromIntegral ix) - , mconcat $ hcollapse $ SimpleTelescope - (Telescope.bihzipWith (const encPast) encCurrent es st) +encodeTelescope :: + SListI xs => + NP (f -.-> K Encoding) xs -> HardForkState f xs -> Encoding +encodeTelescope es (HardForkState st) = + mconcat + [ Enc.encodeListLen (1 + fromIntegral ix) + , mconcat $ + hcollapse $ + SimpleTelescope + (Telescope.bihzipWith (const encPast) encCurrent es st) ] - where - -- The tip of the telescope also tells us the length - ix :: Word8 - ix = nsToIndex (Telescope.tip st) + where + -- The tip of the telescope also tells us the length + ix :: Word8 + ix = nsToIndex (Telescope.tip st) - encPast :: K Past blk -> K Encoding blk - encPast = K . encodePast . unK + encPast :: K Past blk -> K Encoding blk + encPast = K . encodePast . unK - encCurrent :: (f -.-> K Encoding) blk -> Current f blk -> K Encoding blk - encCurrent enc = K . encodeCurrent (unK . apFn enc) + encCurrent :: (f -.-> K Encoding) blk -> Current f blk -> K Encoding blk + encCurrent enc = K . encodeCurrent (unK . 
apFn enc) decodeTelescope :: NP (Decoder s :.: f) xs -> Decoder s (HardForkState f xs) decodeTelescope = \ds -> do - ix <- Dec.decodeListLen - if ix < 1 - then fail $ "decodeTelescope: invalid telescope length " ++ show ix - else HardForkState <$> go (ix - 1) ds - where - go :: Int - -> NP (Decoder s :.: f) xs - -> Decoder s (Telescope (K Past) (Current f) xs) - go 0 (Comp d :* _) = TZ <$> decodeCurrent d - go i (Comp _ :* ds) = TS <$> (K <$> decodePast) <*> go (i - 1) ds - go _ Nil = error "decodeTelescope: invalid telescope length" + ix <- Dec.decodeListLen + if ix < 1 + then fail $ "decodeTelescope: invalid telescope length " ++ show ix + else HardForkState <$> go (ix - 1) ds + where + go :: + Int -> + NP (Decoder s :.: f) xs -> + Decoder s (Telescope (K Past) (Current f) xs) + go 0 (Comp d :* _) = TZ <$> decodeCurrent d + go i (Comp _ :* ds) = TS <$> (K <$> decodePast) <*> go (i - 1) ds + go _ Nil = error "decodeTelescope: invalid telescope length" {------------------------------------------------------------------------------- Serialisation of sums -------------------------------------------------------------------------------} encodeNS :: SListI xs => NP (f -.-> K Encoding) xs -> NS f xs -> Encoding -encodeNS es ns = mconcat [ - Enc.encodeListLen 2 +encodeNS es ns = + mconcat + [ Enc.encodeListLen 2 , Enc.encodeWord8 $ nsToIndex ns , hcollapse $ hzipWith apFn es ns ] decodeNS :: forall xs f s. SListI xs => NP (Decoder s :.: f) xs -> Decoder s (NS f xs) decodeNS ds = do - enforceSize "decodeNS" 2 - i <- Dec.decodeWord8 - case nsFromIndex i of - Nothing -> fail $ "decodeNS: invalid index " ++ show i - Just ns -> hcollapse $ hizipWith aux ds ns - where - aux :: Index xs blk - -> (Decoder s :.: f) blk - -> K () blk - -> K (Decoder s (NS f xs)) blk - aux index (Comp dec) (K ()) = K $ injectNS index <$> dec - -decodeAnnNS :: forall xs f. SListI xs - => NP (AnnDecoder f) xs - -> forall s. 
Decoder s (Lazy.ByteString -> NS f xs) + enforceSize "decodeNS" 2 + i <- Dec.decodeWord8 + case nsFromIndex i of + Nothing -> fail $ "decodeNS: invalid index " ++ show i + Just ns -> hcollapse $ hizipWith aux ds ns + where + aux :: + Index xs blk -> + (Decoder s :.: f) blk -> + K () blk -> + K (Decoder s (NS f xs)) blk + aux index (Comp dec) (K ()) = K $ injectNS index <$> dec + +decodeAnnNS :: + forall xs f. + SListI xs => + NP (AnnDecoder f) xs -> + forall s. + Decoder s (Lazy.ByteString -> NS f xs) decodeAnnNS ds = do - enforceSize "decodeDiskAnnNS" 2 - i <- Dec.decodeWord8 - case nsFromIndex i of - Nothing -> fail $ "decodeAnnNS: invalid index " ++ show i - Just ns -> hcollapse $ hizipWith aux ds ns - where - aux :: Index xs blk - -> AnnDecoder f blk - -> K () blk - -> K (Decoder s (Lazy.ByteString -> NS f xs)) blk - aux index (AnnDecoder dec) (K ()) = K $ (injectNS index .) <$> dec + enforceSize "decodeDiskAnnNS" 2 + i <- Dec.decodeWord8 + case nsFromIndex i of + Nothing -> fail $ "decodeAnnNS: invalid index " ++ show i + Just ns -> hcollapse $ hizipWith aux ds ns + where + aux :: + Index xs blk -> + AnnDecoder f blk -> + K () blk -> + K (Decoder s (Lazy.ByteString -> NS f xs)) blk + aux index (AnnDecoder dec) (K ()) = K $ (injectNS index .) 
<$> dec {------------------------------------------------------------------------------- Dependent serialisation -------------------------------------------------------------------------------} -encodeNested :: All (EncodeDiskDep (NestedCtxt f)) xs - => CodecConfig (HardForkBlock xs) - -> NestedCtxt f (HardForkBlock xs) a - -> a - -> Encoding +encodeNested :: + All (EncodeDiskDep (NestedCtxt f)) xs => + CodecConfig (HardForkBlock xs) -> + NestedCtxt f (HardForkBlock xs) a -> + a -> + Encoding encodeNested = \ccfg (NestedCtxt ctxt) a -> - go (getPerEraCodecConfig (hardForkCodecConfigPerEra ccfg)) ctxt a - where - go :: All (EncodeDiskDep (NestedCtxt f)) xs' - => NP CodecConfig xs' - -> NestedCtxt_ (HardForkBlock xs') f a - -> a -> Encoding - go Nil ctxt = case ctxt of {} - go (c :* _) (NCZ ctxt) = encodeDiskDep c (NestedCtxt ctxt) - go (_ :* cs) (NCS ctxt) = go cs ctxt - -decodeNested :: All (DecodeDiskDep (NestedCtxt f)) xs - => CodecConfig (HardForkBlock xs) - -> NestedCtxt f (HardForkBlock xs) a - -> forall s. Decoder s (Lazy.ByteString -> a) + go (getPerEraCodecConfig (hardForkCodecConfigPerEra ccfg)) ctxt a + where + go :: + All (EncodeDiskDep (NestedCtxt f)) xs' => + NP CodecConfig xs' -> + NestedCtxt_ (HardForkBlock xs') f a -> + a -> + Encoding + go Nil ctxt = case ctxt of {} + go (c :* _) (NCZ ctxt) = encodeDiskDep c (NestedCtxt ctxt) + go (_ :* cs) (NCS ctxt) = go cs ctxt + +decodeNested :: + All (DecodeDiskDep (NestedCtxt f)) xs => + CodecConfig (HardForkBlock xs) -> + NestedCtxt f (HardForkBlock xs) a -> + forall s. 
+ Decoder s (Lazy.ByteString -> a) decodeNested = \ccfg (NestedCtxt ctxt) -> - go (getPerEraCodecConfig (hardForkCodecConfigPerEra ccfg)) ctxt - where - go :: All (DecodeDiskDep (NestedCtxt f)) xs' - => NP CodecConfig xs' - -> NestedCtxt_ (HardForkBlock xs') f a - -> Decoder s (Lazy.ByteString -> a) - go Nil ctxt = case ctxt of {} - go (c :* _) (NCZ ctxt) = decodeDiskDep c (NestedCtxt ctxt) - go (_ :* cs) (NCS ctxt) = go cs ctxt - -encodeNestedCtxt :: All (EncodeDiskDepIx (NestedCtxt f)) xs - => CodecConfig (HardForkBlock xs) - -> SomeSecond (NestedCtxt f) (HardForkBlock xs) - -> Encoding + go (getPerEraCodecConfig (hardForkCodecConfigPerEra ccfg)) ctxt + where + go :: + All (DecodeDiskDep (NestedCtxt f)) xs' => + NP CodecConfig xs' -> + NestedCtxt_ (HardForkBlock xs') f a -> + Decoder s (Lazy.ByteString -> a) + go Nil ctxt = case ctxt of {} + go (c :* _) (NCZ ctxt) = decodeDiskDep c (NestedCtxt ctxt) + go (_ :* cs) (NCS ctxt) = go cs ctxt + +encodeNestedCtxt :: + All (EncodeDiskDepIx (NestedCtxt f)) xs => + CodecConfig (HardForkBlock xs) -> + SomeSecond (NestedCtxt f) (HardForkBlock xs) -> + Encoding encodeNestedCtxt = \ccfg (SomeSecond ctxt) -> - go (getPerEraCodecConfig (hardForkCodecConfigPerEra ccfg)) - npWithIndices - (flipNestedCtxt ctxt) - where - go :: All (EncodeDiskDepIx (NestedCtxt f)) xs' - => NP CodecConfig xs' - -> NP (K Word8) xs' - -> NestedCtxt_ (HardForkBlock xs') f a - -> Encoding - go Nil _ ctxt = case ctxt of {} - go (_ :* cs) (_ :* is) (NCS ctxt) = go cs is ctxt - go (c :* _) (K i :* _) (NCZ ctxt) = mconcat [ - Enc.encodeListLen 2 - , Serialise.encode i - , encodeDiskDepIx c (SomeSecond (NestedCtxt ctxt)) - ] - -decodeNestedCtxt :: All (DecodeDiskDepIx (NestedCtxt f)) xs - => CodecConfig (HardForkBlock xs) - -> forall s. 
Decoder s (SomeSecond (NestedCtxt f) (HardForkBlock xs)) + go + (getPerEraCodecConfig (hardForkCodecConfigPerEra ccfg)) + npWithIndices + (flipNestedCtxt ctxt) + where + go :: + All (EncodeDiskDepIx (NestedCtxt f)) xs' => + NP CodecConfig xs' -> + NP (K Word8) xs' -> + NestedCtxt_ (HardForkBlock xs') f a -> + Encoding + go Nil _ ctxt = case ctxt of {} + go (_ :* cs) (_ :* is) (NCS ctxt) = go cs is ctxt + go (c :* _) (K i :* _) (NCZ ctxt) = + mconcat + [ Enc.encodeListLen 2 + , Serialise.encode i + , encodeDiskDepIx c (SomeSecond (NestedCtxt ctxt)) + ] + +decodeNestedCtxt :: + All (DecodeDiskDepIx (NestedCtxt f)) xs => + CodecConfig (HardForkBlock xs) -> + forall s. + Decoder s (SomeSecond (NestedCtxt f) (HardForkBlock xs)) decodeNestedCtxt = \ccfg -> do - enforceSize "decodeNestedCtxt" 2 - tag <- Serialise.decode - case nsFromIndex tag of - Nothing -> fail $ "decodeNestedCtxt: invalid tag " ++ show tag - Just ns -> - go (getPerEraCodecConfig (hardForkCodecConfigPerEra ccfg)) ns - where - go :: All (DecodeDiskDepIx (NestedCtxt f)) xs' - => NP CodecConfig xs' - -> NS (K ()) xs' - -> forall s. Decoder s (SomeSecond (NestedCtxt f) (HardForkBlock xs')) - go Nil i = case i of {} - go (c :* _) (Z _) = mapSomeNestedCtxt NCZ <$> decodeDiskDepIx c - go (_ :* cs) (S i) = mapSomeNestedCtxt NCS <$> go cs i + enforceSize "decodeNestedCtxt" 2 + tag <- Serialise.decode + case nsFromIndex tag of + Nothing -> fail $ "decodeNestedCtxt: invalid tag " ++ show tag + Just ns -> + go (getPerEraCodecConfig (hardForkCodecConfigPerEra ccfg)) ns + where + go :: + All (DecodeDiskDepIx (NestedCtxt f)) xs' => + NP CodecConfig xs' -> + NS (K ()) xs' -> + forall s. 
+ Decoder s (SomeSecond (NestedCtxt f) (HardForkBlock xs')) + go Nil i = case i of {} + go (c :* _) (Z _) = mapSomeNestedCtxt NCZ <$> decodeDiskDepIx c + go (_ :* cs) (S i) = mapSomeNestedCtxt NCS <$> go cs i {------------------------------------------------------------------------------- Serialisation of 'MismatchEraInfo' @@ -489,105 +542,115 @@ decodeNestedCtxt = \ccfg -> do using 'HardForkNodeToClientDisabled'. -------------------------------------------------------------------------------} -encodeEitherMismatch :: forall xs a. SListI xs - => BlockNodeToClientVersion (HardForkBlock xs) - -> (a -> Encoding) - -> (Either (MismatchEraInfo xs) a -> Encoding) +encodeEitherMismatch :: + forall xs a. + SListI xs => + BlockNodeToClientVersion (HardForkBlock xs) -> + (a -> Encoding) -> + (Either (MismatchEraInfo xs) a -> Encoding) encodeEitherMismatch version enc ma = - case (version, ma) of - (HardForkNodeToClientDisabled {}, Right a) -> - enc a - (HardForkNodeToClientDisabled {}, Left err) -> - throw $ futureEraException (mismatchFutureEra err) - (HardForkNodeToClientEnabled {}, Right a) -> mconcat [ - Enc.encodeListLen 1 - , enc a - ] - (HardForkNodeToClientEnabled {}, Left (MismatchEraInfo err)) -> mconcat [ - Enc.encodeListLen 2 - , encodeNS (hpure (fn encodeName)) era1 - , encodeNS (hpure (fn (encodeName . getLedgerEraInfo))) era2 - ] - where - era1 :: NS SingleEraInfo xs - era2 :: NS LedgerEraInfo xs - (era1, era2) = Match.mismatchToNS err - where - encodeName :: SingleEraInfo blk -> K Encoding blk - encodeName = K . Serialise.encode . 
singleEraName - -decodeEitherMismatch :: SListI xs - => BlockNodeToClientVersion (HardForkBlock xs) - -> Decoder s a - -> Decoder s (Either (MismatchEraInfo xs) a) + case (version, ma) of + (HardForkNodeToClientDisabled{}, Right a) -> + enc a + (HardForkNodeToClientDisabled{}, Left err) -> + throw $ futureEraException (mismatchFutureEra err) + (HardForkNodeToClientEnabled{}, Right a) -> + mconcat + [ Enc.encodeListLen 1 + , enc a + ] + (HardForkNodeToClientEnabled{}, Left (MismatchEraInfo err)) -> + mconcat + [ Enc.encodeListLen 2 + , encodeNS (hpure (fn encodeName)) era1 + , encodeNS (hpure (fn (encodeName . getLedgerEraInfo))) era2 + ] + where + era1 :: NS SingleEraInfo xs + era2 :: NS LedgerEraInfo xs + (era1, era2) = Match.mismatchToNS err + where + encodeName :: SingleEraInfo blk -> K Encoding blk + encodeName = K . Serialise.encode . singleEraName + +decodeEitherMismatch :: + SListI xs => + BlockNodeToClientVersion (HardForkBlock xs) -> + Decoder s a -> + Decoder s (Either (MismatchEraInfo xs) a) decodeEitherMismatch version dec = - case version of - HardForkNodeToClientDisabled {} -> - Right <$> dec - HardForkNodeToClientEnabled {} -> do - tag <- Dec.decodeListLen - case tag of - 1 -> Right <$> dec - 2 -> do era1 <- decodeNS (hpure (Comp decodeName)) - era2 <- decodeNS (hpure (Comp (LedgerEraInfo <$> decodeName))) - case Match.matchNS era1 era2 of - Left err -> return $ Left (MismatchEraInfo err) - Right _ -> fail "dispatchDecoderErr: unexpected match" - _ -> fail $ "dispatchDecoderErr: invalid tag " ++ show tag - where - decodeName :: forall blk s. 
Decoder s (SingleEraInfo blk) - decodeName = SingleEraInfo <$> Serialise.decode + case version of + HardForkNodeToClientDisabled{} -> + Right <$> dec + HardForkNodeToClientEnabled{} -> do + tag <- Dec.decodeListLen + case tag of + 1 -> Right <$> dec + 2 -> do + era1 <- decodeNS (hpure (Comp decodeName)) + era2 <- decodeNS (hpure (Comp (LedgerEraInfo <$> decodeName))) + case Match.matchNS era1 era2 of + Left err -> return $ Left (MismatchEraInfo err) + Right _ -> fail "dispatchDecoderErr: unexpected match" + _ -> fail $ "dispatchDecoderErr: invalid tag " ++ show tag + where + decodeName :: forall blk s. Decoder s (SingleEraInfo blk) + decodeName = SingleEraInfo <$> Serialise.decode {------------------------------------------------------------------------------- Distributive properties -------------------------------------------------------------------------------} -distribSerialisedHeader :: SerialisedHeader (HardForkBlock xs) - -> NS SerialisedHeader xs +distribSerialisedHeader :: + SerialisedHeader (HardForkBlock xs) -> + NS SerialisedHeader xs distribSerialisedHeader = \hdr -> - case serialisedHeaderToDepPair hdr of - GenDepPair (NestedCtxt ctxt) bs -> - go ctxt bs - where - go :: NestedCtxt_ (HardForkBlock xs) Header a - -> Serialised a - -> NS SerialisedHeader xs - go (NCZ c) = Z . SerialisedHeaderFromDepPair . GenDepPair (NestedCtxt c) - go (NCS c) = S . go c - -undistribSerialisedHeader :: NS SerialisedHeader xs - -> SerialisedHeader (HardForkBlock xs) + case serialisedHeaderToDepPair hdr of + GenDepPair (NestedCtxt ctxt) bs -> + go ctxt bs + where + go :: + NestedCtxt_ (HardForkBlock xs) Header a -> + Serialised a -> + NS SerialisedHeader xs + go (NCZ c) = Z . SerialisedHeaderFromDepPair . GenDepPair (NestedCtxt c) + go (NCS c) = S . go c + +undistribSerialisedHeader :: + NS SerialisedHeader xs -> + SerialisedHeader (HardForkBlock xs) undistribSerialisedHeader = - SerialisedHeaderFromDepPair . 
go - where - go :: NS SerialisedHeader xs - -> GenDepPair Serialised (NestedCtxt Header (HardForkBlock xs)) - go (Z (SerialisedHeaderFromDepPair (GenDepPair (NestedCtxt c) bs))) = - GenDepPair (NestedCtxt (NCZ c)) bs - go (S bs) = - depPairFirst (mapNestedCtxt NCS) $ go bs + SerialisedHeaderFromDepPair . go + where + go :: + NS SerialisedHeader xs -> + GenDepPair Serialised (NestedCtxt Header (HardForkBlock xs)) + go (Z (SerialisedHeaderFromDepPair (GenDepPair (NestedCtxt c) bs))) = + GenDepPair (NestedCtxt (NCZ c)) bs + go (S bs) = + depPairFirst (mapNestedCtxt NCS) $ go bs distribQueryIfCurrent :: - SomeBlockQuery (QueryIfCurrent xs) - -> NS (SomeBlockQuery :.: BlockQuery) xs + SomeBlockQuery (QueryIfCurrent xs) -> + NS (SomeBlockQuery :.: BlockQuery) xs distribQueryIfCurrent = go - where - go :: SomeBlockQuery (QueryIfCurrent xs) -> NS (SomeBlockQuery :.: BlockQuery) xs - go (SomeBlockQuery (QZ qry)) = Z (Comp (SomeBlockQuery qry)) - go (SomeBlockQuery (QS qry)) = S (go (SomeBlockQuery qry)) + where + go :: SomeBlockQuery (QueryIfCurrent xs) -> NS (SomeBlockQuery :.: BlockQuery) xs + go (SomeBlockQuery (QZ qry)) = Z (Comp (SomeBlockQuery qry)) + go (SomeBlockQuery (QS qry)) = S (go (SomeBlockQuery qry)) undistribQueryIfCurrent :: - NS (SomeBlockQuery :.: BlockQuery) xs - -> SomeBlockQuery (QueryIfCurrent xs) + NS (SomeBlockQuery :.: BlockQuery) xs -> + SomeBlockQuery (QueryIfCurrent xs) undistribQueryIfCurrent = go - where - go :: NS (SomeBlockQuery :.: BlockQuery) xs -> SomeBlockQuery (QueryIfCurrent xs) - go (Z qry) = case qry of - Comp (SomeBlockQuery qry') -> - SomeBlockQuery (QZ qry') - go (S qry) = case go qry of - SomeBlockQuery qry' -> - SomeBlockQuery (QS qry') + where + go :: NS (SomeBlockQuery :.: BlockQuery) xs -> SomeBlockQuery (QueryIfCurrent xs) + go (Z qry) = case qry of + Comp (SomeBlockQuery qry') -> + SomeBlockQuery (QZ qry') + go (S qry) = case go qry of + SomeBlockQuery qry' -> + SomeBlockQuery (QS qry') 
{------------------------------------------------------------------------------- Deriving-via support @@ -602,15 +665,23 @@ undistribQueryIfCurrent = go -- -- > deriving via SerialiseNS Header SomeEras -- > instance Serialise (Header SomeSecond) -newtype SerialiseNS f xs = SerialiseNS { - getSerialiseNS :: NS f xs - } +newtype SerialiseNS f xs = SerialiseNS + { getSerialiseNS :: NS f xs + } instance All (Compose Serialise f) xs => Serialise (SerialiseNS f xs) where - encode = encodeNS (hcpure (Proxy @(Compose Serialise f)) - (fn (K . Serialise.encode))) - . getSerialiseNS - - decode = SerialiseNS - <$> decodeNS (hcpure (Proxy @(Compose Serialise f)) - (Comp Serialise.decode)) + encode = + encodeNS + ( hcpure + (Proxy @(Compose Serialise f)) + (fn (K . Serialise.encode)) + ) + . getSerialiseNS + + decode = + SerialiseNS + <$> decodeNS + ( hcpure + (Proxy @(Compose Serialise f)) + (Comp Serialise.decode) + ) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseDisk.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseDisk.hs index 69e350b434..6dc2ddc590 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseDisk.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseDisk.hs @@ -1,37 +1,36 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE ScopedTypeVariables #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk () where -import Codec.CBOR.Encoding (Encoding) -import qualified Data.ByteString.Lazy as Lazy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Dict (Dict (..), all_NP) -import Data.SOP.Functors -import Data.SOP.Strict -import Ouroboros.Consensus.Block -import 
Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Protocol -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Storage.ChainDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.TypeFamilyWrappers - -instance SerialiseHFC xs => SerialiseDiskConstraints (HardForkBlock xs) +import Codec.CBOR.Encoding (Encoding) +import Data.ByteString.Lazy qualified as Lazy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Dict (Dict (..), all_NP) +import Data.SOP.Functors +import Data.SOP.Strict +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Protocol +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Storage.ChainDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.TypeFamilyWrappers + +instance SerialiseHFC xs => SerialiseDiskConstraints (HardForkBlock xs) {------------------------------------------------------------------------------- 'ReconstructNestedCtxt' -------------------------------------------------------------------------------} instance SerialiseHFC xs => ReconstructNestedCtxt Header (HardForkBlock xs) where - reconstructPrefixLen = reconstructHfcPrefixLen + reconstructPrefixLen = reconstructHfcPrefixLen reconstructNestedCtxt = reconstructHfcNestedCtxt {------------------------------------------------------------------------------- @@ -45,90 +44,115 @@ instance SerialiseHFC xs => HasBinaryBlockInfo (HardForkBlock xs) where Blocks/headers 
-------------------------------------------------------------------------------} -instance SerialiseHFC xs - => EncodeDisk (HardForkBlock xs) (HardForkBlock xs) where +instance + SerialiseHFC xs => + EncodeDisk (HardForkBlock xs) (HardForkBlock xs) + where encodeDisk = encodeDiskHfcBlock -instance SerialiseHFC xs - => DecodeDisk (HardForkBlock xs) (Lazy.ByteString -> HardForkBlock xs) where +instance + SerialiseHFC xs => + DecodeDisk (HardForkBlock xs) (Lazy.ByteString -> HardForkBlock xs) + where decodeDisk = decodeDiskHfcBlock -instance SerialiseHFC xs - => EncodeDiskDepIx (NestedCtxt Header) (HardForkBlock xs) where +instance + SerialiseHFC xs => + EncodeDiskDepIx (NestedCtxt Header) (HardForkBlock xs) + where encodeDiskDepIx = encodeNestedCtxt -instance SerialiseHFC xs - => DecodeDiskDepIx (NestedCtxt Header) (HardForkBlock xs) where +instance + SerialiseHFC xs => + DecodeDiskDepIx (NestedCtxt Header) (HardForkBlock xs) + where decodeDiskDepIx = decodeNestedCtxt -instance SerialiseHFC xs - => EncodeDiskDep (NestedCtxt Header) (HardForkBlock xs) where +instance + SerialiseHFC xs => + EncodeDiskDep (NestedCtxt Header) (HardForkBlock xs) + where encodeDiskDep = - case dict of - Dict -> encodeNested - where - dict :: Dict (All (EncodeDiskDep (NestedCtxt Header))) xs - dict = all_NP (hcpure pSHFC Dict) - -instance SerialiseHFC xs - => DecodeDiskDep (NestedCtxt Header) (HardForkBlock xs) where + case dict of + Dict -> encodeNested + where + dict :: Dict (All (EncodeDiskDep (NestedCtxt Header))) xs + dict = all_NP (hcpure pSHFC Dict) + +instance + SerialiseHFC xs => + DecodeDiskDep (NestedCtxt Header) (HardForkBlock xs) + where decodeDiskDep = - case dict of - Dict -> decodeNested - where - dict :: Dict (All (DecodeDiskDep (NestedCtxt Header))) xs - dict = all_NP (hcpure pSHFC Dict) + case dict of + Dict -> decodeNested + where + dict :: Dict (All (DecodeDiskDep (NestedCtxt Header))) xs + dict = all_NP (hcpure pSHFC Dict) 
{------------------------------------------------------------------------------- Ledger state -------------------------------------------------------------------------------} -instance SerialiseHFC xs - => EncodeDisk (HardForkBlock xs) (AnnTip (HardForkBlock xs)) where +instance + SerialiseHFC xs => + EncodeDisk (HardForkBlock xs) (AnnTip (HardForkBlock xs)) + where encodeDisk cfg = - encodeNS (hcmap pSHFC (fn . (K .: encodeDisk)) cfgs) + encodeNS (hcmap pSHFC (fn . (K .: encodeDisk)) cfgs) . distribAnnTip - where - cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) + where + cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) -instance SerialiseHFC xs - => DecodeDisk (HardForkBlock xs) (AnnTip (HardForkBlock xs)) where +instance + SerialiseHFC xs => + DecodeDisk (HardForkBlock xs) (AnnTip (HardForkBlock xs)) + where decodeDisk cfg = - fmap undistribAnnTip - $ decodeNS (hcmap pSHFC (Comp . decodeDisk) cfgs) - where - cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) - -instance SerialiseHFC xs - => EncodeDisk (HardForkBlock xs) (HardForkChainDepState xs) where + fmap undistribAnnTip $ + decodeNS (hcmap pSHFC (Comp . decodeDisk) cfgs) + where + cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) + +instance + SerialiseHFC xs => + EncodeDisk (HardForkBlock xs) (HardForkChainDepState xs) + where encodeDisk cfg = - encodeTelescope (hcmap pSHFC (fn . aux) cfgs) - where - cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) - - aux :: SerialiseDiskConstraints blk - => CodecConfig blk -> WrapChainDepState blk -> K Encoding blk - aux cfg' (WrapChainDepState st) = K $ encodeDisk cfg' st - -instance SerialiseHFC xs - => DecodeDisk (HardForkBlock xs) (HardForkChainDepState xs) where + encodeTelescope (hcmap pSHFC (fn . 
aux) cfgs) + where + cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) + + aux :: + SerialiseDiskConstraints blk => + CodecConfig blk -> WrapChainDepState blk -> K Encoding blk + aux cfg' (WrapChainDepState st) = K $ encodeDisk cfg' st + +instance + SerialiseHFC xs => + DecodeDisk (HardForkBlock xs) (HardForkChainDepState xs) + where decodeDisk cfg = - decodeTelescope (hcmap pSHFC (Comp . fmap WrapChainDepState . decodeDisk) cfgs) - where - cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) - -instance SerialiseHFC xs - => EncodeDisk (HardForkBlock xs) (LedgerState (HardForkBlock xs) EmptyMK) where + decodeTelescope (hcmap pSHFC (Comp . fmap WrapChainDepState . decodeDisk) cfgs) + where + cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) + +instance + SerialiseHFC xs => + EncodeDisk (HardForkBlock xs) (LedgerState (HardForkBlock xs) EmptyMK) + where encodeDisk cfg = - encodeTelescope (hcmap pSHFC (\cfg' -> fn (K . encodeDisk cfg' . unFlip)) cfgs) + encodeTelescope (hcmap pSHFC (\cfg' -> fn (K . encodeDisk cfg' . unFlip)) cfgs) . hardForkLedgerStatePerEra - where - cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) + where + cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) -instance SerialiseHFC xs - => DecodeDisk (HardForkBlock xs) (LedgerState (HardForkBlock xs) EmptyMK) where +instance + SerialiseHFC xs => + DecodeDisk (HardForkBlock xs) (LedgerState (HardForkBlock xs) EmptyMK) + where decodeDisk cfg = - fmap HardForkLedgerState - $ decodeTelescope (hcmap pSHFC (Comp . fmap Flip . decodeDisk) cfgs) - where - cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) + fmap HardForkLedgerState $ + decodeTelescope (hcmap pSHFC (Comp . fmap Flip . 
decodeDisk) cfgs) + where + cfgs = getPerEraCodecConfig (hardForkCodecConfigPerEra cfg) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseNodeToClient.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseNodeToClient.hs index f0744af20e..b0d43a63f5 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseNodeToClient.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseNodeToClient.hs @@ -11,45 +11,50 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeOperators #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToClient () where -import Cardano.Binary (enforceSize) -import Codec.CBOR.Decoding (Decoder) -import qualified Codec.CBOR.Decoding as Dec -import Codec.CBOR.Encoding (Encoding, encodeListLen) -import qualified Codec.CBOR.Encoding as Enc -import qualified Codec.Serialise as Serialise -import Control.Exception (throw) -import Data.Maybe (catMaybes) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Counting -import Data.SOP.NonEmpty (ProofNonEmpty (..), checkIsNonEmpty, - isNonEmpty) -import Data.SOP.Sing (lengthSList) -import Data.SOP.Strict -import qualified Data.Text as T -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query -import Ouroboros.Consensus.HardFork.Combinator.Mempool -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk () -import 
qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Util ((.:)) -import Ouroboros.Network.Block (Serialised, unwrapCBORinCBOR, - wrapCBORinCBOR) +import Cardano.Binary (enforceSize) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Decoding qualified as Dec +import Codec.CBOR.Encoding (Encoding, encodeListLen) +import Codec.CBOR.Encoding qualified as Enc +import Codec.Serialise qualified as Serialise +import Control.Exception (throw) +import Data.Maybe (catMaybes) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Counting +import Data.SOP.NonEmpty + ( ProofNonEmpty (..) + , checkIsNonEmpty + , isNonEmpty + ) +import Data.SOP.Sing (lengthSList) +import Data.SOP.Strict +import Data.Text qualified as T +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Ledger.Query +import Ouroboros.Consensus.HardFork.Combinator.Mempool +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk () +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool (GenTxId) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Util ((.:)) +import Ouroboros.Network.Block + ( Serialised + , unwrapCBORinCBOR + , wrapCBORinCBOR + ) 
{------------------------------------------------------------------------------- Serialisation of products @@ -60,144 +65,151 @@ import Ouroboros.Network.Block (Serialised, unwrapCBORinCBOR, -- that the protocol version does not support those eras. Hence, omitting the -- corresponding elements is the correct behavior. encodeNodeToClientNP :: - forall f xs. SerialiseHFC xs - => ( forall x. SerialiseConstraintsHFC x - => CodecConfig x - -> BlockNodeToClientVersion x - -> f x - -> Encoding - ) - -- ^ The encoding of the individual elements (assuming era `x` is enabled). - -> CodecConfig (HardForkBlock xs) - -> HardForkNodeToClientVersion xs - -> NP f xs - -> Encoding + forall f xs. + SerialiseHFC xs => + -- | The encoding of the individual elements (assuming era `x` is enabled). + ( forall x. + SerialiseConstraintsHFC x => + CodecConfig x -> + BlockNodeToClientVersion x -> + f x -> + Encoding + ) -> + CodecConfig (HardForkBlock xs) -> + HardForkNodeToClientVersion xs -> + NP f xs -> + Encoding encodeNodeToClientNP encodeElement (HardForkCodecConfig (PerEraCodecConfig ccfgs)) version xs - | Just err <- validateHardForkNodeToClientVersion version - = error err - | otherwise - = case version of - HardForkNodeToClientDisabled versionX -> case ccfgs of - ccfg :* _ -> case xs of - x :* _ -> encodeElement ccfg versionX x - HardForkNodeToClientEnabled _ subVersions -> - let components :: [Encoding] - components = catMaybes - $ hcollapse - $ hczipWith3 - (Proxy @SerialiseConstraintsHFC) - (\ccfg subVersionMay x -> K $ case subVersionMay of - EraNodeToClientEnabled subVersion -> Just (encodeElement ccfg subVersion x) - -- Omit disabled eras - EraNodeToClientDisabled -> Nothing - ) - ccfgs - subVersions - xs - listLen = fromIntegral (length components) - in Enc.encodeListLen listLen <> mconcat components + | Just err <- validateHardForkNodeToClientVersion version = + error err + | otherwise = + case version of + HardForkNodeToClientDisabled versionX -> case ccfgs of + ccfg :* _ 
-> case xs of + x :* _ -> encodeElement ccfg versionX x + HardForkNodeToClientEnabled _ subVersions -> + let components :: [Encoding] + components = + catMaybes $ + hcollapse $ + hczipWith3 + (Proxy @SerialiseConstraintsHFC) + ( \ccfg subVersionMay x -> K $ case subVersionMay of + EraNodeToClientEnabled subVersion -> Just (encodeElement ccfg subVersion x) + -- Omit disabled eras + EraNodeToClientDisabled -> Nothing + ) + ccfgs + subVersions + xs + listLen = fromIntegral (length components) + in Enc.encodeListLen listLen <> mconcat components -- | Decoding of `NP f xs`. If any eras are disabled in the version and hence -- missing in the serialisation, then this throws an exception. In effect, -- deserialisation of product types is only supported when the sender uses an -- equal or superset of eras. decodeNodeToClientNP :: - forall f xs. SerialiseHFC xs - => ( forall x. SerialiseConstraintsHFC x - => CodecConfig x - -> BlockNodeToClientVersion x - -> (forall s. Decoder s (f x)) - ) - -- ^ The decoding of the individual elements (assuming era `x` is enabled). - -> CodecConfig (HardForkBlock xs) - -> HardForkNodeToClientVersion xs - -> (forall s. Decoder s (NP f xs)) + forall f xs. + SerialiseHFC xs => + -- | The decoding of the individual elements (assuming era `x` is enabled). + ( forall x. + SerialiseConstraintsHFC x => + CodecConfig x -> + BlockNodeToClientVersion x -> + (forall s. Decoder s (f x)) + ) -> + CodecConfig (HardForkBlock xs) -> + HardForkNodeToClientVersion xs -> + (forall s. 
Decoder s (NP f xs)) decodeNodeToClientNP decodeElement (HardForkCodecConfig (PerEraCodecConfig ccfgs)) version - | Just err <- validateHardForkNodeToClientVersion version - = error err - | otherwise - = case version of - HardForkNodeToClientDisabled versionX -> case ccfgs of - (ccfg :* Nil) -> do - singleElement <- decodeElement ccfg versionX - return (singleElement :* Nil) - _ -> failVersion - - HardForkNodeToClientEnabled _ subVersions -> do - enforceSize failVersionTxt expectedN - hsequence' $ hczipWith - (Proxy @SerialiseConstraintsHFC) - (\ccfg subVersionMay -> Comp $ case subVersionMay of - EraNodeToClientEnabled subVersion -> decodeElement ccfg subVersion - -- Fail if any era is disabled - EraNodeToClientDisabled -> failVersion - ) - ccfgs - subVersions - where - expectedN = lengthSList (Proxy @xs) - - failVersion :: Decoder s a - failVersion = fail failVersionStr - failVersionStr = "decodeNodeToClient: (NP f xs): incompatible node-to-client version" - failVersionTxt = T.pack failVersionStr + | Just err <- validateHardForkNodeToClientVersion version = + error err + | otherwise = + case version of + HardForkNodeToClientDisabled versionX -> case ccfgs of + (ccfg :* Nil) -> do + singleElement <- decodeElement ccfg versionX + return (singleElement :* Nil) + _ -> failVersion + HardForkNodeToClientEnabled _ subVersions -> do + enforceSize failVersionTxt expectedN + hsequence' $ + hczipWith + (Proxy @SerialiseConstraintsHFC) + ( \ccfg subVersionMay -> Comp $ case subVersionMay of + EraNodeToClientEnabled subVersion -> decodeElement ccfg subVersion + -- Fail if any era is disabled + EraNodeToClientDisabled -> failVersion + ) + ccfgs + subVersions + where + expectedN = lengthSList (Proxy @xs) + + failVersion :: Decoder s a + failVersion = fail failVersionStr + failVersionStr = "decodeNodeToClient: (NP f xs): incompatible node-to-client version" + failVersionTxt = T.pack failVersionStr -- | Check that @version@ consists of a run of 0 or more enabled eras followed -- 
by a run of 0 or more disabled eras. Returns an error message if not. validateHardForkNodeToClientVersion :: - SerialiseHFC xs - => HardForkNodeToClientVersion xs - -> Maybe String + SerialiseHFC xs => + HardForkNodeToClientVersion xs -> + Maybe String validateHardForkNodeToClientVersion version = case version of - HardForkNodeToClientDisabled _ -> Nothing - HardForkNodeToClientEnabled _ subVersions -> goEnabled subVersions - where - goEnabled :: NP EraNodeToClientVersion xs' -> Maybe String - goEnabled v = case v of - Nil -> Nothing - EraNodeToClientEnabled _ :* v' -> goEnabled v' - EraNodeToClientDisabled :* v' -> goDisabled v' - - goDisabled :: NP EraNodeToClientVersion xs' -> Maybe String - goDisabled v = case v of - Nil -> Nothing - EraNodeToClientEnabled _ :* _ -> Just $ + HardForkNodeToClientDisabled _ -> Nothing + HardForkNodeToClientEnabled _ subVersions -> goEnabled subVersions + where + goEnabled :: NP EraNodeToClientVersion xs' -> Maybe String + goEnabled v = case v of + Nil -> Nothing + EraNodeToClientEnabled _ :* v' -> goEnabled v' + EraNodeToClientDisabled :* v' -> goDisabled v' + + goDisabled :: NP EraNodeToClientVersion xs' -> Maybe String + goDisabled v = case v of + Nil -> Nothing + EraNodeToClientEnabled _ :* _ -> + Just $ "Expected HardForkNodeToClientVersion to consists of a run of 0 or more" - <> " enabled eras followed by a run of 0 or more disabled eras, but got: " - <> show version - EraNodeToClientDisabled :* v' -> goDisabled v' + <> " enabled eras followed by a run of 0 or more disabled eras, but got: " + <> show version + EraNodeToClientDisabled :* v' -> goDisabled v' instance SerialiseHFC xs => SerialiseNodeToClientConstraints (HardForkBlock xs) instance SerialiseHFC xs => SerialiseNodeToClient (HardForkBlock xs) (HardForkLedgerConfig xs) where encodeNodeToClient ccfg version (HardForkLedgerConfig hflcShape perEraLedgerConfig) = - mconcat [ - encodeListLen 2 - , encodeNodeToClient @_ @(History.Shape xs) ccfg version hflcShape - , 
encodeNodeToClient @_ @(PerEraLedgerConfig xs) ccfg version perEraLedgerConfig - ] + mconcat + [ encodeListLen 2 + , encodeNodeToClient @_ @(History.Shape xs) ccfg version hflcShape + , encodeNodeToClient @_ @(PerEraLedgerConfig xs) ccfg version perEraLedgerConfig + ] decodeNodeToClient ccfg version = do - enforceSize "HardForkLedgerConfig" 2 - HardForkLedgerConfig - <$> decodeNodeToClient @_ @(History.Shape xs) ccfg version - <*> decodeNodeToClient @_ @(PerEraLedgerConfig xs) ccfg version + enforceSize "HardForkLedgerConfig" 2 + HardForkLedgerConfig + <$> decodeNodeToClient @_ @(History.Shape xs) ccfg version + <*> decodeNodeToClient @_ @(PerEraLedgerConfig xs) ccfg version instance SerialiseHFC xs => SerialiseNodeToClient (HardForkBlock xs) (History.Shape xs) where encodeNodeToClient ccfg version (History.Shape (Exactly xs)) = encodeNodeToClientNP - (\_ _ (K a) -> Serialise.encode a) - ccfg - version - xs + (\_ _ (K a) -> Serialise.encode a) + ccfg + version + xs decodeNodeToClient ccfg version = - History.Shape . Exactly <$> decodeNodeToClientNP + History.Shape . Exactly + <$> decodeNodeToClientNP (\_ _ -> K <$> Serialise.decode) ccfg version @@ -206,85 +218,104 @@ instance SerialiseHFC xs => SerialiseNodeToClient (HardForkBlock xs) (History.Sh Dispatch to first era or HFC -------------------------------------------------------------------------------} -dispatchEncoder :: forall f xs. ( - SerialiseHFC xs - , forall blk. SerialiseNodeToClientConstraints blk - => SerialiseNodeToClient blk (f blk) - ) - => CodecConfig (HardForkBlock xs) - -> BlockNodeToClientVersion (HardForkBlock xs) - -> NS f xs -> Encoding +dispatchEncoder :: + forall f xs. + ( SerialiseHFC xs + , forall blk. 
+ SerialiseNodeToClientConstraints blk => + SerialiseNodeToClient blk (f blk) + ) => + CodecConfig (HardForkBlock xs) -> + BlockNodeToClientVersion (HardForkBlock xs) -> + NS f xs -> + Encoding dispatchEncoder ccfg version ns = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - case (ccfgs, version, ns) of - (c0 :* _, HardForkNodeToClientDisabled v0, Z x0) -> - encodeNodeToClient c0 v0 x0 - (_, HardForkNodeToClientDisabled _, S later) -> - throw $ futureEraException (notFirstEra later) - (_, HardForkNodeToClientEnabled _ versions, _) -> - encodeNS (hczipWith pSHFC aux ccfgs versions) ns - where - ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg - - aux :: forall blk. (SingleEraBlock blk, SerialiseNodeToClientConstraints blk) - => CodecConfig blk - -> EraNodeToClientVersion blk - -> (f -.-> K Encoding) blk - aux ccfg' (EraNodeToClientEnabled v) = Fn $ K . encodeNodeToClient ccfg' v - aux _ EraNodeToClientDisabled = Fn $ \_ -> - throw $ disabledEraException (Proxy @blk) - -dispatchDecoder :: forall f xs. ( - SerialiseHFC xs - , forall blk. SerialiseNodeToClientConstraints blk - => SerialiseNodeToClient blk (f blk) - ) - => CodecConfig (HardForkBlock xs) - -> BlockNodeToClientVersion (HardForkBlock xs) - -> forall s. Decoder s (NS f xs) + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + case (ccfgs, version, ns) of + (c0 :* _, HardForkNodeToClientDisabled v0, Z x0) -> + encodeNodeToClient c0 v0 x0 + (_, HardForkNodeToClientDisabled _, S later) -> + throw $ futureEraException (notFirstEra later) + (_, HardForkNodeToClientEnabled _ versions, _) -> + encodeNS (hczipWith pSHFC aux ccfgs versions) ns + where + ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg + + aux :: + forall blk. + (SingleEraBlock blk, SerialiseNodeToClientConstraints blk) => + CodecConfig blk -> + EraNodeToClientVersion blk -> + (f -.-> K Encoding) blk + aux ccfg' (EraNodeToClientEnabled v) = Fn $ K . 
encodeNodeToClient ccfg' v + aux _ EraNodeToClientDisabled = Fn $ \_ -> + throw $ disabledEraException (Proxy @blk) + +dispatchDecoder :: + forall f xs. + ( SerialiseHFC xs + , forall blk. + SerialiseNodeToClientConstraints blk => + SerialiseNodeToClient blk (f blk) + ) => + CodecConfig (HardForkBlock xs) -> + BlockNodeToClientVersion (HardForkBlock xs) -> + forall s. + Decoder s (NS f xs) dispatchDecoder ccfg version = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - case (ccfgs, version) of - (c0 :* _, HardForkNodeToClientDisabled v0) -> - Z <$> decodeNodeToClient c0 v0 - (_, HardForkNodeToClientEnabled _ versions) -> - decodeNS (hczipWith pSHFC aux ccfgs versions) - where - ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg - - aux :: forall blk. (SingleEraBlock blk, SerialiseNodeToClientConstraints blk) - => CodecConfig blk - -> EraNodeToClientVersion blk - -> forall s. (Decoder s :.: f) blk - aux ccfg' (EraNodeToClientEnabled v) = Comp $ decodeNodeToClient ccfg' v - aux _ EraNodeToClientDisabled = Comp $ - fail . show $ disabledEraException (Proxy @blk) - -dispatchEncoderErr :: forall f xs. ( - SerialiseHFC xs - , forall blk. SerialiseNodeToClientConstraints blk - => SerialiseNodeToClient blk (f blk) - ) - => CodecConfig (HardForkBlock xs) - -> BlockNodeToClientVersion (HardForkBlock xs) - -> Either (MismatchEraInfo xs) (NS f xs) -> Encoding + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + case (ccfgs, version) of + (c0 :* _, HardForkNodeToClientDisabled v0) -> + Z <$> decodeNodeToClient c0 v0 + (_, HardForkNodeToClientEnabled _ versions) -> + decodeNS (hczipWith pSHFC aux ccfgs versions) + where + ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg + + aux :: + forall blk. + (SingleEraBlock blk, SerialiseNodeToClientConstraints blk) => + CodecConfig blk -> + EraNodeToClientVersion blk -> + forall s. 
+ (Decoder s :.: f) blk + aux ccfg' (EraNodeToClientEnabled v) = Comp $ decodeNodeToClient ccfg' v + aux _ EraNodeToClientDisabled = + Comp $ + fail . show $ + disabledEraException (Proxy @blk) + +dispatchEncoderErr :: + forall f xs. + ( SerialiseHFC xs + , forall blk. + SerialiseNodeToClientConstraints blk => + SerialiseNodeToClient blk (f blk) + ) => + CodecConfig (HardForkBlock xs) -> + BlockNodeToClientVersion (HardForkBlock xs) -> + Either (MismatchEraInfo xs) (NS f xs) -> + Encoding dispatchEncoderErr ccfg version = - encodeEitherMismatch version $ - dispatchEncoder ccfg version - -dispatchDecoderErr :: forall f xs. ( - SerialiseHFC xs - , forall blk. SerialiseNodeToClientConstraints blk - => SerialiseNodeToClient blk (f blk) - ) - => CodecConfig (HardForkBlock xs) - -> BlockNodeToClientVersion (HardForkBlock xs) - -> forall s. Decoder s (Either (MismatchEraInfo xs) (NS f xs)) + encodeEitherMismatch version $ + dispatchEncoder ccfg version + +dispatchDecoderErr :: + forall f xs. + ( SerialiseHFC xs + , forall blk. + SerialiseNodeToClientConstraints blk => + SerialiseNodeToClient blk (f blk) + ) => + CodecConfig (HardForkBlock xs) -> + BlockNodeToClientVersion (HardForkBlock xs) -> + forall s. 
+ Decoder s (Either (MismatchEraInfo xs) (NS f xs)) dispatchDecoderErr ccfg version = - decodeEitherMismatch version $ - dispatchDecoder ccfg version + decodeEitherMismatch version $ + dispatchDecoder ccfg version after :: (a -> b -> d -> e) -> (c -> d) -> a -> b -> c -> e after f g x y z = f x y (g z) @@ -303,17 +334,21 @@ instance SerialiseHFC xs => SerialiseNodeToClient (HardForkBlock xs) (PerEraLedg Blocks -------------------------------------------------------------------------------} -instance SerialiseHFC xs - => SerialiseNodeToClient (HardForkBlock xs) (HardForkBlock xs) where - encodeNodeToClient ccfg _ = wrapCBORinCBOR (encodeDiskHfcBlock ccfg) +instance + SerialiseHFC xs => + SerialiseNodeToClient (HardForkBlock xs) (HardForkBlock xs) + where + encodeNodeToClient ccfg _ = wrapCBORinCBOR (encodeDiskHfcBlock ccfg) decodeNodeToClient ccfg _ = unwrapCBORinCBOR (decodeDiskHfcBlock ccfg) {------------------------------------------------------------------------------- Serialised blocks -------------------------------------------------------------------------------} -instance SerialiseHFC xs - => SerialiseNodeToClient (HardForkBlock xs) (Serialised (HardForkBlock xs)) where +instance + SerialiseHFC xs => + SerialiseNodeToClient (HardForkBlock xs) (Serialised (HardForkBlock xs)) + where encodeNodeToClient _ _ = Serialise.encode decodeNodeToClient _ _ = Serialise.decode @@ -321,23 +356,31 @@ instance SerialiseHFC xs Transactions -------------------------------------------------------------------------------} -instance SerialiseHFC xs - => SerialiseNodeToClient (HardForkBlock xs) (GenTx (HardForkBlock xs)) where +instance + SerialiseHFC xs => + SerialiseNodeToClient (HardForkBlock xs) (GenTx (HardForkBlock xs)) + where encodeNodeToClient = dispatchEncoder `after` (getOneEraGenTx . getHardForkGenTx) decodeNodeToClient = fmap (HardForkGenTx . 
OneEraGenTx) .: dispatchDecoder -instance SerialiseHFC xs - => SerialiseNodeToClient (HardForkBlock xs) (GenTxId (HardForkBlock xs)) where +instance + SerialiseHFC xs => + SerialiseNodeToClient (HardForkBlock xs) (GenTxId (HardForkBlock xs)) + where encodeNodeToClient = dispatchEncoder `after` (getOneEraGenTxId . getHardForkGenTxId) decodeNodeToClient = fmap (HardForkGenTxId . OneEraGenTxId) .: dispatchDecoder -instance SerialiseHFC xs - => SerialiseNodeToClient (HardForkBlock xs) SlotNo where +instance + SerialiseHFC xs => + SerialiseNodeToClient (HardForkBlock xs) SlotNo + where encodeNodeToClient _ _ = Serialise.encode decodeNodeToClient _ _ = Serialise.decode -instance SerialiseHFC xs - => SerialiseNodeToClient (HardForkBlock xs) (HardForkApplyTxErr xs) where +instance + SerialiseHFC xs => + SerialiseNodeToClient (HardForkBlock xs) (HardForkApplyTxErr xs) + where encodeNodeToClient = dispatchEncoderErr `after` (fmap getOneEraApplyTxErr . hardForkApplyTxErrToEither) decodeNodeToClient = fmap (hardForkApplyTxErrFromEither . 
fmap OneEraApplyTxErr) .: dispatchDecoderErr @@ -346,171 +389,180 @@ instance SerialiseHFC xs -------------------------------------------------------------------------------} encodeQueryHardFork :: - HardForkSpecificNodeToClientVersion - -> Some (QueryHardFork xs) - -> Encoding + HardForkSpecificNodeToClientVersion -> + Some (QueryHardFork xs) -> + Encoding encodeQueryHardFork _vHfc = \case - Some GetInterpreter -> mconcat [ - Enc.encodeListLen 1 + Some GetInterpreter -> + mconcat + [ Enc.encodeListLen 1 , Enc.encodeWord8 0 ] - Some GetCurrentEra -> mconcat [ - Enc.encodeListLen 1 + Some GetCurrentEra -> + mconcat + [ Enc.encodeListLen 1 , Enc.encodeWord8 1 ] decodeQueryHardFork :: Decoder s (Some (QueryHardFork xs)) decodeQueryHardFork = do - enforceSize "QueryHardFork" 1 - tag <- Dec.decodeWord8 - case tag of - 0 -> return $ Some GetInterpreter - 1 -> return $ Some GetCurrentEra - _ -> fail $ "QueryHardFork: invalid tag " ++ show tag - -instance SerialiseHFC xs - => SerialiseNodeToClient (HardForkBlock xs) (SomeBlockQuery (BlockQuery (HardForkBlock xs))) where + enforceSize "QueryHardFork" 1 + tag <- Dec.decodeWord8 + case tag of + 0 -> return $ Some GetInterpreter + 1 -> return $ Some GetCurrentEra + _ -> fail $ "QueryHardFork: invalid tag " ++ show tag + +instance + SerialiseHFC xs => + SerialiseNodeToClient (HardForkBlock xs) (SomeBlockQuery (BlockQuery (HardForkBlock xs))) + where encodeNodeToClient ccfg version (SomeBlockQuery q) = case version of - HardForkNodeToClientDisabled v0 -> case q of - QueryIfCurrent qry -> - case distribQueryIfCurrent (SomeBlockQuery qry) of - Z (Comp qry0) -> encodeNodeToClient (hd ccfgs) v0 qry0 - S later -> throw $ futureEraException (notFirstEra later) - QueryAnytime {} -> - throw HardForkEncoderQueryHfcDisabled - QueryHardFork {} -> - throw HardForkEncoderQueryHfcDisabled - - HardForkNodeToClientEnabled vHfc _ -> case q of - QueryIfCurrent qry -> mconcat [ - Enc.encodeListLen 2 + HardForkNodeToClientDisabled v0 -> case q of + 
QueryIfCurrent qry -> + case distribQueryIfCurrent (SomeBlockQuery qry) of + Z (Comp qry0) -> encodeNodeToClient (hd ccfgs) v0 qry0 + S later -> throw $ futureEraException (notFirstEra later) + QueryAnytime{} -> + throw HardForkEncoderQueryHfcDisabled + QueryHardFork{} -> + throw HardForkEncoderQueryHfcDisabled + HardForkNodeToClientEnabled vHfc _ -> case q of + QueryIfCurrent qry -> + mconcat + [ Enc.encodeListLen 2 , Enc.encodeWord8 0 , dispatchEncoder ccfg version (distribQueryIfCurrent (SomeBlockQuery qry)) ] - QueryAnytime qry eraIndex -> mconcat [ - Enc.encodeListLen 3 + QueryAnytime qry eraIndex -> + mconcat + [ Enc.encodeListLen 3 , Enc.encodeWord8 1 , Serialise.encode (Some qry) , Serialise.encode eraIndex ] - QueryHardFork qry -> mconcat [ - Enc.encodeListLen 2 + QueryHardFork qry -> + mconcat + [ Enc.encodeListLen 2 , Enc.encodeWord8 2 , encodeQueryHardFork vHfc (Some qry) ] - where - ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg + where + ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg decodeNodeToClient ccfg version = case version of - HardForkNodeToClientDisabled v0 -> - injQueryIfCurrent . Z . 
Comp <$> - decodeNodeToClient (hd ccfgs) v0 - HardForkNodeToClientEnabled {} -> case isNonEmpty (Proxy @xs) of - ProofNonEmpty (_ :: Proxy x') (p :: Proxy xs') -> do - size <- Dec.decodeListLen - tag <- Dec.decodeWord8 - case (size, tag) of - (2, 0) -> injQueryIfCurrent <$> dispatchDecoder ccfg version - - (3, 1) -> do - Some (qry :: QueryAnytime result) <- Serialise.decode - eraIndex :: EraIndex (x' ': xs') <- Serialise.decode - case checkIsNonEmpty p of - Nothing -> fail "QueryAnytime requires multiple era" - Just (ProofNonEmpty {}) -> - return $ SomeBlockQuery (QueryAnytime qry eraIndex) - - (2, 2) -> do - Some (qry :: QueryHardFork xs result) <- decodeQueryHardFork - case checkIsNonEmpty p of - Nothing -> fail "QueryHardFork requires multiple era" - Just (ProofNonEmpty {}) -> - return $ SomeBlockQuery (QueryHardFork qry) - - _ -> fail $ "HardForkQuery: invalid size and tag" <> show (size, tag) - where - ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg - - injQueryIfCurrent :: NS (SomeBlockQuery :.: BlockQuery) xs - -> SomeBlockQuery (BlockQuery (HardForkBlock xs)) - injQueryIfCurrent ns = - case undistribQueryIfCurrent ns of - SomeBlockQuery q -> SomeBlockQuery (QueryIfCurrent q) + HardForkNodeToClientDisabled v0 -> + injQueryIfCurrent . Z . 
Comp + <$> decodeNodeToClient (hd ccfgs) v0 + HardForkNodeToClientEnabled{} -> case isNonEmpty (Proxy @xs) of + ProofNonEmpty (_ :: Proxy x') (p :: Proxy xs') -> do + size <- Dec.decodeListLen + tag <- Dec.decodeWord8 + case (size, tag) of + (2, 0) -> injQueryIfCurrent <$> dispatchDecoder ccfg version + (3, 1) -> do + Some (qry :: QueryAnytime result) <- Serialise.decode + eraIndex :: EraIndex (x' ': xs') <- Serialise.decode + case checkIsNonEmpty p of + Nothing -> fail "QueryAnytime requires multiple era" + Just (ProofNonEmpty{}) -> + return $ SomeBlockQuery (QueryAnytime qry eraIndex) + (2, 2) -> do + Some (qry :: QueryHardFork xs result) <- decodeQueryHardFork + case checkIsNonEmpty p of + Nothing -> fail "QueryHardFork requires multiple era" + Just (ProofNonEmpty{}) -> + return $ SomeBlockQuery (QueryHardFork qry) + _ -> fail $ "HardForkQuery: invalid size and tag" <> show (size, tag) + where + ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg + + injQueryIfCurrent :: + NS (SomeBlockQuery :.: BlockQuery) xs -> + SomeBlockQuery (BlockQuery (HardForkBlock xs)) + injQueryIfCurrent ns = + case undistribQueryIfCurrent ns of + SomeBlockQuery q -> SomeBlockQuery (QueryIfCurrent q) {------------------------------------------------------------------------------- Results -------------------------------------------------------------------------------} -instance SerialiseHFC xs - => SerialiseBlockQueryResult (HardForkBlock xs) BlockQuery where +instance + SerialiseHFC xs => + SerialiseBlockQueryResult (HardForkBlock xs) BlockQuery + where encodeBlockQueryResult ccfg version (QueryIfCurrent qry) = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - encodeEitherMismatch version $ - case (ccfgs, version, qry) of - (c0 :* _, HardForkNodeToClientDisabled v0, QZ qry') -> - encodeBlockQueryResult c0 v0 qry' - (_, HardForkNodeToClientDisabled _, QS qry') -> - throw $ futureEraException (hardForkQueryInfo qry') - (_, HardForkNodeToClientEnabled _ versions, _) -> - 
encodeQueryIfCurrentResult ccfgs versions qry - where - ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg - - encodeBlockQueryResult _ _ (QueryAnytime qry _) = encodeQueryAnytimeResult qry - encodeBlockQueryResult _ _ (QueryHardFork qry) = encodeQueryHardForkResult qry + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + encodeEitherMismatch version $ + case (ccfgs, version, qry) of + (c0 :* _, HardForkNodeToClientDisabled v0, QZ qry') -> + encodeBlockQueryResult c0 v0 qry' + (_, HardForkNodeToClientDisabled _, QS qry') -> + throw $ futureEraException (hardForkQueryInfo qry') + (_, HardForkNodeToClientEnabled _ versions, _) -> + encodeQueryIfCurrentResult ccfgs versions qry + where + ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg + encodeBlockQueryResult _ _ (QueryAnytime qry _) = encodeQueryAnytimeResult qry + encodeBlockQueryResult _ _ (QueryHardFork qry) = encodeQueryHardForkResult qry decodeBlockQueryResult ccfg version (QueryIfCurrent qry) = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - decodeEitherMismatch version $ - case (ccfgs, version, qry) of - (c0 :* _, HardForkNodeToClientDisabled v0, QZ qry') -> - decodeBlockQueryResult c0 v0 qry' - (_, HardForkNodeToClientDisabled _, QS qry') -> - throw $ futureEraException (hardForkQueryInfo qry') - (_, HardForkNodeToClientEnabled _ versions, _) -> - decodeQueryIfCurrentResult ccfgs versions qry - where - ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg - - decodeBlockQueryResult _ _ (QueryAnytime qry _) = decodeQueryAnytimeResult qry - decodeBlockQueryResult _ _ (QueryHardFork qry) = decodeQueryHardForkResult qry + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + decodeEitherMismatch version $ + case (ccfgs, version, qry) of + (c0 :* _, HardForkNodeToClientDisabled v0, QZ qry') -> + decodeBlockQueryResult c0 v0 qry' + (_, HardForkNodeToClientDisabled _, QS qry') -> + throw $ futureEraException (hardForkQueryInfo qry') + (_, HardForkNodeToClientEnabled _ 
versions, _) -> + decodeQueryIfCurrentResult ccfgs versions qry + where + ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg + decodeBlockQueryResult _ _ (QueryAnytime qry _) = decodeQueryAnytimeResult qry + decodeBlockQueryResult _ _ (QueryHardFork qry) = decodeQueryHardForkResult qry encodeQueryIfCurrentResult :: - All SerialiseConstraintsHFC xs - => NP CodecConfig xs - -> NP EraNodeToClientVersion xs - -> QueryIfCurrent xs fp result - -> result -> Encoding + All SerialiseConstraintsHFC xs => + NP CodecConfig xs -> + NP EraNodeToClientVersion xs -> + QueryIfCurrent xs fp result -> + result -> + Encoding encodeQueryIfCurrentResult (c :* _) (EraNodeToClientEnabled v :* _) (QZ qry) = - encodeBlockQueryResult c v qry + encodeBlockQueryResult c v qry encodeQueryIfCurrentResult (_ :* _) (EraNodeToClientDisabled :* _) (QZ qry) = - qryDisabledEra qry - where - qryDisabledEra :: forall blk fp result. SingleEraBlock blk - => BlockQuery blk fp result -> result -> Encoding - qryDisabledEra _ _ = throw $ disabledEraException (Proxy @blk) + qryDisabledEra qry + where + qryDisabledEra :: + forall blk fp result. + SingleEraBlock blk => + BlockQuery blk fp result -> result -> Encoding + qryDisabledEra _ _ = throw $ disabledEraException (Proxy @blk) encodeQueryIfCurrentResult (_ :* cs) (_ :* vs) (QS qry) = - encodeQueryIfCurrentResult cs vs qry + encodeQueryIfCurrentResult cs vs qry encodeQueryIfCurrentResult Nil _ qry = - case qry of {} + case qry of {} decodeQueryIfCurrentResult :: - All SerialiseConstraintsHFC xs - => NP CodecConfig xs - -> NP EraNodeToClientVersion xs - -> QueryIfCurrent xs fp result - -> (forall s. Decoder s result) + All SerialiseConstraintsHFC xs => + NP CodecConfig xs -> + NP EraNodeToClientVersion xs -> + QueryIfCurrent xs fp result -> + (forall s. 
Decoder s result) decodeQueryIfCurrentResult (c :* _) (EraNodeToClientEnabled v :* _) (QZ qry) = - decodeBlockQueryResult c v qry + decodeBlockQueryResult c v qry decodeQueryIfCurrentResult (_ :* _) (EraNodeToClientDisabled :* _) (QZ qry) = - qryDisabledEra qry - where - qryDisabledEra :: forall blk fp result. SingleEraBlock blk - => BlockQuery blk fp result -> forall s. Decoder s result - qryDisabledEra _ = fail . show $ disabledEraException (Proxy @blk) + qryDisabledEra qry + where + qryDisabledEra :: + forall blk fp result. + SingleEraBlock blk => + BlockQuery blk fp result -> forall s. Decoder s result + qryDisabledEra _ = fail . show $ disabledEraException (Proxy @blk) decodeQueryIfCurrentResult (_ :* cs) (_ :* vs) (QS qry) = - decodeQueryIfCurrentResult cs vs qry + decodeQueryIfCurrentResult cs vs qry decodeQueryIfCurrentResult Nil _ qry = - case qry of {} + case qry of {} diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseNodeToNode.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseNodeToNode.hs index edf78b0051..89ad2a2a5c 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseNodeToNode.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Serialisation/SerialiseNodeToNode.hs @@ -8,35 +8,37 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeOperators #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseNodeToNode () where -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.Serialise as Serialise -import Control.Exception (throw) -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.NonEmpty (ProofNonEmpty (..), isNonEmpty) -import Data.SOP.Strict -import 
Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.Block -import Ouroboros.Consensus.HardFork.Combinator.Mempool -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common -import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk () -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util ((.:)) -import Ouroboros.Network.Block (Serialised, unwrapCBORinCBOR, - wrapCBORinCBOR) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding) +import Codec.Serialise qualified as Serialise +import Control.Exception (throw) +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.NonEmpty (ProofNonEmpty (..), isNonEmpty) +import Data.SOP.Strict +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.Block +import Ouroboros.Consensus.HardFork.Combinator.Mempool +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.Common +import Ouroboros.Consensus.HardFork.Combinator.Serialisation.SerialiseDisk () +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util ((.:)) +import Ouroboros.Network.Block + ( Serialised + , unwrapCBORinCBOR + , wrapCBORinCBOR + ) instance SerialiseHFC xs => SerialiseNodeToNodeConstraints (HardForkBlock 
xs) where estimateBlockSize = estimateHfcBlockSize @@ -45,57 +47,68 @@ instance SerialiseHFC xs => SerialiseNodeToNodeConstraints (HardForkBlock xs) wh Dispatch to first era or HFC -------------------------------------------------------------------------------} -dispatchEncoder :: forall f xs. ( - SerialiseHFC xs - , forall blk. SerialiseNodeToNodeConstraints blk - => SerialiseNodeToNode blk (f blk) - ) - => CodecConfig (HardForkBlock xs) - -> BlockNodeToNodeVersion (HardForkBlock xs) - -> NS f xs -> Encoding +dispatchEncoder :: + forall f xs. + ( SerialiseHFC xs + , forall blk. + SerialiseNodeToNodeConstraints blk => + SerialiseNodeToNode blk (f blk) + ) => + CodecConfig (HardForkBlock xs) -> + BlockNodeToNodeVersion (HardForkBlock xs) -> + NS f xs -> + Encoding dispatchEncoder ccfg version ns = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - case (ccfgs, version, ns) of - (c0 :* _, HardForkNodeToNodeDisabled v0, Z x0) -> - encodeNodeToNode c0 v0 x0 - (_, HardForkNodeToNodeDisabled _, S later) -> - throw $ futureEraException (notFirstEra later) - (_, HardForkNodeToNodeEnabled _ versions, _) -> - encodeNS (hczipWith pSHFC aux ccfgs versions) ns - where - ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg - - aux :: forall blk. (SerialiseNodeToNodeConstraints blk) - => CodecConfig blk - -> WrapNodeToNodeVersion blk - -> (f -.-> K Encoding) blk - aux ccfg' (WrapNodeToNodeVersion v) = Fn $ K . encodeNodeToNode ccfg' v - -dispatchDecoder :: forall f xs. ( - SerialiseHFC xs - , forall blk. SerialiseNodeToNodeConstraints blk - => SerialiseNodeToNode blk (f blk) - ) - => CodecConfig (HardForkBlock xs) - -> BlockNodeToNodeVersion (HardForkBlock xs) - -> forall s. 
Decoder s (NS f xs) + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + case (ccfgs, version, ns) of + (c0 :* _, HardForkNodeToNodeDisabled v0, Z x0) -> + encodeNodeToNode c0 v0 x0 + (_, HardForkNodeToNodeDisabled _, S later) -> + throw $ futureEraException (notFirstEra later) + (_, HardForkNodeToNodeEnabled _ versions, _) -> + encodeNS (hczipWith pSHFC aux ccfgs versions) ns + where + ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg + + aux :: + forall blk. + SerialiseNodeToNodeConstraints blk => + CodecConfig blk -> + WrapNodeToNodeVersion blk -> + (f -.-> K Encoding) blk + aux ccfg' (WrapNodeToNodeVersion v) = Fn $ K . encodeNodeToNode ccfg' v + +dispatchDecoder :: + forall f xs. + ( SerialiseHFC xs + , forall blk. + SerialiseNodeToNodeConstraints blk => + SerialiseNodeToNode blk (f blk) + ) => + CodecConfig (HardForkBlock xs) -> + BlockNodeToNodeVersion (HardForkBlock xs) -> + forall s. + Decoder s (NS f xs) dispatchDecoder ccfg version = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - case (ccfgs, version) of - (c0 :* _, HardForkNodeToNodeDisabled v0) -> - Z <$> decodeNodeToNode c0 v0 - (_, HardForkNodeToNodeEnabled _ versions) -> - decodeNS (hczipWith pSHFC aux ccfgs versions) - where - ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg - - aux :: forall blk. (SerialiseNodeToNodeConstraints blk) - => CodecConfig blk - -> WrapNodeToNodeVersion blk - -> forall s. (Decoder s :.: f) blk - aux ccfg' (WrapNodeToNodeVersion v) = Comp $ decodeNodeToNode ccfg' v + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + case (ccfgs, version) of + (c0 :* _, HardForkNodeToNodeDisabled v0) -> + Z <$> decodeNodeToNode c0 v0 + (_, HardForkNodeToNodeEnabled _ versions) -> + decodeNS (hczipWith pSHFC aux ccfgs versions) + where + ccfgs = getPerEraCodecConfig $ hardForkCodecConfigPerEra ccfg + + aux :: + forall blk. + SerialiseNodeToNodeConstraints blk => + CodecConfig blk -> + WrapNodeToNodeVersion blk -> + forall s. 
+ (Decoder s :.: f) blk + aux ccfg' (WrapNodeToNodeVersion v) = Comp $ decodeNodeToNode ccfg' v after :: (a -> b -> d -> e) -> (c -> d) -> a -> b -> c -> e after f g x y z = f x y (g z) @@ -104,13 +117,17 @@ after f g x y z = f x y (g z) Blocks/headers -------------------------------------------------------------------------------} -instance SerialiseHFC xs - => SerialiseNodeToNode (HardForkBlock xs) (HardForkBlock xs) where - encodeNodeToNode ccfg _ = wrapCBORinCBOR (encodeDiskHfcBlock ccfg) +instance + SerialiseHFC xs => + SerialiseNodeToNode (HardForkBlock xs) (HardForkBlock xs) + where + encodeNodeToNode ccfg _ = wrapCBORinCBOR (encodeDiskHfcBlock ccfg) decodeNodeToNode ccfg _ = unwrapCBORinCBOR (decodeDiskHfcBlock ccfg) -instance SerialiseHFC xs - => SerialiseNodeToNode (HardForkBlock xs) (Header (HardForkBlock xs)) where +instance + SerialiseHFC xs => + SerialiseNodeToNode (HardForkBlock xs) (Header (HardForkBlock xs)) + where encodeNodeToNode = dispatchEncoder `after` (getOneEraHeader . getHardForkHeader) decodeNodeToNode = fmap (HardForkHeader . 
OneEraHeader) .: dispatchDecoder @@ -118,13 +135,17 @@ instance SerialiseHFC xs Serialised blocks/headers -------------------------------------------------------------------------------} -instance SerialiseHFC xs - => SerialiseNodeToNode (HardForkBlock xs) (Serialised (HardForkBlock xs)) where +instance + SerialiseHFC xs => + SerialiseNodeToNode (HardForkBlock xs) (Serialised (HardForkBlock xs)) + where encodeNodeToNode _ _ = Serialise.encode decodeNodeToNode _ _ = Serialise.decode -instance SerialiseHFC xs - => SerialiseNodeToNode (HardForkBlock xs) (SerialisedHeader (HardForkBlock xs)) where +instance + SerialiseHFC xs => + SerialiseNodeToNode (HardForkBlock xs) (SerialisedHeader (HardForkBlock xs)) + where encodeNodeToNode = dispatchEncoder `after` distribSerialisedHeader decodeNodeToNode = fmap undistribSerialisedHeader .: dispatchDecoder @@ -132,12 +153,16 @@ instance SerialiseHFC xs Transactions -------------------------------------------------------------------------------} -instance SerialiseHFC xs - => SerialiseNodeToNode (HardForkBlock xs) (GenTx (HardForkBlock xs)) where +instance + SerialiseHFC xs => + SerialiseNodeToNode (HardForkBlock xs) (GenTx (HardForkBlock xs)) + where encodeNodeToNode = dispatchEncoder `after` (getOneEraGenTx . getHardForkGenTx) decodeNodeToNode = fmap (HardForkGenTx . OneEraGenTx) .: dispatchDecoder -instance SerialiseHFC xs - => SerialiseNodeToNode (HardForkBlock xs) (GenTxId (HardForkBlock xs)) where +instance + SerialiseHFC xs => + SerialiseNodeToNode (HardForkBlock xs) (GenTxId (HardForkBlock xs)) + where encodeNodeToNode = dispatchEncoder `after` (getOneEraGenTxId . getHardForkGenTxId) decodeNodeToNode = fmap (HardForkGenTxId . 
OneEraGenTxId) .: dispatchDecoder diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State.hs index babaa31569..4142ae89f6 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State.hs @@ -13,64 +13,75 @@ -- -- > import Ouroboros.Consensus.HardFork.Combinator.State (HardForkState(..)) -- > import qualified Ouroboros.Consensus.HardFork.Combinator.State as State -module Ouroboros.Consensus.HardFork.Combinator.State ( - module X +module Ouroboros.Consensus.HardFork.Combinator.State + ( module X + -- * Support for defining instances , getTip + -- * Serialisation support , recover + -- * EpochInfo , epochInfoLedger , epochInfoPrecomputedTransitionInfo , mostRecentTransitionInfo , reconstructSummaryLedger + -- * Ledger specific functionality , extendToSlot ) where -import Control.Monad (guard) -import Data.Functor.Product -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Counting (getExactly) -import Data.SOP.Functors (Flip (..)) -import Data.SOP.InPairs (InPairs, Requiring (..)) -import qualified Data.SOP.InPairs as InPairs -import Data.SOP.Strict -import Data.SOP.Telescope (Extend (..), ScanNext (..), Telescope) -import qualified Data.SOP.Telescope as Telescope -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract -import Ouroboros.Consensus.HardFork.Combinator.AcrossEras -import Ouroboros.Consensus.HardFork.Combinator.Basics -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HardFork.Combinator.State.Infra as X -import Ouroboros.Consensus.HardFork.Combinator.State.Instances as X () -import Ouroboros.Consensus.HardFork.Combinator.State.Types as X -import 
Ouroboros.Consensus.HardFork.Combinator.Translation -import qualified Ouroboros.Consensus.HardFork.History as History -import Ouroboros.Consensus.Ledger.Abstract hiding (getTip) -import Ouroboros.Consensus.Ledger.Tables.Utils -import Prelude hiding (sequence) +import Control.Monad (guard) +import Data.Functor.Product +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Counting (getExactly) +import Data.SOP.Functors (Flip (..)) +import Data.SOP.InPairs (InPairs, Requiring (..)) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Strict +import Data.SOP.Telescope (Extend (..), ScanNext (..), Telescope) +import Data.SOP.Telescope qualified as Telescope +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract +import Ouroboros.Consensus.HardFork.Combinator.AcrossEras +import Ouroboros.Consensus.HardFork.Combinator.Basics +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HardFork.Combinator.State.Infra as X +import Ouroboros.Consensus.HardFork.Combinator.State.Instances as X () +import Ouroboros.Consensus.HardFork.Combinator.State.Types as X +import Ouroboros.Consensus.HardFork.Combinator.Translation +import Ouroboros.Consensus.HardFork.History qualified as History +import Ouroboros.Consensus.Ledger.Abstract hiding (getTip) +import Ouroboros.Consensus.Ledger.Tables.Utils +import Prelude hiding (sequence) {------------------------------------------------------------------------------- GetTip -------------------------------------------------------------------------------} -getTip :: forall f xs. CanHardFork xs - => (forall blk. SingleEraBlock blk => f blk -> Point blk) - -> HardForkState f xs -> Point (HardForkBlock xs) +getTip :: + forall f xs. + CanHardFork xs => + (forall blk. SingleEraBlock blk => f blk -> Point blk) -> + HardForkState f xs -> + Point (HardForkBlock xs) getTip getLedgerTip = - hcollapse + hcollapse . hcmap proxySingle (K . 
injPoint . getLedgerTip) . tip - where - injPoint :: forall blk. SingleEraBlock blk - => Point blk -> Point (HardForkBlock xs) - injPoint GenesisPoint = GenesisPoint - injPoint (BlockPoint s h) = BlockPoint s $ OneEraHash $ - toShortRawHash (Proxy @blk) h + where + injPoint :: + forall blk. + SingleEraBlock blk => + Point blk -> Point (HardForkBlock xs) + injPoint GenesisPoint = GenesisPoint + injPoint (BlockPoint s h) = + BlockPoint s $ + OneEraHash $ + toShortRawHash (Proxy @blk) h {------------------------------------------------------------------------------- Recovery @@ -82,88 +93,95 @@ getTip getLedgerTip = -- really only need to store the underlying @f@. It is not strictly essential -- that this is possible but it helps with the unary hardfork case, and it may -- in general help with binary compatibility. -recover :: forall f xs. CanHardFork xs - => Telescope (K Past) f xs -> HardForkState f xs +recover :: + forall f xs. + CanHardFork xs => + Telescope (K Past) f xs -> HardForkState f xs recover = - case isNonEmpty (Proxy @xs) of - ProofNonEmpty {} -> - HardForkState + case isNonEmpty (Proxy @xs) of + ProofNonEmpty{} -> + HardForkState . Telescope.bihmap - (\(Pair _ past) -> past) - recoverCurrent + (\(Pair _ past) -> past) + recoverCurrent . Telescope.scanl - (InPairs.hpure $ ScanNext $ const $ K . pastEnd . unK) - (K History.initBound) - where - recoverCurrent :: Product (K History.Bound) f blk -> Current f blk - recoverCurrent (Pair (K prevEnd) st) = Current { - currentStart = prevEnd - , currentState = st - } + (InPairs.hpure $ ScanNext $ const $ K . pastEnd . 
unK) + (K History.initBound) + where + recoverCurrent :: Product (K History.Bound) f blk -> Current f blk + recoverCurrent (Pair (K prevEnd) st) = + Current + { currentStart = prevEnd + , currentState = st + } {------------------------------------------------------------------------------- Reconstruct EpochInfo -------------------------------------------------------------------------------} -mostRecentTransitionInfo :: All SingleEraBlock xs - => HardForkLedgerConfig xs - -> HardForkState (Flip LedgerState mk) xs - -> TransitionInfo +mostRecentTransitionInfo :: + All SingleEraBlock xs => + HardForkLedgerConfig xs -> + HardForkState (Flip LedgerState mk) xs -> + TransitionInfo mostRecentTransitionInfo HardForkLedgerConfig{..} st = - hcollapse $ - hczipWith3 - proxySingle - getTransition - cfgs - (getExactly (History.getShape hardForkLedgerConfigShape)) - (Telescope.tip (getHardForkState st)) - where - cfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra + hcollapse $ + hczipWith3 + proxySingle + getTransition + cfgs + (getExactly (History.getShape hardForkLedgerConfigShape)) + (Telescope.tip (getHardForkState st)) + where + cfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - getTransition :: SingleEraBlock blk - => WrapPartialLedgerConfig blk - -> K History.EraParams blk - -> Current (Flip LedgerState mk) blk - -> K TransitionInfo blk - getTransition cfg (K eraParams) Current{currentState = Flip curState, ..} = K $ - case singleEraTransition' cfg eraParams currentStart curState of - Nothing -> TransitionUnknown (ledgerTipSlot curState) - Just e -> TransitionKnown e + getTransition :: + SingleEraBlock blk => + WrapPartialLedgerConfig blk -> + K History.EraParams blk -> + Current (Flip LedgerState mk) blk -> + K TransitionInfo blk + getTransition cfg (K eraParams) Current{currentState = Flip curState, ..} = K $ + case singleEraTransition' cfg eraParams currentStart curState of + Nothing -> TransitionUnknown (ledgerTipSlot curState) + Just e -> TransitionKnown 
e -reconstructSummaryLedger :: All SingleEraBlock xs - => HardForkLedgerConfig xs - -> HardForkState (Flip LedgerState mk) xs - -> History.Summary xs +reconstructSummaryLedger :: + All SingleEraBlock xs => + HardForkLedgerConfig xs -> + HardForkState (Flip LedgerState mk) xs -> + History.Summary xs reconstructSummaryLedger cfg@HardForkLedgerConfig{..} st = - reconstructSummary - hardForkLedgerConfigShape - (mostRecentTransitionInfo cfg st) - st + reconstructSummary + hardForkLedgerConfigShape + (mostRecentTransitionInfo cfg st) + st -- | Construct 'EpochInfo' from the ledger state -- -- NOTE: The resulting 'EpochInfo' is a snapshot only, with a limited range. -- It should not be stored. -epochInfoLedger :: All SingleEraBlock xs - => HardForkLedgerConfig xs - -> HardForkState (Flip LedgerState mk) xs - -> EpochInfo (Except PastHorizonException) +epochInfoLedger :: + All SingleEraBlock xs => + HardForkLedgerConfig xs -> + HardForkState (Flip LedgerState mk) xs -> + EpochInfo (Except PastHorizonException) epochInfoLedger cfg st = - History.summaryToEpochInfo $ - reconstructSummaryLedger cfg st + History.summaryToEpochInfo $ + reconstructSummaryLedger cfg st -- | Construct 'EpochInfo' given precomputed 'TransitionInfo' -- -- The transition and state arguments are acquired either from a ticked ledger -- state or a ledger view. epochInfoPrecomputedTransitionInfo :: - History.Shape xs - -> TransitionInfo - -> HardForkState f xs - -> EpochInfo (Except PastHorizonException) + History.Shape xs -> + TransitionInfo -> + HardForkState f xs -> + EpochInfo (Except PastHorizonException) epochInfoPrecomputedTransitionInfo shape transition st = - History.summaryToEpochInfo $ - reconstructSummary shape transition st + History.summaryToEpochInfo $ + reconstructSummary shape transition st {------------------------------------------------------------------------------- Extending @@ -201,100 +219,118 @@ epochInfoPrecomputedTransitionInfo shape transition st = -- -- 4. 
Attach the diffs resulting from step 3 to the @era3@ ledger state from -- step 2, and return it. -extendToSlot :: forall xs. - (CanHardFork xs) - => HardForkLedgerConfig xs - -> SlotNo - -> HardForkState (Flip LedgerState EmptyMK) xs - -> HardForkState (Flip LedgerState DiffMK) xs +extendToSlot :: + forall xs. + CanHardFork xs => + HardForkLedgerConfig xs -> + SlotNo -> + HardForkState (Flip LedgerState EmptyMK) xs -> + HardForkState (Flip LedgerState DiffMK) xs extendToSlot ledgerCfg@HardForkLedgerConfig{..} slot ledgerSt@(HardForkState st) = - HardForkState + HardForkState . unI . Telescope.extend - ( InPairs.hczipWith proxySingle (\f f' -> Require $ \(K t) - -> Extend $ \cur - -> I $ howExtend f f' t cur) - translateLS - translateLT - ) - (hczipWith - proxySingle - (fn .: whenExtend) - pcfgs - (getExactly (History.getShape hardForkLedgerConfigShape))) + ( InPairs.hczipWith + proxySingle + ( \f f' -> Require $ \(K t) -> + Extend $ \cur -> + I $ howExtend f f' t cur + ) + translateLS + translateLT + ) + ( hczipWith + proxySingle + (fn .: whenExtend) + pcfgs + (getExactly (History.getShape hardForkLedgerConfigShape)) + ) -- In order to make this an automorphism, as required by 'Telescope.extend', -- we have to promote the input to @DiffMK@ albeit it being empty. $ hcmap - proxySingle - (\c -> c { currentState = Flip - . flip withLedgerTables emptyLedgerTables - . unFlip - . currentState - $ c } - ) + proxySingle + ( \c -> + c + { currentState = + Flip + . flip withLedgerTables emptyLedgerTables + . unFlip + . 
currentState + $ c + } + ) $ st - where - pcfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra - cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs - ei = epochInfoLedger ledgerCfg ledgerSt + where + pcfgs = getPerEraLedgerConfig hardForkLedgerConfigPerEra + cfgs = hcmap proxySingle (completeLedgerConfig'' ei) pcfgs + ei = epochInfoLedger ledgerCfg ledgerSt - -- Return the end of this era if we should transition to the next - whenExtend :: SingleEraBlock blk - => WrapPartialLedgerConfig blk - -> K History.EraParams blk - -> Current (Flip LedgerState DiffMK) blk - -> (Maybe :.: K History.Bound) blk - whenExtend pcfg (K eraParams) cur = Comp $ K <$> do - transition <- singleEraTransition' - pcfg - eraParams - (currentStart cur) - (unFlip $ currentState cur) - let endBound = History.mkUpperBound - eraParams - (currentStart cur) - transition + -- Return the end of this era if we should transition to the next + whenExtend :: + SingleEraBlock blk => + WrapPartialLedgerConfig blk -> + K History.EraParams blk -> + Current (Flip LedgerState DiffMK) blk -> + (Maybe :.: K History.Bound) blk + whenExtend pcfg (K eraParams) cur = + Comp $ + K <$> do + transition <- + singleEraTransition' + pcfg + eraParams + (currentStart cur) + (unFlip $ currentState cur) + let endBound = + History.mkUpperBound + eraParams + (currentStart cur) + transition guard (slot >= History.boundSlot endBound) return endBound - howExtend :: (HasLedgerTables (LedgerState blk), HasLedgerTables (LedgerState blk')) - => TranslateLedgerState blk blk' - -> TranslateLedgerTables blk blk' - -> History.Bound - -> Current (Flip LedgerState DiffMK) blk - -> (K Past blk, Current (Flip LedgerState DiffMK) blk') - howExtend f f' currentEnd cur = ( - K Past { - pastStart = currentStart cur - , pastEnd = currentEnd - } - , Current { - currentStart = currentEnd - , currentState = - Flip - -- We need to bring back the diffs provided by previous - -- translations. 
Note that if there is only one translation or - -- if the previous translations don't add any new tables this - -- will just be a no-op. See the haddock for - -- 'translateLedgerTablesWith' and 'extendToSlot' for more - -- information. - . prependDiffs ( translateLedgerTablesWith f' - . projectLedgerTables - . unFlip - . currentState - $ cur - ) - . translateLedgerStateWith f (History.boundEpoch currentEnd) - . forgetLedgerTables - . unFlip - . currentState - $ cur - } - ) + howExtend :: + (HasLedgerTables (LedgerState blk), HasLedgerTables (LedgerState blk')) => + TranslateLedgerState blk blk' -> + TranslateLedgerTables blk blk' -> + History.Bound -> + Current (Flip LedgerState DiffMK) blk -> + (K Past blk, Current (Flip LedgerState DiffMK) blk') + howExtend f f' currentEnd cur = + ( K + Past + { pastStart = currentStart cur + , pastEnd = currentEnd + } + , Current + { currentStart = currentEnd + , currentState = + Flip + -- We need to bring back the diffs provided by previous + -- translations. Note that if there is only one translation or + -- if the previous translations don't add any new tables this + -- will just be a no-op. See the haddock for + -- 'translateLedgerTablesWith' and 'extendToSlot' for more + -- information. + . prependDiffs + ( translateLedgerTablesWith f' + . projectLedgerTables + . unFlip + . currentState + $ cur + ) + . translateLedgerStateWith f (History.boundEpoch currentEnd) + . forgetLedgerTables + . unFlip + . 
currentState + $ cur + } + ) - translateLS :: InPairs TranslateLedgerState xs - translateLS = InPairs.requiringBoth cfgs $ - translateLedgerState hardForkEraTranslation + translateLS :: InPairs TranslateLedgerState xs + translateLS = + InPairs.requiringBoth cfgs $ + translateLedgerState hardForkEraTranslation - translateLT :: InPairs TranslateLedgerTables xs - translateLT = translateLedgerTables hardForkEraTranslation + translateLT :: InPairs TranslateLedgerTables xs + translateLT = translateLedgerTables hardForkEraTranslation diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Infra.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Infra.hs index b9a7d09d22..4dfdecb280 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Infra.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Infra.hs @@ -8,53 +8,65 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} -module Ouroboros.Consensus.HardFork.Combinator.State.Infra ( - -- * Initialization +module Ouroboros.Consensus.HardFork.Combinator.State.Infra + ( -- * Initialization initHardForkState + -- * Lifting 'Telescope' operations , fromTZ , match , sequence , tip + -- * Situated , Situated (..) 
, situate + -- * Aligning , align + -- * EpochInfo/Summary , reconstructSummary ) where -import Data.Functor.Product -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Counting -import Data.SOP.InPairs (InPairs, Requiring (..)) -import qualified Data.SOP.InPairs as InPairs -import Data.SOP.Match (Mismatch) -import qualified Data.SOP.Match as Match -import Data.SOP.NonEmpty -import Data.SOP.Strict -import Data.SOP.Telescope (Extend (..), Telescope (..)) -import qualified Data.SOP.Telescope as Telescope -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock -import Ouroboros.Consensus.HardFork.Combinator.State.Lift -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Ouroboros.Consensus.HardFork.History (Bound (..), EraEnd (..), - EraParams (..), EraSummary (..), SafeZone (..)) -import qualified Ouroboros.Consensus.HardFork.History as History -import Prelude hiding (sequence) +import Data.Functor.Product +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Counting +import Data.SOP.InPairs (InPairs, Requiring (..)) +import Data.SOP.InPairs qualified as InPairs +import Data.SOP.Match (Mismatch) +import Data.SOP.Match qualified as Match +import Data.SOP.NonEmpty +import Data.SOP.Strict +import Data.SOP.Telescope (Extend (..), Telescope (..)) +import Data.SOP.Telescope qualified as Telescope +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock +import Ouroboros.Consensus.HardFork.Combinator.State.Lift +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.HardFork.History + ( Bound (..) + , EraEnd (..) + , EraParams (..) + , EraSummary (..) + , SafeZone (..) 
+ ) +import Ouroboros.Consensus.HardFork.History qualified as History +import Prelude hiding (sequence) {------------------------------------------------------------------------------- Initialization -------------------------------------------------------------------------------} initHardForkState :: f x -> HardForkState f (x ': xs) -initHardForkState st = HardForkState $ TZ $ Current { - currentStart = History.initBound - , currentState = st - } +initHardForkState st = + HardForkState $ + TZ $ + Current + { currentStart = History.initBound + , currentState = st + } {------------------------------------------------------------------------------- Lift telescope operations @@ -63,25 +75,30 @@ initHardForkState st = HardForkState $ TZ $ Current { tip :: SListI xs => HardForkState f xs -> NS f xs tip (HardForkState st) = hmap currentState $ Telescope.tip st -match :: SListI xs - => NS h xs - -> HardForkState f xs - -> Either (Mismatch h (Current f) xs) (HardForkState (Product h f) xs) +match :: + SListI xs => + NS h xs -> + HardForkState f xs -> + Either (Mismatch h (Current f) xs) (HardForkState (Product h f) xs) match ns (HardForkState t) = - HardForkState . hmap distrib <$> Match.matchTelescope ns t - where - distrib :: Product h (Current f) blk -> Current (Product h f) blk - distrib (Pair x (Current start y)) = - Current start (Pair x y) + HardForkState . hmap distrib <$> Match.matchTelescope ns t + where + distrib :: Product h (Current f) blk -> Current (Product h f) blk + distrib (Pair x (Current start y)) = + Current start (Pair x y) -sequence :: forall f m xs. (SListI xs, Functor m) - => HardForkState (m :.: f) xs -> m (HardForkState f xs) -sequence = \(HardForkState st) -> HardForkState <$> - Telescope.sequence (hmap distrib st) - where - distrib :: Current (m :.: f) blk -> (m :.: Current f) blk - distrib (Current start st) = Comp $ - Current start <$> unComp st +sequence :: + forall f m xs. 
+ (SListI xs, Functor m) => + HardForkState (m :.: f) xs -> m (HardForkState f xs) +sequence = \(HardForkState st) -> + HardForkState + <$> Telescope.sequence (hmap distrib st) + where + distrib :: Current (m :.: f) blk -> (m :.: Current f) blk + distrib (Current start st) = + Comp $ + Current start <$> unComp st fromTZ :: HardForkState f '[blk] -> f blk fromTZ = currentState . Telescope.fromTZ . getHardForkState @@ -92,108 +109,133 @@ fromTZ = currentState . Telescope.fromTZ . getHardForkState -- | A @h@ situated in time data Situated h f xs where - SituatedCurrent :: Current f x -> h x -> Situated h f (x ': xs) - SituatedNext :: Current f x -> h y -> Situated h f (x ': y ': xs) - SituatedFuture :: Current f x -> NS h xs -> Situated h f (x ': y ': xs) - SituatedPast :: K Past x -> h x -> Situated h f (x ': xs) - SituatedShift :: Situated h f xs -> Situated h f (x ': xs) + SituatedCurrent :: Current f x -> h x -> Situated h f (x ': xs) + SituatedNext :: Current f x -> h y -> Situated h f (x ': y ': xs) + SituatedFuture :: Current f x -> NS h xs -> Situated h f (x ': y ': xs) + SituatedPast :: K Past x -> h x -> Situated h f (x ': xs) + SituatedShift :: Situated h f xs -> Situated h f (x ': xs) situate :: NS h xs -> HardForkState f xs -> Situated h f xs situate ns = go ns . 
getHardForkState - where - go :: NS h xs' - -> Telescope (K Past) (Current f) xs' - -> Situated h f xs' - go (Z era) (TZ cur) = SituatedCurrent cur era - go (S (Z era)) (TZ cur) = SituatedNext cur era - go (S (S era)) (TZ cur) = SituatedFuture cur era - go (Z era) (TS past _) = SituatedPast past era - go (S era) (TS _ st) = SituatedShift $ go era st + where + go :: + NS h xs' -> + Telescope (K Past) (Current f) xs' -> + Situated h f xs' + go (Z era) (TZ cur) = SituatedCurrent cur era + go (S (Z era)) (TZ cur) = SituatedNext cur era + go (S (S era)) (TZ cur) = SituatedFuture cur era + go (Z era) (TS past _) = SituatedPast past era + go (S era) (TS _ st) = SituatedShift $ go era st {------------------------------------------------------------------------------- Aligning -------------------------------------------------------------------------------} -align :: forall xs f f' f''. All SingleEraBlock xs - => InPairs (Translate f) xs - -> NP (f' -.-> f -.-> f'') xs - -> HardForkState f' xs -- ^ State we are aligning with - -> HardForkState f xs -- ^ State we are aligning - -> HardForkState f'' xs +align :: + forall xs f f' f''. + All SingleEraBlock xs => + InPairs (Translate f) xs -> + NP (f' -.-> f -.-> f'') xs -> + -- | State we are aligning with + HardForkState f' xs -> + -- | State we are aligning + HardForkState f xs -> + HardForkState f'' xs align fs updTip (HardForkState alignWith) (HardForkState toAlign) = - HardForkState . unI $ - Telescope.alignExtend - (InPairs.hmap (\f -> Require $ - \past -> Extend $ - \cur -> I $ - newCurrent f past cur) fs) - (hmap (fn_2 . liftUpdTip) updTip) - alignWith - toAlign - where - liftUpdTip :: (f' -.-> f -.-> f'') blk - -> Current f' blk -> Current f blk -> Current f'' blk - liftUpdTip f = lift . apFn . apFn f . currentState + HardForkState . unI $ + Telescope.alignExtend + ( InPairs.hmap + ( \f -> Require $ + \past -> Extend $ + \cur -> + I $ + newCurrent f past cur + ) + fs + ) + (hmap (fn_2 . 
liftUpdTip) updTip) + alignWith + toAlign + where + liftUpdTip :: + (f' -.-> f -.-> f'') blk -> + Current f' blk -> + Current f blk -> + Current f'' blk + liftUpdTip f = lift . apFn . apFn f . currentState - newCurrent :: Translate f blk blk' - -> K Past blk - -> Current f blk - -> (K Past blk, Current f blk') - newCurrent f (K past) curF = ( - K Past { pastStart = currentStart curF - , pastEnd = curEnd - } - , Current { currentStart = curEnd - , currentState = translateWith f - (boundEpoch curEnd) - (currentState curF) - } - ) - where - curEnd :: Bound - curEnd = pastEnd past + newCurrent :: + Translate f blk blk' -> + K Past blk -> + Current f blk -> + (K Past blk, Current f blk') + newCurrent f (K past) curF = + ( K + Past + { pastStart = currentStart curF + , pastEnd = curEnd + } + , Current + { currentStart = curEnd + , currentState = + translateWith + f + (boundEpoch curEnd) + (currentState curF) + } + ) + where + curEnd :: Bound + curEnd = pastEnd past {------------------------------------------------------------------------------- Summary/EpochInfo -------------------------------------------------------------------------------} -reconstructSummary :: History.Shape xs - -> TransitionInfo -- ^ At the tip - -> HardForkState f xs - -> History.Summary xs +reconstructSummary :: + History.Shape xs -> + -- | At the tip + TransitionInfo -> + HardForkState f xs -> + History.Summary xs reconstructSummary (History.Shape shape) transition (HardForkState st) = - History.Summary $ go shape st - where - go :: Exactly xs' EraParams - -> Telescope (K Past) (Current f) xs' - -> NonEmpty xs' EraSummary - go ExactlyNil t = case t of {} - go (ExactlyCons params ss) (TS (K Past{..}) t) = - NonEmptyCons (EraSummary pastStart (EraEnd pastEnd) params) $ go ss t - go (ExactlyCons params ss) (TZ Current{..}) = - case transition of - TransitionKnown epoch -> - -- We haven't reached the next era yet, but the transition is - -- already known. 
The safe zone applies from the start of the - -- next era. - let currentEnd = History.mkUpperBound params currentStart epoch - nextStart = currentEnd - in case ss of + History.Summary $ go shape st + where + go :: + Exactly xs' EraParams -> + Telescope (K Past) (Current f) xs' -> + NonEmpty xs' EraSummary + go ExactlyNil t = case t of {} + go (ExactlyCons params ss) (TS (K Past{..}) t) = + NonEmptyCons (EraSummary pastStart (EraEnd pastEnd) params) $ go ss t + go (ExactlyCons params ss) (TZ Current{..}) = + case transition of + TransitionKnown epoch -> + -- We haven't reached the next era yet, but the transition is + -- already known. The safe zone applies from the start of the + -- next era. + let currentEnd = History.mkUpperBound params currentStart epoch + nextStart = currentEnd + in case ss of ExactlyCons nextParams _ -> - NonEmptyCons EraSummary { - eraStart = currentStart + NonEmptyCons + EraSummary + { eraStart = currentStart , eraParams = params - , eraEnd = EraEnd currentEnd + , eraEnd = EraEnd currentEnd } - $ NonEmptyOne EraSummary { - eraStart = nextStart - , eraParams = nextParams - , eraEnd = applySafeZone - nextParams - nextStart - (boundSlot nextStart) - } - ExactlyNil -> + $ NonEmptyOne + EraSummary + { eraStart = nextStart + , eraParams = nextParams + , eraEnd = + applySafeZone + nextParams + nextStart + (boundSlot nextStart) + } + ExactlyNil -> -- HOWEVER, this code doesn't know what that next era is! This -- can arise when a node has not updated its code despite an -- imminent hard fork. @@ -209,49 +251,56 @@ reconstructSummary (History.Shape shape) transition (HardForkState st) = -- a similarly obvious pattern in -- 'mkSomeConsensusProtocolCardano' defined in the -- @cardano-node@ repo. 
- NonEmptyOne EraSummary { - eraStart = currentStart - , eraParams = params - , eraEnd = EraEnd currentEnd - } - TransitionUnknown ledgerTip -> NonEmptyOne $ EraSummary { - eraStart = currentStart - , eraParams = params - , eraEnd = applySafeZone - params - currentStart - -- Even if the safe zone is 0, the first slot at - -- which the next era could begin is the /next/ - (next ledgerTip) - } - -- 'TransitionImpossible' is used in one of two cases: we are in the - -- final era this chain will ever have (handled by the corresponding - -- 'UnsafeIndefiniteSafeZone' case within 'applySafeZone' below) or - -- this era is a future era that hasn't begun yet, in which case the - -- safe zone must start at the beginning of this era. - TransitionImpossible -> NonEmptyOne $ EraSummary { - eraStart = currentStart - , eraParams = params - , eraEnd = applySafeZone - params - currentStart - (boundSlot currentStart) - } + NonEmptyOne + EraSummary + { eraStart = currentStart + , eraParams = params + , eraEnd = EraEnd currentEnd + } + TransitionUnknown ledgerTip -> + NonEmptyOne $ + EraSummary + { eraStart = currentStart + , eraParams = params + , eraEnd = + applySafeZone + params + currentStart + -- Even if the safe zone is 0, the first slot at + -- which the next era could begin is the /next/ + (next ledgerTip) + } + -- 'TransitionImpossible' is used in one of two cases: we are in the + -- final era this chain will ever have (handled by the corresponding + -- 'UnsafeIndefiniteSafeZone' case within 'applySafeZone' below) or + -- this era is a future era that hasn't begun yet, in which case the + -- safe zone must start at the beginning of this era. + TransitionImpossible -> + NonEmptyOne $ + EraSummary + { eraStart = currentStart + , eraParams = params + , eraEnd = + applySafeZone + params + currentStart + (boundSlot currentStart) + } - -- Apply safe zone from the specified 'SlotNo' - -- - -- All arguments must be referring to or in the same era. 
- applySafeZone :: EraParams -> Bound -> SlotNo -> EraEnd - applySafeZone params@EraParams{..} start = - case eraSafeZone of - UnsafeIndefiniteSafeZone -> - const EraUnbounded - StandardSafeZone safeFromTip -> - EraEnd - . History.mkUpperBound params start - . History.slotToEpochBound params start - . History.addSlots safeFromTip + -- Apply safe zone from the specified 'SlotNo' + -- + -- All arguments must be referring to or in the same era. + applySafeZone :: EraParams -> Bound -> SlotNo -> EraEnd + applySafeZone params@EraParams{..} start = + case eraSafeZone of + UnsafeIndefiniteSafeZone -> + const EraUnbounded + StandardSafeZone safeFromTip -> + EraEnd + . History.mkUpperBound params start + . History.slotToEpochBound params start + . History.addSlots safeFromTip - next :: WithOrigin SlotNo -> SlotNo - next Origin = SlotNo 0 - next (NotOrigin s) = succ s + next :: WithOrigin SlotNo -> SlotNo + next Origin = SlotNo 0 + next (NotOrigin s) = succ s diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Instances.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Instances.hs index a7bb00fbf8..9540d2a64a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Instances.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Instances.hs @@ -12,33 +12,32 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.HardFork.Combinator.State.Instances ( - -- * Serialisation support +module Ouroboros.Consensus.HardFork.Combinator.State.Instances + ( -- * Serialisation support decodeCurrent , decodePast , encodeCurrent , encodePast ) where -import Cardano.Binary (enforceSize) -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding, encodeListLen) -import 
Codec.Serialise -import Data.Coerce -import Data.Proxy -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Strict -import qualified Data.SOP.Telescope as Telescope -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock -import Ouroboros.Consensus.HardFork.Combinator.Lifting -import Ouroboros.Consensus.HardFork.Combinator.State.Lift -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Prelude hiding (sequence) +import Cardano.Binary (enforceSize) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding, encodeListLen) +import Codec.Serialise +import Data.Coerce +import Data.Proxy +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Strict +import Data.SOP.Telescope qualified as Telescope +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.HardFork.Combinator.Abstract.SingleEraBlock +import Ouroboros.Consensus.HardFork.Combinator.Lifting +import Ouroboros.Consensus.HardFork.Combinator.State.Lift +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Prelude hiding (sequence) {------------------------------------------------------------------------------- SOP class instances @@ -47,18 +46,20 @@ import Prelude hiding (sequence) other SOP type; in particular, they deal with lifting functions to 'Current'. -------------------------------------------------------------------------------} -type instance Prod HardForkState = NP -type instance SListIN HardForkState = SListI -type instance AllN HardForkState c = All c +type instance Prod HardForkState = NP +type instance SListIN HardForkState = SListI +type instance AllN HardForkState c = All c type instance CollapseTo HardForkState a = a instance HAp HardForkState where - hap np (HardForkState st) = HardForkState $ + hap np (HardForkState st) = + HardForkState $ hap (map_NP' (Fn . lift . 
apFn) np) st instance HSequence HardForkState where - hctraverse' = \p f (HardForkState st) -> HardForkState <$> - hctraverse' p (liftM f) st + hctraverse' = \p f (HardForkState st) -> + HardForkState + <$> hctraverse' p (liftM f) st htraverse' = hctraverse' (Proxy @Top) hsequence' = htraverse' unComp @@ -66,14 +67,17 @@ instance HCollapse HardForkState where hcollapse = hcollapse . hmap currentState . Telescope.tip . getHardForkState instance HTrans HardForkState HardForkState where - htrans p t (HardForkState st) = HardForkState $ + htrans p t (HardForkState st) = + HardForkState $ htrans p (\(Current b fx) -> Current b $ t fx) st hcoerce :: - forall f g xs ys. AllZipN (Prod HardForkState) (LiftedCoercible f g) xs ys - => HardForkState f xs - -> HardForkState g ys - hcoerce (HardForkState st) = HardForkState $ + forall f g xs ys. + AllZipN (Prod HardForkState) (LiftedCoercible f g) xs ys => + HardForkState f xs -> + HardForkState g ys + hcoerce (HardForkState st) = + HardForkState $ htrans (Proxy @(LiftedCoercible f g)) (\(Current b fx) -> Current b $ coerce fx) @@ -85,24 +89,33 @@ type instance Same HardForkState = HardForkState Eq, Show, NoThunks -------------------------------------------------------------------------------} -deriving instance Eq (f blk) => Eq (Current f blk) -deriving instance Show (f blk) => Show (Current f blk) +deriving instance Eq (f blk) => Eq (Current f blk) +deriving instance Show (f blk) => Show (Current f blk) deriving instance NoThunks (f blk) => NoThunks (Current f blk) -deriving via LiftTelescope (K Past) (Current f) xs - instance ( All SingleEraBlock xs - , forall blk. SingleEraBlock blk => Show (f blk) - ) => Show (HardForkState f xs) - -deriving via LiftTelescope (K Past) (Current f) xs - instance ( All SingleEraBlock xs - , forall blk. SingleEraBlock blk => Eq (f blk) - ) => Eq (HardForkState f xs) - -deriving via LiftNamedTelescope "HardForkState" (K Past) (Current f) xs - instance ( All SingleEraBlock xs - , forall blk. 
SingleEraBlock blk => NoThunks (f blk) - ) => NoThunks (HardForkState f xs) +deriving via + LiftTelescope (K Past) (Current f) xs + instance + ( All SingleEraBlock xs + , forall blk. SingleEraBlock blk => Show (f blk) + ) => + Show (HardForkState f xs) + +deriving via + LiftTelescope (K Past) (Current f) xs + instance + ( All SingleEraBlock xs + , forall blk. SingleEraBlock blk => Eq (f blk) + ) => + Eq (HardForkState f xs) + +deriving via + LiftNamedTelescope "HardForkState" (K Past) (Current f) xs + instance + ( All SingleEraBlock xs + , forall blk. SingleEraBlock blk => NoThunks (f blk) + ) => + NoThunks (HardForkState f xs) {------------------------------------------------------------------------------- Serialisation @@ -112,32 +125,34 @@ deriving via LiftNamedTelescope "HardForkState" (K Past) (Current f) xs -------------------------------------------------------------------------------} encodeCurrent :: (f blk -> Encoding) -> Current f blk -> Encoding -encodeCurrent f Current{..} = mconcat [ - encodeListLen 2 +encodeCurrent f Current{..} = + mconcat + [ encodeListLen 2 , encode currentStart , f currentState ] decodeCurrent :: Decoder s (f blk) -> Decoder s (Current f blk) decodeCurrent f = do - enforceSize "decodeCurrent" 2 - currentStart <- decode - currentState <- f - return Current{..} + enforceSize "decodeCurrent" 2 + currentStart <- decode + currentState <- f + return Current{..} encodePast :: Past -> Encoding -encodePast Past{..} = mconcat [ - encodeListLen 2 +encodePast Past{..} = + mconcat + [ encodeListLen 2 , encode pastStart , encode pastEnd ] decodePast :: Decoder s Past decodePast = do - enforceSize "decodePast" 2 - pastStart <- decode - pastEnd <- decode - return Past{..} + enforceSize "decodePast" 2 + pastStart <- decode + pastEnd <- decode + return Past{..} instance Serialise (f blk) => Serialise (Current f blk) where encode = encodeCurrent encode diff --git 
a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Lift.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Lift.hs index c01fc8ec26..e4a3f79a37 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Lift.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Lift.hs @@ -1,14 +1,14 @@ -- | Lifting functions for the various types used in 'HardForkState' -- -- NOTE: These are internal and not exported in the toplevel @.State@ module. -module Ouroboros.Consensus.HardFork.Combinator.State.Lift ( - -- * Lifting functions on @f@ to @Current @f@ +module Ouroboros.Consensus.HardFork.Combinator.State.Lift + ( -- * Lifting functions on @f@ to @Current @f@ lift , liftM ) where -import Data.Functor.Identity -import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Data.Functor.Identity +import Ouroboros.Consensus.HardFork.Combinator.State.Types {------------------------------------------------------------------------------- Lifting functions on @f@ to @Current @f@ @@ -17,6 +17,7 @@ import Ouroboros.Consensus.HardFork.Combinator.State.Types lift :: (f blk -> f' blk) -> Current f blk -> Current f' blk lift f = runIdentity . liftM (Identity . 
f) -liftM :: Functor m - => (f blk -> m (f' blk)) -> Current f blk -> m (Current f' blk) +liftM :: + Functor m => + (f blk -> m (f' blk)) -> Current f blk -> m (Current f' blk) liftM f (Current start cur) = Current start <$> f cur diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Types.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Types.hs index bf282ba872..43a54dfd65 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Types.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/State/Types.hs @@ -4,12 +4,13 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeOperators #-} -module Ouroboros.Consensus.HardFork.Combinator.State.Types ( - -- * Main types +module Ouroboros.Consensus.HardFork.Combinator.State.Types + ( -- * Main types Current (..) , HardForkState (..) , Past (..) , sequenceHardForkState + -- * Supporting types , CrossEraForecaster (..) , TransitionInfo (..) 
@@ -20,20 +21,20 @@ module Ouroboros.Consensus.HardFork.Combinator.State.Types ( , translateLedgerTablesWith ) where -import Control.Monad.Except -import qualified Data.Map.Strict as Map -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Strict -import Data.SOP.Telescope (Telescope) -import qualified Data.SOP.Telescope as Telescope -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.HardFork.History (Bound) -import Ouroboros.Consensus.Ledger.Basics -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff +import Control.Monad.Except +import Data.Map.Strict qualified as Map +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Strict +import Data.SOP.Telescope (Telescope) +import Data.SOP.Telescope qualified as Telescope +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.HardFork.History (Bound) +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff {------------------------------------------------------------------------------- Types @@ -80,35 +81,38 @@ import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff -- validation check. Note that in this particular example, the ledger state will -- always be ahead of the consensus state, never behind; 'alignExtend' can be -- used in this case. 
-newtype HardForkState f xs = HardForkState { - getHardForkState :: Telescope (K Past) (Current f) xs - } deriving (Generic) +newtype HardForkState f xs = HardForkState + { getHardForkState :: Telescope (K Past) (Current f) xs + } + deriving Generic -- | Information about the current era -data Current f blk = Current { - currentStart :: !Bound - , currentState :: !(f blk) - } - deriving (Generic) +data Current f blk = Current + { currentStart :: !Bound + , currentState :: !(f blk) + } + deriving Generic -- | Information about a past era -data Past = Past { - pastStart :: !Bound - , pastEnd :: !Bound - } +data Past = Past + { pastStart :: !Bound + , pastEnd :: !Bound + } deriving (Eq, Show, Generic, NoThunks) -- | Thin wrapper around 'Telescope.sequence' -sequenceHardForkState :: forall m f xs. (All Top xs, Functor m) - => HardForkState (m :.: f) xs -> m (HardForkState f xs) +sequenceHardForkState :: + forall m f xs. + (All Top xs, Functor m) => + HardForkState (m :.: f) xs -> m (HardForkState f xs) sequenceHardForkState (HardForkState tel) = - fmap HardForkState - $ Telescope.sequence - $ hmap sequenceCurrent tel - where - sequenceCurrent :: Current (m :.: f) a -> (m :.: Current f) a - sequenceCurrent (Current start state) = - Comp $ Current start <$> unComp state + fmap HardForkState $ + Telescope.sequence $ + hmap sequenceCurrent tel + where + sequenceCurrent :: Current (m :.: f) a -> (m :.: Current f) a + sequenceCurrent (Current start state) = + Comp $ Current start <$> unComp state {------------------------------------------------------------------------------- Supporting types @@ -117,9 +121,9 @@ sequenceHardForkState (HardForkState tel) = -- | Translate @f x@ to @f y@ across an era transition -- -- Typically @f@ will be 'LedgerState' or 'WrapChainDepState'. 
-newtype Translate f x y = Translate { - translateWith :: EpochNo -> f x -> f y - } +newtype Translate f x y = Translate + { translateWith :: EpochNo -> f x -> f y + } -- | Forecast a @view y@ from a @state x@ across an era transition. -- @@ -127,17 +131,21 @@ newtype Translate f x y = Translate { -- 'SlotNo' we're constructing a forecast for. This enables the translation -- function to take into account any scheduled changes that the final ledger -- view in the preceding era might have. -newtype CrossEraForecaster state view x y = CrossEraForecaster { - crossEraForecastWith :: - Bound -- 'Bound' of the transition (start of the new era) - -> SlotNo -- 'SlotNo' we're constructing a forecast for - -> state x EmptyMK - -> Except OutsideForecastRange (view y) - } +newtype CrossEraForecaster state view x y = CrossEraForecaster + { crossEraForecastWith :: + Bound -> -- 'Bound' of the transition (start of the new era) + SlotNo -> -- 'SlotNo' we're constructing a forecast for + state x EmptyMK -> + Except OutsideForecastRange (view y) + } -- | Translate a 'LedgerState' across an era transition. -newtype TranslateLedgerState x y = TranslateLedgerState { - -- | How to translate a 'LedgerState' during the era transition. +newtype TranslateLedgerState x y = TranslateLedgerState + { translateLedgerStateWith :: + EpochNo -> + LedgerState x EmptyMK -> + LedgerState y DiffMK + -- ^ How to translate a 'LedgerState' during the era transition. -- -- When translating between eras, it can be the case that values are modified, -- thus requiring this to be a @DiffMK@ on the return type. If no tables are @@ -154,23 +162,18 @@ newtype TranslateLedgerState x y = TranslateLedgerState { -- related to the AVVMs. In particular they were deleted and included in the -- reserves. See the code that performs the translation Shelley->Allegra for -- more information. 
- translateLedgerStateWith :: - EpochNo - -> LedgerState x EmptyMK - -> LedgerState y DiffMK } -- | Translate a 'LedgerTables' across an era transition. -data TranslateLedgerTables x y = TranslateLedgerTables { - -- | Translate a 'TxIn' across an era transition. - -- - -- See 'translateLedgerTablesWith'. - translateTxInWith :: !(TxIn (LedgerState x) -> TxIn (LedgerState y)) - - -- | Translate a 'TxOut' across an era transition. - -- - -- See 'translateLedgerTablesWith'. +data TranslateLedgerTables x y = TranslateLedgerTables + { translateTxInWith :: !(TxIn (LedgerState x) -> TxIn (LedgerState y)) + -- ^ Translate a 'TxIn' across an era transition. + -- + -- See 'translateLedgerTablesWith'. , translateTxOutWith :: !(TxOut (LedgerState x) -> TxOut (LedgerState y)) + -- ^ Translate a 'TxOut' across an era transition. + -- + -- See 'translateLedgerTablesWith'. } newtype TranslateTxOut x y = TranslateTxOut (TxOut (LedgerState x) -> TxOut (LedgerState y)) @@ -200,12 +203,12 @@ newtype TranslateTxOut x y = TranslateTxOut (TxOut (LedgerState x) -> TxOut (Led -- previous eras, so it will be called only when crossing era boundaries, -- therefore the translation won't be equivalent to 'id'. translateLedgerTablesWith :: - Ord (TxIn (LedgerState y)) - => TranslateLedgerTables x y - -> LedgerTables (LedgerState x) DiffMK - -> LedgerTables (LedgerState y) DiffMK + Ord (TxIn (LedgerState y)) => + TranslateLedgerTables x y -> + LedgerTables (LedgerState x) DiffMK -> + LedgerTables (LedgerState y) DiffMK translateLedgerTablesWith f = - LedgerTables + LedgerTables . DiffMK . Diff.Diff . Map.mapKeys (translateTxInWith f) @@ -213,12 +216,12 @@ translateLedgerTablesWith f = . getDiffMK . mapMK (translateTxOutWith f) . 
getLedgerTables - where - getDiff (Diff.Diff m) = m + where + getDiff (Diff.Diff m) = m -- | Knowledge in a particular era of the transition to the next era -data TransitionInfo = - -- | No transition is yet known for this era +data TransitionInfo + = -- | No transition is yet known for this era -- We instead record the ledger tip (which must be in /this/ era) -- -- NOTE: If we are forecasting, this will be set to the slot number of the @@ -227,11 +230,9 @@ data TransitionInfo = -- range of that 'EpochInfo' will extend a safe zone from that /past/ -- ledger state. TransitionUnknown !(WithOrigin SlotNo) - - -- | Transition to the next era is known to happen at this 'EpochNo' - | TransitionKnown !EpochNo - - -- | The transition is impossible + | -- | Transition to the next era is known to happen at this 'EpochNo' + TransitionKnown !EpochNo + | -- | The transition is impossible -- -- This can be due to one of two reasons: -- @@ -239,5 +240,5 @@ data TransitionInfo = -- * This era has not actually begun yet (we are forecasting). In this case, -- we cannot look past the safe zone of this era and hence, by definition, -- the transition to the /next/ era cannot happen. 
- | TransitionImpossible + TransitionImpossible deriving (Show, Generic, NoThunks) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Translation.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Translation.hs index 884f4f21d3..1a6b2499b9 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Translation.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Combinator/Translation.hs @@ -2,44 +2,50 @@ {-# LANGUAGE DerivingVia #-} {-# LANGUAGE FlexibleContexts #-} -module Ouroboros.Consensus.HardFork.Combinator.Translation ( - -- * Translate from one era to the next +module Ouroboros.Consensus.HardFork.Combinator.Translation + ( -- * Translate from one era to the next EraTranslation (..) , ipTranslateTxOut , trivialEraTranslation ) where -import Data.SOP.Constraint -import Data.SOP.InPairs (InPairs (..), RequiringBoth (..)) -import qualified Data.SOP.InPairs as InPairs -import NoThunks.Class (NoThunks, OnlyCheckWhnfNamed (..)) -import Ouroboros.Consensus.HardFork.Combinator.State.Types -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.TypeFamilyWrappers +import Data.SOP.Constraint +import Data.SOP.InPairs (InPairs (..), RequiringBoth (..)) +import Data.SOP.InPairs qualified as InPairs +import NoThunks.Class (NoThunks, OnlyCheckWhnfNamed (..)) +import Ouroboros.Consensus.HardFork.Combinator.State.Types +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.TypeFamilyWrappers {------------------------------------------------------------------------------- Translate from one era to the next -------------------------------------------------------------------------------} -data EraTranslation xs = EraTranslation { - translateLedgerState :: !(InPairs (RequiringBoth WrapLedgerConfig TranslateLedgerState ) xs) - , translateLedgerTables :: !(InPairs TranslateLedgerTables xs) - , 
translateChainDepState :: !(InPairs (RequiringBoth WrapConsensusConfig (Translate WrapChainDepState)) xs) - , crossEraForecast :: !(InPairs (RequiringBoth WrapLedgerConfig (CrossEraForecaster LedgerState WrapLedgerView)) xs) - } - deriving NoThunks - via OnlyCheckWhnfNamed "EraTranslation" (EraTranslation xs) +data EraTranslation xs = EraTranslation + { translateLedgerState :: + !(InPairs (RequiringBoth WrapLedgerConfig TranslateLedgerState) xs) + , translateLedgerTables :: + !(InPairs TranslateLedgerTables xs) + , translateChainDepState :: + !(InPairs (RequiringBoth WrapConsensusConfig (Translate WrapChainDepState)) xs) + , crossEraForecast :: + !(InPairs (RequiringBoth WrapLedgerConfig (CrossEraForecaster LedgerState WrapLedgerView)) xs) + } + deriving + NoThunks + via OnlyCheckWhnfNamed "EraTranslation" (EraTranslation xs) ipTranslateTxOut :: - All Top xs - => EraTranslation xs - -> InPairs TranslateTxOut xs + All Top xs => + EraTranslation xs -> + InPairs TranslateTxOut xs ipTranslateTxOut = InPairs.hmap (TranslateTxOut . translateTxOutWith) . 
translateLedgerTables trivialEraTranslation :: EraTranslation '[blk] -trivialEraTranslation = EraTranslation { - translateLedgerState = PNil - , translateLedgerTables = PNil - , crossEraForecast = PNil +trivialEraTranslation = + EraTranslation + { translateLedgerState = PNil + , translateLedgerTables = PNil + , crossEraForecast = PNil , translateChainDepState = PNil } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History.hs index 9a1e292485..00e63b7678 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History.hs @@ -3,9 +3,9 @@ -- > import qualified Ouroboros.Consensus.HardFork.History as History module Ouroboros.Consensus.HardFork.History (module X) where -import Ouroboros.Consensus.HardFork.History.Caching as X -import Ouroboros.Consensus.HardFork.History.EpochInfo as X -import Ouroboros.Consensus.HardFork.History.EraParams as X -import Ouroboros.Consensus.HardFork.History.Qry as X -import Ouroboros.Consensus.HardFork.History.Summary as X -import Ouroboros.Consensus.HardFork.History.Util as X +import Ouroboros.Consensus.HardFork.History.Caching as X +import Ouroboros.Consensus.HardFork.History.EpochInfo as X +import Ouroboros.Consensus.HardFork.History.EraParams as X +import Ouroboros.Consensus.HardFork.History.Qry as X +import Ouroboros.Consensus.HardFork.History.Summary as X +import Ouroboros.Consensus.HardFork.History.Util as X diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Caching.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Caching.hs index 4e5d2b4643..690859229d 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Caching.hs +++ 
b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Caching.hs @@ -4,51 +4,56 @@ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE ScopedTypeVariables #-} -module Ouroboros.Consensus.HardFork.History.Caching ( - RunWithCachedSummary (..) +module Ouroboros.Consensus.HardFork.History.Caching + ( RunWithCachedSummary (..) , runWithCachedSummary ) where -import Data.Kind (Type) -import Ouroboros.Consensus.HardFork.History.Qry -import Ouroboros.Consensus.HardFork.History.Summary -import Ouroboros.Consensus.Util.IOLike +import Data.Kind (Type) +import Ouroboros.Consensus.HardFork.History.Qry +import Ouroboros.Consensus.HardFork.History.Summary +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- Caching the summary -------------------------------------------------------------------------------} -- | Stateful abstraction to execute queries -data RunWithCachedSummary (xs :: [Type]) m = RunWithCachedSummary { - -- | Run the specified query - -- - -- If the query fails with a 'PastHorizonException', it will update its - -- internal state (compute a new summary) and try again. If that /still/ - -- fails, the 'PastHorizonException' is returned. - -- - cachedRunQuery :: forall a. Qry a - -> STM m (Either PastHorizonException a) - } +data RunWithCachedSummary (xs :: [Type]) m = RunWithCachedSummary + { cachedRunQuery :: + forall a. + Qry a -> + STM m (Either PastHorizonException a) + -- ^ Run the specified query + -- + -- If the query fails with a 'PastHorizonException', it will update its + -- internal state (compute a new summary) and try again. If that /still/ + -- fails, the 'PastHorizonException' is returned. + } -- | Construct 'RunWithCachedSummary' given action that computes the summary -- -- Most use cases will probably construct this action from an action that reads -- the ledger state and then computes the summary from that. -runWithCachedSummary :: forall m xs. 
MonadSTM m - => STM m (Summary xs) - -> m (RunWithCachedSummary xs m) +runWithCachedSummary :: + forall m xs. + MonadSTM m => + STM m (Summary xs) -> + m (RunWithCachedSummary xs m) runWithCachedSummary getLatestSummary = do - initSummary <- atomically getLatestSummary - var <- newTVarIO initSummary - return $ RunWithCachedSummary { cachedRunQuery = go var } - where - go :: StrictTVar m (Summary xs) - -> Qry a -> STM m (Either PastHorizonException a) - go var q = do - summary <- readTVar var - case runQuery q summary of - Right a -> return (Right a) - Left PastHorizon{} -> do - summary' <- getLatestSummary - writeTVar var summary' - return $ runQuery q summary' + initSummary <- atomically getLatestSummary + var <- newTVarIO initSummary + return $ RunWithCachedSummary{cachedRunQuery = go var} + where + go :: + StrictTVar m (Summary xs) -> + Qry a -> + STM m (Either PastHorizonException a) + go var q = do + summary <- readTVar var + case runQuery q summary of + Right a -> return (Right a) + Left PastHorizon{} -> do + summary' <- getLatestSummary + writeTVar var summary' + return $ runQuery q summary' diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/EpochInfo.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/EpochInfo.hs index a3187974ad..21532400d1 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/EpochInfo.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/EpochInfo.hs @@ -3,20 +3,20 @@ {-# LANGUAGE ScopedTypeVariables #-} -- | Derive 'EpochInfo' -module Ouroboros.Consensus.HardFork.History.EpochInfo ( - dummyEpochInfo +module Ouroboros.Consensus.HardFork.History.EpochInfo + ( dummyEpochInfo , interpreterToEpochInfo , summaryToEpochInfo , toPureEpochInfo ) where -import Cardano.Slotting.EpochInfo.API -import Control.Exception (throw) -import Control.Monad.Except (Except, runExcept, throwError) -import 
Data.Functor.Identity -import GHC.Stack -import Ouroboros.Consensus.HardFork.History.Qry -import Ouroboros.Consensus.HardFork.History.Summary +import Cardano.Slotting.EpochInfo.API +import Control.Exception (throw) +import Control.Monad.Except (Except, runExcept, throwError) +import Data.Functor.Identity +import GHC.Stack +import Ouroboros.Consensus.HardFork.History.Qry +import Ouroboros.Consensus.HardFork.History.Summary {------------------------------------------------------------------------------- Translation to EpochInfo @@ -27,32 +27,34 @@ summaryToEpochInfo :: forall xs. Summary xs -> EpochInfo (Except PastHorizonExce summaryToEpochInfo = interpreterToEpochInfo . mkInterpreter -- | Construct an 'EpochInfo' for a /snapshot/ of the ledger state -interpreterToEpochInfo :: forall xs. Interpreter xs - -> EpochInfo (Except PastHorizonException) -interpreterToEpochInfo i = EpochInfo { - epochInfoSize_ = \e -> interpretQuery' (epochToSize e) +interpreterToEpochInfo :: + forall xs. + Interpreter xs -> + EpochInfo (Except PastHorizonException) +interpreterToEpochInfo i = + EpochInfo + { epochInfoSize_ = \e -> interpretQuery' (epochToSize e) , epochInfoFirst_ = \e -> interpretQuery' (epochToSlot' e) , epochInfoEpoch_ = \s -> interpretQuery' (fst <$> slotToEpoch' s) - , epochInfoSlotToRelativeTime_ = \s -> interpretQuery' (fst <$> slotToWallclock s) - , epochInfoSlotLength_ = \s -> interpretQuery' (slotToSlotLength s) } - where - interpretQuery' :: HasCallStack => Qry a -> Except PastHorizonException a - interpretQuery' q = either throwError pure $ interpretQuery i q + where + interpretQuery' :: HasCallStack => Qry a -> Except PastHorizonException a + interpretQuery' q = either throwError pure $ interpretQuery i q -- | A dummy 'EpochInfo' that always throws an 'error'. -- -- To be used as a placeholder before a summary is available. 
dummyEpochInfo :: EpochInfo (Except PastHorizonException) -dummyEpochInfo = EpochInfo { - epochInfoSize_ = \_ -> error "dummyEpochInfo used" - , epochInfoFirst_ = \_ -> error "dummyEpochInfo used" - , epochInfoEpoch_ = \_ -> error "dummyEpochInfo used" +dummyEpochInfo = + EpochInfo + { epochInfoSize_ = \_ -> error "dummyEpochInfo used" + , epochInfoFirst_ = \_ -> error "dummyEpochInfo used" + , epochInfoEpoch_ = \_ -> error "dummyEpochInfo used" , epochInfoSlotToRelativeTime_ = \_ -> error "dummyEpochInfo used" - , epochInfoSlotLength_ = \_ -> error "dummyEpochInfo used" + , epochInfoSlotLength_ = \_ -> error "dummyEpochInfo used" } -- | Interpret the 'PastHorizonException' as a _pure exception_ via 'throw' diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/EraParams.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/EraParams.hs index 2a046f24f3..c45acc8927 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/EraParams.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/EraParams.hs @@ -8,26 +8,27 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.HardFork.History.EraParams ( - -- * API +module Ouroboros.Consensus.HardFork.History.EraParams + ( -- * API EraParams (..) , SafeZone (..) 
+ -- * Defaults , defaultEraParams ) where -import Cardano.Binary (enforceSize) -import Cardano.Ledger.BaseTypes (unNonZero) -import Codec.CBOR.Decoding (Decoder, decodeListLen, decodeWord8) -import Codec.CBOR.Encoding (Encoding, encodeListLen, encodeWord8) -import Codec.Serialise (Serialise (..)) -import Control.Monad (void) -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime.WallClock.Types -import Ouroboros.Consensus.Config.SecurityParam +import Cardano.Binary (enforceSize) +import Cardano.Ledger.BaseTypes (unNonZero) +import Codec.CBOR.Decoding (Decoder, decodeListLen, decodeWord8) +import Codec.CBOR.Encoding (Encoding, encodeListLen, encodeWord8) +import Codec.Serialise (Serialise (..)) +import Control.Monad (void) +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime.WallClock.Types +import Ouroboros.Consensus.Config.SecurityParam {------------------------------------------------------------------------------- OVERVIEW @@ -130,14 +131,14 @@ import Ouroboros.Consensus.Config.SecurityParam -------------------------------------------------------------------------------} -- | Parameters that can vary across hard forks -data EraParams = EraParams { - eraEpochSize :: !EpochSize - , eraSlotLength :: !SlotLength - , eraSafeZone :: !SafeZone - , eraGenesisWin :: !GenesisWindow - } - deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) +data EraParams = EraParams + { eraEpochSize :: !EpochSize + , eraSlotLength :: !SlotLength + , eraSafeZone :: !SafeZone + , eraGenesisWin :: !GenesisWindow + } + deriving stock (Show, Eq, Generic) + deriving anyclass NoThunks -- | Default 'EraParams' -- @@ -149,16 +150,17 @@ data EraParams = EraParams { -- -- This is primarily useful for tests. 
defaultEraParams :: SecurityParam -> SlotLength -> EraParams -defaultEraParams (SecurityParam k) slotLength = EraParams { - eraEpochSize = EpochSize (unNonZero k * 10) +defaultEraParams (SecurityParam k) slotLength = + EraParams + { eraEpochSize = EpochSize (unNonZero k * 10) , eraSlotLength = slotLength - , eraSafeZone = StandardSafeZone (unNonZero k * 2) + , eraSafeZone = StandardSafeZone (unNonZero k * 2) , eraGenesisWin = GenesisWindow (unNonZero k * 2) } -- | Zone in which it is guaranteed that no hard fork can take place -data SafeZone = - -- | Standard safe zone +data SafeZone + = -- | Standard safe zone -- -- We record -- @@ -167,8 +169,7 @@ data SafeZone = -- guaranteed to have @k@ blocks. -- * Optionally, an 'EpochNo' before which no hard fork can take place. StandardSafeZone !Word64 - - -- | Pretend the transition to the next era will not take place. + | -- | Pretend the transition to the next era will not take place. -- -- This constructor is marked as unsafe because it effectively extends -- the safe zone of this era indefinitely into the future. This means that @@ -187,9 +188,9 @@ data SafeZone = -- -- This constructor can be regarded as an " extreme " version of -- 'LowerBound', and can be used for similar reasons. 
- | UnsafeIndefiniteSafeZone - deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) + UnsafeIndefiniteSafeZone + deriving stock (Show, Eq, Generic) + deriving anyclass NoThunks {------------------------------------------------------------------------------- Serialisation @@ -197,24 +198,26 @@ data SafeZone = instance Serialise SafeZone where encode = \case - StandardSafeZone safeFromTip -> mconcat [ - encodeListLen 3 + StandardSafeZone safeFromTip -> + mconcat + [ encodeListLen 3 , encodeWord8 0 , encode safeFromTip - -- For backward compatibility we still encode safeBeforeEpoch - , encodeSafeBeforeEpoch + , -- For backward compatibility we still encode safeBeforeEpoch + encodeSafeBeforeEpoch ] - UnsafeIndefiniteSafeZone -> mconcat [ - encodeListLen 1 + UnsafeIndefiniteSafeZone -> + mconcat + [ encodeListLen 1 , encodeWord8 1 ] decode = do size <- decodeListLen - tag <- decodeWord8 + tag <- decodeWord8 case (size, tag) of (3, 0) -> StandardSafeZone <$> decode <* decodeSafeBeforeEpoch (1, 1) -> return UnsafeIndefiniteSafeZone - _ -> fail $ "SafeZone: invalid size and tag " <> show (size, tag) + _ -> fail $ "SafeZone: invalid size and tag " <> show (size, tag) -- | Artificial encoder for backward compatibility, see #2646. encodeSafeBeforeEpoch :: Encoding @@ -223,16 +226,17 @@ encodeSafeBeforeEpoch = encodeListLen 1 <> encodeWord8 0 -- | Artificial decoder for backward compatibility, see #2646. 
decodeSafeBeforeEpoch :: Decoder s () decodeSafeBeforeEpoch = do - size <- decodeListLen - tag <- decodeWord8 - case (size, tag) of - (1, 0) -> return () - (2, 1) -> void $ decode @EpochNo - _ -> fail $ "SafeBeforeEpoch: invalid size and tag " <> show (size, tag) + size <- decodeListLen + tag <- decodeWord8 + case (size, tag) of + (1, 0) -> return () + (2, 1) -> void $ decode @EpochNo + _ -> fail $ "SafeBeforeEpoch: invalid size and tag " <> show (size, tag) instance Serialise EraParams where - encode EraParams{..} = mconcat $ [ - encodeListLen 4 + encode EraParams{..} = + mconcat $ + [ encodeListLen 4 , encode (unEpochSize eraEpochSize) , encode eraSlotLength , encode eraSafeZone @@ -240,9 +244,9 @@ instance Serialise EraParams where ] decode = do - enforceSize "EraParams" 4 - eraEpochSize <- EpochSize <$> decode - eraSlotLength <- decode - eraSafeZone <- decode - eraGenesisWin <- GenesisWindow <$> decode - return EraParams{..} + enforceSize "EraParams" 4 + eraEpochSize <- EpochSize <$> decode + eraSlotLength <- decode + eraSafeZone <- decode + eraGenesisWin <- GenesisWindow <$> decode + return EraParams{..} diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Qry.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Qry.hs index ce8f5331a7..9c4844c752 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Qry.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Qry.hs @@ -12,22 +12,26 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.HardFork.History.Qry ( - -- * Qry +module Ouroboros.Consensus.HardFork.History.Qry + ( -- * Qry Expr (..) , PastHorizonException (..) 
, qryFromExpr , runQuery , runQueryPure , runQueryThrow + -- ** opaque , Qry + -- * Interpreter , interpretQuery , mkInterpreter , unsafeExtendSafeZone + -- ** opaque , Interpreter + -- * Specific queries , epochToSize , epochToSlot @@ -40,30 +44,30 @@ module Ouroboros.Consensus.HardFork.History.Qry ( , wallclockToSlot ) where -import Codec.Serialise (Serialise (..)) -import Control.Exception (throw) -import Control.Monad (ap, guard, liftM, (>=>)) -import Control.Monad.Except () -import Data.Bifunctor -import Data.Fixed (divMod') -import Data.Foldable (toList) -import Data.Functor.Identity -import Data.Kind (Type) -import Data.SOP.NonEmpty (NonEmpty (..)) -import Data.SOP.Sing (SListI) -import Data.Time hiding (UTCTime) -import Data.Word -import GHC.Generics (Generic) -import GHC.Show (showSpace) -import GHC.Stack -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime.WallClock.Types -import Ouroboros.Consensus.HardFork.History.EraParams -import Ouroboros.Consensus.HardFork.History.Summary -import Ouroboros.Consensus.HardFork.History.Util -import Ouroboros.Consensus.Util (Some (..)) -import Ouroboros.Consensus.Util.IOLike -import Quiet +import Codec.Serialise (Serialise (..)) +import Control.Exception (throw) +import Control.Monad (ap, guard, liftM, (>=>)) +import Control.Monad.Except () +import Data.Bifunctor +import Data.Fixed (divMod') +import Data.Foldable (toList) +import Data.Functor.Identity +import Data.Kind (Type) +import Data.SOP.NonEmpty (NonEmpty (..)) +import Data.SOP.Sing (SListI) +import Data.Time hiding (UTCTime) +import Data.Word +import GHC.Generics (Generic) +import GHC.Show (showSpace) +import GHC.Stack +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime.WallClock.Types +import Ouroboros.Consensus.HardFork.History.EraParams +import Ouroboros.Consensus.HardFork.History.Summary +import Ouroboros.Consensus.HardFork.History.Util +import Ouroboros.Consensus.Util (Some (..)) +import 
Ouroboros.Consensus.Util.IOLike +import Quiet {------------------------------------------------------------------------------- Internal: reified queries @@ -157,12 +161,12 @@ instance Functor Qry where fmap = liftM instance Applicative Qry where - pure = QPure + pure = QPure (<*>) = ap instance Monad Qry where return = pure - QPure a >>= k = k a + QPure a >>= k = k a QExpr e f >>= k = QExpr e (f >=> k) -- | Construct a 'Qry' from a closed 'Expr' @@ -173,17 +177,17 @@ qryFromExpr e = QExpr (ClosedExpr e) QPure Clarifying newtypes -------------------------------------------------------------------------------} -newtype TimeInEra = TimeInEra { getTimeInEra :: NominalDiffTime } deriving (Generic) -newtype TimeInSlot = TimeInSlot { getTimeInSlot :: NominalDiffTime } deriving (Generic) -newtype SlotInEra = SlotInEra { getSlotInEra :: Word64 } deriving (Generic) -newtype SlotInEpoch = SlotInEpoch { getSlotInEpoch :: Word64 } deriving (Generic) -newtype EpochInEra = EpochInEra { getEpochInEra :: Word64 } deriving (Generic) +newtype TimeInEra = TimeInEra {getTimeInEra :: NominalDiffTime} deriving Generic +newtype TimeInSlot = TimeInSlot {getTimeInSlot :: NominalDiffTime} deriving Generic +newtype SlotInEra = SlotInEra {getSlotInEra :: Word64} deriving Generic +newtype SlotInEpoch = SlotInEpoch {getSlotInEpoch :: Word64} deriving Generic +newtype EpochInEra = EpochInEra {getEpochInEra :: Word64} deriving Generic -deriving via Quiet TimeInEra instance Show TimeInEra -deriving via Quiet TimeInSlot instance Show TimeInSlot -deriving via Quiet SlotInEra instance Show SlotInEra +deriving via Quiet TimeInEra instance Show TimeInEra +deriving via Quiet TimeInSlot instance Show TimeInSlot +deriving via Quiet SlotInEra instance Show SlotInEra deriving via Quiet SlotInEpoch instance Show SlotInEpoch -deriving via Quiet EpochInEra instance Show EpochInEra +deriving via Quiet EpochInEra instance Show EpochInEra 
{------------------------------------------------------------------------------- Expressions @@ -195,41 +199,35 @@ data ClosedExpr a = ClosedExpr (forall f. Expr f a) data Expr (f :: Type -> Type) :: Type -> Type where -- PHOAS infrastructure - EVar :: f a -> Expr f a - ELit :: Show a => a -> Expr f a - ELet :: Expr f a -> (f a -> Expr f b) -> Expr f b - + EVar :: f a -> Expr f a + ELit :: Show a => a -> Expr f a + ELet :: Expr f a -> (f a -> Expr f b) -> Expr f b -- Support for pairs makes expressions more easily composable EPair :: Expr f a -> Expr f b -> Expr f (a, b) - EFst :: Expr f (a, b) -> Expr f a - ESnd :: Expr f (a, b) -> Expr f b - + EFst :: Expr f (a, b) -> Expr f a + ESnd :: Expr f (a, b) -> Expr f b -- Convert from absolute to era-relative - EAbsToRelTime :: Expr f RelativeTime -> Expr f TimeInEra - EAbsToRelSlot :: Expr f SlotNo -> Expr f SlotInEra - EAbsToRelEpoch :: Expr f EpochNo -> Expr f EpochInEra - + EAbsToRelTime :: Expr f RelativeTime -> Expr f TimeInEra + EAbsToRelSlot :: Expr f SlotNo -> Expr f SlotInEra + EAbsToRelEpoch :: Expr f EpochNo -> Expr f EpochInEra -- Convert from era-relative to absolute - ERelToAbsTime :: Expr f TimeInEra -> Expr f RelativeTime - ERelToAbsSlot :: Expr f (SlotInEra, TimeInSlot) -> Expr f SlotNo + ERelToAbsTime :: Expr f TimeInEra -> Expr f RelativeTime + ERelToAbsSlot :: Expr f (SlotInEra, TimeInSlot) -> Expr f SlotNo ERelToAbsEpoch :: Expr f (EpochInEra, SlotInEpoch) -> Expr f EpochNo - -- Convert between relative values - ERelTimeToSlot :: Expr f TimeInEra -> Expr f (SlotInEra, TimeInSlot) - ERelSlotToTime :: Expr f SlotInEra -> Expr f TimeInEra - ERelSlotToEpoch :: Expr f SlotInEra -> Expr f (EpochInEra, SlotInEpoch) + ERelTimeToSlot :: Expr f TimeInEra -> Expr f (SlotInEra, TimeInSlot) + ERelSlotToTime :: Expr f SlotInEra -> Expr f TimeInEra + ERelSlotToEpoch :: Expr f SlotInEra -> Expr f (EpochInEra, SlotInEpoch) ERelEpochToSlot :: Expr f EpochInEra -> Expr f SlotInEra - -- Get era parameters -- The 
arguments are used for bound checks - ESlotLength :: Expr f SlotNo -> Expr f SlotLength - EEpochSize :: Expr f EpochNo -> Expr f EpochSize - + ESlotLength :: Expr f SlotNo -> Expr f SlotLength + EEpochSize :: Expr f EpochNo -> Expr f EpochSize EGenesisWindow :: Expr f SlotNo -> Expr f GenesisWindow {------------------------------------------------------------------------------- @@ -238,113 +236,112 @@ data Expr (f :: Type -> Type) :: Type -> Type where evalExprInEra :: EraSummary -> ClosedExpr a -> Maybe a evalExprInEra EraSummary{..} = \(ClosedExpr e) -> go e - where - EraParams{..} = eraParams - slotLen = getSlotLength eraSlotLength - epochSize = unEpochSize eraEpochSize - - guardEnd :: (Bound -> Bool) -> Maybe () - guardEnd p = - case eraEnd of - EraUnbounded -> return () - EraEnd b -> guard $ p b - - go :: Expr Identity a -> Maybe a - go (EVar a) = - return $ runIdentity a - go (ELet e f) = - go e >>= go . f . Identity - - -- Literals and pairs - go (ELit i) = - return i - go (EPair e e') = do - x <- go e - y <- go e' - return (x, y) - go (EFst e) = - fst <$> go e - go (ESnd e) = - snd <$> go e - - -- Convert absolute to relative - -- - -- The guards here justify the subtractions. - - go (EAbsToRelTime expr) = do - t <- go expr - guard (t >= boundTime eraStart) - return $ TimeInEra (t `diffRelTime` boundTime eraStart) - go (EAbsToRelSlot expr) = do - s <- go expr - guard (s >= boundSlot eraStart) - return $ SlotInEra (countSlots s (boundSlot eraStart)) - go (EAbsToRelEpoch expr) = do - e <- go expr - guard (e >= boundEpoch eraStart) - return $ EpochInEra (countEpochs e (boundEpoch eraStart)) - - -- Convert relative to absolute - -- - -- As justified by the proof above, the guards treat the upper bound - -- as inclusive. 
- - go (ERelToAbsTime expr) = do - t <- go expr - let absTime = getTimeInEra t `addRelTime` boundTime eraStart - guardEnd $ \end -> absTime <= boundTime end - return absTime - go (ERelToAbsSlot expr) = do - (s, t) <- go expr - let absSlot = addSlots (getSlotInEra s) (boundSlot eraStart) - guardEnd $ \end -> absSlot < boundSlot end - || absSlot == boundSlot end && getTimeInSlot t == 0 - return absSlot - go (ERelToAbsEpoch expr) = do - (e, s) <- go expr - let absEpoch = addEpochs (getEpochInEra e) (boundEpoch eraStart) - guardEnd $ \end -> absEpoch < boundEpoch end - || absEpoch == boundEpoch end && getSlotInEpoch s == 0 - return absEpoch - - -- Convert between relative values - -- - -- No guards necessary - - go (ERelTimeToSlot expr) = do - t <- go expr - return $ bimap SlotInEra TimeInSlot (getTimeInEra t `divMod'` slotLen) - go (ERelSlotToTime expr) = do - s <- go expr - return $ TimeInEra (fromIntegral (getSlotInEra s) * slotLen) - go (ERelSlotToEpoch expr) = do - s <- go expr - return $ bimap EpochInEra SlotInEpoch $ getSlotInEra s `divMod` epochSize - go (ERelEpochToSlot expr) = do - e <- go expr - return $ SlotInEra (getEpochInEra e * epochSize) - - -- Get era parameters - -- - -- Here the upper bound must definitely be exclusive, or we'd return the - -- era parameters from the wrong era. 
- - go (ESlotLength expr) = do - s <- go expr - guard $ s >= boundSlot eraStart - guardEnd $ \end -> s < boundSlot end - return eraSlotLength - go (EEpochSize expr) = do - e <- go expr - guard $ e >= boundEpoch eraStart - guardEnd $ \end -> e < boundEpoch end - return eraEpochSize - - go (EGenesisWindow expr) = do - s <- go expr - guard $ s >= boundSlot eraStart - guardEnd $ \end -> s < boundSlot end - return eraGenesisWin + where + EraParams{..} = eraParams + slotLen = getSlotLength eraSlotLength + epochSize = unEpochSize eraEpochSize + + guardEnd :: (Bound -> Bool) -> Maybe () + guardEnd p = + case eraEnd of + EraUnbounded -> return () + EraEnd b -> guard $ p b + + go :: Expr Identity a -> Maybe a + go (EVar a) = + return $ runIdentity a + go (ELet e f) = + go e >>= go . f . Identity + -- Literals and pairs + go (ELit i) = + return i + go (EPair e e') = do + x <- go e + y <- go e' + return (x, y) + go (EFst e) = + fst <$> go e + go (ESnd e) = + snd <$> go e + -- Convert absolute to relative + -- + -- The guards here justify the subtractions. + + go (EAbsToRelTime expr) = do + t <- go expr + guard (t >= boundTime eraStart) + return $ TimeInEra (t `diffRelTime` boundTime eraStart) + go (EAbsToRelSlot expr) = do + s <- go expr + guard (s >= boundSlot eraStart) + return $ SlotInEra (countSlots s (boundSlot eraStart)) + go (EAbsToRelEpoch expr) = do + e <- go expr + guard (e >= boundEpoch eraStart) + return $ EpochInEra (countEpochs e (boundEpoch eraStart)) + + -- Convert relative to absolute + -- + -- As justified by the proof above, the guards treat the upper bound + -- as inclusive. 
+ + go (ERelToAbsTime expr) = do + t <- go expr + let absTime = getTimeInEra t `addRelTime` boundTime eraStart + guardEnd $ \end -> absTime <= boundTime end + return absTime + go (ERelToAbsSlot expr) = do + (s, t) <- go expr + let absSlot = addSlots (getSlotInEra s) (boundSlot eraStart) + guardEnd $ \end -> + absSlot < boundSlot end + || absSlot == boundSlot end && getTimeInSlot t == 0 + return absSlot + go (ERelToAbsEpoch expr) = do + (e, s) <- go expr + let absEpoch = addEpochs (getEpochInEra e) (boundEpoch eraStart) + guardEnd $ \end -> + absEpoch < boundEpoch end + || absEpoch == boundEpoch end && getSlotInEpoch s == 0 + return absEpoch + + -- Convert between relative values + -- + -- No guards necessary + + go (ERelTimeToSlot expr) = do + t <- go expr + return $ bimap SlotInEra TimeInSlot (getTimeInEra t `divMod'` slotLen) + go (ERelSlotToTime expr) = do + s <- go expr + return $ TimeInEra (fromIntegral (getSlotInEra s) * slotLen) + go (ERelSlotToEpoch expr) = do + s <- go expr + return $ bimap EpochInEra SlotInEpoch $ getSlotInEra s `divMod` epochSize + go (ERelEpochToSlot expr) = do + e <- go expr + return $ SlotInEra (getEpochInEra e * epochSize) + + -- Get era parameters + -- + -- Here the upper bound must definitely be exclusive, or we'd return the + -- era parameters from the wrong era. 
+ + go (ESlotLength expr) = do + s <- go expr + guard $ s >= boundSlot eraStart + guardEnd $ \end -> s < boundSlot end + return eraSlotLength + go (EEpochSize expr) = do + e <- go expr + guard $ e >= boundEpoch eraStart + guardEnd $ \end -> e < boundEpoch end + return eraEpochSize + go (EGenesisWindow expr) = do + s <- go expr + guard $ s >= boundSlot eraStart + guardEnd $ \end -> s < boundSlot end + return eraGenesisWin {------------------------------------------------------------------------------- PastHorizonException @@ -355,16 +352,14 @@ evalExprInEra EraSummary{..} = \(ClosedExpr e) -> go e -- That is, we tried to convert something that is past the point in time -- beyond which we lack information due to uncertainty about the next -- hard fork. -data PastHorizonException = PastHorizon { - -- | Callstack to the call to 'runQuery' - pastHorizonCallStack :: CallStack - - -- | The 'Expr' we tried to evaluate - , pastHorizonExpression :: Some ClosedExpr - - -- | The 'EraSummary's that we tried to evaluate the 'Expr' against - , pastHorizonSummary :: [EraSummary] - } +data PastHorizonException = PastHorizon + { pastHorizonCallStack :: CallStack + -- ^ Callstack to the call to 'runQuery' + , pastHorizonExpression :: Some ClosedExpr + -- ^ The 'Expr' we tried to evaluate + , pastHorizonSummary :: [EraSummary] + -- ^ The 'EraSummary's that we tried to evaluate the 'Expr' against + } deriving instance Show PastHorizonException instance Exception PastHorizonException @@ -385,25 +380,26 @@ instance Exception PastHorizonException -- they should not be composed into a single query. How could we know to which -- era which relative slot/time refers? runQuery :: - forall a xs. HasCallStack - => Qry a -> Summary xs -> Either PastHorizonException a + forall a xs. 
+ HasCallStack => + Qry a -> Summary xs -> Either PastHorizonException a runQuery qry (Summary summary) = go summary - where - go :: NonEmpty xs' EraSummary -> Either PastHorizonException a - go (NonEmptyOne era) = tryEra era qry - go (NonEmptyCons era eras) = case tryEra era qry of - Left _ -> go eras - Right x -> Right x - - tryEra :: forall b. EraSummary -> Qry b -> Either PastHorizonException b - tryEra era = \case - QPure x -> Right x - QExpr e k -> - case evalExprInEra era e of - Just x -> - tryEra era (k x) - Nothing -> - Left $ PastHorizon callStack (Some e) (toList summary) + where + go :: NonEmpty xs' EraSummary -> Either PastHorizonException a + go (NonEmptyOne era) = tryEra era qry + go (NonEmptyCons era eras) = case tryEra era qry of + Left _ -> go eras + Right x -> Right x + + tryEra :: forall b. EraSummary -> Qry b -> Either PastHorizonException b + tryEra era = \case + QPure x -> Right x + QExpr e k -> + case evalExprInEra era e of + Just x -> + tryEra era (k x) + Nothing -> + Left $ PastHorizon callStack (Some e) (toList summary) runQueryThrow :: (HasCallStack, MonadThrow m) => Qry a -> Summary xs -> m a runQueryThrow q = either throwIO return . runQuery q @@ -420,7 +416,7 @@ runQueryPure q = either throw id . runQuery q -- -- The 'Summary' should be considered internal. newtype Interpreter xs = Interpreter (Summary xs) - deriving (Eq) + deriving Eq deriving instance SListI xs => Serialise (Interpreter xs) @@ -431,10 +427,10 @@ mkInterpreter :: Summary xs -> Interpreter xs mkInterpreter = Interpreter interpretQuery :: - HasCallStack - => Interpreter xs - -> Qry a - -> Either PastHorizonException a + HasCallStack => + Interpreter xs -> + Qry a -> + Either PastHorizonException a interpretQuery (Interpreter summary) qry = runQuery qry summary -- | UNSAFE: extend the safe zone of the current era of the given 'Interpreter' @@ -450,11 +446,11 @@ interpretQuery (Interpreter summary) qry = runQuery qry summary -- incorrect. 
unsafeExtendSafeZone :: Interpreter xs -> Interpreter xs unsafeExtendSafeZone (Interpreter (Summary eraSummaries)) = - Interpreter (Summary (go eraSummaries)) - where - go :: NonEmpty xs' EraSummary -> NonEmpty xs' EraSummary - go (NonEmptyCons e es) = NonEmptyCons e (go es) - go (NonEmptyOne e) = NonEmptyOne e { eraEnd = EraUnbounded } + Interpreter (Summary (go eraSummaries)) + where + go :: NonEmpty xs' EraSummary -> NonEmpty xs' EraSummary + go (NonEmptyCons e es) = NonEmptyCons e (go es) + go (NonEmptyOne e) = NonEmptyOne e{eraEnd = EraUnbounded} {------------------------------------------------------------------------------- Specific queries @@ -472,32 +468,33 @@ unsafeExtendSafeZone (Interpreter (Summary eraSummaries)) = -- Additionally returns the time spent and time left in this slot. wallclockToSlot :: RelativeTime -> Qry (SlotNo, NominalDiffTime, NominalDiffTime) wallclockToSlot absTime = - aux <$> qryFromExpr (wallclockToSlotExpr absTime) - where - aux :: (TimeInSlot, (SlotNo, SlotLength)) - -> (SlotNo, NominalDiffTime, NominalDiffTime) - aux (TimeInSlot timeInSlot, (absSlot, slotLen)) = ( - absSlot - , timeInSlot - , getSlotLength slotLen - timeInSlot - ) + aux <$> qryFromExpr (wallclockToSlotExpr absTime) + where + aux :: + (TimeInSlot, (SlotNo, SlotLength)) -> + (SlotNo, NominalDiffTime, NominalDiffTime) + aux (TimeInSlot timeInSlot, (absSlot, slotLen)) = + ( absSlot + , timeInSlot + , getSlotLength slotLen - timeInSlot + ) -- | Translate 'SlotNo' to the 'UTCTime' at the start of that slot -- -- Additionally returns the length of the slot. 
slotToWallclock :: SlotNo -> Qry (RelativeTime, SlotLength) slotToWallclock absSlot = - qryFromExpr (slotToWallclockExpr absSlot) + qryFromExpr (slotToWallclockExpr absSlot) -- | Acquire a slot's length slotToSlotLength :: SlotNo -> Qry SlotLength slotToSlotLength absSlot = - qryFromExpr (slotToSlotLengthExpr absSlot) + qryFromExpr (slotToSlotLengthExpr absSlot) -- | Convert 'SlotNo' to 'EpochNo' and the relative slot within the epoch slotToEpoch' :: SlotNo -> Qry (EpochNo, Word64) slotToEpoch' absSlot = - second getSlotInEpoch <$> qryFromExpr (slotToEpochExpr' absSlot) + second getSlotInEpoch <$> qryFromExpr (slotToEpochExpr' absSlot) -- | Translate 'SlotNo' to its corresponding 'EpochNo' -- @@ -505,30 +502,31 @@ slotToEpoch' absSlot = -- slots are left in this slot. slotToEpoch :: SlotNo -> Qry (EpochNo, Word64, Word64) slotToEpoch absSlot = - aux <$> qryFromExpr (slotToEpochExpr absSlot) - where - aux :: ((EpochNo, SlotInEpoch), EpochSize) - -> (EpochNo, Word64, Word64) - aux ((absEpoch, SlotInEpoch slotInEpoch), epochSize) = ( - absEpoch - , slotInEpoch - , unEpochSize epochSize - slotInEpoch - ) + aux <$> qryFromExpr (slotToEpochExpr absSlot) + where + aux :: + ((EpochNo, SlotInEpoch), EpochSize) -> + (EpochNo, Word64, Word64) + aux ((absEpoch, SlotInEpoch slotInEpoch), epochSize) = + ( absEpoch + , slotInEpoch + , unEpochSize epochSize - slotInEpoch + ) epochToSlot' :: EpochNo -> Qry SlotNo epochToSlot' absEpoch = - qryFromExpr (epochToSlotExpr' absEpoch) + qryFromExpr (epochToSlotExpr' absEpoch) -- | Translate 'EpochNo' to the 'SlotNo' of the first slot in that epoch -- -- Additionally returns the size of the epoch. 
epochToSlot :: EpochNo -> Qry (SlotNo, EpochSize) epochToSlot absEpoch = - qryFromExpr (epochToSlotExpr absEpoch) + qryFromExpr (epochToSlotExpr absEpoch) epochToSize :: EpochNo -> Qry EpochSize epochToSize absEpoch = - qryFromExpr (epochToSizeExpr absEpoch) + qryFromExpr (epochToSizeExpr absEpoch) {------------------------------------------------------------------------------- Supporting expressions for the queries above @@ -536,104 +534,108 @@ epochToSize absEpoch = wallclockToSlotExpr :: RelativeTime -> Expr f (TimeInSlot, (SlotNo, SlotLength)) wallclockToSlotExpr absTime = - ELet (ERelTimeToSlot (EAbsToRelTime (ELit absTime))) $ \relSlot -> + ELet (ERelTimeToSlot (EAbsToRelTime (ELit absTime))) $ \relSlot -> ELet (ERelToAbsSlot (EVar relSlot)) $ \absSlot -> - EPair (ESnd (EVar relSlot)) - (EPair (EVar absSlot) (ESlotLength (EVar absSlot))) + EPair + (ESnd (EVar relSlot)) + (EPair (EVar absSlot) (ESlotLength (EVar absSlot))) slotToWallclockExpr :: SlotNo -> Expr f (RelativeTime, SlotLength) slotToWallclockExpr absSlot = - EPair - (ERelToAbsTime (ERelSlotToTime (EAbsToRelSlot (ELit absSlot)))) - (ESlotLength (ELit absSlot)) + EPair + (ERelToAbsTime (ERelSlotToTime (EAbsToRelSlot (ELit absSlot)))) + (ESlotLength (ELit absSlot)) slotToSlotLengthExpr :: SlotNo -> Expr f SlotLength slotToSlotLengthExpr absSlot = ESlotLength (ELit absSlot) slotToEpochExpr' :: SlotNo -> Expr f (EpochNo, SlotInEpoch) slotToEpochExpr' absSlot = - ELet (ERelSlotToEpoch (EAbsToRelSlot (ELit absSlot))) $ \epochSlot -> + ELet (ERelSlotToEpoch (EAbsToRelSlot (ELit absSlot))) $ \epochSlot -> EPair (ERelToAbsEpoch (EVar epochSlot)) (ESnd (EVar epochSlot)) slotToEpochExpr :: - SlotNo - -> Expr f ((EpochNo, SlotInEpoch), EpochSize) + SlotNo -> + Expr f ((EpochNo, SlotInEpoch), EpochSize) slotToEpochExpr absSlot = - ELet (slotToEpochExpr' absSlot) $ \x -> + ELet (slotToEpochExpr' absSlot) $ \x -> EPair (EVar x) (EEpochSize (EFst (EVar x))) epochToSlotExpr' :: EpochNo -> Expr f SlotNo 
epochToSlotExpr' absEpoch = - ERelToAbsSlot (EPair (ERelEpochToSlot (EAbsToRelEpoch (ELit absEpoch))) - (ELit (TimeInSlot 0))) + ERelToAbsSlot + ( EPair + (ERelEpochToSlot (EAbsToRelEpoch (ELit absEpoch))) + (ELit (TimeInSlot 0)) + ) epochToSlotExpr :: EpochNo -> Expr f (SlotNo, EpochSize) epochToSlotExpr absEpoch = - EPair (epochToSlotExpr' absEpoch) (epochToSizeExpr absEpoch) + EPair (epochToSlotExpr' absEpoch) (epochToSizeExpr absEpoch) epochToSizeExpr :: EpochNo -> Expr f EpochSize epochToSizeExpr absEpoch = - EEpochSize (ELit absEpoch) + EEpochSize (ELit absEpoch) slotToGenesisWindow :: SlotNo -> Expr f GenesisWindow slotToGenesisWindow absSlot = - EGenesisWindow (ELit absSlot) + EGenesisWindow (ELit absSlot) {------------------------------------------------------------------------------- 'Show' instances -------------------------------------------------------------------------------} newtype Var a = Var String - deriving (Show) + deriving Show deriving instance Show (Some ClosedExpr) instance Show (ClosedExpr a) where showsPrec = \d (ClosedExpr e) -> go 0 d e - where - go :: Int -- How many variables are already in scope? - -> Int -- Precedence - -> Expr Var b -> ShowS - go n d = showParen (d >= 11) . \case - - -- Variables and let-binding - -- - -- We recover Haskell syntax here, e.g. - -- - -- > ELet .. (\x -> .... x ....) - - EVar (Var x) -> showString "EVar " . showString x - ELet e f -> let x = "x" ++ show n in - showString "ELet " - . go n 11 e - . showString " (\\" - . showString x - . showString " -> " - . go (n + 1) 0 (f (Var x)) - . showString ")" - - -- Literals - - ELit i -> showString "ELit " . showsPrec 11 i - - -- Pairs - - EPair e e' -> showString "EPair " . go n 11 e . showSpace . go n 11 e' - EFst e -> showString "EFst " . go n 11 e - ESnd e -> showString "ESnd " . go n 11 e - - -- Domain specific - - EAbsToRelTime e -> showString "EAbsToRelTime " . go n 11 e - EAbsToRelSlot e -> showString "EAbsToRelSlot " . 
go n 11 e - EAbsToRelEpoch e -> showString "EAbsToRelEpoch " . go n 11 e - ERelToAbsTime e -> showString "ERelToAbsTime " . go n 11 e - ERelToAbsSlot e -> showString "ERelToAbsSlot " . go n 11 e - ERelToAbsEpoch e -> showString "ERelToAbsEpoch " . go n 11 e - ERelTimeToSlot e -> showString "ERelTimeToSlot " . go n 11 e - ERelSlotToTime e -> showString "ERelSlotToTime " . go n 11 e - ERelSlotToEpoch e -> showString "ERelSlotToEpoch " . go n 11 e - ERelEpochToSlot e -> showString "ERelEpochToSlot " . go n 11 e - ESlotLength e -> showString "ESlotLength " . go n 11 e - EEpochSize e -> showString "EEpochSize " . go n 11 e - EGenesisWindow e -> showString "EGenesisWindow " . go n 11 e + where + go :: + Int -> -- How many variables are already in scope? + Int -> -- Precedence + Expr Var b -> + ShowS + go n d = + showParen (d >= 11) . \case + -- Variables and let-binding + -- + -- We recover Haskell syntax here, e.g. + -- + -- > ELet .. (\x -> .... x ....) + + EVar (Var x) -> showString "EVar " . showString x + ELet e f -> + let x = "x" ++ show n + in showString "ELet " + . go n 11 e + . showString " (\\" + . showString x + . showString " -> " + . go (n + 1) 0 (f (Var x)) + . showString ")" + -- Literals + + ELit i -> showString "ELit " . showsPrec 11 i + -- Pairs + + EPair e e' -> showString "EPair " . go n 11 e . showSpace . go n 11 e' + EFst e -> showString "EFst " . go n 11 e + ESnd e -> showString "ESnd " . go n 11 e + -- Domain specific + + EAbsToRelTime e -> showString "EAbsToRelTime " . go n 11 e + EAbsToRelSlot e -> showString "EAbsToRelSlot " . go n 11 e + EAbsToRelEpoch e -> showString "EAbsToRelEpoch " . go n 11 e + ERelToAbsTime e -> showString "ERelToAbsTime " . go n 11 e + ERelToAbsSlot e -> showString "ERelToAbsSlot " . go n 11 e + ERelToAbsEpoch e -> showString "ERelToAbsEpoch " . go n 11 e + ERelTimeToSlot e -> showString "ERelTimeToSlot " . go n 11 e + ERelSlotToTime e -> showString "ERelSlotToTime " . 
go n 11 e + ERelSlotToEpoch e -> showString "ERelSlotToEpoch " . go n 11 e + ERelEpochToSlot e -> showString "ERelEpochToSlot " . go n 11 e + ESlotLength e -> showString "ESlotLength " . go n 11 e + EEpochSize e -> showString "EEpochSize " . go n 11 e + EGenesisWindow e -> showString "EGenesisWindow " . go n 11 e diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Summary.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Summary.hs index 1c982137b1..0ef241f4a5 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Summary.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Summary.hs @@ -14,21 +14,25 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.HardFork.History.Summary ( - -- * Bounds +module Ouroboros.Consensus.HardFork.History.Summary + ( -- * Bounds Bound (..) , initBound , mkUpperBound , slotToEpochBound + -- * Per-era summary , EraEnd (..) , EraSummary (..) , mkEraEnd + -- * Overall summary , Summary (..) + -- ** Construction , neverForksSummary , summaryWithExactly + -- *** Summarize , Shape (..) , Transitions (..) 
@@ -37,83 +41,95 @@ module Ouroboros.Consensus.HardFork.History.Summary ( , singletonShape , summarize , transitionsUnknown + -- ** Query , summaryBounds , summaryInit ) where -import Cardano.Binary (enforceSize) -import Codec.CBOR.Decoding (TokenType (TypeNull), decodeNull, - peekTokenType) -import Codec.CBOR.Encoding (encodeListLen, encodeNull) -import Codec.Serialise -import Control.Monad (unless) -import Control.Monad.Except (Except, throwError) -import Data.Bifunctor -import Data.Foldable (toList) -import Data.Kind (Type) -import Data.Proxy -import Data.SOP.Counting -import Data.SOP.NonEmpty -import Data.SOP.Sing (SListI, lengthSList) -import Data.Time hiding (UTCTime) -import Data.Word -import GHC.Generics (Generic) -import GHC.Stack -import NoThunks.Class (InspectHeapNamed (..), NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime.WallClock.Types -import Ouroboros.Consensus.HardFork.History.EraParams -import Ouroboros.Consensus.HardFork.History.Util +import Cardano.Binary (enforceSize) +import Codec.CBOR.Decoding + ( TokenType (TypeNull) + , decodeNull + , peekTokenType + ) +import Codec.CBOR.Encoding (encodeListLen, encodeNull) +import Codec.Serialise +import Control.Monad (unless) +import Control.Monad.Except (Except, throwError) +import Data.Bifunctor +import Data.Foldable (toList) +import Data.Kind (Type) +import Data.Proxy +import Data.SOP.Counting +import Data.SOP.NonEmpty +import Data.SOP.Sing (SListI, lengthSList) +import Data.Time hiding (UTCTime) +import Data.Word +import GHC.Generics (Generic) +import GHC.Stack +import NoThunks.Class (InspectHeapNamed (..), NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime.WallClock.Types +import Ouroboros.Consensus.HardFork.History.EraParams +import Ouroboros.Consensus.HardFork.History.Util {------------------------------------------------------------------------------- Bounds 
-------------------------------------------------------------------------------} -- | Detailed information about the time bounds of an era -data Bound = Bound { - boundTime :: !RelativeTime - , boundSlot :: !SlotNo - , boundEpoch :: !EpochNo - } - deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) +data Bound = Bound + { boundTime :: !RelativeTime + , boundSlot :: !SlotNo + , boundEpoch :: !EpochNo + } + deriving stock (Show, Eq, Generic) + deriving anyclass NoThunks initBound :: Bound -initBound = Bound { - boundTime = RelativeTime 0 - , boundSlot = SlotNo 0 - , boundEpoch = EpochNo 0 +initBound = + Bound + { boundTime = RelativeTime 0 + , boundSlot = SlotNo 0 + , boundEpoch = EpochNo 0 } -- | Version of 'mkUpperBound' when the upper bound may not be known -- -- If passed 'Nothing', assumes 'EraUnbounded'. This is /NOT/ -- suitable for eras where the transition is simply unknown. -mkEraEnd :: EraParams - -> Bound -- ^ Lower bound - -> Maybe EpochNo -- ^ Upper bound - -> EraEnd +mkEraEnd :: + EraParams -> + -- | Lower bound + Bound -> + -- | Upper bound + Maybe EpochNo -> + EraEnd mkEraEnd params lo = maybe EraUnbounded (EraEnd . 
mkUpperBound params lo) -- | Compute upper bound given just the epoch number and era parameters -mkUpperBound :: HasCallStack - => EraParams - -> Bound -- ^ Lower bound - -> EpochNo -- ^ Upper bound - -> Bound -mkUpperBound EraParams{..} lo hiEpoch = Bound { - boundTime = addRelTime inEraTime $ boundTime lo - , boundSlot = addSlots inEraSlots $ boundSlot lo +mkUpperBound :: + HasCallStack => + EraParams -> + -- | Lower bound + Bound -> + -- | Upper bound + EpochNo -> + Bound +mkUpperBound EraParams{..} lo hiEpoch = + Bound + { boundTime = addRelTime inEraTime $ boundTime lo + , boundSlot = addSlots inEraSlots $ boundSlot lo , boundEpoch = hiEpoch } - where - inEraEpochs, inEraSlots :: Word64 - inEraEpochs = countEpochs hiEpoch (boundEpoch lo) - inEraSlots = inEraEpochs * unEpochSize eraEpochSize + where + inEraEpochs, inEraSlots :: Word64 + inEraEpochs = countEpochs hiEpoch (boundEpoch lo) + inEraSlots = inEraEpochs * unEpochSize eraEpochSize - inEraTime :: NominalDiffTime - inEraTime = fromIntegral inEraSlots * getSlotLength eraSlotLength + inEraTime :: NominalDiffTime + inEraTime = fromIntegral inEraSlots * getSlotLength eraSlotLength -- Given the 'SlotNo' of the first /slot/ in which a transition could take -- place, compute the first /epoch/ in which this could happen (since @@ -122,12 +138,12 @@ mkUpperBound EraParams{..} lo hiEpoch = Bound { -- however, it will be the /next/ epoch. 
slotToEpochBound :: EraParams -> Bound -> SlotNo -> EpochNo slotToEpochBound EraParams{eraEpochSize = EpochSize epochSize} lo hiSlot = - addEpochs - (if inEpoch == 0 then epochs else epochs + 1) - (boundEpoch lo) - where - slots = countSlots hiSlot (boundSlot lo) - (epochs, inEpoch) = slots `divMod` epochSize + addEpochs + (if inEpoch == 0 then epochs else epochs + 1) + (boundEpoch lo) + where + slots = countSlots hiSlot (boundSlot lo) + (epochs, inEpoch) = slots `divMod` epochSize {------------------------------------------------------------------------------- Summary @@ -166,25 +182,27 @@ slotToEpochBound EraParams{eraEpochSize = EpochSize epochSize} lo hiSlot = -- > t' - t == ((s' - s) * slotLen) -- > (t' - t) / slotLen == s' - s -- > s + ((t' - t) / slotLen) == s' -data EraSummary = EraSummary { - eraStart :: !Bound -- ^ Inclusive lower bound - , eraEnd :: !EraEnd -- ^ Exclusive upper bound - , eraParams :: !EraParams -- ^ Active parameters - } - deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) +data EraSummary = EraSummary + { eraStart :: !Bound + -- ^ Inclusive lower bound + , eraEnd :: !EraEnd + -- ^ Exclusive upper bound + , eraParams :: !EraParams + -- ^ Active parameters + } + deriving stock (Show, Eq, Generic) + deriving anyclass NoThunks -- | Exclusive upper bound on the era -data EraEnd = - -- | Bounded era +data EraEnd + = -- | Bounded era EraEnd !Bound - - -- | Unbounded era + | -- | Unbounded era -- -- This arises from the use of 'UnsafeIndefiniteSafeZone'. - | EraUnbounded - deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) + EraUnbounded + deriving stock (Show, Eq, Generic) + deriving anyclass NoThunks -- | Summary of the /confirmed/ part of the ledger -- @@ -192,7 +210,7 @@ data EraEnd = -- about the start and end of each era. 
-- -- We have at most one summary for each era, and at least one -newtype Summary xs = Summary { getSummary :: NonEmpty xs EraSummary } +newtype Summary xs = Summary {getSummary :: NonEmpty xs EraSummary} deriving (Eq, Show) deriving NoThunks via InspectHeapNamed "Summary" (Summary xs) @@ -202,16 +220,20 @@ newtype Summary xs = Summary { getSummary :: NonEmpty xs EraSummary } -- | 'Summary' for a ledger that never forks neverForksSummary :: EpochSize -> SlotLength -> GenesisWindow -> Summary '[x] -neverForksSummary epochSize slotLen genesisWindow = Summary $ NonEmptyOne $ EraSummary { - eraStart = initBound - , eraEnd = EraUnbounded - , eraParams = EraParams { - eraEpochSize = epochSize - , eraSlotLength = slotLen - , eraSafeZone = UnsafeIndefiniteSafeZone - , eraGenesisWin = genesisWindow +neverForksSummary epochSize slotLen genesisWindow = + Summary $ + NonEmptyOne $ + EraSummary + { eraStart = initBound + , eraEnd = EraUnbounded + , eraParams = + EraParams + { eraEpochSize = epochSize + , eraSlotLength = slotLen + , eraSafeZone = UnsafeIndefiniteSafeZone + , eraGenesisWin = genesisWindow + } } - } {------------------------------------------------------------------------------- Basic API for 'Summary' @@ -220,7 +242,7 @@ neverForksSummary epochSize slotLen genesisWindow = Summary $ NonEmptyOne $ EraS -- | Outer bounds of the summary summaryBounds :: Summary xs -> (Bound, EraEnd) summaryBounds (Summary summary) = - (eraStart (nonEmptyHead summary), eraEnd (nonEmptyLast summary)) + (eraStart (nonEmptyHead summary), eraEnd (nonEmptyLast summary)) -- | Analogue of 'Data.List.init' for 'Summary' (i.e., split off the final era) -- @@ -250,8 +272,8 @@ summaryWithExactly = Summary . exactlyWeakenNonEmpty -- look something like @'[ByronBlock, ShelleyBlock, GoguenBlock]@ and do affect -- the hard fork combinator. So far this is a list of block types, since most -- of consensus is indexed by block types. 
-newtype Shape xs = Shape { getShape :: Exactly xs EraParams } - deriving (Show) +newtype Shape xs = Shape {getShape :: Exactly xs EraParams} + deriving Show deriving NoThunks via InspectHeapNamed "Shape" (Shape xs) -- | There is only one era @@ -302,63 +324,66 @@ transitionsUnknown = Transitions AtMostNil -- 'minimumPossibleSlotNo' is not zero (e.g., some ledgers might set it to 1), -- the maximum number of blocks (aka filled slots) in an epoch is just 1 (or -- more) less than the other epochs. -summarize :: WithOrigin SlotNo -- ^ Slot at the tip of the ledger - -> Shape xs - -> Transitions xs - -> Summary xs +summarize :: + -- | Slot at the tip of the ledger + WithOrigin SlotNo -> + Shape xs -> + Transitions xs -> + Summary xs summarize ledgerTip = \(Shape shape) (Transitions transitions) -> - Summary $ go initBound shape transitions - where - go :: Bound -- Lower bound for current era - -> Exactly (x ': xs) EraParams -- params for all eras - -> AtMost xs EpochNo -- transitions - -> NonEmpty (x ': xs) EraSummary - -- CASE (ii) from 'EraParams' Haddock - -- NOTE: Ledger tip might be close to the end of this era (or indeed past - -- it) but this doesn't matter for the summary of /this/ era. - go lo (ExactlyCons params ss) (AtMostCons epoch fs) = - NonEmptyCons (EraSummary lo (EraEnd hi) params) $ go hi ss fs - where - hi = mkUpperBound params lo epoch - -- CASE (i) or (iii) from 'EraParams' Haddock - go lo (ExactlyCons params@EraParams{..} _) AtMostNil = - NonEmptyOne (EraSummary lo hi params) - where - hi :: EraEnd - hi = case eraSafeZone of - UnsafeIndefiniteSafeZone -> - EraUnbounded - StandardSafeZone safeFromTip -> - EraEnd - . mkUpperBound params lo - . slotToEpochBound params lo - . addSlots safeFromTip - -- If the tip is already in this era, safe zone applies from the - -- ledger tip (CASE (i) from 'EraParams' Haddock). 
If the ledger - -- tip is in the /previous/ era, but the transition to /this/ era - -- is already known, the safe zone applies from the start of this - -- era (CASE (iii) from 'EraParams' Haddock). - -- - -- NOTE: The upper bound is /exclusive/: - -- - -- o Suppose the ledger tip is at slot 10, and 'safeFromTip' is 2. - -- Then we should be able to make accurate predictions for slots - -- 10 (of course), as well as (the safe zone) slots 11 and 12. - -- Since the upper bound is /exclusive/, this means that the - -- upper bound becomes 13. (Case i) - -- o If the ledger tip is in the previous era (case iii), and the - -- start of this era is slot 100, then we should be able to - -- give accurate predictions for the first two slots in this era - -- (100 and 101), and the upper bound becomes 102. - -- - -- This explains the use of the extra addition ('next') for - -- case (i) but not for case (iii). - $ max (next ledgerTip) (boundSlot lo) - - -- Upper bound is exclusive, so we count from the /next/ ledger tip - next :: WithOrigin SlotNo -> SlotNo - next Origin = SlotNo 0 - next (NotOrigin s) = succ s + Summary $ go initBound shape transitions + where + go :: + Bound -> -- Lower bound for current era + Exactly (x ': xs) EraParams -> -- params for all eras + AtMost xs EpochNo -> -- transitions + NonEmpty (x ': xs) EraSummary + -- CASE (ii) from 'EraParams' Haddock + -- NOTE: Ledger tip might be close to the end of this era (or indeed past + -- it) but this doesn't matter for the summary of /this/ era. + go lo (ExactlyCons params ss) (AtMostCons epoch fs) = + NonEmptyCons (EraSummary lo (EraEnd hi) params) $ go hi ss fs + where + hi = mkUpperBound params lo epoch + -- CASE (i) or (iii) from 'EraParams' Haddock + go lo (ExactlyCons params@EraParams{..} _) AtMostNil = + NonEmptyOne (EraSummary lo hi params) + where + hi :: EraEnd + hi = case eraSafeZone of + UnsafeIndefiniteSafeZone -> + EraUnbounded + StandardSafeZone safeFromTip -> + EraEnd + . mkUpperBound params lo + . 
slotToEpochBound params lo + . addSlots safeFromTip + -- If the tip is already in this era, safe zone applies from the + -- ledger tip (CASE (i) from 'EraParams' Haddock). If the ledger + -- tip is in the /previous/ era, but the transition to /this/ era + -- is already known, the safe zone applies from the start of this + -- era (CASE (iii) from 'EraParams' Haddock). + -- + -- NOTE: The upper bound is /exclusive/: + -- + -- o Suppose the ledger tip is at slot 10, and 'safeFromTip' is 2. + -- Then we should be able to make accurate predictions for slots + -- 10 (of course), as well as (the safe zone) slots 11 and 12. + -- Since the upper bound is /exclusive/, this means that the + -- upper bound becomes 13. (Case i) + -- o If the ledger tip is in the previous era (case iii), and the + -- start of this era is slot 100, then we should be able to + -- give accurate predictions for the first two slots in this era + -- (100 and 101), and the upper bound becomes 102. + -- + -- This explains the use of the extra addition ('next') for + -- case (i) but not for case (iii). + $ max (next ledgerTip) (boundSlot lo) + + -- Upper bound is exclusive, so we count from the /next/ ledger tip + next :: WithOrigin SlotNo -> SlotNo + next Origin = SlotNo 0 + next (NotOrigin s) = succ s {------------------------------------------------------------------------------- Invariants @@ -374,119 +399,131 @@ summarize ledgerTip = \(Shape shape) (Transitions transitions) -> -- be non-empty). 
invariantShape :: Shape xs -> Except String () invariantShape = \(Shape shape) -> - go (EpochNo 0) shape - where - go :: EpochNo -- Lower bound on the start of the era - -> Exactly xs EraParams -> Except String () - go _ ExactlyNil = return () - go lowerBound (ExactlyCons _ shape') = - let nextLowerBound = addEpochs 1 lowerBound - in go nextLowerBound shape' + go (EpochNo 0) shape + where + go :: + EpochNo -> -- Lower bound on the start of the era + Exactly xs EraParams -> + Except String () + go _ ExactlyNil = return () + go lowerBound (ExactlyCons _ shape') = + let nextLowerBound = addEpochs 1 lowerBound + in go nextLowerBound shape' -- | Check 'Summary' invariants invariantSummary :: Summary xs -> Except String () invariantSummary = \(Summary summary) -> - -- Pretend the start of the first era is the "end of the previous" one - go (eraStart (nonEmptyHead summary)) (toList summary) - where - go :: Bound -- ^ End of the previous era - -> [EraSummary] -> Except String () - go _ [] = return () - go prevEnd (curSummary : next) = do - unless (curStart == prevEnd) $ - throwError $ mconcat [ - "Bounds don't line up: end of previous era " - , show prevEnd - , " /= start of current era " - , show curStart - ] - - case mCurEnd of - EraUnbounded -> - unless (null next) $ - throwError "Unbounded non-final era" - EraEnd curEnd -> do - -- Check the invariants mentioned at 'EraSummary' - -- - -- o @epochsInEra@ corresponds to @e' - e@ - -- o @slotsInEra@ corresponds to @(e' - e) * epochSize)@ - -- o @timeInEra@ corresponds to @((e' - e) * epochSize * slotLen@ - -- which, if INV-1b holds, equals @(s' - s) * slotLen@ - let epochsInEra, slotsInEra :: Word64 - epochsInEra = countEpochs (boundEpoch curEnd) (boundEpoch curStart) - slotsInEra = epochsInEra * unEpochSize (eraEpochSize curParams) - - timeInEra :: NominalDiffTime - timeInEra = fromIntegral slotsInEra - * getSlotLength (eraSlotLength curParams) - - unless (boundEpoch curEnd > boundEpoch curStart) $ - throwError "Empty 
era"
-
-      unless (boundSlot curEnd == addSlots slotsInEra (boundSlot curStart)) $
-        throwError $ mconcat [
-            "Invalid final boundSlot in "
-          , show curSummary
-          , " (INV-1b)"
-          ]
-
-      unless (boundTime curEnd == addRelTime timeInEra (boundTime curStart)) $
-        throwError $ mconcat [
-            "Invalid final boundTime in "
-          , show curSummary
-          , " (INV-2b)"
-          ]
-
-      go curEnd next
-      where
-        curStart  :: Bound
-        mCurEnd   :: EraEnd
-        curParams :: EraParams
-        EraSummary curStart mCurEnd curParams = curSummary
+  -- Pretend the start of the first era is the "end of the previous" one
+  go (eraStart (nonEmptyHead summary)) (toList summary)
+ where
+  go ::
+    -- | End of the previous era
+    Bound ->
+    [EraSummary] ->
+    Except String ()
+  go _ [] = return ()
+  go prevEnd (curSummary : next) = do
+    unless (curStart == prevEnd) $
+      throwError $
+        mconcat
+          [ "Bounds don't line up: end of previous era "
+          , show prevEnd
+          , " /= start of current era "
+          , show curStart
+          ]
+
+    case mCurEnd of
+      EraUnbounded ->
+        unless (null next) $
+          throwError "Unbounded non-final era"
+      EraEnd curEnd -> do
+        -- Check the invariants mentioned at 'EraSummary'
+        --
+        -- o @epochsInEra@ corresponds to @e' - e@
+        -- o @slotsInEra@ corresponds to @(e' - e) * epochSize@
+        -- o @timeInEra@ corresponds to @(e' - e) * epochSize * slotLen@,
+        --   which, if INV-1b holds, equals @(s' - s) * slotLen@
+        let epochsInEra, slotsInEra :: Word64
+            epochsInEra = countEpochs (boundEpoch curEnd) (boundEpoch curStart)
+            slotsInEra = epochsInEra * unEpochSize (eraEpochSize curParams)

+            timeInEra :: NominalDiffTime
+            timeInEra =
+              fromIntegral slotsInEra
+                * getSlotLength (eraSlotLength curParams)
+
+        unless (boundEpoch curEnd > boundEpoch curStart) $
+          throwError "Empty era"
+
+        unless (boundSlot curEnd == addSlots slotsInEra (boundSlot curStart)) $
+          throwError $
+            mconcat
+              [ "Invalid final boundSlot in "
+              , show curSummary
+              , " (INV-1b)"
+              ]
+
+        unless (boundTime curEnd == addRelTime timeInEra (boundTime curStart)) $
+
throwError $ + mconcat + [ "Invalid final boundTime in " + , show curSummary + , " (INV-2b)" + ] + + go curEnd next + where + curStart :: Bound + mCurEnd :: EraEnd + curParams :: EraParams + EraSummary curStart mCurEnd curParams = curSummary {------------------------------------------------------------------------------- Serialisation -------------------------------------------------------------------------------} instance Serialise Bound where - encode Bound{..} = mconcat [ - encodeListLen 3 + encode Bound{..} = + mconcat + [ encodeListLen 3 , encode boundTime , encode boundSlot , encode boundEpoch ] decode = do - enforceSize "Bound" 3 - boundTime <- decode - boundSlot <- decode - boundEpoch <- decode - return Bound{..} + enforceSize "Bound" 3 + boundTime <- decode + boundSlot <- decode + boundEpoch <- decode + return Bound{..} instance Serialise EraEnd where - encode EraUnbounded = encodeNull + encode EraUnbounded = encodeNull encode (EraEnd bound) = encode bound - decode = peekTokenType >>= \case + decode = + peekTokenType >>= \case TypeNull -> do decodeNull return EraUnbounded _ -> EraEnd <$> decode instance Serialise EraSummary where - encode EraSummary{..} = mconcat [ - encodeListLen 3 + encode EraSummary{..} = + mconcat + [ encodeListLen 3 , encode eraStart , encode eraEnd , encode eraParams ] decode = do - enforceSize "EraSummary" 3 - eraStart <- decode - eraEnd <- decode - eraParams <- decode - return EraSummary{..} + enforceSize "EraSummary" 3 + eraStart <- decode + eraEnd <- decode + eraParams <- decode + return EraSummary{..} instance SListI xs => Serialise (Summary xs) where encode (Summary eraSummaries) = encode (toList eraSummaries) @@ -528,13 +565,13 @@ instance SListI xs => Serialise (Summary xs) where -- -- - If @|xs| == |ys|@, then at most @n == |xs|@. 
decode = do - -- Drop all eras we don't know about - eraSummaries <- take nbXs <$> decode - - case Summary <$> nonEmptyFromList eraSummaries of - Just summary -> return summary - Nothing -> fail "Summary: expected at least one era summary" - where - -- @|xs|@ - nbXs :: Int - nbXs = lengthSList (Proxy @xs) + -- Drop all eras we don't know about + eraSummaries <- take nbXs <$> decode + + case Summary <$> nonEmptyFromList eraSummaries of + Just summary -> return summary + Nothing -> fail "Summary: expected at least one era summary" + where + -- @|xs|@ + nbXs :: Int + nbXs = lengthSList (Proxy @xs) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Util.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Util.hs index 2f5ce1b32e..daf8fd443e 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Util.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/History/Util.hs @@ -1,5 +1,5 @@ -module Ouroboros.Consensus.HardFork.History.Util ( - -- * Adding and subtracting slots/epochs +module Ouroboros.Consensus.HardFork.History.Util + ( -- * Adding and subtracting slots/epochs addEpochs , addSlots , countEpochs @@ -7,11 +7,11 @@ module Ouroboros.Consensus.HardFork.History.Util ( , subSlots ) where -import Control.Exception (assert) -import Data.Word -import GHC.Stack -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Util.RedundantConstraints +import Control.Exception (assert) +import Data.Word +import GHC.Stack +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Util.RedundantConstraints {------------------------------------------------------------------------------- Adding and subtracting slots/epochs @@ -29,11 +29,11 @@ addEpochs n (EpochNo x) = EpochNo (x + n) -- | @countSlots to fr@ counts the slots from @fr@ to @to@ (@to >= fr@) countSlots :: HasCallStack => SlotNo -> SlotNo -> Word64 countSlots (SlotNo to) 
(SlotNo fr) = assert (to >= fr) $ to - fr - where - _ = keepRedundantConstraint (Proxy :: Proxy HasCallStack) + where + _ = keepRedundantConstraint (Proxy :: Proxy HasCallStack) -- | @countEpochs to fr@ counts the epochs from @fr@ to @to@ (@to >= fr@) countEpochs :: HasCallStack => EpochNo -> EpochNo -> Word64 countEpochs (EpochNo to) (EpochNo fr) = assert (to >= fr) $ to - fr - where - _ = keepRedundantConstraint (Proxy :: Proxy HasCallStack) + where + _ = keepRedundantConstraint (Proxy :: Proxy HasCallStack) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Simple.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Simple.hs index da6e1cb493..12ea1f4814 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Simple.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HardFork/Simple.hs @@ -6,48 +6,48 @@ module Ouroboros.Consensus.HardFork.Simple (TriggerHardFork (..)) where -import Cardano.Binary -import Cardano.Slotting.Slot (EpochNo) -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Node.Serialisation +import Cardano.Binary +import Cardano.Slotting.Slot (EpochNo) +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Node.Serialisation -- | The trigger condition that will cause the hard fork transition. -- -- This type is only intended for use as part of a -- 'Ouroboros.Consensus.Ledger.Basics.LedgerCfg', which means it is "static": -- it cannot change during an execution of the node process. -data TriggerHardFork = - -- | Trigger the transition when the on-chain protocol major version (from +data TriggerHardFork + = -- | Trigger the transition when the on-chain protocol major version (from -- the ledger state) reaches this number. 
-- -- Note: The HFC logic does not require the trigger version for one era to -- be the successor of the trigger version for the previous era. TriggerHardForkAtVersion !Word16 - -- | For testing only, trigger the transition at a specific hard-coded + | -- | For testing only, trigger the transition at a specific hard-coded -- epoch, irrespective of the ledger state. - | TriggerHardForkAtEpoch !EpochNo - -- | Ledger states in this era cannot determine when the hard fork + TriggerHardForkAtEpoch !EpochNo + | -- | Ledger states in this era cannot determine when the hard fork -- transition will happen. -- -- It's crucial to note that this option does /not/ imply that "the era -- will never end". Instead, the era cannot end within this node process -- before it restarts with different software and/or configuration for this -- era. - | TriggerHardForkNotDuringThisExecution + TriggerHardForkNotDuringThisExecution deriving (Show, Generic, NoThunks) instance SerialiseNodeToClient blk TriggerHardFork where encodeNodeToClient _ _ triggerHardFork = case triggerHardFork of TriggerHardForkAtVersion v -> encodeListLen 2 <> encodeWord8 0 <> toCBOR v - TriggerHardForkAtEpoch e -> encodeListLen 2 <> encodeWord8 1 <> toCBOR e + TriggerHardForkAtEpoch e -> encodeListLen 2 <> encodeWord8 1 <> toCBOR e TriggerHardForkNotDuringThisExecution -> encodeListLen 2 <> encodeWord8 2 decodeNodeToClient _ _ = do len <- decodeListLen tag <- decodeWord8 case (len, tag) of - (2, 0) -> TriggerHardForkAtVersion <$> fromCBOR @Word16 - (2, 1) -> TriggerHardForkAtEpoch <$> fromCBOR @EpochNo - (2, 2) -> pure TriggerHardForkNotDuringThisExecution + (2, 0) -> TriggerHardForkAtVersion <$> fromCBOR @Word16 + (2, 1) -> TriggerHardForkAtEpoch <$> fromCBOR @EpochNo + (2, 2) -> pure TriggerHardForkNotDuringThisExecution _ -> fail $ "TriggerHardFork: invalid (len, tag): " <> show (len, tag) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HeaderStateHistory.hs 
b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HeaderStateHistory.hs index a74e2d1c52..a7bed246fe 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HeaderStateHistory.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HeaderStateHistory.hs @@ -16,62 +16,68 @@ -- -- > import Ouroboros.Consensus.HeaderStateHistory (HeaderStateHistory) -- > import qualified Ouroboros.Consensus.HeaderStateHistory as HeaderStateHistory -module Ouroboros.Consensus.HeaderStateHistory ( - HeaderStateHistory (..) +module Ouroboros.Consensus.HeaderStateHistory + ( HeaderStateHistory (..) , cast , current , rewind , trim + -- * 'HeaderStateWithTime' , HeaderStateWithTime (..) , castHeaderStateWithTime , mkHeaderStateWithTime , mkHeaderStateWithTimeFromSummary + -- * Validation , validateHeader + -- * Support for tests , fromChain ) where -import Control.Monad.Except (Except) -import Data.Coerce (Coercible) -import qualified Data.List.NonEmpty as NE -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime (RelativeTime) -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Abstract (HasHardForkHistory (..)) -import Ouroboros.Consensus.HardFork.History (Summary) -import qualified Ouroboros.Consensus.HardFork.History.Qry as Qry -import Ouroboros.Consensus.HeaderValidation hiding (validateHeader) -import qualified Ouroboros.Consensus.HeaderValidation as HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Tables.Utils (applyDiffs) -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Util.CallStack (HasCallStack) -import Ouroboros.Network.AnchoredSeq (Anchorable, AnchoredSeq (..)) -import qualified Ouroboros.Network.AnchoredSeq as AS -import Ouroboros.Network.Mock.Chain (Chain) -import qualified Ouroboros.Network.Mock.Chain 
as Chain +import Control.Monad.Except (Except) +import Data.Coerce (Coercible) +import Data.List.NonEmpty qualified as NE +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime (RelativeTime) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Abstract (HasHardForkHistory (..)) +import Ouroboros.Consensus.HardFork.History (Summary) +import Ouroboros.Consensus.HardFork.History.Qry qualified as Qry +import Ouroboros.Consensus.HeaderValidation hiding (validateHeader) +import Ouroboros.Consensus.HeaderValidation qualified as HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Tables.Utils (applyDiffs) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Util.CallStack (HasCallStack) +import Ouroboros.Network.AnchoredSeq (Anchorable, AnchoredSeq (..)) +import Ouroboros.Network.AnchoredSeq qualified as AS +import Ouroboros.Network.Mock.Chain (Chain) +import Ouroboros.Network.Mock.Chain qualified as Chain -- | Maintain a history of 'HeaderStateWithTime's. 
-newtype HeaderStateHistory blk = HeaderStateHistory { - unHeaderStateHistory :: - AnchoredSeq - (WithOrigin SlotNo) - (HeaderStateWithTime blk) - (HeaderStateWithTime blk) - } - deriving (Generic) +newtype HeaderStateHistory blk = HeaderStateHistory + { unHeaderStateHistory :: + AnchoredSeq + (WithOrigin SlotNo) + (HeaderStateWithTime blk) + (HeaderStateWithTime blk) + } + deriving Generic -deriving stock instance (BlockSupportsProtocol blk, HasAnnTip blk) - => Eq (HeaderStateHistory blk) -deriving stock instance (BlockSupportsProtocol blk, HasAnnTip blk) - => Show (HeaderStateHistory blk) -deriving newtype instance (BlockSupportsProtocol blk, HasAnnTip blk) - => NoThunks (HeaderStateHistory blk) +deriving stock instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + Eq (HeaderStateHistory blk) +deriving stock instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + Show (HeaderStateHistory blk) +deriving newtype instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + NoThunks (HeaderStateHistory blk) current :: HeaderStateHistory blk -> HeaderStateWithTime blk current = either id id . AS.head . unHeaderStateHistory @@ -88,17 +94,18 @@ append h (HeaderStateHistory history) = HeaderStateHistory (history :> h) -- snapshot and an anchor. 
trim :: Int -> HeaderStateHistory blk -> HeaderStateHistory blk trim n (HeaderStateHistory history) = - HeaderStateHistory (AS.anchorNewest (fromIntegral n) history) + HeaderStateHistory (AS.anchorNewest (fromIntegral n) history) cast :: - ( Coercible (ChainDepState (BlockProtocol blk )) - (ChainDepState (BlockProtocol blk')) - , TipInfo blk ~ TipInfo blk' - ) - => HeaderStateHistory blk -> HeaderStateHistory blk' + ( Coercible + (ChainDepState (BlockProtocol blk)) + (ChainDepState (BlockProtocol blk')) + , TipInfo blk ~ TipInfo blk' + ) => + HeaderStateHistory blk -> HeaderStateHistory blk' cast (HeaderStateHistory history) = - HeaderStateHistory - $ AS.bimap castHeaderStateWithTime castHeaderStateWithTime history + HeaderStateHistory $ + AS.bimap castHeaderStateWithTime castHeaderStateWithTime history -- | \( O\(n\) \). Rewind the header state history -- @@ -123,20 +130,21 @@ cast (HeaderStateHistory history) = -- rewinding that header state again by @j@ where @i + j > k@ is not possible -- and will yield 'Nothing'. rewind :: - forall blk. (HasAnnTip blk) - => Point blk - -> HeaderStateHistory blk - -> Maybe (HeaderStateHistory blk, Maybe (HeaderStateWithTime blk)) + forall blk. + HasAnnTip blk => + Point blk -> + HeaderStateHistory blk -> + Maybe (HeaderStateHistory blk, Maybe (HeaderStateWithTime blk)) rewind p (HeaderStateHistory history) = do - (prefix, suffix) <- AS.splitAfterMeasure + (prefix, suffix) <- + AS.splitAfterMeasure (pointSlot p) ((== p) . headerStatePoint . hswtHeaderState . 
either id id) history - let oldestRewound = case suffix of - AS.Empty _ -> Nothing - hswt AS.:< _ -> Just hswt - pure (HeaderStateHistory prefix, oldestRewound) - + let oldestRewound = case suffix of + AS.Empty _ -> Nothing + hswt AS.:< _ -> Just hswt + pure (HeaderStateHistory prefix, oldestRewound) {------------------------------------------------------------------------------- HeaderStateWithTime @@ -145,61 +153,66 @@ rewind p (HeaderStateHistory history) = do -- | A 'HeaderState' together with the 'RelativeTime' corresponding to the tip -- slot of the state. For a state at 'Origin', we use the same time as for slot -- 0. -data HeaderStateWithTime blk = HeaderStateWithTime { - hswtHeaderState :: !(HeaderState blk) - , hswtSlotTime :: !RelativeTime +data HeaderStateWithTime blk = HeaderStateWithTime + { hswtHeaderState :: !(HeaderState blk) + , hswtSlotTime :: !RelativeTime } - deriving stock (Generic) + deriving stock Generic -deriving stock instance (BlockSupportsProtocol blk, HasAnnTip blk) - => Eq (HeaderStateWithTime blk) -deriving stock instance (BlockSupportsProtocol blk, HasAnnTip blk) - => Show (HeaderStateWithTime blk) -deriving anyclass instance (BlockSupportsProtocol blk, HasAnnTip blk) - => NoThunks (HeaderStateWithTime blk) +deriving stock instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + Eq (HeaderStateWithTime blk) +deriving stock instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + Show (HeaderStateWithTime blk) +deriving anyclass instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + NoThunks (HeaderStateWithTime blk) instance Anchorable (WithOrigin SlotNo) (HeaderStateWithTime blk) (HeaderStateWithTime blk) where asAnchor = id getAnchorMeasure _ = fmap annTipSlotNo . headerStateTip . 
hswtHeaderState castHeaderStateWithTime :: - ( Coercible (ChainDepState (BlockProtocol blk )) - (ChainDepState (BlockProtocol blk')) - , TipInfo blk ~ TipInfo blk' - ) - => HeaderStateWithTime blk -> HeaderStateWithTime blk' -castHeaderStateWithTime hswt = HeaderStateWithTime { - hswtHeaderState = castHeaderState $ hswtHeaderState hswt - , hswtSlotTime = hswtSlotTime hswt + ( Coercible + (ChainDepState (BlockProtocol blk)) + (ChainDepState (BlockProtocol blk')) + , TipInfo blk ~ TipInfo blk' + ) => + HeaderStateWithTime blk -> HeaderStateWithTime blk' +castHeaderStateWithTime hswt = + HeaderStateWithTime + { hswtHeaderState = castHeaderState $ hswtHeaderState hswt + , hswtSlotTime = hswtSlotTime hswt } mkHeaderStateWithTimeFromSummary :: - (HasCallStack, HasAnnTip blk) - => Summary (HardForkIndices blk) - -- ^ Must be able to convert the tip slot of the 'HeaderState' to a time. - -> HeaderState blk - -> HeaderStateWithTime blk + (HasCallStack, HasAnnTip blk) => + -- | Must be able to convert the tip slot of the 'HeaderState' to a time. 
+ Summary (HardForkIndices blk) -> + HeaderState blk -> + HeaderStateWithTime blk mkHeaderStateWithTimeFromSummary summary hst = - HeaderStateWithTime { - hswtHeaderState = hst - , hswtSlotTime = slotTime - } - where - (slotTime, _) = Qry.runQueryPure qry summary - qry = Qry.slotToWallclock slot - slot = fromWithOrigin 0 $ pointSlot $ headerStatePoint hst + HeaderStateWithTime + { hswtHeaderState = hst + , hswtSlotTime = slotTime + } + where + (slotTime, _) = Qry.runQueryPure qry summary + qry = Qry.slotToWallclock slot + slot = fromWithOrigin 0 $ pointSlot $ headerStatePoint hst mkHeaderStateWithTime :: - (HasCallStack, HasHardForkHistory blk, HasAnnTip blk) - => LedgerConfig blk - -> ExtLedgerState blk mk - -> HeaderStateWithTime blk + (HasCallStack, HasHardForkHistory blk, HasAnnTip blk) => + LedgerConfig blk -> + ExtLedgerState blk mk -> + HeaderStateWithTime blk mkHeaderStateWithTime lcfg (ExtLedgerState lst hst) = - mkHeaderStateWithTimeFromSummary summary hst - where - -- A summary can always translate the tip slot of the ledger state it was - -- created from. - summary = hardForkSummary lcfg lst + mkHeaderStateWithTimeFromSummary summary hst + where + -- A summary can always translate the tip slot of the ledger state it was + -- created from. + summary = hardForkSummary lcfg lst {------------------------------------------------------------------------------- Validation @@ -212,24 +225,26 @@ mkHeaderStateWithTime lcfg (ExtLedgerState lst hst) = -- -- Note: this function does not trim the 'HeaderStateHistory'. validateHeader :: - forall blk. (BlockSupportsProtocol blk, ValidateEnvelope blk) - => TopLevelConfig blk - -> LedgerView (BlockProtocol blk) - -> Header blk - -> RelativeTime - -- ^ The time of the slot of the header. - -> HeaderStateHistory blk - -> Except (HeaderError blk) (HeaderStateHistory blk) + forall blk. 
+ (BlockSupportsProtocol blk, ValidateEnvelope blk) => + TopLevelConfig blk -> + LedgerView (BlockProtocol blk) -> + Header blk -> + -- | The time of the slot of the header. + RelativeTime -> + HeaderStateHistory blk -> + Except (HeaderError blk) (HeaderStateHistory blk) validateHeader cfg lv hdr slotTime history = do - st' <- HeaderValidation.validateHeader cfg lv hdr st - return $ append (HeaderStateWithTime st' slotTime) history - where - st :: Ticked (HeaderState blk) - st = tickHeaderState - (configConsensus cfg) - lv - (blockSlot hdr) - (hswtHeaderState $ current history) + st' <- HeaderValidation.validateHeader cfg lv hdr st + return $ append (HeaderStateWithTime st' slotTime) history + where + st :: Ticked (HeaderState blk) + st = + tickHeaderState + (configConsensus cfg) + lv + (blockSlot hdr) + (hswtHeaderState $ current history) {------------------------------------------------------------------------------- Support for tests @@ -240,23 +255,23 @@ validateHeader cfg lv hdr slotTime history = do -- -- PRECONDITION: the blocks in the chain are valid. fromChain :: - forall blk. - ( ApplyBlock (ExtLedgerState blk) blk - , HasHardForkHistory blk - , HasAnnTip blk - ) - => TopLevelConfig blk - -> ExtLedgerState blk ValuesMK - -- ^ Initial ledger state - -> Chain blk - -> HeaderStateHistory blk + forall blk. + ( ApplyBlock (ExtLedgerState blk) blk + , HasHardForkHistory blk + , HasAnnTip blk + ) => + TopLevelConfig blk -> + -- | Initial ledger state + ExtLedgerState blk ValuesMK -> + Chain blk -> + HeaderStateHistory blk fromChain cfg initState chain = - HeaderStateHistory (AS.fromOldestFirst anchorSnapshot snapshots) - where - anchorSnapshot NE.:| snapshots = - fmap (mkHeaderStateWithTime (configLedger cfg)) - . NE.scanl - (\st blk -> applyDiffs st $ tickThenReapply OmitLedgerEvents (ExtLedgerCfg cfg) blk st) - initState - . 
Chain.toOldestFirst - $ chain + HeaderStateHistory (AS.fromOldestFirst anchorSnapshot snapshots) + where + anchorSnapshot NE.:| snapshots = + fmap (mkHeaderStateWithTime (configLedger cfg)) + . NE.scanl + (\st blk -> applyDiffs st $ tickThenReapply OmitLedgerEvents (ExtLedgerCfg cfg) blk st) + initState + . Chain.toOldestFirst + $ chain diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HeaderValidation.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HeaderValidation.hs index 16833533a7..5a72257fd2 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HeaderValidation.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/HeaderValidation.hs @@ -18,9 +18,10 @@ {-# LANGUAGE UndecidableInstances #-} -- | Header validation -module Ouroboros.Consensus.HeaderValidation ( - revalidateHeader +module Ouroboros.Consensus.HeaderValidation + ( revalidateHeader , validateHeader + -- * Annotated tips , AnnTip (..) , HasAnnTip (..) @@ -30,6 +31,7 @@ module Ouroboros.Consensus.HeaderValidation ( , castAnnTip , getAnnTip , mapAnnTip + -- * Header state , HeaderState (..) , castHeaderState @@ -37,16 +39,20 @@ module Ouroboros.Consensus.HeaderValidation ( , headerStateBlockNo , headerStatePoint , tickHeaderState + -- * Validate header envelope , BasicEnvelopeValidation (..) , HeaderEnvelopeError (..) , ValidateEnvelope (..) , castHeaderEnvelopeError + -- * Errors , HeaderError (..) , castHeaderError + -- * TipInfoIsEBB , TipInfoIsEBB (..) + -- * Serialization , decodeAnnTipIsEBB , decodeHeaderState @@ -54,41 +60,48 @@ module Ouroboros.Consensus.HeaderValidation ( , defaultEncodeAnnTip , encodeAnnTipIsEBB , encodeHeaderState + -- * Type family instances , Ticked (..) + -- * Header with time , HeaderWithTime (..) 
, mkHeaderWithTime ) where -import Cardano.Binary (enforceSize) -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding, encodeListLen) -import Codec.Serialise (decode, encode) -import Control.Monad (unless, when) -import Control.Monad.Except (Except, runExcept, throwError, - withExcept) -import Data.Coerce -import Data.Kind (Type) -import qualified Data.Map.Strict as Map -import Data.Proxy -import Data.Typeable (Typeable) -import Data.Void (Void) -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime (RelativeTime) -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Abstract - (HasHardForkHistory (hardForkSummary)) -import qualified Ouroboros.Consensus.HardFork.History.Qry as Qry -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util (whenJust) -import Ouroboros.Consensus.Util.Assert -import qualified Ouroboros.Consensus.Util.CBOR as Util.CBOR +import Cardano.Binary (enforceSize) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding, encodeListLen) +import Codec.Serialise (decode, encode) +import Control.Monad (unless, when) +import Control.Monad.Except + ( Except + , runExcept + , throwError + , withExcept + ) +import Data.Coerce +import Data.Kind (Type) +import Data.Map.Strict qualified as Map +import Data.Proxy +import Data.Typeable (Typeable) +import Data.Void (Void) +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime (RelativeTime) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Abstract + ( HasHardForkHistory (hardForkSummary) + ) +import Ouroboros.Consensus.HardFork.History.Qry qualified as Qry +import Ouroboros.Consensus.Ledger.Basics 
+import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util (whenJust) +import Ouroboros.Consensus.Util.Assert +import Ouroboros.Consensus.Util.CBOR qualified as Util.CBOR {------------------------------------------------------------------------------- Preliminary: annotated tip @@ -100,15 +113,15 @@ import qualified Ouroboros.Consensus.Util.CBOR as Util.CBOR -- header envelope. Under normal circumstances no additional information is -- required, but for instance for Byron we need to know if the previous header -- was an EBB. -data AnnTip blk = AnnTip { - annTipSlotNo :: !SlotNo - , annTipBlockNo :: !BlockNo - , annTipInfo :: !(TipInfo blk) - } - deriving (Generic) +data AnnTip blk = AnnTip + { annTipSlotNo :: !SlotNo + , annTipBlockNo :: !BlockNo + , annTipInfo :: !(TipInfo blk) + } + deriving Generic -deriving instance HasAnnTip blk => Show (AnnTip blk) -deriving instance HasAnnTip blk => Eq (AnnTip blk) +deriving instance HasAnnTip blk => Show (AnnTip blk) +deriving instance HasAnnTip blk => Eq (AnnTip blk) deriving instance HasAnnTip blk => NoThunks (AnnTip blk) annTipHash :: forall blk. HasAnnTip blk => AnnTip blk -> HeaderHash blk @@ -124,13 +137,16 @@ castAnnTip :: TipInfo blk ~ TipInfo blk' => AnnTip blk -> AnnTip blk' castAnnTip AnnTip{..} = AnnTip{..} mapAnnTip :: (TipInfo blk -> TipInfo blk') -> AnnTip blk -> AnnTip blk' -mapAnnTip f AnnTip { annTipInfo, .. } = AnnTip { annTipInfo = f annTipInfo, .. 
} - -class ( StandardHash blk - , Show (TipInfo blk) - , Eq (TipInfo blk) - , NoThunks (TipInfo blk) - ) => HasAnnTip blk where +mapAnnTip f AnnTip{annTipInfo, ..} = AnnTip{annTipInfo = f annTipInfo, ..} + +class + ( StandardHash blk + , Show (TipInfo blk) + , Eq (TipInfo blk) + , NoThunks (TipInfo blk) + ) => + HasAnnTip blk + where type TipInfo blk :: Type type TipInfo blk = HeaderHash blk @@ -139,21 +155,24 @@ class ( StandardHash blk -- | The tip info must at least include the hash tipInfoHash :: proxy blk -> TipInfo blk -> HeaderHash blk - - default tipInfoHash :: (TipInfo blk ~ HeaderHash blk) - => proxy blk -> TipInfo blk -> HeaderHash blk + default tipInfoHash :: + TipInfo blk ~ HeaderHash blk => + proxy blk -> TipInfo blk -> HeaderHash blk tipInfoHash _ = id - default getTipInfo :: (TipInfo blk ~ HeaderHash blk, HasHeader (Header blk)) - => Header blk -> TipInfo blk + default getTipInfo :: + (TipInfo blk ~ HeaderHash blk, HasHeader (Header blk)) => + Header blk -> TipInfo blk getTipInfo = blockHash -getAnnTip :: (HasHeader (Header blk), HasAnnTip blk) - => Header blk -> AnnTip blk -getAnnTip hdr = AnnTip { - annTipSlotNo = blockSlot hdr - , annTipBlockNo = blockNo hdr - , annTipInfo = getTipInfo hdr +getAnnTip :: + (HasHeader (Header blk), HasAnnTip blk) => + Header blk -> AnnTip blk +getAnnTip hdr = + AnnTip + { annTipSlotNo = blockSlot hdr + , annTipBlockNo = blockNo hdr + , annTipInfo = getTipInfo hdr } {------------------------------------------------------------------------------- @@ -163,43 +182,51 @@ getAnnTip hdr = AnnTip { -- | State required to validate the header -- -- See 'validateHeader' for details -data HeaderState blk = HeaderState { - headerStateTip :: !(WithOrigin (AnnTip blk)) - , headerStateChainDep :: !(ChainDepState (BlockProtocol blk)) - } - deriving (Generic) +data HeaderState blk = HeaderState + { headerStateTip :: !(WithOrigin (AnnTip blk)) + , headerStateChainDep :: !(ChainDepState (BlockProtocol blk)) + } + deriving Generic 
castHeaderState :: - ( Coercible (ChainDepState (BlockProtocol blk )) - (ChainDepState (BlockProtocol blk')) - , TipInfo blk ~ TipInfo blk' - ) - => HeaderState blk -> HeaderState blk' -castHeaderState HeaderState {..} = HeaderState { - headerStateTip = castAnnTip <$> headerStateTip + ( Coercible + (ChainDepState (BlockProtocol blk)) + (ChainDepState (BlockProtocol blk')) + , TipInfo blk ~ TipInfo blk' + ) => + HeaderState blk -> HeaderState blk' +castHeaderState HeaderState{..} = + HeaderState + { headerStateTip = castAnnTip <$> headerStateTip , headerStateChainDep = coerce headerStateChainDep } -deriving instance (BlockSupportsProtocol blk, HasAnnTip blk) - => Eq (HeaderState blk) -deriving instance (BlockSupportsProtocol blk, HasAnnTip blk) - => Show (HeaderState blk) -deriving instance (BlockSupportsProtocol blk, HasAnnTip blk) - => NoThunks (HeaderState blk) - -data instance Ticked (HeaderState blk) = TickedHeaderState { - untickedHeaderStateTip :: WithOrigin (AnnTip blk) - , tickedHeaderStateChainDep :: Ticked (ChainDepState (BlockProtocol blk)) - } +deriving instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + Eq (HeaderState blk) +deriving instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + Show (HeaderState blk) +deriving instance + (BlockSupportsProtocol blk, HasAnnTip blk) => + NoThunks (HeaderState blk) + +data instance Ticked (HeaderState blk) = TickedHeaderState + { untickedHeaderStateTip :: WithOrigin (AnnTip blk) + , tickedHeaderStateChainDep :: Ticked (ChainDepState (BlockProtocol blk)) + } -- | Tick the 'ChainDepState' inside the 'HeaderState' -tickHeaderState :: ConsensusProtocol (BlockProtocol blk) - => ConsensusConfig (BlockProtocol blk) - -> LedgerView (BlockProtocol blk) - -> SlotNo - -> HeaderState blk -> Ticked (HeaderState blk) -tickHeaderState cfg ledgerView slot HeaderState {..} = TickedHeaderState { - untickedHeaderStateTip = headerStateTip +tickHeaderState :: + ConsensusProtocol (BlockProtocol blk) => + ConsensusConfig 
(BlockProtocol blk) -> + LedgerView (BlockProtocol blk) -> + SlotNo -> + HeaderState blk -> + Ticked (HeaderState blk) +tickHeaderState cfg ledgerView slot HeaderState{..} = + TickedHeaderState + { untickedHeaderStateTip = headerStateTip , tickedHeaderStateChainDep = tickChainDepState cfg ledgerView slot headerStateChainDep } @@ -212,7 +239,7 @@ headerStateBlockNo = fmap annTipBlockNo . headerStateTip headerStatePoint :: HasAnnTip blk => HeaderState blk -> Point blk headerStatePoint = - withOriginRealPointToPoint + withOriginRealPointToPoint . fmap annTipRealPoint . headerStateTip @@ -220,63 +247,68 @@ headerStatePoint = Validate header envelope -------------------------------------------------------------------------------} -data HeaderEnvelopeError blk = - -- | Invalid block number +data HeaderEnvelopeError blk + = -- | Invalid block number -- -- We record both the expected and actual block number UnexpectedBlockNo !BlockNo !BlockNo - - -- | Invalid slot number + | -- | Invalid slot number -- -- We record both the expected (minimum) and actual slot number - | UnexpectedSlotNo !SlotNo !SlotNo - - -- | Invalid hash (in the reference to the previous block) + UnexpectedSlotNo !SlotNo !SlotNo + | -- | Invalid hash (in the reference to the previous block) -- -- We record the current tip as well as the prev hash of the new block. - | UnexpectedPrevHash !(WithOrigin (HeaderHash blk)) !(ChainHash blk) - - -- | The block at the given block number has a hash which does not match the + UnexpectedPrevHash !(WithOrigin (HeaderHash blk)) !(ChainHash blk) + | -- | The block at the given block number has a hash which does not match the -- expected checkpoint hash. 
-- -- > CheckpointMismatch blockNo expected actual - -- - | CheckpointMismatch !BlockNo !(HeaderHash blk) !(HeaderHash blk) - - -- | Block specific envelope error - | OtherHeaderEnvelopeError !(OtherHeaderEnvelopeError blk) - deriving (Generic) - -deriving instance (ValidateEnvelope blk) => Eq (HeaderEnvelopeError blk) -deriving instance (ValidateEnvelope blk) => Show (HeaderEnvelopeError blk) -deriving instance ( ValidateEnvelope blk - , Typeable blk - ) => NoThunks (HeaderEnvelopeError blk) - -castHeaderEnvelopeError :: ( HeaderHash blk ~ HeaderHash blk' - , OtherHeaderEnvelopeError blk ~ OtherHeaderEnvelopeError blk' - ) - => HeaderEnvelopeError blk -> HeaderEnvelopeError blk' + CheckpointMismatch !BlockNo !(HeaderHash blk) !(HeaderHash blk) + | -- | Block specific envelope error + OtherHeaderEnvelopeError !(OtherHeaderEnvelopeError blk) + deriving Generic + +deriving instance ValidateEnvelope blk => Eq (HeaderEnvelopeError blk) +deriving instance ValidateEnvelope blk => Show (HeaderEnvelopeError blk) +deriving instance + ( ValidateEnvelope blk + , Typeable blk + ) => + NoThunks (HeaderEnvelopeError blk) + +castHeaderEnvelopeError :: + ( HeaderHash blk ~ HeaderHash blk' + , OtherHeaderEnvelopeError blk ~ OtherHeaderEnvelopeError blk' + ) => + HeaderEnvelopeError blk -> HeaderEnvelopeError blk' castHeaderEnvelopeError = \case - OtherHeaderEnvelopeError err -> OtherHeaderEnvelopeError err - UnexpectedBlockNo expected actual -> UnexpectedBlockNo expected actual - UnexpectedSlotNo expected actual -> UnexpectedSlotNo expected actual - UnexpectedPrevHash oldTip prevHash -> UnexpectedPrevHash oldTip (castHash prevHash) - CheckpointMismatch bNo expected actual -> CheckpointMismatch bNo expected actual + OtherHeaderEnvelopeError err -> OtherHeaderEnvelopeError err + UnexpectedBlockNo expected actual -> UnexpectedBlockNo expected actual + UnexpectedSlotNo expected actual -> UnexpectedSlotNo expected actual + UnexpectedPrevHash oldTip prevHash -> UnexpectedPrevHash oldTip 
(castHash prevHash) + CheckpointMismatch bNo expected actual -> CheckpointMismatch bNo expected actual -- | Ledger-independent envelope validation (block, slot, hash) -class ( HasHeader (Header blk) - , HasAnnTip blk - ) => BasicEnvelopeValidation blk where +class + ( HasHeader (Header blk) + , HasAnnTip blk + ) => + BasicEnvelopeValidation blk + where -- | The block number of the first block on the chain expectedFirstBlockNo :: proxy blk -> BlockNo expectedFirstBlockNo _ = BlockNo 0 -- | Next block number - expectedNextBlockNo :: proxy blk - -> TipInfo blk -- ^ Old tip - -> TipInfo blk -- ^ New block - -> BlockNo -> BlockNo + expectedNextBlockNo :: + proxy blk -> + -- | Old tip + TipInfo blk -> + -- | New block + TipInfo blk -> + BlockNo -> + BlockNo expectedNextBlockNo _ _ _ = succ -- | The smallest possible 'SlotNo' @@ -287,123 +319,150 @@ class ( HasHeader (Header blk) minimumPossibleSlotNo _ = SlotNo 0 -- | Minimum next slot number - minimumNextSlotNo :: proxy blk - -> TipInfo blk -- ^ Old tip - -> TipInfo blk -- ^ New block - -> SlotNo -> SlotNo + minimumNextSlotNo :: + proxy blk -> + -- | Old tip + TipInfo blk -> + -- | New block + TipInfo blk -> + SlotNo -> + SlotNo minimumNextSlotNo _ _ _ = succ -- | Validate header envelope -class ( BasicEnvelopeValidation blk - , GetPrevHash blk - , Eq (OtherHeaderEnvelopeError blk) - , Show (OtherHeaderEnvelopeError blk) - , NoThunks (OtherHeaderEnvelopeError blk) - ) => ValidateEnvelope blk where - +class + ( BasicEnvelopeValidation blk + , GetPrevHash blk + , Eq (OtherHeaderEnvelopeError blk) + , Show (OtherHeaderEnvelopeError blk) + , NoThunks (OtherHeaderEnvelopeError blk) + ) => + ValidateEnvelope blk + where -- | A block-specific error that 'validateEnvelope' can return. 
type OtherHeaderEnvelopeError blk :: Type + type OtherHeaderEnvelopeError blk = Void -- | Do additional envelope checks - additionalEnvelopeChecks :: TopLevelConfig blk - -> LedgerView (BlockProtocol blk) - -> Header blk - -> Except (OtherHeaderEnvelopeError blk) () + additionalEnvelopeChecks :: + TopLevelConfig blk -> + LedgerView (BlockProtocol blk) -> + Header blk -> + Except (OtherHeaderEnvelopeError blk) () additionalEnvelopeChecks _ _ _ = return () -- | Validate the header envelope -validateEnvelope :: forall blk. (ValidateEnvelope blk) - => TopLevelConfig blk - -> LedgerView (BlockProtocol blk) - -> WithOrigin (AnnTip blk) -- ^ Old tip - -> Header blk - -> Except (HeaderEnvelopeError blk) () +validateEnvelope :: + forall blk. + ValidateEnvelope blk => + TopLevelConfig blk -> + LedgerView (BlockProtocol blk) -> + -- | Old tip + WithOrigin (AnnTip blk) -> + Header blk -> + Except (HeaderEnvelopeError blk) () validateEnvelope cfg ledgerView oldTip hdr = do - unless (actualBlockNo == expectedBlockNo) $ - throwError $ UnexpectedBlockNo expectedBlockNo actualBlockNo - unless (actualSlotNo >= expectedSlotNo) $ - throwError $ UnexpectedSlotNo expectedSlotNo actualSlotNo - unless (checkPrevHash' (annTipHash <$> oldTip) actualPrevHash) $ - throwError $ UnexpectedPrevHash (annTipHash <$> oldTip) actualPrevHash - validateIfCheckpoint (topLevelConfigCheckpoints cfg) hdr - withExcept OtherHeaderEnvelopeError $ - additionalEnvelopeChecks cfg ledgerView hdr - where - checkPrevHash' :: WithOrigin (HeaderHash blk) - -> ChainHash blk - -> Bool - checkPrevHash' Origin GenesisHash = True - checkPrevHash' (NotOrigin h) (BlockHash h') = h == h' - checkPrevHash' _ _ = False - - actualSlotNo :: SlotNo - actualBlockNo :: BlockNo - actualPrevHash :: ChainHash blk - - actualSlotNo = blockSlot hdr - actualBlockNo = blockNo hdr - actualPrevHash = headerPrevHash hdr - - expectedSlotNo :: SlotNo -- Lower bound only - expectedSlotNo = - case oldTip of - Origin -> minimumPossibleSlotNo p - 
NotOrigin tip -> minimumNextSlotNo p (annTipInfo tip) - (getTipInfo hdr) - (annTipSlotNo tip) - - expectedBlockNo :: BlockNo - expectedBlockNo = - case oldTip of - Origin -> expectedFirstBlockNo p - NotOrigin tip -> expectedNextBlockNo p (annTipInfo tip) - (getTipInfo hdr) - (annTipBlockNo tip) - - p = Proxy @blk + unless (actualBlockNo == expectedBlockNo) $ + throwError $ + UnexpectedBlockNo expectedBlockNo actualBlockNo + unless (actualSlotNo >= expectedSlotNo) $ + throwError $ + UnexpectedSlotNo expectedSlotNo actualSlotNo + unless (checkPrevHash' (annTipHash <$> oldTip) actualPrevHash) $ + throwError $ + UnexpectedPrevHash (annTipHash <$> oldTip) actualPrevHash + validateIfCheckpoint (topLevelConfigCheckpoints cfg) hdr + withExcept OtherHeaderEnvelopeError $ + additionalEnvelopeChecks cfg ledgerView hdr + where + checkPrevHash' :: + WithOrigin (HeaderHash blk) -> + ChainHash blk -> + Bool + checkPrevHash' Origin GenesisHash = True + checkPrevHash' (NotOrigin h) (BlockHash h') = h == h' + checkPrevHash' _ _ = False + + actualSlotNo :: SlotNo + actualBlockNo :: BlockNo + actualPrevHash :: ChainHash blk + + actualSlotNo = blockSlot hdr + actualBlockNo = blockNo hdr + actualPrevHash = headerPrevHash hdr + + expectedSlotNo :: SlotNo -- Lower bound only + expectedSlotNo = + case oldTip of + Origin -> minimumPossibleSlotNo p + NotOrigin tip -> + minimumNextSlotNo + p + (annTipInfo tip) + (getTipInfo hdr) + (annTipSlotNo tip) + + expectedBlockNo :: BlockNo + expectedBlockNo = + case oldTip of + Origin -> expectedFirstBlockNo p + NotOrigin tip -> + expectedNextBlockNo + p + (annTipInfo tip) + (getTipInfo hdr) + (annTipBlockNo tip) + + p = Proxy @blk validateIfCheckpoint :: - HasHeader (Header blk) - => CheckpointsMap blk - -> Header blk - -> Except (HeaderEnvelopeError blk) () + HasHeader (Header blk) => + CheckpointsMap blk -> + Header blk -> + Except (HeaderEnvelopeError blk) () validateIfCheckpoint checkpointsMap hdr = - whenJust (Map.lookup (blockNo hdr) $ 
unCheckpointsMap checkpointsMap) $ - \checkpoint -> when (headerHash hdr /= checkpoint) $ - throwError $ CheckpointMismatch (blockNo hdr) checkpoint (headerHash hdr) + whenJust (Map.lookup (blockNo hdr) $ unCheckpointsMap checkpointsMap) $ + \checkpoint -> + when (headerHash hdr /= checkpoint) $ + throwError $ + CheckpointMismatch (blockNo hdr) checkpoint (headerHash hdr) {------------------------------------------------------------------------------- Errors -------------------------------------------------------------------------------} -- | Invalid header -data HeaderError blk = - -- | Invalid consensus protocol fields +data HeaderError blk + = -- | Invalid consensus protocol fields HeaderProtocolError !(ValidationErr (BlockProtocol blk)) - - -- | Failed to validate the envelope - | HeaderEnvelopeError !(HeaderEnvelopeError blk) - deriving (Generic) - -deriving instance (BlockSupportsProtocol blk, ValidateEnvelope blk) - => Eq (HeaderError blk) -deriving instance (BlockSupportsProtocol blk, ValidateEnvelope blk) - => Show (HeaderError blk) -deriving instance (BlockSupportsProtocol blk, ValidateEnvelope blk) - => NoThunks (HeaderError blk) - -castHeaderError :: ( ValidationErr (BlockProtocol blk ) - ~ ValidationErr (BlockProtocol blk') - , HeaderHash blk - ~ HeaderHash blk' - , OtherHeaderEnvelopeError blk - ~ OtherHeaderEnvelopeError blk' - ) - => HeaderError blk -> HeaderError blk' + | -- | Failed to validate the envelope + HeaderEnvelopeError !(HeaderEnvelopeError blk) + deriving Generic + +deriving instance + (BlockSupportsProtocol blk, ValidateEnvelope blk) => + Eq (HeaderError blk) +deriving instance + (BlockSupportsProtocol blk, ValidateEnvelope blk) => + Show (HeaderError blk) +deriving instance + (BlockSupportsProtocol blk, ValidateEnvelope blk) => + NoThunks (HeaderError blk) + +castHeaderError :: + ( ValidationErr (BlockProtocol blk) + ~ ValidationErr (BlockProtocol blk') + , HeaderHash blk + ~ HeaderHash blk' + , OtherHeaderEnvelopeError blk + ~ 
OtherHeaderEnvelopeError blk' + ) => + HeaderError blk -> HeaderError blk' castHeaderError (HeaderProtocolError e) = HeaderProtocolError e -castHeaderError (HeaderEnvelopeError e) = HeaderEnvelopeError $ - castHeaderEnvelopeError e +castHeaderError (HeaderEnvelopeError e) = + HeaderEnvelopeError $ + castHeaderEnvelopeError e {------------------------------------------------------------------------------- Validation proper @@ -440,26 +499,28 @@ castHeaderError (HeaderEnvelopeError e) = HeaderEnvelopeError $ -- /If/ a particular ledger wants to verify additional fields in the header, it -- will get the chance to do so in 'applyBlockLedgerResult', which is passed the -- entire block (not just the block body). -validateHeader :: (BlockSupportsProtocol blk, ValidateEnvelope blk) - => TopLevelConfig blk - -> LedgerView (BlockProtocol blk) - -> Header blk - -> Ticked (HeaderState blk) - -> Except (HeaderError blk) (HeaderState blk) +validateHeader :: + (BlockSupportsProtocol blk, ValidateEnvelope blk) => + TopLevelConfig blk -> + LedgerView (BlockProtocol blk) -> + Header blk -> + Ticked (HeaderState blk) -> + Except (HeaderError blk) (HeaderState blk) validateHeader cfg ledgerView hdr st = do - withExcept HeaderEnvelopeError $ - validateEnvelope - cfg - ledgerView - (untickedHeaderStateTip st) - hdr - chainDepState' <- withExcept HeaderProtocolError $ + withExcept HeaderEnvelopeError $ + validateEnvelope + cfg + ledgerView + (untickedHeaderStateTip st) + hdr + chainDepState' <- + withExcept HeaderProtocolError $ updateChainDepState (configConsensus cfg) (validateView (configBlock cfg) hdr) (blockSlot hdr) (tickedHeaderStateChainDep st) - return $ HeaderState (NotOrigin (getAnnTip hdr)) chainDepState' + return $ HeaderState (NotOrigin (getAnnTip hdr)) chainDepState' -- | Header revalidation -- @@ -469,28 +530,31 @@ validateHeader cfg ledgerView hdr st = do -- Expensive validation checks are skipped ('reupdateChainDepState' vs. -- 'updateChainDepState'). 
revalidateHeader :: - forall blk. (BlockSupportsProtocol blk, ValidateEnvelope blk, HasCallStack) - => TopLevelConfig blk - -> LedgerView (BlockProtocol blk) - -> Header blk - -> Ticked (HeaderState blk) - -> HeaderState blk + forall blk. + (BlockSupportsProtocol blk, ValidateEnvelope blk, HasCallStack) => + TopLevelConfig blk -> + LedgerView (BlockProtocol blk) -> + Header blk -> + Ticked (HeaderState blk) -> + HeaderState blk revalidateHeader cfg ledgerView hdr st = - assertWithMsg envelopeCheck $ - HeaderState - (NotOrigin (getAnnTip hdr)) - chainDepState' - where - chainDepState' :: ChainDepState (BlockProtocol blk) - chainDepState' = - reupdateChainDepState - (configConsensus cfg) - (validateView (configBlock cfg) hdr) - (blockSlot hdr) - (tickedHeaderStateChainDep st) - - envelopeCheck :: Either String () - envelopeCheck = runExcept $ withExcept show $ + assertWithMsg envelopeCheck $ + HeaderState + (NotOrigin (getAnnTip hdr)) + chainDepState' + where + chainDepState' :: ChainDepState (BlockProtocol blk) + chainDepState' = + reupdateChainDepState + (configConsensus cfg) + (validateView (configBlock cfg) hdr) + (blockSlot hdr) + (tickedHeaderStateChainDep st) + + envelopeCheck :: Either String () + envelopeCheck = + runExcept $ + withExcept show $ validateEnvelope cfg ledgerView @@ -504,9 +568,9 @@ revalidateHeader cfg ledgerView hdr st = -- | Reusable strict data type for 'TipInfo' in case the 'TipInfo' should -- contain 'IsEBB' in addition to the 'HeaderHash'. data TipInfoIsEBB blk = TipInfoIsEBB !(HeaderHash blk) !IsEBB - deriving (Generic) + deriving Generic -deriving instance StandardHash blk => Eq (TipInfoIsEBB blk) +deriving instance StandardHash blk => Eq (TipInfoIsEBB blk) deriving instance StandardHash blk => Show (TipInfoIsEBB blk) deriving instance StandardHash blk => NoThunks (TipInfoIsEBB blk) @@ -520,36 +584,44 @@ deriving instance StandardHash blk => NoThunks (TipInfoIsEBB blk) -- client) according to the header's chain. 
This clarification may be helpful, -- since it's possible that some other chain would translate that same slot to -- a different time. -data HeaderWithTime blk = HeaderWithTime { - hwtHeader :: !(Header blk) +data HeaderWithTime blk = HeaderWithTime + { hwtHeader :: !(Header blk) , hwtSlotRelativeTime :: !RelativeTime } - deriving (Generic) - -deriving stock instance (Eq (Header blk)) - => Eq (HeaderWithTime blk) -deriving stock instance (Show (Header blk)) - => Show (HeaderWithTime blk) -deriving anyclass instance (NoThunks (Header blk)) - => NoThunks (HeaderWithTime blk) + deriving Generic + +deriving stock instance + Eq (Header blk) => + Eq (HeaderWithTime blk) +deriving stock instance + Show (Header blk) => + Show (HeaderWithTime blk) +deriving anyclass instance + NoThunks (Header blk) => + NoThunks (HeaderWithTime blk) type instance HeaderHash (HeaderWithTime blk) = HeaderHash (Header blk) -instance ( Show (HeaderHash blk) - , Eq (HeaderHash blk) - , Ord (HeaderHash blk) - , Typeable (HeaderHash blk) - , NoThunks (HeaderHash blk) - ) => StandardHash (HeaderWithTime blk) - -instance ( HasHeader (Header blk) - , StandardHash (HeaderWithTime blk) - , Typeable blk - ) => HasHeader (HeaderWithTime blk) where +instance + ( Show (HeaderHash blk) + , Eq (HeaderHash blk) + , Ord (HeaderHash blk) + , Typeable (HeaderHash blk) + , NoThunks (HeaderHash blk) + ) => + StandardHash (HeaderWithTime blk) + +instance + ( HasHeader (Header blk) + , StandardHash (HeaderWithTime blk) + , Typeable blk + ) => + HasHeader (HeaderWithTime blk) + where getHeaderFields = - castHeaderFields - . getHeaderFields - . hwtHeader + castHeaderFields + . getHeaderFields + . hwtHeader instance GetHeader1 HeaderWithTime where getHeader1 = hwtHeader @@ -561,94 +633,104 @@ instance GetHeader1 HeaderWithTime where -- -- This is INLINEed since the summary can usually be reused. 
mkHeaderWithTime :: - ( HasHardForkHistory blk - , HasHeader (Header blk) - ) - => LedgerConfig blk - -> LedgerState blk mk - -> Header blk - -> HeaderWithTime blk + ( HasHardForkHistory blk + , HasHeader (Header blk) + ) => + LedgerConfig blk -> + LedgerState blk mk -> + Header blk -> + HeaderWithTime blk {-# INLINE mkHeaderWithTime #-} mkHeaderWithTime cfg lst = \hdr -> - let summary = hardForkSummary cfg lst - slot = realPointSlot $ headerRealPoint hdr - qry = Qry.slotToWallclock slot - (slotTime, _) = Qry.runQueryPure qry summary - in HeaderWithTime { - hwtHeader = hdr - , hwtSlotRelativeTime = slotTime - } + let summary = hardForkSummary cfg lst + slot = realPointSlot $ headerRealPoint hdr + qry = Qry.slotToWallclock slot + (slotTime, _) = Qry.runQueryPure qry summary + in HeaderWithTime + { hwtHeader = hdr + , hwtSlotRelativeTime = slotTime + } {------------------------------------------------------------------------------- Serialisation -------------------------------------------------------------------------------} -defaultEncodeAnnTip :: TipInfo blk ~ HeaderHash blk - => (HeaderHash blk -> Encoding) - -> (AnnTip blk -> Encoding) -defaultEncodeAnnTip encodeHash AnnTip{..} = mconcat [ - encodeListLen 3 - , encode annTipSlotNo +defaultEncodeAnnTip :: + TipInfo blk ~ HeaderHash blk => + (HeaderHash blk -> Encoding) -> + (AnnTip blk -> Encoding) +defaultEncodeAnnTip encodeHash AnnTip{..} = + mconcat + [ encodeListLen 3 + , encode annTipSlotNo , encodeHash annTipInfo - , encode annTipBlockNo + , encode annTipBlockNo ] -defaultDecodeAnnTip :: TipInfo blk ~ HeaderHash blk - => (forall s. Decoder s (HeaderHash blk)) - -> (forall s. Decoder s (AnnTip blk)) +defaultDecodeAnnTip :: + TipInfo blk ~ HeaderHash blk => + (forall s. Decoder s (HeaderHash blk)) -> + (forall s. 
Decoder s (AnnTip blk)) defaultDecodeAnnTip decodeHash = do - enforceSize "AnnTip" 3 - annTipSlotNo <- decode - annTipInfo <- decodeHash - annTipBlockNo <- decode - return AnnTip{..} - -encodeAnnTipIsEBB :: TipInfo blk ~ TipInfoIsEBB blk - => (HeaderHash blk -> Encoding) - -> (AnnTip blk -> Encoding) -encodeAnnTipIsEBB encodeHash AnnTip{..} = mconcat [ - encodeListLen 4 - , encode annTipSlotNo + enforceSize "AnnTip" 3 + annTipSlotNo <- decode + annTipInfo <- decodeHash + annTipBlockNo <- decode + return AnnTip{..} + +encodeAnnTipIsEBB :: + TipInfo blk ~ TipInfoIsEBB blk => + (HeaderHash blk -> Encoding) -> + (AnnTip blk -> Encoding) +encodeAnnTipIsEBB encodeHash AnnTip{..} = + mconcat + [ encodeListLen 4 + , encode annTipSlotNo , encodeHash hash - , encode annTipBlockNo + , encode annTipBlockNo , encodeInfo isEBB ] - where - TipInfoIsEBB hash isEBB = annTipInfo + where + TipInfoIsEBB hash isEBB = annTipInfo - encodeInfo :: IsEBB -> Encoding - encodeInfo = encode + encodeInfo :: IsEBB -> Encoding + encodeInfo = encode -decodeAnnTipIsEBB :: TipInfo blk ~ TipInfoIsEBB blk - => (forall s. Decoder s (HeaderHash blk)) - -> (forall s. Decoder s (AnnTip blk)) +decodeAnnTipIsEBB :: + TipInfo blk ~ TipInfoIsEBB blk => + (forall s. Decoder s (HeaderHash blk)) -> + (forall s. Decoder s (AnnTip blk)) decodeAnnTipIsEBB decodeHash = do - enforceSize "AnnTip" 4 - annTipSlotNo <- decode - hash <- decodeHash - annTipBlockNo <- decode - isEBB <- decodeInfo - return AnnTip{annTipInfo = TipInfoIsEBB hash isEBB, ..} - where - decodeInfo :: forall s. Decoder s IsEBB - decodeInfo = decode - -encodeHeaderState :: (ChainDepState (BlockProtocol blk) -> Encoding) - -> (AnnTip blk -> Encoding) - -> (HeaderState blk -> Encoding) -encodeHeaderState encodeChainDepState - encodeAnnTip' - HeaderState {..} = mconcat [ - encodeListLen 2 - , Util.CBOR.encodeWithOrigin encodeAnnTip' headerStateTip - , encodeChainDepState headerStateChainDep - ] - -decodeHeaderState :: (forall s. 
Decoder s (ChainDepState (BlockProtocol blk))) - -> (forall s. Decoder s (AnnTip blk)) - -> (forall s. Decoder s (HeaderState blk)) + enforceSize "AnnTip" 4 + annTipSlotNo <- decode + hash <- decodeHash + annTipBlockNo <- decode + isEBB <- decodeInfo + return AnnTip{annTipInfo = TipInfoIsEBB hash isEBB, ..} + where + decodeInfo :: forall s. Decoder s IsEBB + decodeInfo = decode + +encodeHeaderState :: + (ChainDepState (BlockProtocol blk) -> Encoding) -> + (AnnTip blk -> Encoding) -> + (HeaderState blk -> Encoding) +encodeHeaderState + encodeChainDepState + encodeAnnTip' + HeaderState{..} = + mconcat + [ encodeListLen 2 + , Util.CBOR.encodeWithOrigin encodeAnnTip' headerStateTip + , encodeChainDepState headerStateChainDep + ] + +decodeHeaderState :: + (forall s. Decoder s (ChainDepState (BlockProtocol blk))) -> + (forall s. Decoder s (AnnTip blk)) -> + (forall s. Decoder s (HeaderState blk)) decodeHeaderState decodeChainDepState decodeAnnTip' = do - enforceSize "HeaderState" 2 - headerStateTip <- Util.CBOR.decodeWithOrigin decodeAnnTip' - headerStateChainDep <- decodeChainDepState - return HeaderState {..} + enforceSize "HeaderState" 2 + headerStateTip <- Util.CBOR.decodeWithOrigin decodeAnnTip' + headerStateChainDep <- decodeChainDepState + return HeaderState{..} diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Abstract.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Abstract.hs index 93db44a429..df48439963 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Abstract.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Abstract.hs @@ -12,15 +12,17 @@ -- This module defines how to apply blocks to a ledger state, and re-exports -- (from "Ouroboros.Consensus.Ledger.Basics") how to tick ledger states. These -- are the two main operations we can do with a 'LedgerState'. 
-module Ouroboros.Consensus.Ledger.Abstract ( - -- * Type-level validation marker +module Ouroboros.Consensus.Ledger.Abstract + ( -- * Type-level validation marker Validated + -- * Apply block , ApplyBlock (..) , ComputeLedgerEvents (..) , UpdateLedger , defaultApplyBlockLedgerResult , defaultReapplyBlockLedgerResult + -- * Derived , applyLedgerBlock , foldLedger @@ -30,23 +32,25 @@ module Ouroboros.Consensus.Ledger.Abstract ( , tickThenApplyLedgerResult , tickThenReapply , tickThenReapplyLedgerResult + -- ** Short-hand , ledgerTipHash , ledgerTipPoint , ledgerTipSlot + -- * Re-exports , module Ouroboros.Consensus.Ledger.Basics ) where -import Control.Monad.Except -import qualified Control.State.Transition.Extended as STS -import Data.Kind (Type) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util +import Control.Monad.Except +import Control.State.Transition.Extended qualified as STS +import Data.Kind (Type) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util -- | " Validated " transaction or block -- @@ -78,14 +82,16 @@ data family Validated x :: Type Apply block to ledger state -------------------------------------------------------------------------------} -class ( IsLedger l - , HeaderHash l ~ HeaderHash blk - , HasHeader blk - , HasHeader (Header blk) - , HasLedgerTables l - , HasLedgerTables (Ticked l) - ) => ApplyBlock l blk where - +class + ( IsLedger l + , HeaderHash l ~ HeaderHash blk + , HasHeader blk + , HasHeader (Header blk) + , HasLedgerTables l + , HasLedgerTables (Ticked l) + ) => + ApplyBlock l blk + where -- | Apply a block to the ledger state. 
-- -- This is passed the ledger state ticked to the slot of the given block, so @@ -94,13 +100,13 @@ class ( IsLedger l -- Users of this function can set any validation level allowed by the -- @small-steps@ package. See "Control.State.Transition.Extended". applyBlockLedgerResultWithValidation :: - HasCallStack - => STS.ValidationPolicy - -> ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> Ticked l ValuesMK - -> Except (LedgerErr l) (LedgerResult l (l DiffMK)) + HasCallStack => + STS.ValidationPolicy -> + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + Ticked l ValuesMK -> + Except (LedgerErr l) (LedgerResult l (l DiffMK)) -- | Apply a block to the ledger state. -- @@ -109,12 +115,12 @@ class ( IsLedger l -- -- This function will use 'ValidateAll' policy for calling the ledger rules. applyBlockLedgerResult :: - HasCallStack - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> Ticked l ValuesMK - -> Except (LedgerErr l) (LedgerResult l (l DiffMK)) + HasCallStack => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + Ticked l ValuesMK -> + Except (LedgerErr l) (LedgerResult l (l DiffMK)) -- | Re-apply a block to the very same ledger state it was applied in before. -- @@ -127,38 +133,38 @@ class ( IsLedger l -- validation checks. Thus this function will call the ledger rules with -- 'ValidateNone' policy. reapplyBlockLedgerResult :: - HasCallStack - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> Ticked l ValuesMK - -> LedgerResult l (l DiffMK) + HasCallStack => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + Ticked l ValuesMK -> + LedgerResult l (l DiffMK) -- | Given a block, get the key-sets that we need to apply it to a ledger -- state. 
getBlockKeySets :: blk -> LedgerTables l KeysMK defaultApplyBlockLedgerResult :: - (HasCallStack, ApplyBlock l blk) - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> Ticked l ValuesMK - -> Except (LedgerErr l) (LedgerResult l (l DiffMK)) + (HasCallStack, ApplyBlock l blk) => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + Ticked l ValuesMK -> + Except (LedgerErr l) (LedgerResult l (l DiffMK)) defaultApplyBlockLedgerResult = applyBlockLedgerResultWithValidation STS.ValidateAll defaultReapplyBlockLedgerResult :: - (HasCallStack, ApplyBlock l blk) - => (LedgerErr l -> LedgerResult l (l DiffMK)) - -> ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> Ticked l ValuesMK - -> LedgerResult l (l DiffMK) + (HasCallStack, ApplyBlock l blk) => + (LedgerErr l -> LedgerResult l (l DiffMK)) -> + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + Ticked l ValuesMK -> + LedgerResult l (l DiffMK) defaultReapplyBlockLedgerResult throwReapplyError = - (either throwReapplyError id . runExcept) - ...: applyBlockLedgerResultWithValidation STS.ValidateNone + (either throwReapplyError id . runExcept) + ...: applyBlockLedgerResultWithValidation STS.ValidateNone -- | Interaction with the ledger layer class ApplyBlock (LedgerState blk) blk => UpdateLedger blk @@ -169,102 +175,117 @@ class ApplyBlock (LedgerState blk) blk => UpdateLedger blk -- | 'lrResult' after 'applyBlockLedgerResult' applyLedgerBlock :: - forall l blk. - (ApplyBlock l blk, HasCallStack) - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> Ticked l ValuesMK - -> Except (LedgerErr l) (l DiffMK) + forall l blk. + (ApplyBlock l blk, HasCallStack) => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + Ticked l ValuesMK -> + Except (LedgerErr l) (l DiffMK) applyLedgerBlock = fmap lrResult ...: applyBlockLedgerResult -- | 'lrResult' after 'reapplyBlockLedgerResult' reapplyLedgerBlock :: - forall l blk. 
- (ApplyBlock l blk, HasCallStack) - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> Ticked l ValuesMK - -> l DiffMK + forall l blk. + (ApplyBlock l blk, HasCallStack) => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + Ticked l ValuesMK -> + l DiffMK reapplyLedgerBlock = lrResult ...: reapplyBlockLedgerResult tickThenApplyLedgerResult :: - ApplyBlock l blk - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> l ValuesMK - -> Except (LedgerErr l) (LedgerResult l (l DiffMK)) + ApplyBlock l blk => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + l ValuesMK -> + Except (LedgerErr l) (LedgerResult l (l DiffMK)) tickThenApplyLedgerResult evs cfg blk l = do let lrTick = applyChainTickLedgerResult evs cfg (blockSlot blk) (forgetLedgerTables l) - lrBlock <- applyBlockLedgerResult evs cfg blk (applyDiffForKeys l (getBlockKeySets blk) (lrResult lrTick)) - pure LedgerResult { - lrEvents = lrEvents lrTick <> lrEvents lrBlock - , lrResult = prependDiffs (lrResult lrTick) (lrResult lrBlock) - } + lrBlock <- + applyBlockLedgerResult + evs + cfg + blk + (applyDiffForKeys l (getBlockKeySets blk) (lrResult lrTick)) + pure + LedgerResult + { lrEvents = lrEvents lrTick <> lrEvents lrBlock + , lrResult = prependDiffs (lrResult lrTick) (lrResult lrBlock) + } tickThenReapplyLedgerResult :: - forall l blk. - ApplyBlock l blk - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> l ValuesMK - -> LedgerResult l (l DiffMK) + forall l blk. 
+ ApplyBlock l blk => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + l ValuesMK -> + LedgerResult l (l DiffMK) tickThenReapplyLedgerResult evs cfg blk l = - let lrTick = applyChainTickLedgerResult evs cfg (blockSlot blk) (forgetLedgerTables l) - lrBlock = reapplyBlockLedgerResult evs cfg blk (applyDiffForKeys l (getBlockKeySets blk) (lrResult lrTick)) - in LedgerResult { - lrEvents = lrEvents lrTick <> lrEvents lrBlock - , lrResult = prependDiffs (lrResult lrTick) (lrResult lrBlock) - } + let lrTick = applyChainTickLedgerResult evs cfg (blockSlot blk) (forgetLedgerTables l) + lrBlock = + reapplyBlockLedgerResult + evs + cfg + blk + (applyDiffForKeys l (getBlockKeySets blk) (lrResult lrTick)) + in LedgerResult + { lrEvents = lrEvents lrTick <> lrEvents lrBlock + , lrResult = prependDiffs (lrResult lrTick) (lrResult lrBlock) + } tickThenApply :: - forall l blk. - ApplyBlock l blk - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> l ValuesMK - -> Except (LedgerErr l) (l DiffMK) + forall l blk. + ApplyBlock l blk => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + l ValuesMK -> + Except (LedgerErr l) (l DiffMK) tickThenApply = fmap lrResult ...: tickThenApplyLedgerResult tickThenReapply :: - forall l blk. - ApplyBlock l blk - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> l ValuesMK - -> l DiffMK + forall l blk. 
+ ApplyBlock l blk => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + l ValuesMK -> + l DiffMK tickThenReapply = lrResult ...: tickThenReapplyLedgerResult foldLedger :: - ApplyBlock l blk - => ComputeLedgerEvents -> LedgerCfg l -> [blk] -> l ValuesMK -> Except (LedgerErr l) (l ValuesMK) -foldLedger evs cfg = repeatedlyM (\blk state -> applyDiffForKeys state (getBlockKeySets blk) <$> tickThenApply evs cfg blk state) + ApplyBlock l blk => + ComputeLedgerEvents -> LedgerCfg l -> [blk] -> l ValuesMK -> Except (LedgerErr l) (l ValuesMK) +foldLedger evs cfg = + repeatedlyM + (\blk state -> applyDiffForKeys state (getBlockKeySets blk) <$> tickThenApply evs cfg blk state) refoldLedger :: - ApplyBlock l blk - => ComputeLedgerEvents -> LedgerCfg l -> [blk] -> l ValuesMK -> l ValuesMK -refoldLedger evs cfg = repeatedly (\blk state -> applyDiffForKeys state (getBlockKeySets blk) $ tickThenReapply evs cfg blk state) + ApplyBlock l blk => + ComputeLedgerEvents -> LedgerCfg l -> [blk] -> l ValuesMK -> l ValuesMK +refoldLedger evs cfg = + repeatedly + (\blk state -> applyDiffForKeys state (getBlockKeySets blk) $ tickThenReapply evs cfg blk state) {------------------------------------------------------------------------------- Short-hand -------------------------------------------------------------------------------} ledgerTipPoint :: - UpdateLedger blk - => LedgerState blk mk -> Point blk + UpdateLedger blk => + LedgerState blk mk -> Point blk ledgerTipPoint = castPoint . getTip ledgerTipHash :: - UpdateLedger blk - => LedgerState blk mk -> ChainHash blk + UpdateLedger blk => + LedgerState blk mk -> ChainHash blk ledgerTipHash = pointHash . ledgerTipPoint ledgerTipSlot :: - UpdateLedger blk - => LedgerState blk mk -> WithOrigin SlotNo + UpdateLedger blk => + LedgerState blk mk -> WithOrigin SlotNo ledgerTipSlot = pointSlot . 
ledgerTipPoint diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Basics.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Basics.hs index b78393dd96..2313a0d086 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Basics.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Basics.hs @@ -14,40 +14,45 @@ -- -- Normally this is imported from "Ouroboros.Consensus.Ledger.Abstract". We -- pull this out to avoid circular module dependencies. -module Ouroboros.Consensus.Ledger.Basics ( - -- * The 'LedgerState' definition +module Ouroboros.Consensus.Ledger.Basics + ( -- * The 'LedgerState' definition LedgerCfg , LedgerState , TickedLedgerState + -- * Definition of a ledger independent of a choice of block , ComputeLedgerEvents (..) , IsLedger (..) , applyChainTick + -- * Ledger Events , LedgerResult (..) , VoidLedgerEvent , castLedgerResult , embedLedgerResult , pureLedgerResult + -- * GetTip , GetTip (..) , GetTipSTM (..) 
, getTipHash , getTipM , getTipSlot + -- * Associated types by block type , LedgerConfig , LedgerError + -- * Re-exports , module Ouroboros.Consensus.Ledger.Tables ) where -import Data.Kind (Constraint, Type) -import GHC.Generics -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Ledger.Tables -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util.IOLike +import Data.Kind (Constraint, Type) +import GHC.Generics +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- Tip @@ -94,22 +99,23 @@ data LedgerResult l a = LedgerResult deriving (Foldable, Functor, Traversable) castLedgerResult :: - (AuxLedgerEvent l ~ AuxLedgerEvent l') - => LedgerResult l a - -> LedgerResult l' a + AuxLedgerEvent l ~ AuxLedgerEvent l' => + LedgerResult l a -> + LedgerResult l' a castLedgerResult (LedgerResult x0 x1) = LedgerResult x0 x1 embedLedgerResult :: - (AuxLedgerEvent l -> AuxLedgerEvent l') - -> LedgerResult l a - -> LedgerResult l' a + (AuxLedgerEvent l -> AuxLedgerEvent l') -> + LedgerResult l a -> + LedgerResult l' a embedLedgerResult inj lr = lr{lrEvents = inj `map` lrEvents lr} pureLedgerResult :: a -> LedgerResult l a -pureLedgerResult a = LedgerResult { - lrEvents = mempty - , lrResult = a - } +pureLedgerResult a = + LedgerResult + { lrEvents = mempty + , lrResult = a + } {------------------------------------------------------------------------------- Definition of a ledger independent of a choice of block @@ -134,35 +140,38 @@ data ComputeLedgerEvents = ComputeLedgerEvents | OmitLedgerEvents deriving (Eq, Show, Generic, NoThunks) type IsLedger :: LedgerStateKind -> Constraint -class ( -- Requirements on the ledger state itself - forall mk. EqMK mk => Eq (l mk) - , forall mk. NoThunksMK mk => NoThunks (l mk) - , forall mk. 
ShowMK mk => Show (l mk) - -- Requirements on 'LedgerCfg' - , NoThunks (LedgerCfg l) - -- Requirements on 'LedgerErr' - , Show (LedgerErr l) - , Eq (LedgerErr l) - , NoThunks (LedgerErr l) - -- Get the tip - -- - -- See comment for 'applyChainTickLedgerResult' about the tip of the - -- ticked ledger. - , GetTip l - , GetTip (Ticked l) - ) => IsLedger l where +class + ( -- Requirements on the ledger state itself + forall mk. EqMK mk => Eq (l mk) + , forall mk. NoThunksMK mk => NoThunks (l mk) + , forall mk. ShowMK mk => Show (l mk) + , -- Requirements on 'LedgerCfg' + NoThunks (LedgerCfg l) + , -- Requirements on 'LedgerErr' + Show (LedgerErr l) + , Eq (LedgerErr l) + , NoThunks (LedgerErr l) + , -- Get the tip + -- + -- See comment for 'applyChainTickLedgerResult' about the tip of the + -- ticked ledger. + GetTip l + , GetTip (Ticked l) + ) => + IsLedger l + where -- | Errors that can arise when updating the ledger -- -- This is defined here rather than in 'ApplyBlock', since the /type/ of -- these errors does not depend on the type of the block. - type family LedgerErr l :: Type + type LedgerErr l :: Type -- | Event emitted by the ledger -- -- TODO we call this 'AuxLedgerEvent' to differentiate from 'LedgerEvent' in -- 'InspectLedger'. When that module is rewritten to make use of ledger -- derived events, we may rename this type. 
- type family AuxLedgerEvent l :: Type + type AuxLedgerEvent l :: Type -- | Apply "slot based" state transformations -- @@ -195,20 +204,20 @@ class ( -- Requirements on the ledger state itself -- -- prop> ledgerTipPoint (applyChainTick cfg slot st) == ledgerTipPoint st applyChainTickLedgerResult :: - ComputeLedgerEvents - -> LedgerCfg l - -> SlotNo - -> l EmptyMK - -> LedgerResult l (Ticked l DiffMK) + ComputeLedgerEvents -> + LedgerCfg l -> + SlotNo -> + l EmptyMK -> + LedgerResult l (Ticked l DiffMK) -- | 'lrResult' after 'applyChainTickLedgerResult' applyChainTick :: - IsLedger l - => ComputeLedgerEvents - -> LedgerCfg l - -> SlotNo - -> l EmptyMK - -> Ticked l DiffMK + IsLedger l => + ComputeLedgerEvents -> + LedgerCfg l -> + SlotNo -> + l EmptyMK -> + Ticked l DiffMK applyChainTick = lrResult ...: applyChainTickLedgerResult {------------------------------------------------------------------------------- @@ -233,11 +242,12 @@ applyChainTick = lrResult ...: applyChainTickLedgerResult -- 'Ouroboros.Consensus.Ledger.Abstract.ApplyBlock'). 
type LedgerState :: Type -> LedgerStateKind data family LedgerState blk mk + type TickedLedgerState blk = Ticked (LedgerState blk) type instance HeaderHash (LedgerState blk) = HeaderHash blk instance StandardHash blk => StandardHash (LedgerState blk) -type LedgerConfig blk = LedgerCfg (LedgerState blk) -type LedgerError blk = LedgerErr (LedgerState blk) +type LedgerConfig blk = LedgerCfg (LedgerState blk) +type LedgerError blk = LedgerErr (LedgerState blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/CommonProtocolParams.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/CommonProtocolParams.hs index 3d5c616444..d51ab4b601 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/CommonProtocolParams.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/CommonProtocolParams.hs @@ -1,11 +1,10 @@ module Ouroboros.Consensus.Ledger.CommonProtocolParams (CommonProtocolParams (..)) where -import Data.Word (Word32) -import Ouroboros.Consensus.Ledger.Abstract +import Data.Word (Word32) +import Ouroboros.Consensus.Ledger.Abstract -- | Ask the ledger for common protocol parameters. class UpdateLedger blk => CommonProtocolParams blk where - -- | The maximum header size in bytes according to the currently adopted -- protocol parameters of the ledger state. maxHeaderSize :: LedgerState blk mk -> Word32 diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Dual.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Dual.hs index 918603eb54..ed5894c376 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Dual.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Dual.hs @@ -20,18 +20,21 @@ {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE UndecidableSuperClasses #-} -module Ouroboros.Consensus.Ledger.Dual ( - Bridge (..) 
+module Ouroboros.Consensus.Ledger.Dual + ( Bridge (..) + -- * Pair types , DualBlock (..) , DualGenTxErr (..) , DualHeader , DualLedgerConfig (..) , DualLedgerError (..) + -- * Lifted functions , ctxtDualMain , dualExtValidationErrorMain , dualTopLevelConfigMain + -- * Type class family instances , BlockConfig (..) , CodecConfig (..) @@ -44,6 +47,7 @@ module Ouroboros.Consensus.Ledger.Dual ( , Ticked (..) , TxId (..) , Validated (..) + -- * Serialisation , decodeDualBlock , decodeDualGenTx @@ -61,40 +65,40 @@ module Ouroboros.Consensus.Ledger.Dual ( , encodeDualLedgerState ) where -import Cardano.Binary (enforceSize) -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding, encodeListLen) -import Codec.Serialise -import Control.Arrow ((+++)) -import Control.Monad.Except -import qualified Data.ByteString.Lazy as Lazy -import qualified Data.ByteString.Short as Short -import Data.Functor ((<&>)) -import Data.Kind (Type) -import Data.Typeable -import GHC.Generics (Generic) -import GHC.Stack -import NoThunks.Class (AllowThunk (..), NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.HardFork.Combinator.PartialConfig -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.CommonProtocolParams -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsPeerSelection -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util (ShowProxy (..)) -import Ouroboros.Consensus.Util.Condense -import 
Ouroboros.Consensus.Util.IndexedMemPack +import Cardano.Binary (enforceSize) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding, encodeListLen) +import Codec.Serialise +import Control.Arrow ((+++)) +import Control.Monad.Except +import Data.ByteString.Lazy qualified as Lazy +import Data.ByteString.Short qualified as Short +import Data.Functor ((<&>)) +import Data.Kind (Type) +import Data.Typeable +import GHC.Generics (Generic) +import GHC.Stack +import NoThunks.Class (AllowThunk (..), NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HardFork.Combinator.PartialConfig +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsPeerSelection +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util (ShowProxy (..)) +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.IndexedMemPack {------------------------------------------------------------------------------- Block @@ -117,17 +121,18 @@ import Ouroboros.Consensus.Util.IndexedMemPack -- -- NOTE: The dual ledger is used for testing purposes only; we do not do any -- meaningful 'NoThunks' checks here. 
-data DualBlock m a = DualBlock { - dualBlockMain :: m - , dualBlockAux :: Maybe a - , dualBlockBridge :: BridgeBlock m a - } +data DualBlock m a = DualBlock + { dualBlockMain :: m + , dualBlockAux :: Maybe a + , dualBlockBridge :: BridgeBlock m a + } deriving instance (Show m, Show a, Show (BridgeBlock m a)) => Show (DualBlock m a) -deriving instance (Eq m, Eq a, Eq (BridgeBlock m a)) => Eq (DualBlock m a) +deriving instance (Eq m, Eq a, Eq (BridgeBlock m a)) => Eq (DualBlock m a) -instance (Typeable m, Typeable a) - => ShowProxy (DualBlock m a) where +instance + (Typeable m, Typeable a) => + ShowProxy (DualBlock m a) instance Condense m => Condense (DualBlock m a) where condense = condense . dualBlockMain @@ -136,22 +141,22 @@ type instance HeaderHash (DualBlock m a) = HeaderHash m instance StandardHash m => StandardHash (DualBlock m a) instance ConvertRawHash m => ConvertRawHash (DualBlock m a) where - toShortRawHash _ = toShortRawHash (Proxy @m) + toShortRawHash _ = toShortRawHash (Proxy @m) fromShortRawHash _ = fromShortRawHash (Proxy @m) - hashSize _ = hashSize (Proxy @m) + hashSize _ = hashSize (Proxy @m) {------------------------------------------------------------------------------- Header -------------------------------------------------------------------------------} -newtype instance Header (DualBlock m a) = DualHeader { dualHeaderMain :: Header m } +newtype instance Header (DualBlock m a) = DualHeader {dualHeaderMain :: Header m} deriving NoThunks via AllowThunk (Header (DualBlock m a)) instance Bridge m a => GetHeader (DualBlock m a) where getHeader = DualHeader . getHeader . dualBlockMain blockMatchesHeader hdr = - blockMatchesHeader (dualHeaderMain hdr) . dualBlockMain + blockMatchesHeader (dualHeaderMain hdr) . dualBlockMain -- We can look at the concrete header to see if this is an EBB headerIsEBB = headerIsEBB . 
dualHeaderMain @@ -160,21 +165,22 @@ type DualHeader m a = Header (DualBlock m a) deriving instance Show (Header m) => Show (DualHeader m a) -instance (Typeable m, Typeable a) - => ShowProxy (DualHeader m a) where +instance + (Typeable m, Typeable a) => + ShowProxy (DualHeader m a) {------------------------------------------------------------------------------- Config -------------------------------------------------------------------------------} -data instance BlockConfig (DualBlock m a) = DualBlockConfig { - dualBlockConfigMain :: BlockConfig m - , dualBlockConfigAux :: BlockConfig a - } +data instance BlockConfig (DualBlock m a) = DualBlockConfig + { dualBlockConfigMain :: BlockConfig m + , dualBlockConfigAux :: BlockConfig a + } deriving NoThunks via AllowThunk (BlockConfig (DualBlock m a)) instance ConfigSupportsNode m => ConfigSupportsNode (DualBlock m a) where - getSystemStart = getSystemStart . dualBlockConfigMain + getSystemStart = getSystemStart . dualBlockConfigMain getNetworkMagic = getNetworkMagic . 
dualBlockConfigMain {------------------------------------------------------------------------------- @@ -183,12 +189,13 @@ instance ConfigSupportsNode m => ConfigSupportsNode (DualBlock m a) where -- | This is only used for block production dualTopLevelConfigMain :: TopLevelConfig (DualBlock m a) -> TopLevelConfig m -dualTopLevelConfigMain TopLevelConfig{..} = TopLevelConfig{ - topLevelConfigProtocol = topLevelConfigProtocol - , topLevelConfigLedger = dualLedgerConfigMain topLevelConfigLedger - , topLevelConfigBlock = dualBlockConfigMain topLevelConfigBlock - , topLevelConfigCodec = dualCodecConfigMain topLevelConfigCodec - , topLevelConfigStorage = dualStorageConfigMain topLevelConfigStorage +dualTopLevelConfigMain TopLevelConfig{..} = + TopLevelConfig + { topLevelConfigProtocol = topLevelConfigProtocol + , topLevelConfigLedger = dualLedgerConfigMain topLevelConfigLedger + , topLevelConfigBlock = dualBlockConfigMain topLevelConfigBlock + , topLevelConfigCodec = dualCodecConfigMain topLevelConfigCodec + , topLevelConfigStorage = dualStorageConfigMain topLevelConfigStorage , topLevelConfigCheckpoints = castCheckpointsMap topLevelConfigCheckpoints } @@ -196,70 +203,75 @@ dualTopLevelConfigMain TopLevelConfig{..} = TopLevelConfig{ CodecConfig -------------------------------------------------------------------------------} -data instance CodecConfig (DualBlock m a) = DualCodecConfig { - dualCodecConfigMain :: !(CodecConfig m) - , dualCodecConfigAux :: !(CodecConfig a) - } - deriving (Generic) +data instance CodecConfig (DualBlock m a) = DualCodecConfig + { dualCodecConfigMain :: !(CodecConfig m) + , dualCodecConfigAux :: !(CodecConfig a) + } + deriving Generic -instance ( NoThunks (CodecConfig m) - , NoThunks (CodecConfig a) - ) => NoThunks (CodecConfig (DualBlock m a)) - -- Use generic instance +instance + ( NoThunks (CodecConfig m) + , NoThunks (CodecConfig a) + ) => + NoThunks (CodecConfig (DualBlock m a)) + +-- Use generic instance 
{------------------------------------------------------------------------------- StorageConfig -------------------------------------------------------------------------------} -data instance StorageConfig (DualBlock m a) = DualStorageConfig { - dualStorageConfigMain :: !(StorageConfig m) - , dualStorageConfigAux :: !(StorageConfig a) - } - deriving (Generic) +data instance StorageConfig (DualBlock m a) = DualStorageConfig + { dualStorageConfigMain :: !(StorageConfig m) + , dualStorageConfigAux :: !(StorageConfig a) + } + deriving Generic -instance ( NoThunks (StorageConfig m) - , NoThunks (StorageConfig a) - ) => NoThunks (StorageConfig (DualBlock m a)) - -- Use generic instance +instance + ( NoThunks (StorageConfig m) + , NoThunks (StorageConfig a) + ) => + NoThunks (StorageConfig (DualBlock m a)) + +-- Use generic instance {------------------------------------------------------------------------------- Bridge two ledgers -------------------------------------------------------------------------------} -- | Bridge the two ledgers -class ( - -- Requirements on the main block - HasHeader m - , GetHeader m - , HasHeader (Header m) - , LedgerSupportsProtocol m - , HasHardForkHistory m - , LedgerSupportsMempool m - , CommonProtocolParams m - , HasTxId (GenTx m) - , Show (ApplyTxErr m) - , Show (LedgerConfig m) - - -- Requirements on the auxiliary block - -- No 'LedgerSupportsProtocol' for @a@! 
- , Typeable a - , UpdateLedger a - , LedgerSupportsMempool a - , Show (ApplyTxErr a) - , Show (LedgerConfig a) - , NoThunks (LedgerConfig a) - , NoThunks (CodecConfig a) - , NoThunks (StorageConfig a) - - -- Requirements on the various bridges - , Show (BridgeLedger m a) - , Eq (BridgeLedger m a) - , Serialise (BridgeLedger m a) - , Serialise (BridgeBlock m a) - , Serialise (BridgeTx m a) - , Show (BridgeTx m a) - ) => Bridge m a where - +class + ( -- Requirements on the main block + HasHeader m + , GetHeader m + , HasHeader (Header m) + , LedgerSupportsProtocol m + , HasHardForkHistory m + , LedgerSupportsMempool m + , CommonProtocolParams m + , HasTxId (GenTx m) + , Show (ApplyTxErr m) + , Show (LedgerConfig m) + , -- Requirements on the auxiliary block + -- No 'LedgerSupportsProtocol' for @a@! + Typeable a + , UpdateLedger a + , LedgerSupportsMempool a + , Show (ApplyTxErr a) + , Show (LedgerConfig a) + , NoThunks (LedgerConfig a) + , NoThunks (CodecConfig a) + , NoThunks (StorageConfig a) + , -- Requirements on the various bridges + Show (BridgeLedger m a) + , Eq (BridgeLedger m a) + , Serialise (BridgeLedger m a) + , Serialise (BridgeBlock m a) + , Serialise (BridgeTx m a) + , Show (BridgeTx m a) + ) => + Bridge m a + where -- | Additional information relating both ledgers type BridgeLedger m a :: Type @@ -269,11 +281,15 @@ class ( -- | Information required to update the bridge when applying a transaction type BridgeTx m a :: Type - updateBridgeWithBlock :: DualBlock m a - -> BridgeLedger m a -> BridgeLedger m a + updateBridgeWithBlock :: + DualBlock m a -> + BridgeLedger m a -> + BridgeLedger m a - updateBridgeWithTx :: Validated (GenTx (DualBlock m a)) - -> BridgeLedger m a -> BridgeLedger m a + updateBridgeWithTx :: + Validated (GenTx (DualBlock m a)) -> + BridgeLedger m a -> + BridgeLedger m a {------------------------------------------------------------------------------- HasHeader instance @@ -296,7 +312,7 @@ type instance BlockProtocol (DualBlock m a) 
= BlockProtocol m instance Bridge m a => BlockSupportsProtocol (DualBlock m a) where validateView cfg = validateView (dualBlockConfigMain cfg) . dualHeaderMain - selectView cfg = selectView (dualBlockConfigMain cfg) . dualHeaderMain + selectView cfg = selectView (dualBlockConfigMain cfg) . dualHeaderMain projectChainOrderConfig = projectChainOrderConfig . dualBlockConfigMain @@ -313,27 +329,31 @@ instance Bridge m a => BlockSupportsProtocol (DualBlock m a) where -- (see 'agreeOnError'), rather than a regular chain failure; if this happens, -- it indicates a bug, and the node should fail (rather than just, for example, -- reject a block). -data DualLedgerError m a = DualLedgerError { - dualLedgerErrorMain :: LedgerError m - , dualLedgerErrorAux :: LedgerError a - } +data DualLedgerError m a = DualLedgerError + { dualLedgerErrorMain :: LedgerError m + , dualLedgerErrorAux :: LedgerError a + } deriving NoThunks via AllowThunk (DualLedgerError m a) -deriving instance ( Show (LedgerError m) - , Show (LedgerError a) - ) => Show (DualLedgerError m a) -deriving instance ( Eq (LedgerError m) - , Eq (LedgerError a) - ) => Eq (DualLedgerError m a) +deriving instance + ( Show (LedgerError m) + , Show (LedgerError a) + ) => + Show (DualLedgerError m a) +deriving instance + ( Eq (LedgerError m) + , Eq (LedgerError a) + ) => + Eq (DualLedgerError m a) {------------------------------------------------------------------------------- Update the ledger -------------------------------------------------------------------------------} -data DualLedgerConfig m a = DualLedgerConfig { - dualLedgerConfigMain :: LedgerConfig m - , dualLedgerConfigAux :: LedgerConfig a - } +data DualLedgerConfig m a = DualLedgerConfig + { dualLedgerConfigMain :: LedgerConfig m + , dualLedgerConfigAux :: LedgerConfig a + } deriving NoThunks via AllowThunk (DualLedgerConfig m a) deriving instance (Show (LedgerConfig m), Show (LedgerConfig a)) => Show (DualLedgerConfig m a) @@ -351,24 +371,23 @@ instance Bridge 
m a => GetTip (Ticked (LedgerState (DualBlock m a))) where -- We only have tables on the main ledger state to be able to compare it to a -- reference spec implementation which doesn't use tables. The result should be -- the same. -data instance Ticked (LedgerState (DualBlock m a)) mk = TickedDualLedgerState { - tickedDualLedgerStateMain :: Ticked (LedgerState m) mk - , tickedDualLedgerStateAux :: Ticked (LedgerState a) ValuesMK - , tickedDualLedgerStateBridge :: BridgeLedger m a - - -- | The original, unticked ledger for the auxiliary block - -- - -- The reason we keep this in addition to the ticked ledger state is that - -- not every main block is paired with an auxiliary block. When there is - -- no auxiliary block, the auxiliary ledger state remains unchanged. - , tickedDualLedgerStateAuxOrig :: LedgerState a ValuesMK - } +data instance Ticked (LedgerState (DualBlock m a)) mk = TickedDualLedgerState + { tickedDualLedgerStateMain :: Ticked (LedgerState m) mk + , tickedDualLedgerStateAux :: Ticked (LedgerState a) ValuesMK + , tickedDualLedgerStateBridge :: BridgeLedger m a + , tickedDualLedgerStateAuxOrig :: LedgerState a ValuesMK + -- ^ The original, unticked ledger for the auxiliary block + -- + -- The reason we keep this in addition to the ticked ledger state is that + -- not every main block is paired with an auxiliary block. When there is + -- no auxiliary block, the auxiliary ledger state remains unchanged. + } deriving NoThunks via AllowThunk (Ticked (LedgerState (DualBlock m a)) mk) instance Bridge m a => IsLedger (LedgerState (DualBlock m a)) where - type LedgerErr (LedgerState (DualBlock m a)) = DualLedgerError m a + type LedgerErr (LedgerState (DualBlock m a)) = DualLedgerError m a - -- | The dual ledger events are exactly those of the main ledger; it ignores + -- \| The dual ledger events are exactly those of the main ledger; it ignores -- any possible auxiliary ledger events. -- -- NOTE: This may change. 
It's unclear whether we want the two ledgers to emit @@ -377,116 +396,145 @@ instance Bridge m a => IsLedger (LedgerState (DualBlock m a)) where -- any events. So we make this easy choice for for now. type AuxLedgerEvent (LedgerState (DualBlock m a)) = AuxLedgerEvent (LedgerState m) - applyChainTickLedgerResult evs - DualLedgerConfig{..} - slot - DualLedgerState{..} = - castLedgerResult ledgerResult <&> \main -> TickedDualLedgerState { - tickedDualLedgerStateMain = main - , tickedDualLedgerStateAux = applyDiffs dualLedgerStateAux dualLedger - , tickedDualLedgerStateAuxOrig = dualLedgerStateAux - , tickedDualLedgerStateBridge = dualLedgerStateBridge - } - where - dualLedger = applyChainTick evs - dualLedgerConfigAux - slot - (forgetLedgerTables dualLedgerStateAux) - ledgerResult = applyChainTickLedgerResult evs - dualLedgerConfigMain - slot - dualLedgerStateMain + applyChainTickLedgerResult + evs + DualLedgerConfig{..} + slot + DualLedgerState{..} = + castLedgerResult ledgerResult <&> \main -> + TickedDualLedgerState + { tickedDualLedgerStateMain = main + , tickedDualLedgerStateAux = applyDiffs dualLedgerStateAux dualLedger + , tickedDualLedgerStateAuxOrig = dualLedgerStateAux + , tickedDualLedgerStateBridge = dualLedgerStateBridge + } + where + dualLedger = + applyChainTick + evs + dualLedgerConfigAux + slot + (forgetLedgerTables dualLedgerStateAux) + ledgerResult = + applyChainTickLedgerResult + evs + dualLedgerConfigMain + slot + dualLedgerStateMain applyHelper :: - Bridge m a - => ( ComputeLedgerEvents - -> LedgerCfg (LedgerState m) - -> m - -> Ticked (LedgerState m) ValuesMK - -> Except (LedgerErr (LedgerState m)) (LedgerResult (LedgerState m) (LedgerState m DiffMK)) - ) - -> ComputeLedgerEvents - -> DualLedgerConfig m a - -> DualBlock m a - -> Ticked (LedgerState (DualBlock m a)) ValuesMK - -> Except (DualLedgerError m a) (LedgerResult (LedgerState (DualBlock m a)) (LedgerState (DualBlock m a) DiffMK)) + Bridge m a => + ( ComputeLedgerEvents -> + LedgerCfg 
(LedgerState m) -> + m -> + Ticked (LedgerState m) ValuesMK -> + Except (LedgerErr (LedgerState m)) (LedgerResult (LedgerState m) (LedgerState m DiffMK)) + ) -> + ComputeLedgerEvents -> + DualLedgerConfig m a -> + DualBlock m a -> + Ticked (LedgerState (DualBlock m a)) ValuesMK -> + Except + (DualLedgerError m a) + (LedgerResult (LedgerState (DualBlock m a)) (LedgerState (DualBlock m a) DiffMK)) applyHelper f opts cfg block@DualBlock{..} TickedDualLedgerState{..} = do - (ledgerResult, aux') <- - agreeOnError DualLedgerError ( - f opts - (dualLedgerConfigMain cfg) - dualBlockMain - tickedDualLedgerStateMain - , applyMaybeBlock opts - (dualLedgerConfigAux cfg) - dualBlockAux - tickedDualLedgerStateAux - (forgetLedgerTables tickedDualLedgerStateAuxOrig) - ) - return $ castLedgerResult ledgerResult <&> \main' -> DualLedgerState { - dualLedgerStateMain = main' - , dualLedgerStateAux = applyDiffs tickedDualLedgerStateAux aux' - , dualLedgerStateBridge = updateBridgeWithBlock - block - tickedDualLedgerStateBridge + (ledgerResult, aux') <- + agreeOnError + DualLedgerError + ( f + opts + (dualLedgerConfigMain cfg) + dualBlockMain + tickedDualLedgerStateMain + , applyMaybeBlock + opts + (dualLedgerConfigAux cfg) + dualBlockAux + tickedDualLedgerStateAux + (forgetLedgerTables tickedDualLedgerStateAuxOrig) + ) + return $ + castLedgerResult ledgerResult <&> \main' -> + DualLedgerState + { dualLedgerStateMain = main' + , dualLedgerStateAux = applyDiffs tickedDualLedgerStateAux aux' + , dualLedgerStateBridge = + updateBridgeWithBlock + block + tickedDualLedgerStateBridge } instance Bridge m a => ApplyBlock (LedgerState (DualBlock m a)) (DualBlock m a) where - applyBlockLedgerResultWithValidation doValidate = applyHelper (applyBlockLedgerResultWithValidation doValidate) applyBlockLedgerResult = applyHelper applyBlockLedgerResult - reapplyBlockLedgerResult evs cfg - block@DualBlock{..} - TickedDualLedgerState{..} = - castLedgerResult ledgerResult <&> \main' -> DualLedgerState { - 
dualLedgerStateMain = main' - , dualLedgerStateAux = applyDiffs tickedDualLedgerStateAux auxLedger - , dualLedgerStateBridge = updateBridgeWithBlock - block - tickedDualLedgerStateBridge - } - where - auxLedger = reapplyMaybeBlock evs - (dualLedgerConfigAux cfg) - dualBlockAux - tickedDualLedgerStateAux - (forgetLedgerTables tickedDualLedgerStateAuxOrig) - ledgerResult = reapplyBlockLedgerResult evs - (dualLedgerConfigMain cfg) - dualBlockMain - tickedDualLedgerStateMain - - getBlockKeySets = castLedgerTables - . getBlockKeySets @(LedgerState m) - . dualBlockMain - -data instance LedgerState (DualBlock m a) mk = DualLedgerState { - dualLedgerStateMain :: LedgerState m mk - , dualLedgerStateAux :: LedgerState a ValuesMK - , dualLedgerStateBridge :: BridgeLedger m a - } + reapplyBlockLedgerResult + evs + cfg + block@DualBlock{..} + TickedDualLedgerState{..} = + castLedgerResult ledgerResult <&> \main' -> + DualLedgerState + { dualLedgerStateMain = main' + , dualLedgerStateAux = applyDiffs tickedDualLedgerStateAux auxLedger + , dualLedgerStateBridge = + updateBridgeWithBlock + block + tickedDualLedgerStateBridge + } + where + auxLedger = + reapplyMaybeBlock + evs + (dualLedgerConfigAux cfg) + dualBlockAux + tickedDualLedgerStateAux + (forgetLedgerTables tickedDualLedgerStateAuxOrig) + ledgerResult = + reapplyBlockLedgerResult + evs + (dualLedgerConfigMain cfg) + dualBlockMain + tickedDualLedgerStateMain + + getBlockKeySets = + castLedgerTables + . getBlockKeySets @(LedgerState m) + . 
dualBlockMain + +data instance LedgerState (DualBlock m a) mk = DualLedgerState + { dualLedgerStateMain :: LedgerState m mk + , dualLedgerStateAux :: LedgerState a ValuesMK + , dualLedgerStateBridge :: BridgeLedger m a + } deriving NoThunks via AllowThunk (LedgerState (DualBlock m a) mk) instance Bridge m a => UpdateLedger (DualBlock m a) -deriving instance ( Bridge m a, ShowMK mk - ) => Show (LedgerState (DualBlock m a) mk) -deriving instance ( Bridge m a, EqMK mk - ) => Eq (LedgerState (DualBlock m a) mk) +deriving instance + ( Bridge m a + , ShowMK mk + ) => + Show (LedgerState (DualBlock m a) mk) +deriving instance + ( Bridge m a + , EqMK mk + ) => + Eq (LedgerState (DualBlock m a) mk) {------------------------------------------------------------------------------- Utilities for working with the extended ledger state -------------------------------------------------------------------------------} -dualExtValidationErrorMain :: ExtValidationError (DualBlock m a) - -> ExtValidationError m +dualExtValidationErrorMain :: + ExtValidationError (DualBlock m a) -> + ExtValidationError m dualExtValidationErrorMain = \case - ExtValidationErrorLedger e -> ExtValidationErrorLedger (dualLedgerErrorMain e) - ExtValidationErrorHeader e -> ExtValidationErrorHeader (castHeaderError e) + ExtValidationErrorLedger e -> ExtValidationErrorLedger (dualLedgerErrorMain e) + ExtValidationErrorHeader e -> ExtValidationErrorHeader (castHeaderError e) {------------------------------------------------------------------------------- LedgerSupportsProtocol @@ -498,51 +546,52 @@ dualExtValidationErrorMain = \case instance Bridge m a => HasAnnTip (DualBlock m a) where type TipInfo (DualBlock m a) = TipInfo m tipInfoHash _ = tipInfoHash (Proxy @m) - getTipInfo = getTipInfo . dualHeaderMain + getTipInfo = getTipInfo . 
dualHeaderMain instance Bridge m a => BasicEnvelopeValidation (DualBlock m a) where - expectedFirstBlockNo _ = expectedFirstBlockNo (Proxy @m) - expectedNextBlockNo _ = expectedNextBlockNo (Proxy @m) + expectedFirstBlockNo _ = expectedFirstBlockNo (Proxy @m) + expectedNextBlockNo _ = expectedNextBlockNo (Proxy @m) minimumPossibleSlotNo _ = minimumPossibleSlotNo (Proxy @m) - minimumNextSlotNo _ = minimumNextSlotNo (Proxy @m) + minimumNextSlotNo _ = minimumNextSlotNo (Proxy @m) instance Bridge m a => ValidateEnvelope (DualBlock m a) where type OtherHeaderEnvelopeError (DualBlock m a) = OtherHeaderEnvelopeError m additionalEnvelopeChecks cfg ledgerView hdr = - additionalEnvelopeChecks - (dualTopLevelConfigMain cfg) - ledgerView - (dualHeaderMain hdr) + additionalEnvelopeChecks + (dualTopLevelConfigMain cfg) + ledgerView + (dualHeaderMain hdr) instance Bridge m a => LedgerSupportsProtocol (DualBlock m a) where protocolLedgerView cfg state = - protocolLedgerView - (dualLedgerConfigMain cfg) - (tickedDualLedgerStateMain state) + protocolLedgerView + (dualLedgerConfigMain cfg) + (tickedDualLedgerStateMain state) ledgerViewForecastAt cfg state = - ledgerViewForecastAt - (dualLedgerConfigMain cfg) - (dualLedgerStateMain state) + ledgerViewForecastAt + (dualLedgerConfigMain cfg) + (dualLedgerStateMain state) instance Bridge m a => HasHardForkHistory (DualBlock m a) where type HardForkIndices (DualBlock m a) = HardForkIndices m hardForkSummary cfg state = - hardForkSummary - (dualLedgerConfigMain cfg) - (dualLedgerStateMain state) + hardForkSummary + (dualLedgerConfigMain cfg) + (dualLedgerStateMain state) {------------------------------------------------------------------------------- Querying the ledger -------------------------------------------------------------------------------} data instance BlockQuery (DualBlock m a) footprint result - deriving (Show) + deriving Show -instance (Typeable m, Typeable a) - => ShowProxy (BlockQuery (DualBlock m a)) where +instance + 
(Typeable m, Typeable a) => + ShowProxy (BlockQuery (DualBlock m a)) -- | Not used in the tests: no constructors instance Bridge m a => BlockSupportsLedgerQuery (DualBlock m a) where @@ -560,48 +609,52 @@ instance ShowQuery (BlockQuery (DualBlock m a) footprint) where -- | Forward to the main ledger instance Bridge m a => CommonProtocolParams (DualBlock m a) where maxHeaderSize = maxHeaderSize . dualLedgerStateMain - maxTxSize = maxTxSize . dualLedgerStateMain + maxTxSize = maxTxSize . dualLedgerStateMain {------------------------------------------------------------------------------- Mempool support -------------------------------------------------------------------------------} -data DualGenTxErr m a = DualGenTxErr { - dualGenTxErrMain :: ApplyTxErr m - , dualGenTxErrAux :: ApplyTxErr a - } +data DualGenTxErr m a = DualGenTxErr + { dualGenTxErrMain :: ApplyTxErr m + , dualGenTxErrAux :: ApplyTxErr a + } -instance (Typeable m, Typeable a) - => ShowProxy (DualGenTxErr m a) where +instance + (Typeable m, Typeable a) => + ShowProxy (DualGenTxErr m a) -data instance GenTx (DualBlock m a) = DualGenTx { - dualGenTxMain :: GenTx m - , dualGenTxAux :: GenTx a - , dualGenTxBridge :: BridgeTx m a - } +data instance GenTx (DualBlock m a) = DualGenTx + { dualGenTxMain :: GenTx m + , dualGenTxAux :: GenTx a + , dualGenTxBridge :: BridgeTx m a + } deriving NoThunks via AllowThunk (GenTx (DualBlock m a)) -data instance Validated (GenTx (DualBlock m a)) = ValidatedDualGenTx { - vDualGenTxMain :: Validated (GenTx m) - , vDualGenTxAux :: Validated (GenTx a) - , vDualGenTxBridge :: BridgeTx m a - } +data instance Validated (GenTx (DualBlock m a)) = ValidatedDualGenTx + { vDualGenTxMain :: Validated (GenTx m) + , vDualGenTxAux :: Validated (GenTx a) + , vDualGenTxBridge :: BridgeTx m a + } deriving NoThunks via AllowThunk (Validated (GenTx (DualBlock m a))) -instance (Typeable m, Typeable a) - => ShowProxy (GenTx (DualBlock m a)) where +instance + (Typeable m, Typeable a) => + 
ShowProxy (GenTx (DualBlock m a)) type instance ApplyTxErr (DualBlock m a) = DualGenTxErr m a instance Bridge m a => LedgerSupportsMempool (DualBlock m a) where - applyTx DualLedgerConfig{..} - wti - slot - DualGenTx{..} - TickedDualLedgerState{..} = do + applyTx + DualLedgerConfig{..} + wti + slot + DualGenTx{..} + TickedDualLedgerState{..} = do ((main', mainVtx), (aux', auxVtx)) <- - agreeOnError DualGenTxErr ( - applyTx + agreeOnError + DualGenTxErr + ( applyTx dualLedgerConfigMain wti slot @@ -614,28 +667,35 @@ instance Bridge m a => LedgerSupportsMempool (DualBlock m a) where dualGenTxAux tickedDualLedgerStateAux ) - let vtx = ValidatedDualGenTx { - vDualGenTxMain = mainVtx - , vDualGenTxAux = auxVtx + let vtx = + ValidatedDualGenTx + { vDualGenTxMain = mainVtx + , vDualGenTxAux = auxVtx , vDualGenTxBridge = dualGenTxBridge } - return (TickedDualLedgerState { - tickedDualLedgerStateMain = main' - , tickedDualLedgerStateAux = applyDiffs tickedDualLedgerStateAux aux' - , tickedDualLedgerStateAuxOrig = tickedDualLedgerStateAuxOrig - , tickedDualLedgerStateBridge = updateBridgeWithTx - vtx - tickedDualLedgerStateBridge - }, vtx) - - - reapplyTx doDiffs DualLedgerConfig{..} - slot - tx@ValidatedDualGenTx{..} - TickedDualLedgerState{..} = do + return + ( TickedDualLedgerState + { tickedDualLedgerStateMain = main' + , tickedDualLedgerStateAux = applyDiffs tickedDualLedgerStateAux aux' + , tickedDualLedgerStateAuxOrig = tickedDualLedgerStateAuxOrig + , tickedDualLedgerStateBridge = + updateBridgeWithTx + vtx + tickedDualLedgerStateBridge + } + , vtx + ) + + reapplyTx + doDiffs + DualLedgerConfig{..} + slot + tx@ValidatedDualGenTx{..} + TickedDualLedgerState{..} = do (main', aux') <- - agreeOnError DualGenTxErr ( - reapplyTx + agreeOnError + DualGenTxErr + ( reapplyTx doDiffs dualLedgerConfigMain slot @@ -648,52 +708,57 @@ instance Bridge m a => LedgerSupportsMempool (DualBlock m a) where vDualGenTxAux tickedDualLedgerStateAux ) - return $ TickedDualLedgerState { - 
tickedDualLedgerStateMain = main' - , tickedDualLedgerStateAux = trackingToValues aux' - , tickedDualLedgerStateAuxOrig = tickedDualLedgerStateAuxOrig - , tickedDualLedgerStateBridge = updateBridgeWithTx - tx - tickedDualLedgerStateBridge - } + return $ + TickedDualLedgerState + { tickedDualLedgerStateMain = main' + , tickedDualLedgerStateAux = trackingToValues aux' + , tickedDualLedgerStateAuxOrig = tickedDualLedgerStateAuxOrig + , tickedDualLedgerStateBridge = + updateBridgeWithTx + tx + tickedDualLedgerStateBridge + } txForgetValidated vtx = - DualGenTx { - dualGenTxMain = txForgetValidated vDualGenTxMain - , dualGenTxAux = txForgetValidated vDualGenTxAux - , dualGenTxBridge = vDualGenTxBridge - } - where - ValidatedDualGenTx { - vDualGenTxMain - , vDualGenTxAux - , vDualGenTxBridge - } = vtx + DualGenTx + { dualGenTxMain = txForgetValidated vDualGenTxMain + , dualGenTxAux = txForgetValidated vDualGenTxAux + , dualGenTxBridge = vDualGenTxBridge + } + where + ValidatedDualGenTx + { vDualGenTxMain + , vDualGenTxAux + , vDualGenTxBridge + } = vtx - getTransactionKeySets = castLedgerTables - . getTransactionKeySets @m - . dualGenTxMain + getTransactionKeySets = + castLedgerTables + . getTransactionKeySets @m + . 
dualGenTxMain instance Bridge m a => TxLimits (DualBlock m a) where type TxMeasure (DualBlock m a) = TxMeasure m - txMeasure DualLedgerConfig{..} TickedDualLedgerState{..} DualGenTx{..} = do + txMeasure DualLedgerConfig{..} TickedDualLedgerState{..} DualGenTx{..} = + do mapExcept (inj +++ id) - $ txMeasure dualLedgerConfigMain tickedDualLedgerStateMain dualGenTxMain - where - inj m = DualGenTxErr m (error "ByronSpec has no tx-too-big error") + $ txMeasure dualLedgerConfigMain tickedDualLedgerStateMain dualGenTxMain + where + inj m = DualGenTxErr m (error "ByronSpec has no tx-too-big error") blockCapacityTxMeasure DualLedgerConfig{..} TickedDualLedgerState{..} = - blockCapacityTxMeasure dualLedgerConfigMain tickedDualLedgerStateMain + blockCapacityTxMeasure dualLedgerConfigMain tickedDualLedgerStateMain -- We don't need a pair of IDs, as long as we can unique ID the transaction -newtype instance TxId (GenTx (DualBlock m a)) = DualGenTxId { - dualGenTxIdMain :: GenTxId m - } +newtype instance TxId (GenTx (DualBlock m a)) = DualGenTxId + { dualGenTxIdMain :: GenTxId m + } deriving NoThunks via AllowThunk (TxId (GenTx (DualBlock m a))) -instance (Typeable m, Typeable a) - => ShowProxy (TxId (GenTx (DualBlock m a))) where +instance + (Typeable m, Typeable a) => + ShowProxy (TxId (GenTx (DualBlock m a))) instance Bridge m a => HasTxId (GenTx (DualBlock m a)) where txId = DualGenTxId . txId . 
dualGenTxMain @@ -703,8 +768,8 @@ deriving instance Bridge m a => Show (Validated (GenTx (DualBlock m a))) deriving instance Bridge m a => Show (DualGenTxErr m a) deriving instance Show (GenTxId m) => Show (TxId (GenTx (DualBlock m a))) -deriving instance Eq (GenTxId m) => Eq (TxId (GenTx (DualBlock m a))) -deriving instance Ord (GenTxId m) => Ord (TxId (GenTx (DualBlock m a))) +deriving instance Eq (GenTxId m) => Eq (TxId (GenTx (DualBlock m a))) +deriving instance Ord (GenTxId m) => Ord (TxId (GenTx (DualBlock m a))) {------------------------------------------------------------------------------- Nested contents @@ -713,48 +778,59 @@ deriving instance Ord (GenTxId m) => Ord (TxId (GenTx (DualBlock m a))) -------------------------------------------------------------------------------} newtype instance NestedCtxt_ (DualBlock m a) f x where - CtxtDual :: NestedCtxt_ m f x -> NestedCtxt_ (DualBlock m a) f x + CtxtDual :: NestedCtxt_ m f x -> NestedCtxt_ (DualBlock m a) f x -deriving instance Show (NestedCtxt_ m f x) - => Show (NestedCtxt_ (DualBlock m a) f x) +deriving instance + Show (NestedCtxt_ m f x) => + Show (NestedCtxt_ (DualBlock m a) f x) -instance SameDepIndex (NestedCtxt_ m f) - => SameDepIndex (NestedCtxt_ (DualBlock m a) f) where +instance + SameDepIndex (NestedCtxt_ m f) => + SameDepIndex (NestedCtxt_ (DualBlock m a) f) + where sameDepIndex (CtxtDual ctxt) (CtxtDual ctxt') = - sameDepIndex ctxt ctxt' + sameDepIndex ctxt ctxt' ctxtDualMain :: NestedCtxt_ (DualBlock m a) f x -> NestedCtxt_ m f x ctxtDualMain (CtxtDual ctxtMain) = ctxtMain -instance HasNestedContent Header m - => HasNestedContent Header (DualBlock m a) where +instance + HasNestedContent Header m => + HasNestedContent Header (DualBlock m a) + where unnest = depPairFirst (mapNestedCtxt CtxtDual) . unnest . dualHeaderMain - nest = DualHeader . nest . depPairFirst (mapNestedCtxt ctxtDualMain) + nest = DualHeader . nest . 
depPairFirst (mapNestedCtxt ctxtDualMain) -instance ReconstructNestedCtxt Header m - => ReconstructNestedCtxt Header (DualBlock m a) where +instance + ReconstructNestedCtxt Header m => + ReconstructNestedCtxt Header (DualBlock m a) + where reconstructPrefixLen _ = - -- Account for the outer @encodeListLen 3@ - 1 `addPrefixLen` reconstructPrefixLen (Proxy @(Header m)) + -- Account for the outer @encodeListLen 3@ + 1 `addPrefixLen` reconstructPrefixLen (Proxy @(Header m)) reconstructNestedCtxt _ prefix size = - case reconstructNestedCtxt (Proxy @(Header m)) prefixMain size of - SomeSecond ctxt -> SomeSecond (mapNestedCtxt CtxtDual ctxt) - where - prefixMain = Short.pack . drop 1 . Short.unpack $ prefix + case reconstructNestedCtxt (Proxy @(Header m)) prefixMain size of + SomeSecond ctxt -> SomeSecond (mapNestedCtxt CtxtDual ctxt) + where + prefixMain = Short.pack . drop 1 . Short.unpack $ prefix -instance EncodeDiskDepIx (NestedCtxt Header) m - => EncodeDiskDepIx (NestedCtxt Header) (DualBlock m a) where +instance + EncodeDiskDepIx (NestedCtxt Header) m => + EncodeDiskDepIx (NestedCtxt Header) (DualBlock m a) + where encodeDiskDepIx ccfg (SomeSecond ctxt) = - encodeDiskDepIx - (dualCodecConfigMain ccfg) - (SomeSecond (mapNestedCtxt ctxtDualMain ctxt)) + encodeDiskDepIx + (dualCodecConfigMain ccfg) + (SomeSecond (mapNestedCtxt ctxtDualMain ctxt)) -instance EncodeDiskDep (NestedCtxt Header) m - => EncodeDiskDep (NestedCtxt Header) (DualBlock m a) where +instance + EncodeDiskDep (NestedCtxt Header) m => + EncodeDiskDep (NestedCtxt Header) (DualBlock m a) + where encodeDiskDep ccfg ctxt = - encodeDiskDep - (dualCodecConfigMain ccfg) - (mapNestedCtxt ctxtDualMain ctxt) + encodeDiskDep + (dualCodecConfigMain ccfg) + (mapNestedCtxt ctxtDualMain ctxt) {------------------------------------------------------------------------------- HasBinaryBlockInfo @@ -765,13 +841,13 @@ instance EncodeDiskDep (NestedCtxt Header) m -- This is sufficient, because we never need just the header 
of the auxiliary. instance HasBinaryBlockInfo m => HasBinaryBlockInfo (DualBlock m a) where getBinaryBlockInfo DualBlock{..} = - BinaryBlockInfo { - headerSize = headerSize mainBinaryBlockInfo - , headerOffset = headerOffset mainBinaryBlockInfo + 1 - } - where - mainBinaryBlockInfo :: BinaryBlockInfo - mainBinaryBlockInfo = getBinaryBlockInfo dualBlockMain + BinaryBlockInfo + { headerSize = headerSize mainBinaryBlockInfo + , headerOffset = headerOffset mainBinaryBlockInfo + 1 + } + where + mainBinaryBlockInfo :: BinaryBlockInfo + mainBinaryBlockInfo = getBinaryBlockInfo dualBlockMain {------------------------------------------------------------------------------- Inspection @@ -785,29 +861,31 @@ instance HasBinaryBlockInfo m => HasBinaryBlockInfo (DualBlock m a) where -- for it. We therefore just use the main block. instance InspectLedger m => InspectLedger (DualBlock m a) where type LedgerWarning (DualBlock m a) = LedgerWarning m - type LedgerUpdate (DualBlock m a) = LedgerUpdate m + type LedgerUpdate (DualBlock m a) = LedgerUpdate m - inspectLedger cfg before after = map castLedgerEvent $ + inspectLedger cfg before after = + map castLedgerEvent $ inspectLedger (dualTopLevelConfigMain cfg) - (dualLedgerStateMain before) - (dualLedgerStateMain after) - + (dualLedgerStateMain before) + (dualLedgerStateMain after) {------------------------------------------------------------------------------- PeerSelection -------------------------------------------------------------------------------} -instance LedgerSupportsPeerSelection m - => LedgerSupportsPeerSelection (DualBlock m a) where +instance + LedgerSupportsPeerSelection m => + LedgerSupportsPeerSelection (DualBlock m a) + where getPeers = getPeers . 
dualLedgerStateMain {------------------------------------------------------------------------------- Forging -------------------------------------------------------------------------------} -type instance CannotForge (DualBlock m a) = CannotForge m -type instance ForgeStateInfo (DualBlock m a) = ForgeStateInfo m +type instance CannotForge (DualBlock m a) = CannotForge m +type instance ForgeStateInfo (DualBlock m a) = ForgeStateInfo m type instance ForgeStateUpdateError (DualBlock m a) = ForgeStateUpdateError m {------------------------------------------------------------------------------- @@ -817,47 +895,50 @@ type instance ForgeStateUpdateError (DualBlock m a) = ForgeStateUpdateError m -- | Lift 'applyLedgerBlock' to @Maybe blk@ -- -- Returns state unchanged on 'Nothing' -applyMaybeBlock :: UpdateLedger blk - => ComputeLedgerEvents - -> LedgerConfig blk - -> Maybe blk - -> TickedLedgerState blk ValuesMK - -> LedgerState blk EmptyMK - -> Except (LedgerError blk) (LedgerState blk DiffMK) -applyMaybeBlock _ _ Nothing _ st = return $ st `withLedgerTables` emptyLedgerTables -applyMaybeBlock evs cfg (Just block) tst _ = applyLedgerBlock evs cfg block tst +applyMaybeBlock :: + UpdateLedger blk => + ComputeLedgerEvents -> + LedgerConfig blk -> + Maybe blk -> + TickedLedgerState blk ValuesMK -> + LedgerState blk EmptyMK -> + Except (LedgerError blk) (LedgerState blk DiffMK) +applyMaybeBlock _ _ Nothing _ st = return $ st `withLedgerTables` emptyLedgerTables +applyMaybeBlock evs cfg (Just block) tst _ = applyLedgerBlock evs cfg block tst -- | Lift 'reapplyLedgerBlock' to @Maybe blk@ -- -- See also 'applyMaybeBlock' -reapplyMaybeBlock :: UpdateLedger blk - => ComputeLedgerEvents - -> LedgerConfig blk - -> Maybe blk - -> TickedLedgerState blk ValuesMK - -> LedgerState blk EmptyMK - -> LedgerState blk DiffMK -reapplyMaybeBlock _ _ Nothing _ st = st `withLedgerTables` emptyLedgerTables -reapplyMaybeBlock evs cfg (Just block) tst _ = reapplyLedgerBlock evs cfg block tst 
+reapplyMaybeBlock :: + UpdateLedger blk => + ComputeLedgerEvents -> + LedgerConfig blk -> + Maybe blk -> + TickedLedgerState blk ValuesMK -> + LedgerState blk EmptyMK -> + LedgerState blk DiffMK +reapplyMaybeBlock _ _ Nothing _ st = st `withLedgerTables` emptyLedgerTables +reapplyMaybeBlock evs cfg (Just block) tst _ = reapplyLedgerBlock evs cfg block tst -- | Used when the concrete and abstract implementation should agree on errors -- -- The abstract-versus-concrete tests from the ledger folks tests precisely -- this, so if this fails, it indicates a bug somewhere and we error out. -agreeOnError :: (Show e, Show e', HasCallStack) - => (e -> e' -> err) - -> (Except e a, Except e' b) - -> Except err (a, b) +agreeOnError :: + (Show e, Show e', HasCallStack) => + (e -> e' -> err) -> + (Except e a, Except e' b) -> + Except err (a, b) agreeOnError f (ma, mb) = - case (runExcept ma, runExcept mb) of - (Left e, Left e') -> - throwError $ f e e' - (Left e, Right _) -> - error $ "agreeOnError: unexpected error " ++ show e - (Right _, Left e') -> - error $ "agreeOnError: unexpected error " ++ show e' - (Right a, Right b) -> - return (a, b) + case (runExcept ma, runExcept mb) of + (Left e, Left e') -> + throwError $ f e e' + (Left e, Right _) -> + error $ "agreeOnError: unexpected error " ++ show e + (Right _, Left e') -> + error $ "agreeOnError: unexpected error " ++ show e' + (Right a, Right b) -> + return (a, b) {------------------------------------------------------------------------------- Serialisation @@ -865,154 +946,185 @@ agreeOnError f (ma, mb) = For now we just require 'Serialise' for the auxiliary block. 
-------------------------------------------------------------------------------} -encodeDualLedgerConfig :: (LedgerCfg (LedgerState m) -> Encoding) - -> (LedgerCfg (LedgerState a) -> Encoding) - -> DualLedgerConfig m a - -> Encoding -encodeDualLedgerConfig encodeM encodeA (DualLedgerConfig m a) = mconcat [ - encodeListLen 2 +encodeDualLedgerConfig :: + (LedgerCfg (LedgerState m) -> Encoding) -> + (LedgerCfg (LedgerState a) -> Encoding) -> + DualLedgerConfig m a -> + Encoding +encodeDualLedgerConfig encodeM encodeA (DualLedgerConfig m a) = + mconcat + [ encodeListLen 2 , encodeM m , encodeA a ] -decodeDualLedgerConfig :: Decoder s (LedgerCfg (LedgerState m)) - -> Decoder s (LedgerCfg (LedgerState a)) - -> Decoder s (DualLedgerConfig m a) +decodeDualLedgerConfig :: + Decoder s (LedgerCfg (LedgerState m)) -> + Decoder s (LedgerCfg (LedgerState a)) -> + Decoder s (DualLedgerConfig m a) decodeDualLedgerConfig decodeM decodeA = do enforceSize "DualLedgerConfig" 2 DualLedgerConfig <$> decodeM <*> decodeA -encodeDualBlock :: (Bridge m a, Serialise a) - => (m -> Encoding) - -> DualBlock m a -> Encoding -encodeDualBlock encodeMain DualBlock{..} = mconcat [ - encodeListLen 3 - , encodeMain dualBlockMain - , encode dualBlockAux - , encode dualBlockBridge - ] - -decodeDualBlock :: (Bridge m a, Serialise a) - => Decoder s (Lazy.ByteString -> m) - -> Decoder s (Lazy.ByteString -> DualBlock m a) +encodeDualBlock :: + (Bridge m a, Serialise a) => + (m -> Encoding) -> + DualBlock m a -> + Encoding +encodeDualBlock encodeMain DualBlock{..} = + mconcat + [ encodeListLen 3 + , encodeMain dualBlockMain + , encode dualBlockAux + , encode dualBlockBridge + ] + +decodeDualBlock :: + (Bridge m a, Serialise a) => + Decoder s (Lazy.ByteString -> m) -> + Decoder s (Lazy.ByteString -> DualBlock m a) decodeDualBlock decodeMain = do - enforceSize "DualBlock" 3 - dualBlock - <$> decodeMain - <*> decode - <*> decode - where - dualBlock :: (Lazy.ByteString -> m) - -> Maybe a - -> BridgeBlock m a - 
-> (Lazy.ByteString -> DualBlock m a) - dualBlock conc abst bridge bs = DualBlock (conc bs) abst bridge - -encodeDualHeader :: (Header m -> Encoding) - -> Header (DualBlock m a) -> Encoding + enforceSize "DualBlock" 3 + dualBlock + <$> decodeMain + <*> decode + <*> decode + where + dualBlock :: + (Lazy.ByteString -> m) -> + Maybe a -> + BridgeBlock m a -> + (Lazy.ByteString -> DualBlock m a) + dualBlock conc abst bridge bs = DualBlock (conc bs) abst bridge + +encodeDualHeader :: + (Header m -> Encoding) -> + Header (DualBlock m a) -> + Encoding encodeDualHeader encodeMain DualHeader{..} = encodeMain dualHeaderMain -decodeDualHeader :: Decoder s (Lazy.ByteString -> Header m) - -> Decoder s (Lazy.ByteString -> Header (DualBlock m a)) +decodeDualHeader :: + Decoder s (Lazy.ByteString -> Header m) -> + Decoder s (Lazy.ByteString -> Header (DualBlock m a)) decodeDualHeader decodeMain = - dualHeader <$> decodeMain - where - dualHeader :: (Lazy.ByteString -> Header m) - -> (Lazy.ByteString -> Header (DualBlock m a)) - dualHeader conc bs = DualHeader (conc bs) - -encodeDualGenTx :: (Bridge m a, Serialise (GenTx a)) - => (GenTx m -> Encoding) - -> GenTx (DualBlock m a) -> Encoding -encodeDualGenTx encodeMain DualGenTx{..} = mconcat [ - encodeListLen 3 + dualHeader <$> decodeMain + where + dualHeader :: + (Lazy.ByteString -> Header m) -> + (Lazy.ByteString -> Header (DualBlock m a)) + dualHeader conc bs = DualHeader (conc bs) + +encodeDualGenTx :: + (Bridge m a, Serialise (GenTx a)) => + (GenTx m -> Encoding) -> + GenTx (DualBlock m a) -> + Encoding +encodeDualGenTx encodeMain DualGenTx{..} = + mconcat + [ encodeListLen 3 , encodeMain dualGenTxMain - , encode dualGenTxAux - , encode dualGenTxBridge + , encode dualGenTxAux + , encode dualGenTxBridge ] -decodeDualGenTx :: (Bridge m a, Serialise (GenTx a)) - => Decoder s (GenTx m) - -> Decoder s (GenTx (DualBlock m a)) +decodeDualGenTx :: + (Bridge m a, Serialise (GenTx a)) => + Decoder s (GenTx m) -> + Decoder s (GenTx 
(DualBlock m a)) decodeDualGenTx decodeMain = do - enforceSize "DualGenTx" 3 - DualGenTx - <$> decodeMain - <*> decode - <*> decode - -encodeDualGenTxId :: (GenTxId m -> Encoding) - -> GenTxId (DualBlock m a) -> Encoding + enforceSize "DualGenTx" 3 + DualGenTx + <$> decodeMain + <*> decode + <*> decode + +encodeDualGenTxId :: + (GenTxId m -> Encoding) -> + GenTxId (DualBlock m a) -> + Encoding encodeDualGenTxId encodeMain = encodeMain . dualGenTxIdMain -decodeDualGenTxId :: Decoder s (GenTxId m) - -> Decoder s (GenTxId (DualBlock m a)) +decodeDualGenTxId :: + Decoder s (GenTxId m) -> + Decoder s (GenTxId (DualBlock m a)) decodeDualGenTxId decodeMain = DualGenTxId <$> decodeMain -encodeDualGenTxErr :: Serialise (ApplyTxErr a) - => (ApplyTxErr m -> Encoding) - -> ApplyTxErr (DualBlock m a) -> Encoding -encodeDualGenTxErr encodeMain DualGenTxErr{..} = mconcat [ - encodeListLen 2 +encodeDualGenTxErr :: + Serialise (ApplyTxErr a) => + (ApplyTxErr m -> Encoding) -> + ApplyTxErr (DualBlock m a) -> + Encoding +encodeDualGenTxErr encodeMain DualGenTxErr{..} = + mconcat + [ encodeListLen 2 , encodeMain dualGenTxErrMain - , encode dualGenTxErrAux + , encode dualGenTxErrAux ] -decodeDualGenTxErr :: Serialise (ApplyTxErr a) - => Decoder s (ApplyTxErr m) - -> Decoder s (ApplyTxErr (DualBlock m a)) +decodeDualGenTxErr :: + Serialise (ApplyTxErr a) => + Decoder s (ApplyTxErr m) -> + Decoder s (ApplyTxErr (DualBlock m a)) decodeDualGenTxErr decodeMain = do - enforceSize "DualGenTxErr" 2 - DualGenTxErr - <$> decodeMain - <*> decode - -encodeDualLedgerState :: (Bridge m a, Serialise (LedgerState a ValuesMK)) - => (LedgerState m mk -> Encoding) - -> LedgerState (DualBlock m a) mk -> Encoding -encodeDualLedgerState encodeMain DualLedgerState{..} = mconcat [ - encodeListLen 3 + enforceSize "DualGenTxErr" 2 + DualGenTxErr + <$> decodeMain + <*> decode + +encodeDualLedgerState :: + (Bridge m a, Serialise (LedgerState a ValuesMK)) => + (LedgerState m mk -> Encoding) -> + LedgerState 
(DualBlock m a) mk -> + Encoding +encodeDualLedgerState encodeMain DualLedgerState{..} = + mconcat + [ encodeListLen 3 , encodeMain dualLedgerStateMain - , encode dualLedgerStateAux - , encode dualLedgerStateBridge + , encode dualLedgerStateAux + , encode dualLedgerStateBridge ] -decodeDualLedgerState :: (Bridge m a, Serialise (LedgerState a ValuesMK)) - => Decoder s (LedgerState m mk) - -> Decoder s (LedgerState (DualBlock m a) mk) +decodeDualLedgerState :: + (Bridge m a, Serialise (LedgerState a ValuesMK)) => + Decoder s (LedgerState m mk) -> + Decoder s (LedgerState (DualBlock m a) mk) decodeDualLedgerState decodeMain = do - enforceSize "DualLedgerState" 3 - DualLedgerState - <$> decodeMain - <*> decode - <*> decode + enforceSize "DualLedgerState" 3 + DualLedgerState + <$> decodeMain + <*> decode + <*> decode {------------------------------------------------------------------------------- Ledger Tables -------------------------------------------------------------------------------} -type instance TxIn (LedgerState (DualBlock m a)) = TxIn (LedgerState m) +type instance TxIn (LedgerState (DualBlock m a)) = TxIn (LedgerState m) type instance TxOut (LedgerState (DualBlock m a)) = TxOut (LedgerState m) instance CanUpgradeLedgerTables (LedgerState (DualBlock m a)) where upgradeTables _ _ = id -instance (txout ~ TxOut (LedgerState m), IndexedMemPack (LedgerState m EmptyMK) txout) - => IndexedMemPack (LedgerState (DualBlock m a) EmptyMK) txout where +instance + (txout ~ TxOut (LedgerState m), IndexedMemPack (LedgerState m EmptyMK) txout) => + IndexedMemPack (LedgerState (DualBlock m a) EmptyMK) txout + where indexedTypeName (DualLedgerState st _ _) = indexedTypeName @(LedgerState m EmptyMK) @txout st indexedPackedByteCount (DualLedgerState st _ _) = indexedPackedByteCount st indexedPackM (DualLedgerState st _ _) = indexedPackM st indexedUnpackM (DualLedgerState st _ _) = indexedUnpackM st -instance (Ord (TxIn (LedgerState m)), MemPack (TxIn (LedgerState m)), MemPack 
(TxOut (LedgerState m))) - => SerializeTablesWithHint (LedgerState (DualBlock m a)) where +instance + (Ord (TxIn (LedgerState m)), MemPack (TxIn (LedgerState m)), MemPack (TxOut (LedgerState m))) => + SerializeTablesWithHint (LedgerState (DualBlock m a)) + where encodeTablesWithHint = defaultEncodeTablesWithHint decodeTablesWithHint = defaultDecodeTablesWithHint -instance ( - Bridge m a +instance + ( Bridge m a , NoThunks (TxOut (LedgerState m)) , NoThunks (TxIn (LedgerState m)) , Show (TxOut (LedgerState m)) @@ -1020,21 +1132,24 @@ instance ( , Eq (TxOut (LedgerState m)) , Ord (TxIn (LedgerState m)) , MemPack (TxIn (LedgerState m)) - ) => HasLedgerTables (LedgerState (DualBlock m a)) where + ) => + HasLedgerTables (LedgerState (DualBlock m a)) + where projectLedgerTables DualLedgerState{..} = - castLedgerTables - (projectLedgerTables dualLedgerStateMain) + castLedgerTables + (projectLedgerTables dualLedgerStateMain) withLedgerTables DualLedgerState{..} main = - DualLedgerState { - dualLedgerStateMain = withLedgerTables dualLedgerStateMain - $ castLedgerTables main - , dualLedgerStateAux = dualLedgerStateAux - , dualLedgerStateBridge = dualLedgerStateBridge - } + DualLedgerState + { dualLedgerStateMain = + withLedgerTables dualLedgerStateMain $ + castLedgerTables main + , dualLedgerStateAux = dualLedgerStateAux + , dualLedgerStateBridge = dualLedgerStateBridge + } -instance ( - Bridge m a +instance + ( Bridge m a , NoThunks (TxOut (LedgerState m)) , NoThunks (TxIn (LedgerState m)) , Show (TxOut (LedgerState m)) @@ -1042,44 +1157,50 @@ instance ( , Eq (TxOut (LedgerState m)) , Ord (TxIn (LedgerState m)) , MemPack (TxIn (LedgerState m)) - )=> HasLedgerTables (Ticked (LedgerState (DualBlock m a))) where + ) => + HasLedgerTables (Ticked (LedgerState (DualBlock m a))) + where projectLedgerTables TickedDualLedgerState{..} = - castLedgerTables - (projectLedgerTables tickedDualLedgerStateMain) + castLedgerTables + (projectLedgerTables tickedDualLedgerStateMain) 
withLedgerTables TickedDualLedgerState{..} main = - TickedDualLedgerState { - tickedDualLedgerStateMain = + TickedDualLedgerState + { tickedDualLedgerStateMain = withLedgerTables tickedDualLedgerStateMain $ castLedgerTables main , tickedDualLedgerStateAux , tickedDualLedgerStateBridge , tickedDualLedgerStateAuxOrig } -instance CanStowLedgerTables (LedgerState m) - => CanStowLedgerTables (LedgerState (DualBlock m a)) where +instance + CanStowLedgerTables (LedgerState m) => + CanStowLedgerTables (LedgerState (DualBlock m a)) + where stowLedgerTables dls = - DualLedgerState{ - dualLedgerStateMain = stowLedgerTables dualLedgerStateMain - , dualLedgerStateAux - , dualLedgerStateBridge - } + DualLedgerState + { dualLedgerStateMain = stowLedgerTables dualLedgerStateMain + , dualLedgerStateAux + , dualLedgerStateBridge + } where - DualLedgerState { dualLedgerStateMain - , dualLedgerStateAux - , dualLedgerStateBridge - } = dls + DualLedgerState + { dualLedgerStateMain + , dualLedgerStateAux + , dualLedgerStateBridge + } = dls unstowLedgerTables dls = - DualLedgerState{ - dualLedgerStateMain = unstowLedgerTables dualLedgerStateMain - , dualLedgerStateAux - , dualLedgerStateBridge - } + DualLedgerState + { dualLedgerStateMain = unstowLedgerTables dualLedgerStateMain + , dualLedgerStateAux + , dualLedgerStateBridge + } where - DualLedgerState { dualLedgerStateMain - , dualLedgerStateAux - , dualLedgerStateBridge - } = dls + DualLedgerState + { dualLedgerStateMain + , dualLedgerStateAux + , dualLedgerStateBridge + } = dls diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Extended.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Extended.hs index 9da489fe7e..4a47abcceb 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Extended.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Extended.hs @@ -1,5 +1,5 @@ -{- HLINT ignore "Unused LANGUAGE pragma" -} -- False 
hint on TypeOperators - +{- HLINT ignore "Unused LANGUAGE pragma" -} +-- False hint on TypeOperators {-# LANGUAGE CPP #-} {-# LANGUAGE DeriveAnyClass #-} {-# LANGUAGE DeriveGeneric #-} @@ -16,48 +16,50 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Ledger.Extended ( - -- * Extended ledger state +module Ouroboros.Consensus.Ledger.Extended + ( -- * Extended ledger state ExtLedgerCfg (..) , ExtLedgerState (..) , ExtValidationError (..) + -- * Serialisation , decodeDiskExtLedgerState , decodeExtLedgerState , encodeDiskExtLedgerState , encodeExtLedgerState + -- * Type family instances , LedgerTables (..) , Ticked (..) ) where -import Codec.CBOR.Decoding (Decoder, decodeListLenOf) -import Codec.CBOR.Encoding (Encoding, encodeListLen) -import Control.Monad.Except -import Data.Functor ((<&>)) -import Data.MemPack -import Data.Proxy -import Data.Typeable -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.IndexedMemPack +import Codec.CBOR.Decoding (Decoder, decodeListLenOf) +import Codec.CBOR.Encoding (Encoding, encodeListLen) +import Control.Monad.Except +import Data.Functor ((<&>)) +import Data.MemPack +import Data.Proxy +import Data.Typeable +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Protocol.Abstract +import 
Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.IndexedMemPack {------------------------------------------------------------------------------- Extended ledger state -------------------------------------------------------------------------------} -data ExtValidationError blk = - ExtValidationErrorLedger !(LedgerError blk) +data ExtValidationError blk + = ExtValidationErrorLedger !(LedgerError blk) | ExtValidationErrorHeader !(HeaderError blk) - deriving (Generic) + deriving Generic deriving instance LedgerSupportsProtocol blk => Eq (ExtValidationError blk) deriving instance LedgerSupportsProtocol blk => NoThunks (ExtValidationError blk) @@ -66,33 +68,38 @@ deriving instance LedgerSupportsProtocol blk => Show (ExtValidationError blk) -- | Extended ledger state -- -- This is the combination of the header state and the ledger state proper. -data ExtLedgerState blk mk = ExtLedgerState { - ledgerState :: !(LedgerState blk mk) - , headerState :: !(HeaderState blk) - } - deriving (Generic) - -deriving instance (EqMK mk, LedgerSupportsProtocol blk) - => Eq (ExtLedgerState blk mk) -deriving instance (ShowMK mk, LedgerSupportsProtocol blk) - => Show (ExtLedgerState blk mk) +data ExtLedgerState blk mk = ExtLedgerState + { ledgerState :: !(LedgerState blk mk) + , headerState :: !(HeaderState blk) + } + deriving Generic + +deriving instance + (EqMK mk, LedgerSupportsProtocol blk) => + Eq (ExtLedgerState blk mk) +deriving instance + (ShowMK mk, LedgerSupportsProtocol blk) => + Show (ExtLedgerState blk mk) -- | We override 'showTypeOf' to show the type of the block -- -- This makes debugging a bit easier, as the block gets used to resolve all -- kinds of type families. 
-instance (NoThunksMK mk, LedgerSupportsProtocol blk) - => NoThunks (ExtLedgerState blk mk) where +instance + (NoThunksMK mk, LedgerSupportsProtocol blk) => + NoThunks (ExtLedgerState blk mk) + where showTypeOf _ = show $ typeRep (Proxy @(ExtLedgerState blk)) type instance HeaderHash (ExtLedgerState blk) = HeaderHash (LedgerState blk) -instance ( - NoThunks (HeaderHash blk) +instance + ( NoThunks (HeaderHash blk) , Typeable (HeaderHash blk) , Show (HeaderHash blk) , Ord (HeaderHash blk) , Eq (HeaderHash blk) - ) => StandardHash (ExtLedgerState blk) + ) => + StandardHash (ExtLedgerState blk) instance IsLedger (LedgerState blk) => GetTip (ExtLedgerState blk) where getTip = castPoint . getTip . ledgerState @@ -105,18 +112,20 @@ instance IsLedger (LedgerState blk) => GetTip (ExtLedgerState blk) where -- -- Since the extended ledger also does the consensus protocol validation, we -- also need the consensus config. -newtype ExtLedgerCfg blk = ExtLedgerCfg { - getExtLedgerCfg :: TopLevelConfig blk - } - deriving (Generic) - -instance ( ConsensusProtocol (BlockProtocol blk) - , NoThunks (BlockConfig blk) - , NoThunks (CodecConfig blk) - , NoThunks (LedgerConfig blk) - , NoThunks (StorageConfig blk) - , NoThunks (HeaderHash blk) - ) => NoThunks (ExtLedgerCfg blk) +newtype ExtLedgerCfg blk = ExtLedgerCfg + { getExtLedgerCfg :: TopLevelConfig blk + } + deriving Generic + +instance + ( ConsensusProtocol (BlockProtocol blk) + , NoThunks (BlockConfig blk) + , NoThunks (CodecConfig blk) + , NoThunks (LedgerConfig blk) + , NoThunks (StorageConfig blk) + , NoThunks (HeaderHash blk) + ) => + NoThunks (ExtLedgerCfg blk) type instance LedgerCfg (ExtLedgerState blk) = ExtLedgerCfg blk @@ -124,74 +133,77 @@ type instance LedgerCfg (ExtLedgerState blk) = ExtLedgerCfg blk The ticked extended ledger state -------------------------------------------------------------------------------} -data instance Ticked (ExtLedgerState blk) mk = TickedExtLedgerState { - tickedLedgerState :: Ticked 
(LedgerState blk) mk - , ledgerView :: LedgerView (BlockProtocol blk) - , tickedHeaderState :: Ticked (HeaderState blk) - } +data instance Ticked (ExtLedgerState blk) mk = TickedExtLedgerState + { tickedLedgerState :: Ticked (LedgerState blk) mk + , ledgerView :: LedgerView (BlockProtocol blk) + , tickedHeaderState :: Ticked (HeaderState blk) + } instance IsLedger (LedgerState blk) => GetTip (Ticked (ExtLedgerState blk)) where getTip = castPoint . getTip . tickedLedgerState -instance LedgerSupportsProtocol blk - => IsLedger (ExtLedgerState blk) where +instance + LedgerSupportsProtocol blk => + IsLedger (ExtLedgerState blk) + where type LedgerErr (ExtLedgerState blk) = ExtValidationError blk type AuxLedgerEvent (ExtLedgerState blk) = AuxLedgerEvent (LedgerState blk) applyChainTickLedgerResult evs cfg slot (ExtLedgerState ledger header) = - castLedgerResult ledgerResult <&> \tickedLedgerState -> + castLedgerResult ledgerResult <&> \tickedLedgerState -> let ledgerView :: LedgerView (BlockProtocol blk) ledgerView = protocolLedgerView lcfg tickedLedgerState tickedHeaderState :: Ticked (HeaderState blk) tickedHeaderState = - tickHeaderState - (configConsensus $ getExtLedgerCfg cfg) - ledgerView - slot - header - in TickedExtLedgerState {..} - where - lcfg :: LedgerConfig blk - lcfg = configLedger $ getExtLedgerCfg cfg - - ledgerResult = applyChainTickLedgerResult evs lcfg slot ledger + tickHeaderState + (configConsensus $ getExtLedgerCfg cfg) + ledgerView + slot + header + in TickedExtLedgerState{..} + where + lcfg :: LedgerConfig blk + lcfg = configLedger $ getExtLedgerCfg cfg + + ledgerResult = applyChainTickLedgerResult evs lcfg slot ledger applyHelper :: - forall blk. 
- (HasCallStack, LedgerSupportsProtocol blk) - => ( HasCallStack - => ComputeLedgerEvents - -> LedgerCfg (LedgerState blk) - -> blk - -> Ticked (LedgerState blk) ValuesMK - -> Except - (LedgerErr (LedgerState blk)) - (LedgerResult (LedgerState blk) (LedgerState blk DiffMK)) - ) - -> ComputeLedgerEvents - -> LedgerCfg (ExtLedgerState blk) - -> blk - -> Ticked (ExtLedgerState blk) ValuesMK - -> Except - (LedgerErr (ExtLedgerState blk)) - (LedgerResult (ExtLedgerState blk) (ExtLedgerState blk DiffMK)) + forall blk. + (HasCallStack, LedgerSupportsProtocol blk) => + ( HasCallStack => + ComputeLedgerEvents -> + LedgerCfg (LedgerState blk) -> + blk -> + Ticked (LedgerState blk) ValuesMK -> + Except + (LedgerErr (LedgerState blk)) + (LedgerResult (LedgerState blk) (LedgerState blk DiffMK)) + ) -> + ComputeLedgerEvents -> + LedgerCfg (ExtLedgerState blk) -> + blk -> + Ticked (ExtLedgerState blk) ValuesMK -> + Except + (LedgerErr (ExtLedgerState blk)) + (LedgerResult (ExtLedgerState blk) (ExtLedgerState blk DiffMK)) applyHelper f opts cfg blk TickedExtLedgerState{..} = do - ledgerResult <- - withExcept ExtValidationErrorLedger - $ f opts - (configLedger $ getExtLedgerCfg cfg) - blk - tickedLedgerState - hdr <- - withExcept ExtValidationErrorHeader - $ validateHeader @blk - (getExtLedgerCfg cfg) - ledgerView - (getHeader blk) - tickedHeaderState - pure $ (\l -> ExtLedgerState l hdr) <$> castLedgerResult ledgerResult + ledgerResult <- + withExcept ExtValidationErrorLedger $ + f + opts + (configLedger $ getExtLedgerCfg cfg) + blk + tickedLedgerState + hdr <- + withExcept ExtValidationErrorHeader $ + validateHeader @blk + (getExtLedgerCfg cfg) + ledgerView + (getHeader blk) + tickedHeaderState + pure $ (\l -> ExtLedgerState l hdr) <$> castLedgerResult ledgerResult instance LedgerSupportsProtocol blk => ApplyBlock (ExtLedgerState blk) blk where applyBlockLedgerResultWithValidation doValidate = @@ -201,19 +213,20 @@ instance LedgerSupportsProtocol blk => ApplyBlock (ExtLedgerState 
blk) blk where applyHelper applyBlockLedgerResult reapplyBlockLedgerResult evs cfg blk TickedExtLedgerState{..} = - (\l -> ExtLedgerState l hdr) <$> castLedgerResult ledgerResult - where - ledgerResult = - reapplyBlockLedgerResult evs - (configLedger $ getExtLedgerCfg cfg) - blk - tickedLedgerState - hdr = - revalidateHeader - (getExtLedgerCfg cfg) - ledgerView - (getHeader blk) - tickedHeaderState + (\l -> ExtLedgerState l hdr) <$> castLedgerResult ledgerResult + where + ledgerResult = + reapplyBlockLedgerResult + evs + (configLedger $ getExtLedgerCfg cfg) + blk + tickedLedgerState + hdr = + revalidateHeader + (getExtLedgerCfg cfg) + ledgerView + (getHeader blk) + tickedHeaderState getBlockKeySets = castLedgerTables . getBlockKeySets @(LedgerState blk) @@ -221,59 +234,67 @@ instance LedgerSupportsProtocol blk => ApplyBlock (ExtLedgerState blk) blk where Serialisation -------------------------------------------------------------------------------} -encodeExtLedgerState :: (LedgerState blk mk -> Encoding) - -> (ChainDepState (BlockProtocol blk) -> Encoding) - -> (AnnTip blk -> Encoding) - -> ExtLedgerState blk mk -> Encoding -encodeExtLedgerState encodeLedgerState - encodeChainDepState - encodeAnnTip - ExtLedgerState{ledgerState, headerState} = mconcat [ - encodeListLen 2 - , encodeLedgerState ledgerState - , encodeHeaderState' headerState - ] - where - encodeHeaderState' = encodeHeaderState - encodeChainDepState - encodeAnnTip +encodeExtLedgerState :: + (LedgerState blk mk -> Encoding) -> + (ChainDepState (BlockProtocol blk) -> Encoding) -> + (AnnTip blk -> Encoding) -> + ExtLedgerState blk mk -> + Encoding +encodeExtLedgerState + encodeLedgerState + encodeChainDepState + encodeAnnTip + ExtLedgerState{ledgerState, headerState} = + mconcat + [ encodeListLen 2 + , encodeLedgerState ledgerState + , encodeHeaderState' headerState + ] + where + encodeHeaderState' = + encodeHeaderState + encodeChainDepState + encodeAnnTip encodeDiskExtLedgerState :: - forall blk. 
- (EncodeDisk blk (LedgerState blk EmptyMK), - EncodeDisk blk (ChainDepState (BlockProtocol blk)), - EncodeDisk blk (AnnTip blk) - ) - => (CodecConfig blk -> ExtLedgerState blk EmptyMK -> Encoding) + forall blk. + ( EncodeDisk blk (LedgerState blk EmptyMK) + , EncodeDisk blk (ChainDepState (BlockProtocol blk)) + , EncodeDisk blk (AnnTip blk) + ) => + (CodecConfig blk -> ExtLedgerState blk EmptyMK -> Encoding) encodeDiskExtLedgerState cfg = encodeExtLedgerState (encodeDisk cfg) (encodeDisk cfg) (encodeDisk cfg) -decodeExtLedgerState :: (forall s. Decoder s (LedgerState blk EmptyMK)) - -> (forall s. Decoder s (ChainDepState (BlockProtocol blk))) - -> (forall s. Decoder s (AnnTip blk)) - -> (forall s. Decoder s (ExtLedgerState blk EmptyMK)) -decodeExtLedgerState decodeLedgerState - decodeChainDepState - decodeAnnTip = do - decodeListLenOf 2 - ledgerState <- decodeLedgerState - headerState <- decodeHeaderState' - return ExtLedgerState{ledgerState, headerState} - where - decodeHeaderState' = decodeHeaderState - decodeChainDepState - decodeAnnTip +decodeExtLedgerState :: + (forall s. Decoder s (LedgerState blk EmptyMK)) -> + (forall s. Decoder s (ChainDepState (BlockProtocol blk))) -> + (forall s. Decoder s (AnnTip blk)) -> + (forall s. Decoder s (ExtLedgerState blk EmptyMK)) +decodeExtLedgerState + decodeLedgerState + decodeChainDepState + decodeAnnTip = do + decodeListLenOf 2 + ledgerState <- decodeLedgerState + headerState <- decodeHeaderState' + return ExtLedgerState{ledgerState, headerState} + where + decodeHeaderState' = + decodeHeaderState + decodeChainDepState + decodeAnnTip decodeDiskExtLedgerState :: - forall blk. - (DecodeDisk blk (LedgerState blk EmptyMK), - DecodeDisk blk (ChainDepState (BlockProtocol blk)), - DecodeDisk blk (AnnTip blk) - ) - => (CodecConfig blk -> forall s. Decoder s (ExtLedgerState blk EmptyMK)) + forall blk. 
+ ( DecodeDisk blk (LedgerState blk EmptyMK) + , DecodeDisk blk (ChainDepState (BlockProtocol blk)) + , DecodeDisk blk (AnnTip blk) + ) => + (CodecConfig blk -> forall s. Decoder s (ExtLedgerState blk EmptyMK)) decodeDiskExtLedgerState cfg = decodeExtLedgerState (decodeDisk cfg) @@ -284,11 +305,11 @@ decodeDiskExtLedgerState cfg = Ledger Tables -------------------------------------------------------------------------------} -type instance TxIn (ExtLedgerState blk) = TxIn (LedgerState blk) +type instance TxIn (ExtLedgerState blk) = TxIn (LedgerState blk) type instance TxOut (ExtLedgerState blk) = TxOut (LedgerState blk) -instance ( - HasLedgerTables (LedgerState blk) +instance + ( HasLedgerTables (LedgerState blk) , NoThunks (TxOut (LedgerState blk)) , NoThunks (TxIn (LedgerState blk)) , Show (TxOut (LedgerState blk)) @@ -296,25 +317,31 @@ instance ( , Eq (TxOut (LedgerState blk)) , Ord (TxIn (LedgerState blk)) , MemPack (TxIn (LedgerState blk)) - ) => HasLedgerTables (ExtLedgerState blk) where + ) => + HasLedgerTables (ExtLedgerState blk) + where projectLedgerTables (ExtLedgerState lstate _) = - castLedgerTables (projectLedgerTables lstate) + castLedgerTables (projectLedgerTables lstate) withLedgerTables (ExtLedgerState lstate hstate) tables = - ExtLedgerState - (lstate `withLedgerTables` castLedgerTables tables) - hstate + ExtLedgerState + (lstate `withLedgerTables` castLedgerTables tables) + hstate -instance LedgerTablesAreTrivial (LedgerState blk) - => LedgerTablesAreTrivial (ExtLedgerState blk) where +instance + LedgerTablesAreTrivial (LedgerState blk) => + LedgerTablesAreTrivial (ExtLedgerState blk) + where convertMapKind (ExtLedgerState x y) = ExtLedgerState (convertMapKind x) y -instance LedgerTablesAreTrivial (Ticked (LedgerState blk)) - => LedgerTablesAreTrivial (Ticked (ExtLedgerState blk)) where +instance + LedgerTablesAreTrivial (Ticked (LedgerState blk)) => + LedgerTablesAreTrivial (Ticked (ExtLedgerState blk)) + where convertMapKind 
(TickedExtLedgerState x y z) = - TickedExtLedgerState (convertMapKind x) y z + TickedExtLedgerState (convertMapKind x) y z -instance ( - HasLedgerTables (Ticked (LedgerState blk)) +instance + ( HasLedgerTables (Ticked (LedgerState blk)) , NoThunks (TxOut (LedgerState blk)) , NoThunks (TxIn (LedgerState blk)) , Show (TxOut (LedgerState blk)) @@ -322,9 +349,11 @@ instance ( , Eq (TxOut (LedgerState blk)) , Ord (TxIn (LedgerState blk)) , MemPack (TxIn (LedgerState blk)) - ) => HasLedgerTables (Ticked (ExtLedgerState blk)) where + ) => + HasLedgerTables (Ticked (ExtLedgerState blk)) + where projectLedgerTables (TickedExtLedgerState lstate _view _hstate) = - castLedgerTables (projectLedgerTables lstate) + castLedgerTables (projectLedgerTables lstate) withLedgerTables (TickedExtLedgerState lstate view hstate) tables = @@ -333,16 +362,20 @@ instance ( view hstate -instance CanStowLedgerTables (LedgerState blk) - => CanStowLedgerTables (ExtLedgerState blk) where - stowLedgerTables (ExtLedgerState lstate hstate) = - ExtLedgerState (stowLedgerTables lstate) hstate +instance + CanStowLedgerTables (LedgerState blk) => + CanStowLedgerTables (ExtLedgerState blk) + where + stowLedgerTables (ExtLedgerState lstate hstate) = + ExtLedgerState (stowLedgerTables lstate) hstate - unstowLedgerTables (ExtLedgerState lstate hstate) = - ExtLedgerState (unstowLedgerTables lstate) hstate + unstowLedgerTables (ExtLedgerState lstate hstate) = + ExtLedgerState (unstowLedgerTables lstate) hstate -instance (txout ~ (TxOut (LedgerState blk)), IndexedMemPack (LedgerState blk EmptyMK) txout) - => IndexedMemPack (ExtLedgerState blk EmptyMK) txout where +instance + (txout ~ (TxOut (LedgerState blk)), IndexedMemPack (LedgerState blk EmptyMK) txout) => + IndexedMemPack (ExtLedgerState blk EmptyMK) txout + where indexedTypeName (ExtLedgerState st _) = indexedTypeName @(LedgerState blk EmptyMK) @txout st indexedPackedByteCount (ExtLedgerState st _) = indexedPackedByteCount st indexedPackM (ExtLedgerState 
st _) = indexedPackM st diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Inspect.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Inspect.hs index 67ced8c8af..97ef8244eb 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Inspect.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Inspect.hs @@ -6,55 +6,58 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} -module Ouroboros.Consensus.Ledger.Inspect ( - InspectLedger (..) +module Ouroboros.Consensus.Ledger.Inspect + ( InspectLedger (..) , LedgerEvent (..) , castLedgerEvent , partitionLedgerEvents ) where -import Data.Either -import Data.Kind (Type) -import Data.Void -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.RedundantConstraints +import Data.Either +import Data.Kind (Type) +import Data.Void +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.RedundantConstraints -data LedgerEvent blk = - LedgerWarning (LedgerWarning blk) +data LedgerEvent blk + = LedgerWarning (LedgerWarning blk) | LedgerUpdate (LedgerUpdate blk) deriving instance InspectLedger blk => Show (LedgerEvent blk) -deriving instance InspectLedger blk => Eq (LedgerEvent blk) +deriving instance InspectLedger blk => Eq (LedgerEvent blk) castLedgerEvent :: - ( LedgerWarning blk ~ LedgerWarning blk' - , LedgerUpdate blk ~ LedgerUpdate blk' - ) - => LedgerEvent blk -> LedgerEvent blk' + ( LedgerWarning blk ~ LedgerWarning blk' + , LedgerUpdate blk ~ LedgerUpdate blk' + ) => + LedgerEvent blk -> LedgerEvent blk' castLedgerEvent (LedgerWarning warning) = LedgerWarning warning -castLedgerEvent (LedgerUpdate update) = LedgerUpdate update +castLedgerEvent (LedgerUpdate update) = LedgerUpdate update ledgerEventToEither :: - 
LedgerEvent blk - -> Either (LedgerWarning blk) (LedgerUpdate blk) -ledgerEventToEither (LedgerWarning warning) = Left warning -ledgerEventToEither (LedgerUpdate update) = Right update + LedgerEvent blk -> + Either (LedgerWarning blk) (LedgerUpdate blk) +ledgerEventToEither (LedgerWarning warning) = Left warning +ledgerEventToEither (LedgerUpdate update) = Right update partitionLedgerEvents :: - [LedgerEvent blk] - -> ([LedgerWarning blk], [LedgerUpdate blk]) + [LedgerEvent blk] -> + ([LedgerWarning blk], [LedgerUpdate blk]) partitionLedgerEvents = partitionEithers . map ledgerEventToEither -class ( Show (LedgerWarning blk) - , Show (LedgerUpdate blk) - , Eq (LedgerWarning blk) - , Eq (LedgerUpdate blk) - , Condense (LedgerUpdate blk) - ) => InspectLedger blk where +class + ( Show (LedgerWarning blk) + , Show (LedgerUpdate blk) + , Eq (LedgerWarning blk) + , Eq (LedgerUpdate blk) + , Condense (LedgerUpdate blk) + ) => + InspectLedger blk + where type LedgerWarning blk :: Type - type LedgerUpdate blk :: Type + type LedgerUpdate blk :: Type -- | Inspect the ledger -- @@ -65,26 +68,30 @@ class ( Show (LedgerWarning blk) -- instead. That doesn't fit quite so neatly with the HFC at present, so -- leaving it at this for now. 
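The `partitionLedgerEvents` definition reformatted above is simply `partitionEithers` composed with a per-event projection. A self-contained toy analogue of that split (illustrative names, not from the patch):

```haskell
{-# LANGUAGE LambdaCase #-}

import Data.Either (partitionEithers)

-- Toy stand-ins for LedgerWarning/LedgerUpdate.
data Event = Warn String | Update Int

-- Mirrors ledgerEventToEither: warnings on the Left, updates on the Right.
eventToEither :: Event -> Either String Int
eventToEither = \case
  Warn w   -> Left w
  Update u -> Right u

-- Mirrors partitionLedgerEvents: split a mixed stream into the two kinds,
-- preserving the relative order within each kind.
partitionEvents :: [Event] -> ([String], [Int])
partitionEvents = partitionEithers . map eventToEither
```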
inspectLedger :: - TopLevelConfig blk - -> LedgerState blk mk1 -- ^ Before - -> LedgerState blk mk2 -- ^ After - -> [LedgerEvent blk] + TopLevelConfig blk -> + -- | Before + LedgerState blk mk1 -> + -- | After + LedgerState blk mk2 -> + [LedgerEvent blk] -- Defaults -- The defaults just use no events at all type LedgerWarning blk = Void - type LedgerUpdate blk = Void + type LedgerUpdate blk = Void default inspectLedger :: - ( LedgerWarning blk ~ Void - , LedgerUpdate blk ~ Void - ) - => TopLevelConfig blk - -> LedgerState blk mk1 -- ^ Before - -> LedgerState blk mk2 -- ^ After - -> [LedgerEvent blk] + ( LedgerWarning blk ~ Void + , LedgerUpdate blk ~ Void + ) => + TopLevelConfig blk -> + -- | Before + LedgerState blk mk1 -> + -- | After + LedgerState blk mk2 -> + [LedgerEvent blk] inspectLedger _ _ _ = [] - where - _ = keepRedundantConstraint (Proxy @(LedgerWarning blk ~ Void)) - _ = keepRedundantConstraint (Proxy @(LedgerUpdate blk ~ Void)) + where + _ = keepRedundantConstraint (Proxy @(LedgerWarning blk ~ Void)) + _ = keepRedundantConstraint (Proxy @(LedgerUpdate blk ~ Void)) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Query.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Query.hs index 70d3d949e2..a4b2108064 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Query.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Query.hs @@ -16,65 +16,81 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Ledger.Query ( - -- * Queries that can be answered by the Consensus layer +module Ouroboros.Consensus.Ledger.Query + ( -- * Queries that can be answered by the Consensus layer Query (..) , answerQuery + -- * How to answer specific queries , BlockQuery , BlockSupportsLedgerQuery (..) , ConfigSupportsNode (..) , ShowQuery (..) + -- * Version , QueryVersion (..) 
, nodeToClientVersionToQueryVersion + -- * Serialization , queryDecodeNodeToClient , queryEncodeNodeToClient , queryIsSupportedOnNodeToClientVersion , querySupportedVersions + -- * Footprints , QueryFootprint (..) , SQueryFootprint (..) , SomeBlockQuery (..) ) where -import Cardano.Binary (FromCBOR (..), ToCBOR (..)) -import Cardano.Slotting.Block (BlockNo (..)) -import Cardano.Slotting.Slot (WithOrigin (..)) -import Codec.CBOR.Decoding -import Codec.CBOR.Encoding -import Codec.Serialise (Serialise) -import Codec.Serialise.Class (decode, encode) -import Control.Exception (throw) -import Data.Kind (Type) -import qualified Data.Map.Strict as Map -import Data.Maybe (isJust) -import Data.Singletons -import Data.SOP.BasicFunctors -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block.Abstract (CodecConfig) -import Ouroboros.Consensus.BlockchainTime (SystemStart) -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.HeaderValidation (HasAnnTip (..), - headerStateBlockNo, headerStatePoint) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query.Version -import Ouroboros.Consensus.Node.NetworkProtocolVersion - (BlockNodeToClientVersion, NodeToClientVersion, - SupportedNetworkProtocolVersion (supportedNodeToClientVersions)) -import Ouroboros.Consensus.Node.Serialisation - (SerialiseBlockQueryResult (..), - SerialiseNodeToClient (..), SerialiseResult (..)) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Util (ShowProxy (..), SomeSecond (..)) -import Ouroboros.Consensus.Util.DepPair -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Block (HeaderHash, Point (..), StandardHash, - decodePoint, encodePoint) -import Ouroboros.Network.Protocol.LocalStateQuery.Type +import Cardano.Binary (FromCBOR (..), ToCBOR (..)) +import Cardano.Slotting.Block (BlockNo (..)) +import Cardano.Slotting.Slot (WithOrigin 
(..)) +import Codec.CBOR.Decoding +import Codec.CBOR.Encoding +import Codec.Serialise (Serialise) +import Codec.Serialise.Class (decode, encode) +import Control.Exception (throw) +import Data.Kind (Type) +import Data.Map.Strict qualified as Map +import Data.Maybe (isJust) +import Data.SOP.BasicFunctors +import Data.Singletons +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block.Abstract (CodecConfig) +import Ouroboros.Consensus.BlockchainTime (SystemStart) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.HeaderValidation + ( HasAnnTip (..) + , headerStateBlockNo + , headerStatePoint + ) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query.Version +import Ouroboros.Consensus.Node.NetworkProtocolVersion + ( BlockNodeToClientVersion + , NodeToClientVersion + , SupportedNetworkProtocolVersion (supportedNodeToClientVersions) + ) +import Ouroboros.Consensus.Node.Serialisation + ( SerialiseBlockQueryResult (..) + , SerialiseNodeToClient (..) + , SerialiseResult (..) + ) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Util (ShowProxy (..), SomeSecond (..)) +import Ouroboros.Consensus.Util.DepPair +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Block + ( HeaderHash + , Point (..) + , StandardHash + , decodePoint + , encodePoint + ) +import Ouroboros.Network.Protocol.LocalStateQuery.Type {------------------------------------------------------------------------------- Footprints @@ -83,23 +99,23 @@ import Ouroboros.Network.Protocol.LocalStateQuery.Type -- | Queries on the local state might require reading ledger tables from disk. -- This datatype (which will sometimes be concretized via @sing@) allows -- Consensus to categorize the queries. 
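The footprint comment ending above describes a promoted tag with a singleton (`SQueryFootprint` in the following hunk) that lets the value level dispatch on a type-level category. A cut-down standalone sketch of that pattern, with toy names in place of the patch's:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Promoted tag: the three footprint categories, used only at the type level.
data Footprint = NoTables | LookupTables | TraverseTables

-- The singleton: one constructor per promoted tag, so pattern matching
-- on the value recovers the type-level index.
data SFootprint (fp :: Footprint) where
  SNoTables       :: SFootprint 'NoTables
  SLookupTables   :: SFootprint 'LookupTables
  STraverseTables :: SFootprint 'TraverseTables

-- Dispatch: matching on the singleton tells us which category a query
-- falls in, as answerQuery does with @sing :: Sing footprint@.
describe :: SFootprint fp -> String
describe SNoTables       = "answerable from the ledger state alone"
describe SLookupTables   = "needs point lookups in the ledger tables"
describe STraverseTables = "needs a full traversal of the backing store"
```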
-data QueryFootprint = - -- | The query doesn't need ledger tables, thus can be answered only with +data QueryFootprint + = -- | The query doesn't need ledger tables, thus can be answered only with -- the ledger state. QFNoTables - -- | The query needs some tables, but doesn't need to traverse the whole + | -- | The query needs some tables, but doesn't need to traverse the whole -- backing store. - | QFLookupTables - -- | The query needs to traverse the whole backing store. - | QFTraverseTables + QFLookupTables + | -- | The query needs to traverse the whole backing store. + QFTraverseTables type instance Sing = SQueryFootprint type SQueryFootprint :: QueryFootprint -> Type data SQueryFootprint a where - SQFNoTables :: SQueryFootprint QFNoTables - SQFLookupTables :: SQueryFootprint QFLookupTables - SQFTraverseTables :: SQueryFootprint QFTraverseTables + SQFNoTables :: SQueryFootprint QFNoTables + SQFLookupTables :: SQueryFootprint QFLookupTables + SQFTraverseTables :: SQueryFootprint QFTraverseTables instance SingI QFNoTables where sing = SQFNoTables @@ -109,15 +125,15 @@ instance SingI QFTraverseTables where sing = SQFTraverseTables type SomeBlockQuery :: (QueryFootprint -> Type -> Type) -> Type -data SomeBlockQuery q = - forall footprint result. SingI footprint => SomeBlockQuery !(q footprint result) +data SomeBlockQuery q + = forall footprint result. SingI footprint => SomeBlockQuery !(q footprint result) {------------------------------------------------------------------------------- Block Queries -------------------------------------------------------------------------------} -- | Different queries supported by the ledger, indexed by the result type. -type BlockQuery :: Type -> QueryFootprint -> Type -> Type +type BlockQuery :: Type -> QueryFootprint -> Type -> Type data family BlockQuery -- | Query the ledger extended state. 
@@ -125,22 +141,21 @@ data family BlockQuery -- Used by the LocalStateQuery protocol to allow clients to query the extended -- ledger state. class - -- These instances are not needed for BlockSupportsLedgerQuery but we bundle them here - -- so that we don't need to put them in 'SingleEraBlock' later on - ( - forall fp result. Show (BlockQuery blk fp result), - forall fp. ShowQuery (BlockQuery blk fp) - , SameDepIndex2 (BlockQuery blk) - ) - => BlockSupportsLedgerQuery blk where - + -- These instances are not needed for BlockSupportsLedgerQuery but we bundle them here + -- so that we don't need to put them in 'SingleEraBlock' later on + ( forall fp result. Show (BlockQuery blk fp result) + , forall fp. ShowQuery (BlockQuery blk fp) + , SameDepIndex2 (BlockQuery blk) + ) => + BlockSupportsLedgerQuery blk + where -- | Answer the given query about the extended ledger state, without reading -- ledger tables from the disk. answerPureBlockQuery :: - ExtLedgerCfg blk - -> BlockQuery blk QFNoTables result - -> ExtLedgerState blk EmptyMK - -> result + ExtLedgerCfg blk -> + BlockQuery blk QFNoTables result -> + ExtLedgerState blk EmptyMK -> + result -- | Answer a query that requires to perform a lookup on the ledger tables. As -- consensus always runs with a HardForkBlock, this might result in a @@ -152,11 +167,11 @@ class -- For the hard fork block this will be instantiated to -- 'Ouroboros.Consensus.HardFork.Combinator.Ledger.Query.answerBlockQueryHFLookup'. answerBlockQueryLookup :: - MonadSTM m - => ExtLedgerCfg blk - -> BlockQuery blk QFLookupTables result - -> ReadOnlyForker' m blk - -> m result + MonadSTM m => + ExtLedgerCfg blk -> + BlockQuery blk QFLookupTables result -> + ReadOnlyForker' m blk -> + m result -- | Answer a query that requires to traverse the ledger tables. 
As consensus -- always runs with a HardForkBlock, this might result in a different code @@ -167,11 +182,11 @@ class -- For the hard fork block this will be instantiated to -- 'Ouroboros.Consensus.HardFork.Combinator.Ledger.Query.answerBlockQueryHFTraverse'. answerBlockQueryTraverse :: - MonadSTM m - => ExtLedgerCfg blk - -> BlockQuery blk QFTraverseTables result - -> ReadOnlyForker' m blk - -> m result + MonadSTM m => + ExtLedgerCfg blk -> + BlockQuery blk QFTraverseTables result -> + ReadOnlyForker' m blk -> + m result -- | Is the given query supported in this NTC version? -- @@ -186,9 +201,9 @@ class -- particular this function implements the check described in -- https://ouroboros-consensus.cardano.intersectmbo.org/docs/for-developers/QueryVersioning/#checks. blockQueryIsSupportedOnVersion :: - BlockQuery blk fp result - -> BlockNodeToClientVersion blk - -> Bool + BlockQuery blk fp result -> + BlockNodeToClientVersion blk -> + Bool {------------------------------------------------------------------------------- Queries @@ -196,10 +211,10 @@ class queryName :: Query blk result -> String queryName query = case query of - BlockQuery _ -> "BlockQuery" - GetSystemStart -> "GetSystemStart" - GetChainBlockNo -> "GetChainBlockNo" - GetChainPoint -> "GetChainPoint" + BlockQuery _ -> "BlockQuery" + GetSystemStart -> "GetSystemStart" + GetChainBlockNo -> "GetChainBlockNo" + GetChainPoint -> "GetChainPoint" DebugLedgerConfig -> "DebugLedgerConfig" -- | Different queries supported by the ledger for all block types, indexed @@ -212,22 +227,18 @@ data Query blk result where -- argument is versioned by the @BlockNodeToClientVersion blk@. BlockQuery :: SingI footprint => BlockQuery blk footprint result -> Query blk result - -- | Get the 'SystemStart' time. -- -- Supported by 'QueryVersion' >= 'QueryVersion1'. GetSystemStart :: Query blk SystemStart - -- | Get the 'GetChainBlockNo' time. -- -- Supported by 'QueryVersion' >= 'QueryVersion2'. 
GetChainBlockNo :: Query blk (WithOrigin BlockNo) - -- | Get the 'GetChainPoint' time. -- -- Supported by 'QueryVersion' >= 'QueryVersion2'. GetChainPoint :: Query blk (Point blk) - -- | Get the ledger config. Note that this is a debug query, so we are not -- (yet) guaranteeing stability across node versions. -- @@ -238,32 +249,32 @@ data Query blk result where -- | Answer the given query about the extended ledger state. answerQuery :: - forall blk m result. - (BlockSupportsLedgerQuery blk, ConfigSupportsNode blk, HasAnnTip blk, MonadSTM m) - => ExtLedgerCfg blk - -> ReadOnlyForker' m blk - -> Query blk result - -> m result + forall blk m result. + (BlockSupportsLedgerQuery blk, ConfigSupportsNode blk, HasAnnTip blk, MonadSTM m) => + ExtLedgerCfg blk -> + ReadOnlyForker' m blk -> + Query blk result -> + m result answerQuery config forker query = case query of - BlockQuery (blockQuery :: BlockQuery blk footprint result) -> - case sing :: Sing footprint of - SQFNoTables -> - answerPureBlockQuery config blockQuery <$> - atomically (roforkerGetLedgerState forker) - SQFLookupTables -> - answerBlockQueryLookup config blockQuery forker - SQFTraverseTables -> - answerBlockQueryTraverse config blockQuery forker - GetSystemStart -> - pure $ getSystemStart (topLevelConfigBlock (getExtLedgerCfg config)) - GetChainBlockNo -> - headerStateBlockNo . headerState <$> - atomically (roforkerGetLedgerState forker) - GetChainPoint -> - headerStatePoint . 
headerState <$> - atomically (roforkerGetLedgerState forker) - DebugLedgerConfig -> - pure $ topLevelConfigLedger (getExtLedgerCfg config) + BlockQuery (blockQuery :: BlockQuery blk footprint result) -> + case sing :: Sing footprint of + SQFNoTables -> + answerPureBlockQuery config blockQuery + <$> atomically (roforkerGetLedgerState forker) + SQFLookupTables -> + answerBlockQueryLookup config blockQuery forker + SQFTraverseTables -> + answerBlockQueryTraverse config blockQuery forker + GetSystemStart -> + pure $ getSystemStart (topLevelConfigBlock (getExtLedgerCfg config)) + GetChainBlockNo -> + headerStateBlockNo . headerState + <$> atomically (roforkerGetLedgerState forker) + GetChainPoint -> + headerStatePoint . headerState + <$> atomically (roforkerGetLedgerState forker) + DebugLedgerConfig -> + pure $ topLevelConfigLedger (getExtLedgerCfg config) {------------------------------------------------------------------------------- Query instances @@ -274,82 +285,86 @@ answerQuery config forker query = case query of ------ deriving instance - (forall footprint result. Show (BlockQuery blk footprint result)) - => Show (SomeBlockQuery (BlockQuery blk)) + (forall footprint result. Show (BlockQuery blk footprint result)) => + Show (SomeBlockQuery (BlockQuery blk)) deriving instance - (forall footprint. Show (BlockQuery blk footprint result)) - => Show (Query blk result) + (forall footprint. Show (BlockQuery blk footprint result)) => + Show (Query blk result) -instance (ShowProxy (BlockQuery blk)) => ShowProxy (Query blk) where +instance ShowProxy (BlockQuery blk) => ShowProxy (Query blk) where showProxy (Proxy :: Proxy (Query blk)) = "Query (" ++ showProxy (Proxy @(BlockQuery blk)) ++ ")" instance - (forall footprint. ShowQuery (BlockQuery blk footprint), StandardHash blk) - => ShowQuery (Query blk) where + (forall footprint. 
ShowQuery (BlockQuery blk footprint), StandardHash blk) => + ShowQuery (Query blk) + where showResult (BlockQuery blockQuery) = showResult blockQuery - showResult GetSystemStart = show - showResult GetChainBlockNo = show - showResult GetChainPoint = show - showResult DebugLedgerConfig = const "LedgerConfig{..}" + showResult GetSystemStart = show + showResult GetChainBlockNo = show + showResult GetChainPoint = show + showResult DebugLedgerConfig = const "LedgerConfig{..}" instance Show (SomeBlockQuery (BlockQuery blk)) => Show (SomeSecond Query blk) where - show (SomeSecond (BlockQuery blockQueryA)) = + show (SomeSecond (BlockQuery blockQueryA)) = "Query " ++ show (SomeBlockQuery blockQueryA) - show (SomeSecond GetSystemStart) = "Query GetSystemStart" - show (SomeSecond GetChainBlockNo) = "Query GetChainBlockNo" - show (SomeSecond GetChainPoint) = "Query GetChainPoint" - show (SomeSecond DebugLedgerConfig) = "Query DebugLedgerConfig" + show (SomeSecond GetSystemStart) = "Query GetSystemStart" + show (SomeSecond GetChainBlockNo) = "Query GetChainBlockNo" + show (SomeSecond GetChainPoint) = "Query GetChainPoint" + show (SomeSecond DebugLedgerConfig) = "Query DebugLedgerConfig" queryIsSupportedOnNodeToClientVersion :: - forall blk result. - (SupportedNetworkProtocolVersion blk, BlockSupportsLedgerQuery blk) - => Query blk result - -> NodeToClientVersion - -> Bool + forall blk result. + (SupportedNetworkProtocolVersion blk, BlockSupportsLedgerQuery blk) => + Query blk result -> + NodeToClientVersion -> + Bool queryIsSupportedOnNodeToClientVersion q ntc = case supportedNodeToClientVersions (Proxy @blk) Map.!? 
ntc of Nothing -> False Just bv -> queryIsSupportedOnVersion q qv bv - where - qv = nodeToClientVersionToQueryVersion ntc + where + qv = nodeToClientVersionToQueryVersion ntc queryIsSupportedOnVersion :: - BlockSupportsLedgerQuery blk - => Query blk result - -> QueryVersion - -> BlockNodeToClientVersion blk - -> Bool + BlockSupportsLedgerQuery blk => + Query blk result -> + QueryVersion -> + BlockNodeToClientVersion blk -> + Bool queryIsSupportedOnVersion q qv bv = case q of - BlockQuery q' -> qv >= QueryVersion1 && blockQueryIsSupportedOnVersion q' bv - GetSystemStart{} -> qv >= QueryVersion1 - GetChainBlockNo{} -> qv >= QueryVersion2 - GetChainPoint{} -> qv >= QueryVersion2 - DebugLedgerConfig{} -> qv >= QueryVersion3 + BlockQuery q' -> qv >= QueryVersion1 && blockQueryIsSupportedOnVersion q' bv + GetSystemStart{} -> qv >= QueryVersion1 + GetChainBlockNo{} -> qv >= QueryVersion2 + GetChainPoint{} -> qv >= QueryVersion2 + DebugLedgerConfig{} -> qv >= QueryVersion3 querySupportedVersions :: - forall blk result. - (SupportedNetworkProtocolVersion blk, BlockSupportsLedgerQuery blk) - => Query blk result - -> [NodeToClientVersion] + forall blk result. + (SupportedNetworkProtocolVersion blk, BlockSupportsLedgerQuery blk) => + Query blk result -> + [NodeToClientVersion] querySupportedVersions q = - [ v | v <- [minBound..maxBound] - , queryIsSupportedOnNodeToClientVersion q v + [ v + | v <- [minBound .. 
maxBound] + , queryIsSupportedOnNodeToClientVersion q v ] -- | Exception thrown in the encoders -data QueryEncoderException blk = - -- | A query was submitted that is not supported by the given 'QueryVersion' +data QueryEncoderException blk + = -- | A query was submitted that is not supported by the given 'QueryVersion' QueryEncoderUnsupportedQuery - (SomeSecond Query blk) - QueryVersion - (BlockNodeToClientVersion blk) + (SomeSecond Query blk) + QueryVersion + (BlockNodeToClientVersion blk) -deriving instance (Show (SomeSecond Query blk), Show (BlockNodeToClientVersion blk)) - => Show (QueryEncoderException blk) -instance (Typeable blk, Show (SomeSecond Query blk), Show (BlockNodeToClientVersion blk)) - => Exception (QueryEncoderException blk) +deriving instance + (Show (SomeSecond Query blk), Show (BlockNodeToClientVersion blk)) => + Show (QueryEncoderException blk) +instance + (Typeable blk, Show (SomeSecond Query blk), Show (BlockNodeToClientVersion blk)) => + Exception (QueryEncoderException blk) ------ -- Eq @@ -362,164 +377,165 @@ instance SameDepIndex2 query => Eq (SomeBlockQuery query) where SomeBlockQuery l == SomeBlockQuery r = isJust $ sameDepIndex2 l r instance SameDepIndex2 (BlockQuery blk) => SameDepIndex (Query blk) where - sameDepIndex (BlockQuery blockQueryA) (BlockQuery blockQueryB) - = (\Refl -> Refl) <$> sameDepIndex2 blockQueryA blockQueryB - sameDepIndex (BlockQuery _) _ - = Nothing - sameDepIndex GetSystemStart GetSystemStart - = Just Refl - sameDepIndex GetSystemStart _ - = Nothing - sameDepIndex GetChainBlockNo GetChainBlockNo - = Just Refl - sameDepIndex GetChainBlockNo _ - = Nothing - sameDepIndex GetChainPoint GetChainPoint - = Just Refl - sameDepIndex GetChainPoint _ - = Nothing - sameDepIndex DebugLedgerConfig DebugLedgerConfig - = Just Refl - sameDepIndex DebugLedgerConfig _ - = Nothing + sameDepIndex (BlockQuery blockQueryA) (BlockQuery blockQueryB) = + (\Refl -> Refl) <$> sameDepIndex2 blockQueryA blockQueryB + sameDepIndex 
(BlockQuery _) _ = + Nothing + sameDepIndex GetSystemStart GetSystemStart = + Just Refl + sameDepIndex GetSystemStart _ = + Nothing + sameDepIndex GetChainBlockNo GetChainBlockNo = + Just Refl + sameDepIndex GetChainBlockNo _ = + Nothing + sameDepIndex GetChainPoint GetChainPoint = + Just Refl + sameDepIndex GetChainPoint _ = + Nothing + sameDepIndex DebugLedgerConfig DebugLedgerConfig = + Just Refl + sameDepIndex DebugLedgerConfig _ = + Nothing ------ -- Serialization ------ deriving newtype instance - SerialiseNodeToClient blk ( SomeBlockQuery (query blk)) - => SerialiseNodeToClient blk ((SomeBlockQuery :.: query) blk) + SerialiseNodeToClient blk (SomeBlockQuery (query blk)) => + SerialiseNodeToClient blk ((SomeBlockQuery :.: query) blk) queryEncodeNodeToClient :: - forall blk. - SerialiseNodeToClient blk (SomeBlockQuery (BlockQuery blk)) - => Show (SomeSecond Query blk) - => BlockSupportsLedgerQuery blk - => Show (BlockNodeToClientVersion blk) - => Typeable blk - => CodecConfig blk - -> QueryVersion - -> BlockNodeToClientVersion blk - -> SomeSecond Query blk - -> Encoding -queryEncodeNodeToClient codecConfig queryVersion blockVersion (SomeSecond query) - = requireVersion query $ case query of - BlockQuery blockQuery -> - mconcat - [ encodeListLen 2 - , encodeWord8 0 - , encodeBlockQuery blockQuery - ] - - GetSystemStart -> - mconcat - [ encodeListLen 1 - , encodeWord8 1 - ] - - GetChainBlockNo -> - mconcat - [ encodeListLen 1 - , encodeWord8 2 - ] - - GetChainPoint -> - mconcat - [ encodeListLen 1 - , encodeWord8 3 - ] - - DebugLedgerConfig -> - mconcat - [ encodeListLen 1 - , encodeWord8 4 - ] - where - requireVersion :: Query blk result -> a -> a - requireVersion q a = - if queryIsSupportedOnVersion q queryVersion blockVersion - then a - else throw $ QueryEncoderUnsupportedQuery (SomeSecond query) queryVersion blockVersion - - encodeBlockQuery :: - SingI footprint - => BlockQuery blk footprint result - -> Encoding - encodeBlockQuery blockQuery = - 
encodeNodeToClient - @blk - @(SomeBlockQuery (BlockQuery blk)) - codecConfig - blockVersion - (SomeBlockQuery blockQuery) + forall blk. + SerialiseNodeToClient blk (SomeBlockQuery (BlockQuery blk)) => + Show (SomeSecond Query blk) => + BlockSupportsLedgerQuery blk => + Show (BlockNodeToClientVersion blk) => + Typeable blk => + CodecConfig blk -> + QueryVersion -> + BlockNodeToClientVersion blk -> + SomeSecond Query blk -> + Encoding +queryEncodeNodeToClient codecConfig queryVersion blockVersion (SomeSecond query) = + requireVersion query $ case query of + BlockQuery blockQuery -> + mconcat + [ encodeListLen 2 + , encodeWord8 0 + , encodeBlockQuery blockQuery + ] + GetSystemStart -> + mconcat + [ encodeListLen 1 + , encodeWord8 1 + ] + GetChainBlockNo -> + mconcat + [ encodeListLen 1 + , encodeWord8 2 + ] + GetChainPoint -> + mconcat + [ encodeListLen 1 + , encodeWord8 3 + ] + DebugLedgerConfig -> + mconcat + [ encodeListLen 1 + , encodeWord8 4 + ] + where + requireVersion :: Query blk result -> a -> a + requireVersion q a = + if queryIsSupportedOnVersion q queryVersion blockVersion + then a + else throw $ QueryEncoderUnsupportedQuery (SomeSecond query) queryVersion blockVersion + + encodeBlockQuery :: + SingI footprint => + BlockQuery blk footprint result -> + Encoding + encodeBlockQuery blockQuery = + encodeNodeToClient + @blk + @(SomeBlockQuery (BlockQuery blk)) + codecConfig + blockVersion + (SomeBlockQuery blockQuery) queryDecodeNodeToClient :: - forall blk. - SerialiseNodeToClient blk (SomeBlockQuery (BlockQuery blk)) - => CodecConfig blk - -> QueryVersion - -> BlockNodeToClientVersion blk - -> forall s. 
Decoder s (SomeSecond Query blk) -queryDecodeNodeToClient codecConfig queryVersion blockVersion - = case queryVersion of - QueryVersion1 -> handleTopLevelQuery - QueryVersion2 -> handleTopLevelQuery - QueryVersion3 -> handleTopLevelQuery - where - handleTopLevelQuery :: Decoder s (SomeSecond Query blk) - handleTopLevelQuery = do - size <- decodeListLen - tag <- decodeWord8 - case (size, tag) of - (2, 0) -> requireVersion QueryVersion1 =<< decodeBlockQuery - (1, 1) -> requireVersion QueryVersion1 $ SomeSecond GetSystemStart - (1, 2) -> requireVersion QueryVersion2 $ SomeSecond GetChainBlockNo - (1, 3) -> requireVersion QueryVersion2 $ SomeSecond GetChainPoint - (1, 4) -> requireVersion QueryVersion3 $ SomeSecond DebugLedgerConfig - _ -> fail $ "Query: invalid size and tag" <> show (size, tag) - - requireVersion :: - QueryVersion - -> SomeSecond Query blk - -> Decoder s (SomeSecond Query blk) - requireVersion expectedVersion someSecondQuery = - if queryVersion >= expectedVersion - then return someSecondQuery - else case someSecondQuery of - SomeSecond query -> fail $ "Query: " <> queryName query <> " requires at least " <> show expectedVersion - - decodeBlockQuery :: Decoder s (SomeSecond Query blk) - decodeBlockQuery = do - SomeBlockQuery blockQuery <- decodeNodeToClient + forall blk. + SerialiseNodeToClient blk (SomeBlockQuery (BlockQuery blk)) => + CodecConfig blk -> + QueryVersion -> + BlockNodeToClientVersion blk -> + forall s. 
+ Decoder s (SomeSecond Query blk) +queryDecodeNodeToClient codecConfig queryVersion blockVersion = + case queryVersion of + QueryVersion1 -> handleTopLevelQuery + QueryVersion2 -> handleTopLevelQuery + QueryVersion3 -> handleTopLevelQuery + where + handleTopLevelQuery :: Decoder s (SomeSecond Query blk) + handleTopLevelQuery = do + size <- decodeListLen + tag <- decodeWord8 + case (size, tag) of + (2, 0) -> requireVersion QueryVersion1 =<< decodeBlockQuery + (1, 1) -> requireVersion QueryVersion1 $ SomeSecond GetSystemStart + (1, 2) -> requireVersion QueryVersion2 $ SomeSecond GetChainBlockNo + (1, 3) -> requireVersion QueryVersion2 $ SomeSecond GetChainPoint + (1, 4) -> requireVersion QueryVersion3 $ SomeSecond DebugLedgerConfig + _ -> fail $ "Query: invalid size and tag" <> show (size, tag) + + requireVersion :: + QueryVersion -> + SomeSecond Query blk -> + Decoder s (SomeSecond Query blk) + requireVersion expectedVersion someSecondQuery = + if queryVersion >= expectedVersion + then return someSecondQuery + else case someSecondQuery of + SomeSecond query -> fail $ "Query: " <> queryName query <> " requires at least " <> show expectedVersion + + decodeBlockQuery :: Decoder s (SomeSecond Query blk) + decodeBlockQuery = do + SomeBlockQuery blockQuery <- + decodeNodeToClient @blk @(SomeBlockQuery (BlockQuery blk)) codecConfig blockVersion - return (SomeSecond (BlockQuery blockQuery)) - -instance ( SerialiseBlockQueryResult blk BlockQuery - , Serialise (HeaderHash blk) - , SerialiseNodeToClient blk (LedgerConfig blk) - ) => SerialiseResult blk Query where - encodeResult codecConfig blockVersion (BlockQuery blockQuery) result - = encodeBlockQueryResult codecConfig blockVersion blockQuery result - encodeResult _ _ GetSystemStart result - = toCBOR result - encodeResult _ _ GetChainBlockNo result - = toCBOR result - encodeResult _ _ GetChainPoint result - = encodePoint encode result - encodeResult codecConfig blockVersion DebugLedgerConfig result - = encodeNodeToClient 
codecConfig blockVersion result - - decodeResult codecConfig blockVersion (BlockQuery query) - = decodeBlockQueryResult codecConfig blockVersion query - decodeResult _ _ GetSystemStart - = fromCBOR - decodeResult _ _ GetChainBlockNo - = fromCBOR - decodeResult _ _ GetChainPoint - = decodePoint decode - decodeResult codecConfig blockVersion DebugLedgerConfig - = decodeNodeToClient @blk @(LedgerConfig blk) codecConfig blockVersion + return (SomeSecond (BlockQuery blockQuery)) + +instance + ( SerialiseBlockQueryResult blk BlockQuery + , Serialise (HeaderHash blk) + , SerialiseNodeToClient blk (LedgerConfig blk) + ) => + SerialiseResult blk Query + where + encodeResult codecConfig blockVersion (BlockQuery blockQuery) result = + encodeBlockQueryResult codecConfig blockVersion blockQuery result + encodeResult _ _ GetSystemStart result = + toCBOR result + encodeResult _ _ GetChainBlockNo result = + toCBOR result + encodeResult _ _ GetChainPoint result = + encodePoint encode result + encodeResult codecConfig blockVersion DebugLedgerConfig result = + encodeNodeToClient codecConfig blockVersion result + + decodeResult codecConfig blockVersion (BlockQuery query) = + decodeBlockQueryResult codecConfig blockVersion query + decodeResult _ _ GetSystemStart = + fromCBOR + decodeResult _ _ GetChainBlockNo = + fromCBOR + decodeResult _ _ GetChainPoint = + decodePoint decode + decodeResult codecConfig blockVersion DebugLedgerConfig = + decodeNodeToClient @blk @(LedgerConfig blk) codecConfig blockVersion diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Query/Version.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Query/Version.hs index 9ed29584a3..a0980bb23f 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Query/Version.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Query/Version.hs @@ -1,23 +1,21 @@ -module Ouroboros.Consensus.Ledger.Query.Version ( - 
QueryVersion (..) +module Ouroboros.Consensus.Ledger.Query.Version + ( QueryVersion (..) , nodeToClientVersionToQueryVersion ) where -import Ouroboros.Network.NodeToClient.Version +import Ouroboros.Network.NodeToClient.Version -- | Version of the `Query blk` type. -- -- Multiple top level queries are now supported. The encoding now has -- constructor tags for the different top level queries for QueryVersion1 onwards. data QueryVersion - -- Adds support for 'GetSystemStart'. - = QueryVersion1 - - -- Adds support for 'GetChainBlockNo' and 'GetChainPoint'. - | QueryVersion2 - - -- Adds support for 'DebugLedgerConfig' - | QueryVersion3 + = -- Adds support for 'GetSystemStart'. + QueryVersion1 + | -- Adds support for 'GetChainBlockNo' and 'GetChainPoint'. + QueryVersion2 + | -- Adds support for 'DebugLedgerConfig' + QueryVersion3 deriving (Eq, Ord, Enum, Bounded, Show) -- | Get the @QueryVersion@ supported by this @NodeToClientVersion@. diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsMempool.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsMempool.hs index fa781294de..b3198aeb6c 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsMempool.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsMempool.hs @@ -7,8 +7,8 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Ledger.SupportsMempool ( - ApplyTxErr +module Ouroboros.Consensus.Ledger.SupportsMempool + ( ApplyTxErr , ByteSize32 (..) , ComputeDiffs (..) , ConvertRawTxId (..) @@ -28,23 +28,23 @@ module Ouroboros.Consensus.Ledger.SupportsMempool ( , WhetherToIntervene (..) 
) where -import Codec.Serialise (Serialise) -import Control.DeepSeq (NFData) -import Control.Monad.Except -import Data.ByteString.Short (ShortByteString) -import Data.Coerce (coerce) -import Data.DerivingVia (InstantiatedAt (..)) -import qualified Data.Foldable as Foldable -import Data.Kind (Type) -import Data.Measure (Measure) -import qualified Data.Measure -import Data.Word (Word32) -import GHC.Stack (HasCallStack) -import NoThunks.Class -import Numeric.Natural -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Tables.Utils +import Codec.Serialise (Serialise) +import Control.DeepSeq (NFData) +import Control.Monad.Except +import Data.ByteString.Short (ShortByteString) +import Data.Coerce (coerce) +import Data.DerivingVia (InstantiatedAt (..)) +import Data.Foldable qualified as Foldable +import Data.Kind (Type) +import Data.Measure (Measure) +import Data.Measure qualified +import Data.Word (Word32) +import GHC.Stack (HasCallStack) +import NoThunks.Class +import Numeric.Natural +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Tables.Utils -- | Generalized transaction -- @@ -72,15 +72,15 @@ type family ApplyTxErr blk -- transaction: they must have made some sort of mistake, and we don't want the -- ledger to penalize them. data WhetherToIntervene - = DoNotIntervene - -- ^ We do not trust remote peers, so if a problematic-yet-valid transaction + = -- | We do not trust remote peers, so if a problematic-yet-valid transaction -- arrives over NTN, we accept it; it will end up in a block and the ledger -- will penalize them for it. - | Intervene - -- ^ We trust local clients, so if a problematic-yet-valid transaction + DoNotIntervene + | -- | We trust local clients, so if a problematic-yet-valid transaction -- arrives over NTC, we reject it in order to avoid the ledger penalizing -- them for it. 
- deriving (Show) + Intervene + deriving Show -- | Whether to keep track of the diffs produced by applying the transactions. -- @@ -96,24 +96,25 @@ data WhetherToIntervene -- so this optimization will no longer be needed. That's why we chose -- to go with a boolean isomorph instead of something fancier. data ComputeDiffs - = - -- | This option should be used when syncing the mempool with the + = -- | This option should be used when syncing the mempool with the -- LedgerDB, to store a useful state in the mempool. ComputeDiffs - -- | This option should be used only when snapshotting the mempool, + | -- | This option should be used only when snapshotting the mempool, -- as we discard the resulting state anyways. - | IgnoreDiffs - deriving (Show) - -class ( UpdateLedger blk - , TxLimits blk - , NoThunks (GenTx blk) - , NoThunks (Validated (GenTx blk)) - , Show (GenTx blk) - , Show (Validated (GenTx blk)) - , Show (ApplyTxErr blk) - ) => LedgerSupportsMempool blk where - + IgnoreDiffs + deriving Show + +class + ( UpdateLedger blk + , TxLimits blk + , NoThunks (GenTx blk) + , NoThunks (Validated (GenTx blk)) + , Show (GenTx blk) + , Show (Validated (GenTx blk)) + , Show (ApplyTxErr blk) + ) => + LedgerSupportsMempool blk + where -- | Check whether the internal invariants of the transaction hold. txInvariant :: GenTx blk -> Bool txInvariant = const True @@ -126,13 +127,15 @@ class ( UpdateLedger blk -- -- The resulting ledger state contains the diffs produced by applying this -- transaction alone. 
- applyTx :: LedgerConfig blk - -> WhetherToIntervene - -> SlotNo -- ^ Slot number of the block containing the tx - -> GenTx blk - -> TickedLedgerState blk ValuesMK - -- ^ Contain only the values for the tx to apply - -> Except (ApplyTxErr blk) (TickedLedgerState blk DiffMK, Validated (GenTx blk)) + applyTx :: + LedgerConfig blk -> + WhetherToIntervene -> + -- | Slot number of the block containing the tx + SlotNo -> + GenTx blk -> + -- | Contain only the values for the tx to apply + TickedLedgerState blk ValuesMK -> + Except (ApplyTxErr blk) (TickedLedgerState blk DiffMK, Validated (GenTx blk)) -- | Apply a previously validated transaction to a potentially different -- ledger state @@ -144,14 +147,16 @@ class ( UpdateLedger blk -- The returned ledger state contains the resulting values too so that this -- function can be used to reapply a list of transactions, providing as a -- first state one that contains the values for all the transactions. - reapplyTx :: HasCallStack - => ComputeDiffs - -> LedgerConfig blk - -> SlotNo -- ^ Slot number of the block containing the tx - -> Validated (GenTx blk) - -> TickedLedgerState blk ValuesMK - -- ^ Contains at least the values for the tx to reapply - -> Except (ApplyTxErr blk) (TickedLedgerState blk TrackingMK) + reapplyTx :: + HasCallStack => + ComputeDiffs -> + LedgerConfig blk -> + -- | Slot number of the block containing the tx + SlotNo -> + Validated (GenTx blk) -> + -- | Contains at least the values for the tx to reapply + TickedLedgerState blk ValuesMK -> + Except (ApplyTxErr blk) (TickedLedgerState blk TrackingMK) -- | Apply a list of previously validated transactions to a new ledger state. -- @@ -168,28 +173,34 @@ class ( UpdateLedger blk -- in the same order as they were given, as we will use those later on to -- filter a list of 'TxTicket's. 
reapplyTxs :: - ComputeDiffs - -> LedgerConfig blk - -> SlotNo -- ^ Slot number of the block containing the tx - -> [(Validated (GenTx blk), extra)] - -> TickedLedgerState blk ValuesMK - -> ReapplyTxsResult extra blk + ComputeDiffs -> + LedgerConfig blk -> + -- | Slot number of the block containing the tx + SlotNo -> + [(Validated (GenTx blk), extra)] -> + TickedLedgerState blk ValuesMK -> + ReapplyTxsResult extra blk reapplyTxs doDiffs cfg slot txs st = - (\(err, val, st') -> - ReapplyTxsResult - err - (reverse val) - st' - ) - $ Foldable.foldl' (\(accE, accV, st') (tx, extra) -> - case runExcept (reapplyTx doDiffs cfg slot tx $ trackingToValues st') of - Left err -> (Invalidated tx err : accE, accV, st') - Right st'' -> (accE, (tx, extra) : accV, - case doDiffs of - ComputeDiffs -> prependTrackingDiffs st' st'' - IgnoreDiffs -> st'' - ) - ) ([], [], attachEmptyDiffs st) txs + ( \(err, val, st') -> + ReapplyTxsResult + err + (reverse val) + st' + ) + $ Foldable.foldl' + ( \(accE, accV, st') (tx, extra) -> + case runExcept (reapplyTx doDiffs cfg slot tx $ trackingToValues st') of + Left err -> (Invalidated tx err : accE, accV, st') + Right st'' -> + ( accE + , (tx, extra) : accV + , case doDiffs of + ComputeDiffs -> prependTrackingDiffs st' st'' + IgnoreDiffs -> st'' + ) + ) + ([], [], attachEmptyDiffs st) + txs -- | Discard the evidence that transaction has been previously validated txForgetValidated :: Validated (GenTx blk) -> GenTx blk @@ -213,30 +224,29 @@ class ( UpdateLedger blk -- | Prepend diffs on ledger states prependMempoolDiffs :: - TickedLedgerState blk DiffMK - -> TickedLedgerState blk DiffMK - -> TickedLedgerState blk DiffMK + TickedLedgerState blk DiffMK -> + TickedLedgerState blk DiffMK -> + TickedLedgerState blk DiffMK prependMempoolDiffs = prependDiffs -- | Apply diffs on ledger states applyMempoolDiffs :: - LedgerTables (LedgerState blk) ValuesMK - -> LedgerTables (LedgerState blk) KeysMK - -> TickedLedgerState blk DiffMK - -> TickedLedgerState 
blk ValuesMK + LedgerTables (LedgerState blk) ValuesMK -> + LedgerTables (LedgerState blk) KeysMK -> + TickedLedgerState blk DiffMK -> + TickedLedgerState blk ValuesMK applyMempoolDiffs = applyDiffForKeysOnTables - -data ReapplyTxsResult extra blk = - ReapplyTxsResult { - -- | txs that are now invalid. Order doesn't matter - invalidatedTxs :: ![Invalidated blk] - -- | txs that are valid again, order must be the same as the order in - -- which txs were received - , validatedTxs :: ![(Validated (GenTx blk), extra)] - -- | Resulting ledger state - , resultingState :: !(TickedLedgerState blk TrackingMK) - } +data ReapplyTxsResult extra blk + = ReapplyTxsResult + { invalidatedTxs :: ![Invalidated blk] + -- ^ txs that are now invalid. Order doesn't matter + , validatedTxs :: ![(Validated (GenTx blk), extra)] + -- ^ txs that are valid again, order must be the same as the order in + -- which txs were received + , resultingState :: !(TickedLedgerState blk TrackingMK) + -- ^ Resulting ledger state + } -- | A generalized transaction, 'GenTx', identifier. type TxId :: Type -> Type @@ -246,11 +256,13 @@ data family TxId blk -- -- The mempool will use these to locate transactions, so two different -- transactions should have different identifiers. -class ( Show (TxId tx) - , Ord (TxId tx) - , NoThunks (TxId tx) - ) => HasTxId tx where - +class + ( Show (TxId tx) + , Ord (TxId tx) + , NoThunks (TxId tx) + ) => + HasTxId tx + where -- | Return the 'TxId' of a 'GenTx'. -- -- NOTE: a 'TxId' must be unique up to ledger rules, i.e., two 'GenTx's with @@ -263,7 +275,6 @@ class ( Show (TxId tx) -- | Extract the raw hash bytes from a 'TxId'. class HasTxId tx => ConvertRawTxId tx where - -- | NOTE: The composition @'toRawTxIdHash' . 'txId'@ must satisfy the same -- properties as defined in the docs of 'txId'. toRawTxIdHash :: TxId tx -> ShortByteString @@ -296,12 +307,15 @@ class HasTxs blk where -- bit more complex as it had to take other factors into account (like -- execution units). 
For details please see the individual instances for the -- TxLimits. -class ( Measure (TxMeasure blk) - , HasByteSize (TxMeasure blk) - , NoThunks (TxMeasure blk) - , TxMeasureMetrics (TxMeasure blk) - , Show (TxMeasure blk) - ) => TxLimits blk where +class + ( Measure (TxMeasure blk) + , HasByteSize (TxMeasure blk) + , NoThunks (TxMeasure blk) + , TxMeasureMetrics (TxMeasure blk) + , Show (TxMeasure blk) + ) => + TxLimits blk + where -- | The (possibly multi-dimensional) size of a transaction in a block. type TxMeasure blk @@ -335,20 +349,20 @@ class ( Measure (TxMeasure blk) -- Returns an exception if and only if the transaction violates the per-tx -- limits. txMeasure :: - LedgerConfig blk - -- ^ used at least by HFC's composition logic - -> TickedLedgerState blk ValuesMK - -- ^ This state needs values as a transaction measure might depend on - -- those. For example in Cardano they look at the reference scripts. - -> GenTx blk - -> Except (ApplyTxErr blk) (TxMeasure blk) + -- | used at least by HFC's composition logic + LedgerConfig blk -> + -- | This state needs values as a transaction measure might depend on + -- those. For example in Cardano they look at the reference scripts. + TickedLedgerState blk ValuesMK -> + GenTx blk -> + Except (ApplyTxErr blk) (TxMeasure blk) -- | What is the allowed capacity for the txs in an individual block? blockCapacityTxMeasure :: - LedgerConfig blk - -- ^ at least for symmetry with 'txMeasure' - -> TickedLedgerState blk mk - -> TxMeasure blk + -- | at least for symmetry with 'txMeasure' + LedgerConfig blk -> + TickedLedgerState blk mk -> + TxMeasure blk -- | We intentionally do not declare a 'Num' instance! We prefer @ByteSize32@ -- to occur explicitly in the code where possible, for @@ -370,15 +384,17 @@ class ( Measure (TxMeasure blk) -- as encoders. Thus 'Natural' would merely defer the overflow concern, and -- even risks instilling a false sense that overflow need not be considered at -- all. 
-newtype ByteSize32 = ByteSize32 { unByteSize32 :: Word32 } - deriving stock (Show) - deriving newtype (Eq, Ord) - deriving newtype (NFData) - deriving newtype (Serialise) - deriving (Monoid, Semigroup) - via (InstantiatedAt Measure (IgnoringOverflow ByteSize32)) - deriving (NoThunks) - via OnlyCheckWhnfNamed "ByteSize" ByteSize32 +newtype ByteSize32 = ByteSize32 {unByteSize32 :: Word32} + deriving stock Show + deriving newtype (Eq, Ord) + deriving newtype NFData + deriving newtype Serialise + deriving + (Monoid, Semigroup) + via (InstantiatedAt Measure (IgnoringOverflow ByteSize32)) + deriving + NoThunks + via OnlyCheckWhnfNamed "ByteSize" ByteSize32 -- | @'IgnoringOverflow' a@ has the same semantics as @a@, except it ignores -- the fact that @a@ can overflow. @@ -391,20 +407,20 @@ newtype ByteSize32 = ByteSize32 { unByteSize32 :: Word32 } -- will break assumptions, so overflow must therefore be guarded against. -- -- TODO upstream this to the @measure@ package -newtype IgnoringOverflow a = IgnoringOverflow { unIgnoringOverflow :: a } - deriving stock (Show) - deriving newtype (Eq, Ord) - deriving newtype (NFData) - deriving newtype (Monoid, Semigroup) - deriving newtype (NoThunks) - deriving newtype (HasByteSize) - deriving newtype (TxMeasureMetrics) +newtype IgnoringOverflow a = IgnoringOverflow {unIgnoringOverflow :: a} + deriving stock Show + deriving newtype (Eq, Ord) + deriving newtype NFData + deriving newtype (Monoid, Semigroup) + deriving newtype NoThunks + deriving newtype HasByteSize + deriving newtype TxMeasureMetrics instance Measure (IgnoringOverflow ByteSize32) where zero = coerce (0 :: Word32) plus = coerce $ (+) @Word32 - min = coerce $ min @Word32 - max = coerce $ max @Word32 + min = coerce $ min @Word32 + max = coerce $ max @Word32 class HasByteSize a where -- | The byte size component (of 'TxMeasure') @@ -427,6 +443,7 @@ instance TxMeasureMetrics ByteSize32 where -- | A transaction that was previously valid. 
Used to clarify the types on the -- 'reapplyTxs' function. -data Invalidated blk = Invalidated { getInvalidated :: !(Validated (GenTx blk)) - , getReason :: !(ApplyTxErr blk) - } +data Invalidated blk = Invalidated + { getInvalidated :: !(Validated (GenTx blk)) + , getReason :: !(ApplyTxErr blk) + } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsPeerSelection.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsPeerSelection.hs index f94fd1c730..8edb3edd83 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsPeerSelection.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsPeerSelection.hs @@ -1,8 +1,9 @@ -module Ouroboros.Consensus.Ledger.SupportsPeerSelection ( - LedgerSupportsPeerSelection (..) +module Ouroboros.Consensus.Ledger.SupportsPeerSelection + ( LedgerSupportsPeerSelection (..) , PoolStake (..) , StakePoolRelay (..) , stakePoolRelayAccessPoint + -- * Re-exports for convenience , DomainAccessPoint (..) , IP (..) @@ -10,31 +11,34 @@ module Ouroboros.Consensus.Ledger.SupportsPeerSelection ( , RelayAccessPoint (..) ) where -import Control.DeepSeq (NFData (..)) -import Data.List.NonEmpty (NonEmpty) -import Ouroboros.Consensus.Ledger.Abstract (LedgerState) -import Ouroboros.Network.PeerSelection.LedgerPeers.Type - (PoolStake (..)) -import Ouroboros.Network.PeerSelection.RelayAccessPoint - (DomainAccessPoint (..), IP (..), PortNumber, - RelayAccessPoint (..)) +import Control.DeepSeq (NFData (..)) +import Data.List.NonEmpty (NonEmpty) +import Ouroboros.Consensus.Ledger.Abstract (LedgerState) +import Ouroboros.Network.PeerSelection.LedgerPeers.Type + ( PoolStake (..) + ) +import Ouroboros.Network.PeerSelection.RelayAccessPoint + ( DomainAccessPoint (..) + , IP (..) + , PortNumber + , RelayAccessPoint (..) 
+ ) -- | A relay registered for a stake pool -data StakePoolRelay = - -- | One of the current relays +data StakePoolRelay + = -- | One of the current relays CurrentRelay RelayAccessPoint - - -- | One of the future relays - | FutureRelay RelayAccessPoint + | -- | One of the future relays + FutureRelay RelayAccessPoint deriving (Show, Eq) instance NFData StakePoolRelay where - rnf (CurrentRelay ra) = rnf ra - rnf (FutureRelay ra) = rnf ra + rnf (CurrentRelay ra) = rnf ra + rnf (FutureRelay ra) = rnf ra stakePoolRelayAccessPoint :: StakePoolRelay -> RelayAccessPoint stakePoolRelayAccessPoint (CurrentRelay ra) = ra -stakePoolRelayAccessPoint (FutureRelay ra) = ra +stakePoolRelayAccessPoint (FutureRelay ra) = ra class LedgerSupportsPeerSelection blk where -- | Return peers registered in the ledger ordered by descending 'PoolStake'. diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsProtocol.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsProtocol.hs index 53e3d25936..1babfe531d 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsProtocol.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/SupportsProtocol.hs @@ -2,32 +2,36 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE ScopedTypeVariables #-} -module Ouroboros.Consensus.Ledger.SupportsProtocol ( - GenesisWindow (..) +module Ouroboros.Consensus.Ledger.SupportsProtocol + ( GenesisWindow (..) , LedgerSupportsProtocol (..) 
) where -import Control.Monad.Except -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables) -import Ouroboros.Consensus.Protocol.Abstract +import Control.Monad.Except +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables) +import Ouroboros.Consensus.Protocol.Abstract -- | Link protocol to ledger -class ( BlockSupportsProtocol blk - , UpdateLedger blk - , ValidateEnvelope blk - ) => LedgerSupportsProtocol blk where +class + ( BlockSupportsProtocol blk + , UpdateLedger blk + , ValidateEnvelope blk + ) => + LedgerSupportsProtocol blk + where -- | Extract the ledger view from the given ticked ledger state -- -- See 'ledgerViewForecastAt' for a discussion and precise definition of the -- relation between this and forecasting. - protocolLedgerView :: LedgerConfig blk - -> Ticked (LedgerState blk) mk - -> LedgerView (BlockProtocol blk) + protocolLedgerView :: + LedgerConfig blk -> + Ticked (LedgerState blk) mk -> + LedgerView (BlockProtocol blk) -- | Get a forecast at the given ledger state. -- @@ -65,35 +69,37 @@ class ( BlockSupportsProtocol blk -- -- See 'lemma_ledgerViewForecastAt_applyChainTick'. 
   ledgerViewForecastAt ::
-    HasCallStack
-    => LedgerConfig blk
-    -> LedgerState blk mk
-    -> Forecast (LedgerView (BlockProtocol blk))
+    HasCallStack =>
+    LedgerConfig blk ->
+    LedgerState blk mk ->
+    Forecast (LedgerView (BlockProtocol blk))

 -- | Relation between 'ledgerViewForecastAt' and 'applyChainTick'
-_lemma_ledgerViewForecastAt_applyChainTick
-  :: ( LedgerSupportsProtocol blk
-     , Eq (LedgerView (BlockProtocol blk))
-     )
-  => LedgerConfig blk
-  -> LedgerState blk mk
-  -> Forecast (LedgerView (BlockProtocol blk))
-  -> SlotNo
-  -> Either String ()
+_lemma_ledgerViewForecastAt_applyChainTick ::
+  ( LedgerSupportsProtocol blk
+  , Eq (LedgerView (BlockProtocol blk))
+  ) =>
+  LedgerConfig blk ->
+  LedgerState blk mk ->
+  Forecast (LedgerView (BlockProtocol blk)) ->
+  SlotNo ->
+  Either String ()
 _lemma_ledgerViewForecastAt_applyChainTick cfg st forecast for
-  | NotOrigin for >= ledgerTipSlot st
-  , let lhs = forecastFor forecast for
-        rhs = protocolLedgerView cfg
-            . applyChainTick OmitLedgerEvents cfg for
-            . forgetLedgerTables
-            $ st
-  , Right lhs' <- runExcept lhs
-  , lhs' /= rhs
-  = Left $ unlines
-      [ "ledgerViewForecastAt /= protocolLedgerView . applyChainTick:"
-      , show lhs'
-      , " /= "
-      , show rhs
-      ]
-  | otherwise
-  = Right ()
+  | NotOrigin for >= ledgerTipSlot st
+  , let lhs = forecastFor forecast for
+        rhs =
+          protocolLedgerView cfg
+            . applyChainTick OmitLedgerEvents cfg for
+            . forgetLedgerTables
+            $ st
+  , Right lhs' <- runExcept lhs
+  , lhs' /= rhs =
+      Left $
+        unlines
+          [ "ledgerViewForecastAt /= protocolLedgerView . applyChainTick:"
+          , show lhs'
+          , " /= "
+          , show rhs
+          ]
+  | otherwise =
+      Right ()
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables.hs
index ce3f51fb6d..1c690a11bc 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables.hs
@@ -154,18 +154,23 @@
 -- ['KeysMK']: Contains a @Data.Set@ of txins.
 --
 -- ['SeqDiffMK']: A fingertree of 'DiffMK's.
-module Ouroboros.Consensus.Ledger.Tables (
-    -- * Core
+module Ouroboros.Consensus.Ledger.Tables
+  ( -- * Core
     module Ouroboros.Consensus.Ledger.Tables.Basics
   , module Ouroboros.Consensus.Ledger.Tables.MapKind
+
     -- * Utilities
   , module Ouroboros.Consensus.Ledger.Tables.Combinators
+
     -- * Basic LedgerState classes
+
     -- ** Stowing ledger tables
   , CanStowLedgerTables (..)
+
     -- ** Extracting and injecting ledger tables
   , HasLedgerTables (..)
   , HasTickedLedgerTables
+
     -- * Serialization
   , SerializeTablesHint
   , SerializeTablesWithHint (..)
@@ -173,6 +178,7 @@ module Ouroboros.Consensus.Ledger.Tables (
   , defaultEncodeTablesWithHint
   , valuesMKDecoder
   , valuesMKEncoder
+
     -- * Special classes
   , LedgerTablesAreTrivial
   , TrivialLedgerTables (..)
@@ -180,18 +186,18 @@ module Ouroboros.Consensus.Ledger.Tables (
   , trivialLedgerTables
   ) where

-import qualified Codec.CBOR.Decoding as CBOR
-import qualified Codec.CBOR.Encoding as CBOR
-import           Data.Kind (Constraint, Type)
-import qualified Data.Map.Strict as Map
-import           Data.MemPack
-import           Data.Void (Void)
-import           NoThunks.Class (NoThunks (..))
-import           Ouroboros.Consensus.Ledger.Tables.Basics
-import           Ouroboros.Consensus.Ledger.Tables.Combinators
-import           Ouroboros.Consensus.Ledger.Tables.MapKind
-import           Ouroboros.Consensus.Ticked
-import           Ouroboros.Consensus.Util.IndexedMemPack
+import Codec.CBOR.Decoding qualified as CBOR
+import Codec.CBOR.Encoding qualified as CBOR
+import Data.Kind (Constraint, Type)
+import Data.Map.Strict qualified as Map
+import Data.MemPack
+import Data.Void (Void)
+import NoThunks.Class (NoThunks (..))
+import Ouroboros.Consensus.Ledger.Tables.Basics
+import Ouroboros.Consensus.Ledger.Tables.Combinators
+import Ouroboros.Consensus.Ledger.Tables.MapKind
+import Ouroboros.Consensus.Ticked
+import Ouroboros.Consensus.Util.IndexedMemPack

 {-------------------------------------------------------------------------------
   Basic LedgerState classes
-------------------------------------------------------------------------------}
@@ -200,24 +206,26 @@ import Ouroboros.Consensus.Util.IndexedMemPack

 -- | Extracting @'LedgerTables'@ from @l mk@ (which will share the same @mk@),
 -- or replacing the @'LedgerTables'@ associated to a particular @l@.
 type HasLedgerTables :: LedgerStateKind -> Constraint
-class ( Ord (TxIn l)
-      , Eq (TxOut l)
-      , Show (TxIn l)
-      , Show (TxOut l)
-      , NoThunks (TxIn l)
-      , NoThunks (TxOut l)
-      , MemPack (TxIn l)
-      , IndexedMemPack (MemPackIdx l EmptyMK) (TxOut l)
-      ) => HasLedgerTables l where
-
+class
+  ( Ord (TxIn l)
+  , Eq (TxOut l)
+  , Show (TxIn l)
+  , Show (TxOut l)
+  , NoThunks (TxIn l)
+  , NoThunks (TxOut l)
+  , MemPack (TxIn l)
+  , IndexedMemPack (MemPackIdx l EmptyMK) (TxOut l)
+  ) =>
+  HasLedgerTables l
+  where
   -- | Extract the ledger tables from a ledger state
   --
   -- The constraints on @mk@ are necessary because the 'CardanoBlock' instance
   -- uses them.
   projectLedgerTables ::
-    (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk)
-    => l mk
-    -> LedgerTables l mk
+    (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) =>
+    l mk ->
+    LedgerTables l mk

   -- | Overwrite the tables in the given ledger state.
   --
@@ -230,27 +238,31 @@ class ( Ord (TxIn l)
   -- The constraints on @mk@ are necessary because the 'CardanoBlock' instance
   -- uses them.
   withLedgerTables ::
-    (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk)
-    => l any
-    -> LedgerTables l mk
-    -> l mk
-
-instance ( Ord (TxIn l)
-         , Eq (TxOut l)
-         , Show (TxIn l)
-         , Show (TxOut l)
-         , NoThunks (TxIn l)
-         , NoThunks (TxOut l)
-         , MemPack (TxIn l)
-         , IndexedMemPack (MemPackIdx l EmptyMK) (TxOut l)
-         ) => HasLedgerTables (LedgerTables l) where
+    (CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) =>
+    l any ->
+    LedgerTables l mk ->
+    l mk
+
+instance
+  ( Ord (TxIn l)
+  , Eq (TxOut l)
+  , Show (TxIn l)
+  , Show (TxOut l)
+  , NoThunks (TxIn l)
+  , NoThunks (TxOut l)
+  , MemPack (TxIn l)
+  , IndexedMemPack (MemPackIdx l EmptyMK) (TxOut l)
+  ) =>
+  HasLedgerTables (LedgerTables l)
+  where
   projectLedgerTables = castLedgerTables
   withLedgerTables _ = castLedgerTables

 -- | Convenience class, useful for partially applying the composition of
 -- 'HasLedgerTables' and 'Ticked'.
 type HasTickedLedgerTables :: LedgerStateKind -> Constraint
-class HasLedgerTables (Ticked l) => HasTickedLedgerTables l where
+class HasLedgerTables (Ticked l) => HasTickedLedgerTables l
+
 instance HasLedgerTables (Ticked l) => HasTickedLedgerTables l

 -- | LedgerTables are projections of data from a LedgerState and as such they
@@ -265,8 +277,8 @@ instance HasLedgerTables (Ticked l) => HasTickedLedgerTables l
 -- instance of this class, but HardForkBlocks might avoid doing so.
 type CanStowLedgerTables :: LedgerStateKind -> Constraint
 class CanStowLedgerTables l where
-  stowLedgerTables   :: l ValuesMK -> l EmptyMK
-  unstowLedgerTables :: l EmptyMK  -> l ValuesMK
+  stowLedgerTables :: l ValuesMK -> l EmptyMK
+  unstowLedgerTables :: l EmptyMK -> l ValuesMK

 {-------------------------------------------------------------------------------
   Serialization Codecs
@@ -275,19 +287,21 @@ class CanStowLedgerTables l where
-------------------------------------------------------------------------------}

 -- | Default encoder of @'LedgerTables' l ''ValuesMK'@ to be used by the
 -- in-memory backing store.
 valuesMKEncoder ::
-  forall l. SerializeTablesWithHint l
-  => l EmptyMK
-  -> LedgerTables l ValuesMK
-  -> CBOR.Encoding
+  forall l.
+  SerializeTablesWithHint l =>
+  l EmptyMK ->
+  LedgerTables l ValuesMK ->
+  CBOR.Encoding
 valuesMKEncoder st tbs =
   CBOR.encodeListLen 1 <> encodeTablesWithHint st tbs

 -- | Default decoder of @'LedgerTables' l ''ValuesMK'@ to be used by the
 -- in-memory backing store.
 valuesMKDecoder ::
-  forall l s. SerializeTablesWithHint l
-  => l EmptyMK
-  -> CBOR.Decoder s (LedgerTables l ValuesMK)
+  forall l s.
+  SerializeTablesWithHint l =>
+  l EmptyMK ->
+  CBOR.Decoder s (LedgerTables l ValuesMK)
 valuesMKDecoder st =
   CBOR.decodeListLenOf 1 >> decodeTablesWithHint st
@@ -305,44 +319,51 @@ valuesMKDecoder st =
 -- are somewhat degenerate.
 class SerializeTablesWithHint l where
   encodeTablesWithHint ::
-    SerializeTablesHint (LedgerTables l ValuesMK)
-    -> LedgerTables l ValuesMK -> CBOR.Encoding
+    SerializeTablesHint (LedgerTables l ValuesMK) ->
+    LedgerTables l ValuesMK ->
+    CBOR.Encoding
   decodeTablesWithHint ::
-    SerializeTablesHint (LedgerTables l ValuesMK)
-    -> CBOR.Decoder s (LedgerTables l ValuesMK)
+    SerializeTablesHint (LedgerTables l ValuesMK) ->
+    CBOR.Decoder s (LedgerTables l ValuesMK)

 -- This is just for the BackingStore Lockstep tests. Once V1 is gone
 -- we can inline it above.

 -- | The hint for 'SerializeTablesWithHint'
 type family SerializeTablesHint values :: Type
+
 type instance SerializeTablesHint (LedgerTables l ValuesMK) = l EmptyMK

 defaultEncodeTablesWithHint ::
-  (MemPack (TxIn l), MemPack (TxOut l))
-  => SerializeTablesHint (LedgerTables l ValuesMK)
-  -> LedgerTables l ValuesMK -> CBOR.Encoding
+  (MemPack (TxIn l), MemPack (TxOut l)) =>
+  SerializeTablesHint (LedgerTables l ValuesMK) ->
+  LedgerTables l ValuesMK ->
+  CBOR.Encoding
 defaultEncodeTablesWithHint _ (LedgerTables (ValuesMK tbs)) =
-  mconcat [ CBOR.encodeMapLen (fromIntegral $ Map.size tbs)
-          , Map.foldMapWithKey (\k v ->
-              mconcat [ CBOR.encodeBytes (packByteString k)
-                      , CBOR.encodeBytes (packByteString v)
-                      ]
-            ) tbs
-          ]
+  mconcat
+    [ CBOR.encodeMapLen (fromIntegral $ Map.size tbs)
+    , Map.foldMapWithKey
+        ( \k v ->
+            mconcat
+              [ CBOR.encodeBytes (packByteString k)
+              , CBOR.encodeBytes (packByteString v)
+              ]
+        )
+        tbs
+    ]

 defaultDecodeTablesWithHint ::
-  (Ord (TxIn l), MemPack (TxIn l), MemPack (TxOut l))
-  => SerializeTablesHint (LedgerTables l ValuesMK)
-  -> CBOR.Decoder s (LedgerTables l ValuesMK)
+  (Ord (TxIn l), MemPack (TxIn l), MemPack (TxOut l)) =>
+  SerializeTablesHint (LedgerTables l ValuesMK) ->
+  CBOR.Decoder s (LedgerTables l ValuesMK)
 defaultDecodeTablesWithHint _ = do
   n <- CBOR.decodeMapLen
   LedgerTables . ValuesMK <$> go n Map.empty
-  where
-    go 0 m = pure m
-    go n !m = do
-      (k, v) <- (,) <$> (unpackMonadFail =<< CBOR.decodeBytes) <*> (unpackMonadFail =<< CBOR.decodeBytes)
-      go (n - 1) (Map.insert k v m)
+ where
+  go 0 m = pure m
+  go n !m = do
+    (k, v) <- (,) <$> (unpackMonadFail =<< CBOR.decodeBytes) <*> (unpackMonadFail =<< CBOR.decodeBytes)
+    go (n - 1) (Map.insert k v m)

 {-------------------------------------------------------------------------------
   Special classes of ledger states
@@ -364,16 +385,16 @@ class (TxIn l ~ Void, TxOut l ~ Void) => LedgerTablesAreTrivial l where
   convertMapKind :: l mk -> l mk'

 trivialLedgerTables ::
-  (ZeroableMK mk, LedgerTablesAreTrivial l)
-  => LedgerTables l mk
+  (ZeroableMK mk, LedgerTablesAreTrivial l) =>
+  LedgerTables l mk
 trivialLedgerTables = LedgerTables emptyMK

 -- | A newtype to @derive via@ the instances for blocks with trivial ledger
 -- tables.
 type TrivialLedgerTables :: LedgerStateKind -> MapKind -> Type
-newtype TrivialLedgerTables l mk = TrivialLedgerTables { untrivialLedgerTables :: l mk }
+newtype TrivialLedgerTables l mk = TrivialLedgerTables {untrivialLedgerTables :: l mk}

-type instance TxIn  (TrivialLedgerTables l) = TxIn l
+type instance TxIn (TrivialLedgerTables l) = TxIn l
 type instance TxOut (TrivialLedgerTables l) = TxOut l

 instance LedgerTablesAreTrivial l => LedgerTablesAreTrivial (TrivialLedgerTables l) where
@@ -394,7 +415,7 @@ instance IndexedMemPack (TrivialLedgerTables l EmptyMK) Void where
   indexedUnpackM _ = unpackM

 instance SerializeTablesWithHint (TrivialLedgerTables l) where
-  decodeTablesWithHint _ = do
-    _ <- CBOR.decodeMapLen
-    pure (LedgerTables $ ValuesMK Map.empty)
-  encodeTablesWithHint _ _ = CBOR.encodeMapLen 0
+  decodeTablesWithHint _ = do
+    _ <- CBOR.decodeMapLen
+    pure (LedgerTables $ ValuesMK Map.empty)
+  encodeTablesWithHint _ _ = CBOR.encodeMapLen 0
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Basics.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Basics.hs
index 34988e695f..286a6e8c29 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Basics.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Basics.hs
@@ -10,13 +10,16 @@
 {-# LANGUAGE TypeOperators #-}
 {-# LANGUAGE UndecidableInstances #-}

-module Ouroboros.Consensus.Ledger.Tables.Basics (
-    -- * Kinds
-    --
+module Ouroboros.Consensus.Ledger.Tables.Basics
+  ( -- * Kinds
+
+    --
+
     -- | For convenience' sake, we define these kinds which convey the intended
     -- instantiation for the type variables.
     LedgerStateKind
   , MapKind
+
     -- * Ledger tables
   , LedgerTables (..)
   , MemPackIdx
@@ -26,11 +29,11 @@ module Ouroboros.Consensus.Ledger.Tables.Basics (
   , castLedgerTables
   ) where

-import           Data.Coerce (coerce)
-import           Data.Kind (Type)
-import           GHC.Generics (Generic)
-import           NoThunks.Class (NoThunks)
-import           Ouroboros.Consensus.Ticked (Ticked)
+import Data.Coerce (coerce)
+import Data.Kind (Type)
+import GHC.Generics (Generic)
+import NoThunks.Class (NoThunks)
+import Ouroboros.Consensus.Ticked (Ticked)

 {-------------------------------------------------------------------------------
   Kinds
@@ -38,7 +41,8 @@ import Ouroboros.Consensus.Ticked (Ticked)

 -- | Something that holds two types, which intend to represent /keys/ and
 -- /values/.
-type MapKind = {- key -} Type -> {- value -} Type -> Type
+type MapKind {- key -} = Type {- value -} -> Type -> Type
+
 type LedgerStateKind = MapKind -> Type

 {-------------------------------------------------------------------------------
@@ -65,17 +69,20 @@ type LedgerStateKind = MapKind -> Type
 -- The @mk@ can be instantiated to anything that is map-like, i.e. that expects
 -- two type parameters, the key and the value.
 type LedgerTables :: LedgerStateKind -> MapKind -> Type
-newtype LedgerTables l mk = LedgerTables {
-    getLedgerTables :: mk (TxIn l) (TxOut l)
+newtype LedgerTables l mk = LedgerTables
+  { getLedgerTables :: mk (TxIn l) (TxOut l)
   }
   deriving stock Generic

-deriving stock instance Show (mk (TxIn l) (TxOut l))
-                     => Show (LedgerTables l mk)
-deriving stock instance Eq (mk (TxIn l) (TxOut l))
-                     => Eq (LedgerTables l mk)
-deriving newtype instance NoThunks (mk (TxIn l) (TxOut l))
-                       => NoThunks (LedgerTables l mk)
+deriving stock instance
+  Show (mk (TxIn l) (TxOut l)) =>
+  Show (LedgerTables l mk)
+deriving stock instance
+  Eq (mk (TxIn l) (TxOut l)) =>
+  Eq (LedgerTables l mk)
+deriving newtype instance
+  NoThunks (mk (TxIn l) (TxOut l)) =>
+  NoThunks (LedgerTables l mk)

 -- | Each @LedgerState@ instance will have the notion of a @TxIn@ for the tables.
 --
@@ -90,10 +97,10 @@ type family TxIn l
 type TxOut :: LedgerStateKind -> Type
 type family TxOut l

-type instance TxIn  (LedgerTables l) = TxIn l
+type instance TxIn (LedgerTables l) = TxIn l
 type instance TxOut (LedgerTables l) = TxOut l
-type instance TxIn  (Ticked l) = TxIn l
-type instance TxOut (Ticked l) = TxOut l
+type instance TxIn (Ticked l) = TxIn l
+type instance TxOut (Ticked l) = TxOut l

 -- | Auxiliary information for @IndexedMemPack@.
 type MemPackIdx :: LedgerStateKind -> MapKind -> Type
@@ -105,7 +112,7 @@ type family MemPackIdx l mk where
 type SameUtxoTypes l l' = (TxIn l ~ TxIn l', TxOut l ~ TxOut l')

 castLedgerTables ::
-  SameUtxoTypes l l'
-  => LedgerTables l mk
-  -> LedgerTables l' mk
+  SameUtxoTypes l l' =>
+  LedgerTables l mk ->
+  LedgerTables l' mk
 castLedgerTables = coerce
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Combinators.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Combinators.hs
index 18e5910f2c..431d625b07 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Combinators.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Combinators.hs
@@ -11,7 +11,6 @@
 {-# LANGUAGE StandaloneKindSignatures #-}
 {-# LANGUAGE TypeOperators #-}
 {-# LANGUAGE UndecidableInstances #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}

 -- | Ledger tables are barbie-types (see @barbies@ package), though unfortunately
@@ -30,50 +29,60 @@
 --
 -- TODO: if we make mapkinds of kind @(k1, k2) -> Type@ instead of @k1 -> k2 ->
 -- Type@, then we could reuse most of the @barbies@ machinery.
-module Ouroboros.Consensus.Ledger.Tables.Combinators (
-    -- * Common constraints
+module Ouroboros.Consensus.Ledger.Tables.Combinators
+  ( -- * Common constraints
     LedgerTableConstraints
+
     -- * Functor
   , ltmap
+
     -- * Traversable
   , lttraverse
+
     -- ** Utility functions
   , ltsequence
+
     -- * Applicative
   , ltprod
   , ltpure
+
     -- ** Utility functions
   , ltap
   , ltliftA
   , ltliftA2
   , ltliftA3
   , ltliftA4
+
     -- * Applicative and Traversable
   , ltzipWith2A
+
     -- * Collapsing
   , ltcollapse
+
     -- * Lifted functions
   , fn2_1
   , fn2_2
   , fn2_3
   , fn2_4
   , type (-..->) (..)
+
     -- ** Re-exports of utils
   , (...:)
   , (..:)
   , (.:)
+
     -- * Basic bifunctors
   , K2 (..)
   , type (:..:) (..)
   ) where

-import           Data.Bifunctor
-import           Data.Kind
-import           Data.SOP.Functors
-import           Ouroboros.Consensus.Ledger.Tables.Basics
-import           Ouroboros.Consensus.Ledger.Tables.MapKind
-import           Ouroboros.Consensus.Util ((...:), (..:), (.:))
-import           Ouroboros.Consensus.Util.IndexedMemPack
+import Data.Bifunctor
+import Data.Kind
+import Data.SOP.Functors
+import Ouroboros.Consensus.Ledger.Tables.Basics
+import Ouroboros.Consensus.Ledger.Tables.MapKind
+import Ouroboros.Consensus.Util ((...:), (..:), (.:))
+import Ouroboros.Consensus.Util.IndexedMemPack

 {-------------------------------------------------------------------------------
   Common constraints
-------------------------------------------------------------------------------}
@@ -84,12 +93,15 @@ import Ouroboros.Consensus.Util.IndexedMemPack
 -- deltas instead of us being the ones that compute them, we can probably drop
 -- this constraint.
 type LedgerTableConstraints l =
-  ( Ord (TxIn l), Eq (TxOut l), MemPack (TxIn l)
+  ( Ord (TxIn l)
+  , Eq (TxOut l)
+  , MemPack (TxIn l)
   , IndexedMemPack (MemPackIdx l EmptyMK) (TxOut l)
   )

 type LedgerTableConstraints' l k v =
-  ( Ord k, Eq v
+  ( Ord k
+  , Eq v
   , MemPack k
   , IndexedMemPack (MemPackIdx l EmptyMK) v
   )
@@ -100,10 +112,10 @@ type LedgerTableConstraints' l k v =

 -- | Like 'bmap', but for ledger tables.
 ltmap ::
-  LedgerTableConstraints l
-  => (forall k v. (LedgerTableConstraints' l k v) => mk1 k v -> mk2 k v)
-  -> LedgerTables l mk1
-  -> LedgerTables l mk2
+  LedgerTableConstraints l =>
+  (forall k v. LedgerTableConstraints' l k v => mk1 k v -> mk2 k v) ->
+  LedgerTables l mk1 ->
+  LedgerTables l mk2
 ltmap f (LedgerTables x) = LedgerTables $ f x

 {-------------------------------------------------------------------------------
@@ -112,10 +124,10 @@ ltmap f (LedgerTables x) = LedgerTables $ f x

 -- | Like 'btraverse', but for ledger tables.
 lttraverse ::
-  (Applicative f, LedgerTableConstraints l)
-  => (forall k v. (LedgerTableConstraints' l k v) => mk1 k v -> f (mk2 k v))
-  -> LedgerTables l mk1
-  -> f (LedgerTables l mk2)
+  (Applicative f, LedgerTableConstraints l) =>
+  (forall k v. LedgerTableConstraints' l k v => mk1 k v -> f (mk2 k v)) ->
+  LedgerTables l mk1 ->
+  f (LedgerTables l mk2)
 lttraverse f (LedgerTables x) = LedgerTables <$> f x

 --
@@ -123,9 +135,9 @@ lttraverse f (LedgerTables x) = LedgerTables <$> f x
 --
 ltsequence ::
-  (Applicative f, LedgerTableConstraints l)
-  => LedgerTables l (f :..: mk)
-  -> f (LedgerTables l mk)
+  (Applicative f, LedgerTableConstraints l) =>
+  LedgerTables l (f :..: mk) ->
+  f (LedgerTables l mk)
 ltsequence = lttraverse unComp2

 {-------------------------------------------------------------------------------
@@ -134,9 +146,9 @@ ltsequence = lttraverse unComp2

 -- | Like 'bpure', but for ledger tables.
 ltpure ::
-  LedgerTableConstraints l
-  => (forall k v. (LedgerTableConstraints' l k v) => mk k v)
-  -> LedgerTables l mk
+  LedgerTableConstraints l =>
+  (forall k v. LedgerTableConstraints' l k v => mk k v) ->
+  LedgerTables l mk
 ltpure = LedgerTables

 -- | Like 'bprod', but for ledger tables.
@@ -148,47 +160,49 @@ ltprod (LedgerTables x) (LedgerTables y) = LedgerTables (Pair2 x y)
 --
 ltap ::
-  LedgerTableConstraints l
-  => LedgerTables l (mk1 -..-> mk2)
-  -> LedgerTables l mk1
-  -> LedgerTables l mk2
+  LedgerTableConstraints l =>
+  LedgerTables l (mk1 -..-> mk2) ->
+  LedgerTables l mk1 ->
+  LedgerTables l mk2
 ltap f x = ltmap g $ ltprod f x
-  where g (Pair2 f' x') = apFn2 f' x'
+ where
+  g (Pair2 f' x') = apFn2 f' x'

 ltliftA ::
-  LedgerTableConstraints l
-  => (forall k v. (LedgerTableConstraints' l k v) => mk1 k v -> mk2 k v)
-  -> LedgerTables l mk1
-  -> LedgerTables l mk2
+  LedgerTableConstraints l =>
+  (forall k v. LedgerTableConstraints' l k v => mk1 k v -> mk2 k v) ->
+  LedgerTables l mk1 ->
+  LedgerTables l mk2
 ltliftA f x = ltpure (fn2_1 f) `ltap` x

 ltliftA2 ::
-  LedgerTableConstraints l
-  => (forall k v. (LedgerTableConstraints' l k v) => mk1 k v -> mk2 k v -> mk3 k v)
-  -> LedgerTables l mk1
-  -> LedgerTables l mk2
-  -> LedgerTables l mk3
+  LedgerTableConstraints l =>
+  (forall k v. LedgerTableConstraints' l k v => mk1 k v -> mk2 k v -> mk3 k v) ->
+  LedgerTables l mk1 ->
+  LedgerTables l mk2 ->
+  LedgerTables l mk3
 ltliftA2 f x x' = ltpure (fn2_2 f) `ltap` x `ltap` x'

 ltliftA3 ::
-  LedgerTableConstraints l
-  => (forall k v. (LedgerTableConstraints' l k v) => mk1 k v -> mk2 k v -> mk3 k v -> mk4 k v)
-  -> LedgerTables l mk1
-  -> LedgerTables l mk2
-  -> LedgerTables l mk3
-  -> LedgerTables l mk4
+  LedgerTableConstraints l =>
+  (forall k v. LedgerTableConstraints' l k v => mk1 k v -> mk2 k v -> mk3 k v -> mk4 k v) ->
+  LedgerTables l mk1 ->
+  LedgerTables l mk2 ->
+  LedgerTables l mk3 ->
+  LedgerTables l mk4
 ltliftA3 f x x' x'' = ltpure (fn2_3 f) `ltap` x `ltap` x' `ltap` x''

 ltliftA4 ::
-  LedgerTableConstraints l
-  => ( forall k v. (LedgerTableConstraints' l k v)
-       => mk1 k v -> mk2 k v -> mk3 k v -> mk4 k v -> mk5 k v
-     )
-  -> LedgerTables l mk1
-  -> LedgerTables l mk2
-  -> LedgerTables l mk3
-  -> LedgerTables l mk4
-  -> LedgerTables l mk5
+  LedgerTableConstraints l =>
+  ( forall k v.
+    LedgerTableConstraints' l k v =>
+    mk1 k v -> mk2 k v -> mk3 k v -> mk4 k v -> mk5 k v
+  ) ->
+  LedgerTables l mk1 ->
+  LedgerTables l mk2 ->
+  LedgerTables l mk3 ->
+  LedgerTables l mk4 ->
+  LedgerTables l mk5
 ltliftA4 f x x' x'' x''' =
   ltpure (fn2_4 f) `ltap` x `ltap` x' `ltap` x'' `ltap` x'''
@@ -197,11 +211,11 @@ ltliftA4 f x x' x'' x''' =
-------------------------------------------------------------------------------}

 ltzipWith2A ::
-  (Applicative f, LedgerTableConstraints l)
-  => (forall k v. LedgerTableConstraints' l k v => mk1 k v -> mk2 k v -> f (mk3 k v))
-  -> LedgerTables l mk1
-  -> LedgerTables l mk2
-  -> f (LedgerTables l mk3)
+  (Applicative f, LedgerTableConstraints l) =>
+  (forall k v. LedgerTableConstraints' l k v => mk1 k v -> mk2 k v -> f (mk3 k v)) ->
+  LedgerTables l mk1 ->
+  LedgerTables l mk2 ->
+  f (LedgerTables l mk3)
 ltzipWith2A f = ltsequence .: ltliftA2 (Comp2 .: f)

 {-------------------------------------------------------------------------------
@@ -215,15 +229,21 @@ ltcollapse = unK2 . getLedgerTables
   Semigroup and Monoid
-------------------------------------------------------------------------------}

-instance ( forall k v. (LedgerTableConstraints' l k v) => Semigroup (mk k v)
-         , LedgerTableConstraints l
-         ) => Semigroup (LedgerTables l mk) where
+instance
+  ( forall k v. LedgerTableConstraints' l k v => Semigroup (mk k v)
+  , LedgerTableConstraints l
+  ) =>
+  Semigroup (LedgerTables l mk)
+  where
   (<>) :: LedgerTables l mk -> LedgerTables l mk -> LedgerTables l mk
   (<>) = ltliftA2 (<>)

-instance ( forall k v. (LedgerTableConstraints' l k v) => Monoid (mk k v)
-         , LedgerTableConstraints l
-         ) => Monoid (LedgerTables l mk) where
+instance
+  ( forall k v. LedgerTableConstraints' l k v => Monoid (mk k v)
+  , LedgerTableConstraints l
+  ) =>
+  Monoid (LedgerTables l mk)
+  where
   mempty :: LedgerTables l mk
   mempty = ltpure mempty

@@ -235,7 +255,7 @@ instance ( forall k v. (LedgerTableConstraints' l k v) => Monoid (mk k v)
 --
 -- Similar to '(-.->)', but for @f@ and @g@ that are bifunctors.
 type (-..->) :: (k1 -> k2 -> Type) -> (k1 -> k2 -> Type) -> k1 -> k2 -> Type
-newtype (f -..-> g) a b = Fn2 { apFn2 :: f a b -> g a b }
+newtype (f -..-> g) a b = Fn2 {apFn2 :: f a b -> g a b}

 infixr 1 -..->

@@ -244,19 +264,19 @@ fn2_1 :: (f a b -> g a b) -> (f -..-> g) a b
 fn2_1 = Fn2

 -- | Construct a binary lifted function
-fn2_2 :: (f a b -> f' a b -> f'' a b ) -> (f -..-> f' -..-> f'') a b
+fn2_2 :: (f a b -> f' a b -> f'' a b) -> (f -..-> f' -..-> f'') a b
 fn2_2 f = Fn2 $ \x -> Fn2 $ \x' -> f x x'

 -- | Construct a ternary lifted function.
 fn2_3 ::
-  (f a b -> f' a b -> f'' a b -> f''' a b)
-  -> (f -..-> f' -..-> f'' -..-> f''') a b
+  (f a b -> f' a b -> f'' a b -> f''' a b) ->
+  (f -..-> f' -..-> f'' -..-> f''') a b
 fn2_3 f = Fn2 $ \x -> Fn2 $ \x' -> Fn2 $ \x'' -> f x x' x''

 -- | Construct a quaternary lifted function.
 fn2_4 ::
-  (f a b -> f' a b -> f'' a b -> f''' a b -> f'''' a b)
-  -> (f -..-> f' -..-> f'' -..-> f''' -..-> f'''') a b
+  (f a b -> f' a b -> f'' a b -> f''' a b -> f'''' a b) ->
+  (f -..-> f' -..-> f'' -..-> f''' -..-> f'''') a b
 fn2_4 f = Fn2 $ \x -> Fn2 $ \x' -> Fn2 $ \x'' -> Fn2 $ \x''' -> f x x' x'' x'''

 {-------------------------------------------------------------------------------
@@ -265,7 +285,7 @@ fn2_4 f = Fn2 $ \x -> Fn2 $ \x' -> Fn2 $ \x'' -> Fn2 $ \x''' -> f x x' x'' x'''

 -- | The constant type bifunctor.
 type K2 :: Type -> k1 -> k2 -> Type
-newtype K2 a b c = K2 { unK2 :: a }
+newtype K2 a b c = K2 {unK2 :: a}
   deriving stock (Show, Eq)
   deriving stock (Functor, Foldable, Traversable)
   deriving newtype (Monoid, Semigroup)

@@ -277,15 +297,16 @@ instance Bifunctor (K2 a) where
 --
 -- Example: @Comp2 (Just (17, True)) :: (Maybe :..: (,)) Int Bool@
 type (:..:) :: (k3 -> Type) -> (k1 -> k2 -> k3) -> k1 -> k2 -> Type
-newtype (:..:) f g a b = Comp2 { unComp2 :: f (g a b) }
+newtype (:..:) f g a b = Comp2 {unComp2 :: f (g a b)}
   deriving stock (Show, Eq)
   deriving stock (Functor, Foldable)
   deriving newtype (Monoid, Semigroup)

 infixr 7 :..:

-deriving stock instance (Traversable f, Traversable (g a))
-                     => Traversable ((f :..: g) a)
+deriving stock instance
+  (Traversable f, Traversable (g a)) =>
+  Traversable ((f :..: g) a)

 instance (Functor f, Bifunctor g) => Bifunctor (f :..: g) where
   bimap f g (Comp2 x) = Comp2 $ fmap (bimap f g) x
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Diff.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Diff.hs
index acc24cc516..a00d9fe996 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Diff.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Diff.hs
@@ -4,51 +4,60 @@
 {-# LANGUAGE DerivingVia #-}
 {-# LANGUAGE GeneralizedNewtypeDeriving #-}

-module Ouroboros.Consensus.Ledger.Tables.Diff (
-    -- * Types
+module Ouroboros.Consensus.Ledger.Tables.Diff
+  ( -- * Types
     Delta (..)
   , Diff (..)
+
     -- * Conversion
   , keysSet
+
     -- * Construction
   , diff
+
     -- ** Maps
   , fromMap
   , fromMapDeletes
   , fromMapInserts
+
     -- ** Set
   , fromSetDeletes
+
     -- ** Lists
   , fromList
   , fromListDeletes
   , fromListInserts
+
     -- * Query
+
     -- ** Size
   , null
   , numDeletes
   , numInserts
   , size
+
     -- * Applying diffs
   , applyDiff
   , applyDiffForKeys
+
     -- * Filter
   , filterWithKeyOnly
   , foldMapDelta
   , traverseDeltaWithKey_
   ) where

-import           Control.Monad (void)
-import           Data.Bifunctor
-import           Data.Foldable (foldMap')
-import qualified Data.Map.Merge.Strict as Merge
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           Data.Monoid
-import           Data.Set (Set)
-import qualified Data.Set as Set
-import           GHC.Generics
-import           NoThunks.Class
-import           Prelude hiding (null)
+import Control.Monad (void)
+import Data.Bifunctor
+import Data.Foldable (foldMap')
+import Data.Map.Merge.Strict qualified as Merge
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Monoid
+import Data.Set (Set)
+import Data.Set qualified as Set
+import GHC.Generics
+import NoThunks.Class
+import Prelude hiding (null)

 {------------------------------------------------------------------------------
   Types
@@ -70,8 +79,8 @@ instance Ord k => Semigroup (Diff k v) where
 instance Ord k => Monoid (Diff k v) where
   mempty = Diff mempty

-data Delta v =
-    Insert !v
+data Delta v
+  = Insert !v
   | Delete
   deriving stock (Show, Eq, Functor)
   deriving Generic
@@ -93,13 +102,16 @@ keysSet (Diff m) = Map.keysSet m
 ------------------------------------------------------------------------------}

 diff :: (Ord k, Eq v) => Map k v -> Map k v -> Diff k v
-diff m1 m2 = Diff $
+diff m1 m2 =
+  Diff $
     Merge.merge
       (Merge.mapMissing $ \_k _v -> Delete)
       (Merge.mapMissing $ \_k v -> Insert v)
-      (Merge.zipWithMaybeMatched $ \ _k v1 v2 ->
-        if v1 == v2 then Nothing
-        else Just (Insert v2))
+      ( Merge.zipWithMaybeMatched $ \_k v1 v2 ->
+          if v1 == v2
+            then Nothing
+            else Just (Insert v2)
+      )
       m1
       m2
@@ -136,47 +148,47 @@ size (Diff m) = Map.size m
 numInserts :: Diff k v -> Int
 numInserts (Diff m) = getSum $ foldMap' f m
-  where
-    f (Insert _) = 1
-    f Delete     = 0
+ where
+  f (Insert _) = 1
+  f Delete = 0

 numDeletes :: Diff k v -> Int
 numDeletes (Diff m) = getSum $ foldMap' f m
-  where
-    f (Insert _) = 0
-    f Delete     = 1
+ where
+  f (Insert _) = 0
+  f Delete = 1

 {------------------------------------------------------------------------------
   Applying diffs
 ------------------------------------------------------------------------------}

 applyDiff ::
-  Ord k
-  => Map k v
-  -> Diff k v
-  -> Map k v
+  Ord k =>
+  Map k v ->
+  Diff k v ->
+  Map k v
 applyDiff m (Diff diffs) =
-    Merge.merge
-      Merge.preserveMissing
-      (Merge.mapMaybeMissing newKeys)
-      (Merge.zipWithMaybeMatched oldKeys)
-      m
-      diffs
-  where
-    newKeys :: k -> Delta v -> Maybe v
-    newKeys _k (Insert x) = Just x
-    newKeys _k Delete     = Nothing
-
-    oldKeys :: k -> v -> Delta v -> Maybe v
-    oldKeys _k _v1 (Insert x) = Just x
-    oldKeys _k _v1 Delete     = Nothing
+  Merge.merge
+    Merge.preserveMissing
+    (Merge.mapMaybeMissing newKeys)
+    (Merge.zipWithMaybeMatched oldKeys)
+    m
+    diffs
+ where
+  newKeys :: k -> Delta v -> Maybe v
+  newKeys _k (Insert x) = Just x
+  newKeys _k Delete = Nothing
+
+  oldKeys :: k -> v -> Delta v -> Maybe v
+  oldKeys _k _v1 (Insert x) = Just x
+  oldKeys _k _v1 Delete = Nothing

 applyDiffForKeys ::
-  Ord k
-  => Map k v
-  -> Set k
-  -> Diff k v
-  -> Map k v
+  Ord k =>
+  Map k v ->
+  Set k ->
+  Diff k v ->
+  Map k v
 applyDiffForKeys m ks (Diff diffs) =
   applyDiff
     m
@@ -195,10 +207,10 @@ filterWithKeyOnly f (Diff m) = Diff $ Map.filterWithKey (const . f) m

 -- | Traversal with keys over the deltas.
 traverseDeltaWithKey_ ::
-  Applicative t
-  => (k -> Delta v -> t a)
-  -> Diff k v
-  -> t ()
+  Applicative t =>
+  (k -> Delta v -> t a) ->
+  Diff k v ->
+  t ()
 traverseDeltaWithKey_ f (Diff m) = void $ Map.traverseWithKey f m

 -- | @'foldMap'@ over the deltas.
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/MapKind.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/MapKind.hs
index c2116891f3..76b483edfb 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/MapKind.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/MapKind.hs
@@ -10,8 +10,8 @@
 {-# LANGUAGE TypeFamilies #-}

 -- | Classes for 'MapKind's and concrete 'MapKind's
-module Ouroboros.Consensus.Ledger.Tables.MapKind (
-    -- * Classes
+module Ouroboros.Consensus.Ledger.Tables.MapKind
+  ( -- * Classes
     CanMapKeysMK (..)
   , CanMapMK (..)
   , EqMK
@@ -19,6 +19,7 @@ module Ouroboros.Consensus.Ledger.Tables.MapKind (
   , ShowMK
   , ZeroableMK (..)
   , bimapLedgerTables
+
     -- * Concrete MapKinds
   , CodecMK (..)
   , DiffMK (..)
@@ -29,18 +30,18 @@ module Ouroboros.Consensus.Ledger.Tables.MapKind (
   , ValuesMK (..)
   ) where

-import qualified Codec.CBOR.Decoding as CBOR
-import qualified Codec.CBOR.Encoding as CBOR
-import           Data.Kind (Constraint)
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           Data.Set (Set)
-import qualified Data.Set as Set
-import           GHC.Generics (Generic)
-import           NoThunks.Class
-import           Ouroboros.Consensus.Ledger.Tables.Basics
-import           Ouroboros.Consensus.Ledger.Tables.Diff (Diff (..))
-import           Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq
+import Codec.CBOR.Decoding qualified as CBOR
+import Codec.CBOR.Encoding qualified as CBOR
+import Data.Kind (Constraint)
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Set (Set)
+import Data.Set qualified as Set
+import GHC.Generics (Generic)
+import NoThunks.Class
+import Ouroboros.Consensus.Ledger.Tables.Basics
+import Ouroboros.Consensus.Ledger.Tables.Diff (Diff (..))
+import Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq

 {-------------------------------------------------------------------------------
   Classes
@@ -73,8 +74,9 @@ class (forall k v. (Eq k, Eq v) => Eq (mk k v)) => EqMK mk
 -- | For convenience, such that we don't have to include @QuantifiedConstraints@
 -- everywhere.
 type NoThunksMK :: MapKind -> Constraint
-class (forall k v. (NoThunks k, NoThunks v) => NoThunks (mk k v))
-   => NoThunksMK mk
+class
+  (forall k v. (NoThunks k, NoThunks v) => NoThunks (mk k v)) =>
+  NoThunksMK mk

 -- | Map both keys and values in ledger tables.
 --
@@ -82,20 +84,20 @@ class (forall k v. (NoThunks k, NoThunks v) => NoThunks (mk k v))
 -- `Data.Set.map', namely that only injective functions are suitable to be used
 -- here.
 bimapLedgerTables ::
-  forall x y mk. (
-    CanMapKeysMK mk
-  , CanMapMK mk
-  , Ord (TxIn y)
-  )
-  => (TxIn x -> TxIn y)
-  -> (TxOut x -> TxOut y)
-  -> LedgerTables x mk
-  -> LedgerTables y mk
+  forall x y mk.
+ ( CanMapKeysMK mk + , CanMapMK mk + , Ord (TxIn y) + ) => + (TxIn x -> TxIn y) -> + (TxOut x -> TxOut y) -> + LedgerTables x mk -> + LedgerTables y mk bimapLedgerTables f g = - LedgerTables - . mapKeysMK f - . mapMK g - . getLedgerTables + LedgerTables + . mapKeysMK f + . mapMK g + . getLedgerTables {------------------------------------------------------------------------------- EmptyMK @@ -138,7 +140,7 @@ instance CanMapKeysMK KeysMK where ValuesMK -------------------------------------------------------------------------------} -newtype ValuesMK k v = ValuesMK { getValuesMK :: Map k v } +newtype ValuesMK k v = ValuesMK {getValuesMK :: Map k v} deriving stock (Generic, Eq, Show) deriving anyclass NoThunks deriving anyclass (ShowMK, EqMK, NoThunksMK) @@ -156,7 +158,7 @@ instance CanMapKeysMK ValuesMK where DiffMK -------------------------------------------------------------------------------} -newtype DiffMK k v = DiffMK { getDiffMK :: Diff k v } +newtype DiffMK k v = DiffMK {getDiffMK :: Diff k v} deriving stock (Generic, Eq, Show) deriving newtype Functor deriving anyclass NoThunks @@ -166,8 +168,9 @@ instance ZeroableMK DiffMK where emptyMK = DiffMK mempty instance CanMapKeysMK DiffMK where - mapKeysMK f (DiffMK (Diff m)) = DiffMK . Diff $ - Map.mapKeys f m + mapKeysMK f (DiffMK (Diff m)) = + DiffMK . Diff $ + Map.mapKeys f m instance CanMapMK DiffMK where mapMK f (DiffMK d) = DiffMK $ fmap f d @@ -188,15 +191,15 @@ instance CanMapMK TrackingMK where instance CanMapKeysMK TrackingMK where mapKeysMK f (TrackingMK vs d) = - TrackingMK - (getValuesMK . mapKeysMK f . ValuesMK $ vs) - (getDiffMK . mapKeysMK f . DiffMK $ d) + TrackingMK + (getValuesMK . mapKeysMK f . ValuesMK $ vs) + (getDiffMK . mapKeysMK f . 
DiffMK $ d) {------------------------------------------------------------------------------- SeqDiffMK -------------------------------------------------------------------------------} -newtype SeqDiffMK k v = SeqDiffMK { getSeqDiffMK :: DiffSeq k v } +newtype SeqDiffMK k v = SeqDiffMK {getSeqDiffMK :: DiffSeq k v} deriving stock (Generic, Eq, Show) deriving anyclass NoThunks deriving anyclass (ShowMK, EqMK, NoThunksMK) @@ -225,9 +228,9 @@ instance ZeroableMK SeqDiffMK where -- -- We will serialize UTxO maps as unstowed ledger tables when storing snapshots -- while using an in-memory backend for the LedgerDB. -data CodecMK k v = CodecMK { - encodeKey :: !(k -> CBOR.Encoding) +data CodecMK k v = CodecMK + { encodeKey :: !(k -> CBOR.Encoding) , encodeValue :: !(v -> CBOR.Encoding) - , decodeKey :: !(forall s . CBOR.Decoder s k) - , decodeValue :: !(forall s . CBOR.Decoder s v) + , decodeKey :: !(forall s. CBOR.Decoder s k) + , decodeValue :: !(forall s. CBOR.Decoder s v) } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Utils.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Utils.hs index cb029bdde3..67fafc2773 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Utils.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ledger/Tables/Utils.hs @@ -12,37 +12,48 @@ -- think of some way to make this more ergonomic. In particular for functions -- that take two ledger states, it is unclear if it will keep the in-memory part -- of the first or the second one. 
-module Ouroboros.Consensus.Ledger.Tables.Utils ( - -- * Projection and injection +module Ouroboros.Consensus.Ledger.Tables.Utils + ( -- * Projection and injection ltprj , ltwith + -- * Basic operations , emptyLedgerTables , forgetLedgerTables + -- * Operations on 'DiffMK' + -- ** Apply diffs , applyDiffForKeys , applyDiffForKeysOnTables , applyDiffs + -- ** Create diffs , calculateDifference , noNewTickingDiffs , valuesAsDiffs + -- ** Combining diffs , prependDiffs + -- * Operations on 'TrackingMK' + -- ** Augment , attachAndApplyDiffs , attachEmptyDiffs , prependTrackingDiffs + -- ** Reduce , trackingToDiffs , trackingToValues + -- * Union values , unionValues + -- * Exposed for @cardano-api@ , applyDiffsMK , restrictValuesMK + -- * Testing , applyDiffs' , rawAttachAndApplyDiffs -- used in test @@ -50,29 +61,29 @@ module Ouroboros.Consensus.Ledger.Tables.Utils ( , restrictValues' ) where -import qualified Data.Map.Strict as Map -import Ouroboros.Consensus.Ledger.Tables -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff +import Data.Map.Strict qualified as Map +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff {------------------------------------------------------------------------------- Projection and injection -------------------------------------------------------------------------------} ltwith :: - ( HasLedgerTables l - , CanMapMK mk' - , CanMapKeysMK mk' - , ZeroableMK mk' - ) - => l mk - -> LedgerTables l mk' - -> l mk' + ( HasLedgerTables l + , CanMapMK mk' + , CanMapKeysMK mk' + , ZeroableMK mk' + ) => + l mk -> + LedgerTables l mk' -> + l mk' ltwith = withLedgerTables ltprj :: - (HasLedgerTables l, SameUtxoTypes l l', CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) - => l mk - -> LedgerTables l' mk + (HasLedgerTables l, SameUtxoTypes l l', CanMapMK mk, CanMapKeysMK mk, ZeroableMK mk) => + l mk -> + LedgerTables l' mk ltprj = castLedgerTables . 
projectLedgerTables {------------------------------------------------------------------------------- @@ -81,9 +92,10 @@ ltprj = castLedgerTables . projectLedgerTables -- | Replace tables with an empty diff. Can be used to specify that a ledger -- state tick produces no new UTXO entries. -noNewTickingDiffs :: HasLedgerTables l - => l any - -> l DiffMK +noNewTickingDiffs :: + HasLedgerTables l => + l any -> + l DiffMK noNewTickingDiffs l = withLedgerTables l emptyLedgerTables -- | Remove the ledger tables @@ -119,23 +131,30 @@ trackingToValues l = ltwith l $ ltmap rawTrackingValues (ltprj l) -- rawPrependDiffs :: - Ord k - => DiffMK k v -- ^ Earlier differences - -> DiffMK k v -- ^ Later differences - -> DiffMK k v + Ord k => + -- | Earlier differences + DiffMK k v -> + -- | Later differences + DiffMK k v -> + DiffMK k v rawPrependDiffs (DiffMK d1) (DiffMK d2) = DiffMK (d1 <> d2) -- | Prepend diffs from the first ledger state to the diffs from the second -- ledger state. Returns ledger tables. prependDiffs' :: - (SameUtxoTypes l l'', SameUtxoTypes l' l'', HasLedgerTables l, HasLedgerTables l', HasLedgerTables l'') - => l DiffMK -> l' DiffMK -> LedgerTables l'' DiffMK + ( SameUtxoTypes l l'' + , SameUtxoTypes l' l'' + , HasLedgerTables l + , HasLedgerTables l' + , HasLedgerTables l'' + ) => + l DiffMK -> l' DiffMK -> LedgerTables l'' DiffMK prependDiffs' l1 l2 = ltliftA2 rawPrependDiffs (ltprj l1) (ltprj l2) -- | Prepend the diffs from @l1@ to @l2@. Returns @l2@. 
prependDiffs :: - (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') - => l DiffMK -> l' DiffMK -> l' DiffMK + (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') => + l DiffMK -> l' DiffMK -> l' DiffMK prependDiffs l1 l2 = ltwith l2 $ prependDiffs' l1 l2 -- @@ -143,49 +162,61 @@ prependDiffs l1 l2 = ltwith l2 $ prependDiffs' l1 l2 -- applyDiffsMK :: - Ord k - => ValuesMK k v -- ^ Values to which differences are applied - -> DiffMK k v -- ^ Differences to apply - -> ValuesMK k v + Ord k => + -- | Values to which differences are applied + ValuesMK k v -> + -- | Differences to apply + DiffMK k v -> + ValuesMK k v applyDiffsMK (ValuesMK vals) (DiffMK diffs) = ValuesMK (Diff.applyDiff vals diffs) -- | Apply diffs from the second ledger state to the values of the first ledger -- state. Returns ledger tables. applyDiffs' :: - (SameUtxoTypes l l'', SameUtxoTypes l' l'', HasLedgerTables l, HasLedgerTables l', HasLedgerTables l'') - => l ValuesMK -> l' DiffMK -> LedgerTables l'' ValuesMK + ( SameUtxoTypes l l'' + , SameUtxoTypes l' l'' + , HasLedgerTables l + , HasLedgerTables l' + , HasLedgerTables l'' + ) => + l ValuesMK -> l' DiffMK -> LedgerTables l'' ValuesMK applyDiffs' l1 l2 = ltliftA2 applyDiffsMK (ltprj l1) (ltprj l2) -- | Apply diffs from @l2@ on values from @l1@. Returns @l2@. applyDiffs :: - (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') - => l ValuesMK -> l' DiffMK -> l' ValuesMK + (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') => + l ValuesMK -> l' DiffMK -> l' ValuesMK applyDiffs l1 l2 = ltwith l2 $ applyDiffs' l1 l2 rawApplyDiffForKeys :: - Ord k - => ValuesMK k v - -> KeysMK k v - -> DiffMK k v - -> ValuesMK k v + Ord k => + ValuesMK k v -> + KeysMK k v -> + DiffMK k v -> + ValuesMK k v rawApplyDiffForKeys (ValuesMK vals) (KeysMK keys) (DiffMK diffs) = ValuesMK (Diff.applyDiffForKeys vals keys diffs) -- | Apply diffs in @l3@ for keys in @l2@ and @l1@ on values from @l1@. Returns @l3@. 
applyDiffForKeys :: - (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') - => l ValuesMK -> LedgerTables l KeysMK -> l' DiffMK -> l' ValuesMK + (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') => + l ValuesMK -> LedgerTables l KeysMK -> l' DiffMK -> l' ValuesMK applyDiffForKeys l1 l2 l3 = ltwith l3 $ applyDiffForKeys' (ltprj l1) l2 l3 applyDiffForKeys' :: - (SameUtxoTypes l l'', SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l', HasLedgerTables l'') - => LedgerTables l ValuesMK -> LedgerTables l KeysMK -> l' DiffMK -> LedgerTables l'' ValuesMK + ( SameUtxoTypes l l'' + , SameUtxoTypes l l' + , HasLedgerTables l + , HasLedgerTables l' + , HasLedgerTables l'' + ) => + LedgerTables l ValuesMK -> LedgerTables l KeysMK -> l' DiffMK -> LedgerTables l'' ValuesMK applyDiffForKeys' l1 l2 l3 = ltliftA3 rawApplyDiffForKeys (castLedgerTables l1) (castLedgerTables l2) (ltprj l3) -- | Apply diffs in @l3@ for keys in @l2@ and @l1@ on values from @l1@. Returns @l3@. applyDiffForKeysOnTables :: - (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') - => LedgerTables l ValuesMK -> LedgerTables l KeysMK -> l' DiffMK -> l' ValuesMK + (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') => + LedgerTables l ValuesMK -> LedgerTables l KeysMK -> l' DiffMK -> l' ValuesMK applyDiffForKeysOnTables l1 l2 l3 = ltwith l3 $ applyDiffForKeys' l1 l2 l3 -- @@ -193,34 +224,39 @@ applyDiffForKeysOnTables l1 l2 l3 = ltwith l3 $ applyDiffForKeys' l1 l2 l3 -- rawCalculateDifference :: - (Ord k, Eq v) - => ValuesMK k v - -> ValuesMK k v - -> TrackingMK k v + (Ord k, Eq v) => + ValuesMK k v -> + ValuesMK k v -> + TrackingMK k v rawCalculateDifference (ValuesMK before) (ValuesMK after) = TrackingMK after (Diff.diff before after) -- | Promote values to diffs, for cases in which all existing values must be -- considered diffs. In particular this is used when populating the ledger -- tables for the first time. 
valuesAsDiffs :: - (LedgerTableConstraints l, HasLedgerTables l) - => l ValuesMK -> l DiffMK + (LedgerTableConstraints l, HasLedgerTables l) => + l ValuesMK -> l DiffMK valuesAsDiffs l = trackingToDiffs $ ltwith l $ ltliftA (rawCalculateDifference emptyMK) (ltprj l) -- | Calculate the differences between two ledger states. The first ledger state -- is considered /before/, the second ledger state is considered /after/. -- Returns ledger tables. calculateDifference' :: - (SameUtxoTypes l l'', SameUtxoTypes l' l'', HasLedgerTables l, HasLedgerTables l', HasLedgerTables l'') - => l ValuesMK -> l' ValuesMK -> LedgerTables l'' TrackingMK + ( SameUtxoTypes l l'' + , SameUtxoTypes l' l'' + , HasLedgerTables l + , HasLedgerTables l' + , HasLedgerTables l'' + ) => + l ValuesMK -> l' ValuesMK -> LedgerTables l'' TrackingMK calculateDifference' l1 l2 = ltliftA2 rawCalculateDifference (ltprj l1) (ltprj l2) -- | Calculate the differences between two ledger states. The first ledger state -- is considered /before/, the second ledger state is considered /after/. -- Returns the second ledger state. calculateDifference :: - (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') - => l ValuesMK -> l' ValuesMK -> l' TrackingMK + (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') => + l ValuesMK -> l' ValuesMK -> l' TrackingMK calculateDifference l1 l2 = ltwith l2 $ calculateDifference' l1 l2 -- @@ -228,26 +264,31 @@ calculateDifference l1 l2 = ltwith l2 $ calculateDifference' l1 l2 -- rawAttachAndApplyDiffs :: - Ord k - => ValuesMK k v - -> DiffMK k v - -> TrackingMK k v + Ord k => + ValuesMK k v -> + DiffMK k v -> + TrackingMK k v rawAttachAndApplyDiffs (ValuesMK v) (DiffMK d) = TrackingMK (Diff.applyDiff v d) d -- | Apply the differences from the first ledger state to the values of the -- second ledger state, and returns the resulting values together with the -- applied diff. 
attachAndApplyDiffs' :: - (SameUtxoTypes l l'', SameUtxoTypes l' l'', HasLedgerTables l, HasLedgerTables l', HasLedgerTables l'') - => l' ValuesMK -> l DiffMK -> LedgerTables l'' TrackingMK + ( SameUtxoTypes l l'' + , SameUtxoTypes l' l'' + , HasLedgerTables l + , HasLedgerTables l' + , HasLedgerTables l'' + ) => + l' ValuesMK -> l DiffMK -> LedgerTables l'' TrackingMK attachAndApplyDiffs' l1 l2 = ltliftA2 rawAttachAndApplyDiffs (ltprj l1) (ltprj l2) -- | Apply the differences from the first ledger state to the values of the -- second ledger state. Returns the second ledger state with a 'TrackingMK' of -- the final values and all the diffs. attachAndApplyDiffs :: - (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') - => l ValuesMK -> l' DiffMK -> l' TrackingMK + (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') => + l ValuesMK -> l' DiffMK -> l' TrackingMK attachAndApplyDiffs l1 l2 = ltwith l2 $ attachAndApplyDiffs' l1 l2 rawAttachEmptyDiffs :: Ord k => ValuesMK k v -> TrackingMK k v @@ -268,10 +309,10 @@ attachEmptyDiffs l1 = ltwith l1 $ ltmap rawAttachEmptyDiffs (ltprj l1) -- second argument is @TrackingMK v2 d2@, it should be the case that @applyDiff -- v1 d2 == v2@. rawPrependTrackingDiffs :: - Ord k - => TrackingMK k v - -> TrackingMK k v - -> TrackingMK k v + Ord k => + TrackingMK k v -> + TrackingMK k v -> + TrackingMK k v rawPrependTrackingDiffs (TrackingMK _ d1) (TrackingMK v d2) = TrackingMK v (d1 <> d2) @@ -281,8 +322,13 @@ rawPrependTrackingDiffs (TrackingMK _ d1) (TrackingMK v d2) = -- -- PRECONDITION: See 'rawPrependTrackingDiffs'. 
prependTrackingDiffs' :: - (SameUtxoTypes l l'', SameUtxoTypes l' l'', HasLedgerTables l, HasLedgerTables l', HasLedgerTables l'') - => l TrackingMK -> l' TrackingMK -> LedgerTables l'' TrackingMK + ( SameUtxoTypes l l'' + , SameUtxoTypes l' l'' + , HasLedgerTables l + , HasLedgerTables l' + , HasLedgerTables l'' + ) => + l TrackingMK -> l' TrackingMK -> LedgerTables l'' TrackingMK prependTrackingDiffs' l1 l2 = ltliftA2 rawPrependTrackingDiffs (ltprj l1) (ltprj l2) -- | Prepend tracking diffs from the first ledger state to the tracking diffs @@ -291,22 +337,27 @@ prependTrackingDiffs' l1 l2 = ltliftA2 rawPrependTrackingDiffs (ltprj l1) (ltprj -- -- PRECONDITION: See 'rawPrependTrackingDiffs'. prependTrackingDiffs :: - (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') - => l TrackingMK -> l' TrackingMK -> l' TrackingMK + (SameUtxoTypes l l', HasLedgerTables l, HasLedgerTables l') => + l TrackingMK -> l' TrackingMK -> l' TrackingMK prependTrackingDiffs l1 l2 = ltwith l2 $ prependTrackingDiffs' l1 l2 -- Restrict values restrictValuesMK :: - Ord k - => ValuesMK k v - -> KeysMK k v - -> ValuesMK k v + Ord k => + ValuesMK k v -> + KeysMK k v -> + ValuesMK k v restrictValuesMK (ValuesMK v) (KeysMK k) = ValuesMK $ v `Map.restrictKeys` k restrictValues' :: - (SameUtxoTypes l l'', SameUtxoTypes l' l'', HasLedgerTables l, HasLedgerTables l', HasLedgerTables l'') - => l ValuesMK -> l' KeysMK -> LedgerTables l'' ValuesMK + ( SameUtxoTypes l l'' + , SameUtxoTypes l' l'' + , HasLedgerTables l + , HasLedgerTables l' + , HasLedgerTables l'' + ) => + l ValuesMK -> l' KeysMK -> LedgerTables l'' ValuesMK restrictValues' l1 l2 = ltliftA2 restrictValuesMK (ltprj l1) (ltprj l2) --- @@ -315,8 +366,8 @@ restrictValues' l1 l2 = ltliftA2 restrictValuesMK (ltprj l1) (ltprj l2) -- different values on the tables, thus there will never be -- conflicting collisions. 
unionValues :: - Ord k - => ValuesMK k v - -> ValuesMK k v - -> ValuesMK k v + Ord k => + ValuesMK k v -> + ValuesMK k v -> + ValuesMK k v unionValues (ValuesMK m1) (ValuesMK m2) = ValuesMK $ Map.union m1 m2 diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool.hs index 1e015dca12..74140f4508 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool.hs @@ -1,8 +1,10 @@ -- | The mempool API and implementation. -module Ouroboros.Consensus.Mempool ( - -- * Mempool API +module Ouroboros.Consensus.Mempool + ( -- * Mempool API + -- ** Mempool Mempool (..) + -- ** Transaction adding , AddTxOnBehalfOf (..) , MempoolAddTxResult (..) @@ -11,38 +13,63 @@ module Ouroboros.Consensus.Mempool ( , isMempoolTxAdded , isMempoolTxRejected , mempoolTxAddedToMaybe + -- ** Ledger state to forge on top of , ForgeLedgerState (..) + -- ** Mempool Snapshot , MempoolSnapshot (..) + -- ** Re-exports , SizeInBytes , TicketNo , zeroTicketNo + -- * Mempool capacity , MempoolCapacityBytesOverride (..) , computeMempoolCapacity + -- ** Mempool Size , MempoolSize (..) + -- * Mempool initialization , openMempool , openMempoolWithoutSyncThread + -- * ChainDB interface , LedgerInterface (..) , chainDBLedgerInterface + -- * Trace , TraceEventMempool (..) 
) where -import Ouroboros.Consensus.Mempool.API (AddTxOnBehalfOf (..), - ForgeLedgerState (..), Mempool (..), - MempoolAddTxResult (..), MempoolSnapshot (..), SizeInBytes, - TicketNo, addLocalTxs, addTxs, isMempoolTxAdded, - isMempoolTxRejected, mempoolTxAddedToMaybe, zeroTicketNo) -import Ouroboros.Consensus.Mempool.Capacity - (MempoolCapacityBytesOverride (..), MempoolSize (..), - computeMempoolCapacity) -import Ouroboros.Consensus.Mempool.Impl.Common (LedgerInterface (..), - TraceEventMempool (..), chainDBLedgerInterface) -import Ouroboros.Consensus.Mempool.Init (openMempool, - openMempoolWithoutSyncThread) +import Ouroboros.Consensus.Mempool.API + ( AddTxOnBehalfOf (..) + , ForgeLedgerState (..) + , Mempool (..) + , MempoolAddTxResult (..) + , MempoolSnapshot (..) + , SizeInBytes + , TicketNo + , addLocalTxs + , addTxs + , isMempoolTxAdded + , isMempoolTxRejected + , mempoolTxAddedToMaybe + , zeroTicketNo + ) +import Ouroboros.Consensus.Mempool.Capacity + ( MempoolCapacityBytesOverride (..) + , MempoolSize (..) + , computeMempoolCapacity + ) +import Ouroboros.Consensus.Mempool.Impl.Common + ( LedgerInterface (..) + , TraceEventMempool (..) + , chainDBLedgerInterface + ) +import Ouroboros.Consensus.Mempool.Init + ( openMempool + , openMempoolWithoutSyncThread + ) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/API.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/API.hs index 13082dbf14..d4aefb1310 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/API.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/API.hs @@ -9,9 +9,10 @@ -- The interface is then initialized in "Ouroboros.Consensus.Mempool.Init" with -- the functions from "Ouroboros.Consensus.Mempool.Update" and -- "Ouroboros.Consensus.Mempool.Query". -module Ouroboros.Consensus.Mempool.API ( - -- * Mempool +module Ouroboros.Consensus.Mempool.API + ( -- * Mempool Mempool (..) 
+ -- * Transaction adding , AddTxOnBehalfOf (..) , MempoolAddTxResult (..) @@ -20,24 +21,27 @@ module Ouroboros.Consensus.Mempool.API ( , isMempoolTxAdded , isMempoolTxRejected , mempoolTxAddedToMaybe + -- * Ledger state to forge on top of , ForgeLedgerState (..) + -- * Mempool Snapshot , MempoolSnapshot (..) + -- * Re-exports , SizeInBytes , TicketNo , zeroTicketNo ) where -import qualified Data.List.NonEmpty as NE -import Ouroboros.Consensus.Block (ChainHash, SlotNo) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool -import qualified Ouroboros.Consensus.Mempool.Capacity as Cap -import Ouroboros.Consensus.Mempool.TxSeq (TicketNo, zeroTicketNo) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Protocol.TxSubmission2.Type (SizeInBytes) +import Data.List.NonEmpty qualified as NE +import Ouroboros.Consensus.Block (ChainHash, SlotNo) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Mempool.Capacity qualified as Cap +import Ouroboros.Consensus.Mempool.TxSeq (TicketNo, zeroTicketNo) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Protocol.TxSubmission2.Type (SizeInBytes) {------------------------------------------------------------------------------- Mempool API @@ -91,129 +95,124 @@ import Ouroboros.Network.Protocol.TxSubmission2.Type (SizeInBytes) -- of the N remote peers together also get total weight 1/(M+1). This means -- individual remote peers get weight 1/(N * (M+1)). Intuitively: a single local -- client has the same weight as all the remote peers put together. --- -data Mempool m blk = Mempool { - -- | Add a single transaction to the mempool. - -- - -- The new transaction provided will be validated, /in order/, against - -- the ledger state obtained by applying all the transactions already in - -- the mempool. 
Transactions which are found to be invalid are dropped, - -- whereas valid transactions are added to the mempool. - -- - -- Note that transactions that are invalid will /never/ be added to the - -- mempool. However, it is possible that, at a given point in time, - -- transactions which were valid in an older ledger state but are invalid - -- in the current ledger state, could exist within the mempool until they - -- are revalidated and dropped from the mempool via a call to - -- 'syncWithLedger' or by the background thread that watches the ledger - -- for changes. - -- - -- This action returns one of two results. - -- - -- * A 'MempoolTxAdded' value if the transaction provided was found to - -- be valid. This transactions is now in the mempool. - -- - -- * A 'MempoolTxRejected' value if the transaction provided was found - -- to be invalid, along with its accompanying validation errors. This - -- transactions is not in the mempool. - -- - -- Note that this is a blocking action. It will block until the - -- transaction fits into the mempool. This includes transactions that - -- turn out to be invalid: the action waits for there to be space for - -- the transaction before validation is attempted. - -- - -- Note that it is safe to use this from multiple threads concurrently. - -- - -- POSTCONDITION: - -- > let prj = \case - -- > MempoolTxAdded vtx -> txForgetValidated vtx - -- > MempoolTxRejected tx _err -> tx - -- > processed <- addTx wti txs - -- > prj processed == tx - -- - -- In principle it is possible that validation errors are transient; for - -- example, it is possible that a transaction is rejected because one of - -- its inputs is not /yet/ available in the UTxO (the transaction it - -- depends on is not yet in the chain, nor in the mempool). In practice - -- however it is likely that rejected transactions will still be - -- rejected later, and should just be dropped. 
- -- - -- It is important to note one important special case of transactions - -- being "invalid": a transaction will /also/ be considered invalid if - -- /that very same transaction/ is already included on the blockchain - -- (after all, by definition that must mean its inputs have been used). - -- Rejected transactions are therefore not necessarily a sign of - -- malicious behaviour. Indeed, we would expect /most/ transactions that - -- are reported as invalid by 'addTxs' to be invalid precisely because - -- they have already been included. Distinguishing between these two - -- cases can be done in theory, but it is expensive unless we have an - -- index of transaction hashes that have been included on the blockchain. - -- - -- As long as we keep the mempool entirely in-memory this could live in - -- @STM m@; we keep it in @m@ instead to leave open the possibility of - -- persistence. - addTx :: AddTxOnBehalfOf - -> GenTx blk - -> m (MempoolAddTxResult blk) - - -- | Manually remove the given transactions from the mempool. - , removeTxsEvenIfValid :: NE.NonEmpty (GenTxId blk) -> m () - - -- | Sync the transactions in the mempool with the current ledger state - -- of the 'ChainDB'. - -- - -- The transactions that exist within the mempool will be revalidated - -- against the current ledger state. Transactions which are found to be - -- invalid with respect to the current ledger state, will be dropped - -- from the mempool, whereas valid transactions will remain. - -- - -- We keep this in @m@ instead of @STM m@ to leave open the possibility - -- of persistence. Additionally, this makes it possible to trace the - -- removal of invalid transactions. - -- - -- n.b. in our current implementation, when one opens a mempool, we - -- spawn a thread which performs this action whenever the 'ChainDB' tip - -- point changes. - , syncWithLedger :: m (MempoolSnapshot blk) - - -- | Get a snapshot of the current mempool state. 
This allows for - -- further pure queries on the snapshot. - -- - -- This doesn't look at the ledger state at all. - , getSnapshot :: STM m (MempoolSnapshot blk) - - -- | Get a snapshot of the mempool state that is valid with respect to - -- the given ledger state - -- - -- This does not update the state of the mempool. - -- - -- The arguments: - -- - -- - The current slot in which we want the snapshot - -- - -- - The ledger state ticked to the given slot number (with the diffs from ticking) - -- - -- - A function that reads values for keys at the unticked ledger state. - , getSnapshotFor :: - SlotNo - -> TickedLedgerState blk DiffMK - -> (LedgerTables (LedgerState blk) KeysMK -> m (LedgerTables (LedgerState blk) ValuesMK)) - -> m (MempoolSnapshot blk) - - -- | Get the mempool's capacity - -- - -- Note that the capacity of the Mempool, unless it is overridden with - -- 'MempoolCapacityBytesOverride', can dynamically change when the ledger - -- state is updated: it will be set to twice the current ledger's maximum - -- transaction capacity of a block. - -- - -- When the capacity happens to shrink at some point, we /do not/ remove - -- transactions from the Mempool to satisfy this new lower limit. - -- Instead, we treat it the same way as a Mempool which is /at/ - -- capacity, i.e., we won't admit new transactions until some have been - -- removed because they have become invalid. - , getCapacity :: STM m (TxMeasure blk) - } +data Mempool m blk = Mempool + { addTx :: + AddTxOnBehalfOf -> + GenTx blk -> + m (MempoolAddTxResult blk) + -- ^ Add a single transaction to the mempool. + -- + -- The new transaction provided will be validated, /in order/, against + -- the ledger state obtained by applying all the transactions already in + -- the mempool. Transactions which are found to be invalid are dropped, + -- whereas valid transactions are added to the mempool. + -- + -- Note that transactions that are invalid will /never/ be added to the + -- mempool. 
However, it is possible that, at a given point in time, + -- transactions which were valid in an older ledger state but are invalid + -- in the current ledger state, could exist within the mempool until they + -- are revalidated and dropped from the mempool via a call to + -- 'syncWithLedger' or by the background thread that watches the ledger + -- for changes. + -- + -- This action returns one of two results. + -- + -- * A 'MempoolTxAdded' value if the transaction provided was found to + -- be valid. This transaction is now in the mempool. + -- + -- * A 'MempoolTxRejected' value if the transaction provided was found + -- to be invalid, along with its accompanying validation errors. This + -- transaction is not in the mempool. + -- + -- Note that this is a blocking action. It will block until the + -- transaction fits into the mempool. This includes transactions that + -- turn out to be invalid: the action waits for there to be space for + -- the transaction before validation is attempted. + -- + -- Note that it is safe to use this from multiple threads concurrently. + -- + -- POSTCONDITION: + -- > let prj = \case + -- > MempoolTxAdded vtx -> txForgetValidated vtx + -- > MempoolTxRejected tx _err -> tx + -- > processed <- addTx wti txs + -- > prj processed == tx + -- + -- In principle it is possible that validation errors are transient; for + -- example, it is possible that a transaction is rejected because one of + -- its inputs is not /yet/ available in the UTxO (the transaction it + -- depends on is not yet in the chain, nor in the mempool). In practice + -- however it is likely that rejected transactions will still be + -- rejected later, and should just be dropped. + -- + -- It is important to note one special case of transactions + -- being "invalid": a transaction will /also/ be considered invalid if + -- /that very same transaction/ is already included on the blockchain + -- (after all, by definition that must mean its inputs have been used).
+ -- Rejected transactions are therefore not necessarily a sign of + -- malicious behaviour. Indeed, we would expect /most/ transactions that + -- are reported as invalid by 'addTxs' to be invalid precisely because + -- they have already been included. Distinguishing between these two + -- cases can be done in theory, but it is expensive unless we have an + -- index of transaction hashes that have been included on the blockchain. + -- + -- As long as we keep the mempool entirely in-memory this could live in + -- @STM m@; we keep it in @m@ instead to leave open the possibility of + -- persistence. + , removeTxsEvenIfValid :: NE.NonEmpty (GenTxId blk) -> m () + -- ^ Manually remove the given transactions from the mempool. + , syncWithLedger :: m (MempoolSnapshot blk) + -- ^ Sync the transactions in the mempool with the current ledger state + -- of the 'ChainDB'. + -- + -- The transactions that exist within the mempool will be revalidated + -- against the current ledger state. Transactions which are found to be + -- invalid with respect to the current ledger state, will be dropped + -- from the mempool, whereas valid transactions will remain. + -- + -- We keep this in @m@ instead of @STM m@ to leave open the possibility + -- of persistence. Additionally, this makes it possible to trace the + -- removal of invalid transactions. + -- + -- n.b. in our current implementation, when one opens a mempool, we + -- spawn a thread which performs this action whenever the 'ChainDB' tip + -- point changes. + , getSnapshot :: STM m (MempoolSnapshot blk) + -- ^ Get a snapshot of the current mempool state. This allows for + -- further pure queries on the snapshot. + -- + -- This doesn't look at the ledger state at all. 
+ , getSnapshotFor :: + SlotNo -> + TickedLedgerState blk DiffMK -> + (LedgerTables (LedgerState blk) KeysMK -> m (LedgerTables (LedgerState blk) ValuesMK)) -> + m (MempoolSnapshot blk) + -- ^ Get a snapshot of the mempool state that is valid with respect to + -- the given ledger state + -- + -- This does not update the state of the mempool. + -- + -- The arguments: + -- + -- - The current slot in which we want the snapshot + -- + -- - The ledger state ticked to the given slot number (with the diffs from ticking) + -- + -- - A function that reads values for keys at the unticked ledger state. + , getCapacity :: STM m (TxMeasure blk) + -- ^ Get the mempool's capacity + -- + -- Note that the capacity of the Mempool, unless it is overridden with + -- 'MempoolCapacityBytesOverride', can dynamically change when the ledger + -- state is updated: it will be set to twice the current ledger's maximum + -- transaction capacity of a block. + -- + -- When the capacity happens to shrink at some point, we /do not/ remove + -- transactions from the Mempool to satisfy this new lower limit. + -- Instead, we treat it the same way as a Mempool which is /at/ + -- capacity, i.e., we won't admit new transactions until some have been + -- removed because they have become invalid. + } {------------------------------------------------------------------------------- Result of adding a transaction to the mempool @@ -221,26 +220,29 @@ data Mempool m blk = Mempool { -- | The result of attempting to add a transaction to the mempool. data MempoolAddTxResult blk - = MempoolTxAdded !(Validated (GenTx blk)) - -- ^ The transaction was added to the mempool. - | MempoolTxRejected !(GenTx blk) !(ApplyTxErr blk) - -- ^ The transaction was rejected and could not be added to the mempool + = -- | The transaction was added to the mempool. + MempoolTxAdded !(Validated (GenTx blk)) + | -- | The transaction was rejected and could not be added to the mempool -- for the specified reason. 
+ MempoolTxRejected !(GenTx blk) !(ApplyTxErr blk) -deriving instance (Eq (GenTx blk), Eq (Validated (GenTx blk)), Eq (ApplyTxErr blk)) => Eq (MempoolAddTxResult blk) -deriving instance (Show (GenTx blk), Show (Validated (GenTx blk)), Show (ApplyTxErr blk)) => Show (MempoolAddTxResult blk) +deriving instance + (Eq (GenTx blk), Eq (Validated (GenTx blk)), Eq (ApplyTxErr blk)) => Eq (MempoolAddTxResult blk) +deriving instance + (Show (GenTx blk), Show (Validated (GenTx blk)), Show (ApplyTxErr blk)) => + Show (MempoolAddTxResult blk) mempoolTxAddedToMaybe :: MempoolAddTxResult blk -> Maybe (Validated (GenTx blk)) mempoolTxAddedToMaybe (MempoolTxAdded vtx) = Just vtx -mempoolTxAddedToMaybe _ = Nothing +mempoolTxAddedToMaybe _ = Nothing isMempoolTxAdded :: MempoolAddTxResult blk -> Bool isMempoolTxAdded MempoolTxAdded{} = True -isMempoolTxAdded _ = False +isMempoolTxAdded _ = False isMempoolTxRejected :: MempoolAddTxResult blk -> Bool isMempoolTxRejected MempoolTxRejected{} = True -isMempoolTxRejected _ = False +isMempoolTxRejected _ = False -- | A wrapper around 'addTx' that adds a sequence of transactions on behalf of -- a remote peer. @@ -250,10 +252,11 @@ isMempoolTxRejected _ = False -- -- See 'addTx' for further details. addTxs :: - forall m blk t. (MonadSTM m, Traversable t) - => Mempool m blk - -> t (GenTx blk) - -> m (t (MempoolAddTxResult blk)) + forall m blk t. + (MonadSTM m, Traversable t) => + Mempool m blk -> + t (GenTx blk) -> + m (t (MempoolAddTxResult blk)) addTxs mempool = mapM (addTx mempool AddTxForRemotePeer) -- | A wrapper around 'addTx' that adds a sequence of transactions on behalf of @@ -264,10 +267,11 @@ addTxs mempool = mapM (addTx mempool AddTxForRemotePeer) -- -- See 'addTx' for further details. addLocalTxs :: - forall m blk t. (MonadSTM m, Traversable t) - => Mempool m blk - -> t (GenTx blk) - -> m (t (MempoolAddTxResult blk)) + forall m blk t. 
+ (MonadSTM m, Traversable t) => + Mempool m blk -> + t (GenTx blk) -> + m (t (MempoolAddTxResult blk)) addLocalTxs mempool = mapM (addTx mempool AddTxForLocalClient) -- | Who are we adding a tx on behalf of, a remote peer or a local client? @@ -278,10 +282,8 @@ addLocalTxs mempool = mapM (addTx mempool AddTxForLocalClient) -- 2. priority of service: local clients are prioritised over remote peers. -- -- See 'Mempool' for a discussion of fairness and priority. --- data AddTxOnBehalfOf = AddTxForRemotePeer | AddTxForLocalClient - {------------------------------------------------------------------------------- Ledger state considered for forging -------------------------------------------------------------------------------} @@ -295,21 +297,20 @@ data AddTxOnBehalfOf = AddTxForRemotePeer | AddTxForLocalClient -- ledger: the update system might be updated, scheduled delegations might be -- applied, etc., and such changes should take effect before we validate any -- transactions. -data ForgeLedgerState blk = - -- | The slot number of the block is known +data ForgeLedgerState blk + = -- | The slot number of the block is known -- -- This will only be the case when we realized that we are the slot leader -- and we are actually producing a block. It is the caller's responsibility -- to call 'applyChainTick' and produce the ticked ledger state. ForgeInKnownSlot SlotNo (TickedLedgerState blk DiffMK) - - -- | The slot number of the block is not yet known + | -- | The slot number of the block is not yet known -- -- When we are validating transactions before we know in which block they -- will end up, we have to make an assumption about which slot number to use -- for 'applyChainTick' to prepare the ledger state; we will assume that -- they will end up in the slot after the slot at the tip of the ledger. 
- | ForgeInUnknownSlot (LedgerState blk EmptyMK) + ForgeInUnknownSlot (LedgerState blk EmptyMK) {------------------------------------------------------------------------------- Snapshot of the mempool @@ -328,36 +329,28 @@ data ForgeLedgerState blk = -- Note that it is expected that 'getTx' will often return 'Nothing' -- even for tx sequence numbers returned in previous snapshots. This happens -- when the transaction has been removed from the mempool between snapshots. --- -data MempoolSnapshot blk = MempoolSnapshot { - -- | Get all transactions (oldest to newest) in the mempool snapshot along - -- with their ticket number. - snapshotTxs :: [(Validated (GenTx blk), TicketNo, TxMeasure blk)] - - -- | Get all transactions (oldest to newest) in the mempool snapshot, - -- along with their ticket number, which are associated with a ticket - -- number greater than the one provided. - , snapshotTxsAfter :: TicketNo -> [(Validated (GenTx blk), TicketNo, TxMeasure blk)] - - -- | Get the greatest prefix (oldest to newest) that respects the given - -- block capacity. - , snapshotTake :: TxMeasure blk -> [Validated (GenTx blk)] - - -- | Get a specific transaction from the mempool snapshot by its ticket - -- number, if it exists. - , snapshotLookupTx :: TicketNo -> Maybe (Validated (GenTx blk)) - - -- | Determine whether a specific transaction exists within the mempool - -- snapshot. - , snapshotHasTx :: GenTxId blk -> Bool - - -- | Get the size of the mempool snapshot. +data MempoolSnapshot blk = MempoolSnapshot + { snapshotTxs :: [(Validated (GenTx blk), TicketNo, TxMeasure blk)] + -- ^ Get all transactions (oldest to newest) in the mempool snapshot along + -- with their ticket number. + , snapshotTxsAfter :: TicketNo -> [(Validated (GenTx blk), TicketNo, TxMeasure blk)] + -- ^ Get all transactions (oldest to newest) in the mempool snapshot, + -- along with their ticket number, which are associated with a ticket + -- number greater than the one provided. 
+ , snapshotTake :: TxMeasure blk -> [Validated (GenTx blk)] + -- ^ Get the greatest prefix (oldest to newest) that respects the given + -- block capacity. + , snapshotLookupTx :: TicketNo -> Maybe (Validated (GenTx blk)) + -- ^ Get a specific transaction from the mempool snapshot by its ticket + -- number, if it exists. + , snapshotHasTx :: GenTxId blk -> Bool + -- ^ Determine whether a specific transaction exists within the mempool + -- snapshot. , snapshotMempoolSize :: Cap.MempoolSize - - -- | The block number of the "virtual block" under construction - , snapshotSlotNo :: SlotNo - - -- | The resulting state currently in the mempool after applying the - -- transactions - , snapshotStateHash :: ChainHash (TickedLedgerState blk) + -- ^ Get the size of the mempool snapshot. + , snapshotSlotNo :: SlotNo + -- ^ The block number of the "virtual block" under construction + , snapshotStateHash :: ChainHash (TickedLedgerState blk) + -- ^ The resulting state currently in the mempool after applying the + -- transactions } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Capacity.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Capacity.hs index 3833b3c87e..ea31e4156f 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Capacity.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Capacity.hs @@ -12,23 +12,24 @@ -- -- > import Ouroboros.Consensus.Mempool.Capacity (Capacity) -- > import qualified Ouroboros.Consensus.Mempool.Capacity as Capacity -module Ouroboros.Consensus.Mempool.Capacity ( - -- * Mempool capacity +module Ouroboros.Consensus.Mempool.Capacity + ( -- * Mempool capacity MempoolCapacityBytesOverride (..) , computeMempoolCapacity , mkCapacityBytesOverride + -- * Mempool Size , MempoolSize (..) 
) where -import Data.DerivingVia (InstantiatedAt (..)) -import Data.Measure (Measure) -import Data.Semigroup (stimes) -import Data.Word (Word32) -import GHC.Generics -import NoThunks.Class -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.SupportsMempool +import Data.DerivingVia (InstantiatedAt (..)) +import Data.Measure (Measure) +import Data.Semigroup (stimes) +import Data.Word (Word32) +import GHC.Generics +import NoThunks.Class +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.SupportsMempool {------------------------------------------------------------------------------- Mempool capacity in bytes @@ -37,12 +38,12 @@ import Ouroboros.Consensus.Ledger.SupportsMempool -- | An override for the default 'MempoolCapacityBytes' which is 2x the -- maximum transaction capacity data MempoolCapacityBytesOverride - = NoMempoolCapacityBytesOverride - -- ^ Use 2x the maximum transaction capacity of a block. This will change + = -- | Use 2x the maximum transaction capacity of a block. This will change -- dynamically with the protocol parameters adopted in the current ledger. - | MempoolCapacityBytesOverride !ByteSize32 - -- ^ Use the least multiple of the block capacity that is no less than this + NoMempoolCapacityBytesOverride + | -- | Use the least multiple of the block capacity that is no less than this -- size. + MempoolCapacityBytesOverride !ByteSize32 deriving (Eq, Show) -- | Create an override for the mempool capacity using the provided number of @@ -56,28 +57,29 @@ mkCapacityBytesOverride = MempoolCapacityBytesOverride -- If an override is present, reinterpret it as a number of blocks (rounded -- up), and then simply multiply the ledger's capacity by that number. 
computeMempoolCapacity :: - LedgerSupportsMempool blk - => LedgerConfig blk - -> TickedLedgerState blk mk - -> MempoolCapacityBytesOverride - -> TxMeasure blk + LedgerSupportsMempool blk => + LedgerConfig blk -> + TickedLedgerState blk mk -> + MempoolCapacityBytesOverride -> + TxMeasure blk computeMempoolCapacity cfg st override = - capacity - where - oneBlock = blockCapacityTxMeasure cfg st - ByteSize32 oneBlockBytes = txMeasureByteSize oneBlock + capacity + where + oneBlock = blockCapacityTxMeasure cfg st + ByteSize32 oneBlockBytes = txMeasureByteSize oneBlock - blockCount = case override of - NoMempoolCapacityBytesOverride -> 2 - MempoolCapacityBytesOverride (ByteSize32 x) -> - -- This calculation is happening at Word32. If it was to overflow, it - -- will round down instead. - max 1 $ if x + oneBlockBytes < x - then x `div` oneBlockBytes - else (x + oneBlockBytes - 1) `div` oneBlockBytes + blockCount = case override of + NoMempoolCapacityBytesOverride -> 2 + MempoolCapacityBytesOverride (ByteSize32 x) -> + -- This calculation is happening at Word32. If it was to overflow, it + -- will round down instead. + max 1 $ + if x + oneBlockBytes < x + then x `div` oneBlockBytes + else (x + oneBlockBytes - 1) `div` oneBlockBytes - SemigroupViaMeasure capacity = - stimes blockCount (SemigroupViaMeasure oneBlock) + SemigroupViaMeasure capacity = + stimes blockCount (SemigroupViaMeasure oneBlock) newtype SemigroupViaMeasure a = SemigroupViaMeasure a deriving newtype (Eq, Measure) @@ -89,15 +91,16 @@ newtype SemigroupViaMeasure a = SemigroupViaMeasure a -- | The size of a mempool. data MempoolSize = MempoolSize - { msNumTxs :: !Word32 - -- ^ The number of transactions in the mempool. + { msNumTxs :: !Word32 + -- ^ The number of transactions in the mempool. , msNumBytes :: !ByteSize32 - -- ^ The summed byte size of all the transactions in the mempool. - } deriving (Eq, Show, Generic, NoThunks) + -- ^ The summed byte size of all the transactions in the mempool. 
+ } + deriving (Eq, Show, Generic, NoThunks) instance Semigroup MempoolSize where MempoolSize xt xb <> MempoolSize yt yb = MempoolSize (xt + yt) (xb <> yb) instance Monoid MempoolSize where - mempty = MempoolSize { msNumTxs = 0, msNumBytes = ByteSize32 0 } + mempty = MempoolSize{msNumTxs = 0, msNumBytes = ByteSize32 0} mappend = (<>) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Impl/Common.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Impl/Common.hs index 2ff1ad1b24..d57e445bb7 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Impl/Common.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Impl/Common.hs @@ -10,172 +10,176 @@ -- | Definition of common types used in "Ouroboros.Consensus.Mempool.Init", -- "Ouroboros.Consensus.Mempool.Update" and "Ouroboros.Consensus.Mempool.Query". -module Ouroboros.Consensus.Mempool.Impl.Common ( - -- * Internal state +module Ouroboros.Consensus.Mempool.Impl.Common + ( -- * Internal state InternalState (..) , isMempoolSize + -- * Mempool environment , MempoolEnv (..) , initMempoolEnv + -- * Ledger interface , LedgerInterface (..) , chainDBLedgerInterface + -- * Validation , RevalidateTxsResult (..) , computeSnapshot , revalidateTxsFor , validateNewTransaction + -- * Tracing , TraceEventMempool (..) 
+ -- * Conversions , snapshotFromIS + -- * Ticking a ledger state , tickLedgerState ) where -import Control.Concurrent.Class.MonadMVar (MVar, newMVar) -import Control.Concurrent.Class.MonadSTM.Strict.TMVar (newTMVarIO) -import Control.Monad.Trans.Except (runExcept) -import Control.Tracer -import qualified Data.Foldable as Foldable -import qualified Data.List.NonEmpty as NE -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Typeable -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended (ledgerState) -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Mempool.API -import Ouroboros.Consensus.Mempool.Capacity -import Ouroboros.Consensus.Mempool.TxSeq (TxSeq (..), TxTicket (..)) -import qualified Ouroboros.Consensus.Mempool.TxSeq as TxSeq -import Ouroboros.Consensus.Storage.ChainDB (ChainDB) -import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB -import Ouroboros.Consensus.Util.Enclose (EnclosingTimed) -import Ouroboros.Consensus.Util.IOLike hiding (newMVar) +import Control.Concurrent.Class.MonadMVar (MVar, newMVar) +import Control.Concurrent.Class.MonadSTM.Strict.TMVar (newTMVarIO) +import Control.Monad.Trans.Except (runExcept) +import Control.Tracer +import Data.Foldable qualified as Foldable +import Data.List.NonEmpty qualified as NE +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Typeable +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended (ledgerState) +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Mempool.API +import Ouroboros.Consensus.Mempool.Capacity +import 
Ouroboros.Consensus.Mempool.TxSeq (TxSeq (..), TxTicket (..)) +import Ouroboros.Consensus.Mempool.TxSeq qualified as TxSeq +import Ouroboros.Consensus.Storage.ChainDB (ChainDB) +import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB +import Ouroboros.Consensus.Util.Enclose (EnclosingTimed) +import Ouroboros.Consensus.Util.IOLike hiding (newMVar) {------------------------------------------------------------------------------- Internal State -------------------------------------------------------------------------------} -- | Internal state in the mempool -data InternalState blk = IS { - -- | Transactions currently in the mempool - -- - -- NOTE: the total size of the transactions in 'isTxs' may exceed the - -- current capacity ('isCapacity'). When the capacity computed from the - -- ledger has shrunk, we don't remove transactions from the Mempool to - -- satisfy the new lower limit. We let the transactions get removed in - -- the normal way: by becoming invalid w.r.t. the updated ledger state. - -- We treat a Mempool /over/ capacity in the same way as a Mempool /at/ - -- capacity. - isTxs :: !(TxSeq (TxMeasure blk) (Validated (GenTx blk))) - - -- | The cached IDs of transactions currently in the mempool. - -- - -- This allows one to more quickly lookup transactions by ID from a - -- 'MempoolSnapshot' (see 'snapshotHasTx'). - -- - -- This should always be in-sync with the transactions in 'isTxs'. - , isTxIds :: !(Set (GenTxId blk)) - - -- | The cached set of keys needed for the transactions - -- currently in the mempool. - -- - -- INVARIANT: @'isTxKeys' == foldMap (getTransactionKeySets . txForgetValidated) $ toList 'isTxs'@ - , isTxKeys :: !(LedgerTables (LedgerState blk) KeysMK) - - -- | The cached values corresponding to reading 'isTxKeys' at - -- 'isLedgerState'. These values can be used unless we switch to - -- a different ledger state. 
It usually happens in the forging - -- loop that the same ledger state that was in 'isLedgerState' - -- is used, but ticked to a different slot so we can reuse these - -- values. - -- - -- INVARIANT: 'isTxValues' should be equal to @getForkerAtTarget ... 'isLedgerState' >>= \f -> forkerReadTables f isTxKeys@ - , isTxValues :: !(LedgerTables (LedgerState blk) ValuesMK) - -- | The cached ledger state after applying the transactions in the - -- Mempool against the chain's ledger state. New transactions will be - -- validated against this ledger. - -- - -- INVARIANT: 'isLedgerState' is the ledger resulting from applying the - -- transactions in 'isTxs' against the ledger identified 'isTip' as tip. - , isLedgerState :: !(TickedLedgerState blk DiffMK) - - -- | The tip of the chain that 'isTxs' was validated against - , isTip :: !(Point blk) - - -- | The most recent 'SlotNo' that 'isTxs' was validated against - -- - -- Note in particular that if the mempool is revalidated against a state S - -- at slot s, then the state will be ticked (for now to the successor - -- slot, see 'tickLedgerState') and 'isSlotNo' will be set to @succ s@, - -- which is different from the slot of the original ledger state, which - -- will remain in 'isTip'. - , isSlotNo :: !SlotNo - - -- | The mempool 'TicketNo' counter. - -- - -- See 'vrLastTicketNo' for more information. - , isLastTicketNo :: !TicketNo - - -- | Current maximum capacity of the Mempool. Result of - -- 'computeMempoolCapacity' using the current chain's - -- 'TickedLedgerState'. - -- - -- NOTE: this does not correspond to 'isLedgerState', which is the - -- 'TickedLedgerState' /after/ applying the transactions in the Mempool. - -- There might be a transaction in the Mempool triggering a change in - -- the maximum transaction capacity of a block, which would change the - -- Mempool's capacity (unless overridden). We don't want the Mempool's - -- capacity to depend on its contents. 
The mempool is assuming /all/ its - -- transactions will be in the next block. So any changes caused by that - -- block will take effect after applying it and will only affect the - -- next block. , isCapacity :: !(TxMeasure blk) } deriving (Generic) -deriving instance ( NoThunks (Validated (GenTx blk)) , NoThunks (GenTxId blk) , NoThunks (TickedLedgerState blk DiffMK) , NoThunks (TxIn (LedgerState blk)) , NoThunks (TxOut (LedgerState blk)) , NoThunks (TxMeasure blk) , StandardHash blk , Typeable blk ) => NoThunks (InternalState blk) +data InternalState blk = IS + { isTxs :: !(TxSeq (TxMeasure blk) (Validated (GenTx blk))) + -- ^ Transactions currently in the mempool + -- + -- NOTE: the total size of the transactions in 'isTxs' may exceed the + -- current capacity ('isCapacity'). When the capacity computed from the + -- ledger has shrunk, we don't remove transactions from the Mempool to + -- satisfy the new lower limit. We let the transactions get removed in + -- the normal way: by becoming invalid w.r.t. the updated ledger state. + -- We treat a Mempool /over/ capacity in the same way as a Mempool /at/ + -- capacity. + , isTxIds :: !(Set (GenTxId blk)) + -- ^ The cached IDs of transactions currently in the mempool. + -- + -- This allows one to more quickly look up transactions by ID from a + -- 'MempoolSnapshot' (see 'snapshotHasTx'). + -- + -- This should always be in sync with the transactions in 'isTxs'. + , isTxKeys :: !(LedgerTables (LedgerState blk) KeysMK) + -- ^ The cached set of keys needed for the transactions + -- currently in the mempool. + -- + -- INVARIANT: @'isTxKeys' == foldMap (getTransactionKeySets . txForgetValidated) $ toList 'isTxs'@ + , isTxValues :: !(LedgerTables (LedgerState blk) ValuesMK) + -- ^ The cached values corresponding to reading 'isTxKeys' at + -- 'isLedgerState'. These values can be used unless we switch to + -- a different ledger state.
It usually happens in the forging + -- loop that the same ledger state that was in 'isLedgerState' + -- is used, but ticked to a different slot so we can reuse these + -- values. + -- + -- INVARIANT: 'isTxValues' should be equal to @getForkerAtTarget ... 'isLedgerState' >>= \f -> forkerReadTables f isTxKeys@ + , isLedgerState :: !(TickedLedgerState blk DiffMK) + -- ^ The cached ledger state after applying the transactions in the + -- Mempool against the chain's ledger state. New transactions will be + -- validated against this ledger. + -- + -- INVARIANT: 'isLedgerState' is the ledger resulting from applying the + -- transactions in 'isTxs' against the ledger identified by 'isTip' as tip. + , isTip :: !(Point blk) + -- ^ The tip of the chain that 'isTxs' was validated against + , isSlotNo :: !SlotNo + -- ^ The most recent 'SlotNo' that 'isTxs' was validated against + -- + -- Note in particular that if the mempool is revalidated against a state S + -- at slot s, then the state will be ticked (for now to the successor + -- slot, see 'tickLedgerState') and 'isSlotNo' will be set to @succ s@, + -- which is different from the slot of the original ledger state, which + -- will remain in 'isTip'. + , isLastTicketNo :: !TicketNo + -- ^ The mempool 'TicketNo' counter. + -- + -- See 'vrLastTicketNo' for more information. + , isCapacity :: !(TxMeasure blk) + -- ^ Current maximum capacity of the Mempool. Result of + -- 'computeMempoolCapacity' using the current chain's + -- 'TickedLedgerState'. + -- + -- NOTE: this does not correspond to 'isLedgerState', which is the + -- 'TickedLedgerState' /after/ applying the transactions in the Mempool. + -- There might be a transaction in the Mempool triggering a change in + -- the maximum transaction capacity of a block, which would change the + -- Mempool's capacity (unless overridden). We don't want the Mempool's + -- capacity to depend on its contents. The mempool is assuming /all/ its + -- transactions will be in the next block.
So any changes caused by that + -- block will take effect after applying it and will only affect the + -- next block. + } + deriving Generic + +deriving instance + ( NoThunks (Validated (GenTx blk)) + , NoThunks (GenTxId blk) + , NoThunks (TickedLedgerState blk DiffMK) + , NoThunks (TxIn (LedgerState blk)) + , NoThunks (TxOut (LedgerState blk)) + , NoThunks (TxMeasure blk) + , StandardHash blk + , Typeable blk + ) => + NoThunks (InternalState blk) -- | \( O(1) \). Return the number of transactions in the internal state of -- the Mempool paired with their total size in bytes. isMempoolSize :: TxLimits blk => InternalState blk -> MempoolSize -isMempoolSize is = MempoolSize { - msNumTxs = fromIntegral $ length $ isTxs is - , msNumBytes = txMeasureByteSize $ TxSeq.toSize $ isTxs is - } +isMempoolSize is = + MempoolSize + { msNumTxs = fromIntegral $ length $ isTxs is + , msNumBytes = txMeasureByteSize $ TxSeq.toSize $ isTxs is + } initInternalState :: - LedgerSupportsMempool blk - => MempoolCapacityBytesOverride - -> TicketNo -- ^ Used for 'isLastTicketNo' - -> LedgerConfig blk - -> SlotNo - -> TickedLedgerState blk DiffMK - -> InternalState blk -initInternalState capacityOverride lastTicketNo cfg slot st = IS { - isTxs = TxSeq.Empty - , isTxIds = Set.empty - , isTxKeys = emptyLedgerTables - , isTxValues = emptyLedgerTables - , isLedgerState = st - , isTip = castPoint $ getTip st - , isSlotNo = slot + LedgerSupportsMempool blk => + MempoolCapacityBytesOverride -> + -- | Used for 'isLastTicketNo' + TicketNo -> + LedgerConfig blk -> + SlotNo -> + TickedLedgerState blk DiffMK -> + InternalState blk +initInternalState capacityOverride lastTicketNo cfg slot st = + IS + { isTxs = TxSeq.Empty + , isTxIds = Set.empty + , isTxKeys = emptyLedgerTables + , isTxValues = emptyLedgerTables + , isLedgerState = st + , isTip = castPoint $ getTip st + , isSlotNo = slot , isLastTicketNo = lastTicketNo - , isCapacity = computeMempoolCapacity cfg st capacityOverride + , isCapacity = 
computeMempoolCapacity cfg st capacityOverride } {------------------------------------------------------------------------------- @@ -184,21 +188,22 @@ initInternalState capacityOverride lastTicketNo cfg slot st = IS { -- | Abstract interface needed to run a Mempool. data LedgerInterface m blk = LedgerInterface - { -- | Get the current tip of the LedgerDB. - getCurrentLedgerState :: STM m (LedgerState blk EmptyMK) - -- | Get values at the given point on the chain. Returns Nothing if the - -- anchor moved or if the state is not found on the ledger db. - , getLedgerTablesAtFor - :: Point blk - -> LedgerTables (LedgerState blk) KeysMK - -> m (Maybe (LedgerTables (LedgerState blk) ValuesMK)) - } + { getCurrentLedgerState :: STM m (LedgerState blk EmptyMK) + -- ^ Get the current tip of the LedgerDB. + , getLedgerTablesAtFor :: + Point blk -> + LedgerTables (LedgerState blk) KeysMK -> + m (Maybe (LedgerTables (LedgerState blk) ValuesMK)) + -- ^ Get values at the given point on the chain. Returns Nothing if the + -- anchor moved or if the state is not found on the ledger db. + } -- | Create a 'LedgerInterface' from a 'ChainDB'. chainDBLedgerInterface :: - IOLike m - => ChainDB m blk -> LedgerInterface m blk -chainDBLedgerInterface chainDB = LedgerInterface + IOLike m => + ChainDB m blk -> LedgerInterface m blk +chainDBLedgerInterface chainDB = + LedgerInterface { getCurrentLedgerState = ledgerState <$> ChainDB.getCurrentLedger chainDB , getLedgerTablesAtFor = \pt keys -> @@ -212,39 +217,42 @@ chainDBLedgerInterface chainDB = LedgerInterface -- | The mempool environment captures all the associated variables wrt the -- Mempool and is accessed by the Mempool interface on demand to perform the -- different operations. 
-data MempoolEnv m blk = MempoolEnv { - mpEnvLedger :: LedgerInterface m blk - , mpEnvLedgerCfg :: LedgerConfig blk - , mpEnvStateVar :: StrictTMVar m (InternalState blk) - , mpEnvAddTxsRemoteFifo :: MVar m () - , mpEnvAddTxsAllFifo :: MVar m () - , mpEnvTracer :: Tracer m (TraceEventMempool blk) - , mpEnvCapacityOverride :: MempoolCapacityBytesOverride - } +data MempoolEnv m blk = MempoolEnv + { mpEnvLedger :: LedgerInterface m blk + , mpEnvLedgerCfg :: LedgerConfig blk + , mpEnvStateVar :: StrictTMVar m (InternalState blk) + , mpEnvAddTxsRemoteFifo :: MVar m () + , mpEnvAddTxsAllFifo :: MVar m () + , mpEnvTracer :: Tracer m (TraceEventMempool blk) + , mpEnvCapacityOverride :: MempoolCapacityBytesOverride + } -initMempoolEnv :: ( IOLike m - , LedgerSupportsMempool blk - , ValidateEnvelope blk - ) - => LedgerInterface m blk - -> LedgerConfig blk - -> MempoolCapacityBytesOverride - -> Tracer m (TraceEventMempool blk) - -> m (MempoolEnv m blk) +initMempoolEnv :: + ( IOLike m + , LedgerSupportsMempool blk + , ValidateEnvelope blk + ) => + LedgerInterface m blk -> + LedgerConfig blk -> + MempoolCapacityBytesOverride -> + Tracer m (TraceEventMempool blk) -> + m (MempoolEnv m blk) initMempoolEnv ledgerInterface cfg capacityOverride tracer = do - st <- atomically $ getCurrentLedgerState ledgerInterface - let (slot, st') = tickLedgerState cfg (ForgeInUnknownSlot st) - isVar <- newTMVarIO - $ initInternalState capacityOverride TxSeq.zeroTicketNo cfg slot st' - addTxRemoteFifo <- newMVar () - addTxAllFifo <- newMVar () - return MempoolEnv - { mpEnvLedger = ledgerInterface - , mpEnvLedgerCfg = cfg - , mpEnvStateVar = isVar + st <- atomically $ getCurrentLedgerState ledgerInterface + let (slot, st') = tickLedgerState cfg (ForgeInUnknownSlot st) + isVar <- + newTMVarIO $ + initInternalState capacityOverride TxSeq.zeroTicketNo cfg slot st' + addTxRemoteFifo <- newMVar () + addTxAllFifo <- newMVar () + return + MempoolEnv + { mpEnvLedger = ledgerInterface + , mpEnvLedgerCfg = cfg 
+ , mpEnvStateVar = isVar , mpEnvAddTxsRemoteFifo = addTxRemoteFifo - , mpEnvAddTxsAllFifo = addTxAllFifo - , mpEnvTracer = tracer + , mpEnvAddTxsAllFifo = addTxAllFifo + , mpEnvTracer = tracer , mpEnvCapacityOverride = capacityOverride } @@ -254,24 +262,25 @@ initMempoolEnv ledgerInterface cfg capacityOverride tracer = do -- | Tick the 'LedgerState' using the given 'BlockSlot'. tickLedgerState :: - forall blk. (UpdateLedger blk, ValidateEnvelope blk) - => LedgerConfig blk - -> ForgeLedgerState blk - -> (SlotNo, TickedLedgerState blk DiffMK) + forall blk. + (UpdateLedger blk, ValidateEnvelope blk) => + LedgerConfig blk -> + ForgeLedgerState blk -> + (SlotNo, TickedLedgerState blk DiffMK) tickLedgerState _cfg (ForgeInKnownSlot slot st) = (slot, st) tickLedgerState cfg (ForgeInUnknownSlot st) = - (slot, applyChainTick OmitLedgerEvents cfg slot st) - where - -- Optimistically assume that the transactions will be included in a block - -- in the next available slot - -- - -- TODO: We should use time here instead - -- - -- Once we do, the ValidateEnvelope constraint can go. - slot :: SlotNo - slot = case ledgerTipSlot st of - Origin -> minimumPossibleSlotNo (Proxy @blk) - NotOrigin s -> succ s + (slot, applyChainTick OmitLedgerEvents cfg slot st) + where + -- Optimistically assume that the transactions will be included in a block + -- in the next available slot + -- + -- TODO: We should use time here instead + -- + -- Once we do, the ValidateEnvelope constraint can go. + slot :: SlotNo + slot = case ledgerTipSlot st of + Origin -> minimumPossibleSlotNo (Proxy @blk) + NotOrigin s -> succ s {------------------------------------------------------------------------------- Validation @@ -279,48 +288,49 @@ tickLedgerState cfg (ForgeInUnknownSlot st) = -- | Extend 'InternalState' with a new transaction (one which we have not -- previously validated) that may or may not be valid in this ledger state. 
-validateNewTransaction - :: (LedgerSupportsMempool blk, HasTxId (GenTx blk)) - => LedgerConfig blk - -> WhetherToIntervene - -> GenTx blk - -> TxMeasure blk - -> LedgerTables (LedgerState blk) ValuesMK - -- ^ Values to cache if success - -> TickedLedgerState blk ValuesMK - -- ^ This state is the internal state with the tables for this transaction - -- advanced through the diffs in the internal state. One could think we can - -- create this value here, but it is needed for some other uses like calling - -- 'txMeasure' before this function. - -> InternalState blk - -> ( Either (ApplyTxErr blk) (Validated (GenTx blk)) , InternalState blk ) +validateNewTransaction :: + (LedgerSupportsMempool blk, HasTxId (GenTx blk)) => + LedgerConfig blk -> + WhetherToIntervene -> + GenTx blk -> + TxMeasure blk -> + -- | Values to cache if success + LedgerTables (LedgerState blk) ValuesMK -> + -- | This state is the internal state with the tables for this transaction + -- advanced through the diffs in the internal state. One might think we could + -- create this value here, but it is needed for some other uses like calling + -- 'txMeasure' before this function.
+ TickedLedgerState blk ValuesMK -> + InternalState blk -> + ( Either (ApplyTxErr blk) (Validated (GenTx blk)) + , InternalState blk + ) validateNewTransaction cfg wti tx txsz origValues st is = - case runExcept (applyTx cfg wti isSlotNo tx st) of - Left err -> ( Left err, is ) - Right (st', vtx) -> - ( Right vtx - , is { isTxs = isTxs :> TxTicket vtx nextTicketNo txsz - , isTxKeys = isTxKeys <> getTransactionKeySets tx - , isTxValues = ltliftA2 unionValues isTxValues origValues - , isTxIds = Set.insert (txId tx) isTxIds - , isLedgerState = prependMempoolDiffs isLedgerState st' - , isLastTicketNo = nextTicketNo - } - ) - where - IS { - isTxs - , isTxIds - , isTxKeys - , isTxValues - , isLedgerState - , isLastTicketNo - , isSlotNo - } = is - - nextTicketNo = succ isLastTicketNo + case runExcept (applyTx cfg wti isSlotNo tx st) of + Left err -> (Left err, is) + Right (st', vtx) -> + ( Right vtx + , is + { isTxs = isTxs :> TxTicket vtx nextTicketNo txsz + , isTxKeys = isTxKeys <> getTransactionKeySets tx + , isTxValues = ltliftA2 unionValues isTxValues origValues + , isTxIds = Set.insert (txId tx) isTxIds + , isLedgerState = prependMempoolDiffs isLedgerState st' + , isLastTicketNo = nextTicketNo + } + ) + where + IS + { isTxs + , isTxIds + , isTxKeys + , isTxValues + , isLedgerState + , isLastTicketNo + , isSlotNo + } = is + + nextTicketNo = succ isLastTicketNo -- | Revalidate the given transactions against the given ticked ledger state, -- producing a new 'InternalState'. @@ -329,140 +339,150 @@ validateNewTransaction cfg wti tx txsz origValues st is = -- transactions given to it were previously applied, for example if we are -- revalidating the whole set of transactions onto a new state, or if we remove -- some transactions and revalidate the remaining ones. 
-revalidateTxsFor
-  :: (LedgerSupportsMempool blk, HasTxId (GenTx blk))
-  => MempoolCapacityBytesOverride
-  -> LedgerConfig blk
-  -> SlotNo
-  -> TickedLedgerState blk DiffMK
-  -- ^ The ticked ledger state againt which txs will be revalidated
-  -> LedgerTables (LedgerState blk) ValuesMK
-  -- ^ The tables with all the inputs for the transactions
-  -> TicketNo -- ^ 'isLastTicketNo' and 'vrLastTicketNo'
-  -> [TxTicket (TxMeasure blk) (Validated (GenTx blk))]
-  -> RevalidateTxsResult blk
+revalidateTxsFor ::
+  (LedgerSupportsMempool blk, HasTxId (GenTx blk)) =>
+  MempoolCapacityBytesOverride ->
+  LedgerConfig blk ->
+  SlotNo ->
+  -- | The ticked ledger state against which txs will be revalidated
+  TickedLedgerState blk DiffMK ->
+  -- | The tables with all the inputs for the transactions
+  LedgerTables (LedgerState blk) ValuesMK ->
+  -- | 'isLastTicketNo' and 'vrLastTicketNo'
+  TicketNo ->
+  [TxTicket (TxMeasure blk) (Validated (GenTx blk))] ->
+  RevalidateTxsResult blk
 revalidateTxsFor capacityOverride cfg slot st values lastTicketNo txTickets =
   let theTxs = map wrap txTickets
       wrap = (\(TxTicket tx tk tz) -> (tx, (tk, tz)))
       unwrap = (\(tx, (tk, tz)) -> TxTicket tx tk tz)
 
       ReapplyTxsResult err val st' =
-        reapplyTxs ComputeDiffs cfg slot theTxs
-          $ applyMempoolDiffs
-              values
-              (Foldable.foldMap' (getTransactionKeySets . txForgetValidated . fst) theTxs)
-              st
+        reapplyTxs ComputeDiffs cfg slot theTxs $
+          applyMempoolDiffs
+            values
+            (Foldable.foldMap' (getTransactionKeySets . txForgetValidated . fst) theTxs)
+            st
 
       keys = Foldable.foldMap' (getTransactionKeySets . txForgetValidated . fst) val
-
-  in RevalidateTxsResult
-      (IS {
-          isTxs = TxSeq.fromList $ map unwrap val
-        , isTxIds = Set.fromList $ map (txId . txForgetValidated . fst) val
-        , isTxKeys = keys
-        , isTxValues = ltliftA2 restrictValuesMK values keys
-        , isLedgerState = trackingToDiffs st'
-        , isTip = castPoint $ getTip st
-        , isSlotNo = slot
-        , isLastTicketNo = lastTicketNo
-        , isCapacity = computeMempoolCapacity cfg st' capacityOverride
-        })
-      err
-
-data RevalidateTxsResult blk =
-  RevalidateTxsResult {
-      -- | The internal state after revalidation
-      newInternalState :: !(InternalState blk)
-      -- | The previously valid transactions that were now invalid
-    , removedTxs :: ![Invalidated blk]
-    }
+   in RevalidateTxsResult
+        ( IS
+            { isTxs = TxSeq.fromList $ map unwrap val
+            , isTxIds = Set.fromList $ map (txId . txForgetValidated . fst) val
+            , isTxKeys = keys
+            , isTxValues = ltliftA2 restrictValuesMK values keys
+            , isLedgerState = trackingToDiffs st'
+            , isTip = castPoint $ getTip st
+            , isSlotNo = slot
+            , isLastTicketNo = lastTicketNo
+            , isCapacity = computeMempoolCapacity cfg st' capacityOverride
+            }
+        )
+        err
+
+data RevalidateTxsResult blk
+  = RevalidateTxsResult
+  { newInternalState :: !(InternalState blk)
+  -- ^ The internal state after revalidation
+  , removedTxs :: ![Invalidated blk]
+  -- ^ The previously valid transactions that were now invalid
+  }
 
 -- | Compute snapshot is largely the same as revalidate the transactions
 -- but we ignore the diffs.
-computeSnapshot
-  :: (LedgerSupportsMempool blk, HasTxId (GenTx blk))
-  => MempoolCapacityBytesOverride
-  -> LedgerConfig blk
-  -> SlotNo
-  -> TickedLedgerState blk DiffMK
-  -- ^ The ticked ledger state againt which txs will be revalidated
-  -> LedgerTables (LedgerState blk) ValuesMK
-  -- ^ The tables with all the inputs for the transactions
-  -> TicketNo -- ^ 'isLastTicketNo' and 'vrLastTicketNo'
-  -> [TxTicket (TxMeasure blk) (Validated (GenTx blk))]
-  -> MempoolSnapshot blk
+computeSnapshot ::
+  (LedgerSupportsMempool blk, HasTxId (GenTx blk)) =>
+  MempoolCapacityBytesOverride ->
+  LedgerConfig blk ->
+  SlotNo ->
+  -- | The ticked ledger state against which txs will be revalidated
+  TickedLedgerState blk DiffMK ->
+  -- | The tables with all the inputs for the transactions
+  LedgerTables (LedgerState blk) ValuesMK ->
+  -- | 'isLastTicketNo' and 'vrLastTicketNo'
+  TicketNo ->
+  [TxTicket (TxMeasure blk) (Validated (GenTx blk))] ->
+  MempoolSnapshot blk
 computeSnapshot capacityOverride cfg slot st values lastTicketNo txTickets =
   let theTxs = map wrap txTickets
       wrap = (\(TxTicket tx tk tz) -> (tx, (tk, tz)))
       unwrap = (\(tx, (tk, tz)) -> TxTicket tx tk tz)
 
       ReapplyTxsResult _ val st' =
-        reapplyTxs IgnoreDiffs cfg slot theTxs
-          $ applyMempoolDiffs
-              values
-              (Foldable.foldMap' (getTransactionKeySets . txForgetValidated . fst) theTxs)
-              st
-
-  in snapshotFromIS $ IS {
-      isTxs = TxSeq.fromList $ map unwrap val
-    , isTxIds = Set.fromList $ map (txId . txForgetValidated . fst) val
-      -- These two can be empty since we don't need the resulting
-      -- values at all when making a snapshot, as we won't update
-      -- the internal state.
-    , isTxKeys = emptyLedgerTables
-    , isTxValues = emptyLedgerTables
-    , isLedgerState = trackingToDiffs st'
-    , isTip = castPoint $ getTip st
-    , isSlotNo = slot
-    , isLastTicketNo = lastTicketNo
-    , isCapacity = computeMempoolCapacity cfg st' capacityOverride
-    }
+        reapplyTxs IgnoreDiffs cfg slot theTxs $
+          applyMempoolDiffs
+            values
+            (Foldable.foldMap' (getTransactionKeySets . txForgetValidated . fst) theTxs)
+            st
+   in snapshotFromIS $
+        IS
+          { isTxs = TxSeq.fromList $ map unwrap val
+          , isTxIds = Set.fromList $ map (txId . txForgetValidated . fst) val
+          , -- These two can be empty since we don't need the resulting
            -- values at all when making a snapshot, as we won't update
            -- the internal state.
            isTxKeys = emptyLedgerTables
+          , isTxValues = emptyLedgerTables
+          , isLedgerState = trackingToDiffs st'
+          , isTip = castPoint $ getTip st
+          , isSlotNo = slot
+          , isLastTicketNo = lastTicketNo
+          , isCapacity = computeMempoolCapacity cfg st' capacityOverride
+          }
 
 {-------------------------------------------------------------------------------
   Conversions
 -------------------------------------------------------------------------------}
 
 -- | Create a Mempool Snapshot from a given Internal State of the mempool.
-snapshotFromIS :: forall blk.
-     (HasTxId (GenTx blk), TxLimits blk, GetTip (TickedLedgerState blk))
-  => InternalState blk
-  -> MempoolSnapshot blk
-snapshotFromIS is = MempoolSnapshot {
-      snapshotTxs         = implSnapshotGetTxs is
-    , snapshotTxsAfter    = implSnapshotGetTxsAfter is
-    , snapshotLookupTx    = implSnapshotGetTx is
-    , snapshotHasTx       = implSnapshotHasTx is
+snapshotFromIS ::
+  forall blk.
+  (HasTxId (GenTx blk), TxLimits blk, GetTip (TickedLedgerState blk)) =>
+  InternalState blk ->
+  MempoolSnapshot blk
+snapshotFromIS is =
+  MempoolSnapshot
+    { snapshotTxs = implSnapshotGetTxs is
+    , snapshotTxsAfter = implSnapshotGetTxsAfter is
+    , snapshotLookupTx = implSnapshotGetTx is
+    , snapshotHasTx = implSnapshotHasTx is
     , snapshotMempoolSize = implSnapshotGetMempoolSize is
-    , snapshotSlotNo      = isSlotNo is
-    , snapshotStateHash   = getTipHash $ isLedgerState is
-    , snapshotTake        = implSnapshotTake is
+    , snapshotSlotNo = isSlotNo is
+    , snapshotStateHash = getTipHash $ isLedgerState is
+    , snapshotTake = implSnapshotTake is
     }
  where
-    implSnapshotGetTxs :: InternalState blk
-                       -> [(Validated (GenTx blk), TicketNo, TxMeasure blk)]
+    implSnapshotGetTxs ::
+      InternalState blk ->
+      [(Validated (GenTx blk), TicketNo, TxMeasure blk)]
     implSnapshotGetTxs = flip implSnapshotGetTxsAfter TxSeq.zeroTicketNo
 
-    implSnapshotGetTxsAfter :: InternalState blk
-                            -> TicketNo
-                            -> [(Validated (GenTx blk), TicketNo, TxMeasure blk)]
+    implSnapshotGetTxsAfter ::
+      InternalState blk ->
+      TicketNo ->
+      [(Validated (GenTx blk), TicketNo, TxMeasure blk)]
     implSnapshotGetTxsAfter IS{isTxs} =
       TxSeq.toTuples . snd . TxSeq.splitAfterTicketNo isTxs
 
-    implSnapshotTake :: InternalState blk
-                     -> TxMeasure blk
-                     -> [Validated (GenTx blk)]
+    implSnapshotTake ::
+      InternalState blk ->
+      TxMeasure blk ->
+      [Validated (GenTx blk)]
     implSnapshotTake IS{isTxs} =
       map TxSeq.txTicketTx . TxSeq.toList . fst . TxSeq.splitAfterTxSize isTxs
 
-    implSnapshotGetTx :: InternalState blk
-                      -> TicketNo
-                      -> Maybe (Validated (GenTx blk))
+    implSnapshotGetTx ::
+      InternalState blk ->
+      TicketNo ->
+      Maybe (Validated (GenTx blk))
     implSnapshotGetTx IS{isTxs} = (isTxs `TxSeq.lookupByTicketNo`)
 
-    implSnapshotHasTx :: InternalState blk
-                      -> GenTxId blk
-                      -> Bool
+    implSnapshotHasTx ::
+      InternalState blk ->
+      GenTxId blk ->
+      Bool
     implSnapshotHasTx IS{isTxIds} = flip Set.member isTxIds
 
-    implSnapshotGetMempoolSize :: InternalState blk
-                               -> MempoolSize
+    implSnapshotGetMempoolSize ::
+      InternalState blk ->
+      MempoolSize
     implSnapshotGetMempoolSize = isMempoolSize
 
 {-------------------------------------------------------------------------------
@@ -472,71 +492,74 @@ snapshotFromIS is = MempoolSnapshot {
 -- | Events traced by the Mempool.
 data TraceEventMempool blk
   = TraceMempoolAddedTx
+      -- | New, valid transaction that was added to the Mempool.
       (Validated (GenTx blk))
-      -- ^ New, valid transaction that was added to the Mempool.
+      -- | The size of the Mempool before adding the transaction.
       MempoolSize
-      -- ^ The size of the Mempool before adding the transaction.
+      -- | The size of the Mempool after adding the transaction.
       MempoolSize
-      -- ^ The size of the Mempool after adding the transaction.
   | TraceMempoolRejectedTx
-      (GenTx blk)
-      -- ^ New, invalid transaction thas was rejected and thus not added to
+      -- | New, invalid transaction that was rejected and thus not added to
       -- the Mempool.
+      (GenTx blk)
+      -- | The reason for rejecting the transaction.
       (ApplyTxErr blk)
-      -- ^ The reason for rejecting the transaction.
+      -- | The current size of the Mempool.
       MempoolSize
-      -- ^ The current size of the Mempool.
   | TraceMempoolRemoveTxs
-      [(Validated (GenTx blk), ApplyTxErr blk)]
-      -- ^ Previously valid transactions that are no longer valid because of
+      -- | Previously valid transactions that are no longer valid because of
       -- changes in the ledger state (details are in the provided 'ApplyTxErr').
       -- These transactions have been removed from the Mempool.
+      [(Validated (GenTx blk), ApplyTxErr blk)]
+      -- | The current size of the Mempool.
       MempoolSize
-      -- ^ The current size of the Mempool.
   | TraceMempoolManuallyRemovedTxs
+      -- | Transactions that have been manually removed from the Mempool.
       (NE.NonEmpty (GenTxId blk))
-      -- ^ Transactions that have been manually removed from the Mempool.
-      [Validated (GenTx blk)]
-      -- ^ Previously valid transactions that are no longer valid because they
+      -- | Previously valid transactions that are no longer valid because they
       -- dependend on transactions that were manually removed from the
       -- Mempool. These transactions have also been removed from the Mempool.
       --
       -- This list shares not transactions with the list of manually removed
       -- transactions.
+      [Validated (GenTx blk)]
+      -- | The current size of the Mempool.
       MempoolSize
-      -- ^ The current size of the Mempool.
-  | TraceMempoolSynced
-    -- ^ Emitted when the mempool is adjusted after the tip has changed.
+  | -- | Emitted when the mempool is adjusted after the tip has changed.
+    TraceMempoolSynced
+      -- | How long the sync operation took.
       EnclosingTimed
-    -- ^ How long the sync operation took.
-
-    -- | A sync is not needed, as the point at the tip of the LedgerDB and the
-    -- point at the mempool are the same.
-  | TraceMempoolSyncNotNeeded (Point blk)
-    -- | We will try to add a transaction. Adding a transaction might need to
-    -- trigger a re-sync.
-  | TraceMempoolAttemptingAdd (GenTx blk)
-    -- | When adding a transaction, the ledger state in the mempool was found
-    -- in the LedgerDB, and therefore we can read values, even if it is not the
-    -- tip of the LedgerDB. An async re-sync will be performed eventually in
-    -- that case.
-  | TraceMempoolLedgerFound (Point blk)
-    -- | When adding a transaction, the ledger state in the mempool is gone
-    -- from the LedgerDB, so we cannot read values for the new
-    -- transaction. This forces an in-place re-sync.
-  | TraceMempoolLedgerNotFound (Point blk)
-  deriving (Generic)
-
-deriving instance ( Eq (GenTx blk)
-                  , Eq (Validated (GenTx blk))
-                  , Eq (GenTxId blk)
-                  , Eq (ApplyTxErr blk)
-                  , StandardHash blk
-                  ) => Eq (TraceEventMempool blk)
-
-deriving instance ( Show (GenTx blk)
-                  , Show (Validated (GenTx blk))
-                  , Show (GenTxId blk)
-                  , Show (ApplyTxErr blk)
-                  , StandardHash blk
-                  ) => Show (TraceEventMempool blk)
+  | -- | A sync is not needed, as the point at the tip of the LedgerDB and the
+    -- point at the mempool are the same.
+    TraceMempoolSyncNotNeeded (Point blk)
+  | -- | We will try to add a transaction. Adding a transaction might need to
+    -- trigger a re-sync.
+    TraceMempoolAttemptingAdd (GenTx blk)
+  | -- | When adding a transaction, the ledger state in the mempool was found
+    -- in the LedgerDB, and therefore we can read values, even if it is not the
+    -- tip of the LedgerDB. An async re-sync will be performed eventually in
+    -- that case.
+    TraceMempoolLedgerFound (Point blk)
+  | -- | When adding a transaction, the ledger state in the mempool is gone
+    -- from the LedgerDB, so we cannot read values for the new
+    -- transaction. This forces an in-place re-sync.
+    TraceMempoolLedgerNotFound (Point blk)
+  deriving Generic
+
+deriving instance
+  ( Eq (GenTx blk)
+  , Eq (Validated (GenTx blk))
+  , Eq (GenTxId blk)
+  , Eq (ApplyTxErr blk)
+  , StandardHash blk
+  ) =>
+  Eq (TraceEventMempool blk)
+
+deriving instance
+  ( Show (GenTx blk)
+  , Show (Validated (GenTx blk))
+  , Show (GenTxId blk)
+  , Show (ApplyTxErr blk)
+  , StandardHash blk
+  ) =>
+  Show (TraceEventMempool blk)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Init.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Init.hs
index 7e7738ef6a..a0c79c4cb0 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Init.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Init.hs
@@ -2,25 +2,25 @@
 {-# LANGUAGE ScopedTypeVariables #-}
 
 -- | Creating a mempool
-module Ouroboros.Consensus.Mempool.Init (
-    openMempool
+module Ouroboros.Consensus.Mempool.Init
+  ( openMempool
   , openMempoolWithoutSyncThread
   ) where
 
-import           Control.Monad (void)
-import           Control.ResourceRegistry
-import           Control.Tracer
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.HeaderValidation
-import           Ouroboros.Consensus.Ledger.Abstract
-import           Ouroboros.Consensus.Ledger.SupportsMempool
-import           Ouroboros.Consensus.Mempool.API (Mempool (..))
-import           Ouroboros.Consensus.Mempool.Capacity
-import           Ouroboros.Consensus.Mempool.Impl.Common
-import           Ouroboros.Consensus.Mempool.Query
-import           Ouroboros.Consensus.Mempool.Update
-import           Ouroboros.Consensus.Util.IOLike
-import           Ouroboros.Consensus.Util.STM
+import Control.Monad (void)
+import Control.ResourceRegistry
+import Control.Tracer
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.HeaderValidation
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.SupportsMempool
+import Ouroboros.Consensus.Mempool.API (Mempool (..))
+import Ouroboros.Consensus.Mempool.Capacity
+import Ouroboros.Consensus.Mempool.Impl.Common
+import Ouroboros.Consensus.Mempool.Query
+import Ouroboros.Consensus.Mempool.Update
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Consensus.Util.STM
 
 {-------------------------------------------------------------------------------
   Opening the mempool
@@ -29,52 +29,54 @@ import Ouroboros.Consensus.Util.STM
 -- | Create a @Mempool m blk@ in @m@ to manipulate the mempool. It will also
 -- fork a thread that syncs the mempool and the ledger when the ledger changes.
 openMempool ::
-     ( IOLike m
-     , LedgerSupportsMempool blk
-     , HasTxId (GenTx blk)
-     , ValidateEnvelope blk
-     )
-  => ResourceRegistry m
-  -> LedgerInterface m blk
-  -> LedgerConfig blk
-  -> MempoolCapacityBytesOverride
-  -> Tracer m (TraceEventMempool blk)
-  -> m (Mempool m blk)
+  ( IOLike m
+  , LedgerSupportsMempool blk
+  , HasTxId (GenTx blk)
+  , ValidateEnvelope blk
+  ) =>
+  ResourceRegistry m ->
+  LedgerInterface m blk ->
+  LedgerConfig blk ->
+  MempoolCapacityBytesOverride ->
+  Tracer m (TraceEventMempool blk) ->
+  m (Mempool m blk)
 openMempool registry ledger cfg capacityOverride tracer = do
-    env <- initMempoolEnv ledger cfg capacityOverride tracer
-    forkSyncStateOnTipPointChange registry env
-    return $ mkMempool env
+  env <- initMempoolEnv ledger cfg capacityOverride tracer
+  forkSyncStateOnTipPointChange registry env
+  return $ mkMempool env
 
 -- | Spawn a thread which syncs the 'Mempool' state whenever the 'LedgerState'
 -- changes.
-forkSyncStateOnTipPointChange :: forall m blk.
+forkSyncStateOnTipPointChange ::
+  forall m blk.
   ( IOLike m
   , LedgerSupportsMempool blk
   , HasTxId (GenTx blk)
   , ValidateEnvelope blk
-  )
-  => ResourceRegistry m
-  -> MempoolEnv m blk
-  -> m ()
+  ) =>
+  ResourceRegistry m ->
+  MempoolEnv m blk ->
+  m ()
 forkSyncStateOnTipPointChange registry menv =
-    void $ forkLinkedWatcher
+  void $
+    forkLinkedWatcher
       registry
       "Mempool.syncStateOnTipPointChange"
-      Watcher {
-          wFingerprint = id
-        , wInitial = Nothing
-        , wNotify = action
-        , wReader = getCurrentTip
+      Watcher
+        { wFingerprint = id
+        , wInitial = Nothing
+        , wNotify = action
+        , wReader = getCurrentTip
         }
-  where
-    action :: Point blk -> m ()
-    action _tipPoint =
-      void $ implSyncWithLedger menv
+ where
+  action :: Point blk -> m ()
+  action _tipPoint =
+    void $ implSyncWithLedger menv
 
-    -- Using the tip ('Point') allows for quicker equality checks
-    getCurrentTip :: STM m (Point blk)
-    getCurrentTip =
-      ledgerTipPoint
+  -- Using the tip ('Point') allows for quicker equality checks
+  getCurrentTip :: STM m (Point blk)
+  getCurrentTip =
+    ledgerTipPoint
       <$> getCurrentLedgerState (mpEnvLedger menv)
 
 -- | Unlike 'openMempool', this function does not fork a background thread
@@ -82,35 +84,36 @@ forkSyncStateOnTipPointChange registry menv =
 --
 -- Intended for testing purposes.
 openMempoolWithoutSyncThread ::
-     ( IOLike m
-     , LedgerSupportsMempool blk
-     , HasTxId (GenTx blk)
-     , ValidateEnvelope blk
-     )
-  => LedgerInterface m blk
-  -> LedgerConfig blk
-  -> MempoolCapacityBytesOverride
-  -> Tracer m (TraceEventMempool blk)
-  -> m (Mempool m blk)
+  ( IOLike m
+  , LedgerSupportsMempool blk
+  , HasTxId (GenTx blk)
+  , ValidateEnvelope blk
+  ) =>
+  LedgerInterface m blk ->
+  LedgerConfig blk ->
+  MempoolCapacityBytesOverride ->
+  Tracer m (TraceEventMempool blk) ->
+  m (Mempool m blk)
 openMempoolWithoutSyncThread ledger cfg capacityOverride tracer =
-    mkMempool <$> initMempoolEnv ledger cfg capacityOverride tracer
+  mkMempool <$> initMempoolEnv ledger cfg capacityOverride tracer
 
 mkMempool ::
-     ( IOLike m
-     , LedgerSupportsMempool blk
-     , HasTxId (GenTx blk)
-     , ValidateEnvelope blk
-     )
-  => MempoolEnv m blk -> Mempool m blk
-mkMempool mpEnv = Mempool
-    { addTx                = implAddTx mpEnv
+  ( IOLike m
+  , LedgerSupportsMempool blk
+  , HasTxId (GenTx blk)
+  , ValidateEnvelope blk
+  ) =>
+  MempoolEnv m blk -> Mempool m blk
+mkMempool mpEnv =
+  Mempool
+    { addTx = implAddTx mpEnv
     , removeTxsEvenIfValid = implRemoveTxsEvenIfValid mpEnv
-    , syncWithLedger       = implSyncWithLedger mpEnv
-    , getSnapshot          = snapshotFromIS <$> readTMVar istate
-    , getSnapshotFor       = implGetSnapshotFor mpEnv
-    , getCapacity          = isCapacity <$> readTMVar istate
+    , syncWithLedger = implSyncWithLedger mpEnv
+    , getSnapshot = snapshotFromIS <$> readTMVar istate
+    , getSnapshotFor = implGetSnapshotFor mpEnv
+    , getCapacity = isCapacity <$> readTMVar istate
     }
-  where
-    MempoolEnv {
-        mpEnvStateVar = istate
-      } = mpEnv
+ where
+  MempoolEnv
+    { mpEnvStateVar = istate
+    } = mpEnv
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Query.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Query.hs
index 134726da0e..9a7d43ab20 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Query.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Query.hs
@@ -3,31 +3,33 @@
 
 -- | Queries to the mempool
 module Ouroboros.Consensus.Mempool.Query (implGetSnapshotFor) where
 
-import           Ouroboros.Consensus.Block.Abstract
-import           Ouroboros.Consensus.Ledger.Abstract
-import           Ouroboros.Consensus.Ledger.SupportsMempool
-import           Ouroboros.Consensus.Mempool.API
-import           Ouroboros.Consensus.Mempool.Impl.Common
-import qualified Ouroboros.Consensus.Mempool.TxSeq as TxSeq
-import           Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Consensus.Block.Abstract
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.SupportsMempool
+import Ouroboros.Consensus.Mempool.API
+import Ouroboros.Consensus.Mempool.Impl.Common
+import Ouroboros.Consensus.Mempool.TxSeq qualified as TxSeq
+import Ouroboros.Consensus.Util.IOLike
 
 implGetSnapshotFor ::
-     ( IOLike m
-     , LedgerSupportsMempool blk
-     , HasTxId (GenTx blk)
-     )
-  => MempoolEnv m blk
-  -> SlotNo -- ^ Get snapshot for this slot number (usually the current slot)
-  -> TickedLedgerState blk DiffMK -- ^ The ledger state at which we want the
-                                  -- snapshot, ticked to @slot@.
-  -> (LedgerTables (LedgerState blk) KeysMK -> m (LedgerTables (LedgerState blk) ValuesMK))
-  -- ^ A function that returns values corresponding to the given keys for
-  -- the unticked ledger state.
-  -> m (MempoolSnapshot blk)
+  ( IOLike m
+  , LedgerSupportsMempool blk
+  , HasTxId (GenTx blk)
+  ) =>
+  MempoolEnv m blk ->
+  -- | Get snapshot for this slot number (usually the current slot)
+  SlotNo ->
+  -- | The ledger state at which we want the
+  -- snapshot, ticked to @slot@.
+  TickedLedgerState blk DiffMK ->
+  -- | A function that returns values corresponding to the given keys for
+  -- the unticked ledger state.
+  (LedgerTables (LedgerState blk) KeysMK -> m (LedgerTables (LedgerState blk) ValuesMK)) ->
+  m (MempoolSnapshot blk)
 implGetSnapshotFor mpEnv slot ticked readUntickedTables = do
   is <- atomically $ readTMVar istate
-  if pointHash (isTip is) == castHash (getTipHash ticked) &&
-     isSlotNo is == slot
+  if pointHash (isTip is) == castHash (getTipHash ticked)
+    && isSlotNo is == slot
     then
       -- We are looking for a snapshot exactly for the ledger state we already
      -- have cached, then just return it.
@@ -35,23 +37,24 @@ implGetSnapshotFor mpEnv slot ticked readUntickedTables = do
     else do
       values <-
         if pointHash (isTip is) == castHash (getTipHash ticked)
-        -- We are looking for a snapshot at the same state ticked
-        -- to a different slot, so we can reuse the cached values
-        then pure (isTxValues is)
-        -- We are looking for a snapshot at a different state, so we
-        -- need to read the values from the ledgerdb.
-        else readUntickedTables (isTxKeys is)
-      pure
-        $ computeSnapshot
-            capacityOverride
-            cfg
-            slot
-            ticked
-            values
-            (isLastTicketNo is)
-            (TxSeq.toList $ isTxs is)
+          -- We are looking for a snapshot at the same state ticked
+          -- to a different slot, so we can reuse the cached values
+          then pure (isTxValues is)
+          -- We are looking for a snapshot at a different state, so we
+          -- need to read the values from the ledgerdb.
+          else readUntickedTables (isTxKeys is)
+      pure $
+        computeSnapshot
+          capacityOverride
+          cfg
+          slot
+          ticked
+          values
+          (isLastTicketNo is)
+          (TxSeq.toList $ isTxs is)
  where
-    MempoolEnv { mpEnvStateVar         = istate
-               , mpEnvLedgerCfg        = cfg
-               , mpEnvCapacityOverride = capacityOverride
-               } = mpEnv
+  MempoolEnv
+    { mpEnvStateVar = istate
+    , mpEnvLedgerCfg = cfg
+    , mpEnvCapacityOverride = capacityOverride
+    } = mpEnv
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/TxSeq.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/TxSeq.hs
index 39180d323c..db94a3c0d3 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/TxSeq.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/TxSeq.hs
@@ -13,8 +13,8 @@
 --
 -- > import Ouroboros.Consensus.Mempool.TxSeq (TxSeq (..))
 -- > import qualified Ouroboros.Consensus.Mempool.TxSeq as TxSeq
-module Ouroboros.Consensus.Mempool.TxSeq (
-    TicketNo (..)
+module Ouroboros.Consensus.Mempool.TxSeq
+  ( TicketNo (..)
   , TxSeq (Empty, (:>), (:<))
   , TxTicket (..)
   , fromList
@@ -25,19 +25,20 @@ module Ouroboros.Consensus.Mempool.TxSeq (
   , toSize
   , toTuples
   , zeroTicketNo
+    -- * Reference implementations for testing
   , splitAfterTxSizeSpec
   ) where
 
-import           Control.Arrow ((***))
-import           Data.FingerTree.Strict (StrictFingerTree)
-import qualified Data.FingerTree.Strict as FingerTree
-import qualified Data.Foldable as Foldable
-import           Data.Measure (Measure)
-import qualified Data.Measure as Measure
-import           Data.Word (Word64)
-import           GHC.Generics (Generic)
-import           NoThunks.Class (NoThunks)
+import Control.Arrow ((***))
+import Data.FingerTree.Strict (StrictFingerTree)
+import Data.FingerTree.Strict qualified as FingerTree
+import Data.Foldable qualified as Foldable
+import Data.Measure (Measure)
+import Data.Measure qualified as Measure
+import Data.Word (Word64)
+import GHC.Generics (Generic)
+import NoThunks.Class (NoThunks)
 
 {-------------------------------------------------------------------------------
   Mempool transaction sequence as a finger tree
@@ -45,7 +46,6 @@ import NoThunks.Class (NoThunks)
 
 -- | We allocate each transaction a (monotonically increasing) ticket number
 -- as it enters the mempool.
---
 newtype TicketNo = TicketNo Word64
   deriving stock (Eq, Ord, Show)
   deriving newtype (Enum, Bounded, NoThunks)
@@ -56,15 +56,15 @@ zeroTicketNo = TicketNo 0
 
 -- | We associate transactions in the mempool with their ticket number and
 -- size in bytes.
---
 data TxTicket sz tx = TxTicket
-  { txTicketTx   :: !tx
-    -- ^ The transaction associated with this ticket.
-  , txTicketNo   :: !TicketNo
-    -- ^ The ticket number.
+  { txTicketTx :: !tx
+  -- ^ The transaction associated with this ticket.
+  , txTicketNo :: !TicketNo
+  -- ^ The ticket number.
   , txTicketSize :: !sz
-    -- ^ The size of 'txTicketTx'.
-  } deriving (Eq, Show, Generic, NoThunks)
+  -- ^ The size of 'txTicketTx'.
+  }
+  deriving (Eq, Show, Generic, NoThunks)
 
 -- | The mempool is a sequence of transactions with their ticket numbers and
 -- size in bytes.
@@ -81,16 +81,15 @@ data TxTicket sz tx = TxTicket
 -- The mempool sequence is represented by a fingertree. We use a fingertree
 -- measure to allow not just normal sequence operations but also efficient
 -- splitting and indexing by the ticket number.
---
-newtype TxSeq sz tx =
-  TxSeq (StrictFingerTree (TxSeqMeasure sz) (TxTicket sz tx))
-  deriving stock (Show)
-  deriving newtype (NoThunks)
+newtype TxSeq sz tx
+  = TxSeq (StrictFingerTree (TxSeqMeasure sz) (TxTicket sz tx))
+  deriving stock Show
+  deriving newtype NoThunks
 
 instance Measure sz => Foldable (TxSeq sz) where
   foldMap f (TxSeq txs) = Foldable.foldMap (f . txTicketTx) txs
-  null   (TxSeq txs) = Foldable.null txs
-  length (TxSeq txs) = mCount $ FingerTree.measure txs
+  null (TxSeq txs) = Foldable.null txs
+  length (TxSeq txs) = mCount $ FingerTree.measure txs
 
 -- | The 'StrictFingerTree' relies on a \"measure\" for subsequences in the
 -- tree. A measure of the size of the subsequence allows for efficient
@@ -102,74 +101,73 @@ instance Measure sz => Foldable (TxSeq sz) where
 -- to measure individual elements of the sequence (i.e. 'TxTicket's), via a
 -- 'Measured' instance, and also a way to combine the measures, via a 'Monoid'
 -- instance.
---
-data TxSeqMeasure sz = TxSeqMeasure {
-    mCount     :: !Int,
-    mMinTicket :: !TicketNo,
-    mMaxTicket :: !TicketNo,
-    mSize      :: !sz
-  }
+data TxSeqMeasure sz = TxSeqMeasure
+  { mCount :: !Int
+  , mMinTicket :: !TicketNo
+  , mMaxTicket :: !TicketNo
+  , mSize :: !sz
+  }
   deriving Show
 
 instance Measure sz => FingerTree.Measured (TxSeqMeasure sz) (TxTicket sz tx) where
-  measure ticket = TxSeqMeasure {
-        mCount     = 1
-      , mMinTicket = txTicketNo
-      , mMaxTicket = txTicketNo
-      , mSize      = txTicketSize
-      }
-    where
-      TxTicket{txTicketNo, txTicketSize} = ticket
+  measure ticket =
+    TxSeqMeasure
+      { mCount = 1
+      , mMinTicket = txTicketNo
+      , mMaxTicket = txTicketNo
+      , mSize = txTicketSize
+      }
+   where
+    TxTicket{txTicketNo, txTicketSize} = ticket
 
 instance Measure sz => Semigroup (TxSeqMeasure sz) where
-  vl <> vr = TxSeqMeasure
-               (mCount vl + mCount vr)
-               (mMinTicket vl `min` mMinTicket vr)
-               (mMaxTicket vl `max` mMaxTicket vr)
-               (mSize vl `Measure.plus` mSize vr)
+  vl <> vr =
+    TxSeqMeasure
+      (mCount vl + mCount vr)
+      (mMinTicket vl `min` mMinTicket vr)
+      (mMaxTicket vl `max` mMaxTicket vr)
+      (mSize vl `Measure.plus` mSize vr)
 
 instance Measure sz => Monoid (TxSeqMeasure sz) where
-  mempty = TxSeqMeasure {
-      mCount     = 0
-    , mMinTicket = maxBound -- note the inversion!
+  mempty =
+    TxSeqMeasure
+      { mCount = 0
+      , mMinTicket = maxBound -- note the inversion!
       , mMaxTicket = minBound
-    , mSize      = Measure.zero
+      , mSize = Measure.zero
       }
   mappend = (<>)
 
 -- | A helper function for the ':>' pattern.
---
 viewBack :: Measure sz => TxSeq sz tx -> Maybe (TxSeq sz tx, TxTicket sz tx)
 viewBack (TxSeq txs) = case FingerTree.viewr txs of
-  FingerTree.EmptyR     -> Nothing
-  txs' FingerTree.:> tx -> Just (TxSeq txs', tx)
+  FingerTree.EmptyR -> Nothing
+  txs' FingerTree.:> tx -> Just (TxSeq txs', tx)
 
 -- | A helper function for the ':<' pattern.
---
 viewFront :: Measure sz => TxSeq sz tx -> Maybe (TxTicket sz tx, TxSeq sz tx)
 viewFront (TxSeq txs) = case FingerTree.viewl txs of
-  FingerTree.EmptyL     -> Nothing
-  tx FingerTree.:< txs' -> Just (tx, TxSeq txs')
+  FingerTree.EmptyL -> Nothing
+  tx FingerTree.:< txs' -> Just (tx, TxSeq txs')
 
 -- | An empty mempool sequence.
---
 pattern Empty :: Measure sz => TxSeq sz tx
-pattern Empty <- (viewFront -> Nothing) where
-  Empty = TxSeq FingerTree.empty
+pattern Empty <- (viewFront -> Nothing)
+  where
+    Empty = TxSeq FingerTree.empty
 
 -- | \( O(1) \). Access or add a tx at the back of the mempool sequence.
 --
 -- New txs are always added at the back.
---
 pattern (:>) :: Measure sz => TxSeq sz tx -> TxTicket sz tx -> TxSeq sz tx
-pattern txs :> tx <- (viewBack -> Just (txs, tx)) where
-  TxSeq txs :> tx = TxSeq (txs FingerTree.|> tx) --TODO: assert ordered by ticket no
+pattern txs :> tx <- (viewBack -> Just (txs, tx))
+  where
+    TxSeq txs :> tx = TxSeq (txs FingerTree.|> tx) -- TODO: assert ordered by ticket no
 
 -- | \( O(1) \). Access a tx at the front of the mempool sequence.
 --
 -- Note that we never add txs at the front. We access txs from front to back
 -- when forwarding txs to other peers, or when adding txs to blocks.
---
 pattern (:<) :: Measure sz => TxTicket sz tx -> TxSeq sz tx -> TxSeq sz tx
 pattern tx :< txs <- (viewFront -> Just (tx, txs))
 
@@ -179,41 +177,42 @@ infixl 5 :>, :<
 {-# COMPLETE Empty, (:<) #-}
 
 -- | \( O(\log(n)) \). Look up a transaction in the sequence by its 'TicketNo'.
---
 lookupByTicketNo :: Measure sz => TxSeq sz tx -> TicketNo -> Maybe tx
 lookupByTicketNo (TxSeq txs) n =
-  case FingerTree.search (\ml mr -> mMaxTicket ml >= n
-                                 && mMinTicket mr > n) txs of
-    FingerTree.Position _ (TxTicket tx n' _) _ | n' == n -> Just tx
-    _ -> Nothing
+  case FingerTree.search
+    ( \ml mr ->
+        mMaxTicket ml >= n
+          && mMinTicket mr > n
+    )
+    txs of
+    FingerTree.Position _ (TxTicket tx n' _) _ | n' == n -> Just tx
+    _ -> Nothing
 
 -- | \( O(\log(n)) \). Split the sequence of transactions into two parts
 -- based on the given 'TicketNo'. The first part has transactions with tickets
 -- less than or equal to the given ticket, and the second part has transactions
 -- with tickets strictly greater than the given ticket.
---
 splitAfterTicketNo ::
-     Measure sz
-  => TxSeq sz tx
-  -> TicketNo
-  -> (TxSeq sz tx, TxSeq sz tx)
+  Measure sz =>
+  TxSeq sz tx ->
+  TicketNo ->
+  (TxSeq sz tx, TxSeq sz tx)
 splitAfterTicketNo (TxSeq txs) n =
-    case FingerTree.split (\m -> mMaxTicket m > n) txs of
-      (l, r) -> (TxSeq l, TxSeq r)
+  case FingerTree.split (\m -> mMaxTicket m > n) txs of
+    (l, r) -> (TxSeq l, TxSeq r)
 
 -- | \( O(\log(n)) \). Split the sequence of transactions into two parts based
 -- on the given @sz@. The first part has transactions whose summed @sz@ is less
 -- than or equal to the given @sz@, and the second part has the remaining
 -- transactions in the sequence.
---
 splitAfterTxSize ::
-     Measure sz
-  => TxSeq sz tx
-  -> sz
-  -> (TxSeq sz tx, TxSeq sz tx)
+  Measure sz =>
+  TxSeq sz tx ->
+  sz ->
+  (TxSeq sz tx, TxSeq sz tx)
 splitAfterTxSize (TxSeq txs) n =
-    case FingerTree.split (\m -> not $ mSize m Measure.<= n) txs of
-      (l, r) -> (TxSeq l, TxSeq r)
+  case FingerTree.split (\m -> not $ mSize m Measure.<= n) txs of
+    (l, r) -> (TxSeq l, TxSeq r)
 
 -- | \( O(n) \). Specification of 'splitAfterTxSize'.
 --
@@ -221,29 +220,31 @@ splitAfterTxSize (TxSeq txs) n =
 --
 -- This function is used to verify whether 'splitAfterTxSize' behaves as
 -- expected.
-splitAfterTxSizeSpec :: forall sz tx.
-     Measure sz
-  => TxSeq sz tx
-  -> sz
-  -> (TxSeq sz tx, TxSeq sz tx)
+splitAfterTxSizeSpec ::
+  forall sz tx.
+  Measure sz =>
+  TxSeq sz tx ->
+  sz ->
+  (TxSeq sz tx, TxSeq sz tx)
 splitAfterTxSizeSpec txseq n =
-    (fromList *** fromList)
-  $ go Measure.zero []
-  $ toList txseq
-  where
-    go :: sz
-       -> [TxTicket sz tx]
-       -> [TxTicket sz tx]
-       -> ([TxTicket sz tx], [TxTicket sz tx])
-    go accSize accTickets = \case
-      []
-        -> (reverse accTickets, [])
-      t:ts
-        | let accSize' = accSize `Measure.plus` txTicketSize t
-        , accSize' Measure.<= n
-        -> go accSize' (t:accTickets) ts
-        | otherwise
-        -> (reverse accTickets, t:ts)
+  (fromList *** fromList) $
+    go Measure.zero [] $
+      toList txseq
+ where
+  go ::
+    sz ->
+    [TxTicket sz tx] ->
+    [TxTicket sz tx] ->
+    ([TxTicket sz tx], [TxTicket sz tx])
+  go accSize accTickets = \case
+    [] ->
+      (reverse accTickets, [])
+    t : ts
+      | let accSize' = accSize `Measure.plus` txTicketSize t
+      , accSize' Measure.<= n ->
+          go accSize' (t : accTickets) ts
+      | otherwise ->
+          (reverse accTickets, t : ts)
 
 -- | Given a list of 'TxTicket's, construct a 'TxSeq'.
 fromList :: Measure sz => [TxTicket sz tx] -> TxSeq sz tx
@@ -256,17 +257,18 @@ toList (TxSeq ftree) = Foldable.toList ftree
 -- | Convert a 'TxSeq' to a list of pairs of transactions and their
 -- associated 'TicketNo's and sizes.
 toTuples :: TxSeq sz tx -> [(tx, TicketNo, sz)]
-toTuples (TxSeq ftree) = fmap
-    (\ticket ->
-      ( txTicketTx ticket
-      , txTicketNo ticket
-      , txTicketSize ticket
-      )
+toTuples (TxSeq ftree) =
+  fmap
+    ( \ticket ->
+        ( txTicketTx ticket
+        , txTicketNo ticket
+        , txTicketSize ticket
+        )
     )
     (Foldable.toList ftree)
 
 -- | \( O(1) \). Return the total size of the given 'TxSeq'.
 toSize :: Measure sz => TxSeq sz tx -> sz
 toSize (TxSeq ftree) = mSize
-  where
-    TxSeqMeasure { mSize } = FingerTree.measure ftree
+ where
+  TxSeqMeasure{mSize} = FingerTree.measure ftree
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Update.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Update.hs
index 50557a061e..6709a544bc 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Update.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Mempool/Update.hs
@@ -3,37 +3,37 @@
 
 -- | Operations that update the mempool. They are internally divided in the pure
 -- and impure sides of the operation.
-module Ouroboros.Consensus.Mempool.Update (
-    implAddTx
+module Ouroboros.Consensus.Mempool.Update
+  ( implAddTx
   , implRemoveTxsEvenIfValid
   , implSyncWithLedger
   ) where
 
-import           Cardano.Slotting.Slot
-import           Control.Concurrent.Class.MonadMVar (withMVar)
-import           Control.Monad (void)
-import           Control.Monad.Except (runExcept)
-import           Control.Tracer
-import qualified Data.Foldable as Foldable
-import           Data.Functor.Contravariant ((>$<))
-import qualified Data.List.NonEmpty as NE
-import           Data.Maybe (fromMaybe)
-import qualified Data.Measure as Measure
-import qualified Data.Set as Set
-import           Data.Void
-import           Ouroboros.Consensus.HeaderValidation
-import           Ouroboros.Consensus.Ledger.Abstract
-import           Ouroboros.Consensus.Ledger.SupportsMempool
-import           Ouroboros.Consensus.Mempool.API
-import           Ouroboros.Consensus.Mempool.Capacity
-import           Ouroboros.Consensus.Mempool.Impl.Common
-import           Ouroboros.Consensus.Mempool.TxSeq (TxTicket (..))
-import qualified Ouroboros.Consensus.Mempool.TxSeq as TxSeq
-import           Ouroboros.Consensus.Util (whenJust)
-import           Ouroboros.Consensus.Util.Enclose
-import           Ouroboros.Consensus.Util.IOLike hiding (withMVar)
-import           Ouroboros.Consensus.Util.STM
-import           Ouroboros.Network.Block
+import Cardano.Slotting.Slot
+import Control.Concurrent.Class.MonadMVar
(withMVar) +import Control.Monad (void) +import Control.Monad.Except (runExcept) +import Control.Tracer +import Data.Foldable qualified as Foldable +import Data.Functor.Contravariant ((>$<)) +import Data.List.NonEmpty qualified as NE +import Data.Maybe (fromMaybe) +import Data.Measure qualified as Measure +import Data.Set qualified as Set +import Data.Void +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Mempool.API +import Ouroboros.Consensus.Mempool.Capacity +import Ouroboros.Consensus.Mempool.Impl.Common +import Ouroboros.Consensus.Mempool.TxSeq (TxTicket (..)) +import Ouroboros.Consensus.Mempool.TxSeq qualified as TxSeq +import Ouroboros.Consensus.Util (whenJust) +import Ouroboros.Consensus.Util.Enclose +import Ouroboros.Consensus.Util.IOLike hiding (withMVar) +import Ouroboros.Consensus.Util.STM +import Ouroboros.Network.Block {------------------------------------------------------------------------------- Add transactions @@ -41,84 +41,83 @@ import Ouroboros.Network.Block -- | Add a single transaction to the mempool, blocking if there is no space. implAddTx :: - ( IOLike m - , LedgerSupportsMempool blk - , ValidateEnvelope blk - , HasTxId (GenTx blk) - ) - => MempoolEnv m blk - -> AddTxOnBehalfOf - -- ^ Whether we're acting on behalf of a remote peer or a local client. - -> GenTx blk - -- ^ The transaction to add to the mempool. - -> m (MempoolAddTxResult blk) + ( IOLike m + , LedgerSupportsMempool blk + , ValidateEnvelope blk + , HasTxId (GenTx blk) + ) => + MempoolEnv m blk -> + -- | Whether we're acting on behalf of a remote peer or a local client. + AddTxOnBehalfOf -> + -- | The transaction to add to the mempool. + GenTx blk -> + m (MempoolAddTxResult blk) implAddTx mpEnv onbehalf tx = - -- To ensure fair behaviour between threads that are trying to add - -- transactions, we make them all queue in a fifo. 
Only the one at the head - -- of the queue gets to actually wait for space to get freed up in the - -- mempool. This avoids small transactions repeatedly squeezing in ahead of - -- larger transactions. - -- - -- The fifo behaviour is implemented using a simple MVar. And take this - -- MVar lock on a transaction by transaction basis. So if several threads - -- are each trying to add several transactions, then they'll interleave at - -- transaction granularity, not batches of transactions. - -- - -- To add back in a bit of deliberate unfairness, we want to prioritise - -- transactions being added on behalf of local clients, over ones being - -- added on behalf of remote peers. We do this by using a pair of mvar - -- fifos: remote peers must wait on both mvars, while local clients only - -- need to wait on the second. - case onbehalf of - AddTxForRemotePeer -> - withMVar remoteFifo $ \() -> + -- To ensure fair behaviour between threads that are trying to add + -- transactions, we make them all queue in a fifo. Only the one at the head + -- of the queue gets to actually wait for space to get freed up in the + -- mempool. This avoids small transactions repeatedly squeezing in ahead of + -- larger transactions. + -- + -- The fifo behaviour is implemented using a simple MVar. And take this + -- MVar lock on a transaction by transaction basis. So if several threads + -- are each trying to add several transactions, then they'll interleave at + -- transaction granularity, not batches of transactions. + -- + -- To add back in a bit of deliberate unfairness, we want to prioritise + -- transactions being added on behalf of local clients, over ones being + -- added on behalf of remote peers. We do this by using a pair of mvar + -- fifos: remote peers must wait on both mvars, while local clients only + -- need to wait on the second. + case onbehalf of + AddTxForRemotePeer -> + withMVar remoteFifo $ \() -> withMVar allFifo $ \() -> -- This action can also block. 
Holding the MVars means -- there is only a single such thread blocking at once. implAddTx' + AddTxForLocalClient -> + withMVar allFifo $ \() -> + -- As above but skip the first MVar fifo so we will get + -- service sooner if there's lots of other remote + -- threads waiting. + implAddTx' + where + MempoolEnv + { mpEnvAddTxsRemoteFifo = remoteFifo + , mpEnvAddTxsAllFifo = allFifo + , mpEnvTracer = trcr + } = mpEnv - AddTxForLocalClient -> - withMVar allFifo $ \() -> - -- As above but skip the first MVar fifo so we will get - -- service sooner if there's lots of other remote - -- threads waiting. - implAddTx' - where - MempoolEnv { - mpEnvAddTxsRemoteFifo = remoteFifo - , mpEnvAddTxsAllFifo = allFifo - , mpEnvTracer = trcr - } = mpEnv - - implAddTx' = do - TransactionProcessingResult _ result ev <- - doAddTx - mpEnv - (whetherToIntervene onbehalf) - tx - traceWith trcr ev - return result + implAddTx' = do + TransactionProcessingResult _ result ev <- + doAddTx + mpEnv + (whetherToIntervene onbehalf) + tx + traceWith trcr ev + return result - whetherToIntervene :: AddTxOnBehalfOf -> WhetherToIntervene - whetherToIntervene AddTxForRemotePeer = DoNotIntervene - whetherToIntervene AddTxForLocalClient = Intervene + whetherToIntervene :: AddTxOnBehalfOf -> WhetherToIntervene + whetherToIntervene AddTxForRemotePeer = DoNotIntervene + whetherToIntervene AddTxForLocalClient = Intervene -- | Tried to add a transaction, was it processed or is there no space left? -data TriedToAddTx blk = - -- | Adding the next transaction would put the mempool over capacity. +data TriedToAddTx blk + = -- | Adding the next transaction would put the mempool over capacity. NotEnoughSpaceLeft | Processed (TransactionProcessed blk) -- | The new state, if the transaction was accepted -data TransactionProcessed blk = - TransactionProcessingResult - (Maybe (InternalState blk)) - -- ^ If the transaction was accepted, the new state that can be written to - -- the TVar. 
- (MempoolAddTxResult blk) - -- ^ The result of trying to add the transaction to the mempool. - (TraceEventMempool blk) - -- ^ The event emitted by the operation. +data TransactionProcessed blk + = TransactionProcessingResult + -- | If the transaction was accepted, the new state that can be written to + -- the TVar. + (Maybe (InternalState blk)) + -- | The result of trying to add the transaction to the mempool. + (MempoolAddTxResult blk) + -- | The event emitted by the operation. + (TraceEventMempool blk) -- | This function returns whether the transaction was added or rejected, and -- will block if the mempool is full. @@ -141,175 +140,175 @@ data TransactionProcessed blk = -- INVARIANT: The code needs that read and writes on the state are coupled -- together or inconsistencies will arise. doAddTx :: - ( LedgerSupportsMempool blk - , HasTxId (GenTx blk) - , ValidateEnvelope blk - , IOLike m - ) - => MempoolEnv m blk - -> WhetherToIntervene - -> GenTx blk - -- ^ The transaction to add to the mempool. - -> m (TransactionProcessed blk) + ( LedgerSupportsMempool blk + , HasTxId (GenTx blk) + , ValidateEnvelope blk + , IOLike m + ) => + MempoolEnv m blk -> + WhetherToIntervene -> + -- | The transaction to add to the mempool. 
+ GenTx blk -> + m (TransactionProcessed blk) doAddTx mpEnv wti tx = - doAddTx' Nothing - where - MempoolEnv { - mpEnvLedger = ldgrInterface - , mpEnvLedgerCfg = cfg - , mpEnvStateVar = istate - , mpEnvTracer = trcr - } = mpEnv + doAddTx' Nothing + where + MempoolEnv + { mpEnvLedger = ldgrInterface + , mpEnvLedgerCfg = cfg + , mpEnvStateVar = istate + , mpEnvTracer = trcr + } = mpEnv - doAddTx' mbPrevSize = do - traceWith trcr $ TraceMempoolAttemptingAdd tx + doAddTx' mbPrevSize = do + traceWith trcr $ TraceMempoolAttemptingAdd tx - -- If retrying, wait until the mempool size changes before attempting to - -- add the tx again - let additionalCheck is = - case mbPrevSize of - Nothing -> pure () - Just prevSize -> check $ isMempoolSize is /= prevSize + -- If retrying, wait until the mempool size changes before attempting to + -- add the tx again + let additionalCheck is = + case mbPrevSize of + Nothing -> pure () + Just prevSize -> check $ isMempoolSize is /= prevSize - res <- withTMVarAnd istate additionalCheck - $ \is () -> do - mTbs <- getLedgerTablesAtFor ldgrInterface (isTip is) (getTransactionKeySets tx) - case mTbs of - Just tbs -> do - traceWith trcr $ TraceMempoolLedgerFound (isTip is) - case pureTryAddTx cfg wti tx is tbs of - NotEnoughSpaceLeft -> do - pure (Retry (isMempoolSize is), is) - Processed outcome@(TransactionProcessingResult is' _ _) -> do - pure (OK outcome, fromMaybe is is') - Nothing -> do - traceWith trcr $ TraceMempoolLedgerNotFound (isTip is) - -- We couldn't retrieve the values because the state is no longer on - -- the db. We need to resync. 
- pure (Resync, is) - case res of - Retry s' -> doAddTx' (Just s') - OK outcome -> pure outcome - Resync -> do - void $ implSyncWithLedger mpEnv - doAddTx' mbPrevSize + res <- withTMVarAnd istate additionalCheck $ + \is () -> do + mTbs <- getLedgerTablesAtFor ldgrInterface (isTip is) (getTransactionKeySets tx) + case mTbs of + Just tbs -> do + traceWith trcr $ TraceMempoolLedgerFound (isTip is) + case pureTryAddTx cfg wti tx is tbs of + NotEnoughSpaceLeft -> do + pure (Retry (isMempoolSize is), is) + Processed outcome@(TransactionProcessingResult is' _ _) -> do + pure (OK outcome, fromMaybe is is') + Nothing -> do + traceWith trcr $ TraceMempoolLedgerNotFound (isTip is) + -- We couldn't retrieve the values because the state is no longer on + -- the db. We need to resync. + pure (Resync, is) + case res of + Retry s' -> doAddTx' (Just s') + OK outcome -> pure outcome + Resync -> do + void $ implSyncWithLedger mpEnv + doAddTx' mbPrevSize -data WithTMVarOutcome retry ok = - Retry !retry +data WithTMVarOutcome retry ok + = Retry !retry | OK ok | Resync pureTryAddTx :: - ( LedgerSupportsMempool blk - , HasTxId (GenTx blk) - ) - => LedgerCfg (LedgerState blk) - -- ^ The ledger configuration. - -> WhetherToIntervene - -> GenTx blk - -- ^ The transaction to add to the mempool. - -> InternalState blk - -- ^ The current internal state of the mempool. - -> LedgerTables (LedgerState blk) ValuesMK - -> TriedToAddTx blk + ( LedgerSupportsMempool blk + , HasTxId (GenTx blk) + ) => + -- | The ledger configuration. + LedgerCfg (LedgerState blk) -> + WhetherToIntervene -> + -- | The transaction to add to the mempool. + GenTx blk -> + -- | The current internal state of the mempool. 
+ InternalState blk -> + LedgerTables (LedgerState blk) ValuesMK -> + TriedToAddTx blk pureTryAddTx cfg wti tx is values = - let st = applyMempoolDiffs values (getTransactionKeySets tx) (isLedgerState is) in - case runExcept $ txMeasure cfg st tx of - Left err -> - -- The transaction does not have a valid measure (eg its ExUnits is - -- greater than what this ledger state allows for a single transaction). - -- - -- It might seem simpler to remove the failure case from 'txMeasure' and - -- simply fully validate the tx before determining whether it'd fit in - -- the mempool; that way we could reject invalid txs ASAP. However, for a - -- valid tx, we'd pay that validation cost every time the node's - -- selection changed, even if the tx wouldn't fit. So it'd very much be - -- as if the mempool were effectively over capacity! What's worse, each - -- attempt would not be using 'extendVRPrevApplied'. - Processed $ TransactionProcessingResult - Nothing - (MempoolTxRejected tx err) - (TraceMempoolRejectedTx - tx - err - (isMempoolSize is) - ) - Right txsz - -- Check for overflow - -- - -- No measure of a transaction can ever be negative, so the only way - -- adding two measures could result in a smaller measure is if some - -- modular arithmetic overflowed. Also, overflow necessarily yields a - -- lesser result, since adding 'maxBound' is modularly equivalent to - -- subtracting one. Recall that we're checking each individual addition. - -- - -- We assume that the 'txMeasure' limit and the mempool capacity - -- 'isCapacity' are much smaller than the modulus, and so this should - -- never happen. Despite that, blocking until adding the transaction - -- doesn't overflow seems like a reasonable way to handle this case. - | not $ currentSize Measure.<= currentSize `Measure.plus` txsz - -> - NotEnoughSpaceLeft - -- We add the transaction if and only if it wouldn't overrun any component - -- of the mempool capacity. 
- -- - -- In the past, this condition was instead @TxSeq.toSize (isTxs is) < - -- isCapacity is@. Thus the effective capacity of the mempool was - -- actually one increment less than the reported capacity plus one - -- transaction. That subtlety's cost paid for two benefits. - -- - -- First, the absence of addition avoids a risk of overflow, since the - -- transaction's sizes (eg ExUnits) have not yet been bounded by - -- validation (which presumably enforces a low enough bound that any - -- reasonably-sized mempool would never overflow the representation's - -- 'maxBound'). - -- - -- Second, it is more fair, since it does not depend on the transaction - -- at all. EG a large transaction might struggle to win the race against - -- a firehose of tiny transactions. - -- - -- However, we prefer to avoid the subtlety. Overflow is handled by the - -- previous guard. And fairness is already ensured elsewhere (the 'MVar's - -- in 'implAddTx' --- which the "Test.Consensus.Mempool.Fairness" test - -- exercises). Moreover, the notion of "is under capacity" becomes - -- difficult to assess independently of the pending tx when the measure - -- is multi-dimensional; both typical options (any component is not full - -- or every component is not full) lead to some confusing behaviors - -- (denying some txs that would "obviously" fit and accepting some txs - -- that "obviously" don't, respectively). - -- - -- Even with the overflow handler, it's important that 'txMeasure' - -- returns a well-bounded result. Otherwise, if an adversarial tx arrived - -- that could't even fit in an empty mempool, then that thread would - -- never release the 'MVar'. In particular, we tacitly assume here that a - -- tx that wouldn't even fit in an empty mempool would be rejected by - -- 'txMeasure'. 
- | not $ currentSize `Measure.plus` txsz Measure.<= isCapacity is - -> - NotEnoughSpaceLeft - | otherwise - -> - case validateNewTransaction cfg wti tx txsz values st is of - (Left err, _) -> - Processed $ TransactionProcessingResult + let st = applyMempoolDiffs values (getTransactionKeySets tx) (isLedgerState is) + in case runExcept $ txMeasure cfg st tx of + Left err -> + -- The transaction does not have a valid measure (eg its ExUnits is + -- greater than what this ledger state allows for a single transaction). + -- + -- It might seem simpler to remove the failure case from 'txMeasure' and + -- simply fully validate the tx before determining whether it'd fit in + -- the mempool; that way we could reject invalid txs ASAP. However, for a + -- valid tx, we'd pay that validation cost every time the node's + -- selection changed, even if the tx wouldn't fit. So it'd very much be + -- as if the mempool were effectively over capacity! What's worse, each + -- attempt would not be using 'extendVRPrevApplied'. + Processed $ + TransactionProcessingResult Nothing (MempoolTxRejected tx err) - (TraceMempoolRejectedTx - tx - err - (isMempoolSize is) - ) - (Right vtx, is') -> - Processed $ TransactionProcessingResult - (Just is') - (MempoolTxAdded vtx) - (TraceMempoolAddedTx - vtx + ( TraceMempoolRejectedTx + tx + err (isMempoolSize is) - (isMempoolSize is') - ) - where - currentSize = TxSeq.toSize (isTxs is) + ) + Right txsz + -- Check for overflow + -- + -- No measure of a transaction can ever be negative, so the only way + -- adding two measures could result in a smaller measure is if some + -- modular arithmetic overflowed. Also, overflow necessarily yields a + -- lesser result, since adding 'maxBound' is modularly equivalent to + -- subtracting one. Recall that we're checking each individual addition. + -- + -- We assume that the 'txMeasure' limit and the mempool capacity + -- 'isCapacity' are much smaller than the modulus, and so this should + -- never happen. 
Despite that, blocking until adding the transaction + -- doesn't overflow seems like a reasonable way to handle this case. + | not $ currentSize Measure.<= currentSize `Measure.plus` txsz -> + NotEnoughSpaceLeft + -- We add the transaction if and only if it wouldn't overrun any component + -- of the mempool capacity. + -- + -- In the past, this condition was instead @TxSeq.toSize (isTxs is) < + -- isCapacity is@. Thus the effective capacity of the mempool was + -- actually one increment less than the reported capacity plus one + -- transaction. That subtlety's cost paid for two benefits. + -- + -- First, the absence of addition avoids a risk of overflow, since the + -- transaction's sizes (eg ExUnits) have not yet been bounded by + -- validation (which presumably enforces a low enough bound that any + -- reasonably-sized mempool would never overflow the representation's + -- 'maxBound'). + -- + -- Second, it is more fair, since it does not depend on the transaction + -- at all. EG a large transaction might struggle to win the race against + -- a firehose of tiny transactions. + -- + -- However, we prefer to avoid the subtlety. Overflow is handled by the + -- previous guard. And fairness is already ensured elsewhere (the 'MVar's + -- in 'implAddTx' --- which the "Test.Consensus.Mempool.Fairness" test + -- exercises). Moreover, the notion of "is under capacity" becomes + -- difficult to assess independently of the pending tx when the measure + -- is multi-dimensional; both typical options (any component is not full + -- or every component is not full) lead to some confusing behaviors + -- (denying some txs that would "obviously" fit and accepting some txs + -- that "obviously" don't, respectively). + -- + -- Even with the overflow handler, it's important that 'txMeasure' + -- returns a well-bounded result. Otherwise, if an adversarial tx arrived + -- that couldn't even fit in an empty mempool, then that thread would + -- never release the 'MVar'.
In particular, we tacitly assume here that a + -- tx that wouldn't even fit in an empty mempool would be rejected by + -- 'txMeasure'. + | not $ currentSize `Measure.plus` txsz Measure.<= isCapacity is -> + NotEnoughSpaceLeft + | otherwise -> + case validateNewTransaction cfg wti tx txsz values st is of + (Left err, _) -> + Processed $ + TransactionProcessingResult + Nothing + (MempoolTxRejected tx err) + ( TraceMempoolRejectedTx + tx + err + (isMempoolSize is) + ) + (Right vtx, is') -> + Processed $ + TransactionProcessingResult + (Just is') + (MempoolTxAdded vtx) + ( TraceMempoolAddedTx + vtx + (isMempoolSize is) + (isMempoolSize is') + ) + where + currentSize = TxSeq.toSize (isTxs is) {------------------------------------------------------------------------------- Remove transactions @@ -317,84 +316,90 @@ pureTryAddTx cfg wti tx is values = -- | See 'Ouroboros.Consensus.Mempool.API.removeTxsEvenIfValid'. implRemoveTxsEvenIfValid :: - ( IOLike m - , LedgerSupportsMempool blk - , HasTxId (GenTx blk) - , ValidateEnvelope blk - ) - => MempoolEnv m blk - -> NE.NonEmpty (GenTxId blk) - -> m () + ( IOLike m + , LedgerSupportsMempool blk + , HasTxId (GenTx blk) + , ValidateEnvelope blk + ) => + MempoolEnv m blk -> + NE.NonEmpty (GenTxId blk) -> + m () implRemoveTxsEvenIfValid mpEnv toRemove = do - (out :: WithTMVarOutcome Void ()) <- withTMVarAnd istate (const $ getCurrentLedgerState ldgrInterface) - $ \is ls -> do - let toKeep = filter - ( (`notElem` Set.fromList (NE.toList toRemove)) - . txId - . txForgetValidated - . txTicketTx - ) - (TxSeq.toList $ isTxs is) - (slot, ticked) = tickLedgerState cfg (ForgeInUnknownSlot ls) - toKeep' = Foldable.foldMap' (getTransactionKeySets . txForgetValidated . 
TxSeq.txTicketTx) toKeep - mTbs <- getLedgerTablesAtFor ldgrInterface (castPoint (getTip ls)) toKeep' - case mTbs of - Nothing -> pure (Resync, is) - Just tbs -> do - let (is', t) = pureRemoveTxs - capacityOverride - cfg - slot - ticked - tbs - (isLastTicketNo is) - toKeep - toRemove - traceWith trcr t - pure (OK (), is') + (out :: WithTMVarOutcome Void ()) <- withTMVarAnd istate (const $ getCurrentLedgerState ldgrInterface) $ + \is ls -> do + let toKeep = + filter + ( (`notElem` Set.fromList (NE.toList toRemove)) + . txId + . txForgetValidated + . txTicketTx + ) + (TxSeq.toList $ isTxs is) + (slot, ticked) = tickLedgerState cfg (ForgeInUnknownSlot ls) + toKeep' = Foldable.foldMap' (getTransactionKeySets . txForgetValidated . TxSeq.txTicketTx) toKeep + mTbs <- getLedgerTablesAtFor ldgrInterface (castPoint (getTip ls)) toKeep' + case mTbs of + Nothing -> pure (Resync, is) + Just tbs -> do + let (is', t) = + pureRemoveTxs + capacityOverride + cfg + slot + ticked + tbs + (isLastTicketNo is) + toKeep + toRemove + traceWith trcr t + pure (OK (), is') case out of - Resync -> do + Resync -> do void $ implSyncWithLedger mpEnv implRemoveTxsEvenIfValid mpEnv toRemove - OK () -> pure () - where - MempoolEnv { mpEnvStateVar = istate - , mpEnvLedger = ldgrInterface - , mpEnvTracer = trcr - , mpEnvLedgerCfg = cfg - , mpEnvCapacityOverride = capacityOverride - } = mpEnv + OK () -> pure () + where + MempoolEnv + { mpEnvStateVar = istate + , mpEnvLedger = ldgrInterface + , mpEnvTracer = trcr + , mpEnvLedgerCfg = cfg + , mpEnvCapacityOverride = capacityOverride + } = mpEnv -- | Craft a 'RemoveTxs' that manually removes the given transactions from the -- mempool, returning inside it an updated InternalState. 
pureRemoveTxs :: - ( LedgerSupportsMempool blk - , HasTxId (GenTx blk) - ) - => MempoolCapacityBytesOverride - -> LedgerConfig blk - -> SlotNo - -> TickedLedgerState blk DiffMK - -> LedgerTables (LedgerState blk) ValuesMK - -> TicketNo - -> [TxTicket (TxMeasure blk) (Validated (GenTx blk))] -- ^ Txs to keep - -> NE.NonEmpty (GenTxId blk) -- ^ IDs to remove - -> (InternalState blk, TraceEventMempool blk) + ( LedgerSupportsMempool blk + , HasTxId (GenTx blk) + ) => + MempoolCapacityBytesOverride -> + LedgerConfig blk -> + SlotNo -> + TickedLedgerState blk DiffMK -> + LedgerTables (LedgerState blk) ValuesMK -> + TicketNo -> + -- | Txs to keep + [TxTicket (TxMeasure blk) (Validated (GenTx blk))] -> + -- | IDs to remove + NE.NonEmpty (GenTxId blk) -> + (InternalState blk, TraceEventMempool blk) pureRemoveTxs capacityOverride lcfg slot lstate values tkt txs txIds = - let RevalidateTxsResult is' removed = - revalidateTxsFor - capacityOverride - lcfg - slot - lstate - values - tkt - txs - trace = TraceMempoolManuallyRemovedTxs - txIds - (map getInvalidated removed) - (isMempoolSize is') - in (is', trace) + let RevalidateTxsResult is' removed = + revalidateTxsFor + capacityOverride + lcfg + slot + lstate + values + tkt + txs + trace = + TraceMempoolManuallyRemovedTxs + txIds + (map getInvalidated removed) + (isMempoolSize is') + in (is', trace) {------------------------------------------------------------------------------- Sync with ledger @@ -402,67 +407,69 @@ pureRemoveTxs capacityOverride lcfg slot lstate values tkt txs txIds = -- | See 'Ouroboros.Consensus.Mempool.API.syncWithLedger'. 
implSyncWithLedger :: - ( IOLike m - , LedgerSupportsMempool blk - , ValidateEnvelope blk - , HasTxId (GenTx blk) - ) - => MempoolEnv m blk - -> m (MempoolSnapshot blk) + ( IOLike m + , LedgerSupportsMempool blk + , ValidateEnvelope blk + , HasTxId (GenTx blk) + ) => + MempoolEnv m blk -> + m (MempoolSnapshot blk) implSyncWithLedger mpEnv = encloseTimedWith (TraceMempoolSynced >$< mpEnvTracer mpEnv) $ do (res :: WithTMVarOutcome Void (MempoolSnapshot blk)) <- - withTMVarAnd istate (const $ getCurrentLedgerState ldgrInterface) $ - \is ls -> do - let (slot, ls') = tickLedgerState cfg $ ForgeInUnknownSlot ls - if pointHash (isTip is) == castHash (getTipHash ls) && isSlotNo is == slot - then do - -- The tip didn't change, put the same state. - traceWith trcr $ TraceMempoolSyncNotNeeded (isTip is) - pure (OK (snapshotFromIS is), is) - else do - -- We need to revalidate - let pt = castPoint (getTip ls) - mTbs <- getLedgerTablesAtFor ldgrInterface pt (isTxKeys is) - case mTbs of - Just tbs -> do - let (is', mTrace) = pureSyncWithLedger - capacityOverride - cfg - slot - ls' - tbs - is - whenJust mTrace (traceWith trcr) - pure (OK (snapshotFromIS is'), is') - Nothing -> do - -- If the point is gone, resync - pure (Resync, is) + withTMVarAnd istate (const $ getCurrentLedgerState ldgrInterface) $ + \is ls -> do + let (slot, ls') = tickLedgerState cfg $ ForgeInUnknownSlot ls + if pointHash (isTip is) == castHash (getTipHash ls) && isSlotNo is == slot + then do + -- The tip didn't change, put the same state. 
+ traceWith trcr $ TraceMempoolSyncNotNeeded (isTip is) + pure (OK (snapshotFromIS is), is) + else do + -- We need to revalidate + let pt = castPoint (getTip ls) + mTbs <- getLedgerTablesAtFor ldgrInterface pt (isTxKeys is) + case mTbs of + Just tbs -> do + let (is', mTrace) = + pureSyncWithLedger + capacityOverride + cfg + slot + ls' + tbs + is + whenJust mTrace (traceWith trcr) + pure (OK (snapshotFromIS is'), is') + Nothing -> do + -- If the point is gone, resync + pure (Resync, is) case res of - OK v -> pure v + OK v -> pure v Resync -> implSyncWithLedger mpEnv - where - MempoolEnv { mpEnvStateVar = istate - , mpEnvLedger = ldgrInterface - , mpEnvTracer = trcr - , mpEnvLedgerCfg = cfg - , mpEnvCapacityOverride = capacityOverride - } = mpEnv + where + MempoolEnv + { mpEnvStateVar = istate + , mpEnvLedger = ldgrInterface + , mpEnvTracer = trcr + , mpEnvLedgerCfg = cfg + , mpEnvCapacityOverride = capacityOverride + } = mpEnv -- | Create a 'SyncWithLedger' value representing the values that will need to -- be stored for committing this synchronization with the Ledger. -- -- See the documentation of 'runSyncWithLedger' for more context. 
-pureSyncWithLedger - :: (LedgerSupportsMempool blk, HasTxId (GenTx blk)) - => MempoolCapacityBytesOverride - -> LedgerConfig blk - -> SlotNo - -> TickedLedgerState blk DiffMK - -> LedgerTables (LedgerState blk) ValuesMK - -> InternalState blk - -> ( InternalState blk - , Maybe (TraceEventMempool blk) - ) +pureSyncWithLedger :: + (LedgerSupportsMempool blk, HasTxId (GenTx blk)) => + MempoolCapacityBytesOverride -> + LedgerConfig blk -> + SlotNo -> + TickedLedgerState blk DiffMK -> + LedgerTables (LedgerState blk) ValuesMK -> + InternalState blk -> + ( InternalState blk + , Maybe (TraceEventMempool blk) + ) pureSyncWithLedger capacityOverride lcfg slot lstate values istate = let RevalidateTxsResult is' removed = revalidateTxsFor @@ -473,9 +480,11 @@ pureSyncWithLedger capacityOverride lcfg slot lstate values istate = values (isLastTicketNo istate) (TxSeq.toList $ isTxs istate) - mTrace = if null removed - then - Nothing - else - Just $ TraceMempoolRemoveTxs (map (\x -> (getInvalidated x, getReason x)) removed) (isMempoolSize is') - in (is', mTrace) + mTrace = + if null removed + then + Nothing + else + Just $ + TraceMempoolRemoveTxs (map (\x -> (getInvalidated x, getReason x)) removed) (isMempoolSize is') + in (is', mTrace) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/BlockFetch/ClientInterface.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/BlockFetch/ClientInterface.hs index 40213ff6f5..c97a3f0cca 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/BlockFetch/ClientInterface.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/BlockFetch/ClientInterface.hs @@ -4,129 +4,153 @@ {-# LANGUAGE ScopedTypeVariables #-} -- | Initialization of the 'BlockFetchConsensusInterface' -module Ouroboros.Consensus.MiniProtocol.BlockFetch.ClientInterface ( - ChainDbView (..) 
+module Ouroboros.Consensus.MiniProtocol.BlockFetch.ClientInterface
+  ( ChainDbView (..)
   , defaultChainDbView
   , mkBlockFetchConsensusInterface
   , readFetchModeDefault
   ) where
 
-import Cardano.Network.ConsensusMode (ConsensusMode)
-import Cardano.Network.PeerSelection.Bootstrap (UseBootstrapPeers,
-    requiresBootstrapPeers)
-import Cardano.Network.Types (LedgerStateJudgement)
-import Control.Monad
-import Control.Tracer (Tracer)
-import Data.Map.Strict (Map)
-import Data.Time.Clock (UTCTime)
-import GHC.Stack (HasCallStack)
-import Ouroboros.Consensus.Block hiding (blockMatchesHeader)
-import qualified Ouroboros.Consensus.Block as Block
-import Ouroboros.Consensus.BlockchainTime
-import Ouroboros.Consensus.Config
-import qualified Ouroboros.Consensus.Config.SupportsNode as SupportsNode
-import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..))
-import Ouroboros.Consensus.Ledger.SupportsProtocol
-    (LedgerSupportsProtocol)
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client as CSClient
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping as CSJumping
-import Ouroboros.Consensus.Storage.ChainDB.API (AddBlockPromise,
-    ChainDB)
-import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB
-import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment
-    (InvalidBlockPunishment)
-import qualified Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment as InvalidBlockPunishment
-import Ouroboros.Consensus.Util.AnchoredFragment
-import Ouroboros.Consensus.Util.IOLike
-import Ouroboros.Consensus.Util.Orphans ()
-import Ouroboros.Network.AnchoredFragment (AnchoredFragment)
-import qualified Ouroboros.Network.AnchoredFragment as AF
-import Ouroboros.Network.Block (MaxSlotNo)
-import Ouroboros.Network.BlockFetch.ConsensusInterface
-    (BlockFetchConsensusInterface (..), ChainSelStarvation,
-    FetchMode (..), FromConsensus (..), PraosFetchMode (..),
-    mkReadFetchMode)
-import Ouroboros.Network.SizeInBytes
+import Cardano.Network.ConsensusMode (ConsensusMode)
+import Cardano.Network.PeerSelection.Bootstrap
+  ( UseBootstrapPeers
+  , requiresBootstrapPeers
+  )
+import Cardano.Network.Types (LedgerStateJudgement)
+import Control.Monad
+import Control.Tracer (Tracer)
+import Data.Map.Strict (Map)
+import Data.Time.Clock (UTCTime)
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Block hiding (blockMatchesHeader)
+import Ouroboros.Consensus.Block qualified as Block
+import Ouroboros.Consensus.BlockchainTime
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.Config.SupportsNode qualified as SupportsNode
+import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..))
+import Ouroboros.Consensus.Ledger.SupportsProtocol
+  ( LedgerSupportsProtocol
+  )
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client qualified as CSClient
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping qualified as CSJumping
+import Ouroboros.Consensus.Storage.ChainDB.API
+  ( AddBlockPromise
+  , ChainDB
+  )
+import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB
+import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment
+  ( InvalidBlockPunishment
+  )
+import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment qualified as InvalidBlockPunishment
+import Ouroboros.Consensus.Util.AnchoredFragment
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Consensus.Util.Orphans ()
+import Ouroboros.Network.AnchoredFragment (AnchoredFragment)
+import Ouroboros.Network.AnchoredFragment qualified as AF
+import Ouroboros.Network.Block (MaxSlotNo)
+import Ouroboros.Network.BlockFetch.ConsensusInterface
+  ( BlockFetchConsensusInterface (..)
+  , ChainSelStarvation
+  , FetchMode (..)
+  , FromConsensus (..)
+  , PraosFetchMode (..)
+  , mkReadFetchMode
+  )
+import Ouroboros.Network.SizeInBytes
 
 -- | Abstract over the ChainDB
-data ChainDbView m blk = ChainDbView {
-    getCurrentChain :: STM m (AnchoredFragment (Header blk))
-  , getCurrentChainWithTime :: STM m (AnchoredFragment (HeaderWithTime blk))
-  , getIsFetched :: STM m (Point blk -> Bool)
-  , getMaxSlotNo :: STM m MaxSlotNo
-  , addBlockAsync :: InvalidBlockPunishment m -> blk -> m (AddBlockPromise m blk)
-  , getChainSelStarvation :: STM m ChainSelStarvation
-  }
+data ChainDbView m blk = ChainDbView
+  { getCurrentChain :: STM m (AnchoredFragment (Header blk))
+  , getCurrentChainWithTime :: STM m (AnchoredFragment (HeaderWithTime blk))
+  , getIsFetched :: STM m (Point blk -> Bool)
+  , getMaxSlotNo :: STM m MaxSlotNo
+  , addBlockAsync :: InvalidBlockPunishment m -> blk -> m (AddBlockPromise m blk)
+  , getChainSelStarvation :: STM m ChainSelStarvation
+  }
 
 defaultChainDbView :: ChainDB m blk -> ChainDbView m blk
-defaultChainDbView chainDB = ChainDbView {
-    getCurrentChain = ChainDB.getCurrentChain chainDB
-  , getCurrentChainWithTime = ChainDB.getCurrentChainWithTime chainDB
-  , getIsFetched = ChainDB.getIsFetched chainDB
-  , getMaxSlotNo = ChainDB.getMaxSlotNo chainDB
-  , addBlockAsync = ChainDB.addBlockAsync chainDB
-  , getChainSelStarvation = ChainDB.getChainSelStarvation chainDB
-  }
+defaultChainDbView chainDB =
+  ChainDbView
+    { getCurrentChain = ChainDB.getCurrentChain chainDB
+    , getCurrentChainWithTime = ChainDB.getCurrentChainWithTime chainDB
+    , getIsFetched = ChainDB.getIsFetched chainDB
+    , getMaxSlotNo = ChainDB.getMaxSlotNo chainDB
+    , addBlockAsync = ChainDB.addBlockAsync chainDB
+    , getChainSelStarvation = ChainDB.getChainSelStarvation chainDB
+    }
 
 readFetchModeDefault ::
-    (MonadSTM m, HasHeader blk)
-  => ConsensusMode
-  -> BlockchainTime m
-  -> STM m (AnchoredFragment blk)
-  -> STM m UseBootstrapPeers
-  -> STM m LedgerStateJudgement
-  -> STM m FetchMode
-readFetchModeDefault consensusMode btime getCurrentChain
-                     getUseBootstrapPeers getLedgerStateJudgement =
+  (MonadSTM m, HasHeader blk) =>
+  ConsensusMode ->
+  BlockchainTime m ->
+  STM m (AnchoredFragment blk) ->
+  STM m UseBootstrapPeers ->
+  STM m LedgerStateJudgement ->
+  STM m FetchMode
+readFetchModeDefault
+  consensusMode
+  btime
+  getCurrentChain
+  getUseBootstrapPeers
+  getLedgerStateJudgement =
     mkReadFetchMode consensusMode getLedgerStateJudgement praosFetchMode
-  where
+   where
     praosFetchMode = do
       mCurSlot <- getCurrentSlot btime
-      usingBootstrapPeers <- requiresBootstrapPeers <$> getUseBootstrapPeers
-                                                    <*> getLedgerStateJudgement
+      usingBootstrapPeers <-
+        requiresBootstrapPeers
+          <$> getUseBootstrapPeers
+          <*> getLedgerStateJudgement
       -- This logic means that when the node is using bootstrap peers and is in
       -- TooOld state it will always return BulkSync. Otherwise, if the node
      -- isn't using bootstrap peers (i.e. has them disabled), it will use the old
      -- logic of returning BulkSync if behind 1000 slots.
      case (usingBootstrapPeers, mCurSlot) of
-        (True, _)                   -> return FetchModeBulkSync
-        (False, CurrentSlotUnknown) -> return FetchModeBulkSync
+        (True, _) -> return FetchModeBulkSync
+        (False, CurrentSlotUnknown) -> return FetchModeBulkSync
         (False, CurrentSlot curSlot) -> do
           curChainSlot <- AF.headSlot <$> getCurrentChain
           let slotsBehind = case curChainSlot of
                 -- There's nothing in the chain. If the current slot is 0, then
                 -- we're 1 slot behind.
-                Origin         -> unSlotNo curSlot + 1
+                Origin -> unSlotNo curSlot + 1
                 NotOrigin slot -> unSlotNo curSlot - unSlotNo slot
               maxSlotsBehind = 1000
-          return $ if slotsBehind < maxSlotsBehind
-            -- When the current chain is near to "now", use deadline mode,
-            -- when it is far away, use bulk sync mode.
-            then FetchModeDeadline
-            else FetchModeBulkSync
+          return $
+            if slotsBehind < maxSlotsBehind
+              -- When the current chain is near to "now", use deadline mode,
+              -- when it is far away, use bulk sync mode.
+              then FetchModeDeadline
+              else FetchModeBulkSync
 
 mkBlockFetchConsensusInterface ::
-    forall m peer blk.
-    ( IOLike m
-    , BlockSupportsDiffusionPipelining blk
-    , Ord peer
-    , LedgerSupportsProtocol blk
-    , SupportsNode.ConfigSupportsNode blk
-    )
-  => Tracer m (CSJumping.TraceEventDbf peer)
-  -> BlockConfig blk
-  -> ChainDbView m blk
-  -> CSClient.ChainSyncClientHandleCollection peer m blk
-  -> (Header blk -> SizeInBytes)
-  -> STM m FetchMode
-     -- ^ See 'readFetchMode'.
-  -> DiffusionPipeliningSupport
-  -> BlockFetchConsensusInterface peer (HeaderWithTime blk) blk m
+  forall m peer blk.
+  ( IOLike m
+  , BlockSupportsDiffusionPipelining blk
+  , Ord peer
+  , LedgerSupportsProtocol blk
+  , SupportsNode.ConfigSupportsNode blk
+  ) =>
+  Tracer m (CSJumping.TraceEventDbf peer) ->
+  BlockConfig blk ->
+  ChainDbView m blk ->
+  CSClient.ChainSyncClientHandleCollection peer m blk ->
+  (Header blk -> SizeInBytes) ->
+  -- | See 'readFetchMode'.
+  STM m FetchMode ->
+  DiffusionPipeliningSupport ->
+  BlockFetchConsensusInterface peer (HeaderWithTime blk) blk m
 mkBlockFetchConsensusInterface
-    csjTracer bcfg chainDB csHandlesCol blockFetchSize readFetchMode pipelining =
-    BlockFetchConsensusInterface {blockFetchSize = blockFetchSize . hwtHeader, ..}
-  where
+  csjTracer
+  bcfg
+  chainDB
+  csHandlesCol
+  blockFetchSize
+  readFetchMode
+  pipelining =
+    BlockFetchConsensusInterface{blockFetchSize = blockFetchSize . hwtHeader, ..}
+   where
    getCandidates :: STM m (Map peer (AnchoredFragment (HeaderWithTime blk)))
    getCandidates = CSClient.viewChainSyncState (CSClient.cschcMap csHandlesCol) CSClient.csCandidate
@@ -152,52 +176,52 @@ mkBlockFetchConsensusInterface
     -- Hand over the block to the ChainDB, but don't wait until it has been
     -- written to disk or processed.
     mkAddFetchedBlock_ ::
-      ( BlockConfig blk
-        -> Header blk
-        -> InvalidBlockPunishment m
-        -> InvalidBlockPunishment m
-      )
-      -> DiffusionPipeliningSupport
-      -> Point blk
-      -> blk
-      -> m ()
+      ( BlockConfig blk ->
+        Header blk ->
+        InvalidBlockPunishment m ->
+        InvalidBlockPunishment m
+      ) ->
+      DiffusionPipeliningSupport ->
+      Point blk ->
+      blk ->
+      m ()
     mkAddFetchedBlock_ pipeliningPunishment enabledPipelining _pt blk = void $ do
-      disconnect <- InvalidBlockPunishment.mkPunishThisThread
-      -- A BlockFetch peer can either send an entire range or none of the
-      -- range; anything else will incur a disconnect. And in 'FetchDeadline'
-      -- mode, which is the relevant case for this kind of DoS attack (because
-      -- in bulk sync, our honest peers will be streaming a very dense chain
-      -- very quickly, meaning the adversary only has very small windows during
-      -- which we're interested in its chains), the node only requests whole
-      -- suffixes from peers: the BlockFetch decision logic does not avoid
-      -- requesting a block that is already in-flight from other peers. Thus
-      -- the adversary cannot send us blocks out-of-order (during
-      -- 'FetchDeadline'), even if they control more than one of our peers.
-      --
-      -- Therefore, the following punishment logic only needs to cover the
-      -- "whole chain received in-order from a single-peer" case. Which it
-      -- currently does.
-      --
-      -- TODO maintain the context of which ChainSync candidate incurring this
-      -- fetch request, and disconnect immediately if the invalid block is not
-      -- the tip of that candidate. As-is, in 'FetchDeadline' they must also
-      -- send the next block, but they might be able to wait long enough that
-      -- it is not desirable when it arrives, and therefore not be disconnected
-      -- from. So their choices are: cause a disconnect or else do nothing for
-      -- long enough. Both are fine by us, from a DoS mitigation perspective.
-      let punishment = InvalidBlockPunishment.branch $ \case
-            -- invalid parents always cause a disconnect
-            InvalidBlockPunishment.BlockPrefix -> disconnect
-            -- when pipelining, we forgive an invalid block itself if it's
-            -- better than the previous invalid block this peer delivered
-            InvalidBlockPunishment.BlockItself -> case enabledPipelining of
-              DiffusionPipeliningOff -> disconnect
-              DiffusionPipeliningOn ->
-                pipeliningPunishment bcfg (getHeader blk) disconnect
-      addBlockAsync
-        chainDB
-        punishment
-        blk
+      disconnect <- InvalidBlockPunishment.mkPunishThisThread
+      -- A BlockFetch peer can either send an entire range or none of the
+      -- range; anything else will incur a disconnect. And in 'FetchDeadline'
+      -- mode, which is the relevant case for this kind of DoS attack (because
+      -- in bulk sync, our honest peers will be streaming a very dense chain
+      -- very quickly, meaning the adversary only has very small windows during
+      -- which we're interested in its chains), the node only requests whole
+      -- suffixes from peers: the BlockFetch decision logic does not avoid
+      -- requesting a block that is already in-flight from other peers. Thus
+      -- the adversary cannot send us blocks out-of-order (during
+      -- 'FetchDeadline'), even if they control more than one of our peers.
+      --
+      -- Therefore, the following punishment logic only needs to cover the
+      -- "whole chain received in-order from a single-peer" case. Which it
+      -- currently does.
+      --
+      -- TODO maintain the context of which ChainSync candidate incurring this
+      -- fetch request, and disconnect immediately if the invalid block is not
+      -- the tip of that candidate. As-is, in 'FetchDeadline' they must also
+      -- send the next block, but they might be able to wait long enough that
+      -- it is not desirable when it arrives, and therefore not be disconnected
+      -- from. So their choices are: cause a disconnect or else do nothing for
+      -- long enough. Both are fine by us, from a DoS mitigation perspective.
+      let punishment = InvalidBlockPunishment.branch $ \case
+            -- invalid parents always cause a disconnect
+            InvalidBlockPunishment.BlockPrefix -> disconnect
+            -- when pipelining, we forgive an invalid block itself if it's
+            -- better than the previous invalid block this peer delivered
+            InvalidBlockPunishment.BlockItself -> case enabledPipelining of
+              DiffusionPipeliningOff -> disconnect
+              DiffusionPipeliningOn ->
+                pipeliningPunishment bcfg (getHeader blk) disconnect
+      addBlockAsync
+        chainDB
+        punishment
+        blk
 
     readFetchedMaxSlotNo :: STM m MaxSlotNo
     readFetchedMaxSlotNo = getMaxSlotNo chainDB
@@ -215,10 +239,11 @@ mkBlockFetchConsensusInterface
     -- preserves an invariant that relates our current chain to the candidate
     -- fragment, by the time the block fetch download logic considers the
     -- fragment, our current chain might have changed.
-    plausibleCandidateChain :: HasCallStack
-      => AnchoredFragment (HeaderWithTime blk)
-      -> AnchoredFragment (HeaderWithTime blk)
-      -> Bool
+    plausibleCandidateChain ::
+      HasCallStack =>
+      AnchoredFragment (HeaderWithTime blk) ->
+      AnchoredFragment (HeaderWithTime blk) ->
+      Bool
     plausibleCandidateChain ours cand
     -- 1. The ChainDB maintains the invariant that the anchor of our fragment
     --    corresponds to the immutable tip.
@@ -239,44 +264,46 @@ mkBlockFetchConsensusInterface
     --    block number stays the same, but the slot number increases (EBB
     --    case).
     --
-      | anchorBlockNoAndSlot cand < anchorBlockNoAndSlot ours -- (4)
-      = case (AF.null ours, AF.null cand) of
-          -- Both are non-empty, the precondition trivially holds.
-          (False, False) -> preferAnchoredCandidate bcfg ours cand
-          -- The candidate is shorter than our chain and, worse, we'd have to
-          -- roll back past our immutable tip (the anchor of @cand@).
-          (_, True) -> False
-          -- As argued above we can only reach this case when our chain's anchor
-          -- has changed (4).
-          --
-          -- It is impossible for our chain to change /and/ still be empty: the
-          -- anchor of our chain only changes when a new block becomes
-          -- immutable. For a new block to become immutable, we must have
-          -- extended our chain with at least @k + 1@ blocks. Which means our
-          -- fragment can't be empty.
-          (True, _) -> error "impossible"
-
-      | otherwise
-      = preferAnchoredCandidate bcfg ours cand
-      where
-        anchorBlockNoAndSlot ::
-             AnchoredFragment (HeaderWithTime blk)
-          -> (WithOrigin BlockNo, WithOrigin SlotNo)
-        anchorBlockNoAndSlot frag =
-            (AF.anchorToBlockNo a, AF.anchorToSlotNo a)
-          where
-            a = AF.anchor frag
+      | anchorBlockNoAndSlot cand < anchorBlockNoAndSlot ours -- (4)
+        =
+          case (AF.null ours, AF.null cand) of
+            -- Both are non-empty, the precondition trivially holds.
+            (False, False) -> preferAnchoredCandidate bcfg ours cand
+            -- The candidate is shorter than our chain and, worse, we'd have to
+            -- roll back past our immutable tip (the anchor of @cand@).
+            (_, True) -> False
+            -- As argued above we can only reach this case when our chain's anchor
+            -- has changed (4).
+            --
+            -- It is impossible for our chain to change /and/ still be empty: the
+            -- anchor of our chain only changes when a new block becomes
+            -- immutable. For a new block to become immutable, we must have
+            -- extended our chain with at least @k + 1@ blocks. Which means our
+            -- fragment can't be empty.
+            (True, _) -> error "impossible"
+      | otherwise =
+          preferAnchoredCandidate bcfg ours cand
+     where
+      anchorBlockNoAndSlot ::
+        AnchoredFragment (HeaderWithTime blk) ->
+        (WithOrigin BlockNo, WithOrigin SlotNo)
+      anchorBlockNoAndSlot frag =
+        (AF.anchorToBlockNo a, AF.anchorToSlotNo a)
+       where
+        a = AF.anchor frag
 
-    compareCandidateChains :: AnchoredFragment (HeaderWithTime blk)
-                           -> AnchoredFragment (HeaderWithTime blk)
-                           -> Ordering
+    compareCandidateChains ::
+      AnchoredFragment (HeaderWithTime blk) ->
+      AnchoredFragment (HeaderWithTime blk) ->
+      Ordering
     compareCandidateChains = compareAnchoredFragments bcfg
 
     headerForgeUTCTime :: FromConsensus (HeaderWithTime blk) -> STM m UTCTime
-    headerForgeUTCTime = pure
-      . fromRelativeTime (SupportsNode.getSystemStart bcfg)
-      . hwtSlotRelativeTime
-      . unFromConsensus
+    headerForgeUTCTime =
+      pure
+        . fromRelativeTime (SupportsNode.getSystemStart bcfg)
+        . hwtSlotRelativeTime
+        . unFromConsensus
 
     readChainSelStarvation = getChainSelStarvation chainDB
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/BlockFetch/Server.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/BlockFetch/Server.hs
index bf4b3adee5..1d972e7d74 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/BlockFetch/Server.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/BlockFetch/Server.hs
@@ -8,54 +8,64 @@
 {-# LANGUAGE TypeFamilies #-}
 {-# LANGUAGE UndecidableInstances #-}
 
-module Ouroboros.Consensus.MiniProtocol.BlockFetch.Server (
-    blockFetchServer
+module Ouroboros.Consensus.MiniProtocol.BlockFetch.Server
+  ( blockFetchServer
+
     -- * Trace events
   , TraceBlockFetchServerEvent (..)
+
     -- * Exceptions
   , BlockFetchServerException
+
     -- * Low-level API
   , blockFetchServer'
   ) where
 
-import Control.ResourceRegistry (ResourceRegistry)
-import Control.Tracer (Tracer, traceWith)
-import Data.Typeable (Typeable)
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.Storage.ChainDB (ChainDB, Iterator,
-    IteratorResult (..), UnknownRange, WithPoint (..),
-    getSerialisedBlockWithPoint)
-import qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB
-import Ouroboros.Consensus.Util.IOLike
-import Ouroboros.Network.Block (Serialised (..))
-import Ouroboros.Network.NodeToNode.Version (NodeToNodeVersion)
-import Ouroboros.Network.Protocol.BlockFetch.Server
-    (BlockFetchBlockSender (..), BlockFetchSendBlocks (..),
-    BlockFetchServer (..))
-import Ouroboros.Network.Protocol.BlockFetch.Type (ChainRange (..))
-
-data BlockFetchServerException =
-    -- | A block that was supposed to be included in a batch was garbage
-    -- collected since we started the batch and can no longer be sent.
-    --
-    -- This will very rarely happen, only in the following scenario: when
-    -- the batch started, the requested blocks were on the current chain,
-    -- but then the current chain changed such that the requested blocks are
-    -- now on a fork. If while requesting the blocks from the batch, there
-    -- were a pause of /hours/ such that the fork gets older than @k@, then
-    -- the next request after this long pause could result in this
-    -- exception, as the block to stream from the old fork could have been
-    -- garbage collected. However, the network protocol will have timed out
-    -- long before this happens.
-    forall blk. (Typeable blk, StandardHash blk) =>
-      BlockGCed (RealPoint blk)
+import Control.ResourceRegistry (ResourceRegistry)
+import Control.Tracer (Tracer, traceWith)
+import Data.Typeable (Typeable)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Storage.ChainDB
+  ( ChainDB
+  , Iterator
+  , IteratorResult (..)
+  , UnknownRange
+  , WithPoint (..)
+  , getSerialisedBlockWithPoint
+  )
+import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Network.Block (Serialised (..))
+import Ouroboros.Network.NodeToNode.Version (NodeToNodeVersion)
+import Ouroboros.Network.Protocol.BlockFetch.Server
+  ( BlockFetchBlockSender (..)
+  , BlockFetchSendBlocks (..)
+  , BlockFetchServer (..)
+  )
+import Ouroboros.Network.Protocol.BlockFetch.Type (ChainRange (..))
 
-    -- | Thrown when requesting the genesis block from the database
-    --
-    -- Although the genesis block has a hash and a point associated with it,
-    -- it does not actually exist other than as a concept; we cannot read and
-    -- return it.
-  | NoGenesisBlock
+data BlockFetchServerException
+  = -- | A block that was supposed to be included in a batch was garbage
    -- collected since we started the batch and can no longer be sent.
    --
    -- This will very rarely happen, only in the following scenario: when
    -- the batch started, the requested blocks were on the current chain,
    -- but then the current chain changed such that the requested blocks are
    -- now on a fork. If while requesting the blocks from the batch, there
    -- were a pause of /hours/ such that the fork gets older than @k@, then
    -- the next request after this long pause could result in this
    -- exception, as the block to stream from the old fork could have been
    -- garbage collected. However, the network protocol will have timed out
    -- long before this happens.
+    forall blk.
+    (Typeable blk, StandardHash blk) =>
+    BlockGCed (RealPoint blk)
+  | -- | Thrown when requesting the genesis block from the database
+    --
+    -- Although the genesis block has a hash and a point associated with it,
+    -- it does not actually exist other than as a concept; we cannot read and
+    -- return it.
+    NoGenesisBlock
 
 deriving instance Show BlockFetchServerException
@@ -65,85 +75,88 @@ instance Exception BlockFetchServerException
 -- 'Ouroboros.Network.BlockFetch.Examples.mockBlockFetchServer1', but using
 -- the 'ChainDB'.
 blockFetchServer ::
-    forall m blk.
-    ( IOLike m
-    , StandardHash blk
-    , Typeable blk
-    )
-  => Tracer m (TraceBlockFetchServerEvent blk)
-  -> ChainDB m blk
-  -> NodeToNodeVersion
-  -> ResourceRegistry m
-  -> BlockFetchServer (Serialised blk) (Point blk) m ()
+  forall m blk.
+  ( IOLike m
+  , StandardHash blk
+  , Typeable blk
+  ) =>
+  Tracer m (TraceBlockFetchServerEvent blk) ->
+  ChainDB m blk ->
+  NodeToNodeVersion ->
+  ResourceRegistry m ->
+  BlockFetchServer (Serialised blk) (Point blk) m ()
 blockFetchServer tracer chainDB _version registry =
-    blockFetchServer' tracer $
-    ChainDB.stream chainDB registry getSerialisedBlockWithPoint
+  blockFetchServer' tracer $
+    ChainDB.stream chainDB registry getSerialisedBlockWithPoint
 
-blockFetchServer'
-  :: forall m blk a.
-     ( IOLike m
-     , StandardHash blk
-     , Typeable blk
-     )
-  => Tracer m (TraceBlockFetchServerEvent blk)
-  -> ( ChainDB.StreamFrom blk -> ChainDB.StreamTo blk
-     -> m (Either (UnknownRange blk) (Iterator m blk (WithPoint blk a)))
-     )
-  -> BlockFetchServer a (Point blk) m ()
+blockFetchServer' ::
+  forall m blk a.
+  ( IOLike m
+  , StandardHash blk
+  , Typeable blk
+  ) =>
+  Tracer m (TraceBlockFetchServerEvent blk) ->
+  ( ChainDB.StreamFrom blk ->
+    ChainDB.StreamTo blk ->
+    m (Either (UnknownRange blk) (Iterator m blk (WithPoint blk a)))
+  ) ->
+  BlockFetchServer a (Point blk) m ()
 blockFetchServer' tracer stream = senderSide
-  where
-    senderSide :: BlockFetchServer a (Point blk) m ()
-    senderSide = BlockFetchServer receiveReq' ()
+ where
+  senderSide :: BlockFetchServer a (Point blk) m ()
+  senderSide = BlockFetchServer receiveReq' ()
 
-    receiveReq' :: ChainRange (Point blk)
-                -> m (BlockFetchBlockSender a (Point blk) m ())
-    receiveReq' (ChainRange start end) =
-      case (start, end) of
-        (BlockPoint s h, BlockPoint s' h') ->
-          receiveReq (RealPoint s h) (RealPoint s' h')
-        _otherwise ->
-          throwIO NoGenesisBlock
+  receiveReq' ::
+    ChainRange (Point blk) ->
+    m (BlockFetchBlockSender a (Point blk) m ())
+  receiveReq' (ChainRange start end) =
+    case (start, end) of
+      (BlockPoint s h, BlockPoint s' h') ->
+        receiveReq (RealPoint s h) (RealPoint s' h')
+      _otherwise ->
+        throwIO NoGenesisBlock
 
-    receiveReq :: RealPoint blk
-               -> RealPoint blk
-               -> m (BlockFetchBlockSender a (Point blk) m ())
-    receiveReq start end = do
-      errIt <- stream
+  receiveReq ::
+    RealPoint blk ->
+    RealPoint blk ->
+    m (BlockFetchBlockSender a (Point blk) m ())
+  receiveReq start end = do
+    errIt <-
+      stream
        (ChainDB.StreamFromInclusive start)
-        (ChainDB.StreamToInclusive end)
-      return $ case errIt of
-        -- The range is not in the ChainDB or it forks off more than @k@
-        -- blocks back.
-        Left _ -> SendMsgNoBlocks $ return senderSide
-        -- When we got an iterator, it will stream at least one block since
-        -- its bounds are inclusive, so we don't have to check whether the
-        -- iterator is empty.
-        Right it -> SendMsgStartBatch $ sendBlocks it
-
-    sendBlocks :: ChainDB.Iterator m blk (WithPoint blk a)
-               -> m (BlockFetchSendBlocks a (Point blk) m ())
-    sendBlocks it = do
-      next <- ChainDB.iteratorNext it
-      case next of
-        IteratorResult blk -> do
-          traceWith tracer $ TraceBlockFetchServerSendBlock $ point blk
-          return $ SendMsgBlock (withoutPoint blk) (sendBlocks it)
-        IteratorExhausted -> do
-          ChainDB.iteratorClose it
-          return $ SendMsgBatchDone $ return senderSide
-        IteratorBlockGCed pt -> do
-          ChainDB.iteratorClose it
-          throwIO $ BlockGCed @blk pt
+        (ChainDB.StreamToInclusive end)
+    return $ case errIt of
+      -- The range is not in the ChainDB or it forks off more than @k@
+      -- blocks back.
+      Left _ -> SendMsgNoBlocks $ return senderSide
+      -- When we got an iterator, it will stream at least one block since
+      -- its bounds are inclusive, so we don't have to check whether the
+      -- iterator is empty.
+      Right it -> SendMsgStartBatch $ sendBlocks it
+  sendBlocks ::
+    ChainDB.Iterator m blk (WithPoint blk a) ->
+    m (BlockFetchSendBlocks a (Point blk) m ())
+  sendBlocks it = do
+    next <- ChainDB.iteratorNext it
+    case next of
+      IteratorResult blk -> do
+        traceWith tracer $ TraceBlockFetchServerSendBlock $ point blk
+        return $ SendMsgBlock (withoutPoint blk) (sendBlocks it)
+      IteratorExhausted -> do
+        ChainDB.iteratorClose it
+        return $ SendMsgBatchDone $ return senderSide
+      IteratorBlockGCed pt -> do
+        ChainDB.iteratorClose it
+        throwIO $ BlockGCed @blk pt
 
 {-------------------------------------------------------------------------------
  Trace events
-------------------------------------------------------------------------------}
 
 -- | Events traced by the Block Fetch Server.
-data TraceBlockFetchServerEvent blk =
-    -- | The server sent a block to the peer.
+data TraceBlockFetchServerEvent blk
+  = -- | The server sent a block to the peer.
     -- This traces the start, not the end, of block sending.
-    --
     TraceBlockFetchServerSendBlock !(Point blk)
   deriving (Eq, Show)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client.hs
index 23f89c0b8c..e63a756b9e 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client.hs
@@ -17,7 +17,6 @@
 {-# LANGUAGE TypeApplications #-}
 {-# LANGUAGE TypeFamilies #-}
 {-# LANGUAGE UndecidableInstances #-}
-
 {-# OPTIONS_GHC -fno-strictness #-}
 
 -- NOTE: With @-fstrictness@ optimisation (enabled by default for -O1), we get
@@ -36,31 +35,36 @@
 --
 -- This module is intended for qualified import, aliased as either CSC,
 -- CSClient, or CsClient.
-
-module Ouroboros.Consensus.MiniProtocol.ChainSync.Client (
-    -- * ChainSync client
+module Ouroboros.Consensus.MiniProtocol.ChainSync.Client
+  ( -- * ChainSync client
     bracketChainSyncClient
   , chainSyncClient
+
     -- * Arguments
   , ChainDbView (..)
   , ConfigEnv (..)
   , DynamicEnv (..)
   , InternalEnv (..)
   , defaultChainDbView
+
     -- * Results
   , ChainSyncClientException (..)
   , ChainSyncClientResult (..)
+
     -- * Misc
   , Consensus
  , Our (..)
  , Their (..)
+
     -- * Genesis configuration
   , CSJConfig (..)
   , CSJEnabledConfig (..)
   , ChainSyncLoPBucketConfig (..)
   , ChainSyncLoPBucketEnabledConfig (..)
+
     -- * Trace events
   , TraceChainSyncClientEvent (..)
+
     -- * State shared with other components
   , ChainSyncClientHandle (..)
   , ChainSyncClientHandleCollection (..)
@@ -74,128 +78,141 @@ module Ouroboros.Consensus.MiniProtocol.ChainSync.Client (
   , viewChainSyncState
   ) where
 
-import Cardano.Ledger.BaseTypes (unNonZero)
-import Control.Monad (join, void)
-import Control.Monad.Class.MonadTimer (MonadTimer)
-import Control.Monad.Except (runExcept, throwError)
-import Control.Tracer
-import Data.Foldable (traverse_)
-import Data.Functor ((<&>))
-import Data.Kind (Type)
-import Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import Data.Maybe (fromMaybe)
-import Data.Maybe.Strict (StrictMaybe (..))
-import Data.Proxy
-import Data.Typeable
-import Data.Word (Word64)
-import GHC.Generics (Generic)
-import GHC.Stack (HasCallStack)
-import Network.TypedProtocol.Core
-import NoThunks.Class (unsafeNoThunks)
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.BlockchainTime (RelativeTime)
-import Ouroboros.Consensus.Config
-import Ouroboros.Consensus.Forecast
-import Ouroboros.Consensus.HardFork.History
-    (PastHorizonException (PastHorizon))
-import Ouroboros.Consensus.HeaderStateHistory
-    (HeaderStateHistory (..), HeaderStateWithTime (..),
-    validateHeader)
-import qualified Ouroboros.Consensus.HeaderStateHistory as HeaderStateHistory
-import Ouroboros.Consensus.HeaderValidation hiding (validateHeader)
-import Ouroboros.Consensus.Ledger.Basics
-import Ouroboros.Consensus.Ledger.Extended
-import Ouroboros.Consensus.Ledger.SupportsProtocol
-import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck
-    (HistoricalChainSyncMessage (..), HistoricityCheck,
-    HistoricityException)
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck as HistoricityCheck
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck as InFutureCheck
-import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping as Jumping
-import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State
-import Ouroboros.Consensus.Node.GsmState (GsmState (..))
-import Ouroboros.Consensus.Node.NetworkProtocolVersion
-import Ouroboros.Consensus.Protocol.Abstract
-import Ouroboros.Consensus.Storage.ChainDB (ChainDB)
-import qualified Ouroboros.Consensus.Storage.ChainDB as ChainDB
-import Ouroboros.Consensus.Util
-import Ouroboros.Consensus.Util.AnchoredFragment
-    (preferAnchoredCandidate)
-import Ouroboros.Consensus.Util.Assert (assertWithMsg)
-import Ouroboros.Consensus.Util.EarlyExit (WithEarlyExit, exitEarly)
-import qualified Ouroboros.Consensus.Util.EarlyExit as EarlyExit
-import Ouroboros.Consensus.Util.IOLike hiding (handle)
-import Ouroboros.Consensus.Util.LeakyBucket
-    (atomicallyWithMonotonicTime)
-import qualified Ouroboros.Consensus.Util.LeakyBucket as LeakyBucket
-import Ouroboros.Consensus.Util.STM (Fingerprint, Watcher (..),
-    WithFingerprint (..), withWatcher)
-import Ouroboros.Network.AnchoredFragment (AnchoredFragment,
-    AnchoredSeq (..))
-import qualified Ouroboros.Network.AnchoredFragment as AF
-import qualified Ouroboros.Network.AnchoredSeq as AS
-import Ouroboros.Network.Block (Tip (..), getTipBlockNo)
-import Ouroboros.Network.ControlMessage (ControlMessage (..),
-    ControlMessageSTM)
-import Ouroboros.Network.PeerSelection.PeerMetric.Type
-    (HeaderMetricsTracer)
-import Ouroboros.Network.Protocol.ChainSync.ClientPipelined
-import Ouroboros.Network.Protocol.ChainSync.PipelineDecision
+import Cardano.Ledger.BaseTypes (unNonZero)
+import Control.Monad (join, void)
+import Control.Monad.Class.MonadTimer (MonadTimer)
+import Control.Monad.Except (runExcept, throwError)
+import Control.Tracer
+import Data.Foldable (traverse_)
+import Data.Functor ((<&>))
+import Data.Kind (Type)
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Maybe (fromMaybe)
+import Data.Maybe.Strict (StrictMaybe (..))
+import Data.Proxy
+import Data.Typeable
+import Data.Word (Word64)
+import GHC.Generics (Generic)
+import GHC.Stack (HasCallStack)
+import Network.TypedProtocol.Core
+import NoThunks.Class (unsafeNoThunks)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.BlockchainTime (RelativeTime)
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.Forecast
+import Ouroboros.Consensus.HardFork.History
+  ( PastHorizonException (PastHorizon)
+  )
+import Ouroboros.Consensus.HeaderStateHistory
+  ( HeaderStateHistory (..)
+  , HeaderStateWithTime (..)
+  , validateHeader
+  )
+import Ouroboros.Consensus.HeaderStateHistory qualified as HeaderStateHistory
+import Ouroboros.Consensus.HeaderValidation hiding (validateHeader)
+import Ouroboros.Consensus.Ledger.Basics
+import Ouroboros.Consensus.Ledger.Extended
+import Ouroboros.Consensus.Ledger.SupportsProtocol
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck
+  ( HistoricalChainSyncMessage (..)
+  , HistoricityCheck
+  , HistoricityException
+  )
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck qualified as HistoricityCheck
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck qualified as InFutureCheck
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping qualified as Jumping
+import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State
+import Ouroboros.Consensus.Node.GsmState (GsmState (..))
+import Ouroboros.Consensus.Node.NetworkProtocolVersion
+import Ouroboros.Consensus.Protocol.Abstract
+import Ouroboros.Consensus.Storage.ChainDB (ChainDB)
+import Ouroboros.Consensus.Storage.ChainDB qualified as ChainDB
+import Ouroboros.Consensus.Util
+import Ouroboros.Consensus.Util.AnchoredFragment
+  ( preferAnchoredCandidate
+  )
+import Ouroboros.Consensus.Util.Assert (assertWithMsg)
+import Ouroboros.Consensus.Util.EarlyExit (WithEarlyExit, exitEarly)
+import Ouroboros.Consensus.Util.EarlyExit qualified as EarlyExit
+import Ouroboros.Consensus.Util.IOLike hiding (handle)
+import Ouroboros.Consensus.Util.LeakyBucket
+  ( atomicallyWithMonotonicTime
+  )
+import Ouroboros.Consensus.Util.LeakyBucket qualified as LeakyBucket
+import Ouroboros.Consensus.Util.STM
+  ( Fingerprint
+  , Watcher (..)
+  , WithFingerprint (..)
+  , withWatcher
+  )
+import Ouroboros.Network.AnchoredFragment
+  ( AnchoredFragment
+  , AnchoredSeq (..)
+  )
+import Ouroboros.Network.AnchoredFragment qualified as AF
+import Ouroboros.Network.AnchoredSeq qualified as AS
+import Ouroboros.Network.Block (Tip (..), getTipBlockNo)
+import Ouroboros.Network.ControlMessage
+  ( ControlMessage (..)
+  , ControlMessageSTM
+  )
+import Ouroboros.Network.PeerSelection.PeerMetric.Type
+  ( HeaderMetricsTracer
+  )
+import Ouroboros.Network.Protocol.ChainSync.ClientPipelined
+import Ouroboros.Network.Protocol.ChainSync.PipelineDecision
 
 -- | Merely a helpful abbreviation
 type Consensus
-  (client :: Type -> Type -> Type -> (Type -> Type) -> Type -> Type)
-  blk
-  m
-  = client (Header blk) (Point blk) (Tip blk) m ChainSyncClientResult
+  (client :: Type -> Type -> Type -> (Type -> Type) -> Type -> Type)
+  blk
+  m =
+  client (Header blk) (Point blk) (Tip blk) m ChainSyncClientResult
 
 -- | Abstract over the ChainDB
-data ChainDbView m blk = ChainDbView {
-    getCurrentChain :: STM m (AnchoredFragment (Header blk))
-  ,
-    getHeaderStateHistory :: STM m (HeaderStateHistory blk)
-  ,
-    getPastLedger :: Point blk -> STM m (Maybe (ExtLedgerState blk EmptyMK))
-  ,
-    getIsInvalidBlock ::
-      STM m
-        (WithFingerprint
-          (HeaderHash blk -> Maybe (ExtValidationError blk)))
+data ChainDbView m blk = ChainDbView
+  { getCurrentChain :: STM m (AnchoredFragment (Header blk))
+  , getHeaderStateHistory :: STM m (HeaderStateHistory blk)
+  , getPastLedger :: Point blk -> STM m (Maybe (ExtLedgerState blk EmptyMK))
+  , getIsInvalidBlock ::
+      STM
+        m
+        ( WithFingerprint
+            (HeaderHash blk -> Maybe (ExtValidationError blk))
+        )
   }
 
 -- | Configuration of the leaky bucket when it is enabled.
-data ChainSyncLoPBucketEnabledConfig = ChainSyncLoPBucketEnabledConfig {
-    -- | The capacity of the bucket (think number of tokens).
- csbcCapacity :: Integer, - -- | The rate of the bucket (think tokens per second). - csbcRate :: Rational - } deriving stock (Eq, Generic, Show) +data ChainSyncLoPBucketEnabledConfig = ChainSyncLoPBucketEnabledConfig + { csbcCapacity :: Integer + -- ^ The capacity of the bucket (think number of tokens). + , csbcRate :: Rational + -- ^ The rate of the bucket (think tokens per second). + } + deriving stock (Eq, Generic, Show) -- | Configuration of the leaky bucket. data ChainSyncLoPBucketConfig - = - -- | Fully disable the leaky bucket. The background thread that is used to + = -- | Fully disable the leaky bucket. The background thread that is used to -- run it will not even be started. ChainSyncLoPBucketDisabled - | - -- | Enable the leaky bucket. + | -- | Enable the leaky bucket. ChainSyncLoPBucketEnabled ChainSyncLoPBucketEnabledConfig deriving stock (Eq, Generic, Show) -- | Configuration of ChainSync Jumping data CSJConfig - = - -- | Disable ChainSync Jumping. All clients will fully synchronize with + = -- | Disable ChainSync Jumping. All clients will fully synchronize with -- the chain of its peer. CSJDisabled - | - -- | Enable ChainSync Jumping + | -- | Enable ChainSync Jumping CSJEnabled CSJEnabledConfig deriving stock (Eq, Generic, Show) -newtype CSJEnabledConfig = CSJEnabledConfig { - -- | The _ideal_ size for ChainSync jumps. Note that the algorithm +newtype CSJEnabledConfig = CSJEnabledConfig + { csjcJumpSize :: SlotNo + -- ^ The _ideal_ size for ChainSync jumps. Note that the algorithm -- is best-effort: there might not be exactly `csjcJumpSize` slots between two -- jumps, depending on the chain. -- @@ -211,26 +228,27 @@ newtype CSJEnabledConfig = CSJEnabledConfig { -- are less involved in the syncing. A jump size as large as the genesis -- window has a higher change that dishonest peers can delay syncing by a -- small margin (around 2 minutes per dishonest peer with mainnet parameters). 
- csjcJumpSize :: SlotNo -} deriving stock (Eq, Generic, Show) + } + deriving stock (Eq, Generic, Show) defaultChainDbView :: - ChainDB m blk -> ChainDbView m blk -defaultChainDbView chainDB = ChainDbView { - getCurrentChain = ChainDB.getCurrentChain chainDB - , getHeaderStateHistory = ChainDB.getHeaderStateHistory chainDB - , getPastLedger = ChainDB.getPastLedger chainDB - , getIsInvalidBlock = ChainDB.getIsInvalidBlock chainDB - } + ChainDB m blk -> ChainDbView m blk +defaultChainDbView chainDB = + ChainDbView + { getCurrentChain = ChainDB.getCurrentChain chainDB + , getHeaderStateHistory = ChainDB.getHeaderStateHistory chainDB + , getPastLedger = ChainDB.getPastLedger chainDB + , getIsInvalidBlock = ChainDB.getIsInvalidBlock chainDB + } -- | A newtype wrapper to avoid confusing our tip with their tip. -newtype Their a = Their { unTheir :: a } - deriving stock (Eq) +newtype Their a = Their {unTheir :: a} + deriving stock Eq deriving newtype (Show, NoThunks) -- | A newtype wrapper to avoid confusing our tip with their tip. -newtype Our a = Our { unOur :: a } - deriving stock (Eq) +newtype Our a = Our {unOur :: a} + deriving stock Eq deriving newtype (Show, NoThunks) -- | Convenience function for reading a nested set of TVars and extracting some @@ -256,141 +274,138 @@ chainSyncStateFor varHandles peer = -- | Interface for the ChainSync client to manipulate the idling flag in -- 'ChainSyncState'. -data Idling m = Idling { - -- | Mark the peer as being idle. - idlingStart :: !(m ()) - - -- | Mark the peer as not being idle. - , idlingStop :: !(m ()) +data Idling m = Idling + { idlingStart :: !(m ()) + -- ^ Mark the peer as being idle. + , idlingStop :: !(m ()) + -- ^ Mark the peer as not being idle. } - deriving stock (Generic) + deriving stock Generic deriving anyclass instance IOLike m => NoThunks (Idling m) -- | No-op implementation, for tests. 
noIdling :: Applicative m => Idling m noIdling = - Idling { - idlingStart = pure () - , idlingStop = pure () + Idling + { idlingStart = pure () + , idlingStop = pure () } -- | Interface to the LoP implementation for the ChainSync client. -data LoPBucket m = LoPBucket { - -- | Pause the bucket, because the peer is alert and we're waiting for some - -- condition. - lbPause :: !(m ()) - - -- | Resume the bucket after pausing it. - , lbResume :: !(m ()) - - -- | Notify the bucket that the peer has sent an interesting header. +data LoPBucket m = LoPBucket + { lbPause :: !(m ()) + -- ^ Pause the bucket, because the peer is alert and we're waiting for some + -- condition. + , lbResume :: !(m ()) + -- ^ Resume the bucket after pausing it. , lbGrantToken :: !(m ()) + -- ^ Notify the bucket that the peer has sent an interesting header. } - deriving stock (Generic) + deriving stock Generic deriving anyclass instance IOLike m => NoThunks (LoPBucket m) -- | No-op implementation, for tests. noLoPBucket :: Applicative m => LoPBucket m noLoPBucket = - LoPBucket { - lbPause = pure () - , lbResume = pure () + LoPBucket + { lbPause = pure () + , lbResume = pure () , lbGrantToken = pure () } -- | Interface for the ChainSync client to its state allocated by -- 'bracketChainSyncClient'. -data ChainSyncStateView m blk = ChainSyncStateView { - -- | The current candidate fragment - csvSetCandidate :: !(AnchoredFragment (HeaderWithTime blk) -> STM m ()) - - -- | Update the slot of the latest received header +data ChainSyncStateView m blk = ChainSyncStateView + { csvSetCandidate :: !(AnchoredFragment (HeaderWithTime blk) -> STM m ()) + -- ^ The current candidate fragment , csvSetLatestSlot :: !(WithOrigin SlotNo -> STM m ()) - - -- | (Un)mark the peer as idling. - , csvIdling :: !(Idling m) - - -- | Control the 'LeakyBucket' for the LoP. - , csvLoPBucket :: !(LoPBucket m) - - -- | Jumping-related API. 
- , csvJumping :: !(Jumping.Jumping m blk) + -- ^ Update the slot of the latest received header + , csvIdling :: !(Idling m) + -- ^ (Un)mark the peer as idling. + , csvLoPBucket :: !(LoPBucket m) + -- ^ Control the 'LeakyBucket' for the LoP. + , csvJumping :: !(Jumping.Jumping m blk) + -- ^ Jumping-related API. } - deriving stock (Generic) + deriving stock Generic -deriving anyclass instance ( - IOLike m, - HasHeader blk, - NoThunks (Header blk) - ) => NoThunks (ChainSyncStateView m blk) +deriving anyclass instance + ( IOLike m + , HasHeader blk + , NoThunks (Header blk) + ) => + NoThunks (ChainSyncStateView m blk) bracketChainSyncClient :: forall m peer blk a. - ( IOLike m - , Ord peer - , LedgerSupportsProtocol blk - , MonadTimer m - ) - => Tracer m (TraceChainSyncClientEvent blk) - -> Tracer m (Jumping.TraceEventCsj peer blk) - -> ChainDbView m blk - -> ChainSyncClientHandleCollection peer m blk - -- ^ The kill handle and states for each peer, we need the whole map because we - -- (de)register nodes (@peer@). - -> STM m GsmState - -- ^ A function giving the current GSM state; only used at startup. - -> peer - -> NodeToNodeVersion - -> ChainSyncLoPBucketConfig - -> CSJConfig - -> DiffusionPipeliningSupport - -> (ChainSyncStateView m blk -> m a) - -> m a + ( IOLike m + , Ord peer + , LedgerSupportsProtocol blk + , MonadTimer m + ) => + Tracer m (TraceChainSyncClientEvent blk) -> + Tracer m (Jumping.TraceEventCsj peer blk) -> + ChainDbView m blk -> + -- | The kill handle and states for each peer, we need the whole map because we + -- (de)register nodes (@peer@). + ChainSyncClientHandleCollection peer m blk -> + -- | A function giving the current GSM state; only used at startup. 
+ STM m GsmState -> + peer -> + NodeToNodeVersion -> + ChainSyncLoPBucketConfig -> + CSJConfig -> + DiffusionPipeliningSupport -> + (ChainSyncStateView m blk -> m a) -> + m a bracketChainSyncClient - tracer - tracerCsj - ChainDbView { getIsInvalidBlock } - varHandles - getGsmState - peer - version - csBucketConfig - csjConfig - pipelining - body - = - LeakyBucket.execAgainstBucket' - $ \lopBucket -> - mkChainSyncClientHandleState >>= \csHandleState -> - withCSJCallbacks lopBucket csHandleState csjConfig $ \csjCallbacks -> - withWatcher - "ChainSync.Client.rejectInvalidBlocks" - (invalidBlockWatcher csHandleState) - $ body ChainSyncStateView { - csvSetCandidate = - modifyTVar csHandleState . \ c s -> s {csCandidate = c} - , csvSetLatestSlot = - modifyTVar csHandleState . \ ls s -> s {csLatestSlot = SJust ls} - , csvIdling = Idling { - idlingStart = atomically $ modifyTVar csHandleState $ \ s -> s {csIdling = True} - , idlingStop = atomically $ modifyTVar csHandleState $ \ s -> s {csIdling = False} - } - , csvLoPBucket = LoPBucket { - lbPause = LeakyBucket.setPaused' lopBucket True - , lbResume = LeakyBucket.setPaused' lopBucket False - , lbGrantToken = void $ LeakyBucket.fill' lopBucket 1 - } - , csvJumping = csjCallbacks - } - where + tracer + tracerCsj + ChainDbView{getIsInvalidBlock} + varHandles + getGsmState + peer + version + csBucketConfig + csjConfig + pipelining + body = + LeakyBucket.execAgainstBucket' $ + \lopBucket -> + mkChainSyncClientHandleState >>= \csHandleState -> + withCSJCallbacks lopBucket csHandleState csjConfig $ \csjCallbacks -> + withWatcher + "ChainSync.Client.rejectInvalidBlocks" + (invalidBlockWatcher csHandleState) + $ body + ChainSyncStateView + { csvSetCandidate = + modifyTVar csHandleState . \c s -> s{csCandidate = c} + , csvSetLatestSlot = + modifyTVar csHandleState . 
\ls s -> s{csLatestSlot = SJust ls} + , csvIdling = + Idling + { idlingStart = atomically $ modifyTVar csHandleState $ \s -> s{csIdling = True} + , idlingStop = atomically $ modifyTVar csHandleState $ \s -> s{csIdling = False} + } + , csvLoPBucket = + LoPBucket + { lbPause = LeakyBucket.setPaused' lopBucket True + , lbResume = LeakyBucket.setPaused' lopBucket False + , lbGrantToken = void $ LeakyBucket.fill' lopBucket 1 + } + , csvJumping = csjCallbacks + } + where mkChainSyncClientHandleState = - newTVarIO ChainSyncState { - csCandidate = AF.Empty AF.AnchorGenesis - , csLatestSlot = SNothing - , csIdling = False - } + newTVarIO + ChainSyncState + { csCandidate = AF.Empty AF.AnchorGenesis + , csLatestSlot = SNothing + , csIdling = False + } withCSJCallbacks :: LeakyBucket.Handlers m -> @@ -402,71 +417,76 @@ bracketChainSyncClient tid <- myThreadId cschJumpInfo <- newTVarIO Nothing cschJumping <- newTVarIO (Disengaged DisengagedDone) - let handle = ChainSyncClientHandle { - cschGDDKill = throwTo tid DensityTooLow - , cschOnGsmStateChanged = updateLopBucketConfig lopBucket - , cschState - , cschJumping - , cschJumpInfo - } + let handle = + ChainSyncClientHandle + { cschGDDKill = throwTo tid DensityTooLow + , cschOnGsmStateChanged = updateLopBucketConfig lopBucket + , cschState + , cschJumping + , cschJumpInfo + } insertHandle = atomicallyWithMonotonicTime $ \time -> do gsmState <- getGsmState updateLopBucketConfig lopBucket gsmState time cschcAddHandle varHandles peer handle deleteHandle = atomically $ cschcRemoveHandle varHandles peer bracket_ insertHandle deleteHandle $ f Jumping.noJumping - withCSJCallbacks lopBucket csHandleState (CSJEnabled csjEnabledConfig) f = bracket (acquireContext lopBucket csHandleState csjEnabledConfig) releaseContext $ \(peerContext, mbEv) -> do traverse_ (traceWith (Jumping.tracer peerContext)) mbEv f $ Jumping.mkJumping peerContext acquireContext lopBucket cschState (CSJEnabledConfig jumpSize) = do - tid <- myThreadId - 
atomicallyWithMonotonicTime $ \time -> do - gsmState <- getGsmState - updateLopBucketConfig lopBucket gsmState time - cschJumpInfo <- newTVar Nothing - context <- Jumping.makeContext varHandles jumpSize tracerCsj - Jumping.registerClient gsmState context peer cschState $ \cschJumping -> ChainSyncClientHandle + tid <- myThreadId + atomicallyWithMonotonicTime $ \time -> do + gsmState <- getGsmState + updateLopBucketConfig lopBucket gsmState time + cschJumpInfo <- newTVar Nothing + context <- Jumping.makeContext varHandles jumpSize tracerCsj + Jumping.registerClient gsmState context peer cschState $ \cschJumping -> + ChainSyncClientHandle { cschGDDKill = throwTo tid DensityTooLow , cschOnGsmStateChanged = updateLopBucketConfig lopBucket - -- See Note [Updating the CSJ State when the GSM State Changes] - -- in the Haddocks of 'Jumping.registerClient'. - , cschState + , -- See Note [Updating the CSJ State when the GSM State Changes] + -- in the Haddocks of 'Jumping.registerClient'. + cschState , cschJumping , cschJumpInfo } releaseContext (peerContext, _mbEv) = do - mbEv <- atomically $ Jumping.unregisterClient peerContext - traverse_ (traceWith (Jumping.tracer peerContext)) mbEv + mbEv <- atomically $ Jumping.unregisterClient peerContext + traverse_ (traceWith (Jumping.tracer peerContext)) mbEv invalidBlockWatcher varState = - invalidBlockRejector - tracer version pipelining getIsInvalidBlock (csCandidate <$> readTVar varState) + invalidBlockRejector + tracer + version + pipelining + getIsInvalidBlock + (csCandidate <$> readTVar varState) - -- | Update the configuration of the bucket to match the given GSM state. + -- \| Update the configuration of the bucket to match the given GSM state. -- NOTE: The new level is currently the maximal capacity of the bucket; -- maybe we want to change that later. 
updateLopBucketConfig :: LeakyBucket.Handlers m -> GsmState -> Time -> STM m () updateLopBucketConfig lopBucket gsmState = LeakyBucket.updateConfig lopBucket $ \_ -> - let config = lopBucketConfig gsmState in - (LeakyBucket.capacity config, config) + let config = lopBucketConfig gsmState + in (LeakyBucket.capacity config, config) - -- | Wrapper around 'LeakyBucket.execAgainstBucket' that handles the + -- \| Wrapper around 'LeakyBucket.execAgainstBucket' that handles the -- disabled bucket by running the given action with dummy handlers. lopBucketConfig :: GsmState -> LeakyBucket.Config m lopBucketConfig gsmState = case (gsmState, csBucketConfig) of - (Syncing, ChainSyncLoPBucketEnabled ChainSyncLoPBucketEnabledConfig {csbcCapacity, csbcRate}) -> - LeakyBucket.Config - { capacity = fromInteger $ csbcCapacity, - rate = csbcRate, - onEmpty = throwIO EmptyBucket, - fillOnOverflow = True - } + (Syncing, ChainSyncLoPBucketEnabled ChainSyncLoPBucketEnabledConfig{csbcCapacity, csbcRate}) -> + LeakyBucket.Config + { capacity = fromInteger $ csbcCapacity + , rate = csbcRate + , onEmpty = throwIO EmptyBucket + , fillOnOverflow = True + } -- NOTE: If we decide to slow the bucket down when “almost caught-up”, -- we should add a state to the GSM and corresponding configuration -- fields and a bucket config here. @@ -518,7 +538,6 @@ bracketChainSyncClient -- TODO #467 if the 'theirTip' that they sent us is on our chain, just -- switch to it. - -- = Candidate fragment size -- ------------------------- -- @@ -571,196 +590,198 @@ bracketChainSyncClient -- 6480 = 8640@ headers. The header state history will have the same length. -- -- This worst case can happen when: + -- * We are more than 6480 or respectively 8640 blocks behind, bulk syncing, + -- and the BlockFetch client and/or the ChainDB can't keep up with the -- ChainSync client. + -- * When our clock is running behind such that we are not adopting the + -- corresponding blocks because we think they are from the future. 
+ -- * When an attacker is serving us headers from the future. + -- -- When we are in sync with the network, the fragment will typically be @k@ to -- @k + 1@ headers long. -- | State used when the intersection between the candidate and the current -- chain is unknown. -data UnknownIntersectionState blk = UnknownIntersectionState { - ourFrag :: !(AnchoredFragment (Header blk)) - -- ^ A view of the current chain fragment. Note that this might be - -- temporarily out of date w.r.t. the actual current chain until we update - -- it again. - -- - -- This fragment is used to select points from to find an intersection - -- with the candidate. - -- - -- INVARIANT: 'ourFrag' contains @k@ headers, unless close to genesis. - , - ourHeaderStateHistory :: !(HeaderStateHistory blk) - -- ^ 'HeaderStateHistory' corresponding to the tip (most recent block) of - -- 'ourFrag'. - , - uBestBlockNo :: !BlockNo - -- ^ The best block number of any header sent by this peer, to be used by - -- the limit on patience. +data UnknownIntersectionState blk = UnknownIntersectionState + { ourFrag :: !(AnchoredFragment (Header blk)) + -- ^ A view of the current chain fragment. Note that this might be + -- temporarily out of date w.r.t. the actual current chain until we update + -- it again. + -- + -- This fragment is used to select points from to find an intersection + -- with the candidate. + -- + -- INVARIANT: 'ourFrag' contains @k@ headers, unless close to genesis. + , ourHeaderStateHistory :: !(HeaderStateHistory blk) + -- ^ 'HeaderStateHistory' corresponding to the tip (most recent block) of + -- 'ourFrag'. + , uBestBlockNo :: !BlockNo + -- ^ The best block number of any header sent by this peer, to be used by + -- the limit on patience. 
} - deriving (Generic) + deriving Generic instance - LedgerSupportsProtocol blk - => NoThunks (UnknownIntersectionState blk) where - showTypeOf _ = show $ typeRep (Proxy @(UnknownIntersectionState blk)) + LedgerSupportsProtocol blk => + NoThunks (UnknownIntersectionState blk) + where + showTypeOf _ = show $ typeRep (Proxy @(UnknownIntersectionState blk)) -- | State used when the intersection between the candidate and the current -- chain is known. -data KnownIntersectionState blk = KnownIntersectionState { - mostRecentIntersection :: !(Point blk) - -- ^ The most recent intersection point between 'theirFrag' and 'ourFrag'. - -- Note that this is not necessarily the anchor point of both 'theirFrag' - -- and 'ourFrag', they might have many more headers in common. - -- - -- INVARIANT: - -- @ - -- (==) - -- (Just 'mostRecentIntersection') - -- ('AF.intersectionPoint' 'theirFrag' 'ourFrag') - -- @ - -- - -- It follows from the invariants on 'ourFrag' that this point is within - -- the last @k@ headers of the current chain fragment, at time of - -- computing the 'KnownIntersectionState'. - , - ourFrag :: !(AnchoredFragment (Header blk)) - -- ^ A view of the current chain fragment used to maintain the invariants - -- with. Note that this might be temporarily out of date w.r.t. the actual - -- current chain until we update it again. - -- - -- INVARIANT: 'ourFrag' contains @k@ headers, unless close to genesis. - -- - -- INVARIANT: 'theirFrag' and 'ourFrag' have the same anchor point. From - -- this follows that both fragments intersect. This also means that - -- 'theirFrag' forks off within the last @k@ headers/blocks of the - -- 'ourFrag'. - , - theirFrag :: !(AnchoredFragment (HeaderWithTime blk)) - -- ^ The candidate, the synched fragment of their chain. - -- - -- See the \"Candidate fragment size\" note above. - , - theirHeaderStateHistory :: !(HeaderStateHistory blk) - -- ^ 'HeaderStateHistory' corresponding to the tip (most recent block) of - -- 'theirFrag'. 
- -- - -- INVARIANT: the tips in 'theirHeaderStateHistory' correspond to the - -- headers in 'theirFrag', including the anchor. - -- - -- See the \"Candidate fragment size\" note above. - , - kBestBlockNo :: !BlockNo - -- ^ The best block number of any header sent by this peer, to be used by - -- the limit on patience. +data KnownIntersectionState blk = KnownIntersectionState + { mostRecentIntersection :: !(Point blk) + -- ^ The most recent intersection point between 'theirFrag' and 'ourFrag'. + -- Note that this is not necessarily the anchor point of both 'theirFrag' + -- and 'ourFrag', they might have many more headers in common. + -- + -- INVARIANT: + -- @ + -- (==) + -- (Just 'mostRecentIntersection') + -- ('AF.intersectionPoint' 'theirFrag' 'ourFrag') + -- @ + -- + -- It follows from the invariants on 'ourFrag' that this point is within + -- the last @k@ headers of the current chain fragment, at time of + -- computing the 'KnownIntersectionState'. + , ourFrag :: !(AnchoredFragment (Header blk)) + -- ^ A view of the current chain fragment used to maintain the invariants + -- with. Note that this might be temporarily out of date w.r.t. the actual + -- current chain until we update it again. + -- + -- INVARIANT: 'ourFrag' contains @k@ headers, unless close to genesis. + -- + -- INVARIANT: 'theirFrag' and 'ourFrag' have the same anchor point. From + -- this follows that both fragments intersect. This also means that + -- 'theirFrag' forks off within the last @k@ headers/blocks of the + -- 'ourFrag'. + , theirFrag :: !(AnchoredFragment (HeaderWithTime blk)) + -- ^ The candidate, the synched fragment of their chain. + -- + -- See the \"Candidate fragment size\" note above. + , theirHeaderStateHistory :: !(HeaderStateHistory blk) + -- ^ 'HeaderStateHistory' corresponding to the tip (most recent block) of + -- 'theirFrag'. + -- + -- INVARIANT: the tips in 'theirHeaderStateHistory' correspond to the + -- headers in 'theirFrag', including the anchor. 
+ -- + -- See the \"Candidate fragment size\" note above. + , kBestBlockNo :: !BlockNo + -- ^ The best block number of any header sent by this peer, to be used by + -- the limit on patience. } - deriving (Generic) + deriving Generic instance - LedgerSupportsProtocol blk - => NoThunks (KnownIntersectionState blk) where - showTypeOf _ = show $ typeRep (Proxy @(KnownIntersectionState blk)) - -checkKnownIntersectionInvariants :: forall blk. - ( HasHeader blk - , HasHeader (Header blk) - , HasAnnTip blk - , ConsensusProtocol (BlockProtocol blk) - ) - => ConsensusConfig (BlockProtocol blk) - -> KnownIntersectionState blk - -> Either String () -checkKnownIntersectionInvariants cfg kis - -- 'theirHeaderStateHistory' invariant - | let HeaderStateHistory snapshots = theirHeaderStateHistory - historyTips :: [WithOrigin (AnnTip blk)] - historyTips = headerStateTip . hswtHeaderState <$> AS.toOldestFirst snapshots - fragmentTips :: [WithOrigin (AnnTip blk)] - fragmentTips = NotOrigin . getAnnTip . hwtHeader <$> AF.toOldestFirst theirFrag - - fragmentAnchorPoint = castPoint $ AF.anchorPoint theirFrag - historyAnchorPoint = - withOriginRealPointToPoint - $ annTipRealPoint <$> headerStateTip (hswtHeaderState $ AS.anchor snapshots) - , historyTips /= fragmentTips - || - historyAnchorPoint /= fragmentAnchorPoint - = throwError $ unwords - [ "The tips in theirHeaderStateHistory" - , "didn't match the headers in theirFrag:" - , show historyTips - , "vs" - , show fragmentTips - , "with anchors" - , show historyAnchorPoint - , "vs" - , show fragmentAnchorPoint - ] - - -- 'ourFrag' invariants - | let nbHeaders = AF.length ourFrag - ourAnchorPoint = AF.anchorPoint ourFrag - , nbHeaders < fromIntegral (unNonZero k) - , ourAnchorPoint /= GenesisPoint - = throwError $ unwords - [ "ourFrag contains fewer than k headers and not close to genesis:" - , show nbHeaders - , "vs" - , show k - , "with anchor" - , show ourAnchorPoint - ] - - | let ourFragAnchor = AF.anchorPoint ourFrag - 
theirFragAnchor = AF.anchorPoint theirFrag - , ourFragAnchor /= castPoint theirFragAnchor - = throwError $ unwords - [ "ourFrag and theirFrag have different anchor points:" - , show ourFragAnchor - , "vs" - , show theirFragAnchor - ] - - -- 'mostRecentIntersection' invariant - | let actualMostRecentIntersection = - castPoint <$> AF.intersectionPoint theirFrag ourFrag - , Just mostRecentIntersection /= actualMostRecentIntersection - = throwError $ unwords - [ "mostRecentIntersection not the most recent intersection" - , "of theirFrag and ourFrag:" - , show mostRecentIntersection - , "vs" - , show actualMostRecentIntersection - ] - - | otherwise - = return () + LedgerSupportsProtocol blk => + NoThunks (KnownIntersectionState blk) where - SecurityParam k = protocolSecurityParam cfg - - KnownIntersectionState { - mostRecentIntersection - , ourFrag - , theirFrag - , theirHeaderStateHistory - } = kis + showTypeOf _ = show $ typeRep (Proxy @(KnownIntersectionState blk)) + +checkKnownIntersectionInvariants :: + forall blk. + ( HasHeader blk + , HasHeader (Header blk) + , HasAnnTip blk + , ConsensusProtocol (BlockProtocol blk) + ) => + ConsensusConfig (BlockProtocol blk) -> + KnownIntersectionState blk -> + Either String () +checkKnownIntersectionInvariants cfg kis + -- 'theirHeaderStateHistory' invariant + | let HeaderStateHistory snapshots = theirHeaderStateHistory + historyTips :: [WithOrigin (AnnTip blk)] + historyTips = headerStateTip . hswtHeaderState <$> AS.toOldestFirst snapshots + fragmentTips :: [WithOrigin (AnnTip blk)] + fragmentTips = NotOrigin . getAnnTip . 
hwtHeader <$> AF.toOldestFirst theirFrag + + fragmentAnchorPoint = castPoint $ AF.anchorPoint theirFrag + historyAnchorPoint = + withOriginRealPointToPoint $ + annTipRealPoint <$> headerStateTip (hswtHeaderState $ AS.anchor snapshots) + , historyTips /= fragmentTips + || historyAnchorPoint /= fragmentAnchorPoint = + throwError $ + unwords + [ "The tips in theirHeaderStateHistory" + , "didn't match the headers in theirFrag:" + , show historyTips + , "vs" + , show fragmentTips + , "with anchors" + , show historyAnchorPoint + , "vs" + , show fragmentAnchorPoint + ] + -- 'ourFrag' invariants + | let nbHeaders = AF.length ourFrag + ourAnchorPoint = AF.anchorPoint ourFrag + , nbHeaders < fromIntegral (unNonZero k) + , ourAnchorPoint /= GenesisPoint = + throwError $ + unwords + [ "ourFrag contains fewer than k headers and not close to genesis:" + , show nbHeaders + , "vs" + , show k + , "with anchor" + , show ourAnchorPoint + ] + | let ourFragAnchor = AF.anchorPoint ourFrag + theirFragAnchor = AF.anchorPoint theirFrag + , ourFragAnchor /= castPoint theirFragAnchor = + throwError $ + unwords + [ "ourFrag and theirFrag have different anchor points:" + , show ourFragAnchor + , "vs" + , show theirFragAnchor + ] + -- 'mostRecentIntersection' invariant + | let actualMostRecentIntersection = + castPoint <$> AF.intersectionPoint theirFrag ourFrag + , Just mostRecentIntersection /= actualMostRecentIntersection = + throwError $ + unwords + [ "mostRecentIntersection not the most recent intersection" + , "of theirFrag and ourFrag:" + , show mostRecentIntersection + , "vs" + , show actualMostRecentIntersection + ] + | otherwise = + return () + where + SecurityParam k = protocolSecurityParam cfg + + KnownIntersectionState + { mostRecentIntersection + , ourFrag + , theirFrag + , theirHeaderStateHistory + } = kis assertKnownIntersectionInvariants :: - ( HasHeader blk - , HasHeader (Header blk) - , HasAnnTip blk - , ConsensusProtocol (BlockProtocol blk) - , HasCallStack - ) - => 
ConsensusConfig (BlockProtocol blk) - -> KnownIntersectionState blk - -> KnownIntersectionState blk + ( HasHeader blk + , HasHeader (Header blk) + , HasAnnTip blk + , ConsensusProtocol (BlockProtocol blk) + , HasCallStack + ) => + ConsensusConfig (BlockProtocol blk) -> + KnownIntersectionState blk -> + KnownIntersectionState blk assertKnownIntersectionInvariants cfg kis = - assertWithMsg (checkKnownIntersectionInvariants cfg kis) kis + assertWithMsg (checkKnownIntersectionInvariants cfg kis) kis {------------------------------------------------------------------------------- The ChainSync client definition @@ -769,85 +790,79 @@ assertKnownIntersectionInvariants cfg kis = -- | Arguments determined by configuration -- -- These are available before the diffusion layer is online. -data ConfigEnv m blk = ConfigEnv { - mkPipelineDecision0 :: MkPipelineDecision - -- ^ The pipelining decider to use after 'MsgFoundIntersect' arrives - , tracer :: Tracer m (TraceChainSyncClientEvent blk) - , cfg :: TopLevelConfig blk +data ConfigEnv m blk = ConfigEnv + { mkPipelineDecision0 :: MkPipelineDecision + -- ^ The pipelining decider to use after 'MsgFoundIntersect' arrives + , tracer :: Tracer m (TraceChainSyncClientEvent blk) + , cfg :: TopLevelConfig blk , someHeaderInFutureCheck :: InFutureCheck.SomeHeaderInFutureCheck m blk - , historicityCheck :: HistoricityCheck m blk - , chainDbView :: ChainDbView m blk - , getDiffusionPipeliningSupport - :: DiffusionPipeliningSupport + , historicityCheck :: HistoricityCheck m blk + , chainDbView :: ChainDbView m blk + , getDiffusionPipeliningSupport :: + DiffusionPipeliningSupport } -- | Arguments determined dynamically -- -- These are available only after the diffusion layer is online and/or on per -- client basis. 
-data DynamicEnv m blk = DynamicEnv { - version :: NodeToNodeVersion - , controlMessageSTM :: ControlMessageSTM m +data DynamicEnv m blk = DynamicEnv + { version :: NodeToNodeVersion + , controlMessageSTM :: ControlMessageSTM m , headerMetricsTracer :: HeaderMetricsTracer m - , setCandidate :: AnchoredFragment (HeaderWithTime blk) -> STM m () - , setLatestSlot :: WithOrigin SlotNo -> STM m () - , idling :: Idling m - , loPBucket :: LoPBucket m - , jumping :: Jumping.Jumping m blk + , setCandidate :: AnchoredFragment (HeaderWithTime blk) -> STM m () + , setLatestSlot :: WithOrigin SlotNo -> STM m () + , idling :: Idling m + , loPBucket :: LoPBucket m + , jumping :: Jumping.Jumping m blk } -- | General values collectively needed by the top-level entry points -data InternalEnv m blk arrival judgment = InternalEnv { - drainThePipe :: +data InternalEnv m blk arrival judgment = InternalEnv + { drainThePipe :: forall s n. - NoThunks s - => Nat n - -> Stateful m blk s (ClientPipelinedStIdle 'Z) - -> Stateful m blk s (ClientPipelinedStIdle n) - -- ^ "Drain the pipe": collect and discard all in-flight responses and - -- finally execute the given action. - , - disconnect :: + NoThunks s => + Nat n -> + Stateful m blk s (ClientPipelinedStIdle 'Z) -> + Stateful m blk s (ClientPipelinedStIdle n) + -- ^ "Drain the pipe": collect and discard all in-flight responses and + -- finally execute the given action. + , disconnect :: forall m' a. - MonadThrow m' - => ChainSyncClientException - -> m' a - -- ^ Disconnect from the upstream node by throwing the given exception. - -- The cleanup is handled in 'bracketChainSyncClient'. - , - headerInFutureCheck :: - InFutureCheck.HeaderInFutureCheck m blk arrival judgment - , - intersectsWithCurrentChain :: - KnownIntersectionState blk - -> STM m (UpdatedIntersectionState blk ()) - -- ^ A combinator necessary whenever relying on a - -- 'KnownIntersectionState', since it's always possible that that - -- intersection will go stale. 
- --
- -- Look at the current chain fragment that may have been updated in the
- -- background. Check whether the candidate fragment still intersects with
- -- it. If so, update the 'KnownIntersectionState' and trim the candidate
- -- fragment to the new current chain fragment's anchor point. If not,
- -- return 'Nothing'.
- --
- -- INVARIANT: This a read-only STM transaction.
- ,
- terminate ::
- ChainSyncClientResult
- -> m (Consensus (ClientPipelinedStIdle 'Z) blk m)
- -- ^ Gracefully terminate the connection with the upstream node with the
- -- given result.
- ,
- terminateAfterDrain ::
+ MonadThrow m' =>
+ ChainSyncClientException ->
+ m' a
+ -- ^ Disconnect from the upstream node by throwing the given exception.
+ -- The cleanup is handled in 'bracketChainSyncClient'.
+ , headerInFutureCheck ::
+ InFutureCheck.HeaderInFutureCheck m blk arrival judgment
+ , intersectsWithCurrentChain ::
+ KnownIntersectionState blk ->
+ STM m (UpdatedIntersectionState blk ())
+ -- ^ A combinator necessary whenever relying on a
+ -- 'KnownIntersectionState', since it's always possible that that
+ -- intersection will go stale.
+ --
+ -- Look at the current chain fragment that may have been updated in the
+ -- background. Check whether the candidate fragment still intersects with
+ -- it. If so, update the 'KnownIntersectionState' and trim the candidate
+ -- fragment to the new current chain fragment's anchor point. If not,
+ -- return 'Nothing'.
+ --
+ -- INVARIANT: This is a read-only STM transaction.
+ , terminate ::
+ ChainSyncClientResult ->
+ m (Consensus (ClientPipelinedStIdle 'Z) blk m)
+ -- ^ Gracefully terminate the connection with the upstream node with the
+ -- given result.
+ , terminateAfterDrain ::
 forall n.
- Nat n
- -> ChainSyncClientResult
- -> m (Consensus (ClientPipelinedStIdle n) blk m)
- -- ^ Same as 'terminate', but first 'drainThePipe'.
- ,
- traceException :: forall a. m a -> m a
- -- ^ Trace any 'ChainSyncClientException' if thrown.
+ Nat n -> + ChainSyncClientResult -> + m (Consensus (ClientPipelinedStIdle n) blk m) + -- ^ Same as 'terminate', but first 'drainThePipe'. + , traceException :: forall a. m a -> m a + -- ^ Trace any 'ChainSyncClientException' if thrown. } -- | Chain sync client @@ -855,145 +870,145 @@ data InternalEnv m blk arrival judgment = InternalEnv { -- This never terminates. In case of a failure, a 'ChainSyncClientException' -- is thrown. The network layer classifies exception such that the -- corresponding peer will never be chosen again. -chainSyncClient :: forall m blk. - ( IOLike m - , LedgerSupportsProtocol blk - ) - => ConfigEnv m blk - -> DynamicEnv m blk - -> Consensus ChainSyncClientPipelined blk m +chainSyncClient :: + forall m blk. + ( IOLike m + , LedgerSupportsProtocol blk + ) => + ConfigEnv m blk -> + DynamicEnv m blk -> + Consensus ChainSyncClientPipelined blk m chainSyncClient cfgEnv dynEnv = - case someHeaderInFutureCheck cfgEnv of - InFutureCheck.SomeHeaderInFutureCheck headerInFutureCheck -> - ChainSyncClientPipelined - $ continueWithState () - $ -- Start ChainSync by looking for an intersection between our - -- current chain fragment and their chain. 
- findIntersectionTop - cfgEnv - dynEnv - (mkIntEnv headerInFutureCheck) - (BlockNo 0) - (ForkTooDeep GenesisPoint) - where - ConfigEnv { - cfg - , chainDbView - , tracer - } = cfgEnv - - ChainDbView { - getCurrentChain - } = chainDbView - - DynamicEnv { - idling - } = dynEnv - - mkIntEnv :: - InFutureCheck.HeaderInFutureCheck m blk arrival judgment - -> InternalEnv m blk arrival judgment - mkIntEnv hifc = InternalEnv { - drainThePipe - , - disconnect = throwIO - , - headerInFutureCheck = hifc - , - intersectsWithCurrentChain - , - terminate - , - terminateAfterDrain = \n result -> - continueWithState () - $ drainThePipe n - $ Stateful $ \() -> terminate result - , - traceException = \m -> do - m `catch` \(e :: ChainSyncClientException) -> do - traceWith tracer $ TraceException e - throwIO e + case someHeaderInFutureCheck cfgEnv of + InFutureCheck.SomeHeaderInFutureCheck headerInFutureCheck -> + ChainSyncClientPipelined $ + continueWithState () $ -- Start ChainSync by looking for an intersection between our + -- current chain fragment and their chain. + findIntersectionTop + cfgEnv + dynEnv + (mkIntEnv headerInFutureCheck) + (BlockNo 0) + (ForkTooDeep GenesisPoint) + where + ConfigEnv + { cfg + , chainDbView + , tracer + } = cfgEnv + + ChainDbView + { getCurrentChain + } = chainDbView + + DynamicEnv + { idling + } = dynEnv + + mkIntEnv :: + InFutureCheck.HeaderInFutureCheck m blk arrival judgment -> + InternalEnv m blk arrival judgment + mkIntEnv hifc = + InternalEnv + { drainThePipe + , disconnect = throwIO + , headerInFutureCheck = hifc + , intersectsWithCurrentChain + , terminate + , terminateAfterDrain = \n result -> + continueWithState () $ + drainThePipe n $ + Stateful $ + \() -> terminate result + , traceException = \m -> do + m `catch` \(e :: ChainSyncClientException) -> do + traceWith tracer $ TraceException e + throwIO e } - drainThePipe :: - forall s n. 
- NoThunks s - => Nat n - -> Stateful m blk s (ClientPipelinedStIdle 'Z) - -> Stateful m blk s (ClientPipelinedStIdle n) - drainThePipe n0 m = - let go :: - forall n'. - Nat n' - -> s - -> m (Consensus (ClientPipelinedStIdle n') blk m) - go n s = do - traceWith tracer $ TraceDrainingThePipe n - case n of - Zero -> continueWithState s m - Succ n' -> return $ CollectResponse Nothing $ ClientStNext { - recvMsgRollForward = \_hdr _tip -> go n' s - , recvMsgRollBackward = \_pt _tip -> go n' s - } - in Stateful $ \s -> idlingStop idling >> go n0 s - - terminate :: - ChainSyncClientResult - -> m (Consensus (ClientPipelinedStIdle 'Z) blk m) - terminate res = do - traceWith tracer (TraceTermination res) - pure (SendMsgDone res) - - intersectsWithCurrentChain :: - KnownIntersectionState blk - -> STM m (UpdatedIntersectionState blk ()) - intersectsWithCurrentChain kis = do - let KnownIntersectionState { - ourFrag - , theirFrag - , theirHeaderStateHistory - , kBestBlockNo - } = kis - ourFrag' <- getCurrentChain - - -- Our current chain didn't change, and changes to their chain that - -- might affect the intersection point are handled elsewhere - -- ('rollBackward'), so we have nothing to do. - let noChange = AF.headPoint ourFrag == AF.headPoint ourFrag' - - return $ if noChange then StillIntersects () kis else do - case AF.intersectionPoint ourFrag' theirFrag of - Just intersection - | Just (_, trimmedCandidate) <- - AF.splitAfterPoint theirFrag (AF.anchorPoint ourFrag') - -> - -- Even though our current chain changed it still - -- intersects with candidate fragment, so update the - -- 'ourFrag' field and trim the candidate fragment to the - -- same anchor point. - -- - -- Note that this is the only place we need to trim. - -- Headers on their chain can only become unnecessary - -- (eligible for trimming) in two ways: 1. we adopted them, - -- i.e., our chain changed (handled in this function); 2. 
- -- we will /never/ adopt them, which is handled in the "no - -- more intersection case". - StillIntersects () - $ assertKnownIntersectionInvariants (configConsensus cfg) - $ KnownIntersectionState { - mostRecentIntersection = castPoint intersection - , ourFrag = ourFrag' - , theirFrag = trimmedCandidate - , theirHeaderStateHistory = + drainThePipe :: + forall s n. + NoThunks s => + Nat n -> + Stateful m blk s (ClientPipelinedStIdle 'Z) -> + Stateful m blk s (ClientPipelinedStIdle n) + drainThePipe n0 m = + let go :: + forall n'. + Nat n' -> + s -> + m (Consensus (ClientPipelinedStIdle n') blk m) + go n s = do + traceWith tracer $ TraceDrainingThePipe n + case n of + Zero -> continueWithState s m + Succ n' -> + return $ + CollectResponse Nothing $ + ClientStNext + { recvMsgRollForward = \_hdr _tip -> go n' s + , recvMsgRollBackward = \_pt _tip -> go n' s + } + in Stateful $ \s -> idlingStop idling >> go n0 s + + terminate :: + ChainSyncClientResult -> + m (Consensus (ClientPipelinedStIdle 'Z) blk m) + terminate res = do + traceWith tracer (TraceTermination res) + pure (SendMsgDone res) + + intersectsWithCurrentChain :: + KnownIntersectionState blk -> + STM m (UpdatedIntersectionState blk ()) + intersectsWithCurrentChain kis = do + let KnownIntersectionState + { ourFrag + , theirFrag + , theirHeaderStateHistory + , kBestBlockNo + } = kis + ourFrag' <- getCurrentChain + + -- Our current chain didn't change, and changes to their chain that + -- might affect the intersection point are handled elsewhere + -- ('rollBackward'), so we have nothing to do. 
+ let noChange = AF.headPoint ourFrag == AF.headPoint ourFrag' + + return $ + if noChange + then StillIntersects () kis + else do + case AF.intersectionPoint ourFrag' theirFrag of + Just intersection + | Just (_, trimmedCandidate) <- + AF.splitAfterPoint theirFrag (AF.anchorPoint ourFrag') -> + -- Even though our current chain changed it still + -- intersects with candidate fragment, so update the + -- 'ourFrag' field and trim the candidate fragment to the + -- same anchor point. + -- + -- Note that this is the only place we need to trim. + -- Headers on their chain can only become unnecessary + -- (eligible for trimming) in two ways: 1. we adopted them, + -- i.e., our chain changed (handled in this function); 2. + -- we will /never/ adopt them, which is handled in the "no + -- more intersection case". + StillIntersects () $ + assertKnownIntersectionInvariants (configConsensus cfg) $ + KnownIntersectionState + { mostRecentIntersection = castPoint intersection + , ourFrag = ourFrag' + , theirFrag = trimmedCandidate + , theirHeaderStateHistory = -- We trim the 'HeaderStateHistory' to the same -- size as our fragment so they keep in sync. HeaderStateHistory.trim - (AF.length trimmedCandidate) - theirHeaderStateHistory - , kBestBlockNo - } - - _ -> NoLongerIntersects + (AF.length trimmedCandidate) + theirHeaderStateHistory + , kBestBlockNo + } + _ -> NoLongerIntersects {------------------------------------------------------------------------------- (Re-)Establishing a common intersection @@ -1001,183 +1016,195 @@ chainSyncClient cfgEnv dynEnv = findIntersectionTop :: forall m blk arrival judgment. - ( IOLike m - , LedgerSupportsProtocol blk - ) - => ConfigEnv m blk - -> DynamicEnv m blk - -> InternalEnv m blk arrival judgment - -> BlockNo - -- ^ Peer's best block; needed to build an 'UnknownIntersectionState'. - -> (Our (Tip blk) -> Their (Tip blk) -> ChainSyncClientResult) - -- ^ Exception to throw when no intersection is found. 
- -> Stateful m blk () (ClientPipelinedStIdle 'Z) + ( IOLike m + , LedgerSupportsProtocol blk + ) => + ConfigEnv m blk -> + DynamicEnv m blk -> + InternalEnv m blk arrival judgment -> + -- | Peer's best block; needed to build an 'UnknownIntersectionState'. + BlockNo -> + -- | Exception to throw when no intersection is found. + (Our (Tip blk) -> Their (Tip blk) -> ChainSyncClientResult) -> + Stateful m blk () (ClientPipelinedStIdle 'Z) findIntersectionTop cfgEnv dynEnv intEnv = - findIntersection - where - ConfigEnv { - tracer - , cfg - , chainDbView - } = cfgEnv - - ChainDbView { - getCurrentChain - , getHeaderStateHistory - } = chainDbView - - DynamicEnv { - setCandidate - , jumping - } = dynEnv - - InternalEnv { - disconnect - , terminate - , traceException - } = intEnv - - -- Try to find an intersection by sending points of our current chain to - -- the server, if any of them intersect with their chain, roll back our - -- chain to that point and start synching using that fragment. If none - -- intersect, disconnect by throwing the exception obtained by calling the - -- passed function. - findIntersection :: - BlockNo - -- ^ Peer's best block; needed to build an 'UnknownIntersectionState'. - -> (Our (Tip blk) -> Their (Tip blk) -> ChainSyncClientResult) - -- ^ Exception to throw when no intersection is found. - -> Stateful m blk () (ClientPipelinedStIdle 'Z) - findIntersection uBestBlockNo mkResult = Stateful $ \() -> do - (ourFrag, ourHeaderStateHistory) <- atomically $ (,) - <$> getCurrentChain - <*> getHeaderStateHistory - -- This means that if an intersection is found for one of these points, - -- it was an intersection within the last @k@ blocks of our current - -- chain. If not, we could never switch to this candidate chain anyway. 
- let maxOffset = fromIntegral (AF.length ourFrag) - k = protocolSecurityParam (configConsensus cfg) - offsets = mkOffsets k maxOffset - points = - map castPoint - $ AF.selectPoints (map fromIntegral offsets) ourFrag - - !uis = UnknownIntersectionState { - ourFrag = ourFrag - , ourHeaderStateHistory = ourHeaderStateHistory - , uBestBlockNo - } + findIntersection + where + ConfigEnv + { tracer + , cfg + , chainDbView + } = cfgEnv + + ChainDbView + { getCurrentChain + , getHeaderStateHistory + } = chainDbView + + DynamicEnv + { setCandidate + , jumping + } = dynEnv + + InternalEnv + { disconnect + , terminate + , traceException + } = intEnv + + -- Try to find an intersection by sending points of our current chain to + -- the server, if any of them intersect with their chain, roll back our + -- chain to that point and start synching using that fragment. If none + -- intersect, disconnect by throwing the exception obtained by calling the + -- passed function. + findIntersection :: + BlockNo -> + -- \^ Peer's best block; needed to build an 'UnknownIntersectionState'. + (Our (Tip blk) -> Their (Tip blk) -> ChainSyncClientResult) -> + -- \^ Exception to throw when no intersection is found. + Stateful m blk () (ClientPipelinedStIdle 'Z) + findIntersection uBestBlockNo mkResult = Stateful $ \() -> do + (ourFrag, ourHeaderStateHistory) <- + atomically $ + (,) + <$> getCurrentChain + <*> getHeaderStateHistory + -- This means that if an intersection is found for one of these points, + -- it was an intersection within the last @k@ blocks of our current + -- chain. If not, we could never switch to this candidate chain anyway. 
+ let maxOffset = fromIntegral (AF.length ourFrag) + k = protocolSecurityParam (configConsensus cfg) + offsets = mkOffsets k maxOffset + points = + map castPoint $ + AF.selectPoints (map fromIntegral offsets) ourFrag + + !uis = + UnknownIntersectionState + { ourFrag = ourFrag + , ourHeaderStateHistory = ourHeaderStateHistory + , uBestBlockNo + } - return - $ SendMsgFindIntersect points - $ ClientPipelinedStIntersect { - recvMsgIntersectFound = \i theirTip' -> - continueWithState uis - $ intersectFound (castPoint i) (Their theirTip') - , - recvMsgIntersectNotFound = \theirTip' -> - terminate - $ mkResult (ourTipFromChain ourFrag) (Their theirTip') - } + return $ + SendMsgFindIntersect points $ + ClientPipelinedStIntersect + { recvMsgIntersectFound = \i theirTip' -> + continueWithState uis $ + intersectFound (castPoint i) (Their theirTip') + , recvMsgIntersectNotFound = \theirTip' -> + terminate $ + mkResult (ourTipFromChain ourFrag) (Their theirTip') + } - -- One of the points we sent intersected our chain. This intersection point - -- will become the new tip of the candidate fragment. - intersectFound :: - Point blk -- ^ Intersection - -> Their (Tip blk) - -> Stateful m blk - (UnknownIntersectionState blk) - (ClientPipelinedStIdle 'Z) - intersectFound intersection theirTip = Stateful $ \uis -> do - let UnknownIntersectionState { - ourFrag - , ourHeaderStateHistory - , uBestBlockNo - } = uis - traceWith tracer $ - TraceFoundIntersection - intersection (ourTipFromChain ourFrag) theirTip - traceException $ do - -- Roll back the current chain fragment to the @intersection@. - -- - -- While the primitives in the ChainSync protocol are "roll back", - -- "roll forward (apply block)", etc. The /real/ primitive is - -- "switch to fork", which means that a roll back is always - -- followed by applying at least as many blocks that we rolled - -- back. 
- -- - -- This is important for 'rewindHeaderStateHistory', which can only - -- roll back up to @k@ blocks, /once/, i.e., we cannot keep rolling - -- back the same chain state multiple times, because that would - -- mean that we store the chain state for the /whole chain/, all - -- the way to genesis. - -- - -- So the rewind below is fine when we are switching to a fork - -- (i.e. it is followed by rolling forward again), but we need some - -- guarantees that the ChainSync protocol /does/ in fact give us a - -- switch-to-fork instead of a true rollback. - (theirFrag, theirHeaderStateHistory) <- do - case attemptRollback - intersection - -- We only perform the linear computation - -- required by 'withTime' once when finding - -- an intersection with a peer, so this - -- should not impact the performance. - (ourFrag `withTime` ourHeaderStateHistory, ourHeaderStateHistory) - of - Just (c, d, _oldestRewound) -> return (c, d) - Nothing -> - -- The @intersection@ is not on our fragment, even - -- though we sent only points from our fragment to find - -- an intersection with. The node must have sent us an - -- invalid intersection point. - disconnect - $ InvalidIntersection - intersection (ourTipFromChain ourFrag) theirTip - let kis = - assertKnownIntersectionInvariants (configConsensus cfg) - $ KnownIntersectionState { - mostRecentIntersection = intersection - , ourFrag - , theirFrag - , theirHeaderStateHistory - , kBestBlockNo = uBestBlockNo - } - atomically $ do - updateJumpInfoSTM jumping kis - setCandidate theirFrag - setLatestSlot dynEnv (AF.headSlot theirFrag) - continueWithState kis $ - knownIntersectionStateTop cfgEnv dynEnv intEnv theirTip + -- One of the points we sent intersected our chain. This intersection point + -- will become the new tip of the candidate fragment. 
+ intersectFound :: + Point blk -> + -- \^ Intersection + Their (Tip blk) -> + Stateful + m + blk + (UnknownIntersectionState blk) + (ClientPipelinedStIdle 'Z) + intersectFound intersection theirTip = Stateful $ \uis -> do + let UnknownIntersectionState + { ourFrag + , ourHeaderStateHistory + , uBestBlockNo + } = uis + traceWith tracer $ + TraceFoundIntersection + intersection + (ourTipFromChain ourFrag) + theirTip + traceException $ do + -- Roll back the current chain fragment to the @intersection@. + -- + -- While the primitives in the ChainSync protocol are "roll back", + -- "roll forward (apply block)", etc. The /real/ primitive is + -- "switch to fork", which means that a roll back is always + -- followed by applying at least as many blocks that we rolled + -- back. + -- + -- This is important for 'rewindHeaderStateHistory', which can only + -- roll back up to @k@ blocks, /once/, i.e., we cannot keep rolling + -- back the same chain state multiple times, because that would + -- mean that we store the chain state for the /whole chain/, all + -- the way to genesis. + -- + -- So the rewind below is fine when we are switching to a fork + -- (i.e. it is followed by rolling forward again), but we need some + -- guarantees that the ChainSync protocol /does/ in fact give us a + -- switch-to-fork instead of a true rollback. + (theirFrag, theirHeaderStateHistory) <- do + case attemptRollback + intersection + -- We only perform the linear computation + -- required by 'withTime' once when finding + -- an intersection with a peer, so this + -- should not impact the performance. + (ourFrag `withTime` ourHeaderStateHistory, ourHeaderStateHistory) of + Just (c, d, _oldestRewound) -> return (c, d) + Nothing -> + -- The @intersection@ is not on our fragment, even + -- though we sent only points from our fragment to find + -- an intersection with. The node must have sent us an + -- invalid intersection point. 
+ disconnect $ + InvalidIntersection + intersection + (ourTipFromChain ourFrag) + theirTip + let kis = + assertKnownIntersectionInvariants (configConsensus cfg) $ + KnownIntersectionState + { mostRecentIntersection = intersection + , ourFrag + , theirFrag + , theirHeaderStateHistory + , kBestBlockNo = uBestBlockNo + } + atomically $ do + updateJumpInfoSTM jumping kis + setCandidate theirFrag + setLatestSlot dynEnv (AF.headSlot theirFrag) + continueWithState kis $ + knownIntersectionStateTop cfgEnv dynEnv intEnv theirTip -- | Augment the given fragment of headers with the times specified in -- the given state history. -- -- PRECONDITION: the fragment must be a prefix of the state history. --- withTime :: - (Typeable blk, HasHeader (Header blk)) - => AnchoredFragment (Header blk) - -> HeaderStateHistory blk - -> AnchoredFragment (HeaderWithTime blk) + (Typeable blk, HasHeader (Header blk)) => + AnchoredFragment (Header blk) -> + HeaderStateHistory blk -> + AnchoredFragment (HeaderWithTime blk) withTime fragment (HeaderStateHistory history) = - assertWithMsg ( - if AF.length fragment == AF.length history - then Right () - else Left $ "Fragment and history have different lengths (|fragment| = " - ++ show (AF.length fragment) - ++ ", |history| = " ++ show (AF.length history) - ++ ")" - ) $ - AF.fromOldestFirst - (AF.castAnchor $ AF.anchor fragment) - $ fmap addTimeToHeader $ zip (AF.toOldestFirst fragment) (AF.toOldestFirst history) - where - addTimeToHeader :: (Header blk, HeaderStateWithTime blk) -> HeaderWithTime blk - addTimeToHeader (hdr, hsWt) = HeaderWithTime { - hwtHeader = hdr - , hwtSlotRelativeTime = hswtSlotTime hsWt - } + assertWithMsg + ( if AF.length fragment == AF.length history + then Right () + else + Left $ + "Fragment and history have different lengths (|fragment| = " + ++ show (AF.length fragment) + ++ ", |history| = " + ++ show (AF.length history) + ++ ")" + ) + $ AF.fromOldestFirst + (AF.castAnchor $ AF.anchor fragment) + $ fmap addTimeToHeader + 
$ zip (AF.toOldestFirst fragment) (AF.toOldestFirst history) + where + addTimeToHeader :: (Header blk, HeaderStateWithTime blk) -> HeaderWithTime blk + addTimeToHeader (hdr, hsWt) = + HeaderWithTime + { hwtHeader = hdr + , hwtSlotRelativeTime = hswtSlotTime hsWt + } {------------------------------------------------------------------------------- Processing 'MsgRollForward' and 'MsgRollBackward' @@ -1185,438 +1212,444 @@ withTime fragment (HeaderStateHistory history) = knownIntersectionStateTop :: forall m blk arrival judgment. - ( IOLike m - , LedgerSupportsProtocol blk - ) - => ConfigEnv m blk - -> DynamicEnv m blk - -> InternalEnv m blk arrival judgment - -> Their (Tip blk) - -> Stateful m blk - (KnownIntersectionState blk) - (ClientPipelinedStIdle 'Z) + ( IOLike m + , LedgerSupportsProtocol blk + ) => + ConfigEnv m blk -> + DynamicEnv m blk -> + InternalEnv m blk arrival judgment -> + Their (Tip blk) -> + Stateful + m + blk + (KnownIntersectionState blk) + (ClientPipelinedStIdle 'Z) knownIntersectionStateTop cfgEnv dynEnv intEnv = - nextStep mkPipelineDecision0 Zero - -- The 'MkPiplineDecision' and @'Nat' n@ arguments below could safely be - -- merged into the 'KnownIntersectionState' record type, but it's - -- unfortunately quite awkward to do so. - where - ConfigEnv { - mkPipelineDecision0 - , tracer - , cfg - , historicityCheck - } = cfgEnv - - DynamicEnv { - controlMessageSTM - , headerMetricsTracer - , idling - , loPBucket - , setCandidate - , jumping - } = dynEnv - - InternalEnv { - drainThePipe - , headerInFutureCheck - , terminateAfterDrain - , traceException - } = intEnv - - InFutureCheck.HeaderInFutureCheck { - recordHeaderArrival - } = headerInFutureCheck - - -- Request the next message (roll forward or backward). - -- - -- This is also the place where we checked whether we're asked to terminate - -- by the mux layer or to wait and perform a CSJ jump. 
- nextStep :: - MkPipelineDecision - -> Nat n - -> Their (Tip blk) - -> Stateful m blk - (KnownIntersectionState blk) - (ClientPipelinedStIdle n) - nextStep mkPipelineDecision n theirTip = Stateful $ \kis -> - atomically controlMessageSTM >>= \case - -- We have been asked to terminate the client - Terminate -> terminateAfterDrain n $ AskedToTerminate - _continue -> do - -- Wait until next jumping instruction, which can be either to - -- jump, to run normal ChainSync, or to restart the ChainSync - -- client. - -- Pause LoP while waiting, we'll resume it if we get `RunNormally` - -- or `Restart`. - traceWith tracer TraceJumpingWaitingForNextInstruction - lbPause loPBucket - instruction <- Jumping.jgNextInstruction jumping - traceWith tracer $ TraceJumpingInstructionIs instruction - case instruction of - Jumping.JumpInstruction jumpInstruction -> - continueWithState kis - $ drainThePipe n - $ offerJump mkPipelineDecision jumpInstruction - Jumping.RunNormally -> do - lbResume loPBucket - continueWithState kis - $ nextStep' mkPipelineDecision n theirTip - Jumping.Restart -> do - lbResume loPBucket - continueWithState () - $ drainThePipe n - $ findIntersectionTop - cfgEnv - dynEnv - intEnv - (kBestBlockNo kis) - NoMoreIntersection - - nextStep' :: - MkPipelineDecision - -> Nat n - -> Their (Tip blk) - -> Stateful m blk - (KnownIntersectionState blk) - (ClientPipelinedStIdle n) - nextStep' mkPipelineDecision n theirTip = - Stateful $ \kis -> - return $ - requestNext - kis - mkPipelineDecision - n - theirTip - (AF.headBlockNo (theirFrag kis)) - - offerJump :: - MkPipelineDecision - -> Jumping.JumpInstruction blk - -> Stateful m blk - (KnownIntersectionState blk) - (ClientPipelinedStIdle Z) - offerJump mkPipelineDecision jump = Stateful $ \kis -> do - let jumpInfo = case jump of - Jumping.JumpTo ji -> ji - Jumping.JumpToGoodPoint ji -> ji - dynamoTipPt = castPoint $ AF.headPoint $ jTheirFragment jumpInfo - traceWith tracer $ TraceOfferJump dynamoTipPt - return $ - 
SendMsgFindIntersect [dynamoTipPt] $
- ClientPipelinedStIntersect {
- recvMsgIntersectFound = \pt theirTip ->
- if
- | pt == dynamoTipPt -> do
- Jumping.jgProcessJumpResult jumping $ Jumping.AcceptedJump jump
- traceWith tracer $ TraceJumpResult $ Jumping.AcceptedJump jump
- let kis' = case jump of
- -- Since the updated kis is needed to validate headers,
- -- we only update it if we are becoming a Dynamo or
- -- an objector
- Jumping.JumpToGoodPoint{} -> combineJumpInfo kis jumpInfo
- _ -> kis
- continueWithState kis' $ nextStep mkPipelineDecision Zero (Their theirTip)
- | otherwise -> throwIO InvalidJumpResponse
- ,
- recvMsgIntersectNotFound = \theirTip -> do
- Jumping.jgProcessJumpResult jumping $ Jumping.RejectedJump jump
- traceWith tracer $ TraceJumpResult $ Jumping.RejectedJump jump
- continueWithState kis $ nextStep mkPipelineDecision Zero (Their theirTip)
+ nextStep mkPipelineDecision0 Zero
+ where
+ -- The 'MkPipelineDecision' and @'Nat' n@ arguments below could safely be
+ -- merged into the 'KnownIntersectionState' record type, but it's
+ -- unfortunately quite awkward to do so.
+
+ ConfigEnv
+ { mkPipelineDecision0
+ , tracer
+ , cfg
+ , historicityCheck
+ } = cfgEnv
+
+ DynamicEnv
+ { controlMessageSTM
+ , headerMetricsTracer
+ , idling
+ , loPBucket
+ , setCandidate
+ , jumping
+ } = dynEnv
+
+ InternalEnv
+ { drainThePipe
+ , headerInFutureCheck
+ , terminateAfterDrain
+ , traceException
+ } = intEnv
+
+ InFutureCheck.HeaderInFutureCheck
+ { recordHeaderArrival
+ } = headerInFutureCheck
+
+ -- Request the next message (roll forward or backward).
+ --
+ -- This is also the place where we check whether we're asked to terminate
+ -- by the mux layer or to wait and perform a CSJ jump.
+ nextStep :: + MkPipelineDecision -> + Nat n -> + Their (Tip blk) -> + Stateful + m + blk + (KnownIntersectionState blk) + (ClientPipelinedStIdle n) + nextStep mkPipelineDecision n theirTip = Stateful $ \kis -> + atomically controlMessageSTM >>= \case + -- We have been asked to terminate the client + Terminate -> terminateAfterDrain n $ AskedToTerminate + _continue -> do + -- Wait until next jumping instruction, which can be either to + -- jump, to run normal ChainSync, or to restart the ChainSync + -- client. + -- Pause LoP while waiting, we'll resume it if we get `RunNormally` + -- or `Restart`. + traceWith tracer TraceJumpingWaitingForNextInstruction + lbPause loPBucket + instruction <- Jumping.jgNextInstruction jumping + traceWith tracer $ TraceJumpingInstructionIs instruction + case instruction of + Jumping.JumpInstruction jumpInstruction -> + continueWithState kis $ + drainThePipe n $ + offerJump mkPipelineDecision jumpInstruction + Jumping.RunNormally -> do + lbResume loPBucket + continueWithState kis $ + nextStep' mkPipelineDecision n theirTip + Jumping.Restart -> do + lbResume loPBucket + continueWithState () $ + drainThePipe n $ + findIntersectionTop + cfgEnv + dynEnv + intEnv + (kBestBlockNo kis) + NoMoreIntersection + + nextStep' :: + MkPipelineDecision -> + Nat n -> + Their (Tip blk) -> + Stateful + m + blk + (KnownIntersectionState blk) + (ClientPipelinedStIdle n) + nextStep' mkPipelineDecision n theirTip = + Stateful $ \kis -> + return $ + requestNext + kis + mkPipelineDecision + n + theirTip + (AF.headBlockNo (theirFrag kis)) + + offerJump :: + MkPipelineDecision -> + Jumping.JumpInstruction blk -> + Stateful + m + blk + (KnownIntersectionState blk) + (ClientPipelinedStIdle Z) + offerJump mkPipelineDecision jump = Stateful $ \kis -> do + let jumpInfo = case jump of + Jumping.JumpTo ji -> ji + Jumping.JumpToGoodPoint ji -> ji + dynamoTipPt = castPoint $ AF.headPoint $ jTheirFragment jumpInfo + traceWith tracer $ TraceOfferJump dynamoTipPt + return $ 
+ SendMsgFindIntersect [dynamoTipPt] $ + ClientPipelinedStIntersect + { recvMsgIntersectFound = \pt theirTip -> + if + | pt == dynamoTipPt -> do + Jumping.jgProcessJumpResult jumping $ Jumping.AcceptedJump jump + traceWith tracer $ TraceJumpResult $ Jumping.AcceptedJump jump + let kis' = case jump of + -- Since the updated kis is needed to validate headers, + -- we only update it if we are becoming a Dynamo or + -- an objector + Jumping.JumpToGoodPoint{} -> combineJumpInfo kis jumpInfo + _ -> kis + continueWithState kis' $ nextStep mkPipelineDecision Zero (Their theirTip) + | otherwise -> throwIO InvalidJumpResponse + , recvMsgIntersectNotFound = \theirTip -> do + Jumping.jgProcessJumpResult jumping $ Jumping.RejectedJump jump + traceWith tracer $ TraceJumpResult $ Jumping.RejectedJump jump + continueWithState kis $ nextStep mkPipelineDecision Zero (Their theirTip) + } + where + combineJumpInfo :: + KnownIntersectionState blk -> + JumpInfo blk -> + KnownIntersectionState blk + combineJumpInfo kis ji = + let mRewoundHistory = + HeaderStateHistory.rewind + (AF.castPoint $ AF.headPoint $ jTheirFragment ji) + (jTheirHeaderStateHistory ji) + -- We assume the history is always possible to rewind. The case + -- where this wouldn't be true is if the original candidate + -- fragment provided by the dynamo contained headers that have + -- no corresponding header state. + (rewoundHistory, _oldestRewound) = + fromMaybe (error "offerJump: cannot rewind history") mRewoundHistory + -- If the tip of jTheirFragment does not match the tip of + -- jTheirHeaderStateHistory, then the history needs rewinding. + historyNeedsRewinding = + (/= AF.headPoint (jTheirFragment ji)) $ + castPoint $ + headerStatePoint . hswtHeaderState . either id id $ + AF.head $ + HeaderStateHistory.unHeaderStateHistory $ + jTheirHeaderStateHistory ji + -- Recompute the intersection only if a suffix of the candidate + -- fragment was trimmed. 
+ intersection + | historyNeedsRewinding = case AF.intersectionPoint (jOurFragment ji) (jTheirFragment ji) of + Just po -> castPoint po + Nothing -> error "offerJump: the jumpInfo should have a valid intersection" + | otherwise = jMostRecentIntersection ji + in KnownIntersectionState + { mostRecentIntersection = intersection + , ourFrag = jOurFragment ji + , theirFrag = jTheirFragment ji + , theirHeaderStateHistory = rewoundHistory + , kBestBlockNo = max (fromWithOrigin 0 $ AF.headBlockNo $ jTheirFragment ji) (kBestBlockNo kis) } - where - combineJumpInfo :: - KnownIntersectionState blk - -> JumpInfo blk - -> KnownIntersectionState blk - combineJumpInfo kis ji = - let mRewoundHistory = - HeaderStateHistory.rewind - (AF.castPoint $ AF.headPoint $ jTheirFragment ji) - (jTheirHeaderStateHistory ji) - -- We assume the history is always possible to rewind. The case - -- where this wouldn't be true is if the original candidate - -- fragment provided by the dynamo contained headers that have - -- no corresponding header state. - (rewoundHistory, _oldestRewound) = - fromMaybe (error "offerJump: cannot rewind history") mRewoundHistory - -- If the tip of jTheirFragment does not match the tip of - -- jTheirHeaderStateHistory, then the history needs rewinding. - historyNeedsRewinding = - (/= AF.headPoint (jTheirFragment ji)) $ - castPoint $ - headerStatePoint . hswtHeaderState . either id id $ - AF.head $ - HeaderStateHistory.unHeaderStateHistory $ - jTheirHeaderStateHistory ji - -- Recompute the intersection only if a suffix of the candidate - -- fragment was trimmed. 
- intersection - | historyNeedsRewinding = case AF.intersectionPoint (jOurFragment ji) (jTheirFragment ji) of - Just po -> castPoint po - Nothing -> error "offerJump: the jumpInfo should have a valid intersection" - | otherwise = jMostRecentIntersection ji - in KnownIntersectionState - { mostRecentIntersection = intersection - , ourFrag = jOurFragment ji - , theirFrag = jTheirFragment ji - , theirHeaderStateHistory = rewoundHistory - , kBestBlockNo = max (fromWithOrigin 0 $ AF.headBlockNo $ jTheirFragment ji) (kBestBlockNo kis) - } - requestNext :: - KnownIntersectionState blk - -> MkPipelineDecision - -> Nat n - -> Their (Tip blk) - -> WithOrigin BlockNo - -> Consensus (ClientPipelinedStIdle n) blk m - requestNext kis mkPipelineDecision n theirTip candTipBlockNo = - let theirTipBlockNo = getTipBlockNo (unTheir theirTip) - decision = - runPipelineDecision - mkPipelineDecision - n - candTipBlockNo - theirTipBlockNo - onMsgAwaitReply = do - HistoricityCheck.judgeMessageHistoricity - historicityCheck - HistoricalMsgAwaitReply - (HeaderStateHistory.current (theirHeaderStateHistory kis)) >>= \case - Left ex -> throwIO $ HistoricityError ex - Right () -> pure () - idlingStart idling - lbPause loPBucket - Jumping.jgOnAwaitReply jumping - in - case (n, decision) of + requestNext :: + KnownIntersectionState blk -> + MkPipelineDecision -> + Nat n -> + Their (Tip blk) -> + WithOrigin BlockNo -> + Consensus (ClientPipelinedStIdle n) blk m + requestNext kis mkPipelineDecision n theirTip candTipBlockNo = + let theirTipBlockNo = getTipBlockNo (unTheir theirTip) + decision = + runPipelineDecision + mkPipelineDecision + n + candTipBlockNo + theirTipBlockNo + onMsgAwaitReply = do + HistoricityCheck.judgeMessageHistoricity + historicityCheck + HistoricalMsgAwaitReply + (HeaderStateHistory.current (theirHeaderStateHistory kis)) + >>= \case + Left ex -> throwIO $ HistoricityError ex + Right () -> pure () + idlingStart idling + lbPause loPBucket + Jumping.jgOnAwaitReply jumping + in 
case (n, decision) of (Zero, (Request, mkPipelineDecision')) -> - SendMsgRequestNext - onMsgAwaitReply - (handleNext kis mkPipelineDecision' Zero) - + SendMsgRequestNext + onMsgAwaitReply + (handleNext kis mkPipelineDecision' Zero) (_, (Pipeline, mkPipelineDecision')) -> - SendMsgRequestNextPipelined - onMsgAwaitReply - $ requestNext - kis - mkPipelineDecision' - (Succ n) - theirTip - candTipBlockNo - + SendMsgRequestNextPipelined + onMsgAwaitReply + $ requestNext + kis + mkPipelineDecision' + (Succ n) + theirTip + candTipBlockNo (Succ n', (CollectOrPipeline, mkPipelineDecision')) -> - CollectResponse - ( Just - $ pure - $ SendMsgRequestNextPipelined - onMsgAwaitReply - $ requestNext - kis - mkPipelineDecision' - (Succ n) - theirTip - candTipBlockNo - ) - (handleNext kis mkPipelineDecision' n') - + CollectResponse + ( Just + $ pure + $ SendMsgRequestNextPipelined + onMsgAwaitReply + $ requestNext + kis + mkPipelineDecision' + (Succ n) + theirTip + candTipBlockNo + ) + (handleNext kis mkPipelineDecision' n') (Succ n', (Collect, mkPipelineDecision')) -> - CollectResponse - Nothing - (handleNext kis mkPipelineDecision' n') - - handleNext :: - KnownIntersectionState blk - -> MkPipelineDecision - -> Nat n - -> Consensus (ClientStNext n) blk m - handleNext kis mkPipelineDecision n = - -- Unconditionally restart the leaky LoP bucket when receiving any - -- message. 
- ClientStNext { - recvMsgRollForward = \hdr theirTip -> do - (idlingStop idling >> lbResume loPBucket) - traceWith tracer $ TraceDownloadedHeader hdr - continueWithState kis $ - rollForward - mkPipelineDecision - n - hdr - (Their theirTip) - , - recvMsgRollBackward = \intersection theirTip -> do - (idlingStop idling >> lbResume loPBucket) - let intersection' :: Point blk - intersection' = castPoint intersection - traceWith tracer $ TraceRolledBack intersection' - continueWithState kis $ - rollBackward - mkPipelineDecision - n - intersection' - (Their theirTip) + CollectResponse + Nothing + (handleNext kis mkPipelineDecision' n') + + handleNext :: + KnownIntersectionState blk -> + MkPipelineDecision -> + Nat n -> + Consensus (ClientStNext n) blk m + handleNext kis mkPipelineDecision n = + -- Unconditionally restart the leaky LoP bucket when receiving any + -- message. + ClientStNext + { recvMsgRollForward = \hdr theirTip -> do + (idlingStop idling >> lbResume loPBucket) + traceWith tracer $ TraceDownloadedHeader hdr + continueWithState kis $ + rollForward + mkPipelineDecision + n + hdr + (Their theirTip) + , recvMsgRollBackward = \intersection theirTip -> do + (idlingStop idling >> lbResume loPBucket) + let intersection' :: Point blk + intersection' = castPoint intersection + traceWith tracer $ TraceRolledBack intersection' + continueWithState kis $ + rollBackward + mkPipelineDecision + n + intersection' + (Their theirTip) } - -- Process a new header. - -- - -- This is also the place where we check whether their chain still - -- intersects with ours. We have to do this in order to get a ledger state - -- to forecast from. It is also sufficient to do this just here, and not on - -- MsgRollBack or MsgAwaitReply as these do not induce significant work in - -- the context of ChainSync. 
- rollForward :: - MkPipelineDecision - -> Nat n - -> Header blk - -> Their (Tip blk) - -> Stateful m blk - (KnownIntersectionState blk) - (ClientPipelinedStIdle n) - rollForward mkPipelineDecision n hdr theirTip = - Stateful $ \kis -> traceException $ do - arrival <- recordHeaderArrival hdr - arrivalTime <- getMonotonicTime - - let slotNo = blockSlot hdr - - checkKnownInvalid cfgEnv dynEnv intEnv hdr - - Jumping.jgOnRollForward jumping (blockPoint hdr) - atomically (setLatestSlot dynEnv (NotOrigin slotNo)) - - checkTime cfgEnv dynEnv intEnv kis arrival slotNo >>= \case - NoLongerIntersects -> - continueWithState () - $ drainThePipe n - $ findIntersectionTop - cfgEnv - dynEnv - intEnv - (kBestBlockNo kis) - NoMoreIntersection - - StillIntersects (ledgerView, hdrSlotTime) kis' -> do - kis'' <- - checkValid cfgEnv intEnv hdr hdrSlotTime theirTip kis' ledgerView - kis''' <- checkLoP cfgEnv dynEnv hdr kis'' - - atomically $ do - updateJumpInfoSTM jumping kis''' - setCandidate (theirFrag kis''') - atomically - $ traceWith headerMetricsTracer (slotNo, arrivalTime) - - continueWithState kis''' - $ nextStep mkPipelineDecision n theirTip - - rollBackward :: - MkPipelineDecision - -> Nat n - -> Point blk - -> Their (Tip blk) - -> Stateful m blk - (KnownIntersectionState blk) - (ClientPipelinedStIdle n) - rollBackward mkPipelineDecision n rollBackPoint theirTip = - Stateful $ \kis -> - traceException - $ let KnownIntersectionState { - mostRecentIntersection - , ourFrag - , theirFrag - , theirHeaderStateHistory - , kBestBlockNo - } = kis - in - case attemptRollback - rollBackPoint - (theirFrag, theirHeaderStateHistory) - of - Nothing -> - -- Remember that we use our current chain fragment as the - -- starting point for the candidate's chain. Our fragment - -- contained @k@ headers. At this point, the candidate - -- fragment might have grown to more than @k@ or rolled - -- back to less than @k@ headers. 
- -- - -- But now, it rolled back to some point that is not on the - -- fragment, which means that it tried to roll back to some - -- point before one of the last @k@ headers we initially - -- started from. We could never switch to this fork anyway, - -- so just disconnect. Furthermore, our current chain might - -- have advanced in the meantime, so the point we would - -- have to roll back to might have been much further back - -- than @k@ blocks (> @k@ + the number of blocks we have - -- advanced since starting syncing). - -- - -- INVARIANT: a candidate fragment contains @>=k@ headers - -- (unless near genesis, in which case we mean the total - -- number of blocks in the fragment) minus @r@ headers - -- where @r <= k@. This ghost variable @r@ indicates the - -- number of headers we temporarily rolled back. Such a - -- rollback must always be followed by rolling forward @s@ - -- new headers where @s >= r@. - -- - -- Thus, @k - r + s >= k@. - terminateAfterDrain n - $ RolledBackPastIntersection - rollBackPoint - (ourTipFromChain ourFrag) - theirTip - - Just (theirFrag', theirHeaderStateHistory', mOldestRewound) -> do - - whenJust mOldestRewound $ \oldestRewound -> - HistoricityCheck.judgeMessageHistoricity - historicityCheck - HistoricalMsgRollBackward - oldestRewound >>= \case - Left ex -> throwIO $ HistoricityError ex + -- Process a new header. + -- + -- This is also the place where we check whether their chain still + -- intersects with ours. We have to do this in order to get a ledger state + -- to forecast from. It is also sufficient to do this just here, and not on + -- MsgRollBack or MsgAwaitReply as these do not induce significant work in + -- the context of ChainSync. 
+ rollForward :: + MkPipelineDecision -> + Nat n -> + Header blk -> + Their (Tip blk) -> + Stateful + m + blk + (KnownIntersectionState blk) + (ClientPipelinedStIdle n) + rollForward mkPipelineDecision n hdr theirTip = + Stateful $ \kis -> traceException $ do + arrival <- recordHeaderArrival hdr + arrivalTime <- getMonotonicTime + + let slotNo = blockSlot hdr + + checkKnownInvalid cfgEnv dynEnv intEnv hdr + + Jumping.jgOnRollForward jumping (blockPoint hdr) + atomically (setLatestSlot dynEnv (NotOrigin slotNo)) + + checkTime cfgEnv dynEnv intEnv kis arrival slotNo >>= \case + NoLongerIntersects -> + continueWithState () $ + drainThePipe n $ + findIntersectionTop + cfgEnv + dynEnv + intEnv + (kBestBlockNo kis) + NoMoreIntersection + StillIntersects (ledgerView, hdrSlotTime) kis' -> do + kis'' <- + checkValid cfgEnv intEnv hdr hdrSlotTime theirTip kis' ledgerView + kis''' <- checkLoP cfgEnv dynEnv hdr kis'' + + atomically $ do + updateJumpInfoSTM jumping kis''' + setCandidate (theirFrag kis''') + atomically $ + traceWith headerMetricsTracer (slotNo, arrivalTime) + + continueWithState kis''' $ + nextStep mkPipelineDecision n theirTip + + rollBackward :: + MkPipelineDecision -> + Nat n -> + Point blk -> + Their (Tip blk) -> + Stateful + m + blk + (KnownIntersectionState blk) + (ClientPipelinedStIdle n) + rollBackward mkPipelineDecision n rollBackPoint theirTip = + Stateful $ \kis -> + traceException $ + let KnownIntersectionState + { mostRecentIntersection + , ourFrag + , theirFrag + , theirHeaderStateHistory + , kBestBlockNo + } = kis + in case attemptRollback + rollBackPoint + (theirFrag, theirHeaderStateHistory) of + Nothing -> + -- Remember that we use our current chain fragment as the + -- starting point for the candidate's chain. Our fragment + -- contained @k@ headers. At this point, the candidate + -- fragment might have grown to more than @k@ or rolled + -- back to less than @k@ headers. 
+ -- + -- But now, it rolled back to some point that is not on the + -- fragment, which means that it tried to roll back to some + -- point before one of the last @k@ headers we initially + -- started from. We could never switch to this fork anyway, + -- so just disconnect. Furthermore, our current chain might + -- have advanced in the meantime, so the point we would + -- have to roll back to might have been much further back + -- than @k@ blocks (> @k@ + the number of blocks we have + -- advanced since starting syncing). + -- + -- INVARIANT: a candidate fragment contains @>=k@ headers + -- (unless near genesis, in which case we mean the total + -- number of blocks in the fragment) minus @r@ headers + -- where @r <= k@. This ghost variable @r@ indicates the + -- number of headers we temporarily rolled back. Such a + -- rollback must always be followed by rolling forward @s@ + -- new headers where @s >= r@. + -- + -- Thus, @k - r + s >= k@. + terminateAfterDrain n $ + RolledBackPastIntersection + rollBackPoint + (ourTipFromChain ourFrag) + theirTip + Just (theirFrag', theirHeaderStateHistory', mOldestRewound) -> do + whenJust mOldestRewound $ \oldestRewound -> + HistoricityCheck.judgeMessageHistoricity + historicityCheck + HistoricalMsgRollBackward + oldestRewound + >>= \case + Left ex -> throwIO $ HistoricityError ex Right () -> pure () - -- We just rolled back to @rollBackPoint@, either our most - -- recent intersection was after or at @rollBackPoint@, in - -- which case @rollBackPoint@ becomes the new most recent - -- intersection. - -- - -- But if the most recent intersection was /before/ - -- @rollBackPoint@, then the most recent intersection doesn't - -- change. 
- let mostRecentIntersection' = - if AF.withinFragmentBounds - (castPoint rollBackPoint) - ourFrag - then rollBackPoint - else mostRecentIntersection - - kis' = - assertKnownIntersectionInvariants - (configConsensus cfg) - $ KnownIntersectionState { - mostRecentIntersection = mostRecentIntersection' - , ourFrag = ourFrag - , theirFrag = theirFrag' - , theirHeaderStateHistory = theirHeaderStateHistory' - , kBestBlockNo - } - atomically $ do - updateJumpInfoSTM jumping kis' - setCandidate theirFrag' - setLatestSlot dynEnv (pointSlot rollBackPoint) - - Jumping.jgOnRollBackward jumping (pointSlot rollBackPoint) - - continueWithState kis' $ - nextStep mkPipelineDecision n theirTip + -- We just rolled back to @rollBackPoint@, either our most + -- recent intersection was after or at @rollBackPoint@, in + -- which case @rollBackPoint@ becomes the new most recent + -- intersection. + -- + -- But if the most recent intersection was /before/ + -- @rollBackPoint@, then the most recent intersection doesn't + -- change. + let mostRecentIntersection' = + if AF.withinFragmentBounds + (castPoint rollBackPoint) + ourFrag + then rollBackPoint + else mostRecentIntersection + + kis' = + assertKnownIntersectionInvariants + (configConsensus cfg) + $ KnownIntersectionState + { mostRecentIntersection = mostRecentIntersection' + , ourFrag = ourFrag + , theirFrag = theirFrag' + , theirHeaderStateHistory = theirHeaderStateHistory' + , kBestBlockNo + } + atomically $ do + updateJumpInfoSTM jumping kis' + setCandidate theirFrag' + setLatestSlot dynEnv (pointSlot rollBackPoint) + + Jumping.jgOnRollBackward jumping (pointSlot rollBackPoint) + + continueWithState kis' $ + nextStep mkPipelineDecision n theirTip -- | Let ChainSync jumping know about an update to the 'KnownIntersectionState'. 
updateJumpInfoSTM :: - Jumping.Jumping m blk - -> KnownIntersectionState blk - -> STM m () + Jumping.Jumping m blk -> + KnownIntersectionState blk -> + STM m () updateJumpInfoSTM jumping kis@KnownIntersectionState{ourFrag} = - Jumping.jgUpdateJumpInfo jumping JumpInfo + Jumping.jgUpdateJumpInfo + jumping + JumpInfo { jMostRecentIntersection = mostRecentIntersection kis , jOurFragment = ourFrag , jTheirFragment = theirFrag kis @@ -1635,44 +1668,44 @@ updateJumpInfoSTM jumping kis@KnownIntersectionState{ourFrag} = -- /must/ be 100% reliable. checkKnownInvalid :: forall m blk arrival judgment. - ( IOLike m - , LedgerSupportsProtocol blk - ) - => ConfigEnv m blk - -> DynamicEnv m blk - -> InternalEnv m blk arrival judgment - -> Header blk - -> m () + ( IOLike m + , LedgerSupportsProtocol blk + ) => + ConfigEnv m blk -> + DynamicEnv m blk -> + InternalEnv m blk arrival judgment -> + Header blk -> + m () checkKnownInvalid cfgEnv dynEnv intEnv hdr = case scrutinee of - GenesisHash -> return () - BlockHash hash -> do - isInvalidBlock <- atomically $ forgetFingerprint <$> getIsInvalidBlock - whenJust (isInvalidBlock hash) $ \reason -> - disconnect $ InvalidBlock (headerPoint hdr) hash reason - where - ConfigEnv { - chainDbView - , getDiffusionPipeliningSupport - } = cfgEnv - - ChainDbView { - getIsInvalidBlock - } = chainDbView - - DynamicEnv { - version = _version - } = dynEnv - - InternalEnv { - disconnect - } = intEnv - - -- When pipelining, the tip of the candidate is forgiven for being an - -- invalid block, but not if it extends any invalid blocks. - scrutinee = case getDiffusionPipeliningSupport of - DiffusionPipeliningOff -> BlockHash (headerHash hdr) - -- Disconnect if the parent block of `hdr` is known to be invalid. 
- DiffusionPipeliningOn -> headerPrevHash hdr + GenesisHash -> return () + BlockHash hash -> do + isInvalidBlock <- atomically $ forgetFingerprint <$> getIsInvalidBlock + whenJust (isInvalidBlock hash) $ \reason -> + disconnect $ InvalidBlock (headerPoint hdr) hash reason + where + ConfigEnv + { chainDbView + , getDiffusionPipeliningSupport + } = cfgEnv + + ChainDbView + { getIsInvalidBlock + } = chainDbView + + DynamicEnv + { version = _version + } = dynEnv + + InternalEnv + { disconnect + } = intEnv + + -- When pipelining, the tip of the candidate is forgiven for being an + -- invalid block, but not if it extends any invalid blocks. + scrutinee = case getDiffusionPipeliningSupport of + DiffusionPipeliningOff -> BlockHash (headerHash hdr) + -- Disconnect if the parent block of `hdr` is known to be invalid. + DiffusionPipeliningOn -> headerPrevHash hdr -- | Manage the relationships between the header's slot, arrival time, and -- intersection with the local selection @@ -1694,253 +1727,256 @@ checkKnownInvalid cfgEnv dynEnv intEnv hdr = case scrutinee of -- is not preferrable to ours, we disconnect. checkTime :: forall m blk arrival judgment. 
- ( IOLike m - , LedgerSupportsProtocol blk - ) - => ConfigEnv m blk - -> DynamicEnv m blk - -> InternalEnv m blk arrival judgment - -> KnownIntersectionState blk - -> arrival - -> SlotNo - -> m (UpdatedIntersectionState blk (LedgerView (BlockProtocol blk), RelativeTime)) + ( IOLike m + , LedgerSupportsProtocol blk + ) => + ConfigEnv m blk -> + DynamicEnv m blk -> + InternalEnv m blk arrival judgment -> + KnownIntersectionState blk -> + arrival -> + SlotNo -> + m (UpdatedIntersectionState blk (LedgerView (BlockProtocol blk), RelativeTime)) checkTime cfgEnv dynEnv intEnv = - \kis arrival slotNo -> pauseBucket $ castEarlyExitIntersects $ do - Intersects kis2 (lst, slotTime) <- checkArrivalTime kis arrival - Intersects kis3 ledgerView <- case projectLedgerView slotNo lst of - Just ledgerView -> pure $ Intersects kis2 ledgerView - Nothing -> do - EarlyExit.lift $ - traceWith (tracer cfgEnv) - $ TraceWaitingBeyondForecastHorizon slotNo - res <- readLedgerState kis2 (projectLedgerView slotNo) - EarlyExit.lift $ - traceWith (tracer cfgEnv) - $ TraceAccessingForecastHorizon slotNo - pure res - pure $ Intersects kis3 (ledgerView, slotTime) - where - ConfigEnv { - cfg - , chainDbView - } = cfgEnv - - ChainDbView { - getPastLedger - } = chainDbView - - InternalEnv { - disconnect - , headerInFutureCheck - , intersectsWithCurrentChain - } = intEnv - - InFutureCheck.HeaderInFutureCheck { - handleHeaderArrival - , judgeHeaderArrival - } = headerInFutureCheck - - -- Determine whether the header is from the future, and handle that fact if - -- so. Also return the ledger state used for the determination. - -- - -- Relies on 'readLedgerState'. 
- checkArrivalTime :: - KnownIntersectionState blk - -> arrival - -> WithEarlyExit m (Intersects blk (LedgerState blk EmptyMK, RelativeTime)) - checkArrivalTime kis arrival = do - Intersects kis' (lst, judgment) <- do - readLedgerState kis $ \lst -> - case runExcept - $ judgeHeaderArrival (configLedger cfg) lst arrival - of - Left PastHorizon{} -> Nothing - Right judgment -> Just (lst, judgment) - - -- For example, throw an exception if the header is from the far - -- future. - EarlyExit.lift $ handleHeaderArrival judgment <&> runExcept >>= \case - Left exn -> disconnect (InFutureHeaderExceedsClockSkew exn) - Right slotTime -> return $ Intersects kis' (lst, slotTime) - - -- Block until the the ledger state at the intersection with the local - -- selection returns 'Just'. - -- - -- Exits early if the intersection no longer exists. - readLedgerState :: - forall a. - KnownIntersectionState blk - -> (LedgerState blk EmptyMK -> Maybe a) - -> WithEarlyExit m (Intersects blk a) - readLedgerState kis prj = castM $ readLedgerStateHelper kis prj - - readLedgerStateHelper :: - forall a. - KnownIntersectionState blk - -> (LedgerState blk EmptyMK -> Maybe a) - -> m (WithEarlyExit m (Intersects blk a)) - readLedgerStateHelper kis prj = atomically $ do - -- We must first find the most recent intersection with the current - -- chain. Note that this is cheap when the chain and candidate haven't - -- changed. 
- intersectsWithCurrentChain kis >>= \case - NoLongerIntersects -> return exitEarly - StillIntersects () kis' -> do - let KnownIntersectionState { - mostRecentIntersection - } = kis' - lst <- - fmap - (maybe - (error $ - "intersection not within last k blocks: " - <> show mostRecentIntersection - ) - ledgerState - ) - $ getPastLedger mostRecentIntersection - case prj lst of - Nothing -> do - checkPreferTheirsOverOurs kis' - retry - Just ledgerView -> - return $ return $ Intersects kis' ledgerView - - -- Note [Candidate comparing beyond the forecast horizon] - -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -- - -- When a header is beyond the forecast horizon and their fragment is not - -- preferrable to our selection (ourFrag), then we disconnect, as we will - -- never end up selecting it. - -- - -- In the context of Genesis, one can think of the candidate losing a - -- density comparison against the selection. See the Genesis documentation - -- for why this check is necessary. - -- - -- In particular, this means that we will disconnect from peers who offer us - -- a chain containing a slot gap larger than a forecast window. - checkPreferTheirsOverOurs :: KnownIntersectionState blk -> STM m () - checkPreferTheirsOverOurs kis - | -- Precondition is fulfilled as ourFrag and theirFrag intersect by - -- construction. 
- preferAnchoredCandidate (configBlock cfg) ourFrag theirFrag - = pure () - | otherwise - = throwSTM $ CandidateTooSparse + \kis arrival slotNo -> pauseBucket $ castEarlyExitIntersects $ do + Intersects kis2 (lst, slotTime) <- checkArrivalTime kis arrival + Intersects kis3 ledgerView <- case projectLedgerView slotNo lst of + Just ledgerView -> pure $ Intersects kis2 ledgerView + Nothing -> do + EarlyExit.lift $ + traceWith (tracer cfgEnv) $ + TraceWaitingBeyondForecastHorizon slotNo + res <- readLedgerState kis2 (projectLedgerView slotNo) + EarlyExit.lift $ + traceWith (tracer cfgEnv) $ + TraceAccessingForecastHorizon slotNo + pure res + pure $ Intersects kis3 (ledgerView, slotTime) + where + ConfigEnv + { cfg + , chainDbView + } = cfgEnv + + ChainDbView + { getPastLedger + } = chainDbView + + InternalEnv + { disconnect + , headerInFutureCheck + , intersectsWithCurrentChain + } = intEnv + + InFutureCheck.HeaderInFutureCheck + { handleHeaderArrival + , judgeHeaderArrival + } = headerInFutureCheck + + -- Determine whether the header is from the future, and handle that fact if + -- so. Also return the ledger state used for the determination. + -- + -- Relies on 'readLedgerState'. + checkArrivalTime :: + KnownIntersectionState blk -> + arrival -> + WithEarlyExit m (Intersects blk (LedgerState blk EmptyMK, RelativeTime)) + checkArrivalTime kis arrival = do + Intersects kis' (lst, judgment) <- do + readLedgerState kis $ \lst -> + case runExcept $ + judgeHeaderArrival (configLedger cfg) lst arrival of + Left PastHorizon{} -> Nothing + Right judgment -> Just (lst, judgment) + + -- For example, throw an exception if the header is from the far + -- future. + EarlyExit.lift $ + handleHeaderArrival judgment <&> runExcept >>= \case + Left exn -> disconnect (InFutureHeaderExceedsClockSkew exn) + Right slotTime -> return $ Intersects kis' (lst, slotTime) + + -- Block until the the ledger state at the intersection with the local + -- selection returns 'Just'. 
+ -- + -- Exits early if the intersection no longer exists. + readLedgerState :: + forall a. + KnownIntersectionState blk -> + (LedgerState blk EmptyMK -> Maybe a) -> + WithEarlyExit m (Intersects blk a) + readLedgerState kis prj = castM $ readLedgerStateHelper kis prj + + readLedgerStateHelper :: + forall a. + KnownIntersectionState blk -> + (LedgerState blk EmptyMK -> Maybe a) -> + m (WithEarlyExit m (Intersects blk a)) + readLedgerStateHelper kis prj = atomically $ do + -- We must first find the most recent intersection with the current + -- chain. Note that this is cheap when the chain and candidate haven't + -- changed. + intersectsWithCurrentChain kis >>= \case + NoLongerIntersects -> return exitEarly + StillIntersects () kis' -> do + let KnownIntersectionState + { mostRecentIntersection + } = kis' + lst <- + fmap + ( maybe + ( error $ + "intersection not within last k blocks: " + <> show mostRecentIntersection + ) + ledgerState + ) + $ getPastLedger mostRecentIntersection + case prj lst of + Nothing -> do + checkPreferTheirsOverOurs kis' + retry + Just ledgerView -> + return $ return $ Intersects kis' ledgerView + + -- Note [Candidate comparing beyond the forecast horizon] + -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + -- + -- When a header is beyond the forecast horizon and their fragment is not + -- preferrable to our selection (ourFrag), then we disconnect, as we will + -- never end up selecting it. + -- + -- In the context of Genesis, one can think of the candidate losing a + -- density comparison against the selection. See the Genesis documentation + -- for why this check is necessary. + -- + -- In particular, this means that we will disconnect from peers who offer us + -- a chain containing a slot gap larger than a forecast window. + checkPreferTheirsOverOurs :: KnownIntersectionState blk -> STM m () + checkPreferTheirsOverOurs kis + | -- Precondition is fulfilled as ourFrag and theirFrag intersect by + -- construction. 
+ preferAnchoredCandidate (configBlock cfg) ourFrag theirFrag = + pure () + | otherwise = + throwSTM $ + CandidateTooSparse mostRecentIntersection (ourTipFromChain ourFrag) (theirTipFromChain theirFrag) - where - KnownIntersectionState { - mostRecentIntersection - , ourFrag - , theirFrag - } = kis + where + KnownIntersectionState + { mostRecentIntersection + , ourFrag + , theirFrag + } = kis + + -- Returns 'Nothing' if the ledger state cannot forecast the ledger view + -- that far into the future. + projectLedgerView :: + SlotNo -> + LedgerState blk EmptyMK -> + Maybe (LedgerView (BlockProtocol blk)) + projectLedgerView slot lst = + let forecast = ledgerViewForecastAt (configLedger cfg) lst + in -- TODO cache this in the KnownIntersectionState? Or even in the + -- LedgerDB? - -- Returns 'Nothing' if the ledger state cannot forecast the ledger view - -- that far into the future. - projectLedgerView :: - SlotNo - -> LedgerState blk EmptyMK - -> Maybe (LedgerView (BlockProtocol blk)) - projectLedgerView slot lst = - let forecast = ledgerViewForecastAt (configLedger cfg) lst - -- TODO cache this in the KnownIntersectionState? Or even in the - -- LedgerDB? - in case runExcept $ forecastFor forecast slot of - Right ledgerView -> Just ledgerView - Left OutsideForecastRange{} -> - -- The header is too far ahead of the intersection point with - -- our current chain. We have to wait until our chain and the - -- intersection have advanced far enough. This will wait on - -- changes to the current chain via the call to - -- 'intersectsWithCurrentChain' before it. - Nothing - - -- Pause the LoP bucket for the entire duration of 'checkTime'. It will - -- either execute very fast, or it will block on the time translation or - -- forecast horizon, waiting for our selection to advance. During this - -- period, we should not leak tokens as our peer is not responsible for this - -- waiting time. 
- pauseBucket :: m a -> m a - pauseBucket = - bracket_ - (lbPause (loPBucket dynEnv)) - (lbResume (loPBucket dynEnv)) + Right ledgerView -> Just ledgerView + Left OutsideForecastRange{} -> + -- The header is too far ahead of the intersection point with + -- our current chain. We have to wait until our chain and the + -- intersection have advanced far enough. This will wait on + -- changes to the current chain via the call to + -- 'intersectsWithCurrentChain' before it. + Nothing + + -- Pause the LoP bucket for the entire duration of 'checkTime'. It will + -- either execute very fast, or it will block on the time translation or + -- forecast horizon, waiting for our selection to advance. During this + -- period, we should not leak tokens as our peer is not responsible for this + -- waiting time. + pauseBucket :: m a -> m a + pauseBucket = + bracket_ + (lbPause (loPBucket dynEnv)) + (lbResume (loPBucket dynEnv)) -- | Update the 'KnownIntersectionState' according to the header, if it's valid -- -- Crucially: disconnects if it isn't. checkValid :: forall m blk arrival judgment. 
- ( IOLike m - , LedgerSupportsProtocol blk - ) - => ConfigEnv m blk - -> InternalEnv m blk arrival judgment - -> Header blk - -> RelativeTime -- ^ onset of the header's slot - -> Their (Tip blk) - -> KnownIntersectionState blk - -> LedgerView (BlockProtocol blk) - -> m (KnownIntersectionState blk) + ( IOLike m + , LedgerSupportsProtocol blk + ) => + ConfigEnv m blk -> + InternalEnv m blk arrival judgment -> + Header blk -> + -- | onset of the header's slot + RelativeTime -> + Their (Tip blk) -> + KnownIntersectionState blk -> + LedgerView (BlockProtocol blk) -> + m (KnownIntersectionState blk) checkValid cfgEnv intEnv hdr hdrSlotTime theirTip kis ledgerView = do - let KnownIntersectionState { - mostRecentIntersection - , ourFrag - , theirFrag - , theirHeaderStateHistory - , kBestBlockNo - } = kis - - let hdrPoint = headerPoint hdr - - -- Validate header - theirHeaderStateHistory' <- - case runExcept - $ validateHeader cfg ledgerView hdr hdrSlotTime theirHeaderStateHistory - of - Right theirHeaderStateHistory' -> return theirHeaderStateHistory' - Left vErr -> - disconnect - $ HeaderError hdrPoint vErr (ourTipFromChain ourFrag) theirTip - - let - validatedHdr = HeaderWithTime { hwtHeader = hdr - , hwtSlotRelativeTime = hdrSlotTime - } - theirFrag' = theirFrag :> validatedHdr - -- Advance the most recent intersection if we have the same - -- header on our fragment too. This is cheaper than recomputing - -- the intersection from scratch. 
- mostRecentIntersection' - | Just ourSuccessor <- - AF.successorBlock (castPoint mostRecentIntersection) ourFrag - , headerHash ourSuccessor == headerHash hdr - = headerPoint hdr - | otherwise - = mostRecentIntersection - - traceWith (tracer cfgEnv) $ TraceValidatedHeader hdr - - pure - $ assertKnownIntersectionInvariants (configConsensus cfg) - $ KnownIntersectionState { - mostRecentIntersection = mostRecentIntersection' - , ourFrag = ourFrag - , theirFrag = theirFrag' - , theirHeaderStateHistory = theirHeaderStateHistory' - , kBestBlockNo - } - where - ConfigEnv { - cfg - } = cfgEnv + let KnownIntersectionState + { mostRecentIntersection + , ourFrag + , theirFrag + , theirHeaderStateHistory + , kBestBlockNo + } = kis + + let hdrPoint = headerPoint hdr + + -- Validate header + theirHeaderStateHistory' <- + case runExcept $ + validateHeader cfg ledgerView hdr hdrSlotTime theirHeaderStateHistory of + Right theirHeaderStateHistory' -> return theirHeaderStateHistory' + Left vErr -> + disconnect $ + HeaderError hdrPoint vErr (ourTipFromChain ourFrag) theirTip + + let + validatedHdr = + HeaderWithTime + { hwtHeader = hdr + , hwtSlotRelativeTime = hdrSlotTime + } + theirFrag' = theirFrag :> validatedHdr + -- Advance the most recent intersection if we have the same + -- header on our fragment too. This is cheaper than recomputing + -- the intersection from scratch. 
+ mostRecentIntersection' + | Just ourSuccessor <- + AF.successorBlock (castPoint mostRecentIntersection) ourFrag + , headerHash ourSuccessor == headerHash hdr = + headerPoint hdr + | otherwise = + mostRecentIntersection + + traceWith (tracer cfgEnv) $ TraceValidatedHeader hdr + + pure $ + assertKnownIntersectionInvariants (configConsensus cfg) $ + KnownIntersectionState + { mostRecentIntersection = mostRecentIntersection' + , ourFrag = ourFrag + , theirFrag = theirFrag' + , theirHeaderStateHistory = theirHeaderStateHistory' + , kBestBlockNo + } + where + ConfigEnv + { cfg + } = cfgEnv - InternalEnv { - disconnect - } = intEnv + InternalEnv + { disconnect + } = intEnv -- | Check the limit on patience. If the block number of the new header is -- better than anything (valid) we have seen from this peer so far, we add a @@ -1948,50 +1984,52 @@ checkValid cfgEnv intEnv hdr hdrSlotTime theirTip kis ledgerView = do -- only after validation of the block. checkLoP :: forall m blk. - ( IOLike m - , HasHeader (Header blk) ) - => ConfigEnv m blk - -> DynamicEnv m blk - -> Header blk - -> KnownIntersectionState blk - -> m (KnownIntersectionState blk) + ( IOLike m + , HasHeader (Header blk) + ) => + ConfigEnv m blk -> + DynamicEnv m blk -> + Header blk -> + KnownIntersectionState blk -> + m (KnownIntersectionState blk) checkLoP ConfigEnv{tracer} DynamicEnv{loPBucket} hdr kis@KnownIntersectionState{kBestBlockNo} = if blockNo hdr > kBestBlockNo - then do lbGrantToken loPBucket - traceWith tracer $ TraceGaveLoPToken True hdr kBestBlockNo - pure $ kis{kBestBlockNo = blockNo hdr} - else do traceWith tracer $ TraceGaveLoPToken False hdr kBestBlockNo - pure kis + then do + lbGrantToken loPBucket + traceWith tracer $ TraceGaveLoPToken True hdr kBestBlockNo + pure $ kis{kBestBlockNo = blockNo hdr} + else do + traceWith tracer $ TraceGaveLoPToken False hdr kBestBlockNo + pure kis {------------------------------------------------------------------------------- Utilities used in the *top 
functions -------------------------------------------------------------------------------} -data UpdatedIntersectionState blk a = - NoLongerIntersects - -- ^ The local selection has changed such that 'ourFrag' no longer +data UpdatedIntersectionState blk a + = -- | The local selection has changed such that 'ourFrag' no longer -- intersects 'theirFrag' -- -- (In general, the intersection could also be lost because of messages -- they sent, but that's handled elsewhere, not involving this data type.) - | - StillIntersects a !(KnownIntersectionState blk) + NoLongerIntersects + | StillIntersects a !(KnownIntersectionState blk) -data Intersects blk a = - Intersects - (KnownIntersectionState blk) - a +data Intersects blk a + = Intersects + (KnownIntersectionState blk) + a castEarlyExitIntersects :: - Monad m - => WithEarlyExit m (Intersects blk a) - -> m (UpdatedIntersectionState blk a) + Monad m => + WithEarlyExit m (Intersects blk a) -> + m (UpdatedIntersectionState blk a) castEarlyExitIntersects = - fmap cnv . EarlyExit.withEarlyExit - where - cnv = \case - Nothing -> NoLongerIntersects - Just (Intersects kis a) -> StillIntersects a kis + fmap cnv . EarlyExit.withEarlyExit + where + cnv = \case + Nothing -> NoLongerIntersects + Just (Intersects kis a) -> StillIntersects a kis -- | Recent offsets -- @@ -2015,20 +2053,20 @@ castEarlyExitIntersects = -- selected. mkOffsets :: SecurityParam -> Word64 -> [Word64] mkOffsets (SecurityParam k) maxOffset = - [0] ++ takeWhile (< l) [fib n | n <- [2..]] ++ [l] - where - l = unNonZero k `min` maxOffset + [0] ++ takeWhile (< l) [fib n | n <- [2 ..]] ++ [l] + where + l = unNonZero k `min` maxOffset ourTipFromChain :: - HasHeader (Header blk) - => AnchoredFragment (Header blk) - -> Our (Tip blk) + HasHeader (Header blk) => + AnchoredFragment (Header blk) -> + Our (Tip blk) ourTipFromChain = Our . AF.anchorToTip . 
AF.headAnchor theirTipFromChain :: - HasHeader (HeaderWithTime blk) - => AnchoredFragment (HeaderWithTime blk) - -> Their (Tip blk) + HasHeader (HeaderWithTime blk) => + AnchoredFragment (HeaderWithTime blk) -> + Their (Tip blk) theirTipFromChain = Their . AF.anchorToTip . AF.headAnchor -- | A type-legos auxillary function used in 'readLedgerState'. @@ -2036,21 +2074,21 @@ castM :: Monad m => m (WithEarlyExit m x) -> WithEarlyExit m x castM = join . EarlyExit.lift attemptRollback :: - ( BlockSupportsProtocol blk - , HasAnnTip blk - ) - => Point blk - -> (AnchoredFragment (HeaderWithTime blk), HeaderStateHistory blk) - -> Maybe - ( AnchoredFragment (HeaderWithTime blk) - , HeaderStateHistory blk - , -- The state of the oldest header that was rolled back, if any. - Maybe (HeaderStateWithTime blk) - ) + ( BlockSupportsProtocol blk + , HasAnnTip blk + ) => + Point blk -> + (AnchoredFragment (HeaderWithTime blk), HeaderStateHistory blk) -> + Maybe + ( AnchoredFragment (HeaderWithTime blk) + , HeaderStateHistory blk + , -- The state of the oldest header that was rolled back, if any. + Maybe (HeaderStateWithTime blk) + ) attemptRollback rollBackPoint (frag, state) = do - frag' <- AF.rollback (castPoint rollBackPoint) frag - (state', oldestRewound) <- HeaderStateHistory.rewind rollBackPoint state - return (frag', state', oldestRewound) + frag' <- AF.rollback (castPoint rollBackPoint) frag + (state', oldestRewound) <- HeaderStateHistory.rewind rollBackPoint state + return (frag', state', oldestRewound) {------------------------------------------------------------------------------- Looking for newly-recognized trap headers on the existing candidate @@ -2073,63 +2111,64 @@ attemptRollback rollBackPoint (frag, state) = do -- of invalid blocks). invalidBlockRejector :: forall m blk. 
- ( IOLike m - , LedgerSupportsProtocol blk - ) - => Tracer m (TraceChainSyncClientEvent blk) - -> NodeToNodeVersion - -> DiffusionPipeliningSupport - -> STM m (WithFingerprint (HeaderHash blk -> Maybe (ExtValidationError blk))) - -- ^ Get the invalid block checker - -> STM m (AnchoredFragment (HeaderWithTime blk)) - -- ^ Get the candidate - -> Watcher m - (WithFingerprint (HeaderHash blk -> Maybe (ExtValidationError blk))) - Fingerprint + ( IOLike m + , LedgerSupportsProtocol blk + ) => + Tracer m (TraceChainSyncClientEvent blk) -> + NodeToNodeVersion -> + DiffusionPipeliningSupport -> + -- | Get the invalid block checker + STM m (WithFingerprint (HeaderHash blk -> Maybe (ExtValidationError blk))) -> + -- | Get the candidate + STM m (AnchoredFragment (HeaderWithTime blk)) -> + Watcher + m + (WithFingerprint (HeaderHash blk -> Maybe (ExtValidationError blk))) + Fingerprint invalidBlockRejector tracer _version pipelining getIsInvalidBlock getCandidate = - Watcher { - wFingerprint = getFingerprint - , wInitial = Nothing - , wNotify = checkInvalid . forgetFingerprint - , wReader = getIsInvalidBlock - } - where - checkInvalid :: (HeaderHash blk -> Maybe (ExtValidationError blk)) -> m () - checkInvalid isInvalidBlock = do - theirFrag <- atomically getCandidate - -- The invalid block is likely to be a more recent block, so check from - -- newest to oldest. - -- - -- As of block diffusion pipelining, their tip header might be - -- tentative. Since they do not yet have a way to explicitly say - -- whether it is tentative, we assume it is and therefore skip their - -- tip here. 
TODO once it's explicit, only skip it if it's annotated as - -- tentative - mapM_ (uncurry disconnect) - $ firstJust - (\hdrWithTime -> - let hdr = hwtHeader hdrWithTime in - (hdr,) <$> isInvalidBlock (headerHash hdr) - ) - $ ( case pipelining of - DiffusionPipeliningOff -> id - DiffusionPipeliningOn -> - -- As mentioned in the comment above, if the - -- header is tentative we skip the fragment tip, - -- dropping the first element. - drop 1 - ) - $ AF.toNewestFirst theirFrag - - disconnect :: Header blk -> ExtValidationError blk -> m () - disconnect invalidHeader reason = do - let ex = - InvalidBlock - (headerPoint invalidHeader) - (headerHash invalidHeader) - reason - traceWith tracer $ TraceException ex - throwIO ex + Watcher + { wFingerprint = getFingerprint + , wInitial = Nothing + , wNotify = checkInvalid . forgetFingerprint + , wReader = getIsInvalidBlock + } + where + checkInvalid :: (HeaderHash blk -> Maybe (ExtValidationError blk)) -> m () + checkInvalid isInvalidBlock = do + theirFrag <- atomically getCandidate + -- The invalid block is likely to be a more recent block, so check from + -- newest to oldest. + -- + -- As of block diffusion pipelining, their tip header might be + -- tentative. Since they do not yet have a way to explicitly say + -- whether it is tentative, we assume it is and therefore skip their + -- tip here. TODO once it's explicit, only skip it if it's annotated as + -- tentative + mapM_ (uncurry disconnect) + $ firstJust + ( \hdrWithTime -> + let hdr = hwtHeader hdrWithTime + in (hdr,) <$> isInvalidBlock (headerHash hdr) + ) + $ ( case pipelining of + DiffusionPipeliningOff -> id + DiffusionPipeliningOn -> + -- As mentioned in the comment above, if the + -- header is tentative we skip the fragment tip, + -- dropping the first element. 
+ drop 1 + ) + $ AF.toNewestFirst theirFrag + + disconnect :: Header blk -> ExtValidationError blk -> m () + disconnect invalidHeader reason = do + let ex = + InvalidBlock + (headerPoint invalidHeader) + (headerHash invalidHeader) + reason + traceWith tracer $ TraceException ex + throwIO ex {------------------------------------------------------------------------------- Explicit state @@ -2146,12 +2185,12 @@ invalidBlockRejector tracer _version pipelining getIsInvalidBlock getCandidate = newtype Stateful m blk s st = Stateful (s -> m (Consensus st blk m)) continueWithState :: - NoThunks s - => s - -> Stateful m blk s st - -> m (Consensus st blk m) + NoThunks s => + s -> + Stateful m blk s st -> + m (Consensus st blk m) continueWithState !s (Stateful f) = - checkInvariant (show <$> unsafeNoThunks s) $ f s + checkInvariant (show <$> unsafeNoThunks s) $ f s {------------------------------------------------------------------------------- Return value @@ -2168,61 +2207,59 @@ continueWithState !s (Stateful f) = -- client-side of the chain sync protocol. However, the other direction of the -- protocol, and, e.g., the transaction submission protocol, should keep -- running. -data ChainSyncClientResult = - forall blk. BlockSupportsProtocol blk => +data ChainSyncClientResult + = forall blk. + BlockSupportsProtocol blk => ForkTooDeep - (Point blk) -- ^ Intersection - (Our (Tip blk)) - (Their (Tip blk)) - -- ^ The server we're connecting to forked more than @k@ blocks ago. - | - forall blk. BlockSupportsProtocol blk => - NoMoreIntersection - (Our (Tip blk)) - (Their (Tip blk)) - -- ^ Our chain changed such that it no longer intersects with the + -- | Intersection + (Point blk) + (Our (Tip blk)) + -- | The server we're connecting to forked more than @k@ blocks ago. + (Their (Tip blk)) + | -- | Our chain changed such that it no longer intersects with the -- candidate's fragment, and asking for a new intersection did not yield -- one. - | - forall blk. 
BlockSupportsProtocol blk => + forall blk. + BlockSupportsProtocol blk => + NoMoreIntersection + (Our (Tip blk)) + (Their (Tip blk)) + | forall blk. + BlockSupportsProtocol blk => RolledBackPastIntersection - (Point blk) -- ^ Point asked to roll back to - (Our (Tip blk)) - (Their (Tip blk)) - -- ^ We were asked to roll back past the anchor point of the candidate's - -- fragment. This means the candidate chain no longer forks off within @k@, - -- making it impossible to switch to. - | + -- | Point asked to roll back to + (Point blk) + (Our (Tip blk)) + -- | We were asked to roll back past the anchor point of the candidate's + -- fragment. This means the candidate chain no longer forks off within @k@, + -- making it impossible to switch to. + (Their (Tip blk)) + | -- | We were asked to terminate via the 'ControlMessageSTM' AskedToTerminate - -- ^ We were asked to terminate via the 'ControlMessageSTM' deriving instance Show ChainSyncClientResult instance Eq ChainSyncClientResult where - (==) - (ForkTooDeep (a :: Point blk) b c ) - (ForkTooDeep (a' :: Point blk') b' c') - | Just Refl <- eqT @blk @blk' - = (a, b, c) == (a', b', c') - - (==) - (NoMoreIntersection (a :: Our (Tip blk )) b ) - (NoMoreIntersection (a' :: Our (Tip blk')) b') - | Just Refl <- eqT @blk @blk' - = (a, b) == (a', b') - - (==) - (RolledBackPastIntersection (a :: Point blk ) b c ) - (RolledBackPastIntersection (a' :: Point blk') b' c') - | Just Refl <- eqT @blk @blk' - = (a, b, c) == (a', b', c') - - AskedToTerminate == AskedToTerminate = True - - ForkTooDeep{} == _ = False - NoMoreIntersection{} == _ = False - RolledBackPastIntersection{} == _ = False - AskedToTerminate == _ = False + (==) + (ForkTooDeep (a :: Point blk) b c) + (ForkTooDeep (a' :: Point blk') b' c') + | Just Refl <- eqT @blk @blk' = + (a, b, c) == (a', b', c') + (==) + (NoMoreIntersection (a :: Our (Tip blk)) b) + (NoMoreIntersection (a' :: Our (Tip blk')) b') + | Just Refl <- eqT @blk @blk' = + (a, b) == (a', b') + (==) + 
(RolledBackPastIntersection (a :: Point blk) b c) + (RolledBackPastIntersection (a' :: Point blk') b' c') + | Just Refl <- eqT @blk @blk' = + (a, b, c) == (a', b', c') + AskedToTerminate == AskedToTerminate = True + ForkTooDeep{} == _ = False + NoMoreIntersection{} == _ = False + RolledBackPastIntersection{} == _ = False + AskedToTerminate == _ = False {------------------------------------------------------------------------------- Exception @@ -2232,113 +2269,109 @@ instance Eq ChainSyncClientResult where -- behaviour, e.g., serving an invalid header or a header corresponding to a -- known invalid block, we throw an exception to disconnect. This will bring -- down all miniprotocols in both directions with that node. -data ChainSyncClientException = - forall blk. (BlockSupportsProtocol blk, ValidateEnvelope blk) => +data ChainSyncClientException + = forall blk. + (BlockSupportsProtocol blk, ValidateEnvelope blk) => HeaderError - (Point blk) -- ^ Invalid header - (HeaderError blk) - (Our (Tip blk)) - (Their (Tip blk)) - -- ^ Header validation threw an error. - | - forall blk. BlockSupportsProtocol blk => + -- | Invalid header + (Point blk) + (HeaderError blk) + (Our (Tip blk)) + -- | Header validation threw an error. + (Their (Tip blk)) + | forall blk. + BlockSupportsProtocol blk => InvalidIntersection - (Point blk) -- ^ Intersection - (Our (Tip blk)) - (Their (Tip blk)) - -- ^ We send the upstream node a bunch of points from a chain fragment and - -- the upstream node responded with an intersection point that is not on - -- our chain fragment, and thus not among the points we sent. - -- - -- We store the intersection point the upstream node sent us. - | - forall blk. LedgerSupportsProtocol blk => + -- | Intersection + (Point blk) + (Our (Tip blk)) + -- | We send the upstream node a bunch of points from a chain fragment and + -- the upstream node responded with an intersection point that is not on + -- our chain fragment, and thus not among the points we sent. 
+ -- + -- We store the intersection point the upstream node sent us. + (Their (Tip blk)) + | forall blk. + LedgerSupportsProtocol blk => InvalidBlock - (Point blk) - -- ^ Block that triggered the validity check. - (HeaderHash blk) - -- ^ Invalid block. If pipelining was negotiated, this can be - -- different from the previous argument. - (ExtValidationError blk) - -- ^ The upstream node's chain contained a block that we know is invalid. - | - forall blk. BlockSupportsProtocol blk => + -- | Block that triggered the validity check. + (Point blk) + -- | Invalid block. If pipelining was negotiated, this can be + -- different from the previous argument. + (HeaderHash blk) + -- | The upstream node's chain contained a block that we know is invalid. + (ExtValidationError blk) + | forall blk. + BlockSupportsProtocol blk => CandidateTooSparse - (Point blk) -- ^ Intersection - (Our (Tip blk)) - (Their (Tip blk)) - -- ^ The upstream node's chain was so sparse that it was worse than our - -- selection despite being blocked on the forecast horizon. - | + -- | Intersection + (Point blk) + (Our (Tip blk)) + -- | The upstream node's chain was so sparse that it was worse than our + -- selection despite being blocked on the forecast horizon. + (Their (Tip blk)) + | -- | A header arrived from the far future. InFutureHeaderExceedsClockSkew !InFutureCheck.HeaderArrivalException - -- ^ A header arrived from the far future. - | - HistoricityError !HistoricityException - | + | HistoricityError !HistoricityException + | -- | The peer lost its race against the bucket. EmptyBucket - -- ^ The peer lost its race against the bucket. - | + | -- | When the peer responded incorrectly to a jump request. InvalidJumpResponse - -- ^ When the peer responded incorrectly to a jump request. 
- | DensityTooLow - -- ^ The peer has been deemed unworthy by the GDD + | -- | The peer has been deemed unworthy by the GDD + DensityTooLow deriving instance Show ChainSyncClientException instance Eq ChainSyncClientException where - (==) - (HeaderError (a :: Point blk ) b c d ) - (HeaderError (a' :: Point blk') b' c' d') - | Just Refl <- eqT @blk @blk' - = (a, b, c, d) == (a', b', c', d') - - (==) - (InvalidIntersection (a :: Point blk ) b c ) - (InvalidIntersection (a' :: Point blk') b' c') - | Just Refl <- eqT @blk @blk' - = (a, b, c) == (a', b', c') - - (==) - (InvalidBlock (a :: Point blk) b c ) - (InvalidBlock (a' :: Point blk') b' c') - | Just Refl <- eqT @blk @blk' - = (a, b, c) == (a', b', c') - - (==) - (CandidateTooSparse (a :: Point blk ) b c ) - (CandidateTooSparse (a' :: Point blk') b' c') - | Just Refl <- eqT @blk @blk' - = (a, b, c) == (a', b', c') - - (==) - (InFutureHeaderExceedsClockSkew a ) - (InFutureHeaderExceedsClockSkew a') - = a == a' - - (==) - (HistoricityError a ) - (HistoricityError a') - = a == a' - - (==) - EmptyBucket EmptyBucket - = True - (==) - InvalidJumpResponse InvalidJumpResponse - = True - (==) - DensityTooLow DensityTooLow - = True - - HeaderError{} == _ = False - InvalidIntersection{} == _ = False - InvalidBlock{} == _ = False - CandidateTooSparse{} == _ = False - InFutureHeaderExceedsClockSkew{} == _ = False - HistoricityError{} == _ = False - EmptyBucket == _ = False - InvalidJumpResponse == _ = False - DensityTooLow == _ = False + (==) + (HeaderError (a :: Point blk) b c d) + (HeaderError (a' :: Point blk') b' c' d') + | Just Refl <- eqT @blk @blk' = + (a, b, c, d) == (a', b', c', d') + (==) + (InvalidIntersection (a :: Point blk) b c) + (InvalidIntersection (a' :: Point blk') b' c') + | Just Refl <- eqT @blk @blk' = + (a, b, c) == (a', b', c') + (==) + (InvalidBlock (a :: Point blk) b c) + (InvalidBlock (a' :: Point blk') b' c') + | Just Refl <- eqT @blk @blk' = + (a, b, c) == (a', b', c') + (==) + (CandidateTooSparse (a 
:: Point blk) b c) + (CandidateTooSparse (a' :: Point blk') b' c') + | Just Refl <- eqT @blk @blk' = + (a, b, c) == (a', b', c') + (==) + (InFutureHeaderExceedsClockSkew a) + (InFutureHeaderExceedsClockSkew a') = + a == a' + (==) + (HistoricityError a) + (HistoricityError a') = + a == a' + (==) + EmptyBucket + EmptyBucket = + True + (==) + InvalidJumpResponse + InvalidJumpResponse = + True + (==) + DensityTooLow + DensityTooLow = + True + HeaderError{} == _ = False + InvalidIntersection{} == _ = False + InvalidBlock{} == _ = False + CandidateTooSparse{} == _ = False + InFutureHeaderExceedsClockSkew{} == _ = False + HistoricityError{} == _ = False + EmptyBucket == _ = False + InvalidJumpResponse == _ = False + DensityTooLow == _ = False instance Exception ChainSyncClientException @@ -2347,57 +2380,44 @@ instance Exception ChainSyncClientException -------------------------------------------------------------------------------} -- | Events traced by the Chain Sync Client. -data TraceChainSyncClientEvent blk = - TraceDownloadedHeader (Header blk) - -- ^ While following a candidate chain, we rolled forward by downloading a +data TraceChainSyncClientEvent blk + = -- | While following a candidate chain, we rolled forward by downloading a -- header. - | + TraceDownloadedHeader (Header blk) + | -- | While following a candidate chain, we rolled back to the given point. TraceRolledBack (Point blk) - -- ^ While following a candidate chain, we rolled back to the given point. - | - TraceFoundIntersection (Point blk) (Our (Tip blk)) (Their (Tip blk)) - -- ^ We found an intersection between our chain fragment and the + | -- | We found an intersection between our chain fragment and the -- candidate's chain. - | + TraceFoundIntersection (Point blk) (Our (Tip blk)) (Their (Tip blk)) + | -- | An exception was thrown by the Chain Sync Client. TraceException ChainSyncClientException - -- ^ An exception was thrown by the Chain Sync Client. - | + | -- | The client has terminated. 
TraceTermination ChainSyncClientResult - -- ^ The client has terminated. - | + | -- | We have validated the given header. TraceValidatedHeader (Header blk) - -- ^ We have validated the given header. - | - TraceWaitingBeyondForecastHorizon SlotNo - -- ^ The 'SlotNo' is beyond the forecast horizon, the ChainSync client + | -- | The 'SlotNo' is beyond the forecast horizon, the ChainSync client -- cannot yet validate a header in this slot and therefore is waiting. - | - TraceAccessingForecastHorizon SlotNo - -- ^ The 'SlotNo', which was previously beyond the forecast horizon, has now + TraceWaitingBeyondForecastHorizon SlotNo + | -- | The 'SlotNo', which was previously beyond the forecast horizon, has now -- entered it, and we can resume processing. - | - TraceGaveLoPToken Bool (Header blk) BlockNo - -- ^ Whether we added a token to the LoP bucket of the peer. Also carries + TraceAccessingForecastHorizon SlotNo + | -- | Whether we added a token to the LoP bucket of the peer. Also carries -- the considered header and the best block number known prior to this -- header. - | + TraceGaveLoPToken Bool (Header blk) BlockNo + | -- | ChainSync Jumping offering a point to jump to. TraceOfferJump (Point blk) - -- ^ ChainSync Jumping offering a point to jump to. - | + | -- | ChainSync Jumping -- reply. TraceJumpResult (Jumping.JumpResult blk) - -- ^ ChainSync Jumping -- reply. - | - TraceJumpingWaitingForNextInstruction - -- ^ ChainSync Jumping -- the ChainSync client is requesting the next + | -- | ChainSync Jumping -- the ChainSync client is requesting the next -- instruction. - | + TraceJumpingWaitingForNextInstruction + | -- | ChainSync Jumping -- the ChainSync client got its next instruction. TraceJumpingInstructionIs (Jumping.Instruction blk) - -- ^ ChainSync Jumping -- the ChainSync client got its next instruction. - | - forall n. TraceDrainingThePipe (Nat n) + | forall n. 
TraceDrainingThePipe (Nat n) deriving instance ( BlockSupportsProtocol blk , Show (Header blk) - ) - => Show (TraceChainSyncClientEvent blk) + ) => + Show (TraceChainSyncClientEvent blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/HistoricityCheck.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/HistoricityCheck.hs index 97c6feb953..93f61ea88b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/HistoricityCheck.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/HistoricityCheck.hs @@ -7,30 +7,37 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck ( - -- * Interface +module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.HistoricityCheck + ( -- * Interface HistoricalChainSyncMessage (..) , HistoricityCheck (..) , HistoricityCutoff (..) , HistoricityException (..) + -- * Real implementation , mkCheck , noCheck ) where -import Control.Exception (Exception) -import Control.Monad (when) -import Control.Monad.Except (throwError) -import Data.Time.Clock (NominalDiffTime) -import Data.Typeable (eqT) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime (RelativeTime, - SystemTime (..), diffRelTime) -import Ouroboros.Consensus.HeaderStateHistory - (HeaderStateWithTime (..)) -import Ouroboros.Consensus.HeaderValidation (HasAnnTip, - headerStatePoint) -import Ouroboros.Consensus.Node.GsmState (GsmState (..)) +import Control.Exception (Exception) +import Control.Monad (when) +import Control.Monad.Except (throwError) +import Data.Time.Clock (NominalDiffTime) +import Data.Typeable (eqT) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime + ( RelativeTime + , SystemTime (..) 
+ , diffRelTime + ) +import Ouroboros.Consensus.HeaderStateHistory + ( HeaderStateWithTime (..) + ) +import Ouroboros.Consensus.HeaderValidation + ( HasAnnTip + , headerStatePoint + ) +import Ouroboros.Consensus.Node.GsmState (GsmState (..)) {------------------------------------------------------------------------------- Interface @@ -38,60 +45,59 @@ import Ouroboros.Consensus.Node.GsmState (GsmState (..)) -- | Interface for the ChainSync client for deciding whether @MsgRollBackward@s -- and @MsgAwaitReply@s are historical. -data HistoricityCheck m blk = HistoricityCheck { - -- | Determine whether the received message is historical. Depending on the - -- first argument, the second argument is: - -- - -- * 'HistoricalMsgRollBackward': The oldest state that was rolled back. - -- (Note that rollbacks of depth zero are hence never historical.). - -- - -- * 'HistoricalMsgAwaitReply': The state corresponding to the tip of the - -- candidate fragment when @MsgAwaitReply@ was sent. - judgeMessageHistoricity :: - HistoricalChainSyncMessage - -> HeaderStateWithTime blk - -> m (Either HistoricityException ()) +data HistoricityCheck m blk = HistoricityCheck + { judgeMessageHistoricity :: + HistoricalChainSyncMessage -> + HeaderStateWithTime blk -> + m (Either HistoricityException ()) + -- ^ Determine whether the received message is historical. Depending on the + -- first argument, the second argument is: + -- + -- * 'HistoricalMsgRollBackward': The oldest state that was rolled back. + -- (Note that rollbacks of depth zero are hence never historical.). + -- + -- * 'HistoricalMsgAwaitReply': The state corresponding to the tip of the + -- candidate fragment when @MsgAwaitReply@ was sent. } -- | ChainSync historicity checks are performed for @MsgRollBackward@s and -- @MsgAwaitReply@s, see 'HistoricityCheck'. 
-data HistoricalChainSyncMessage = - HistoricalMsgRollBackward +data HistoricalChainSyncMessage + = HistoricalMsgRollBackward | HistoricalMsgAwaitReply deriving stock (Show, Eq) -data HistoricityException = - -- | We received a @MsgRollBackward@ or a @MsgAwaitReply@ while their - -- candidate chain was too old for it to plausibly have been sent by an honest - -- caught-up peer. - -- - -- INVARIANT: @'historicityCutoff' < 'arrivalTime' `diffRelTime` 'slotTime'@ - forall blk. HasHeader blk => HistoricityException { - historicalMessage :: HistoricalChainSyncMessage - -- | Depending on 'historicalMessage': - -- - -- * 'HistoricalMsgRollBackward': The oldest header that was rewound. - -- - -- * 'HistoricalMsgAwaitReply': The tip of the candidate fragment. - , historicalPoint :: !(Point blk) - -- | The time corresponding to the slot of 'historicalPoint'. - , slotTime :: !RelativeTime - -- | When the offending 'historicalMessage' was received. - , arrivalTime :: !RelativeTime +data HistoricityException + = -- | We received a @MsgRollBackward@ or a @MsgAwaitReply@ while their + -- candidate chain was too old for it to plausibly have been sent by an honest + -- caught-up peer. + -- + -- INVARIANT: @'historicityCutoff' < 'arrivalTime' `diffRelTime` 'slotTime'@ + forall blk. HasHeader blk => HistoricityException + { historicalMessage :: HistoricalChainSyncMessage + , historicalPoint :: !(Point blk) + -- ^ Depending on 'historicalMessage': + -- + -- * 'HistoricalMsgRollBackward': The oldest header that was rewound. + -- + -- * 'HistoricalMsgAwaitReply': The tip of the candidate fragment. + , slotTime :: !RelativeTime + -- ^ The time corresponding to the slot of 'historicalPoint'. + , arrivalTime :: !RelativeTime + -- ^ When the offending 'historicalMessage' was received. 
, historicityCutoff :: !HistoricityCutoff } - deriving anyclass (Exception) + deriving anyclass Exception deriving stock instance Show HistoricityException instance Eq HistoricityException where (==) (HistoricityException l0 (l1 :: Point l) l2 l3 l4) - (HistoricityException r0 (r1 :: Point r) r2 r3 r4) - = case eqT @l @r of - Nothing -> False + (HistoricityException r0 (r1 :: Point r) r2 r3 r4) = + case eqT @l @r of + Nothing -> False Just Refl -> (l0, l1, l2, l3, l4) == (r0, r1, r2, r3, r4) - -- ^ The maximum age of a @MsgRollBackward@ or @MsgAwaitReply@ at arrival time, -- constraining the age of the oldest rewound header or the tip of the candidate -- fragment, respectively. @@ -107,8 +113,9 @@ instance Eq HistoricityException where -- older than 36 hours or signals that it doesn't have more headers is either -- violating the maximum rollback or else isn't a caught-up node. Either way, a -- syncing node should not be connected to that peer. -newtype HistoricityCutoff = HistoricityCutoff { - getHistoricityCutoff :: NominalDiffTime + +newtype HistoricityCutoff = HistoricityCutoff + { getHistoricityCutoff :: NominalDiffTime } deriving stock (Show, Eq, Ord) @@ -121,47 +128,52 @@ newtype HistoricityCutoff = HistoricityCutoff { -- already mitigated by other means (for example indirectly by the Limit on -- Patience in the case of Genesis /without/ ChainSync Jumping). noCheck :: Applicative m => HistoricityCheck m blk -noCheck = HistoricityCheck { - judgeMessageHistoricity = \_msg _hswt -> pure $ Right () +noCheck = + HistoricityCheck + { judgeMessageHistoricity = \_msg _hswt -> pure $ Right () } -- | Deny all rollbacks that rewind blocks older than -- 'HistoricityCutoff' upon arrival. mkCheck :: - forall m blk. - ( Monad m - , HasHeader blk - , HasAnnTip blk - ) - => SystemTime m - -> m GsmState - -- ^ Get the current 'GsmState'. - -- - -- This is used to disable the historicity check when we are caught up. 
The - -- rationale is extra resilience against disconnects between honest nodes - -- in disaster scenarios with very low chain density. - -> HistoricityCutoff - -> HistoricityCheck m blk -mkCheck systemTime getCurrentGsmState cshc = HistoricityCheck { - judgeMessageHistoricity = \msg hswt -> getCurrentGsmState >>= \case + forall m blk. + ( Monad m + , HasHeader blk + , HasAnnTip blk + ) => + SystemTime m -> + -- | Get the current 'GsmState'. + -- + -- This is used to disable the historicity check when we are caught up. The + -- rationale is extra resilience against disconnects between honest nodes + -- in disaster scenarios with very low chain density. + m GsmState -> + HistoricityCutoff -> + HistoricityCheck m blk +mkCheck systemTime getCurrentGsmState cshc = + HistoricityCheck + { judgeMessageHistoricity = \msg hswt -> + getCurrentGsmState >>= \case PreSyncing -> judgeRollback msg hswt - Syncing -> judgeRollback msg hswt - CaughtUp -> pure $ Right () + Syncing -> judgeRollback msg hswt + CaughtUp -> pure $ Right () } - where - HistoricityCutoff historicityCutoff = cshc + where + HistoricityCutoff historicityCutoff = cshc - judgeRollback :: - HistoricalChainSyncMessage - -> HeaderStateWithTime blk - -> m (Either HistoricityException ()) - judgeRollback msg (HeaderStateWithTime headerState slotTime) = do - arrivalTime <- systemTimeCurrent systemTime - let actualRollbackAge = arrivalTime `diffRelTime` slotTime - pure $ when (historicityCutoff < actualRollbackAge) $ - throwError HistoricityException { - historicalMessage = msg - , historicalPoint = headerStatePoint headerState + judgeRollback :: + HistoricalChainSyncMessage -> + HeaderStateWithTime blk -> + m (Either HistoricityException ()) + judgeRollback msg (HeaderStateWithTime headerState slotTime) = do + arrivalTime <- systemTimeCurrent systemTime + let actualRollbackAge = arrivalTime `diffRelTime` slotTime + pure $ + when (historicityCutoff < actualRollbackAge) $ + throwError + HistoricityException + { 
historicalMessage = msg + , historicalPoint = headerStatePoint headerState , slotTime , arrivalTime , historicityCutoff = cshc diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/InFutureCheck.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/InFutureCheck.hs index 99111bed4a..f6349e0105 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/InFutureCheck.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/InFutureCheck.hs @@ -5,50 +5,70 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck ( - -- * Interface +module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck + ( -- * Interface HeaderInFutureCheck (..) , SomeHeaderInFutureCheck (..) + -- * Real Implementation , HeaderArrivalException (..) 
, realHeaderInFutureCheck + -- * Clock skew , clockSkewInSeconds , defaultClockSkew + -- ** not exporting the constructor , ClockSkew , unClockSkew ) where -import Control.Exception (Exception) -import Control.Monad (unless, when) -import Control.Monad.Class.MonadTimer.SI (MonadDelay, threadDelay) -import Control.Monad.Except (Except, liftEither, throwError) -import Data.Proxy (Proxy (Proxy)) -import Data.Time.Clock (NominalDiffTime) -import Data.Type.Equality ((:~:) (Refl)) -import Data.Typeable (eqT) -import Ouroboros.Consensus.Block.Abstract (Header) -import Ouroboros.Consensus.Block.RealPoint (RealPoint, - headerRealPoint, realPointSlot) -import Ouroboros.Consensus.BlockchainTime.WallClock.Types - (RelativeTime, SystemTime, diffRelTime, systemTimeCurrent) -import Ouroboros.Consensus.HardFork.Abstract (HasHardForkHistory, - hardForkSummary) -import Ouroboros.Consensus.HardFork.History (PastHorizonException) -import Ouroboros.Consensus.HardFork.History.Qry (runQuery, - slotToWallclock) -import Ouroboros.Consensus.Ledger.Basics (EmptyMK, LedgerConfig, - LedgerState) -import Ouroboros.Consensus.Util.Time (nominalDelay, - secondsToNominalDiffTime) -import Ouroboros.Network.Block (HasHeader) +import Control.Exception (Exception) +import Control.Monad (unless, when) +import Control.Monad.Class.MonadTimer.SI (MonadDelay, threadDelay) +import Control.Monad.Except (Except, liftEither, throwError) +import Data.Proxy (Proxy (Proxy)) +import Data.Time.Clock (NominalDiffTime) +import Data.Type.Equality ((:~:) (Refl)) +import Data.Typeable (eqT) +import Ouroboros.Consensus.Block.Abstract (Header) +import Ouroboros.Consensus.Block.RealPoint + ( RealPoint + , headerRealPoint + , realPointSlot + ) +import Ouroboros.Consensus.BlockchainTime.WallClock.Types + ( RelativeTime + , SystemTime + , diffRelTime + , systemTimeCurrent + ) +import Ouroboros.Consensus.HardFork.Abstract + ( HasHardForkHistory + , hardForkSummary + ) +import Ouroboros.Consensus.HardFork.History 
(PastHorizonException) +import Ouroboros.Consensus.HardFork.History.Qry + ( runQuery + , slotToWallclock + ) +import Ouroboros.Consensus.Ledger.Basics + ( EmptyMK + , LedgerConfig + , LedgerState + ) +import Ouroboros.Consensus.Util.Time + ( nominalDelay + , secondsToNominalDiffTime + ) +import Ouroboros.Network.Block (HasHeader) {------------------------------------------------------------------------------- Interface -------------------------------------------------------------------------------} -data SomeHeaderInFutureCheck m blk = forall arrival judgment. +data SomeHeaderInFutureCheck m blk + = forall arrival judgment. SomeHeaderInFutureCheck (HeaderInFutureCheck m blk arrival judgment) -- | The interface a ChainSync client needs in order to check the arrival time @@ -56,53 +76,47 @@ data SomeHeaderInFutureCheck m blk = forall arrival judgment. -- -- Instead of alphabetical, the fields are in the order in which the ChainSync -- client logic will invoke them for each header. -data HeaderInFutureCheck m blk arrival judgment = HeaderInFutureCheck { - proxyArrival :: Proxy arrival - , - -- | This is ideally called _immediately_ upon the header arriving. - recordHeaderArrival :: Header blk -> m arrival - , - -- | Judge what to do about the header's arrival time. - -- - -- Note that this may be called after a delay, hence @arrival@ contains at - -- least the arrival time. - -- - -- In particular, such a delay might be caused by waiting for the - -- intersection with the local selection to change after this function - -- returns 'Ouroboros.Consensus.HardFork.HistoryPastHorizon'. - judgeHeaderArrival :: - LedgerConfig blk - -> LedgerState blk EmptyMK - -> arrival - -> Except PastHorizonException judgment - , - -- | Enact the judgment. - -- - -- On success, return the slot time of the header; otherwise, an exception - -- should be raised. 
- handleHeaderArrival :: - judgment - -> m (Except HeaderArrivalException RelativeTime) +data HeaderInFutureCheck m blk arrival judgment = HeaderInFutureCheck + { proxyArrival :: Proxy arrival + , recordHeaderArrival :: Header blk -> m arrival + -- ^ This is ideally called _immediately_ upon the header arriving. + , judgeHeaderArrival :: + LedgerConfig blk -> + LedgerState blk EmptyMK -> + arrival -> + Except PastHorizonException judgment + -- ^ Judge what to do about the header's arrival time. + -- + -- Note that this may be called after a delay, hence @arrival@ contains at + -- least the arrival time. + -- + -- In particular, such a delay might be caused by waiting for the + -- intersection with the local selection to change after this function + -- returns 'Ouroboros.Consensus.HardFork.HistoryPastHorizon'. + , handleHeaderArrival :: + judgment -> + m (Except HeaderArrivalException RelativeTime) + -- ^ Enact the judgment. + -- + -- On success, return the slot time of the header; otherwise, an exception + -- should be raised. } {------------------------------------------------------------------------------- Real implmementation -------------------------------------------------------------------------------} -data HeaderArrivalException = - -- | The header arrived so early that its issuer either minted it before - -- their clock reached its slot onset or else the difference between their - -- clock and ours is more severe than we're configured to tolerate. - -- - -- INVARIANT: @'tolerableClockSkew' < negate 'ageUponArrival'@ - forall blk. 
HasHeader blk => FarFutureHeaderException { - ageUponArrival :: !NominalDiffTime - , - arrivedPoint :: !(RealPoint blk) - , - arrivalTime :: !RelativeTime - , - tolerableClockSkew :: !NominalDiffTime +data HeaderArrivalException + = -- | The header arrived so early that its issuer either minted it before + -- their clock reached its slot onset or else the difference between their + -- clock and ours is more severe than we're configured to tolerate. + -- + -- INVARIANT: @'tolerableClockSkew' < negate 'ageUponArrival'@ + forall blk. HasHeader blk => FarFutureHeaderException + { ageUponArrival :: !NominalDiffTime + , arrivedPoint :: !(RealPoint blk) + , arrivalTime :: !RelativeTime + , tolerableClockSkew :: !NominalDiffTime } deriving instance Show HeaderArrivalException @@ -112,9 +126,9 @@ instance Exception HeaderArrivalException instance Eq HeaderArrivalException where (==) (FarFutureHeaderException l0 (l1 :: RealPoint l) l2 l3) - (FarFutureHeaderException r0 (r1 :: RealPoint r) r2 r3) - = case eqT @l @r of - Nothing -> False + (FarFutureHeaderException r0 (r1 :: RealPoint r) r2 r3) = + case eqT @l @r of + Nothing -> False Just Refl -> (l0, l1, l2, l3) == (r0, r1, r2, r3) realHeaderInFutureCheck :: @@ -122,43 +136,44 @@ realHeaderInFutureCheck :: , HasHeader (Header blk) , HasHardForkHistory blk , MonadDelay m - ) - => ClockSkew -> SystemTime m -> SomeHeaderInFutureCheck m blk + ) => + ClockSkew -> SystemTime m -> SomeHeaderInFutureCheck m blk realHeaderInFutureCheck skew systemTime = - SomeHeaderInFutureCheck - $ HeaderInFutureCheck { - proxyArrival = Proxy - , recordHeaderArrival = \hdr -> do - (,) (headerRealPoint hdr) <$> systemTimeCurrent systemTime - , judgeHeaderArrival = \lcfg lst (p, arrivalTime_) -> do - let qry = slotToWallclock (realPointSlot p) - hfSummary = hardForkSummary lcfg lst - -- TODO cache this in the KnownIntersectionState? Or even in the - -- LedgerDB? 
- (onset, _slotLength) <- liftEither $ runQuery qry hfSummary - pure (p, arrivalTime_, onset) - , handleHeaderArrival = \(p, arrivalTime_, onset) -> do - let ageUponArrival_ = arrivalTime_ `diffRelTime` onset - tooEarly = unClockSkew skew < negate ageUponArrival_ - -- TODO leap seconds? - - -- this delay is the simple part of Ouroboros Chronos - unless tooEarly $ do + SomeHeaderInFutureCheck $ + HeaderInFutureCheck + { proxyArrival = Proxy + , recordHeaderArrival = \hdr -> do + (,) (headerRealPoint hdr) <$> systemTimeCurrent systemTime + , judgeHeaderArrival = \lcfg lst (p, arrivalTime_) -> do + let qry = slotToWallclock (realPointSlot p) + hfSummary = hardForkSummary lcfg lst + -- TODO cache this in the KnownIntersectionState? Or even in the + -- LedgerDB? + (onset, _slotLength) <- liftEither $ runQuery qry hfSummary + pure (p, arrivalTime_, onset) + , handleHeaderArrival = \(p, arrivalTime_, onset) -> do + let ageUponArrival_ = arrivalTime_ `diffRelTime` onset + tooEarly = unClockSkew skew < negate ageUponArrival_ + -- TODO leap seconds? + + -- this delay is the simple part of Ouroboros Chronos + unless tooEarly $ do now <- systemTimeCurrent systemTime - let ageNow = now `diffRelTime` onset + let ageNow = now `diffRelTime` onset syntheticDelay = negate ageNow - threadDelay $ nominalDelay syntheticDelay -- TODO leap seconds? - - pure $ do - when tooEarly $ throwError FarFutureHeaderException { - ageUponArrival = ageUponArrival_ - , arrivedPoint = p - , arrivalTime = arrivalTime_ - , tolerableClockSkew = unClockSkew skew - } - -- no exception if within skew - pure onset - } + threadDelay $ nominalDelay syntheticDelay -- TODO leap seconds? 
+ pure $ do + when tooEarly $ + throwError + FarFutureHeaderException + { ageUponArrival = ageUponArrival_ + , arrivedPoint = p + , arrivalTime = arrivalTime_ + , tolerableClockSkew = unClockSkew skew + } + -- no exception if within skew + pure onset + } {------------------------------------------------------------------------------- Clock skew @@ -182,7 +197,7 @@ realHeaderInFutureCheck skew systemTime = -- invalid (and disconnect from A, since A is sending it invalid blocks). -- -- Use 'defaultClockSkew' when unsure. -newtype ClockSkew = ClockSkew { unClockSkew :: NominalDiffTime } +newtype ClockSkew = ClockSkew {unClockSkew :: NominalDiffTime} deriving (Show, Eq, Ord) -- | Default maximum permissible clock skew diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/Jumping.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/Jumping.hs index fa4aa34748..3d8784e9ca 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/Jumping.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/Jumping.hs @@ -175,9 +175,8 @@ -- The BlockFetch logic can ask to change the dynamo if it is not serving -- blocks fast enough. If there are other non-disengaged peers, all peers are -- demoted to happy jumpers (l+g+h) and a new dynamo is elected (e). --- -module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping ( - Context +module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping + ( Context , ContextWith (..) , Instruction (..) , JumpInstruction (..) 
@@ -195,81 +194,87 @@ module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.Jumping ( , unregisterClient ) where -import Cardano.Slotting.Slot (SlotNo (..), WithOrigin (..)) -import Control.Monad (forM, forM_, when) -import Control.Tracer (Tracer, traceWith) -import Data.Foldable (toList, traverse_) -import Data.List (sortOn) -import qualified Data.Map as Map -import Data.Maybe (catMaybes, fromMaybe) -import Data.Maybe.Strict (StrictMaybe (..)) -import Data.Sequence.Strict (StrictSeq) -import qualified Data.Sequence.Strict as Seq -import qualified Data.Strict.Either as Strict -import Data.Typeable (Typeable) -import Data.Void (absurd) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block (HasHeader (getHeaderFields), Header, - Point (..), castPoint, pointSlot, succWithOrigin) -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State - (ChainSyncClientHandle (..), - ChainSyncClientHandleCollection (..), - ChainSyncJumpingJumperState (..), - ChainSyncJumpingState (..), ChainSyncState (..), - DisengagedInitState (..), DynamoInitState (..), - JumpInfo (..), JumperInitState (..), - ObjectorInitState (..)) -import Ouroboros.Consensus.Node.GsmState (GsmState) -import qualified Ouroboros.Consensus.Node.GsmState as GSM -import Ouroboros.Consensus.Util -import Ouroboros.Consensus.Util.IOLike hiding (handle) -import qualified Ouroboros.Network.AnchoredFragment as AF +import Cardano.Slotting.Slot (SlotNo (..), WithOrigin (..)) +import Control.Monad (forM, forM_, when) +import Control.Tracer (Tracer, traceWith) +import Data.Foldable (toList, traverse_) +import Data.List (sortOn) +import Data.Map qualified as Map +import Data.Maybe (catMaybes, fromMaybe) +import Data.Maybe.Strict (StrictMaybe (..)) +import Data.Sequence.Strict (StrictSeq) +import Data.Sequence.Strict qualified as Seq +import Data.Strict.Either qualified as Strict +import Data.Typeable (Typeable) +import Data.Void 
(absurd) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block + ( HasHeader (getHeaderFields) + , Header + , Point (..) + , castPoint + , pointSlot + , succWithOrigin + ) +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State + ( ChainSyncClientHandle (..) + , ChainSyncClientHandleCollection (..) + , ChainSyncJumpingJumperState (..) + , ChainSyncJumpingState (..) + , ChainSyncState (..) + , DisengagedInitState (..) + , DynamoInitState (..) + , JumpInfo (..) + , JumperInitState (..) + , ObjectorInitState (..) + ) +import Ouroboros.Consensus.Node.GsmState (GsmState) +import Ouroboros.Consensus.Node.GsmState qualified as GSM +import Ouroboros.Consensus.Util +import Ouroboros.Consensus.Util.IOLike hiding (handle) +import Ouroboros.Network.AnchoredFragment qualified as AF -- | Hooks for ChainSync jumping. data Jumping m blk = Jumping - { -- | Get the next instruction to execute, which can be either to run normal - -- ChainSync, to jump to a given point, or to restart ChainSync. When the - -- peer is a jumper and there is no jump request, 'jgNextInstruction' blocks - -- until a jump request is made. - jgNextInstruction :: !(m (Instruction blk)), - - -- | To be called whenever the peer claims to have no more headers. - jgOnAwaitReply :: !(m ()), - - -- | To be called whenever a header is received from the peer - -- before it is validated. - jgOnRollForward :: !(Point (Header blk) -> m ()), - - -- | To be called whenever a peer rolls back. - jgOnRollBackward :: !(WithOrigin SlotNo -> m ()), - - -- | Process the result of a jump, either accepted or rejected. - -- - -- The jump result is used to decide on the next jumps or whether to elect - -- an objector. - jgProcessJumpResult :: !(JumpResult blk -> m ()), - - -- | To be called to update the last known jump possible to the tip of - -- the peers candidate fragment. 
The ChainSync clients for all peers should - -- call this function in case they are or they become dynamos. - -- - -- JumpInfo is meant to be a snapshot of the @KnownIntersectionState@ of - -- the ChainSync client. See 'JumpInfo' for more details. - jgUpdateJumpInfo :: !(JumpInfo blk -> STM m ()) + { jgNextInstruction :: !(m (Instruction blk)) + -- ^ Get the next instruction to execute, which can be either to run normal + -- ChainSync, to jump to a given point, or to restart ChainSync. When the + -- peer is a jumper and there is no jump request, 'jgNextInstruction' blocks + -- until a jump request is made. + , jgOnAwaitReply :: !(m ()) + -- ^ To be called whenever the peer claims to have no more headers. + , jgOnRollForward :: !(Point (Header blk) -> m ()) + -- ^ To be called whenever a header is received from the peer + -- before it is validated. + , jgOnRollBackward :: !(WithOrigin SlotNo -> m ()) + -- ^ To be called whenever a peer rolls back. + , jgProcessJumpResult :: !(JumpResult blk -> m ()) + -- ^ Process the result of a jump, either accepted or rejected. + -- + -- The jump result is used to decide on the next jumps or whether to elect + -- an objector. + , jgUpdateJumpInfo :: !(JumpInfo blk -> STM m ()) + -- ^ To be called to update the last known jump possible to the tip of + -- the peers candidate fragment. The ChainSync clients for all peers should + -- call this function in case they are or they become dynamos. + -- + -- JumpInfo is meant to be a snapshot of the @KnownIntersectionState@ of + -- the ChainSync client. See 'JumpInfo' for more details. 
} - deriving stock (Generic) + deriving stock Generic deriving anyclass instance - ( IOLike m, - HasHeader blk, - NoThunks (Header blk) + ( IOLike m + , HasHeader blk + , NoThunks (Header blk) ) => NoThunks (Jumping m blk) -- | No-op implementation of CSJ -noJumping :: (MonadSTM m) => Jumping m blk +noJumping :: MonadSTM m => Jumping m blk noJumping = Jumping { jgNextInstruction = pure RunNormally @@ -282,30 +287,31 @@ noJumping = -- | Create the callbacks for a given peer. mkJumping :: - ( MonadSTM m, - Eq peer, - LedgerSupportsProtocol blk + ( MonadSTM m + , Eq peer + , LedgerSupportsProtocol blk ) => PeerContext m peer blk -> Jumping m blk -mkJumping peerContext = Jumping - { jgNextInstruction = +mkJumping peerContext = + Jumping + { jgNextInstruction = atomically (nextInstruction (pure ()) peerContext) >>= \case - Strict.Right instr -> pure instr - Strict.Left () -> do - traceWith (tracer peerContext) BlockedOnJump - id - $ fmap (Strict.either absurd id) - $ atomically - $ nextInstruction retry peerContext - , jgOnAwaitReply = f $ onAwaitReply peerContext - , jgOnRollForward = f . onRollForward peerContext - , jgOnRollBackward = f . onRollBackward peerContext - , jgProcessJumpResult = f . processJumpResult peerContext - , jgUpdateJumpInfo = updateJumpInfo peerContext - } - where - f m = atomically m >>= traverse_ (traceWith (tracer peerContext)) + Strict.Right instr -> pure instr + Strict.Left () -> do + traceWith (tracer peerContext) BlockedOnJump + id $ + fmap (Strict.either absurd id) $ + atomically $ + nextInstruction retry peerContext + , jgOnAwaitReply = f $ onAwaitReply peerContext + , jgOnRollForward = f . onRollForward peerContext + , jgOnRollBackward = f . onRollBackward peerContext + , jgProcessJumpResult = f . 
processJumpResult peerContext + , jgUpdateJumpInfo = updateJumpInfo peerContext + } + where + f m = atomically m >>= traverse_ (traceWith (tracer peerContext)) -- | A context for ChainSync jumping -- @@ -318,11 +324,11 @@ mkJumping peerContext = Jumping -- at least as old as the oldest intersection of the `FoundIntersection` jumpers -- with the dynamo. data ContextWith peerField handleField m peer blk = Context - { peer :: !peerField, - handle :: !handleField, - handlesCol :: !(ChainSyncClientHandleCollection peer m blk), - jumpSize :: !SlotNo, - tracer :: Tracer m (TraceEventCsj peer blk) + { peer :: !peerField + , handle :: !handleField + , handlesCol :: !(ChainSyncClientHandleCollection peer m blk) + , jumpSize :: !SlotNo + , tracer :: Tracer m (TraceEventCsj peer blk) } -- | A non-specific, generic context for ChainSync jumping. @@ -338,8 +344,8 @@ makeContext :: MonadSTM m => ChainSyncClientHandleCollection peer m blk -> SlotNo -> + -- | The size of jumps, in number of slots. Tracer m (TraceEventCsj peer blk) -> - -- ^ The size of jumps, in number of slots. STM m (Context m peer blk) makeContext h jumpSize tracer = do pure $ Context () () h jumpSize tracer @@ -347,27 +353,29 @@ makeContext h jumpSize tracer = do -- | Get a generic context from a peer context by stripping away the -- peer-specific fields. stripContext :: PeerContext m peer blk -> Context m peer blk -stripContext context = context {peer = (), handle = ()} +stripContext context = context{peer = (), handle = ()} -- | Instruction from the jumping governor, either to run normal ChainSync, or -- to jump to follow a dynamo with the given fragment, or to restart ChainSync. data Instruction blk = RunNormally - -- | The restart instruction restarts the ChainSync protocol. This is + | -- | The restart instruction restarts the ChainSync protocol. This is -- necessary when disengaging a peer of which we know no point that we -- could set the intersection of the ChainSync server to. 
- | Restart + Restart | -- | Jump to the tip of the given fragment. JumpInstruction !(JumpInstruction blk) - deriving (Generic) + deriving Generic deriving instance (Typeable blk, HasHeader (Header blk), Eq (Header blk)) => Eq (Instruction blk) -deriving instance (Typeable blk, HasHeader (Header blk), Show (Header blk)) => Show (Instruction blk) +deriving instance + (Typeable blk, HasHeader (Header blk), Show (Header blk)) => Show (Instruction blk) deriving anyclass instance - ( HasHeader blk, - LedgerSupportsProtocol blk, - NoThunks (Header blk) - ) => NoThunks (Instruction blk) + ( HasHeader blk + , LedgerSupportsProtocol blk + , NoThunks (Header blk) + ) => + NoThunks (Instruction blk) data JumpInstruction blk = JumpTo !(JumpInfo blk) @@ -375,9 +383,10 @@ data JumpInstruction blk -- objectors and dynamos. Otherwise, the ChainSync server wouldn't know -- which headers to start serving. JumpToGoodPoint !(JumpInfo blk) - deriving (Generic) + deriving Generic -deriving instance (Typeable blk, HasHeader (Header blk), Eq (Header blk)) => Eq (JumpInstruction blk) +deriving instance + (Typeable blk, HasHeader (Header blk), Eq (Header blk)) => Eq (JumpInstruction blk) instance (Typeable blk, HasHeader (Header blk), Show (Header blk)) => Show (JumpInstruction blk) where showsPrec p = \case JumpTo jumpInfo -> @@ -386,25 +395,27 @@ instance (Typeable blk, HasHeader (Header blk), Show (Header blk)) => Show (Jump showParen (p > 10) $ showString "JumpToGoodPoint " . shows (AF.headPoint $ jTheirFragment jumpInfo) deriving anyclass instance - ( HasHeader blk, - LedgerSupportsProtocol blk, - NoThunks (Header blk) - ) => NoThunks (JumpInstruction blk) + ( HasHeader blk + , LedgerSupportsProtocol blk + , NoThunks (Header blk) + ) => + NoThunks (JumpInstruction blk) -- | The result of a jump request, either accepted or rejected. 
data JumpResult blk = AcceptedJump !(JumpInstruction blk) | RejectedJump !(JumpInstruction blk) - deriving (Generic) + deriving Generic deriving instance (Typeable blk, HasHeader (Header blk), Eq (Header blk)) => Eq (JumpResult blk) deriving instance (Typeable blk, HasHeader (Header blk), Show (Header blk)) => Show (JumpResult blk) deriving anyclass instance - ( HasHeader blk, - LedgerSupportsProtocol blk, - NoThunks (Header blk) - ) => NoThunks (JumpResult blk) + ( HasHeader blk + , LedgerSupportsProtocol blk + , NoThunks (Header blk) + ) => + NoThunks (JumpResult blk) -- | Compute the next instruction for the given peer. In the majority of cases, -- this consists in reading the peer's handle, having the dynamo and objector @@ -413,11 +424,11 @@ deriving anyclass instance -- while, we need to indicate to the jumpers that they need to jump, and this -- requires writing to a TVar for every jumper. nextInstruction :: - ( MonadSTM m ) => + MonadSTM m => STM m retry -> PeerContext m peer blk -> STM m (Strict.Either retry (Instruction blk)) -nextInstruction retry_ context = +nextInstruction retry_ context = readTVar (cschJumping (handle context)) >>= \case Disengaged DisengagedDone -> pur RunNormally Disengaged Disengaging -> do @@ -442,11 +453,12 @@ nextInstruction retry_ context = case jumperState of Happy FreshJumper mGoodJumpInfo -> writeTVar (cschJumping (handle context)) $ - Jumper nextJumpVar $ Happy StartedJumper mGoodJumpInfo + Jumper nextJumpVar $ + Happy StartedJumper mGoodJumpInfo _ -> pure () pur $ JumpInstruction $ JumpTo jumpInfo - where - pur = pure . Strict.Right + where + pur = pure . Strict.Right -- | This function is called when we receive a 'MsgRollForward' message before -- validating it. @@ -458,10 +470,10 @@ nextInstruction retry_ context = -- -- We also check that the Objector disagrees with the header sent at its -- rejected jump. If it agrees to it, we disengage it. --- -onRollForward :: forall m peer blk. 
- ( MonadSTM m, - LedgerSupportsProtocol blk +onRollForward :: + forall m peer blk. + ( MonadSTM m + , LedgerSupportsProtocol blk ) => PeerContext m peer blk -> Point (Header blk) -> @@ -481,33 +493,35 @@ onRollForward context point = mJumpInfo <- readTVar (cschJumpInfo (handle context)) setJumps mJumpInfo | otherwise -> pure Nothing - where - setJumps Nothing = error "onRollForward: Dynamo without jump info" - setJumps (Just jumpInfo) = do - writeTVar (cschJumping (handle context)) $ - Dynamo DynamoStarted $ AF.headSlot $ jTheirFragment jumpInfo - handles <- cschcSeq (handlesCol context) - forM_ handles $ \(_, h) -> - readTVar (cschJumping h) >>= \case - Jumper nextJumpVar Happy{} -> writeTVar nextJumpVar (Just jumpInfo) - _ -> pure () - pure - $ Just - $ SentJumpInstruction - $ castPoint - $ AF.headPoint - $ jTheirFragment jumpInfo + where + setJumps Nothing = error "onRollForward: Dynamo without jump info" + setJumps (Just jumpInfo) = do + writeTVar (cschJumping (handle context)) $ + Dynamo DynamoStarted $ + AF.headSlot $ + jTheirFragment jumpInfo + handles <- cschcSeq (handlesCol context) + forM_ handles $ \(_, h) -> + readTVar (cschJumping h) >>= \case + Jumper nextJumpVar Happy{} -> writeTVar nextJumpVar (Just jumpInfo) + _ -> pure () + pure $ + Just $ + SentJumpInstruction $ + castPoint $ + AF.headPoint $ + jTheirFragment jumpInfo -- | This function is called when we receive a 'MsgRollBackward' message. -- -- Here we check if the peer is trying to roll back to a point before the last -- jump. If so, we disengage the peer. This prevents adversaries from sending -- as objectors the same chain as the dynamo. --- -onRollBackward :: forall m peer blk. - ( MonadSTM m, - Eq peer, - LedgerSupportsProtocol blk +onRollBackward :: + forall m peer blk. 
+ ( MonadSTM m + , Eq peer + , LedgerSupportsProtocol blk ) => PeerContext m peer blk -> WithOrigin SlotNo -> @@ -532,9 +546,9 @@ onRollBackward context slot = -- If this is the dynamo, we need to elect a new dynamo as no more headers -- are available. onAwaitReply :: - ( MonadSTM m, - Eq peer, - LedgerSupportsProtocol blk + ( MonadSTM m + , Eq peer + , LedgerSupportsProtocol blk ) => PeerContext m peer blk -> STM m (Maybe (TraceEventCsj peer blk)) @@ -559,10 +573,11 @@ onAwaitReply context = -- enter a phase of several jumps to pinpoint exactly where the disagreement -- occurs. Once this phase is finished, we trigger the election of a new -- objector, which might update many TVars. -processJumpResult :: forall m peer blk. - ( MonadSTM m, - Eq peer, - LedgerSupportsProtocol blk +processJumpResult :: + forall m peer blk. + ( MonadSTM m + , Eq peer + , LedgerSupportsProtocol blk ) => PeerContext m peer blk -> JumpResult blk -> @@ -580,7 +595,6 @@ processJumpResult context jumpResult = -- Not interesting in the dynamo state AcceptedJump JumpTo{} -> pure Nothing RejectedJump JumpTo{} -> pure Nothing - Disengaged{} -> pure Nothing Objector{} -> case jumpResult of @@ -595,93 +609,94 @@ processJumpResult context jumpResult = -- Not interesting in the objector state AcceptedJump JumpTo{} -> pure Nothing RejectedJump JumpTo{} -> pure Nothing - Jumper nextJumpVar jumperState -> - case jumpResult of - AcceptedJump (JumpTo goodJumpInfo) -> do - -- The jump was accepted; we set the jumper's candidate fragment to - -- the dynamo's candidate fragment up to the accepted point. - -- - -- The candidate fragments of jumpers don't grow otherwise, as only the - -- objector and the dynamo request further headers. 
- updateChainSyncState (handle context) goodJumpInfo - writeTVar (cschJumpInfo (handle context)) $ Just goodJumpInfo - case jumperState of - LookingForIntersection _goodJumpInfo badJumpInfo -> - -- @AF.headPoint fragment@ is in @badFragment@, as the jumper - -- looking for an intersection is the only client asking for its - -- jumps. - lookForIntersection nextJumpVar goodJumpInfo badJumpInfo - Happy StartedJumper _mGoodJumpInfo -> do - writeTVar (cschJumping (handle context)) $ - Jumper nextJumpVar $ Happy StartedJumper $ Just goodJumpInfo - pure Nothing - Happy FreshJumper _mGoodJumpInfo -> - pure Nothing - FoundIntersection{} -> - -- Only happy jumpers are asked to jump by the dynamo, and only - -- jumpers looking for an intersection are asked to jump by - -- themselves. - error "processJumpResult: Jumpers in state FoundIntersection shouldn't be further jumping." - - RejectedJump (JumpTo badJumpInfo) -> - -- The tip of @goodFragment@ is in @jTheirFragment jumpInfo@ or is - -- an ancestor of it. If the jump was requested by the dynamo, this - -- holds because the dynamo is not allowed to rollback before the - -- jumps that it requests. - -- - -- If the jump was requested by the jumper, this holds because the - -- jumper is looking for an intersection, and such jumper only asks - -- for jumps that meet this condition. - case jumperState of - LookingForIntersection goodJumpInfo _ -> - lookForIntersection nextJumpVar goodJumpInfo badJumpInfo - Happy StartedJumper mGoodJumpInfo -> - lookForIntersection nextJumpVar (mkGoodJumpInfo mGoodJumpInfo badJumpInfo) badJumpInfo - Happy FreshJumper _ -> - pure Nothing - FoundIntersection{} -> - error "processJumpResult (rejected): Jumpers in state FoundIntersection shouldn't be further jumping." - - -- These aren't interesting in the case of jumpers. 
- AcceptedJump JumpToGoodPoint{} -> pure Nothing - RejectedJump JumpToGoodPoint{} -> pure Nothing - where - -- Avoid redundant constraint "HasHeader blk" reported by some ghc's - _ = getHeaderFields @blk + case jumpResult of + AcceptedJump (JumpTo goodJumpInfo) -> do + -- The jump was accepted; we set the jumper's candidate fragment to + -- the dynamo's candidate fragment up to the accepted point. + -- + -- The candidate fragments of jumpers don't grow otherwise, as only the + -- objector and the dynamo request further headers. + updateChainSyncState (handle context) goodJumpInfo + writeTVar (cschJumpInfo (handle context)) $ Just goodJumpInfo + case jumperState of + LookingForIntersection _goodJumpInfo badJumpInfo -> + -- @AF.headPoint fragment@ is in @badFragment@, as the jumper + -- looking for an intersection is the only client asking for its + -- jumps. + lookForIntersection nextJumpVar goodJumpInfo badJumpInfo + Happy StartedJumper _mGoodJumpInfo -> do + writeTVar (cschJumping (handle context)) $ + Jumper nextJumpVar $ + Happy StartedJumper $ + Just goodJumpInfo + pure Nothing + Happy FreshJumper _mGoodJumpInfo -> + pure Nothing + FoundIntersection{} -> + -- Only happy jumpers are asked to jump by the dynamo, and only + -- jumpers looking for an intersection are asked to jump by + -- themselves. + error "processJumpResult: Jumpers in state FoundIntersection shouldn't be further jumping." + RejectedJump (JumpTo badJumpInfo) -> + -- The tip of @goodFragment@ is in @jTheirFragment jumpInfo@ or is + -- an ancestor of it. If the jump was requested by the dynamo, this + -- holds because the dynamo is not allowed to rollback before the + -- jumps that it requests. + -- + -- If the jump was requested by the jumper, this holds because the + -- jumper is looking for an intersection, and such jumper only asks + -- for jumps that meet this condition. 
+ case jumperState of + LookingForIntersection goodJumpInfo _ -> + lookForIntersection nextJumpVar goodJumpInfo badJumpInfo + Happy StartedJumper mGoodJumpInfo -> + lookForIntersection nextJumpVar (mkGoodJumpInfo mGoodJumpInfo badJumpInfo) badJumpInfo + Happy FreshJumper _ -> + pure Nothing + FoundIntersection{} -> + error + "processJumpResult (rejected): Jumpers in state FoundIntersection shouldn't be further jumping." + -- These aren't interesting in the case of jumpers. + AcceptedJump JumpToGoodPoint{} -> pure Nothing + RejectedJump JumpToGoodPoint{} -> pure Nothing + where + -- Avoid redundant constraint "HasHeader blk" reported by some ghc's + _ = getHeaderFields @blk - updateChainSyncState :: ChainSyncClientHandle m blk -> JumpInfo blk -> STM m () - updateChainSyncState handle jump = do - let fragment = jTheirFragment jump - modifyTVar (cschState handle) $ \csState -> - csState {csCandidate = fragment, csLatestSlot = SJust (AF.headSlot fragment) } - writeTVar (cschJumpInfo handle) $ Just jump + updateChainSyncState :: ChainSyncClientHandle m blk -> JumpInfo blk -> STM m () + updateChainSyncState handle jump = do + let fragment = jTheirFragment jump + modifyTVar (cschState handle) $ \csState -> + csState{csCandidate = fragment, csLatestSlot = SJust (AF.headSlot fragment)} + writeTVar (cschJumpInfo handle) $ Just jump - mkGoodJumpInfo :: Maybe (JumpInfo blk) -> JumpInfo blk -> JumpInfo blk - mkGoodJumpInfo mGoodJumpInfo badJumpInfo = do - let badFragment = jTheirFragment badJumpInfo - -- use the jump info of the rejected jump if the good jump info is - -- not available (i.e. 
there were no accepted jumps) - badFragmentStart = AF.takeOldest 0 badFragment - in fromMaybe (badJumpInfo {jTheirFragment = badFragmentStart}) mGoodJumpInfo + mkGoodJumpInfo :: Maybe (JumpInfo blk) -> JumpInfo blk -> JumpInfo blk + mkGoodJumpInfo mGoodJumpInfo badJumpInfo = do + let badFragment = jTheirFragment badJumpInfo + -- use the jump info of the rejected jump if the good jump info is + -- not available (i.e. there were no accepted jumps) + badFragmentStart = AF.takeOldest 0 badFragment + in fromMaybe (badJumpInfo{jTheirFragment = badFragmentStart}) mGoodJumpInfo - -- | Given a good point (where we know we agree with the dynamo) and a bad - -- fragment (where we know the tip disagrees with the dynamo), either decide - -- that we know the intersection for sure (if the bad point is the successor - -- of the good point) or program a jump somewhere in the middle to refine - -- those points. - -- - -- PRECONDITION: The good point is in the candidate fragment of - -- @badJumpInfo@ or it is an ancestor of it. - lookForIntersection nextJumpVar goodJumpInfo badJumpInfo = do - let badFragment = jTheirFragment badJumpInfo - -- If the good point is not in the bad fragment, the anchor of the bad - -- fragment should be a good point too. - searchFragment = - maybe badFragment snd $ - AF.splitAfterPoint badFragment (AF.headPoint $ jTheirFragment goodJumpInfo) - let len = AF.length searchFragment - if len <= 1 then do + -- \| Given a good point (where we know we agree with the dynamo) and a bad + -- fragment (where we know the tip disagrees with the dynamo), either decide + -- that we know the intersection for sure (if the bad point is the successor + -- of the good point) or program a jump somewhere in the middle to refine + -- those points. + -- + -- PRECONDITION: The good point is in the candidate fragment of + -- @badJumpInfo@ or it is an ancestor of it. 
+ lookForIntersection nextJumpVar goodJumpInfo badJumpInfo = do + let badFragment = jTheirFragment badJumpInfo + -- If the good point is not in the bad fragment, the anchor of the bad + -- fragment should be a good point too. + searchFragment = + maybe badFragment snd $ + AF.splitAfterPoint badFragment (AF.headPoint $ jTheirFragment goodJumpInfo) + let len = AF.length searchFragment + if len <= 1 + then do -- If the fragment only contains the bad tip, we know the -- intersection is the good point. -- Clear any subsequent jumps requested by the dynamo. @@ -689,39 +704,40 @@ processJumpResult context jumpResult = maybeElectNewObjector nextJumpVar goodJumpInfo (AF.castPoint $ AF.headPoint badFragment) else do let theirFragment = AF.dropNewest (len `div` 2) badFragment - writeTVar nextJumpVar $ Just - badJumpInfo { jTheirFragment = theirFragment } + writeTVar nextJumpVar $ + Just + badJumpInfo{jTheirFragment = theirFragment} writeTVar (cschJumping (handle context)) $ Jumper nextJumpVar (LookingForIntersection goodJumpInfo badJumpInfo) pure Nothing - maybeElectNewObjector :: - StrictTVar m (Maybe (JumpInfo blk)) - -> JumpInfo blk - -> Point (Header blk) - -> STM m (Maybe (TraceEventCsj peer blk)) - maybeElectNewObjector nextJumpVar goodJumpInfo badPoint = do - findObjector (stripContext context) >>= \case - Nothing -> do - -- There is no objector yet. Promote the jumper to objector. - writeTVar (cschJumping (handle context)) (Objector Starting goodJumpInfo badPoint) - pure $ Just $ BecomingObjector Nothing - Just (oPeerId, oInitState, oGoodJump, oBadPoint, oHandle) - | pointSlot oBadPoint <= pointSlot badPoint -> do - -- The objector's intersection is still old enough. Keep it. - writeTVar (cschJumping (handle context)) $ - Jumper nextJumpVar (FoundIntersection Starting goodJumpInfo badPoint) - pure Nothing - | otherwise -> do - -- Found an earlier intersection. Demote the old objector and - -- promote the jumper to objector. 
- newJumper Nothing (FoundIntersection oInitState oGoodJump oBadPoint) >>=
- writeTVar (cschJumping oHandle)
- writeTVar (cschJumping (handle context)) (Objector Starting goodJumpInfo badPoint)
- pure $ Just $ BecomingObjector (Just oPeerId)
+ maybeElectNewObjector ::
+ StrictTVar m (Maybe (JumpInfo blk)) ->
+ JumpInfo blk ->
+ Point (Header blk) ->
+ STM m (Maybe (TraceEventCsj peer blk))
+ maybeElectNewObjector nextJumpVar goodJumpInfo badPoint = do
+ findObjector (stripContext context) >>= \case
+ Nothing -> do
+ -- There is no objector yet. Promote the jumper to objector.
+ writeTVar (cschJumping (handle context)) (Objector Starting goodJumpInfo badPoint)
+ pure $ Just $ BecomingObjector Nothing
+ Just (oPeerId, oInitState, oGoodJump, oBadPoint, oHandle)
+ | pointSlot oBadPoint <= pointSlot badPoint -> do
+ -- The objector's intersection is still old enough. Keep it.
+ writeTVar (cschJumping (handle context)) $
+ Jumper nextJumpVar (FoundIntersection Starting goodJumpInfo badPoint)
+ pure Nothing
+ | otherwise -> do
+ -- Found an earlier intersection. Demote the old objector and
+ -- promote the jumper to objector.
+ newJumper Nothing (FoundIntersection oInitState oGoodJump oBadPoint)
+ >>= writeTVar (cschJumping oHandle)
+ writeTVar (cschJumping (handle context)) (Objector Starting goodJumpInfo badPoint)
+ pure $ Just $ BecomingObjector (Just oPeerId)

 updateJumpInfo ::
- (MonadSTM m) =>
+ MonadSTM m =>
 PeerContext m peer blk ->
 JumpInfo blk ->
 STM m ()
@@ -733,15 +749,15 @@ updateJumpInfo context jumpInfo =

-- | Find the dynamo in a TVar containing a map of handles. Returns the handle
-- of the dynamo, or 'Nothing' if there is none.
getDynamo ::
- (MonadSTM m) =>
+ MonadSTM m =>
 ChainSyncClientHandleCollection peer m blk ->
 STM m (Maybe (peer, ChainSyncClientHandle m blk))
 getDynamo handlesCol = do
 handles <- cschcSeq handlesCol
 findM (\(_, handle) -> isDynamo <$> readTVar (cschJumping handle)) handles
- where
- isDynamo Dynamo{} = True
- isDynamo _ = False
+ where
+ isDynamo Dynamo{} = True
+ isDynamo _ = False

-- | Disengage a peer, meaning that it will no longer be asked to jump or
-- act as dynamo or objector.
@@ -761,12 +777,11 @@ disengageWith initState handle = do
 writeTVar (cschJumping handle) (Disengaged initState)
 writeTVar (cschJumpInfo handle) Nothing

-
-- | Convenience function that, given an intersection point and a jumper state,
-- makes a fresh 'Jumper' constructor.
 newJumper ::
- ( MonadSTM m,
- LedgerSupportsProtocol blk
+ ( MonadSTM m
+ , LedgerSupportsProtocol blk
 ) =>
 Maybe (JumpInfo blk) ->
 ChainSyncJumpingJumperState blk ->
@@ -827,11 +842,11 @@ newJumper jumpInfo jumperState = do
-- not directly relevant to this question---recall that CSJ is merely an
-- optimization to avoid excess load on honest upstream peers.
 registerClient ::
- ( LedgerSupportsProtocol blk,
- IOLike m
+ ( LedgerSupportsProtocol blk
+ , IOLike m
 ) =>
+ -- | the GSM state as of when the node connected to the upstream peer
 GsmState ->
- -- ^ the GSM state as of when the node connected to the upstream peer
 Context m peer blk ->
 peer ->
 StrictTVar m (ChainSyncState blk) ->
@@ -840,21 +855,21 @@ registerClient ::
 STM m (PeerContext m peer blk, Maybe (TraceEventCsj peer blk))
 registerClient gsmState context peer csState mkHandle = do
 (csjState, mbEv) <- case gsmState of
- GSM.CaughtUp -> pure (Disengaged DisengagedDone, Nothing)
- -- This branch disables CSJ while the GSM is in the CaughtUp state.
+ GSM.CaughtUp -> pure (Disengaged DisengagedDone, Nothing)
+ -- This branch disables CSJ while the GSM is in the CaughtUp state.
GSM.PreSyncing -> engageClient context csState - GSM.Syncing -> engageClient context csState + GSM.Syncing -> engageClient context csState cschJumping <- newTVar csjState let handle = mkHandle cschJumping cschcAddHandle (handlesCol context) peer handle - pure (context {peer, handle}, mbEv) + pure (context{peer, handle}, mbEv) -- | A helper for 'registerClient' -- -- /NOT EXPORTED/ engageClient :: - ( LedgerSupportsProtocol blk, - IOLike m + ( LedgerSupportsProtocol blk + , IOLike m ) => Context m peer blk -> StrictTVar m (ChainSyncState blk) -> @@ -871,9 +886,9 @@ engageClient context csState = do -- | Unregister a client from a 'PeerContext'; this might trigger the election -- of a new dynamo or objector if the peer was one of these two. unregisterClient :: - ( MonadSTM m, - Ord peer, - LedgerSupportsProtocol blk + ( MonadSTM m + , Ord peer + , LedgerSupportsProtocol blk ) => PeerContext m peer blk -> STM m (Maybe (TraceEventCsj peer blk)) @@ -893,9 +908,9 @@ unregisterClient context = do -- It does nothing if there is no other engaged peer to elect or if the given -- peer is not the dynamo. rotateDynamo :: - ( Ord peer, - LedgerSupportsProtocol blk, - MonadSTM m + ( Ord peer + , LedgerSupportsProtocol blk + , MonadSTM m ) => Tracer m (TraceEventDbf peer) -> ChainSyncClientHandleCollection peer m blk -> @@ -921,13 +936,13 @@ rotateDynamo tracer handlesCol peer = do error "rotateDynamo: no engaged peer found" Just (newDynamoId, newDynHandle) | newDynamoId == peer -> - -- The old dynamo is the only engaged peer left. - pure Nothing + -- The old dynamo is the only engaged peer left. 
+ pure Nothing | otherwise -> do - newJumper Nothing (Happy FreshJumper Nothing) - >>= writeTVar (cschJumping oldDynHandle) - promoteToDynamo peerStates newDynamoId newDynHandle - pure $ Just $ RotatedDynamo peer newDynamoId + newJumper Nothing (Happy FreshJumper Nothing) + >>= writeTVar (cschJumping oldDynHandle) + promoteToDynamo peerStates newDynamoId newDynHandle + pure $ Just $ RotatedDynamo peer newDynamoId _ -> -- Do not re-elect a dynamo if the peer is not the dynamo. pure Nothing @@ -942,9 +957,9 @@ rotateDynamo tracer handlesCol peer = do -- prevents the possibility of their candidate chain being lost and having to -- eventually be re-downloaded, which CSJ ought to avoid. backfillDynamo :: - ( MonadSTM m, - Eq peer, - LedgerSupportsProtocol blk + ( MonadSTM m + , Eq peer + , LedgerSupportsProtocol blk ) => Context m peer blk -> STM m (TraceCsjReason -> TraceEventCsj peer blk, Maybe (peer, ChainSyncClientHandle m blk)) @@ -954,7 +969,7 @@ backfillDynamo context = do -- prefer a 'Started' 'Objector', if any exists findObjector context >>= \case Just (oId, Started, _oGoodJI, _oBad, oHandle) -> - pure $ Just $ (oId,oHandle) + pure $ Just $ (oId, oHandle) _ -> findNonDisengaged peerStates case mDynamo of @@ -965,9 +980,9 @@ backfillDynamo context = do -- | Promote the given peer to dynamo and demote all other peers to jumpers. promoteToDynamo :: - ( MonadSTM m, - Eq peer, - LedgerSupportsProtocol blk + ( MonadSTM m + , Eq peer + , LedgerSupportsProtocol blk ) => StrictSeq (peer, ChainSyncClientHandle m blk) -> peer -> @@ -975,26 +990,27 @@ promoteToDynamo :: STM m () promoteToDynamo peerStates dynId dynamo = do mJumpInfo <- readTVar (cschJumpInfo dynamo) - jumping' <- readTVar (cschJumping dynamo) >>= \case - -- An 'Objector' that already 'Started' need not be disrupted. - -- - -- Remark. Intuitively, a 'Starting' 'Objector' also need not be disrupted, - -- but disrupting it wouldn't waste any @MsgRollForward@s. 
More concretely,
- it's not obvious how to build a 'DynamoStarting' from a 'Starting'.
- Objector Started oGoodJI _oBad -> do
- -- This intersection point is necessarily behind the replaced Dynamo's
- -- latest jump instruction, but its relative age is bounded.
- let islot = AF.headSlot $ jTheirFragment oGoodJI
- pure $ Dynamo DynamoStarted islot
- -- Otherwise, the peer being promoted could be a Jumper or an Objector
- -- Starting, but never Dynamo nor Disengaged.
- _ -> do
- fragment <- csCandidate <$> readTVar (cschState dynamo)
- -- If there is no jump info, the dynamo must be just starting and
- -- there is no need to set the intersection of the ChainSync server.
- let dynamoInitState = maybe DynamoStarted DynamoStarting mJumpInfo
- slot = AF.headSlot fragment
- pure $ Dynamo dynamoInitState slot
+ jumping' <-
+ readTVar (cschJumping dynamo) >>= \case
+ -- An 'Objector' that already 'Started' need not be disrupted.
+ --
+ -- Remark. Intuitively, a 'Starting' 'Objector' also need not be disrupted,
+ -- but disrupting it wouldn't waste any @MsgRollForward@s. More concretely,
+ it's not obvious how to build a 'DynamoStarting' from a 'Starting'.
+ Objector Started oGoodJI _oBad -> do
+ -- This intersection point is necessarily behind the replaced Dynamo's
+ -- latest jump instruction, but its relative age is bounded.
+ let islot = AF.headSlot $ jTheirFragment oGoodJI
+ pure $ Dynamo DynamoStarted islot
+ -- Otherwise, the peer being promoted could be a Jumper or an Objector
+ -- Starting, but never Dynamo nor Disengaged.
+ _ -> do
+ fragment <- csCandidate <$> readTVar (cschState dynamo)
+ -- If there is no jump info, the dynamo must be just starting and
+ -- there is no need to set the intersection of the ChainSync server.
+ let dynamoInitState = maybe DynamoStarted DynamoStarting mJumpInfo
+ slot = AF.headSlot fragment
+ pure $ Dynamo dynamoInitState slot
 writeTVar (cschJumping dynamo) jumping'

 -- Demote all other peers to jumpers
@@ -1007,7 +1023,7 @@ promoteToDynamo peerStates dynId dynamo = do

-- | Find a non-disengaged peer in the given sequence
 findNonDisengaged ::
- (MonadSTM m) =>
+ MonadSTM m =>
 StrictSeq (peer, ChainSyncClientHandle m blk) ->
 STM m (Maybe (peer, ChainSyncClientHandle m blk))
 findNonDisengaged =
@@ -1015,41 +1031,44 @@ findNonDisengaged =
 isDisengaged :: ChainSyncJumpingState m blk -> Bool
 isDisengaged Disengaged{} = True
-isDisengaged _ = False
+isDisengaged _ = False

-- | Find the objector in a context, if there is one.
 findObjector ::
- (MonadSTM m) =>
+ MonadSTM m =>
 Context m peer blk ->
- STM m (Maybe (peer, ObjectorInitState, JumpInfo blk, Point (Header blk), ChainSyncClientHandle m blk))
+ STM
+ m
+ (Maybe (peer, ObjectorInitState, JumpInfo blk, Point (Header blk), ChainSyncClientHandle m blk))
 findObjector context =
 cschcSeq (handlesCol context) >>= go
- where
- go Seq.Empty = pure Nothing
- go ((peer, handle) Seq.:<| xs) =
- readTVar (cschJumping handle) >>= \case
- Objector initState goodJump badPoint ->
- pure $ Just (peer, initState, goodJump, badPoint, handle)
- _ -> go xs
+ where
+ go Seq.Empty = pure Nothing
+ go ((peer, handle) Seq.:<| xs) =
+ readTVar (cschJumping handle) >>= \case
+ Objector initState goodJump badPoint ->
+ pure $ Just (peer, initState, goodJump, badPoint, handle)
+ _ -> go xs

-- | Look into all dissenting jumpers and promote the one with the oldest
-- intersection with the dynamo as the new objector.
electNewObjector :: - (MonadSTM m) => + MonadSTM m => Context m peer blk -> STM m (TraceCsjReason -> TraceEventCsj peer blk) -electNewObjector context = NoLongerObjector <$> do - peerStates <- toList <$> cschcSeq (handlesCol context) - dissentingJumpers <- collectDissentingJumpers peerStates - let sortedJumpers = sortOn (pointSlot . fst . snd) dissentingJumpers - case sortedJumpers of - [] -> pure Nothing - (peer, (badPoint, (initState, goodJumpInfo, handle))):_ -> do - writeTVar (cschJumping handle) $ Objector initState goodJumpInfo badPoint - pure $ Just peer - where - collectDissentingJumpers peerStates = - fmap catMaybes $ +electNewObjector context = + NoLongerObjector <$> do + peerStates <- toList <$> cschcSeq (handlesCol context) + dissentingJumpers <- collectDissentingJumpers peerStates + let sortedJumpers = sortOn (pointSlot . fst . snd) dissentingJumpers + case sortedJumpers of + [] -> pure Nothing + (peer, (badPoint, (initState, goodJumpInfo, handle))) : _ -> do + writeTVar (cschJumping handle) $ Objector initState goodJumpInfo badPoint + pure $ Just peer + where + collectDissentingJumpers peerStates = + fmap catMaybes $ forM peerStates $ \(peer, handle) -> readTVar (cschJumping handle) >>= \case Jumper _ (FoundIntersection initState goodJumpInfo badPoint) -> @@ -1060,22 +1079,26 @@ electNewObjector context = NoLongerObjector <$> do -- | Events due to the centralized Devoted BlockFetch logic data TraceEventDbf peer = RotatedDynamo peer peer - deriving (Show) + deriving Show -- | Events arising from a specific ChainSync client data TraceEventCsj peer blk - = BecomingObjector (Maybe peer) -- ^ previous objector + = -- | previous objector + BecomingObjector (Maybe peer) | BlockedOnJump | InitializedAsDynamo - | NoLongerDynamo (Maybe peer) TraceCsjReason -- ^ new dynamo if known - | NoLongerObjector (Maybe peer) TraceCsjReason -- ^ new objector if known - | SentJumpInstruction (Point blk) -- ^ jump target - deriving (Show) + | -- | new dynamo if known + 
NoLongerDynamo (Maybe peer) TraceCsjReason + | -- | new objector if known + NoLongerObjector (Maybe peer) TraceCsjReason + | -- | jump target + SentJumpInstruction (Point blk) + deriving Show data TraceCsjReason = BecauseCsjDisengage | BecauseCsjDisconnect - deriving (Show) + deriving Show unitNothing :: () -> Maybe a unitNothing () = Nothing diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/State.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/State.hs index 9d0a9d8af0..ef91f7ec1b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/State.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Client/State.hs @@ -7,8 +7,8 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State ( - ChainSyncClientHandle (..) +module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State + ( ChainSyncClientHandle (..) , ChainSyncClientHandleCollection (..) , ChainSyncJumpingJumperState (..) , ChainSyncJumpingState (..) 
@@ -21,148 +21,157 @@ module Ouroboros.Consensus.MiniProtocol.ChainSync.Client.State ( , newChainSyncClientHandleCollection ) where -import Cardano.Slotting.Slot (SlotNo, WithOrigin) -import Data.Function (on) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe.Strict (StrictMaybe (..)) -import Data.Sequence.Strict (StrictSeq) -import qualified Data.Sequence.Strict as Seq -import Data.Typeable (Proxy (..), Typeable, typeRep) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block (HasHeader, Header, Point) -import Ouroboros.Consensus.HeaderStateHistory (HeaderStateHistory) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) -import Ouroboros.Consensus.Node.GsmState (GsmState) -import Ouroboros.Consensus.Util.IOLike (IOLike, NoThunks (..), STM, - StrictTVar, Time, modifyTVar, newTVar, readTVar) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - headPoint) +import Cardano.Slotting.Slot (SlotNo, WithOrigin) +import Data.Function (on) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe.Strict (StrictMaybe (..)) +import Data.Sequence.Strict (StrictSeq) +import Data.Sequence.Strict qualified as Seq +import Data.Typeable (Proxy (..), Typeable, typeRep) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block (HasHeader, Header, Point) +import Ouroboros.Consensus.HeaderStateHistory (HeaderStateHistory) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.Node.GsmState (GsmState) +import Ouroboros.Consensus.Util.IOLike + ( IOLike + , NoThunks (..) 
+ , STM + , StrictTVar + , Time + , modifyTVar + , newTVar + , readTVar + ) +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , headPoint + ) -- | A ChainSync client's state that's used by other components, like the GDD or -- the jumping governor. -data ChainSyncState blk = ChainSyncState { - - -- | The current candidate fragment. - csCandidate :: !(AnchoredFragment (HeaderWithTime blk)) - -- | Whether the last message sent by the peer was MsgAwaitReply. - -- - -- This ChainSync client should ensure that its peer sets this flag while - -- and only while both of the following conditions are satisfied: the - -- peer's latest message has been fully processed (especially that its - -- candidate has been updated; previous argument) and its latest message - -- did not claim that it already has headers that extend its candidate. - -- - -- It's more important that the flag is unset promptly than it is for the - -- flag to be set promptly, because of how this is used by the GSM to - -- determine that the node is done syncing. - , csIdling :: !Bool - - -- | When the client receives a new header, it updates this field before - -- processing it further, and the latest slot may refer to a header beyond - -- the forecast horizon while the candidate fragment isn't extended yet, to - -- signal to GDD that the density is known up to this slot. +data ChainSyncState blk = ChainSyncState + { csCandidate :: !(AnchoredFragment (HeaderWithTime blk)) + -- ^ The current candidate fragment. + , csIdling :: !Bool + -- ^ Whether the last message sent by the peer was MsgAwaitReply. + -- + -- This ChainSync client should ensure that its peer sets this flag while + -- and only while both of the following conditions are satisfied: the + -- peer's latest message has been fully processed (especially that its + -- candidate has been updated; previous argument) and its latest message + -- did not claim that it already has headers that extend its candidate. 
+ -- + -- It's more important that the flag is unset promptly than it is for the + -- flag to be set promptly, because of how this is used by the GSM to + -- determine that the node is done syncing. , csLatestSlot :: !(StrictMaybe (WithOrigin SlotNo)) + -- ^ When the client receives a new header, it updates this field before + -- processing it further, and the latest slot may refer to a header beyond + -- the forecast horizon while the candidate fragment isn't extended yet, to + -- signal to GDD that the density is known up to this slot. } - deriving stock (Generic) + deriving stock Generic -deriving anyclass instance ( - HasHeader blk, - NoThunks (Header blk) - ) => NoThunks (ChainSyncState blk) +deriving anyclass instance + ( HasHeader blk + , NoThunks (Header blk) + ) => + NoThunks (ChainSyncState blk) -- | An interface to a ChainSync client that's used by other components, like -- the GDD governor. -data ChainSyncClientHandle m blk = ChainSyncClientHandle { - -- | Disconnects from the peer when the GDD considers it adversarial - cschGDDKill :: !(m ()) - - -- | Callback called by the GSM when the GSM state changes. They take the - -- current time and should execute rapidly. Used to enable/disable the LoP. +data ChainSyncClientHandle m blk = ChainSyncClientHandle + { cschGDDKill :: !(m ()) + -- ^ Disconnects from the peer when the GDD considers it adversarial , cschOnGsmStateChanged :: !(GsmState -> Time -> STM m ()) - - -- | Data shared between the client and external components like GDD. - , cschState :: !(StrictTVar m (ChainSyncState blk)) - - -- | The state of the peer with respect to ChainSync jumping. - , cschJumping :: !(StrictTVar m (ChainSyncJumpingState m blk)) - - -- | ChainSync state needed to jump to the tip of the candidate fragment of - -- the peer. - , cschJumpInfo :: !(StrictTVar m (Maybe (JumpInfo blk))) + -- ^ Callback called by the GSM when the GSM state changes. They take the + -- current time and should execute rapidly. 
Used to enable/disable the LoP. + , cschState :: !(StrictTVar m (ChainSyncState blk)) + -- ^ Data shared between the client and external components like GDD. + , cschJumping :: !(StrictTVar m (ChainSyncJumpingState m blk)) + -- ^ The state of the peer with respect to ChainSync jumping. + , cschJumpInfo :: !(StrictTVar m (Maybe (JumpInfo blk))) + -- ^ ChainSync state needed to jump to the tip of the candidate fragment of + -- the peer. } - deriving stock (Generic) + deriving stock Generic -deriving anyclass instance ( - IOLike m, - HasHeader blk, - LedgerSupportsProtocol blk, - NoThunks (Header blk) - ) => NoThunks (ChainSyncClientHandle m blk) +deriving anyclass instance + ( IOLike m + , HasHeader blk + , LedgerSupportsProtocol blk + , NoThunks (Header blk) + ) => + NoThunks (ChainSyncClientHandle m blk) -- | A collection of ChainSync client handles for the peers of this node. -- -- Sometimes we want to see the collection as a Map, and sometimes as a sequence. -- The implementation keeps both views in sync. 
-data ChainSyncClientHandleCollection peer m blk = ChainSyncClientHandleCollection { - -- | A map containing the handles for the peers in the collection - cschcMap :: !(STM m (Map peer (ChainSyncClientHandle m blk))) - -- | A sequence containing the handles for the peers in the collection +data ChainSyncClientHandleCollection peer m blk = ChainSyncClientHandleCollection + { cschcMap :: !(STM m (Map peer (ChainSyncClientHandle m blk))) + -- ^ A map containing the handles for the peers in the collection , cschcSeq :: !(STM m (StrictSeq (peer, ChainSyncClientHandle m blk))) - -- | Add the handle for the given peer to the collection - -- PRECONDITION: The peer is not already in the collection - , cschcAddHandle :: !(peer -> ChainSyncClientHandle m blk -> STM m ()) - -- | Remove the handle for the given peer from the collection + -- ^ A sequence containing the handles for the peers in the collection + , cschcAddHandle :: !(peer -> ChainSyncClientHandle m blk -> STM m ()) + -- ^ Add the handle for the given peer to the collection + -- PRECONDITION: The peer is not already in the collection , cschcRemoveHandle :: !(peer -> STM m ()) - -- | Moves the handle for the given peer to the end of the sequence + -- ^ Remove the handle for the given peer from the collection , cschcRotateHandle :: !(peer -> STM m ()) - -- | Remove all the handles from the collection + -- ^ Moves the handle for the given peer to the end of the sequence , cschcRemoveAllHandles :: !(STM m ()) + -- ^ Remove all the handles from the collection } - deriving stock (Generic) + deriving stock Generic -deriving anyclass instance ( - IOLike m, - HasHeader blk, - LedgerSupportsProtocol blk, - NoThunks (STM m ()), - NoThunks (Header blk), - NoThunks (STM m (Map peer (ChainSyncClientHandle m blk))), - NoThunks (STM m (StrictSeq (peer, ChainSyncClientHandle m blk))) - ) => NoThunks (ChainSyncClientHandleCollection peer m blk) +deriving anyclass instance + ( IOLike m + , HasHeader blk + , LedgerSupportsProtocol blk 
+ , NoThunks (STM m ()) + , NoThunks (Header blk) + , NoThunks (STM m (Map peer (ChainSyncClientHandle m blk))) + , NoThunks (STM m (StrictSeq (peer, ChainSyncClientHandle m blk))) + ) => + NoThunks (ChainSyncClientHandleCollection peer m blk) newChainSyncClientHandleCollection :: - ( Ord peer, - IOLike m, - LedgerSupportsProtocol blk, - NoThunks peer - ) - => STM m (ChainSyncClientHandleCollection peer m blk) + ( Ord peer + , IOLike m + , LedgerSupportsProtocol blk + , NoThunks peer + ) => + STM m (ChainSyncClientHandleCollection peer m blk) newChainSyncClientHandleCollection = do handlesMap <- newTVar mempty handlesSeq <- newTVar mempty - return ChainSyncClientHandleCollection { - cschcMap = readTVar handlesMap - , cschcSeq = readTVar handlesSeq - , cschcAddHandle = \peer handle -> do - modifyTVar handlesMap (Map.insert peer handle) - modifyTVar handlesSeq (Seq.|> (peer, handle)) - , cschcRemoveHandle = \peer -> do - modifyTVar handlesMap (Map.delete peer) - modifyTVar handlesSeq $ \s -> - let (xs, ys) = Seq.spanl ((/= peer) . fst) s - in xs Seq.>< Seq.drop 1 ys - , cschcRotateHandle = \peer -> - modifyTVar handlesSeq $ \s -> - let (xs, ys) = Seq.spanl ((/= peer) . fst) s - in xs Seq.>< Seq.drop 1 ys Seq.>< Seq.take 1 ys - , cschcRemoveAllHandles = do - modifyTVar handlesMap (const mempty) - modifyTVar handlesSeq (const mempty) - } + return + ChainSyncClientHandleCollection + { cschcMap = readTVar handlesMap + , cschcSeq = readTVar handlesSeq + , cschcAddHandle = \peer handle -> do + modifyTVar handlesMap (Map.insert peer handle) + modifyTVar handlesSeq (Seq.|> (peer, handle)) + , cschcRemoveHandle = \peer -> do + modifyTVar handlesMap (Map.delete peer) + modifyTVar handlesSeq $ \s -> + let (xs, ys) = Seq.spanl ((/= peer) . fst) s + in xs Seq.>< Seq.drop 1 ys + , cschcRotateHandle = \peer -> + modifyTVar handlesSeq $ \s -> + let (xs, ys) = Seq.spanl ((/= peer) . 
fst) s + in xs Seq.>< Seq.drop 1 ys Seq.>< Seq.take 1 ys + , cschcRemoveAllHandles = do + modifyTVar handlesMap (const mempty) + modifyTVar handlesSeq (const mempty) + } data DynamoInitState blk = -- | The dynamo still has to set the intersection of the ChainSync server @@ -172,13 +181,14 @@ data DynamoInitState blk -- the candidate fragment. DynamoStarting !(JumpInfo blk) | DynamoStarted - deriving (Generic) + deriving Generic deriving anyclass instance - ( HasHeader blk, - LedgerSupportsProtocol blk, - NoThunks (Header blk) - ) => NoThunks (DynamoInitState blk) + ( HasHeader blk + , LedgerSupportsProtocol blk + , NoThunks (Header blk) + ) => + NoThunks (DynamoInitState blk) data ObjectorInitState = -- | The objector still needs to set the intersection of the ChainSync @@ -237,14 +247,15 @@ data ChainSyncJumpingState m blk !(StrictTVar m (Maybe (JumpInfo blk))) -- | More precisely, the state of the jumper. !(ChainSyncJumpingJumperState blk) - deriving (Generic) + deriving Generic deriving anyclass instance - ( IOLike m, - HasHeader blk, - LedgerSupportsProtocol blk, - NoThunks (Header blk) - ) => NoThunks (ChainSyncJumpingState m blk) + ( IOLike m + , HasHeader blk + , LedgerSupportsProtocol blk + , NoThunks (Header blk) + ) => + NoThunks (ChainSyncJumpingState m blk) -- | The ChainSync state required for jumps -- @@ -255,12 +266,12 @@ deriving anyclass instance -- This can happen if we need to look for an intersection when the jumper -- rejects a jump. 
data JumpInfo blk = JumpInfo - { jMostRecentIntersection :: !(Point blk) - , jOurFragment :: !(AnchoredFragment (Header blk)) - , jTheirFragment :: !(AnchoredFragment (HeaderWithTime blk)) + { jMostRecentIntersection :: !(Point blk) + , jOurFragment :: !(AnchoredFragment (Header blk)) + , jTheirFragment :: !(AnchoredFragment (HeaderWithTime blk)) , jTheirHeaderStateHistory :: !(HeaderStateHistory blk) } - deriving (Generic) + deriving Generic instance (HasHeader (Header blk), Typeable blk) => Eq (JumpInfo blk) where (==) = (==) `on` headPoint . jTheirFragment @@ -292,10 +303,11 @@ data ChainSyncJumpingJumperState blk -- The init state indicates the initialization to use for the objector in -- case this jumper is promoted. FoundIntersection ObjectorInitState !(JumpInfo blk) !(Point (Header blk)) - deriving (Generic) + deriving Generic deriving anyclass instance - ( HasHeader blk, - LedgerSupportsProtocol blk, - NoThunks (Header blk) - ) => NoThunks (ChainSyncJumpingJumperState blk) + ( HasHeader blk + , LedgerSupportsProtocol blk + , NoThunks (Header blk) + ) => + NoThunks (ChainSyncJumpingJumperState blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Server.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Server.hs index 7a536b9ccf..0895a9b9dd 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Server.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/ChainSync/Server.hs @@ -4,47 +4,58 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.MiniProtocol.ChainSync.Server ( - Tip +module Ouroboros.Consensus.MiniProtocol.ChainSync.Server + ( Tip , chainSyncBlockServerFollower , chainSyncBlocksServer , chainSyncHeaderServerFollower , chainSyncHeadersServer + -- * Trace events , BlockingType (..) , TraceChainSyncServerEvent (..) 
+ -- * Low-level API , chainSyncServerForFollower ) where -import Control.ResourceRegistry (ResourceRegistry) -import Control.Tracer -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.ChainDB.API (ChainDB, Follower, - WithPoint (..), getSerialisedBlockWithPoint, - getSerialisedHeaderWithPoint) -import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.Enclose (Enclosing, Enclosing' (..), - pattern FallingEdge) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Block (ChainUpdate (..), Serialised, - Tip (..)) -import Ouroboros.Network.Protocol.ChainSync.Server - +import Control.ResourceRegistry (ResourceRegistry) +import Control.Tracer +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.ChainDB.API + ( ChainDB + , Follower + , WithPoint (..) + , getSerialisedBlockWithPoint + , getSerialisedHeaderWithPoint + ) +import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.Enclose + ( Enclosing + , Enclosing' (..) + , pattern FallingEdge + ) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Block + ( ChainUpdate (..) + , Serialised + , Tip (..) 
+ )
+import Ouroboros.Network.Protocol.ChainSync.Server

 chainSyncHeaderServerFollower ::
- ChainDB m blk
- -> ChainDB.ChainType
- -> ResourceRegistry m
- -> m (Follower m blk (WithPoint blk (SerialisedHeader blk)))
+ ChainDB m blk ->
+ ChainDB.ChainType ->
+ ResourceRegistry m ->
+ m (Follower m blk (WithPoint blk (SerialisedHeader blk)))
 chainSyncHeaderServerFollower chainDB chainType registry =
 ChainDB.newFollower chainDB registry chainType getSerialisedHeaderWithPoint

 chainSyncBlockServerFollower ::
- ChainDB m blk
- -> ResourceRegistry m
- -> m (Follower m blk (WithPoint blk (Serialised blk)))
+ ChainDB m blk ->
+ ResourceRegistry m ->
+ m (Follower m blk (WithPoint blk (Serialised blk)))
 chainSyncBlockServerFollower chainDB registry =
 ChainDB.newFollower chainDB registry ChainDB.SelectedChain getSerialisedBlockWithPoint
@@ -52,32 +63,31 @@ chainSyncBlockServerFollower chainDB registry =
--
-- The node-to-node protocol uses the chain sync mini-protocol with chain
-- headers (and fetches blocks separately with the block fetch mini-protocol).
--- chainSyncHeadersServer ::
- forall m blk.
- ( IOLike m
- , HasHeader (Header blk)
- )
- => Tracer m (TraceChainSyncServerEvent blk)
- -> ChainDB m blk
- -> Follower m blk (WithPoint blk (SerialisedHeader blk))
- -> ChainSyncServer (SerialisedHeader blk) (Point blk) (Tip blk) m ()
+ forall m blk.
+ ( IOLike m
+ , HasHeader (Header blk)
+ ) =>
+ Tracer m (TraceChainSyncServerEvent blk) ->
+ ChainDB m blk ->
+ Follower m blk (WithPoint blk (SerialisedHeader blk)) ->
+ ChainSyncServer (SerialisedHeader blk) (Point blk) (Tip blk) m ()
 chainSyncHeadersServer tracer chainDB flr =
- chainSyncServerForFollower tracer (ChainDB.getCurrentTip chainDB) flr
+ chainSyncServerForFollower tracer (ChainDB.getCurrentTip chainDB) flr

-- | Chain Sync Server for blocks for a given 'ChainDB'.
--
-- The local node-to-client protocol uses the chain sync mini-protocol with
-- chains of full blocks (rather than a header \/ body split).
--- chainSyncBlocksServer :: - forall m blk. (IOLike m, HasHeader (Header blk)) - => Tracer m (TraceChainSyncServerEvent blk) - -> ChainDB m blk - -> Follower m blk (WithPoint blk (Serialised blk)) - -> ChainSyncServer (Serialised blk) (Point blk) (Tip blk) m () + forall m blk. + (IOLike m, HasHeader (Header blk)) => + Tracer m (TraceChainSyncServerEvent blk) -> + ChainDB m blk -> + Follower m blk (WithPoint blk (Serialised blk)) -> + ChainSyncServer (Serialised blk) (Point blk) (Tip blk) m () chainSyncBlocksServer tracer chainDB flr = - chainSyncServerForFollower tracer (ChainDB.getCurrentTip chainDB) flr + chainSyncServerForFollower tracer (ChainDB.getCurrentTip chainDB) flr -- | A chain sync server. -- @@ -87,80 +97,88 @@ chainSyncBlocksServer tracer chainDB flr = -- 'Ourboros.Network.ChainProducerState.ChainProducerState'. -- -- All the hard work is done by the 'Follower's provided by the 'ChainDB'. --- chainSyncServerForFollower :: - forall m blk b. IOLike m - => Tracer m (TraceChainSyncServerEvent blk) - -> STM m (Tip blk) - -> Follower m blk (WithPoint blk b) - -> ChainSyncServer b (Point blk) (Tip blk) m () + forall m blk b. 
+ IOLike m => + Tracer m (TraceChainSyncServerEvent blk) -> + STM m (Tip blk) -> + Follower m blk (WithPoint blk b) -> + ChainSyncServer b (Point blk) (Tip blk) m () chainSyncServerForFollower tracer getCurrentTip flr = - idle' - where - idle :: ServerStIdle b (Point blk) (Tip blk) m () - idle = ServerStIdle { - recvMsgRequestNext = handleRequestNext, - recvMsgFindIntersect = handleFindIntersect, - recvMsgDoneClient = pure () + idle' + where + idle :: ServerStIdle b (Point blk) (Tip blk) m () + idle = + ServerStIdle + { recvMsgRequestNext = handleRequestNext + , recvMsgFindIntersect = handleFindIntersect + , recvMsgDoneClient = pure () } - idle' :: ChainSyncServer b (Point blk) (Tip blk) m () - idle' = ChainSyncServer $ return idle + idle' :: ChainSyncServer b (Point blk) (Tip blk) m () + idle' = ChainSyncServer $ return idle - handleRequestNext :: m (Either (ServerStNext b (Point blk) (Tip blk) m ()) - (m (ServerStNext b (Point blk) (Tip blk) m ()))) - handleRequestNext = ChainDB.followerInstruction flr >>= \case + handleRequestNext :: + m + ( Either + (ServerStNext b (Point blk) (Tip blk) m ()) + (m (ServerStNext b (Point blk) (Tip blk) m ())) + ) + handleRequestNext = + ChainDB.followerInstruction flr >>= \case Just update -> do tip <- atomically getCurrentTip let mkTraceEvent = TraceChainSyncServerUpdate tip (point <$> update) NonBlocking traceWith tracer $ mkTraceEvent RisingEdge return $ Left $ sendNext mkTraceEvent tip update - Nothing -> return $ Right $ do + Nothing -> return $ Right $ do -- Follower is at the head, we have to block and wait for the chain to -- change. 
update <- ChainDB.followerInstructionBlocking flr - tip <- atomically getCurrentTip + tip <- atomically getCurrentTip let mkTraceEvent = TraceChainSyncServerUpdate tip (point <$> update) Blocking traceWith tracer $ mkTraceEvent RisingEdge return $ sendNext mkTraceEvent tip update - sendNext :: (Enclosing -> TraceChainSyncServerEvent blk) - -> Tip blk - -> ChainUpdate blk (WithPoint blk b) - -> ServerStNext b (Point blk) (Tip blk) m () - sendNext mkTraceEvent tip = \case - AddBlock hdr -> SendMsgRollForward (withoutPoint hdr) tip traceThenIdle - RollBack pt -> SendMsgRollBackward pt tip traceThenIdle - where - traceThenIdle = ChainSyncServer $ do - traceWith tracer $ mkTraceEvent FallingEdge - return idle - - handleFindIntersect :: [Point blk] - -> m (ServerStIntersect b (Point blk) (Tip blk) m ()) - handleFindIntersect points = do - -- TODO guard number of points - changed <- ChainDB.followerForward flr points - tip <- atomically getCurrentTip - case changed of - Just pt -> return $ SendMsgIntersectFound pt tip idle' - Nothing -> return $ SendMsgIntersectNotFound tip idle' + sendNext :: + (Enclosing -> TraceChainSyncServerEvent blk) -> + Tip blk -> + ChainUpdate blk (WithPoint blk b) -> + ServerStNext b (Point blk) (Tip blk) m () + sendNext mkTraceEvent tip = \case + AddBlock hdr -> SendMsgRollForward (withoutPoint hdr) tip traceThenIdle + RollBack pt -> SendMsgRollBackward pt tip traceThenIdle + where + traceThenIdle = ChainSyncServer $ do + traceWith tracer $ mkTraceEvent FallingEdge + return idle + + handleFindIntersect :: + [Point blk] -> + m (ServerStIntersect b (Point blk) (Tip blk) m ()) + handleFindIntersect points = do + -- TODO guard number of points + changed <- ChainDB.followerForward flr points + tip <- atomically getCurrentTip + case changed of + Just pt -> return $ SendMsgIntersectFound pt tip idle' + Nothing -> return $ SendMsgIntersectNotFound tip idle' {------------------------------------------------------------------------------- Trace events 
-------------------------------------------------------------------------------} -- | Events traced by the Chain Sync Server. -data TraceChainSyncServerEvent blk = - -- | Send a 'ChainUpdate' message. +data TraceChainSyncServerEvent blk + = -- | Send a 'ChainUpdate' message. TraceChainSyncServerUpdate + -- | Tip of the currently selected chain. (Tip blk) - -- ^ Tip of the currently selected chain. - (ChainUpdate blk (Point blk)) - -- ^ The whole headers/blocks in the traced 'ChainUpdate' are substituted + -- | The whole headers/blocks in the traced 'ChainUpdate' are substituted -- with their corresponding 'Point'. + (ChainUpdate blk (Point blk)) BlockingType Enclosing deriving (Eq, Show) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalStateQuery/Server.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalStateQuery/Server.hs index 5938b4ad29..f3bb2a0131 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalStateQuery/Server.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalStateQuery/Server.hs @@ -3,67 +3,75 @@ module Ouroboros.Consensus.MiniProtocol.LocalStateQuery.Server (localStateQueryServer) where - -import Data.Functor ((<&>)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Query (BlockSupportsLedgerQuery, - Query) -import qualified Ouroboros.Consensus.Ledger.Query as Query -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Protocol.LocalStateQuery.Server -import Ouroboros.Network.Protocol.LocalStateQuery.Type - (AcquireFailure (..), Target (..)) +import Data.Functor ((<&>)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Query + ( 
BlockSupportsLedgerQuery + , Query + ) +import Ouroboros.Consensus.Ledger.Query qualified as Query +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Protocol.LocalStateQuery.Server +import Ouroboros.Network.Protocol.LocalStateQuery.Type + ( AcquireFailure (..) + , Target (..) + ) localStateQueryServer :: - forall m blk. - ( IOLike m - , BlockSupportsLedgerQuery blk - , Query.ConfigSupportsNode blk - , LedgerSupportsProtocol blk - ) - => ExtLedgerCfg blk - -> ( Target (Point blk) - -> m (Either GetForkerError (ReadOnlyForker' m blk)) - ) - -> LocalStateQueryServer blk (Point blk) (Query blk) m () + forall m blk. + ( IOLike m + , BlockSupportsLedgerQuery blk + , Query.ConfigSupportsNode blk + , LedgerSupportsProtocol blk + ) => + ExtLedgerCfg blk -> + ( Target (Point blk) -> + m (Either GetForkerError (ReadOnlyForker' m blk)) + ) -> + LocalStateQueryServer blk (Point blk) (Query blk) m () localStateQueryServer cfg getView = - LocalStateQueryServer $ return idle - where - idle :: ServerStIdle blk (Point blk) (Query blk) m () - idle = ServerStIdle { - recvMsgAcquire = handleAcquire - , recvMsgDone = return () - } + LocalStateQueryServer $ return idle + where + idle :: ServerStIdle blk (Point blk) (Query blk) m () + idle = + ServerStIdle + { recvMsgAcquire = handleAcquire + , recvMsgDone = return () + } - handleAcquire :: Target (Point blk) - -> m (ServerStAcquiring blk (Point blk) (Query blk) m ()) - handleAcquire mpt = do - getView mpt <&> \case - Right forker -> SendMsgAcquired $ acquired forker - Left e -> case e of - PointTooOld{} -> - SendMsgFailure AcquireFailurePointTooOld idle - PointNotOnChain -> - SendMsgFailure AcquireFailurePointNotOnChain idle + handleAcquire :: + Target (Point blk) -> + m (ServerStAcquiring blk (Point blk) (Query blk) m ()) + handleAcquire mpt = do + getView mpt <&> \case + Right forker -> 
SendMsgAcquired $ acquired forker + Left e -> case e of + PointTooOld{} -> + SendMsgFailure AcquireFailurePointTooOld idle + PointNotOnChain -> + SendMsgFailure AcquireFailurePointNotOnChain idle - acquired :: ReadOnlyForker' m blk - -> ServerStAcquired blk (Point blk) (Query blk) m () - acquired forker = ServerStAcquired { - recvMsgQuery = handleQuery forker - , recvMsgReAcquire = \mp -> do close; handleAcquire mp - , recvMsgRelease = do close; return idle - } - where - close = roforkerClose forker + acquired :: + ReadOnlyForker' m blk -> + ServerStAcquired blk (Point blk) (Query blk) m () + acquired forker = + ServerStAcquired + { recvMsgQuery = handleQuery forker + , recvMsgReAcquire = \mp -> do close; handleAcquire mp + , recvMsgRelease = do close; return idle + } + where + close = roforkerClose forker - handleQuery :: - ReadOnlyForker' m blk - -> Query blk result - -> m (ServerStQuerying blk (Point blk) (Query blk) m () result) - handleQuery forker query = do - result <- Query.answerQuery cfg forker query - return $ SendMsgResult result (acquired forker) + handleQuery :: + ReadOnlyForker' m blk -> + Query blk result -> + m (ServerStQuerying blk (Point blk) (Query blk) m () result) + handleQuery forker query = do + result <- Query.answerQuery cfg forker query + return $ SendMsgResult result (acquired forker) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalTxMonitor/Server.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalTxMonitor/Server.hs index e49980bf33..8f8d0cf5f7 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalTxMonitor/Server.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalTxMonitor/Server.hs @@ -8,118 +8,140 @@ module Ouroboros.Consensus.MiniProtocol.LocalTxMonitor.Server (localTxMonitorServer) where -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import 
qualified Data.Measure as Measure -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Mempool -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Protocol.LocalTxMonitor.Server -import Ouroboros.Network.Protocol.LocalTxMonitor.Type +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Measure qualified as Measure +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Mempool +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Protocol.LocalTxMonitor.Server +import Ouroboros.Network.Protocol.LocalTxMonitor.Type -- | Local transaction monitoring server, for inspecting the mempool. --- localTxMonitorServer :: - forall blk m. - ( MonadSTM m - , LedgerSupportsMempool blk - ) - => Mempool m blk - -> LocalTxMonitorServer (GenTxId blk) (GenTx blk) SlotNo m () + forall blk m. + ( MonadSTM m + , LedgerSupportsMempool blk + ) => + Mempool m blk -> + LocalTxMonitorServer (GenTxId blk) (GenTx blk) SlotNo m () localTxMonitorServer mempool = - LocalTxMonitorServer (pure serverStIdle) - where - serverStIdle - :: ServerStIdle (GenTxId blk) (GenTx blk) SlotNo m () - serverStIdle = - ServerStIdle + LocalTxMonitorServer (pure serverStIdle) + where + serverStIdle :: + ServerStIdle (GenTxId blk) (GenTx blk) SlotNo m () + serverStIdle = + ServerStIdle { recvMsgDone = do pure () , recvMsgAcquire = do - s <- atomically $ - (,) - <$> getCapacity mempool - <*> getSnapshot mempool + s <- + atomically $ + (,) + <$> getCapacity mempool + <*> getSnapshot mempool pure $ serverStAcquiring s } - serverStAcquiring - :: (TxMeasure blk, MempoolSnapshot blk) - -> ServerStAcquiring (GenTxId blk) (GenTx blk) SlotNo m () - serverStAcquiring s@(_, snapshot) = - SendMsgAcquired (snapshotSlotNo snapshot) (serverStAcquired s (snapshotTxs snapshot)) + serverStAcquiring :: + (TxMeasure blk, MempoolSnapshot blk) -> + ServerStAcquiring 
(GenTxId blk) (GenTx blk) SlotNo m () + serverStAcquiring s@(_, snapshot) = + SendMsgAcquired (snapshotSlotNo snapshot) (serverStAcquired s (snapshotTxs snapshot)) - serverStAcquired - :: (TxMeasure blk, MempoolSnapshot blk) - -> [(Validated (GenTx blk), idx, TxMeasure blk)] - -> ServerStAcquired (GenTxId blk) (GenTx blk) SlotNo m () - serverStAcquired s@(capacity, snapshot) txs = - ServerStAcquired + serverStAcquired :: + (TxMeasure blk, MempoolSnapshot blk) -> + [(Validated (GenTx blk), idx, TxMeasure blk)] -> + ServerStAcquired (GenTxId blk) (GenTx blk) SlotNo m () + serverStAcquired s@(capacity, snapshot) txs = + ServerStAcquired { recvMsgNextTx = case txs of - [] -> + [] -> pure $ SendMsgReplyNextTx Nothing (serverStAcquired s []) - (txForgetValidated -> h, _tno, _byteSize):q -> + (txForgetValidated -> h, _tno, _byteSize) : q -> pure $ SendMsgReplyNextTx (Just h) (serverStAcquired s q) , recvMsgHasTx = \txid -> pure $ SendMsgReplyHasTx (snapshotHasTx snapshot txid) (serverStAcquired s txs) , recvMsgGetSizes = do - let MempoolSize{msNumTxs,msNumBytes} = snapshotMempoolSize snapshot - let sizes = MempoolSizeAndCapacity - { capacityInBytes = unByteSize32 $ txMeasureByteSize capacity - , sizeInBytes = unByteSize32 $ txMeasureByteSize msNumBytes - , numberOfTxs = msNumTxs - } + let MempoolSize{msNumTxs, msNumBytes} = snapshotMempoolSize snapshot + let sizes = + MempoolSizeAndCapacity + { capacityInBytes = unByteSize32 $ txMeasureByteSize capacity + , sizeInBytes = unByteSize32 $ txMeasureByteSize msNumBytes + , numberOfTxs = msNumTxs + } pure $ SendMsgReplyGetSizes sizes (serverStAcquired s txs) , recvMsgGetMeasures = do let txsMeasures = foldl (\acc (_, _, m) -> Measure.plus acc m) Measure.zero txs - measures = MempoolMeasures - { txCount = fromIntegral $ length txs - , measuresMap = - mkMeasuresMap (Proxy :: Proxy blk) txsMeasures capacity - } + measures = + MempoolMeasures + { txCount = fromIntegral $ length txs + , measuresMap = + mkMeasuresMap (Proxy :: Proxy 
blk) txsMeasures capacity + } pure $ SendMsgReplyGetMeasures measures (serverStAcquired s txs) , recvMsgAwaitAcquire = do s' <- atomically $ do s'@(_, snapshot') <- - (,) - <$> getCapacity mempool - <*> getSnapshot mempool + (,) + <$> getCapacity mempool + <*> getSnapshot mempool s' <$ check (not (snapshot `isSameSnapshot` snapshot')) pure $ serverStAcquiring s' , recvMsgRelease = pure serverStIdle } - -- Are two snapshots equal? (from the perspective of this protocol) - isSameSnapshot - :: MempoolSnapshot blk - -> MempoolSnapshot blk - -> Bool - isSameSnapshot a b = - (tno <$> snapshotTxs a) == (tno <$> snapshotTxs b) - && - snapshotSlotNo a == snapshotSlotNo b + -- Are two snapshots equal? (from the perspective of this protocol) + isSameSnapshot :: + MempoolSnapshot blk -> + MempoolSnapshot blk -> + Bool + isSameSnapshot a b = + (tno <$> snapshotTxs a) == (tno <$> snapshotTxs b) + && snapshotSlotNo a == snapshotSlotNo b - tno (_a, b, _c) = b :: TicketNo + tno (_a, b, _c) = b :: TicketNo -mkMeasuresMap :: TxMeasureMetrics (TxMeasure blk) - => Proxy blk - -> TxMeasure blk - -> TxMeasure blk - -> Map MeasureName (SizeAndCapacity Integer) +mkMeasuresMap :: + TxMeasureMetrics (TxMeasure blk) => + Proxy blk -> + TxMeasure blk -> + TxMeasure blk -> + Map MeasureName (SizeAndCapacity Integer) mkMeasuresMap Proxy size capacity = Map.fromList - [ (TransactionBytes, SizeAndCapacity (byteSizeInteger $ txMeasureMetricTxSizeBytes size) (byteSizeInteger $ txMeasureMetricTxSizeBytes capacity)) - , (ExUnitsMemory, SizeAndCapacity (fromIntegral $ txMeasureMetricExUnitsMemory size) (fromIntegral $ txMeasureMetricExUnitsMemory capacity)) - , (ExUnitsSteps, SizeAndCapacity (fromIntegral $ txMeasureMetricExUnitsSteps size) (fromIntegral $ txMeasureMetricExUnitsSteps capacity)) - , (ReferenceScriptsBytes, SizeAndCapacity (byteSizeInteger $ txMeasureMetricRefScriptsSizeBytes size) (byteSizeInteger $ txMeasureMetricRefScriptsSizeBytes capacity)) + [ + ( TransactionBytes + , 
SizeAndCapacity + (byteSizeInteger $ txMeasureMetricTxSizeBytes size) + (byteSizeInteger $ txMeasureMetricTxSizeBytes capacity) + ) + , + ( ExUnitsMemory + , SizeAndCapacity + (fromIntegral $ txMeasureMetricExUnitsMemory size) + (fromIntegral $ txMeasureMetricExUnitsMemory capacity) + ) + , + ( ExUnitsSteps + , SizeAndCapacity + (fromIntegral $ txMeasureMetricExUnitsSteps size) + (fromIntegral $ txMeasureMetricExUnitsSteps capacity) + ) + , + ( ReferenceScriptsBytes + , SizeAndCapacity + (byteSizeInteger $ txMeasureMetricRefScriptsSizeBytes size) + (byteSizeInteger $ txMeasureMetricRefScriptsSizeBytes capacity) + ) ] - where - byteSizeInteger :: ByteSize32 -> Integer - byteSizeInteger = fromIntegral . unByteSize32 + where + byteSizeInteger :: ByteSize32 -> Integer + byteSizeInteger = fromIntegral . unByteSize32 pattern TransactionBytes :: MeasureName pattern TransactionBytes = MeasureName "transaction_bytes" diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalTxSubmission/Server.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalTxSubmission/Server.hs index 5bde176f77..f0e6a02315 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalTxSubmission/Server.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/MiniProtocol/LocalTxSubmission/Server.hs @@ -3,52 +3,52 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.MiniProtocol.LocalTxSubmission.Server ( - localTxSubmissionServer +module Ouroboros.Consensus.MiniProtocol.LocalTxSubmission.Server + ( localTxSubmissionServer + -- * Trace events , TraceLocalTxSubmissionServerEvent (..) 
) where -import Control.Tracer -import Data.Tuple (Solo (..)) -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Mempool.API -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Protocol.LocalTxSubmission.Server -import Ouroboros.Network.Protocol.LocalTxSubmission.Type - +import Control.Tracer +import Data.Tuple (Solo (..)) +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Mempool.API +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Protocol.LocalTxSubmission.Server +import Ouroboros.Network.Protocol.LocalTxSubmission.Type -- | Local transaction submission server, for adding txs to the 'Mempool' --- localTxSubmissionServer :: - MonadSTM m - => Tracer m (TraceLocalTxSubmissionServerEvent blk) - -> Mempool m blk - -> LocalTxSubmissionServer (GenTx blk) (ApplyTxErr blk) m () + MonadSTM m => + Tracer m (TraceLocalTxSubmissionServerEvent blk) -> + Mempool m blk -> + LocalTxSubmissionServer (GenTx blk) (ApplyTxErr blk) m () localTxSubmissionServer tracer mempool = - server - where - server = LocalTxSubmissionServer { - recvMsgSubmitTx = \tx -> do - traceWith tracer $ TraceReceivedTx tx - MkSolo addTxRes <- addLocalTxs mempool (MkSolo tx) - case addTxRes of - MempoolTxAdded _tx -> return (SubmitSuccess, server) - MempoolTxRejected _tx addTxErr -> return (SubmitFail addTxErr, server) - - , recvMsgDone = () - } - + server + where + server = + LocalTxSubmissionServer + { recvMsgSubmitTx = \tx -> do + traceWith tracer $ TraceReceivedTx tx + MkSolo addTxRes <- addLocalTxs mempool (MkSolo tx) + case addTxRes of + MempoolTxAdded _tx -> return (SubmitSuccess, server) + MempoolTxRejected _tx addTxErr -> return (SubmitFail addTxErr, server) + , recvMsgDone = () + } {------------------------------------------------------------------------------- Trace events -------------------------------------------------------------------------------} data TraceLocalTxSubmissionServerEvent blk - = 
TraceReceivedTx (GenTx blk) - -- ^ A transaction was received. - -deriving instance Eq (GenTx blk) - => Eq (TraceLocalTxSubmissionServerEvent blk) -deriving instance Show (GenTx blk) - => Show (TraceLocalTxSubmissionServerEvent blk) + = -- | A transaction was received. + TraceReceivedTx (GenTx blk) + +deriving instance + Eq (GenTx blk) => + Eq (TraceLocalTxSubmissionServerEvent blk) +deriving instance + Show (GenTx blk) => + Show (TraceLocalTxSubmissionServerEvent blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/GsmState.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/GsmState.hs index 81b2a78436..18fb331f6a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/GsmState.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/GsmState.hs @@ -6,18 +6,16 @@ -- ChainSync client relies on its state. module Ouroboros.Consensus.Node.GsmState (GsmState (..)) where -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) -- | Current state of the Genesis State Machine -data GsmState = - PreSyncing - -- ^ We are syncing, and the Honest Availability Assumption is not +data GsmState + = -- | We are syncing, and the Honest Availability Assumption is not -- satisfied. - | + PreSyncing + | -- | We are syncing, and the Honest Availability Assumption is satisfied. Syncing - -- ^ We are syncing, and the Honest Availability Assumption is satisfied. - | + | -- | We are caught-up. CaughtUp - -- ^ We are caught-up. 
deriving (Eq, Show, Read, Generic, NoThunks) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/InitStorage.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/InitStorage.hs index 73f979bb92..07fdd1a427 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/InitStorage.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/InitStorage.hs @@ -1,9 +1,9 @@ module Ouroboros.Consensus.Node.InitStorage (NodeInitStorage (..)) where -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB) -import Ouroboros.Consensus.Storage.ImmutableDB (ChunkInfo) -import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB) +import Ouroboros.Consensus.Storage.ImmutableDB (ChunkInfo) +import Ouroboros.Consensus.Util.IOLike -- | Functionality needed to initialise the storage layer of the node. class NodeInitStorage blk where diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/NetworkProtocolVersion.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/NetworkProtocolVersion.hs index d99eba9363..71b60ef62c 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/NetworkProtocolVersion.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/NetworkProtocolVersion.hs @@ -1,21 +1,22 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Node.NetworkProtocolVersion ( - HasNetworkProtocolVersion (..) +module Ouroboros.Consensus.Node.NetworkProtocolVersion + ( HasNetworkProtocolVersion (..) , SupportedNetworkProtocolVersion (..) , latestReleasedNodeVersionDefault + -- * Re-exports , NodeToClientVersion (..) , NodeToNodeVersion (..) 
) where -import Data.Kind (Type) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Proxy -import Ouroboros.Network.NodeToClient.Version -import Ouroboros.Network.NodeToNode.Version +import Data.Kind (Type) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Proxy +import Ouroboros.Network.NodeToClient.Version +import Ouroboros.Network.NodeToNode.Version {------------------------------------------------------------------------------- Protocol versioning @@ -30,27 +31,30 @@ import Ouroboros.Network.NodeToNode.Version -- additionally which queries are allowed, in the case of -- 'BlockNodeToClienVersion' (this use is already handled by -- 'Ouroboros.Consensus.Shelley.Node.TPraos.shelleyProtVer' in the NTN case). -class ( Show (BlockNodeToNodeVersion blk) - , Show (BlockNodeToClientVersion blk) - , Eq (BlockNodeToNodeVersion blk) - , Eq (BlockNodeToClientVersion blk) - ) => HasNetworkProtocolVersion blk where - type BlockNodeToNodeVersion blk :: Type +class + ( Show (BlockNodeToNodeVersion blk) + , Show (BlockNodeToClientVersion blk) + , Eq (BlockNodeToNodeVersion blk) + , Eq (BlockNodeToClientVersion blk) + ) => + HasNetworkProtocolVersion blk + where + type BlockNodeToNodeVersion blk :: Type type BlockNodeToClientVersion blk :: Type -- Defaults - type BlockNodeToNodeVersion blk = () + type BlockNodeToNodeVersion blk = () type BlockNodeToClientVersion blk = () class HasNetworkProtocolVersion blk => SupportedNetworkProtocolVersion blk where -- | Enumerate all supported node-to-node versions - supportedNodeToNodeVersions - :: Proxy blk -> Map NodeToNodeVersion (BlockNodeToNodeVersion blk) + supportedNodeToNodeVersions :: + Proxy blk -> Map NodeToNodeVersion (BlockNodeToNodeVersion blk) -- | Enumerate all supported node-to-client versions - supportedNodeToClientVersions - :: Proxy blk -> Map NodeToClientVersion (BlockNodeToClientVersion blk) + supportedNodeToClientVersions :: + Proxy blk -> Map 
NodeToClientVersion (BlockNodeToClientVersion blk) -- | The latest released version -- @@ -58,18 +62,18 @@ class HasNetworkProtocolVersion blk => SupportedNetworkProtocolVersion blk where -- -- IMPORTANT Note that this is entirely independent of the -- 'Ouroboros.Consensus.Shelley.Node.TPraos.shelleyProtVer' field et al. - latestReleasedNodeVersion - :: Proxy blk -> (Maybe NodeToNodeVersion, Maybe NodeToClientVersion) + latestReleasedNodeVersion :: + Proxy blk -> (Maybe NodeToNodeVersion, Maybe NodeToClientVersion) -- | A default for 'latestReleasedNodeVersion' -- -- Chooses the greatest in 'supportedNodeToNodeVersions' and -- 'supportedNodeToClientVersions'. latestReleasedNodeVersionDefault :: - SupportedNetworkProtocolVersion blk - => Proxy blk - -> (Maybe NodeToNodeVersion, Maybe NodeToClientVersion) + SupportedNetworkProtocolVersion blk => + Proxy blk -> + (Maybe NodeToNodeVersion, Maybe NodeToClientVersion) latestReleasedNodeVersionDefault prx = - ( fmap fst $ Map.lookupMax $ supportedNodeToNodeVersions prx - , fmap fst $ Map.lookupMax $ supportedNodeToClientVersions prx - ) + ( fmap fst $ Map.lookupMax $ supportedNodeToNodeVersions prx + , fmap fst $ Map.lookupMax $ supportedNodeToClientVersions prx + ) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/ProtocolInfo.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/ProtocolInfo.hs index dc28c638aa..496977b85e 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/ProtocolInfo.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/ProtocolInfo.hs @@ -2,20 +2,20 @@ {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Node.ProtocolInfo ( - NumCoreNodes (..) +module Ouroboros.Consensus.Node.ProtocolInfo + ( NumCoreNodes (..) , ProtocolClientInfo (..) , ProtocolInfo (..) 
, enumCoreNodes ) where -import Data.Word -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Basics (ValuesMK) -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.NodeId +import Data.Word +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Basics (ValuesMK) +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.NodeId {------------------------------------------------------------------------------- Number of core nodes @@ -25,23 +25,23 @@ newtype NumCoreNodes = NumCoreNodes Word64 deriving (Show, NoThunks) enumCoreNodes :: NumCoreNodes -> [CoreNodeId] -enumCoreNodes (NumCoreNodes 0) = [] +enumCoreNodes (NumCoreNodes 0) = [] enumCoreNodes (NumCoreNodes numNodes) = - [ CoreNodeId n | n <- [0 .. numNodes - 1] ] + [CoreNodeId n | n <- [0 .. numNodes - 1]] {------------------------------------------------------------------------------- Data required to run a protocol -------------------------------------------------------------------------------} -- | Data required to run the specified protocol. -data ProtocolInfo b = ProtocolInfo { - pInfoConfig :: TopLevelConfig b - , pInfoInitLedger :: ExtLedgerState b ValuesMK - -- ^ At genesis, this LedgerState must contain the UTxOs for the initial - -- era (which for Cardano is Byron that has void tables). - } +data ProtocolInfo b = ProtocolInfo + { pInfoConfig :: TopLevelConfig b + , pInfoInitLedger :: ExtLedgerState b ValuesMK + -- ^ At genesis, this LedgerState must contain the UTxOs for the initial + -- era (which for Cardano is Byron that has void tables). + } -- | Data required by clients of a node running the specified protocol. 
-data ProtocolClientInfo b = ProtocolClientInfo { - pClientInfoCodecConfig :: CodecConfig b - } +data ProtocolClientInfo b = ProtocolClientInfo + { pClientInfoCodecConfig :: CodecConfig b + } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/Run.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/Run.hs index 97e91ee33d..deb339db27 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/Run.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/Run.hs @@ -5,55 +5,63 @@ -- | Infrastructure required to run a node -- -- The definitions in this module are independent from any specific protocol. -module Ouroboros.Consensus.Node.Run ( - -- * SerialiseDisk +module Ouroboros.Consensus.Node.Run + ( -- * SerialiseDisk ImmutableDbSerialiseConstraints , LedgerDbSerialiseConstraints , SerialiseDiskConstraints , VolatileDbSerialiseConstraints + -- * SerialiseNodeToNode , SerialiseNodeToNodeConstraints (..) 
+ -- * SerialiseNodeToClient , SerialiseNodeToClientConstraints + -- * RunNode , RunNode ) where -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.CommonProtocolParams -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Ledger.SupportsPeerSelection -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Storage.ChainDB - (ImmutableDbSerialiseConstraints, SerialiseDiskConstraints, - VolatileDbSerialiseConstraints) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util (ShowProxy) -import Ouroboros.Network.Block (Serialised) +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.CommonProtocolParams +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Ledger.SupportsPeerSelection +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Storage.ChainDB + ( ImmutableDbSerialiseConstraints + , SerialiseDiskConstraints + , VolatileDbSerialiseConstraints + ) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.Serialisation +import 
Ouroboros.Consensus.Util (ShowProxy) +import Ouroboros.Network.Block (Serialised) {------------------------------------------------------------------------------- RunNode proper -------------------------------------------------------------------------------} -- | Serialisation constraints needed by the node-to-node protocols -class ( ConvertRawHash blk - , SerialiseNodeToNode blk blk - , SerialiseNodeToNode blk (Header blk) - , SerialiseNodeToNode blk (Serialised blk) - , SerialiseNodeToNode blk (SerialisedHeader blk) - , SerialiseNodeToNode blk (GenTx blk) - , SerialiseNodeToNode blk (GenTxId blk) - ) => SerialiseNodeToNodeConstraints blk where +class + ( ConvertRawHash blk + , SerialiseNodeToNode blk blk + , SerialiseNodeToNode blk (Header blk) + , SerialiseNodeToNode blk (Serialised blk) + , SerialiseNodeToNode blk (SerialisedHeader blk) + , SerialiseNodeToNode blk (GenTx blk) + , SerialiseNodeToNode blk (GenTxId blk) + ) => + SerialiseNodeToNodeConstraints blk + where -- | An upper bound on the size in bytes of the block corresponding to the -- header. This can be an overestimate, but not an underestimate. 
-- @@ -66,52 +74,57 @@ class ( ConvertRawHash blk estimateBlockSize :: Header blk -> SizeInBytes -- | Serialisation constraints needed by the node-to-client protocols -class ( Typeable blk - , ConvertRawHash blk - , SerialiseNodeToClient blk blk - , SerialiseNodeToClient blk (Serialised blk) - , SerialiseNodeToClient blk (GenTx blk) - , SerialiseNodeToClient blk (GenTxId blk) - , SerialiseNodeToClient blk SlotNo - , SerialiseNodeToClient blk (ApplyTxErr blk) - , SerialiseNodeToClient blk (SomeBlockQuery (BlockQuery blk)) - , SerialiseNodeToClient blk (LedgerConfig blk) - , SerialiseBlockQueryResult blk BlockQuery - ) => SerialiseNodeToClientConstraints blk +class + ( Typeable blk + , ConvertRawHash blk + , SerialiseNodeToClient blk blk + , SerialiseNodeToClient blk (Serialised blk) + , SerialiseNodeToClient blk (GenTx blk) + , SerialiseNodeToClient blk (GenTxId blk) + , SerialiseNodeToClient blk SlotNo + , SerialiseNodeToClient blk (ApplyTxErr blk) + , SerialiseNodeToClient blk (SomeBlockQuery (BlockQuery blk)) + , SerialiseNodeToClient blk (LedgerConfig blk) + , SerialiseBlockQueryResult blk BlockQuery + ) => + SerialiseNodeToClientConstraints blk + +class + ( LedgerSupportsProtocol blk + , InspectLedger blk + , HasHardForkHistory blk + , LedgerSupportsMempool blk + , HasTxId (GenTx blk) + , BlockSupportsLedgerQuery blk + , SupportedNetworkProtocolVersion blk + , ConfigSupportsNode blk + , ConvertRawHash blk + , CommonProtocolParams blk + , HasBinaryBlockInfo blk + , SerialiseDiskConstraints blk + , SerialiseNodeToNodeConstraints blk + , SerialiseNodeToClientConstraints blk + , LedgerSupportsPeerSelection blk + , NodeInitStorage blk + , BlockSupportsMetrics blk + , BlockSupportsDiffusionPipelining blk + , BlockSupportsSanityCheck blk + , Show (CannotForge blk) + , Show (ForgeStateInfo blk) + , Show (ForgeStateUpdateError blk) + , ShowProxy blk + , ShowProxy (ApplyTxErr blk) + , ShowProxy (GenTx blk) + , ShowProxy (Header blk) + , ShowProxy (BlockQuery blk) + , 
ShowProxy (TxId (GenTx blk)) + , (forall fp. ShowQuery (BlockQuery blk fp)) + , LedgerSupportsLedgerDB blk + ) => + RunNode blk -class ( LedgerSupportsProtocol blk - , InspectLedger blk - , HasHardForkHistory blk - , LedgerSupportsMempool blk - , HasTxId (GenTx blk) - , BlockSupportsLedgerQuery blk - , SupportedNetworkProtocolVersion blk - , ConfigSupportsNode blk - , ConvertRawHash blk - , CommonProtocolParams blk - , HasBinaryBlockInfo blk - , SerialiseDiskConstraints blk - , SerialiseNodeToNodeConstraints blk - , SerialiseNodeToClientConstraints blk - , LedgerSupportsPeerSelection blk - , NodeInitStorage blk - , BlockSupportsMetrics blk - , BlockSupportsDiffusionPipelining blk - , BlockSupportsSanityCheck blk - , Show (CannotForge blk) - , Show (ForgeStateInfo blk) - , Show (ForgeStateUpdateError blk) - , ShowProxy blk - , ShowProxy (ApplyTxErr blk) - , ShowProxy (GenTx blk) - , ShowProxy (Header blk) - , ShowProxy (BlockQuery blk) - , ShowProxy (TxId (GenTx blk)) - , (forall fp. ShowQuery (BlockQuery blk fp)) - , LedgerSupportsLedgerDB blk - ) => RunNode blk - -- This class is intentionally empty. It is not necessarily compositional - ie - -- the instance for 'HardForkBlock' might do more than merely delegate to the - -- instance for each era - but we want as many of its super classes as - -- possible to rely on compositional instances when possible. Not putting any - -- methods here helps encourage that. +-- This class is intentionally empty. It is not necessarily compositional - ie +-- the instance for 'HardForkBlock' might do more than merely delegate to the +-- instance for each era - but we want as many of its super classes as +-- possible to rely on compositional instances when possible. Not putting any +-- methods here helps encourage that. 
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/Serialisation.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/Serialisation.hs index d537dae7c1..6520aae47c 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/Serialisation.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Node/Serialisation.hs @@ -19,31 +19,35 @@ -- encoder from the decoder, because the reasons don't apply: we always need -- both directions and we don't have access to the bytestrings that could be -- used for the annotations (we use CBOR-in-CBOR in those cases). -module Ouroboros.Consensus.Node.Serialisation ( - SerialiseBlockQueryResult (..) +module Ouroboros.Consensus.Node.Serialisation + ( SerialiseBlockQueryResult (..) , SerialiseNodeToClient (..) , SerialiseNodeToNode (..) , SerialiseResult (..) + -- * Defaults , defaultDecodeCBORinCBOR , defaultEncodeCBORinCBOR + -- * Re-exported for convenience , Some (..) 
) where -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding) -import Codec.Serialise (Serialise (decode, encode)) -import Data.Kind -import Data.SOP.BasicFunctors -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsMempool (ApplyTxErr, - GenTxId) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (Some (..)) -import Ouroboros.Network.Block (unwrapCBORinCBOR, wrapCBORinCBOR) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding) +import Codec.Serialise (Serialise (decode, encode)) +import Data.Kind +import Data.SOP.BasicFunctors +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsMempool + ( ApplyTxErr + , GenTxId + ) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (Some (..)) +import Ouroboros.Network.Block (unwrapCBORinCBOR, wrapCBORinCBOR) {------------------------------------------------------------------------------- NodeToNode @@ -58,14 +62,14 @@ class SerialiseNodeToNode blk a where -- When the config is not needed, we provide a default, unversioned -- implementation using 'Serialise' - default encodeNodeToNode - :: Serialise a - => CodecConfig blk -> BlockNodeToNodeVersion blk -> a -> Encoding + default encodeNodeToNode :: + Serialise a => + CodecConfig blk -> BlockNodeToNodeVersion blk -> a -> Encoding encodeNodeToNode _ccfg _version = encode - default decodeNodeToNode - :: Serialise a - => CodecConfig blk -> BlockNodeToNodeVersion blk -> forall s. Decoder s a + default decodeNodeToNode :: + Serialise a => + CodecConfig blk -> BlockNodeToNodeVersion blk -> forall s. 
Decoder s a decodeNodeToNode _ccfg _version = decode {------------------------------------------------------------------------------- @@ -81,14 +85,14 @@ class SerialiseNodeToClient blk a where -- When the config is not needed, we provide a default, unversioned -- implementation using 'Serialise' - default encodeNodeToClient - :: Serialise a - => CodecConfig blk -> BlockNodeToClientVersion blk -> a -> Encoding + default encodeNodeToClient :: + Serialise a => + CodecConfig blk -> BlockNodeToClientVersion blk -> a -> Encoding encodeNodeToClient _ccfg _version = encode - default decodeNodeToClient - :: Serialise a - => CodecConfig blk -> BlockNodeToClientVersion blk -> forall s. Decoder s a + default decodeNodeToClient :: + Serialise a => + CodecConfig blk -> BlockNodeToClientVersion blk -> forall s. Decoder s a decodeNodeToClient _ccfg _version = decode {------------------------------------------------------------------------------- @@ -101,18 +105,20 @@ class SerialiseNodeToClient blk a where -- 'NodeToClientVersion' argument. type SerialiseResult :: Type -> (Type -> Type -> Type) -> Constraint class SerialiseResult blk query where - encodeResult - :: forall result. - CodecConfig blk - -> BlockNodeToClientVersion blk - -> query blk result - -> result -> Encoding - decodeResult - :: forall result. - CodecConfig blk - -> BlockNodeToClientVersion blk - -> query blk result - -> forall s. Decoder s result + encodeResult :: + forall result. + CodecConfig blk -> + BlockNodeToClientVersion blk -> + query blk result -> + result -> + Encoding + decodeResult :: + forall result. + CodecConfig blk -> + BlockNodeToClientVersion blk -> + query blk result -> + forall s. + Decoder s result -- | How to serialise the @result@ of a block query. -- @@ -120,18 +126,20 @@ class SerialiseResult blk query where -- 'NodeToClientVersion' argument. 
type SerialiseBlockQueryResult :: Type -> (Type -> k -> Type -> Type) -> Constraint class SerialiseBlockQueryResult blk query where - encodeBlockQueryResult - :: forall fp result. - CodecConfig blk - -> BlockNodeToClientVersion blk - -> query blk fp result - -> result -> Encoding - decodeBlockQueryResult - :: forall fp result. - CodecConfig blk - -> BlockNodeToClientVersion blk - -> query blk fp result - -> forall s. Decoder s result + encodeBlockQueryResult :: + forall fp result. + CodecConfig blk -> + BlockNodeToClientVersion blk -> + query blk fp result -> + result -> + Encoding + decodeBlockQueryResult :: + forall fp result. + CodecConfig blk -> + BlockNodeToClientVersion blk -> + query blk fp result -> + forall s. + Decoder s result {------------------------------------------------------------------------------- Defaults @@ -153,20 +161,26 @@ defaultDecodeCBORinCBOR = unwrapCBORinCBOR (const <$> decode) Forwarding instances -------------------------------------------------------------------------------} -deriving newtype instance SerialiseNodeToNode blk blk - => SerialiseNodeToNode blk (I blk) +deriving newtype instance + SerialiseNodeToNode blk blk => + SerialiseNodeToNode blk (I blk) -deriving newtype instance SerialiseNodeToClient blk blk - => SerialiseNodeToClient blk (I blk) +deriving newtype instance + SerialiseNodeToClient blk blk => + SerialiseNodeToClient blk (I blk) -deriving newtype instance SerialiseNodeToNode blk (GenTxId blk) - => SerialiseNodeToNode blk (WrapGenTxId blk) +deriving newtype instance + SerialiseNodeToNode blk (GenTxId blk) => + SerialiseNodeToNode blk (WrapGenTxId blk) -deriving newtype instance SerialiseNodeToClient blk (GenTxId blk) - => SerialiseNodeToClient blk (WrapGenTxId blk) +deriving newtype instance + SerialiseNodeToClient blk (GenTxId blk) => + SerialiseNodeToClient blk (WrapGenTxId blk) -deriving newtype instance SerialiseNodeToClient blk (ApplyTxErr blk) - => SerialiseNodeToClient blk (WrapApplyTxErr blk) +deriving 
newtype instance + SerialiseNodeToClient blk (ApplyTxErr blk) => + SerialiseNodeToClient blk (WrapApplyTxErr blk) -deriving newtype instance SerialiseNodeToClient blk (LedgerConfig blk) - => SerialiseNodeToClient blk (WrapLedgerConfig blk) +deriving newtype instance + SerialiseNodeToClient blk (LedgerConfig blk) => + SerialiseNodeToClient blk (WrapLedgerConfig blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/NodeId.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/NodeId.hs index 9c5b37cc14..b3460bcfcd 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/NodeId.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/NodeId.hs @@ -4,8 +4,8 @@ {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.NodeId ( - -- * Node IDs +module Ouroboros.Consensus.NodeId + ( -- * Node IDs CoreNodeId (..) , NodeId (..) , decodeNodeId @@ -13,17 +13,17 @@ module Ouroboros.Consensus.NodeId ( , fromCoreNodeId ) where -import Cardano.Binary -import qualified Codec.CBOR.Decoding as CBOR -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (Serialise (..)) -import Data.Hashable -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Util.Condense (Condense (..)) -import Ouroboros.Network.Util.ShowProxy (ShowProxy (..)) -import Quiet +import Cardano.Binary +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (Serialise (..)) +import Data.Hashable +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Util.Condense (Condense (..)) +import Ouroboros.Network.Util.ShowProxy (ShowProxy (..)) +import Quiet {------------------------------------------------------------------------------- Node IDs @@ -31,8 +31,9 @@ import Quiet -- TODO: It is not at all clear that this 
makes any sense anymore. The network -- layer does not use or provide node ids (it uses addresses). -data NodeId = CoreId !CoreNodeId - | RelayId !Word64 +data NodeId + = CoreId !CoreNodeId + | RelayId !Word64 deriving (Eq, Ord, Show, Generic, NoThunks) instance FromCBOR NodeId where @@ -42,12 +43,12 @@ instance FromCBOR NodeId where case (len, tag) of (2, 0) -> CoreId <$> fromCBOR @CoreNodeId (2, 1) -> RelayId <$> fromCBOR @Word64 - _ -> fail $ "NodeId: unknown (len, tag) " ++ show (len, tag) + _ -> fail $ "NodeId: unknown (len, tag) " ++ show (len, tag) instance ToCBOR NodeId where - toCBOR nodeId = case nodeId of - CoreId x -> encodeListLen 2 <> encodeWord8 0 <> toCBOR x - RelayId x -> encodeListLen 2 <> encodeWord8 1 <> toCBOR x + toCBOR nodeId = case nodeId of + CoreId x -> encodeListLen 2 <> encodeWord8 0 <> toCBOR x + RelayId x -> encodeListLen 2 <> encodeWord8 1 <> toCBOR x instance Serialise NodeId where decode = fromCBOR @@ -55,15 +56,15 @@ instance Serialise NodeId where instance Condense NodeId where condense (CoreId (CoreNodeId i)) = "c" ++ show i - condense (RelayId i ) = "r" ++ show i + condense (RelayId i) = "r" ++ show i instance Hashable NodeId -- | Core node ID -newtype CoreNodeId = CoreNodeId { - unCoreNodeId :: Word64 - } - deriving stock (Eq, Ord, Generic) +newtype CoreNodeId = CoreNodeId + { unCoreNodeId :: Word64 + } + deriving stock (Eq, Ord, Generic) deriving newtype (Condense, FromCBOR, ToCBOR, NoThunks) deriving Show via Quiet CoreNodeId @@ -73,12 +74,14 @@ instance ShowProxy NodeId where showProxy _ = "NodeId" encodeNodeId :: NodeId -> CBOR.Encoding -encodeNodeId (CoreId (CoreNodeId wo)) = CBOR.encodeListLen 2 - <> CBOR.encodeWord 0 - <> CBOR.encodeWord64 wo -encodeNodeId (RelayId wo) = CBOR.encodeListLen 2 - <> CBOR.encodeWord 1 - <> CBOR.encodeWord64 wo +encodeNodeId (CoreId (CoreNodeId wo)) = + CBOR.encodeListLen 2 + <> CBOR.encodeWord 0 + <> CBOR.encodeWord64 wo +encodeNodeId (RelayId wo) = + CBOR.encodeListLen 2 + <> 
CBOR.encodeWord 1 + <> CBOR.encodeWord64 wo decodeNodeId :: CBOR.Decoder s NodeId decodeNodeId = do diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/Abstract.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/Abstract.hs index b19232caa2..9e2f55b70c 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/Abstract.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/Abstract.hs @@ -8,28 +8,31 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Protocol.Abstract ( - -- * Abstract definition of the Ouroboros protocol +module Ouroboros.Consensus.Protocol.Abstract + ( -- * Abstract definition of the Ouroboros protocol ConsensusConfig , ConsensusProtocol (..) + -- * Chain order , ChainOrder (..) , SimpleChainOrder (..) + -- * Translation , TranslateProto (..) + -- * Convenience re-exports , SecurityParam (..) ) where -import Control.Monad.Except -import Data.Kind (Type) -import Data.Proxy (Proxy) -import Data.Typeable (Typeable) -import GHC.Stack -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Ticked +import Control.Monad.Except +import Data.Kind (Type) +import Data.Proxy (Proxy) +import Data.Typeable (Typeable) +import GHC.Stack +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Ticked -- | Static configuration required to run the consensus protocol -- @@ -46,30 +49,33 @@ data family ConsensusConfig p :: Type -- -- This class encodes the part that is independent from any particular -- block representation. 
-class ( Show (ChainDepState p) - , Show (ValidationErr p) - , Show (SelectView p) - , Show (LedgerView p) - , Eq (ChainDepState p) - , Eq (ValidationErr p) - , ChainOrder (SelectView p) - , NoThunks (ConsensusConfig p) - , NoThunks (ChainDepState p) - , NoThunks (ValidationErr p) - , NoThunks (SelectView p) - , Typeable p -- so that p can appear in exceptions - ) => ConsensusProtocol p where +class + ( Show (ChainDepState p) + , Show (ValidationErr p) + , Show (SelectView p) + , Show (LedgerView p) + , Eq (ChainDepState p) + , Eq (ValidationErr p) + , ChainOrder (SelectView p) + , NoThunks (ConsensusConfig p) + , NoThunks (ChainDepState p) + , NoThunks (ValidationErr p) + , NoThunks (SelectView p) + , Typeable p -- so that p can appear in exceptions + ) => + ConsensusProtocol p + where -- | Protocol-specific state -- -- NOTE: This chain is blockchain dependent, i.e., updated when new blocks -- come in (more precisely, new /headers/), and subject to rollback. - type family ChainDepState p :: Type + type ChainDepState p :: Type -- | Evidence that a node /is/ the leader - type family IsLeader p :: Type + type IsLeader p :: Type -- | Evidence that we /can/ be a leader - type family CanBeLeader p :: Type + type CanBeLeader p :: Type -- | View on a header required for chain selection -- @@ -81,7 +87,8 @@ class ( Show (ChainDepState p) -- on the headers at the tips of those chains: chain A is strictly preferred -- over chain B whenever A's select view is preferred over B's select view -- according to the 'ChainOrder' instance. - type family SelectView p :: Type + type SelectView p :: Type + type SelectView p = BlockNo -- | Projection of the ledger state the Ouroboros protocol needs access to @@ -123,40 +130,43 @@ class ( Show (ChainDepState p) -- in the consensus layer since that depends on the computation (and sampling) -- of entropy, which is done consensus side, not ledger side (the reward -- calculation does not depend on this). 
- type family LedgerView p :: Type + type LedgerView p :: Type -- | Validation errors - type family ValidationErr p :: Type + type ValidationErr p :: Type -- | View on a header required to validate it - type family ValidateView p :: Type + type ValidateView p :: Type -- | Check if a node is the leader - checkIsLeader :: HasCallStack - => ConsensusConfig p - -> CanBeLeader p - -> SlotNo - -> Ticked (ChainDepState p) - -> Maybe (IsLeader p) + checkIsLeader :: + HasCallStack => + ConsensusConfig p -> + CanBeLeader p -> + SlotNo -> + Ticked (ChainDepState p) -> + Maybe (IsLeader p) -- | Tick the 'ChainDepState' -- -- We pass the 'LedgerView' to 'tickChainDepState'. Functions that /take/ a -- ticked 'ChainDepState' are not separately passed a ledger view; protocols -- that require it, can include it in their ticked 'ChainDepState' type. - tickChainDepState :: ConsensusConfig p - -> LedgerView p - -> SlotNo - -> ChainDepState p - -> Ticked (ChainDepState p) + tickChainDepState :: + ConsensusConfig p -> + LedgerView p -> + SlotNo -> + ChainDepState p -> + Ticked (ChainDepState p) -- | Apply a header - updateChainDepState :: HasCallStack - => ConsensusConfig p - -> ValidateView p - -> SlotNo - -> Ticked (ChainDepState p) - -> Except (ValidationErr p) (ChainDepState p) + updateChainDepState :: + HasCallStack => + ConsensusConfig p -> + ValidateView p -> + SlotNo -> + Ticked (ChainDepState p) -> + Except (ValidationErr p) (ChainDepState p) -- | Re-apply a header to the same 'ChainDepState' we have been able to -- successfully apply to before. @@ -169,12 +179,13 @@ class ( Show (ChainDepState p) -- It is worth noting that since we already know that the header is valid -- w.r.t. the provided 'ChainDepState', no validation checks should be -- performed. 
- reupdateChainDepState :: HasCallStack - => ConsensusConfig p - -> ValidateView p - -> SlotNo - -> Ticked (ChainDepState p) - -> ChainDepState p + reupdateChainDepState :: + HasCallStack => + ConsensusConfig p -> + ValidateView p -> + SlotNo -> + Ticked (ChainDepState p) -> + ChainDepState p -- | We require that protocols support a @k@ security parameter protocolSecurityParam :: ConsensusConfig p -> SecurityParam @@ -183,13 +194,13 @@ class ( Show (ChainDepState p) class TranslateProto protoFrom protoTo where -- | Translate the ledger view. translateLedgerView :: - Proxy (protoFrom, protoTo) -> LedgerView protoFrom -> LedgerView protoTo + Proxy (protoFrom, protoTo) -> LedgerView protoFrom -> LedgerView protoTo + translateChainDepState :: Proxy (protoFrom, protoTo) -> ChainDepState protoFrom -> ChainDepState protoTo -- | Degenerate instance - we may always translate from a protocol to itself. -instance TranslateProto singleProto singleProto - where +instance TranslateProto singleProto singleProto where translateLedgerView _ = id translateChainDepState _ = id @@ -242,10 +253,12 @@ class Ord sv => ChainOrder sv where -- Intuitively, this means that only the logic for breaking ties between -- chains with equal block number is customizable via this class. preferCandidate :: - ChainOrderConfig sv - -> sv -- ^ Tip of our chain - -> sv -- ^ Tip of the candidate - -> Bool + ChainOrderConfig sv -> + -- | Tip of our chain + sv -> + -- | Tip of the candidate + sv -> + Bool -- | A @DerivingVia@ helper to implement 'preferCandidate' in terms of the 'Ord' -- instance. 
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/BFT.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/BFT.hs index 4ac3e25f46..a1c1147521 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/BFT.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/BFT.hs @@ -11,79 +11,86 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} -module Ouroboros.Consensus.Protocol.BFT ( - Bft +module Ouroboros.Consensus.Protocol.BFT + ( Bft , BftFields (..) , BftParams (..) , BftValidationErr (..) , forgeBftFields + -- * Classes , BftCrypto (..) , BftMockCrypto , BftStandardCrypto , BftValidateView (..) , bftValidateView + -- * Type instances , ConsensusConfig (..) ) where -import Cardano.Crypto.DSIGN -import Control.Monad.Except -import Data.Kind (Type) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Proxy -import Data.Typeable -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId (CoreNodeId (..), NodeId (..)) -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.Signed -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util.Condense +import Cardano.Crypto.DSIGN +import Control.Monad.Except +import Data.Kind (Type) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Proxy +import Data.Typeable +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId (CoreNodeId (..), NodeId (..)) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.Signed +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util.Condense 
{------------------------------------------------------------------------------- Fields BFT requires in a block -------------------------------------------------------------------------------} -data BftFields c toSign = BftFields { - bftSignature :: !(SignedDSIGN (BftDSIGN c) toSign) - } - deriving (Generic) +data BftFields c toSign = BftFields + { bftSignature :: !(SignedDSIGN (BftDSIGN c) toSign) + } + deriving Generic deriving instance BftCrypto c => Show (BftFields c toSign) -deriving instance BftCrypto c => Eq (BftFields c toSign) +deriving instance BftCrypto c => Eq (BftFields c toSign) -- We use the generic implementation, but override 'showTypeOf' to show @c@ instance (BftCrypto c, Typeable toSign) => NoThunks (BftFields c toSign) where showTypeOf _ = show $ typeRep (Proxy @(BftFields c)) -data BftValidateView c = - forall signed. Signable (BftDSIGN c) signed - => BftValidateView (BftFields c signed) signed +data BftValidateView c + = forall signed. + Signable (BftDSIGN c) signed => + BftValidateView (BftFields c signed) signed -- | Convenience constructor for 'BftValidateView' -bftValidateView :: ( SignedHeader hdr - , Signable (BftDSIGN c) (Signed hdr) - ) - => (hdr -> BftFields c (Signed hdr)) - -> (hdr -> BftValidateView c) +bftValidateView :: + ( SignedHeader hdr + , Signable (BftDSIGN c) (Signed hdr) + ) => + (hdr -> BftFields c (Signed hdr)) -> + (hdr -> BftValidateView c) bftValidateView getFields hdr = - BftValidateView (getFields hdr) (headerSigned hdr) - -forgeBftFields :: ( BftCrypto c - , Signable (BftDSIGN c) toSign - ) - => ConsensusConfig (Bft c) - -> toSign - -> BftFields c toSign -forgeBftFields BftConfig{..} toSign = let - signature = signedDSIGN () toSign bftSignKey - in BftFields { - bftSignature = signature - } + BftValidateView (getFields hdr) (headerSigned hdr) + +forgeBftFields :: + ( BftCrypto c + , Signable (BftDSIGN c) toSign + ) => + ConsensusConfig (Bft c) -> + toSign -> + BftFields c toSign +forgeBftFields BftConfig{..} toSign 
= + let + signature = signedDSIGN () toSign bftSignKey + in + BftFields + { bftSignature = signature + } {------------------------------------------------------------------------------- Protocol proper @@ -100,66 +107,67 @@ forgeBftFields BftConfig{..} toSign = let data Bft c -- | Protocol parameters -data BftParams = BftParams { - -- | Security parameter - -- - -- Although the protocol proper does not have such a security parameter, - -- we insist on it. - bftSecurityParam :: !SecurityParam - - -- | Number of core nodes - , bftNumNodes :: !NumCoreNodes - } +data BftParams = BftParams + { bftSecurityParam :: !SecurityParam + -- ^ Security parameter + -- + -- Although the protocol proper does not have such a security parameter, + -- we insist on it. + , bftNumNodes :: !NumCoreNodes + -- ^ Number of core nodes + } deriving (Generic, NoThunks) -- | (Static) node configuration -data instance ConsensusConfig (Bft c) = BftConfig { - bftParams :: !BftParams - , bftSignKey :: !(SignKeyDSIGN (BftDSIGN c)) - , bftVerKeys :: !(Map NodeId (VerKeyDSIGN (BftDSIGN c))) - } - deriving (Generic) +data instance ConsensusConfig (Bft c) = BftConfig + { bftParams :: !BftParams + , bftSignKey :: !(SignKeyDSIGN (BftDSIGN c)) + , bftVerKeys :: !(Map NodeId (VerKeyDSIGN (BftDSIGN c))) + } + deriving Generic instance BftCrypto c => ConsensusProtocol (Bft c) where type ValidationErr (Bft c) = BftValidationErr - type ValidateView (Bft c) = BftValidateView c - type LedgerView (Bft c) = () - type IsLeader (Bft c) = () + type ValidateView (Bft c) = BftValidateView c + type LedgerView (Bft c) = () + type IsLeader (Bft c) = () type ChainDepState (Bft c) = () - type CanBeLeader (Bft c) = CoreNodeId + type CanBeLeader (Bft c) = CoreNodeId protocolSecurityParam = bftSecurityParam . 
bftParams checkIsLeader BftConfig{..} (CoreNodeId i) (SlotNo n) _ = - if n `mod` numCoreNodes == i + if n `mod` numCoreNodes == i then Just () else Nothing - where - BftParams{..} = bftParams - NumCoreNodes numCoreNodes = bftNumNodes - - updateChainDepState BftConfig{..} - (BftValidateView BftFields{..} signed) - (SlotNo n) - _ = + where + BftParams{..} = bftParams + NumCoreNodes numCoreNodes = bftNumNodes + + updateChainDepState + BftConfig{..} + (BftValidateView BftFields{..} signed) + (SlotNo n) + _ = -- TODO: Should deal with unknown node IDs case verifySignedDSIGN - () - (bftVerKeys Map.! expectedLeader) - signed - bftSignature of + () + (bftVerKeys Map.! expectedLeader) + signed + bftSignature of Right () -> return () Left err -> throwError $ BftInvalidSignature err - where + where BftParams{..} = bftParams expectedLeader = CoreId $ CoreNodeId (n `mod` numCoreNodes) NumCoreNodes numCoreNodes = bftNumNodes reupdateChainDepState _ _ _ _ = () - tickChainDepState _ _ _ _ = TickedTrivial + tickChainDepState _ _ _ _ = TickedTrivial instance BftCrypto c => NoThunks (ConsensusConfig (Bft c)) - -- use generic instance + +-- use generic instance {------------------------------------------------------------------------------- BFT specific types @@ -173,13 +181,16 @@ data BftValidationErr = BftInvalidSignature String -------------------------------------------------------------------------------} -- | Crypto primitives required by BFT -class ( Typeable c - , DSIGNAlgorithm (BftDSIGN c) - , Condense (SigDSIGN (BftDSIGN c)) - , NoThunks (SigDSIGN (BftDSIGN c)) - , ContextDSIGN (BftDSIGN c) ~ () - ) => BftCrypto c where - type family BftDSIGN c :: Type +class + ( Typeable c + , DSIGNAlgorithm (BftDSIGN c) + , Condense (SigDSIGN (BftDSIGN c)) + , NoThunks (SigDSIGN (BftDSIGN c)) + , ContextDSIGN (BftDSIGN c) ~ () + ) => + BftCrypto c + where + type BftDSIGN c :: Type data BftStandardCrypto data BftMockCrypto diff --git 
a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/LeaderSchedule.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/LeaderSchedule.hs index d827763d71..3bc2a86feb 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/LeaderSchedule.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/LeaderSchedule.hs @@ -4,19 +4,19 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Protocol.LeaderSchedule ( - LeaderSchedule (..) +module Ouroboros.Consensus.Protocol.LeaderSchedule + ( LeaderSchedule (..) , leaderScheduleFor ) where -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Set (Set) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.NodeId (CoreNodeId (..), fromCoreNodeId) -import Ouroboros.Consensus.Util.Condense (Condense (..)) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Set (Set) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.NodeId (CoreNodeId (..), fromCoreNodeId) +import Ouroboros.Consensus.Util.Condense (Condense (..)) {------------------------------------------------------------------------------- Leader schedule @@ -27,27 +27,28 @@ import Ouroboros.Consensus.Util.Condense (Condense (..)) inspectable and shrinkable manner. 
-------------------------------------------------------------------------------} -newtype LeaderSchedule = LeaderSchedule { - getLeaderSchedule :: Map SlotNo [CoreNodeId] - } - deriving stock (Show, Eq, Ord, Generic) - deriving anyclass (NoThunks) +newtype LeaderSchedule = LeaderSchedule + { getLeaderSchedule :: Map SlotNo [CoreNodeId] + } + deriving stock (Show, Eq, Ord, Generic) + deriving anyclass NoThunks -- | The 'Slots' a given node is supposed to lead in leaderScheduleFor :: CoreNodeId -> LeaderSchedule -> Set SlotNo leaderScheduleFor nid = - Map.keysSet + Map.keysSet . Map.filter (elem nid) . getLeaderSchedule instance Semigroup LeaderSchedule where - LeaderSchedule l <> LeaderSchedule r = - LeaderSchedule $ - Map.unionWith comb l r - where - comb ls rs = ls ++ filter (`notElem` ls) rs + LeaderSchedule l <> LeaderSchedule r = + LeaderSchedule $ + Map.unionWith comb l r + where + comb ls rs = ls ++ filter (`notElem` ls) rs instance Condense LeaderSchedule where - condense (LeaderSchedule m) = condense - $ map (\(s, ls) -> (s, map fromCoreNodeId ls)) - $ Map.toList m + condense (LeaderSchedule m) = + condense $ + map (\(s, ls) -> (s, map fromCoreNodeId ls)) $ + Map.toList m diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/MockChainSel.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/MockChainSel.hs index 8a8334383c..b5ea3aca46 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/MockChainSel.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/MockChainSel.hs @@ -1,17 +1,17 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TupleSections #-} -module Ouroboros.Consensus.Protocol.MockChainSel ( - selectChain +module Ouroboros.Consensus.Protocol.MockChainSel + ( selectChain , selectUnvalidatedChain ) where -import Data.List (sortOn) -import Data.Maybe (listToMaybe, mapMaybe) -import Data.Ord (Down (..)) -import 
Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Network.Mock.Chain (Chain) -import qualified Ouroboros.Network.Mock.Chain as Chain +import Data.List (sortOn) +import Data.Maybe (listToMaybe, mapMaybe) +import Data.Ord (Down (..)) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Network.Mock.Chain (Chain) +import Ouroboros.Network.Mock.Chain qualified as Chain {------------------------------------------------------------------------------- Chain selection @@ -29,45 +29,50 @@ import qualified Ouroboros.Network.Mock.Chain as Chain -- somehow fail if the selected chain turns out to be invalid.) -- -- Returns 'Nothing' if we stick with our current chain. -selectChain :: forall proxy p hdr l. ConsensusProtocol p - => proxy p - -> ChainOrderConfig (SelectView p) - -> (hdr -> SelectView p) - -> Chain hdr -- ^ Our chain - -> [(Chain hdr, l)] -- ^ Upstream chains - -> Maybe (Chain hdr, l) +selectChain :: + forall proxy p hdr l. + ConsensusProtocol p => + proxy p -> + ChainOrderConfig (SelectView p) -> + (hdr -> SelectView p) -> + -- | Our chain + Chain hdr -> + -- | Upstream chains + [(Chain hdr, l)] -> + Maybe (Chain hdr, l) selectChain _ cfg view ours = - listToMaybe + listToMaybe . map snd . sortOn (Down . fst) . mapMaybe selectPreferredCandidate - where - -- | Only retain a candidate if it is preferred over the current chain. As - -- only a non-empty chain can be preferred over the current chain, we can - -- extract the 'SelectView' of the tip of the candidate. - selectPreferredCandidate :: - (Chain hdr, l) - -> Maybe (SelectView p, (Chain hdr, l)) - selectPreferredCandidate x@(cand, _) = - case (Chain.head ours, Chain.head cand) of - (Nothing, Just candTip) - -> Just (view candTip, x) - (Just ourTip, Just candTip) - | let candView = view candTip - , preferCandidate cfg (view ourTip) candView - -> Just (candView, x) - _otherwise - -> Nothing + where + -- \| Only retain a candidate if it is preferred over the current chain. 
As + -- only a non-empty chain can be preferred over the current chain, we can + -- extract the 'SelectView' of the tip of the candidate. + selectPreferredCandidate :: + (Chain hdr, l) -> + Maybe (SelectView p, (Chain hdr, l)) + selectPreferredCandidate x@(cand, _) = + case (Chain.head ours, Chain.head cand) of + (Nothing, Just candTip) -> + Just (view candTip, x) + (Just ourTip, Just candTip) + | let candView = view candTip + , preferCandidate cfg (view ourTip) candView -> + Just (candView, x) + _otherwise -> + Nothing -- | Chain selection on unvalidated chains -selectUnvalidatedChain :: ConsensusProtocol p - => proxy p - -> ChainOrderConfig (SelectView p) - -> (hdr -> SelectView p) - -> Chain hdr - -> [Chain hdr] - -> Maybe (Chain hdr) +selectUnvalidatedChain :: + ConsensusProtocol p => + proxy p -> + ChainOrderConfig (SelectView p) -> + (hdr -> SelectView p) -> + Chain hdr -> + [Chain hdr] -> + Maybe (Chain hdr) selectUnvalidatedChain p cfg view ours = - fmap fst + fmap fst . selectChain p cfg view ours - . map (, ()) + . map (,()) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/ModChainSel.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/ModChainSel.hs index 5651c5c7f9..4f37fd4069 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/ModChainSel.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/ModChainSel.hs @@ -2,43 +2,47 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Protocol.ModChainSel ( - ModChainSel +module Ouroboros.Consensus.Protocol.ModChainSel + ( ModChainSel + -- * Type family instances , ConsensusConfig (..) 
) where -import Data.Typeable (Typeable) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Protocol.Abstract +import Data.Typeable (Typeable) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Protocol.Abstract data ModChainSel p s -newtype instance ConsensusConfig (ModChainSel p s) = McsConsensusConfig { - mcsConfigP :: ConsensusConfig p - } - deriving (Generic) - -instance ( ConsensusProtocol p - , ChainOrder s - , Show s - , Typeable s - , NoThunks s - ) => ConsensusProtocol (ModChainSel p s) where - type SelectView (ModChainSel p s) = s - - type ChainDepState (ModChainSel p s) = ChainDepState p - type IsLeader (ModChainSel p s) = IsLeader p - type CanBeLeader (ModChainSel p s) = CanBeLeader p - type LedgerView (ModChainSel p s) = LedgerView p - type ValidationErr (ModChainSel p s) = ValidationErr p - type ValidateView (ModChainSel p s) = ValidateView p - - checkIsLeader = checkIsLeader . mcsConfigP - tickChainDepState = tickChainDepState . mcsConfigP - updateChainDepState = updateChainDepState . mcsConfigP - reupdateChainDepState = reupdateChainDepState . mcsConfigP - protocolSecurityParam = protocolSecurityParam . mcsConfigP +newtype instance ConsensusConfig (ModChainSel p s) = McsConsensusConfig + { mcsConfigP :: ConsensusConfig p + } + deriving Generic + +instance + ( ConsensusProtocol p + , ChainOrder s + , Show s + , Typeable s + , NoThunks s + ) => + ConsensusProtocol (ModChainSel p s) + where + type SelectView (ModChainSel p s) = s + + type ChainDepState (ModChainSel p s) = ChainDepState p + type IsLeader (ModChainSel p s) = IsLeader p + type CanBeLeader (ModChainSel p s) = CanBeLeader p + type LedgerView (ModChainSel p s) = LedgerView p + type ValidationErr (ModChainSel p s) = ValidationErr p + type ValidateView (ModChainSel p s) = ValidateView p + + checkIsLeader = checkIsLeader . mcsConfigP + tickChainDepState = tickChainDepState . 
mcsConfigP + updateChainDepState = updateChainDepState . mcsConfigP + reupdateChainDepState = reupdateChainDepState . mcsConfigP + protocolSecurityParam = protocolSecurityParam . mcsConfigP instance ConsensusProtocol p => NoThunks (ConsensusConfig (ModChainSel p s)) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT.hs index b157679168..aadbb9ffc0 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT.hs @@ -19,8 +19,8 @@ {-# OPTIONS_GHC -Wno-orphans #-} #endif -module Ouroboros.Consensus.Protocol.PBFT ( - PBft +module Ouroboros.Consensus.Protocol.PBFT + ( PBft , PBftCanBeLeader (..) , PBftFields (..) , PBftIsLeader (..) @@ -31,8 +31,10 @@ module Ouroboros.Consensus.Protocol.PBFT ( , mkPBftSelectView , pbftWindowExceedsThreshold , pbftWindowSize + -- * Forging , forgePBftFields + -- * Classes , PBftCrypto (..) , PBftMockCrypto @@ -40,92 +42,97 @@ module Ouroboros.Consensus.Protocol.PBFT ( , PBftValidateView (..) , pbftValidateBoundary , pbftValidateRegular + -- * CannotForge , PBftCannotForge (..) , pbftCheckCanForge + -- * Type instances , ConsensusConfig (..) , Ticked (..) + -- * Exported for tracing errors , PBftValidationErr (..) 
) where -import Cardano.Crypto.DSIGN.Class -import Cardano.Ledger.BaseTypes (unNonZero) -import Codec.Serialise (Serialise (..)) -import qualified Control.Exception as Exn -import Control.Monad (unless) -import Control.Monad.Except (throwError) -import Control.ResourceRegistry () -import Data.Bifunctor (first) -import Data.Bimap (Bimap) -import qualified Data.Bimap as Bimap -import Data.Text (Text) -import qualified Data.Text as Text -import Data.Typeable (Typeable) -import Data.Word (Word64) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.PBFT.Crypto -import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) -import qualified Ouroboros.Consensus.Protocol.PBFT.State as S -import Ouroboros.Consensus.Protocol.Signed -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.Orphans () +import Cardano.Crypto.DSIGN.Class +import Cardano.Ledger.BaseTypes (unNonZero) +import Codec.Serialise (Serialise (..)) +import Control.Exception qualified as Exn +import Control.Monad (unless) +import Control.Monad.Except (throwError) +import Control.ResourceRegistry () +import Data.Bifunctor (first) +import Data.Bimap (Bimap) +import Data.Bimap qualified as Bimap +import Data.Text (Text) +import Data.Text qualified as Text +import Data.Typeable (Typeable) +import Data.Word (Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.PBFT.Crypto +import Ouroboros.Consensus.Protocol.PBFT.State (PBftState) +import Ouroboros.Consensus.Protocol.PBFT.State qualified as S +import 
Ouroboros.Consensus.Protocol.Signed +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.Orphans () {------------------------------------------------------------------------------- Fields that PBFT requires present in a block -------------------------------------------------------------------------------} -data PBftFields c toSign = PBftFields { - -- | The actual issuer of a block - pbftIssuer :: VerKeyDSIGN (PBftDSIGN c) - -- | The stakeholder on whose behalf the block is being issued - , pbftGenKey :: VerKeyDSIGN (PBftDSIGN c) - , pbftSignature :: SignedDSIGN (PBftDSIGN c) toSign - } - deriving (Generic) +data PBftFields c toSign = PBftFields + { pbftIssuer :: VerKeyDSIGN (PBftDSIGN c) + -- ^ The actual issuer of a block + , pbftGenKey :: VerKeyDSIGN (PBftDSIGN c) + -- ^ The stakeholder on whose behalf the block is being issued + , pbftSignature :: SignedDSIGN (PBftDSIGN c) toSign + } + deriving Generic deriving instance PBftCrypto c => Show (PBftFields c toSign) -deriving instance PBftCrypto c => Eq (PBftFields c toSign) +deriving instance PBftCrypto c => Eq (PBftFields c toSign) instance (PBftCrypto c, Typeable toSign) => NoThunks (PBftFields c toSign) - -- use generic instance + +-- use generic instance -- | Part of the header that we validate -data PBftValidateView c = - -- | Regular block - -- - -- Regular blocks are signed, and so we need to validate them. - -- We also need to know the slot number of the block - forall signed. Signable (PBftDSIGN c) signed - => PBftValidateRegular - (PBftFields c signed) - signed - (ContextDSIGN (PBftDSIGN c)) - - -- | Boundary block (EBB) - -- - -- EBBs are not signed and they do not affect the consensus state. - | PBftValidateBoundary +data PBftValidateView c + = -- | Regular block + -- + -- Regular blocks are signed, and so we need to validate them. + -- We also need to know the slot number of the block + forall signed. 
+ Signable (PBftDSIGN c) signed => + PBftValidateRegular + (PBftFields c signed) + signed + (ContextDSIGN (PBftDSIGN c)) + | -- | Boundary block (EBB) + -- + -- EBBs are not signed and they do not affect the consensus state. + PBftValidateBoundary -- | Convenience constructor for 'PBftValidateView' for regular blocks -pbftValidateRegular :: ( SignedHeader hdr - , Signable (PBftDSIGN c) (Signed hdr) - ) - => ContextDSIGN (PBftDSIGN c) - -> (hdr -> PBftFields c (Signed hdr)) - -> (hdr -> PBftValidateView c) +pbftValidateRegular :: + ( SignedHeader hdr + , Signable (PBftDSIGN c) (Signed hdr) + ) => + ContextDSIGN (PBftDSIGN c) -> + (hdr -> PBftFields c (Signed hdr)) -> + (hdr -> PBftValidateView c) pbftValidateRegular contextDSIGN getFields hdr = - PBftValidateRegular - (getFields hdr) - (headerSigned hdr) - contextDSIGN + PBftValidateRegular + (getFields hdr) + (headerSigned hdr) + contextDSIGN -- | Convenience constructor for 'PBftValidateView' for boundary blocks pbftValidateBoundary :: hdr -> PBftValidateView c @@ -137,80 +144,85 @@ pbftValidateBoundary _hdr = PBftValidateBoundary -- we need to know if a block is an EBB or not (because a chain ending on an -- EBB with a particular block number is longer than a chain on a regular -- block with that same block number). 
-data PBftSelectView = PBftSelectView { - pbftSelectViewBlockNo :: BlockNo - , pbftSelectViewIsEBB :: IsEBB - } +data PBftSelectView = PBftSelectView + { pbftSelectViewBlockNo :: BlockNo + , pbftSelectViewIsEBB :: IsEBB + } deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) - deriving (ChainOrder) via SimpleChainOrder PBftSelectView + deriving anyclass NoThunks + deriving ChainOrder via SimpleChainOrder PBftSelectView mkPBftSelectView :: GetHeader blk => Header blk -> PBftSelectView -mkPBftSelectView hdr = PBftSelectView { - pbftSelectViewBlockNo = blockNo hdr - , pbftSelectViewIsEBB = headerToIsEBB hdr +mkPBftSelectView hdr = + PBftSelectView + { pbftSelectViewBlockNo = blockNo hdr + , pbftSelectViewIsEBB = headerToIsEBB hdr } instance Ord PBftSelectView where compare (PBftSelectView lBlockNo lIsEBB) (PBftSelectView rBlockNo rIsEBB) = - mconcat [ - -- Prefer the highest block number, as it is a proxy for chain length - lBlockNo `compare` rBlockNo - - -- If the block numbers are the same, check if one of them is an EBB. - -- An EBB has the same block number as the block before it, so the - -- chain ending with an EBB is actually longer than the one ending - -- with a regular block. - , score lIsEBB `compare` score rIsEBB - ] - where - score :: IsEBB -> Int - score IsEBB = 1 - score IsNotEBB = 0 + mconcat + [ -- Prefer the highest block number, as it is a proxy for chain length + lBlockNo `compare` rBlockNo + , -- If the block numbers are the same, check if one of them is an EBB. + -- An EBB has the same block number as the block before it, so the + -- chain ending with an EBB is actually longer than the one ending + -- with a regular block. 
+ score lIsEBB `compare` score rIsEBB + ] + where + score :: IsEBB -> Int + score IsEBB = 1 + score IsNotEBB = 0 {------------------------------------------------------------------------------- Block forging -------------------------------------------------------------------------------} -forgePBftFields :: forall c toSign. ( - PBftCrypto c - , Signable (PBftDSIGN c) toSign - ) - => (VerKeyDSIGN (PBftDSIGN c) -> ContextDSIGN (PBftDSIGN c)) - -- ^ Construct DSIGN context given 'pbftGenKey' - -> IsLeader (PBft c) - -> toSign - -> PBftFields c toSign +forgePBftFields :: + forall c toSign. + ( PBftCrypto c + , Signable (PBftDSIGN c) toSign + ) => + -- | Construct DSIGN context given 'pbftGenKey' + (VerKeyDSIGN (PBftDSIGN c) -> ContextDSIGN (PBftDSIGN c)) -> + IsLeader (PBft c) -> + toSign -> + PBftFields c toSign forgePBftFields contextDSIGN PBftIsLeader{..} toSign = - Exn.assert (issuer == deriveVerKeyDSIGN pbftIsLeaderSignKey) $ PBftFields { - pbftIssuer = issuer - , pbftGenKey = genKey + Exn.assert (issuer == deriveVerKeyDSIGN pbftIsLeaderSignKey) $ + PBftFields + { pbftIssuer = issuer + , pbftGenKey = genKey , pbftSignature = signature } - where - issuer = dlgCertDlgVerKey pbftIsLeaderDlgCert - genKey = dlgCertGenVerKey pbftIsLeaderDlgCert - ctxtDSIGN = contextDSIGN genKey - signature = signedDSIGN ctxtDSIGN toSign pbftIsLeaderSignKey + where + issuer = dlgCertDlgVerKey pbftIsLeaderDlgCert + genKey = dlgCertGenVerKey pbftIsLeaderDlgCert + ctxtDSIGN = contextDSIGN genKey + signature = signedDSIGN ctxtDSIGN toSign pbftIsLeaderSignKey {------------------------------------------------------------------------------- Information PBFT requires from the ledger -------------------------------------------------------------------------------} -newtype PBftLedgerView c = PBftLedgerView { - -- | ProtocolParameters: map from genesis to delegate keys. 
- pbftDelegates :: Bimap (PBftVerKeyHash c) (PBftVerKeyHash c) - } - deriving (Generic) +newtype PBftLedgerView c = PBftLedgerView + { pbftDelegates :: Bimap (PBftVerKeyHash c) (PBftVerKeyHash c) + -- ^ ProtocolParameters: map from genesis to delegate keys. + } + deriving Generic deriving instance PBftCrypto c => NoThunks (PBftLedgerView c) - -- use generic instance + +-- use generic instance deriving instance Eq (PBftVerKeyHash c) => Eq (PBftLedgerView c) deriving instance Show (PBftVerKeyHash c) => Show (PBftLedgerView c) -instance (Serialise (PBftVerKeyHash c), Ord (PBftVerKeyHash c)) - => Serialise (PBftLedgerView c) where +instance + (Serialise (PBftVerKeyHash c), Ord (PBftVerKeyHash c)) => + Serialise (PBftLedgerView c) + where encode (PBftLedgerView ds) = encode (Bimap.toList ds) decode = PBftLedgerView . Bimap.fromList <$> decode @@ -223,109 +235,109 @@ instance (Serialise (PBftVerKeyHash c), Ord (PBftVerKeyHash c)) -- As defined in https://hydra.iohk.io/job/Cardano/cardano-ledger-specs/byronChainSpec/latest/download-by-type/doc-pdf/blockchain-spec data PBft c - -- | Signature threshold. This represents the proportion of blocks in a -- @pbftSignatureWindow@-sized window which may be signed by any single key. -newtype PBftSignatureThreshold = PBftSignatureThreshold { - getPBftSignatureThreshold :: Double - } +newtype PBftSignatureThreshold = PBftSignatureThreshold + { getPBftSignatureThreshold :: Double + } deriving (Eq, Show, Generic, NoThunks) -- | Protocol parameters -data PBftParams = PBftParams { - -- | Security parameter - -- - -- Although the protocol proper does not have such a security parameter, - -- we insist on it. - pbftSecurityParam :: !SecurityParam - - -- | Number of core nodes - , pbftNumNodes :: !NumCoreNodes - - -- | Signature threshold - -- - -- This bounds the proportion of the latest 'pbftSecurityParam'-many - -- blocks which is allowed to be signed by any single key. 
The protocol - -- proper is parameterized over the size of this window of recent blocks, - -- but this implementation follows the specification by fixing that - -- parameter to the ambient security parameter @k@. - , pbftSignatureThreshold :: !PBftSignatureThreshold - } +data PBftParams = PBftParams + { pbftSecurityParam :: !SecurityParam + -- ^ Security parameter + -- + -- Although the protocol proper does not have such a security parameter, + -- we insist on it. + , pbftNumNodes :: !NumCoreNodes + -- ^ Number of core nodes + , pbftSignatureThreshold :: !PBftSignatureThreshold + -- ^ Signature threshold + -- + -- This bounds the proportion of the latest 'pbftSecurityParam'-many + -- blocks which is allowed to be signed by any single key. The protocol + -- proper is parameterized over the size of this window of recent blocks, + -- but this implementation follows the specification by fixing that + -- parameter to the ambient security parameter @k@. + } deriving (Generic, NoThunks, Show) -- | If we are a core node (i.e. a block producing node) we know which core -- node we are, and we have the operational key pair and delegation certificate. --- -data PBftCanBeLeader c = PBftCanBeLeader { - pbftCanBeLeaderCoreNodeId :: !CoreNodeId - , pbftCanBeLeaderSignKey :: !(SignKeyDSIGN (PBftDSIGN c)) - , pbftCanBeLeaderDlgCert :: !(PBftDelegationCert c) - } - deriving (Generic) +data PBftCanBeLeader c = PBftCanBeLeader + { pbftCanBeLeaderCoreNodeId :: !CoreNodeId + , pbftCanBeLeaderSignKey :: !(SignKeyDSIGN (PBftDSIGN c)) + , pbftCanBeLeaderDlgCert :: !(PBftDelegationCert c) + } + deriving Generic instance PBftCrypto c => NoThunks (PBftCanBeLeader c) -- | Information required to produce a block. 
-data PBftIsLeader c = PBftIsLeader { - pbftIsLeaderSignKey :: !(SignKeyDSIGN (PBftDSIGN c)) - , pbftIsLeaderDlgCert :: !(PBftDelegationCert c) - } - deriving (Generic) +data PBftIsLeader c = PBftIsLeader + { pbftIsLeaderSignKey :: !(SignKeyDSIGN (PBftDSIGN c)) + , pbftIsLeaderDlgCert :: !(PBftDelegationCert c) + } + deriving Generic instance PBftCrypto c => NoThunks (PBftIsLeader c) -- | (Static) node configuration -newtype instance ConsensusConfig (PBft c) = PBftConfig { - pbftParams :: PBftParams - } +newtype instance ConsensusConfig (PBft c) = PBftConfig + { pbftParams :: PBftParams + } deriving (Generic, NoThunks) -- Ticking has no effect on the PBFtState, but we do need the ledger view -data instance Ticked (PBftState c) = TickedPBftState { - getPBftLedgerView :: LedgerView (PBft c) - , getTickedPBftState :: PBftState c - } +data instance Ticked (PBftState c) = TickedPBftState + { getPBftLedgerView :: LedgerView (PBft c) + , getTickedPBftState :: PBftState c + } instance PBftCrypto c => ConsensusProtocol (PBft c) where type ValidationErr (PBft c) = PBftValidationErr c - type ValidateView (PBft c) = PBftValidateView c - type SelectView (PBft c) = PBftSelectView + type ValidateView (PBft c) = PBftValidateView c + type SelectView (PBft c) = PBftSelectView - -- | We require two things from the ledger state: + -- \| We require two things from the ledger state: -- -- - Protocol parameters, for the signature window and threshold. -- - The delegation map. - type LedgerView (PBft c) = PBftLedgerView c - type IsLeader (PBft c) = PBftIsLeader c - type ChainDepState (PBft c) = PBftState c - type CanBeLeader (PBft c) = PBftCanBeLeader c + type LedgerView (PBft c) = PBftLedgerView c + type IsLeader (PBft c) = PBftIsLeader c + type ChainDepState (PBft c) = PBftState c + type CanBeLeader (PBft c) = PBftCanBeLeader c protocolSecurityParam = pbftSecurityParam . 
pbftParams - checkIsLeader PBftConfig{pbftParams} - PBftCanBeLeader{..} - (SlotNo n) - _tickedChainDepState = + checkIsLeader + PBftConfig{pbftParams} + PBftCanBeLeader{..} + (SlotNo n) + _tickedChainDepState = -- We are the slot leader based on our node index, and the current -- slot number. Our node index depends which genesis key has delegated -- to us, see 'genesisKeyCoreNodeId'. - if n `mod` numCoreNodes == i then - Just PBftIsLeader { - pbftIsLeaderSignKey = pbftCanBeLeaderSignKey - , pbftIsLeaderDlgCert = pbftCanBeLeaderDlgCert - } - else - Nothing - where + if n `mod` numCoreNodes == i + then + Just + PBftIsLeader + { pbftIsLeaderSignKey = pbftCanBeLeaderSignKey + , pbftIsLeaderDlgCert = pbftCanBeLeaderDlgCert + } + else + Nothing + where PBftParams{pbftNumNodes = NumCoreNodes numCoreNodes} = pbftParams CoreNodeId i = pbftCanBeLeaderCoreNodeId tickChainDepState _ lv _ = TickedPBftState lv - updateChainDepState cfg - toValidate - slot - (TickedPBftState (PBftLedgerView dms) state) = + updateChainDepState + cfg + toValidate + slot + (TickedPBftState (PBftLedgerView dms) state) = case toValidate of PBftValidateBoundary -> return state @@ -333,50 +345,54 @@ instance PBftCrypto c => ConsensusProtocol (PBft c) where -- Check that the issuer signature verifies, and that it's a delegate of a -- genesis key, and that genesis key hasn't voted too many times. case verifySignedDSIGN - contextDSIGN - pbftIssuer - signed - pbftSignature of + contextDSIGN + pbftIssuer + signed + pbftSignature of Right () -> return () Left err -> throwError $ PBftInvalidSignature (Text.pack err) -- FIXME confirm that non-strict inequality is ok in general. -- It's here because EBBs have the same slot as the first block of their -- epoch. 
- unless (NotOrigin slot >= S.lastSignedSlot state) - $ throwError PBftInvalidSlot + unless (NotOrigin slot >= S.lastSignedSlot state) $ + throwError PBftInvalidSlot case Bimap.lookupR (hashVerKey pbftIssuer) dms of Nothing -> - throwError $ PBftNotGenesisDelegate - (hashVerKey pbftIssuer) - (PBftLedgerView dms) + throwError $ + PBftNotGenesisDelegate + (hashVerKey pbftIssuer) + (PBftLedgerView dms) Just gk -> do let state' = append cfg params (slot, gk) state case pbftWindowExceedsThreshold params state' gk of - Left n -> throwError $ PBftExceededSignThreshold gk n + Left n -> throwError $ PBftExceededSignThreshold gk n Right () -> return $! state' - where + where params = pbftWindowParams cfg - reupdateChainDepState cfg - toValidate - slot - (TickedPBftState (PBftLedgerView dms) state) = + reupdateChainDepState + cfg + toValidate + slot + (TickedPBftState (PBftLedgerView dms) state) = case toValidate of PBftValidateBoundary -> state PBftValidateRegular PBftFields{pbftIssuer} _ _ -> case Bimap.lookupR (hashVerKey pbftIssuer) dms of Nothing -> - error $ show $ PBftNotGenesisDelegate - (hashVerKey pbftIssuer) - (PBftLedgerView dms) + error $ + show $ + PBftNotGenesisDelegate + (hashVerKey pbftIssuer) + (PBftLedgerView dms) Just gk -> do let state' = append cfg params (slot, gk) state case pbftWindowExceedsThreshold params state' gk of - Left n -> error $ show $ PBftExceededSignThreshold gk n + Left n -> error $ show $ PBftExceededSignThreshold gk n Right () -> state' - where + where params = pbftWindowParams cfg {------------------------------------------------------------------------------- @@ -384,25 +400,25 @@ instance PBftCrypto c => ConsensusProtocol (PBft c) where -------------------------------------------------------------------------------} -- | Parameters for the window check -data PBftWindowParams = PBftWindowParams { - -- | Window size - windowSize :: S.WindowSize - - -- | Threshold (maximum number of slots anyone is allowed to sign) - , threshold :: 
Word64 - } +data PBftWindowParams = PBftWindowParams + { windowSize :: S.WindowSize + -- ^ Window size + , threshold :: Word64 + -- ^ Threshold (maximum number of slots anyone is allowed to sign) + } -- | Compute window check parameters from the node config pbftWindowParams :: ConsensusConfig (PBft c) -> PBftWindowParams -pbftWindowParams PBftConfig{..} = PBftWindowParams { - windowSize = winSize - , threshold = +pbftWindowParams PBftConfig{..} = + PBftWindowParams + { windowSize = winSize + , threshold = floor $ getPBftSignatureThreshold pbftSignatureThreshold * fromIntegral winSize } - where - PBftParams{..} = pbftParams - winSize = pbftWindowSize pbftSecurityParam + where + PBftParams{..} = pbftParams + winSize = pbftWindowSize pbftSecurityParam -- | Window size used by PBFT -- @@ -414,25 +430,27 @@ pbftWindowSize (SecurityParam k) = S.WindowSize $ unNonZero k -- -- Returns @Just@ the number of blocks signed if exceeded. pbftWindowExceedsThreshold :: - PBftCrypto c - => PBftWindowParams - -> PBftState c - -> PBftVerKeyHash c - -> Either Word64 () + PBftCrypto c => + PBftWindowParams -> + PBftState c -> + PBftVerKeyHash c -> + Either Word64 () pbftWindowExceedsThreshold PBftWindowParams{..} st gk = - if numSigned > threshold - then Left numSigned - else Right () - where - numSigned = S.countSignedBy st gk - -append :: PBftCrypto c - => ConsensusConfig (PBft c) - -> PBftWindowParams - -> (SlotNo, PBftVerKeyHash c) - -> PBftState c -> PBftState c + if numSigned > threshold + then Left numSigned + else Right () + where + numSigned = S.countSignedBy st gk + +append :: + PBftCrypto c => + ConsensusConfig (PBft c) -> + PBftWindowParams -> + (SlotNo, PBftVerKeyHash c) -> + PBftState c -> + PBftState c append PBftConfig{} PBftWindowParams{..} = - S.append windowSize . uncurry S.PBftSigner + S.append windowSize . 
uncurry S.PBftSigner {------------------------------------------------------------------------------- PBFT specific types @@ -443,13 +461,13 @@ append PBftConfig{} PBftWindowParams{..} = data PBftValidationErr c = PBftInvalidSignature !Text | PBftNotGenesisDelegate !(PBftVerKeyHash c) !(PBftLedgerView c) - -- | We record how many slots this key signed - | PBftExceededSignThreshold !(PBftVerKeyHash c) !Word64 + | -- | We record how many slots this key signed + PBftExceededSignThreshold !(PBftVerKeyHash c) !Word64 | PBftInvalidSlot deriving (Generic, NoThunks) deriving instance PBftCrypto c => Show (PBftValidationErr c) -deriving instance PBftCrypto c => Eq (PBftValidationErr c) +deriving instance PBftCrypto c => Eq (PBftValidationErr c) {------------------------------------------------------------------------------- CannotForge @@ -457,41 +475,43 @@ deriving instance PBftCrypto c => Eq (PBftValidationErr c) -- | Expresses that, whilst we believe ourselves to be a leader for this slot, -- we are nonetheless unable to forge a block. -data PBftCannotForge c = - -- | We cannot forge a block because we are not the current delegate of the +data PBftCannotForge c + = -- | We cannot forge a block because we are not the current delegate of the -- genesis key we have a delegation certificate from. PBftCannotForgeInvalidDelegation !(PBftVerKeyHash c) - -- | We cannot lead because delegates of the genesis key we have a + | -- | We cannot lead because delegates of the genesis key we have a -- delegation from have already forged the maximum number of blocks in this -- signing window. - | PBftCannotForgeThresholdExceeded !Word64 - deriving (Generic) + PBftCannotForgeThresholdExceeded !Word64 + deriving Generic deriving instance PBftCrypto c => Show (PBftCannotForge c) instance PBftCrypto c => NoThunks (PBftCannotForge c) - -- use generic instance + +-- use generic instance pbftCheckCanForge :: - forall c. 
PBftCrypto c - => ConsensusConfig (PBft c) - -> PBftCanBeLeader c - -> SlotNo - -> Ticked (PBftState c) - -> Either (PBftCannotForge c) () + forall c. + PBftCrypto c => + ConsensusConfig (PBft c) -> + PBftCanBeLeader c -> + SlotNo -> + Ticked (PBftState c) -> + Either (PBftCannotForge c) () pbftCheckCanForge cfg PBftCanBeLeader{..} slot tickedChainDepState = - case Bimap.lookupR dlgKeyHash dms of - Nothing -> Left $ PBftCannotForgeInvalidDelegation dlgKeyHash - Just gk -> - first PBftCannotForgeThresholdExceeded $ - pbftWindowExceedsThreshold params (append cfg params (slot, gk) cds) gk - where - params = pbftWindowParams cfg - - dlgKeyHash :: PBftVerKeyHash c - dlgKeyHash = hashVerKey . dlgCertDlgVerKey $ pbftCanBeLeaderDlgCert - - TickedPBftState (PBftLedgerView dms) cds = tickedChainDepState + case Bimap.lookupR dlgKeyHash dms of + Nothing -> Left $ PBftCannotForgeInvalidDelegation dlgKeyHash + Just gk -> + first PBftCannotForgeThresholdExceeded $ + pbftWindowExceedsThreshold params (append cfg params (slot, gk) cds) gk + where + params = pbftWindowParams cfg + + dlgKeyHash :: PBftVerKeyHash c + dlgKeyHash = hashVerKey . dlgCertDlgVerKey $ pbftCanBeLeaderDlgCert + + TickedPBftState (PBftLedgerView dms) cds = tickedChainDepState {------------------------------------------------------------------------------- Condense diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT/Crypto.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT/Crypto.hs index 97648bdbcd..bcf1232103 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT/Crypto.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT/Crypto.hs @@ -7,21 +7,21 @@ {-# LANGUAGE TypeFamilyDependencies #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Protocol.PBFT.Crypto ( - PBftCrypto (..) +module Ouroboros.Consensus.Protocol.PBFT.Crypto + ( PBftCrypto (..) 
, PBftMockCrypto , PBftMockVerKeyHash (..) ) where -import Cardano.Crypto.DSIGN.Class -import Cardano.Crypto.DSIGN.Mock (MockDSIGN, VerKeyDSIGN (..)) -import Codec.Serialise (Serialise) -import Data.Kind (Type) -import Data.Typeable -import Data.Word (Word64) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Util.Condense +import Cardano.Crypto.DSIGN.Class +import Cardano.Crypto.DSIGN.Mock (MockDSIGN, VerKeyDSIGN (..)) +import Codec.Serialise (Serialise) +import Data.Kind (Type) +import Data.Typeable +import Data.Word (Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Util.Condense -- | Crypto primitives required by BFT -- @@ -29,40 +29,42 @@ import Ouroboros.Consensus.Util.Condense -- directly. We make this family injective for convenience - whilst it's -- _possible_ that there could be non-injective instances, the chances of there -- being more than the two instances here are basically non-existent. 
-class ( Typeable c - , DSIGNAlgorithm (PBftDSIGN c) - , Condense (SigDSIGN (PBftDSIGN c)) - , Show (PBftVerKeyHash c) - , Ord (PBftVerKeyHash c) - , Eq (PBftVerKeyHash c) - , Show (PBftVerKeyHash c) - , NoThunks (PBftVerKeyHash c) - , NoThunks (PBftDelegationCert c) - , Serialise (PBftVerKeyHash c) - ) => PBftCrypto c where - type family PBftDSIGN c :: Type - type family PBftDelegationCert c = (d :: Type) | d -> c - type family PBftVerKeyHash c = (d :: Type) | d -> c +class + ( Typeable c + , DSIGNAlgorithm (PBftDSIGN c) + , Condense (SigDSIGN (PBftDSIGN c)) + , Show (PBftVerKeyHash c) + , Ord (PBftVerKeyHash c) + , Eq (PBftVerKeyHash c) + , Show (PBftVerKeyHash c) + , NoThunks (PBftVerKeyHash c) + , NoThunks (PBftDelegationCert c) + , Serialise (PBftVerKeyHash c) + ) => + PBftCrypto c + where + type PBftDSIGN c :: Type + type PBftDelegationCert c = (d :: Type) | d -> c + type PBftVerKeyHash c = (d :: Type) | d -> c dlgCertGenVerKey :: PBftDelegationCert c -> VerKeyDSIGN (PBftDSIGN c) dlgCertDlgVerKey :: PBftDelegationCert c -> VerKeyDSIGN (PBftDSIGN c) - hashVerKey :: VerKeyDSIGN (PBftDSIGN c) -> PBftVerKeyHash c - + hashVerKey :: VerKeyDSIGN (PBftDSIGN c) -> PBftVerKeyHash c data PBftMockCrypto instance PBftCrypto PBftMockCrypto where - type PBftDSIGN PBftMockCrypto = MockDSIGN + type PBftDSIGN PBftMockCrypto = MockDSIGN type PBftDelegationCert PBftMockCrypto = (VerKeyDSIGN MockDSIGN, VerKeyDSIGN MockDSIGN) - type PBftVerKeyHash PBftMockCrypto = PBftMockVerKeyHash + type PBftVerKeyHash PBftMockCrypto = PBftMockVerKeyHash dlgCertGenVerKey = fst dlgCertDlgVerKey = snd - hashVerKey = PBftMockVerKeyHash + hashVerKey = PBftMockVerKeyHash -- | We don't hash and just use the underlying 'Word64'. 
-newtype PBftMockVerKeyHash = PBftMockVerKeyHash { - getPBftMockVerKeyHash :: VerKeyDSIGN MockDSIGN - } +newtype PBftMockVerKeyHash = PBftMockVerKeyHash + { getPBftMockVerKeyHash :: VerKeyDSIGN MockDSIGN + } deriving (Eq, Show, Generic, NoThunks) deriving (Serialise, Ord) via Word64 diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT/State.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT/State.hs index 5c5ed29814..d46837c12c 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT/State.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/PBFT/State.hs @@ -14,46 +14,50 @@ -- | PBFT chain state -- -- Intended for qualified import. -module Ouroboros.Consensus.Protocol.PBFT.State ( - PBftSigner (..) +module Ouroboros.Consensus.Protocol.PBFT.State + ( PBftSigner (..) , PBftState (..) , Ticked (..) , WindowSize (..) + -- * Construction , append , empty + -- * Queries , countSignatures , countSignedBy , lastSignedSlot + -- * Conversion , fromList , toList + -- * Serialization , decodePBftState , encodePBftState ) where -import Codec.Serialise (Serialise (..)) -import Codec.Serialise.Decoding (Decoder) -import Codec.Serialise.Encoding (Encoding) -import Control.Monad (unless) -import Control.Monad.Except (Except, runExcept, throwError) -import qualified Data.Foldable as Foldable -import Data.List (sortOn) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Sequence.Strict (StrictSeq (Empty, (:<|), (:|>)), (|>)) -import qualified Data.Sequence.Strict as Seq -import Data.Word -import GHC.Generics (Generic) -import GHC.Stack -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Protocol.PBFT.Crypto -import Ouroboros.Consensus.Ticked -import Ouroboros.Consensus.Util (repeatedly) -import Ouroboros.Consensus.Util.Versioned +import Codec.Serialise (Serialise (..)) +import 
Codec.Serialise.Decoding (Decoder) +import Codec.Serialise.Encoding (Encoding) +import Control.Monad (unless) +import Control.Monad.Except (Except, runExcept, throwError) +import Data.Foldable qualified as Foldable +import Data.List (sortOn) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Sequence.Strict (StrictSeq (Empty, (:<|), (:|>)), (|>)) +import Data.Sequence.Strict qualified as Seq +import Data.Word +import GHC.Generics (Generic) +import GHC.Stack +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Protocol.PBFT.Crypto +import Ouroboros.Consensus.Ticked +import Ouroboros.Consensus.Util (repeatedly) +import Ouroboros.Consensus.Util.Versioned {------------------------------------------------------------------------------- Types @@ -78,19 +82,18 @@ import Ouroboros.Consensus.Util.Versioned -- -- The window size itself is pretty much arbitrary and will be fixed by a -- particular blockchain specification (e.g., Byron). -data PBftState c = PBftState { - -- | Signatures in the window - -- - -- We should have precisely @n@ signatures in the window, unless we are - -- near genesis. - -- - -- INVARIANT Empty if and only if we are exactly at genesis. - inWindow :: !(StrictSeq (PBftSigner c)) - - -- | Cached counts of the signatures in the window - , counts :: !(Map (PBftVerKeyHash c) Word64) - } - deriving (Generic) +data PBftState c = PBftState + { inWindow :: !(StrictSeq (PBftSigner c)) + -- ^ Signatures in the window + -- + -- We should have precisely @n@ signatures in the window, unless we are + -- near genesis. + -- + -- INVARIANT Empty if and only if we are exactly at genesis. + , counts :: !(Map (PBftVerKeyHash c) Word64) + -- ^ Cached counts of the signatures in the window + } + deriving Generic {------------------------------------------------------------------------------- Invariant @@ -100,24 +103,27 @@ size :: Num b => StrictSeq a -> b size = fromIntegral . 
Seq.length -- | Re-compute cached counts -computeCounts :: PBftCrypto c - => StrictSeq (PBftSigner c) -> Map (PBftVerKeyHash c) Word64 +computeCounts :: + PBftCrypto c => + StrictSeq (PBftSigner c) -> Map (PBftVerKeyHash c) Word64 computeCounts inWindow = - repeatedly (incrementKey . pbftSignerGenesisKey) - (Foldable.toList inWindow) - Map.empty - -invariant :: PBftCrypto c - => WindowSize -> PBftState c -> Either String () + repeatedly + (incrementKey . pbftSignerGenesisKey) + (Foldable.toList inWindow) + Map.empty + +invariant :: + PBftCrypto c => + WindowSize -> PBftState c -> Either String () invariant (WindowSize n) st@PBftState{..} = runExcept $ do - unless (size inWindow <= n) $ - failure "Too many in-window signatures" + unless (size inWindow <= n) $ + failure "Too many in-window signatures" - unless (computeCounts inWindow == counts) $ - failure "Cached counts incorrect" - where - failure :: String -> Except String () - failure err = throwError $ err ++ ": " ++ show st + unless (computeCounts inWindow == counts) $ + failure "Cached counts incorrect" + where + failure :: String -> Except String () + failure err = throwError $ err ++ ": " ++ show st -- | The 'PBftState' tests don't rely on this flag but check the -- invariant manually. 
This flag is here so that the invariant checks could be @@ -128,9 +134,10 @@ enableInvariant :: Bool enableInvariant = False assertInvariant :: - (HasCallStack, PBftCrypto c) - => WindowSize - -> PBftState c -> PBftState c + (HasCallStack, PBftCrypto c) => + WindowSize -> + PBftState c -> + PBftState c assertInvariant n st | enableInvariant = case invariant n st of @@ -139,25 +146,25 @@ assertInvariant n st | otherwise = st -- | Slot and corresponding genesis key -data PBftSigner c = PBftSigner { - pbftSignerSlotNo :: !SlotNo - , pbftSignerGenesisKey :: !(PBftVerKeyHash c) - } - deriving (Generic) +data PBftSigner c = PBftSigner + { pbftSignerSlotNo :: !SlotNo + , pbftSignerGenesisKey :: !(PBftVerKeyHash c) + } + deriving Generic -- | Window size -- -- See 'PBftState' itself for a detailed discussion on the window size -- versus the number of signatures. -newtype WindowSize = WindowSize { getWindowSize :: Word64 } +newtype WindowSize = WindowSize {getWindowSize :: Word64} deriving newtype (Show, Eq, Ord, Enum, Num, Real, Integral) -deriving instance PBftCrypto c => Show (PBftState c) -deriving instance PBftCrypto c => Eq (PBftState c) +deriving instance PBftCrypto c => Show (PBftState c) +deriving instance PBftCrypto c => Eq (PBftState c) deriving instance PBftCrypto c => NoThunks (PBftState c) -deriving instance PBftCrypto c => Show (PBftSigner c) -deriving instance PBftCrypto c => Eq (PBftSigner c) +deriving instance PBftCrypto c => Show (PBftSigner c) +deriving instance PBftCrypto c => Eq (PBftSigner c) deriving instance PBftCrypto c => NoThunks (PBftSigner c) {------------------------------------------------------------------------------- @@ -185,9 +192,9 @@ countSignedBy PBftState{..} gk = Map.findWithDefault 0 gk counts -- Unaffected by EBBs, since they're not signed. 
lastSignedSlot :: PBftState c -> WithOrigin SlotNo lastSignedSlot PBftState{..} = - case inWindow of - _ :|> signer -> NotOrigin (pbftSignerSlotNo signer) - _otherwise -> Origin + case inWindow of + _ :|> signer -> NotOrigin (pbftSignerSlotNo signer) + _otherwise -> Origin {------------------------------------------------------------------------------- Construction @@ -197,34 +204,39 @@ lastSignedSlot PBftState{..} = -- -- In other words, the PBFT chain state corresponding to genesis. empty :: PBftState c -empty = PBftState { - inWindow = Empty - , counts = Map.empty +empty = + PBftState + { inWindow = Empty + , counts = Map.empty } -- | Append new signature -- -- Drops the oldest signature, provided we have reached the required number. append :: - forall c. PBftCrypto c - => WindowSize - -> PBftSigner c - -> PBftState c -> PBftState c + forall c. + PBftCrypto c => + WindowSize -> + PBftSigner c -> + PBftState c -> + PBftState c append n signer@(PBftSigner _ gk) PBftState{..} = - assertInvariant n $ PBftState { - inWindow = trimmedWindow - , counts = trimmedCounts + assertInvariant n $ + PBftState + { inWindow = trimmedWindow + , counts = trimmedCounts } - where - -- First append the signature to the right, - (appendedWindow, appendedCounts) = - (inWindow |> signer, incrementKey gk counts) - -- then trim the oldest from the left, if needed. - (trimmedWindow, trimmedCounts) = case appendedWindow of - x :<| xs | size inWindow == getWindowSize n -> + where + -- First append the signature to the right, + (appendedWindow, appendedCounts) = + (inWindow |> signer, incrementKey gk counts) + -- then trim the oldest from the left, if needed. 
+ (trimmedWindow, trimmedCounts) = case appendedWindow of + x :<| xs + | size inWindow == getWindowSize n -> (xs, decrementKey (pbftSignerGenesisKey x) appendedCounts) - _otherwise -> - (appendedWindow, appendedCounts) + _otherwise -> + (appendedWindow, appendedCounts) {------------------------------------------------------------------------------- Internal @@ -232,18 +244,18 @@ append n signer@(PBftSigner _ gk) PBftState{..} = incrementKey :: Ord gk => gk -> Map gk Word64 -> Map gk Word64 incrementKey = Map.alter inc - where - inc :: Maybe Word64 -> Maybe Word64 - inc Nothing = Just 1 - inc (Just n) = Just (n + 1) + where + inc :: Maybe Word64 -> Maybe Word64 + inc Nothing = Just 1 + inc (Just n) = Just (n + 1) decrementKey :: Ord gk => gk -> Map gk Word64 -> Map gk Word64 decrementKey = Map.alter dec - where - dec :: Maybe Word64 -> Maybe Word64 - dec Nothing = error "decrementKey: key does not exist" - dec (Just 1) = Nothing - dec (Just n) = Just (n - 1) + where + dec :: Maybe Word64 -> Maybe Word64 + dec Nothing = error "decrementKey: key does not exist" + dec (Just 1) = Nothing + dec (Just n) = Just (n - 1) {------------------------------------------------------------------------------- Conversion @@ -258,12 +270,13 @@ toList = Foldable.toList . inWindow -- -- PRECONDITION: the slots of the signers are in ascending order. 
fromList :: PBftCrypto c => [PBftSigner c] -> PBftState c -fromList signers = PBftState { - inWindow = inWindow - , counts = computeCounts inWindow +fromList signers = + PBftState + { inWindow = inWindow + , counts = computeCounts inWindow } - where - inWindow = Seq.fromList signers + where + inWindow = Seq.fromList signers {------------------------------------------------------------------------------- Serialization @@ -275,39 +288,42 @@ serializationFormatVersion1 = 1 invert :: PBftCrypto c => PBftState c -> Map (PBftVerKeyHash c) [SlotNo] invert = - Foldable.foldl' - (\acc (PBftSigner slot key) -> Map.insertWith (<>) key [slot] acc) - Map.empty + Foldable.foldl' + (\acc (PBftSigner slot key) -> Map.insertWith (<>) key [slot] acc) + Map.empty . inWindow uninvert :: PBftCrypto c => Map (PBftVerKeyHash c) [SlotNo] -> PBftState c uninvert = - fromList + fromList . sortOn pbftSignerSlotNo . concatMap (\(key, slots) -> map (`PBftSigner` key) slots) . Map.toList encodePBftState :: - PBftCrypto c - => PBftState c -> Encoding + PBftCrypto c => + PBftState c -> Encoding encodePBftState st = - encodeVersion serializationFormatVersion1 $ - encode (invert st) + encodeVersion serializationFormatVersion1 $ + encode (invert st) decodePBftState :: - forall c. PBftCrypto c - => forall s. Decoder s (PBftState c) -decodePBftState = decodeVersion + forall c. + PBftCrypto c => + forall s. + Decoder s (PBftState c) +decodePBftState = + decodeVersion [(serializationFormatVersion1, Decode decodePBftState1)] - where - decodePBftState1 :: forall s. Decoder s (PBftState c) - decodePBftState1 = uninvert <$> decode + where + decodePBftState1 :: forall s. Decoder s (PBftState c) + decodePBftState1 = uninvert <$> decode instance Serialise (PBftVerKeyHash c) => Serialise (PBftSigner c) where encode = encode . 
toPair - where - toPair (PBftSigner{..}) = (pbftSignerSlotNo, pbftSignerGenesisKey) + where + toPair (PBftSigner{..}) = (pbftSignerSlotNo, pbftSignerGenesisKey) decode = fromPair <$> decode - where - fromPair (slotNo, genesisKey) = PBftSigner slotNo genesisKey + where + fromPair (slotNo, genesisKey) = PBftSigner slotNo genesisKey diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/Signed.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/Signed.hs index ccdc5313c0..5882a3b60b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/Signed.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Protocol/Signed.hs @@ -1,12 +1,12 @@ {-# LANGUAGE TypeFamilies #-} -- | Support for protocols that include a signature -module Ouroboros.Consensus.Protocol.Signed ( - Signed +module Ouroboros.Consensus.Protocol.Signed + ( Signed , SignedHeader (..) ) where -import Data.Kind (Type) +import Data.Kind (Type) -- | The part of the header that is signed type family Signed hdr :: Type diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB.hs index 329d93a3a3..c91251ef07 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB.hs @@ -43,11 +43,10 @@ -- "Ouroboros.Consensus.Storage.LedgerDB" contains the whole definition and API -- for the LedgerDB, but the other three databases are broken up into multiple -- smaller submodules. We aim to resolve this when UTxO-HD is merged. 
--- -module Ouroboros.Consensus.Storage.ChainDB ( - module Ouroboros.Consensus.Storage.ChainDB.API +module Ouroboros.Consensus.Storage.ChainDB + ( module Ouroboros.Consensus.Storage.ChainDB.API , module Ouroboros.Consensus.Storage.ChainDB.Impl ) where -import Ouroboros.Consensus.Storage.ChainDB.API -import Ouroboros.Consensus.Storage.ChainDB.Impl +import Ouroboros.Consensus.Storage.ChainDB.API +import Ouroboros.Consensus.Storage.ChainDB.Impl diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/API.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/API.hs index 1c4677c506..1831ffa879 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/API.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/API.hs @@ -12,31 +12,37 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Storage.ChainDB.API ( - -- * Main ChainDB API +module Ouroboros.Consensus.Storage.ChainDB.API + ( -- * Main ChainDB API ChainDB (..) , getCurrentTip , getTipBlockNo + -- * Adding a block , AddBlockPromise (..) , AddBlockResult (..) , addBlock , addBlockWaitWrittenToDisk , addBlock_ + -- * Trigger chain selection , ChainSelectionPromise (..) , triggerChainSelection , triggerChainSelectionAsync + -- * Serialised block/header with its point , WithPoint (..) , getPoint , getSerialisedBlockWithPoint , getSerialisedHeaderWithPoint + -- * BlockComponent , BlockComponent (..) + -- * Support for tests , fromChain , toChain + -- * Iterator API , Iterator (..) , IteratorResult (..) @@ -48,49 +54,61 @@ module Ouroboros.Consensus.Storage.ChainDB.API ( , streamFrom , traverseIterator , validBounds + -- * Followers , ChainType (..) , Follower (..) , traverseFollower + -- * Recovery , ChainDbFailure (..) , IsEBB (..) + -- * Exceptions , ChainDbError (..) + -- * Genesis , GetLoEFragment , LoE (..) 
) where -import Control.Monad (void) -import Control.ResourceRegistry -import Data.Typeable (Typeable) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderStateHistory - (HeaderStateHistory (..)) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment -import Ouroboros.Consensus.Storage.Common -import Ouroboros.Consensus.Storage.LedgerDB (GetForkerError, - ReadOnlyForker', Statistics) -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.STM (WithFingerprint) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (ChainUpdate, MaxSlotNo, - Serialised (..)) -import qualified Ouroboros.Network.Block as Network -import Ouroboros.Network.BlockFetch.ConsensusInterface - (ChainSelStarvation (..)) -import Ouroboros.Network.Mock.Chain (Chain (..)) -import qualified Ouroboros.Network.Mock.Chain as Chain -import Ouroboros.Network.Protocol.LocalStateQuery.Type -import System.FS.API.Types (FsError) +import Control.Monad (void) +import Control.ResourceRegistry +import Data.Typeable (Typeable) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderStateHistory + ( HeaderStateHistory (..) 
+ ) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment +import Ouroboros.Consensus.Storage.Common +import Ouroboros.Consensus.Storage.LedgerDB + ( GetForkerError + , ReadOnlyForker' + , Statistics + ) +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.STM (WithFingerprint) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block + ( ChainUpdate + , MaxSlotNo + , Serialised (..) + ) +import Ouroboros.Network.Block qualified as Network +import Ouroboros.Network.BlockFetch.ConsensusInterface + ( ChainSelStarvation (..) + ) +import Ouroboros.Network.Mock.Chain (Chain (..)) +import Ouroboros.Network.Mock.Chain qualified as Chain +import Ouroboros.Network.Protocol.LocalStateQuery.Type +import System.FS.API.Types (FsError) -- | The chain database -- @@ -112,308 +130,290 @@ import System.FS.API.Types (FsError) -- -- The ChainDB instantiates all the various type parameters of these databases -- to conform to the unified interface we provide here. -data ChainDB m blk = ChainDB { - -- | Add a block to the heap of blocks - -- - -- We do /not/ assume that the block is valid (under the legder rules); - -- it is the responsibility of the Chain DB itself to only select chains - -- that are valid. - -- - -- Conversely, the caller cannot assume that the new block will be added - -- to the current chain; even if the block is valid, it will not become - -- part of the chain if there are other chains available that are - -- preferred by the consensus algorithm (typically, longer chains). 
- -- - -- This function typically returns immediately, yielding a - -- 'AddBlockPromise' which can be used to wait for the result. You can - -- use 'addBlock' to add the block synchronously. - -- - -- NOTE: back pressure can be applied when overloaded. - -- - -- PRECONDITON: the block to be added must not be from the future. - -- - -- The current code ensures that the two sources of blocks - -- ('ChainSync' and forging) do not allow blocks from the future, - -- however this is not guaranteed when during initialization if the - -- VolatileDB contains blocks from the future. See: - -- https://github.com/IntersectMBO/ouroboros-consensus/blob/main/docs/website/contents/for-developers/HandlingBlocksFromTheFuture.md#handling-blocks-from-the-future - -- - addBlockAsync :: InvalidBlockPunishment m -> blk -> m (AddBlockPromise m blk) - - -- | Trigger reprocessing of blocks postponed by the LoE. - , chainSelAsync :: m (ChainSelectionPromise m) - - -- | Get the current chain fragment - -- - -- Suppose the current chain is - -- - -- > a -> b -> c -> d -> e -> f - -- - -- and suppose @k = 2@; this means that the most distant fork we can - -- switch to is something like - -- - -- > a -> b -> c -> d -> e' -> f' - -- - -- The fragment we return will be @[e, f]@, anchored at @d@. In other - -- words, the length of the fragment will under normal circumstances - -- be exactly @k@ blocks long. It may be shorter if - -- - -- * We are near genesis - -- The anchor will be the genesis point - -- (which does not correspond to an actual block) - -- - -- * The volatile DB suffered some data loss - -- Typically (but not necessarily) the volatile DB will not be empty - -- and the anchor will be pointing to the tip of the immutable DB. - -- - -- POSTCONDITION: The Chain DB will be able to switch to any fork starting - -- from the anchor of the returned fragment or any subsequent block - -- (provided the new fork is at least of the same length as the old). 
- -- - -- NOTE: A direct consequence of this guarantee is that the anchor of the - -- fragment will move as the chain grows. - , getCurrentChain :: STM m (AnchoredFragment (Header blk)) - - -- | Exact same as 'getCurrentChain', except each header is annotated - -- with the 'RelativeTime' of the onset of its slot (translated according - -- to the chain it is on) - -- - -- INVARIANT @'hwtHeader' <$> 'getCurrentChainWithTime' = 'getCurrentChain'@ - , getCurrentChainWithTime - :: STM m (AnchoredFragment (HeaderWithTime blk)) - - -- | Get current ledger - , getCurrentLedger :: STM m (ExtLedgerState blk EmptyMK) - - -- | Get the immutable ledger, i.e., typically @k@ blocks back. - , getImmutableLedger :: STM m (ExtLedgerState blk EmptyMK) - - -- | Get the ledger for the given point. - -- - -- When the given point is not among the last @k@ blocks of the current - -- chain (i.e., older than @k@ or not on the current chain), 'Nothing' is - -- returned. - , getPastLedger :: Point blk -> STM m (Maybe (ExtLedgerState blk EmptyMK)) - - -- | Get a 'HeaderStateHistory' populated with the 'HeaderState's of the - -- last @k@ blocks of the current chain. - , getHeaderStateHistory :: STM m (HeaderStateHistory blk) - - -- | Acquire a read-only forker at a specific point if that point exists - -- on the db. - -- - -- Note that the forker should be closed by the caller of this function. - -- - -- The forker is read-only becase a read-write forker could be used to - -- change the internal state of the LedgerDB. - , getReadOnlyForkerAtPoint :: - ResourceRegistry m - -> Target (Point blk) - -> m (Either GetForkerError (ReadOnlyForker' m blk)) - - -- | Get block at the tip of the chain, if one exists - -- - -- Returns 'Nothing' if the database is empty. 
- , getTipBlock :: m (Maybe blk) - - -- | Get header at the tip of the chain - -- - -- NOTE: Calling 'getTipHeader' is cheaper than 'getTipBlock' and then - -- extracting the header: most of the time the header at the tip is - -- actually in memory, whereas the block never is. - -- - -- Returns 'Nothing' if the database is empty. - , getTipHeader :: m (Maybe (Header blk)) - - -- | Get point of the tip of the chain - -- - -- Will return 'genesisPoint' if the database is empty; if the - -- current chain fragment is empty due to data loss in the volatile DB, - -- 'getTipPoint' will return the tip of the immutable DB. - , getTipPoint :: STM m (Point blk) - - -- | Get the given component(s) of the block at the specified point. If - -- there is no block at the given point, 'Nothing' is returned. - , getBlockComponent :: forall b. BlockComponent blk b - -> RealPoint blk -> m (Maybe b) - - -- | Return membership check function for recent blocks. This includes - -- blocks in the VolatileDB and blocks that are currently being processed - -- or are waiting in a queue to be processed. - -- - -- This check is only reliable for blocks up to @k@ away from the tip. - -- For blocks older than that the results should be regarded as - -- non-deterministic. - , getIsFetched :: STM m (Point blk -> Bool) - - -- | Return a function that tells whether a block is known to be valid - -- or invalid. - -- - -- The function will return: - -- - -- * @Just True@: for blocks in the volatile DB that have been validated - -- and were found to be valid. All blocks in the current chain - -- fragment (i.e., 'getCurrentChain') are valid. - -- - -- * @Just False@: for blocks in the volatile DB that have been - -- validated and were found to be invalid. - -- - -- * @Nothing@: for blocks not or no longer in the volatile DB, whether - -- they are valid or not, including blocks in the immutable DB. 
Also - -- for blocks in the volatile DB that haven't been validated (yet), - -- e.g., because they are disconnected from the current chain or they - -- are part of a shorter fork. - , getIsValid :: STM m (RealPoint blk -> Maybe Bool) - - -- | Get the highest slot number stored in the ChainDB (this includes - -- blocks that are waiting in the background queue to be processed). - -- - -- Note that the corresponding block doesn't have to be part of the - -- current chain, it could be part of some fork, or even be a - -- disconnected block. - , getMaxSlotNo :: STM m MaxSlotNo - - -- | Stream blocks - -- - -- Streaming is not restricted to the current fork, but there must be an - -- unbroken path from the starting point to the end point /at the time - -- of initialization/ of the iterator. Once the iterator has been - -- initialized, it will not be affected by subsequent calls to - -- 'addBlock'. To track the current chain, use a 'Follower' instead. - -- - -- Streaming blocks older than @k@ is permitted, but only when they are - -- part of the current fork (at the time of initialization). Streaming a - -- fork that forks off more than @k@ blocks in the past is not permitted - -- and an 'UnknownRange' error will be returned in that case. - -- - -- The iterator /does/ have a limited lifetime, however. The chain DB - -- internally partitions the chain into an " immutable " part and a - -- " volatile " part, moving blocks from the volatile DB to the immutable - -- DB when they become more than @k@ deep into the chain. When a block - -- with slot number @n@ is added to the immutble DB, a time delay @t@ - -- kicks in; after that time delay expires, all blocks older than @n@ may - -- be removed from the volatile DB, /including any blocks that happen to - -- live on other forks/ (since those forks must now, by definition, be too - -- distant). 
This time delay @t@ also provides a worst-case bound for the - -- lifetime of the iterator: if the iterator traverses a chain that - -- forks off from our current chain at the tip of the immutable DB, - -- then the first block on that fork will become unavailable as soon as - -- another block is pushed to the current chain and the subsequent - -- time delay expires. - -- - -- Note: although blocks are moved from the volatile DB to the immutable - -- DB after they have become @k@ deep into the chain, due to data - -- corruption the suffix of the chain in the volatile DB might be - -- shorter than @k@. The immutable DB /always/ determines the maximum - -- rollback, which may therefore be shorter than @k@ under such - -- circumstances. In addition, streaming blocks which aren't on the - -- current fork is permitted, but the oldest volatile block must fit on - -- to the tip of the immutable DB. - -- - -- When the given bounds are nonsensical, an 'InvalidIteratorRange' is - -- thrown. - -- - -- When the given bounds are not part of the chain DB, an 'UnknownRange' - -- error is returned. - -- - -- To stream all blocks from the current chain, use 'streamAll', as it - -- correctly handles an empty ChainDB. - , stream :: - forall b. ResourceRegistry m - -> BlockComponent blk b - -> StreamFrom blk -> StreamTo blk - -> m (Either (UnknownRange blk) (Iterator m blk b)) - - -- | Chain follower - -- - -- A chain follower is an iterator that tracks the state of the /current/ - -- chain: calling @next@ on the iterator will either give you the next - -- block header, or (if we have switched to a fork) the instruction to - -- rollback. - -- - -- The tracking iterator starts at genesis (see also 'trackForward'). - -- - -- This is intended for use by chain consumers to /reliably/ follow a - -- chain, desipite the chain being volatile. - -- - -- Examples of users: - -- * The server side of the chain sync mini-protocol for the - -- node-to-node protocol using headers and the block size. 
- -- * The server side of the chain sync mini-protocol for the - -- node-to-client protocol using blocks. - -- - , newFollower :: - forall b. ResourceRegistry m - -> ChainType - -> BlockComponent blk b - -> m (Follower m blk b) - - -- | Function to check whether a block is known to be invalid. - -- - -- Blocks unknown to the ChainDB will result in 'Nothing'. - -- - -- If the hash corresponds to a block that is known to be invalid, but - -- is now older than @k@, this function may return 'Nothing'. - -- - -- Whenever a new invalid block is added, the 'Fingerprint' will be - -- changed. This is useful when \"watching\" this function in a - -- transaction. - -- - -- Note that when invalid blocks are garbage collected and thus no - -- longer detected by this function, the 'Fingerprint' doesn't have to - -- change, since the function will not detect new invalid blocks. - -- - -- It might seem natural to have this function also return whether the - -- ChainDB knows that a block is valid, thereby subsuming the 'getIsValid' - -- function and simplifying the API. However, this adds the overhead of - -- checking whether the block is valid for blocks that are not known to be - -- invalid that does not give useful information to current clients - -- (ChainSync), since they are only interested in whether a block is known - -- to be invalid. The extra information of whether a block is valid is - -- only used for testing. - -- - -- In particular, this affects the watcher in 'bracketChainSyncClient', - -- which rechecks the blocks in all candidate chains whenever a new - -- invalid block is detected. These blocks are likely to be valid. - , getIsInvalidBlock :: STM m (WithFingerprint (HeaderHash blk -> Maybe (ExtValidationError blk))) - - -- | Whether ChainSel is currently starved, or when was last time it - -- stopped being starved. - , getChainSelStarvation :: STM m ChainSelStarvation - - -- | Read ledger tables at a given point on the chain, if it exists. 
- -- - -- This is intended to be used by the mempool to hydrate a ledger state at - -- a specific point. - , getLedgerTablesAtFor :: - Point blk - -> LedgerTables (ExtLedgerState blk) KeysMK - -> m (Maybe (LedgerTables (ExtLedgerState blk) ValuesMK)) - - -- | Get statistics from the LedgerDB, in particular the number of entries - -- in the tables. - , getStatistics :: m (Maybe Statistics) - - -- | Close the ChainDB - -- - -- Idempotent. - -- - -- Should only be called on shutdown. - - , closeDB :: m () - - -- | Return 'True' when the database is open. - -- - -- 'False' when the database is closed. - , isOpen :: STM m Bool - } +data ChainDB m blk = ChainDB + { addBlockAsync :: InvalidBlockPunishment m -> blk -> m (AddBlockPromise m blk) + -- ^ Add a block to the heap of blocks + -- + -- We do /not/ assume that the block is valid (under the ledger rules); + -- it is the responsibility of the Chain DB itself to only select chains + -- that are valid. + -- + -- Conversely, the caller cannot assume that the new block will be added + -- to the current chain; even if the block is valid, it will not become + -- part of the chain if there are other chains available that are + -- preferred by the consensus algorithm (typically, longer chains). + -- + -- This function typically returns immediately, yielding an + -- 'AddBlockPromise' which can be used to wait for the result. You can + -- use 'addBlock' to add the block synchronously. + -- + -- NOTE: back pressure can be applied when overloaded. + -- + -- PRECONDITION: the block to be added must not be from the future. + -- + -- The current code ensures that the two sources of blocks + -- ('ChainSync' and forging) do not allow blocks from the future, + -- however this is not guaranteed during initialization if the + -- VolatileDB contains blocks from the future. 
See: + -- https://github.com/IntersectMBO/ouroboros-consensus/blob/main/docs/website/contents/for-developers/HandlingBlocksFromTheFuture.md#handling-blocks-from-the-future + , chainSelAsync :: m (ChainSelectionPromise m) + -- ^ Trigger reprocessing of blocks postponed by the LoE. + , getCurrentChain :: STM m (AnchoredFragment (Header blk)) + -- ^ Get the current chain fragment + -- + -- Suppose the current chain is + -- + -- > a -> b -> c -> d -> e -> f + -- + -- and suppose @k = 2@; this means that the most distant fork we can + -- switch to is something like + -- + -- > a -> b -> c -> d -> e' -> f' + -- + -- The fragment we return will be @[e, f]@, anchored at @d@. In other + -- words, the length of the fragment will under normal circumstances + -- be exactly @k@ blocks long. It may be shorter if + -- + -- * We are near genesis + -- The anchor will be the genesis point + -- (which does not correspond to an actual block) + -- + -- * The volatile DB suffered some data loss + -- Typically (but not necessarily) the volatile DB will not be empty + -- and the anchor will be pointing to the tip of the immutable DB. + -- + -- POSTCONDITION: The Chain DB will be able to switch to any fork starting + -- from the anchor of the returned fragment or any subsequent block + -- (provided the new fork is at least of the same length as the old). + -- + -- NOTE: A direct consequence of this guarantee is that the anchor of the + -- fragment will move as the chain grows. 
+ , getCurrentChainWithTime :: + STM m (AnchoredFragment (HeaderWithTime blk)) + -- ^ Exactly the same as 'getCurrentChain', except each header is annotated + -- with the 'RelativeTime' of the onset of its slot (translated according + -- to the chain it is on) + -- + -- INVARIANT @'hwtHeader' <$> 'getCurrentChainWithTime' = 'getCurrentChain'@ + , getCurrentLedger :: STM m (ExtLedgerState blk EmptyMK) + -- ^ Get current ledger + , getImmutableLedger :: STM m (ExtLedgerState blk EmptyMK) + -- ^ Get the immutable ledger, i.e., typically @k@ blocks back. + , getPastLedger :: Point blk -> STM m (Maybe (ExtLedgerState blk EmptyMK)) + -- ^ Get the ledger for the given point. + -- + -- When the given point is not among the last @k@ blocks of the current + -- chain (i.e., older than @k@ or not on the current chain), 'Nothing' is + -- returned. + , getHeaderStateHistory :: STM m (HeaderStateHistory blk) + -- ^ Get a 'HeaderStateHistory' populated with the 'HeaderState's of the + -- last @k@ blocks of the current chain. + , getReadOnlyForkerAtPoint :: + ResourceRegistry m -> + Target (Point blk) -> + m (Either GetForkerError (ReadOnlyForker' m blk)) + -- ^ Acquire a read-only forker at a specific point if that point exists + -- in the db. + -- + -- Note that the forker should be closed by the caller of this function. + -- + -- The forker is read-only because a read-write forker could be used to + -- change the internal state of the LedgerDB. + , getTipBlock :: m (Maybe blk) + -- ^ Get block at the tip of the chain, if one exists + -- + -- Returns 'Nothing' if the database is empty. + , getTipHeader :: m (Maybe (Header blk)) + -- ^ Get header at the tip of the chain + -- + -- NOTE: Calling 'getTipHeader' is cheaper than 'getTipBlock' and then + -- extracting the header: most of the time the header at the tip is + -- actually in memory, whereas the block never is. + -- + -- Returns 'Nothing' if the database is empty.
+ , getTipPoint :: STM m (Point blk) + -- ^ Get point of the tip of the chain + -- + -- Will return 'genesisPoint' if the database is empty; if the + -- current chain fragment is empty due to data loss in the volatile DB, + -- 'getTipPoint' will return the tip of the immutable DB. + , getBlockComponent :: + forall b. + BlockComponent blk b -> + RealPoint blk -> + m (Maybe b) + -- ^ Get the given component(s) of the block at the specified point. If + -- there is no block at the given point, 'Nothing' is returned. + , getIsFetched :: STM m (Point blk -> Bool) + -- ^ Return membership check function for recent blocks. This includes + -- blocks in the VolatileDB and blocks that are currently being processed + -- or are waiting in a queue to be processed. + -- + -- This check is only reliable for blocks up to @k@ away from the tip. + -- For blocks older than that the results should be regarded as + -- non-deterministic. + , getIsValid :: STM m (RealPoint blk -> Maybe Bool) + -- ^ Return a function that tells whether a block is known to be valid + -- or invalid. + -- + -- The function will return: + -- + -- * @Just True@: for blocks in the volatile DB that have been validated + -- and were found to be valid. All blocks in the current chain + -- fragment (i.e., 'getCurrentChain') are valid. + -- + -- * @Just False@: for blocks in the volatile DB that have been + -- validated and were found to be invalid. + -- + -- * @Nothing@: for blocks not or no longer in the volatile DB, whether + -- they are valid or not, including blocks in the immutable DB. Also + -- for blocks in the volatile DB that haven't been validated (yet), + -- e.g., because they are disconnected from the current chain or they + -- are part of a shorter fork. + , getMaxSlotNo :: STM m MaxSlotNo + -- ^ Get the highest slot number stored in the ChainDB (this includes + -- blocks that are waiting in the background queue to be processed). 
+ -- + -- Note that the corresponding block doesn't have to be part of the + -- current chain; it could be part of some fork, or even be a + -- disconnected block. + , stream :: + forall b. + ResourceRegistry m -> + BlockComponent blk b -> + StreamFrom blk -> + StreamTo blk -> + m (Either (UnknownRange blk) (Iterator m blk b)) + -- ^ Stream blocks + -- + -- Streaming is not restricted to the current fork, but there must be an + -- unbroken path from the starting point to the end point /at the time + -- of initialization/ of the iterator. Once the iterator has been + -- initialized, it will not be affected by subsequent calls to + -- 'addBlock'. To track the current chain, use a 'Follower' instead. + -- + -- Streaming blocks older than @k@ is permitted, but only when they are + -- part of the current fork (at the time of initialization). Streaming a + -- fork that forks off more than @k@ blocks in the past is not permitted + -- and an 'UnknownRange' error will be returned in that case. + -- + -- The iterator /does/ have a limited lifetime, however. The chain DB + -- internally partitions the chain into an " immutable " part and a + -- " volatile " part, moving blocks from the volatile DB to the immutable + -- DB when they become more than @k@ deep into the chain. When a block + -- with slot number @n@ is added to the immutable DB, a time delay @t@ + -- kicks in; after that time delay expires, all blocks older than @n@ may + -- be removed from the volatile DB, /including any blocks that happen to + -- live on other forks/ (since those forks must now, by definition, be too + -- distant). This time delay @t@ also provides a worst-case bound for the + -- lifetime of the iterator: if the iterator traverses a chain that + -- forks off from our current chain at the tip of the immutable DB, + -- then the first block on that fork will become unavailable as soon as + -- another block is pushed to the current chain and the subsequent + -- time delay expires.
+ -- + -- Note: although blocks are moved from the volatile DB to the immutable + -- DB after they have become @k@ deep into the chain, due to data + -- corruption the suffix of the chain in the volatile DB might be + -- shorter than @k@. The immutable DB /always/ determines the maximum + -- rollback, which may therefore be shorter than @k@ under such + -- circumstances. In addition, streaming blocks which aren't on the + -- current fork is permitted, but the oldest volatile block must fit + -- onto the tip of the immutable DB. + -- + -- When the given bounds are nonsensical, an 'InvalidIteratorRange' is + -- thrown. + -- + -- When the given bounds are not part of the chain DB, an 'UnknownRange' + -- error is returned. + -- + -- To stream all blocks from the current chain, use 'streamAll', as it + -- correctly handles an empty ChainDB. + , newFollower :: + forall b. + ResourceRegistry m -> + ChainType -> + BlockComponent blk b -> + m (Follower m blk b) + -- ^ Chain follower + -- + -- A chain follower is an iterator that tracks the state of the /current/ + -- chain: calling @next@ on the iterator will either give you the next + -- block header, or (if we have switched to a fork) the instruction to + -- roll back. + -- + -- The tracking iterator starts at genesis (see also 'trackForward'). + -- + -- This is intended for use by chain consumers to /reliably/ follow a + -- chain, despite the chain being volatile. + -- + -- Examples of users: + -- * The server side of the chain sync mini-protocol for the + -- node-to-node protocol using headers and the block size. + -- * The server side of the chain sync mini-protocol for the + -- node-to-client protocol using blocks. + , getIsInvalidBlock :: STM m (WithFingerprint (HeaderHash blk -> Maybe (ExtValidationError blk))) + -- ^ Function to check whether a block is known to be invalid. + -- + -- Blocks unknown to the ChainDB will result in 'Nothing'.
+ -- + -- If the hash corresponds to a block that is known to be invalid, but + -- is now older than @k@, this function may return 'Nothing'. + -- + -- Whenever a new invalid block is added, the 'Fingerprint' will be + -- changed. This is useful when \"watching\" this function in a + -- transaction. + -- + -- Note that when invalid blocks are garbage collected and thus no + -- longer detected by this function, the 'Fingerprint' doesn't have to + -- change, since the function will not detect new invalid blocks. + -- + -- It might seem natural to have this function also return whether the + -- ChainDB knows that a block is valid, thereby subsuming the 'getIsValid' + -- function and simplifying the API. However, this adds the overhead of + -- checking whether the block is valid for blocks that are not known to be + -- invalid, which does not give useful information to current clients + -- (ChainSync), since they are only interested in whether a block is known + -- to be invalid. The extra information of whether a block is valid is + -- only used for testing. + -- + -- In particular, this affects the watcher in 'bracketChainSyncClient', + -- which rechecks the blocks in all candidate chains whenever a new + -- invalid block is detected. These blocks are likely to be valid. + , getChainSelStarvation :: STM m ChainSelStarvation + -- ^ Whether ChainSel is currently starved, or when it last + -- stopped being starved. + , getLedgerTablesAtFor :: + Point blk -> + LedgerTables (ExtLedgerState blk) KeysMK -> + m (Maybe (LedgerTables (ExtLedgerState blk) ValuesMK)) + -- ^ Read ledger tables at a given point on the chain, if it exists. + -- + -- This is intended to be used by the mempool to hydrate a ledger state at + -- a specific point. + , getStatistics :: m (Maybe Statistics) + -- ^ Get statistics from the LedgerDB, in particular the number of entries + -- in the tables. + , closeDB :: m () + -- ^ Close the ChainDB + -- + -- Idempotent.
+ -- + -- Should only be called on shutdown. + , isOpen :: STM m Bool + -- ^ Return 'True' when the database is open. + -- + -- 'False' when the database is closed. + } -getCurrentTip :: (Monad (STM m), HasHeader (Header blk)) - => ChainDB m blk -> STM m (Network.Tip blk) +getCurrentTip :: + (Monad (STM m), HasHeader (Header blk)) => + ChainDB m blk -> STM m (Network.Tip blk) getCurrentTip = fmap (AF.anchorToTip . AF.headAnchor) . getCurrentChain -getTipBlockNo :: (Monad (STM m), HasHeader (Header blk)) - => ChainDB m blk -> STM m (WithOrigin BlockNo) +getTipBlockNo :: + (Monad (STM m), HasHeader (Header blk)) => + ChainDB m blk -> STM m (WithOrigin BlockNo) getTipBlockNo = fmap Network.getTipBlockNo . getCurrentTip {------------------------------------------------------------------------------- @@ -421,38 +421,38 @@ getTipBlockNo = fmap Network.getTipBlockNo . getCurrentTip -------------------------------------------------------------------------------} data AddBlockPromise m blk = AddBlockPromise - { blockWrittenToDisk :: STM m Bool - -- ^ Use this 'STM' transaction to wait until the block has been written - -- to disk. - -- - -- Returns 'True' when the block was written to disk or 'False' when it - -- was ignored, e.g., because it was older than @k@. - -- - -- If the 'STM' transaction has returned 'True' then 'getIsFetched' will - -- return 'True' for the added block. - -- - -- NOTE: Even when the result is 'False', 'getIsFetched' might still - -- return 'True', e.g., the block was older than @k@, but it has been - -- downloaded and stored on disk before. - , blockProcessed :: STM m (AddBlockResult blk) - -- ^ Use this 'STM' transaction to wait until the block has been - -- processed: the block has been written to disk and chain selection has - -- been performed for the block, /unless/ the block is from the future. - -- - -- The ChainDB's tip after chain selection is returned. 
When this tip - -- doesn't match the added block, it doesn't necessarily mean the block - -- wasn't adopted. We might have adopted a longer chain of which the - -- added block is a part, but not the tip. - -- - -- It returns 'FailedToAddBlock' if the thread adding the block died. - -- - -- NOTE: When the block is from the future, chain selection for the - -- block won't be performed until the block is no longer in the future, - -- which might take some time. For that reason, this transaction will - -- not wait for chain selection of a block from the future. It will - -- return the current tip of the ChainDB after writing the block to - -- disk. - } + { blockWrittenToDisk :: STM m Bool + -- ^ Use this 'STM' transaction to wait until the block has been written + -- to disk. + -- + -- Returns 'True' when the block was written to disk or 'False' when it + -- was ignored, e.g., because it was older than @k@. + -- + -- If the 'STM' transaction has returned 'True' then 'getIsFetched' will + -- return 'True' for the added block. + -- + -- NOTE: Even when the result is 'False', 'getIsFetched' might still + -- return 'True', e.g., the block was older than @k@, but it has been + -- downloaded and stored on disk before. + , blockProcessed :: STM m (AddBlockResult blk) + -- ^ Use this 'STM' transaction to wait until the block has been + -- processed: the block has been written to disk and chain selection has + -- been performed for the block, /unless/ the block is from the future. + -- + -- The ChainDB's tip after chain selection is returned. When this tip + -- doesn't match the added block, it doesn't necessarily mean the block + -- wasn't adopted. We might have adopted a longer chain of which the + -- added block is a part, but not the tip. + -- + -- It returns 'FailedToAddBlock' if the thread adding the block died. 
+ -- + -- NOTE: When the block is from the future, chain selection for the + -- block won't be performed until the block is no longer in the future, + -- which might take some time. For that reason, this transaction will + -- not wait for chain selection of a block from the future. It will + -- return the current tip of the ChainDB after writing the block to + -- disk. + } -- | This is a wrapper type for 'blockProcessed' function above. -- @@ -461,17 +461,18 @@ data AddBlockPromise m blk = AddBlockPromise -- -- The 'FailedToAddBlock' case will be returned if the thread adding the block -- died. --- -data AddBlockResult blk = SuccesfullyAddedBlock (Point blk) - | FailedToAddBlock String - deriving (Eq, Show) +data AddBlockResult blk + = SuccesfullyAddedBlock (Point blk) + | FailedToAddBlock String + deriving (Eq, Show) -- | Add a block synchronously: wait until the block has been written to disk -- (see 'blockWrittenToDisk'). -addBlockWaitWrittenToDisk :: IOLike m => ChainDB m blk -> InvalidBlockPunishment m -> blk -> m Bool +addBlockWaitWrittenToDisk :: + IOLike m => ChainDB m blk -> InvalidBlockPunishment m -> blk -> m Bool addBlockWaitWrittenToDisk chainDB punish blk = do - promise <- addBlockAsync chainDB punish blk - atomically $ blockWrittenToDisk promise + promise <- addBlockAsync chainDB punish blk + atomically $ blockWrittenToDisk promise -- | Add a block synchronously: wait until the block has been processed (see -- 'blockProcessed'). The new tip of the ChainDB is returned unless the thread adding the @@ -480,18 +481,17 @@ addBlockWaitWrittenToDisk chainDB punish blk = do -- Note: this is a partial function, only to support tests. -- -- PRECONDITION: the block to be added must not be from the future. See 'addBlockAsync'. 
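[Editorial aside, not part of the diff: the synchronous wrappers `addBlockWaitWrittenToDisk` and `addBlock` are thin layers over the promise API. As a hedged sketch (a hypothetical helper, not part of this module, assuming only the types declared here), a caller could observe both stages of an `AddBlockPromise` like this:]

```haskell
-- Hypothetical helper (illustration only): add a block asynchronously and
-- wait for both stages of the resulting 'AddBlockPromise'.
addBlockObserved ::
  IOLike m =>
  ChainDB m blk ->
  InvalidBlockPunishment m ->
  blk ->
  m (Bool, AddBlockResult blk)
addBlockObserved chainDB punish blk = do
  promise <- addBlockAsync chainDB punish blk
  -- Stage 1: the block has been written to disk ('False' if it was
  -- ignored, e.g. because it was older than @k@).
  written <- atomically $ blockWrittenToDisk promise
  -- Stage 2: chain selection has run for the block (unless it is from the
  -- future); the ChainDB tip after chain selection is returned.
  result <- atomically $ blockProcessed promise
  pure (written, result)
```

[The two `atomically` calls correspond exactly to the bodies of `addBlockWaitWrittenToDisk` and `addBlock` in this diff.]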
--- addBlock :: IOLike m => ChainDB m blk -> InvalidBlockPunishment m -> blk -> m (AddBlockResult blk) addBlock chainDB punish blk = do - promise <- addBlockAsync chainDB punish blk - atomically $ blockProcessed promise + promise <- addBlockAsync chainDB punish blk + atomically $ blockProcessed promise -- | Add a block synchronously. Variant of 'addBlock' that doesn't return the -- new tip of the ChainDB. -- -- Note: this is a partial function, only to support tests. addBlock_ :: IOLike m => ChainDB m blk -> InvalidBlockPunishment m -> blk -> m () -addBlock_ = void ..: addBlock +addBlock_ = void ..: addBlock -- | Alias for naming consistency. -- The short name was chosen to avoid a larger diff from alignment changes. @@ -501,8 +501,8 @@ triggerChainSelectionAsync = chainSelAsync -- | A promise that the chain selection will be performed. It is returned by -- 'triggerChainSelectionAsync' and contains a monadic action that waits until -- the corresponding run of Chain Selection is done. -newtype ChainSelectionPromise m = ChainSelectionPromise { - -- NOTE: We might want a mechanism similar to 'AddBlockPromise' and +newtype ChainSelectionPromise m = ChainSelectionPromise + { -- NOTE: We might want a mechanism similar to 'AddBlockPromise' and -- 'AddBlockResult', in case the background ChainDB thread dies; but we -- currently only use the synchronous variant in tests. waitChainSelectionPromise :: m () @@ -512,7 +512,7 @@ newtype ChainSelectionPromise m = ChainSelectionPromise { -- performed. This is a partial function, only to support tests. 
triggerChainSelection :: IOLike m => ChainDB m blk -> m () triggerChainSelection chainDB = - waitChainSelectionPromise =<< chainSelAsync chainDB + waitChainSelectionPromise =<< chainSelAsync chainDB {------------------------------------------------------------------------------- Serialised block/header with its point @@ -524,9 +524,9 @@ triggerChainSelection chainDB = -- point itself of the block or header in question, and we don't want to -- deserialise the block to obtain it. data WithPoint blk b = WithPoint - { withoutPoint :: !b - , point :: !(Point blk) - } + { withoutPoint :: !b + , point :: !(Point blk) + } type instance HeaderHash (WithPoint blk b) = HeaderHash blk instance StandardHash blk => StandardHash (WithPoint blk b) @@ -535,87 +535,90 @@ getPoint :: BlockComponent blk (Point blk) getPoint = BlockPoint <$> GetSlot <*> GetHash getSerialisedBlockWithPoint :: - BlockComponent blk (WithPoint blk (Serialised blk)) + BlockComponent blk (WithPoint blk (Serialised blk)) getSerialisedBlockWithPoint = - WithPoint <$> (Serialised <$> GetRawBlock) <*> getPoint + WithPoint <$> (Serialised <$> GetRawBlock) <*> getPoint getSerialisedHeader :: BlockComponent blk (SerialisedHeader blk) getSerialisedHeader = - curry serialisedHeaderFromPair - <$> GetNestedCtxt - <*> GetRawHeader + curry serialisedHeaderFromPair + <$> GetNestedCtxt + <*> GetRawHeader getSerialisedHeaderWithPoint :: - BlockComponent blk (WithPoint blk (SerialisedHeader blk)) + BlockComponent blk (WithPoint blk (SerialisedHeader blk)) getSerialisedHeaderWithPoint = - WithPoint <$> getSerialisedHeader <*> getPoint + WithPoint <$> getSerialisedHeader <*> getPoint {------------------------------------------------------------------------------- Support for tests -------------------------------------------------------------------------------} toChain :: - forall m blk. (HasCallStack, IOLike m, HasHeader blk) - => ChainDB m blk -> m (Chain blk) + forall m blk. 
+ (HasCallStack, IOLike m, HasHeader blk) => + ChainDB m blk -> m (Chain blk) toChain chainDB = withRegistry $ \registry -> - streamAll chainDB registry GetBlock >>= go Genesis - where - go :: Chain blk -> Iterator m blk blk -> m (Chain blk) - go chain it = do - next <- iteratorNext it - case next of - IteratorResult blk -> go (Chain.addBlock blk chain) it - IteratorExhausted -> return chain - IteratorBlockGCed _ -> - error "block on the current chain was garbage-collected" + streamAll chainDB registry GetBlock >>= go Genesis + where + go :: Chain blk -> Iterator m blk blk -> m (Chain blk) + go chain it = do + next <- iteratorNext it + case next of + IteratorResult blk -> go (Chain.addBlock blk chain) it + IteratorExhausted -> return chain + IteratorBlockGCed _ -> + error "block on the current chain was garbage-collected" fromChain :: - forall m blk. IOLike m - => m (ChainDB m blk) - -> Chain blk - -> m (ChainDB m blk) + forall m blk. + IOLike m => + m (ChainDB m blk) -> + Chain blk -> + m (ChainDB m blk) fromChain openDB chain = do - chainDB <- openDB - mapM_ (addBlock_ chainDB noPunishment) $ Chain.toOldestFirst chain - return chainDB + chainDB <- openDB + mapM_ (addBlock_ chainDB noPunishment) $ Chain.toOldestFirst chain + return chainDB {------------------------------------------------------------------------------- Iterator API -------------------------------------------------------------------------------} -data Iterator m blk b = Iterator { - iteratorNext :: m (IteratorResult blk b) - , iteratorClose :: m () - -- ^ When 'fmap'-ing or 'traverse'-ing (or using 'traverseIterator') an - -- 'Iterator', the resulting iterator will still refer to and use the - -- original one. This means that when either of them is closed, both - -- will be closed in practice. 
- } +data Iterator m blk b = Iterator + { iteratorNext :: m (IteratorResult blk b) + , iteratorClose :: m () + -- ^ When 'fmap'-ing or 'traverse'-ing (or using 'traverseIterator') an + -- 'Iterator', the resulting iterator will still refer to and use the + -- original one. This means that when either of them is closed, both + -- will be closed in practice. + } deriving (Functor, Foldable, Traversable) -- | An iterator that is immediately exhausted. emptyIterator :: Monad m => Iterator m blk b -emptyIterator = Iterator { - iteratorNext = return IteratorExhausted +emptyIterator = + Iterator + { iteratorNext = return IteratorExhausted , iteratorClose = return () } -- | Variant of 'traverse' instantiated to @'Iterator' m blk@ that executes -- the monadic function when calling 'iteratorNext'. traverseIterator :: - Monad m - => (b -> m b') - -> Iterator m blk b - -> Iterator m blk b' -traverseIterator f it = it { - iteratorNext = iteratorNext it >>= traverse f + Monad m => + (b -> m b') -> + Iterator m blk b -> + Iterator m blk b' +traverseIterator f it = + it + { iteratorNext = iteratorNext it >>= traverse f } -data IteratorResult blk b = - IteratorExhausted +data IteratorResult blk b + = IteratorExhausted | IteratorResult b - | IteratorBlockGCed (RealPoint blk) - -- ^ The block that was supposed to be streamed was garbage-collected from + | -- | The block that was supposed to be streamed was garbage-collected from -- the VolatileDB, but not added to the ImmutableDB. -- -- This will only happen when streaming very old forks very slowly. @@ -651,29 +654,31 @@ data IteratorResult blk b = -- would call 'iteratorNext' on @i@ __after__ block @e@ is copied to the -- immutable DB and @c@ and @d@ are garbage collected, then we will get -- 'IteratorBlockGCed'. 
- -- + IteratorBlockGCed (RealPoint blk) deriving (Functor, Foldable, Traversable) -deriving instance (Eq blk, Eq b, StandardHash blk) - => Eq (IteratorResult blk b) -deriving instance (Show blk, Show b, StandardHash blk) - => Show (IteratorResult blk b) +deriving instance + (Eq blk, Eq b, StandardHash blk) => + Eq (IteratorResult blk b) +deriving instance + (Show blk, Show b, StandardHash blk) => + Show (IteratorResult blk b) -data UnknownRange blk = - -- | The block at the given point was not found in the ChainDB. +data UnknownRange blk + = -- | The block at the given point was not found in the ChainDB. MissingBlock (RealPoint blk) - -- | The requested range forks off too far in the past, i.e. it doesn't + | -- | The requested range forks off too far in the past, i.e. it doesn't -- fit on the tip of the ImmutableDB. - | ForkTooOld (StreamFrom blk) + ForkTooOld (StreamFrom blk) deriving (Eq, Show) -- | Stream all blocks from the current chain. streamAll :: - (MonadSTM m, HasHeader blk, HasCallStack) - => ChainDB m blk - -> ResourceRegistry m - -> BlockComponent blk b - -> m (Iterator m blk b) + (MonadSTM m, HasHeader blk, HasCallStack) => + ChainDB m blk -> + ResourceRegistry m -> + BlockComponent blk b -> + m (Iterator m blk b) streamAll = streamFrom (StreamFromExclusive GenesisPoint) -- | Stream blocks from the given point up to the tip from the current chain. @@ -690,26 +695,27 @@ streamAll = streamFrom (StreamFromExclusive GenesisPoint) -- Note that this is not a 'Follower', so the stream will not include blocks -- that are added to the current chain after starting the stream. 
streamFrom :: - (MonadSTM m, HasHeader blk, HasCallStack) - => StreamFrom blk - -> ChainDB m blk - -> ResourceRegistry m - -> BlockComponent blk b - -> m (Iterator m blk b) + (MonadSTM m, HasHeader blk, HasCallStack) => + StreamFrom blk -> + ChainDB m blk -> + ResourceRegistry m -> + BlockComponent blk b -> + m (Iterator m blk b) streamFrom from db registry blockComponent = do - tip <- atomically $ getTipPoint db - case pointToWithOriginRealPoint tip of - Origin -> return emptyIterator - NotOrigin tip' -> do - errIt <- stream - db - registry - blockComponent - from - (StreamToInclusive tip') - case errIt of - Right it -> return it - Left e -> error $ "failed to stream from genesis to tip: " <> show e + tip <- atomically $ getTipPoint db + case pointToWithOriginRealPoint tip of + Origin -> return emptyIterator + NotOrigin tip' -> do + errIt <- + stream + db + registry + blockComponent + from + (StreamToInclusive tip') + case errIt of + Right it -> return it + Left e -> error $ "failed to stream from genesis to tip: " <> show e {------------------------------------------------------------------------------- Followers @@ -740,70 +746,68 @@ data ChainType = SelectedChain | TentativeChain -- the next 'ChainUpdate' wrt the follower's implicit position. -- -- The type parameter @a@ will be instantiated with @blk@ or @'Header' blk@. -data Follower m blk a = Follower { - -- | The next chain update (if one exists) - -- - -- The 'AddBlock' instruction (see 'ChainUpdate') indicates that, to - -- follow the current chain, the follower should extend its chain with the - -- given block component (which will be a value of type 'a'). - -- - -- The 'RollBack' instruction indicates that the follower should perform a - -- rollback by first backtracking to a certain point. - -- - -- If a follower should switch to a fork, then it will first receive a - -- 'RollBack' instruction followed by as many 'AddBlock' as necessary to - -- reach the tip of the new chain. 
- -- - -- When the follower's (implicit) position is in the immutable part of the - -- chain, no rollback instructions will be encountered. - -- - -- Not in @STM@ because might have to read the blocks or headers from - -- disk. - -- - -- We may roll back more than @k@, but only in case of data loss. - followerInstruction :: m (Maybe (ChainUpdate blk a)) - - -- | Blocking version of 'followerInstruction' - , followerInstructionBlocking :: m (ChainUpdate blk a) - - -- | Move the follower forward - -- - -- Must be given a list of points in order of preference; the iterator - -- will move forward to the first point on the list that is on the current - -- chain. Returns 'Nothing' if the iterator did not move, or the new point - -- otherwise. - -- - -- When successful, the first call to 'followerInstruction' after - -- 'followerForward' will be a 'RollBack' to the point returned by - -- 'followerForward'. - -- - -- Cannot live in @STM@ because the points specified might live in the - -- immutable DB. - , followerForward :: [Point blk] -> m (Maybe (Point blk)) - - -- | Close the follower. - -- - -- Idempotent. - -- - -- After closing, all other operations on the follower will throw - -- 'ClosedFollowerError'. - , followerClose :: m () - } - deriving (Functor) +data Follower m blk a = Follower + { followerInstruction :: m (Maybe (ChainUpdate blk a)) + -- ^ The next chain update (if one exists) + -- + -- The 'AddBlock' instruction (see 'ChainUpdate') indicates that, to + -- follow the current chain, the follower should extend its chain with the + -- given block component (which will be a value of type 'a'). + -- + -- The 'RollBack' instruction indicates that the follower should perform a + -- rollback by first backtracking to a certain point. + -- + -- If a follower should switch to a fork, then it will first receive a + -- 'RollBack' instruction followed by as many 'AddBlock' as necessary to + -- reach the tip of the new chain. 
+ -- + -- When the follower's (implicit) position is in the immutable part of the + -- chain, no rollback instructions will be encountered. + -- + -- Not in @STM@ because it might have to read the blocks or headers from + -- disk. + -- + -- We may roll back more than @k@, but only in case of data loss. + , followerInstructionBlocking :: m (ChainUpdate blk a) + -- ^ Blocking version of 'followerInstruction' + , followerForward :: [Point blk] -> m (Maybe (Point blk)) + -- ^ Move the follower forward + -- + -- Must be given a list of points in order of preference; the iterator + -- will move forward to the first point on the list that is on the current + -- chain. Returns 'Nothing' if the iterator did not move, or the new point + -- otherwise. + -- + -- When successful, the first call to 'followerInstruction' after + -- 'followerForward' will be a 'RollBack' to the point returned by + -- 'followerForward'. + -- + -- Cannot live in @STM@ because the points specified might live in the + -- immutable DB. + , followerClose :: m () + -- ^ Close the follower. + -- + -- Idempotent. + -- + -- After closing, all other operations on the follower will throw + -- 'ClosedFollowerError'. + } + deriving Functor -- | Variant of 'traverse' instantiated to @'Follower' m blk@ that executes the -- monadic function when calling 'followerInstruction' and -- 'followerInstructionBlocking'.
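[Editorial aside, not part of the diff: a hedged sketch of how these instructions are typically consumed. The helper below is hypothetical; it assumes the mock `Chain` type already used by `toChain` in this module, and a `Chain.rollback` with the usual `Maybe`-returning signature.]

```haskell
-- Hypothetical consumer (illustration only): replay all instructions that
-- are currently available, applying them to an in-memory 'Chain', and stop
-- once we have caught up with the current chain.
followCatchUp ::
  (Monad m, HasHeader blk) =>
  Follower m blk blk ->
  Chain blk ->
  m (Chain blk)
followCatchUp flr = go
 where
  go chain = do
    minstruction <- followerInstruction flr
    case minstruction of
      -- No pending instruction: we are at the tip of the current chain.
      Nothing -> pure chain
      -- Extend our chain with the next block.
      Just (AddBlock blk) -> go (Chain.addBlock blk chain)
      -- We switched to a fork: backtrack to the given point; the blocks of
      -- the new fork follow as 'AddBlock' instructions.
      Just (RollBack pt) -> case Chain.rollback pt chain of
        Just chain' -> go chain'
        Nothing -> error "follower rolled back past our anchor"
```

[For example, `followCatchUp flr Genesis` would replay the whole current chain, since a fresh follower starts at genesis.]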
traverseFollower :: - Monad m - => (b -> m b') - -> Follower m blk b - -> Follower m blk b' -traverseFollower f flr = Follower - { followerInstruction = followerInstruction flr >>= traverse (traverse f) + Monad m => + (b -> m b') -> + Follower m blk b -> + Follower m blk b' +traverseFollower f flr = + Follower + { followerInstruction = followerInstruction flr >>= traverse (traverse f) , followerInstructionBlocking = followerInstructionBlocking flr >>= traverse f - , followerForward = followerForward flr - , followerClose = followerClose flr + , followerForward = followerForward flr + , followerClose = followerClose flr } {------------------------------------------------------------------------------- @@ -819,28 +823,27 @@ traverseFollower f flr = Follower -- what went wrong, in case sysadmins want to investigate the disk failure. -- The Chain DB itself does not differentiate; all disk failures are treated -- equal and all trigger the same recovery procedure. -data ChainDbFailure blk = - -- | The ledger DB threw a file-system error +data ChainDbFailure blk + = -- | The ledger DB threw a file-system error LgrDbFailure FsError - - -- | Block missing from the chain DB + | -- | Block missing from the chain DB -- -- Thrown when we are not sure in which DB the block /should/ have been. - | ChainDbMissingBlock (RealPoint blk) + ChainDbMissingBlock (RealPoint blk) deriving instance StandardHash blk => Show (ChainDbFailure blk) instance (Typeable blk, StandardHash blk) => Exception (ChainDbFailure blk) where displayException = \case - LgrDbFailure fse -> fsError fse - ChainDbMissingBlock {} -> corruption - where - corruption = - "The database got corrupted, full validation will be enabled for the next startup" + LgrDbFailure fse -> fsError fse + ChainDbMissingBlock{} -> corruption + where + corruption = + "The database got corrupted, full validation will be enabled for the next startup" - -- The output will be a bit too detailed, but it will be quite clear. 
- fsError :: FsError -> String - fsError = displayException + -- The output will be a bit too detailed, but it will be quite clear. + fsError :: FsError -> String + fsError = displayException {------------------------------------------------------------------------------- Exceptions @@ -849,27 +852,25 @@ instance (Typeable blk, StandardHash blk) => Exception (ChainDbFailure blk) wher -- | Database error -- -- Thrown upon incorrect use: invalid input. -data ChainDbError blk = - -- | The ChainDB is closed. +data ChainDbError blk + = -- | The ChainDB is closed. -- -- This will be thrown when performing any operation on the ChainDB except -- for 'isOpen' and 'closeDB'. The 'CallStack' of the operation on the -- ChainDB is included in the error. ClosedDBError PrettyCallStack - - -- | The follower is closed. + | -- | The follower is closed. -- -- This will be thrown when performing any operation on a closed followers, -- except for 'followerClose'. - | ClosedFollowerError - - -- | When there is no chain/fork that satisfies the bounds passed to + ClosedFollowerError + | -- | When there is no chain/fork that satisfies the bounds passed to -- 'streamBlocks'. -- -- * The lower and upper bound are not on the same chain. -- * The bounds don't make sense, e.g., the lower bound starts after the -- upper bound, or the lower bound starts from genesis, /inclusive/. - | InvalidIteratorRange (StreamFrom blk) (StreamTo blk) + InvalidIteratorRange (StreamFrom blk) (StreamTo blk) deriving instance (Typeable blk, StandardHash blk) => Show (ChainDbError blk) @@ -879,13 +880,12 @@ instance (Typeable blk, StandardHash blk) => Exception (ChainDbError blk) where -- more information about the specific will have been thrown. This -- exception will only be thrown if some thread still tries to use the -- ChainDB afterwards, which should not happen. 
- ClosedDBError {} -> + ClosedDBError{} -> "The database was used after it was closed because it encountered an unrecoverable error" - -- The user won't see the exceptions below, they are not fatal. - ClosedFollowerError {} -> + ClosedFollowerError{} -> "The block/header follower was used after it was closed" - InvalidIteratorRange {} -> + InvalidIteratorRange{} -> "An invalid range of blocks was requested" -- | The Limit on Eagerness (LoE) is a mechanism for keeping ChainSel from @@ -907,14 +907,12 @@ instance (Typeable blk, StandardHash blk) => Exception (ChainDbError blk) where -- There is no a priori meaning assigned to the type parameter @a@. -- @LoE a@ is isomorphic to @Maybe a@, with the added meaning that -- @Just/LoEEnabled@ is only used when the LoE is enabled. --- -data LoE a = - -- | The LoE is disabled, so ChainSel will not keep the selection from - -- advancing. - LoEDisabled - | - -- | The LoE is enabled. - LoEEnabled !a +data LoE a + = -- | The LoE is disabled, so ChainSel will not keep the selection from + -- advancing. + LoEDisabled + | -- | The LoE is enabled. 
+ LoEEnabled !a deriving (Eq, Show, Generic, NoThunks, Functor, Foldable, Traversable) -- | Get the current LoE fragment (if the LoE is enabled), see 'LoE' for more diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/API/Types/InvalidBlockPunishment.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/API/Types/InvalidBlockPunishment.hs index 74f845c5e3..908b66b7f8 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/API/Types/InvalidBlockPunishment.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/API/Types/InvalidBlockPunishment.hs @@ -5,10 +5,11 @@ {-# LANGUAGE TypeApplications #-} -- | How to punish the sender of a invalid block -module Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment ( - -- * opaque +module Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment + ( -- * opaque InvalidBlockPunishment , enact + -- * combinators , Invalidity (..) , branch @@ -17,15 +18,15 @@ module Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment ( , noPunishment ) where -import qualified Control.Exception as Exn -import Control.Monad (join) -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Util.IOLike +import Control.Exception qualified as Exn +import Control.Monad (join) +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Util.IOLike -- | Is the added block itself invalid, or is its prefix invalid? -data Invalidity = - BlockItself +data Invalidity + = BlockItself | BlockPrefix -- | How to handle a discovered 'Invalidity' @@ -34,11 +35,12 @@ data Invalidity = -- of where it is invoked during the chain selection. As a result, arbitrary -- monadic actions would be foot guns. Instead, this module defines a small DSL -- for punishment that we judge to be sound. 
-newtype InvalidBlockPunishment m = InvalidBlockPunishment { - enact :: Invalidity -> m () +newtype InvalidBlockPunishment m = InvalidBlockPunishment + { enact :: Invalidity -> m () } - deriving NoThunks via - OnlyCheckWhnfNamed "InvalidBlockPunishment" (InvalidBlockPunishment m) + deriving + NoThunks + via OnlyCheckWhnfNamed "InvalidBlockPunishment" (InvalidBlockPunishment m) -- | A noop punishment noPunishment :: Applicative m => InvalidBlockPunishment m @@ -47,42 +49,45 @@ noPunishment = InvalidBlockPunishment $ \_invalidity -> pure () -- | Create a punishment that kills this thread mkPunishThisThread :: IOLike m => m (InvalidBlockPunishment m) mkPunishThisThread = do - tid <- myThreadId - pure $ InvalidBlockPunishment $ \_invalidity -> - throwTo tid PeerSentAnInvalidBlockException + tid <- myThreadId + pure $ InvalidBlockPunishment $ \_invalidity -> + throwTo tid PeerSentAnInvalidBlockException -- | Thrown asynchronously to the client thread that added the block whose -- processing involved an invalid block. -- -- See 'punishThisThread'. data PeerSentAnInvalidBlockException = PeerSentAnInvalidBlockException - deriving (Show) + deriving Show instance Exn.Exception PeerSentAnInvalidBlockException -- | Allocate a stateful punishment that performs the given punishment if the -- given header does not satisfy the diffusion pipelining criterion. -mkForDiffusionPipelining :: forall m blk. - ( IOLike m - , BlockSupportsDiffusionPipelining blk - ) - => STM m ( BlockConfig blk - -> Header blk - -> InvalidBlockPunishment m - -> InvalidBlockPunishment m - ) +mkForDiffusionPipelining :: + forall m blk. 
+ ( IOLike m + , BlockSupportsDiffusionPipelining blk + ) => + STM + m + ( BlockConfig blk -> + Header blk -> + InvalidBlockPunishment m -> + InvalidBlockPunishment m + ) mkForDiffusionPipelining = do - var <- newTVar (initialTentativeHeaderState (Proxy @blk)) - pure $ \cfg new punish -> InvalidBlockPunishment $ \invalidity -> join $ atomically $ do - mbSt' <- updateTentativeHeaderState cfg new <$> readTVar var - case mbSt' of - Just st' -> do - writeTVar var st' - pure $ pure () - Nothing -> - pure $ enact punish invalidity + var <- newTVar (initialTentativeHeaderState (Proxy @blk)) + pure $ \cfg new punish -> InvalidBlockPunishment $ \invalidity -> join $ atomically $ do + mbSt' <- updateTentativeHeaderState cfg new <$> readTVar var + case mbSt' of + Just st' -> do + writeTVar var st' + pure $ pure () + Nothing -> + pure $ enact punish invalidity -- | Punish according to the 'Invalidity' branch :: (Invalidity -> InvalidBlockPunishment m) -> InvalidBlockPunishment m branch f = InvalidBlockPunishment $ \invalidity -> - enact (f invalidity) invalidity + enact (f invalidity) invalidity diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl.hs index 83ee707fdb..f30c0e8a6b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl.hs @@ -5,13 +5,14 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Storage.ChainDB.Impl ( - -- * Initialization +module Ouroboros.Consensus.Storage.ChainDB.Impl + ( -- * Initialization ChainDbArgs (..) , SerialiseDiskConstraints , defaultArgs , openDB , withDB + -- * Trace types , SelectionChangedInfo (..) , TraceAddBlockEvent (..) @@ -25,316 +26,342 @@ module Ouroboros.Consensus.Storage.ChainDB.Impl ( , TraceOpenEvent (..) 
, TracePipeliningEvent (..) , TraceValidationEvent (..) + -- * Re-exported for convenience , Args.RelativeMountPoint (..) , ImmutableDB.ImmutableDbSerialiseConstraints , LedgerDB.LedgerDbSerialiseConstraints , VolatileDB.VolatileDbSerialiseConstraints + -- * Internals for testing purposes , Internal (..) , openDBInternal ) where -import Control.Monad (void, when) -import Control.Monad.Trans.Class (lift) -import Control.ResourceRegistry (WithTempRegistry, allocate, - runInnerWithTempRegistry, runWithTempRegistry, - withRegistry) -import Control.Tracer -import Data.Functor ((<&>)) -import Data.Functor.Contravariant ((>$<)) -import qualified Data.Map.Strict as Map -import Data.Maybe.Strict (StrictMaybe (..)) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import qualified Ouroboros.Consensus.Fragment.Validated as VF -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.HeaderValidation (mkHeaderWithTime) -import Ouroboros.Consensus.Ledger.Extended (ledgerState) -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Storage.ChainDB.API (ChainDB) -import qualified Ouroboros.Consensus.Storage.ChainDB.API as API -import Ouroboros.Consensus.Storage.ChainDB.Impl.Args (ChainDbArgs, - defaultArgs) -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Args as Args -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Background as Background -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.ChainSel as ChainSel -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Follower as Follower -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Iterator as Iterator -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Query as Query -import Ouroboros.Consensus.Storage.ChainDB.Impl.Types -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import qualified 
Ouroboros.Consensus.Storage.ImmutableDB.Stream as ImmutableDB -import Ouroboros.Consensus.Storage.LedgerDB (LedgerSupportsLedgerDB) -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import Ouroboros.Consensus.Util (newFuse, whenJust, withFuse) -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.STM (Fingerprint (..), - WithFingerprint (..)) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.BlockFetch.ConsensusInterface - (ChainSelStarvation (..)) +import Control.Monad (void, when) +import Control.Monad.Trans.Class (lift) +import Control.ResourceRegistry + ( WithTempRegistry + , allocate + , runInnerWithTempRegistry + , runWithTempRegistry + , withRegistry + ) +import Control.Tracer +import Data.Functor ((<&>)) +import Data.Functor.Contravariant ((>$<)) +import Data.Map.Strict qualified as Map +import Data.Maybe.Strict (StrictMaybe (..)) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Fragment.Validated qualified as VF +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HeaderValidation (mkHeaderWithTime) +import Ouroboros.Consensus.Ledger.Extended (ledgerState) +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Storage.ChainDB.API (ChainDB) +import Ouroboros.Consensus.Storage.ChainDB.API qualified as API +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args + ( ChainDbArgs + , defaultArgs + ) +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args qualified as Args +import Ouroboros.Consensus.Storage.ChainDB.Impl.Background qualified as Background +import Ouroboros.Consensus.Storage.ChainDB.Impl.ChainSel qualified as ChainSel +import Ouroboros.Consensus.Storage.ChainDB.Impl.Follower qualified as Follower +import 
Ouroboros.Consensus.Storage.ChainDB.Impl.Iterator qualified as Iterator +import Ouroboros.Consensus.Storage.ChainDB.Impl.Query qualified as Query +import Ouroboros.Consensus.Storage.ChainDB.Impl.Types +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.ImmutableDB.Stream qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB (LedgerSupportsLedgerDB) +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import Ouroboros.Consensus.Util (newFuse, whenJust, withFuse) +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.STM + ( Fingerprint (..) + , WithFingerprint (..) + ) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.BlockFetch.ConsensusInterface + ( ChainSelStarvation (..) + ) {------------------------------------------------------------------------------- Initialization -------------------------------------------------------------------------------} withDB :: - forall m blk a. - ( IOLike m - , LedgerSupportsProtocol blk - , BlockSupportsDiffusionPipelining blk - , InspectLedger blk - , HasHardForkHistory blk - , ConvertRawHash blk - , SerialiseDiskConstraints blk - , LedgerSupportsLedgerDB blk - ) - => Complete Args.ChainDbArgs m blk - -> (ChainDB m blk -> m a) - -> m a + forall m blk a. + ( IOLike m + , LedgerSupportsProtocol blk + , BlockSupportsDiffusionPipelining blk + , InspectLedger blk + , HasHardForkHistory blk + , ConvertRawHash blk + , SerialiseDiskConstraints blk + , LedgerSupportsLedgerDB blk + ) => + Complete Args.ChainDbArgs m blk -> + (ChainDB m blk -> m a) -> + m a withDB args = bracket (fst <$> openDBInternal args True) API.closeDB openDB :: - forall m blk. 
- ( IOLike m - , LedgerSupportsProtocol blk - , BlockSupportsDiffusionPipelining blk - , InspectLedger blk - , HasHardForkHistory blk - , ConvertRawHash blk - , SerialiseDiskConstraints blk - , LedgerSupportsLedgerDB blk - ) - => Complete Args.ChainDbArgs m blk - -> m (ChainDB m blk) + forall m blk. + ( IOLike m + , LedgerSupportsProtocol blk + , BlockSupportsDiffusionPipelining blk + , InspectLedger blk + , HasHardForkHistory blk + , ConvertRawHash blk + , SerialiseDiskConstraints blk + , LedgerSupportsLedgerDB blk + ) => + Complete Args.ChainDbArgs m blk -> + m (ChainDB m blk) openDB args = fst <$> openDBInternal args True openDBInternal :: - forall m blk. - ( IOLike m - , LedgerSupportsProtocol blk - , BlockSupportsDiffusionPipelining blk - , InspectLedger blk - , HasHardForkHistory blk - , ConvertRawHash blk - , SerialiseDiskConstraints blk - , HasCallStack - , LedgerSupportsLedgerDB blk - ) - => Complete Args.ChainDbArgs m blk - -> Bool -- ^ 'True' = Launch background tasks - -> m (ChainDB m blk, Internal m blk) + forall m blk. 
+ ( IOLike m + , LedgerSupportsProtocol blk + , BlockSupportsDiffusionPipelining blk + , InspectLedger blk + , HasHardForkHistory blk + , ConvertRawHash blk + , SerialiseDiskConstraints blk + , HasCallStack + , LedgerSupportsLedgerDB blk + ) => + Complete Args.ChainDbArgs m blk -> + -- | 'True' = Launch background tasks + Bool -> + m (ChainDB m blk, Internal m blk) openDBInternal args launchBgTasks = runWithTempRegistry $ do - lift $ traceWith tracer $ TraceOpenEvent StartedOpeningDB - lift $ traceWith tracer $ TraceOpenEvent StartedOpeningImmutableDB - immutableDB <- ImmutableDB.openDB argsImmutableDb $ innerOpenCont ImmutableDB.closeDB - immutableDbTipPoint <- lift $ atomically $ ImmutableDB.getTipPoint immutableDB - let immutableDbTipChunk = - chunkIndexOfPoint (ImmutableDB.immChunkInfo argsImmutableDb) immutableDbTipPoint - lift $ traceWith tracer $ + lift $ traceWith tracer $ TraceOpenEvent StartedOpeningDB + lift $ traceWith tracer $ TraceOpenEvent StartedOpeningImmutableDB + immutableDB <- ImmutableDB.openDB argsImmutableDb $ innerOpenCont ImmutableDB.closeDB + immutableDbTipPoint <- lift $ atomically $ ImmutableDB.getTipPoint immutableDB + let immutableDbTipChunk = + chunkIndexOfPoint (ImmutableDB.immChunkInfo argsImmutableDb) immutableDbTipPoint + lift $ + traceWith tracer $ TraceOpenEvent $ OpenedImmutableDB immutableDbTipPoint immutableDbTipChunk - lift $ traceWith tracer $ TraceOpenEvent StartedOpeningVolatileDB - volatileDB <- VolatileDB.openDB argsVolatileDb $ innerOpenCont VolatileDB.closeDB - maxSlot <- lift $ atomically $ VolatileDB.getMaxSlotNo volatileDB - (chainDB, testing, env) <- lift $ do - traceWith tracer $ TraceOpenEvent (OpenedVolatileDB maxSlot) - traceWith tracer $ TraceOpenEvent StartedOpeningLgrDB - (lgrDB, replayed) <- LedgerDB.openDB - argsLgrDb - (ImmutableDB.streamAPI immutableDB) - immutableDbTipPoint - (Query.getAnyKnownBlock immutableDB volatileDB) - traceWith tracer $ TraceOpenEvent OpenedLgrDB - - varInvalid <- newTVarIO 
(WithFingerprint Map.empty (Fingerprint 0)) - - let initChainSelTracer = TraceInitChainSelEvent >$< tracer - - traceWith initChainSelTracer StartedInitChainSelection - initialLoE <- Args.cdbsLoE cdbSpecificArgs - chain <- withRegistry $ \rr -> do - chainAndLedger <- ChainSel.initialChainSelection - immutableDB - volatileDB - lgrDB - rr - initChainSelTracer - (Args.cdbsTopLevelConfig cdbSpecificArgs) - varInvalid - (void initialLoE) - traceWith initChainSelTracer InitialChainSelected - - let chain = VF.validatedFragment chainAndLedger - ledger = VF.validatedLedger chainAndLedger - - atomically $ LedgerDB.forkerCommit ledger - LedgerDB.forkerClose ledger - pure chain - LedgerDB.tryFlush lgrDB - - curLedger <- atomically $ LedgerDB.getVolatileTip lgrDB - let lcfg = configLedger (Args.cdbsTopLevelConfig cdbSpecificArgs) - - -- the volatile tip ledger state can translate the slots of the volatile - -- headers - chainWithTime = - AF.mapAnchoredFragment - (mkHeaderWithTime - lcfg - (ledgerState curLedger) - ) - chain - - varChain <- newTVarWithInvariantIO checkInternalChain $ InternalChain chain chainWithTime - varTentativeState <- newTVarIO $ initialTentativeHeaderState (Proxy @blk) - varTentativeHeader <- newTVarIO SNothing - varIterators <- newTVarIO Map.empty - varFollowers <- newTVarIO Map.empty - varNextIteratorKey <- newTVarIO (IteratorKey 0) - varNextFollowerKey <- newTVarIO (FollowerKey 0) - varKillBgThreads <- newTVarIO $ return () - copyFuse <- newFuse "copy to immutable db" - chainSelFuse <- newFuse "chain selection" - chainSelQueue <- newChainSelQueue (Args.cdbsBlocksToAddSize cdbSpecificArgs) - varChainSelStarvation <- newTVarIO ChainSelStarvationOngoing - - let env = CDB { cdbImmutableDB = immutableDB - , cdbVolatileDB = volatileDB - , cdbLedgerDB = lgrDB - , cdbChain = varChain - , cdbTentativeState = varTentativeState - , cdbTentativeHeader = varTentativeHeader - , cdbIterators = varIterators - , cdbFollowers = varFollowers - , cdbTopLevelConfig = 
Args.cdbsTopLevelConfig cdbSpecificArgs - , cdbInvalid = varInvalid - , cdbNextIteratorKey = varNextIteratorKey - , cdbNextFollowerKey = varNextFollowerKey - , cdbCopyFuse = copyFuse - , cdbChainSelFuse = chainSelFuse - , cdbTracer = tracer - , cdbRegistry = Args.cdbsRegistry cdbSpecificArgs - , cdbGcDelay = Args.cdbsGcDelay cdbSpecificArgs - , cdbGcInterval = Args.cdbsGcInterval cdbSpecificArgs - , cdbKillBgThreads = varKillBgThreads - , cdbChainSelQueue = chainSelQueue - , cdbLoE = Args.cdbsLoE cdbSpecificArgs - , cdbChainSelStarvation = varChainSelStarvation - } - h <- fmap CDBHandle $ newTVarIO $ ChainDbOpen env - let chainDB = API.ChainDB - { addBlockAsync = getEnv2 h ChainSel.addBlockAsync - , chainSelAsync = getEnv h ChainSel.triggerChainSelectionAsync - , getCurrentChain = getEnvSTM h Query.getCurrentChain - , getCurrentChainWithTime = getEnvSTM h Query.getCurrentChainWithTime - , getTipBlock = getEnv h Query.getTipBlock - , getTipHeader = getEnv h Query.getTipHeader - , getTipPoint = getEnvSTM h Query.getTipPoint - , getBlockComponent = getEnv2 h Query.getBlockComponent - , getIsFetched = getEnvSTM h Query.getIsFetched - , getIsValid = getEnvSTM h Query.getIsValid - , getMaxSlotNo = getEnvSTM h Query.getMaxSlotNo - , stream = Iterator.stream h - , newFollower = Follower.newFollower h - , getIsInvalidBlock = getEnvSTM h Query.getIsInvalidBlock - , getChainSelStarvation = getEnvSTM h Query.getChainSelStarvation - , closeDB = closeDB h - , isOpen = isOpen h - , getCurrentLedger = getEnvSTM h Query.getCurrentLedger - , getImmutableLedger = getEnvSTM h Query.getImmutableLedger - , getPastLedger = getEnvSTM1 h Query.getPastLedger - , getHeaderStateHistory = getEnvSTM h Query.getHeaderStateHistory - , getReadOnlyForkerAtPoint = getEnv2 h Query.getReadOnlyForkerAtPoint - , getLedgerTablesAtFor = getEnv2 h Query.getLedgerTablesAtFor - , getStatistics = getEnv h Query.getStatistics + lift $ traceWith tracer $ TraceOpenEvent StartedOpeningVolatileDB + volatileDB <- 
VolatileDB.openDB argsVolatileDb $ innerOpenCont VolatileDB.closeDB + maxSlot <- lift $ atomically $ VolatileDB.getMaxSlotNo volatileDB + (chainDB, testing, env) <- lift $ do + traceWith tracer $ TraceOpenEvent (OpenedVolatileDB maxSlot) + traceWith tracer $ TraceOpenEvent StartedOpeningLgrDB + (lgrDB, replayed) <- + LedgerDB.openDB + argsLgrDb + (ImmutableDB.streamAPI immutableDB) + immutableDbTipPoint + (Query.getAnyKnownBlock immutableDB volatileDB) + traceWith tracer $ TraceOpenEvent OpenedLgrDB + + varInvalid <- newTVarIO (WithFingerprint Map.empty (Fingerprint 0)) + + let initChainSelTracer = TraceInitChainSelEvent >$< tracer + + traceWith initChainSelTracer StartedInitChainSelection + initialLoE <- Args.cdbsLoE cdbSpecificArgs + chain <- withRegistry $ \rr -> do + chainAndLedger <- + ChainSel.initialChainSelection + immutableDB + volatileDB + lgrDB + rr + initChainSelTracer + (Args.cdbsTopLevelConfig cdbSpecificArgs) + varInvalid + (void initialLoE) + traceWith initChainSelTracer InitialChainSelected + + let chain = VF.validatedFragment chainAndLedger + ledger = VF.validatedLedger chainAndLedger + + atomically $ LedgerDB.forkerCommit ledger + LedgerDB.forkerClose ledger + pure chain + LedgerDB.tryFlush lgrDB + + curLedger <- atomically $ LedgerDB.getVolatileTip lgrDB + let lcfg = configLedger (Args.cdbsTopLevelConfig cdbSpecificArgs) + + -- the volatile tip ledger state can translate the slots of the volatile + -- headers + chainWithTime = + AF.mapAnchoredFragment + ( mkHeaderWithTime + lcfg + (ledgerState curLedger) + ) + chain + + varChain <- newTVarWithInvariantIO checkInternalChain $ InternalChain chain chainWithTime + varTentativeState <- newTVarIO $ initialTentativeHeaderState (Proxy @blk) + varTentativeHeader <- newTVarIO SNothing + varIterators <- newTVarIO Map.empty + varFollowers <- newTVarIO Map.empty + varNextIteratorKey <- newTVarIO (IteratorKey 0) + varNextFollowerKey <- newTVarIO (FollowerKey 0) + varKillBgThreads <- newTVarIO $ return () + 
copyFuse <- newFuse "copy to immutable db" + chainSelFuse <- newFuse "chain selection" + chainSelQueue <- newChainSelQueue (Args.cdbsBlocksToAddSize cdbSpecificArgs) + varChainSelStarvation <- newTVarIO ChainSelStarvationOngoing + + let env = + CDB + { cdbImmutableDB = immutableDB + , cdbVolatileDB = volatileDB + , cdbLedgerDB = lgrDB + , cdbChain = varChain + , cdbTentativeState = varTentativeState + , cdbTentativeHeader = varTentativeHeader + , cdbIterators = varIterators + , cdbFollowers = varFollowers + , cdbTopLevelConfig = Args.cdbsTopLevelConfig cdbSpecificArgs + , cdbInvalid = varInvalid + , cdbNextIteratorKey = varNextIteratorKey + , cdbNextFollowerKey = varNextFollowerKey + , cdbCopyFuse = copyFuse + , cdbChainSelFuse = chainSelFuse + , cdbTracer = tracer + , cdbRegistry = Args.cdbsRegistry cdbSpecificArgs + , cdbGcDelay = Args.cdbsGcDelay cdbSpecificArgs + , cdbGcInterval = Args.cdbsGcInterval cdbSpecificArgs + , cdbKillBgThreads = varKillBgThreads + , cdbChainSelQueue = chainSelQueue + , cdbLoE = Args.cdbsLoE cdbSpecificArgs + , cdbChainSelStarvation = varChainSelStarvation } - addBlockTestFuse <- newFuse "test chain selection" - copyTestFuse <- newFuse "test copy to immutable db" - let testing = Internal - { intCopyToImmutableDB = getEnv h (withFuse copyTestFuse . 
Background.copyToImmutableDB) - , intGarbageCollect = getEnv1 h Background.garbageCollect - , intTryTakeSnapshot = getEnv h $ \env' -> + h <- fmap CDBHandle $ newTVarIO $ ChainDbOpen env + let chainDB = + API.ChainDB + { addBlockAsync = getEnv2 h ChainSel.addBlockAsync + , chainSelAsync = getEnv h ChainSel.triggerChainSelectionAsync + , getCurrentChain = getEnvSTM h Query.getCurrentChain + , getCurrentChainWithTime = getEnvSTM h Query.getCurrentChainWithTime + , getTipBlock = getEnv h Query.getTipBlock + , getTipHeader = getEnv h Query.getTipHeader + , getTipPoint = getEnvSTM h Query.getTipPoint + , getBlockComponent = getEnv2 h Query.getBlockComponent + , getIsFetched = getEnvSTM h Query.getIsFetched + , getIsValid = getEnvSTM h Query.getIsValid + , getMaxSlotNo = getEnvSTM h Query.getMaxSlotNo + , stream = Iterator.stream h + , newFollower = Follower.newFollower h + , getIsInvalidBlock = getEnvSTM h Query.getIsInvalidBlock + , getChainSelStarvation = getEnvSTM h Query.getChainSelStarvation + , closeDB = closeDB h + , isOpen = isOpen h + , getCurrentLedger = getEnvSTM h Query.getCurrentLedger + , getImmutableLedger = getEnvSTM h Query.getImmutableLedger + , getPastLedger = getEnvSTM1 h Query.getPastLedger + , getHeaderStateHistory = getEnvSTM h Query.getHeaderStateHistory + , getReadOnlyForkerAtPoint = getEnv2 h Query.getReadOnlyForkerAtPoint + , getLedgerTablesAtFor = getEnv2 h Query.getLedgerTablesAtFor + , getStatistics = getEnv h Query.getStatistics + } + addBlockTestFuse <- newFuse "test chain selection" + copyTestFuse <- newFuse "test copy to immutable db" + let testing = + Internal + { intCopyToImmutableDB = getEnv h (withFuse copyTestFuse . 
Background.copyToImmutableDB) + , intGarbageCollect = getEnv1 h Background.garbageCollect + , intTryTakeSnapshot = getEnv h $ \env' -> void $ LedgerDB.tryTakeSnapshot (cdbLedgerDB env') Nothing maxBound - , intAddBlockRunner = getEnv h (Background.addBlockRunner addBlockTestFuse) - , intKillBgThreads = varKillBgThreads + , intAddBlockRunner = getEnv h (Background.addBlockRunner addBlockTestFuse) + , intKillBgThreads = varKillBgThreads } - traceWith tracer $ TraceOpenEvent $ OpenedDB - (castPoint $ AF.anchorPoint chain) - (castPoint $ AF.headPoint chain) + traceWith tracer $ + TraceOpenEvent $ + OpenedDB + (castPoint $ AF.anchorPoint chain) + (castPoint $ AF.headPoint chain) - when launchBgTasks $ Background.launchBgTasks env replayed + when launchBgTasks $ Background.launchBgTasks env replayed - return (chainDB, testing, env) + return (chainDB, testing, env) - _ <- lift $ allocate (Args.cdbsRegistry cdbSpecificArgs) (\_ -> return chainDB) API.closeDB + _ <- lift $ allocate (Args.cdbsRegistry cdbSpecificArgs) (\_ -> return chainDB) API.closeDB - return ((chainDB, testing), env) - where - tracer = Args.cdbsTracer cdbSpecificArgs - Args.ChainDbArgs argsImmutableDb argsVolatileDb argsLgrDb cdbSpecificArgs = args + return ((chainDB, testing), env) + where + tracer = Args.cdbsTracer cdbSpecificArgs + Args.ChainDbArgs argsImmutableDb argsVolatileDb argsLgrDb cdbSpecificArgs = args -- | We use 'runInnerWithTempRegistry' for the component databases. innerOpenCont :: - IOLike m - => (innerDB -> m ()) - -> WithTempRegistry st m (innerDB, st) - -> WithTempRegistry (ChainDbEnv m blk) m innerDB + IOLike m => + (innerDB -> m ()) -> + WithTempRegistry st m (innerDB, st) -> + WithTempRegistry (ChainDbEnv m blk) m innerDB innerOpenCont closer m = runInnerWithTempRegistry (fmap (\(innerDB, st) -> (innerDB, st, innerDB)) m) ((True <$) . 
closer) (\_env _innerDB -> True) - -- This check is degenerate because handles in @_env@ and the - -- @_innerDB@ handle do not support an equality check; all of the - -- identifying data is only in the handle's closure, not - -- accessible because of our intentional encapsulation choices. + +-- This check is degenerate because handles in @_env@ and the +-- @_innerDB@ handle do not support an equality check; all of the +-- identifying data is only in the handle's closure, not +-- accessible because of our intentional encapsulation choices. isOpen :: IOLike m => ChainDbHandle m blk -> STM m Bool -isOpen (CDBHandle varState) = readTVar varState <&> \case - ChainDbClosed -> False +isOpen (CDBHandle varState) = + readTVar varState <&> \case + ChainDbClosed -> False ChainDbOpen _env -> True closeDB :: - forall m blk. - ( IOLike m - , HasHeader (Header blk) - , HasCallStack - ) - => ChainDbHandle m blk -> m () + forall m blk. + ( IOLike m + , HasHeader (Header blk) + , HasCallStack + ) => + ChainDbHandle m blk -> m () closeDB (CDBHandle varState) = do - mbOpenEnv <- atomically $ readTVar varState >>= \case - -- Idempotent - ChainDbClosed -> return Nothing - ChainDbOpen env -> do - writeTVar varState ChainDbClosed - return $ Just env - - -- Only when the ChainDB was open - whenJust mbOpenEnv $ \cdb@CDB{..} -> do - - Follower.closeAllFollowers cdb - Iterator.closeAllIterators cdb - - killBgThreads <- atomically $ readTVar cdbKillBgThreads - killBgThreads - - ImmutableDB.closeDB cdbImmutableDB - VolatileDB.closeDB cdbVolatileDB - LedgerDB.closeDB cdbLedgerDB - - chain <- atomically $ icWithoutTime <$> readTVar cdbChain - - traceWith cdbTracer $ TraceOpenEvent $ ClosedDB - (castPoint $ AF.anchorPoint chain) - (castPoint $ AF.headPoint chain) + mbOpenEnv <- + atomically $ + readTVar varState >>= \case + -- Idempotent + ChainDbClosed -> return Nothing + ChainDbOpen env -> do + writeTVar varState ChainDbClosed + return $ Just env + + -- Only when the ChainDB was open + whenJust 
mbOpenEnv $ \cdb@CDB{..} -> do + Follower.closeAllFollowers cdb + Iterator.closeAllIterators cdb + + killBgThreads <- atomically $ readTVar cdbKillBgThreads + killBgThreads + + ImmutableDB.closeDB cdbImmutableDB + VolatileDB.closeDB cdbVolatileDB + LedgerDB.closeDB cdbLedgerDB + + chain <- atomically $ icWithoutTime <$> readTVar cdbChain + + traceWith cdbTracer $ + TraceOpenEvent $ + ClosedDB + (castPoint $ AF.anchorPoint chain) + (castPoint $ AF.headPoint chain) {------------------------------------------------------------------------------- Auxiliary @@ -345,5 +372,5 @@ closeDB (CDBHandle varState) = do -- Returns 'firstChunkNo' in case of 'GenesisPoint'. chunkIndexOfPoint :: ImmutableDB.ChunkInfo -> Point blk -> ImmutableDB.ChunkNo chunkIndexOfPoint chunkInfo = \case - GenesisPoint -> ImmutableDB.firstChunkNo - BlockPoint slot _ -> ImmutableDB.chunkIndexOfSlot chunkInfo slot + GenesisPoint -> ImmutableDB.firstChunkNo + BlockPoint slot _ -> ImmutableDB.chunkIndexOfSlot chunkInfo slot diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Args.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Args.hs index f4e6d5ff70..33658732fe 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Args.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Args.hs @@ -6,8 +6,8 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE StandaloneKindSignatures #-} -module Ouroboros.Consensus.Storage.ChainDB.Impl.Args ( - ChainDbArgs (..) +module Ouroboros.Consensus.Storage.ChainDB.Impl.Args + ( ChainDbArgs (..) , ChainDbSpecificArgs (..) , RelativeMountPoint (..) 
, completeChainDbArgs @@ -19,74 +19,76 @@ module Ouroboros.Consensus.Storage.ChainDB.Impl.Args ( , updateTracer ) where -import Control.ResourceRegistry (ResourceRegistry) -import Control.Tracer (Tracer, nullTracer) -import Data.Function ((&)) -import Data.Functor.Contravariant ((>$<)) -import Data.Kind -import Data.Time.Clock (secondsToDiffTime) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB.API (GetLoEFragment, - LoE (LoEDisabled)) -import Ouroboros.Consensus.Storage.ChainDB.Impl.Types - (TraceEvent (..)) -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import Ouroboros.Consensus.Storage.LedgerDB (LedgerDbFlavorArgs) -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike -import System.FS.API +import Control.ResourceRegistry (ResourceRegistry) +import Control.Tracer (Tracer, nullTracer) +import Data.Function ((&)) +import Data.Functor.Contravariant ((>$<)) +import Data.Kind +import Data.Time.Clock (secondsToDiffTime) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.API + ( GetLoEFragment + , LoE (LoEDisabled) + ) +import Ouroboros.Consensus.Storage.ChainDB.Impl.Types + ( TraceEvent (..) 
+ ) +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB (LedgerDbFlavorArgs) +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike +import System.FS.API {------------------------------------------------------------------------------- Arguments -------------------------------------------------------------------------------} -data ChainDbArgs f m blk = ChainDbArgs { - cdbImmDbArgs :: ImmutableDB.ImmutableDbArgs f m blk +data ChainDbArgs f m blk = ChainDbArgs + { cdbImmDbArgs :: ImmutableDB.ImmutableDbArgs f m blk , cdbVolDbArgs :: VolatileDB.VolatileDbArgs f m blk , cdbLgrDbArgs :: LedgerDB.LedgerDbArgs f m blk - , cdbsArgs :: ChainDbSpecificArgs f m blk + , cdbsArgs :: ChainDbSpecificArgs f m blk } -- | Arguments specific to the ChainDB, not to the ImmutableDB, VolatileDB, or -- LedgerDB. type ChainDbSpecificArgs :: - (Type -> Type) - -> (Type -> Type) - -> Type - -> Type -data ChainDbSpecificArgs f m blk = ChainDbSpecificArgs { - cdbsBlocksToAddSize :: Word - -- ^ Size of the queue used to store asynchronously added blocks. This - -- is the maximum number of blocks that could be kept in memory at the - -- same time when the background thread processing the blocks can't keep - -- up. - , cdbsGcDelay :: DiffTime - -- ^ Delay between copying a block to the ImmutableDB and triggering a - -- garbage collection for the corresponding slot on the VolatileDB. - -- - -- The goal of the delay is to ensure that the write to the ImmutableDB - -- has been flushed to disk before deleting the block from the - -- VolatileDB, so that a crash won't result in the loss of the block. - , cdbsGcInterval :: DiffTime - -- ^ Batch all scheduled GCs so that at most one GC happens every - -- 'cdbsGcInterval'. 
- , cdbsRegistry :: HKD f (ResourceRegistry m) - , cdbsTracer :: Tracer m (TraceEvent blk) - , cdbsHasFSGsmDB :: HKD f (SomeHasFS m) - , cdbsTopLevelConfig :: HKD f (TopLevelConfig blk) - - -- Limit on Eagerness - , cdbsLoE :: GetLoEFragment m blk - -- ^ If this is 'LoEEnabled', it contains an action that returns the - -- current LoE fragment. - } + (Type -> Type) -> + (Type -> Type) -> + Type -> + Type +data ChainDbSpecificArgs f m blk = ChainDbSpecificArgs + { cdbsBlocksToAddSize :: Word + -- ^ Size of the queue used to store asynchronously added blocks. This + -- is the maximum number of blocks that could be kept in memory at the + -- same time when the background thread processing the blocks can't keep + -- up. + , cdbsGcDelay :: DiffTime + -- ^ Delay between copying a block to the ImmutableDB and triggering a + -- garbage collection for the corresponding slot on the VolatileDB. + -- + -- The goal of the delay is to ensure that the write to the ImmutableDB + -- has been flushed to disk before deleting the block from the + -- VolatileDB, so that a crash won't result in the loss of the block. + , cdbsGcInterval :: DiffTime + -- ^ Batch all scheduled GCs so that at most one GC happens every + -- 'cdbsGcInterval'. + , cdbsRegistry :: HKD f (ResourceRegistry m) + , cdbsTracer :: Tracer m (TraceEvent blk) + , cdbsHasFSGsmDB :: HKD f (SomeHasFS m) + , cdbsTopLevelConfig :: HKD f (TopLevelConfig blk) + , -- Limit on Eagerness + cdbsLoE :: GetLoEFragment m blk + -- ^ If this is 'LoEEnabled', it contains an action that returns the + -- current LoE fragment. + } -- | Default arguments -- @@ -110,15 +112,16 @@ data ChainDbSpecificArgs f m blk = ChainDbSpecificArgs { -- E.g., when syncing at 1k-2k blocks/s, this means 10k-20k blocks. During -- normal operation, we receive 1 block/20s, meaning at most 1 block. 
defaultSpecificArgs :: Monad m => Incomplete ChainDbSpecificArgs m blk -defaultSpecificArgs = ChainDbSpecificArgs { - cdbsBlocksToAddSize = 10 - , cdbsGcDelay = secondsToDiffTime 60 - , cdbsGcInterval = secondsToDiffTime 10 - , cdbsRegistry = noDefault - , cdbsTracer = nullTracer - , cdbsHasFSGsmDB = noDefault - , cdbsTopLevelConfig = noDefault - , cdbsLoE = pure LoEDisabled +defaultSpecificArgs = + ChainDbSpecificArgs + { cdbsBlocksToAddSize = 10 + , cdbsGcDelay = secondsToDiffTime 60 + , cdbsGcInterval = secondsToDiffTime 10 + , cdbsRegistry = noDefault + , cdbsTracer = nullTracer + , cdbsHasFSGsmDB = noDefault + , cdbsTopLevelConfig = noDefault + , cdbsLoE = pure LoEDisabled } -- | Default arguments @@ -127,44 +130,49 @@ defaultSpecificArgs = ChainDbSpecificArgs { -- and 'defaultSpecificArgs' for a list of which fields are not given a default -- and must therefore be set explicitly. defaultArgs :: - forall m blk . - Monad m - => Incomplete ChainDbArgs m blk + forall m blk. + Monad m => + Incomplete ChainDbArgs m blk defaultArgs = - ChainDbArgs ImmutableDB.defaultArgs - VolatileDB.defaultArgs - LedgerDB.defaultArgs - defaultSpecificArgs + ChainDbArgs + ImmutableDB.defaultArgs + VolatileDB.defaultArgs + LedgerDB.defaultArgs + defaultSpecificArgs ensureValidateAll :: - ChainDbArgs f m blk - -> ChainDbArgs f m blk + ChainDbArgs f m blk -> + ChainDbArgs f m blk ensureValidateAll args = - args { cdbImmDbArgs = (cdbImmDbArgs args) { - ImmutableDB.immValidationPolicy = ImmutableDB.ValidateAllChunks - } - , cdbVolDbArgs = (cdbVolDbArgs args) { - VolatileDB.volValidationPolicy = VolatileDB.ValidateAll - } - } + args + { cdbImmDbArgs = + (cdbImmDbArgs args) + { ImmutableDB.immValidationPolicy = ImmutableDB.ValidateAllChunks + } + , cdbVolDbArgs = + (cdbVolDbArgs args) + { VolatileDB.volValidationPolicy = VolatileDB.ValidateAll + } + } completeChainDbArgs :: - forall m blk. 
(ConsensusProtocol (BlockProtocol blk), IOLike m) - => ResourceRegistry m - -> TopLevelConfig blk - -> ExtLedgerState blk ValuesMK - -- ^ Initial ledger - -> ImmutableDB.ChunkInfo - -> (blk -> Bool) - -- ^ Check integrity - -> (RelativeMountPoint -> SomeHasFS m) - -- ^ Immutable FS, see 'NodeDatabasePaths' - -> (RelativeMountPoint -> SomeHasFS m) - -- ^ Volatile FS, see 'NodeDatabasePaths' - -> Complete LedgerDbFlavorArgs m - -> Incomplete ChainDbArgs m blk - -- ^ A set of incomplete arguments, possibly modified wrt @defaultArgs@ - -> Complete ChainDbArgs m blk + forall m blk. + (ConsensusProtocol (BlockProtocol blk), IOLike m) => + ResourceRegistry m -> + TopLevelConfig blk -> + -- | Initial ledger + ExtLedgerState blk ValuesMK -> + ImmutableDB.ChunkInfo -> + -- | Check integrity + (blk -> Bool) -> + -- | Immutable FS, see 'NodeDatabasePaths' + (RelativeMountPoint -> SomeHasFS m) -> + -- | Volatile FS, see 'NodeDatabasePaths' + (RelativeMountPoint -> SomeHasFS m) -> + Complete LedgerDbFlavorArgs m -> + -- | A set of incomplete arguments, possibly modified wrt @defaultArgs@ + Incomplete ChainDbArgs m blk -> + Complete ChainDbArgs m blk completeChainDbArgs registry cdbsTopLevelConfig @@ -174,72 +182,79 @@ completeChainDbArgs mkImmFS mkVolFS flavorArgs - defArgs - = defArgs { - cdbImmDbArgs = (cdbImmDbArgs defArgs) { - ImmutableDB.immChunkInfo - , ImmutableDB.immCheckIntegrity = checkIntegrity - , ImmutableDB.immRegistry = registry - , ImmutableDB.immCodecConfig = configCodec cdbsTopLevelConfig - , ImmutableDB.immHasFS = mkImmFS $ RelativeMountPoint "immutable" - } - , cdbVolDbArgs = (cdbVolDbArgs defArgs) { - VolatileDB.volHasFS = mkVolFS $ RelativeMountPoint "volatile" - , VolatileDB.volCheckIntegrity = checkIntegrity - , VolatileDB.volCodecConfig = configCodec cdbsTopLevelConfig - } - , cdbLgrDbArgs = (cdbLgrDbArgs defArgs) { - LedgerDB.lgrGenesis = pure initLedger - , LedgerDB.lgrHasFS = mkVolFS $ RelativeMountPoint "ledger" - , LedgerDB.lgrConfig = - 
LedgerDB.configLedgerDb - cdbsTopLevelConfig - (LedgerDB.ledgerDbCfgComputeLedgerEvents $ LedgerDB.lgrConfig (cdbLgrDbArgs defArgs)) - , LedgerDB.lgrFlavorArgs = flavorArgs - , LedgerDB.lgrRegistry = registry - } - , cdbsArgs = (cdbsArgs defArgs) { - cdbsRegistry = registry - , cdbsTopLevelConfig - , cdbsHasFSGsmDB = mkVolFS $ RelativeMountPoint "gsm" - } + defArgs = + defArgs + { cdbImmDbArgs = + (cdbImmDbArgs defArgs) + { ImmutableDB.immChunkInfo + , ImmutableDB.immCheckIntegrity = checkIntegrity + , ImmutableDB.immRegistry = registry + , ImmutableDB.immCodecConfig = configCodec cdbsTopLevelConfig + , ImmutableDB.immHasFS = mkImmFS $ RelativeMountPoint "immutable" + } + , cdbVolDbArgs = + (cdbVolDbArgs defArgs) + { VolatileDB.volHasFS = mkVolFS $ RelativeMountPoint "volatile" + , VolatileDB.volCheckIntegrity = checkIntegrity + , VolatileDB.volCodecConfig = configCodec cdbsTopLevelConfig + } + , cdbLgrDbArgs = + (cdbLgrDbArgs defArgs) + { LedgerDB.lgrGenesis = pure initLedger + , LedgerDB.lgrHasFS = mkVolFS $ RelativeMountPoint "ledger" + , LedgerDB.lgrConfig = + LedgerDB.configLedgerDb + cdbsTopLevelConfig + (LedgerDB.ledgerDbCfgComputeLedgerEvents $ LedgerDB.lgrConfig (cdbLgrDbArgs defArgs)) + , LedgerDB.lgrFlavorArgs = flavorArgs + , LedgerDB.lgrRegistry = registry + } + , cdbsArgs = + (cdbsArgs defArgs) + { cdbsRegistry = registry + , cdbsTopLevelConfig + , cdbsHasFSGsmDB = mkVolFS $ RelativeMountPoint "gsm" + } } updateTracer :: - Tracer m (TraceEvent blk) - -> ChainDbArgs f m blk - -> ChainDbArgs f m blk + Tracer m (TraceEvent blk) -> + ChainDbArgs f m blk -> + ChainDbArgs f m blk updateTracer trcr args = - args { - cdbImmDbArgs = (cdbImmDbArgs args) { ImmutableDB.immTracer = TraceImmutableDBEvent >$< trcr } - , cdbVolDbArgs = (cdbVolDbArgs args) { VolatileDB.volTracer = TraceVolatileDBEvent >$< trcr } - , cdbLgrDbArgs = (cdbLgrDbArgs args) { LedgerDB.lgrTracer = TraceLedgerDBEvent >$< trcr } - , cdbsArgs = (cdbsArgs args) { cdbsTracer = trcr } - } + args + 
{ cdbImmDbArgs = (cdbImmDbArgs args){ImmutableDB.immTracer = TraceImmutableDBEvent >$< trcr} + , cdbVolDbArgs = (cdbVolDbArgs args){VolatileDB.volTracer = TraceVolatileDBEvent >$< trcr} + , cdbLgrDbArgs = (cdbLgrDbArgs args){LedgerDB.lgrTracer = TraceLedgerDBEvent >$< trcr} + , cdbsArgs = (cdbsArgs args){cdbsTracer = trcr} + } updateSnapshotPolicyArgs :: - SnapshotPolicyArgs - -> ChainDbArgs f m blk - -> ChainDbArgs f m blk + SnapshotPolicyArgs -> + ChainDbArgs f m blk -> + ChainDbArgs f m blk updateSnapshotPolicyArgs spa args = - args { cdbLgrDbArgs = (cdbLgrDbArgs args) { LedgerDB.lgrSnapshotPolicyArgs = spa } } + args{cdbLgrDbArgs = (cdbLgrDbArgs args){LedgerDB.lgrSnapshotPolicyArgs = spa}} updateQueryBatchSize :: - LedgerDB.QueryBatchSize - -> ChainDbArgs f m blk - -> ChainDbArgs f m blk + LedgerDB.QueryBatchSize -> + ChainDbArgs f m blk -> + ChainDbArgs f m blk updateQueryBatchSize qbs args = - args { cdbLgrDbArgs = (cdbLgrDbArgs args) { LedgerDB.lgrQueryBatchSize = qbs } } + args{cdbLgrDbArgs = (cdbLgrDbArgs args){LedgerDB.lgrQueryBatchSize = qbs}} enableLedgerEvents :: - Complete ChainDbArgs m blk - -> Complete ChainDbArgs m blk + Complete ChainDbArgs m blk -> + Complete ChainDbArgs m blk enableLedgerEvents args = - args { cdbLgrDbArgs = (cdbLgrDbArgs args) & \x -> - x { LedgerDB.lgrConfig = - (LedgerDB.lgrConfig x) { LedgerDB.ledgerDbCfgComputeLedgerEvents = ComputeLedgerEvents } - } - } + args + { cdbLgrDbArgs = + (cdbLgrDbArgs args) & \x -> + x + { LedgerDB.lgrConfig = + (LedgerDB.lgrConfig x){LedgerDB.ledgerDbCfgComputeLedgerEvents = ComputeLedgerEvents} + } + } {------------------------------------------------------------------------------- Relative mount points diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Background.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Background.hs index 3194631b3d..66e8682ae6 100644 --- 
a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Background.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Background.hs @@ -14,14 +14,17 @@ -- * Performing and scheduling garbage collections on the VolatileDB -- * Writing snapshots of the LedgerDB to disk and deleting old ones -- * Executing scheduled chain selections -module Ouroboros.Consensus.Storage.ChainDB.Impl.Background ( - -- * Launch background tasks +module Ouroboros.Consensus.Storage.ChainDB.Impl.Background + ( -- * Launch background tasks launchBgTasks + -- * Copying blocks from the VolatileDB to the ImmutableDB , copyAndSnapshotRunner , copyToImmutableDB + -- * Executing garbage collection , garbageCollect + -- * Scheduling garbage collections , GcParams (..) , GcSchedule @@ -29,77 +32,88 @@ module Ouroboros.Consensus.Storage.ChainDB.Impl.Background ( , gcScheduleRunner , newGcSchedule , scheduleGC + -- ** Testing , ScheduledGc (..) , dumpGcSchedule + -- * Adding blocks to the ChainDB , addBlockRunner ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Control.Exception (assert) -import Control.Monad (forM_, forever, void) -import Control.Monad.Trans.Class (lift) -import Control.ResourceRegistry -import Control.Tracer -import Data.Foldable (toList) -import qualified Data.Map.Strict as Map -import Data.Sequence.Strict (StrictSeq (..)) -import qualified Data.Sequence.Strict as Seq -import Data.Time.Clock -import Data.Void (Void) -import Data.Word -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB.API (AddBlockResult (..), - BlockComponent (..)) -import Ouroboros.Consensus.Storage.ChainDB.Impl.ChainSel - 
(chainSelSync) -import Ouroboros.Consensus.Storage.ChainDB.Impl.Types -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import Ouroboros.Consensus.Util -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.Enclose (Enclosing' (..)) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.AnchoredFragment (AnchoredSeq (..)) -import qualified Ouroboros.Network.AnchoredFragment as AF +import Cardano.Ledger.BaseTypes (unNonZero) +import Control.Exception (assert) +import Control.Monad (forM_, forever, void) +import Control.Monad.Trans.Class (lift) +import Control.ResourceRegistry +import Control.Tracer +import Data.Foldable (toList) +import Data.Map.Strict qualified as Map +import Data.Sequence.Strict (StrictSeq (..)) +import Data.Sequence.Strict qualified as Seq +import Data.Time.Clock +import Data.Void (Void) +import Data.Word +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.API + ( AddBlockResult (..) + , BlockComponent (..) 
+ ) +import Ouroboros.Consensus.Storage.ChainDB.Impl.ChainSel + ( chainSelSync + ) +import Ouroboros.Consensus.Storage.ChainDB.Impl.Types +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import Ouroboros.Consensus.Util +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.Enclose (Enclosing' (..)) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.AnchoredFragment (AnchoredSeq (..)) +import Ouroboros.Network.AnchoredFragment qualified as AF {------------------------------------------------------------------------------- Launch background tasks -------------------------------------------------------------------------------} launchBgTasks :: - forall m blk. - ( IOLike m - , LedgerSupportsProtocol blk - , BlockSupportsDiffusionPipelining blk - , InspectLedger blk - , HasHardForkHistory blk - ) - => ChainDbEnv m blk - -> Word64 -- ^ Number of immutable blocks replayed on ledger DB startup - -> m () + forall m blk. 
+ ( IOLike m + , LedgerSupportsProtocol blk + , BlockSupportsDiffusionPipelining blk + , InspectLedger blk + , HasHardForkHistory blk + ) => + ChainDbEnv m blk -> + -- | Number of immutable blocks replayed on ledger DB startup + Word64 -> + m () launchBgTasks cdb@CDB{..} replayed = do - !addBlockThread <- launch "ChainDB.addBlockRunner" $ + !addBlockThread <- + launch "ChainDB.addBlockRunner" $ addBlockRunner cdbChainSelFuse cdb - gcSchedule <- newGcSchedule - !gcThread <- launch "ChainDB.gcScheduleRunner" $ - gcScheduleRunner gcSchedule $ garbageCollect cdb - !copyAndSnapshotThread <- launch "ChainDB.copyAndSnapshotRunner" $ + gcSchedule <- newGcSchedule + !gcThread <- + launch "ChainDB.gcScheduleRunner" $ + gcScheduleRunner gcSchedule $ + garbageCollect cdb + !copyAndSnapshotThread <- + launch "ChainDB.copyAndSnapshotRunner" $ copyAndSnapshotRunner cdb gcSchedule replayed cdbCopyFuse - atomically $ writeTVar cdbKillBgThreads $ + atomically $ + writeTVar cdbKillBgThreads $ sequence_ [addBlockThread, gcThread, copyAndSnapshotThread] - where - launch :: String -> m Void -> m (m ()) - launch = fmap cancelThread .: forkLinkedThread cdbRegistry + where + launch :: String -> m Void -> m (m ()) + launch = fmap cancelThread .: forkLinkedThread cdbRegistry {------------------------------------------------------------------------------- Copying blocks from the VolatileDB to the ImmutableDB @@ -117,71 +131,72 @@ launchBgTasks cdb@CDB{..} replayed = do -- The 'SlotNo' of the tip of the ImmutableDB after copying the blocks is -- returned. This can be used for a garbage collection on the VolatileDB. copyToImmutableDB :: - forall m blk. - ( IOLike m - , ConsensusProtocol (BlockProtocol blk) - , HasHeader blk - , GetHeader blk - , HasCallStack - ) - => ChainDbEnv m blk - -> Electric m (WithOrigin SlotNo) + forall m blk. 
+ ( IOLike m + , ConsensusProtocol (BlockProtocol blk) + , HasHeader blk + , GetHeader blk + , HasCallStack + ) => + ChainDbEnv m blk -> + Electric m (WithOrigin SlotNo) copyToImmutableDB CDB{..} = electric $ do - toCopy <- atomically $ do - curChain <- icWithoutTime <$> readTVar cdbChain - let nbToCopy = max 0 (AF.length curChain - fromIntegral (unNonZero k)) - toCopy :: [Point blk] - toCopy = map headerPoint - $ AF.toOldestFirst - $ AF.takeOldest nbToCopy curChain - return toCopy - - if null toCopy - -- This can't happen in practice, as we're only called when the fragment - -- is longer than @k@. However, in the tests, we will be calling this - -- function manually, which means it might be called when there are no - -- blocks to copy. - then trace NoBlocksToCopyToImmutableDB - else forM_ toCopy $ \pt -> do - let hash = case pointHash pt of - BlockHash h -> h - -- There is no actual genesis block that can occur on a chain - GenesisHash -> error "genesis block on current chain" - slotNoAtImmutableDBTip <- atomically $ ImmutableDB.getTipSlot cdbImmutableDB - assert (pointSlot pt >= slotNoAtImmutableDBTip) $ return () - -- When the block is corrupt, the function below will throw an - -- exception. This exception will make sure that we shut down the node - -- and that the next time we start, validation will be enabled. - blk <- VolatileDB.getKnownBlockComponent cdbVolatileDB GetVerifiedBlock hash - -- We're the only one modifying the ImmutableDB, so the tip cannot - -- have changed since we last checked it. - ImmutableDB.appendBlock cdbImmutableDB blk - -- TODO the invariant of 'cdbChain' is shortly violated between - -- these two lines: the tip was updated on the line above, but the - -- anchor point is only updated on the line below. 
- atomically $ removeFromChain pt - trace $ CopiedBlockToImmutableDB pt - - -- Get the /possibly/ updated tip of the ImmutableDB - atomically $ ImmutableDB.getTipSlot cdbImmutableDB - where - SecurityParam k = configSecurityParam cdbTopLevelConfig - trace = traceWith (contramap TraceCopyToImmutableDBEvent cdbTracer) - - -- | Remove the header corresponding to the given point from the beginning - -- of the current chain fragment. - -- - -- PRECONDITION: the header must be the first one (oldest) in the chain - removeFromChain :: Point blk -> STM m () - removeFromChain pt = do - -- The chain might have been extended in the meantime. - readTVar cdbChain >>= \case - InternalChain (hdr :< newChain) (_hwt :< newChainWithTime) - | headerPoint hdr == pt - -> writeTVar cdbChain $ InternalChain newChain newChainWithTime - -- We're the only one removing things from 'cdbChain', so this cannot - -- happen if the precondition was satisfied. - _ -> error "header to remove not on the current chain" + toCopy <- atomically $ do + curChain <- icWithoutTime <$> readTVar cdbChain + let nbToCopy = max 0 (AF.length curChain - fromIntegral (unNonZero k)) + toCopy :: [Point blk] + toCopy = + map headerPoint $ + AF.toOldestFirst $ + AF.takeOldest nbToCopy curChain + return toCopy + + if null toCopy + -- This can't happen in practice, as we're only called when the fragment + -- is longer than @k@. However, in the tests, we will be calling this + -- function manually, which means it might be called when there are no + -- blocks to copy. 
+ then trace NoBlocksToCopyToImmutableDB + else forM_ toCopy $ \pt -> do + let hash = case pointHash pt of + BlockHash h -> h + -- There is no actual genesis block that can occur on a chain + GenesisHash -> error "genesis block on current chain" + slotNoAtImmutableDBTip <- atomically $ ImmutableDB.getTipSlot cdbImmutableDB + assert (pointSlot pt >= slotNoAtImmutableDBTip) $ return () + -- When the block is corrupt, the function below will throw an + -- exception. This exception will make sure that we shut down the node + -- and that the next time we start, validation will be enabled. + blk <- VolatileDB.getKnownBlockComponent cdbVolatileDB GetVerifiedBlock hash + -- We're the only one modifying the ImmutableDB, so the tip cannot + -- have changed since we last checked it. + ImmutableDB.appendBlock cdbImmutableDB blk + -- TODO the invariant of 'cdbChain' is shortly violated between + -- these two lines: the tip was updated on the line above, but the + -- anchor point is only updated on the line below. + atomically $ removeFromChain pt + trace $ CopiedBlockToImmutableDB pt + + -- Get the /possibly/ updated tip of the ImmutableDB + atomically $ ImmutableDB.getTipSlot cdbImmutableDB + where + SecurityParam k = configSecurityParam cdbTopLevelConfig + trace = traceWith (contramap TraceCopyToImmutableDBEvent cdbTracer) + + -- \| Remove the header corresponding to the given point from the beginning + -- of the current chain fragment. + -- + -- PRECONDITION: the header must be the first one (oldest) in the chain + removeFromChain :: Point blk -> STM m () + removeFromChain pt = do + -- The chain might have been extended in the meantime. + readTVar cdbChain >>= \case + InternalChain (hdr :< newChain) (_hwt :< newChainWithTime) + | headerPoint hdr == pt -> + writeTVar cdbChain $ InternalChain newChain newChainWithTime + -- We're the only one removing things from 'cdbChain', so this cannot + -- happen if the precondition was satisfied. 
+ _ -> error "header to remove not on the current chain" {------------------------------------------------------------------------------- Snapshotting @@ -215,60 +230,61 @@ copyToImmutableDB CDB{..} = electric $ do -- /imply/ any previously scheduled GC, since GC is driven by slot number -- ("garbage collect anything older than @x@"). copyAndSnapshotRunner :: - forall m blk. - ( IOLike m - , LedgerSupportsProtocol blk - ) - => ChainDbEnv m blk - -> GcSchedule m - -> Word64 -- ^ Number of immutable blocks replayed on ledger DB startup - -> Fuse m - -> m Void + forall m blk. + ( IOLike m + , LedgerSupportsProtocol blk + ) => + ChainDbEnv m blk -> + GcSchedule m -> + -- | Number of immutable blocks replayed on ledger DB startup + Word64 -> + Fuse m -> + m Void copyAndSnapshotRunner cdb@CDB{..} gcSchedule replayed fuse = do - -- this first flush will persist the differences that come from the initial - -- chain selection. + -- this first flush will persist the differences that come from the initial + -- chain selection. + LedgerDB.tryFlush cdbLedgerDB + loop =<< LedgerDB.tryTakeSnapshot cdbLedgerDB Nothing replayed + where + SecurityParam k = configSecurityParam cdbTopLevelConfig + + loop :: LedgerDB.SnapCounters -> m Void + loop counters = do + let LedgerDB.SnapCounters + { prevSnapshotTime + , ntBlocksSinceLastSnap + } = counters + + -- Wait for the chain to grow larger than @k@ + numToWrite <- atomically $ do + curChain <- icWithoutTime <$> readTVar cdbChain + check $ fromIntegral (AF.length curChain) > unNonZero k + return $ fromIntegral (AF.length curChain) - unNonZero k + + -- Copy blocks to ImmutableDB + -- + -- This is a synchronous operation: when it returns, the blocks have been + -- copied to disk (though not flushed, necessarily). 
+ withFuse fuse (copyToImmutableDB cdb) >>= scheduleGC' + LedgerDB.tryFlush cdbLedgerDB - loop =<< LedgerDB.tryTakeSnapshot cdbLedgerDB Nothing replayed - where - SecurityParam k = configSecurityParam cdbTopLevelConfig - - loop :: LedgerDB.SnapCounters -> m Void - loop counters = do - let LedgerDB.SnapCounters { - prevSnapshotTime - , ntBlocksSinceLastSnap - } = counters - - -- Wait for the chain to grow larger than @k@ - numToWrite <- atomically $ do - curChain <- icWithoutTime <$> readTVar cdbChain - check $ fromIntegral (AF.length curChain) > unNonZero k - return $ fromIntegral (AF.length curChain) - unNonZero k - - -- Copy blocks to ImmutableDB - -- - -- This is a synchronous operation: when it returns, the blocks have been - -- copied to disk (though not flushed, necessarily). - withFuse fuse (copyToImmutableDB cdb) >>= scheduleGC' - - LedgerDB.tryFlush cdbLedgerDB - - now <- getMonotonicTime - let ntBlocksSinceLastSnap' = ntBlocksSinceLastSnap + numToWrite - - loop =<< LedgerDB.tryTakeSnapshot cdbLedgerDB ((,now) <$> prevSnapshotTime) ntBlocksSinceLastSnap' - - scheduleGC' :: WithOrigin SlotNo -> m () - scheduleGC' Origin = return () - scheduleGC' (NotOrigin slotNo) = - scheduleGC - (contramap TraceGCEvent cdbTracer) - slotNo - GcParams { - gcDelay = cdbGcDelay - , gcInterval = cdbGcInterval - } - gcSchedule + + now <- getMonotonicTime + let ntBlocksSinceLastSnap' = ntBlocksSinceLastSnap + numToWrite + + loop =<< LedgerDB.tryTakeSnapshot cdbLedgerDB ((,now) <$> prevSnapshotTime) ntBlocksSinceLastSnap' + + scheduleGC' :: WithOrigin SlotNo -> m () + scheduleGC' Origin = return () + scheduleGC' (NotOrigin slotNo) = + scheduleGC + (contramap TraceGCEvent cdbTracer) + slotNo + GcParams + { gcDelay = cdbGcDelay + , gcInterval = cdbGcInterval + } + gcSchedule {------------------------------------------------------------------------------- Executing garbage collection @@ -291,11 +307,11 @@ copyAndSnapshotRunner cdb@CDB{..} gcSchedule replayed fuse = do -- @putBlock@ 
and @getBlock@. garbageCollect :: forall m blk. IOLike m => ChainDbEnv m blk -> SlotNo -> m () garbageCollect CDB{..} slotNo = do - VolatileDB.garbageCollect cdbVolatileDB slotNo - atomically $ do - LedgerDB.garbageCollect cdbLedgerDB slotNo - modifyTVar cdbInvalid $ fmap $ Map.filter ((>= slotNo) . invalidBlockSlotNo) - traceWith cdbTracer $ TraceGCEvent $ PerformedGC slotNo + VolatileDB.garbageCollect cdbVolatileDB slotNo + atomically $ do + LedgerDB.garbageCollect cdbLedgerDB slotNo + modifyTVar cdbInvalid $ fmap $ Map.filter ((>= slotNo) . invalidBlockSlotNo) + traceWith cdbTracer $ TraceGCEvent $ PerformedGC slotNo {------------------------------------------------------------------------------- Scheduling garbage collections @@ -369,54 +385,59 @@ garbageCollect CDB{..} slotNo = do -- precisely because of batching. newtype GcSchedule m = GcSchedule (StrictTVar m (StrictSeq ScheduledGc)) -data ScheduledGc = ScheduledGc { - scheduledGcTime :: !Time - -- ^ Time at which to run the garbage collection - , scheduledGcSlot :: !SlotNo - -- ^ For which slot to run the garbage collection - } +data ScheduledGc = ScheduledGc + { scheduledGcTime :: !Time + -- ^ Time at which to run the garbage collection + , scheduledGcSlot :: !SlotNo + -- ^ For which slot to run the garbage collection + } deriving (Eq, Show, Generic, NoThunks) instance Condense ScheduledGc where condense (ScheduledGc time slot) = condense (time, slot) -data GcParams = GcParams { - gcDelay :: !DiffTime - -- ^ How long to wait until performing the GC. See 'cdbsGcDelay'. - , gcInterval :: !DiffTime - -- ^ The GC interval: the minimum time between two GCs. See - -- 'cdbsGcInterval'. - } - deriving (Show) +data GcParams = GcParams + { gcDelay :: !DiffTime + -- ^ How long to wait until performing the GC. See 'cdbsGcDelay'. + , gcInterval :: !DiffTime + -- ^ The GC interval: the minimum time between two GCs. See + -- 'cdbsGcInterval'. 
+ } + deriving Show newGcSchedule :: IOLike m => m (GcSchedule m) newGcSchedule = GcSchedule <$> newTVarIO Seq.empty scheduleGC :: - forall m blk. IOLike m - => Tracer m (TraceGCEvent blk) - -> SlotNo -- ^ The slot to use for garbage collection - -> GcParams - -> GcSchedule m - -> m () + forall m blk. + IOLike m => + Tracer m (TraceGCEvent blk) -> + -- | The slot to use for garbage collection + SlotNo -> + GcParams -> + GcSchedule m -> + m () scheduleGC tracer slotNo gcParams (GcSchedule varQueue) = do - timeScheduledForGC <- computeTimeForGC gcParams <$> getMonotonicTime - atomically $ modifyTVar varQueue $ \case - queue' :|> ScheduledGc { scheduledGcTime = lastTimeScheduledForGC } - | timeScheduledForGC == lastTimeScheduledForGC - -- Same interval, batch it - -> queue' :|> ScheduledGc timeScheduledForGC slotNo - queue - -- Different interval or empty, so append it - -> queue :|> ScheduledGc timeScheduledForGC slotNo - traceWith tracer $ ScheduledGC slotNo timeScheduledForGC + timeScheduledForGC <- computeTimeForGC gcParams <$> getMonotonicTime + atomically $ modifyTVar varQueue $ \case + queue' :|> ScheduledGc{scheduledGcTime = lastTimeScheduledForGC} + | timeScheduledForGC == lastTimeScheduledForGC -> + -- Same interval, batch it + queue' :|> ScheduledGc timeScheduledForGC slotNo + queue -> + -- Different interval or empty, so append it + queue :|> ScheduledGc timeScheduledForGC slotNo + traceWith tracer $ ScheduledGC slotNo timeScheduledForGC computeTimeForGC :: - GcParams - -> Time -- ^ Now - -> Time -- ^ The time at which to perform the GC -computeTimeForGC GcParams { gcDelay, gcInterval } (Time now) = - Time $ picosecondsToDiffTime $ + GcParams -> + -- | Now + Time -> + -- | The time at which to perform the GC + Time +computeTimeForGC GcParams{gcDelay, gcInterval} (Time now) = + Time $ + picosecondsToDiffTime $ -- We're rounding up to the nearest interval, because rounding down -- would mean GC'ing too early. 
roundUpToInterval @@ -434,34 +455,38 @@ computeTimeForGC GcParams { gcDelay, gcInterval } (Time now) = -- > == 0 roundUpToInterval :: (Integral a, Integral b) => b -> a -> a roundUpToInterval interval x - | m == 0 - = d * fromIntegral interval - | otherwise - = (d + 1) * fromIntegral interval - where - (d, m) = x `divMod` fromIntegral interval + | m == 0 = + d * fromIntegral interval + | otherwise = + (d + 1) * fromIntegral interval + where + (d, m) = x `divMod` fromIntegral interval gcScheduleRunner :: - forall m. IOLike m - => GcSchedule m - -> (SlotNo -> m ()) -- ^ GC function - -> m Void + forall m. + IOLike m => + GcSchedule m -> + -- | GC function + (SlotNo -> m ()) -> + m Void gcScheduleRunner (GcSchedule varQueue) runGc = forever $ do - -- Peek to know how long to wait - timeScheduledForGC <- atomically $ + -- Peek to know how long to wait + timeScheduledForGC <- + atomically $ readTVar varQueue >>= \case - Seq.Empty -> retry - ScheduledGc { scheduledGcTime } :<| _ -> return scheduledGcTime + Seq.Empty -> retry + ScheduledGc{scheduledGcTime} :<| _ -> return scheduledGcTime - currentTime <- getMonotonicTime - let toWait = max 0 (timeScheduledForGC `diffTime` currentTime) - threadDelay toWait + currentTime <- getMonotonicTime + let toWait = max 0 (timeScheduledForGC `diffTime` currentTime) + threadDelay toWait - -- After waiting, find the slot for which to GC and remove the entry from - -- the queue. - slotNo <- atomically $ + -- After waiting, find the slot for which to GC and remove the entry from + -- the queue. + slotNo <- + atomically $ readTVar varQueue >>= \case - ScheduledGc { scheduledGcSlot } :<| queue' -> do + ScheduledGc{scheduledGcSlot} :<| queue' -> do writeTVar varQueue queue' return scheduledGcSlot @@ -470,8 +495,8 @@ gcScheduleRunner (GcSchedule varQueue) runGc = forever $ do -- while we were waiting. 
          Seq.Empty -> error "queue empty after waiting"

-    -- Garbage collection is called synchronously
-    runGc slotNo
+  -- Garbage collection is called synchronously
+  runGc slotNo

 -- | Return the current contents of the 'GcSchedule' queue without modifying
 -- it.
@@ -487,43 +512,51 @@ dumpGcSchedule (GcSchedule varQueue) = toList <$> readTVar varQueue

 -- | Read blocks from 'cdbChainSelQueue' and add them synchronously to the
 -- ChainDB.
 addBlockRunner ::
-     ( IOLike m
-     , LedgerSupportsProtocol blk
-     , BlockSupportsDiffusionPipelining blk
-     , InspectLedger blk
-     , HasHardForkHistory blk
-     , HasCallStack
-     )
-  => Fuse m
-  -> ChainDbEnv m blk
-  -> m Void
+  ( IOLike m
+  , LedgerSupportsProtocol blk
+  , BlockSupportsDiffusionPipelining blk
+  , InspectLedger blk
+  , HasHardForkHistory blk
+  , HasCallStack
+  ) =>
+  Fuse m ->
+  ChainDbEnv m blk ->
+  m Void
 addBlockRunner fuse cdb@CDB{..} = forever $ do
-    let trace = traceWith cdbTracer . TraceAddBlockEvent
+  trace $ PoppedBlockFromQueue RisingEdge
+  -- if the `chainSelSync` does not complete because it was killed by an async
+  -- exception (or it errored), notify the blocked thread
+  withFuse fuse $
+    bracketOnError
+      (lift $ getChainSelMessage starvationTracer cdbChainSelStarvation cdbChainSelQueue)
+      ( \message -> lift $ atomically $ do
          case message of
            ChainSelReprocessLoEBlocks varProcessed ->
              void $ tryPutTMVar varProcessed ()
            ChainSelAddBlock BlockToAdd{varBlockWrittenToDisk, varBlockProcessed} -> do
-            _ <- tryPutTMVar varBlockWrittenToDisk
-                   False
-            _ <- tryPutTMVar varBlockProcessed
-                   (FailedToAddBlock "Failed to add block synchronously")
+            _ <-
+              tryPutTMVar
+                varBlockWrittenToDisk
+                False
+            _ <-
+              tryPutTMVar
+                varBlockProcessed
+                (FailedToAddBlock "Failed to add block synchronously")
             pure ()
-          closeChainSelQueue cdbChainSelQueue)
-      (\message -> do
+          closeChainSelQueue cdbChainSelQueue
+      )
+      ( \message -> do
          lift $ case message of
            ChainSelReprocessLoEBlocks _ ->
              trace PoppedReprocessLoEBlocksFromQueue
           ChainSelAddBlock BlockToAdd{blockToAdd} ->
-            trace $ PoppedBlockFromQueue $ FallingEdgeWith $
-              blockRealPoint blockToAdd
+            trace $
+              PoppedBlockFromQueue $
+                FallingEdgeWith $
+                  blockRealPoint blockToAdd
          chainSelSync cdb message
-        lift $ atomically $ processedChainSelMessage cdbChainSelQueue message)
-  where
-    starvationTracer = Tracer $ traceWith cdbTracer . TraceChainSelStarvationEvent
+         lift $ atomically $ processedChainSelMessage cdbChainSelQueue message
+      )
+ where
+  starvationTracer = Tracer $ traceWith cdbTracer . TraceChainSelStarvationEvent

diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/BlockCache.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/BlockCache.hs
index f0eb0a39d2..56fb7a5f32 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/BlockCache.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/BlockCache.hs
@@ -6,19 +6,20 @@
 --
 -- > import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache (BlockCache)
 -- > import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache as BlockCache
-module Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache (
-    cacheBlock
+module Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache
+  ( cacheBlock
   , empty
   , lookup
   , singleton
+    -- * opaque
   , BlockCache
   ) where

-import Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import Ouroboros.Consensus.Block
-import Prelude hiding (lookup)
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Ouroboros.Consensus.Block
+import Prelude hiding (lookup)

 newtype BlockCache blk = BlockCache (Map (HeaderHash blk) blk)

diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/ChainSel.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/ChainSel.hs
index 43f96e5baa..b7dc222a0b 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/ChainSel.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/ChainSel.hs
@@ -11,107 +11,128 @@
 -- | Operations involving chain selection: the initial chain selection and
 -- adding a block.
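For readers unfamiliar with the `BlockCache` module reformatted above: it is only a thin wrapper around `Data.Map.Strict`, keyed by block hash. A minimal standalone sketch of the same API, with the hash and block types specialised to `String` for illustration (the real module keys on `HeaderHash blk` and derives the key from the block itself via `blockHash`):

```haskell
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import Prelude hiding (lookup)

-- Hypothetical simplification of BlockCache: hash ~ String, block ~ String.
newtype BlockCache = BlockCache (Map String String)

empty :: BlockCache
empty = BlockCache Map.empty

singleton :: String -> String -> BlockCache
singleton hash blk = BlockCache (Map.singleton hash blk)

cacheBlock :: String -> String -> BlockCache -> BlockCache
cacheBlock hash blk (BlockCache cache) = BlockCache (Map.insert hash blk cache)

lookup :: String -> BlockCache -> Maybe String
lookup hash (BlockCache cache) = Map.lookup hash cache

main :: IO ()
main = print (lookup "h1" (cacheBlock "h1" "block1" empty))
-- prints Just "block1"
```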
-module Ouroboros.Consensus.Storage.ChainDB.Impl.ChainSel (
-    addBlockAsync
+module Ouroboros.Consensus.Storage.ChainDB.Impl.ChainSel
+  ( addBlockAsync
   , chainSelSync
   , chainSelectionForBlock
   , initialChainSelection
   , triggerChainSelectionAsync
+    -- * Exported for testing purposes
   , olderThanK
   ) where

-import Cardano.Ledger.BaseTypes (unNonZero)
-import Control.Exception (assert)
-import Control.Monad (forM, forM_, when)
-import Control.Monad.Except ()
-import Control.Monad.Trans.Class (lift)
-import Control.Monad.Trans.State.Strict
-import Control.ResourceRegistry (ResourceRegistry, withRegistry)
-import Control.Tracer (Tracer, nullTracer, traceWith)
-import Data.Foldable (for_)
-import Data.Function (on)
-import Data.Functor.Contravariant ((>$<))
-import Data.List (sortBy)
-import Data.List.NonEmpty (NonEmpty)
-import qualified Data.List.NonEmpty as NE
-import Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import Data.Maybe (fromJust, isJust, isNothing)
-import Data.Maybe.Strict (StrictMaybe (..), strictMaybeToMaybe)
-import Data.Set (Set)
-import qualified Data.Set as Set
-import GHC.Stack (HasCallStack)
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.Config
-import Ouroboros.Consensus.Fragment.Diff (ChainDiff (..))
-import qualified Ouroboros.Consensus.Fragment.Diff as Diff
-import Ouroboros.Consensus.Fragment.Validated (ValidatedFragment)
-import qualified Ouroboros.Consensus.Fragment.Validated as VF
-import Ouroboros.Consensus.Fragment.ValidatedDiff
-  (ValidatedChainDiff (..))
-import qualified Ouroboros.Consensus.Fragment.ValidatedDiff as ValidatedDiff
-import Ouroboros.Consensus.HardFork.Abstract
-import qualified Ouroboros.Consensus.HardFork.History as History
-import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..),
-  mkHeaderWithTime)
-import Ouroboros.Consensus.Ledger.Abstract
-import Ouroboros.Consensus.Ledger.Extended
-import Ouroboros.Consensus.Ledger.Inspect
-import Ouroboros.Consensus.Ledger.SupportsProtocol
-import Ouroboros.Consensus.Storage.ChainDB.API (AddBlockPromise (..),
-  AddBlockResult (..), BlockComponent (..), ChainType (..),
-  LoE (..))
-import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment
-  (InvalidBlockPunishment, noPunishment)
-import qualified Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment as InvalidBlockPunishment
-import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache
-  (BlockCache)
-import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache as BlockCache
-import Ouroboros.Consensus.Storage.ChainDB.Impl.Paths
-  (LookupBlockInfo)
-import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Paths as Paths
-import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Query as Query
-import Ouroboros.Consensus.Storage.ChainDB.Impl.Types
-import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB)
-import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB
-import Ouroboros.Consensus.Storage.LedgerDB
-import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB
-import Ouroboros.Consensus.Storage.VolatileDB (VolatileDB)
-import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB
-import Ouroboros.Consensus.Util
-import Ouroboros.Consensus.Util.AnchoredFragment
-import Ouroboros.Consensus.Util.Enclose (encloseWith)
-import Ouroboros.Consensus.Util.IOLike
-import Ouroboros.Consensus.Util.STM (WithFingerprint (..))
-import Ouroboros.Network.AnchoredFragment (Anchor, AnchoredFragment,
-  AnchoredSeq (..))
-import qualified Ouroboros.Network.AnchoredFragment as AF
-import qualified Ouroboros.Network.AnchoredSeq as AS
-import Ouroboros.Network.Protocol.LocalStateQuery.Type (Target (..))
+import Cardano.Ledger.BaseTypes (unNonZero)
+import Control.Exception (assert)
+import Control.Monad (forM, forM_, when)
+import Control.Monad.Except ()
+import Control.Monad.Trans.Class (lift)
+import Control.Monad.Trans.State.Strict
+import Control.ResourceRegistry (ResourceRegistry, withRegistry)
+import Control.Tracer (Tracer, nullTracer, traceWith)
+import Data.Foldable (for_)
+import Data.Function (on)
+import Data.Functor.Contravariant ((>$<))
+import Data.List (sortBy)
+import Data.List.NonEmpty (NonEmpty)
+import Data.List.NonEmpty qualified as NE
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Maybe (fromJust, isJust, isNothing)
+import Data.Maybe.Strict (StrictMaybe (..), strictMaybeToMaybe)
+import Data.Set (Set)
+import Data.Set qualified as Set
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.Fragment.Diff (ChainDiff (..))
+import Ouroboros.Consensus.Fragment.Diff qualified as Diff
+import Ouroboros.Consensus.Fragment.Validated (ValidatedFragment)
+import Ouroboros.Consensus.Fragment.Validated qualified as VF
+import Ouroboros.Consensus.Fragment.ValidatedDiff
+  ( ValidatedChainDiff (..)
+  )
+import Ouroboros.Consensus.Fragment.ValidatedDiff qualified as ValidatedDiff
+import Ouroboros.Consensus.HardFork.Abstract
+import Ouroboros.Consensus.HardFork.History qualified as History
+import Ouroboros.Consensus.HeaderValidation
+  ( HeaderWithTime (..)
+  , mkHeaderWithTime
+  )
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.Extended
+import Ouroboros.Consensus.Ledger.Inspect
+import Ouroboros.Consensus.Ledger.SupportsProtocol
+import Ouroboros.Consensus.Storage.ChainDB.API
+  ( AddBlockPromise (..)
+  , AddBlockResult (..)
+  , BlockComponent (..)
+  , ChainType (..)
+  , LoE (..)
+  )
+import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment
+  ( InvalidBlockPunishment
+  , noPunishment
+  )
+import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment qualified as InvalidBlockPunishment
+import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache
+  ( BlockCache
+  )
+import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache qualified as BlockCache
+import Ouroboros.Consensus.Storage.ChainDB.Impl.Paths
+  ( LookupBlockInfo
+  )
+import Ouroboros.Consensus.Storage.ChainDB.Impl.Paths qualified as Paths
+import Ouroboros.Consensus.Storage.ChainDB.Impl.Query qualified as Query
+import Ouroboros.Consensus.Storage.ChainDB.Impl.Types
+import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB)
+import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB
+import Ouroboros.Consensus.Storage.LedgerDB
+import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB
+import Ouroboros.Consensus.Storage.VolatileDB (VolatileDB)
+import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB
+import Ouroboros.Consensus.Util
+import Ouroboros.Consensus.Util.AnchoredFragment
+import Ouroboros.Consensus.Util.Enclose (encloseWith)
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Consensus.Util.STM (WithFingerprint (..))
+import Ouroboros.Network.AnchoredFragment
+  ( Anchor
+  , AnchoredFragment
+  , AnchoredSeq (..)
+  )
+import Ouroboros.Network.AnchoredFragment qualified as AF
+import Ouroboros.Network.AnchoredSeq qualified as AS
+import Ouroboros.Network.Protocol.LocalStateQuery.Type (Target (..))

 -- | Perform the initial chain selection based on the tip of the ImmutableDB
 -- and the contents of the VolatileDB.
 --
 -- Returns the chosen validated chain and corresponding ledger.
---
 initialChainSelection ::
-     forall m blk.
-     ( IOLike m
-     , LedgerSupportsProtocol blk
-     , BlockSupportsDiffusionPipelining blk
-     )
-  => ImmutableDB m blk
-  -> VolatileDB m blk
-  -> LedgerDB.LedgerDB' m blk
-  -> ResourceRegistry m
-  -> Tracer m (TraceInitChainSelEvent blk)
-  -> TopLevelConfig blk
-  -> StrictTVar m (WithFingerprint (InvalidBlocks blk))
-  -> LoE ()
-  -> m (ChainAndLedger m blk)
-initialChainSelection immutableDB volatileDB lgrDB rr tracer cfg varInvalid
-                      loE = do
+  forall m blk.
+  ( IOLike m
+  , LedgerSupportsProtocol blk
+  , BlockSupportsDiffusionPipelining blk
+  ) =>
+  ImmutableDB m blk ->
+  VolatileDB m blk ->
+  LedgerDB.LedgerDB' m blk ->
+  ResourceRegistry m ->
+  Tracer m (TraceInitChainSelEvent blk) ->
+  TopLevelConfig blk ->
+  StrictTVar m (WithFingerprint (InvalidBlocks blk)) ->
+  LoE () ->
+  m (ChainAndLedger m blk)
+initialChainSelection
+  immutableDB
+  volatileDB
+  lgrDB
+  rr
+  tracer
+  cfg
+  varInvalid
+  loE = do
    -- TODO: Improve the user experience by trimming any potential
    -- blocks from the future from the VolatileDB.
    --
@@ -120,7 +141,7 @@ initialChainSelection immutableDB volatileDB lgrDB rr tracer cfg varInvalid
    -- some reason the clock of the node was set back (by a
    -- significant amount of time). This is a rare situation, but can
    -- arise for instance if the clock of the node was set in the
-    -- **far** future. In this case, node will be disconnected from
+    -- \**far** future. In this case, node will be disconnected from
    -- other peers when diffusing these blocks. Once the node is
    -- restarted with a synchronized clock, it will diffuse said
    -- blocks from the future again (assuming they're still from the
@@ -135,122 +156,128 @@ initialChainSelection immutableDB volatileDB lgrDB rr tracer cfg varInvalid
      invalid <- forgetFingerprint <$> readTVar varInvalid
      (,)
        <$> ImmutableDB.getTipAnchor immutableDB
-        <*> (ignoreInvalidSuc volatileDB invalid <$>
-               VolatileDB.filterByPredecessor volatileDB)
+        <*> ( ignoreInvalidSuc volatileDB invalid
+                <$> VolatileDB.filterByPredecessor volatileDB
+            )

    -- This is safe: the LedgerDB tip doesn't change in between the previous
    -- atomically block and this call to 'withTipForker'.
    --
    -- We don't use 'LedgerDB.withTipForker' here, because 'curForker' might be
    -- returned as part of the selected chain.
-    curForker <- LedgerDB.getForkerAtTarget lgrDB rr VolatileTip >>= \case
-      Left{} -> error "Unreachable, VolatileTip MUST be in the LedgerDB"
-      Right frk -> pure frk
+    curForker <-
+      LedgerDB.getForkerAtTarget lgrDB rr VolatileTip >>= \case
+        Left{} -> error "Unreachable, VolatileTip MUST be in the LedgerDB"
+        Right frk -> pure frk

    chains <- constructChains i succsOf

    -- We use the empty fragment anchored at @i@ as the current chain (and
    -- ledger) and the default in case there is no better candidate.
-    let curChain          = Empty (AF.castAnchor i)
+    let curChain = Empty (AF.castAnchor i)
    curChainAndLedger <- VF.newM curChain curForker

    case NE.nonEmpty (filter (preferAnchoredCandidate bcfg curChain) chains) of
      -- If there are no candidates, no chain selection is needed
-      Nothing      -> return curChainAndLedger
+      Nothing -> return curChainAndLedger
      Just chains' ->
        chainSelection' curChainAndLedger chains' >>= \case
          -- The returned forker will be closed in 'openDBInternal'.
-          Nothing       -> pure curChainAndLedger
+          Nothing -> pure curChainAndLedger
          Just newChain -> forkerClose curForker >> toChainAndLedger newChain
-  where
+ where
   bcfg :: BlockConfig blk
   bcfg = configBlock cfg

   SecurityParam k = configSecurityParam cfg

-    -- | Turn the 'ValidatedChainDiff' into a 'ChainAndLedger'.
+    -- \| Turn the 'ValidatedChainDiff' into a 'ChainAndLedger'.
   --
   -- The rollback of the 'ChainDiff' must be empty, as the suffix starts
   -- from the tip of the ImmutableDB, and we can't roll back past that tip.
   -- This is guaranteed by the fact that all constructed candidates start
   -- from this tip.
-    toChainAndLedger
-      :: ValidatedChainDiff (Header blk) (Forker' m blk)
-      -> m (ChainAndLedger m blk)
+    toChainAndLedger ::
+      ValidatedChainDiff (Header blk) (Forker' m blk) ->
+      m (ChainAndLedger m blk)
   toChainAndLedger (ValidatedChainDiff chainDiff ledger) =
     case chainDiff of
       ChainDiff rollback suffix
-          | rollback == 0
-          -> VF.newM suffix ledger
-          | otherwise
-          -> error "constructed an initial chain with rollback"
+          | rollback == 0 ->
+              VF.newM suffix ledger
+          | otherwise ->
+              error "constructed an initial chain with rollback"

-    -- | Use the VolatileDB to construct all chains starting from the tip of
+    -- \| Use the VolatileDB to construct all chains starting from the tip of
   -- the ImmutableDB.
   constructChains ::
-         Anchor blk -- ^ Tip of the ImmutableDB, @i@
-      -> (ChainHash blk -> Set (HeaderHash blk))
-      -> m [AnchoredFragment (Header blk)]
-    constructChains i succsOf = flip evalStateT Map.empty $
+      Anchor blk ->
+      -- \^ Tip of the ImmutableDB, @i@
+      (ChainHash blk -> Set (HeaderHash blk)) ->
+      m [AnchoredFragment (Header blk)]
+    constructChains i succsOf =
+      flip evalStateT Map.empty $
       mapM constructChain suffixesAfterI
-      where
-        -- We now prevent selecting more than k blocks in maximalCandidates
-        -- when the LoE is enabled to avoid circumventing the LoE on startup.
-        -- Shutting down a syncing node and then restarting it should not cause
-        -- it to select the longest chain in the VolDB, since that chain might be
-        -- adversarial (ie the LoE did not allow the node to select it when it
-        -- arrived).
-        suffixesAfterI :: [NonEmpty (HeaderHash blk)]
-        suffixesAfterI = Paths.maximalCandidates succsOf (unNonZero <$> limit) (AF.anchorToPoint i)
-          where
-            limit = case loE of
-              LoEDisabled -> Nothing
-              LoEEnabled () -> Just k
-
-        constructChain ::
-             NonEmpty (HeaderHash blk)
-          -> StateT (Map (HeaderHash blk) (Header blk))
-                    m
-                    (AnchoredFragment (Header blk))
-        constructChain hashes =
-            AF.fromOldestFirst (AF.castAnchor i) <$>
-              mapM (getKnownHeaderThroughCache volatileDB) (NE.toList hashes)
-
-    -- | Perform chain selection (including validation) on the given
+     where
+      -- We now prevent selecting more than k blocks in maximalCandidates
+      -- when the LoE is enabled to avoid circumventing the LoE on startup.
+      -- Shutting down a syncing node and then restarting it should not cause
+      -- it to select the longest chain in the VolDB, since that chain might be
+      -- adversarial (ie the LoE did not allow the node to select it when it
+      -- arrived).
+      suffixesAfterI :: [NonEmpty (HeaderHash blk)]
+      suffixesAfterI = Paths.maximalCandidates succsOf (unNonZero <$> limit) (AF.anchorToPoint i)
+       where
+        limit = case loE of
+          LoEDisabled -> Nothing
+          LoEEnabled () -> Just k
+
+      constructChain ::
+        NonEmpty (HeaderHash blk) ->
+        StateT
+          (Map (HeaderHash blk) (Header blk))
+          m
+          (AnchoredFragment (Header blk))
+      constructChain hashes =
+        AF.fromOldestFirst (AF.castAnchor i)
+          <$> mapM (getKnownHeaderThroughCache volatileDB) (NE.toList hashes)
+
+    -- \| Perform chain selection (including validation) on the given
   -- candidates.
   --
   -- PRECONDITION: all candidates are anchored at @i@.
   --
   -- PRECONDITION: all candidates must be preferred over the current chain.
   chainSelection' ::
-         HasCallStack
-      => ChainAndLedger m blk
-         -- ^ The current chain and ledger, corresponding to
-         -- @i@.
-      -> NonEmpty (AnchoredFragment (Header blk))
-         -- ^ Candidates anchored at @i@
-      -> m (Maybe (ValidatedChainDiff (Header blk) (Forker' m blk)))
+      HasCallStack =>
+      ChainAndLedger m blk ->
+      -- \^ The current chain and ledger, corresponding to
+      -- @i@.
+      NonEmpty (AnchoredFragment (Header blk)) ->
+      -- \^ Candidates anchored at @i@
+      m (Maybe (ValidatedChainDiff (Header blk) (Forker' m blk)))
   chainSelection' curChainAndLedger candidates =
-      atomically (forkerCurrentPoint ledger) >>= \curpt ->
+      atomically (forkerCurrentPoint ledger) >>= \curpt ->
       assert (all ((curpt ==) . castPoint . AF.anchorPoint) candidates) $
-        assert (all (preferAnchoredCandidate bcfg curChain) candidates) $ do
-          cse <- chainSelEnv
-          chainSelection cse rr (Diff.extend <$> candidates)
-      where
-        curChain = VF.validatedFragment curChainAndLedger
-        ledger   = VF.validatedLedger curChainAndLedger
-        chainSelEnv = do
-          varTentativeState  <- newTVarIO (initialTentativeHeaderState (Proxy @blk))
-          varTentativeHeader <- newTVarIO SNothing
-          pure ChainSelEnv
+          assert (all (preferAnchoredCandidate bcfg curChain) candidates) $ do
+            cse <- chainSelEnv
+            chainSelection cse rr (Diff.extend <$> candidates)
+     where
+      curChain = VF.validatedFragment curChainAndLedger
+      ledger = VF.validatedLedger curChainAndLedger
+      chainSelEnv = do
+        varTentativeState <- newTVarIO (initialTentativeHeaderState (Proxy @blk))
+        varTentativeHeader <- newTVarIO SNothing
+        pure
+          ChainSelEnv
           { lgrDB
           , bcfg
           , varInvalid
           , blockCache = BlockCache.empty
           , curChainAndLedger
           , validationTracer = InitChainSelValidation >$< tracer
-              -- initial chain selection is not concerned about pipelining
-            , pipeliningTracer = nullTracer
+            , -- initial chain selection is not concerned about pipelining
+              pipeliningTracer = nullTracer
           , varTentativeState
           , varTentativeHeader
           , punish = Nothing
@@ -282,15 +309,15 @@ initialChainSelection immutableDB volatileDB lgrDB rr tracer cfg varInvalid
 --
 -- PRECONDITION: the block to be added must not be from the future.
 -- See 'Ouroboros.Consensus.Storage.ChainDB.API.addBlockAsync'.
---
 addBlockAsync ::
-     forall m blk. (IOLike m, HasHeader blk)
-  => ChainDbEnv m blk
-  -> InvalidBlockPunishment m
-  -> blk
-  -> m (AddBlockPromise m blk)
-addBlockAsync CDB { cdbTracer, cdbChainSelQueue } =
-    addBlockToAdd (TraceAddBlockEvent >$< cdbTracer) cdbChainSelQueue
+  forall m blk.
+  (IOLike m, HasHeader blk) =>
+  ChainDbEnv m blk ->
+  InvalidBlockPunishment m ->
+  blk ->
+  m (AddBlockPromise m blk)
+addBlockAsync CDB{cdbTracer, cdbChainSelQueue} =
+  addBlockToAdd (TraceAddBlockEvent >$< cdbTracer) cdbChainSelQueue

 -- | Schedule reprocessing of blocks postponed by the LoE.
 triggerChainSelectionAsync ::
@@ -298,7 +325,7 @@ triggerChainSelectionAsync ::
   forall m blk.
   IOLike m =>
   ChainDbEnv m blk ->
   m (ChainSelectionPromise m)
-triggerChainSelectionAsync CDB {cdbTracer, cdbChainSelQueue} =
+triggerChainSelectionAsync CDB{cdbTracer, cdbChainSelQueue} =
   addReprocessLoEBlocks (TraceAddBlockEvent >$< cdbTracer) cdbChainSelQueue

 -- | Add a block to the ChainDB, /synchronously/.
@@ -310,18 +337,17 @@ triggerChainSelectionAsync CDB {cdbTracer, cdbChainSelQueue} =
 -- When the slot of the block is > the current slot, a chain selection will be
 -- scheduled in the slot of the block.
 chainSelSync ::
-     forall m blk.
-     ( IOLike m
-     , LedgerSupportsProtocol blk
-     , BlockSupportsDiffusionPipelining blk
-     , InspectLedger blk
-     , HasHardForkHistory blk
-     , HasCallStack
-     )
-  => ChainDbEnv m blk
-  -> ChainSelMessage m blk
-  -> Electric m ()
-
+  forall m blk.
+  ( IOLike m
+  , LedgerSupportsProtocol blk
+  , BlockSupportsDiffusionPipelining blk
+  , InspectLedger blk
+  , HasHardForkHistory blk
+  , HasCallStack
+  ) =>
+  ChainDbEnv m blk ->
+  ChainSelMessage m blk ->
+  Electric m ()
 -- Reprocess headers that were postponed by the LoE.
 -- When we try to extend the current chain with a new block beyond the LoE
 -- limit, the block will be added to the DB without modifying the chain.
@@ -334,90 +360,95 @@ chainSelSync ::
 -- peer.
 -- If 'cdbLoE' is 'LoEDisabled', this task is skipped.
 chainSelSync cdb@CDB{..} (ChainSelReprocessLoEBlocks varProcessed) = do
-    lift cdbLoE >>= \case
-      LoEDisabled -> pure ()
-      LoEEnabled _ -> do
-        (succsOf, chain) <- lift $ atomically $ do
-          invalid <- forgetFingerprint <$> readTVar cdbInvalid
-          (,)
-            <$> (ignoreInvalidSuc cdbVolatileDB invalid <$>
-                   VolatileDB.filterByPredecessor cdbVolatileDB)
-            <*> Query.getCurrentChain cdb
-        let
-          succsOf' = Set.toList . succsOf . pointHash . castPoint
-          loeHashes = succsOf' (AF.anchorPoint chain)
-          firstHeader = either (const Nothing) Just $ AF.last chain
-          -- We avoid the VolatileDB for the headers we already have in the chain
-          getHeaderFromHash hash =
-            case firstHeader of
-              Just header | headerHash header == hash -> pure header
-              _ -> VolatileDB.getKnownBlockComponent cdbVolatileDB GetHeader hash
-        loeHeaders <- lift (mapM getHeaderFromHash loeHashes)
-        for_ loeHeaders $ \hdr ->
-          chainSelectionForBlock cdb BlockCache.empty hdr noPunishment
-        lift $ atomically $ putTMVar varProcessed ()
-chainSelSync cdb@CDB {..} (ChainSelAddBlock BlockToAdd { blockToAdd = b, .. }) = do
-    (isMember, invalid, curChain) <- lift $ atomically $ (,,)
-      <$> VolatileDB.getIsMember cdbVolatileDB
-      <*> (forgetFingerprint <$> readTVar cdbInvalid)
-      <*> Query.getCurrentChain cdb
-
-    let immBlockNo = AF.anchorBlockNo curChain
-
-    -- We follow the steps from section "## Adding a block" in ChainDB.md
-
-    if
-      | olderThanK hdr isEBB immBlockNo -> do
+  lift cdbLoE >>= \case
+    LoEDisabled -> pure ()
+    LoEEnabled _ -> do
+      (succsOf, chain) <- lift $ atomically $ do
+        invalid <- forgetFingerprint <$> readTVar cdbInvalid
+        (,)
+          <$> ( ignoreInvalidSuc cdbVolatileDB invalid
+                  <$> VolatileDB.filterByPredecessor cdbVolatileDB
+              )
+          <*> Query.getCurrentChain cdb
+      let
+        succsOf' = Set.toList . succsOf . pointHash . castPoint
+        loeHashes = succsOf' (AF.anchorPoint chain)
+        firstHeader = either (const Nothing) Just $ AF.last chain
+        -- We avoid the VolatileDB for the headers we already have in the chain
+        getHeaderFromHash hash =
+          case firstHeader of
+            Just header | headerHash header == hash -> pure header
+            _ -> VolatileDB.getKnownBlockComponent cdbVolatileDB GetHeader hash
+      loeHeaders <- lift (mapM getHeaderFromHash loeHashes)
+      for_ loeHeaders $ \hdr ->
+        chainSelectionForBlock cdb BlockCache.empty hdr noPunishment
+      lift $ atomically $ putTMVar varProcessed ()
+chainSelSync cdb@CDB{..} (ChainSelAddBlock BlockToAdd{blockToAdd = b, ..}) = do
+  (isMember, invalid, curChain) <-
+    lift $
+      atomically $
+        (,,)
+          <$> VolatileDB.getIsMember cdbVolatileDB
+          <*> (forgetFingerprint <$> readTVar cdbInvalid)
+          <*> Query.getCurrentChain cdb
+
+  let immBlockNo = AF.anchorBlockNo curChain
+
+  -- We follow the steps from section "## Adding a block" in ChainDB.md
+
+  if
+    | olderThanK hdr isEBB immBlockNo -> do
        lift $ traceWith addBlockTracer $
          IgnoreBlockOlderThanK (blockRealPoint b)
        lift $ deliverWrittenToDisk False
-
-      | isMember (blockHash b) -> do
+    | isMember (blockHash b) -> do
        lift $ traceWith addBlockTracer $
          IgnoreBlockAlreadyInVolatileDB (blockRealPoint b)
        lift $ deliverWrittenToDisk True
-
-      | Just (InvalidBlockInfo reason _) <- Map.lookup (blockHash b) invalid -> do
+    | Just (InvalidBlockInfo reason _) <- Map.lookup (blockHash b) invalid -> do
        lift $ traceWith addBlockTracer $
          IgnoreInvalidBlock (blockRealPoint b) reason
        lift $ deliverWrittenToDisk False

        -- We wouldn't know the block is invalid if its prefix was invalid,
        -- hence 'InvalidBlockPunishment.BlockItself'.
-        lift $ InvalidBlockPunishment.enact
-            blockPunish
-            InvalidBlockPunishment.BlockItself
+        lift $
+          InvalidBlockPunishment.enact
+            blockPunish
+            InvalidBlockPunishment.BlockItself

-      -- The remaining cases
-      | otherwise -> do
+    -- The remaining cases
+    | otherwise -> do
        let traceEv = AddedBlockToVolatileDB (blockRealPoint b) (blockNo b) isEBB
-        lift $ encloseWith (traceEv >$< addBlockTracer) $
-          VolatileDB.putBlock cdbVolatileDB b
+        lift $
+          encloseWith (traceEv >$< addBlockTracer) $
+            VolatileDB.putBlock cdbVolatileDB b
        lift $ deliverWrittenToDisk True
        chainSelectionForBlock cdb (BlockCache.singleton b) hdr blockPunish

-    newTip <- lift $ atomically $ Query.getTipPoint cdb
+  newTip <- lift $ atomically $ Query.getTipPoint cdb

-    lift $ deliverProcessed newTip
-  where
-    addBlockTracer :: Tracer m (TraceAddBlockEvent blk)
-    addBlockTracer = TraceAddBlockEvent >$< cdbTracer
+  lift $ deliverProcessed newTip
+ where
+  addBlockTracer :: Tracer m (TraceAddBlockEvent blk)
+  addBlockTracer = TraceAddBlockEvent >$< cdbTracer

-    hdr :: Header blk
-    hdr = getHeader b
+  hdr :: Header blk
+  hdr = getHeader b

-    isEBB :: IsEBB
-    isEBB = headerToIsEBB hdr
+  isEBB :: IsEBB
+  isEBB = headerToIsEBB hdr

-    -- | Fill in the 'TMVar' for the 'varBlockWrittenToDisk' of the block's
-    -- 'AddBlockPromise' with the given 'Bool'.
-    deliverWrittenToDisk :: Bool -> m ()
-    deliverWrittenToDisk writtenToDisk = atomically $
-      putTMVar varBlockWrittenToDisk writtenToDisk
+  -- \| Fill in the 'TMVar' for the 'varBlockWrittenToDisk' of the block's
+  -- 'AddBlockPromise' with the given 'Bool'.
+  deliverWrittenToDisk :: Bool -> m ()
+  deliverWrittenToDisk writtenToDisk =
+    atomically $
+      putTMVar varBlockWrittenToDisk writtenToDisk

-    -- | Fill in the 'TMVar' for the 'varBlockProcessed' of the block's
-    -- 'AddBlockPromise' with the given tip.
-    deliverProcessed :: Point blk -> m ()
-    deliverProcessed tip = atomically $
-      putTMVar varBlockProcessed (SuccesfullyAddedBlock tip)
+  -- \| Fill in the 'TMVar' for the 'varBlockProcessed' of the block's
+  -- 'AddBlockPromise' with the given tip.
+  deliverProcessed :: Point blk -> m ()
+  deliverProcessed tip =
+    atomically $
+      putTMVar varBlockProcessed (SuccesfullyAddedBlock tip)

 -- | Return 'True' when the given header should be ignored when adding it
 -- because it is too old, i.e., we wouldn't be able to switch to a chain
@@ -488,77 +519,80 @@ data ChainSwitchType = AddingBlocks | SwitchingToAFork -- -- This cost is currently deemed acceptable. chainSelectionForBlock :: - forall m blk. - ( IOLike m - , LedgerSupportsProtocol blk - , BlockSupportsDiffusionPipelining blk - , InspectLedger blk - , HasHardForkHistory blk - , HasCallStack - ) - => ChainDbEnv m blk - -> BlockCache blk - -> Header blk - -> InvalidBlockPunishment m - -> Electric m () + forall m blk. + ( IOLike m + , LedgerSupportsProtocol blk + , BlockSupportsDiffusionPipelining blk + , InspectLedger blk + , HasHardForkHistory blk + , HasCallStack + ) => + ChainDbEnv m blk -> + BlockCache blk -> + Header blk -> + InvalidBlockPunishment m -> + Electric m () chainSelectionForBlock cdb@CDB{..} blockCache hdr punish = electric $ withRegistry $ \rr -> do - (invalid, succsOf, lookupBlockInfo, curChain, tipPoint) - <- atomically $ (,,,,) - <$> (forgetFingerprint <$> readTVar cdbInvalid) - <*> VolatileDB.filterByPredecessor cdbVolatileDB - <*> VolatileDB.getBlockInfo cdbVolatileDB - <*> Query.getCurrentChain cdb - <*> Query.getTipPoint cdb - -- This is safe: the LedgerDB tip doesn't change in between the previous - -- atomically block and this call to 'withTipForker'. - LedgerDB.withTipForker cdbLedgerDB rr $ \curForker -> do - curChainAndLedger :: ChainAndLedger m blk <- - -- The current chain we're working with here is not longer than @k@ - -- blocks (see 'getCurrentChain' and 'cdbChain'), which is easier to - -- reason about when doing chain selection, etc. 
- assert (fromIntegral (AF.length curChain) <= unNonZero k) $ - VF.newM curChain curForker - - let - immBlockNo :: WithOrigin BlockNo - immBlockNo = AF.anchorBlockNo curChain - - -- Let these two functions ignore invalid blocks - lookupBlockInfo' = ignoreInvalid cdb invalid lookupBlockInfo - succsOf' = ignoreInvalidSuc cdb invalid succsOf + (invalid, succsOf, lookupBlockInfo, curChain, tipPoint) <- + atomically $ + (,,,,) + <$> (forgetFingerprint <$> readTVar cdbInvalid) + <*> VolatileDB.filterByPredecessor cdbVolatileDB + <*> VolatileDB.getBlockInfo cdbVolatileDB + <*> Query.getCurrentChain cdb + <*> Query.getTipPoint cdb + -- This is safe: the LedgerDB tip doesn't change in between the previous + -- atomically block and this call to 'withTipForker'. + LedgerDB.withTipForker cdbLedgerDB rr $ \curForker -> do + curChainAndLedger :: ChainAndLedger m blk <- + -- The current chain we're working with here is not longer than @k@ + -- blocks (see 'getCurrentChain' and 'cdbChain'), which is easier to + -- reason about when doing chain selection, etc. + assert (fromIntegral (AF.length curChain) <= unNonZero k) $ + VF.newM curChain curForker + + let + immBlockNo :: WithOrigin BlockNo + immBlockNo = AF.anchorBlockNo curChain + + -- Let these two functions ignore invalid blocks + lookupBlockInfo' = ignoreInvalid cdb invalid lookupBlockInfo + succsOf' = ignoreInvalidSuc cdb invalid succsOf + + -- The preconditions + assert (isJust $ lookupBlockInfo (headerHash hdr)) $ return () + + let + -- Trim the LoE fragment to be anchored in the immutable tip, ie the + -- anchor of @curChain@. In particular, this establishes the property that + -- it intersects with the current chain. 
+ sanitizeLoEFrag :: + AnchoredFragment (HeaderWithTime blk) -> + AnchoredFragment (HeaderWithTime blk) + sanitizeLoEFrag loeFrag0 = + case AF.splitAfterPoint loeFrag0 (AF.anchorPoint curChain) of + Just (_, frag) -> frag + -- As the (unsanitized) LoE fragment is rooted in a recent immutable + -- tip, this case means that it doesn't intersect with the current + -- chain. This can temporarily be the case; we are conservative and + -- use the empty fragment anchored at the immutable tip for chain + -- selection. + Nothing -> AF.Empty $ AF.castAnchor $ AF.anchor curChain + + loeFrag <- fmap sanitizeLoEFrag <$> cdbLoE + + traceWith + addBlockTracer + (ChainSelectionLoEDebug curChain (AF.mapAnchoredFragment hwtHeader <$> loeFrag)) - -- The preconditions - assert (isJust $ lookupBlockInfo (headerHash hdr)) $ return () - - let - -- Trim the LoE fragment to be anchored in the immutable tip, ie the - -- anchor of @curChain@. In particular, this establishes the property that - -- it intersects with the current chain. - sanitizeLoEFrag :: AnchoredFragment (HeaderWithTime blk) - -> AnchoredFragment (HeaderWithTime blk) - sanitizeLoEFrag loeFrag0 = - case AF.splitAfterPoint loeFrag0 (AF.anchorPoint curChain) of - Just (_, frag) -> frag - -- As the (unsanitized) LoE fragment is rooted in a recent immutable - -- tip, this case means that it doesn't intersect with the current - -- chain. This can temporarily be the case; we are conservative and - -- use the empty fragment anchored at the immutable tip for chain - -- selection. - Nothing -> AF.Empty $ AF.castAnchor $ AF.anchor curChain - - loeFrag <- fmap sanitizeLoEFrag <$> cdbLoE - - traceWith addBlockTracer - (ChainSelectionLoEDebug curChain (AF.mapAnchoredFragment hwtHeader <$> loeFrag)) - - if - -- The chain might have grown since we added the block such that the - -- block is older than @k@. 
- | olderThanK hdr isEBB immBlockNo -> do + if + -- The chain might have grown since we added the block such that the + -- block is older than @k@. + | olderThanK hdr isEBB immBlockNo -> do traceWith addBlockTracer $ IgnoreBlockOlderThanK p - -- The block is invalid - | Just (InvalidBlockInfo reason _) <- Map.lookup (headerHash hdr) invalid -> do + -- The block is invalid + | Just (InvalidBlockInfo reason _) <- Map.lookup (headerHash hdr) invalid -> do traceWith addBlockTracer $ IgnoreInvalidBlock p reason -- We wouldn't know the block is invalid if its prefix was invalid, @@ -567,439 +601,454 @@ chainSelectionForBlock cdb@CDB{..} blockCache hdr punish = electric $ withRegist punish InvalidBlockPunishment.BlockItself - -- The block fits onto the end of our current chain - | pointHash tipPoint == headerPrevHash hdr -> do + -- The block fits onto the end of our current chain + | pointHash tipPoint == headerPrevHash hdr -> do -- ### Add to current chain traceWith addBlockTracer (TryAddToCurrentChain p) addToCurrentChain rr succsOf' curChainAndLedger loeFrag - -- The block is reachable from the current selection - -- and it doesn't fit after the current selection - | Just diff <- Paths.isReachable lookupBlockInfo' curChain p -> do + -- The block is reachable from the current selection + -- and it doesn't fit after the current selection + | Just diff <- Paths.isReachable lookupBlockInfo' curChain p -> do -- ### Switch to a fork traceWith addBlockTracer (TrySwitchToAFork p diff) switchToAFork rr succsOf' lookupBlockInfo' curChainAndLedger loeFrag diff - -- We cannot reach the block from the current selection - | otherwise -> do + -- We cannot reach the block from the current selection + | otherwise -> do -- ### Store but don't change the current chain traceWith addBlockTracer (StoreButDontChange p) - - -- Note that we may have extended the chain, but have not trimmed it to - -- @k@ blocks/headers. That is the job of the background thread, which - -- will first copy the blocks/headers to trim (from the end of the - -- fragment) from the VolatileDB to the ImmutableDB. - where - SecurityParam k = configSecurityParam cdbTopLevelConfig - - p :: RealPoint blk - p = headerRealPoint hdr - - isEBB :: IsEBB - isEBB = headerToIsEBB hdr - - addBlockTracer :: Tracer m (TraceAddBlockEvent blk) - addBlockTracer = TraceAddBlockEvent >$< cdbTracer - - mkChainSelEnv :: ChainAndLedger m blk -> ChainSelEnv m blk - mkChainSelEnv curChainAndLedger = ChainSelEnv - { lgrDB = cdbLedgerDB - , bcfg = configBlock cdbTopLevelConfig - , varInvalid = cdbInvalid - , varTentativeState = cdbTentativeState - , varTentativeHeader = cdbTentativeHeader + where + -- Note that we may have extended the chain, but have not trimmed it to + -- @k@ blocks/headers. That is the job of the background thread, which + -- will first copy the blocks/headers to trim (from the end of the + -- fragment) from the VolatileDB to the ImmutableDB. + + SecurityParam k = configSecurityParam cdbTopLevelConfig + + p :: RealPoint blk + p = headerRealPoint hdr + + isEBB :: IsEBB + isEBB = headerToIsEBB hdr + + addBlockTracer :: Tracer m (TraceAddBlockEvent blk) + addBlockTracer = TraceAddBlockEvent >$< cdbTracer + + mkChainSelEnv :: ChainAndLedger m blk -> ChainSelEnv m blk + mkChainSelEnv curChainAndLedger = + ChainSelEnv + { lgrDB = cdbLedgerDB + , bcfg = configBlock cdbTopLevelConfig + , varInvalid = cdbInvalid + , varTentativeState = cdbTentativeState + , varTentativeHeader = cdbTentativeHeader , getTentativeFollowers = - filter ((TentativeChain ==) . fhChainType) . Map.elems - <$> readTVar cdbFollowers - , blockCache = blockCache - , curChainAndLedger = curChainAndLedger - , validationTracer = + filter ((TentativeChain ==) . fhChainType) . Map.elems + <$> readTVar cdbFollowers + , blockCache = blockCache + , curChainAndLedger = curChainAndLedger + , validationTracer = TraceAddBlockEvent . AddBlockValidation >$< cdbTracer - , pipeliningTracer = + , pipeliningTracer = TraceAddBlockEvent . PipeliningEvent >$< cdbTracer - , punish = Just (p, punish) + , punish = Just (p, punish) } - -- | PRECONDITION: the header @hdr@ (and block @b@) fit onto the end of - -- the current chain. - addToCurrentChain :: - HasCallStack - => ResourceRegistry m - -> (ChainHash blk -> Set (HeaderHash blk)) - -> ChainAndLedger m blk - -- ^ The current chain and ledger - -> LoE (AnchoredFragment (HeaderWithTime blk)) - -- ^ LoE fragment - -> m () - addToCurrentChain rr succsOf curChainAndLedger loeFrag = do - -- Extensions of @B@ that do not exceed the LoE - let suffixesAfterB = Paths.maximalCandidates succsOf Nothing (realPointToPoint p) - - -- Fragments that are anchored at @curHead@, i.e. suffixes of the - -- current chain. - candidates <- case NE.nonEmpty suffixesAfterB of - -- If there are no suffixes after @b@, just use the suffix just - -- containing @b@ as the sole candidate. - Nothing -> - return $ AF.fromOldestFirst curHead [hdr] NE.:| [] - Just suffixesAfterB' -> - -- We can start with an empty cache, because we're only looking - -- up the headers /after/ b, so they won't be on the current - -- chain. - flip evalStateT Map.empty $ forM suffixesAfterB' $ \hashes -> do - hdrs <- mapM (getKnownHeaderThroughCache cdbVolatileDB) $ - NE.toList hashes - return $ AF.fromOldestFirst curHead (hdr : hdrs) - - let chainDiffs = NE.nonEmpty - $ filter (preferAnchoredCandidate (bcfg chainSelEnv) curChain . Diff.getSuffix) - $ fmap (trimToLoE loeFrag curChainAndLedger) - $ fmap Diff.extend - $ NE.toList candidates - -- All candidates are longer than the current chain, so they will be - -- preferred over it, /unless/ the block we just added is an EBB, - -- which has the same 'BlockNo' as the block before it, so when - -- using the 'BlockNo' as the proxy for the length (note that some - -- protocols might do it differently), the candidate with the EBB - -- appended will not be preferred over the current chain. - -- - -- The consequence of this is that when adding an EBB, it will not - -- be selected by chain selection and thus not appended to the chain - -- until the block after it is added, which will again result in a - -- candidate preferred over the current chain. In this case, the - -- candidate will be a two-block (the EBB and the new block) - -- extension of the current chain. - case chainDiffs of - Nothing -> return () - Just chainDiffs' -> - chainSelection chainSelEnv rr chainDiffs' >>= \case - Nothing -> - return () - Just validatedChainDiff -> - switchTo - validatedChainDiff - (varTentativeHeader chainSelEnv) - AddingBlocks - where - chainSelEnv = mkChainSelEnv curChainAndLedger - curChain = VF.validatedFragment curChainAndLedger - curHead = AF.headAnchor curChain - - -- | Trim the given candidate fragment to respect the LoE. - -- - -- The returned fragment is such that: - -- - -- - It is a prefix of the given fragment. - -- - If it contains the tip of the LoE fragment, then it contains at most - -- @k@ block after it. - -- - If it does not contain the tip of the LoE fragment, then it is included - -- in the LoE fragment. + -- \| PRECONDITION: the header @hdr@ (and block @b@) fit onto the end of - -- the current chain.
+ addToCurrentChain :: + HasCallStack => + ResourceRegistry m -> + (ChainHash blk -> Set (HeaderHash blk)) -> + ChainAndLedger m blk -> + -- \^ The current chain and ledger + LoE (AnchoredFragment (HeaderWithTime blk)) -> + -- \^ LoE fragment + m () + addToCurrentChain rr succsOf curChainAndLedger loeFrag = do + -- Extensions of @B@ that do not exceed the LoE + let suffixesAfterB = Paths.maximalCandidates succsOf Nothing (realPointToPoint p) + + -- Fragments that are anchored at @curHead@, i.e. suffixes of the + -- current chain. + candidates <- case NE.nonEmpty suffixesAfterB of + -- If there are no suffixes after @b@, just use the suffix just + -- containing @b@ as the sole candidate. + Nothing -> + return $ AF.fromOldestFirst curHead [hdr] NE.:| [] + Just suffixesAfterB' -> + -- We can start with an empty cache, because we're only looking + -- up the headers /after/ b, so they won't be on the current + -- chain. + flip evalStateT Map.empty $ forM suffixesAfterB' $ \hashes -> do + hdrs <- + mapM (getKnownHeaderThroughCache cdbVolatileDB) $ + NE.toList hashes + return $ AF.fromOldestFirst curHead (hdr : hdrs) + + let chainDiffs = + NE.nonEmpty $ + filter (preferAnchoredCandidate (bcfg chainSelEnv) curChain . Diff.getSuffix) $ + fmap (trimToLoE loeFrag curChainAndLedger) $ + fmap Diff.extend $ + NE.toList candidates + -- All candidates are longer than the current chain, so they will be + -- preferred over it, /unless/ the block we just added is an EBB, + -- which has the same 'BlockNo' as the block before it, so when + -- using the 'BlockNo' as the proxy for the length (note that some + -- protocols might do it differently), the candidate with the EBB + -- appended will not be preferred over the current chain. -- - -- The fragment is represented by the current chain and a diff with that - -- current chain. It is tempting to only consider the suffix of the diff, - but that would be incorrect, because the diff might not intersect with - the LoE fragment, because the diff suffix is anchored somewhere on the - current chain and LoE frag's tip might be older than that anchor. - -- - -- PRECONDITIONS: - -- - -- 1. The given 'ChainDiff' can apply on top of the given 'ChainAndLedger'. - -- 2. The LoE fragment intersects with the current selection. - trimToLoE :: - (HasHeader blk', HeaderHash blk ~ HeaderHash blk') => - LoE (AnchoredFragment blk') -> - ChainAndLedger m blk -> - ChainDiff (Header blk) -> - ChainDiff (Header blk) - trimToLoE LoEDisabled _ diff = diff - trimToLoE (LoEEnabled loe) curChain diff = - case Diff.apply (VF.validatedFragment curChain) diff of - Nothing -> error "trimToLoE: precondition 1 violated: the given 'ChainDiff' must apply on top of the given 'ChainAndLedger'" - Just cand -> - case AF.intersect cand loe of - Nothing -> error "trimToLoE: precondition 2 violated: the LoE fragment must intersect with the current selection" - Just (candPrefix, _, candSuffix, loeSuffix) -> - let trimmedCandSuffix = AF.takeOldest (fromIntegral $ unNonZero k) candSuffix - trimmedCand = - if AF.null loeSuffix - then fromJust $ AF.join candPrefix trimmedCandSuffix - else candPrefix - in Diff.diff (VF.validatedFragment curChain) trimmedCand - - -- | We have found a 'ChainDiff' through the VolatileDB connecting the new - -- block to the current chain. We'll call the intersection/anchor @x@. - -- - -- We try to extend this path by looking for forks that start with the - -- given block, then we do chain selection and /possibly/ try to switch to - -- a new fork.
- switchToAFork :: - HasCallStack - => ResourceRegistry m - -> (ChainHash blk -> Set (HeaderHash blk)) - -> LookupBlockInfo blk - -> ChainAndLedger m blk - -- ^ The current chain (anchored at @i@) and ledger - -> LoE (AnchoredFragment (HeaderWithTime blk)) - -- ^ LoE fragment - -> ChainDiff (HeaderFields blk) - -- ^ Header fields for @(x,b]@ - -> m () - switchToAFork rr succsOf lookupBlockInfo curChainAndLedger loeFrag diff = do - -- We use a cache to avoid reading the headers from disk multiple - -- times in case they're part of multiple forks that go through @b@. - let initCache = Map.singleton (headerHash hdr) hdr - chainDiffs <- - -- 5. Filter out candidates that are not preferred over the current - -- chain. - -- - -- The suffixes all fork off from the current chain within @k@ - -- blocks, so it satisfies the precondition of 'preferCandidate'. - fmap - ( filter - ( preferAnchoredCandidate (bcfg chainSelEnv) curChain - . Diff.getSuffix + -- The consequence of this is that when adding an EBB, it will not + -- be selected by chain selection and thus not appended to the chain + -- until the block after it is added, which will again result in a + -- candidate preferred over the current chain. In this case, the + -- candidate will be a two-block (the EBB and the new block) + -- extension of the current chain. + case chainDiffs of + Nothing -> return () + Just chainDiffs' -> + chainSelection chainSelEnv rr chainDiffs' >>= \case + Nothing -> + return () + Just validatedChainDiff -> + switchTo + validatedChainDiff + (varTentativeHeader chainSelEnv) + AddingBlocks + where + chainSelEnv = mkChainSelEnv curChainAndLedger + curChain = VF.validatedFragment curChainAndLedger + curHead = AF.headAnchor curChain + + -- \| Trim the given candidate fragment to respect the LoE. + -- + -- The returned fragment is such that: + -- + -- - It is a prefix of the given fragment. + -- - If it contains the tip of the LoE fragment, then it contains at most + -- @k@ block after it. 
+ -- - If it does not contain the tip of the LoE fragment, then it is included + -- in the LoE fragment. + -- + -- The fragment is represented by the current chain and a diff with that + -- current chain. It is tempting to only consider the suffix of the diff, + -- but that would be incorrect, because the diff might not intersect with + -- the LoE fragment, because the diff suffix is anchored somewhere on the + -- current chain and LoE frag's tip might be older than that anchor. + -- + -- PRECONDITIONS: + -- + -- 1. The given 'ChainDiff' can apply on top of the given 'ChainAndLedger'. + -- 2. The LoE fragment intersects with the current selection. + trimToLoE :: + (HasHeader blk', HeaderHash blk ~ HeaderHash blk') => + LoE (AnchoredFragment blk') -> + ChainAndLedger m blk -> + ChainDiff (Header blk) -> + ChainDiff (Header blk) + trimToLoE LoEDisabled _ diff = diff + trimToLoE (LoEEnabled loe) curChain diff = + case Diff.apply (VF.validatedFragment curChain) diff of + Nothing -> + error + "trimToLoE: precondition 1 violated: the given 'ChainDiff' must apply on top of the given 'ChainAndLedger'" + Just cand -> + case AF.intersect cand loe of + Nothing -> + error + "trimToLoE: precondition 2 violated: the LoE fragment must intersect with the current selection" + Just (candPrefix, _, candSuffix, loeSuffix) -> + let trimmedCandSuffix = AF.takeOldest (fromIntegral $ unNonZero k) candSuffix + trimmedCand = + if AF.null loeSuffix + then fromJust $ AF.join candPrefix trimmedCandSuffix + else candPrefix + in Diff.diff (VF.validatedFragment curChain) trimmedCand + + -- \| We have found a 'ChainDiff' through the VolatileDB connecting the new + -- block to the current chain. We'll call the intersection/anchor @x@. + -- + -- We try to extend this path by looking for forks that start with the + -- given block, then we do chain selection and /possibly/ try to switch to + -- a new fork. 
+ switchToAFork :: + HasCallStack => + ResourceRegistry m -> + (ChainHash blk -> Set (HeaderHash blk)) -> + LookupBlockInfo blk -> + ChainAndLedger m blk -> + -- \^ The current chain (anchored at @i@) and ledger + LoE (AnchoredFragment (HeaderWithTime blk)) -> + -- \^ LoE fragment + ChainDiff (HeaderFields blk) -> + -- \^ Header fields for @(x,b]@ + m () + switchToAFork rr succsOf lookupBlockInfo curChainAndLedger loeFrag diff = do + -- We use a cache to avoid reading the headers from disk multiple + -- times in case they're part of multiple forks that go through @b@. + let initCache = Map.singleton (headerHash hdr) hdr + chainDiffs <- + -- 5. Filter out candidates that are not preferred over the current + -- chain. + -- + -- The suffixes all fork off from the current chain within @k@ + -- blocks, so it satisfies the precondition of 'preferCandidate'. + fmap + ( filter + ( preferAnchoredCandidate (bcfg chainSelEnv) curChain + . Diff.getSuffix + ) + ) + -- 4. Trim fragments so that they follow the LoE, that is, they + -- extend the LoE or are extended by the LoE. Filter them out + -- otherwise. + . fmap (fmap (trimToLoE loeFrag curChainAndLedger)) + -- 3. Translate the 'HeaderFields' to 'Header' by reading the + -- headers from disk. + . flip evalStateT initCache + . mapM translateToHeaders + -- 2. Filter out candidates that are shorter than the current + -- chain. We don't want to needlessly read the headers from disk + -- for those candidates. + . NE.filter (not . Diff.rollbackExceedsSuffix) + -- 1. Extend the diff with candidates fitting on @B@ and not exceeding the LoE + . Paths.extendWithSuccessors succsOf lookupBlockInfo + $ diff + + case NE.nonEmpty chainDiffs of + -- No candidates preferred over the current chain + Nothing -> return () + Just chainDiffs' -> + chainSelection chainSelEnv rr chainDiffs' >>= \case + Nothing -> + return () + Just validatedChainDiff -> + switchTo + validatedChainDiff + (varTentativeHeader chainSelEnv) + SwitchingToAFork + where + chainSelEnv = mkChainSelEnv curChainAndLedger + curChain = VF.validatedFragment curChainAndLedger + + mkSelectionChangedInfo :: + AnchoredFragment (Header blk) -> + -- \^ old chain + AnchoredFragment (Header blk) -> + -- \^ new chain + ExtLedgerState blk EmptyMK -> + -- \^ new tip + SelectionChangedInfo blk + mkSelectionChangedInfo oldChain newChain newTip = + SelectionChangedInfo + { newTipPoint = castRealPoint tipPoint + , newTipEpoch = tipEpoch + , newTipSlotInEpoch = tipSlotInEpoch + , newTipTrigger = p + , newTipSelectView + , oldTipSelectView = + selectView (configBlock cfg) + <$> eitherToMaybe (AF.head oldChain) + } + where + cfg :: TopLevelConfig blk + cfg = cdbTopLevelConfig + + ledger :: LedgerState blk EmptyMK + ledger = ledgerState newTip + + summary :: History.Summary (HardForkIndices blk) + summary = + hardForkSummary + (configLedger cfg) + ledger + + (tipPoint, (tipEpoch, tipSlotInEpoch), newTipSelectView) = + case AF.head newChain of + Left _anchor -> error "cannot have switched to an empty chain" + Right tipHdr -> + let query = History.slotToEpoch' (blockSlot tipHdr) + tipEpochData = History.runQueryPure query summary + sv = selectView (configBlock cfg) tipHdr + in (blockRealPoint tipHdr, tipEpochData, sv) + + -- \| Try to apply the given 'ChainDiff' on the current chain fragment. The + -- 'LedgerDB' is updated in the same transaction. + -- + -- Note that we /cannot/ have switched to a different current chain in the + -- meantime, since this function will only be called by a single + -- background thread.
+ -- + -- It /is/ possible that the background thread copying headers older than + @k@ from the VolatileDB to the ImmutableDB has removed some headers + from the beginning of the current chain fragment, but does not affect + us, as we cannot roll back more than @k@ headers anyway. + switchTo :: + HasCallStack => + ValidatedChainDiff (Header blk) (Forker' m blk) -> + -- \^ Chain and ledger to switch to + StrictTVar m (StrictMaybe (Header blk)) -> + -- \^ Tentative header + ChainSwitchType -> + m () + switchTo vChainDiff varTentativeHeader chainSwitchType = do + traceWith addBlockTracer $ + ChangingSelection $ + castPoint $ + AF.headPoint $ + getSuffix $ + getChainDiff vChainDiff + (curChain, newChain, events, prevTentativeHeader, newLedger) <- atomically $ do + InternalChain curChain curChainWithTime <- readTVar cdbChain -- Not Query.getCurrentChain! + curLedger <- getVolatileTip cdbLedgerDB + newLedger <- forkerGetLedgerState newForker + case Diff.apply curChain chainDiff of + -- Impossible, as described in the docstring + Nothing -> + error "chainDiff doesn't fit onto current chain" + Just newChain -> do + let lcfg = configLedger cdbTopLevelConfig + diffWithTime = + -- the new ledger state can translate the slots of the new + -- headers + Diff.map + ( mkHeaderWithTime + lcfg + (ledgerState newLedger) ) - ) - -- 4. Trim fragments so that they follow the LoE, that is, they - -- extend the LoE or are extended by the LoE. Filter them out - -- otherwise. - . fmap (fmap (trimToLoE loeFrag curChainAndLedger)) - -- 3. Translate the 'HeaderFields' to 'Header' by reading the - -- headers from disk. - . flip evalStateT initCache - . mapM translateToHeaders - -- 2. Filter out candidates that are shorter than the current - -- chain. We don't want to needlessly read the headers from disk - -- for those candidates. - . NE.filter (not . Diff.rollbackExceedsSuffix) - -- 1. Extend the diff with candidates fitting on @B@ and not exceeding the LoE - . Paths.extendWithSuccessors succsOf lookupBlockInfo - $ diff - - case NE.nonEmpty chainDiffs of - -- No candidates preferred over the current chain - Nothing -> return () - Just chainDiffs' -> - chainSelection chainSelEnv rr chainDiffs' >>= \case - Nothing -> - return () - Just validatedChainDiff -> - switchTo - validatedChainDiff - (varTentativeHeader chainSelEnv) - SwitchingToAFork - where - chainSelEnv = mkChainSelEnv curChainAndLedger - curChain = VF.validatedFragment curChainAndLedger - - mkSelectionChangedInfo :: - AnchoredFragment (Header blk) -- ^ old chain - -> AnchoredFragment (Header blk) -- ^ new chain - -> ExtLedgerState blk EmptyMK -- ^ new tip - -> SelectionChangedInfo blk - mkSelectionChangedInfo oldChain newChain newTip = - SelectionChangedInfo { - newTipPoint = castRealPoint tipPoint - , newTipEpoch = tipEpoch - , newTipSlotInEpoch = tipSlotInEpoch - , newTipTrigger = p - , newTipSelectView - , oldTipSelectView = - selectView (configBlock cfg) - <$> eitherToMaybe (AF.head oldChain) - } - where - cfg :: TopLevelConfig blk - cfg = cdbTopLevelConfig - - ledger :: LedgerState blk EmptyMK - ledger = ledgerState newTip - - summary :: History.Summary (HardForkIndices blk) - summary = hardForkSummary - (configLedger cfg) - ledger - - (tipPoint, (tipEpoch, tipSlotInEpoch), newTipSelectView) = - case AF.head newChain of - Left _anchor -> error "cannot have switched to an empty chain" - Right tipHdr -> - let query = History.slotToEpoch' (blockSlot tipHdr) - tipEpochData = History.runQueryPure query summary - sv = selectView (configBlock cfg) tipHdr - in (blockRealPoint tipHdr, tipEpochData, sv) - - -- | Try to apply the given 'ChainDiff' on the current chain fragment. The - -- 'LedgerDB' is updated in the same transaction. - -- - -- Note that we /cannot/ have switched to a different current chain in the - -- meantime, since this function will only be called by a single - -- background thread.
- -- - -- It /is/ possible that the background thread copying headers older than - @k@ from the VolatileDB to the ImmutableDB has removed some headers - from the beginning of the current chain fragment, but does not affect - us, as we cannot roll back more than @k@ headers anyway. - switchTo - :: HasCallStack - => ValidatedChainDiff (Header blk) (Forker' m blk) - -- ^ Chain and ledger to switch to - -> StrictTVar m (StrictMaybe (Header blk)) - -- ^ Tentative header - -> ChainSwitchType - -> m () - switchTo vChainDiff varTentativeHeader chainSwitchType = do - traceWith addBlockTracer $ - ChangingSelection - $ castPoint - $ AF.headPoint - $ getSuffix - $ getChainDiff vChainDiff - (curChain, newChain, events, prevTentativeHeader, newLedger) <- atomically $ do - InternalChain curChain curChainWithTime <- readTVar cdbChain -- Not Query.getCurrentChain! - curLedger <- getVolatileTip cdbLedgerDB - newLedger <- forkerGetLedgerState newForker - case Diff.apply curChain chainDiff of - -- Impossible, as described in the docstring - Nothing -> - error "chainDiff doesn't fit onto current chain" - Just newChain -> do - let lcfg = configLedger cdbTopLevelConfig - diffWithTime = - -- the new ledger state can translate the slots of the new - -- headers - Diff.map - (mkHeaderWithTime - lcfg - (ledgerState newLedger) - ) - chainDiff - newChainWithTime = - case Diff.apply curChainWithTime diffWithTime of - Nothing -> error "chainDiff failed for HeaderWithTime" - Just x -> x - - writeTVar cdbChain $ InternalChain newChain newChainWithTime - forkerCommit newForker - - -- Inspect the new ledger for potential problems - let events :: [LedgerEvent blk] - events = inspectLedger - cdbTopLevelConfig - (ledgerState curLedger) - (ledgerState newLedger) - - -- Clear the tentative header - prevTentativeHeader <- swapTVar varTentativeHeader SNothing - - case chainSwitchType of - -- When adding blocks, the intersection point of the old and new - -- tentative/selected chain is not receding, in which case - -- `fhSwitchFork` is unnecessary. In the case of pipelining a - -- block, it would even result in rolling back by one block and - -- rolling forward again. - AddingBlocks -> pure () - SwitchingToAFork -> do - -- Update the followers - -- - -- 'Follower.switchFork' needs to know the intersection point - -- (@ipoint@) between the old and the current chain. - let ipoint = castPoint $ Diff.getAnchorPoint chainDiff - followerHandles <- Map.elems <$> readTVar cdbFollowers - forM_ followerHandles $ switchFollowerToFork curChain newChain ipoint - - return (curChain, newChain, events, prevTentativeHeader, newLedger) - let mkTraceEvent = case chainSwitchType of - AddingBlocks -> AddedToCurrentChain - SwitchingToAFork -> SwitchedToAFork - selChangedInfo = mkSelectionChangedInfo curChain newChain newLedger - traceWith addBlockTracer $ - mkTraceEvent events selChangedInfo curChain newChain - whenJust (strictMaybeToMaybe prevTentativeHeader) $ traceWith $ - PipeliningEvent . OutdatedTentativeHeader >$< addBlockTracer - - forkerClose newForker - - where - -- Given the current chain and the new chain as chain fragments, and the - -- intersection point (an optimization, since it has already been - -- computed when calling this function), returns a function that updates - -- the state of a follower via its handle. - switchFollowerToFork curChain newChain ipoint = - let oldPoints = Set.fromList . fmap headerPoint . AS.toOldestFirst - $ Diff.getSuffix - $ Diff.diff newChain curChain - in assert (AF.withinFragmentBounds (castPoint ipoint) newChain) $ - \followerHandle -> fhSwitchFork followerHandle ipoint oldPoints - - ValidatedChainDiff chainDiff newForker = vChainDiff - - -- | We have a new block @b@ that doesn't fit onto the current chain, but - -- we have found a 'ChainDiff' connecting it to the current chain via - -- intersection point @x@. We may also have extended that 'ChainDiff' with - -- more blocks fitting onto @b@, i.e., a suffix @s@.
- -- - -- We now translate that 'ChainDiff' from 'HeaderFields' to 'Header's by - -- reading the headers from disk. - -- - -- Note that we need to read the headers corresponding to the hashes - -- @(x,b)@ and @(b,?]@ from disk. Not for @b@, as that's in our cache. - translateToHeaders - :: ChainDiff (HeaderFields blk) - -> StateT (Map (HeaderHash blk) (Header blk)) - m - (ChainDiff (Header blk)) - -- ^ Fork, anchored at @x@, contains (the header of) @b@ and ends - -- with the suffix @s@. - translateToHeaders = - Diff.mapM (getKnownHeaderThroughCache cdbVolatileDB . headerFieldHash) + chainDiff + newChainWithTime = + case Diff.apply curChainWithTime diffWithTime of + Nothing -> error "chainDiff failed for HeaderWithTime" + Just x -> x + + writeTVar cdbChain $ InternalChain newChain newChainWithTime + forkerCommit newForker + + -- Inspect the new ledger for potential problems + let events :: [LedgerEvent blk] + events = + inspectLedger + cdbTopLevelConfig + (ledgerState curLedger) + (ledgerState newLedger) + + -- Clear the tentative header + prevTentativeHeader <- swapTVar varTentativeHeader SNothing + + case chainSwitchType of + -- When adding blocks, the intersection point of the old and new + -- tentative/selected chain is not receding, in which case + -- `fhSwitchFork` is unnecessary. In the case of pipelining a + -- block, it would even result in rolling back by one block and + -- rolling forward again. + AddingBlocks -> pure () + SwitchingToAFork -> do + -- Update the followers + -- + -- 'Follower.switchFork' needs to know the intersection point + -- (@ipoint@) between the old and the current chain. 
+ let ipoint = castPoint $ Diff.getAnchorPoint chainDiff + followerHandles <- Map.elems <$> readTVar cdbFollowers + forM_ followerHandles $ switchFollowerToFork curChain newChain ipoint + + return (curChain, newChain, events, prevTentativeHeader, newLedger) + let mkTraceEvent = case chainSwitchType of + AddingBlocks -> AddedToCurrentChain + SwitchingToAFork -> SwitchedToAFork + selChangedInfo = mkSelectionChangedInfo curChain newChain newLedger + traceWith addBlockTracer $ + mkTraceEvent events selChangedInfo curChain newChain + whenJust (strictMaybeToMaybe prevTentativeHeader) $ + traceWith $ + PipeliningEvent . OutdatedTentativeHeader >$< addBlockTracer + + forkerClose newForker + where + -- Given the current chain and the new chain as chain fragments, and the + -- intersection point (an optimization, since it has already been + -- computed when calling this function), returns a function that updates + -- the state of a follower via its handle. + switchFollowerToFork curChain newChain ipoint = + let oldPoints = + Set.fromList . fmap headerPoint . AS.toOldestFirst $ + Diff.getSuffix $ + Diff.diff newChain curChain + in assert (AF.withinFragmentBounds (castPoint ipoint) newChain) $ + \followerHandle -> fhSwitchFork followerHandle ipoint oldPoints + + ValidatedChainDiff chainDiff newForker = vChainDiff + + -- \| We have a new block @b@ that doesn't fit onto the current chain, but + -- we have found a 'ChainDiff' connecting it to the current chain via + -- intersection point @x@. We may also have extended that 'ChainDiff' with + -- more blocks fitting onto @b@, i.e., a suffix @s@. + -- + -- We now translate that 'ChainDiff' from 'HeaderFields' to 'Header's by + -- reading the headers from disk. + -- + -- Note that we need to read the headers corresponding to the hashes + -- @(x,b)@ and @(b,?]@ from disk. Not for @b@, as that's in our cache. 
+ translateToHeaders :: + ChainDiff (HeaderFields blk) -> + StateT + (Map (HeaderHash blk) (Header blk)) + m + (ChainDiff (Header blk)) + -- \^ Fork, anchored at @x@, contains (the header of) @b@ and ends + -- with the suffix @s@. + translateToHeaders = + Diff.mapM (getKnownHeaderThroughCache cdbVolatileDB . headerFieldHash) -- | Check whether the header for the hash is in the cache, if not, get -- the corresponding header from the VolatileDB and store it in the cache. -- -- PRECONDITION: the header (block) must exist in the VolatileDB. getKnownHeaderThroughCache :: - (MonadThrow m, HasHeader blk) - => VolatileDB m blk - -> HeaderHash blk - -> StateT (Map (HeaderHash blk) (Header blk)) m (Header blk) -getKnownHeaderThroughCache volatileDB hash = gets (Map.lookup hash) >>= \case + (MonadThrow m, HasHeader blk) => + VolatileDB m blk -> + HeaderHash blk -> + StateT (Map (HeaderHash blk) (Header blk)) m (Header blk) +getKnownHeaderThroughCache volatileDB hash = + gets (Map.lookup hash) >>= \case Just hdr -> return hdr - Nothing -> do + Nothing -> do hdr <- lift $ VolatileDB.getKnownBlockComponent volatileDB GetHeader hash modify (Map.insert hash hdr) return hdr -- | Environment used by 'chainSelection' and related functions. data ChainSelEnv m blk = ChainSelEnv - { lgrDB :: LedgerDB.LedgerDB' m blk - , validationTracer :: Tracer m (TraceValidationEvent blk) - , pipeliningTracer :: Tracer m (TracePipeliningEvent blk) - , bcfg :: BlockConfig blk - , varInvalid :: StrictTVar m (WithFingerprint (InvalidBlocks blk)) - , varTentativeState :: StrictTVar m (TentativeHeaderState blk) - , varTentativeHeader :: StrictTVar m (StrictMaybe (Header blk)) - , getTentativeFollowers :: STM m [FollowerHandle m blk] - , blockCache :: BlockCache blk - , curChainAndLedger :: ChainAndLedger m blk - -- | The block that this chain selection invocation is processing, and the - -- punish action for the peer that sent that block; see - -- 'InvalidBlockPunishment'. 
- --
- -- One subtlety:
- --
- -- o If a BlockFetch client adds an invalid block but that block isn't
- -- part of any desirable paths through the VolDB, then we won't attempt
- -- to validate it and so we won't discover it's invalid. The peer will
- -- not be punished. This seems acceptable, since it means we have turned
- -- our focus to another peer offering better blocks and so this peer
- -- is no longer causing us BlockFetch work.
- --
- -- Thus invalid blocks can be skipped entirely. This is part of
- -- the reason we bothered to restrict the expressiveness of the
- -- 'InvalidBlockPunishment' combinators.
- , punish :: Maybe (RealPoint blk, InvalidBlockPunishment m)
- }
+ { lgrDB :: LedgerDB.LedgerDB' m blk
+ , validationTracer :: Tracer m (TraceValidationEvent blk)
+ , pipeliningTracer :: Tracer m (TracePipeliningEvent blk)
+ , bcfg :: BlockConfig blk
+ , varInvalid :: StrictTVar m (WithFingerprint (InvalidBlocks blk))
+ , varTentativeState :: StrictTVar m (TentativeHeaderState blk)
+ , varTentativeHeader :: StrictTVar m (StrictMaybe (Header blk))
+ , getTentativeFollowers :: STM m [FollowerHandle m blk]
+ , blockCache :: BlockCache blk
+ , curChainAndLedger :: ChainAndLedger m blk
+ , punish :: Maybe (RealPoint blk, InvalidBlockPunishment m)
+ -- ^ The block that this chain selection invocation is processing, and the
+ -- punish action for the peer that sent that block; see
+ -- 'InvalidBlockPunishment'.
+ --
+ -- One subtlety:
+ --
+ -- o If a BlockFetch client adds an invalid block but that block isn't
+ -- part of any desirable paths through the VolDB, then we won't attempt
+ -- to validate it and so we won't discover it's invalid. The peer will
+ -- not be punished. This seems acceptable, since it means we have turned
+ -- our focus to another peer offering better blocks and so this peer
+ -- is no longer causing us BlockFetch work.
+ --
+ -- Thus invalid blocks can be skipped entirely.
This is part of + -- the reason we bothered to restrict the expressiveness of the + -- 'InvalidBlockPunishment' combinators. + } -- | Perform chain selection with the given candidates. If a validated -- candidate was chosen to replace the current chain, return it along with the @@ -1010,165 +1059,173 @@ data ChainSelEnv m blk = ChainSelEnv -- PRECONDITION: the candidate chain diffs must fit on the (given) current -- chain. chainSelection :: - forall m blk. - ( IOLike m - , LedgerSupportsProtocol blk - , BlockSupportsDiffusionPipelining blk - , HasCallStack - ) - => ChainSelEnv m blk - -> ResourceRegistry m - -> NonEmpty (ChainDiff (Header blk)) - -> m (Maybe (ValidatedChainDiff (Header blk) (Forker' m blk))) - -- ^ The (valid) chain diff and corresponding LedgerDB that was selected, - -- or 'Nothing' if there is no valid chain diff preferred over the current - -- chain. + forall m blk. + ( IOLike m + , LedgerSupportsProtocol blk + , BlockSupportsDiffusionPipelining blk + , HasCallStack + ) => + ChainSelEnv m blk -> + ResourceRegistry m -> + NonEmpty (ChainDiff (Header blk)) -> + -- | The (valid) chain diff and corresponding LedgerDB that was selected, + -- or 'Nothing' if there is no valid chain diff preferred over the current + -- chain. + m (Maybe (ValidatedChainDiff (Header blk) (Forker' m blk))) chainSelection chainSelEnv rr chainDiffs = - assert (all (preferAnchoredCandidate bcfg curChain . Diff.getSuffix) - chainDiffs) $ - assert (all (isJust . Diff.apply curChain) - chainDiffs) $ - go (sortCandidates (NE.toList chainDiffs)) - where - ChainSelEnv {..} = chainSelEnv - - curChain = VF.validatedFragment curChainAndLedger - - sortCandidates :: [ChainDiff (Header blk)] -> [ChainDiff (Header blk)] - sortCandidates = - sortBy (flip (compareAnchoredFragments bcfg) `on` Diff.getSuffix) - - -- 1. Take the first candidate from the list of sorted candidates - -- 2. Validate it - -- - If it is invalid -> discard it and go to 1 with the rest of the - -- list. 
- -- - If it is valid and has the same tip -> return it - -- - If it is valid, but is a prefix of the original -> - -- add it to the list, sort it and go to 1. See the comment - -- [Ouroboros] below. - go :: - [ChainDiff (Header blk)] - -> m (Maybe (ValidatedChainDiff (Header blk) (Forker' m blk))) - go [] = return Nothing - go (candidate:candidates0) = do - mTentativeHeader <- setTentativeHeader - validateCandidate chainSelEnv rr candidate >>= \case - InsufficientSuffix -> - -- When the body of the tentative block turns out to be invalid, we - -- have a valid *empty* prefix, as the tentative header fits on top - -- of the current chain. - assert (isNothing mTentativeHeader) $ do - candidates1 <- truncateRejectedBlocks candidates0 - go (sortCandidates candidates1) - FullyValid validatedCandidate@(ValidatedChainDiff candidate' _) -> - -- The entire candidate is valid - assert (Diff.getTip candidate == Diff.getTip candidate') $ - return $ Just validatedCandidate - ValidPrefix candidate' -> do - whenJust mTentativeHeader clearTentativeHeader - -- Prefix of the candidate because it contained rejected blocks - -- (invalid blocks). Note that the - -- spec says go back to candidate selection, - -- because there might still be some candidates that contain the - -- same rejected block. To simplify the control flow, we do it - -- differently: instead of recomputing the candidates taking - -- rejected blocks into account, we just truncate the remaining - -- candidates that contain rejected blocks. - candidates1 <- truncateRejectedBlocks candidates0 - -- Only include the prefix if it is still preferred over the current - -- chain. When the candidate is now empty because of the truncation, - -- it will be dropped here, as it will not be preferred over the - -- current chain. 
- let candidates2 - | preferAnchoredCandidate bcfg curChain (Diff.getSuffix candidate') - = candidate':candidates1 - | otherwise - = candidates1 - go (sortCandidates candidates2) - where - -- | Set and return the tentative header, if applicable. Also return the - -- new 'TentativeHeaderState' in case the corresponding block body turns - -- out to be invalid. - setTentativeHeader :: m (Maybe (Header blk, TentativeHeaderState blk)) - setTentativeHeader = do - pipeliningResult <- - (\ts -> isPipelineable bcfg ts candidate) - <$> readTVarIO varTentativeState - whenJust pipeliningResult $ \(tentativeHeader, _) -> do - let setTentative = SetTentativeHeader tentativeHeader - encloseWith (setTentative >$< pipeliningTracer) $ - atomically $ writeTVar varTentativeHeader $ SJust tentativeHeader - -- As we are only extending the existing chain, the intersection - -- point is not receding, in which case fhSwitchFork is not - -- necessary. - - -- Just in case, explicitly yield to ensure that a capability (by - -- default, the node uses just two) has the opportunity to switch - -- to a ChainSync server thread. - yield - pure pipeliningResult - - -- | Clear a tentative header that turned out to be invalid. Also, roll - -- back the tentative followers. - clearTentativeHeader :: (Header blk, TentativeHeaderState blk) -> m () - clearTentativeHeader (tentativeHeader, tentativeSt) = do - atomically $ do - writeTVar varTentativeHeader SNothing - writeTVar varTentativeState tentativeSt - forTentativeFollowers $ \followerHandle -> do - let curTipPoint = castPoint $ AF.headPoint curChain - oldPoints = Set.singleton $ headerPoint tentativeHeader - fhSwitchFork followerHandle curTipPoint oldPoints - traceWith pipeliningTracer $ TrapTentativeHeader tentativeHeader - where - forTentativeFollowers f = getTentativeFollowers >>= mapM_ f - - -- | Truncate the given (remaining) candidates that contain rejected - -- blocks. 
Discard them if they are truncated so much that they are no
- -- longer preferred over the current chain.
- --
- -- A block is rejected if it is invalid (present in 'varInvalid',
- -- i.e., 'cdbInvalid').
- truncateRejectedBlocks ::
- [ChainDiff (Header blk)]
- -> m [ChainDiff (Header blk)]
- truncateRejectedBlocks cands = do
- invalid <- atomically $ readTVar varInvalid
- let isRejected hdr =
- Map.member (headerHash hdr) (forgetFingerprint invalid)
- return $ filter (preferAnchoredCandidate bcfg curChain . Diff.getSuffix)
- $ map (Diff.takeWhileOldest (not . isRejected)) cands
-
- -- [Ouroboros]
- --
- -- Ouroboros says that when we are given an invalid chain by a peer, we
- -- should reject that peer's chain. However, since we're throwing all
- -- blocks together in the ChainDB, we can't tell which block or which
- -- chain came from which peer, so we can't simply reject a peer's chain.
- --
- -- It might be that a good peer gave us a valid chain, but another peer
- -- gave us an invalid block that fits onto the chain of the good peer. In
- -- that case, we do still want to adopt the chain of the good peer, which
- -- is a prefix of the chain that we constructed using all the blocks we
- -- found in the VolatileDB, including the invalid block.
- --
- -- This is the reason why we still take valid prefixes of invalid chains
- -- into account during chain selection: they might correspond to the good
- -- peer's valid chain.
+ assert
+ ( all
+ (preferAnchoredCandidate bcfg curChain . Diff.getSuffix)
+ chainDiffs
+ )
+ $ assert
+ ( all
+ (isJust . Diff.apply curChain)
+ chainDiffs
+ )
+ $ go (sortCandidates (NE.toList chainDiffs))
+ where
+ ChainSelEnv{..} = chainSelEnv
+
+ curChain = VF.validatedFragment curChainAndLedger
+
+ sortCandidates :: [ChainDiff (Header blk)] -> [ChainDiff (Header blk)]
+ sortCandidates =
+ sortBy (flip (compareAnchoredFragments bcfg) `on` Diff.getSuffix)
+
+ -- 1. Take the first candidate from the list of sorted candidates
+ -- 2.
Validate it + -- - If it is invalid -> discard it and go to 1 with the rest of the + -- list. + -- - If it is valid and has the same tip -> return it + -- - If it is valid, but is a prefix of the original -> + -- add it to the list, sort it and go to 1. See the comment + -- [Ouroboros] below. + go :: + [ChainDiff (Header blk)] -> + m (Maybe (ValidatedChainDiff (Header blk) (Forker' m blk))) + go [] = return Nothing + go (candidate : candidates0) = do + mTentativeHeader <- setTentativeHeader + validateCandidate chainSelEnv rr candidate >>= \case + InsufficientSuffix -> + -- When the body of the tentative block turns out to be invalid, we + -- have a valid *empty* prefix, as the tentative header fits on top + -- of the current chain. + assert (isNothing mTentativeHeader) $ do + candidates1 <- truncateRejectedBlocks candidates0 + go (sortCandidates candidates1) + FullyValid validatedCandidate@(ValidatedChainDiff candidate' _) -> + -- The entire candidate is valid + assert (Diff.getTip candidate == Diff.getTip candidate') $ + return $ + Just validatedCandidate + ValidPrefix candidate' -> do + whenJust mTentativeHeader clearTentativeHeader + -- Prefix of the candidate because it contained rejected blocks + -- (invalid blocks). Note that the + -- spec says go back to candidate selection, + -- because there might still be some candidates that contain the + -- same rejected block. To simplify the control flow, we do it + -- differently: instead of recomputing the candidates taking + -- rejected blocks into account, we just truncate the remaining + -- candidates that contain rejected blocks. + candidates1 <- truncateRejectedBlocks candidates0 + -- Only include the prefix if it is still preferred over the current + -- chain. When the candidate is now empty because of the truncation, + -- it will be dropped here, as it will not be preferred over the + -- current chain. 
+ let candidates2 + | preferAnchoredCandidate bcfg curChain (Diff.getSuffix candidate') = + candidate' : candidates1 + | otherwise = + candidates1 + go (sortCandidates candidates2) + where + -- \| Set and return the tentative header, if applicable. Also return the + -- new 'TentativeHeaderState' in case the corresponding block body turns + -- out to be invalid. + setTentativeHeader :: m (Maybe (Header blk, TentativeHeaderState blk)) + setTentativeHeader = do + pipeliningResult <- + (\ts -> isPipelineable bcfg ts candidate) + <$> readTVarIO varTentativeState + whenJust pipeliningResult $ \(tentativeHeader, _) -> do + let setTentative = SetTentativeHeader tentativeHeader + encloseWith (setTentative >$< pipeliningTracer) $ + atomically $ + writeTVar varTentativeHeader $ + SJust tentativeHeader + -- As we are only extending the existing chain, the intersection + -- point is not receding, in which case fhSwitchFork is not + -- necessary. + + -- Just in case, explicitly yield to ensure that a capability (by + -- default, the node uses just two) has the opportunity to switch + -- to a ChainSync server thread. + yield + pure pipeliningResult + + -- \| Clear a tentative header that turned out to be invalid. Also, roll + -- back the tentative followers. + clearTentativeHeader :: (Header blk, TentativeHeaderState blk) -> m () + clearTentativeHeader (tentativeHeader, tentativeSt) = do + atomically $ do + writeTVar varTentativeHeader SNothing + writeTVar varTentativeState tentativeSt + forTentativeFollowers $ \followerHandle -> do + let curTipPoint = castPoint $ AF.headPoint curChain + oldPoints = Set.singleton $ headerPoint tentativeHeader + fhSwitchFork followerHandle curTipPoint oldPoints + traceWith pipeliningTracer $ TrapTentativeHeader tentativeHeader + where + forTentativeFollowers f = getTentativeFollowers >>= mapM_ f + + -- \| Truncate the given (remaining) candidates that contain rejected + -- blocks. 
Discard them if they are truncated so much that they are no
+ -- longer preferred over the current chain.
+ --
+ -- A block is rejected if it is invalid (present in 'varInvalid',
+ -- i.e., 'cdbInvalid').
+ truncateRejectedBlocks ::
+ [ChainDiff (Header blk)] ->
+ m [ChainDiff (Header blk)]
+ truncateRejectedBlocks cands = do
+ invalid <- atomically $ readTVar varInvalid
+ let isRejected hdr =
+ Map.member (headerHash hdr) (forgetFingerprint invalid)
+ return $
+ filter (preferAnchoredCandidate bcfg curChain . Diff.getSuffix) $
+ map (Diff.takeWhileOldest (not . isRejected)) cands
+
+-- [Ouroboros]
+--
+-- Ouroboros says that when we are given an invalid chain by a peer, we
+-- should reject that peer's chain. However, since we're throwing all
+-- blocks together in the ChainDB, we can't tell which block or which
+-- chain came from which peer, so we can't simply reject a peer's chain.
+--
+-- It might be that a good peer gave us a valid chain, but another peer
+-- gave us an invalid block that fits onto the chain of the good peer. In
+-- that case, we do still want to adopt the chain of the good peer, which
+-- is a prefix of the chain that we constructed using all the blocks we
+-- found in the VolatileDB, including the invalid block.
+--
+-- This is the reason why we still take valid prefixes of invalid chains
+-- into account during chain selection: they might correspond to the good
+-- peer's valid chain.

-- | Result of 'validateCandidate'.
-data ValidationResult m blk =
- -- | The entire candidate fragment was valid.
- FullyValid (ValidatedChainDiff (Header blk) (Forker' m blk))
-
- -- | The candidate fragment contained invalid blocks that had to
- -- be truncated from the fragment.
- | ValidPrefix (ChainDiff (Header blk))
-
- -- | After truncating the invalid blocks from
- -- the 'ChainDiff', it no longer contains enough blocks in its suffix to
- -- compensate for the number of blocks it wants to roll back.
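[Editorial aside, not part of the patch: stripped of the validation details, the loop in `go` above is "sort candidates by preference, try the best one, truncate it at the first invalid block, re-sort and retry". A pure, deliberately simplified model -- candidates as lists of block numbers, preference as plain length, negative numbers marking invalid blocks; all names hypothetical:]

```haskell
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- Blocks, oldest first; a negative number stands for an invalid block.
type Candidate = [Int]

-- The longest valid prefix of a candidate.
validPrefix :: Candidate -> Candidate
validPrefix = takeWhile (>= 0)

-- Pick the most preferred fully valid candidate, truncating at the
-- first invalid block and retrying, as 'chainSelection' does.
selectChain :: [Candidate] -> Maybe Candidate
selectChain = go . sortBy (comparing (Down . length))
 where
  go [] = Nothing
  go (c : cs)
    | null c' = go cs -- nothing valid left: discard it
    | c' == c = Just c -- fully valid: adopt it
    | otherwise = selectChain (c' : cs) -- valid prefix: re-sort, retry
   where
    c' = validPrefix c
```

[Each retry strictly shortens the truncated candidate, so the loop terminates; the real implementation additionally drops truncated candidates that are no longer preferred over the current chain.]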
- | InsufficientSuffix +data ValidationResult m blk + = -- | The entire candidate fragment was valid. + FullyValid (ValidatedChainDiff (Header blk) (Forker' m blk)) + | -- | The candidate fragment contained invalid blocks that had to + -- be truncated from the fragment. + ValidPrefix (ChainDiff (Header blk)) + | -- | After truncating the invalid blocks from + -- the 'ChainDiff', it no longer contains enough blocks in its suffix to + -- compensate for the number of blocks it wants to roll back. + InsufficientSuffix -- | Validate a candidate by applying its blocks to the ledger, and return a -- 'ValidatedChainDiff' for it, i.e., a chain diff along with a ledger @@ -1188,115 +1245,111 @@ data ValidationResult m blk = -- Note that this function returns a 'Forker', and that this forker should be -- closed when it is no longer used! ledgerValidateCandidate :: - forall m blk. - ( IOLike m - , LedgerSupportsProtocol blk - , HasCallStack - ) - => ChainSelEnv m blk - -> ResourceRegistry m - -> ChainDiff (Header blk) - -> m (ValidatedChainDiff (Header blk) (Forker' m blk)) + forall m blk. + ( IOLike m + , LedgerSupportsProtocol blk + , HasCallStack + ) => + ChainSelEnv m blk -> + ResourceRegistry m -> + ChainDiff (Header blk) -> + m (ValidatedChainDiff (Header blk) (Forker' m blk)) ledgerValidateCandidate chainSelEnv rr chainDiff@(ChainDiff rollback suffix) = - LedgerDB.validateFork lgrDB rr traceUpdate blockCache rollback newBlocks >>= \case - ValidateExceededRollBack {} -> - -- Impossible: we asked the LedgerDB to roll back past the immutable - -- tip, which is impossible, since the candidates we construct must - -- connect to the immutable tip. 
- error "found candidate requiring rolling back past the immutable tip"
-
- ValidateLedgerError (AnnLedgerError ledger' pt e) -> do
- lastValid <- atomically $ forkerCurrentPoint ledger'
- let chainDiff' = Diff.truncate (castPoint lastValid) chainDiff
- traceWith validationTracer (InvalidBlock e pt)
- addInvalidBlock e pt
- traceWith validationTracer (ValidCandidate (Diff.getSuffix chainDiff'))
-
- -- punish the peer who sent a block if it is invalid or a block from its
- -- prefix is invalid
- --
- -- Note that it is a chain selection invariant that all candidates
- -- involve the block being processed: see Lemma 11.1 (Properties of the
- -- set of candidates) in the Chain Selection chapter of The Cardano
- -- Consensus and Storage Layer technical report.
- whenJust punish $ \(addedPt, punishment) -> do
- let m = InvalidBlockPunishment.enact punishment
- $ if addedPt == pt
+ LedgerDB.validateFork lgrDB rr traceUpdate blockCache rollback newBlocks >>= \case
+ ValidateExceededRollBack{} ->
+ -- Impossible: we asked the LedgerDB to roll back past the immutable
+ -- tip, which is impossible, since the candidates we construct must
+ -- connect to the immutable tip.
+ error "found candidate requiring rolling back past the immutable tip"
+ ValidateLedgerError (AnnLedgerError ledger' pt e) -> do
+ lastValid <- atomically $ forkerCurrentPoint ledger'
+ let chainDiff' = Diff.truncate (castPoint lastValid) chainDiff
+ traceWith validationTracer (InvalidBlock e pt)
+ addInvalidBlock e pt
+ traceWith validationTracer (ValidCandidate (Diff.getSuffix chainDiff'))
+
+ -- punish the peer who sent a block if it is invalid or a block from its
+ -- prefix is invalid
+ --
+ -- Note that it is a chain selection invariant that all candidates
+ -- involve the block being processed: see Lemma 11.1 (Properties of the
+ -- set of candidates) in the Chain Selection chapter of The Cardano
+ -- Consensus and Storage Layer technical report.
+ whenJust punish $ \(addedPt, punishment) -> do + let m = + InvalidBlockPunishment.enact punishment $ + if addedPt == pt then InvalidBlockPunishment.BlockItself else InvalidBlockPunishment.BlockPrefix - case realPointSlot pt `compare` realPointSlot addedPt of - LT -> m - GT -> pure () - EQ -> when (lastValid /= realPointToPoint addedPt) m - -- If pt and addedPt have the same slot, and addedPt is the tip of - -- the ledger that pt was validated against, then addedPt is an - -- EBB and is valid. - -- - -- Otherwise, either pt == addedPt or addedPt comes after pt, so - -- we should punish. (Tacit assumption made here: it's impossible - -- three blocks in a row have the same slot.) - - ValidatedDiff.newM chainDiff' ledger' - - ValidateSuccessful ledger' -> do - traceWith validationTracer (ValidCandidate suffix) - ValidatedDiff.newM chainDiff ledger' - where - ChainSelEnv { - lgrDB - , validationTracer - , blockCache - , varInvalid - , punish - } = chainSelEnv - - traceUpdate = traceWith $ UpdateLedgerDbTraceEvent >$< validationTracer - - newBlocks :: [Header blk] - newBlocks = AF.toOldestFirst suffix - - -- | Record the invalid block in 'cdbInvalid' and change its fingerprint. - addInvalidBlock :: ExtValidationError blk -> RealPoint blk -> m () - addInvalidBlock e (RealPoint slot hash) = atomically $ - modifyTVar varInvalid $ \(WithFingerprint invalid fp) -> - WithFingerprint - (Map.insert hash (InvalidBlockInfo e slot) invalid) - (succ fp) + case realPointSlot pt `compare` realPointSlot addedPt of + LT -> m + GT -> pure () + EQ -> when (lastValid /= realPointToPoint addedPt) m + -- If pt and addedPt have the same slot, and addedPt is the tip of + -- the ledger that pt was validated against, then addedPt is an + -- EBB and is valid. + -- + -- Otherwise, either pt == addedPt or addedPt comes after pt, so + -- we should punish. (Tacit assumption made here: it's impossible + -- three blocks in a row have the same slot.) 
+ + ValidatedDiff.newM chainDiff' ledger' + ValidateSuccessful ledger' -> do + traceWith validationTracer (ValidCandidate suffix) + ValidatedDiff.newM chainDiff ledger' + where + ChainSelEnv + { lgrDB + , validationTracer + , blockCache + , varInvalid + , punish + } = chainSelEnv + + traceUpdate = traceWith $ UpdateLedgerDbTraceEvent >$< validationTracer + + newBlocks :: [Header blk] + newBlocks = AF.toOldestFirst suffix + + -- \| Record the invalid block in 'cdbInvalid' and change its fingerprint. + addInvalidBlock :: ExtValidationError blk -> RealPoint blk -> m () + addInvalidBlock e (RealPoint slot hash) = atomically $ + modifyTVar varInvalid $ \(WithFingerprint invalid fp) -> + WithFingerprint + (Map.insert hash (InvalidBlockInfo e slot) invalid) + (succ fp) -- | Validate a candidate chain using 'ledgerValidateCandidate'. validateCandidate :: - ( IOLike m - , LedgerSupportsProtocol blk - , HasCallStack - ) - => ChainSelEnv m blk - -> ResourceRegistry m - -> ChainDiff (Header blk) - -> m (ValidationResult m blk) + ( IOLike m + , LedgerSupportsProtocol blk + , HasCallStack + ) => + ChainSelEnv m blk -> + ResourceRegistry m -> + ChainDiff (Header blk) -> + m (ValidationResult m blk) validateCandidate chainSelEnv rr chainDiff = - ledgerValidateCandidate chainSelEnv rr chainDiff >>= \case - validatedChainDiff - | ValidatedDiff.rollbackExceedsSuffix validatedChainDiff - -> cleanup validatedChainDiff >> return InsufficientSuffix - - | AF.length (Diff.getSuffix chainDiff) == AF.length (Diff.getSuffix chainDiff') - -- No truncation - -> return $ FullyValid validatedChainDiff - - | otherwise - -- In case of invalid blocks, we throw away the ledger - -- corresponding to the truncated fragment and will have to - -- validate it again, even when it's the sole candidate. 
- -> return $ ValidPrefix chainDiff' - - where - chainDiff' = ValidatedDiff.getChainDiff validatedChainDiff - where - -- If this function does not return a validated chain diff, then there is a - -- leftover forker that we have to close so that its resources are correctly - -- released. - cleanup :: ValidatedChainDiff b (Forker' m blk) -> m () - cleanup = forkerClose . getLedger + ledgerValidateCandidate chainSelEnv rr chainDiff >>= \case + validatedChainDiff + | ValidatedDiff.rollbackExceedsSuffix validatedChainDiff -> + cleanup validatedChainDiff >> return InsufficientSuffix + | AF.length (Diff.getSuffix chainDiff) == AF.length (Diff.getSuffix chainDiff') -> + -- No truncation + return $ FullyValid validatedChainDiff + | otherwise -> + -- In case of invalid blocks, we throw away the ledger + -- corresponding to the truncated fragment and will have to + -- validate it again, even when it's the sole candidate. + return $ ValidPrefix chainDiff' + where + chainDiff' = ValidatedDiff.getChainDiff validatedChainDiff + where + -- If this function does not return a validated chain diff, then there is a + -- leftover forker that we have to close so that its resources are correctly + -- released. + cleanup :: ValidatedChainDiff b (Forker' m blk) -> m () + cleanup = forkerClose . getLedger {------------------------------------------------------------------------------- 'ChainAndLedger' @@ -1315,18 +1368,18 @@ type ChainAndLedger m blk = ValidatedFragment (Header blk) (Forker' m blk) -- -- PRECONDITION: The 'ChainDiff' fits on top of the current chain and is better. 
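[Editorial aside, not part of the patch: the `isPipelineable` condition below boils down to "a diff qualifies for diffusion pipelining only when it rolls back nothing and adds exactly one header". Distilled to a toy stand-in for `ChainDiff` (hypothetical names):]

```haskell
-- Simplified stand-in for 'ChainDiff': how many blocks to roll back
-- from the tip, plus the headers to add, oldest first.
data SimpleDiff h = SimpleDiff Word [h]

-- A diff may be pipelined only if it extends the current tip (no
-- rollback) with exactly one new header; return that header.
pipelineCandidate :: SimpleDiff h -> Maybe h
pipelineCandidate (SimpleDiff 0 [hdr]) = Just hdr
pipelineCandidate _ = Nothing
```

[The real check additionally consults `updateTentativeHeaderState` to decide whether that single header is acceptable as a tentative header.]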
isPipelineable :: - (HasHeader (Header blk), BlockSupportsDiffusionPipelining blk) - => BlockConfig blk - -> TentativeHeaderState blk - -> ChainDiff (Header blk) - -> Maybe (Header blk, TentativeHeaderState blk) -isPipelineable bcfg st ChainDiff {..} + (HasHeader (Header blk), BlockSupportsDiffusionPipelining blk) => + BlockConfig blk -> + TentativeHeaderState blk -> + ChainDiff (Header blk) -> + Maybe (Header blk, TentativeHeaderState blk) +isPipelineable bcfg st ChainDiff{..} | -- we apply exactly one header AF.Empty _ :> hdr <- getSuffix , Just st' <- updateTentativeHeaderState bcfg hdr st - -- ensure that the diff is applied to the chain tip - , getRollback == 0 - = Just (hdr, st') + , -- ensure that the diff is applied to the chain tip + getRollback == 0 = + Just (hdr, st') | otherwise = Nothing {------------------------------------------------------------------------------- @@ -1335,22 +1388,22 @@ isPipelineable bcfg st ChainDiff {..} -- | Wrap a @getter@ function so that it returns 'Nothing' for invalid blocks. ignoreInvalid :: - HasHeader blk - => proxy blk - -> InvalidBlocks blk - -> (HeaderHash blk -> Maybe a) - -> (HeaderHash blk -> Maybe a) + HasHeader blk => + proxy blk -> + InvalidBlocks blk -> + (HeaderHash blk -> Maybe a) -> + (HeaderHash blk -> Maybe a) ignoreInvalid _ invalid getter hash - | Map.member hash invalid = Nothing - | otherwise = getter hash + | Map.member hash invalid = Nothing + | otherwise = getter hash -- | Wrap a @successors@ function so that invalid blocks are not returned as -- successors. ignoreInvalidSuc :: - HasHeader blk - => proxy blk - -> InvalidBlocks blk - -> (ChainHash blk -> Set (HeaderHash blk)) - -> (ChainHash blk -> Set (HeaderHash blk)) + HasHeader blk => + proxy blk -> + InvalidBlocks blk -> + (ChainHash blk -> Set (HeaderHash blk)) -> + (ChainHash blk -> Set (HeaderHash blk)) ignoreInvalidSuc _ invalid succsOf = - Set.filter (`Map.notMember` invalid) . succsOf + Set.filter (`Map.notMember` invalid) . 
succsOf diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Follower.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Follower.hs index eb62943530..cacba388bb 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Follower.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Follower.hs @@ -9,39 +9,44 @@ {-# LANGUAGE TypeApplications #-} -- | Followers -module Ouroboros.Consensus.Storage.ChainDB.Impl.Follower ( - closeAllFollowers +module Ouroboros.Consensus.Storage.ChainDB.Impl.Follower + ( closeAllFollowers , newFollower , switchFork ) where -import Codec.CBOR.Write (toLazyByteString) -import Control.Exception (assert) -import Control.Monad (join) -import Control.ResourceRegistry (ResourceRegistry) -import Control.Tracer (contramap, traceWith) -import qualified Data.ByteString.Lazy as Lazy -import Data.Functor ((<&>)) -import Data.Functor.Identity (Identity (..)) -import qualified Data.Map.Strict as Map -import Data.Maybe.Strict (StrictMaybe (..)) -import Data.Set (Set) -import qualified Data.Set as Set -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Storage.ChainDB.API (BlockComponent (..), - ChainDbError (..), ChainType (..), Follower (..), getPoint) -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.Query as Query -import Ouroboros.Consensus.Storage.ChainDB.Impl.Types -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.STM (blockUntilJust) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (ChainUpdate (..)) +import 
Codec.CBOR.Write (toLazyByteString) +import Control.Exception (assert) +import Control.Monad (join) +import Control.ResourceRegistry (ResourceRegistry) +import Control.Tracer (contramap, traceWith) +import Data.ByteString.Lazy qualified as Lazy +import Data.Functor ((<&>)) +import Data.Functor.Identity (Identity (..)) +import Data.Map.Strict qualified as Map +import Data.Maybe.Strict (StrictMaybe (..)) +import Data.Set (Set) +import Data.Set qualified as Set +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Storage.ChainDB.API + ( BlockComponent (..) + , ChainDbError (..) + , ChainType (..) + , Follower (..) + , getPoint + ) +import Ouroboros.Consensus.Storage.ChainDB.Impl.Query qualified as Query +import Ouroboros.Consensus.Storage.ChainDB.Impl.Types +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.STM (blockUntilJust) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (ChainUpdate (..)) {------------------------------------------------------------------------------- Accessing the environment @@ -53,28 +58,33 @@ import Ouroboros.Network.Block (ChainUpdate (..)) -- -- Otherwise, execute the given function on the 'ChainDbEnv'. getFollower :: - forall m blk r. (IOLike m, HasCallStack, HasHeader blk) - => ChainDbHandle m blk - -> FollowerKey - -> (ChainDbEnv m blk -> m r) - -> m r + forall m blk r. 
+    (IOLike m, HasCallStack, HasHeader blk) =>
+    ChainDbHandle m blk ->
+    FollowerKey ->
+    (ChainDbEnv m blk -> m r) ->
+    m r
 getFollower (CDBHandle varState) followerKey f = do
-    env <- atomically $ readTVar varState >>= \case
-      ChainDbClosed -> throwIO $ ClosedDBError @blk prettyCallStack
-      ChainDbOpen env -> do
-        followerOpen <- Map.member followerKey <$> readTVar (cdbFollowers env)
-        if followerOpen
-          then return env
-          else throwIO $ ClosedFollowerError @blk
-    f env
+  env <-
+    atomically $
+      readTVar varState >>= \case
+        ChainDbClosed -> throwIO $ ClosedDBError @blk prettyCallStack
+        ChainDbOpen env -> do
+          followerOpen <- Map.member followerKey <$> readTVar (cdbFollowers env)
+          if followerOpen
+            then return env
+            else throwIO $ ClosedFollowerError @blk
+  f env
 
 -- | Variant 'of 'getFollower' for functions taking one argument.
 getFollower1 ::
-     forall m blk a r. (IOLike m, HasHeader blk)
-  => ChainDbHandle m blk
-  -> FollowerKey
-  -> (ChainDbEnv m blk -> a -> m r)
-  -> a -> m r
+  forall m blk a r.
+  (IOLike m, HasHeader blk) =>
+  ChainDbHandle m blk ->
+  FollowerKey ->
+  (ChainDbEnv m blk -> a -> m r) ->
+  a ->
+  m r
 getFollower1 h followerKey f a = getFollower h followerKey (\env -> f env a)
 
 {-------------------------------------------------------------------------------
@@ -82,76 +92,80 @@ getFollower1 h followerKey f a = getFollower h followerKey (\env -> f env a)
-------------------------------------------------------------------------------}
 
 newFollower ::
-     forall m blk b.
-     ( IOLike m
-     , HasHeader blk
-     , GetHeader blk
-     , HasNestedContent Header blk
-     , EncodeDiskDep (NestedCtxt Header) blk
-     )
-  => ChainDbHandle m blk
-  -> ResourceRegistry m
-  -> ChainType
-  -> BlockComponent blk b
-  -> m (Follower m blk b)
+  forall m blk b.
+  ( IOLike m
+  , HasHeader blk
+  , GetHeader blk
+  , HasNestedContent Header blk
+  , EncodeDiskDep (NestedCtxt Header) blk
+  ) =>
+  ChainDbHandle m blk ->
+  ResourceRegistry m ->
+  ChainType ->
+  BlockComponent blk b ->
+  m (Follower m blk b)
 newFollower h registry chainType blockComponent = getEnv h $ \CDB{..} -> do
-    -- The following operations don't need to be done in a single transaction
-    followerKey <- atomically $ stateTVar cdbNextFollowerKey $ \r -> (r, succ r)
-    varFollower <- newTVarIO FollowerInit
-    let followerHandle = mkFollowerHandle varFollower
-    atomically $ modifyTVar cdbFollowers $ Map.insert followerKey followerHandle
-    let follower =
-          makeNewFollower h followerKey varFollower chainType registry blockComponent
-    traceWith cdbTracer $ TraceFollowerEvent NewFollower
-    return follower
-  where
-    mkFollowerHandle :: StrictTVar m (FollowerState m blk b) -> FollowerHandle m blk
-    mkFollowerHandle varFollower = FollowerHandle
-      { fhChainType = chainType
-      , fhClose = do
+  -- The following operations don't need to be done in a single transaction
+  followerKey <- atomically $ stateTVar cdbNextFollowerKey $ \r -> (r, succ r)
+  varFollower <- newTVarIO FollowerInit
+  let followerHandle = mkFollowerHandle varFollower
+  atomically $ modifyTVar cdbFollowers $ Map.insert followerKey followerHandle
+  let follower =
+        makeNewFollower h followerKey varFollower chainType registry blockComponent
+  traceWith cdbTracer $ TraceFollowerEvent NewFollower
+  return follower
+ where
+  mkFollowerHandle :: StrictTVar m (FollowerState m blk b) -> FollowerHandle m blk
+  mkFollowerHandle varFollower =
+    FollowerHandle
+      { fhChainType = chainType
+      , fhClose = do
          -- This is only called by 'closeAllFollowers'. We just release the
          -- resources. We don't check whether the Follower is still open.
          -- We don't have to remove the follower from the 'cdbFollowers',
          -- 'closeAllFollowers' will empty that map already.
          followerState <- atomically $ readTVar varFollower
          closeFollowerState followerState
-      , fhSwitchFork = \ipoint oldPoints -> modifyTVar varFollower $
-          switchFork ipoint oldPoints
+      , fhSwitchFork = \ipoint oldPoints ->
+          modifyTVar varFollower $
+            switchFork ipoint oldPoints
       }
-
 makeNewFollower ::
-     forall m blk b.
-     ( IOLike m
-     , HasHeader blk
-     , GetHeader blk
-     , HasNestedContent Header blk
-     , EncodeDiskDep (NestedCtxt Header) blk
-     )
-  => ChainDbHandle m blk
-  -> FollowerKey
-  -> StrictTVar m (FollowerState m blk b)
-  -> ChainType
-  -> ResourceRegistry m
-  -> BlockComponent blk b
-  -> Follower m blk b
-makeNewFollower h followerKey varFollower chainType registry blockComponent = Follower {..}
-  where
-    followerInstruction :: m (Maybe (ChainUpdate blk b))
-    followerInstruction = getFollower h followerKey $
+  forall m blk b.
+  ( IOLike m
+  , HasHeader blk
+  , GetHeader blk
+  , HasNestedContent Header blk
+  , EncodeDiskDep (NestedCtxt Header) blk
+  ) =>
+  ChainDbHandle m blk ->
+  FollowerKey ->
+  StrictTVar m (FollowerState m blk b) ->
+  ChainType ->
+  ResourceRegistry m ->
+  BlockComponent blk b ->
+  Follower m blk b
+makeNewFollower h followerKey varFollower chainType registry blockComponent = Follower{..}
+ where
+  followerInstruction :: m (Maybe (ChainUpdate blk b))
+  followerInstruction =
+    getFollower h followerKey $
      instructionHelper registry varFollower chainType blockComponent id
-    followerInstructionBlocking :: m (ChainUpdate blk b)
-    followerInstructionBlocking = fmap runIdentity $
+  followerInstructionBlocking :: m (ChainUpdate blk b)
+  followerInstructionBlocking =
+    fmap runIdentity $
       getFollower h followerKey $
-      instructionHelper registry varFollower chainType blockComponent (fmap Identity . blockUntilJust)
+        instructionHelper registry varFollower chainType blockComponent (fmap Identity . blockUntilJust)
-    followerForward :: [Point blk] -> m (Maybe (Point blk))
-    followerForward = getFollower1 h followerKey $
+  followerForward :: [Point blk] -> m (Maybe (Point blk))
+  followerForward =
+    getFollower1 h followerKey $
       forward registry varFollower blockComponent
-    followerClose :: m ()
-    followerClose = getEnv h $ close followerKey varFollower
+  followerClose :: m ()
+  followerClose = getEnv h $ close followerKey varFollower
 
 -- | Implementation of 'followerClose'.
 --
@@ -162,27 +176,28 @@ makeNewFollower h followerKey varFollower chainType registry blockComponent = Fo
 -- Unlike 'closeAllFollowers', this is meant to be called by the user of the
 -- ChainDB.Follower.
 close ::
-     forall m blk b. IOLike m
-  => FollowerKey
-  -> StrictTVar m (FollowerState m blk b)
-  -> ChainDbEnv m blk
-  -> m ()
-close followerKey varFollower CDB { cdbFollowers } = do
-    -- If the FollowerKey is not present in the map, the Follower must have been
-    -- closed already.
-    atomically $ modifyTVar cdbFollowers $ Map.delete followerKey
-    followerState <- atomically $ readTVar varFollower
-    closeFollowerState followerState
+  forall m blk b.
+  IOLike m =>
+  FollowerKey ->
+  StrictTVar m (FollowerState m blk b) ->
+  ChainDbEnv m blk ->
+  m ()
+close followerKey varFollower CDB{cdbFollowers} = do
+  -- If the FollowerKey is not present in the map, the Follower must have been
+  -- closed already.
+  atomically $ modifyTVar cdbFollowers $ Map.delete followerKey
+  followerState <- atomically $ readTVar varFollower
+  closeFollowerState followerState
 
 -- | Close the given 'FollowerState' by closing any 'ImmutableDB.Iterator' it
 -- might contain.
 closeFollowerState :: MonadCatch m => FollowerState m blk b -> m ()
 closeFollowerState = \case
-    FollowerInit -> return ()
-    FollowerInMem _ -> return ()
-    -- IMPORTANT: the main reason we're closing followers: to close this open
-    -- iterator, which contains a reference to a file handle.
-    FollowerInImmutableDB _ immIt -> ImmutableDB.iteratorClose immIt
+  FollowerInit -> return ()
+  FollowerInMem _ -> return ()
+  -- IMPORTANT: the main reason we're closing followers: to close this open
+  -- iterator, which contains a reference to a file handle.
+  FollowerInImmutableDB _ immIt -> ImmutableDB.iteratorClose immIt
 
 -- | Helper for 'followerInstruction' and 'followerInstructionBlocking'.
 --
@@ -199,153 +214,161 @@ closeFollowerState = \case
 -- When in the 'FollowerInMem' state, we may have to block when we have reached
 -- the end of the current chain.
 instructionHelper ::
-     forall m blk b f.
-     ( IOLike m
-     , HasHeader blk
-     , GetHeader blk
-     , HasNestedContent Header blk
-     , EncodeDiskDep (NestedCtxt Header) blk
-     , Traversable f, Applicative f
-     )
-  => ResourceRegistry m
-  -> StrictTVar m (FollowerState m blk b)
-  -> ChainType
-  -> BlockComponent blk b
-  -> (    STM m (Maybe (ChainUpdate blk (Header blk)))
-       -> STM m (f (ChainUpdate blk (Header blk))))
-     -- ^ How to turn a transaction that may or may not result in a new
-     -- 'ChainUpdate' in one that returns the right return type: use @fmap
-     -- Identity . 'blockUntilJust'@ to block or 'id' to just return the
-     -- @Maybe@.
-  -> ChainDbEnv m blk
-  -> m (f (ChainUpdate blk b))
+  forall m blk b f.
+  ( IOLike m
+  , HasHeader blk
+  , GetHeader blk
+  , HasNestedContent Header blk
+  , EncodeDiskDep (NestedCtxt Header) blk
+  , Traversable f
+  , Applicative f
+  ) =>
+  ResourceRegistry m ->
+  StrictTVar m (FollowerState m blk b) ->
+  ChainType ->
+  BlockComponent blk b ->
+  -- | How to turn a transaction that may or may not result in a new
+  -- 'ChainUpdate' in one that returns the right return type: use @fmap
+  -- Identity . 'blockUntilJust'@ to block or 'id' to just return the
+  -- @Maybe@.
+  ( STM m (Maybe (ChainUpdate blk (Header blk))) ->
+    STM m (f (ChainUpdate blk (Header blk)))
+  ) ->
+  ChainDbEnv m blk ->
+  m (f (ChainUpdate blk b))
 instructionHelper registry varFollower chainType blockComponent fromMaybeSTM CDB{..} = do
-    -- In one transaction: check in which state we are, if in the
-    -- @FollowerInMem@ state, just call 'instructionSTM', otherwise,
-    -- return the contents of the 'FollowerInImmutableDB' state.
-    inImmutableDBOrRes <- atomically $ do
-      curChain <- getCurrentChainByType
-      readTVar varFollower >>= \case
-        -- Just return the contents of the state and end the transaction in
-        -- these two cases.
-        FollowerInit
-          -> return $ Left (RollBackTo GenesisPoint, Nothing)
-        FollowerInImmutableDB rollState immIt
-          -> return $ Left (rollState, Just immIt)
-
-        FollowerInMem rollState
-          | AF.withinFragmentBounds
            (castPoint (followerRollStatePoint rollState)) curChain
+  -- In one transaction: check in which state we are, if in the
+  -- @FollowerInMem@ state, just call 'instructionSTM', otherwise,
+  -- return the contents of the 'FollowerInImmutableDB' state.
+  inImmutableDBOrRes <- atomically $ do
+    curChain <- getCurrentChainByType
+    readTVar varFollower >>= \case
+      -- Just return the contents of the state and end the transaction in
+      -- these two cases.
+      FollowerInit ->
+        return $ Left (RollBackTo GenesisPoint, Nothing)
+      FollowerInImmutableDB rollState immIt ->
+        return $ Left (rollState, Just immIt)
+      FollowerInMem rollState
+        | AF.withinFragmentBounds
            (castPoint (followerRollStatePoint rollState))
            curChain ->
            -- The point is still in the current chain fragment
-          -> fmap Right $ fromMaybeSTM $
-               instructionSTM
-                 rollState
-                 curChain
-                 (writeTVar varFollower . FollowerInMem)
-          | otherwise
+           fmap Right $
+             fromMaybeSTM $
+               instructionSTM
+                 rollState
+                 curChain
+                 (writeTVar varFollower . FollowerInMem)
+        | otherwise ->
            -- The point is no longer on the fragment. Blocks must have moved
            -- (off the fragment) to the ImmutableDB. Note that 'switchFork'
            -- will try to keep the point on the fragment in case we switch to
            -- a fork.
-          -> return $ Left (rollState, Nothing)
-    case inImmutableDBOrRes of
-      -- We were able to obtain the result inside the transaction as we were
-      -- in the 'FollowerInMem' state. We only got a header, which we must first
-      -- convert to the right block component.
-      Right fupdate -> headerUpdateToBlockComponentUpdate fupdate
-      -- We were in the 'FollowerInImmutableDB' state or we need to switch to it.
-      Left (rollState, mbImmIt) -> do
-        immIt <- case mbImmIt of
-          Just immIt -> return immIt
-          -- We were in the 'FollowerInMem' state but have to switch to the
-          -- 'FollowerInImmutableDB' state.
-          Nothing -> do
-            trace $ FollowerNoLongerInMem rollState
-            ImmutableDB.streamAfterKnownPoint cdbImmutableDB registry
-              ((,) <$> getPoint <*> blockComponent)
-              (followerRollStatePoint rollState)
-        case rollState of
-          RollForwardFrom pt -> rollForwardImmutableDB immIt pt
-          RollBackTo pt -> do
-            let followerState' = FollowerInImmutableDB (RollForwardFrom pt) immIt
-            atomically $ writeTVar varFollower followerState'
-            return $ pure $ RollBack pt
-  where
-    trace = traceWith (contramap TraceFollowerEvent cdbTracer)
-
-    getCurrentChainByType = do
-      curChain <- icWithoutTime <$> readTVar cdbChain
-      case chainType of
-        SelectedChain -> pure curChain
-        TentativeChain -> readTVar cdbTentativeHeader <&> \case
-          SJust hdr -> curChain AF.:> hdr
-          SNothing -> curChain
-
-    codecConfig :: CodecConfig blk
-    codecConfig = configCodec cdbTopLevelConfig
-
-    headerUpdateToBlockComponentUpdate
-      :: f (ChainUpdate blk (Header blk)) -> m (f (ChainUpdate blk b))
-    headerUpdateToBlockComponentUpdate =
-      traverse (traverse (`getBlockComponentFromHeader` blockComponent))
-
-    -- | We only got the header for the in-memory chain fragment, so depending
-    -- on the 'BlockComponent' that's requested, we might have to read the
-    -- whole block.
-    getBlockComponentFromHeader
-      :: forall b'. Header blk -> BlockComponent blk b' -> m b'
-    getBlockComponentFromHeader hdr = \case
-        GetVerifiedBlock -> getBlockComponent GetVerifiedBlock
-        GetBlock -> getBlockComponent GetBlock
-        GetRawBlock -> getBlockComponent GetRawBlock
-        GetHeader -> return $ hdr
-        GetRawHeader -> return $ rawHdr
-        GetHash -> return $ headerHash hdr
-        GetSlot -> return $ blockSlot hdr
-        GetIsEBB -> return $ headerToIsEBB hdr
-        GetBlockSize -> getBlockComponent GetBlockSize
-        -- We could look up the header size in the index of the VolatileDB,
-        -- but getting the serialisation is cheap because we keep the
-        -- serialisation in memory as an annotation, and the following way is
-        -- less stateful
-        GetHeaderSize -> return $ fromIntegral $ Lazy.length rawHdr
-        GetNestedCtxt -> return nestedCtxt
-        GetPure a -> return a
-        GetApply f bc ->
-          getBlockComponentFromHeader hdr f <*>
-          getBlockComponentFromHeader hdr bc
-      where
-        -- | Use the 'ImmutableDB' and 'VolatileDB' to read the 'BlockComponent' from
-        -- disk (or memory).
-        getBlockComponent :: forall c. BlockComponent blk c -> m c
-        getBlockComponent bc =
-          Query.getAnyKnownBlockComponent cdbImmutableDB cdbVolatileDB bc (headerRealPoint hdr)
-
-    rawHdr :: Lazy.ByteString
-    nestedCtxt :: SomeSecond (NestedCtxt Header) blk
-    (nestedCtxt, rawHdr) = case unnest hdr of
-      DepPair ctxt h ->
-        ( SomeSecond ctxt
-        , toLazyByteString $ encodeDiskDep codecConfig ctxt h
-        )
-
-    next ::
-         ImmutableDB.Iterator m blk (Point blk, b)
-      -> m (Maybe (Point blk, b))
-    next immIt = ImmutableDB.iteratorNext immIt <&> \case
-      ImmutableDB.IteratorResult b -> Just b
+  return $ Left (rollState, Nothing)
+  case inImmutableDBOrRes of
+    -- We were able to obtain the result inside the transaction as we were
+    -- in the 'FollowerInMem' state. We only got a header, which we must first
+    -- convert to the right block component.
+    Right fupdate -> headerUpdateToBlockComponentUpdate fupdate
+    -- We were in the 'FollowerInImmutableDB' state or we need to switch to it.
+    Left (rollState, mbImmIt) -> do
+      immIt <- case mbImmIt of
+        Just immIt -> return immIt
+        -- We were in the 'FollowerInMem' state but have to switch to the
+        -- 'FollowerInImmutableDB' state.
+        Nothing -> do
+          trace $ FollowerNoLongerInMem rollState
+          ImmutableDB.streamAfterKnownPoint
+            cdbImmutableDB
+            registry
+            ((,) <$> getPoint <*> blockComponent)
+            (followerRollStatePoint rollState)
+      case rollState of
+        RollForwardFrom pt -> rollForwardImmutableDB immIt pt
+        RollBackTo pt -> do
+          let followerState' = FollowerInImmutableDB (RollForwardFrom pt) immIt
+          atomically $ writeTVar varFollower followerState'
+          return $ pure $ RollBack pt
+ where
+  trace = traceWith (contramap TraceFollowerEvent cdbTracer)
+
+  getCurrentChainByType = do
+    curChain <- icWithoutTime <$> readTVar cdbChain
+    case chainType of
+      SelectedChain -> pure curChain
+      TentativeChain ->
+        readTVar cdbTentativeHeader <&> \case
+          SJust hdr -> curChain AF.:> hdr
+          SNothing -> curChain
+
+  codecConfig :: CodecConfig blk
+  codecConfig = configCodec cdbTopLevelConfig
+
+  headerUpdateToBlockComponentUpdate ::
+    f (ChainUpdate blk (Header blk)) -> m (f (ChainUpdate blk b))
+  headerUpdateToBlockComponentUpdate =
+    traverse (traverse (`getBlockComponentFromHeader` blockComponent))
+
+  -- \| We only got the header for the in-memory chain fragment, so depending
+  -- on the 'BlockComponent' that's requested, we might have to read the
+  -- whole block.
+  getBlockComponentFromHeader ::
+    forall b'. Header blk -> BlockComponent blk b' -> m b'
+  getBlockComponentFromHeader hdr = \case
+    GetVerifiedBlock -> getBlockComponent GetVerifiedBlock
+    GetBlock -> getBlockComponent GetBlock
+    GetRawBlock -> getBlockComponent GetRawBlock
+    GetHeader -> return $ hdr
+    GetRawHeader -> return $ rawHdr
+    GetHash -> return $ headerHash hdr
+    GetSlot -> return $ blockSlot hdr
+    GetIsEBB -> return $ headerToIsEBB hdr
+    GetBlockSize -> getBlockComponent GetBlockSize
+    -- We could look up the header size in the index of the VolatileDB,
+    -- but getting the serialisation is cheap because we keep the
+    -- serialisation in memory as an annotation, and the following way is
+    -- less stateful
+    GetHeaderSize -> return $ fromIntegral $ Lazy.length rawHdr
+    GetNestedCtxt -> return nestedCtxt
+    GetPure a -> return a
+    GetApply f bc ->
+      getBlockComponentFromHeader hdr f
+        <*> getBlockComponentFromHeader hdr bc
+   where
+    -- \| Use the 'ImmutableDB' and 'VolatileDB' to read the 'BlockComponent' from
+    -- disk (or memory).
+    getBlockComponent :: forall c. BlockComponent blk c -> m c
+    getBlockComponent bc =
+      Query.getAnyKnownBlockComponent cdbImmutableDB cdbVolatileDB bc (headerRealPoint hdr)
+
+  rawHdr :: Lazy.ByteString
+  nestedCtxt :: SomeSecond (NestedCtxt Header) blk
+  (nestedCtxt, rawHdr) = case unnest hdr of
+    DepPair ctxt h ->
+      ( SomeSecond ctxt
+      , toLazyByteString $ encodeDiskDep codecConfig ctxt h
+      )
+
+  next ::
+    ImmutableDB.Iterator m blk (Point blk, b) ->
+    m (Maybe (Point blk, b))
+  next immIt =
+    ImmutableDB.iteratorNext immIt <&> \case
+      ImmutableDB.IteratorResult b -> Just b
       ImmutableDB.IteratorExhausted -> Nothing
 
-    rollForwardImmutableDB ::
-         ImmutableDB.Iterator m blk (Point blk, b)
-      -> Point blk
-      -> m (f (ChainUpdate blk b))
-    rollForwardImmutableDB immIt pt = next immIt >>= \case
+  rollForwardImmutableDB ::
+    ImmutableDB.Iterator m blk (Point blk, b) ->
+    Point blk ->
+    m (f (ChainUpdate blk b))
+  rollForwardImmutableDB immIt pt =
+    next immIt >>= \case
       Just (pt', b) -> do
        let followerState' = FollowerInImmutableDB (RollForwardFrom pt') immIt
        atomically $ writeTVar varFollower followerState'
        return $ pure $ AddBlock b
-      Nothing -> do
+      Nothing -> do
        -- Even though an iterator is automatically closed internally when
        -- exhausted, we close it again (idempotent), but this time to
        -- unregister the associated clean-up action.
@@ -355,26 +378,25 @@ instructionHelper registry varFollower chainType blockComponent fromMaybeSTM CDB
        -- time of opening the iterator. We must now check whether that is
        -- still the end (blocks might have been added to the ImmutableDB in
        -- the meantime).
-        pointAtImmutableDBTip
-          <- atomically $ ImmutableDB.getTipPoint cdbImmutableDB
+        pointAtImmutableDBTip <-
+          atomically $ ImmutableDB.getTipPoint cdbImmutableDB
        let slotNoAtImmutableDBTip = pointSlot pointAtImmutableDBTip
        case pointSlot pt `compare` slotNoAtImmutableDBTip of
          -- The ImmutableDB somehow rolled back
          GT -> error "follower streamed beyond tip of the ImmutableDB"
-          -- The tip is still the same, so switch to the in-memory chain
-          EQ | pt == pointAtImmutableDBTip
-            -> do
-              trace $ FollowerSwitchToMem pt slotNoAtImmutableDBTip
-              fupdate <- atomically $ fromMaybeSTM $ do
-                curChain <- getCurrentChainByType
-                instructionSTM
-                  (RollForwardFrom pt)
-                  curChain
-                  (writeTVar varFollower . FollowerInMem)
-              -- We only got the header, we must first convert it to the right
-              -- block component.
-              headerUpdateToBlockComponentUpdate fupdate
+          EQ | pt == pointAtImmutableDBTip ->
+            do
+              trace $ FollowerSwitchToMem pt slotNoAtImmutableDBTip
+              fupdate <- atomically $ fromMaybeSTM $ do
+                curChain <- getCurrentChainByType
+                instructionSTM
+                  (RollForwardFrom pt)
+                  curChain
+                  (writeTVar varFollower . FollowerInMem)
+              -- We only got the header, we must first convert it to the right
+              -- block component.
+              headerUpdateToBlockComponentUpdate fupdate
          -- Two possibilities:
          --
@@ -384,146 +406,155 @@ instructionHelper registry varFollower chainType blockComponent fromMaybeSTM CDB
          --
          -- 2. (LT): the tip of the ImmutableDB has progressed since we
          --    opened the iterator.
-          _ -> do
+          _ -> do
            trace $ FollowerNewImmIterator pt slotNoAtImmutableDBTip
-            immIt' <- ImmutableDB.streamAfterKnownPoint cdbImmutableDB registry
-              ((,) <$> getPoint <*> blockComponent) pt
+            immIt' <-
+              ImmutableDB.streamAfterKnownPoint
+                cdbImmutableDB
+                registry
+                ((,) <$> getPoint <*> blockComponent)
+                pt
            -- Try again with the new iterator
            rollForwardImmutableDB immIt' pt
 
 -- | 'followerInstruction' for when the follower is in the 'FollowerInMem' state.
 instructionSTM ::
-     forall m blk. (MonadSTM m, HasHeader (Header blk))
-  => FollowerRollState blk
-     -- ^ The current 'FollowerRollState' of the follower
-  -> AnchoredFragment (Header blk)
-     -- ^ The current chain fragment
-  -> (FollowerRollState blk -> STM m ())
-     -- ^ How to save the updated 'FollowerRollState'
-  -> STM m (Maybe (ChainUpdate blk (Header blk)))
+  forall m blk.
+  (MonadSTM m, HasHeader (Header blk)) =>
+  -- | The current 'FollowerRollState' of the follower
+  FollowerRollState blk ->
+  -- | The current chain fragment
+  AnchoredFragment (Header blk) ->
+  -- | How to save the updated 'FollowerRollState'
+  (FollowerRollState blk -> STM m ()) ->
+  STM m (Maybe (ChainUpdate blk (Header blk)))
 instructionSTM rollState curChain saveRollState =
-    assert (invariant curChain) $ case rollState of
-      RollForwardFrom pt ->
-        case AF.successorBlock (castPoint pt) curChain of
-          -- There is no successor block because the follower is at the head
-          Nothing -> return Nothing
-          Just hdr -> do
-            saveRollState $ RollForwardFrom $ headerPoint hdr
-            return $ Just $ AddBlock hdr
-      RollBackTo pt -> do
-        saveRollState $ RollForwardFrom pt
-        return $ Just $ RollBack pt
-  where
-    invariant =
-      AF.withinFragmentBounds (castPoint (followerRollStatePoint rollState))
+  assert (invariant curChain) $ case rollState of
+    RollForwardFrom pt ->
+      case AF.successorBlock (castPoint pt) curChain of
+        -- There is no successor block because the follower is at the head
+        Nothing -> return Nothing
+        Just hdr -> do
+          saveRollState $ RollForwardFrom $ headerPoint hdr
+          return $ Just $ AddBlock hdr
+    RollBackTo pt -> do
+      saveRollState $ RollForwardFrom pt
+      return $ Just $ RollBack pt
 where
+  invariant =
+    AF.withinFragmentBounds (castPoint (followerRollStatePoint rollState))
 
 forward ::
-     forall m blk b.
-     ( IOLike m
-     , HasCallStack
-     , HasHeader blk
-     , HasHeader (Header blk)
-     )
-  => ResourceRegistry m
-  -> StrictTVar m (FollowerState m blk b)
-  -> BlockComponent blk b
-  -> ChainDbEnv m blk
-  -> [Point blk]
-  -> m (Maybe (Point blk))
+  forall m blk b.
+  ( IOLike m
+  , HasCallStack
+  , HasHeader blk
+  , HasHeader (Header blk)
+  ) =>
+  ResourceRegistry m ->
+  StrictTVar m (FollowerState m blk b) ->
+  BlockComponent blk b ->
+  ChainDbEnv m blk ->
+  [Point blk] ->
+  m (Maybe (Point blk))
 forward registry varFollower blockComponent CDB{..} =
-    findM checkIfPointOnChain
-  where
-    checkIfPointOnChain :: HasCallStack => Point blk -> m Bool
-    checkIfPointOnChain pt = join $ atomically $ do
-      -- NOTE: we use 'cdbChain' instead of 'Query.getCurrentChain', which only
-      -- returns the last @k@ headers, because we need to also see the headers
-      -- that happen to have not yet been copied over to the ImmutableDB.
-      curChain <- icWithoutTime <$> readTVar cdbChain
-      followerState <- readTVar varFollower
-      if
-        | AF.withinFragmentBounds (castPoint pt) curChain
-        -> do
-          -- It's in the in-memory chain fragment.
-          action <- updateState $ FollowerInMem $ RollBackTo pt
-          return $ True <$ action
-
-        | otherwise
-        -- Not in the in-memory chain fragment, so older than @k@, hence it
-        -- should be in the ImmutableDB. If not, then the point is not on our
-        -- chain.
-        --
-        -- We try to avoid IO (in the ImmutableDB) as much as possible by
-        -- checking whether the requested point corresponds to the current
-        -- state of the follower.
-        -> return $ case followerState of
+  findM checkIfPointOnChain
+ where
+  checkIfPointOnChain :: HasCallStack => Point blk -> m Bool
+  checkIfPointOnChain pt = join $ atomically $ do
+    -- NOTE: we use 'cdbChain' instead of 'Query.getCurrentChain', which only
+    -- returns the last @k@ headers, because we need to also see the headers
+    -- that happen to have not yet been copied over to the ImmutableDB.
+    curChain <- icWithoutTime <$> readTVar cdbChain
+    followerState <- readTVar varFollower
+    if
+      | AF.withinFragmentBounds (castPoint pt) curChain ->
          do
+          -- It's in the in-memory chain fragment.
+          action <- updateState $ FollowerInMem $ RollBackTo pt
+          return $ True <$ action
+      | otherwise ->
+        -- Not in the in-memory chain fragment, so older than @k@, hence it
+        -- should be in the ImmutableDB. If not, then the point is not on our
+        -- chain.
+        --
+        -- We try to avoid IO (in the ImmutableDB) as much as possible by
+        -- checking whether the requested point corresponds to the current
+        -- state of the follower.
+        return $ case followerState of
          FollowerInit
-            | pt == GenesisPoint
-              -- The 'FollowerInit' state is equivalent to @'RollBackTo'
-              -- 'genesisPoint'@, so the state doesn't have to change when
-              -- requesting a rollback to genesis.
-            -> return True
-
+            | pt == GenesisPoint ->
+              -- The 'FollowerInit' state is equivalent to @'RollBackTo'
+              -- 'genesisPoint'@, so the state doesn't have to change when
+              -- requesting a rollback to genesis.
+              return True
          FollowerInImmutableDB rollState immIt
-            | rollState == RollBackTo pt
-              -- If we already have to roll back to the given point in the
-              -- ImmutableDB, the state doesn't have to change, saving us from
-              -- checking whether the point is in the ImmutableDB (cached disk
-              -- reads), closing, and opening the same ImmutableDB iterator.
-            -> return True
-
-            | rollState == RollForwardFrom pt
-              -- If we're already rolling forward from the given point in the
-              -- ImmutableDB, we can reuse the open ImmutableDB iterator,
-              -- saving the same costs as in the comment above. We do have to
-              -- update the state from 'RollForwardFrom' to 'RollBackTo'.
-            -> do
-              atomically $ writeTVar varFollower $
-                FollowerInImmutableDB (RollBackTo pt) immIt
-              return True
-
+            | rollState == RollBackTo pt ->
+              -- If we already have to roll back to the given point in the
+              -- ImmutableDB, the state doesn't have to change, saving us from
+              -- checking whether the point is in the ImmutableDB (cached disk
+              -- reads), closing, and opening the same ImmutableDB iterator.
+              return True
+            | rollState == RollForwardFrom pt ->
+              -- If we're already rolling forward from the given point in the
+              -- ImmutableDB, we can reuse the open ImmutableDB iterator,
+              -- saving the same costs as in the comment above. We do have to
+              -- update the state from 'RollForwardFrom' to 'RollBackTo'.
+              do
+                atomically $
+                  writeTVar varFollower $
+                    FollowerInImmutableDB (RollBackTo pt) immIt
+                return True
          _otherwise -> case pointToWithOriginRealPoint pt of
            -- Genesis is always "in" the ImmutableDB
            Origin -> do
              join $ atomically $ updateState FollowerInit
              return True
-
            NotOrigin pt' -> do
              inImmutableDB <- ImmutableDB.hasBlock cdbImmutableDB pt'
-              if inImmutableDB then do
-                immIt <- ImmutableDB.streamAfterKnownPoint cdbImmutableDB registry
-                  ((,) <$> getPoint <*> blockComponent) pt
-                join $ atomically $
-                  updateState $ FollowerInImmutableDB (RollBackTo pt) immIt
-                return True
-              else
-                -- The point is not in the current chain
-                return False
-
-    -- | Update the state of the follower to the given state. If the current
-    -- state is 'FollowerInImmutableDB', close the ImmutableDB iterator to avoid
-    -- leaking the file handles.
-    updateState :: FollowerState m blk b -> STM m (m ())
-    updateState newFollowerState =
-      stateTVar varFollower $ \followerState ->
-        (, newFollowerState) $ case followerState of
-          -- Return a continuation (that we'll 'join') that closes the
-          -- previous iterator.
-          FollowerInImmutableDB _ immIt -> ImmutableDB.iteratorClose immIt
-          FollowerInit -> return ()
-          FollowerInMem _ -> return ()
+              if inImmutableDB
+                then do
+                  immIt <-
+                    ImmutableDB.streamAfterKnownPoint
+                      cdbImmutableDB
+                      registry
+                      ((,) <$> getPoint <*> blockComponent)
+                      pt
+                  join $
+                    atomically $
+                      updateState $
+                        FollowerInImmutableDB (RollBackTo pt) immIt
+                  return True
+                else
+                  -- The point is not in the current chain
+                  return False
+
+  -- \| Update the state of the follower to the given state. If the current
+  -- state is 'FollowerInImmutableDB', close the ImmutableDB iterator to avoid
+  -- leaking the file handles.
+  updateState :: FollowerState m blk b -> STM m (m ())
+  updateState newFollowerState =
+    stateTVar varFollower $ \followerState ->
+      (,newFollowerState) $ case followerState of
+        -- Return a continuation (that we'll 'join') that closes the
+        -- previous iterator.
+        FollowerInImmutableDB _ immIt -> ImmutableDB.iteratorClose immIt
+        FollowerInit -> return ()
+        FollowerInMem _ -> return ()
 
 -- | Switches the follower to the new fork, by checking whether the follower is
 -- following an old fork, and updating the follower state to rollback to the
 -- intersection point if it is.
 switchFork ::
-     forall m blk b. HasHeader blk
-  => Point blk
-     -- ^ Intersection point between the new and the old chain.
-  -> Set (Point blk)
-     -- ^ Set of points that are in the old chain and not in the
     -- new chain.
-  -> FollowerState m blk b
-  -> FollowerState m blk b
+  forall m blk b.
+  HasHeader blk =>
+  -- | Intersection point between the new and the old chain.
+  Point blk ->
+  -- | Set of points that are in the old chain and not in the
  -- new chain.
+  Set (Point blk) ->
+  FollowerState m blk b ->
+  FollowerState m blk b
 switchFork ipoint oldPoints = \case
 
  -- Roll back to the intersection point if and only if the position of the
@@ -536,15 +567,14 @@ switchFork ipoint oldPoints =
    | pt `Set.member` oldPoints -> FollowerInMem (RollBackTo ipoint)
  followerState -> followerState
-
 -- | Close all open block and header 'Follower's.
 closeAllFollowers ::
-     IOLike m
-  => ChainDbEnv m blk
-  -> m ()
+  IOLike m =>
+  ChainDbEnv m blk ->
+  m ()
 closeAllFollowers CDB{..} = do
-    followerHandles <- atomically $ do
-      followerHandles <- Map.elems <$> readTVar cdbFollowers
-      writeTVar cdbFollowers Map.empty
-      return followerHandles
-    mapM_ fhClose followerHandles
+  followerHandles <- atomically $ do
+    followerHandles <- Map.elems <$> readTVar cdbFollowers
+    writeTVar cdbFollowers Map.empty
+    return followerHandles
+  mapM_ fhClose followerHandles
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Iterator.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Iterator.hs
index cbd08b2769..01ef47b96e 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Iterator.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Iterator.hs
@@ -9,42 +9,57 @@
 {-# LANGUAGE ViewPatterns #-}
 
 -- | Iterators
-module Ouroboros.Consensus.Storage.ChainDB.Impl.Iterator (
-    closeAllIterators
+module Ouroboros.Consensus.Storage.ChainDB.Impl.Iterator
+  ( closeAllIterators
   , stream
+
+    -- * Exported for testing purposes
   , IteratorEnv (..)
   , newIterator
   ) where
 
-import Control.Monad (unless, when)
-import Control.Monad.Except (ExceptT (..), catchError, runExceptT,
-                     throwError, withExceptT)
-import Control.Monad.Trans.Class (lift)
-import Control.ResourceRegistry (ResourceRegistry)
-import Control.Tracer
-import Data.Functor (($>))
-import Data.List.NonEmpty (NonEmpty)
-import qualified Data.List.NonEmpty as NE
-import Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import Data.Maybe (isJust)
-import Data.Typeable (Typeable)
-import GHC.Generics (Generic)
-import GHC.Stack (HasCallStack)
-import Ouroboros.Consensus.Block
-import Ouroboros.Consensus.Storage.ChainDB.API (BlockComponent (..),
-                     ChainDbError (..), Iterator (..), IteratorResult (..),
-                     StreamFrom (..), StreamTo (..), UnknownRange (..),
-                     getPoint, validBounds)
-import Ouroboros.Consensus.Storage.ChainDB.Impl.Paths (Path (..),
-                     computePath)
-import Ouroboros.Consensus.Storage.ChainDB.Impl.Types
-import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB)
-import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB
-import Ouroboros.Consensus.Storage.VolatileDB (VolatileDB)
-import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB
-import Ouroboros.Consensus.Util.IOLike
+import Control.Monad (unless, when)
+import Control.Monad.Except
+  ( ExceptT (..)
+  , catchError
+  , runExceptT
+  , throwError
+  , withExceptT
+  )
+import Control.Monad.Trans.Class (lift)
+import Control.ResourceRegistry (ResourceRegistry)
+import Control.Tracer
+import Data.Functor (($>))
+import Data.List.NonEmpty (NonEmpty)
+import Data.List.NonEmpty qualified as NE
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.Maybe (isJust)
+import Data.Typeable (Typeable)
+import GHC.Generics (Generic)
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Storage.ChainDB.API
+  ( BlockComponent (..)
+  , ChainDbError (..)
+  , Iterator (..)
+  , IteratorResult (..)
+  , StreamFrom (..)
+  , StreamTo (..)
+  , UnknownRange (..)
+  , getPoint
+  , validBounds
+  )
+import Ouroboros.Consensus.Storage.ChainDB.Impl.Paths
+  ( Path (..)
+  , computePath
+  )
+import Ouroboros.Consensus.Storage.ChainDB.Impl.Types
+import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB)
+import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB
+import Ouroboros.Consensus.Storage.VolatileDB (VolatileDB)
+import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB
+import Ouroboros.Consensus.Util.IOLike
 
 -- | Stream blocks
 --
@@ -176,22 +191,22 @@ import Ouroboros.Consensus.Util.IOLike
 -- instead. Opening such an iterator costs 2 (cached) reads from disk upfront.
 -- This can happen multiple times.
 stream ::
-     forall m blk b.
-     ( IOLike m
-     , HasHeader blk
-     , HasCallStack
-     )
-  => ChainDbHandle m blk
-  -> ResourceRegistry m
-  -> BlockComponent blk b
-  -> StreamFrom blk
-  -> StreamTo blk
-  -> m (Either (UnknownRange blk) (Iterator m blk b))
+  forall m blk b.
+  ( IOLike m
+  , HasHeader blk
+  , HasCallStack
+  ) =>
+  ChainDbHandle m blk ->
+  ResourceRegistry m ->
+  BlockComponent blk b ->
+  StreamFrom blk ->
+  StreamTo blk ->
+  m (Either (UnknownRange blk) (Iterator m blk b))
 stream h registry blockComponent from to = getEnv h $ \cdb ->
-    newIterator (fromChainDbEnv cdb) getItEnv registry blockComponent from to
-  where
-    getItEnv :: forall r. (IteratorEnv m blk -> m r) -> m r
-    getItEnv f = getEnv h (f . fromChainDbEnv)
+  newIterator (fromChainDbEnv cdb) getItEnv registry blockComponent from to
+ where
+  getItEnv :: forall r. (IteratorEnv m blk -> m r) -> m r
+  getItEnv f = getEnv h (f . fromChainDbEnv)
 
 {-------------------------------------------------------------------------------
  Iterator environment
@@ -202,291 +217,308 @@ stream h registry blockComponent from to = getEnv h $ \cdb ->
 -- The main purpose of bundling these things in a separate record is to make
 -- it easier to test this code: no need to set up a whole ChainDB, just
 -- provide this record.
-data IteratorEnv m blk = IteratorEnv {
-      itImmutableDB :: ImmutableDB m blk
-    , itVolatileDB :: VolatileDB m blk
-    , itIterators :: StrictTVar m (Map IteratorKey (m ()))
-    , itNextIteratorKey :: StrictTVar m IteratorKey
-    , itTracer :: Tracer m (TraceIteratorEvent blk)
-    }
+data IteratorEnv m blk = IteratorEnv
+  { itImmutableDB :: ImmutableDB m blk
+  , itVolatileDB :: VolatileDB m blk
+  , itIterators :: StrictTVar m (Map IteratorKey (m ()))
+  , itNextIteratorKey :: StrictTVar m IteratorKey
+  , itTracer :: Tracer m (TraceIteratorEvent blk)
+  }
 
 -- | Obtain an 'IteratorEnv' from a 'ChainDbEnv'.
 fromChainDbEnv :: ChainDbEnv m blk -> IteratorEnv m blk
-fromChainDbEnv CDB{..} = IteratorEnv {
-      itImmutableDB = cdbImmutableDB
-    , itVolatileDB = cdbVolatileDB
-    , itIterators = cdbIterators
+fromChainDbEnv CDB{..} =
+  IteratorEnv
+    { itImmutableDB = cdbImmutableDB
+    , itVolatileDB = cdbVolatileDB
+    , itIterators = cdbIterators
    , itNextIteratorKey = cdbNextIteratorKey
-    , itTracer = contramap TraceIteratorEvent cdbTracer
+    , itTracer = contramap TraceIteratorEvent cdbTracer
    }
 
 -- | See 'stream'.
 newIterator ::
-     forall m blk b. (IOLike m, HasHeader blk, HasCallStack)
-  => IteratorEnv m blk
-  -> (forall r. (IteratorEnv m blk -> m r) -> m r)
-     -- ^ Function with which the operations on the returned iterator should
-     -- obtain their 'IteratorEnv'. This function should check whether the
-     -- ChainDB is still open or throw an exception otherwise. This makes sure
-     -- that when we call 'iteratorNext', we first check whether the ChainDB
-     -- is still open.
-  -> ResourceRegistry m
-  -> BlockComponent blk b
-  -> StreamFrom blk
-  -> StreamTo blk
-  -> m (Either (UnknownRange blk) (Iterator m blk b))
+  forall m blk b.
+  (IOLike m, HasHeader blk, HasCallStack) =>
+  IteratorEnv m blk ->
+  -- | Function with which the operations on the returned iterator should
+  -- obtain their 'IteratorEnv'. This function should check whether the
+  -- ChainDB is still open or throw an exception otherwise. This makes sure
+  -- that when we call 'iteratorNext', we first check whether the ChainDB
+  -- is still open.
+  (forall r. (IteratorEnv m blk -> m r) -> m r) ->
+  ResourceRegistry m ->
+  BlockComponent blk b ->
+  StreamFrom blk ->
+  StreamTo blk ->
+  m (Either (UnknownRange blk) (Iterator m blk b))
 newIterator itEnv@IteratorEnv{..} getItEnv registry blockComponent from to = do
-    unless (validBounds from to) $
-      throwIO $ InvalidIteratorRange from to
-    res <- runExceptT start
-    case res of
-      Left e -> trace $ UnknownRangeRequested e
-      _ -> return ()
-    return res
-  where
-    trace = traceWith itTracer
-
-    endPoint :: RealPoint blk
-    endPoint = case to of
-      StreamToInclusive pt -> pt
-
-    -- | Use the tip of the ImmutableDB to determine whether to look directly
-    -- in the ImmutableDB (the range is <= the tip) or first try the
-    -- VolatileDB (in the other cases).
-    start :: HasCallStack
-          => ExceptT (UnknownRange blk) m (Iterator m blk b)
-    start = lift (atomically (ImmutableDB.getTip itImmutableDB)) >>= \case
+  unless (validBounds from to) $
+    throwIO $
+      InvalidIteratorRange from to
+  res <- runExceptT start
+  case res of
+    Left e -> trace $ UnknownRangeRequested e
+    _ -> return ()
+  return res
+ where
+  trace = traceWith itTracer
+
+  endPoint :: RealPoint blk
+  endPoint = case to of
+    StreamToInclusive pt -> pt
+
+  -- \| Use the tip of the ImmutableDB to determine whether to look directly
+  -- in the ImmutableDB (the range is <= the tip) or first try the
+  -- VolatileDB (in the other cases).
+  start ::
+    HasCallStack =>
+    ExceptT (UnknownRange blk) m (Iterator m blk b)
+  start =
+    lift (atomically (ImmutableDB.getTip itImmutableDB)) >>= \case
      Origin -> findPathInVolatileDB
-      NotOrigin ImmutableDB.Tip { tipSlotNo, tipHash, tipIsEBB } ->
+      NotOrigin ImmutableDB.Tip{tipSlotNo, tipHash, tipIsEBB} ->
        case realPointSlot endPoint `compare` tipSlotNo of
          -- The end point is < the tip of the ImmutableDB
          LT -> streamFromImmutableDB
-
-          EQ | realPointHash endPoint == tipHash
+          EQ
+            | realPointHash endPoint == tipHash ->
              -- The end point == the tip of the ImmutableDB
-            -> streamFromImmutableDB
-
          -- The end point /= the tip of the ImmutableDB.
          --
          -- The end point can be a regular block or EBB. So can the tip of
          -- the ImmutableDB. We distinguish the following for cases where
          -- each block and EBB has the same slot number, and a block or
          -- EBB /not/ on the current chain is indicated with a '.
          --
          -- 1. ImmutableDB: .. :> EBB :> B
         --    end point: B'
          --    desired outcome: ForkTooOld
          --
          -- 2. ImmutableDB: .. :> EBB :> B
          --    end point: EBB'
          --    desired outcome: ForkTooOld
          --
          -- 3. ImmutableDB: .. :> EBB :> B
          --    end point: EBB
          --    desired outcome: stream from ImmutableDB
          --
          -- 4. ImmutableDB: .. :> EBB
          --    end point: B
          --    desired outcome: find path in the VolatileDB
          --
          -- 5. ImmutableDB: .. :> EBB
          --    end point: B'
          --    desired outcome: ForkTooOld
          --
          -- 6. ImmutableDB: .. :> EBB
          --    end point: EBB'
          --    desired outcome: ForkTooOld
          --
          -- We don't know upfront whether the given end point refers to a
          -- block or EBB nor whether it is part of the current chain or
          -- not. This means we don't know yet with which case we are
          -- dealing. The only thing we know for sure, is whether the
          -- ImmutableDB tip ends with a regular block (1-3) or an EBB
          -- (4-6).
- - | IsNotEBB <- tipIsEBB -- Cases 1-3 - -> streamFromImmutableDB `catchError` - -- We also use 'streamFromImmutableDB' to check whether the - -- block or EBB is in the ImmutableDB. If that's not the case, - -- 'streamFromImmutableDB' will return 'MissingBlock'. Instead - -- of returning that, we should return 'ForkTooOld', which is - -- more correct. - const (throwError $ ForkTooOld from) - | otherwise -- Cases 4-6 - -> findPathInVolatileDB - + streamFromImmutableDB + -- The end point /= the tip of the ImmutableDB. + -- + -- The end point can be a regular block or EBB. So can the tip of + -- the ImmutableDB. We distinguish the following for cases where + -- each block and EBB has the same slot number, and a block or + -- EBB /not/ on the current chain is indicated with a '. + -- + -- 1. ImmutableDB: .. :> EBB :> B + -- end point: B' + -- desired outcome: ForkTooOld + -- + -- 2. ImmutableDB: .. :> EBB :> B + -- end point: EBB' + -- desired outcome: ForkTooOld + -- + -- 3. ImmutableDB: .. :> EBB :> B + -- end point: EBB + -- desired outcome: stream from ImmutableDB + -- + -- 4. ImmutableDB: .. :> EBB + -- end point: B + -- desired outcome: find path in the VolatileDB + -- + -- 5. ImmutableDB: .. :> EBB + -- end point: B' + -- desired outcome: ForkTooOld + -- + -- 6. ImmutableDB: .. :> EBB + -- end point: EBB' + -- desired outcome: ForkTooOld + -- + -- We don't know upfront whether the given end point refers to a + -- block or EBB nor whether it is part of the current chain or + -- not. This means we don't know yet with which case we are + -- dealing. The only thing we know for sure, is whether the + -- ImmutableDB tip ends with a regular block (1-3) or an EBB + -- (4-6). + + | IsNotEBB <- tipIsEBB -> -- Cases 1-3 + streamFromImmutableDB + `catchError` + -- We also use 'streamFromImmutableDB' to check whether the + -- block or EBB is in the ImmutableDB. If that's not the case, + -- 'streamFromImmutableDB' will return 'MissingBlock'. 
Instead + -- of returning that, we should return 'ForkTooOld', which is + -- more correct. + const (throwError $ ForkTooOld from) + | otherwise -> -- Cases 4-6 + findPathInVolatileDB -- The end point is > the tip of the ImmutableDB GT -> findPathInVolatileDB - -- | PRECONDITION: the upper bound >= the tip of the ImmutableDB. - -- Greater or /equal/, because of EBBs :( - findPathInVolatileDB :: - HasCallStack => ExceptT (UnknownRange blk) m (Iterator m blk b) - findPathInVolatileDB = do - path <- lift $ computePathVolatileDB itVolatileDB from to - case path of - NotInVolatileDB _hash -> throwError $ ForkTooOld from - PartiallyInVolatileDB predHash pts -> streamFromBoth predHash pts - CompletelyInVolatileDB pts -> case NE.nonEmpty pts of - Just pts' -> lift $ streamFromVolatileDB pts' - Nothing -> lift $ emptyIterator - - streamFromVolatileDB :: NonEmpty (RealPoint blk) -> m (Iterator m blk b) - streamFromVolatileDB pts = do - trace $ StreamFromVolatileDB from to (NE.toList pts) - createIterator $ InVolatileDB from pts - - streamFromImmutableDB :: ExceptT (UnknownRange blk) m (Iterator m blk b) - streamFromImmutableDB = do - lift $ trace $ StreamFromImmutableDB from to - streamFromImmutableDBHelper to - - streamFromImmutableDBHelper :: - StreamTo blk - -> ExceptT (UnknownRange blk) m (Iterator m blk b) - streamFromImmutableDBHelper to' = do - -- 'ImmutableDB.stream' will check the hash of the block at the - -- start and end bounds. - immIt <- - withExceptT missingBlockToUnknownRange $ ExceptT $ - ImmutableDB.stream - itImmutableDB - registry - ((,) <$> getPoint <*> blockComponent) - from - to' - lift $ createIterator $ InImmutableDB from immIt (StreamTo to') - - -- | If we have to stream from both the ImmutableDB and the VolatileDB, we - -- only allow the (current) tip of the ImmutableDB to be the switchover - -- point between the two DBs. 
If not, this would mean we have to stream a - -- fork that forks off more than @k@ blocks in the past, in which case the - -- risk of blocks going missing due to GC increases. So we refuse such a - -- stream. - streamFromBoth :: - HasCallStack - => HeaderHash blk - -> [RealPoint blk] - -> ExceptT (UnknownRange blk) m (Iterator m blk b) - streamFromBoth predHash pts = do - lift $ trace $ StreamFromBoth from to pts - lift (fmap ImmutableDB.tipToRealPoint <$> - atomically (ImmutableDB.getTip itImmutableDB)) >>= \case - -- The ImmutableDB is empty - Origin -> throwError $ ForkTooOld from - -- The incomplete path fits onto the tip of the ImmutableDB. - NotOrigin pt@(RealPoint _ tipHash) - | tipHash == predHash - -> case NE.nonEmpty pts of - Just pts' -> startStream pt pts' - -- The lower bound was in the ImmutableDB and the upper was - -- in the VolatileDB, but the path of points in the - -- VolatileDB is actually empty. It must be that the - -- exclusive bound was in the VolatileDB and its - -- predecessor is the tip of the ImmutableDB. - Nothing -> streamFromImmutableDBHelper (StreamToInclusive pt) - -- The incomplete path doesn't fit onto the tip of the - -- ImmutableDB. Note that since we have constructed the - -- incomplete path through the VolatileDB, blocks might have - -- moved from the VolatileDB to the ImmutableDB so that the tip - -- of the ImmutableDB has changed. Either the path used to fit - -- onto the tip but the tip has changed, or the path simply - -- never fitted onto the tip. - | otherwise -> case dropWhile (/= pt) pts of + -- \| PRECONDITION: the upper bound >= the tip of the ImmutableDB. 
+ -- Greater or /equal/, because of EBBs :( + findPathInVolatileDB :: + HasCallStack => ExceptT (UnknownRange blk) m (Iterator m blk b) + findPathInVolatileDB = do + path <- lift $ computePathVolatileDB itVolatileDB from to + case path of + NotInVolatileDB _hash -> throwError $ ForkTooOld from + PartiallyInVolatileDB predHash pts -> streamFromBoth predHash pts + CompletelyInVolatileDB pts -> case NE.nonEmpty pts of + Just pts' -> lift $ streamFromVolatileDB pts' + Nothing -> lift $ emptyIterator + + streamFromVolatileDB :: NonEmpty (RealPoint blk) -> m (Iterator m blk b) + streamFromVolatileDB pts = do + trace $ StreamFromVolatileDB from to (NE.toList pts) + createIterator $ InVolatileDB from pts + + streamFromImmutableDB :: ExceptT (UnknownRange blk) m (Iterator m blk b) + streamFromImmutableDB = do + lift $ trace $ StreamFromImmutableDB from to + streamFromImmutableDBHelper to + + streamFromImmutableDBHelper :: + StreamTo blk -> + ExceptT (UnknownRange blk) m (Iterator m blk b) + streamFromImmutableDBHelper to' = do + -- 'ImmutableDB.stream' will check the hash of the block at the + -- start and end bounds. + immIt <- + withExceptT missingBlockToUnknownRange $ + ExceptT $ + ImmutableDB.stream + itImmutableDB + registry + ((,) <$> getPoint <*> blockComponent) + from + to' + lift $ createIterator $ InImmutableDB from immIt (StreamTo to') + + -- \| If we have to stream from both the ImmutableDB and the VolatileDB, we + -- only allow the (current) tip of the ImmutableDB to be the switchover + -- point between the two DBs. If not, this would mean we have to stream a + -- fork that forks off more than @k@ blocks in the past, in which case the + -- risk of blocks going missing due to GC increases. So we refuse such a + -- stream. 
+ streamFromBoth :: + HasCallStack => + HeaderHash blk -> + [RealPoint blk] -> + ExceptT (UnknownRange blk) m (Iterator m blk b) + streamFromBoth predHash pts = do + lift $ trace $ StreamFromBoth from to pts + lift + ( fmap ImmutableDB.tipToRealPoint + <$> atomically (ImmutableDB.getTip itImmutableDB) + ) + >>= \case + -- The ImmutableDB is empty + Origin -> throwError $ ForkTooOld from + -- The incomplete path fits onto the tip of the ImmutableDB. + NotOrigin pt@(RealPoint _ tipHash) + | tipHash == predHash -> + case NE.nonEmpty pts of + Just pts' -> startStream pt pts' + -- The lower bound was in the ImmutableDB and the upper was + -- in the VolatileDB, but the path of points in the + -- VolatileDB is actually empty. It must be that the + -- exclusive bound was in the VolatileDB and its + -- predecessor is the tip of the ImmutableDB. + Nothing -> streamFromImmutableDBHelper (StreamToInclusive pt) + -- The incomplete path doesn't fit onto the tip of the + -- ImmutableDB. Note that since we have constructed the + -- incomplete path through the VolatileDB, blocks might have + -- moved from the VolatileDB to the ImmutableDB so that the tip + -- of the ImmutableDB has changed. Either the path used to fit + -- onto the tip but the tip has changed, or the path simply + -- never fitted onto the tip. + | otherwise -> case dropWhile (/= pt) pts of -- The current tip is not in the path, this means that the path -- never fitted onto the tip of the ImmutableDB. We refuse this -- stream. - [] -> throwError $ ForkTooOld from + [] -> throwError $ ForkTooOld from -- The current tip is in the path, with some points after it, -- this means that some blocks in our path have moved from the -- VolatileDB to the ImmutableDB. We can shift the switchover -- point to the current tip. 
- _tipPt:pt':pts' -> startStream pt (pt' NE.:| pts') + _tipPt : pt' : pts' -> startStream pt (pt' NE.:| pts') -- The current tip is the end of the path, this means we can -- actually stream everything from just the ImmutableDB. It -- could be that the exclusive end bound was not part of the -- ImmutableDB, so stream to the current tip of the ImmutableDB -- (inclusive) to avoid trying to stream (exclusive) to a block -- that's not in the ImmutableDB. - [_tipPt] -> streamFromImmutableDBHelper (StreamToInclusive pt) - where - startStream :: - RealPoint blk -- ^ Tip of the ImmutableDB - -> NonEmpty (RealPoint blk) - -> ExceptT (UnknownRange blk) m (Iterator m blk b) - startStream immTip pts' = do - let immEnd = SwitchToVolatileDBFrom (StreamToInclusive immTip) pts' - immIt <- withExceptT missingBlockToUnknownRange $ ExceptT $ + [_tipPt] -> streamFromImmutableDBHelper (StreamToInclusive pt) + where + startStream :: + RealPoint blk -> + -- \^ Tip of the ImmutableDB + NonEmpty (RealPoint blk) -> + ExceptT (UnknownRange blk) m (Iterator m blk b) + startStream immTip pts' = do + let immEnd = SwitchToVolatileDBFrom (StreamToInclusive immTip) pts' + immIt <- + withExceptT missingBlockToUnknownRange $ + ExceptT $ ImmutableDB.stream itImmutableDB registry ((,) <$> getPoint <*> blockComponent) from (StreamToInclusive immTip) - lift $ createIterator $ InImmutableDB from immIt immEnd - - makeIterator :: - Bool -- ^ Register the iterator in 'cdbIterators'? - -> IteratorState m blk b - -> m (Iterator m blk b) - makeIterator register itState = do - iteratorKey <- makeNewIteratorKey - varItState <- newTVarIO itState - when register $ atomically $ modifyTVar itIterators $ - -- Note that we don't use 'itEnv' here, because that would mean that - -- invoking the function only works when the database is open, which - -- probably won't be the case. 
- Map.insert iteratorKey (implIteratorClose varItState iteratorKey itEnv) - return Iterator { - iteratorNext = getItEnv $ - implIteratorNext registry varItState blockComponent - , iteratorClose = getItEnv $ - implIteratorClose varItState iteratorKey + lift $ createIterator $ InImmutableDB from immIt immEnd + + makeIterator :: + Bool -> + -- \^ Register the iterator in 'cdbIterators'? + IteratorState m blk b -> + m (Iterator m blk b) + makeIterator register itState = do + iteratorKey <- makeNewIteratorKey + varItState <- newTVarIO itState + when register $ + atomically $ + modifyTVar itIterators $ + -- Note that we don't use 'itEnv' here, because that would mean that + -- invoking the function only works when the database is open, which + -- probably won't be the case. + Map.insert iteratorKey (implIteratorClose varItState iteratorKey itEnv) + return + Iterator + { iteratorNext = + getItEnv $ + implIteratorNext registry varItState blockComponent + , iteratorClose = + getItEnv $ + implIteratorClose varItState iteratorKey } - emptyIterator :: m (Iterator m blk b) - emptyIterator = makeIterator False Closed + emptyIterator :: m (Iterator m blk b) + emptyIterator = makeIterator False Closed - -- | This is 'makeIterator' + it in 'cdbIterators'. - createIterator :: IteratorState m blk b -> m (Iterator m blk b) - createIterator = makeIterator True + -- \| This is 'makeIterator' + it in 'cdbIterators'. + createIterator :: IteratorState m blk b -> m (Iterator m blk b) + createIterator = makeIterator True - makeNewIteratorKey :: m IteratorKey - makeNewIteratorKey = atomically $ do - newIteratorKey <- readTVar itNextIteratorKey - modifyTVar itNextIteratorKey succ - return newIteratorKey + makeNewIteratorKey :: m IteratorKey + makeNewIteratorKey = atomically $ do + newIteratorKey <- readTVar itNextIteratorKey + modifyTVar itNextIteratorKey succ + return newIteratorKey -- | Variant of 'computePath' that computes a path through the VolatileDB. 
-- Throws an 'InvalidIteratorRange' exception when the range is invalid (i.e., -- 'computePath' returned 'Nothing'). computePathVolatileDB :: - (IOLike m, HasHeader blk) - => VolatileDB m blk - -> StreamFrom blk - -> StreamTo blk - -> m (Path blk) + (IOLike m, HasHeader blk) => + VolatileDB m blk -> + StreamFrom blk -> + StreamTo blk -> + m (Path blk) computePathVolatileDB volatileDB from to = do - lookupBlockInfo <- atomically $ VolatileDB.getBlockInfo volatileDB - case computePath lookupBlockInfo from to of - Just path -> return path - Nothing -> throwIO $ InvalidIteratorRange from to + lookupBlockInfo <- atomically $ VolatileDB.getBlockInfo volatileDB + case computePath lookupBlockInfo from to of + Just path -> return path + Nothing -> throwIO $ InvalidIteratorRange from to -- | Close the iterator and remove it from the map of iterators ('itIterators' -- and thus 'cdbIterators'). implIteratorClose :: - IOLike m - => StrictTVar m (IteratorState m blk b) - -> IteratorKey - -> IteratorEnv m blk - -> m () + IOLike m => + StrictTVar m (IteratorState m blk b) -> + IteratorKey -> + IteratorEnv m blk -> + m () implIteratorClose varItState itrKey IteratorEnv{..} = do - mbImmIt <- atomically $ do - modifyTVar itIterators (Map.delete itrKey) - mbImmIt <- iteratorStateImmutableIt <$> readTVar varItState - writeTVar varItState Closed - return mbImmIt - mapM_ ImmutableDB.iteratorClose mbImmIt + mbImmIt <- atomically $ do + modifyTVar itIterators (Map.delete itrKey) + mbImmIt <- iteratorStateImmutableIt <$> readTVar varItState + writeTVar varItState Closed + return mbImmIt + mapM_ ImmutableDB.iteratorClose mbImmIt -- | Possible states of an iterator. -- @@ -519,13 +551,8 @@ implIteratorClose varItState itrKey IteratorEnv{..} = do -- that were previously part of the VolatileDB. When we now encounter a -- block of which the point does not match the expected point or when the -- iterator is exhausted, we switch back to the 'InVolatileDB' state. 
--- data IteratorState m blk b - = InImmutableDB - !(StreamFrom blk) - !(ImmutableDB.Iterator m blk (Point blk, b)) - !(InImmutableDBEnd blk) - -- ^ Streaming from the ImmutableDB. + = -- | Streaming from the ImmutableDB. -- -- Invariant: an ImmutableDB iterator opened using the 'StreamFrom' -- parameter as lower bound will yield the same next block as the iterator @@ -536,10 +563,11 @@ data IteratorState m blk b -- include them in its stream. -- -- Invariant: the iterator is not exhausted. - | InVolatileDB + InImmutableDB !(StreamFrom blk) - !(NonEmpty (RealPoint blk)) - -- ^ Streaming from the VolatileDB. + !(ImmutableDB.Iterator m blk (Point blk, b)) + !(InImmutableDBEnd blk) + | -- | Streaming from the VolatileDB. -- -- The (non-empty) list of points is the path to follow through the -- VolatileDB. @@ -550,193 +578,203 @@ data IteratorState m blk b -- Note that the points of these blocks still have to be checked against -- the points in the path, because the blocks might not have been part of -- the current chain, in which case they will not be in the ImmutableDB. - | InImmutableDBRetry + InVolatileDB !(StreamFrom blk) - !(ImmutableDB.Iterator m blk (Point blk, b)) !(NonEmpty (RealPoint blk)) - -- ^ When streaming blocks (a list of points) from the VolatileDB, we + | -- | When streaming blocks (a list of points) from the VolatileDB, we -- noticed a block was missing from the VolatileDB. It may have moved to -- the ImmutableDB since we initialised the iterator (and built the path), -- so we'll try if we can stream it from the ImmutableDB. -- -- Invariants: invariants of 'InImmutableDB' + invariant of 'InVolatileDB'. 
- + InImmutableDBRetry + !(StreamFrom blk) + !(ImmutableDB.Iterator m blk (Point blk, b)) + !(NonEmpty (RealPoint blk)) | Closed - deriving (Generic) + deriving Generic + +instance + (Typeable blk, StandardHash blk) => + NoThunks (IteratorState m blk b) -instance (Typeable blk, StandardHash blk) - => NoThunks (IteratorState m blk b) - -- use generic instance +-- use generic instance -- | Extract the ImmutableDB Iterator from the 'IteratorState'. iteratorStateImmutableIt :: - IteratorState m blk b - -> Maybe (ImmutableDB.Iterator m blk (Point blk, b)) + IteratorState m blk b -> + Maybe (ImmutableDB.Iterator m blk (Point blk, b)) iteratorStateImmutableIt = \case - Closed -> Nothing - InImmutableDB _ immIt _ -> Just immIt - InImmutableDBRetry _ immIt _ -> Just immIt - InVolatileDB {} -> Nothing + Closed -> Nothing + InImmutableDB _ immIt _ -> Just immIt + InImmutableDBRetry _ immIt _ -> Just immIt + InVolatileDB{} -> Nothing -- | Determines if/when to stop streaming from the ImmutableDB and what to do -- afterwards. -data InImmutableDBEnd blk = +data InImmutableDBEnd blk + = -- | Don't stop streaming until the iterator is exhausted. StreamAll - -- ^ Don't stop streaming until the iterator is exhausted. - | StreamTo !(StreamTo blk) - -- ^ Stream to the upper bound. - | SwitchToVolatileDBFrom !(StreamTo blk) !(NonEmpty (RealPoint blk)) - -- ^ Stream to the upper bound. Afterwards, start streaming the path (the + | -- | Stream to the upper bound. + StreamTo !(StreamTo blk) + | -- | Stream to the upper bound. Afterwards, start streaming the path (the -- second parameter) from the VolatileDB. + SwitchToVolatileDBFrom !(StreamTo blk) !(NonEmpty (RealPoint blk)) deriving (Generic, NoThunks) implIteratorNext :: - forall m blk b. (IOLike m, HasHeader blk) - => ResourceRegistry m - -> StrictTVar m (IteratorState m blk b) - -> BlockComponent blk b - -> IteratorEnv m blk - -> m (IteratorResult blk b) + forall m blk b. 
+ (IOLike m, HasHeader blk) => + ResourceRegistry m -> + StrictTVar m (IteratorState m blk b) -> + BlockComponent blk b -> + IteratorEnv m blk -> + m (IteratorResult blk b) implIteratorNext registry varItState blockComponent IteratorEnv{..} = - atomically (readTVar varItState) >>= \case - Closed -> - return IteratorExhausted - InImmutableDB continueAfter immIt immEnd -> - nextInImmutableDB continueAfter immIt immEnd - InImmutableDBRetry continueAfter immIt immPts -> - nextInImmutableDBRetry (Just continueAfter) immIt immPts - InVolatileDB continueAfter volPts -> - nextInVolatileDB continueAfter volPts - where - trace = traceWith itTracer - - -- | Read the next block while in the 'InVolatileDB' state. - nextInVolatileDB :: - StreamFrom blk - -- ^ In case the block corresponding to the first point in - -- the path is missing from the VolatileDB, we can use this - -- lower bound to try to stream it from the ImmutableDB (if - -- the block indeed has been moved there). - -> NonEmpty (RealPoint blk) - -> m (IteratorResult blk b) - nextInVolatileDB continueFrom (pt@(realPointHash -> hash) NE.:| pts) = - VolatileDB.getBlockComponent itVolatileDB blockComponent hash >>= \case - -- Block is missing - Nothing -> do - trace $ BlockMissingFromVolatileDB pt - -- Try if we can stream a block from the ImmutableDB that was - -- previously in the VolatileDB. This will only work if the block - -- was part of the current chain, otherwise it will not have been - -- copied to the ImmutableDB. - -- - -- This call cannot throw a 'ReadFutureSlotError' or a - -- 'ReadFutureEBBError' because if the block is missing, it /must/ - -- have been garbage-collected, which means that its slot was - -- older than the slot of the tip of the ImmutableDB. - (fmap ImmutableDB.tipToRealPoint <$> - atomically (ImmutableDB.getTip itImmutableDB)) >>= \case - Origin -> - -- The block was in the VolatileDB, but isn't anymore. This can - -- only happen due to GC. 
It's not guaranteed that GC will have - -- moved /that/ block to the ImmutableDb (so it might have just - -- disappeared altogether), /but/ after GC the ImmutableDB - -- cannot be empty (because GC will only be triggered after some - -- newly immutable blocks have been copied to the ImmutableDB). - error "nextInVolatileDB: impossible" - NotOrigin tip -> do - errOrIt <- ImmutableDB.stream + atomically (readTVar varItState) >>= \case + Closed -> + return IteratorExhausted + InImmutableDB continueAfter immIt immEnd -> + nextInImmutableDB continueAfter immIt immEnd + InImmutableDBRetry continueAfter immIt immPts -> + nextInImmutableDBRetry (Just continueAfter) immIt immPts + InVolatileDB continueAfter volPts -> + nextInVolatileDB continueAfter volPts + where + trace = traceWith itTracer + + -- \| Read the next block while in the 'InVolatileDB' state. + nextInVolatileDB :: + StreamFrom blk -> + -- \^ In case the block corresponding to the first point in + -- the path is missing from the VolatileDB, we can use this + -- lower bound to try to stream it from the ImmutableDB (if + -- the block indeed has been moved there). + NonEmpty (RealPoint blk) -> + m (IteratorResult blk b) + nextInVolatileDB continueFrom (pt@(realPointHash -> hash) NE.:| pts) = + VolatileDB.getBlockComponent itVolatileDB blockComponent hash >>= \case + -- Block is missing + Nothing -> do + trace $ BlockMissingFromVolatileDB pt + -- Try if we can stream a block from the ImmutableDB that was + -- previously in the VolatileDB. This will only work if the block + -- was part of the current chain, otherwise it will not have been + -- copied to the ImmutableDB. + -- + -- This call cannot throw a 'ReadFutureSlotError' or a + -- 'ReadFutureEBBError' because if the block is missing, it /must/ + -- have been garbage-collected, which means that its slot was + -- older than the slot of the tip of the ImmutableDB. 
+ ( fmap ImmutableDB.tipToRealPoint + <$> atomically (ImmutableDB.getTip itImmutableDB) + ) + >>= \case + Origin -> + -- The block was in the VolatileDB, but isn't anymore. This can + -- only happen due to GC. It's not guaranteed that GC will have + -- moved /that/ block to the ImmutableDb (so it might have just + -- disappeared altogether), /but/ after GC the ImmutableDB + -- cannot be empty (because GC will only be triggered after some + -- newly immutable blocks have been copied to the ImmutableDB). + error "nextInVolatileDB: impossible" + NotOrigin tip -> do + errOrIt <- + ImmutableDB.stream itImmutableDB registry ((,) <$> getPoint <*> blockComponent) continueFrom (StreamToInclusive tip) - case errOrIt of - -- The block was not found in the ImmutableDB, it must have - -- been garbage-collected - Left _ -> do - trace $ BlockGCedFromVolatileDB pt - return $ IteratorBlockGCed pt - Right immIt -> - nextInImmutableDBRetry Nothing immIt (pt NE.:| pts) - - -- Block is there - Just b | Just pts' <- NE.nonEmpty pts -> do - let continueFrom' = StreamFromExclusive (realPointToPoint pt) - atomically $ writeTVar varItState (InVolatileDB continueFrom' pts') - return $ IteratorResult b - -- No more points, so we can stop - Just b -> do - atomically $ writeTVar varItState Closed - return $ IteratorResult b - - -- | Read the next block while in the 'InImmutableDB' state. 
- nextInImmutableDB :: - StreamFrom blk - -> ImmutableDB.Iterator m blk (Point blk, b) - -> InImmutableDBEnd blk - -> m (IteratorResult blk b) - nextInImmutableDB continueFrom immIt immEnd = - selectResult immIt >>= \case - NotDone (pt, b) -> do - let continueFrom' = StreamFromExclusive pt - atomically $ writeTVar varItState (InImmutableDB continueFrom' immIt immEnd) - return $ IteratorResult b - -- True indicates that this is the last element in the stream - DoneAfter (pt, b) | SwitchToVolatileDBFrom _ pts <- immEnd -> do - let continueFrom' = StreamFromExclusive pt - atomically $ writeTVar varItState (InVolatileDB continueFrom' pts) - return $ IteratorResult b - DoneAfter (_pt, b) -> do - atomically $ writeTVar varItState Closed - return $ IteratorResult b - Done | SwitchToVolatileDBFrom _ pts <- immEnd -> - nextInVolatileDB continueFrom pts - Done -> do - -- No need to switch to the VolatileDB, so we can stop - atomically $ writeTVar varItState Closed - return IteratorExhausted - - -- | Read the next block while in the 'InImmutableDBRetry' state. - -- - -- We try to stream blocks that we suspect are now in the ImmutableDB - -- because they are no longer in the VolatileDB. We don't know this for - -- sure, so we must check whether they match the expected points. - nextInImmutableDBRetry :: - Maybe (StreamFrom blk) - -- ^ 'Nothing' iff the iterator was just opened and nothing has been - -- streamed from it yet. This is used to avoid switching right back - -- to the VolatileDB if we came from there. 
- -> ImmutableDB.Iterator m blk (Point blk, b) - -> NonEmpty (RealPoint blk) - -> m (IteratorResult blk b) - nextInImmutableDBRetry mbContinueFrom immIt (expectedPt NE.:| pts) = - selectResult immIt >>= \case - NotDone (actualPt, b) | actualPt == realPointToPoint expectedPt -> do - trace $ BlockWasCopiedToImmutableDB expectedPt - let continueFrom' = StreamFromExclusive (realPointToPoint expectedPt) - case NE.nonEmpty pts of - Nothing -> do - atomically $ writeTVar varItState Closed - ImmutableDB.iteratorClose immIt - Just pts' -> - atomically $ writeTVar varItState $ + case errOrIt of + -- The block was not found in the ImmutableDB, it must have + -- been garbage-collected + Left _ -> do + trace $ BlockGCedFromVolatileDB pt + return $ IteratorBlockGCed pt + Right immIt -> + nextInImmutableDBRetry Nothing immIt (pt NE.:| pts) + + -- Block is there + Just b | Just pts' <- NE.nonEmpty pts -> do + let continueFrom' = StreamFromExclusive (realPointToPoint pt) + atomically $ writeTVar varItState (InVolatileDB continueFrom' pts') + return $ IteratorResult b + -- No more points, so we can stop + Just b -> do + atomically $ writeTVar varItState Closed + return $ IteratorResult b + + -- \| Read the next block while in the 'InImmutableDB' state. 
+ nextInImmutableDB :: + StreamFrom blk -> + ImmutableDB.Iterator m blk (Point blk, b) -> + InImmutableDBEnd blk -> + m (IteratorResult blk b) + nextInImmutableDB continueFrom immIt immEnd = + selectResult immIt >>= \case + NotDone (pt, b) -> do + let continueFrom' = StreamFromExclusive pt + atomically $ writeTVar varItState (InImmutableDB continueFrom' immIt immEnd) + return $ IteratorResult b + -- True indicates that this is the last element in the stream + DoneAfter (pt, b) | SwitchToVolatileDBFrom _ pts <- immEnd -> do + let continueFrom' = StreamFromExclusive pt + atomically $ writeTVar varItState (InVolatileDB continueFrom' pts) + return $ IteratorResult b + DoneAfter (_pt, b) -> do + atomically $ writeTVar varItState Closed + return $ IteratorResult b + Done + | SwitchToVolatileDBFrom _ pts <- immEnd -> + nextInVolatileDB continueFrom pts + Done -> do + -- No need to switch to the VolatileDB, so we can stop + atomically $ writeTVar varItState Closed + return IteratorExhausted + + -- \| Read the next block while in the 'InImmutableDBRetry' state. + -- + -- We try to stream blocks that we suspect are now in the ImmutableDB + -- because they are no longer in the VolatileDB. We don't know this for + -- sure, so we must check whether they match the expected points. + nextInImmutableDBRetry :: + Maybe (StreamFrom blk) -> + -- \^ 'Nothing' iff the iterator was just opened and nothing has been + -- streamed from it yet. This is used to avoid switching right back + -- to the VolatileDB if we came from there. 
+ ImmutableDB.Iterator m blk (Point blk, b) -> + NonEmpty (RealPoint blk) -> + m (IteratorResult blk b) + nextInImmutableDBRetry mbContinueFrom immIt (expectedPt NE.:| pts) = + selectResult immIt >>= \case + NotDone (actualPt, b) | actualPt == realPointToPoint expectedPt -> do + trace $ BlockWasCopiedToImmutableDB expectedPt + let continueFrom' = StreamFromExclusive (realPointToPoint expectedPt) + case NE.nonEmpty pts of + Nothing -> do + atomically $ writeTVar varItState Closed + ImmutableDB.iteratorClose immIt + Just pts' -> + atomically $ + writeTVar varItState $ InImmutableDBRetry continueFrom' immIt pts' - return $ IteratorResult b - - DoneAfter (actualPt, b) | actualPt == realPointToPoint expectedPt -> do - -- 'DoneAfter': 'selectResult' will have closed the ImmutableDB iterator - -- already - trace $ BlockWasCopiedToImmutableDB expectedPt - let continueFrom' = StreamFromExclusive (realPointToPoint expectedPt) - case NE.nonEmpty pts of - Nothing -> atomically $ writeTVar varItState Closed - Just pts' -> do - atomically $ writeTVar varItState $ InVolatileDB continueFrom' pts' - trace SwitchBackToVolatileDB - return $ IteratorResult b - - -- Point mismatch or 'Done'. Close the ImmutableDB Iterator (idempotent). - _ -> ImmutableDB.iteratorClose immIt *> case mbContinueFrom of + return $ IteratorResult b + DoneAfter (actualPt, b) | actualPt == realPointToPoint expectedPt -> do + -- 'DoneAfter': 'selectResult' will have closed the ImmutableDB iterator + -- already + trace $ BlockWasCopiedToImmutableDB expectedPt + let continueFrom' = StreamFromExclusive (realPointToPoint expectedPt) + case NE.nonEmpty pts of + Nothing -> atomically $ writeTVar varItState Closed + Just pts' -> do + atomically $ writeTVar varItState $ InVolatileDB continueFrom' pts' + trace SwitchBackToVolatileDB + return $ IteratorResult b + + -- Point mismatch or 'Done'. Close the ImmutableDB Iterator (idempotent). 
+ _ -> + ImmutableDB.iteratorClose immIt *> case mbContinueFrom of -- We just switched to this state and the iterator was just opened. -- The block must be GC'ed, since we opened the iterator because it -- was missing from the VolatileDB and now it is not in the @@ -754,42 +792,42 @@ implIteratorNext registry varItState blockComponent IteratorEnv{..} = trace SwitchBackToVolatileDB nextInVolatileDB continueFrom (expectedPt NE.:| pts) - -- | Given an ImmutableDB iterator, try to stream a value from it and - -- convert it to a 'Done'. See the documentation of 'Done' for more - -- details. - -- - -- Note that this function closes the iterator when necessary, i.e., when - -- the return value is 'Done' or 'DoneAfter'. - selectResult :: - ImmutableDB.Iterator m blk (Point blk, b) - -> m (Done (Point blk, b)) - selectResult immIt = do - itRes <- ImmutableDB.iteratorNext immIt - hasNext <- isJust <$> atomically (ImmutableDB.iteratorHasNext immIt) - case itRes of - ImmutableDB.IteratorResult blk -> select blk hasNext - ImmutableDB.IteratorExhausted -> return Done - where - select blk hasNext - | hasNext - = return $ NotDone blk - | otherwise - = ImmutableDB.iteratorClose immIt $> DoneAfter blk + -- \| Given an ImmutableDB iterator, try to stream a value from it and + -- convert it to a 'Done'. See the documentation of 'Done' for more + -- details. + -- + -- Note that this function closes the iterator when necessary, i.e., when + -- the return value is 'Done' or 'DoneAfter'. 
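The lookahead that `selectResult` performs, peeking at `iteratorHasNext` to tell the final element apart from earlier ones, can be sketched over a plain list. This is an illustrative reconstruction, not the module's code: the `Step` type and the list-based stand-in for the iterator are invented here, mirroring the `Done`/`DoneAfter`/`NotDone` behaviour described above.

```haskell
-- Invented stand-ins: a list plays the role of the ImmutableDB iterator.
data Step b = Done | DoneAfter b | NotDone b
  deriving (Eq, Show)

-- Classify the next element: 'NotDone' while more elements follow,
-- 'DoneAfter' for the final element (the point at which the real code
-- closes the iterator), 'Done' once the stream is exhausted.
selectFromList :: [b] -> Step b
selectFromList []      = Done
selectFromList [b]     = DoneAfter b
selectFromList (b : _) = NotDone b
```

In the real implementation the `hasNext` probe is what distinguishes the `[b]` and `(b : _)` cases, since an iterator cannot be pattern-matched like a list.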
+ selectResult :: + ImmutableDB.Iterator m blk (Point blk, b) -> + m (Done (Point blk, b)) + selectResult immIt = do + itRes <- ImmutableDB.iteratorNext immIt + hasNext <- isJust <$> atomically (ImmutableDB.iteratorHasNext immIt) + case itRes of + ImmutableDB.IteratorResult blk -> select blk hasNext + ImmutableDB.IteratorExhausted -> return Done + where + select blk hasNext + | hasNext = + return $ NotDone blk + | otherwise = + ImmutableDB.iteratorClose immIt $> DoneAfter blk -- | Auxiliary data type used for 'selectResult' in 'implIteratorNext'. -data Done blk = - Done - -- ^ We're done with the iterator, either it is exhausted or we reached +data Done blk + = -- | We're done with the iterator, either it is exhausted or we reached -- its upper bound. - | DoneAfter blk - -- ^ We're done with the iterator, but have to return this last block. We + Done + | -- | We're done with the iterator, but have to return this last block. We -- must have reached its upper /inclusive/ bound. - | NotDone blk - -- ^ We're not done yet with the iterator and have to return this block. + DoneAfter blk + | -- | We're not done yet with the iterator and have to return this block. + NotDone blk missingBlockToUnknownRange :: - ImmutableDB.MissingBlock blk - -> UnknownRange blk + ImmutableDB.MissingBlock blk -> + UnknownRange blk missingBlockToUnknownRange = MissingBlock . ImmutableDB.missingBlockPoint -- | Close all open 'Iterator's. @@ -797,6 +835,6 @@ missingBlockToUnknownRange = MissingBlock . ImmutableDB.missingBlockPoint -- This /can/ be called when the ChainDB is already closed. closeAllIterators :: IOLike m => ChainDbEnv m blk -> m () closeAllIterators CDB{..} = do - iteratorClosers <- atomically $ Map.elems <$> readTVar cdbIterators - -- Note that each closer removes its entry from the 'cdbIterators' map. - sequence_ iteratorClosers + iteratorClosers <- atomically $ Map.elems <$> readTVar cdbIterators + -- Note that each closer removes its entry from the 'cdbIterators' map. 
+ sequence_ iteratorClosers diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Paths.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Paths.hs index c98f94d042..60e1fb6f0e 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Paths.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Paths.hs @@ -5,37 +5,44 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE StandaloneDeriving #-} -module Ouroboros.Consensus.Storage.ChainDB.Impl.Paths ( - -- * LookupBlockInfo +module Ouroboros.Consensus.Storage.ChainDB.Impl.Paths + ( -- * LookupBlockInfo LookupBlockInfo + -- * Candidates , extendWithSuccessors , maximalCandidates + -- * Path , Path (..) , computePath + -- * Reverse path , ReversePath (..) , computeReversePath + -- * Reachability , isReachable ) where -import Data.Foldable as Foldable (foldl') -import Data.List.NonEmpty (NonEmpty) -import qualified Data.List.NonEmpty as NE -import Data.Maybe (fromMaybe, mapMaybe) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word (Word64) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Fragment.Diff (ChainDiff (..)) -import qualified Ouroboros.Consensus.Fragment.Diff as Diff -import Ouroboros.Consensus.Storage.ChainDB.API hiding (ChainDB (..), - closeDB, getMaxSlotNo) -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF +import Data.Foldable as Foldable (foldl') +import Data.List.NonEmpty (NonEmpty) +import Data.List.NonEmpty qualified as NE +import Data.Maybe (fromMaybe, mapMaybe) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Fragment.Diff (ChainDiff (..)) +import 
Ouroboros.Consensus.Fragment.Diff qualified as Diff +import Ouroboros.Consensus.Storage.ChainDB.API hiding + ( ChainDB (..) + , closeDB + , getMaxSlotNo + ) +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF {------------------------------------------------------------------------------- LookupBlockInfo @@ -63,30 +70,32 @@ type LookupBlockInfo blk = HeaderHash blk -> Maybe (VolatileDB.BlockInfo blk) -- -- NOTE: it is possible that no candidates are found, but don't forget that -- the chain (fragment) ending with @B@ is also a potential candidate. --- maximalCandidates :: - forall blk. - (ChainHash blk -> Set (HeaderHash blk)) - -- ^ @filterByPredecessor@ - -> Maybe Word64 -- ^ Optional max length of any candidate, used during initial - -- chain selection when LoE is enabled. - -> Point blk -- ^ @B@ - -> [NonEmpty (HeaderHash blk)] - -- ^ Each element in the list is a list of hashes from which we can - -- construct a fragment anchored at the point @B@. + forall blk. + -- | @filterByPredecessor@ + (ChainHash blk -> Set (HeaderHash blk)) -> + -- | Optional max length of any candidate, used during initial + -- chain selection when LoE is enabled. + Maybe Word64 -> + -- | @B@ + Point blk -> + -- | Each element in the list is a list of hashes from which we can + -- construct a fragment anchored at the point @B@. + [NonEmpty (HeaderHash blk)] maximalCandidates succsOf sizeLimit b = - mapMaybe (NE.nonEmpty . trimToSizeLimit) $ go (pointHash b) - where - go :: ChainHash blk -> [[HeaderHash blk]] - go mbHash = case Set.toList $ succsOf mbHash of - [] -> [[]] - succs -> [ next : candidate - | next <- succs - , candidate <- go (BlockHash next) - ] - trimToSizeLimit = case sizeLimit of - Just limit -> take (fromIntegral limit) - Nothing -> id + mapMaybe (NE.nonEmpty . 
trimToSizeLimit) $ go (pointHash b) + where + go :: ChainHash blk -> [[HeaderHash blk]] + go mbHash = case Set.toList $ succsOf mbHash of + [] -> [[]] + succs -> + [ next : candidate + | next <- succs + , candidate <- go (BlockHash next) + ] + trimToSizeLimit = case sizeLimit of + Just limit -> take (fromIntegral limit) + Nothing -> id -- | Extend the 'ChainDiff' with the successors found by 'maximalCandidates'. -- @@ -99,29 +108,30 @@ maximalCandidates succsOf sizeLimit b = -- Only the longest possible extensions are returned, no intermediary prefixes -- of extensions. extendWithSuccessors :: - forall blk. HasHeader blk - => (ChainHash blk -> Set (HeaderHash blk)) - -> LookupBlockInfo blk - -> ChainDiff (HeaderFields blk) - -> NonEmpty (ChainDiff (HeaderFields blk)) + forall blk. + HasHeader blk => + (ChainHash blk -> Set (HeaderHash blk)) -> + LookupBlockInfo blk -> + ChainDiff (HeaderFields blk) -> + NonEmpty (ChainDiff (HeaderFields blk)) extendWithSuccessors succsOf lookupBlockInfo diff = - case NE.nonEmpty extensions of - Nothing -> diff NE.:| [] - Just extensions' -> extensions' - where - extensions = - [ Foldable.foldl' Diff.append diff (lookupHeaderFields <$> candHashes) - | candHashes <- maximalCandidates succsOf Nothing (castPoint (Diff.getTip diff)) - ] + case NE.nonEmpty extensions of + Nothing -> diff NE.:| [] + Just extensions' -> extensions' + where + extensions = + [ Foldable.foldl' Diff.append diff (lookupHeaderFields <$> candHashes) + | candHashes <- maximalCandidates succsOf Nothing (castPoint (Diff.getTip diff)) + ] - lookupHeaderFields :: HeaderHash blk -> HeaderFields blk - lookupHeaderFields = - headerFieldsFromBlockInfo - -- The successor mapping is populated with the blocks in the - -- VolatileDB, so looking up the block info of a successor /must/ - -- succeed. - . fromMaybe (error "successor must in the VolatileDB") - . 
lookupBlockInfo + lookupHeaderFields :: HeaderHash blk -> HeaderFields blk + lookupHeaderFields = + headerFieldsFromBlockInfo + -- The successor mapping is populated with the blocks in the + -- VolatileDB, so looking up the block info of a successor /must/ + -- succeed. + . fromMaybe (error "successor must in the VolatileDB") + . lookupBlockInfo {------------------------------------------------------------------------------- Paths @@ -136,86 +146,87 @@ extendWithSuccessors succsOf lookupBlockInfo diff = -- -- See the documentation of 'Path'. computePath :: - forall blk. HasHeader blk - => LookupBlockInfo blk - -> StreamFrom blk - -> StreamTo blk - -> Maybe (Path blk) + forall blk. + HasHeader blk => + LookupBlockInfo blk -> + StreamFrom blk -> + StreamTo blk -> + Maybe (Path blk) computePath lookupBlockInfo from to = - case computeReversePath lookupBlockInfo (realPointHash endPt) of - Nothing -> Just $ NotInVolatileDB endPt - Just volPath -> go [] volPath - where - endPt :: RealPoint blk - endPt = case to of - StreamToInclusive pt -> pt - - fieldsToRealPoint :: HeaderFields blk -> RealPoint blk - fieldsToRealPoint flds = - RealPoint (headerFieldSlot flds) (headerFieldHash flds) + case computeReversePath lookupBlockInfo (realPointHash endPt) of + Nothing -> Just $ NotInVolatileDB endPt + Just volPath -> go [] volPath + where + endPt :: RealPoint blk + endPt = case to of + StreamToInclusive pt -> pt - -- | Convert the 'HeaderFields' to a 'RealPoint' and prepend that to the - -- accumulator. - -- - -- NOTE: we will store the returned list in the state of a ChainDB - -- iterator as a lazy non-empty list. To avoid thunks, we force the - -- elements now, when adding them to the accumulator. TODO #2341 - addToAcc :: HeaderFields blk -> [RealPoint blk] -> [RealPoint blk] - addToAcc flds pts = pt : pts - -- When the returned list is forced, @pt@ is forced. The returned list - -- is forced because the accumulator is forced in @go@. 
- where - !pt = fieldsToRealPoint flds + fieldsToRealPoint :: HeaderFields blk -> RealPoint blk + fieldsToRealPoint flds = + RealPoint (headerFieldSlot flds) (headerFieldHash flds) - go :: - [RealPoint blk] -- ^ Accumulator for the 'Path' - -> ReversePath blk -- ^ Prefix of the path to 'StreamFrom' - -> Maybe (Path blk) - go !acc = \case - StoppedAtGenesis - | StreamFromExclusive GenesisPoint <- from - -> Just $ CompletelyInVolatileDB acc - | otherwise - -- If 'StreamFrom' was not from genesis, then the range must be - -- invalid. - -> Nothing + -- \| Convert the 'HeaderFields' to a 'RealPoint' and prepend that to the + -- accumulator. + -- + -- NOTE: we will store the returned list in the state of a ChainDB + -- iterator as a lazy non-empty list. To avoid thunks, we force the + -- elements now, when adding them to the accumulator. TODO #2341 + addToAcc :: HeaderFields blk -> [RealPoint blk] -> [RealPoint blk] + addToAcc flds pts = pt : pts + where + -- When the returned list is forced, @pt@ is forced. The returned list + -- is forced because the accumulator is forced in @go@. 
- StoppedAt hash _bno - | StreamFromExclusive GenesisPoint <- from - -> Just $ PartiallyInVolatileDB hash acc - | StreamFromExclusive (BlockPoint _ hash') <- from - , hash == hash' - -> Just $ CompletelyInVolatileDB acc - | StreamFromExclusive (BlockPoint _ _) <- from - -> Just $ PartiallyInVolatileDB hash acc - | StreamFromInclusive _ <- from - -> Just $ PartiallyInVolatileDB hash acc + !pt = fieldsToRealPoint flds - volPath' ::> (flds, _isEBB) - | StreamFromExclusive GenesisPoint <- from - -> go (addToAcc flds acc) volPath' - | StreamFromExclusive (BlockPoint _ hash') <- from - , headerFieldHash flds == hash' - -> Just $ CompletelyInVolatileDB acc - | StreamFromExclusive (BlockPoint _ _) <- from - -> go (addToAcc flds acc) volPath' - | StreamFromInclusive pt' <- from - , fieldsToRealPoint flds == pt' - -> Just $ CompletelyInVolatileDB (addToAcc flds acc) - | StreamFromInclusive _ <- from - -> go (addToAcc flds acc) volPath' + go :: + [RealPoint blk] -> + -- \^ Accumulator for the 'Path' + ReversePath blk -> + -- \^ Prefix of the path to 'StreamFrom' + Maybe (Path blk) + go !acc = \case + StoppedAtGenesis + | StreamFromExclusive GenesisPoint <- from -> + Just $ CompletelyInVolatileDB acc + | otherwise -> + -- If 'StreamFrom' was not from genesis, then the range must be + -- invalid. 
+ Nothing + StoppedAt hash _bno + | StreamFromExclusive GenesisPoint <- from -> + Just $ PartiallyInVolatileDB hash acc + | StreamFromExclusive (BlockPoint _ hash') <- from + , hash == hash' -> + Just $ CompletelyInVolatileDB acc + | StreamFromExclusive (BlockPoint _ _) <- from -> + Just $ PartiallyInVolatileDB hash acc + | StreamFromInclusive _ <- from -> + Just $ PartiallyInVolatileDB hash acc + volPath' ::> (flds, _isEBB) + | StreamFromExclusive GenesisPoint <- from -> + go (addToAcc flds acc) volPath' + | StreamFromExclusive (BlockPoint _ hash') <- from + , headerFieldHash flds == hash' -> + Just $ CompletelyInVolatileDB acc + | StreamFromExclusive (BlockPoint _ _) <- from -> + go (addToAcc flds acc) volPath' + | StreamFromInclusive pt' <- from + , fieldsToRealPoint flds == pt' -> + Just $ CompletelyInVolatileDB (addToAcc flds acc) + | StreamFromInclusive _ <- from -> + go (addToAcc flds acc) volPath' -- | A path through the VolatileDB from a 'StreamFrom' to a 'StreamTo'. -- -- Invariant: the @AnchoredFragment@ (oldest first) constructed using the blocks -- corresponding to the points in the path will be valid, i.e., the blocks -- will fit onto each other. -data Path blk = - NotInVolatileDB (RealPoint blk) - -- ^ The @end@ point (@'StreamToInclusive' end@) was not part of the +data Path blk + = -- | The @end@ point (@'StreamToInclusive' end@) was not part of the -- VolatileDB. - | CompletelyInVolatileDB [RealPoint blk] - -- ^ A complete path, from start point to end point was constructed from + NotInVolatileDB (RealPoint blk) + | -- | A complete path, from start point to end point was constructed from -- the VolatileDB. The list contains the points from oldest to newest. -- -- * If the lower bound was @'StreamFromInclusive' pt@, then @pt@ will be @@ -225,8 +236,8 @@ data Path blk = -- -- * If the upper bound was @'StreamToInclusive' pt@, then @pt@ will be -- the last element of the list. 
- | PartiallyInVolatileDB (HeaderHash blk) [RealPoint blk] - -- ^ Only a partial path could be constructed from the VolatileDB. The + CompletelyInVolatileDB [RealPoint blk] + | -- | Only a partial path could be constructed from the VolatileDB. The -- missing predecessor could still be in the ImmutableDB. The list -- contains the points from oldest to newest. -- @@ -241,8 +252,9 @@ data Path blk = -- 'StartToEnd'. -- -- The same invariants hold for the upper bound as for 'StartToEnd'. + PartiallyInVolatileDB (HeaderHash blk) [RealPoint blk] -deriving instance HasHeader blk => Eq (Path blk) +deriving instance HasHeader blk => Eq (Path blk) deriving instance HasHeader blk => Show (Path blk) {------------------------------------------------------------------------------- @@ -250,103 +262,104 @@ deriving instance HasHeader blk => Show (Path blk) -------------------------------------------------------------------------------} headerFieldsFromBlockInfo :: VolatileDB.BlockInfo blk -> HeaderFields blk -headerFieldsFromBlockInfo VolatileDB.BlockInfo { biSlotNo, biHash, biBlockNo } = - HeaderFields { - headerFieldHash = biHash - , headerFieldSlot = biSlotNo - , headerFieldBlockNo = biBlockNo - } +headerFieldsFromBlockInfo VolatileDB.BlockInfo{biSlotNo, biHash, biBlockNo} = + HeaderFields + { headerFieldHash = biHash + , headerFieldSlot = biSlotNo + , headerFieldBlockNo = biBlockNo + } -- | A reverse path through the VolatileDB starting at a block in the -- VolatileDB until we reach genesis or leave the VolatileDB. -data ReversePath blk = - -- | The path stopped at genesis - StoppedAtGenesis - - -- | The path stopped at this hash, which is the hash of the predecessor - -- of the last block in the path (that was still stored in the - -- VolatileDB). - -- - -- The block corresponding to the predecessor is /not/ stored in the - -- VolatileDB. Either because it is missing, or because it is old and - -- has been garbage collected. 
- -- - -- Since block numbers are consecutive, we subtract 1 from the block - -- number of the last block to obtain the block number corresponding to - -- this hash. - -- - -- EBBs share their block number with their predecessor: - -- - -- > block: regular block 1 | EBB | regular block 2 - -- > block number: X | X | X + 1 - -- - -- So when the hash refers to regular block 1, we see that the successor - -- block is an EBB and use its block number without subtracting 1. - -- - -- Edge case: if there are two or more consecutive EBBs, we might - -- predict the wrong block number, but there are no consecutive EBBs in - -- practice, they are one epoch apart. - | StoppedAt (HeaderHash blk) BlockNo - - -- | Snoc: the block with the given 'HeaderFields' is in the VolatileDB. - -- We also track whether it is an EBB or not. - -- - -- NOTE: we are intentionally lazy in the spine, as constructing the - -- path requires lookups in the VolatileDB's in-memory indices, which - -- are logarithmic in the size of the index. - | (ReversePath blk) ::> (HeaderFields blk, IsEBB) - --- | Lazily compute the 'ReversePath' that starts (i.e., ends) with the given --- 'HeaderHash'. -computeReversePath - :: forall blk. - LookupBlockInfo blk - -> HeaderHash blk - -- ^ End hash - -> Maybe (ReversePath blk) - -- ^ Reverse path from the end point to genesis or the first predecessor - -- not in the VolatileDB. Nothing when the end hash is not in the - -- VolatileDB. -computeReversePath lookupBlockInfo endHash = - case lookupBlockInfo endHash of - Nothing -> Nothing - Just blockInfo@VolatileDB.BlockInfo { biBlockNo, biIsEBB, biPrevHash } -> Just $ - go biPrevHash biBlockNo biIsEBB ::> (headerFieldsFromBlockInfo blockInfo, biIsEBB) - where - go :: - ChainHash blk - -- ^ The predecessor of the last block added to the path. Not - -- necessarily in the VolatileDB. 
- -> BlockNo -- ^ The block number of the last block - -> IsEBB -- ^ Whether the last block is an EBB or not - -> ReversePath blk - go predecessor lastBlockNo lastIsEBB = case predecessor of - GenesisHash -> StoppedAtGenesis - BlockHash prevHash -> case lookupBlockInfo prevHash of - Nothing -> - StoppedAt prevHash (prevBlockNo lastBlockNo lastIsEBB) - Just blockInfo@VolatileDB.BlockInfo { biBlockNo, biIsEBB, biPrevHash } -> - go biPrevHash biBlockNo biIsEBB ::> (headerFieldsFromBlockInfo blockInfo, biIsEBB) - - -- | Predict the block number of the missing predecessor. +data ReversePath blk + = -- | The path stopped at genesis + StoppedAtGenesis + | -- | The path stopped at this hash, which is the hash of the predecessor + -- of the last block in the path (that was still stored in the + -- VolatileDB). + -- + -- The block corresponding to the predecessor is /not/ stored in the + -- VolatileDB. Either because it is missing, or because it is old and + -- has been garbage collected. -- - -- PRECONDITION: the block number and 'IsEBB' correspond to a block that - -- has a predecessor. + -- Since block numbers are consecutive, we subtract 1 from the block + -- number of the last block to obtain the block number corresponding to + -- this hash. -- - -- For regular blocks, this is just block number - 1, EBBs are special of - -- course: they share their block number with their predecessor: + -- EBBs share their block number with their predecessor: -- -- > block: regular block 1 | EBB | regular block 2 -- > block number: X | X | X + 1 -- - -- Edge case: if there are two or more consecutive EBBs, we might predict - -- the wrong block number, but there are no consecutive EBBs in practice - -- (nor in the tests), they are one epoch apart. 
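The predecessor-block-number rule spelled out in the comment above fits in a few lines. A hypothetical sketch, with `Word` standing in for `BlockNo` and the `IsEBB` type re-declared locally for a self-contained example:

```haskell
data IsEBB = IsEBB | IsNotEBB
  deriving (Eq, Show)

-- A regular block's predecessor sits one block number lower; an EBB
-- shares its block number with its predecessor, so nothing is subtracted.
prevBlockNoSketch :: Word -> IsEBB -> Word
prevBlockNoSketch 0   IsNotEBB = error "precondition violated: no predecessor"
prevBlockNoSketch bno IsNotEBB = bno - 1
prevBlockNoSketch bno IsEBB    = bno
```

For the diagram above: regular block 2 (number X + 1) predicts X, the EBB's number, and the EBB (number X) also predicts X, the number of its regular predecessor.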
- prevBlockNo :: BlockNo -> IsEBB -> BlockNo - prevBlockNo bno isEBB = case (bno, isEBB) of - (0, IsNotEBB) -> error "precondition violated" - (_, IsNotEBB) -> bno - 1 - (_, IsEBB) -> bno + -- So when the hash refers to regular block 1, we see that the successor + -- block is an EBB and use its block number without subtracting 1. + -- + -- Edge case: if there are two or more consecutive EBBs, we might + -- predict the wrong block number, but there are no consecutive EBBs in + -- practice, they are one epoch apart. + StoppedAt (HeaderHash blk) BlockNo + | -- | Snoc: the block with the given 'HeaderFields' is in the VolatileDB. + -- We also track whether it is an EBB or not. + -- + -- NOTE: we are intentionally lazy in the spine, as constructing the + -- path requires lookups in the VolatileDB's in-memory indices, which + -- are logarithmic in the size of the index. + (ReversePath blk) ::> (HeaderFields blk, IsEBB) + +-- | Lazily compute the 'ReversePath' that starts (i.e., ends) with the given +-- 'HeaderHash'. +computeReversePath :: + forall blk. + LookupBlockInfo blk -> + -- | End hash + HeaderHash blk -> + -- | Reverse path from the end point to genesis or the first predecessor + -- not in the VolatileDB. Nothing when the end hash is not in the + -- VolatileDB. + Maybe (ReversePath blk) +computeReversePath lookupBlockInfo endHash = + case lookupBlockInfo endHash of + Nothing -> Nothing + Just blockInfo@VolatileDB.BlockInfo{biBlockNo, biIsEBB, biPrevHash} -> + Just $ + go biPrevHash biBlockNo biIsEBB ::> (headerFieldsFromBlockInfo blockInfo, biIsEBB) + where + go :: + ChainHash blk -> + -- \^ The predecessor of the last block added to the path. Not + -- necessarily in the VolatileDB. 
+ BlockNo -> + -- \^ The block number of the last block + IsEBB -> + -- \^ Whether the last block is an EBB or not + ReversePath blk + go predecessor lastBlockNo lastIsEBB = case predecessor of + GenesisHash -> StoppedAtGenesis + BlockHash prevHash -> case lookupBlockInfo prevHash of + Nothing -> + StoppedAt prevHash (prevBlockNo lastBlockNo lastIsEBB) + Just blockInfo@VolatileDB.BlockInfo{biBlockNo, biIsEBB, biPrevHash} -> + go biPrevHash biBlockNo biIsEBB ::> (headerFieldsFromBlockInfo blockInfo, biIsEBB) + + -- \| Predict the block number of the missing predecessor. + -- + -- PRECONDITION: the block number and 'IsEBB' correspond to a block that + -- has a predecessor. + -- + -- For regular blocks, this is just block number - 1, EBBs are special of + -- course: they share their block number with their predecessor: + -- + -- > block: regular block 1 | EBB | regular block 2 + -- > block number: X | X | X + 1 + -- + -- Edge case: if there are two or more consecutive EBBs, we might predict + -- the wrong block number, but there are no consecutive EBBs in practice + -- (nor in the tests), they are one epoch apart. + prevBlockNo :: BlockNo -> IsEBB -> BlockNo + prevBlockNo bno isEBB = case (bno, isEBB) of + (0, IsNotEBB) -> error "precondition violated" + (_, IsNotEBB) -> bno - 1 + (_, IsEBB) -> bno {------------------------------------------------------------------------------- Reachability @@ -377,94 +390,93 @@ computeReversePath lookupBlockInfo endHash = -- -- When the suffix of the 'ChainDiff' is non-empty, @P@ will be the last point -- in the suffix. -isReachable - :: forall blk. (HasHeader blk, GetHeader blk) - => LookupBlockInfo blk - -> AnchoredFragment (Header blk) -- ^ Chain fragment to connect the point to - -> RealPoint blk - -> Maybe (ChainDiff (HeaderFields blk)) +isReachable :: + forall blk. 
+ (HasHeader blk, GetHeader blk) => + LookupBlockInfo blk -> + -- | Chain fragment to connect the point to + AnchoredFragment (Header blk) -> + RealPoint blk -> + Maybe (ChainDiff (HeaderFields blk)) isReachable lookupBlockInfo = \chain b -> - case computeReversePath lookupBlockInfo (realPointHash b) of - -- Block not in the VolatileDB, so it's unreachable - Nothing -> Nothing - Just reversePath -> go chain reversePath 0 [] - where - -- | NOTE: the 'ReversePath' is lazy in its spine. We will only force as - -- many elements as 'RealPoint's we return. In the worst case, the path is - -- not connected to the current chain at all, in which case we do force - -- the entire path. - -- - -- We're trying to find a common block, i.e., one with the same point and - -- thus the same slot. Both the chain and the path are ordered by slots, - -- so we compare the slots and drop the largest one until we have a match - -- in slot, then we check hashes. If those don't match, we drop both. - -- Note: EBBs complicate things, see 'ebbAwareCompare'. 
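The walk described in the note above, two newest-first sequences from which we repeatedly drop the larger head until the heads meet, can be sketched independently. Everything here is invented for illustration: plain `Int` block numbers, no hash checks, and no EBB tie-break.

```haskell
-- Walk two newest-first lists of block numbers toward a common element,
-- counting how many entries were dropped from the first (chain) list;
-- that count plays the role of the rollback in the resulting 'ChainDiff'.
meet :: [Int] -> [Int] -> Maybe (Int, Int)
meet (c : cs) (p : ps)
  | c > p     = fmap (\(r, x) -> (r + 1, x)) (meet cs (p : ps)) -- drop from chain
  | c < p     = meet (c : cs) ps                                -- drop from path
  | otherwise = Just (0, c)                                     -- heads meet
meet _ _ = Nothing                                              -- no common block
```

The real `go` additionally compares hashes on a tie and may drop from both sides when the hashes differ.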
- go - :: AnchoredFragment (Header blk) - -- ^ Prefix of the current chain - -> ReversePath blk - -- ^ Prefix of the path through the VolatileDB - -> Word64 - -- ^ Number of blocks we have had to roll back from the current chain - -> [HeaderFields blk] - -- ^ Accumulator for the suffix, from oldest to newest - -> Maybe (ChainDiff (HeaderFields blk)) - go chain path !rollback acc = case (chain, path) of - (AF.Empty anchor, StoppedAt hash bno) - | AF.anchorToBlockNo anchor == NotOrigin bno - , AF.anchorToHash anchor == BlockHash hash - -> Just (ChainDiff rollback (AF.fromOldestFirst (AF.castAnchor anchor) acc)) - | otherwise - -> Nothing - - (AF.Empty anchor, path' ::> (flds, _)) - | AF.anchorToHeaderFields (AF.castAnchor anchor) == NotOrigin flds - -> Just (ChainDiff rollback (AF.fromOldestFirst (AF.castAnchor anchor) acc)) - | AF.anchorToBlockNo anchor > NotOrigin (headerFieldBlockNo flds) - -> Nothing - | otherwise - -> go chain path' rollback (flds:acc) - - (chain' AF.:> hdr, StoppedAt hash bno) - | blockNo hdr == bno - , headerHash hdr == hash - , let anchor = AF.castAnchor (AF.anchorFromBlock hdr) - -> Just (ChainDiff rollback (AF.fromOldestFirst anchor acc)) - | blockNo hdr < bno - -> Nothing - | otherwise - -> go chain' path (rollback + 1) acc - - (_, StoppedAtGenesis) - | AF.anchorIsGenesis (AF.anchor chain) - -> let !rollback' = rollback + fromIntegral (AF.length chain) - in Just (ChainDiff rollback' (AF.fromOldestFirst AF.AnchorGenesis acc)) - | otherwise - -> Nothing - - (chain' AF.:> hdr, path' ::> (flds, ptIsEBB)) -> - case hdr `ebbAwareCompare` (headerFieldBlockNo flds, ptIsEBB) of - -- Drop from the path - LT -> go chain path' rollback (flds:acc) - -- Drop from the current chain fragment - GT -> go chain' path (rollback + 1) acc - -- Same slot and value for 'IsEBB' - EQ | blockHash hdr == headerFieldHash flds - , let anchor = AF.castAnchor (AF.anchorFromBlock hdr) - -- Found a match - -> Just (ChainDiff rollback (AF.fromOldestFirst anchor acc)) - -- 
Different hashes, drop both - | otherwise - -> go chain' path' (rollback + 1) (flds:acc) + case computeReversePath lookupBlockInfo (realPointHash b) of + -- Block not in the VolatileDB, so it's unreachable + Nothing -> Nothing + Just reversePath -> go chain reversePath 0 [] + where + -- \| NOTE: the 'ReversePath' is lazy in its spine. We will only force as + -- many elements as 'RealPoint's we return. In the worst case, the path is + -- not connected to the current chain at all, in which case we do force + -- the entire path. + -- + -- We're trying to find a common block, i.e., one with the same point and + -- thus the same slot. Both the chain and the path are ordered by slots, + -- so we compare the slots and drop the largest one until we have a match + -- in slot, then we check hashes. If those don't match, we drop both. + -- Note: EBBs complicate things, see 'ebbAwareCompare'. + go :: + AnchoredFragment (Header blk) -> + -- \^ Prefix of the current chain + ReversePath blk -> + -- \^ Prefix of the path through the VolatileDB + Word64 -> + -- \^ Number of blocks we have had to roll back from the current chain + [HeaderFields blk] -> + -- \^ Accumulator for the suffix, from oldest to newest + Maybe (ChainDiff (HeaderFields blk)) + go chain path !rollback acc = case (chain, path) of + (AF.Empty anchor, StoppedAt hash bno) + | AF.anchorToBlockNo anchor == NotOrigin bno + , AF.anchorToHash anchor == BlockHash hash -> + Just (ChainDiff rollback (AF.fromOldestFirst (AF.castAnchor anchor) acc)) + | otherwise -> + Nothing + (AF.Empty anchor, path' ::> (flds, _)) + | AF.anchorToHeaderFields (AF.castAnchor anchor) == NotOrigin flds -> + Just (ChainDiff rollback (AF.fromOldestFirst (AF.castAnchor anchor) acc)) + | AF.anchorToBlockNo anchor > NotOrigin (headerFieldBlockNo flds) -> + Nothing + | otherwise -> + go chain path' rollback (flds : acc) + (chain' AF.:> hdr, StoppedAt hash bno) + | blockNo hdr == bno + , headerHash hdr == hash + , let anchor = AF.castAnchor 
(AF.anchorFromBlock hdr) -> + Just (ChainDiff rollback (AF.fromOldestFirst anchor acc)) + | blockNo hdr < bno -> + Nothing + | otherwise -> + go chain' path (rollback + 1) acc + (_, StoppedAtGenesis) + | AF.anchorIsGenesis (AF.anchor chain) -> + let !rollback' = rollback + fromIntegral (AF.length chain) + in Just (ChainDiff rollback' (AF.fromOldestFirst AF.AnchorGenesis acc)) + | otherwise -> + Nothing + (chain' AF.:> hdr, path' ::> (flds, ptIsEBB)) -> + case hdr `ebbAwareCompare` (headerFieldBlockNo flds, ptIsEBB) of + -- Drop from the path + LT -> go chain path' rollback (flds : acc) + -- Drop from the current chain fragment + GT -> go chain' path (rollback + 1) acc + -- Same slot and value for 'IsEBB' + EQ + | blockHash hdr == headerFieldHash flds + , let anchor = AF.castAnchor (AF.anchorFromBlock hdr) -> + -- Found a match + Just (ChainDiff rollback (AF.fromOldestFirst anchor acc)) + -- Different hashes, drop both + | otherwise -> + go chain' path' (rollback + 1) (flds : acc) - -- | EBBs have the same block number as their predecessor, which means - -- that in case we have an EBB and a regular block with the same slot, the - -- EBB comes /after/ the regular block. - ebbAwareCompare :: Header blk -> (BlockNo, IsEBB) -> Ordering - ebbAwareCompare hdr (ptBlockNo, ptIsEBB) = - compare (blockNo hdr) ptBlockNo `mappend` - case (headerToIsEBB hdr, ptIsEBB) of - (IsEBB, IsNotEBB) -> GT - (IsNotEBB, IsEBB) -> LT - (IsEBB, IsEBB) -> EQ + -- \| EBBs have the same block number as their predecessor, which means + -- that in case we have an EBB and a regular block with the same slot, the + -- EBB comes /after/ the regular block. 
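The tie-break in `ebbAwareCompare` relies on the `Monoid` instance of `Ordering`: `mappend` keeps the first comparison unless it is `EQ`. A standalone sketch over invented `(block number, EBB-ness)` pairs; the real function inspects a `Header`, and `IsEBB` is re-declared here only to keep the example self-contained.

```haskell
data IsEBB = IsEBB | IsNotEBB
  deriving (Eq, Show)

-- Compare pairs: the block number decides first; on a tie, the EBB
-- sorts after the regular block that shares its block number.
ebbAwareCompareSketch :: (Word, IsEBB) -> (Word, IsEBB) -> Ordering
ebbAwareCompareSketch (n1, e1) (n2, e2) =
  compare n1 n2 `mappend` case (e1, e2) of
    (IsEBB, IsNotEBB) -> GT
    (IsNotEBB, IsEBB) -> LT
    _ -> EQ
```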
+ ebbAwareCompare :: Header blk -> (BlockNo, IsEBB) -> Ordering + ebbAwareCompare hdr (ptBlockNo, ptIsEBB) = + compare (blockNo hdr) ptBlockNo + `mappend` case (headerToIsEBB hdr, ptIsEBB) of + (IsEBB, IsNotEBB) -> GT + (IsNotEBB, IsEBB) -> LT + (IsEBB, IsEBB) -> EQ (IsNotEBB, IsNotEBB) -> EQ diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Query.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Query.hs index ba5a3e06bf..4ceebfa3ce 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Query.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Query.hs @@ -5,8 +5,8 @@ {-# LANGUAGE ScopedTypeVariables #-} -- | Queries -module Ouroboros.Consensus.Storage.ChainDB.Impl.Query ( - -- * Queries +module Ouroboros.Consensus.Storage.ChainDB.Impl.Query + ( -- * Queries getBlockComponent , getCurrentChain , getCurrentChainWithTime @@ -24,6 +24,7 @@ module Ouroboros.Consensus.Storage.ChainDB.Impl.Query ( , getTipBlock , getTipHeader , getTipPoint + -- * Low-level queries , getAnyBlockComponent , getAnyKnownBlock @@ -31,35 +32,39 @@ module Ouroboros.Consensus.Storage.ChainDB.Impl.Query ( , getChainSelStarvation ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Control.ResourceRegistry (ResourceRegistry) -import qualified Data.Map.Strict as Map -import qualified Data.Set as Set -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HeaderStateHistory - (HeaderStateHistory (..)) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime) -import Ouroboros.Consensus.Ledger.Abstract (EmptyMK, KeysMK, ValuesMK) -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB.API (BlockComponent (..), - ChainDbFailure (..)) -import Ouroboros.Consensus.Storage.ChainDB.Impl.Types -import 
Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB) -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import Ouroboros.Consensus.Storage.VolatileDB (VolatileDB) -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import Ouroboros.Consensus.Util (eitherToMaybe) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.STM (WithFingerprint (..)) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (MaxSlotNo, maxSlotNoFromWithOrigin) -import Ouroboros.Network.BlockFetch.ConsensusInterface - (ChainSelStarvation (..)) -import Ouroboros.Network.Protocol.LocalStateQuery.Type +import Cardano.Ledger.BaseTypes (unNonZero) +import Control.ResourceRegistry (ResourceRegistry) +import Data.Map.Strict qualified as Map +import Data.Set qualified as Set +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HeaderStateHistory + ( HeaderStateHistory (..) + ) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime) +import Ouroboros.Consensus.Ledger.Abstract (EmptyMK, KeysMK, ValuesMK) +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.API + ( BlockComponent (..) + , ChainDbFailure (..) 
+ ) +import Ouroboros.Consensus.Storage.ChainDB.Impl.Types +import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB) +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Storage.VolatileDB (VolatileDB) +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import Ouroboros.Consensus.Util (eitherToMaybe) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.STM (WithFingerprint (..)) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (MaxSlotNo, maxSlotNoFromWithOrigin) +import Ouroboros.Network.BlockFetch.ConsensusInterface + ( ChainSelStarvation (..) + ) +import Ouroboros.Network.Protocol.LocalStateQuery.Type -- | Return the last @k@ headers. -- @@ -77,31 +82,31 @@ import Ouroboros.Network.Protocol.LocalStateQuery.Type -- account, as we know they /must/ have been \"immutable\" at some point, and, -- therefore, /must/ still be \"immutable\". getCurrentChain :: - forall m blk. - ( IOLike m - , HasHeader (Header blk) - , ConsensusProtocol (BlockProtocol blk) - ) - => ChainDbEnv m blk - -> STM m (AnchoredFragment (Header blk)) + forall m blk. + ( IOLike m + , HasHeader (Header blk) + , ConsensusProtocol (BlockProtocol blk) + ) => + ChainDbEnv m blk -> + STM m (AnchoredFragment (Header blk)) getCurrentChain CDB{..} = - AF.anchorNewest (unNonZero k) . icWithoutTime <$> readTVar cdbChain - where - SecurityParam k = configSecurityParam cdbTopLevelConfig + AF.anchorNewest (unNonZero k) . icWithoutTime <$> readTVar cdbChain + where + SecurityParam k = configSecurityParam cdbTopLevelConfig -- | Same as 'getCurrentChain', /mutatis mutandi/. getCurrentChainWithTime :: - forall m blk. 
- ( IOLike m - , HasHeader (HeaderWithTime blk) - , ConsensusProtocol (BlockProtocol blk) - ) - => ChainDbEnv m blk - -> STM m (AnchoredFragment (HeaderWithTime blk)) + forall m blk. + ( IOLike m + , HasHeader (HeaderWithTime blk) + , ConsensusProtocol (BlockProtocol blk) + ) => + ChainDbEnv m blk -> + STM m (AnchoredFragment (HeaderWithTime blk)) getCurrentChainWithTime CDB{..} = - AF.anchorNewest (unNonZero k) . icWithTime <$> readTVar cdbChain - where - SecurityParam k = configSecurityParam cdbTopLevelConfig + AF.anchorNewest (unNonZero k) . icWithTime <$> readTVar cdbChain + where + SecurityParam k = configSecurityParam cdbTopLevelConfig -- | Get a 'HeaderStateHistory' populated with the 'HeaderState's of the -- last @k@ blocks of the current chain. @@ -109,114 +114,124 @@ getHeaderStateHistory :: ChainDbEnv m blk -> STM m (HeaderStateHistory blk) getHeaderStateHistory = LedgerDB.getHeaderStateHistory . cdbLedgerDB getTipBlock :: - forall m blk. - ( IOLike m - , HasHeader blk - , HasHeader (Header blk) - ) - => ChainDbEnv m blk - -> m (Maybe blk) + forall m blk. + ( IOLike m + , HasHeader blk + , HasHeader (Header blk) + ) => + ChainDbEnv m blk -> + m (Maybe blk) getTipBlock cdb@CDB{..} = do - tipPoint <- atomically $ getTipPoint cdb - case pointToWithOriginRealPoint tipPoint of - Origin -> return Nothing - NotOrigin p -> Just <$> getAnyKnownBlock cdbImmutableDB cdbVolatileDB p + tipPoint <- atomically $ getTipPoint cdb + case pointToWithOriginRealPoint tipPoint of + Origin -> return Nothing + NotOrigin p -> Just <$> getAnyKnownBlock cdbImmutableDB cdbVolatileDB p getTipHeader :: - forall m blk. - ( IOLike m - , HasHeader blk - , HasHeader (Header blk) - ) - => ChainDbEnv m blk - -> m (Maybe (Header blk)) + forall m blk. + ( IOLike m + , HasHeader blk + , HasHeader (Header blk) + ) => + ChainDbEnv m blk -> + m (Maybe (Header blk)) getTipHeader CDB{..} = do - anchorOrHdr <- AF.head . 
icWithoutTime <$> atomically (readTVar cdbChain) - case anchorOrHdr of - Right hdr -> return $ Just hdr - Left anch -> - case pointToWithOriginRealPoint (castPoint (AF.anchorToPoint anch)) of - Origin -> return Nothing - NotOrigin p -> - -- In this case, the fragment is empty but the anchor point is not - -- genesis. It must be that the VolatileDB got emptied and that our - -- current tip is now the tip of the ImmutableDB. + anchorOrHdr <- AF.head . icWithoutTime <$> atomically (readTVar cdbChain) + case anchorOrHdr of + Right hdr -> return $ Just hdr + Left anch -> + case pointToWithOriginRealPoint (castPoint (AF.anchorToPoint anch)) of + Origin -> return Nothing + NotOrigin p -> + -- In this case, the fragment is empty but the anchor point is not + -- genesis. It must be that the VolatileDB got emptied and that our + -- current tip is now the tip of the ImmutableDB. - -- Note that we can't use 'getBlockAtTip' because a block might have - -- been appended to the ImmutableDB since we obtained 'anchorOrHdr'. - Just <$> ImmutableDB.getKnownBlockComponent cdbImmutableDB GetHeader p + -- Note that we can't use 'getBlockAtTip' because a block might have + -- been appended to the ImmutableDB since we obtained 'anchorOrHdr'. + Just <$> ImmutableDB.getKnownBlockComponent cdbImmutableDB GetHeader p getTipPoint :: - forall m blk. (IOLike m, HasHeader (Header blk)) - => ChainDbEnv m blk -> STM m (Point blk) + forall m blk. + (IOLike m, HasHeader (Header blk)) => + ChainDbEnv m blk -> STM m (Point blk) getTipPoint CDB{..} = - (castPoint . AF.headPoint . icWithoutTime) <$> readTVar cdbChain + (castPoint . AF.headPoint . icWithoutTime) <$> readTVar cdbChain getBlockComponent :: - forall m blk b. IOLike m - => ChainDbEnv m blk - -> BlockComponent blk b - -> RealPoint blk -> m (Maybe b) + forall m blk b. 
+ IOLike m => + ChainDbEnv m blk -> + BlockComponent blk b -> + RealPoint blk -> + m (Maybe b) getBlockComponent CDB{..} = getAnyBlockComponent cdbImmutableDB cdbVolatileDB getIsFetched :: - forall m blk. (IOLike m, HasHeader blk) - => ChainDbEnv m blk -> STM m (Point blk -> Bool) + forall m blk. + (IOLike m, HasHeader blk) => + ChainDbEnv m blk -> STM m (Point blk -> Bool) getIsFetched CDB{..} = do - checkQueue <- memberChainSelQueue cdbChainSelQueue - checkVolDb <- VolatileDB.getIsMember cdbVolatileDB - return $ \pt -> - case pointToWithOriginRealPoint pt of - Origin -> False - NotOrigin pt' -> checkQueue pt' || checkVolDb (realPointHash pt') + checkQueue <- memberChainSelQueue cdbChainSelQueue + checkVolDb <- VolatileDB.getIsMember cdbVolatileDB + return $ \pt -> + case pointToWithOriginRealPoint pt of + Origin -> False + NotOrigin pt' -> checkQueue pt' || checkVolDb (realPointHash pt') getIsInvalidBlock :: - forall m blk. (IOLike m, HasHeader blk) - => ChainDbEnv m blk - -> STM m (WithFingerprint (HeaderHash blk -> Maybe (ExtValidationError blk))) + forall m blk. + (IOLike m, HasHeader blk) => + ChainDbEnv m blk -> + STM m (WithFingerprint (HeaderHash blk -> Maybe (ExtValidationError blk))) getIsInvalidBlock CDB{..} = fmap (fmap (fmap invalidBlockReason) . flip Map.lookup) <$> readTVar cdbInvalid getChainSelStarvation :: - forall m blk. IOLike m - => ChainDbEnv m blk - -> STM m ChainSelStarvation -getChainSelStarvation CDB {..} = readTVar cdbChainSelStarvation + forall m blk. + IOLike m => + ChainDbEnv m blk -> + STM m ChainSelStarvation +getChainSelStarvation CDB{..} = readTVar cdbChainSelStarvation getIsValid :: - forall m blk. (IOLike m, HasHeader blk) - => ChainDbEnv m blk - -> STM m (RealPoint blk -> Maybe Bool) + forall m blk. 
+ (IOLike m, HasHeader blk) => + ChainDbEnv m blk -> + STM m (RealPoint blk -> Maybe Bool) getIsValid CDB{..} = do - prevApplied <- LedgerDB.getPrevApplied cdbLedgerDB - invalid <- forgetFingerprint <$> readTVar cdbInvalid - return $ \pt@(RealPoint _ hash) -> - -- A block can not both be in the set of invalid blocks and - -- previously-applied blocks, so the order in which we check them does not - -- matter. - if | Map.member hash invalid -> Just False - | Set.member pt prevApplied -> Just True - | otherwise -> Nothing + prevApplied <- LedgerDB.getPrevApplied cdbLedgerDB + invalid <- forgetFingerprint <$> readTVar cdbInvalid + return $ \pt@(RealPoint _ hash) -> + -- A block can not both be in the set of invalid blocks and + -- previously-applied blocks, so the order in which we check them does not + -- matter. + if + | Map.member hash invalid -> Just False + | Set.member pt prevApplied -> Just True + | otherwise -> Nothing getMaxSlotNo :: - forall m blk. (IOLike m, HasHeader (Header blk)) - => ChainDbEnv m blk -> STM m MaxSlotNo + forall m blk. + (IOLike m, HasHeader (Header blk)) => + ChainDbEnv m blk -> STM m MaxSlotNo getMaxSlotNo CDB{..} = do - -- Note that we need to look at both the current chain and the VolatileDB - -- in all cases (even when the VolatileDB is not empty), because the - -- VolatileDB might have been corrupted. - -- - -- For example, imagine the VolatileDB has been corrupted so that it only - -- contains block 9'. The ImmutableDB contains blocks 1-10. The max slot - -- of the current chain will be 10 (being the anchor point of the empty - -- current chain), while the max slot of the VolatileDB will be 9. - -- - -- Moreover, we have to look in 'ChainSelQueue' too. - curChainMaxSlotNo <- maxSlotNoFromWithOrigin . AF.headSlot . 
icWithoutTime - <$> readTVar cdbChain - volatileDbMaxSlotNo <- VolatileDB.getMaxSlotNo cdbVolatileDB - queuedMaxSlotNo <- getMaxSlotNoChainSelQueue cdbChainSelQueue - return $ curChainMaxSlotNo `max` volatileDbMaxSlotNo `max` queuedMaxSlotNo + -- Note that we need to look at both the current chain and the VolatileDB + -- in all cases (even when the VolatileDB is not empty), because the + -- VolatileDB might have been corrupted. + -- + -- For example, imagine the VolatileDB has been corrupted so that it only + -- contains block 9'. The ImmutableDB contains blocks 1-10. The max slot + -- of the current chain will be 10 (being the anchor point of the empty + -- current chain), while the max slot of the VolatileDB will be 9. + -- + -- Moreover, we have to look in 'ChainSelQueue' too. + curChainMaxSlotNo <- + maxSlotNoFromWithOrigin . AF.headSlot . icWithoutTime + <$> readTVar cdbChain + volatileDbMaxSlotNo <- VolatileDB.getMaxSlotNo cdbVolatileDB + queuedMaxSlotNo <- getMaxSlotNoChainSelQueue cdbChainSelQueue + return $ curChainMaxSlotNo `max` volatileDbMaxSlotNo `max` queuedMaxSlotNo -- | Get current ledger getCurrentLedger :: ChainDbEnv m blk -> STM m (ExtLedgerState blk EmptyMK) @@ -232,27 +247,27 @@ getImmutableLedger CDB{..} = LedgerDB.getImmutableTip cdbLedgerDB -- chain (i.e., older than @k@ or not on the current chain), 'Nothing' is -- returned. 
getPastLedger :: - ChainDbEnv m blk - -> Point blk - -> STM m (Maybe (ExtLedgerState blk EmptyMK)) + ChainDbEnv m blk -> + Point blk -> + STM m (Maybe (ExtLedgerState blk EmptyMK)) getPastLedger CDB{..} = LedgerDB.getPastLedgerState cdbLedgerDB getReadOnlyForkerAtPoint :: - IOLike m - => ChainDbEnv m blk - -> ResourceRegistry m - -> Target (Point blk) - -> m (Either LedgerDB.GetForkerError (LedgerDB.ReadOnlyForker' m blk)) + IOLike m => + ChainDbEnv m blk -> + ResourceRegistry m -> + Target (Point blk) -> + m (Either LedgerDB.GetForkerError (LedgerDB.ReadOnlyForker' m blk)) getReadOnlyForkerAtPoint CDB{..} = LedgerDB.getReadOnlyForker cdbLedgerDB getLedgerTablesAtFor :: - IOLike m - => ChainDbEnv m blk - -> Point blk - -> LedgerTables (ExtLedgerState blk) KeysMK - -> m (Maybe (LedgerTables (ExtLedgerState blk) ValuesMK)) + IOLike m => + ChainDbEnv m blk -> + Point blk -> + LedgerTables (ExtLedgerState blk) KeysMK -> + m (Maybe (LedgerTables (ExtLedgerState blk) ValuesMK)) getLedgerTablesAtFor = - (\ldb pt ks -> eitherToMaybe <$> LedgerDB.readLedgerTablesAtFor ldb pt ks) + (\ldb pt ks -> eitherToMaybe <$> LedgerDB.readLedgerTablesAtFor ldb pt ks) . cdbLedgerDB getStatistics :: IOLike m => ChainDbEnv m blk -> m (Maybe LedgerDB.Statistics) @@ -266,83 +281,86 @@ getStatistics CDB{..} = LedgerDB.getTipStatistics cdbLedgerDB -- | Variant of 'getAnyBlockComponent' instantiated with 'GetBlock'. getAnyKnownBlock :: - forall m blk. - ( IOLike m - , HasHeader blk - ) - => ImmutableDB m blk - -> VolatileDB m blk - -> RealPoint blk - -> m blk + forall m blk. + ( IOLike m + , HasHeader blk + ) => + ImmutableDB m blk -> + VolatileDB m blk -> + RealPoint blk -> + m blk getAnyKnownBlock immutableDB volatileDB = - getAnyKnownBlockComponent immutableDB volatileDB GetBlock + getAnyKnownBlockComponent immutableDB volatileDB GetBlock -- | Wrapper around 'getAnyBlockComponent' for blocks we know should exist. -- -- If the block does not exist, this indicates disk failure. 
getAnyKnownBlockComponent :: - forall m blk b. - ( IOLike m - , HasHeader blk - ) - => ImmutableDB m blk - -> VolatileDB m blk - -> BlockComponent blk b - -> RealPoint blk - -> m b + forall m blk b. + ( IOLike m + , HasHeader blk + ) => + ImmutableDB m blk -> + VolatileDB m blk -> + BlockComponent blk b -> + RealPoint blk -> + m b getAnyKnownBlockComponent immutableDB volatileDB blockComponent p = do - mBlock <- - mustExist p <$> - getAnyBlockComponent immutableDB volatileDB blockComponent p - case mBlock of - Right b -> return b - Left err -> throwIO err + mBlock <- + mustExist p + <$> getAnyBlockComponent immutableDB volatileDB blockComponent p + case mBlock of + Right b -> return b + Left err -> throwIO err -- | Get a block component from either the immutable DB or volatile DB. -- -- Returns 'Nothing' if the 'Point' is unknown. -- Throws 'NoGenesisBlockException' if the 'Point' refers to the genesis block. getAnyBlockComponent :: - forall m blk b. IOLike m - => ImmutableDB m blk - -> VolatileDB m blk - -> BlockComponent blk b - -> RealPoint blk - -> m (Maybe b) + forall m blk b. + IOLike m => + ImmutableDB m blk -> + VolatileDB m blk -> + BlockComponent blk b -> + RealPoint blk -> + m (Maybe b) getAnyBlockComponent immutableDB volatileDB blockComponent p = do - -- Note: to determine whether a block is in the ImmutableDB, we can - -- look at the slot of its tip, which we'll call @immTipSlot@. If the - -- slot of the requested point > @immTipSlot@, then the block will not - -- be in the ImmutableDB but in the VolatileDB. However, there is a - -- race condition here: if between the time we got @immTipSlot@ and - -- the time we look up the block in the VolatileDB the block was moved - -- from the VolatileDB to the ImmutableDB, and it was deleted from the - -- VolatileDB, we won't find the block, even though it is in the - -- ChainDB. 
- -- - -- Therefore, we first query the VolatileDB and if the block is not in - -- it, then we can get @immTipSlot@ and compare it to the slot of the - -- requested point. If the slot <= @immTipSlot@ it /must/ be in the - -- ImmutableDB (no race condition here). - mbVolatileB <- VolatileDB.getBlockComponent - volatileDB - blockComponent - hash - case mbVolatileB of - Just b -> return $ Just b - Nothing -> do - -- ImmutableDB will throw an exception if we ask for a block past the tip - immTipSlot <- atomically $ ImmutableDB.getTipSlot immutableDB - if NotOrigin (realPointSlot p) > immTipSlot then + -- Note: to determine whether a block is in the ImmutableDB, we can + -- look at the slot of its tip, which we'll call @immTipSlot@. If the + -- slot of the requested point > @immTipSlot@, then the block will not + -- be in the ImmutableDB but in the VolatileDB. However, there is a + -- race condition here: if between the time we got @immTipSlot@ and + -- the time we look up the block in the VolatileDB the block was moved + -- from the VolatileDB to the ImmutableDB, and it was deleted from the + -- VolatileDB, we won't find the block, even though it is in the + -- ChainDB. + -- + -- Therefore, we first query the VolatileDB and if the block is not in + -- it, then we can get @immTipSlot@ and compare it to the slot of the + -- requested point. If the slot <= @immTipSlot@ it /must/ be in the + -- ImmutableDB (no race condition here). + mbVolatileB <- + VolatileDB.getBlockComponent + volatileDB + blockComponent + hash + case mbVolatileB of + Just b -> return $ Just b + Nothing -> do + -- ImmutableDB will throw an exception if we ask for a block past the tip + immTipSlot <- atomically $ ImmutableDB.getTipSlot immutableDB + if NotOrigin (realPointSlot p) > immTipSlot + then -- It's not supposed to be in the ImmutableDB and the VolatileDB -- didn't contain it, so return 'Nothing'. 
return Nothing else - eitherToMaybe <$> - ImmutableDB.getBlockComponent immutableDB blockComponent p - where - hash = realPointHash p + eitherToMaybe + <$> ImmutableDB.getBlockComponent immutableDB blockComponent p + where + hash = realPointHash p mustExist :: RealPoint blk -> Maybe b -> Either (ChainDbFailure blk) b -mustExist p Nothing = Left $ ChainDbMissingBlock p +mustExist p Nothing = Left $ ChainDbMissingBlock p mustExist _ (Just b) = Right b diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Types.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Types.hs index 4d2af17d08..cb7488ad00 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Types.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Impl/Types.hs @@ -19,8 +19,8 @@ -- | Types used throughout the implementation: handle, state, environment, -- types, trace types, etc. -module Ouroboros.Consensus.Storage.ChainDB.Impl.Types ( - ChainDbEnv (..) +module Ouroboros.Consensus.Storage.ChainDB.Impl.Types + ( ChainDbEnv (..) , ChainDbHandle (..) , ChainDbState (..) , ChainSelectionPromise (..) @@ -30,21 +30,26 @@ module Ouroboros.Consensus.Storage.ChainDB.Impl.Types ( , getEnv2 , getEnvSTM , getEnvSTM1 + -- * Exposed internals for testing purposes , Internal (..) , InternalChain (..) , checkInternalChain + -- * Iterator-related , IteratorKey (..) + -- * Follower-related , FollowerHandle (..) , FollowerKey (..) , FollowerRollState (..) , FollowerState (..) , followerRollStatePoint + -- * Invalid blocks , InvalidBlockInfo (..) , InvalidBlocks + -- * Blocks to add , BlockToAdd (..) , ChainSelMessage (..) @@ -57,6 +62,7 @@ module Ouroboros.Consensus.Storage.ChainDB.Impl.Types ( , memberChainSelQueue , newChainSelQueue , processedChainSelMessage + -- * Trace types , SelectionChangedInfo (..) , TraceAddBlockEvent (..) 
@@ -72,114 +78,144 @@ module Ouroboros.Consensus.Storage.ChainDB.Impl.Types ( , TraceValidationEvent (..) ) where -import Control.Monad (when) -import Control.ResourceRegistry -import Control.Tracer -import Data.Foldable (traverse_) -import Data.Map.Strict (Map) -import Data.Maybe (mapMaybe) -import Data.Maybe.Strict (StrictMaybe (..)) -import Data.MultiSet (MultiSet) -import qualified Data.MultiSet as MultiSet -import Data.Set (Set) -import Data.Typeable -import Data.Void (Void) -import Data.Word (Word64) -import GHC.Generics (Generic) -import NoThunks.Class (OnlyCheckWhnfNamed (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Fragment.Diff (ChainDiff) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) -import Ouroboros.Consensus.Ledger.Extended (ExtValidationError) -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB.API (AddBlockPromise (..), - AddBlockResult (..), ChainDbError (..), - ChainSelectionPromise (..), ChainType, LoE, StreamFrom, - StreamTo, UnknownRange) -import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment - (InvalidBlockPunishment) -import Ouroboros.Consensus.Storage.ImmutableDB (ImmutableDB, - ImmutableDbSerialiseConstraints) -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import Ouroboros.Consensus.Storage.LedgerDB (LedgerDB', - LedgerDbSerialiseConstraints) -import qualified Ouroboros.Consensus.Storage.LedgerDB as LedgerDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Storage.VolatileDB (VolatileDB, - VolatileDbSerialiseConstraints) -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import Ouroboros.Consensus.Util (Fuse) -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.Enclose (Enclosing, Enclosing' (..)) -import 
Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Consensus.Util.STM (WithFingerprint) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block (MaxSlotNo (..)) -import Ouroboros.Network.BlockFetch.ConsensusInterface - (ChainSelStarvation (..)) +import Control.Monad (when) +import Control.ResourceRegistry +import Control.Tracer +import Data.Foldable (traverse_) +import Data.Map.Strict (Map) +import Data.Maybe (mapMaybe) +import Data.Maybe.Strict (StrictMaybe (..)) +import Data.MultiSet (MultiSet) +import Data.MultiSet qualified as MultiSet +import Data.Set (Set) +import Data.Typeable +import Data.Void (Void) +import Data.Word (Word64) +import GHC.Generics (Generic) +import NoThunks.Class (OnlyCheckWhnfNamed (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Fragment.Diff (ChainDiff) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) +import Ouroboros.Consensus.Ledger.Extended (ExtValidationError) +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.API + ( AddBlockPromise (..) + , AddBlockResult (..) + , ChainDbError (..) + , ChainSelectionPromise (..) 
+ , ChainType + , LoE + , StreamFrom + , StreamTo + , UnknownRange + ) +import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment + ( InvalidBlockPunishment + ) +import Ouroboros.Consensus.Storage.ImmutableDB + ( ImmutableDB + , ImmutableDbSerialiseConstraints + ) +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB + ( LedgerDB' + , LedgerDbSerialiseConstraints + ) +import Ouroboros.Consensus.Storage.LedgerDB qualified as LedgerDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Storage.VolatileDB + ( VolatileDB + , VolatileDbSerialiseConstraints + ) +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import Ouroboros.Consensus.Util (Fuse) +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.Enclose (Enclosing, Enclosing' (..)) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Consensus.Util.STM (WithFingerprint) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block (MaxSlotNo (..)) +import Ouroboros.Network.BlockFetch.ConsensusInterface + ( ChainSelStarvation (..) + ) -- | All the serialisation related constraints needed by the ChainDB. 
-class ( ImmutableDbSerialiseConstraints blk - , LedgerDbSerialiseConstraints blk - , VolatileDbSerialiseConstraints blk - -- Needed for Follower - , EncodeDiskDep (NestedCtxt Header) blk - ) => SerialiseDiskConstraints blk +class + ( ImmutableDbSerialiseConstraints blk + , LedgerDbSerialiseConstraints blk + , VolatileDbSerialiseConstraints blk + , -- Needed for Follower + EncodeDiskDep (NestedCtxt Header) blk + ) => + SerialiseDiskConstraints blk -- | A handle to the internal ChainDB state newtype ChainDbHandle m blk = CDBHandle (StrictTVar m (ChainDbState m blk)) -- | Check if the ChainDB is open, if so, executing the given function on the -- 'ChainDbEnv', otherwise, throw a 'CloseDBError'. -getEnv :: forall m blk r. (IOLike m, HasCallStack, HasHeader blk) - => ChainDbHandle m blk - -> (ChainDbEnv m blk -> m r) - -> m r -getEnv (CDBHandle varState) f = atomically (readTVar varState) >>= \case +getEnv :: + forall m blk r. + (IOLike m, HasCallStack, HasHeader blk) => + ChainDbHandle m blk -> + (ChainDbEnv m blk -> m r) -> + m r +getEnv (CDBHandle varState) f = + atomically (readTVar varState) >>= \case ChainDbOpen env -> f env - ChainDbClosed -> throwIO $ ClosedDBError @blk prettyCallStack + ChainDbClosed -> throwIO $ ClosedDBError @blk prettyCallStack -- | Variant 'of 'getEnv' for functions taking one argument. -getEnv1 :: (IOLike m, HasCallStack, HasHeader blk) - => ChainDbHandle m blk - -> (ChainDbEnv m blk -> a -> m r) - -> a -> m r +getEnv1 :: + (IOLike m, HasCallStack, HasHeader blk) => + ChainDbHandle m blk -> + (ChainDbEnv m blk -> a -> m r) -> + a -> + m r getEnv1 h f a = getEnv h (\env -> f env a) -- | Variant 'of 'getEnv' for functions taking two arguments. 
-getEnv2 :: (IOLike m, HasCallStack, HasHeader blk) - => ChainDbHandle m blk - -> (ChainDbEnv m blk -> a -> b -> m r) - -> a -> b -> m r +getEnv2 :: + (IOLike m, HasCallStack, HasHeader blk) => + ChainDbHandle m blk -> + (ChainDbEnv m blk -> a -> b -> m r) -> + a -> + b -> + m r getEnv2 h f a b = getEnv h (\env -> f env a b) - -- | Variant of 'getEnv' that works in 'STM'. -getEnvSTM :: forall m blk r. (IOLike m, HasCallStack, HasHeader blk) - => ChainDbHandle m blk - -> (ChainDbEnv m blk -> STM m r) - -> STM m r -getEnvSTM (CDBHandle varState) f = readTVar varState >>= \case +getEnvSTM :: + forall m blk r. + (IOLike m, HasCallStack, HasHeader blk) => + ChainDbHandle m blk -> + (ChainDbEnv m blk -> STM m r) -> + STM m r +getEnvSTM (CDBHandle varState) f = + readTVar varState >>= \case ChainDbOpen env -> f env - ChainDbClosed -> throwSTM $ ClosedDBError @blk prettyCallStack + ChainDbClosed -> throwSTM $ ClosedDBError @blk prettyCallStack -- | Variant of 'getEnv1' that works in 'STM'. getEnvSTM1 :: - forall m blk a r. (IOLike m, HasCallStack, HasHeader blk) - => ChainDbHandle m blk - -> (ChainDbEnv m blk -> a -> STM m r) - -> a -> STM m r -getEnvSTM1 (CDBHandle varState) f a = readTVar varState >>= \case + forall m blk a r. + (IOLike m, HasCallStack, HasHeader blk) => + ChainDbHandle m blk -> + (ChainDbEnv m blk -> a -> STM m r) -> + a -> + STM m r +getEnvSTM1 (CDBHandle varState) f a = + readTVar varState >>= \case ChainDbOpen env -> f env a - ChainDbClosed -> throwSTM $ ClosedDBError @blk prettyCallStack + ChainDbClosed -> throwSTM $ ClosedDBError @blk prettyCallStack data ChainDbState m blk - = ChainDbOpen !(ChainDbEnv m blk) + = ChainDbOpen !(ChainDbEnv m blk) | ChainDbClosed deriving (Generic, NoThunks) @@ -194,130 +230,137 @@ data ChainDbState m blk -- 'Ouroboros.Network.BlockFetch.ConsensusInterface.BlockFetchConsensusInterface'), -- etc. 
data InternalChain blk = InternalChain - { icWithoutTime :: !(AnchoredFragment (Header blk)) - , icWithTime :: !(AnchoredFragment (HeaderWithTime blk)) + { icWithoutTime :: !(AnchoredFragment (Header blk)) + , icWithTime :: !(AnchoredFragment (HeaderWithTime blk)) } - deriving (Generic) + deriving Generic deriving instance (HasHeader blk, NoThunks (Header blk)) => NoThunks (InternalChain blk) checkInternalChain :: - forall blk. (HasHeader blk, HasHeader (Header blk)) - => InternalChain blk - -> Maybe String + forall blk. + (HasHeader blk, HasHeader (Header blk)) => + InternalChain blk -> + Maybe String checkInternalChain (InternalChain cur curWithTime) = - if cnv id cur == cnv hwtHeader curWithTime then Nothing else - Just $ unlines - [ "cdbChain and cdbChainWithTime were out of sync:" - , show (cnv id cur) - , show (cnv hwtHeader curWithTime) - ] - where - cnv :: - (HeaderHash h ~ HeaderHash blk) - => (h -> Header blk) -> AnchoredFragment h -> (Point blk, [Point blk]) - cnv f af = - ( castPoint $ AF.anchorPoint af - , (headerPoint . f) `map` AF.toNewestFirst af - ) + if cnv id cur == cnv hwtHeader curWithTime + then Nothing + else + Just $ + unlines + [ "cdbChain and cdbChainWithTime were out of sync:" + , show (cnv id cur) + , show (cnv hwtHeader curWithTime) + ] + where + cnv :: + HeaderHash h ~ HeaderHash blk => + (h -> Header blk) -> AnchoredFragment h -> (Point blk, [Point blk]) + cnv f af = + ( castPoint $ AF.anchorPoint af + , (headerPoint . f) `map` AF.toNewestFirst af + ) data ChainDbEnv m blk = CDB - { cdbImmutableDB :: !(ImmutableDB m blk) - , cdbVolatileDB :: !(VolatileDB m blk) - , cdbLedgerDB :: !(LedgerDB' m blk) - , cdbChain :: !(StrictTVar m (InternalChain blk)) - -- ^ Contains the current chain fragment. - -- - -- INVARIANT: the anchor point of this fragment is the tip of the - -- ImmutableDB. This implies that this fragment never contains any blocks - -- that are stored in the immutable DB. 
- -- - -- Note that this fragment might be shorter than @k@ headers when the - -- whole chain is shorter than @k@ or in case of corruption of the - -- VolatileDB. - -- - -- Note that this fragment might also be /longer/ than @k@ headers, - -- because the oldest blocks from the fragment might not yet have been - -- copied from the VolatileDB to the ImmutableDB. - -- - -- The anchor point of this chain should be the most recent \"immutable\" - -- block according to the protocol, i.e., a block that cannot be rolled - -- back. - -- - -- Note that the \"immutable\" block isn't necessarily at the tip of the - -- ImmutableDB, but could temporarily still be on the in-memory chain - -- fragment. When the background thread that copies blocks to the - -- ImmutableDB has caught up, the \"immutable\" block will be at the tip - -- of the ImmutableDB again. - -- - -- Note that the \"immutable\" block might be less than @k@ blocks from - -- our tip in case the whole chain is shorter than @k@ or in case of - -- corruption of the VolatileDB. - -- - -- Note that the \"immutable\" block will /never/ be /more/ than @k@ - -- blocks back, as opposed to the anchor point of 'cdbChain'. - , cdbTentativeState :: !(StrictTVar m (TentativeHeaderState blk)) + { cdbImmutableDB :: !(ImmutableDB m blk) + , cdbVolatileDB :: !(VolatileDB m blk) + , cdbLedgerDB :: !(LedgerDB' m blk) + , cdbChain :: !(StrictTVar m (InternalChain blk)) + -- ^ Contains the current chain fragment. + -- + -- INVARIANT: the anchor point of this fragment is the tip of the + -- ImmutableDB. This implies that this fragment never contains any blocks + -- that are stored in the immutable DB. + -- + -- Note that this fragment might be shorter than @k@ headers when the + -- whole chain is shorter than @k@ or in case of corruption of the + -- VolatileDB. 
+ -- + -- Note that this fragment might also be /longer/ than @k@ headers, + -- because the oldest blocks from the fragment might not yet have been + -- copied from the VolatileDB to the ImmutableDB. + -- + -- The anchor point of this chain should be the most recent \"immutable\" + -- block according to the protocol, i.e., a block that cannot be rolled + -- back. + -- + -- Note that the \"immutable\" block isn't necessarily at the tip of the + -- ImmutableDB, but could temporarily still be on the in-memory chain + -- fragment. When the background thread that copies blocks to the + -- ImmutableDB has caught up, the \"immutable\" block will be at the tip + -- of the ImmutableDB again. + -- + -- Note that the \"immutable\" block might be less than @k@ blocks from + -- our tip in case the whole chain is shorter than @k@ or in case of + -- corruption of the VolatileDB. + -- + -- Note that the \"immutable\" block will /never/ be /more/ than @k@ + -- blocks back, as opposed to the anchor point of 'cdbChain'. + , cdbTentativeState :: !(StrictTVar m (TentativeHeaderState blk)) , cdbTentativeHeader :: !(StrictTVar m (StrictMaybe (Header blk))) - -- ^ The tentative header, for diffusion pipelining. - -- - -- INVARIANT: It fits on top of the current chain, and its body is not known - -- to be invalid, but might turn out to be. - , cdbIterators :: !(StrictTVar m (Map IteratorKey (m ()))) - -- ^ The iterators. - -- - -- This maps the 'IteratorKey's of each open 'Iterator' to a function - -- that, when called, closes the iterator. This is used when closing the - -- ChainDB: the open file handles used by iterators can be closed, and the - -- iterators themselves are closed so that it is impossible to use an - -- iterator after closing the ChainDB itself. - , cdbFollowers :: !(StrictTVar m (Map FollowerKey (FollowerHandle m blk))) - -- ^ The followers. - -- - -- A follower is open iff its 'FollowerKey' is this 'Map'. 
- --
- -- INVARIANT: the 'followerPoint' of each follower is 'withinFragmentBounds'
- -- of the current chain fragment (retrieved 'cdbGetCurrentChain', not by
- -- reading 'cdbChain' directly).
- , cdbTopLevelConfig :: !(TopLevelConfig blk)
- , cdbInvalid :: !(StrictTVar m (WithFingerprint (InvalidBlocks blk)))
- -- ^ See the docstring of 'InvalidBlocks'.
- --
- -- The 'Fingerprint' changes every time a hash is added to the map, but
- -- not when hashes are garbage-collected from the map.
+ -- ^ The tentative header, for diffusion pipelining.
+ --
+ -- INVARIANT: It fits on top of the current chain, and its body is not known
+ -- to be invalid, but might turn out to be.
+ , cdbIterators :: !(StrictTVar m (Map IteratorKey (m ())))
+ -- ^ The iterators.
+ --
+ -- This maps the 'IteratorKey's of each open 'Iterator' to a function
+ -- that, when called, closes the iterator. This is used when closing the
+ -- ChainDB: the open file handles used by iterators can be closed, and the
+ -- iterators themselves are closed so that it is impossible to use an
+ -- iterator after closing the ChainDB itself.
+ , cdbFollowers :: !(StrictTVar m (Map FollowerKey (FollowerHandle m blk)))
+ -- ^ The followers.
+ --
+ -- A follower is open iff its 'FollowerKey' is in this 'Map'.
+ --
+ -- INVARIANT: the 'followerPoint' of each follower is 'withinFragmentBounds'
+ -- of the current chain fragment (retrieved 'cdbGetCurrentChain', not by
+ -- reading 'cdbChain' directly).
+ , cdbTopLevelConfig :: !(TopLevelConfig blk)
+ , cdbInvalid :: !(StrictTVar m (WithFingerprint (InvalidBlocks blk)))
+ -- ^ See the docstring of 'InvalidBlocks'.
+ --
+ -- The 'Fingerprint' changes every time a hash is added to the map, but
+ -- not when hashes are garbage-collected from the map.
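As an aside for reviewers: the `cdbInvalid` docstring above pins down exactly when the `Fingerprint` moves. A minimal standalone sketch of that discipline, using a plain `Data.Set` of hashes and a `Word64` counter as hypothetical stand-ins for `InvalidBlocks` and the real `Fingerprint` type (illustrative only, not the ouroboros-consensus API):

```haskell
import qualified Data.Set as Set
import Data.Word (Word64)

-- Stand-in for the real 'WithFingerprint' wrapper.
data WithFingerprint a = WithFingerprint Word64 a
  deriving Show

-- Adding a hash bumps the fingerprint ...
addHash :: Ord h => h -> WithFingerprint (Set.Set h) -> WithFingerprint (Set.Set h)
addHash h (WithFingerprint fp s) = WithFingerprint (fp + 1) (Set.insert h s)

-- ... but garbage-collecting a hash leaves it unchanged, as the
-- docstring specifies.
gcHash :: Ord h => h -> WithFingerprint (Set.Set h) -> WithFingerprint (Set.Set h)
gcHash h (WithFingerprint fp s) = WithFingerprint fp (Set.delete h s)

main :: IO ()
main = do
  let s0 = WithFingerprint 0 (Set.empty :: Set.Set String)
      WithFingerprint fp1 _ = addHash "abcd" s0
      WithFingerprint fp2 _ = gcHash "abcd" (addHash "abcd" s0)
  print (fp1, fp2) -- (1,1): the GC pass did not change the fingerprint
```

Consumers can thus use the fingerprint to cheaply detect newly added invalid blocks while ignoring garbage collection.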
, cdbNextIteratorKey :: !(StrictTVar m IteratorKey) , cdbNextFollowerKey :: !(StrictTVar m FollowerKey) - , cdbCopyFuse :: !(Fuse m) - , cdbChainSelFuse :: !(Fuse m) - , cdbTracer :: !(Tracer m (TraceEvent blk)) - , cdbRegistry :: !(ResourceRegistry m) - , cdbGcDelay :: !DiffTime - -- ^ How long to wait between copying a block from the VolatileDB to - -- ImmutableDB and garbage collecting it from the VolatileDB - , cdbGcInterval :: !DiffTime - -- ^ Minimum time between two garbage collections. Is used to batch - -- garbage collections. - , cdbKillBgThreads :: !(StrictTVar m (m ())) - -- ^ A handle to kill the background threads. - , cdbChainSelQueue :: !(ChainSelQueue m blk) - -- ^ Queue of blocks that still have to be added. - , cdbLoE :: !(m (LoE (AnchoredFragment (HeaderWithTime blk)))) - -- ^ Configure the Limit on Eagerness. If this is 'LoEEnabled', it contains - -- an action that returns the LoE fragment, which indicates the latest rollback - -- point, i.e. we are not allowed to select a chain from which we could not - -- switch back to a chain containing it. The fragment is usually anchored at - -- a recent immutable tip; if it does not, it will conservatively be treated - -- as the empty fragment anchored in the current immutable tip. + , cdbCopyFuse :: !(Fuse m) + , cdbChainSelFuse :: !(Fuse m) + , cdbTracer :: !(Tracer m (TraceEvent blk)) + , cdbRegistry :: !(ResourceRegistry m) + , cdbGcDelay :: !DiffTime + -- ^ How long to wait between copying a block from the VolatileDB to + -- ImmutableDB and garbage collecting it from the VolatileDB + , cdbGcInterval :: !DiffTime + -- ^ Minimum time between two garbage collections. Is used to batch + -- garbage collections. + , cdbKillBgThreads :: !(StrictTVar m (m ())) + -- ^ A handle to kill the background threads. + , cdbChainSelQueue :: !(ChainSelQueue m blk) + -- ^ Queue of blocks that still have to be added. 
+ , cdbLoE :: !(m (LoE (AnchoredFragment (HeaderWithTime blk)))) + -- ^ Configure the Limit on Eagerness. If this is 'LoEEnabled', it contains + -- an action that returns the LoE fragment, which indicates the latest rollback + -- point, i.e. we are not allowed to select a chain from which we could not + -- switch back to a chain containing it. The fragment is usually anchored at + -- a recent immutable tip; if it does not, it will conservatively be treated + -- as the empty fragment anchored in the current immutable tip. , cdbChainSelStarvation :: !(StrictTVar m ChainSelStarvation) - -- ^ Information on the last starvation of ChainSel, whether ongoing or - -- ended recently. - } deriving (Generic) + -- ^ Information on the last starvation of ChainSel, whether ongoing or + -- ended recently. + } + deriving Generic -- | We include @blk@ in 'showTypeOf' because it helps resolving type families -- (but avoid including @m@ because we cannot impose @Typeable m@ as a -- constraint and still have it work with the simulator) -instance (IOLike m, LedgerSupportsProtocol blk, BlockSupportsDiffusionPipelining blk) - => NoThunks (ChainDbEnv m blk) where - showTypeOf _ = "ChainDbEnv m " ++ show (typeRep (Proxy @blk)) +instance + (IOLike m, LedgerSupportsProtocol blk, BlockSupportsDiffusionPipelining blk) => + NoThunks (ChainDbEnv m blk) + where + showTypeOf _ = "ChainDbEnv m " ++ show (typeRep (Proxy @blk)) {------------------------------------------------------------------------------- Exposed internals for testing purposes @@ -325,21 +368,21 @@ instance (IOLike m, LedgerSupportsProtocol blk, BlockSupportsDiffusionPipelining data Internal m blk = Internal { intCopyToImmutableDB :: m (WithOrigin SlotNo) - -- ^ Copy the blocks older than @k@ from to the VolatileDB to the - -- ImmutableDB and update the in-memory chain fragment correspondingly. - -- - -- The 'SlotNo' of the tip of the ImmutableDB after copying the blocks is - -- returned. 
This can be used for a garbage collection on the VolatileDB.
- , intGarbageCollect :: SlotNo -> m ()
- -- ^ Perform garbage collection for blocks <= the given 'SlotNo'.
- , intTryTakeSnapshot :: m ()
- -- ^ Write a new LedgerDB snapshot to disk and remove the oldest one(s).
- , intAddBlockRunner :: m Void
- -- ^ Start the loop that adds blocks to the ChainDB retrieved from the
- -- queue populated by 'ChainDB.addBlock'. Execute this loop in a separate
- -- thread.
- , intKillBgThreads :: StrictTVar m (m ())
- -- ^ A handle to kill the background threads.
+ -- ^ Copy the blocks older than @k@ from the VolatileDB to the
+ -- ImmutableDB and update the in-memory chain fragment correspondingly.
+ --
+ -- The 'SlotNo' of the tip of the ImmutableDB after copying the blocks is
+ -- returned. This can be used for a garbage collection on the VolatileDB.
+ , intGarbageCollect :: SlotNo -> m ()
+ -- ^ Perform garbage collection for blocks <= the given 'SlotNo'.
+ , intTryTakeSnapshot :: m ()
+ -- ^ Write a new LedgerDB snapshot to disk and remove the oldest one(s).
+ , intAddBlockRunner :: m Void
+ -- ^ Start the loop that adds blocks to the ChainDB retrieved from the
+ -- queue populated by 'ChainDB.addBlock'. Execute this loop in a separate
+ -- thread.
+ , intKillBgThreads :: StrictTVar m (m ())
+ -- ^ A handle to kill the background threads.
 }
{-------------------------------------------------------------------------------
@@ -353,7 +396,7 @@ data Internal m blk = Internal
-- We store them in the map so that the ChainDB can close all open iterators
-- when it is closed itself.
newtype IteratorKey = IteratorKey Word
- deriving stock (Show)
+ deriving stock Show
 deriving newtype (Eq, Ord, Enum, NoThunks)
{-------------------------------------------------------------------------------
@@ -373,32 +416,31 @@ newtype IteratorKey = IteratorKey Word
-- when it is closed itself and to update the followers in case we switch to a
-- different chain.
newtype FollowerKey = FollowerKey Word
- deriving stock (Show)
+ deriving stock Show
 deriving newtype (Eq, Ord, Enum, NoThunks)
-- | Internal handle to a 'Follower' without an explicit @b@ (@blk@, @'Header'
-- blk@, etc.) parameter so 'Follower's with different' @b@s can be stored
-- together in 'cdbFollowers'.
data FollowerHandle m blk = FollowerHandle
- { fhChainType :: ChainType
- -- ^ Whether we follow the tentative chain.
+ { fhChainType :: ChainType
+ -- ^ Whether we follow the tentative chain.
 , fhSwitchFork :: Point blk -> Set (Point blk) -> STM m ()
- -- ^ When we have switched to a fork, all open 'Follower's must be notified.
- , fhClose :: m ()
- -- ^ When closing the ChainDB, we must also close all open 'Follower's, as
- -- they might be holding on to resources.
- --
- -- Call 'fhClose' will release the resources used by the 'Follower'.
- --
- -- NOTE the 'Follower' is not removed from 'cdbFollowers'. (That is done by
- -- 'closeAllFollowers').
+ -- ^ When we have switched to a fork, all open 'Follower's must be notified.
+ , fhClose :: m ()
+ -- ^ When closing the ChainDB, we must also close all open 'Follower's, as
+ -- they might be holding on to resources.
+ --
+ -- Calling 'fhClose' will release the resources used by the 'Follower'.
+ --
+ -- NOTE the 'Follower' is not removed from 'cdbFollowers'. (That is done by
+ -- 'closeAllFollowers').
 }
 deriving NoThunks via OnlyCheckWhnfNamed "FollowerHandle" (FollowerHandle m blk)
-- | @b@ corresponds to the 'BlockComponent' that is being read.
data FollowerState m blk b
- = FollowerInit
- -- ^ The 'Follower' is in its initial state. Its 'FollowerRollState' is
+ = -- | The 'Follower' is in its initial state. Its 'FollowerRollState' is
 -- @'RollBackTo' 'genesisPoint'@.
 --
 -- This is equivalent to having a 'FollowerInImmutableDB' with the same
@@ -411,10 +453,8 @@ data FollowerState m blk b
 --
 -- Therefore, we have this extra initial state, that avoids this cost.
-- When the user doesn't move the Follower forward, an iterator is opened. - | FollowerInImmutableDB - !(FollowerRollState blk) - !(ImmutableDB.Iterator m blk (Point blk, b)) - -- ^ The 'Follower' is reading from the ImmutableDB. + FollowerInit + | -- | The 'Follower' is reading from the ImmutableDB. -- -- Note that the iterator includes 'Point blk' in addition to @b@, as it -- is needed to keep track of where the iterator is. @@ -423,24 +463,27 @@ data FollowerState m blk b -- of the next block streamed by @immIt@ must be the block identified by -- @followerRollStatePoint rollState@. In other words: the iterator is -- positioned /on/ @followerRollStatePoint rollState@. - | FollowerInMem !(FollowerRollState blk) - -- ^ The 'Follower' is reading from the in-memory current chain fragment. + FollowerInImmutableDB + !(FollowerRollState blk) + !(ImmutableDB.Iterator m blk (Point blk, b)) + | -- | The 'Follower' is reading from the in-memory current chain fragment. + FollowerInMem !(FollowerRollState blk) deriving (Generic, NoThunks) -- | Similar to 'Ouroboros.Network.Mock.ProducerState.FollowerState'. data FollowerRollState blk - = RollBackTo !(Point blk) - -- ^ We don't know at which point the user is, but the next message we'll + = -- | We don't know at which point the user is, but the next message we'll -- send is to roll back to this point. - | RollForwardFrom !(Point blk) - -- ^ We know that the follower is at this point and the next message we'll + RollBackTo !(Point blk) + | -- | We know that the follower is at this point and the next message we'll -- send is to roll forward to the point /after/ this point on our chain. + RollForwardFrom !(Point blk) deriving (Eq, Show, Generic, NoThunks) -- | Get the point the 'FollowerRollState' should roll back to or roll forward -- from. 
followerRollStatePoint :: FollowerRollState blk -> Point blk -followerRollStatePoint (RollBackTo pt) = pt +followerRollStatePoint (RollBackTo pt) = pt followerRollStatePoint (RollForwardFrom pt) = pt {------------------------------------------------------------------------------- @@ -458,7 +501,8 @@ type InvalidBlocks blk = Map (HeaderHash blk) (InvalidBlockInfo blk) data InvalidBlockInfo blk = InvalidBlockInfo { invalidBlockReason :: !(ExtValidationError blk) , invalidBlockSlotNo :: !SlotNo - } deriving (Eq, Show, Generic, NoThunks) + } + deriving (Eq, Show, Generic, NoThunks) {------------------------------------------------------------------------------- Blocks to add @@ -476,8 +520,8 @@ data InvalidBlockInfo blk = InvalidBlockInfo -- INVARIANT: Counted with multiplicity, @varChainSelPoints@ contains exactly -- the same hashes or at most one additional hash compared to the hashes of -- blocks in @varChainSelQueue@. -data ChainSelQueue m blk = ChainSelQueue { - varChainSelQueue :: TBQueue m (ChainSelMessage m blk) +data ChainSelQueue m blk = ChainSelQueue + { varChainSelQueue :: TBQueue m (ChainSelMessage m blk) , varChainSelPoints :: StrictTVar m (MultiSet (RealPoint blk)) } deriving NoThunks via OnlyCheckWhnfNamed "ChainSelQueue" (ChainSelQueue m blk) @@ -485,114 +529,120 @@ data ChainSelQueue m blk = ChainSelQueue { -- | Entry in the 'ChainSelQueue' queue: a block together with the 'TMVar's used -- to implement 'AddBlockPromise'. data BlockToAdd m blk = BlockToAdd - { blockPunish :: !(InvalidBlockPunishment m) - -- ^ Executed immediately upon determining this block or one from its prefix - -- is invalid. - , blockToAdd :: !blk + { blockPunish :: !(InvalidBlockPunishment m) + -- ^ Executed immediately upon determining this block or one from its prefix + -- is invalid. + , blockToAdd :: !blk , varBlockWrittenToDisk :: !(StrictTMVar m Bool) - -- ^ Used for the 'blockWrittenToDisk' field of 'AddBlockPromise'. 
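For reviewers checking that the realignment of `followerRollStatePoint` above is behaviour-preserving: the projection is small enough to sketch in isolation. Here is a self-contained version with `Int` standing in for `Point blk` (stand-in types only, not the real ouroboros-consensus definitions):

```haskell
-- Standalone sketch of 'FollowerRollState' and its point projection.
data FollowerRollState pt
  = -- | Next message: roll back to this point.
    RollBackTo !pt
  | -- | Next message: roll forward from the point after this one.
    RollForwardFrom !pt
  deriving (Eq, Show)

-- Both constructors carry a point; the projection ignores the direction.
followerRollStatePoint :: FollowerRollState pt -> pt
followerRollStatePoint (RollBackTo pt) = pt
followerRollStatePoint (RollForwardFrom pt) = pt

main :: IO ()
main = do
  print (followerRollStatePoint (RollBackTo (3 :: Int)))      -- 3
  print (followerRollStatePoint (RollForwardFrom (7 :: Int))) -- 7
```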
- , varBlockProcessed :: !(StrictTMVar m (AddBlockResult blk)) - -- ^ Used for the 'blockProcessed' field of 'AddBlockPromise'. + -- ^ Used for the 'blockWrittenToDisk' field of 'AddBlockPromise'. + , varBlockProcessed :: !(StrictTMVar m (AddBlockResult blk)) + -- ^ Used for the 'blockProcessed' field of 'AddBlockPromise'. } -- | Different async tasks for triggering ChainSel data ChainSelMessage m blk - -- | Add a new block - = ChainSelAddBlock !(BlockToAdd m blk) - -- | Reprocess blocks that have been postponed by the LoE. - | ChainSelReprocessLoEBlocks + = -- | Add a new block + ChainSelAddBlock !(BlockToAdd m blk) + | -- | Reprocess blocks that have been postponed by the LoE. + ChainSelReprocessLoEBlocks + -- | Used for 'ChainSelectionPromise'. !(StrictTMVar m ()) - -- ^ Used for 'ChainSelectionPromise'. -- | Create a new 'ChainSelQueue' with the given size. newChainSelQueue :: (IOLike m, StandardHash blk, Typeable blk) => Word -> m (ChainSelQueue m blk) newChainSelQueue chainSelQueueCapacity = do - varChainSelQueue <- newTBQueueIO (fromIntegral chainSelQueueCapacity) + varChainSelQueue <- newTBQueueIO (fromIntegral chainSelQueueCapacity) varChainSelPoints <- newTVarIO MultiSet.empty - pure ChainSelQueue { - varChainSelQueue - , varChainSelPoints - } + pure + ChainSelQueue + { varChainSelQueue + , varChainSelPoints + } -- | Add a block to the 'ChainSelQueue' queue. Can block when the queue is full. 
addBlockToAdd :: - (IOLike m, HasHeader blk) - => Tracer m (TraceAddBlockEvent blk) - -> ChainSelQueue m blk - -> InvalidBlockPunishment m - -> blk - -> m (AddBlockPromise m blk) -addBlockToAdd tracer (ChainSelQueue {varChainSelQueue, varChainSelPoints}) punish blk = do - varBlockWrittenToDisk <- newEmptyTMVarIO - varBlockProcessed <- newEmptyTMVarIO - let !toAdd = BlockToAdd - { blockPunish = punish - , blockToAdd = blk + (IOLike m, HasHeader blk) => + Tracer m (TraceAddBlockEvent blk) -> + ChainSelQueue m blk -> + InvalidBlockPunishment m -> + blk -> + m (AddBlockPromise m blk) +addBlockToAdd tracer (ChainSelQueue{varChainSelQueue, varChainSelPoints}) punish blk = do + varBlockWrittenToDisk <- newEmptyTMVarIO + varBlockProcessed <- newEmptyTMVarIO + let !toAdd = + BlockToAdd + { blockPunish = punish + , blockToAdd = blk , varBlockWrittenToDisk , varBlockProcessed } - pt = blockRealPoint blk - traceWith tracer $ AddedBlockToQueue pt RisingEdge - queueSize <- atomically $ do - writeTBQueue varChainSelQueue (ChainSelAddBlock toAdd) - modifyTVar varChainSelPoints $ MultiSet.insert pt - lengthTBQueue varChainSelQueue - traceWith tracer $ - AddedBlockToQueue (blockRealPoint blk) (FallingEdgeWith (fromIntegral queueSize)) - return AddBlockPromise - { blockWrittenToDisk = readTMVar varBlockWrittenToDisk - , blockProcessed = readTMVar varBlockProcessed + pt = blockRealPoint blk + traceWith tracer $ AddedBlockToQueue pt RisingEdge + queueSize <- atomically $ do + writeTBQueue varChainSelQueue (ChainSelAddBlock toAdd) + modifyTVar varChainSelPoints $ MultiSet.insert pt + lengthTBQueue varChainSelQueue + traceWith tracer $ + AddedBlockToQueue (blockRealPoint blk) (FallingEdgeWith (fromIntegral queueSize)) + return + AddBlockPromise + { blockWrittenToDisk = readTMVar varBlockWrittenToDisk + , blockProcessed = readTMVar varBlockProcessed } -- | Try to add blocks again that were postponed due to the LoE. 
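The queue-plus-multiset bookkeeping reflowed above (`addBlockToAdd` inserts a block's point, `processedChainSelMessage` later deletes it, and `memberChainSelQueue` tests membership) can be exercised in a standalone sketch. This sketch uses a `Data.Map`-based multiset and `(slot, hash)` pairs as hypothetical stand-ins for the `Data.MultiSet` and `RealPoint` of the real code:

```haskell
import qualified Data.Map.Strict as Map

type MultiSet a = Map.Map a Int

-- What 'addBlockToAdd' does to 'varChainSelPoints' when enqueueing a block.
insertPt :: Ord a => a -> MultiSet a -> MultiSet a
insertPt pt = Map.insertWith (+) pt 1

-- What 'processedChainSelMessage' does once ChainSel has handled the block.
deletePt :: Ord a => a -> MultiSet a -> MultiSet a
deletePt = Map.update (\n -> if n <= 1 then Nothing else Just (n - 1))

main :: IO ()
main = do
  let pt = (4 :: Int, "f00f")    -- (slot, hash) stand-in for RealPoint
      q1 = insertPt pt Map.empty -- block enqueued
      q2 = deletePt pt q1        -- block processed by ChainSel
  print (Map.member pt q1)       -- True: counted while in flight
  print (Map.member pt q2)       -- False: removed after processing
```

Counting with multiplicity is what lets the same point be enqueued twice without the first processing step erasing the second entry's membership.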
-addReprocessLoEBlocks - :: IOLike m - => Tracer m (TraceAddBlockEvent blk) - -> ChainSelQueue m blk - -> m (ChainSelectionPromise m) -addReprocessLoEBlocks tracer ChainSelQueue {varChainSelQueue} = do +addReprocessLoEBlocks :: + IOLike m => + Tracer m (TraceAddBlockEvent blk) -> + ChainSelQueue m blk -> + m (ChainSelectionPromise m) +addReprocessLoEBlocks tracer ChainSelQueue{varChainSelQueue} = do varProcessed <- newEmptyTMVarIO let waitUntilRan = atomically $ readTMVar varProcessed traceWith tracer $ AddedReprocessLoEBlocksToQueue - atomically $ writeTBQueue varChainSelQueue $ - ChainSelReprocessLoEBlocks varProcessed + atomically $ + writeTBQueue varChainSelQueue $ + ChainSelReprocessLoEBlocks varProcessed return $ ChainSelectionPromise waitUntilRan -- | Get the oldest message from the 'ChainSelQueue' queue. Can block when the -- queue is empty; in that case, reports the starvation (and its end) via the -- given tracer. -getChainSelMessage - :: forall m blk. (HasHeader blk, IOLike m) - => Tracer m (TraceChainSelStarvationEvent blk) - -> StrictTVar m ChainSelStarvation - -> ChainSelQueue m blk - -> m (ChainSelMessage m blk) +getChainSelMessage :: + forall m blk. 
+ (HasHeader blk, IOLike m) => + Tracer m (TraceChainSelStarvationEvent blk) -> + StrictTVar m ChainSelStarvation -> + ChainSelQueue m blk -> + m (ChainSelMessage m blk) getChainSelMessage starvationTracer starvationVar chainSelQueue = - atomically (tryReadTBQueue' queue) >>= \case - Just msg -> pure msg - Nothing -> do - startStarvationMeasure - msg <- atomically $ readTBQueue queue - terminateStarvationMeasure msg - pure msg - where - ChainSelQueue { - varChainSelQueue = queue - } = chainSelQueue - - startStarvationMeasure :: m () - startStarvationMeasure = do - prevStarvation <- atomically $ swapTVar starvationVar ChainSelStarvationOngoing - when (prevStarvation /= ChainSelStarvationOngoing) $ - traceWith starvationTracer $ ChainSelStarvation RisingEdge - - terminateStarvationMeasure :: ChainSelMessage m blk -> m () - terminateStarvationMeasure = \case - ChainSelAddBlock BlockToAdd{blockToAdd=block} -> do - let pt = blockRealPoint block - traceWith starvationTracer $ ChainSelStarvation (FallingEdgeWith pt) - atomically . writeTVar starvationVar . ChainSelStarvationEndedAt =<< getMonotonicTime - ChainSelReprocessLoEBlocks{} -> pure () + atomically (tryReadTBQueue' queue) >>= \case + Just msg -> pure msg + Nothing -> do + startStarvationMeasure + msg <- atomically $ readTBQueue queue + terminateStarvationMeasure msg + pure msg + where + ChainSelQueue + { varChainSelQueue = queue + } = chainSelQueue + + startStarvationMeasure :: m () + startStarvationMeasure = do + prevStarvation <- atomically $ swapTVar starvationVar ChainSelStarvationOngoing + when (prevStarvation /= ChainSelStarvationOngoing) $ + traceWith starvationTracer $ + ChainSelStarvation RisingEdge + + terminateStarvationMeasure :: ChainSelMessage m blk -> m () + terminateStarvationMeasure = \case + ChainSelAddBlock BlockToAdd{blockToAdd = block} -> do + let pt = blockRealPoint block + traceWith starvationTracer $ ChainSelStarvation (FallingEdgeWith pt) + atomically . writeTVar starvationVar . 
ChainSelStarvationEndedAt =<< getMonotonicTime + ChainSelReprocessLoEBlocks{} -> pure () -- TODO Can't use tryReadTBQueue from io-classes because it is broken for IOSim -- (but not for IO). https://github.com/input-output-hk/io-sim/issues/195 @@ -600,53 +650,56 @@ tryReadTBQueue' :: MonadSTM m => TBQueue m a -> STM m (Maybe a) tryReadTBQueue' q = (Just <$> readTBQueue q) `orElse` pure Nothing -- | Flush the 'ChainSelQueue' queue and notify the waiting threads. --- closeChainSelQueue :: IOLike m => ChainSelQueue m blk -> STM m () closeChainSelQueue ChainSelQueue{varChainSelQueue = queue} = do as <- mapMaybe blockAdd <$> flushTBQueue queue - traverse_ (\a -> tryPutTMVar (varBlockProcessed a) - (FailedToAddBlock "Queue flushed")) - as - where - blockAdd = \case - ChainSelAddBlock ab -> Just ab - ChainSelReprocessLoEBlocks _ -> Nothing + traverse_ + ( \a -> + tryPutTMVar + (varBlockProcessed a) + (FailedToAddBlock "Queue flushed") + ) + as + where + blockAdd = \case + ChainSelAddBlock ab -> Just ab + ChainSelReprocessLoEBlocks _ -> Nothing -- | To invoke when the given 'ChainSelMessage' has been processed by ChainSel. -- This is used to remove the respective point from the multiset of points in -- the 'ChainSelQueue' (as the block has now been written to disk by ChainSel). 
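The `tryReadTBQueue'` workaround above is easy to exercise against the plain `stm` package; the real code is written against io-classes' `MonadSTM`, so this sketch simply instantiates the same definition at `IO` (assumed equivalent for illustration):

```haskell
import Control.Concurrent.STM

-- Non-blocking read, as in the workaround above: attempt the blocking
-- read, and fall back to 'Nothing' via 'orElse' if it would retry.
tryReadTBQueue' :: TBQueue a -> STM (Maybe a)
tryReadTBQueue' q = (Just <$> readTBQueue q) `orElse` pure Nothing

main :: IO ()
main = do
  q <- newTBQueueIO 4
  r0 <- atomically (tryReadTBQueue' q) -- nothing queued yet
  atomically (writeTBQueue q (1 :: Int))
  r1 <- atomically (tryReadTBQueue' q) -- one element available
  print (r0, r1)                       -- (Nothing,Just 1)
```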
processedChainSelMessage :: - (IOLike m, HasHeader blk) - => ChainSelQueue m blk - -> ChainSelMessage m blk - -> STM m () -processedChainSelMessage ChainSelQueue {varChainSelPoints} = \case - ChainSelAddBlock BlockToAdd{blockToAdd = blk} -> - modifyTVar varChainSelPoints $ MultiSet.delete (blockRealPoint blk) - ChainSelReprocessLoEBlocks{} -> - pure () + (IOLike m, HasHeader blk) => + ChainSelQueue m blk -> + ChainSelMessage m blk -> + STM m () +processedChainSelMessage ChainSelQueue{varChainSelPoints} = \case + ChainSelAddBlock BlockToAdd{blockToAdd = blk} -> + modifyTVar varChainSelPoints $ MultiSet.delete (blockRealPoint blk) + ChainSelReprocessLoEBlocks{} -> + pure () -- | Return a function to test the membership memberChainSelQueue :: - (IOLike m, HasHeader blk) - => ChainSelQueue m blk - -> STM m (RealPoint blk -> Bool) -memberChainSelQueue ChainSelQueue {varChainSelPoints} = - flip MultiSet.member <$> readTVar varChainSelPoints + (IOLike m, HasHeader blk) => + ChainSelQueue m blk -> + STM m (RealPoint blk -> Bool) +memberChainSelQueue ChainSelQueue{varChainSelPoints} = + flip MultiSet.member <$> readTVar varChainSelPoints getMaxSlotNoChainSelQueue :: - IOLike m - => ChainSelQueue m blk - -> STM m MaxSlotNo -getMaxSlotNoChainSelQueue ChainSelQueue {varChainSelPoints} = - aux <$> readTVar varChainSelPoints - where - -- | The 'Ord' instance of 'RealPoint' orders by 'SlotNo' first, so the - -- maximal key of the map has the greatest 'SlotNo'. - aux :: MultiSet (RealPoint blk) -> MaxSlotNo - aux pts = case MultiSet.maxView pts of - Nothing -> NoMaxSlotNo - Just (RealPoint s _, _) -> MaxSlotNo s + IOLike m => + ChainSelQueue m blk -> + STM m MaxSlotNo +getMaxSlotNoChainSelQueue ChainSelQueue{varChainSelPoints} = + aux <$> readTVar varChainSelPoints + where + -- \| The 'Ord' instance of 'RealPoint' orders by 'SlotNo' first, so the + -- maximal key of the map has the greatest 'SlotNo'. 
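The `aux` helper below leans on the fact that the point ordering compares the `SlotNo` first, so the maximal element of the set carries the greatest slot. A minimal sketch of that property, with `(slot, hash)` pairs standing in for `RealPoint` and `Maybe` for `MaxSlotNo` (stand-in types, not the real API):

```haskell
import qualified Data.Set as Set

-- (slot, hash): the derived Ord compares the slot first, which is the
-- property 'getMaxSlotNoChainSelQueue' relies on for 'RealPoint'.
type PointS = (Int, String)

maxSlotNo :: Set.Set PointS -> Maybe Int
maxSlotNo pts = case Set.maxView pts of
  Nothing -> Nothing               -- NoMaxSlotNo
  Just ((slot, _), _) -> Just slot -- MaxSlotNo slot

main :: IO ()
main = do
  print (maxSlotNo Set.empty)                                        -- Nothing
  print (maxSlotNo (Set.fromList [(1, "aa"), (5, "bb"), (3, "cc")])) -- Just 5
```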
+ aux :: MultiSet (RealPoint blk) -> MaxSlotNo + aux pts = case MultiSet.maxView pts of + Nothing -> NoMaxSlotNo + Just (RealPoint s _, _) -> MaxSlotNo s {------------------------------------------------------------------------------- Trace types @@ -654,64 +707,65 @@ getMaxSlotNoChainSelQueue ChainSelQueue {varChainSelPoints} = -- | Trace type for the various events of the ChainDB. data TraceEvent blk - = TraceAddBlockEvent (TraceAddBlockEvent blk) - | TraceFollowerEvent (TraceFollowerEvent blk) - | TraceCopyToImmutableDBEvent (TraceCopyToImmutableDBEvent blk) - | TraceGCEvent (TraceGCEvent blk) - | TraceInitChainSelEvent (TraceInitChainSelEvent blk) - | TraceOpenEvent (TraceOpenEvent blk) - | TraceIteratorEvent (TraceIteratorEvent blk) - | TraceLedgerDBEvent (LedgerDB.TraceEvent blk) - | TraceImmutableDBEvent (ImmutableDB.TraceEvent blk) - | TraceVolatileDBEvent (VolatileDB.TraceEvent blk) + = TraceAddBlockEvent (TraceAddBlockEvent blk) + | TraceFollowerEvent (TraceFollowerEvent blk) + | TraceCopyToImmutableDBEvent (TraceCopyToImmutableDBEvent blk) + | TraceGCEvent (TraceGCEvent blk) + | TraceInitChainSelEvent (TraceInitChainSelEvent blk) + | TraceOpenEvent (TraceOpenEvent blk) + | TraceIteratorEvent (TraceIteratorEvent blk) + | TraceLedgerDBEvent (LedgerDB.TraceEvent blk) + | TraceImmutableDBEvent (ImmutableDB.TraceEvent blk) + | TraceVolatileDBEvent (VolatileDB.TraceEvent blk) | TraceLastShutdownUnclean - | TraceChainSelStarvationEvent(TraceChainSelStarvationEvent blk) - deriving (Generic) + | TraceChainSelStarvationEvent (TraceChainSelStarvationEvent blk) + deriving Generic deriving instance ( Eq (Header blk) , LedgerSupportsProtocol blk , InspectLedger blk - ) => Eq (TraceEvent blk) + ) => + Eq (TraceEvent blk) deriving instance ( Show (Header blk) , LedgerSupportsProtocol blk , InspectLedger blk - ) => Show (TraceEvent blk) + ) => + Show (TraceEvent blk) -data TraceOpenEvent blk = - -- | The ChainDB started the process of opening. 
+data TraceOpenEvent blk + = -- | The ChainDB started the process of opening. StartedOpeningDB - -- | The ChainDB was opened. - | OpenedDB - (Point blk) -- ^ Immutable tip - (Point blk) -- ^ Tip of the current chain - - -- | The ChainDB was closed. - | ClosedDB - (Point blk) -- ^ Immutable tip - (Point blk) -- ^ Tip of the current chain - - -- | The ImmutableDB started the process of opening. - | StartedOpeningImmutableDB - - -- | The ImmutableDB was opened. - | OpenedImmutableDB - (Point blk) -- ^ Immutable tip - ImmutableDB.ChunkNo -- ^ Chunk number of the immutable tip - - -- | The VolatileDB started opening. - | StartedOpeningVolatileDB - - -- | The VolatileDB was opened, with the highest seen slot number for any + | -- | The ChainDB was opened. + OpenedDB + -- | Immutable tip + (Point blk) + -- | Tip of the current chain + (Point blk) + | -- | The ChainDB was closed. + ClosedDB + -- | Immutable tip + (Point blk) + -- | Tip of the current chain + (Point blk) + | -- | The ImmutableDB started the process of opening. + StartedOpeningImmutableDB + | -- | The ImmutableDB was opened. + OpenedImmutableDB + -- | Immutable tip + (Point blk) + -- | Chunk number of the immutable tip + ImmutableDB.ChunkNo + | -- | The VolatileDB started opening. + StartedOpeningVolatileDB + | -- | The VolatileDB was opened, with the highest seen slot number for any -- block currently in the DB. - | OpenedVolatileDB MaxSlotNo - - -- | The LedgerDB started opening. - | StartedOpeningLgrDB - - -- | The LedgerDB was opened. - | OpenedLgrDB + OpenedVolatileDB MaxSlotNo + | -- | The LedgerDB started opening. + StartedOpeningLgrDB + | -- | The LedgerDB was opened. + OpenedLgrDB deriving (Generic, Eq, Show) -- | Information on having changed our selection to a chain with a (necessarily) @@ -721,257 +775,238 @@ data TraceOpenEvent blk = -- forcing of this information in case it doesn't have to be traced. 
However, -- this means that the tracer processing this message /must not/ hold on to -- it, otherwise it leaks memory. -data SelectionChangedInfo blk = SelectionChangedInfo { - newTipPoint :: RealPoint blk - -- ^ The new tip of the current chain. - , newTipEpoch :: EpochNo - -- ^ The epoch of the new tip. - , newTipSlotInEpoch :: Word64 - -- ^ The slot in the epoch, i.e., the relative slot number, of the new - -- tip. - , newTipTrigger :: RealPoint blk - -- ^ The new tip of the current chain ('newTipPoint') is the result of - -- performing chain selection for a /trigger/ block ('newTipTrigger'). - -- In most cases, we add a new block to the tip of the current chain, in - -- which case the new tip /is/ the trigger block. - -- - -- However, this is not always the case. For example, with our current - -- chain being A and having a disconnected C lying around, adding B will - -- result in A -> B -> C as the new chain. The trigger B /= the new tip - -- C. - , newTipSelectView :: SelectView (BlockProtocol blk) - -- ^ The 'SelectView' of the new tip. It is guaranteed that - -- - -- > Just newTipSelectView > oldTipSelectView - -- True - , oldTipSelectView :: Maybe (SelectView (BlockProtocol blk)) - -- ^ The 'SelectView' of the old, previous tip. This can be 'Nothing' when - -- the previous chain/tip was Genesis. - } - deriving (Generic) - -deriving stock instance (Show (SelectView (BlockProtocol blk)), StandardHash blk) => Show (SelectionChangedInfo blk) -deriving stock instance (Eq (SelectView (BlockProtocol blk)), StandardHash blk) => Eq (SelectionChangedInfo blk) +data SelectionChangedInfo blk = SelectionChangedInfo + { newTipPoint :: RealPoint blk + -- ^ The new tip of the current chain. + , newTipEpoch :: EpochNo + -- ^ The epoch of the new tip. + , newTipSlotInEpoch :: Word64 + -- ^ The slot in the epoch, i.e., the relative slot number, of the new + -- tip. 
+ , newTipTrigger :: RealPoint blk
+ -- ^ The new tip of the current chain ('newTipPoint') is the result of
+ -- performing chain selection for a /trigger/ block ('newTipTrigger').
+ -- In most cases, we add a new block to the tip of the current chain, in
+ -- which case the new tip /is/ the trigger block.
+ --
+ -- However, this is not always the case. For example, with our current
+ -- chain being A and having a disconnected C lying around, adding B will
+ -- result in A -> B -> C as the new chain. The trigger B /= the new tip
+ -- C.
+ , newTipSelectView :: SelectView (BlockProtocol blk)
+ -- ^ The 'SelectView' of the new tip. It is guaranteed that
+ --
+ -- > Just newTipSelectView > oldTipSelectView
+ -- True
+ , oldTipSelectView :: Maybe (SelectView (BlockProtocol blk))
+ -- ^ The 'SelectView' of the old, previous tip. This can be 'Nothing' when
+ -- the previous chain/tip was Genesis.
+ }
+ deriving Generic
+
+deriving stock instance
+ (Show (SelectView (BlockProtocol blk)), StandardHash blk) => Show (SelectionChangedInfo blk)
+deriving stock instance
+ (Eq (SelectView (BlockProtocol blk)), StandardHash blk) => Eq (SelectionChangedInfo blk)
-- | Trace type for the various events that occur when adding a block.
-data TraceAddBlockEvent blk =
- -- | A block with a 'BlockNo' more than @k@ back than the current tip was
+data TraceAddBlockEvent blk
+ = -- | A block with a 'BlockNo' more than @k@ back from the current tip was
 -- ignored.
 IgnoreBlockOlderThanK (RealPoint blk)
-
- -- | A block that is already in the Volatile DB was ignored.
- | IgnoreBlockAlreadyInVolatileDB (RealPoint blk)
-
- -- | A block that is know to be invalid was ignored.
- | IgnoreInvalidBlock (RealPoint blk) (ExtValidationError blk)
-
- -- | The block was added to the queue and will be added to the ChainDB by
+ | -- | A block that is already in the Volatile DB was ignored.
+ IgnoreBlockAlreadyInVolatileDB (RealPoint blk)
+ | -- | A block that is known to be invalid was ignored.
+ IgnoreInvalidBlock (RealPoint blk) (ExtValidationError blk) + | -- | The block was added to the queue and will be added to the ChainDB by -- the background thread. The size of the queue is included. - | AddedBlockToQueue (RealPoint blk) (Enclosing' Word) - - -- | The block popped from the queue and will imminently be added to the + AddedBlockToQueue (RealPoint blk) (Enclosing' Word) + | -- | The block popped from the queue and will imminently be added to the -- ChainDB. - | PoppedBlockFromQueue (Enclosing' (RealPoint blk)) - - -- | A message was added to the queue that requests that ChainSel reprocess + PoppedBlockFromQueue (Enclosing' (RealPoint blk)) + | -- | A message was added to the queue that requests that ChainSel reprocess -- blocks that were postponed by the LoE. - | AddedReprocessLoEBlocksToQueue - - -- | ChainSel will reprocess blocks that were postponed by the LoE. - | PoppedReprocessLoEBlocksFromQueue - - -- | A block was added to the Volatile DB - | AddedBlockToVolatileDB (RealPoint blk) BlockNo IsEBB Enclosing - - -- | The block fits onto the current chain, we'll try to use it to extend + AddedReprocessLoEBlocksToQueue + | -- | ChainSel will reprocess blocks that were postponed by the LoE. + PoppedReprocessLoEBlocksFromQueue + | -- | A block was added to the Volatile DB + AddedBlockToVolatileDB (RealPoint blk) BlockNo IsEBB Enclosing + | -- | The block fits onto the current chain, we'll try to use it to extend -- our chain. - | TryAddToCurrentChain (RealPoint blk) - - -- | The block fits onto some fork, we'll try to switch to that fork (if + TryAddToCurrentChain (RealPoint blk) + | -- | The block fits onto some fork, we'll try to switch to that fork (if -- it is preferable to our chain). 
- | TrySwitchToAFork (RealPoint blk) (ChainDiff (HeaderFields blk)) - - -- | The block doesn't fit onto any other block, so we store it and ignore + TrySwitchToAFork (RealPoint blk) (ChainDiff (HeaderFields blk)) + | -- | The block doesn't fit onto any other block, so we store it and ignore -- it. - | StoreButDontChange (RealPoint blk) - - -- | Debugging information about chain selection and LoE - | ChainSelectionLoEDebug (AnchoredFragment (Header blk)) (LoE (AnchoredFragment (Header blk))) - - -- | The new block fits onto the current chain (first + StoreButDontChange (RealPoint blk) + | -- | Debugging information about chain selection and LoE + ChainSelectionLoEDebug (AnchoredFragment (Header blk)) (LoE (AnchoredFragment (Header blk))) + | -- | The new block fits onto the current chain (first -- fragment) and we have successfully used it to extend our (new) current -- chain (second fragment). - | AddedToCurrentChain + AddedToCurrentChain [LedgerEvent blk] (SelectionChangedInfo blk) (AnchoredFragment (Header blk)) (AnchoredFragment (Header blk)) - - -- | The new block fits onto some fork and we have switched to that fork + | -- | The new block fits onto some fork and we have switched to that fork -- (second fragment), as it is preferable to our (previous) current chain -- (first fragment). - | SwitchedToAFork + SwitchedToAFork [LedgerEvent blk] (SelectionChangedInfo blk) (AnchoredFragment (Header blk)) (AnchoredFragment (Header blk)) - - -- | An event traced during validating performed while adding a block. - | AddBlockValidation (TraceValidationEvent blk) - - -- | The tentative header (in the context of diffusion pipelining) has been + | -- | An event traced during validating performed while adding a block. + AddBlockValidation (TraceValidationEvent blk) + | -- | The tentative header (in the context of diffusion pipelining) has been -- updated. - | PipeliningEvent (TracePipeliningEvent blk) - - -- | Herald of 'AddedToCurrentChain' or 'SwitchedToAFork'. 
Lists the tip of + PipeliningEvent (TracePipeliningEvent blk) + | -- | Herald of 'AddedToCurrentChain' or 'SwitchedToAFork'. Lists the tip of -- the new chain. - | ChangingSelection (Point blk) - - deriving (Generic) + ChangingSelection (Point blk) + deriving Generic deriving instance ( Eq (Header blk) , LedgerSupportsProtocol blk , InspectLedger blk - ) => Eq (TraceAddBlockEvent blk) + ) => + Eq (TraceAddBlockEvent blk) deriving instance ( Show (Header blk) , LedgerSupportsProtocol blk , InspectLedger blk - ) => Show (TraceAddBlockEvent blk) + ) => + Show (TraceAddBlockEvent blk) -data TraceValidationEvent blk = - -- | A point was found to be invalid. +data TraceValidationEvent blk + = -- | A point was found to be invalid. InvalidBlock (ExtValidationError blk) (RealPoint blk) - - -- | A candidate chain was valid. - | ValidCandidate (AnchoredFragment (Header blk)) - + | -- | A candidate chain was valid. + ValidCandidate (AnchoredFragment (Header blk)) | UpdateLedgerDbTraceEvent (LedgerDB.TraceValidateEvent blk) - deriving (Generic) + deriving Generic deriving instance - ( Eq (Header blk) + ( Eq (Header blk) , LedgerSupportsProtocol blk - ) => Eq (TraceValidationEvent blk) + ) => + Eq (TraceValidationEvent blk) deriving instance - ( Show (Header blk) + ( Show (Header blk) , LedgerSupportsProtocol blk - ) => Show (TraceValidationEvent blk) + ) => + Show (TraceValidationEvent blk) -data TracePipeliningEvent blk = - -- | A new tentative header got set. +data TracePipeliningEvent blk + = -- | A new tentative header got set. SetTentativeHeader (Header blk) Enclosing - -- | The body of tentative block turned out to be invalid. - | TrapTentativeHeader (Header blk) - -- | We selected a new (better) chain, which cleared the previous tentative + | -- | The body of tentative block turned out to be invalid. + TrapTentativeHeader (Header blk) + | -- | We selected a new (better) chain, which cleared the previous tentative -- header. 
- | OutdatedTentativeHeader (Header blk) + OutdatedTentativeHeader (Header blk) -deriving stock instance Eq (Header blk) => Eq (TracePipeliningEvent blk) +deriving stock instance Eq (Header blk) => Eq (TracePipeliningEvent blk) deriving stock instance Show (Header blk) => Show (TracePipeliningEvent blk) -data TraceInitChainSelEvent blk = - StartedInitChainSelection - -- ^ An event traced when inital chain selection has started during the +data TraceInitChainSelEvent blk + = -- | An event traced when inital chain selection has started during the -- initialization of ChainDB - | InitialChainSelected - -- ^ An event traced when inital chain has been selected - | InitChainSelValidation (TraceValidationEvent blk) - -- ^ An event traced during validation performed while performing initial + StartedInitChainSelection + | -- | An event traced when inital chain has been selected + InitialChainSelected + | -- | An event traced during validation performed while performing initial -- chain selection. - deriving (Generic) + InitChainSelValidation (TraceValidationEvent blk) + deriving Generic deriving instance - ( Eq (Header blk) + ( Eq (Header blk) , LedgerSupportsProtocol blk - ) => Eq (TraceInitChainSelEvent blk) + ) => + Eq (TraceInitChainSelEvent blk) deriving instance - ( Show (Header blk) + ( Show (Header blk) , LedgerSupportsProtocol blk - ) => Show (TraceInitChainSelEvent blk) + ) => + Show (TraceInitChainSelEvent blk) - -data TraceFollowerEvent blk = - -- | A new follower was created. +data TraceFollowerEvent blk + = -- | A new follower was created. NewFollower - - -- | The follower was in the 'FollowerInMem' state but its point is no longer on + | -- | The follower was in the 'FollowerInMem' state but its point is no longer on -- the in-memory chain fragment, so it has to switch to the -- 'FollowerInImmutableDB' state. 
- | FollowerNoLongerInMem (FollowerRollState blk) - - -- | The follower was in the 'FollowerInImmutableDB' state and is switched to + FollowerNoLongerInMem (FollowerRollState blk) + | -- | The follower was in the 'FollowerInImmutableDB' state and is switched to -- the 'FollowerInMem' state. - | FollowerSwitchToMem - (Point blk) -- ^ Point at which the follower is - (WithOrigin SlotNo) -- ^ Slot number at the tip of the ImmutableDB - - -- | The follower is in the 'FollowerInImmutableDB' state but the iterator is + FollowerSwitchToMem + -- | Point at which the follower is + (Point blk) + -- | Slot number at the tip of the ImmutableDB + (WithOrigin SlotNo) + | -- | The follower is in the 'FollowerInImmutableDB' state but the iterator is -- exhausted while the ImmutableDB has grown, so we open a new iterator to -- stream these blocks too. - | FollowerNewImmIterator - (Point blk) -- ^ Point at which the follower is - (WithOrigin SlotNo) -- ^ Slot number at the tip of the ImmutableDB + FollowerNewImmIterator + -- | Point at which the follower is + (Point blk) + -- | Slot number at the tip of the ImmutableDB + (WithOrigin SlotNo) deriving (Generic, Eq, Show) - data TraceCopyToImmutableDBEvent blk - = CopiedBlockToImmutableDB (Point blk) - -- ^ A block was successfully copied to the ImmutableDB. - | NoBlocksToCopyToImmutableDB - -- ^ There are no block to copy to the ImmutableDB. + = -- | A block was successfully copied to the ImmutableDB. + CopiedBlockToImmutableDB (Point blk) + | -- | There are no block to copy to the ImmutableDB. + NoBlocksToCopyToImmutableDB deriving (Generic, Eq, Show) data TraceGCEvent blk - = ScheduledGC SlotNo Time - -- ^ A garbage collection for the given 'SlotNo' was scheduled to happen + = -- | A garbage collection for the given 'SlotNo' was scheduled to happen -- at the given time. - | PerformedGC SlotNo - -- ^ A garbage collection for the given 'SlotNo' was performed. 
+ ScheduledGC SlotNo Time + | -- | A garbage collection for the given 'SlotNo' was performed. + PerformedGC SlotNo deriving (Generic, Eq, Show) data TraceIteratorEvent blk - -- | An unknown range was requested, see 'UnknownRange'. - = UnknownRangeRequested (UnknownRange blk) - - -- | Stream only from the VolatileDB. - | StreamFromVolatileDB + = -- | An unknown range was requested, see 'UnknownRange'. + UnknownRangeRequested (UnknownRange blk) + | -- | Stream only from the VolatileDB. + StreamFromVolatileDB (StreamFrom blk) - (StreamTo blk) - [RealPoint blk] - - -- | Stream only from the ImmutableDB. - | StreamFromImmutableDB + (StreamTo blk) + [RealPoint blk] + | -- | Stream only from the ImmutableDB. + StreamFromImmutableDB (StreamFrom blk) - (StreamTo blk) - - -- | Stream from both the VolatileDB and the ImmutableDB. - | StreamFromBoth + (StreamTo blk) + | -- | Stream from both the VolatileDB and the ImmutableDB. + StreamFromBoth (StreamFrom blk) - (StreamTo blk) - [RealPoint blk] - - -- | A block is no longer in the VolatileDB because it has been garbage + (StreamTo blk) + [RealPoint blk] + | -- | A block is no longer in the VolatileDB because it has been garbage -- collected. It might now be in the ImmutableDB if it was part of the -- current chain. - | BlockMissingFromVolatileDB (RealPoint blk) - - -- | A block that has been garbage collected from the VolatileDB is now + BlockMissingFromVolatileDB (RealPoint blk) + | -- | A block that has been garbage collected from the VolatileDB is now -- found and streamed from the ImmutableDB. - | BlockWasCopiedToImmutableDB (RealPoint blk) - - -- | A block is no longer in the VolatileDB and isn't in the ImmutableDB + BlockWasCopiedToImmutableDB (RealPoint blk) + | -- | A block is no longer in the VolatileDB and isn't in the ImmutableDB -- either; it wasn't part of the current chain. 
-  | BlockGCedFromVolatileDB (RealPoint blk)
-
-    -- | We have streamed one or more blocks from the ImmutableDB that were part
+    BlockGCedFromVolatileDB (RealPoint blk)
+  | -- | We have streamed one or more blocks from the ImmutableDB that were part
     -- of the VolatileDB when initialising the iterator. Now, we have to look
     -- back in the VolatileDB again because the ImmutableDB doesn't have the
    -- next block we're looking for.
-  | SwitchBackToVolatileDB
+    SwitchBackToVolatileDB
   deriving (Generic, Eq, Show)
 
 -- | Chain selection is /starved/ when the background thread runs out of work.
@@ -983,6 +1018,6 @@ data TraceIteratorEvent blk
 -- times.
 --
 -- The point in the trace is the block that finished the starvation.
-newtype TraceChainSelStarvationEvent blk =
-  ChainSelStarvation (Enclosing' (RealPoint blk))
+newtype TraceChainSelStarvationEvent blk
+  = ChainSelStarvation (Enclosing' (RealPoint blk))
   deriving (Generic, Eq, Show)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Init.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Init.hs
index 0c140aed3e..cda71c74f3 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Init.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ChainDB/Init.hs
@@ -4,45 +4,47 @@
 --
 -- > import Ouroboros.Consensus.Storage.ChainDB.Init (InitChainDB)
 -- > import qualified Ouroboros.Consensus.Storage.ChainDB.Init as InitChainDB
-module Ouroboros.Consensus.Storage.ChainDB.Init (
-    InitChainDB (..)
+module Ouroboros.Consensus.Storage.ChainDB.Init
+  ( InitChainDB (..)
   , fromFull
   , map
   ) where
 
-import           Ouroboros.Consensus.Ledger.Abstract
-import           Ouroboros.Consensus.Ledger.Extended
-import           Ouroboros.Consensus.Storage.ChainDB.API (ChainDB)
-import qualified Ouroboros.Consensus.Storage.ChainDB.API as ChainDB
-import qualified Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment as InvalidBlockPunishment
-import           Ouroboros.Consensus.Util.IOLike
-import           Prelude hiding (map)
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.Extended
+import Ouroboros.Consensus.Storage.ChainDB.API (ChainDB)
+import Ouroboros.Consensus.Storage.ChainDB.API qualified as ChainDB
+import Ouroboros.Consensus.Storage.ChainDB.API.Types.InvalidBlockPunishment qualified as InvalidBlockPunishment
+import Ouroboros.Consensus.Util.IOLike
+import Prelude hiding (map)
 
 -- | Restricted interface to the 'ChainDB' used on node initialization
-data InitChainDB m blk = InitChainDB {
-      -- | Add a block to the DB
-      addBlock         :: blk -> m ()
-
-      -- | Return the current ledger state
-    , getCurrentLedger :: m (LedgerState blk EmptyMK)
-    }
+data InitChainDB m blk = InitChainDB
+  { addBlock :: blk -> m ()
+  -- ^ Add a block to the DB
+  , getCurrentLedger :: m (LedgerState blk EmptyMK)
+  -- ^ Return the current ledger state
+  }
 
 fromFull ::
-     IOLike m
-  => ChainDB m blk -> InitChainDB m blk
-fromFull db = InitChainDB {
-      addBlock =
+  IOLike m =>
+  ChainDB m blk -> InitChainDB m blk
+fromFull db =
+  InitChainDB
+    { addBlock =
         ChainDB.addBlock_ db InvalidBlockPunishment.noPunishment
     , getCurrentLedger =
         atomically $ ledgerState <$> ChainDB.getCurrentLedger db
     }
 
 map ::
-     Functor m
-  => (blk' -> blk)
-  -> (LedgerState blk EmptyMK -> LedgerState blk' EmptyMK)
-  -> InitChainDB m blk -> InitChainDB m blk'
-map f g db = InitChainDB {
-      addBlock = addBlock db . f
+  Functor m =>
+  (blk' -> blk) ->
+  (LedgerState blk EmptyMK -> LedgerState blk' EmptyMK) ->
+  InitChainDB m blk ->
+  InitChainDB m blk'
+map f g db =
+  InitChainDB
+    { addBlock = addBlock db . f
     , getCurrentLedger = g <$> getCurrentLedger db
     }
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/Common.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/Common.hs
index 985755b219..4a318a481d 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/Common.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/Common.hs
@@ -6,42 +6,47 @@
 {-# LANGUAGE NamedFieldPuns #-}
 {-# LANGUAGE TypeFamilies #-}
 
-module Ouroboros.Consensus.Storage.Common (
-    -- * Indexing
+module Ouroboros.Consensus.Storage.Common
+  ( -- * Indexing
     tipIsGenesis
+    -- * PrefixLen
   , PrefixLen (..)
   , addPrefixLen
   , takePrefix
+    -- * BinaryBlockInfo
   , BinaryBlockInfo (..)
   , extractHeader
+    -- * Iterator bounds
   , StreamFrom (..)
   , StreamTo (..)
   , validBounds
+    -- * BlockComponent
   , BlockComponent (..)
+    -- * Re-exports
   , SizeInBytes
   ) where
 
-import           Data.ByteString.Lazy (ByteString)
-import qualified Data.ByteString.Lazy as BL
-import           Data.ByteString.Short (ShortByteString)
-import qualified Data.ByteString.Short as Short
-import           Data.Word
-import           GHC.Generics (Generic)
-import           NoThunks.Class (NoThunks)
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Network.SizeInBytes (SizeInBytes)
+import Data.ByteString.Lazy (ByteString)
+import Data.ByteString.Lazy qualified as BL
+import Data.ByteString.Short (ShortByteString)
+import Data.ByteString.Short qualified as Short
+import Data.Word
+import GHC.Generics (Generic)
+import NoThunks.Class (NoThunks)
+import Ouroboros.Consensus.Block
+import Ouroboros.Network.SizeInBytes (SizeInBytes)
 
 {-------------------------------------------------------------------------------
   Indexing
-------------------------------------------------------------------------------}
 
 tipIsGenesis :: WithOrigin r -> Bool
-tipIsGenesis Origin        = True
+tipIsGenesis Origin = True
 tipIsGenesis (NotOrigin _) = False
@@ -52,18 +57,18 @@ tipIsGenesis (NotOrigin _) = False
 -- nested context.
 --
 -- See 'reconstructPrefixLen'.
-newtype PrefixLen = PrefixLen {
-      getPrefixLen :: Word8
-    }
-  deriving stock (Eq, Ord, Show, Generic)
-  deriving newtype (NoThunks)
+newtype PrefixLen = PrefixLen
+  { getPrefixLen :: Word8
+  }
+  deriving stock (Eq, Ord, Show, Generic)
+  deriving newtype NoThunks
 
 addPrefixLen :: Word8 -> PrefixLen -> PrefixLen
 addPrefixLen m (PrefixLen n) = PrefixLen (m + n)
 
 takePrefix :: PrefixLen -> BL.ByteString -> ShortByteString
 takePrefix (PrefixLen n) =
-    Short.toShort . BL.toStrict . BL.take (fromIntegral n)
+  Short.toShort . BL.toStrict . BL.take (fromIntegral n)
@@ -72,24 +77,23 @@ takePrefix (PrefixLen n) =
 
 -- | Information about the serialised block.
 data BinaryBlockInfo = BinaryBlockInfo
   { headerOffset :: !Word16
-    -- ^ The offset within the serialised block at which the header starts.
-  , headerSize   :: !Word16
-    -- ^ How many bytes the header is long. Extracting the 'headerSize' bytes
-    -- from serialised block starting from 'headerOffset' should yield the
-    -- header. Before passing the extracted bytes to the decoder for headers,
-    -- an envelope can be around using 'nodeAddHeaderEnvelope'.
-
-    -- In the future, i.e. Shelley, we might want to extend this to include a
-    -- field to tell where the transaction body ends and where the transaction
-    -- witnesses begin so we can only extract the transaction body.
-  } deriving (Eq, Show, Generic)
-
+  -- ^ The offset within the serialised block at which the header starts.
+  , headerSize :: !Word16
+  -- ^ How many bytes the header is long. Extracting the 'headerSize' bytes
+  -- from the serialised block starting from 'headerOffset' should yield the
+  -- header. Before passing the extracted bytes to the decoder for headers,
+  -- an envelope can be put around them using 'nodeAddHeaderEnvelope'.
+  }
+  -- In the future, i.e. Shelley, we might want to extend this to include a
+  -- field to tell where the transaction body ends and where the transaction
+  -- witnesses begin so we can only extract the transaction body.
+  deriving (Eq, Show, Generic)
 
 -- | Extract the header from the given 'ByteString' using the
 -- 'BinaryBlockInfo'.
 extractHeader :: BinaryBlockInfo -> ByteString -> ByteString
-extractHeader BinaryBlockInfo { headerOffset, headerSize } =
-    BL.take (fromIntegral headerSize)
+extractHeader BinaryBlockInfo{headerOffset, headerSize} =
+  BL.take (fromIntegral headerSize)
     . BL.drop (fromIntegral headerOffset)
@@ -100,16 +104,16 @@ extractHeader BinaryBlockInfo { headerOffset, headerSize } =
 --
 -- Hint: use @'StreamFromExclusive' 'genesisPoint'@ to start streaming from
 -- Genesis.
-data StreamFrom blk =
-    StreamFromInclusive !(RealPoint blk)
-  | StreamFromExclusive !(Point blk)
-  deriving stock (Show, Eq, Generic)
-  deriving anyclass (NoThunks)
+data StreamFrom blk
+  = StreamFromInclusive !(RealPoint blk)
+  | StreamFromExclusive !(Point blk)
+  deriving stock (Show, Eq, Generic)
+  deriving anyclass NoThunks
 
-newtype StreamTo blk =
-    StreamToInclusive (RealPoint blk)
-  deriving stock (Show, Eq, Generic)
-  deriving anyclass (NoThunks)
+newtype StreamTo blk
+  = StreamToInclusive (RealPoint blk)
+  deriving stock (Show, Eq, Generic)
+  deriving anyclass NoThunks
 
 -- | Check whether the bounds make sense
 --
@@ -122,16 +126,16 @@ newtype StreamTo blk =
 -- blocks.
 validBounds :: StandardHash blk => StreamFrom blk -> StreamTo blk -> Bool
 validBounds from (StreamToInclusive (RealPoint sto hto)) =
-    case from of
-      StreamFromExclusive GenesisPoint -> True
-      -- EBBs spoil the fun again: when 'StreamFromExclusive' refers to an EBB
-      -- in slot X and 'StreamToInclusive' to the regular block in the same slot
-      -- X, the bound is still valid. Without EBBs, we would have @sfrom < sto@.
-      --
-      -- We /can/ rule out streaming exclusively from the block to the same
-      -- block.
-      StreamFromExclusive (BlockPoint sfrom hfrom) -> hfrom /= hto && sfrom <= sto
-      StreamFromInclusive (RealPoint sfrom _) -> sfrom <= sto
+  case from of
+    StreamFromExclusive GenesisPoint -> True
+    -- EBBs spoil the fun again: when 'StreamFromExclusive' refers to an EBB
+    -- in slot X and 'StreamToInclusive' to the regular block in the same slot
+    -- X, the bound is still valid. Without EBBs, we would have @sfrom < sto@.
+    --
+    -- We /can/ rule out streaming exclusively from the block to the same
+    -- block.
+    StreamFromExclusive (BlockPoint sfrom hfrom) -> hfrom /= hto && sfrom <= sto
+    StreamFromInclusive (RealPoint sfrom _) -> sfrom <= sto
 
 {-------------------------------------------------------------------------------
   BlockComponent
-------------------------------------------------------------------------------}
@@ -146,25 +150,27 @@ data BlockComponent blk a where
   -- hashes. The interpreter should throw an exception when the block does not
   -- pass the check.
   GetVerifiedBlock :: BlockComponent blk blk
-  GetBlock         :: BlockComponent blk blk
-  GetRawBlock      :: BlockComponent blk ByteString
-  GetHeader        :: BlockComponent blk (Header blk)
-  GetRawHeader     :: BlockComponent blk ByteString
-  GetHash          :: BlockComponent blk (HeaderHash blk)
-  GetSlot          :: BlockComponent blk SlotNo
-  GetIsEBB         :: BlockComponent blk IsEBB
-  GetBlockSize     :: BlockComponent blk SizeInBytes
-  GetHeaderSize    :: BlockComponent blk Word16
-  GetNestedCtxt    :: BlockComponent blk (SomeSecond (NestedCtxt Header) blk)
-  GetPure          :: a
-                   -> BlockComponent blk a
-  GetApply         :: BlockComponent blk (a -> b)
-                   -> BlockComponent blk a
-                   -> BlockComponent blk b
+  GetBlock :: BlockComponent blk blk
+  GetRawBlock :: BlockComponent blk ByteString
+  GetHeader :: BlockComponent blk (Header blk)
+  GetRawHeader :: BlockComponent blk ByteString
+  GetHash :: BlockComponent blk (HeaderHash blk)
+  GetSlot :: BlockComponent blk SlotNo
+  GetIsEBB :: BlockComponent blk IsEBB
+  GetBlockSize :: BlockComponent blk SizeInBytes
+  GetHeaderSize :: BlockComponent blk Word16
+  GetNestedCtxt :: BlockComponent blk (SomeSecond (NestedCtxt Header) blk)
+  GetPure ::
+    a ->
+    BlockComponent blk a
+  GetApply ::
+    BlockComponent blk (a -> b) ->
+    BlockComponent blk a ->
+    BlockComponent blk b
 
 instance Functor (BlockComponent blk) where
   fmap f = (GetPure f <*>)
 
 instance Applicative (BlockComponent blk) where
-  pure  = GetPure
+  pure = GetPure
   (<*>) = GetApply
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB.hs
index 5264fe0c3a..1eecb28015 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB.hs
@@ -1,5 +1,5 @@
 module Ouroboros.Consensus.Storage.ImmutableDB (module X) where
 
-import           Ouroboros.Consensus.Storage.ImmutableDB.API as X
-import           Ouroboros.Consensus.Storage.ImmutableDB.Chunks as X
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl as X
+import Ouroboros.Consensus.Storage.ImmutableDB.API as X
+import Ouroboros.Consensus.Storage.ImmutableDB.Chunks as X
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl as X
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/API.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/API.hs
index b35eb013e8..ad21b5e466 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/API.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/API.hs
@@ -10,14 +10,16 @@
 {-# LANGUAGE ScopedTypeVariables #-}
 {-# LANGUAGE StandaloneDeriving #-}
 
-module Ouroboros.Consensus.Storage.ImmutableDB.API (
-    -- * API
+module Ouroboros.Consensus.Storage.ImmutableDB.API
+  ( -- * API
     ImmutableDB (..)
+    -- * Iterator API
   , Iterator (..)
   , IteratorResult (..)
   , iteratorToList
   , traverseIterator
+    -- * Types
   , CompareTip (..)
   , SecondaryOffset
@@ -27,6 +29,7 @@ module Ouroboros.Consensus.Storage.ImmutableDB.API (
   , tipToAnchor
   , tipToPoint
   , tipToRealPoint
+    -- * Errors
   , ApiMisuse (..)
   , ImmutableDBError (..)
@@ -35,12 +38,14 @@ module Ouroboros.Consensus.Storage.ImmutableDB.API (
   , missingBlockPoint
   , throwApiMisuse
   , throwUnexpectedFailure
+    -- * Wrappers that preserve 'HasCallStack'
   , appendBlock
   , closeDB
   , getBlockComponent
   , getTip
   , stream
+    -- * Derived functionality
   , getKnownBlockComponent
   , getTipAnchor
@@ -53,27 +58,27 @@ module Ouroboros.Consensus.Storage.ImmutableDB.API (
   , withDB
   ) where
 
-import qualified Codec.CBOR.Read as CBOR
-import           Control.Monad.Except (ExceptT (..), runExceptT, throwError)
-import           Control.Monad.Trans.Class (lift)
-import           Control.ResourceRegistry (ResourceRegistry)
-import qualified Data.ByteString.Lazy as Lazy
-import           Data.Either (isRight)
-import           Data.Function (on)
-import           Data.List.NonEmpty (NonEmpty)
-import           Data.Sequence.Strict (StrictSeq)
-import           Data.Typeable (Typeable)
-import           Data.Word (Word32)
-import           GHC.Generics (Generic)
-import           NoThunks.Class (OnlyCheckWhnfNamed (..))
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Storage.Common
-import           Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal
-import           Ouroboros.Consensus.Util.CallStack
-import           Ouroboros.Consensus.Util.IOLike
-import qualified Ouroboros.Network.AnchoredFragment as AF
-import           System.FS.API.Types (FsError, FsPath)
-import           System.FS.CRC (CRC)
+import Codec.CBOR.Read qualified as CBOR
+import Control.Monad.Except (ExceptT (..), runExceptT, throwError)
+import Control.Monad.Trans.Class (lift)
+import Control.ResourceRegistry (ResourceRegistry)
+import Data.ByteString.Lazy qualified as Lazy
+import Data.Either (isRight)
+import Data.Function (on)
+import Data.List.NonEmpty (NonEmpty)
+import Data.Sequence.Strict (StrictSeq)
+import Data.Typeable (Typeable)
+import Data.Word (Word32)
+import GHC.Generics (Generic)
+import NoThunks.Class (OnlyCheckWhnfNamed (..))
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Storage.Common
+import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal
+import Ouroboros.Consensus.Util.CallStack
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Network.AnchoredFragment qualified as AF
+import System.FS.API.Types (FsError, FsPath)
+import System.FS.CRC (CRC)
 
 {-------------------------------------------------------------------------------
   API
-------------------------------------------------------------------------------}
@@ -108,69 +113,68 @@ type SecondaryOffset = Word32
 --
 -- The database can be explicitly closed, but can also be automatically closed
 -- in case of an 'UnexpectedFailure'.
-data ImmutableDB m blk = ImmutableDB {
-      -- | Close the database.
-      --
-      -- Idempotent.
-      --
-      -- __Note__: Use 'withDB' instead of this function.
-      closeDB_ :: HasCallStack => m ()
-
-      -- | Return the tip of the database.
-      --
-      -- The tip of the database will never point to an unfilled slot or missing
-      -- EBB.
-      --
-      -- Throws a 'ClosedDBError' if the database is closed.
-    , getTip_ :: HasCallStack => STM m (WithOrigin (Tip blk))
-
-      -- | Get the block component of the block with the given 'Point'.
-      --
-      -- The hash of the point is used to distinguish a potential EBB from the
-      -- regular block in the same slot.
-      --
-      -- Returns a 'MissingBlockError' if no block was stored with the given
-      -- 'Point', either because the slot was empty or because the block stored
-      -- with that slot had a different hash.
-      --
-      -- Throws a 'ClosedDBError' if the database is closed.
-    , getBlockComponent_ ::
-        forall b. HasCallStack
-        => BlockComponent blk b -> RealPoint blk -> m (Either (MissingBlock blk) b)
-
-      -- | Appends a block to the ImmutableDB.
-      --
-      -- Throws an 'AppendBlockNotNewerThanTipError' if the given slot is <= the
-      -- result of 'getTip'.
-      --
-      -- Throws a 'ClosedDBError' if the database is closed.
-    , appendBlock_
-        :: HasCallStack => blk -> m ()
-
-      -- | Return an 'Iterator' to efficiently stream blocks from the
-      -- ImmutableDB.
-      --
-      -- Throws an 'InvalidIteratorRangeError' if the start of the range is
-      -- greater than the end of the range.
-      --
-      -- NOTE: 'MissingBlock' is returned, but 'InvalidIteratorRangeError' is
-      -- thrown. This is because the former is expected to occur during normal
-      -- operation: a node serving blocks might get requests to stream blocks
-      -- that are not in the database. The latter exception indicates incorrect
-      -- usage and should not happen during normal operation.
-      --
-      -- Throws a 'ClosedDBError' if the database is closed.
-      --
-      -- The iterator is automatically closed when exhausted, and can be
-      -- prematurely closed with 'iteratorClose'.
-    , stream_
-        :: forall b. HasCallStack
-        => ResourceRegistry m
-        -> BlockComponent blk b
-        -> StreamFrom blk
-        -> StreamTo blk
-        -> m (Either (MissingBlock blk) (Iterator m blk b))
-    }
+data ImmutableDB m blk = ImmutableDB
+  { closeDB_ :: HasCallStack => m ()
+  -- ^ Close the database.
+  --
+  -- Idempotent.
+  --
+  -- __Note__: Use 'withDB' instead of this function.
+  , getTip_ :: HasCallStack => STM m (WithOrigin (Tip blk))
+  -- ^ Return the tip of the database.
+  --
+  -- The tip of the database will never point to an unfilled slot or missing
+  -- EBB.
+  --
+  -- Throws a 'ClosedDBError' if the database is closed.
+  , getBlockComponent_ ::
+      forall b.
+      HasCallStack =>
+      BlockComponent blk b -> RealPoint blk -> m (Either (MissingBlock blk) b)
+  -- ^ Get the block component of the block with the given 'Point'.
+  --
+  -- The hash of the point is used to distinguish a potential EBB from the
+  -- regular block in the same slot.
+  --
+  -- Returns a 'MissingBlockError' if no block was stored with the given
+  -- 'Point', either because the slot was empty or because the block stored
+  -- with that slot had a different hash.
+  --
+  -- Throws a 'ClosedDBError' if the database is closed.
+  , appendBlock_ ::
+      HasCallStack =>
+      blk -> m ()
+  -- ^ Appends a block to the ImmutableDB.
+  --
+  -- Throws an 'AppendBlockNotNewerThanTipError' if the given slot is <= the
+  -- result of 'getTip'.
+  --
+  -- Throws a 'ClosedDBError' if the database is closed.
+  , stream_ ::
+      forall b.
+      HasCallStack =>
+      ResourceRegistry m ->
+      BlockComponent blk b ->
+      StreamFrom blk ->
+      StreamTo blk ->
+      m (Either (MissingBlock blk) (Iterator m blk b))
+  -- ^ Return an 'Iterator' to efficiently stream blocks from the
+  -- ImmutableDB.
+  --
+  -- Throws an 'InvalidIteratorRangeError' if the start of the range is
+  -- greater than the end of the range.
+  --
+  -- NOTE: 'MissingBlock' is returned, but 'InvalidIteratorRangeError' is
+  -- thrown. This is because the former is expected to occur during normal
+  -- operation: a node serving blocks might get requests to stream blocks
+  -- that are not in the database. The latter exception indicates incorrect
+  -- usage and should not happen during normal operation.
+  --
+  -- Throws a 'ClosedDBError' if the database is closed.
+  --
+  -- The iterator is automatically closed when exhausted, and can be
+  -- prematurely closed with 'iteratorClose'.
+  }
   deriving NoThunks via OnlyCheckWhnfNamed "ImmutableDB" (ImmutableDB m blk)
 
 {-------------------------------------------------------------------------------
   Iterator
-------------------------------------------------------------------------------}
 
 -- | An 'Iterator' is a handle which can be used to efficiently stream block
 -- components from the ImmutableDB.
-data Iterator m blk b = Iterator {
-      -- | Steps an 'Iterator' yielding an 'IteratorResult'.
-      --
-      -- After returning the block component as an 'IteratorResult', the
-      -- iterator is advanced to the next non-empty slot or non-empty EBB.
-      --
-      -- Throws a 'ClosedDBError' if the database is closed.
-      --
-      -- The iterator is automatically closed when exhausted
-      -- ('IteratorExhausted'), and can be prematurely closed with
-      -- 'iteratorClose'.
-      iteratorNext :: HasCallStack => m (IteratorResult b)
-
-      -- | Return the point of the next block to stream, if there is one. Return
-      -- 'Nothing' if not.
-      --
-      -- This operation is idempotent.
-    , iteratorHasNext :: HasCallStack => STM m (Maybe (RealPoint blk))
-
-      -- | Dispose of the 'Iterator' by closing any open handles.
-      --
-      -- Idempotent operation.
-    , iteratorClose :: HasCallStack => m ()
-    }
-  deriving (Functor)
+data Iterator m blk b = Iterator
+  { iteratorNext :: HasCallStack => m (IteratorResult b)
+  -- ^ Steps an 'Iterator' yielding an 'IteratorResult'.
+  --
+  -- After returning the block component as an 'IteratorResult', the
+  -- iterator is advanced to the next non-empty slot or non-empty EBB.
+  --
+  -- Throws a 'ClosedDBError' if the database is closed.
+  --
+  -- The iterator is automatically closed when exhausted
+  -- ('IteratorExhausted'), and can be prematurely closed with
+  -- 'iteratorClose'.
  , iteratorHasNext :: HasCallStack => STM m (Maybe (RealPoint blk))
+  -- ^ Return the point of the next block to stream, if there is one. Return
+  -- 'Nothing' if not.
+  --
+  -- This operation is idempotent.
+  , iteratorClose :: HasCallStack => m ()
+  -- ^ Dispose of the 'Iterator' by closing any open handles.
+  --
+  -- Idempotent operation.
+  }
+  deriving Functor
   deriving NoThunks via OnlyCheckWhnfNamed "Iterator" (Iterator m blk b)
 
 -- | Variant of 'traverse' instantiated to @'Iterator' m blk m@ that executes
 -- the monadic function when calling 'iteratorNext'.
 traverseIterator ::
-     Monad m
-  => (b -> m b')
-  -> Iterator m blk b
-  -> Iterator m blk b'
-traverseIterator f itr = Iterator{
-    iteratorNext      = iteratorNext itr >>= traverse f
+  Monad m =>
+  (b -> m b') ->
+  Iterator m blk b ->
+  Iterator m blk b'
+traverseIterator f itr =
+  Iterator
+    { iteratorNext = iteratorNext itr >>= traverse f
     , iteratorHasNext = iteratorHasNext itr
-    , iteratorClose   = iteratorClose itr
+    , iteratorClose = iteratorClose itr
     }
 
 -- | The result of stepping an 'Iterator'.
@@ -228,22 +231,24 @@ data IteratorResult b
   -- | Consume an 'Iterator' by stepping until it is exhausted. A list of all
 -- the 'IteratorResult's (excluding the final 'IteratorExhausted') produced by
 -- the 'Iterator' is returned.
-iteratorToList :: (HasCallStack, Monad m)
-               => Iterator m blk b -> m [b]
+iteratorToList ::
+  (HasCallStack, Monad m) =>
+  Iterator m blk b -> m [b]
 iteratorToList it = go []
-  where
-    go acc = do
-      next <- iteratorNext it
-      case next of
-        IteratorExhausted  -> return $ reverse acc
-        IteratorResult res -> go (res:acc)
+ where
+  go acc = do
+    next <- iteratorNext it
+    case next of
+      IteratorExhausted -> return $ reverse acc
+      IteratorResult res -> go (res : acc)
 
 -- | An iterator that is immediately exhausted.
 emptyIterator :: MonadSTM m => Iterator m blk b
-emptyIterator = Iterator {
-    iteratorNext      = return IteratorExhausted
+emptyIterator =
+  Iterator
+    { iteratorNext = return IteratorExhausted
     , iteratorHasNext = return Nothing
-    , iteratorClose   = return ()
+    , iteratorClose = return ()
     }
 
 {-------------------------------------------------------------------------------
   Tip
-------------------------------------------------------------------------------}
 
 -- | Information about the tip of the ImmutableDB.
-data Tip blk = Tip {
-      tipSlotNo  :: !SlotNo
-    , tipIsEBB   :: !IsEBB
-    , tipBlockNo :: !BlockNo
-    , tipHash    :: !(HeaderHash blk)
-    }
-  deriving (Generic)
-
-deriving instance StandardHash blk => Eq   (Tip blk)
-deriving instance StandardHash blk => Show (Tip blk)
+data Tip blk = Tip
+  { tipSlotNo :: !SlotNo
+  , tipIsEBB :: !IsEBB
+  , tipBlockNo :: !BlockNo
+  , tipHash :: !(HeaderHash blk)
+  }
+  deriving Generic
+
+deriving instance StandardHash blk => Eq (Tip blk)
+deriving instance StandardHash blk => Show (Tip blk)
 deriving instance StandardHash blk => NoThunks (Tip blk)
 
 tipToRealPoint :: Tip blk -> RealPoint blk
-tipToRealPoint Tip { tipSlotNo, tipHash } = RealPoint tipSlotNo tipHash
+tipToRealPoint Tip{tipSlotNo, tipHash} = RealPoint tipSlotNo tipHash
 
 tipToPoint :: WithOrigin (Tip blk) -> Point blk
 tipToPoint = \case
-    Origin        -> GenesisPoint
-    NotOrigin tip -> realPointToPoint $ tipToRealPoint tip
+  Origin -> GenesisPoint
+  NotOrigin tip -> realPointToPoint $ tipToRealPoint tip
 
 tipToAnchor :: WithOrigin (Tip blk) -> AF.Anchor blk
 tipToAnchor = \case
-    Origin ->
-      AF.AnchorGenesis
-    NotOrigin (Tip { tipSlotNo, tipHash, tipBlockNo }) ->
-      AF.Anchor tipSlotNo tipHash tipBlockNo
+  Origin ->
+    AF.AnchorGenesis
+  NotOrigin (Tip{tipSlotNo, tipHash, tipBlockNo}) ->
+    AF.Anchor tipSlotNo tipHash tipBlockNo
 
 headerToTip :: GetHeader blk => Header blk -> Tip blk
-headerToTip hdr = Tip {
-      tipSlotNo  = blockSlot hdr
-    , tipIsEBB   = headerToIsEBB hdr
-    , tipBlockNo = blockNo hdr
-    , tipHash    = blockHash hdr
+headerToTip hdr =
+  Tip
+    { tipSlotNo = blockSlot hdr
+    , tipIsEBB = headerToIsEBB hdr
+    , tipBlockNo = blockNo hdr
+    , tipHash = blockHash hdr
     }
 
 blockToTip :: GetHeader blk => blk -> Tip blk
@@ -291,103 +297,96 @@ blockToTip = headerToTip . getHeader
 
 -- | newtype with an 'Ord' instance that only uses 'tipSlotNo' and 'tipIsEBB'
 -- and ignores the other fields.
-newtype CompareTip blk = CompareTip { getCompareTip :: Tip blk } +newtype CompareTip blk = CompareTip {getCompareTip :: Tip blk} instance Eq (CompareTip blk) where a == b = compare a b == EQ instance Ord (CompareTip blk) where - compare = mconcat [ - compare `on` tipSlotNo . getCompareTip - , compareIsEBB `on` tipIsEBB . getCompareTip + compare = + mconcat + [ compare `on` tipSlotNo . getCompareTip + , compareIsEBB `on` tipIsEBB . getCompareTip ] - where - -- When a block and an EBB share a slot number, the EBB is "older". - compareIsEBB :: IsEBB -> IsEBB -> Ordering - compareIsEBB IsEBB IsNotEBB = LT - compareIsEBB IsNotEBB IsEBB = GT - compareIsEBB _ _ = EQ + where + -- When a block and an EBB share a slot number, the EBB is "older". + compareIsEBB :: IsEBB -> IsEBB -> Ordering + compareIsEBB IsEBB IsNotEBB = LT + compareIsEBB IsNotEBB IsEBB = GT + compareIsEBB _ _ = EQ {------------------------------------------------------------------------------- Errors -------------------------------------------------------------------------------} -- | Errors that might arise when working with this database. -data ImmutableDBError blk = - ApiMisuse (ApiMisuse blk) PrettyCallStack - -- ^ An error thrown because of incorrect usage of the immutable database +data ImmutableDBError blk + = -- | An error thrown because of incorrect usage of the immutable database -- by the user. - | UnexpectedFailure (UnexpectedFailure blk) - -- ^ An unexpected error thrown because something went wrong on a lower + ApiMisuse (ApiMisuse blk) PrettyCallStack + | -- | An unexpected error thrown because something went wrong on a lower -- layer. 
+ UnexpectedFailure (UnexpectedFailure blk) deriving (Generic, Show) -instance (StandardHash blk, Typeable blk) - => Exception (ImmutableDBError blk) where +instance + (StandardHash blk, Typeable blk) => + Exception (ImmutableDBError blk) + where displayException = \case - ApiMisuse {} -> - "ImmutableDB incorrectly used, indicative of a bug" - UnexpectedFailure (FileSystemError fse) -> - displayException fse - UnexpectedFailure {} -> - "The ImmutableDB got corrupted, full validation will be enabled for the next startup" - -data ApiMisuse blk = - -- | When trying to append a new block, it was not newer than the current + ApiMisuse{} -> + "ImmutableDB incorrectly used, indicative of a bug" + UnexpectedFailure (FileSystemError fse) -> + displayException fse + UnexpectedFailure{} -> + "The ImmutableDB got corrupted, full validation will be enabled for the next startup" + +data ApiMisuse blk + = -- | When trying to append a new block, it was not newer than the current -- tip, i.e., the slot was older than or equal to the current tip's slot. -- -- The 'RealPoint' corresponds to the new block and the 'Point' to the -- current tip. AppendBlockNotNewerThanTipError (RealPoint blk) (Point blk) - - -- | When the chosen iterator range was invalid, i.e. the @start@ (first + | -- | When the chosen iterator range was invalid, i.e. the @start@ (first -- parameter) came after the @end@ (second parameter). - | InvalidIteratorRangeError (StreamFrom blk) (StreamTo blk) - - -- | When performing an operation on a closed DB that is only allowed when + InvalidIteratorRangeError (StreamFrom blk) (StreamTo blk) + | -- | When performing an operation on a closed DB that is only allowed when -- the database is open. - | ClosedDBError - - -- | When performing an operation on an open DB that is only allowed when + ClosedDBError + | -- | When performing an operation on an open DB that is only allowed when -- the database is closed. 
- | OpenDBError + OpenDBError deriving instance (StandardHash blk, Typeable blk) => Show (ApiMisuse blk) throwApiMisuse :: - (MonadThrow m, HasCallStack, StandardHash blk, Typeable blk) - => ApiMisuse blk -> m a + (MonadThrow m, HasCallStack, StandardHash blk, Typeable blk) => + ApiMisuse blk -> m a throwApiMisuse e = throwIO $ ApiMisuse e prettyCallStack -data UnexpectedFailure blk = - -- | An IO operation on the file-system threw an error. +data UnexpectedFailure blk + = -- | An IO operation on the file-system threw an error. FileSystemError FsError -- An FsError already stores the callstack - - -- | When loading an epoch or index file, its contents did not pass + | -- | When loading an epoch or index file, its contents did not pass -- validation. - | InvalidFileError FsPath String PrettyCallStack - - -- | A missing epoch or index file. - | MissingFileError FsPath PrettyCallStack - - -- | There was a checksum mismatch when reading the block with the given + InvalidFileError FsPath String PrettyCallStack + | -- | A missing epoch or index file. + MissingFileError FsPath PrettyCallStack + | -- | There was a checksum mismatch when reading the block with the given -- point. The first 'CRC' is the expected one, the second one the actual -- one. - | ChecksumMismatchError (RealPoint blk) CRC CRC FsPath PrettyCallStack - - -- | A block failed to parse - | ParseError FsPath (RealPoint blk) CBOR.DeserialiseFailure - - -- | When parsing a block we got some trailing data - | TrailingDataError FsPath (RealPoint blk) Lazy.ByteString - - -- | Block missing + ChecksumMismatchError (RealPoint blk) CRC CRC FsPath PrettyCallStack + | -- | A block failed to parse + ParseError FsPath (RealPoint blk) CBOR.DeserialiseFailure + | -- | When parsing a block we got some trailing data + TrailingDataError FsPath (RealPoint blk) Lazy.ByteString + | -- | Block missing -- -- This exception gets thrown when a block that we /know/ it should be in -- the ImmutableDB, nonetheless was not found. 
- | MissingBlockError (MissingBlock blk) - - -- | A (parsed) block did not pass the integrity check. + MissingBlockError (MissingBlock blk) + | -- | A (parsed) block did not pass the integrity check. -- -- This exception gets thrown when a block doesn't pass the integrity check -- done for 'GetVerifiedBlock'. @@ -396,40 +395,42 @@ data UnexpectedFailure blk = -- ImmutableDB. While this exception typically means the block has been -- corrupted, it could also mean the block didn't pass the check at the time -- it was added. - | CorruptBlockError (RealPoint blk) + CorruptBlockError (RealPoint blk) deriving instance (StandardHash blk, Typeable blk) => Show (UnexpectedFailure blk) throwUnexpectedFailure :: - (StandardHash blk, Typeable blk, MonadThrow m) - => UnexpectedFailure blk -> m a + (StandardHash blk, Typeable blk, MonadThrow m) => + UnexpectedFailure blk -> m a throwUnexpectedFailure = throwIO . UnexpectedFailure -- | This type can be part of an exception, but also returned as part of an -- 'Either', because it can be expected in some cases. data MissingBlock blk - -- | There is no block in the slot of the given point. - = EmptySlot - (RealPoint blk) -- ^ The requested point - ChunkNo -- ^ The chunk we thought it was in - [RelativeSlot] -- ^ What should be the relative slot in the chunk - (Maybe (StrictSeq SecondaryOffset)) - -- ^ Which offsets are known if we are looking at the current (probably cached) chunk - - -- | The block and/or EBB in the slot of the given point have a different + = -- | There is no block in the slot of the given point. + EmptySlot + -- | The requested point + (RealPoint blk) + -- | The chunk we thought it was in + ChunkNo + -- | What should be the relative slot in the chunk + [RelativeSlot] + -- | Which offsets are known if we are looking at the current (probably cached) chunk + (Maybe (StrictSeq SecondaryOffset)) + | -- | The block and/or EBB in the slot of the given point have a different -- hash. 
We return the 'HeaderHash' for each block we found with the -- corresponding slot number. - | WrongHash (RealPoint blk) (NonEmpty (HeaderHash blk)) - -- | The requested point is in the future, i.e., its slot is greater than + WrongHash (RealPoint blk) (NonEmpty (HeaderHash blk)) + | -- | The requested point is in the future, i.e., its slot is greater than -- that of the tip. We record the tip as the second argument. - | NewerThanTip (RealPoint blk) (Point blk) + NewerThanTip (RealPoint blk) (Point blk) deriving (Eq, Show, Generic) -- | Return the 'RealPoint' of the block that was missing. missingBlockPoint :: MissingBlock blk -> RealPoint blk missingBlockPoint (EmptySlot pt _ _ _) = pt -missingBlockPoint (WrongHash pt _) = pt -missingBlockPoint (NewerThanTip pt _) = pt +missingBlockPoint (WrongHash pt _) = pt +missingBlockPoint (NewerThanTip pt _) = pt {------------------------------------------------------------------------------- Wrappers that preserve 'HasCallStack' @@ -438,37 +439,40 @@ missingBlockPoint (NewerThanTip pt _) = pt -------------------------------------------------------------------------------} closeDB :: - HasCallStack - => ImmutableDB m blk - -> m () + HasCallStack => + ImmutableDB m blk -> + m () closeDB = closeDB_ getTip :: - HasCallStack - => ImmutableDB m blk - -> STM m (WithOrigin (Tip blk)) + HasCallStack => + ImmutableDB m blk -> + STM m (WithOrigin (Tip blk)) getTip = getTip_ getBlockComponent :: - HasCallStack - => ImmutableDB m blk - -> BlockComponent blk b -> RealPoint blk -> m (Either (MissingBlock blk) b) + HasCallStack => + ImmutableDB m blk -> + BlockComponent blk b -> + RealPoint blk -> + m (Either (MissingBlock blk) b) getBlockComponent = getBlockComponent_ appendBlock :: - HasCallStack - => ImmutableDB m blk - -> blk -> m () + HasCallStack => + ImmutableDB m blk -> + blk -> + m () appendBlock = appendBlock_ stream :: - HasCallStack - => ImmutableDB m blk - -> ResourceRegistry m - -> BlockComponent blk b - -> StreamFrom blk - -> 
StreamTo blk - -> m (Either (MissingBlock blk) (Iterator m blk b)) + HasCallStack => + ImmutableDB m blk -> + ResourceRegistry m -> + BlockComponent blk b -> + StreamFrom blk -> + StreamTo blk -> + m (Either (MissingBlock blk) (Iterator m blk b)) stream = stream_ {------------------------------------------------------------------------------- @@ -479,60 +483,59 @@ stream = stream_ -- using the database, and closes the database using its 'closeDB' function, -- in case of success or when an exception was raised. withDB :: - (HasCallStack, MonadThrow m) - => m (ImmutableDB m blk) - -- ^ How to open the database - -> (ImmutableDB m blk -> m a) - -- ^ Action to perform using the database - -> m a + (HasCallStack, MonadThrow m) => + -- | How to open the database + m (ImmutableDB m blk) -> + -- | Action to perform using the database + (ImmutableDB m blk -> m a) -> + m a withDB openDB = bracket openDB closeDB getKnownBlockComponent :: - (MonadThrow m, HasHeader blk) - => ImmutableDB m blk - -> BlockComponent blk b - -> RealPoint blk - -> m b + (MonadThrow m, HasHeader blk) => + ImmutableDB m blk -> + BlockComponent blk b -> + RealPoint blk -> + m b getKnownBlockComponent db blockComponent pt = - getBlockComponent db blockComponent pt >>= \case - Left missing -> throwUnexpectedFailure $ MissingBlockError missing - Right b -> return b + getBlockComponent db blockComponent pt >>= \case + Left missing -> throwUnexpectedFailure $ MissingBlockError missing + Right b -> return b -- | Open an iterator with the given point as lower exclusive bound and the -- current tip as the inclusive upper bound. -- -- Returns a 'MissingBlock' when the point is not in the ImmutableDB. 
streamAfterPoint :: - (MonadSTM m, HasHeader blk, HasCallStack) - => ImmutableDB m blk - -> ResourceRegistry m - -> BlockComponent blk b - -> Point blk - -> m (Either (MissingBlock blk) (Iterator m blk b)) + (MonadSTM m, HasHeader blk, HasCallStack) => + ImmutableDB m blk -> + ResourceRegistry m -> + BlockComponent blk b -> + Point blk -> + m (Either (MissingBlock blk) (Iterator m blk b)) streamAfterPoint db registry blockComponent fromPt = runExceptT $ do - tipPt <- lift $ atomically $ getTipPoint db - case (pointToWithOriginRealPoint fromPt, - pointToWithOriginRealPoint tipPt) of - - (Origin, Origin) -> - -- Nothing to stream - return emptyIterator - - (NotOrigin fromPt', Origin) -> - -- Asked to stream after a block while the ImmutableDB is empty - throwError $ NewerThanTip fromPt' GenesisPoint - - (NotOrigin fromPt', NotOrigin _) | pointSlot fromPt > pointSlot tipPt -> - -- Lower bound is newer than the tip, nothing to stream - throwError $ NewerThanTip fromPt' tipPt - - (NotOrigin fromPt', NotOrigin tipPt') | fromPt' == tipPt' -> - -- Nothing to stream after the tip - return emptyIterator - - (_, NotOrigin tipPt') -> - -- Stream from the given point to the current tip (not genesis) - ExceptT $ stream + tipPt <- lift $ atomically $ getTipPoint db + case ( pointToWithOriginRealPoint fromPt + , pointToWithOriginRealPoint tipPt + ) of + (Origin, Origin) -> + -- Nothing to stream + return emptyIterator + (NotOrigin fromPt', Origin) -> + -- Asked to stream after a block while the ImmutableDB is empty + throwError $ NewerThanTip fromPt' GenesisPoint + (NotOrigin fromPt', NotOrigin _) + | pointSlot fromPt > pointSlot tipPt -> + -- Lower bound is newer than the tip, nothing to stream + throwError $ NewerThanTip fromPt' tipPt + (NotOrigin fromPt', NotOrigin tipPt') + | fromPt' == tipPt' -> + -- Nothing to stream after the tip + return emptyIterator + (_, NotOrigin tipPt') -> + -- Stream from the given point to the current tip (not genesis) + ExceptT $ + stream db 
registry blockComponent @@ -542,43 +545,43 @@ streamAfterPoint db registry blockComponent fromPt = runExceptT $ do -- | Variant of 'streamAfterPoint' that throws a 'MissingBlockError' when the -- point is not in the ImmutableDB (or genesis). streamAfterKnownPoint :: - (MonadSTM m, MonadThrow m, HasHeader blk, HasCallStack) - => ImmutableDB m blk - -> ResourceRegistry m - -> BlockComponent blk b - -> Point blk - -> m (Iterator m blk b) + (MonadSTM m, MonadThrow m, HasHeader blk, HasCallStack) => + ImmutableDB m blk -> + ResourceRegistry m -> + BlockComponent blk b -> + Point blk -> + m (Iterator m blk b) streamAfterKnownPoint db registry blockComponent fromPt = - streamAfterPoint db registry blockComponent fromPt >>= - either (throwUnexpectedFailure . MissingBlockError) return + streamAfterPoint db registry blockComponent fromPt + >>= either (throwUnexpectedFailure . MissingBlockError) return streamAll :: - (MonadSTM m, MonadThrow m, HasHeader blk, HasCallStack) - => ImmutableDB m blk - -> ResourceRegistry m - -> BlockComponent blk b - -> m (Iterator m blk b) + (MonadSTM m, MonadThrow m, HasHeader blk, HasCallStack) => + ImmutableDB m blk -> + ResourceRegistry m -> + BlockComponent blk b -> + m (Iterator m blk b) streamAll db registry blockComponent = - streamAfterKnownPoint db registry blockComponent GenesisPoint + streamAfterKnownPoint db registry blockComponent GenesisPoint hasBlock :: - (MonadSTM m, HasCallStack) - => ImmutableDB m blk - -> RealPoint blk - -> m Bool + (MonadSTM m, HasCallStack) => + ImmutableDB m blk -> + RealPoint blk -> + m Bool hasBlock db pt = isRight <$> getBlockComponent db (pure ()) pt getTipPoint :: - (MonadSTM m, HasCallStack) - => ImmutableDB m blk -> STM m (Point blk) + (MonadSTM m, HasCallStack) => + ImmutableDB m blk -> STM m (Point blk) getTipPoint = fmap tipToPoint . 
getTip getTipAnchor :: - (MonadSTM m, HasCallStack) - => ImmutableDB m blk -> STM m (AF.Anchor blk) + (MonadSTM m, HasCallStack) => + ImmutableDB m blk -> STM m (AF.Anchor blk) getTipAnchor = fmap tipToAnchor . getTip getTipSlot :: - (MonadSTM m, HasCallStack) - => ImmutableDB m blk -> STM m (WithOrigin SlotNo) + (MonadSTM m, HasCallStack) => + ImmutableDB m blk -> STM m (WithOrigin SlotNo) getTipSlot = fmap (fmap tipSlotNo) . getTip diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks.hs index d05a94d39e..6ed22cfff5 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks.hs @@ -1,11 +1,21 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Chunks (module X) where - -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal as X - (ChunkInfo (..), ChunkNo, ChunkSize (..), - chunkInfoSupportsEBBs, chunksBetween, compareRelativeSlot, - countChunks, firstChunkNo, getChunkSize, mkRelativeSlot, - nextChunkNo, prevChunkNo, simpleChunkInfo, singleChunkInfo) -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout as X +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal as X + ( ChunkInfo (..) + , ChunkNo + , ChunkSize (..) 
+ , chunkInfoSupportsEBBs + , chunksBetween + , compareRelativeSlot + , countChunks + , firstChunkNo + , getChunkSize + , mkRelativeSlot + , nextChunkNo + , prevChunkNo + , simpleChunkInfo + , singleChunkInfo + ) +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout as X -- The Internal re-export only exposes public API diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks/Internal.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks/Internal.hs index 15b29b51d4..d091e768c2 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks/Internal.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks/Internal.hs @@ -6,11 +6,12 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal ( - ChunkInfo (..) +module Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal + ( ChunkInfo (..) , chunkInfoSupportsEBBs , simpleChunkInfo , singleChunkInfo + -- * Chunk number , ChunkNo (..) , chunkNoFromInt @@ -22,15 +23,18 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal ( , prevChunkNo , unsafeChunkNoToEpochNo , unsafeEpochNoToChunkNo + -- * Chunk size , ChunkSize (..) , getChunkSize + -- * Layout , RelativeSlot (..) 
, assertRelativeSlotInChunk , compareRelativeSlot , maxRelativeIndex , mkRelativeSlot + -- * Assertions , ChunkAssertionFailure , assertChunkCanContainEBB @@ -38,14 +42,14 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal ( , assertWithinBounds ) where -import Control.Exception -import Control.Monad -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.RedundantConstraints +import Control.Exception +import Control.Monad +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.RedundantConstraints (keepRedundantConstraint) -- | Size of the chunks of the immutable DB -- @@ -53,16 +57,15 @@ import Ouroboros.Consensus.Util.RedundantConstraints -- -- TODO: Add support for non-uniform 'ChunkInfo' -- -data ChunkInfo = - -- | A single, uniform, chunk size +data ChunkInfo + = -- | A single, uniform, chunk size -- -- If EBBs are present, the chunk size must line up precisely with the -- epoch size (that is, the number of regular blocks in the chunk must equal -- the number of regular blocks in an epoch). - -- UniformChunkSize !ChunkSize - deriving stock (Show, Generic) - deriving anyclass (NoThunks) + deriving stock (Show, Generic) + deriving anyclass NoThunks -- | Simple chunk config with a single chunk size -- @@ -84,7 +87,7 @@ singleChunkInfo = UniformChunkSize -- become more complicated) once we support non-uniform 'ChunkInfo'. 
chunkInfoSupportsEBBs :: ChunkInfo -> Bool chunkInfoSupportsEBBs (UniformChunkSize chunkSize) = - chunkCanContainEBB chunkSize + chunkCanContainEBB chunkSize {------------------------------------------------------------------------------- Queries @@ -94,19 +97,18 @@ chunkInfoSupportsEBBs (UniformChunkSize chunkSize) = -- -- The total number of slots available in a chunk is equal to 'numRegularBlocks' -- if @not@ 'chunkCanContainEBB', and 'numRegularBlocks' @+ 1@ otherwise. -data ChunkSize = ChunkSize { - -- | Does this chunk also accomodate an EBB? - chunkCanContainEBB :: !Bool - - -- | The number of regular blocks in this chunk - , numRegularBlocks :: !Word64 - } - deriving stock (Show, Generic) - deriving anyclass (NoThunks) +data ChunkSize = ChunkSize + { chunkCanContainEBB :: !Bool + -- ^ Does this chunk also accomodate an EBB? + , numRegularBlocks :: !Word64 + -- ^ The number of regular blocks in this chunk + } + deriving stock (Show, Generic) + deriving anyclass NoThunks -- | Chunk number -newtype ChunkNo = ChunkNo { unChunkNo :: Word64 } - deriving stock (Eq, Ord, Generic) +newtype ChunkNo = ChunkNo {unChunkNo :: Word64} + deriving stock (Eq, Ord, Generic) deriving newtype (Show, NoThunks) -- | First chunk @@ -143,8 +145,9 @@ countChunks (ChunkNo a) (ChunkNo b) = if a >= b then a - b else b - a -- > chunksBetween x x == [x] -- > chunksBetween x (nextChunkNo x) == [x, nextChunkNo x] chunksBetween :: ChunkNo -> ChunkNo -> [ChunkNo] -chunksBetween (ChunkNo a) (ChunkNo b) = map ChunkNo $ - if a >= b then [a .. b] else [b .. a] +chunksBetween (ChunkNo a) (ChunkNo b) = + map ChunkNo $ + if a >= b then [a .. b] else [b .. 
a] -- | Translate 'EpochNo' to 'ChunkNo' -- @@ -163,8 +166,8 @@ unsafeChunkNoToEpochNo (ChunkNo n) = EpochNo n getChunkSize :: ChunkInfo -> ChunkNo -> ChunkSize getChunkSize chunkInfo _chunk = - case chunkInfo of - UniformChunkSize sz -> sz + case chunkInfo of + UniformChunkSize sz -> sz {------------------------------------------------------------------------------- Layout @@ -176,41 +179,39 @@ getChunkSize chunkInfo _chunk = -------------------------------------------------------------------------------} -- | A /relative/ slot within a chunk -data RelativeSlot = RelativeSlot { - -- | The chunk index of the chunk this slot is in - -- - -- Recorded primarily to be able to define a semi-sensible 'Ord' instance. - relativeSlotChunkNo :: !ChunkNo - - -- | The size of the chunk that this slot is in - -- - -- We record this for bounds checking as well as to be able to answer - -- questions such as 'relativeSlotIsEBB'. +data RelativeSlot = RelativeSlot + { relativeSlotChunkNo :: !ChunkNo + -- ^ The chunk index of the chunk this slot is in + -- + -- Recorded primarily to be able to define a semi-sensible 'Ord' instance. , relativeSlotChunkSize :: !ChunkSize - - -- | The index within the chunk - , relativeSlotIndex :: !Word64 + -- ^ The size of the chunk that this slot is in + -- + -- We record this for bounds checking as well as to be able to answer + -- questions such as 'relativeSlotIsEBB'. 
+ , relativeSlotIndex :: !Word64 + -- ^ The index within the chunk } - deriving stock (Show, Generic) - deriving anyclass (NoThunks) + deriving stock (Show, Generic) + deriving anyclass NoThunks -- | Maximum relative index within a chunk maxRelativeIndex :: ChunkSize -> Word64 maxRelativeIndex ChunkSize{..} | chunkCanContainEBB = numRegularBlocks - | otherwise = numRegularBlocks - 1 + | otherwise = numRegularBlocks - 1 -- | Smart constructor for 'RelativeSlot' mkRelativeSlot :: HasCallStack => ChunkInfo -> ChunkNo -> Word64 -> RelativeSlot mkRelativeSlot chunkInfo chunk index = - assertWithinBounds index size $ - RelativeSlot { - relativeSlotChunkNo = chunk + assertWithinBounds index size $ + RelativeSlot + { relativeSlotChunkNo = chunk , relativeSlotChunkSize = size - , relativeSlotIndex = index + , relativeSlotIndex = index } - where - size = getChunkSize chunkInfo chunk + where + size = getChunkSize chunkInfo chunk instance Eq RelativeSlot where a == b @@ -226,13 +227,13 @@ instance Eq RelativeSlot where -- will result in an assertion failure. compareRelativeSlot :: HasCallStack => RelativeSlot -> RelativeSlot -> Ordering compareRelativeSlot a b = - assertSameChunk (relativeSlotChunkNo a) (relativeSlotChunkNo b) $ - compare (relativeSlotIndex a) (relativeSlotIndex b) + assertSameChunk (relativeSlotChunkNo a) (relativeSlotChunkNo b) $ + compare (relativeSlotIndex a) (relativeSlotIndex b) assertRelativeSlotInChunk :: HasCallStack => ChunkNo -> RelativeSlot -> Word64 assertRelativeSlotInChunk chunk relSlot = - assertSameChunk (relativeSlotChunkNo relSlot) chunk $ - relativeSlotIndex relSlot + assertSameChunk (relativeSlotChunkNo relSlot) chunk $ + relativeSlotIndex relSlot {------------------------------------------------------------------------------- Assert failures @@ -242,14 +243,15 @@ assertRelativeSlotInChunk chunk relSlot = any functions that (transitively) call these functions. 
-------------------------------------------------------------------------------} -data ChunkAssertionFailure = - NotSameChunk ChunkNo ChunkNo PrettyCallStack +data ChunkAssertionFailure + = NotSameChunk ChunkNo ChunkNo PrettyCallStack | NotWithinBounds Word64 ChunkSize PrettyCallStack | ChunkCannotContainEBBs ChunkNo PrettyCallStack - deriving (Show) + deriving Show instance Exception ChunkAssertionFailure +{- FOURMOLU_DISABLE -} assertSameChunk :: HasCallStack => ChunkNo -> ChunkNo -> a -> a #if ENABLE_ASSERTIONS assertSameChunk a b @@ -260,7 +262,9 @@ assertSameChunk _ _ = id #endif where _ = keepRedundantConstraint (Proxy @HasCallStack) +{- FOURMOLU_ENABLE -} +{- FOURMOLU_DISABLE -} assertWithinBounds :: HasCallStack => Word64 -> ChunkSize -> a -> a #if ENABLE_ASSERTIONS assertWithinBounds ix sz @@ -271,7 +275,9 @@ assertWithinBounds _ _ = id #endif where _ = keepRedundantConstraint (Proxy @HasCallStack) +{- FOURMOLU_ENABLE -} +{- FOURMOLU_DISABLE -} assertChunkCanContainEBB :: HasCallStack => ChunkNo -> ChunkSize -> a -> a #if ENABLE_ASSERTIONS assertChunkCanContainEBB chunk size @@ -282,3 +288,4 @@ assertChunkCanContainEBB _ _ = id #endif where _ = keepRedundantConstraint (Proxy @HasCallStack) +{- FOURMOLU_ENABLE -} diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks/Layout.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks/Layout.hs index bf0b72d59c..c064058f9a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks/Layout.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Chunks/Layout.hs @@ -11,8 +11,8 @@ -- This module is not re-exported from the public Chunks API, since it's only -- relevant internally in the immutable DB. This module makes the layout -- decisions. 
-module Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout ( - -- * Relative slots +module Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout + ( -- * Relative slots NextRelativeSlot (..) , firstBlockOrEBB , maxRelativeSlot @@ -20,13 +20,17 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout ( , nthBlockOrEBB , relativeSlotIsEBB , unsafeNextRelativeSlot + -- ** Opaque , RelativeSlot + -- * Chunks , chunkIndexOfSlot + -- * Slots within a chunk , ChunkSlot (..) , pattern ChunkSlot + -- ** Translation /to/ 'ChunkSlot' , chunkSlotForBlockOrEBB , chunkSlotForBoundaryBlock @@ -34,24 +38,27 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout ( , chunkSlotForRelativeSlot , chunkSlotForTip , chunkSlotForUnknownBlock + -- ** Translation /from/ 'ChunkSlot' , chunkSlotToBlockOrEBB , chunkSlotToSlot + -- ** Support for EBBs , slotMightBeEBB , slotNoOfBlockOrEBB , slotNoOfEBB ) where -import Control.Monad -import GHC.Generics (Generic) -import GHC.Stack -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.ImmutableDB.API (Tip (..)) -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types - (BlockOrEBB (..)) +import Control.Monad +import GHC.Generics (Generic) +import GHC.Stack +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.ImmutableDB.API (Tip (..)) +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types + ( BlockOrEBB (..) 
+ ) {------------------------------------------------------------------------------- Relative slots @@ -60,24 +67,25 @@ import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types -- | The last relative slot within a chunk of the given size maxRelativeSlot :: ChunkInfo -> ChunkNo -> RelativeSlot maxRelativeSlot ci chunk = - mkRelativeSlot ci chunk (maxRelativeIndex size) - where - size = getChunkSize ci chunk + mkRelativeSlot ci chunk (maxRelativeIndex size) + where + size = getChunkSize ci chunk -- | Is this relative slot reserved for an EBB? relativeSlotIsEBB :: RelativeSlot -> IsEBB relativeSlotIsEBB RelativeSlot{..} | relativeSlotIndex == 0 - , chunkCanContainEBB relativeSlotChunkSize - = IsEBB - | otherwise - = IsNotEBB + , chunkCanContainEBB relativeSlotChunkSize = + IsEBB + | otherwise = + IsNotEBB -- | The @n@'th relative slot for an arbitrary block -- -- NOTE: Offset @0@ refers to an EBB only if the 'ChunkSize' supports it. -nthBlockOrEBB :: (HasCallStack, Integral a) - => ChunkInfo -> ChunkNo -> a -> RelativeSlot +nthBlockOrEBB :: + (HasCallStack, Integral a) => + ChunkInfo -> ChunkNo -> a -> RelativeSlot nthBlockOrEBB ci chunk = mkRelativeSlot ci chunk . 
fromIntegral -- | The first relative slot @@ -87,21 +95,20 @@ firstBlockOrEBB :: ChunkInfo -> ChunkNo -> RelativeSlot firstBlockOrEBB ci chunk = mkRelativeSlot ci chunk 0 -- | Result of 'nextRelativeSlot' -data NextRelativeSlot = - -- | There is a next negative slot +data NextRelativeSlot + = -- | There is a next relative slot NextRelativeSlot RelativeSlot - - -- | We reached the end of the chunk - | NoMoreRelativeSlots + | -- | We reached the end of the chunk + NoMoreRelativeSlots -- | Next relative slot nextRelativeSlot :: HasCallStack => RelativeSlot -> NextRelativeSlot nextRelativeSlot s@RelativeSlot{..} = - -- Assert that the /current/ value is within bounds - assertWithinBounds relativeSlotIndex relativeSlotChunkSize $ - if relativeSlotIndex == maxRelativeIndex relativeSlotChunkSize - then NoMoreRelativeSlots - else NextRelativeSlot $ s { relativeSlotIndex = succ relativeSlotIndex } + -- Assert that the /current/ value is within bounds + assertWithinBounds relativeSlotIndex relativeSlotChunkSize $ + if relativeSlotIndex == maxRelativeIndex relativeSlotChunkSize + then NoMoreRelativeSlots + else NextRelativeSlot $ s{relativeSlotIndex = succ relativeSlotIndex} -- | Variation on 'nextRelativeSlot' where the caller /knows/ that there must -- be a next slot @@ -110,15 +117,16 @@ nextRelativeSlot s@RelativeSlot{..} = -- next slot.
unsafeNextRelativeSlot :: HasCallStack => RelativeSlot -> RelativeSlot unsafeNextRelativeSlot s@RelativeSlot{..} = - assertWithinBounds (succ relativeSlotIndex) relativeSlotChunkSize $ - s { relativeSlotIndex = succ relativeSlotIndex } + assertWithinBounds (succ relativeSlotIndex) relativeSlotChunkSize $ + s{relativeSlotIndex = succ relativeSlotIndex} {------------------------------------------------------------------------------- Chucks -------------------------------------------------------------------------------} chunkIndexOfSlot :: ChunkInfo -> SlotNo -> ChunkNo -chunkIndexOfSlot (UniformChunkSize ChunkSize{..}) (SlotNo slot) = ChunkNo $ +chunkIndexOfSlot (UniformChunkSize ChunkSize{..}) (SlotNo slot) = + ChunkNo $ slot `div` numRegularBlocks {------------------------------------------------------------------------------- @@ -130,18 +138,20 @@ chunkIndexOfSlot (UniformChunkSize ChunkSize{..}) (SlotNo slot) = ChunkNo $ -- Constructor marked as 'Unsafe'; construction should normally happen inside -- this module only (though see the 'ChunkSlot' pattern synonym). data ChunkSlot = UnsafeChunkSlot - { chunkIndex :: !ChunkNo + { chunkIndex :: !ChunkNo , chunkRelative :: !RelativeSlot - } deriving (Eq, Generic, NoThunks) + } + deriving (Eq, Generic, NoThunks) -- | We provide a manual 'Ord' instance because 'RelativeSlot' does not -- (and cannot) define one. By comparing the 'chunkIndex' before the index here, -- we establish the precondition to 'compareRelativeSlot'. 
instance Ord ChunkSlot where - compare a b = mconcat [ - compare (chunkIndex a) (chunkIndex b) - , compareRelativeSlot (chunkRelative a) (chunkRelative b) - ] + compare a b = + mconcat + [ compare (chunkIndex a) (chunkIndex b) + , compareRelativeSlot (chunkRelative a) (chunkRelative b) + ] {-# COMPLETE ChunkSlot #-} pattern ChunkSlot :: ChunkNo -> RelativeSlot -> ChunkSlot @@ -159,65 +169,73 @@ instance Show ChunkSlot where -- This returns /two/ 'ChunkSlot's: one in case the block could be an EBB, -- and one in case the block is a regular block. In addition, it also returns -- the 'ChunkNo' that both of these 'ChunkSlot's must necessarily share. -chunkSlotForUnknownBlock :: HasCallStack - => ChunkInfo - -> SlotNo - -> (ChunkNo, Maybe ChunkSlot, ChunkSlot) -chunkSlotForUnknownBlock ci slot = ( - (case mIfBoundary of - Nothing -> id - Just ifBoundary -> assertSameChunk (chunkIndex ifBoundary) - (chunkIndex ifRegular)) $ - chunkIndex ifRegular - , mIfBoundary - , ifRegular +chunkSlotForUnknownBlock :: + HasCallStack => + ChunkInfo -> + SlotNo -> + (ChunkNo, Maybe ChunkSlot, ChunkSlot) +chunkSlotForUnknownBlock ci slot = + ( ( case mIfBoundary of + Nothing -> id + Just ifBoundary -> + assertSameChunk + (chunkIndex ifBoundary) + (chunkIndex ifRegular) ) - where - ifRegular = chunkSlotForRegularBlock ci slot - mIfBoundary = chunkSlotForBoundaryBlock ci <$> slotMightBeEBB ci slot + $ chunkIndex ifRegular + , mIfBoundary + , ifRegular + ) + where + ifRegular = chunkSlotForRegularBlock ci slot + mIfBoundary = chunkSlotForBoundaryBlock ci <$> slotMightBeEBB ci slot -- | Chunk slot for a regular block (i.e., not an EBB) chunkSlotForRegularBlock :: ChunkInfo -> SlotNo -> ChunkSlot chunkSlotForRegularBlock (UniformChunkSize sz@ChunkSize{..}) (SlotNo slot) = - UnsafeChunkSlot { - chunkIndex = ChunkNo chunk - , chunkRelative = RelativeSlot (ChunkNo chunk) sz $ - if chunkCanContainEBB - then withinChunk + 1 - else withinChunk - } - where - (chunk, withinChunk) = slot `divMod` 
numRegularBlocks + UnsafeChunkSlot + { chunkIndex = ChunkNo chunk + , chunkRelative = + RelativeSlot (ChunkNo chunk) sz $ + if chunkCanContainEBB + then withinChunk + 1 + else withinChunk + } + where + (chunk, withinChunk) = slot `divMod` numRegularBlocks -- | Chunk slot for EBB chunkSlotForBoundaryBlock :: HasCallStack => ChunkInfo -> EpochNo -> ChunkSlot chunkSlotForBoundaryBlock ci epoch = - assertChunkCanContainEBB chunk size $ - UnsafeChunkSlot chunk $ firstBlockOrEBB ci chunk - where - chunk = unsafeEpochNoToChunkNo epoch - size = getChunkSize ci chunk + assertChunkCanContainEBB chunk size $ + UnsafeChunkSlot chunk $ + firstBlockOrEBB ci chunk + where + chunk = unsafeEpochNoToChunkNo epoch + size = getChunkSize ci chunk -- | Chunk slot for 'BlockOrEBB' chunkSlotForBlockOrEBB :: ChunkInfo -> BlockOrEBB -> ChunkSlot chunkSlotForBlockOrEBB ci = \case - Block slot -> chunkSlotForRegularBlock ci slot - EBB epoch -> chunkSlotForBoundaryBlock ci epoch + Block slot -> chunkSlotForRegularBlock ci slot + EBB epoch -> chunkSlotForBoundaryBlock ci epoch -- | Chunk slot for 'Tip' chunkSlotForTip :: ChunkInfo -> Tip blk -> ChunkSlot -chunkSlotForTip ci Tip { tipSlotNo, tipIsEBB } = case tipIsEBB of - IsNotEBB -> chunkSlotForRegularBlock ci tipSlotNo - IsEBB -> assertChunkCanContainEBB chunkIndex relativeSlotChunkSize $ - UnsafeChunkSlot chunkIndex $ firstBlockOrEBB ci chunkIndex - where - UnsafeChunkSlot{..} = chunkSlotForRegularBlock ci tipSlotNo - RelativeSlot{..} = chunkRelative +chunkSlotForTip ci Tip{tipSlotNo, tipIsEBB} = case tipIsEBB of + IsNotEBB -> chunkSlotForRegularBlock ci tipSlotNo + IsEBB -> + assertChunkCanContainEBB chunkIndex relativeSlotChunkSize $ + UnsafeChunkSlot chunkIndex $ + firstBlockOrEBB ci chunkIndex + where + UnsafeChunkSlot{..} = chunkSlotForRegularBlock ci tipSlotNo + RelativeSlot{..} = chunkRelative chunkSlotForRelativeSlot :: ChunkNo -> RelativeSlot -> ChunkSlot chunkSlotForRelativeSlot chunk relSlot = - assertSameChunk 
(relativeSlotChunkNo relSlot) chunk $ - UnsafeChunkSlot chunk relSlot + assertSameChunk (relativeSlotChunkNo relSlot) chunk $ + UnsafeChunkSlot chunk relSlot {------------------------------------------------------------------------------- Translation /from/ 'ChunkSlot' @@ -233,21 +251,22 @@ chunkSlotForRelativeSlot chunk relSlot = -- This can be used for EBBs and regular blocks, since they don't share a -- relative slot. chunkSlotToSlot :: ChunkInfo -> ChunkSlot -> SlotNo -chunkSlotToSlot (UniformChunkSize ChunkSize{..}) UnsafeChunkSlot{..} = SlotNo $ - chunk * numRegularBlocks - + case (chunkCanContainEBB, relativeSlotIndex) of - (_ , 0) -> 0 - (True , n) -> n - 1 +chunkSlotToSlot (UniformChunkSize ChunkSize{..}) UnsafeChunkSlot{..} = + SlotNo $ + chunk * numRegularBlocks + + case (chunkCanContainEBB, relativeSlotIndex) of + (_, 0) -> 0 + (True, n) -> n - 1 (False, n) -> n - where - ChunkNo chunk = chunkIndex - RelativeSlot{..} = chunkRelative + where + ChunkNo chunk = chunkIndex + RelativeSlot{..} = chunkRelative chunkSlotToBlockOrEBB :: ChunkInfo -> ChunkSlot -> BlockOrEBB chunkSlotToBlockOrEBB chunkInfo chunkSlot@(ChunkSlot chunk relSlot) = - case relativeSlotIsEBB relSlot of - IsEBB -> EBB $ unsafeChunkNoToEpochNo chunk - IsNotEBB -> Block $ chunkSlotToSlot chunkInfo chunkSlot + case relativeSlotIsEBB relSlot of + IsEBB -> EBB $ unsafeChunkNoToEpochNo chunk + IsNotEBB -> Block $ chunkSlotToSlot chunkInfo chunkSlot {------------------------------------------------------------------------------- Support for EBBs @@ -258,12 +277,12 @@ slotNoOfEBB ci = chunkSlotToSlot ci . 
chunkSlotForBoundaryBlock ci slotMightBeEBB :: ChunkInfo -> SlotNo -> Maybe EpochNo slotMightBeEBB ci slot = do - guard $ chunkCanContainEBB relativeSlotChunkSize && relativeSlotIndex == 1 - return $ unsafeChunkNoToEpochNo chunkIndex - where - UnsafeChunkSlot{..} = chunkSlotForRegularBlock ci slot - RelativeSlot{..} = chunkRelative + guard $ chunkCanContainEBB relativeSlotChunkSize && relativeSlotIndex == 1 + return $ unsafeChunkNoToEpochNo chunkIndex + where + UnsafeChunkSlot{..} = chunkSlotForRegularBlock ci slot + RelativeSlot{..} = chunkRelative slotNoOfBlockOrEBB :: ChunkInfo -> BlockOrEBB -> SlotNo -slotNoOfBlockOrEBB _ (Block slot) = slot -slotNoOfBlockOrEBB ci (EBB epoch) = slotNoOfEBB ci epoch +slotNoOfBlockOrEBB _ (Block slot) = slot +slotNoOfBlockOrEBB ci (EBB epoch) = slotNoOfEBB ci epoch diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl.hs index 290d5f6134..dcb9eff4af 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl.hs @@ -82,18 +82,20 @@ -- -- * A \"primary index file\" that maps slots to offsets in the secondary -- index file. -module Ouroboros.Consensus.Storage.ImmutableDB.Impl ( - -- * Opening the databse +module Ouroboros.Consensus.Storage.ImmutableDB.Impl + ( -- * Opening the database ImmutableDbArgs (..) , ImmutableDbSerialiseConstraints , defaultArgs , openDB + -- * Re-exported , ChunkFileError (..) , Index.CacheConfig (..) , TraceChunkValidation (..) , TraceEvent (..) , ValidationPolicy (..) + -- * Internals for testing purposes , Internal (..)
, deleteAfter @@ -101,88 +103,89 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Impl ( , openDBInternal ) where -import qualified Codec.CBOR.Write as CBOR -import Control.Monad (replicateM_, unless, when) -import Control.Monad.Except (runExceptT) -import Control.Monad.State.Strict (get, modify, put) -import Control.ResourceRegistry -import Control.Tracer (Tracer, nullTracer, traceWith) -import qualified Data.ByteString.Lazy as Lazy -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block hiding (headerHash) -import Ouroboros.Consensus.Storage.Common -import Ouroboros.Consensus.Storage.ImmutableDB.API -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index (Index) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index as Index -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary as Primary -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary - (BlockOffset (..), HeaderOffset (..), HeaderSize (..)) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary as Secondary -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Iterator -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Parser -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.State -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Validation -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util (SomePair (..)) -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.EarlyExit -import Ouroboros.Consensus.Util.IOLike -import System.FS.API.Lazy hiding (allowExisting) -import System.FS.CRC +import Codec.CBOR.Write qualified as CBOR +import Control.Monad (replicateM_, unless, when) +import Control.Monad.Except (runExceptT) +import Control.Monad.State.Strict (get, modify, put) +import Control.ResourceRegistry +import Control.Tracer (Tracer, nullTracer, traceWith) +import Data.ByteString.Lazy qualified as Lazy +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block hiding (headerHash) +import Ouroboros.Consensus.Storage.Common +import Ouroboros.Consensus.Storage.ImmutableDB.API +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index (Index) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index qualified as Index +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary qualified as Primary +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary + ( BlockOffset (..) , HeaderOffset (..) , HeaderSize (..) ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary qualified as Secondary +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Iterator +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Parser +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.State +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Validation +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util (SomePair (..)) +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.EarlyExit +import Ouroboros.Consensus.Util.IOLike +import System.FS.API.Lazy hiding (allowExisting) +import System.FS.CRC {------------------------------------------------------------------------------ Opening the database ------------------------------------------------------------------------------} -data ImmutableDbArgs f m blk = ImmutableDbArgs { - immCacheConfig :: Index.CacheConfig - -- | Predicate to check for integrity of - -- 'Ouroboros.Consensus.Storage.Common.GetVerifiedBlock' components when - -- extracting them from the ImmutableDB. - , immCheckIntegrity :: HKD f (blk -> Bool) - -- ^ Predicate to check for integrity of - -- 'Ouroboros.Consensus.Storage.Common.GetVerifiedBlock' components when - -- extracting them from the ImmutableDB. - , immChunkInfo :: HKD f ChunkInfo - , immCodecConfig :: HKD f (CodecConfig blk) - , immHasFS :: HKD f (SomeHasFS m) - , immRegistry :: HKD f (ResourceRegistry m) - , immTracer :: Tracer m (TraceEvent blk) - -- | Which chunks of the ImmutableDB to validate on opening: all chunks, or - -- only the most recent chunk? - , immValidationPolicy :: ValidationPolicy - -- ^ Which chunks of the ImmutableDB to validate on opening: all chunks, or - -- only the most recent chunk? - } +data ImmutableDbArgs f m blk = ImmutableDbArgs + { immCacheConfig :: Index.CacheConfig + , immCheckIntegrity :: HKD f (blk -> Bool) + -- ^ Predicate to check for integrity of + -- 'Ouroboros.Consensus.Storage.Common.GetVerifiedBlock' components when + -- extracting them from the ImmutableDB. + , immChunkInfo :: HKD f ChunkInfo + , immCodecConfig :: HKD f (CodecConfig blk) + , immHasFS :: HKD f (SomeHasFS m) + , immRegistry :: HKD f (ResourceRegistry m) + , immTracer :: Tracer m (TraceEvent blk) + , immValidationPolicy :: ValidationPolicy + -- ^ Which chunks of the ImmutableDB to validate on opening: all chunks, or + -- only the most recent chunk? + } +
-- | Default arguments defaultArgs :: Applicative m => Incomplete ImmutableDbArgs m blk -defaultArgs = ImmutableDbArgs { - immCacheConfig = cacheConfig - , immCheckIntegrity = noDefault - , immChunkInfo = noDefault - , immCodecConfig = noDefault - , immHasFS = noDefault - , immRegistry = noDefault - , immTracer = nullTracer +defaultArgs = + ImmutableDbArgs + { immCacheConfig = cacheConfig + , immCheckIntegrity = noDefault + , immChunkInfo = noDefault + , immCodecConfig = noDefault + , immHasFS = noDefault + , immRegistry = noDefault + , immTracer = nullTracer , immValidationPolicy = ValidateMostRecentChunk } - where - -- Cache 250 past chunks by default. This will take roughly 250 MB of RAM. - -- At the time of writing (1/2020), there are 166 epochs, and we store one - -- epoch per chunk, so even one year from now, we will be able to cache all - -- chunks' indices in the chain. - -- - -- If this number were too low, i.e., less than the number of chunks that - -- that clients are requesting blocks from, we would constantly evict and - -- reparse indices, causing a much higher CPU load. - cacheConfig = Index.CacheConfig { - pastChunksToCache = 250 - , expireUnusedAfter = 5 * 60 -- Expire after 1 minute - } + where + -- Cache 250 past chunks by default. This will take roughly 250 MB of RAM. + -- At the time of writing (1/2020), there are 166 epochs, and we store one + -- epoch per chunk, so even one year from now, we will be able to cache all + -- chunks' indices in the chain. + -- + -- If this number were too low, i.e., less than the number of chunks that + -- clients are requesting blocks from, we would constantly evict and + -- reparse indices, causing a much higher CPU load. + cacheConfig = + Index.CacheConfig + { pastChunksToCache = 250 + , expireUnusedAfter = 5 * 60 -- Expire after 5 minutes + } -- | 'EncodeDisk' and 'DecodeDisk' constraints needed for the ImmutableDB.
type ImmutableDbSerialiseConstraints blk = @@ -197,19 +205,19 @@ type ImmutableDbSerialiseConstraints blk = Exposed internals and/or extra functionality for testing purposes ------------------------------------------------------------------------------} -data Internal m blk = Internal { - -- | Delete everything in the database after the specified tip. - -- - -- PRECONDITION: The tip must correspond to an existing block or genesis. - -- - -- The correctness of open iterators is not guaranteed, they should be - -- closed before calling this operation. - -- - -- Throws a 'ClosedDBError' if the database is closed. - deleteAfter_ :: HasCallStack => WithOrigin (Tip blk) -> m () - -- | Get the hash of the block in the given slot. If the slot contains both - -- an EBB and a non-EBB, return the hash of the non-EBB. +data Internal m blk = Internal + { deleteAfter_ :: HasCallStack => WithOrigin (Tip blk) -> m () + -- ^ Delete everything in the database after the specified tip. + -- + -- PRECONDITION: The tip must correspond to an existing block or genesis. + -- + -- The correctness of open iterators is not guaranteed, they should be + -- closed before calling this operation. + -- + -- Throws a 'ClosedDBError' if the database is closed. , getHashForSlot_ :: HasCallStack => SlotNo -> m (Maybe (HeaderHash blk)) + -- ^ Get the hash of the block in the given slot. If the slot contains both + -- an EBB and a non-EBB, return the hash of the non-EBB. } -- | Wrapper around 'deleteAfter_' to ensure 'HasCallStack' constraint @@ -229,106 +237,110 @@ getHashForSlot = getHashForSlot_ ------------------------------------------------------------------------------} openDB :: - forall m blk ans. - ( IOLike m - , GetPrevHash blk - , ConvertRawHash blk - , ImmutableDbSerialiseConstraints blk - , HasCallStack - ) - => Complete ImmutableDbArgs m blk - -> (forall st. WithTempRegistry st m (ImmutableDB m blk, st) -> ans) - -> ans + forall m blk ans. 
+ ( IOLike m + , GetPrevHash blk + , ConvertRawHash blk + , ImmutableDbSerialiseConstraints blk + , HasCallStack + ) => + Complete ImmutableDbArgs m blk -> + (forall st. WithTempRegistry st m (ImmutableDB m blk, st) -> ans) -> + ans openDB args cont = - openDBInternal args (cont . fmap swizzle) - where - swizzle ((immdb, _internal), ost) = (immdb, ost) + openDBInternal args (cont . fmap swizzle) + where + swizzle ((immdb, _internal), ost) = (immdb, ost) -- | For testing purposes: exposes internals via 'Internal' --- --- openDBInternal :: - forall m blk ans. - ( IOLike m - , GetPrevHash blk - , ConvertRawHash blk - , ImmutableDbSerialiseConstraints blk - , HasCallStack - ) - => Complete ImmutableDbArgs m blk - -> (forall h. - WithTempRegistry - (OpenState m blk h) - m - ((ImmutableDB m blk, Internal m blk), OpenState m blk h) - -> ans - ) - -> ans -openDBInternal ImmutableDbArgs { immHasFS = SomeHasFS hasFS, .. } cont = cont $ do - lift $ createDirectoryIfMissing hasFS True (mkFsPath []) - let validateEnv = ValidateEnv { - hasFS = hasFS - , chunkInfo = immChunkInfo - , tracer = immTracer - , cacheConfig = immCacheConfig - , codecConfig = immCodecConfig + forall m blk ans. + ( IOLike m + , GetPrevHash blk + , ConvertRawHash blk + , ImmutableDbSerialiseConstraints blk + , HasCallStack + ) => + Complete ImmutableDbArgs m blk -> + ( forall h. 
+ WithTempRegistry + (OpenState m blk h) + m + ((ImmutableDB m blk, Internal m blk), OpenState m blk h) -> + ans + ) -> + ans +openDBInternal ImmutableDbArgs{immHasFS = SomeHasFS hasFS, ..} cont = cont $ do + lift $ createDirectoryIfMissing hasFS True (mkFsPath []) + let validateEnv = + ValidateEnv + { hasFS = hasFS + , chunkInfo = immChunkInfo + , tracer = immTracer + , cacheConfig = immCacheConfig + , codecConfig = immCodecConfig , checkIntegrity = immCheckIntegrity } - ost <- validateAndReopen validateEnv immRegistry immValidationPolicy + ost <- validateAndReopen validateEnv immRegistry immValidationPolicy - stVar <- lift $ newSVar (DbOpen ost) + stVar <- lift $ newSVar (DbOpen ost) - let dbEnv = ImmutableDBEnv { - hasFS = hasFS + let dbEnv = + ImmutableDBEnv + { hasFS = hasFS , varInternalState = stVar - , checkIntegrity = immCheckIntegrity - , chunkInfo = immChunkInfo - , tracer = immTracer - , cacheConfig = immCacheConfig - , codecConfig = immCodecConfig + , checkIntegrity = immCheckIntegrity + , chunkInfo = immChunkInfo + , tracer = immTracer + , cacheConfig = immCacheConfig + , codecConfig = immCodecConfig } - db = ImmutableDB { - closeDB_ = closeDBImpl dbEnv - , getTip_ = getTipImpl dbEnv + db = + ImmutableDB + { closeDB_ = closeDBImpl dbEnv + , getTip_ = getTipImpl dbEnv , getBlockComponent_ = getBlockComponentImpl dbEnv - , appendBlock_ = appendBlockImpl dbEnv - , stream_ = streamImpl dbEnv + , appendBlock_ = appendBlockImpl dbEnv + , stream_ = streamImpl dbEnv } - internal = Internal { - deleteAfter_ = deleteAfterImpl dbEnv + internal = + Internal + { deleteAfter_ = deleteAfterImpl dbEnv , getHashForSlot_ = getHashForSlotImpl dbEnv } - return ((db, internal), ost) + return ((db, internal), ost) closeDBImpl :: - forall m blk. 
(HasCallStack, IOLike m) - => ImmutableDBEnv m blk - -> m () -closeDBImpl ImmutableDBEnv { hasFS, tracer, varInternalState } = do - internalState <- takeSVar varInternalState - case internalState of - -- Already closed - DbClosed -> do - putSVar varInternalState internalState - traceWith tracer $ DBAlreadyClosed - DbOpen openState -> do - -- Close the database before doing the file-system operations so that - -- in case these fail, we don't leave the database open. - putSVar varInternalState DbClosed - cleanUp hasFS openState - traceWith tracer DBClosed + forall m blk. + (HasCallStack, IOLike m) => + ImmutableDBEnv m blk -> + m () +closeDBImpl ImmutableDBEnv{hasFS, tracer, varInternalState} = do + internalState <- takeSVar varInternalState + case internalState of + -- Already closed + DbClosed -> do + putSVar varInternalState internalState + traceWith tracer $ DBAlreadyClosed + DbOpen openState -> do + -- Close the database before doing the file-system operations so that + -- in case these fail, we don't leave the database open. + putSVar varInternalState DbClosed + cleanUp hasFS openState + traceWith tracer DBClosed deleteAfterImpl :: - forall m blk. (HasCallStack, ConvertRawHash blk, IOLike m, HasHeader blk) - => ImmutableDBEnv m blk - -> WithOrigin (Tip blk) - -> m () -deleteAfterImpl dbEnv@ImmutableDBEnv { tracer, chunkInfo } newTip = + forall m blk. + (HasCallStack, ConvertRawHash blk, IOLike m, HasHeader blk) => + ImmutableDBEnv m blk -> + WithOrigin (Tip blk) -> + m () +deleteAfterImpl dbEnv@ImmutableDBEnv{tracer, chunkInfo} newTip = -- We're not using 'Index' in this function but truncating the index files -- directly. 
modifyOpenState dbEnv $ \hasFS -> do - st@OpenState { currentIndex, currentTip } <- get + st@OpenState{currentIndex, currentTip} <- get when ((CompareTip <$> newTip) < (CompareTip <$> currentTip)) $ do lift $ lift $ do @@ -343,311 +355,327 @@ deleteAfterImpl dbEnv@ImmutableDBEnv { tracer, chunkInfo } newTip = ost <- lift $ mkOpenState hasFS currentIndex newChunk newTip allowExisting put ost - where - newTipChunkSlot :: WithOrigin ChunkSlot - newTipChunkSlot = chunkSlotForTip chunkInfo <$> newTip - - newChunk :: ChunkNo - allowExisting :: AllowExisting - (newChunk, allowExisting) = case newTipChunkSlot of - Origin -> (firstChunkNo, MustBeNew) - NotOrigin (ChunkSlot chunk _) -> (chunk, AllowExisting) - - truncateTo :: - HasFS m h - -> OpenState m blk h - -> WithOrigin ChunkSlot - -> m () - truncateTo hasFS OpenState {} = \case - Origin -> - removeFilesStartingFrom hasFS firstChunkNo - NotOrigin (ChunkSlot chunk relSlot) -> do - removeFilesStartingFrom hasFS (nextChunkNo chunk) - - -- Retrieve the needed info from the primary index file and then - -- truncate it. - primaryIndex <- Primary.load (Proxy @blk) hasFS chunk - Primary.truncateToSlotFS hasFS chunk relSlot - let lastSecondaryOffset = Primary.offsetOfSlot primaryIndex relSlot - isEBB = relativeSlotIsEBB relSlot - - -- Retrieve the needed info from the secondary index file and then - -- truncate it. - (entry :: Secondary.Entry blk, blockSize) <- - Secondary.readEntry hasFS chunk isEBB lastSecondaryOffset - Secondary.truncateToEntry (Proxy @blk) hasFS chunk lastSecondaryOffset - - -- Truncate the chunk file. 
- case blockSize of - -- The block is the last block in the chunk file, so no need to - -- truncate - Secondary.LastEntry -> return () - Secondary.BlockSize size -> - withFile hasFS chunkFile (AppendMode AllowExisting) $ \eHnd -> - hTruncate hasFS eHnd offset - where - chunkFile = fsPathChunkFile chunk - offset = unBlockOffset (Secondary.blockOffset entry) - + fromIntegral size + where + newTipChunkSlot :: WithOrigin ChunkSlot + newTipChunkSlot = chunkSlotForTip chunkInfo <$> newTip + + newChunk :: ChunkNo + allowExisting :: AllowExisting + (newChunk, allowExisting) = case newTipChunkSlot of + Origin -> (firstChunkNo, MustBeNew) + NotOrigin (ChunkSlot chunk _) -> (chunk, AllowExisting) + + truncateTo :: + HasFS m h -> + OpenState m blk h -> + WithOrigin ChunkSlot -> + m () + truncateTo hasFS OpenState{} = \case + Origin -> + removeFilesStartingFrom hasFS firstChunkNo + NotOrigin (ChunkSlot chunk relSlot) -> do + removeFilesStartingFrom hasFS (nextChunkNo chunk) + + -- Retrieve the needed info from the primary index file and then + -- truncate it. + primaryIndex <- Primary.load (Proxy @blk) hasFS chunk + Primary.truncateToSlotFS hasFS chunk relSlot + let lastSecondaryOffset = Primary.offsetOfSlot primaryIndex relSlot + isEBB = relativeSlotIsEBB relSlot + + -- Retrieve the needed info from the secondary index file and then + -- truncate it. + (entry :: Secondary.Entry blk, blockSize) <- + Secondary.readEntry hasFS chunk isEBB lastSecondaryOffset + Secondary.truncateToEntry (Proxy @blk) hasFS chunk lastSecondaryOffset + + -- Truncate the chunk file. + case blockSize of + -- The block is the last block in the chunk file, so no need to + -- truncate + Secondary.LastEntry -> return () + Secondary.BlockSize size -> + withFile hasFS chunkFile (AppendMode AllowExisting) $ \eHnd -> + hTruncate hasFS eHnd offset + where + chunkFile = fsPathChunkFile chunk + offset = + unBlockOffset (Secondary.blockOffset entry) + + fromIntegral size getHashForSlotImpl :: - forall m blk. 
(HasCallStack, IOLike m, HasHeader blk) - => ImmutableDBEnv m blk - -> SlotNo - -> m (Maybe (HeaderHash blk)) + forall m blk. + (HasCallStack, IOLike m, HasHeader blk) => + ImmutableDBEnv m blk -> + SlotNo -> + m (Maybe (HeaderHash blk)) getHashForSlotImpl dbEnv slot = - withOpenState dbEnv $ \_hasFS openState -> withEarlyExit $ do - let OpenState{currentTip, currentIndex = index} = openState - - readOffset offset = - lift $ Index.readOffset index chunk (chunkRelative offset) - - (chunk, mIfBoundary, ifRegular) = - chunkSlotForUnknownBlock chunkInfo slot - - -- Check that the slot is not beyond the tip. - case currentTip of - NotOrigin (Tip { tipSlotNo }) - | slot <= tipSlotNo - -> pure () - _ -> exitEarly - - -- Primary index: test whether the slot contains a non-EBB, or an EBB as a - -- fallback. - (offset, isEBB) <- readOffset ifRegular >>= \case + withOpenState dbEnv $ \_hasFS openState -> withEarlyExit $ do + let OpenState{currentTip, currentIndex = index} = openState + + readOffset offset = + lift $ Index.readOffset index chunk (chunkRelative offset) + + (chunk, mIfBoundary, ifRegular) = + chunkSlotForUnknownBlock chunkInfo slot + + -- Check that the slot is not beyond the tip. + case currentTip of + NotOrigin (Tip{tipSlotNo}) + | slot <= tipSlotNo -> + pure () + _ -> exitEarly + + -- Primary index: test whether the slot contains a non-EBB, or an EBB as a + -- fallback. + (offset, isEBB) <- + readOffset ifRegular >>= \case (Just offset, _) -> pure (offset, IsNotEBB) - (Nothing, _) -> case mIfBoundary of - Nothing -> exitEarly - Just ifBoundary -> readOffset ifBoundary >>= \case - (Just offset, _) -> pure (offset, IsEBB) - (Nothing, _) -> exitEarly - - -- Read hash from secondary index. 
- (entry, _) <- lift $ Index.readEntry index chunk isEBB offset - pure $ Secondary.headerHash entry - where - ImmutableDBEnv { chunkInfo } = dbEnv + (Nothing, _) -> case mIfBoundary of + Nothing -> exitEarly + Just ifBoundary -> + readOffset ifBoundary >>= \case + (Just offset, _) -> pure (offset, IsEBB) + (Nothing, _) -> exitEarly + + -- Read hash from secondary index. + (entry, _) <- lift $ Index.readEntry index chunk isEBB offset + pure $ Secondary.headerHash entry + where + ImmutableDBEnv{chunkInfo} = dbEnv getTipImpl :: - forall m blk. (HasCallStack, IOLike m, HasHeader blk) - => ImmutableDBEnv m blk - -> STM m (WithOrigin (Tip blk)) + forall m blk. + (HasCallStack, IOLike m, HasHeader blk) => + ImmutableDBEnv m blk -> + STM m (WithOrigin (Tip blk)) getTipImpl dbEnv = do - SomePair _hasFS OpenState { currentTip } <- getOpenState dbEnv - return currentTip + SomePair _hasFS OpenState{currentTip} <- getOpenState dbEnv + return currentTip getBlockComponentImpl :: - forall m blk b. - ( HasHeader blk - , ReconstructNestedCtxt Header blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , DecodeDiskDep (NestedCtxt Header) blk - , IOLike m - ) - => ImmutableDBEnv m blk - -> BlockComponent blk b - -> RealPoint blk - -> m (Either (MissingBlock blk) b) + forall m blk b. + ( HasHeader blk + , ReconstructNestedCtxt Header blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , DecodeDiskDep (NestedCtxt Header) blk + , IOLike m + ) => + ImmutableDBEnv m blk -> + BlockComponent blk b -> + RealPoint blk -> + m (Either (MissingBlock blk) b) getBlockComponentImpl dbEnv blockComponent pt = - withOpenState dbEnv $ \hasFS OpenState{..} -> runExceptT $ do - slotInfo <- getSlotInfo chunkInfo currentIndex currentTip pt - let (ChunkSlot chunk _, (entry, blockSize), _secondaryOffset) = slotInfo - chunkFile = fsPathChunkFile chunk - Secondary.Entry { blockOffset } = entry - - -- TODO don't open the 'chunkFile' unless we need to. 
In practice, - -- we only use this to read (raw) blocks or (raw) headers, which - -- does require opening the 'chunkFile'. Related: #2227. - lift $ withFile hasFS chunkFile ReadMode $ \eHnd -> do - - actualBlockSize <- case blockSize of - Secondary.BlockSize size - -> return size - -- See the 'GetBlock' case for more info about - -- 'Secondary.LastEntry'. - Secondary.LastEntry - | chunk == currentChunk - -> return $ fromIntegral $ currentChunkOffset - blockOffset - | otherwise - -> do - -- With cached indices, we'll never hit this case. - offsetAfterLastBlock <- hGetSize hasFS eHnd - return $ fromIntegral $ - offsetAfterLastBlock - unBlockOffset blockOffset - - extractBlockComponent - hasFS - chunkInfo - chunk - codecConfig - checkIntegrity - eHnd - (WithBlockSize actualBlockSize entry) - blockComponent - where - ImmutableDBEnv { chunkInfo, codecConfig, checkIntegrity } = dbEnv + withOpenState dbEnv $ \hasFS OpenState{..} -> runExceptT $ do + slotInfo <- getSlotInfo chunkInfo currentIndex currentTip pt + let (ChunkSlot chunk _, (entry, blockSize), _secondaryOffset) = slotInfo + chunkFile = fsPathChunkFile chunk + Secondary.Entry{blockOffset} = entry + + -- TODO don't open the 'chunkFile' unless we need to. In practice, + -- we only use this to read (raw) blocks or (raw) headers, which + -- does require opening the 'chunkFile'. Related: #2227. + lift $ withFile hasFS chunkFile ReadMode $ \eHnd -> do + actualBlockSize <- case blockSize of + Secondary.BlockSize size -> + return size + -- See the 'GetBlock' case for more info about + -- 'Secondary.LastEntry'. + Secondary.LastEntry + | chunk == currentChunk -> + return $ fromIntegral $ currentChunkOffset - blockOffset + | otherwise -> + do + -- With cached indices, we'll never hit this case. 
+ offsetAfterLastBlock <- hGetSize hasFS eHnd + return $ + fromIntegral $ + offsetAfterLastBlock - unBlockOffset blockOffset + + extractBlockComponent + hasFS + chunkInfo + chunk + codecConfig + checkIntegrity + eHnd + (WithBlockSize actualBlockSize entry) + blockComponent + where + ImmutableDBEnv{chunkInfo, codecConfig, checkIntegrity} = dbEnv appendBlockImpl :: - forall m blk. - ( HasHeader blk - , GetHeader blk - , EncodeDisk blk blk - , HasBinaryBlockInfo blk - , IOLike m - , HasCallStack - ) - => ImmutableDBEnv m blk - -> blk - -> m () + forall m blk. + ( HasHeader blk + , GetHeader blk + , EncodeDisk blk blk + , HasBinaryBlockInfo blk + , IOLike m + , HasCallStack + ) => + ImmutableDBEnv m blk -> + blk -> + m () appendBlockImpl dbEnv blk = - modifyOpenState dbEnv $ \hasFS -> do - OpenState { - currentTip = initialTip - , currentIndex = index - , currentChunk = initialChunk - } <- get - - -- Check that we're not appending to the past - let blockAfterTip = - NotOrigin (CompareTip blockTip) > (CompareTip <$> initialTip) - - unless blockAfterTip $ lift $ + modifyOpenState dbEnv $ \hasFS -> do + OpenState + { currentTip = initialTip + , currentIndex = index + , currentChunk = initialChunk + } <- + get + + -- Check that we're not appending to the past + let blockAfterTip = + NotOrigin (CompareTip blockTip) > (CompareTip <$> initialTip) + + unless blockAfterTip $ + lift $ throwApiMisuse $ AppendBlockNotNewerThanTipError (blockRealPoint blk) (tipToPoint initialTip) - -- If the slot is in a chunk > the current one, we have to finalise the - -- current one and start a new chunk file, possibly skipping some chunks. - when (chunk > initialChunk) $ do - let newChunksToStart :: Int - newChunksToStart = fromIntegral $ countChunks chunk initialChunk - replicateM_ newChunksToStart $ - startNewChunk hasFS index chunkInfo initialChunk - - -- We may have updated the state with 'startNewChunk', so get the - -- (possibly) updated state. 
- OpenState { - currentTip - , currentChunkHandle - , currentChunkOffset - , currentSecondaryHandle - , currentSecondaryOffset - , currentPrimaryHandle - } <- get - - -- Compute the next empty slot @m@, if we need to write to slot @n@, we - -- will need to backfill @n - m@ slots. - let nextFreeRelSlot :: RelativeSlot - nextFreeRelSlot = - if chunk > initialChunk - -- If we had to start a new chunk, we start with slot 0. Note that - -- in this case the 'currentTip' will refer to something in a - -- chunk before 'currentChunk'. - then firstBlockOrEBB chunkInfo chunk - else case currentTip of - Origin -> firstBlockOrEBB chunkInfo firstChunkNo - -- Invariant: the currently open chunk is never full - NotOrigin tip -> unsafeNextRelativeSlot . chunkRelative $ - chunkSlotForTip chunkInfo tip - - -- Append to the end of the chunk file. - (blockSize, entrySize) <- lift $ lift $ do - - -- Write to the chunk file - let bytes = CBOR.toLazyByteString $ encodeDisk codecConfig blk - (blockSize, crc) <- hPutAllCRC hasFS currentChunkHandle bytes - - -- Write to the secondary index file - let entry = Secondary.Entry { - blockOffset = currentChunkOffset - , headerOffset = HeaderOffset headerOffset - , headerSize = HeaderSize headerSize - , checksum = crc - , headerHash = tipHash blockTip - , blockOrEBB = blockOrEBB - } - entrySize <- - fromIntegral <$> - Index.appendEntry - index - chunk - currentSecondaryHandle - (WithBlockSize (fromIntegral blockSize) entry) - - -- Write to the primary index file - let backfillOffsets = - Primary.backfill - relSlot - nextFreeRelSlot - currentSecondaryOffset - offsets = backfillOffsets <> [currentSecondaryOffset + entrySize] - Index.appendOffsets index currentPrimaryHandle offsets - - return (blockSize, entrySize) - - modify $ \st -> st - { currentChunkOffset = currentChunkOffset + fromIntegral blockSize + -- If the slot is in a chunk > the current one, we have to finalise the + -- current one and start a new chunk file, possibly skipping some 
chunks.
+ when (chunk > initialChunk) $ do
+ let newChunksToStart :: Int
+ newChunksToStart = fromIntegral $ countChunks chunk initialChunk
+ replicateM_ newChunksToStart $
+ startNewChunk hasFS index chunkInfo initialChunk
+
+ -- We may have updated the state with 'startNewChunk', so get the
+ -- (possibly) updated state.
+ OpenState
+ { currentTip
+ , currentChunkHandle
+ , currentChunkOffset
+ , currentSecondaryHandle
+ , currentSecondaryOffset
+ , currentPrimaryHandle
+ } <-
+ get
+
+ -- Compute the next empty slot @m@; if we need to write to slot @n@, we
+ -- will need to backfill @n - m@ slots.
+ let nextFreeRelSlot :: RelativeSlot
+ nextFreeRelSlot =
+ if chunk > initialChunk
+ -- If we had to start a new chunk, we start with slot 0. Note that
+ -- in this case the 'currentTip' will refer to something in a
+ -- chunk before 'currentChunk'.
+ then firstBlockOrEBB chunkInfo chunk
+ else case currentTip of
+ Origin -> firstBlockOrEBB chunkInfo firstChunkNo
+ -- Invariant: the currently open chunk is never full
+ NotOrigin tip ->
+ unsafeNextRelativeSlot . chunkRelative $
+ chunkSlotForTip chunkInfo tip
+
+ -- Append to the end of the chunk file.
+ (blockSize, entrySize) <- lift $ lift $ do + -- Write to the chunk file + let bytes = CBOR.toLazyByteString $ encodeDisk codecConfig blk + (blockSize, crc) <- hPutAllCRC hasFS currentChunkHandle bytes + + -- Write to the secondary index file + let entry = + Secondary.Entry + { blockOffset = currentChunkOffset + , headerOffset = HeaderOffset headerOffset + , headerSize = HeaderSize headerSize + , checksum = crc + , headerHash = tipHash blockTip + , blockOrEBB = blockOrEBB + } + entrySize <- + fromIntegral + <$> Index.appendEntry + index + chunk + currentSecondaryHandle + (WithBlockSize (fromIntegral blockSize) entry) + + -- Write to the primary index file + let backfillOffsets = + Primary.backfill + relSlot + nextFreeRelSlot + currentSecondaryOffset + offsets = backfillOffsets <> [currentSecondaryOffset + entrySize] + Index.appendOffsets index currentPrimaryHandle offsets + + return (blockSize, entrySize) + + modify $ \st -> + st + { currentChunkOffset = currentChunkOffset + fromIntegral blockSize , currentSecondaryOffset = currentSecondaryOffset + entrySize - , currentTip = NotOrigin blockTip + , currentTip = NotOrigin blockTip } - where - ImmutableDBEnv { chunkInfo, codecConfig } = dbEnv + where + ImmutableDBEnv{chunkInfo, codecConfig} = dbEnv - newBlockIsEBB :: Maybe EpochNo - newBlockIsEBB = blockIsEBB blk + newBlockIsEBB :: Maybe EpochNo + newBlockIsEBB = blockIsEBB blk - blockOrEBB :: BlockOrEBB - blockOrEBB = case newBlockIsEBB of - Just epochNo -> EBB epochNo - Nothing -> Block (blockSlot blk) + blockOrEBB :: BlockOrEBB + blockOrEBB = case newBlockIsEBB of + Just epochNo -> EBB epochNo + Nothing -> Block (blockSlot blk) - ChunkSlot chunk relSlot = chunkSlotForBlockOrEBB chunkInfo blockOrEBB + ChunkSlot chunk relSlot = chunkSlotForBlockOrEBB chunkInfo blockOrEBB - blockTip :: Tip blk - blockTip = blockToTip blk + blockTip :: Tip blk + blockTip = blockToTip blk - BinaryBlockInfo {..} = getBinaryBlockInfo blk + BinaryBlockInfo{..} = getBinaryBlockInfo blk 
startNewChunk ::
- forall m h blk. (HasCallStack, IOLike m, Eq h)
- => HasFS m h
- -> Index m blk h
- -> ChunkInfo
- -> ChunkNo -- ^ Chunk containing the tip
- -> ModifyOpenState m blk h ()
+ forall m h blk.
+ (HasCallStack, IOLike m, Eq h) =>
+ HasFS m h ->
+ Index m blk h ->
+ ChunkInfo ->
+ -- | Chunk containing the tip
+ ChunkNo ->
+ ModifyOpenState m blk h ()
startNewChunk hasFS index chunkInfo tipChunk = do
- st@OpenState {..} <- get
-
- -- We have to take care when starting multiple new chunks in a row. In the
- -- first call the tip will be in the current chunk, but in subsequent
- -- calls, the tip will still be in an chunk in the past, not the
- -- 'currentChunk'. In that case, we can't use the relative slot of the
- -- tip, since it will point to a relative slot in a past chunk. So when
- -- the current (empty) chunk is not the chunk containing the tip, we use
- -- relative slot 0 to calculate how much to pad.
- let nextFreeRelSlot :: NextRelativeSlot
- nextFreeRelSlot = case currentTip of
- Origin ->
- NextRelativeSlot $ firstBlockOrEBB chunkInfo firstChunkNo
- NotOrigin tip ->
- if tipChunk == currentChunk then
+ st@OpenState{..} <- get
+
+ -- We have to take care when starting multiple new chunks in a row. In the
+ -- first call the tip will be in the current chunk, but in subsequent
+ -- calls, the tip will still be in a chunk in the past, not the
+ -- 'currentChunk'. In that case, we can't use the relative slot of the
+ -- tip, since it will point to a relative slot in a past chunk. So when
+ -- the current (empty) chunk is not the chunk containing the tip, we use
+ -- relative slot 0 to calculate how much to pad.
+ let nextFreeRelSlot :: NextRelativeSlot + nextFreeRelSlot = case currentTip of + Origin -> + NextRelativeSlot $ firstBlockOrEBB chunkInfo firstChunkNo + NotOrigin tip -> + if tipChunk == currentChunk + then let ChunkSlot _ relSlot = chunkSlotForTip chunkInfo tip - in nextRelativeSlot relSlot + in nextRelativeSlot relSlot else NextRelativeSlot $ firstBlockOrEBB chunkInfo currentChunk - let backfillOffsets = Primary.backfillChunk - chunkInfo - currentChunk - nextFreeRelSlot - currentSecondaryOffset + let backfillOffsets = + Primary.backfillChunk + chunkInfo + currentChunk + nextFreeRelSlot + currentSecondaryOffset - lift $ lift $ + lift $ + lift $ Index.appendOffsets index currentPrimaryHandle backfillOffsets - `finally` closeOpenHandles hasFS st + `finally` closeOpenHandles hasFS st - st' <- lift $ + st' <- + lift $ mkOpenState hasFS index (nextChunkNo currentChunk) currentTip MustBeNew - put st' + put st' diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index.hs index 65b6156121..5b2f39729b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index.hs @@ -4,43 +4,50 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TupleSections #-} -module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index ( - -- * Index +module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index + ( -- * Index Index (..) , readEntry , readOffset + -- * File-backed index , fileBackedIndex + -- * Cached index , CacheConfig (..) 
, cachedIndex ) where -import Control.ResourceRegistry -import Control.Tracer (Tracer) -import Data.Functor.Identity (Identity (..)) -import Data.Proxy (Proxy (..)) -import Data.Sequence.Strict (StrictSeq) -import Data.Typeable (Typeable) -import Data.Word (Word64) -import GHC.Stack (HasCallStack) -import NoThunks.Class (OnlyCheckWhnfNamed (..)) -import Ouroboros.Consensus.Block (ConvertRawHash, IsEBB, StandardHash) -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Cache - (CacheConfig (..)) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Cache as Cache -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary - (SecondaryOffset) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary as Primary -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary - (BlockSize) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary as Secondary -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types - (TraceCacheEvent, WithBlockSize (..)) -import Ouroboros.Consensus.Util.IOLike -import System.FS.API (HasFS) -import System.FS.API.Types (AllowExisting, Handle) +import Control.ResourceRegistry +import Control.Tracer (Tracer) +import Data.Functor.Identity (Identity (..)) +import Data.Proxy (Proxy (..)) +import Data.Sequence.Strict (StrictSeq) +import Data.Typeable (Typeable) +import Data.Word (Word64) +import GHC.Stack (HasCallStack) +import NoThunks.Class (OnlyCheckWhnfNamed (..)) +import Ouroboros.Consensus.Block (ConvertRawHash, IsEBB, StandardHash) +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Cache + ( CacheConfig (..) 
+ ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Cache qualified as Cache +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary + ( SecondaryOffset + ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary qualified as Primary +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary + ( BlockSize + ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary qualified as Secondary +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types + ( TraceCacheEvent + , WithBlockSize (..) + ) +import Ouroboros.Consensus.Util.IOLike +import System.FS.API (HasFS) +import System.FS.API.Types (AllowExisting, Handle) {------------------------------------------------------------------------------ Index @@ -49,124 +56,123 @@ import System.FS.API.Types (AllowExisting, Handle) -- | Bundle the operations on the primary and secondary index that touch the -- files. This allows us to easily introduce an intermediary caching layer. data Index m blk h = Index - { -- | See 'Primary.readOffsets' - readOffsets - :: forall t. (HasCallStack, Traversable t) - => ChunkNo - -> t RelativeSlot - -> m (t (Maybe SecondaryOffset), Maybe (StrictSeq SecondaryOffset)) - - -- | See 'Primary.readFirstFilledSlot' - , readFirstFilledSlot - :: HasCallStack - => ChunkNo - -> m (Maybe RelativeSlot) - - -- | See 'Primary.open' - , openPrimaryIndex - :: HasCallStack - => ChunkNo - -> AllowExisting - -> m (Handle h) - - -- | See 'Primary.appendOffsets' - , appendOffsets - :: forall f. (HasCallStack, Foldable f) - => Handle h - -> f SecondaryOffset - -> m () - - -- | See 'Secondary.readEntries' - , readEntries - :: forall t. 
(HasCallStack, Traversable t) - => ChunkNo - -> t (IsEBB, SecondaryOffset) - -> m (t (Secondary.Entry blk, BlockSize)) - - -- | See 'Secondary.readAllEntries' - , readAllEntries - :: HasCallStack - => SecondaryOffset - -> ChunkNo - -> (Secondary.Entry blk -> Bool) - -> Word64 - -> IsEBB - -> m [WithBlockSize (Secondary.Entry blk)] - - -- | See 'Secondary.appendEntry' - , appendEntry - :: HasCallStack - => ChunkNo - -> Handle h - -> WithBlockSize (Secondary.Entry blk) - -> m Word64 - - -- | Close the index and stop any background threads. - -- - -- Should be called when the ImmutableDB is closed. - , close - :: HasCallStack - => m () - - -- | Restart a closed index using the given chunk as the current chunk, - -- drop all previously cached information. - -- - -- NOTE: this will only used in the testsuite, when we need to truncate. - , restart - :: HasCallStack - => ChunkNo - -> m () + { readOffsets :: + forall t. + (HasCallStack, Traversable t) => + ChunkNo -> + t RelativeSlot -> + m (t (Maybe SecondaryOffset), Maybe (StrictSeq SecondaryOffset)) + -- ^ See 'Primary.readOffsets' + , readFirstFilledSlot :: + HasCallStack => + ChunkNo -> + m (Maybe RelativeSlot) + -- ^ See 'Primary.readFirstFilledSlot' + , openPrimaryIndex :: + HasCallStack => + ChunkNo -> + AllowExisting -> + m (Handle h) + -- ^ See 'Primary.open' + , appendOffsets :: + forall f. + (HasCallStack, Foldable f) => + Handle h -> + f SecondaryOffset -> + m () + -- ^ See 'Primary.appendOffsets' + , readEntries :: + forall t. 
+ (HasCallStack, Traversable t) =>
+ ChunkNo ->
+ t (IsEBB, SecondaryOffset) ->
+ m (t (Secondary.Entry blk, BlockSize))
+ -- ^ See 'Secondary.readEntries'
+ , readAllEntries ::
+ HasCallStack =>
+ SecondaryOffset ->
+ ChunkNo ->
+ (Secondary.Entry blk -> Bool) ->
+ Word64 ->
+ IsEBB ->
+ m [WithBlockSize (Secondary.Entry blk)]
+ -- ^ See 'Secondary.readAllEntries'
+ , appendEntry ::
+ HasCallStack =>
+ ChunkNo ->
+ Handle h ->
+ WithBlockSize (Secondary.Entry blk) ->
+ m Word64
+ -- ^ See 'Secondary.appendEntry'
+ , close ::
+ HasCallStack =>
+ m ()
+ -- ^ Close the index and stop any background threads.
+ --
+ -- Should be called when the ImmutableDB is closed.
+ , restart ::
+ HasCallStack =>
+ ChunkNo ->
+ m ()
+ -- ^ Restart a closed index using the given chunk as the current chunk,
+ -- drop all previously cached information.
+ --
+ -- NOTE: this will only be used in the testsuite, when we need to truncate.
 }
 deriving NoThunks via OnlyCheckWhnfNamed "Index" (Index m blk h)

-- | See 'Primary.readOffset'.
readOffset ::
- Functor m
- => Index m blk h
- -> ChunkNo
- -> RelativeSlot
- -> m (Maybe SecondaryOffset, Maybe (StrictSeq SecondaryOffset))
-readOffset index chunk slot = (\(x, y) -> (runIdentity x, y)) <$>
- readOffsets index chunk (Identity slot)
+ Functor m =>
+ Index m blk h ->
+ ChunkNo ->
+ RelativeSlot ->
+ m (Maybe SecondaryOffset, Maybe (StrictSeq SecondaryOffset))
+readOffset index chunk slot =
+ (\(x, y) -> (runIdentity x, y))
+ <$> readOffsets index chunk (Identity slot)

-- | See 'Secondary.readEntry'.
readEntry :: - Functor m - => Index m blk h - -> ChunkNo - -> IsEBB - -> SecondaryOffset - -> m (Secondary.Entry blk, BlockSize) -readEntry index chunk isEBB slotOffset = runIdentity <$> - readEntries index chunk (Identity (isEBB, slotOffset)) + Functor m => + Index m blk h -> + ChunkNo -> + IsEBB -> + SecondaryOffset -> + m (Secondary.Entry blk, BlockSize) +readEntry index chunk isEBB slotOffset = + runIdentity + <$> readEntries index chunk (Identity (isEBB, slotOffset)) {------------------------------------------------------------------------------ File-backed index ------------------------------------------------------------------------------} fileBackedIndex :: - forall m blk h. - (ConvertRawHash blk, MonadCatch m, StandardHash blk, Typeable blk) - => HasFS m h - -> ChunkInfo - -> Index m blk h -fileBackedIndex hasFS chunkInfo = Index - { readOffsets = \x y -> (,Nothing) <$> - Primary.readOffsets p hasFS x y + forall m blk h. + (ConvertRawHash blk, MonadCatch m, StandardHash blk, Typeable blk) => + HasFS m h -> + ChunkInfo -> + Index m blk h +fileBackedIndex hasFS chunkInfo = + Index + { readOffsets = \x y -> + (,Nothing) + <$> Primary.readOffsets p hasFS x y , readFirstFilledSlot = Primary.readFirstFilledSlot p hasFS chunkInfo - , openPrimaryIndex = Primary.open hasFS - , appendOffsets = Primary.appendOffsets hasFS - , readEntries = Secondary.readEntries hasFS - , readAllEntries = Secondary.readAllEntries hasFS - , appendEntry = \_chunk h (WithBlockSize _ entry) -> - Secondary.appendEntry hasFS h entry - -- Nothing to do - , close = return () - , restart = \_newCurChunk -> return () + , openPrimaryIndex = Primary.open hasFS + , appendOffsets = Primary.appendOffsets hasFS + , readEntries = Secondary.readEntries hasFS + , readAllEntries = Secondary.readAllEntries hasFS + , appendEntry = \_chunk h (WithBlockSize _ entry) -> + Secondary.appendEntry hasFS h entry + , -- Nothing to do + close = return () + , restart = \_newCurChunk -> return () } - where - p :: 
Proxy blk - p = Proxy + where + p :: Proxy blk + p = Proxy {------------------------------------------------------------------------------ Cached index @@ -178,31 +184,34 @@ fileBackedIndex hasFS chunkInfo = Index -- Spawns a background thread to expire past chunks from the cache that -- haven't been used for a while. cachedIndex :: - forall m blk h. - (IOLike m, ConvertRawHash blk, StandardHash blk, Typeable blk) - => HasFS m h - -> ResourceRegistry m - -> Tracer m TraceCacheEvent - -> CacheConfig - -> ChunkInfo - -> ChunkNo -- ^ Current chunk - -> m (Index m blk h) + forall m blk h. + (IOLike m, ConvertRawHash blk, StandardHash blk, Typeable blk) => + HasFS m h -> + ResourceRegistry m -> + Tracer m TraceCacheEvent -> + CacheConfig -> + ChunkInfo -> + -- | Current chunk + ChunkNo -> + m (Index m blk h) cachedIndex hasFS registry tracer cacheConfig chunkInfo chunk = do - cacheEnv <- Cache.newEnv - hasFS - registry - tracer - cacheConfig - chunkInfo - chunk - return Index - { readOffsets = Cache.readOffsets cacheEnv + cacheEnv <- + Cache.newEnv + hasFS + registry + tracer + cacheConfig + chunkInfo + chunk + return + Index + { readOffsets = Cache.readOffsets cacheEnv , readFirstFilledSlot = Cache.readFirstFilledSlot cacheEnv - , openPrimaryIndex = Cache.openPrimaryIndex cacheEnv - , appendOffsets = Cache.appendOffsets cacheEnv - , readEntries = Cache.readEntries cacheEnv - , readAllEntries = Cache.readAllEntries cacheEnv - , appendEntry = Cache.appendEntry cacheEnv - , close = Cache.close cacheEnv - , restart = Cache.restart cacheEnv + , openPrimaryIndex = Cache.openPrimaryIndex cacheEnv + , appendOffsets = Cache.appendOffsets cacheEnv + , readEntries = Cache.readEntries cacheEnv + , readAllEntries = Cache.readAllEntries cacheEnv + , appendEntry = Cache.appendEntry cacheEnv + , close = Cache.close cacheEnv + , restart = Cache.restart cacheEnv } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Cache.hs 
b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Cache.hs index ae55a1e18b..4c7df1b329 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Cache.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Cache.hs @@ -12,71 +12,90 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Cache ( - -- * Environment +module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Cache + ( -- * Environment CacheConfig (..) , CacheEnv , checkInvariants , newEnv + -- * Background thread , expireUnusedChunks + -- * Operations , close , restart + -- ** On the primary index , appendOffsets , openPrimaryIndex , readFirstFilledSlot , readOffsets + -- ** On the secondary index , appendEntry , readAllEntries , readEntries ) where -import Cardano.Prelude (forceElemsToWHNF) -import Control.Exception (assert) -import Control.Monad (forM, forM_, forever, unless, void, when) -import Control.Monad.Except (throwError) -import Control.ResourceRegistry -import Control.Tracer (Tracer, traceWith) -import Data.Foldable (toList) -import Data.Functor ((<&>)) -import Data.IntPSQ (IntPSQ) -import qualified Data.IntPSQ as PSQ -import Data.Maybe (fromMaybe) -import Data.Proxy (Proxy (..)) -import Data.Sequence.Strict (StrictSeq) -import qualified Data.Sequence.Strict as Seq -import Data.Typeable (Typeable) -import Data.Vector (Vector) -import qualified Data.Vector as Vector -import Data.Void (Void) -import Data.Word (Word32, Word64) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block (ConvertRawHash, IsEBB (..), - StandardHash) -import Ouroboros.Consensus.Storage.ImmutableDB.API - (UnexpectedFailure (..), throwUnexpectedFailure) -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout -import 
Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary - (PrimaryIndex, SecondaryOffset) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary as Primary -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary - (BlockSize (..)) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary as Secondary -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types - (TraceCacheEvent (..), WithBlockSize (..)) -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util - (fsPathChunkFile, fsPathPrimaryIndexFile, - fsPathSecondaryIndexFile) -import Ouroboros.Consensus.Util (takeUntil, whenJust) -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import System.FS.API (HasFS (..), withFile) -import System.FS.API.Types (AllowExisting (..), Handle, - OpenMode (ReadMode)) +import Cardano.Prelude (forceElemsToWHNF) +import Control.Exception (assert) +import Control.Monad (forM, forM_, forever, unless, void, when) +import Control.Monad.Except (throwError) +import Control.ResourceRegistry +import Control.Tracer (Tracer, traceWith) +import Data.Foldable (toList) +import Data.Functor ((<&>)) +import Data.IntPSQ (IntPSQ) +import Data.IntPSQ qualified as PSQ +import Data.Maybe (fromMaybe) +import Data.Proxy (Proxy (..)) +import Data.Sequence.Strict (StrictSeq) +import Data.Sequence.Strict qualified as Seq +import Data.Typeable (Typeable) +import Data.Vector (Vector) +import Data.Vector qualified as Vector +import Data.Void (Void) +import Data.Word (Word32, Word64) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block + ( ConvertRawHash + , IsEBB (..) + , StandardHash + ) +import Ouroboros.Consensus.Storage.ImmutableDB.API + ( UnexpectedFailure (..) 
+ , throwUnexpectedFailure
+ )
+import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal
+import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary
+ ( PrimaryIndex
+ , SecondaryOffset
+ )
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary qualified as Primary
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary
+ ( BlockSize (..)
+ )
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary qualified as Secondary
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types
+ ( TraceCacheEvent (..)
+ , WithBlockSize (..)
+ )
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util
+ ( fsPathChunkFile
+ , fsPathPrimaryIndexFile
+ , fsPathSecondaryIndexFile
+ )
+import Ouroboros.Consensus.Util (takeUntil, whenJust)
+import Ouroboros.Consensus.Util.CallStack
+import Ouroboros.Consensus.Util.IOLike
+import System.FS.API (HasFS (..), withFile)
+import System.FS.API.Types
+ ( AllowExisting (..)
+ , Handle
+ , OpenMode (ReadMode)
+ )

-- TODO property and/or q-s-m tests comparing with 'fileBackedIndex'

@@ -86,12 +105,12 @@ import System.FS.API.Types (AllowExisting (..), Handle,

data CacheConfig = CacheConfig
 { pastChunksToCache :: Word32
- -- ^ Maximum number of past chunks to cache, excluding the current chunk.
- --
- -- NOTE: must be > 0
+ -- ^ Maximum number of past chunks to cache, excluding the current chunk.
+ --
+ -- NOTE: must be > 0
 , expireUnusedAfter :: DiffTime
- -- ^ Expire past chunks that haven't been used for 'expireUnusedAfter'
- -- from the cache, regardless the number of past chunks in the cache.
+ -- ^ Expire past chunks that haven't been used for 'expireUnusedAfter'
+ -- from the cache, regardless of the number of past chunks in the cache.
}
 deriving (Eq, Show)

@@ -103,18 +122,19 @@ type Entry blk = WithBlockSize (Secondary.Entry blk)

-- We use sequences (as opposed to vectors) to allow for efficient appending
-- in addition to (reasonably) efficient indexing.
data CurrentChunkInfo blk = CurrentChunkInfo
- { currentChunkNo :: !ChunkNo
+ { currentChunkNo :: !ChunkNo
 , currentChunkOffsets :: !(StrictSeq SecondaryOffset)
 , currentChunkEntries :: !(StrictSeq (Entry blk))
 }
 deriving (Show, Generic, NoThunks)

emptyCurrentChunkInfo :: ChunkNo -> CurrentChunkInfo blk
-emptyCurrentChunkInfo chunk = CurrentChunkInfo
- { currentChunkNo = chunk
- , currentChunkOffsets = Seq.singleton 0
- , currentChunkEntries = Seq.empty
- }
+emptyCurrentChunkInfo chunk =
+ CurrentChunkInfo
+ { currentChunkNo = chunk
+ , currentChunkOffsets = Seq.singleton 0
+ , currentChunkEntries = Seq.empty
+ }

-- | Convert a 'CurrentChunkInfo' to a 'PastChunkInfo'
--
@@ -123,14 +143,14 @@ emptyCurrentChunkInfo chunk = CurrentChunkInfo
-- just leave in memory as seqs?
toPastChunkInfo :: CurrentChunkInfo blk -> PastChunkInfo blk
toPastChunkInfo CurrentChunkInfo{..} =
- PastChunkInfo
- { pastChunkOffsets =
- fromMaybe (error "invalid current chunk") $
+ PastChunkInfo
+ { pastChunkOffsets =
+ fromMaybe (error "invalid current chunk") $
 Primary.mk currentChunkNo (toList currentChunkOffsets)
- , pastChunkEntries =
- -- TODO optimise this
- Vector.fromList $ toList currentChunkEntries
- }
+ , pastChunkEntries =
+ -- TODO optimise this
+ Vector.fromList $ toList currentChunkEntries
+ }

-- | The cached primary and secondary indices of an chunk in the past.
--
@@ -151,69 +171,77 @@ newtype LastUsed = LastUsed Time

-- | The data stored in the cache.
data Cached blk = Cached
- { currentChunk :: !ChunkNo
- -- ^ The current chunk of the ImmutableDB, i.e., the chunk we're still
- -- appending entries too.
+ { currentChunk :: !ChunkNo
+ -- ^ The current chunk of the ImmutableDB, i.e., the chunk we're still
+ -- appending entries to.
, currentChunkInfo :: !(CurrentChunkInfo blk) - -- ^ We always cache the current chunk. - -- - -- When clients are in sync with our chain, they will only request blocks - -- from the current chunk, so it is worth optimising this case. - -- Additionally, by appending to the current chunk through the cache, we - -- are sure the current chunk info is never stale. - -- - -- We use an 'IntPSQ' here, where the keys are in fact chunk numbers. Since - -- chunk numbers are internally represented by a 'Word64', one might be worried - -- about a potential overflow here. While possible, it's not worth worrying about: - -- - Whilst guaranteed to be only at least 30 bits, in practice, 64-bit GHC has 64-bit - -- integers, so the conversion is bijective. - -- - An chunk currently lasts around a week. Systems using a smaller representation - -- might need to worry in a million years or so. - -- - In the event of running for a million years, we're unlikely to have a problem anyway, - -- since we only really cache _recent_ chunks. So the fact that they clash with the - -- chunks from a million years ago isn't likely to be an issue. - , pastChunksInfo :: !(IntPSQ LastUsed (PastChunkInfo blk)) - -- ^ Cached chunks from the past. - -- - -- A LRU-cache (least recently used). Whenever a we get a cache hit - -- ('getChunkInfo') for a past chunk, we change its 'LastUsed' priority to - -- the current time. When the cache is full, see 'pastChunksToCache', we - -- will remove the chunk with the lowest priority, i.e. the least recently - -- used past chunk. - -- - -- INVARIANT: all past chunks are < 'currentChunk' - -- - -- INVARIANT: @'PSQ.size' 'pastChunksInfo' <= 'pastChunksToCache'@ - , nbPastChunks :: !Word32 - -- ^ Cached size of 'pastChunksInfo', as an 'IntPSQ' only provides a \(O(n) - -- \) 'PSQ.size' operation. - -- - -- INVARIANT: 'nbPastChunks' == @'PSQ.size' 'pastChunksInfo'@ + -- ^ We always cache the current chunk. 
+ --
+ -- When clients are in sync with our chain, they will only request blocks
+ -- from the current chunk, so it is worth optimising this case.
+ -- Additionally, by appending to the current chunk through the cache, we
+ -- are sure the current chunk info is never stale.
+ --
+ -- We use an 'IntPSQ' here, where the keys are in fact chunk numbers. Since
+ -- chunk numbers are internally represented by a 'Word64', one might be worried
+ -- about a potential overflow here. While possible, it's not worth worrying about:
+ -- - Whilst guaranteed to be only at least 30 bits, in practice, 64-bit GHC has 64-bit
+ -- integers, so the conversion is bijective.
+ -- - A chunk currently lasts around a week. Systems using a smaller representation
+ -- might need to worry in a million years or so.
+ -- - In the event of running for a million years, we're unlikely to have a problem anyway,
+ -- since we only really cache _recent_ chunks. So the fact that they clash with the
+ -- chunks from a million years ago isn't likely to be an issue.
+ , pastChunksInfo :: !(IntPSQ LastUsed (PastChunkInfo blk))
+ -- ^ Cached chunks from the past.
+ --
+ -- An LRU-cache (least recently used). Whenever we get a cache hit
+ -- ('getChunkInfo') for a past chunk, we change its 'LastUsed' priority to
+ -- the current time. When the cache is full (see 'pastChunksToCache'), we
+ -- will remove the chunk with the lowest priority, i.e. the least recently
+ -- used past chunk.
+ --
+ -- INVARIANT: all past chunks are < 'currentChunk'
+ --
+ -- INVARIANT: @'PSQ.size' 'pastChunksInfo' <= 'pastChunksToCache'@
+ , nbPastChunks :: !Word32
+ -- ^ Cached size of 'pastChunksInfo', as an 'IntPSQ' only provides a \(O(n)
+ -- \) 'PSQ.size' operation.
+ -- + -- INVARIANT: 'nbPastChunks' == @'PSQ.size' 'pastChunksInfo'@ } deriving (Generic, NoThunks) checkInvariants :: - Word32 -- ^ Maximum number of past chunks to cache - -> Cached blk - -> Maybe String -checkInvariants pastChunksToCache Cached {..} = either Just (const Nothing) $ do - forM_ (PSQ.keys pastChunksInfo) $ \pastChunk -> - unless (pastChunk < chunkNoToInt currentChunk) $ - throwError $ - "past chunk (" <> show pastChunk <> ") >= current chunk (" <> - show currentChunk <> ")" - - unless (PSQ.size pastChunksInfo <= fromIntegral pastChunksToCache) $ + -- | Maximum number of past chunks to cache + Word32 -> + Cached blk -> + Maybe String +checkInvariants pastChunksToCache Cached{..} = either Just (const Nothing) $ do + forM_ (PSQ.keys pastChunksInfo) $ \pastChunk -> + unless (pastChunk < chunkNoToInt currentChunk) $ throwError $ - "PSQ.size pastChunksInfo (" <> show (PSQ.size pastChunksInfo) <> - ") > pastChunksToCache (" <> show pastChunksToCache <> ")" - - unless (nbPastChunks == fromIntegral (PSQ.size pastChunksInfo)) $ - throwError $ - "nbPastChunks (" <> show nbPastChunks <> - ") /= PSQ.size pastChunksInfo (" <> show (PSQ.size pastChunksInfo) <> - ")" - + "past chunk (" + <> show pastChunk + <> ") >= current chunk (" + <> show currentChunk + <> ")" + + unless (PSQ.size pastChunksInfo <= fromIntegral pastChunksToCache) $ + throwError $ + "PSQ.size pastChunksInfo (" + <> show (PSQ.size pastChunksInfo) + <> ") > pastChunksToCache (" + <> show pastChunksToCache + <> ")" + + unless (nbPastChunks == fromIntegral (PSQ.size pastChunksInfo)) $ + throwError $ + "nbPastChunks (" + <> show nbPastChunks + <> ") /= PSQ.size pastChunksInfo (" + <> show (PSQ.size pastChunksInfo) + <> ")" -- | Store the 'PastChunkInfo' for the given 'ChunkNo' in 'Cached'. -- @@ -223,34 +251,34 @@ checkInvariants pastChunksToCache Cached {..} = either Just (const Nothing) $ do -- -- PRECONDITION: the given 'ChunkNo' is < the 'currentChunk'. 
addPastChunkInfo :: - ChunkNo - -> LastUsed - -> PastChunkInfo blk - -> Cached blk - -> Cached blk + ChunkNo -> + LastUsed -> + PastChunkInfo blk -> + Cached blk -> + Cached blk addPastChunkInfo chunk lastUsed pastChunkInfo cached = - assert (chunk < currentChunk cached) $ + assert (chunk < currentChunk cached) $ -- NOTE: in case of multiple concurrent cache misses of the same chunk, -- we might add the same past chunk multiple times to the cache. This -- means the following cannot be a precondition: -- assert (not (PSQ.member chunk pastChunksInfo)) $ cached { pastChunksInfo = pastChunksInfo' - , nbPastChunks = nbPastChunks' + , nbPastChunks = nbPastChunks' } - where - Cached { pastChunksInfo, nbPastChunks } = cached + where + Cached{pastChunksInfo, nbPastChunks} = cached - -- In case of multiple concurrent cache misses of the same chunk, the - -- chunk might already be in there. - (mbAlreadyPresent, pastChunksInfo') = - PSQ.insertView (chunkNoToInt chunk) lastUsed pastChunkInfo pastChunksInfo + -- In case of multiple concurrent cache misses of the same chunk, the + -- chunk might already be in there. + (mbAlreadyPresent, pastChunksInfo') = + PSQ.insertView (chunkNoToInt chunk) lastUsed pastChunkInfo pastChunksInfo - nbPastChunks' - | Just _ <- mbAlreadyPresent - = nbPastChunks - | otherwise - = succ nbPastChunks + nbPastChunks' + | Just _ <- mbAlreadyPresent = + nbPastChunks + | otherwise = + succ nbPastChunks -- | Remove the least recently used past chunk from the cache when 'Cached' -- contains more chunks than the given maximum. @@ -261,104 +289,115 @@ addPastChunkInfo chunk lastUsed pastChunkInfo cached = -- -- If a past chunk was evicted, its chunk number is returned. 
evictIfNecessary :: - Word32 -- ^ Maximum number of past chunks to cache - -> Cached blk - -> (Cached blk, Maybe ChunkNo) + -- | Maximum number of past chunks to cache + Word32 -> + Cached blk -> + (Cached blk, Maybe ChunkNo) evictIfNecessary maxNbPastChunks cached - | nbPastChunks > maxNbPastChunks - = assert (nbPastChunks == maxNbPastChunks + 1) $ - case PSQ.minView pastChunksInfo of - Nothing -> error - "nbPastChunks > maxNbPastChunks but pastChunksInfo was empty" - Just (chunkNo, _p, _v, pastChunksInfo') -> + | nbPastChunks > maxNbPastChunks = + assert (nbPastChunks == maxNbPastChunks + 1) $ + case PSQ.minView pastChunksInfo of + Nothing -> + error + "nbPastChunks > maxNbPastChunks but pastChunksInfo was empty" + Just (chunkNo, _p, _v, pastChunksInfo') -> (cached', Just $ chunkNoFromInt chunkNo) - where - cached' = cached - { nbPastChunks = maxNbPastChunks - , pastChunksInfo = pastChunksInfo' - } - | otherwise - = (cached, Nothing) - where - Cached { nbPastChunks, pastChunksInfo } = cached + where + cached' = + cached + { nbPastChunks = maxNbPastChunks + , pastChunksInfo = pastChunksInfo' + } + | otherwise = + (cached, Nothing) + where + Cached{nbPastChunks, pastChunksInfo} = cached -- NOTE: we must inline 'evictIfNecessary' otherwise we get unexplained thunks -- in 'Cached' and thus a space leak. Alternatively, we could disable the -- @-fstrictness@ optimisation (enabled by default for -O1). 
{-# INLINE evictIfNecessary #-} lookupPastChunkInfo :: - ChunkNo - -> LastUsed - -> Cached blk - -> Maybe (PastChunkInfo blk, Cached blk) -lookupPastChunkInfo chunk lastUsed cached@Cached { pastChunksInfo } = - case PSQ.alter lookupAndUpdateLastUsed (chunkNoToInt chunk) pastChunksInfo of - (Nothing, _) -> Nothing - (Just pastChunkInfo, pastChunksInfo') -> Just (pastChunkInfo, cached') - where - cached' = cached { pastChunksInfo = pastChunksInfo' } - where - lookupAndUpdateLastUsed - :: Maybe (LastUsed, PastChunkInfo blk) - -> (Maybe (PastChunkInfo blk), Maybe (LastUsed, PastChunkInfo blk)) - lookupAndUpdateLastUsed = \case - Nothing -> (Nothing, Nothing) - Just (_lastUsed, info) -> (Just info, Just (lastUsed, info)) + ChunkNo -> + LastUsed -> + Cached blk -> + Maybe (PastChunkInfo blk, Cached blk) +lookupPastChunkInfo chunk lastUsed cached@Cached{pastChunksInfo} = + case PSQ.alter lookupAndUpdateLastUsed (chunkNoToInt chunk) pastChunksInfo of + (Nothing, _) -> Nothing + (Just pastChunkInfo, pastChunksInfo') -> Just (pastChunkInfo, cached') + where + cached' = cached{pastChunksInfo = pastChunksInfo'} + where + lookupAndUpdateLastUsed :: + Maybe (LastUsed, PastChunkInfo blk) -> + (Maybe (PastChunkInfo blk), Maybe (LastUsed, PastChunkInfo blk)) + lookupAndUpdateLastUsed = \case + Nothing -> (Nothing, Nothing) + Just (_lastUsed, info) -> (Just info, Just (lastUsed, info)) openChunk :: - ChunkNo - -> LastUsed - -> CurrentChunkInfo blk - -> Cached blk - -> Cached blk + ChunkNo -> + LastUsed -> + CurrentChunkInfo blk -> + Cached blk -> + Cached blk openChunk chunk lastUsed newCurrentChunkInfo cached - | currentChunk == chunk - = cached - { currentChunkInfo = newCurrentChunkInfo } - - | nextChunkNo currentChunk == chunk - = Cached - { currentChunk = chunk + | currentChunk == chunk = + cached + { currentChunkInfo = newCurrentChunkInfo + } + | nextChunkNo currentChunk == chunk = + Cached + { currentChunk = chunk , currentChunkInfo = newCurrentChunkInfo - -- We use 'lastUsed' 
for the current chunk that has now become a + , -- We use 'lastUsed' for the current chunk that has now become a -- "past" chunk, which means that that chunk is most recently used -- one. When clients are roughly in sync with us, when we switch to a -- new chunk, they might still request blocks from the previous one. -- So to avoid throwing away that cached information, we give it the -- highest priority. - , pastChunksInfo = PSQ.insert (chunkNoToInt currentChunk) lastUsed - (toPastChunkInfo currentChunkInfo) pastChunksInfo - , nbPastChunks = succ nbPastChunks + pastChunksInfo = + PSQ.insert + (chunkNoToInt currentChunk) + lastUsed + (toPastChunkInfo currentChunkInfo) + pastChunksInfo + , nbPastChunks = succ nbPastChunks } - - | otherwise - = error $ "Going from chunk " <> show currentChunk <> " to " <> show chunk - where - Cached - { currentChunk, currentChunkInfo, pastChunksInfo, nbPastChunks - } = cached + | otherwise = + error $ "Going from chunk " <> show currentChunk <> " to " <> show chunk + where + Cached + { currentChunk + , currentChunkInfo + , pastChunksInfo + , nbPastChunks + } = cached emptyCached :: - ChunkNo -- ^ The current chunk - -> CurrentChunkInfo blk - -> Cached blk -emptyCached currentChunk currentChunkInfo = Cached + -- | The current chunk + ChunkNo -> + CurrentChunkInfo blk -> + Cached blk +emptyCached currentChunk currentChunkInfo = + Cached { currentChunk , currentChunkInfo , pastChunksInfo = PSQ.empty - , nbPastChunks = 0 + , nbPastChunks = 0 } -- | Environment used by functions operating on the cached index. 
data CacheEnv m blk h = CacheEnv - { hasFS :: HasFS m h - , registry :: ResourceRegistry m - , tracer :: Tracer m TraceCacheEvent - , cacheVar :: StrictMVar m (Cached blk) + { hasFS :: HasFS m h + , registry :: ResourceRegistry m + , tracer :: Tracer m TraceCacheEvent + , cacheVar :: StrictMVar m (Cached blk) , cacheConfig :: CacheConfig , bgThreadVar :: StrictMVar m (Maybe (Thread m Void)) - -- ^ Nothing if no thread running - , chunkInfo :: ChunkInfo + -- ^ Nothing if no thread running + , chunkInfo :: ChunkInfo } -- | Creates a new 'CacheEnv' and launches a background thread that expires @@ -366,39 +405,41 @@ data CacheEnv m blk h = CacheEnv -- -- PRECONDITION: 'pastChunksToCache' (in 'CacheConfig') > 0 newEnv :: - ( HasCallStack - , ConvertRawHash blk - , IOLike m - , StandardHash blk - , Typeable blk - ) - => HasFS m h - -> ResourceRegistry m - -> Tracer m TraceCacheEvent - -> CacheConfig - -> ChunkInfo - -> ChunkNo -- ^ Current chunk - -> m (CacheEnv m blk h) + ( HasCallStack + , ConvertRawHash blk + , IOLike m + , StandardHash blk + , Typeable blk + ) => + HasFS m h -> + ResourceRegistry m -> + Tracer m TraceCacheEvent -> + CacheConfig -> + ChunkInfo -> + -- | Current chunk + ChunkNo -> + m (CacheEnv m blk h) newEnv hasFS registry tracer cacheConfig chunkInfo chunk = do - when (pastChunksToCache == 0) $ - error "pastChunksToCache must be > 0" - - currentChunkInfo <- loadCurrentChunkInfo hasFS chunkInfo chunk - cacheVar <- newMVarWithInvariants $ emptyCached chunk currentChunkInfo - bgThreadVar <- newMVar Nothing - let cacheEnv = CacheEnv {..} - mask_ $ modifyMVar_ bgThreadVar $ \_mustBeNothing -> do - !bgThread <- forkLinkedThread registry "ImmutableDB.expireUnusedChunks" $ + when (pastChunksToCache == 0) $ + error "pastChunksToCache must be > 0" + + currentChunkInfo <- loadCurrentChunkInfo hasFS chunkInfo chunk + cacheVar <- newMVarWithInvariants $ emptyCached chunk currentChunkInfo + bgThreadVar <- newMVar Nothing + let cacheEnv = CacheEnv{..} + mask_ $ 
modifyMVar_ bgThreadVar $ \_mustBeNothing -> do + !bgThread <- + forkLinkedThread registry "ImmutableDB.expireUnusedChunks" $ expireUnusedChunks cacheEnv - return $ Just bgThread - return cacheEnv - where - CacheConfig { pastChunksToCache } = cacheConfig + return $ Just bgThread + return cacheEnv + where + CacheConfig{pastChunksToCache} = cacheConfig - -- When checking invariants, check both our invariants and for thunks. - -- Note that this is only done when the corresponding flag is enabled. - newMVarWithInvariants = - newMVarWithInvariant $ checkInvariants pastChunksToCache + -- When checking invariants, check both our invariants and for thunks. + -- Note that this is only done when the corresponding flag is enabled. + newMVarWithInvariants = + newMVarWithInvariant $ checkInvariants pastChunksToCache {------------------------------------------------------------------------------ Background thread @@ -409,211 +450,228 @@ newEnv hasFS registry tracer cacheConfig chunkInfo chunk = do -- Will expire past chunks that haven't been used for 'expireUnusedAfter' from -- the cache. expireUnusedChunks :: - (HasCallStack, IOLike m) - => CacheEnv m blk h - -> m Void -expireUnusedChunks CacheEnv { cacheVar, cacheConfig, tracer } = - forever $ do - now <- getMonotonicTime - mbTraceMsg <- modifyMVar cacheVar $ pure . garbageCollect now - mapM_ (traceWith tracer) mbTraceMsg - threadDelay expireUnusedAfter - where - CacheConfig { expireUnusedAfter } = cacheConfig - - -- | Remove the least recently used past chunk from 'Cached' /if/ it - -- hasn't been used for 'expireUnusedAfter', otherwise the original - -- 'Cached' is returned. - -- - -- In case a 'TracePastChunksExpired' event should be traced, it is - -- returned as a 'Just'. 
- garbageCollect - :: Time - -> Cached blk - -> (Cached blk, Maybe TraceCacheEvent) - garbageCollect now cached@Cached { pastChunksInfo, nbPastChunks } = - case expiredPastChunks of - [] -> (cached, Nothing) - _ -> (cached', Just traceMsg) - where - -- Every past chunk last used before (or at) this time, must be - -- expired. - expiredLastUsedTime :: LastUsed - expiredLastUsedTime = LastUsed $ - Time (now `diffTime` Time expireUnusedAfter) - - (expiredPastChunks, pastChunksInfo') = - PSQ.atMostView expiredLastUsedTime pastChunksInfo - - nbPastChunks' = nbPastChunks - fromIntegral (length expiredPastChunks) - - cached' = cached - { pastChunksInfo = pastChunksInfo' - , nbPastChunks = nbPastChunks' - } + (HasCallStack, IOLike m) => + CacheEnv m blk h -> + m Void +expireUnusedChunks CacheEnv{cacheVar, cacheConfig, tracer} = + forever $ do + now <- getMonotonicTime + mbTraceMsg <- modifyMVar cacheVar $ pure . garbageCollect now + mapM_ (traceWith tracer) mbTraceMsg + threadDelay expireUnusedAfter + where + CacheConfig{expireUnusedAfter} = cacheConfig + + -- \| Remove the least recently used past chunk from 'Cached' /if/ it + -- hasn't been used for 'expireUnusedAfter', otherwise the original + -- 'Cached' is returned. + -- + -- In case a 'TracePastChunksExpired' event should be traced, it is + -- returned as a 'Just'. + garbageCollect :: + Time -> + Cached blk -> + (Cached blk, Maybe TraceCacheEvent) + garbageCollect now cached@Cached{pastChunksInfo, nbPastChunks} = + case expiredPastChunks of + [] -> (cached, Nothing) + _ -> (cached', Just traceMsg) + where + -- Every past chunk last used before (or at) this time, must be + -- expired. 
+ expiredLastUsedTime :: LastUsed
+ expiredLastUsedTime =
+ LastUsed $
+ Time (now `diffTime` Time expireUnusedAfter)
+
+ (expiredPastChunks, pastChunksInfo') =
+ PSQ.atMostView expiredLastUsedTime pastChunksInfo
+
+ nbPastChunks' = nbPastChunks - fromIntegral (length expiredPastChunks)
+
+ cached' =
+ cached
+ { pastChunksInfo = pastChunksInfo'
+ , nbPastChunks = nbPastChunks'
+ }
- !traceMsg = TracePastChunksExpired
- -- Force this list, otherwise the traced message holds onto to the
- -- past chunk indices.
- (forceElemsToWHNF
+ !traceMsg =
+ TracePastChunksExpired
+ -- Force this list, otherwise the traced message holds on to the
+ -- past chunk indices.
+ ( forceElemsToWHNF
[ chunkNoFromInt $ chunk
| (chunk, _, _) <- expiredPastChunks
- ])
- nbPastChunks'
+ ]
+ )
+ nbPastChunks'

{------------------------------------------------------------------------------
Reading indices
------------------------------------------------------------------------------}

readPrimaryIndex ::
- (HasCallStack, IOLike m, Typeable blk, StandardHash blk)
- => Proxy blk
- -> HasFS m h
- -> ChunkInfo
- -> ChunkNo
- -> m (PrimaryIndex, IsEBB)
- -- ^ The primary index and whether it starts with an EBB or not
+ (HasCallStack, IOLike m, Typeable blk, StandardHash blk) =>
+ Proxy blk ->
+ HasFS m h ->
+ ChunkInfo ->
+ ChunkNo ->
+ -- | The primary index and whether it starts with an EBB or not
+ m (PrimaryIndex, IsEBB)
readPrimaryIndex pb hasFS chunkInfo chunk = do
- primaryIndex <- Primary.load pb hasFS chunk
- let firstIsEBB
- | Primary.containsSlot primaryIndex firstRelativeSlot
- , Primary.isFilledSlot primaryIndex firstRelativeSlot
- = relativeSlotIsEBB firstRelativeSlot
- | otherwise
- = IsNotEBB
- return (primaryIndex, firstIsEBB)
- where
- firstRelativeSlot :: RelativeSlot
- firstRelativeSlot = firstBlockOrEBB chunkInfo chunk
+ primaryIndex <- Primary.load pb hasFS chunk
+ let firstIsEBB
+ | Primary.containsSlot primaryIndex firstRelativeSlot
+ , Primary.isFilledSlot primaryIndex firstRelativeSlot =
+ relativeSlotIsEBB firstRelativeSlot
+ | otherwise =
+ IsNotEBB
+ return (primaryIndex, firstIsEBB)
+ where
+ firstRelativeSlot :: RelativeSlot
+ firstRelativeSlot = firstBlockOrEBB chunkInfo chunk

readSecondaryIndex ::
- ( HasCallStack
- , ConvertRawHash blk
- , IOLike m
- , StandardHash blk
- , Typeable blk
- )
- => HasFS m h
- -> ChunkNo
- -> IsEBB
- -> m [Entry blk]
-readSecondaryIndex hasFS@HasFS { hGetSize } chunk firstIsEBB = do
- !chunkFileSize <- withFile hasFS chunkFile ReadMode hGetSize
- Secondary.readAllEntries hasFS secondaryOffset
- chunk stopCondition chunkFileSize firstIsEBB
- where
- chunkFile = fsPathChunkFile chunk
- -- Read from the start
- secondaryOffset = 0
- -- Don't stop until the end
- stopCondition = const False
+ ( HasCallStack
+ , ConvertRawHash blk
+ , IOLike m
+ , StandardHash blk
+ , Typeable blk
+ ) =>
+ HasFS m h ->
+ ChunkNo ->
+ IsEBB ->
+ m [Entry blk]
+readSecondaryIndex hasFS@HasFS{hGetSize} chunk firstIsEBB = do
+ !chunkFileSize <- withFile hasFS chunkFile ReadMode hGetSize
+ Secondary.readAllEntries
+ hasFS
+ secondaryOffset
+ chunk
+ stopCondition
+ chunkFileSize
+ firstIsEBB
+ where
+ chunkFile = fsPathChunkFile chunk
+ -- Read from the start
+ secondaryOffset = 0
+ -- Don't stop until the end
+ stopCondition = const False

loadCurrentChunkInfo ::
- forall m h blk.
- ( HasCallStack
- , ConvertRawHash blk
- , IOLike m
- , StandardHash blk
- , Typeable blk
- )
- => HasFS m h
- -> ChunkInfo
- -> ChunkNo
- -> m (CurrentChunkInfo blk)
+ forall m h blk.
+ ( HasCallStack + , ConvertRawHash blk + , IOLike m + , StandardHash blk + , Typeable blk + ) => + HasFS m h -> + ChunkInfo -> + ChunkNo -> + m (CurrentChunkInfo blk) loadCurrentChunkInfo hasFS chunkInfo chunk = do - -- We're assuming that when the primary index file exists, the secondary - -- index file will also exist - chunkExists <- doesFileExist hasFS primaryIndexFile - if chunkExists then do + -- We're assuming that when the primary index file exists, the secondary + -- index file will also exist + chunkExists <- doesFileExist hasFS primaryIndexFile + if chunkExists + then do (primaryIndex, firstIsEBB) <- readPrimaryIndex (Proxy @blk) hasFS chunkInfo chunk entries <- readSecondaryIndex hasFS chunk firstIsEBB - return CurrentChunkInfo - { currentChunkNo = chunk - , currentChunkOffsets = - -- TODO optimise this - Seq.fromList . Primary.toSecondaryOffsets $ primaryIndex - , currentChunkEntries = Seq.fromList entries - } + return + CurrentChunkInfo + { currentChunkNo = chunk + , currentChunkOffsets = + -- TODO optimise this + Seq.fromList . Primary.toSecondaryOffsets $ primaryIndex + , currentChunkEntries = Seq.fromList entries + } else return $ emptyCurrentChunkInfo chunk - where - primaryIndexFile = fsPathPrimaryIndexFile chunk + where + primaryIndexFile = fsPathPrimaryIndexFile chunk loadPastChunkInfo :: - forall blk m h. - ( HasCallStack - , ConvertRawHash blk - , IOLike m - , StandardHash blk - , Typeable blk - ) - => HasFS m h - -> ChunkInfo - -> ChunkNo - -> m (PastChunkInfo blk) + forall blk m h. 
+ ( HasCallStack + , ConvertRawHash blk + , IOLike m + , StandardHash blk + , Typeable blk + ) => + HasFS m h -> + ChunkInfo -> + ChunkNo -> + m (PastChunkInfo blk) loadPastChunkInfo hasFS chunkInfo chunk = do - (primaryIndex, firstIsEBB) <- readPrimaryIndex (Proxy @blk) hasFS chunkInfo chunk - entries <- readSecondaryIndex hasFS chunk firstIsEBB - return PastChunkInfo + (primaryIndex, firstIsEBB) <- readPrimaryIndex (Proxy @blk) hasFS chunkInfo chunk + entries <- readSecondaryIndex hasFS chunk firstIsEBB + return + PastChunkInfo { pastChunkOffsets = primaryIndex , pastChunkEntries = Vector.fromList $ forceElemsToWHNF entries } getChunkInfo :: - forall m blk h. - ( HasCallStack - , ConvertRawHash blk - , IOLike m - , StandardHash blk - , Typeable blk - ) - => CacheEnv m blk h - -> ChunkNo - -> m (Either (CurrentChunkInfo blk) (PastChunkInfo blk)) + forall m blk h. + ( HasCallStack + , ConvertRawHash blk + , IOLike m + , StandardHash blk + , Typeable blk + ) => + CacheEnv m blk h -> + ChunkNo -> + m (Either (CurrentChunkInfo blk) (PastChunkInfo blk)) getChunkInfo cacheEnv chunk = do - lastUsed <- LastUsed <$> getMonotonicTime - -- Make sure we don't leave an empty MVar in case of an exception. - (mbCacheHit, tr) <- modifyMVar cacheVar $ - \cached@Cached { currentChunk, currentChunkInfo, nbPastChunks } -> if + lastUsed <- LastUsed <$> getMonotonicTime + -- Make sure we don't leave an empty MVar in case of an exception. 
+ (mbCacheHit, tr) <- modifyMVar cacheVar $
+ \cached@Cached{currentChunk, currentChunkInfo, nbPastChunks} ->
+ if
| chunk == currentChunk -> do
- -- Cache hit for the current chunk
- return ( cached
- , (Just $ Left currentChunkInfo, TraceCurrentChunkHit chunk nbPastChunks)
- )
+ -- Cache hit for the current chunk
+ return
+ ( cached
+ , (Just $ Left currentChunkInfo, TraceCurrentChunkHit chunk nbPastChunks)
+ )
| Just (pastChunkInfo, cached') <- lookupPastChunkInfo chunk lastUsed cached -> do
- -- Cache hit for an chunk in the past
- return ( cached'
- , (Just $ Right pastChunkInfo, TracePastChunkHit chunk nbPastChunks)
- )
+ -- Cache hit for a chunk in the past
+ return
+ ( cached'
+ , (Just $ Right pastChunkInfo, TracePastChunkHit chunk nbPastChunks)
+ )
| otherwise -> do
- -- Cache miss for an chunk in the past. We don't want to hold on to
- -- the 'cacheVar' MVar, blocking all other access to the cace, while
- -- we're reading things from disk, so put it back now and update the
- -- cache afterwards.
- return ( cached
- , (Nothing, TracePastChunkMiss chunk nbPastChunks)
- )
- traceWith tracer tr
- case mbCacheHit of
- Just hit -> return hit
- Nothing -> do
- -- Cache miss, load both entire indices for the chunk from disk.
- pastChunkInfo <- loadPastChunkInfo hasFS chunkInfo chunk
- -- Loading the chunk might have taken some time, so obtain the time
- -- again.
- lastUsed' <- LastUsed <$> getMonotonicTime
- mbEvicted <- modifyMVar cacheVar $
- pure .
- evictIfNecessary pastChunksToCache .
- addPastChunkInfo chunk lastUsed' pastChunkInfo
- whenJust mbEvicted $ \evicted ->
- -- If we had to evict, we are at 'pastChunksToCache'
- traceWith tracer $ TracePastChunkEvict evicted pastChunksToCache
- return $ Right pastChunkInfo
- where
- CacheEnv { hasFS, cacheVar, cacheConfig, tracer, chunkInfo } = cacheEnv
- CacheConfig { pastChunksToCache } = cacheConfig
+ -- Cache miss for a chunk in the past. We don't want to hold on to
+ -- the 'cacheVar' MVar, blocking all other access to the cache, while
+ -- we're reading things from disk, so put it back now and update the
+ -- cache afterwards.
+ return
+ ( cached
+ , (Nothing, TracePastChunkMiss chunk nbPastChunks)
+ )
+ traceWith tracer tr
+ case mbCacheHit of
+ Just hit -> return hit
+ Nothing -> do
+ -- Cache miss, load both entire indices for the chunk from disk.
+ pastChunkInfo <- loadPastChunkInfo hasFS chunkInfo chunk
+ -- Loading the chunk might have taken some time, so obtain the time
+ -- again.
+ lastUsed' <- LastUsed <$> getMonotonicTime
+ mbEvicted <-
+ modifyMVar cacheVar $
+ pure
+ . evictIfNecessary pastChunksToCache
+ . addPastChunkInfo chunk lastUsed' pastChunkInfo
+ whenJust mbEvicted $ \evicted ->
+ -- If we had to evict, we are at 'pastChunksToCache'
+ traceWith tracer $ TracePastChunkEvict evicted pastChunksToCache
+ return $ Right pastChunkInfo
+ where
+ CacheEnv{hasFS, cacheVar, cacheConfig, tracer, chunkInfo} = cacheEnv
+ CacheConfig{pastChunksToCache} = cacheConfig

{------------------------------------------------------------------------------
Operations
@@ -623,10 +681,10 @@ getChunkInfo cacheEnv chunk = do
--
-- This operation is idempotent.
close :: IOLike m => CacheEnv m blk h -> m ()
-close CacheEnv { bgThreadVar } =
- mask_ $ modifyMVar_ bgThreadVar $ \mbBgThread -> do
- mapM_ cancelThread mbBgThread
- return Nothing
+close CacheEnv{bgThreadVar} =
+ mask_ $ modifyMVar_ bgThreadVar $ \mbBgThread -> do
+ mapM_ cancelThread mbBgThread
+ return Nothing

-- | Restarts the background expiration thread, drops all previously cached
-- information, loads the given chunk.
@@ -634,147 +692,153 @@ close CacheEnv { bgThreadVar } =
-- PRECONDITION: the background thread expiring unused past chunks must have
-- been terminated.
restart :: - (ConvertRawHash blk, IOLike m, StandardHash blk, Typeable blk) - => CacheEnv m blk h - -> ChunkNo -- ^ The new current chunk - -> m () + (ConvertRawHash blk, IOLike m, StandardHash blk, Typeable blk) => + CacheEnv m blk h -> + -- | The new current chunk + ChunkNo -> + m () restart cacheEnv chunk = do - currentChunkInfo <- loadCurrentChunkInfo hasFS chunkInfo chunk - void $ swapMVar cacheVar $ emptyCached chunk currentChunkInfo - mask_ $ modifyMVar_ bgThreadVar $ \mbBgThread -> - case mbBgThread of - Just _ -> throwIO $ userError "background thread still running" - Nothing -> do - !bgThread <- forkLinkedThread registry "ImmutableDB.expireUnusedChunks" $ + currentChunkInfo <- loadCurrentChunkInfo hasFS chunkInfo chunk + void $ swapMVar cacheVar $ emptyCached chunk currentChunkInfo + mask_ $ modifyMVar_ bgThreadVar $ \mbBgThread -> + case mbBgThread of + Just _ -> throwIO $ userError "background thread still running" + Nothing -> do + !bgThread <- + forkLinkedThread registry "ImmutableDB.expireUnusedChunks" $ expireUnusedChunks cacheEnv - return $ Just bgThread - where - CacheEnv { hasFS, registry, cacheVar, bgThreadVar, chunkInfo } = cacheEnv + return $ Just bgThread + where + CacheEnv{hasFS, registry, cacheVar, bgThreadVar, chunkInfo} = cacheEnv {------------------------------------------------------------------------------ On the primary index ------------------------------------------------------------------------------} readOffsets :: - ( HasCallStack - , ConvertRawHash blk - , IOLike m - , StandardHash blk - , Typeable blk - , Traversable t - ) - => CacheEnv m blk h - -> ChunkNo - -> t RelativeSlot - -> m (t (Maybe SecondaryOffset), Maybe (StrictSeq SecondaryOffset)) + ( HasCallStack + , ConvertRawHash blk + , IOLike m + , StandardHash blk + , Typeable blk + , Traversable t + ) => + CacheEnv m blk h -> + ChunkNo -> + t RelativeSlot -> + m (t (Maybe SecondaryOffset), Maybe (StrictSeq SecondaryOffset)) readOffsets cacheEnv chunk relSlots = do - ci <- 
getChunkInfo cacheEnv chunk - pure $ case ci of - Left CurrentChunkInfo { currentChunkOffsets } -> - (getOffsetFromSecondaryOffsets currentChunkOffsets <$> relSlots, Just currentChunkOffsets) - Right PastChunkInfo { pastChunkOffsets } -> - (getOffsetFromPrimaryIndex pastChunkOffsets <$> relSlots, Nothing) - where - getOffsetFromSecondaryOffsets - :: StrictSeq SecondaryOffset - -> RelativeSlot - -> Maybe SecondaryOffset - getOffsetFromSecondaryOffsets offsets relSlot = - let s = assertRelativeSlotInChunk chunk relSlot in - case Seq.splitAt (fromIntegral s + 1) offsets of - (_ Seq.:|> offset, offsetAfter Seq.:<| _) - | offset /= offsetAfter - -- The slot is not empty - -> Just offset - _ -> Nothing - - getOffsetFromPrimaryIndex - :: PrimaryIndex - -> RelativeSlot - -> Maybe SecondaryOffset - getOffsetFromPrimaryIndex index relSlot - | Primary.containsSlot index relSlot - , Primary.isFilledSlot index relSlot - = Just $ Primary.offsetOfSlot index relSlot - | otherwise - = Nothing + ci <- getChunkInfo cacheEnv chunk + pure $ case ci of + Left CurrentChunkInfo{currentChunkOffsets} -> + (getOffsetFromSecondaryOffsets currentChunkOffsets <$> relSlots, Just currentChunkOffsets) + Right PastChunkInfo{pastChunkOffsets} -> + (getOffsetFromPrimaryIndex pastChunkOffsets <$> relSlots, Nothing) + where + getOffsetFromSecondaryOffsets :: + StrictSeq SecondaryOffset -> + RelativeSlot -> + Maybe SecondaryOffset + getOffsetFromSecondaryOffsets offsets relSlot = + let s = assertRelativeSlotInChunk chunk relSlot + in case Seq.splitAt (fromIntegral s + 1) offsets of + (_ Seq.:|> offset, offsetAfter Seq.:<| _) + | offset /= offsetAfter -> + -- The slot is not empty + Just offset + _ -> Nothing + + getOffsetFromPrimaryIndex :: + PrimaryIndex -> + RelativeSlot -> + Maybe SecondaryOffset + getOffsetFromPrimaryIndex index relSlot + | Primary.containsSlot index relSlot + , Primary.isFilledSlot index relSlot = + Just $ Primary.offsetOfSlot index relSlot + | otherwise = + Nothing 
readFirstFilledSlot :: - ( HasCallStack - , ConvertRawHash blk - , IOLike m - , StandardHash blk - , Typeable blk - ) - => CacheEnv m blk h - -> ChunkNo - -> m (Maybe RelativeSlot) + ( HasCallStack + , ConvertRawHash blk + , IOLike m + , StandardHash blk + , Typeable blk + ) => + CacheEnv m blk h -> + ChunkNo -> + m (Maybe RelativeSlot) readFirstFilledSlot cacheEnv chunk = - getChunkInfo cacheEnv chunk <&> \case - Left CurrentChunkInfo { currentChunkOffsets } -> - firstFilledSlotInSeq currentChunkOffsets - Right PastChunkInfo { pastChunkOffsets } -> - Primary.firstFilledSlot chunkInfo pastChunkOffsets - where - CacheEnv { chunkInfo } = cacheEnv - - firstFilledSlotInSeq :: StrictSeq SecondaryOffset -> Maybe RelativeSlot - firstFilledSlotInSeq = fmap indexToRelativeSlot . Seq.findIndexL (/= 0) - where - indexToRelativeSlot :: Int -> RelativeSlot - indexToRelativeSlot = mkRelativeSlot chunkInfo chunk . fromIntegral . pred + getChunkInfo cacheEnv chunk <&> \case + Left CurrentChunkInfo{currentChunkOffsets} -> + firstFilledSlotInSeq currentChunkOffsets + Right PastChunkInfo{pastChunkOffsets} -> + Primary.firstFilledSlot chunkInfo pastChunkOffsets + where + CacheEnv{chunkInfo} = cacheEnv + + firstFilledSlotInSeq :: StrictSeq SecondaryOffset -> Maybe RelativeSlot + firstFilledSlotInSeq = fmap indexToRelativeSlot . Seq.findIndexL (/= 0) + where + indexToRelativeSlot :: Int -> RelativeSlot + indexToRelativeSlot = mkRelativeSlot chunkInfo chunk . fromIntegral . pred -- | This is called when a new chunk is started, which means we need to update -- 'Cached' to reflect this. 
openPrimaryIndex :: - ( HasCallStack - , ConvertRawHash blk - , IOLike m - , StandardHash blk - , Typeable blk - ) - => CacheEnv m blk h - -> ChunkNo - -> AllowExisting - -> m (Handle h) + ( HasCallStack + , ConvertRawHash blk + , IOLike m + , StandardHash blk + , Typeable blk + ) => + CacheEnv m blk h -> + ChunkNo -> + AllowExisting -> + m (Handle h) openPrimaryIndex cacheEnv chunk allowExisting = do - lastUsed <- LastUsed <$> getMonotonicTime - pHnd <- Primary.open hasFS chunk allowExisting - -- Don't leak the handle in case of an exception - flip onException (hClose pHnd) $ do - newCurrentChunkInfo <- case allowExisting of - MustBeNew -> return $ emptyCurrentChunkInfo chunk - AllowExisting -> loadCurrentChunkInfo hasFS chunkInfo chunk - mbEvicted <- modifyMVar cacheVar $ - pure . - evictIfNecessary pastChunksToCache . - openChunk chunk lastUsed newCurrentChunkInfo - whenJust mbEvicted $ \evicted -> - -- If we had to evict, we are at 'pastChunksToCache' - traceWith tracer $ TracePastChunkEvict evicted pastChunksToCache - return pHnd - where - CacheEnv { hasFS, cacheVar, cacheConfig, tracer, chunkInfo } = cacheEnv - HasFS { hClose } = hasFS - CacheConfig { pastChunksToCache } = cacheConfig + lastUsed <- LastUsed <$> getMonotonicTime + pHnd <- Primary.open hasFS chunk allowExisting + -- Don't leak the handle in case of an exception + flip onException (hClose pHnd) $ do + newCurrentChunkInfo <- case allowExisting of + MustBeNew -> return $ emptyCurrentChunkInfo chunk + AllowExisting -> loadCurrentChunkInfo hasFS chunkInfo chunk + mbEvicted <- + modifyMVar cacheVar $ + pure + . evictIfNecessary pastChunksToCache + . 
openChunk chunk lastUsed newCurrentChunkInfo + whenJust mbEvicted $ \evicted -> + -- If we had to evict, we are at 'pastChunksToCache' + traceWith tracer $ TracePastChunkEvict evicted pastChunksToCache + return pHnd + where + CacheEnv{hasFS, cacheVar, cacheConfig, tracer, chunkInfo} = cacheEnv + HasFS{hClose} = hasFS + CacheConfig{pastChunksToCache} = cacheConfig appendOffsets :: - (HasCallStack, Foldable f, IOLike m) - => CacheEnv m blk h - -> Handle h - -> f SecondaryOffset - -> m () -appendOffsets CacheEnv { hasFS, cacheVar } pHnd offsets = do - Primary.appendOffsets hasFS pHnd offsets - modifyMVar_ cacheVar $ pure . addCurrentChunkOffsets - where - -- Lenses would be nice here - addCurrentChunkOffsets :: Cached blk -> Cached blk - addCurrentChunkOffsets cached@Cached { currentChunkInfo } = cached - { currentChunkInfo = currentChunkInfo - { currentChunkOffsets = currentChunkOffsets currentChunkInfo <> - Seq.fromList (toList offsets) - } + (HasCallStack, Foldable f, IOLike m) => + CacheEnv m blk h -> + Handle h -> + f SecondaryOffset -> + m () +appendOffsets CacheEnv{hasFS, cacheVar} pHnd offsets = do + Primary.appendOffsets hasFS pHnd offsets + modifyMVar_ cacheVar $ pure . addCurrentChunkOffsets + where + -- Lenses would be nice here + addCurrentChunkOffsets :: Cached blk -> Cached blk + addCurrentChunkOffsets cached@Cached{currentChunkInfo} = + cached + { currentChunkInfo = + currentChunkInfo + { currentChunkOffsets = + currentChunkOffsets currentChunkInfo + <> Seq.fromList (toList offsets) + } } {------------------------------------------------------------------------------ @@ -782,98 +846,113 @@ appendOffsets CacheEnv { hasFS, cacheVar } pHnd offsets = do ------------------------------------------------------------------------------} readEntries :: - forall m blk h t. 
- ( HasCallStack - , ConvertRawHash blk - , IOLike m - , StandardHash blk - , Typeable blk - , Traversable t - ) - => CacheEnv m blk h - -> ChunkNo - -> t (IsEBB, SecondaryOffset) - -> m (t (Secondary.Entry blk, BlockSize)) + forall m blk h t. + ( HasCallStack + , ConvertRawHash blk + , IOLike m + , StandardHash blk + , Typeable blk + , Traversable t + ) => + CacheEnv m blk h -> + ChunkNo -> + t (IsEBB, SecondaryOffset) -> + m (t (Secondary.Entry blk, BlockSize)) readEntries cacheEnv chunk toRead = - getChunkInfo cacheEnv chunk >>= \case - Left CurrentChunkInfo { currentChunkEntries } -> - forM toRead $ \(_isEBB, secondaryOffset) -> - case currentChunkEntries Seq.!? indexForOffset secondaryOffset of - Just (WithBlockSize size entry) -> return (entry, BlockSize size) - Nothing -> noEntry secondaryOffset - Right PastChunkInfo { pastChunkEntries } -> - forM toRead $ \(_isEBB, secondaryOffset) -> - case pastChunkEntries Vector.!? indexForOffset secondaryOffset of - Just (WithBlockSize size entry) -> return (entry, BlockSize size) - Nothing -> noEntry secondaryOffset - where - indexForOffset :: SecondaryOffset -> Int - indexForOffset secondaryOffset = fromIntegral $ + getChunkInfo cacheEnv chunk >>= \case + Left CurrentChunkInfo{currentChunkEntries} -> + forM toRead $ \(_isEBB, secondaryOffset) -> + case currentChunkEntries Seq.!? indexForOffset secondaryOffset of + Just (WithBlockSize size entry) -> return (entry, BlockSize size) + Nothing -> noEntry secondaryOffset + Right PastChunkInfo{pastChunkEntries} -> + forM toRead $ \(_isEBB, secondaryOffset) -> + case pastChunkEntries Vector.!? indexForOffset secondaryOffset of + Just (WithBlockSize size entry) -> return (entry, BlockSize size) + Nothing -> noEntry secondaryOffset + where + indexForOffset :: SecondaryOffset -> Int + indexForOffset secondaryOffset = + fromIntegral $ secondaryOffset `div` Secondary.entrySize (Proxy @blk) - -- There was no entry in the secondary index for the given - -- 'SecondaryOffset'. 
Either the secondary index is incomplete, /or/, the
- -- primary index from which we read the 'SecondaryOffset' got corrupted.
- -- We don't know which of the two things happened, but the former is more
- -- likely, so we mention that file in the error message.
- noEntry :: SecondaryOffset -> m a
- noEntry secondaryOffset = throwUnexpectedFailure $ InvalidFileError @blk
- (fsPathSecondaryIndexFile chunk)
- ("no entry found for " <> show secondaryOffset)
- prettyCallStack
+ -- There was no entry in the secondary index for the given
+ -- 'SecondaryOffset'. Either the secondary index is incomplete, /or/, the
+ -- primary index from which we read the 'SecondaryOffset' got corrupted.
+ -- We don't know which of the two things happened, but the former is more
+ -- likely, so we mention that file in the error message.
+ noEntry :: SecondaryOffset -> m a
+ noEntry secondaryOffset =
+ throwUnexpectedFailure $
+ InvalidFileError @blk
+ (fsPathSecondaryIndexFile chunk)
+ ("no entry found for " <> show secondaryOffset)
+ prettyCallStack

readAllEntries ::
- forall m blk h.
- ( HasCallStack
- , ConvertRawHash blk
- , IOLike m
- , StandardHash blk
- , Typeable blk
- )
- => CacheEnv m blk h
- -> SecondaryOffset
- -> ChunkNo
- -> (Secondary.Entry blk -> Bool)
- -> Word64
- -> IsEBB
- -> m [WithBlockSize (Secondary.Entry blk)]
-readAllEntries cacheEnv secondaryOffset chunk stopCondition
- _chunkFileSize _firstIsEBB =
+ forall m blk h.
+ ( HasCallStack
+ , ConvertRawHash blk
+ , IOLike m
+ , StandardHash blk
+ , Typeable blk
+ ) =>
+ CacheEnv m blk h ->
+ SecondaryOffset ->
+ ChunkNo ->
+ (Secondary.Entry blk -> Bool) ->
+ Word64 ->
+ IsEBB ->
+ m [WithBlockSize (Secondary.Entry blk)]
+readAllEntries
+ cacheEnv
+ secondaryOffset
+ chunk
+ stopCondition
+ _chunkFileSize
+ _firstIsEBB =
 getChunkInfo cacheEnv chunk <&> \case
- Left CurrentChunkInfo { currentChunkEntries } ->
+ Left CurrentChunkInfo{currentChunkEntries} ->
 takeUntil (stopCondition .
withoutBlockSize) $ - toList $ Seq.drop toDrop currentChunkEntries - Right PastChunkInfo { pastChunkEntries } -> + toList $ + Seq.drop toDrop currentChunkEntries + Right PastChunkInfo{pastChunkEntries} -> takeUntil (stopCondition . withoutBlockSize) $ - toList $ Vector.drop toDrop pastChunkEntries - where + toList $ + Vector.drop toDrop pastChunkEntries + where toDrop :: Int - toDrop = fromIntegral $ - secondaryOffset `div` Secondary.entrySize (Proxy @blk) + toDrop = + fromIntegral $ + secondaryOffset `div` Secondary.entrySize (Proxy @blk) appendEntry :: - forall m blk h. (HasCallStack, ConvertRawHash blk, IOLike m) - => CacheEnv m blk h - -> ChunkNo - -> Handle h - -> Entry blk - -> m Word64 -appendEntry CacheEnv { hasFS, cacheVar } chunk sHnd entry = do - nbBytes <- Secondary.appendEntry hasFS sHnd (withoutBlockSize entry) - modifyMVar_ cacheVar $ pure . addCurrentChunkEntry - return nbBytes - where - -- Lenses would be nice here - addCurrentChunkEntry :: Cached blk -> Cached blk - addCurrentChunkEntry cached@Cached { currentChunk, currentChunkInfo } - | currentChunk /= chunk - = error $ - "Appending to chunk " <> show chunk <> - " while the index is still in " <> show currentChunk - | otherwise - = cached - { currentChunkInfo = currentChunkInfo - { currentChunkEntries = - currentChunkEntries currentChunkInfo Seq.|> entry - } + forall m blk h. + (HasCallStack, ConvertRawHash blk, IOLike m) => + CacheEnv m blk h -> + ChunkNo -> + Handle h -> + Entry blk -> + m Word64 +appendEntry CacheEnv{hasFS, cacheVar} chunk sHnd entry = do + nbBytes <- Secondary.appendEntry hasFS sHnd (withoutBlockSize entry) + modifyMVar_ cacheVar $ pure . 
addCurrentChunkEntry + return nbBytes + where + -- Lenses would be nice here + addCurrentChunkEntry :: Cached blk -> Cached blk + addCurrentChunkEntry cached@Cached{currentChunk, currentChunkInfo} + | currentChunk /= chunk = + error $ + "Appending to chunk " + <> show chunk + <> " while the index is still in " + <> show currentChunk + | otherwise = + cached + { currentChunkInfo = + currentChunkInfo + { currentChunkEntries = + currentChunkEntries currentChunkInfo Seq.|> entry + } } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Primary.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Primary.hs index c97fcac64f..f5ef8591cf 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Primary.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Primary.hs @@ -10,9 +10,10 @@ -- -- Intended for qualified import -- > import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary as PrimaryIndex -module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary ( - -- * SecondaryOffset +module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary + ( -- * SecondaryOffset SecondaryOffset + -- * PrimaryIndex , PrimaryIndex (..) 
, appendOffsets @@ -40,37 +41,42 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary ( , truncateToSlotFS , unfinalise , write + -- * Exported for testing purposes , mk , toSecondaryOffsets ) where -import Control.Exception (assert) -import Control.Monad -import Data.Binary (Get, Put) -import qualified Data.Binary.Get as Get -import qualified Data.Binary.Put as Put -import qualified Data.ByteString as Strict -import qualified Data.ByteString.Lazy as Lazy -import Data.Functor.Identity (Identity (..)) -import Data.Proxy (Proxy (..)) -import Data.Typeable (Typeable) -import Data.Vector.Unboxed (Vector) -import qualified Data.Vector.Unboxed as V -import Data.Word -import Foreign.Storable (sizeOf) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block (StandardHash) -import Ouroboros.Consensus.Storage.ImmutableDB.API - (ImmutableDBError (..), SecondaryOffset, - UnexpectedFailure (..)) -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util - (fsPathPrimaryIndexFile, runGet) -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import System.FS.API.Lazy hiding (allowExisting) +import Control.Exception (assert) +import Control.Monad +import Data.Binary (Get, Put) +import Data.Binary.Get qualified as Get +import Data.Binary.Put qualified as Put +import Data.ByteString qualified as Strict +import Data.ByteString.Lazy qualified as Lazy +import Data.Functor.Identity (Identity (..)) +import Data.Proxy (Proxy (..)) +import Data.Typeable (Typeable) +import Data.Vector.Unboxed (Vector) +import Data.Vector.Unboxed qualified as V +import Data.Word +import Foreign.Storable (sizeOf) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block (StandardHash) +import Ouroboros.Consensus.Storage.ImmutableDB.API + ( ImmutableDBError (..) + , SecondaryOffset + , UnexpectedFailure (..) 
+ ) +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util + ( fsPathPrimaryIndexFile + , runGet + ) +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import System.FS.API.Lazy hiding (allowExisting) {------------------------------------------------------------------------------ SecondaryOffset @@ -122,15 +128,14 @@ secondaryOffsetSize = fromIntegral $ sizeOf (error "sizeOf" :: SecondaryOffset) -- -- The serialisation of a primary index file starts with -- @currentVersionNumber@ followed by all its offset. -data PrimaryIndex = MkPrimaryIndex { - -- | The 'ChunkNo' of the chunk this index is associated with - primaryIndexChunkNo :: !ChunkNo - - -- | The entries in the index proper - , primaryIndexOffsets :: !(Vector SecondaryOffset) - } - deriving stock (Eq, Show, Generic) - deriving anyclass (NoThunks) +data PrimaryIndex = MkPrimaryIndex + { primaryIndexChunkNo :: !ChunkNo + -- ^ The 'ChunkNo' of the chunk this index is associated with + , primaryIndexOffsets :: !(Vector SecondaryOffset) + -- ^ The entries in the index proper + } + deriving stock (Eq, Show, Generic) + deriving anyclass NoThunks assertInPrimaryIndex :: HasCallStack => PrimaryIndex -> RelativeSlot -> Word64 assertInPrimaryIndex = assertRelativeSlotInChunk . primaryIndexChunkNo @@ -138,9 +143,9 @@ assertInPrimaryIndex = assertRelativeSlotInChunk . primaryIndexChunkNo -- | Smart constructor: checks that the offsets are non-decreasing, there is -- at least one offset, and that the first offset is 0. 
mk :: ChunkNo -> [SecondaryOffset] -> Maybe PrimaryIndex -mk chunk offsets@(0:_) - | and $ zipWith (<=) offsets (drop 1 offsets) - = Just $ MkPrimaryIndex chunk $ V.fromList offsets +mk chunk offsets@(0 : _) + | and $ zipWith (<=) offsets (drop 1 offsets) = + Just $ MkPrimaryIndex chunk $ V.fromList offsets mk _ _ = Nothing -- | Return the 'SecondaryOffset's in the 'PrimaryIndex'. @@ -158,59 +163,63 @@ slots (MkPrimaryIndex _ offsets) = fromIntegral $ V.length offsets - 1 -- | Read the 'SecondaryOffset' corresponding to the given relative slot in -- the primary index. Return 'Nothing' when the slot is empty. readOffset :: - forall blk m h. - (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) - => Proxy blk - -> HasFS m h - -> ChunkNo - -> RelativeSlot - -> m (Maybe SecondaryOffset) -readOffset pb hasFS chunk slot = runIdentity <$> - readOffsets pb hasFS chunk (Identity slot) + forall blk m h. + (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) => + Proxy blk -> + HasFS m h -> + ChunkNo -> + RelativeSlot -> + m (Maybe SecondaryOffset) +readOffset pb hasFS chunk slot = + runIdentity + <$> readOffsets pb hasFS chunk (Identity slot) -- | Same as 'readOffset', but for multiple offsets. -- -- NOTE: only use this for a few offsets, as we will seek (@pread@) for each -- offset. Use 'load' if you want to read the whole primary index. readOffsets :: - forall blk m h t. - ( HasCallStack - , MonadThrow m - , Traversable t - , StandardHash blk - , Typeable blk - ) - => Proxy blk - -> HasFS m h - -> ChunkNo - -> t RelativeSlot - -> m (t (Maybe SecondaryOffset)) - -- ^ The offset in the secondary index file corresponding to the given - -- slot. 'Nothing' when the slot is empty. 
-readOffsets pb hasFS@HasFS { hGetSize } chunk toRead = - withFile hasFS primaryIndexFile ReadMode $ \pHnd -> do - size <- hGetSize pHnd - forM toRead $ \relSlot -> do - let slot = assertRelativeSlotInChunk chunk relSlot - let offset = AbsOffset $ - fromIntegral (sizeOf currentVersionNumber) + - slot * secondaryOffsetSize - if unAbsOffset offset + nbBytes > size then + forall blk m h t. + ( HasCallStack + , MonadThrow m + , Traversable t + , StandardHash blk + , Typeable blk + ) => + Proxy blk -> + HasFS m h -> + ChunkNo -> + t RelativeSlot -> + -- | The offset in the secondary index file corresponding to the given + -- slot. 'Nothing' when the slot is empty. + m (t (Maybe SecondaryOffset)) +readOffsets pb hasFS@HasFS{hGetSize} chunk toRead = + withFile hasFS primaryIndexFile ReadMode $ \pHnd -> do + size <- hGetSize pHnd + forM toRead $ \relSlot -> do + let slot = assertRelativeSlotInChunk chunk relSlot + let offset = + AbsOffset $ + fromIntegral (sizeOf currentVersionNumber) + + slot * secondaryOffsetSize + if unAbsOffset offset + nbBytes > size + then -- Don't try reading if the file doesn't contain enough bytes return Nothing else do (secondaryOffset, nextSecondaryOffset) <- - runGet pb primaryIndexFile get =<< - hGetExactlyAt hasFS pHnd nbBytes offset - return $ if nextSecondaryOffset - secondaryOffset > 0 - then Just secondaryOffset - else Nothing - where - primaryIndexFile = fsPathPrimaryIndexFile chunk - nbBytes = secondaryOffsetSize * 2 - - get :: Get (SecondaryOffset, SecondaryOffset) - get = (,) <$> getSecondaryOffset <*> getSecondaryOffset + runGet pb primaryIndexFile get + =<< hGetExactlyAt hasFS pHnd nbBytes offset + return $ + if nextSecondaryOffset - secondaryOffset > 0 + then Just secondaryOffset + else Nothing + where + primaryIndexFile = fsPathPrimaryIndexFile chunk + nbBytes = secondaryOffsetSize * 2 + + get :: Get (SecondaryOffset, SecondaryOffset) + get = (,) <$> getSecondaryOffset <*> getSecondaryOffset -- | Return the first filled slot in 
the primary index file, or 'Nothing' in -- case there are no filled slots. @@ -220,92 +229,99 @@ readOffsets pb hasFS@HasFS { hGetSize } chunk toRead = -- -- May throw 'InvalidPrimaryIndexException'. readFirstFilledSlot :: - forall blk m h. - (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) - => Proxy blk - -> HasFS m h - -> ChunkInfo - -> ChunkNo - -> m (Maybe RelativeSlot) -readFirstFilledSlot pb hasFS@HasFS { hSeek, hGetSome } chunkInfo chunk = - withFile hasFS primaryIndexFile ReadMode $ \pHnd -> do - hSeek pHnd AbsoluteSeek skip - go pHnd $ NextRelativeSlot (firstBlockOrEBB chunkInfo chunk) - where - primaryIndexFile = fsPathPrimaryIndexFile chunk - - -- | Skip the version number and the first offset, which is always 0. - skip = fromIntegral (sizeOf currentVersionNumber) - + fromIntegral secondaryOffsetSize - - -- | Read offset per offset until we find a non-zero one. In the - -- Byron-era, the first slot is always filled with an EBB, so we only need - -- to read one 4-byte offset. In the Shelley era, approximately one in ten - -- slots is filled, so on average we need to read 5 4-byte offsets. The OS - -- will buffer this anyway. - go :: HasCallStack => Handle h -> NextRelativeSlot -> m (Maybe RelativeSlot) - go pHnd nextRelative = getNextOffset pHnd >>= \mOffset -> + forall blk m h. + (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) => + Proxy blk -> + HasFS m h -> + ChunkInfo -> + ChunkNo -> + m (Maybe RelativeSlot) +readFirstFilledSlot pb hasFS@HasFS{hSeek, hGetSome} chunkInfo chunk = + withFile hasFS primaryIndexFile ReadMode $ \pHnd -> do + hSeek pHnd AbsoluteSeek skip + go pHnd $ NextRelativeSlot (firstBlockOrEBB chunkInfo chunk) + where + primaryIndexFile = fsPathPrimaryIndexFile chunk + + -- \| Skip the version number and the first offset, which is always 0. + skip = + fromIntegral (sizeOf currentVersionNumber) + + fromIntegral secondaryOffsetSize + + -- \| Read offset per offset until we find a non-zero one. 
In the + -- Byron-era, the first slot is always filled with an EBB, so we only need + -- to read one 4-byte offset. In the Shelley era, approximately one in ten + -- slots is filled, so on average we need to read 5 4-byte offsets. The OS + -- will buffer this anyway. + go :: HasCallStack => Handle h -> NextRelativeSlot -> m (Maybe RelativeSlot) + go pHnd nextRelative = + getNextOffset pHnd >>= \mOffset -> case (nextRelative, mOffset) of (_, Nothing) -> -- Reached end of file, no filled slot return Nothing (NoMoreRelativeSlots, Just _) -> - throwIO $ UnexpectedFailure $ - InvalidFileError - @blk - primaryIndexFile - "Index file too large" - prettyCallStack + throwIO $ + UnexpectedFailure $ + InvalidFileError + @blk + primaryIndexFile + "Index file too large" + prettyCallStack (NextRelativeSlot slot, Just offset) | offset == 0 -> go pHnd (nextRelativeSlot slot) - | otherwise -> return $ Just slot - - -- | We don't know in advance if there are bytes left to read, so it could - -- be that 'hGetSome' returns 0 bytes, in which case we reached EOF and - -- return 'Nothing'. - -- - -- NOTE: when using 'hGetSome' directly, we can get partial reads, which - -- we should handle appropriately. - getNextOffset :: Handle h -> m (Maybe SecondaryOffset) - getNextOffset pHnd = goGet secondaryOffsetSize mempty - where - goGet :: Word64 -> Lazy.ByteString -> m (Maybe SecondaryOffset) - goGet remaining acc = do - bs <- hGetSome pHnd remaining - let acc' = acc <> Lazy.fromStrict bs - case fromIntegral (Strict.length bs) of - 0 -> return Nothing - n | n < remaining -- Partial read, read some more - -> goGet (remaining - n) acc' - | otherwise -- All bytes read, 'Get' the offset - -> assert (n == remaining) $ Just <$> - runGet pb primaryIndexFile getSecondaryOffset acc' + | otherwise -> return $ Just slot + + -- \| We don't know in advance if there are bytes left to read, so it could + -- be that 'hGetSome' returns 0 bytes, in which case we reached EOF and + -- return 'Nothing'. 
+ -- + -- NOTE: when using 'hGetSome' directly, we can get partial reads, which + -- we should handle appropriately. + getNextOffset :: Handle h -> m (Maybe SecondaryOffset) + getNextOffset pHnd = goGet secondaryOffsetSize mempty + where + goGet :: Word64 -> Lazy.ByteString -> m (Maybe SecondaryOffset) + goGet remaining acc = do + bs <- hGetSome pHnd remaining + let acc' = acc <> Lazy.fromStrict bs + case fromIntegral (Strict.length bs) of + 0 -> return Nothing + n + | n < remaining -> -- Partial read, read some more + goGet (remaining - n) acc' + | otherwise -> -- All bytes read, 'Get' the offset + assert (n == remaining) $ + Just + <$> runGet pb primaryIndexFile getSecondaryOffset acc' -- | Load a primary index file in memory. load :: - forall blk m h. - (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) - => Proxy blk - -> HasFS m h - -> ChunkNo - -> m PrimaryIndex + forall blk m h. + (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) => + Proxy blk -> + HasFS m h -> + ChunkNo -> + m PrimaryIndex load pb hasFS chunk = - withFile hasFS primaryIndexFile ReadMode $ \pHnd -> - hGetAll hasFS pHnd >>= runGet pb primaryIndexFile get - where - primaryIndexFile = fsPathPrimaryIndexFile chunk - - -- TODO incremental? - get :: Get PrimaryIndex - get = Get.getWord8 >>= \versionNumber -> + withFile hasFS primaryIndexFile ReadMode $ \pHnd -> + hGetAll hasFS pHnd >>= runGet pb primaryIndexFile get + where + primaryIndexFile = fsPathPrimaryIndexFile chunk + + -- TODO incremental? + get :: Get PrimaryIndex + get = + Get.getWord8 >>= \versionNumber -> if versionNumber == currentVersionNumber then MkPrimaryIndex chunk . V.fromList <$> go else fail $ "unknown version number: " <> show versionNumber - where - go = do - isEmpty <- Get.isEmpty - if isEmpty then return [] - else (:) <$> getSecondaryOffset <*> go + where + go = do + isEmpty <- Get.isEmpty + if isEmpty + then return [] + else (:) <$> getSecondaryOffset <*> go -- | Write a primary index to a file. 
-- @@ -317,56 +333,60 @@ load pb hasFS chunk = -- Then it must be that: -- -- > primaryIndex === primaryIndex' --- write :: - (HasCallStack, MonadThrow m) - => HasFS m h - -> ChunkNo - -> PrimaryIndex - -> m () -write hasFS@HasFS { hTruncate } chunk (MkPrimaryIndex _ offsets) = - withFile hasFS primaryIndexFile (AppendMode AllowExisting) $ \pHnd -> do - -- NOTE: open it in AppendMode and truncate it first, otherwise we might - -- just overwrite part of the data stored in the index file. - hTruncate pHnd 0 - void $ hPut hasFS pHnd $ Put.execPut $ - -- The version number - Put.putWord8 currentVersionNumber <> - -- Hopefully the intermediary list is fused away - foldMap putSecondaryOffset (V.toList offsets) - where - primaryIndexFile = fsPathPrimaryIndexFile chunk + (HasCallStack, MonadThrow m) => + HasFS m h -> + ChunkNo -> + PrimaryIndex -> + m () +write hasFS@HasFS{hTruncate} chunk (MkPrimaryIndex _ offsets) = + withFile hasFS primaryIndexFile (AppendMode AllowExisting) $ \pHnd -> do + -- NOTE: open it in AppendMode and truncate it first, otherwise we might + -- just overwrite part of the data stored in the index file. + hTruncate pHnd 0 + void $ + hPut hasFS pHnd $ + Put.execPut $ + -- The version number + Put.putWord8 currentVersionNumber + <> + -- Hopefully the intermediary list is fused away + foldMap putSecondaryOffset (V.toList offsets) + where + primaryIndexFile = fsPathPrimaryIndexFile chunk -- | Truncate the primary index so that the given 'RelativeSlot' will be the -- last slot (filled or not) in the primary index, unless the primary index -- didn't contain the 'RelativeSlot' in the first place. 
truncateToSlot :: ChunkInfo -> RelativeSlot -> PrimaryIndex -> PrimaryIndex truncateToSlot chunkInfo relSlot primary@(MkPrimaryIndex _ offsets) = - case getLastSlot chunkInfo primary of - Just lastSlot | compareRelativeSlot lastSlot relSlot == GT -> - primary { primaryIndexOffsets = V.take (fromIntegral slot + 2) offsets } - _otherwise -> - primary - where - slot = assertInPrimaryIndex primary relSlot + case getLastSlot chunkInfo primary of + Just lastSlot + | compareRelativeSlot lastSlot relSlot == GT -> + primary{primaryIndexOffsets = V.take (fromIntegral slot + 2) offsets} + _otherwise -> + primary + where + slot = assertInPrimaryIndex primary relSlot -- | On-disk variant of 'truncateToSlot'. The truncation is done without -- reading the primary index from disk. truncateToSlotFS :: - (HasCallStack, MonadThrow m) - => HasFS m h - -> ChunkNo - -> RelativeSlot - -> m () -truncateToSlotFS hasFS@HasFS { hTruncate, hGetSize } chunk relSlot = - withFile hasFS primaryIndexFile (AppendMode AllowExisting) $ \pHnd -> do - size <- hGetSize pHnd - when (offset < size) $ hTruncate pHnd offset - where - slot = assertRelativeSlotInChunk chunk relSlot - primaryIndexFile = fsPathPrimaryIndexFile chunk - offset = fromIntegral (sizeOf currentVersionNumber) - + (slot + 2) * secondaryOffsetSize + (HasCallStack, MonadThrow m) => + HasFS m h -> + ChunkNo -> + RelativeSlot -> + m () +truncateToSlotFS hasFS@HasFS{hTruncate, hGetSize} chunk relSlot = + withFile hasFS primaryIndexFile (AppendMode AllowExisting) $ \pHnd -> do + size <- hGetSize pHnd + when (offset < size) $ hTruncate pHnd offset + where + slot = assertRelativeSlotInChunk chunk relSlot + primaryIndexFile = fsPathPrimaryIndexFile chunk + offset = + fromIntegral (sizeOf currentVersionNumber) + + (slot + 2) * secondaryOffsetSize -- | Remove all trailing empty slots that were added during the -- finalisation/backfilling of the primary index. 
@@ -374,55 +394,58 @@ truncateToSlotFS hasFS@HasFS { hTruncate, hGetSize } chunk relSlot = -- POSTCONDITION: the last slot of the primary index file will be filled, -- unless the index itself is empty. unfinalise :: - (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) - => Proxy blk - -> HasFS m h - -> ChunkInfo - -> ChunkNo - -> m () + (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) => + Proxy blk -> + HasFS m h -> + ChunkInfo -> + ChunkNo -> + m () unfinalise pb hasFS chunkInfo chunk = do - -- TODO optimise so that we only need to open the file once - primaryIndex <- load pb hasFS chunk - case lastFilledSlot chunkInfo primaryIndex of - Nothing -> return () - Just slot -> truncateToSlotFS hasFS chunk slot + -- TODO optimise so that we only need to open the file once + primaryIndex <- load pb hasFS chunk + case lastFilledSlot chunkInfo primaryIndex of + Nothing -> return () + Just slot -> truncateToSlotFS hasFS chunk slot -- | Open a primary index file for the given chunk and return a handle to it. -- -- The file is opened with the given 'AllowExisting' value. When given -- 'MustBeNew', the version number is written to the file. open :: - (HasCallStack, MonadCatch m) - => HasFS m h - -> ChunkNo - -> AllowExisting - -> m (Handle h) -open hasFS@HasFS { hOpen, hClose } chunk allowExisting = do - -- TODO we rely on the fact that if the file exists, it already contains - -- the version number and the first offset. What if that is not the case? - pHnd <- hOpen primaryIndexFile (AppendMode allowExisting) - flip onException (hClose pHnd) $ do - case allowExisting of - AllowExisting -> return () - -- If the file is new, write the version number and the first offset, - -- i.e. 0. 
- MustBeNew -> void $ hPut hasFS pHnd $ Put.execPut $ - Put.putWord8 currentVersionNumber <> - putSecondaryOffset 0 - return pHnd - where - primaryIndexFile = fsPathPrimaryIndexFile chunk + (HasCallStack, MonadCatch m) => + HasFS m h -> + ChunkNo -> + AllowExisting -> + m (Handle h) +open hasFS@HasFS{hOpen, hClose} chunk allowExisting = do + -- TODO we rely on the fact that if the file exists, it already contains + -- the version number and the first offset. What if that is not the case? + pHnd <- hOpen primaryIndexFile (AppendMode allowExisting) + flip onException (hClose pHnd) $ do + case allowExisting of + AllowExisting -> return () + -- If the file is new, write the version number and the first offset, + -- i.e. 0. + MustBeNew -> + void $ + hPut hasFS pHnd $ + Put.execPut $ + Put.putWord8 currentVersionNumber + <> putSecondaryOffset 0 + return pHnd + where + primaryIndexFile = fsPathPrimaryIndexFile chunk -- | Append the given 'SecondaryOffset' to the end of the file (passed as a -- handle). appendOffsets :: - (Monad m, Foldable f, HasCallStack) - => HasFS m h - -> Handle h - -> f SecondaryOffset - -> m () + (Monad m, Foldable f, HasCallStack) => + HasFS m h -> + Handle h -> + f SecondaryOffset -> + m () appendOffsets hasFS pHnd offsets = - void $ hPut hasFS pHnd $ Put.execPut $ foldMap putSecondaryOffset offsets + void $ hPut hasFS pHnd $ Put.execPut $ foldMap putSecondaryOffset offsets -- | Return the last 'SecondaryOffset' in the primary index file. lastOffset :: PrimaryIndex -> SecondaryOffset @@ -435,15 +458,15 @@ lastOffset (MkPrimaryIndex _ offsets) -- Returns 'Nothing' if the index is empty. 
getLastSlot :: ChunkInfo -> PrimaryIndex -> Maybe RelativeSlot getLastSlot chunkInfo (MkPrimaryIndex chunk offsets) = do - guard $ V.length offsets >= 2 - return $ nthBlockOrEBB chunkInfo chunk (V.length offsets - 2) + guard $ V.length offsets >= 2 + return $ nthBlockOrEBB chunkInfo chunk (V.length offsets - 2) -- | Check whether the given slot is within the primary index. containsSlot :: PrimaryIndex -> RelativeSlot -> Bool containsSlot primary@(MkPrimaryIndex _ offsets) relSlot = - slot < fromIntegral (V.length offsets) - 1 - where - slot = assertInPrimaryIndex primary relSlot + slot < fromIntegral (V.length offsets) - 1 + where + slot = assertInPrimaryIndex primary relSlot -- | Return the offset for the given slot. -- @@ -451,9 +474,9 @@ containsSlot primary@(MkPrimaryIndex _ offsets) relSlot = -- ('containsSlot'). offsetOfSlot :: HasCallStack => PrimaryIndex -> RelativeSlot -> SecondaryOffset offsetOfSlot primary@(MkPrimaryIndex _ offsets) relSlot = - offsets ! fromIntegral slot - where - slot = assertInPrimaryIndex primary relSlot + offsets ! fromIntegral slot + where + slot = assertInPrimaryIndex primary relSlot -- | Return the size of the given slot according to the primary index. -- @@ -461,12 +484,12 @@ offsetOfSlot primary@(MkPrimaryIndex _ offsets) relSlot = -- ('containsSlot'). sizeOfSlot :: HasCallStack => PrimaryIndex -> RelativeSlot -> Word32 sizeOfSlot primary@(MkPrimaryIndex _ offsets) relSlot = - offsetAfter - offsetAt - where - slot = assertInPrimaryIndex primary relSlot - i = fromIntegral slot - offsetAt = offsets ! i - offsetAfter = offsets ! (i + 1) + offsetAfter - offsetAt + where + slot = assertInPrimaryIndex primary relSlot + i = fromIntegral slot + offsetAt = offsets ! i + offsetAfter = offsets ! (i + 1) -- | Return 'True' when the given slot is filled. -- @@ -491,21 +514,21 @@ isFilledSlot primary slot = sizeOfSlot primary slot /= 0 -- Return slot 4. 
nextFilledSlot :: ChunkInfo -> PrimaryIndex -> RelativeSlot -> Maybe RelativeSlot nextFilledSlot chunkInfo primary@(MkPrimaryIndex chunk offsets) relSlot = - go (fromIntegral slot + 1) - where - slot = assertInPrimaryIndex primary relSlot - - len :: Int - len = V.length offsets - - go :: Int -> Maybe RelativeSlot - go i - | i + 1 >= len - = Nothing - | offsets ! i == offsets ! (i + 1) - = go (i + 1) - | otherwise - = Just (nthBlockOrEBB chunkInfo chunk i) + go (fromIntegral slot + 1) + where + slot = assertInPrimaryIndex primary relSlot + + len :: Int + len = V.length offsets + + go :: Int -> Maybe RelativeSlot + go i + | i + 1 >= len = + Nothing + | offsets ! i == offsets ! (i + 1) = + go (i + 1) + | otherwise = + Just (nthBlockOrEBB chunkInfo chunk i) -- | Find the first filled (length > zero) slot in the primary index. If there -- is none, return 'Nothing'. @@ -520,40 +543,40 @@ nextFilledSlot chunkInfo primary@(MkPrimaryIndex chunk offsets) relSlot = -- Return slot 1. firstFilledSlot :: ChunkInfo -> PrimaryIndex -> Maybe RelativeSlot firstFilledSlot chunkInfo (MkPrimaryIndex chunk offsets) = go 1 - where - len :: Int - len = V.length offsets - - go :: Int -> Maybe RelativeSlot - go i - | i >= len - = Nothing - | offsets ! i == 0 - = go (i + 1) - | otherwise - = Just (nthBlockOrEBB chunkInfo chunk (i - 1)) + where + len :: Int + len = V.length offsets + + go :: Int -> Maybe RelativeSlot + go i + | i >= len = + Nothing + | offsets ! i == 0 = + go (i + 1) + | otherwise = + Just (nthBlockOrEBB chunkInfo chunk (i - 1)) -- | Return a list of all the filled (length > zero) slots in the primary -- index. 
 filledSlots :: ChunkInfo -> PrimaryIndex -> [RelativeSlot]
 filledSlots chunkInfo primary = go (firstFilledSlot chunkInfo primary)
-  where
-    go Nothing     = []
-    go (Just slot) = slot : go (nextFilledSlot chunkInfo primary slot)
+ where
+  go Nothing = []
+  go (Just slot) = slot : go (nextFilledSlot chunkInfo primary slot)

 -- | Return the last filled slot in the primary index.
 lastFilledSlot :: HasCallStack => ChunkInfo -> PrimaryIndex -> Maybe RelativeSlot
 lastFilledSlot chunkInfo (MkPrimaryIndex chunk offsets) =
-    go (V.length offsets - 1)
-  where
-    go :: Int -> Maybe RelativeSlot
-    go i
-      | i < 1
-      = Nothing
-      | offsets ! i == offsets ! (i - 1)
-      = go (i - 1)
-      | otherwise
-      = Just (nthBlockOrEBB chunkInfo chunk (i - 1))
+  go (V.length offsets - 1)
+ where
+  go :: Int -> Maybe RelativeSlot
+  go i
+    | i < 1 =
+        Nothing
+    | offsets ! i == offsets ! (i - 1) =
+        go (i - 1)
+    | otherwise =
+        Just (nthBlockOrEBB chunkInfo chunk (i - 1))

 -- | Return the slots to backfill the primary index file with.
 --
@@ -594,35 +617,40 @@ lastFilledSlot chunkInfo (MkPrimaryIndex chunk offsets) =
 -- We use @x, y, z@ in the examples above, but in practice these will be
 -- multiples of the (fixed) size of an entry in secondary index.
 backfill ::
-     RelativeSlot -- ^ The slot to write to (>= next expected slot)
-  -> RelativeSlot -- ^ The next expected slot to write to
-  -> SecondaryOffset -- ^ The last 'SecondaryOffset' written to
-  -> [SecondaryOffset]
+  -- | The slot to write to (>= next expected slot)
+  RelativeSlot ->
+  -- | The next expected slot to write to
+  RelativeSlot ->
+  -- | The last 'SecondaryOffset' written to
+  SecondaryOffset ->
+  [SecondaryOffset]
 backfill slot nextExpected offset =
-    replicate (fromIntegral gap) offset
-  where
-    gap = relativeSlotIndex slot
-        - relativeSlotIndex nextExpected
+  replicate (fromIntegral gap) offset
+ where
+  gap =
+    relativeSlotIndex slot
+      - relativeSlotIndex nextExpected

 -- | Return the slots to backfill the primary index file with when padding it
 -- to the chunk size.
 --
 -- See 'backfill' for more details.
 backfillChunk ::
-     ChunkInfo
-  -> ChunkNo
-  -> NextRelativeSlot
-  -> SecondaryOffset
-  -> [SecondaryOffset]
+  ChunkInfo ->
+  ChunkNo ->
+  NextRelativeSlot ->
+  SecondaryOffset ->
+  [SecondaryOffset]
 backfillChunk _ _ NoMoreRelativeSlots _ =
-    []
+  []
 backfillChunk chunkInfo chunk (NextRelativeSlot nextExpected) offset =
-    replicate (fromIntegral gap) offset
-  where
-    finalSlot = maxRelativeSlot chunkInfo chunk
-    gap = relativeSlotIndex finalSlot
-        - relativeSlotIndex nextExpected
-        + 1 -- fill all slots /including/ 'finalSlot'
+  replicate (fromIntegral gap) offset
+ where
+  finalSlot = maxRelativeSlot chunkInfo chunk
+  gap =
+    relativeSlotIndex finalSlot
+      - relativeSlotIndex nextExpected
+      + 1 -- fill all slots /including/ 'finalSlot'

 {------------------------------------------------------------------------------
   Helper for debugging
@@ -630,9 +658,10 @@ backfillChunk chunkInfo chunk (NextRelativeSlot nextExpected) offset =
 (!) :: (HasCallStack, V.Unbox a) => Vector a -> Int -> a
 v ! i
-  | 0 <= i, i < V.length v
-  = V.unsafeIndex v i
-  | otherwise
-  = error $
-      "Index " <> show i <> " out of bounds (0, " <> show (V.length v - 1) <> ")"
+  | 0 <= i
+  , i < V.length v =
+      V.unsafeIndex v i
+  | otherwise =
+      error $
+        "Index " <> show i <> " out of bounds (0, " <> show (V.length v - 1) <> ")"
 {-# INLINE (!) #-}
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Secondary.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Secondary.hs
index a0d6f065a1..e687238599 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Secondary.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Index/Secondary.hs
@@ -9,8 +9,8 @@
 {-# LANGUAGE StandaloneDeriving #-}
 {-# LANGUAGE TypeApplications #-}

-module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary (
-    BlockOffset (..)
+module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary
+  ( BlockOffset (..)
   , BlockSize (..)
   , Entry (..)
   , HeaderOffset (..)
@@ -24,52 +24,58 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary (
   , writeAllEntries
   ) where

-import           Control.Exception (assert)
-import           Control.Monad (forM)
-import           Data.Binary (Binary (..), Get, Put)
-import qualified Data.Binary.Get as Get
-import qualified Data.Binary.Put as Put
-import qualified Data.ByteString.Lazy as Lazy
-import           Data.Functor.Identity (Identity (..))
-import           Data.Typeable (Typeable)
-import           Data.Word
-import           Foreign.Storable (Storable (sizeOf))
-import           GHC.Generics (Generic)
-import           GHC.Stack (HasCallStack)
-import           Ouroboros.Consensus.Block hiding (headerHash)
-import           Ouroboros.Consensus.Storage.ImmutableDB.Chunks
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary
                     (SecondaryOffset)
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types
                     (BlockOrEBB (..), WithBlockSize (..))
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util
                     (fsPathSecondaryIndexFile, runGet, runGetWithUnconsumed)
-import           Ouroboros.Consensus.Util.IOLike
-import           System.FS.API.Lazy
-import           System.FS.CRC
+import Control.Exception (assert)
+import Control.Monad (forM)
+import Data.Binary (Binary (..), Get, Put)
+import Data.Binary.Get qualified as Get
+import Data.Binary.Put qualified as Put
+import Data.ByteString.Lazy qualified as Lazy
+import Data.Functor.Identity (Identity (..))
+import Data.Typeable (Typeable)
+import Data.Word
+import Foreign.Storable (Storable (sizeOf))
+import GHC.Generics (Generic)
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Block hiding (headerHash)
+import Ouroboros.Consensus.Storage.ImmutableDB.Chunks
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary
+  ( SecondaryOffset
+  )
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types
+  ( BlockOrEBB (..)
+  , WithBlockSize (..)
+  )
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util
+  ( fsPathSecondaryIndexFile
+  , runGet
+  , runGetWithUnconsumed
+  )
+import Ouroboros.Consensus.Util.IOLike
+import System.FS.API.Lazy
+import System.FS.CRC

 {------------------------------------------------------------------------------
   Types
------------------------------------------------------------------------------}

-newtype BlockOffset = BlockOffset { unBlockOffset :: Word64 }
-  deriving stock (Show)
+newtype BlockOffset = BlockOffset {unBlockOffset :: Word64}
+  deriving stock Show
   deriving newtype (Eq, Ord, Enum, Real, Integral, Num, Storable, NoThunks)

 instance Binary BlockOffset where
   get = BlockOffset <$> Get.getWord64be
   put = Put.putWord64be . unBlockOffset

-newtype HeaderOffset = HeaderOffset { unHeaderOffset :: Word16 }
-  deriving stock (Show)
+newtype HeaderOffset = HeaderOffset {unHeaderOffset :: Word16}
+  deriving stock Show
   deriving newtype (Eq, Storable, NoThunks)

 instance Binary HeaderOffset where
   get = HeaderOffset <$> Get.getWord16be
   put = Put.putWord16be . unHeaderOffset

-newtype HeaderSize = HeaderSize { unHeaderSize :: Word16 }
-  deriving stock (Show)
+newtype HeaderSize = HeaderSize {unHeaderSize :: Word16}
+  deriving stock Show
   deriving newtype (Eq, Storable, NoThunks)

 instance Binary HeaderSize where
@@ -77,96 +83,98 @@ instance Binary HeaderSize where
   put = Put.putWord16be . unHeaderSize

 getBlockOrEBB :: IsEBB -> Get BlockOrEBB
-getBlockOrEBB IsEBB    = EBB . EpochNo <$> Get.getWord64be
-getBlockOrEBB IsNotEBB = Block . SlotNo <$> Get.getWord64be
+getBlockOrEBB IsEBB = EBB . EpochNo <$> Get.getWord64be
+getBlockOrEBB IsNotEBB = Block . SlotNo <$> Get.getWord64be

 putBlockOrEBB :: BlockOrEBB -> Put
 putBlockOrEBB blockOrEBB = Put.putWord64be $ case blockOrEBB of
-    Block slotNo -> unSlotNo slotNo
-    EBB   epochNo -> unEpochNo epochNo
+  Block slotNo -> unSlotNo slotNo
+  EBB epochNo -> unEpochNo epochNo

 {------------------------------------------------------------------------------
   Entry
------------------------------------------------------------------------------}

-data Entry blk = Entry {
-      blockOffset  :: !BlockOffset
-    , headerOffset :: !HeaderOffset
-    , headerSize   :: !HeaderSize
-    , checksum     :: !CRC
-    , headerHash   :: !(HeaderHash blk)
-    , blockOrEBB   :: !BlockOrEBB
-    }
-  deriving (Generic)
-
-deriving instance StandardHash blk => Eq   (Entry blk)
-deriving instance StandardHash blk => Show (Entry blk)
+data Entry blk = Entry
+  { blockOffset :: !BlockOffset
+  , headerOffset :: !HeaderOffset
+  , headerSize :: !HeaderSize
+  , checksum :: !CRC
+  , headerHash :: !(HeaderHash blk)
+  , blockOrEBB :: !BlockOrEBB
+  }
+  deriving Generic
+
+deriving instance StandardHash blk => Eq (Entry blk)
+deriving instance StandardHash blk => Show (Entry blk)
 deriving instance StandardHash blk => NoThunks (Entry blk)

 getEntry :: forall blk. ConvertRawHash blk => IsEBB -> Get (Entry blk)
 getEntry isEBB = do
-    blockOffset  <- get
-    headerOffset <- get
-    headerSize   <- get
-    checksum     <- CRC <$> Get.getWord32be
-    headerHash   <- getHash pb
-    blockOrEBB   <- getBlockOrEBB isEBB
-    return Entry {..}
-  where
-    pb :: Proxy blk
-    pb = Proxy
+  blockOffset <- get
+  headerOffset <- get
+  headerSize <- get
+  checksum <- CRC <$> Get.getWord32be
+  headerHash <- getHash pb
+  blockOrEBB <- getBlockOrEBB isEBB
+  return Entry{..}
+ where
+  pb :: Proxy blk
+  pb = Proxy

 putEntry :: forall blk. ConvertRawHash blk => Entry blk -> Put
-putEntry Entry {..} = mconcat [
-      put blockOffset
-    , put headerOffset
-    , put headerSize
-    , Put.putWord32be (getCRC checksum)
-    , putHash pb headerHash
-    , putBlockOrEBB blockOrEBB
+putEntry Entry{..} =
+  mconcat
+    [ put blockOffset
+    , put headerOffset
+    , put headerSize
+    , Put.putWord32be (getCRC checksum)
+    , putHash pb headerHash
+    , putBlockOrEBB blockOrEBB
     ]
-  where
-    pb :: Proxy blk
-    pb = Proxy
+ where
+  pb :: Proxy blk
+  pb = Proxy

 entrySize :: ConvertRawHash blk => Proxy blk -> Word32
 entrySize pb =
-    size 8 "blockOffset" blockOffset
-  + size 2 "headerOffset" headerOffset
-  + size 2 "headerSize" headerSize
-  + size 4 "checksum" checksum
-  + hashSize pb
-  + 8 -- blockOrEBB
-  where
-    size :: Storable a => Word32 -> String -> (Entry blk -> a) -> Word32
-    size expected name field = assert (expected == actual) actual
-      where
-        actual = fromIntegral (sizeOf (field (error name)))
+  size 8 "blockOffset" blockOffset
+    + size 2 "headerOffset" headerOffset
+    + size 2 "headerSize" headerSize
+    + size 4 "checksum" checksum
+    + hashSize pb
+    + 8 -- blockOrEBB
+ where
+  size :: Storable a => Word32 -> String -> (Entry blk -> a) -> Word32
+  size expected name field = assert (expected == actual) actual
+   where
+    actual = fromIntegral (sizeOf (field (error name)))

 data BlockSize
   = BlockSize Word32
-  | LastEntry
-    -- ^ In case of the last entry, we don't have any entry and thus block
+  | -- | In case of the last entry, we don't have any entry and thus block
     -- offset after it that we can use to calculate the size of the block.
+    LastEntry
   deriving (Eq, Show, Generic, NoThunks)

 -- | Read the entry at the given 'SecondaryOffset'. Interpret it as an EBB
 -- depending on the given 'IsEBB'.
 readEntry ::
-     forall m blk h.
-     ( HasCallStack
-     , ConvertRawHash blk
-     , MonadThrow m
-     , StandardHash blk
-     , Typeable blk
-     )
-  => HasFS m h
-  -> ChunkNo
-  -> IsEBB
-  -> SecondaryOffset
-  -> m (Entry blk, BlockSize)
-readEntry hasFS chunk isEBB slotOffset = runIdentity <$>
-    readEntries hasFS chunk (Identity (isEBB, slotOffset))
+  forall m blk h.
+  ( HasCallStack
+  , ConvertRawHash blk
+  , MonadThrow m
+  , StandardHash blk
+  , Typeable blk
+  ) =>
+  HasFS m h ->
+  ChunkNo ->
+  IsEBB ->
+  SecondaryOffset ->
+  m (Entry blk, BlockSize)
+readEntry hasFS chunk isEBB slotOffset =
+  runIdentity
+    <$> readEntries hasFS chunk (Identity (isEBB, slotOffset))

 -- | Same as 'readEntry', but for multiple entries.
 --
@@ -174,167 +182,186 @@ readEntry hasFS chunk isEBB slotOffset = runIdentity <$>
 -- entry. Use 'readAllEntries' if you want to read all entries in the
 -- secondary index file.
 readEntries ::
-     forall m blk h t.
-     ( HasCallStack
-     , ConvertRawHash blk
-     , MonadThrow m
-     , StandardHash blk
-     , Typeable blk
-     , Traversable t
-     )
-  => HasFS m h
-  -> ChunkNo
-  -> t (IsEBB, SecondaryOffset)
-  -> m (t (Entry blk, BlockSize))
+  forall m blk h t.
+  ( HasCallStack
+  , ConvertRawHash blk
+  , MonadThrow m
+  , StandardHash blk
+  , Typeable blk
+  , Traversable t
+  ) =>
+  HasFS m h ->
+  ChunkNo ->
+  t (IsEBB, SecondaryOffset) ->
+  m (t (Entry blk, BlockSize))
 readEntries hasFS chunk toRead =
-    withFile hasFS secondaryIndexFile ReadMode $ \sHnd -> do
-      -- TODO can we avoid this call to 'hGetSize'?
-      size <- hGetSize sHnd
-      forM toRead $ \(isEBB, slotOffset) -> do
-        let offset = AbsOffset (fromIntegral slotOffset)
-            -- Is there another entry after the entry we need to read so that
-            -- we can read its 'blockOffset' that will allow us to calculate
-            -- the size of the block.
-            anotherEntryAfter = size >=
-              unAbsOffset offset + nbBytes + nbBlockOffsetBytes
-        if anotherEntryAfter then do
+  withFile hasFS secondaryIndexFile ReadMode $ \sHnd -> do
+    -- TODO can we avoid this call to 'hGetSize'?
+    size <- hGetSize sHnd
+    forM toRead $ \(isEBB, slotOffset) -> do
+      let offset = AbsOffset (fromIntegral slotOffset)
+          -- Is there another entry after the entry we need to read so that
+          -- we can read its 'blockOffset' that will allow us to calculate
+          -- the size of the block.
+          anotherEntryAfter =
+            size
+              >= unAbsOffset offset + nbBytes + nbBlockOffsetBytes
+      if anotherEntryAfter
+        then do
           (entry, nextBlockOffset) <-
-            hGetExactlyAt hasFS sHnd (nbBytes + nbBlockOffsetBytes) offset >>=
-              runGet (Proxy @blk) secondaryIndexFile
-                ((,) <$> getEntry isEBB <*> get)
-          let blockSize = fromIntegral $
-                unBlockOffset nextBlockOffset
-              - unBlockOffset (blockOffset entry)
+            hGetExactlyAt hasFS sHnd (nbBytes + nbBlockOffsetBytes) offset
+              >>= runGet
+                (Proxy @blk)
+                secondaryIndexFile
+                ((,) <$> getEntry isEBB <*> get)
+          let blockSize =
+                fromIntegral $
+                  unBlockOffset nextBlockOffset
+                    - unBlockOffset (blockOffset entry)
           return (entry, BlockSize blockSize)
         else do
-          entry <- hGetExactlyAt hasFS sHnd nbBytes offset >>=
-            runGet (Proxy @blk) secondaryIndexFile (getEntry isEBB)
+          entry <-
+            hGetExactlyAt hasFS sHnd nbBytes offset
+              >>= runGet (Proxy @blk) secondaryIndexFile (getEntry isEBB)
           return (entry, LastEntry)
-  where
-    secondaryIndexFile = fsPathSecondaryIndexFile chunk
-    nbBytes = fromIntegral $ entrySize (Proxy @blk)
-    nbBlockOffsetBytes = fromIntegral (sizeOf (blockOffset (error "blockOffset")))
-    HasFS { hGetSize } = hasFS
+ where
+  secondaryIndexFile = fsPathSecondaryIndexFile chunk
+  nbBytes = fromIntegral $ entrySize (Proxy @blk)
+  nbBlockOffsetBytes = fromIntegral (sizeOf (blockOffset (error "blockOffset")))
+  HasFS{hGetSize} = hasFS

 -- | Read all entries in a secondary index file, starting from the given
 -- 'SecondaryOffset' until the stop condition is true or until the end of the
 -- file is reached. The entry for which the stop condition is true will be the
 -- last in the returned list of entries.
 readAllEntries ::
-     forall m blk h.
-     ( HasCallStack
-     , ConvertRawHash blk
-     , MonadThrow m
-     , StandardHash blk
-     , Typeable blk
-     )
-  => HasFS m h
-  -> SecondaryOffset -- ^ Start from this offset
-  -> ChunkNo
-  -> (Entry blk -> Bool) -- ^ Stop condition: stop after this entry
-  -> Word64 -- ^ The size of the chunk file, used to compute
-            -- the size of the last block.
-  -> IsEBB -- ^ Is the first entry to read an EBB?
-  -> m [WithBlockSize (Entry blk)]
+  forall m blk h.
+  ( HasCallStack
+  , ConvertRawHash blk
+  , MonadThrow m
+  , StandardHash blk
+  , Typeable blk
+  ) =>
+  HasFS m h ->
+  -- | Start from this offset
+  SecondaryOffset ->
+  ChunkNo ->
+  -- | Stop condition: stop after this entry
+  (Entry blk -> Bool) ->
+  -- | The size of the chunk file, used to compute
+  -- the size of the last block.
+  Word64 ->
+  -- | Is the first entry to read an EBB?
+  IsEBB ->
+  m [WithBlockSize (Entry blk)]
 readAllEntries hasFS secondaryOffset chunk stopAfter chunkFileSize = \isEBB ->
-    withFile hasFS secondaryIndexFile ReadMode $ \sHnd -> do
-      bl <- hGetAllAt hasFS sHnd (AbsOffset (fromIntegral secondaryOffset))
-      go isEBB bl [] Nothing
-  where
-    secondaryIndexFile = fsPathSecondaryIndexFile chunk
-
-    go :: IsEBB -- ^ Interpret the next entry as an EBB?
-       -> Lazy.ByteString
-       -> [WithBlockSize (Entry blk)] -- ^ Accumulator
-       -> Maybe (Entry blk)
-          -- ^ The previous entry we read. We can only add it to the
-          -- accumulator when we know its block size, which we compute based
-          -- on the next entry's offset.
-       -> m [WithBlockSize (Entry blk)]
-    go isEBB bl acc mbPrevEntry
-      | Lazy.null bl = return $ reverse $
-          (addBlockSize chunkFileSize <$> mbPrevEntry) `consMaybe` acc
-      | otherwise = do
+  withFile hasFS secondaryIndexFile ReadMode $ \sHnd -> do
+    bl <- hGetAllAt hasFS sHnd (AbsOffset (fromIntegral secondaryOffset))
+    go isEBB bl [] Nothing
+ where
+  secondaryIndexFile = fsPathSecondaryIndexFile chunk

+  go ::
+    IsEBB ->
+    -- \^ Interpret the next entry as an EBB?
+    Lazy.ByteString ->
+    [WithBlockSize (Entry blk)] ->
+    -- \^ Accumulator
+    Maybe (Entry blk) ->
+    -- \^ The previous entry we read. We can only add it to the
+    -- accumulator when we know its block size, which we compute based
+    -- on the next entry's offset.
+    m [WithBlockSize (Entry blk)]
+  go isEBB bl acc mbPrevEntry
+    | Lazy.null bl =
+        return $
+          reverse $
+            (addBlockSize chunkFileSize <$> mbPrevEntry) `consMaybe` acc
+    | otherwise = do
        (remaining, entry) <-
          runGetWithUnconsumed (Proxy @blk) secondaryIndexFile (getEntry isEBB) bl
        let offsetAfterPrevBlock = unBlockOffset (blockOffset entry)
-           acc' = (addBlockSize offsetAfterPrevBlock <$> mbPrevEntry)
-                  `consMaybe` acc
-       if stopAfter entry then
-
-         if Lazy.null remaining then
-           return $ reverse $ addBlockSize chunkFileSize entry : acc'
-         else do
-           -- Read the next blockOffset so we can compute the size of the
-           -- last block we read.
-           --
-           -- We know @remaining@ is not empty, so it contains at least the
-           -- next entry (unless the file is invalid) and definitely the
-           -- next entry's block offset.
-           (_, nextBlockOffset) <-
-             runGetWithUnconsumed (Proxy @blk) secondaryIndexFile get remaining
-           return $ reverse $ addBlockSize nextBlockOffset entry : acc'
-
-       else
-         -- Pass 'IsNotEBB' because there can only be one EBB and that must
-         -- be the first one in the file.
-         go IsNotEBB remaining acc' (Just entry)
-
-    -- | Add the block size to an entry, it is computed by subtracting the
-    -- entry's block offset from the offset after the entry's block, i.e.,
-    -- where the next block starts.
-    addBlockSize :: Word64 -> Entry blk -> WithBlockSize (Entry blk)
-    addBlockSize offsetAfter entry = WithBlockSize size entry
-      where
-        size = fromIntegral $ offsetAfter - unBlockOffset (blockOffset entry)
-
-    consMaybe :: Maybe a -> [a] -> [a]
-    consMaybe = maybe id (:)
+           acc' =
+             (addBlockSize offsetAfterPrevBlock <$> mbPrevEntry)
+               `consMaybe` acc
+       if stopAfter entry
+         then
+           if Lazy.null remaining
+             then
+               return $ reverse $ addBlockSize chunkFileSize entry : acc'
+             else do
+               -- Read the next blockOffset so we can compute the size of the
+               -- last block we read.
+               --
+               -- We know @remaining@ is not empty, so it contains at least the
+               -- next entry (unless the file is invalid) and definitely the
+               -- next entry's block offset.
+               (_, nextBlockOffset) <-
+                 runGetWithUnconsumed (Proxy @blk) secondaryIndexFile get remaining
+               return $ reverse $ addBlockSize nextBlockOffset entry : acc'
+         else
+           -- Pass 'IsNotEBB' because there can only be one EBB and that must
+           -- be the first one in the file.
+           go IsNotEBB remaining acc' (Just entry)
+
+  -- \| Add the block size to an entry, it is computed by subtracting the
+  -- entry's block offset from the offset after the entry's block, i.e.,
+  -- where the next block starts.
+  addBlockSize :: Word64 -> Entry blk -> WithBlockSize (Entry blk)
+  addBlockSize offsetAfter entry = WithBlockSize size entry
+   where
+    size = fromIntegral $ offsetAfter - unBlockOffset (blockOffset entry)
+
+  consMaybe :: Maybe a -> [a] -> [a]
+  consMaybe = maybe id (:)

 appendEntry ::
-     forall m blk h. (HasCallStack, ConvertRawHash blk, MonadThrow m)
-  => HasFS m h
-  -> Handle h
-  -> Entry blk
-  -> m Word64
-     -- ^ The number of bytes written
+  forall m blk h.
+  (HasCallStack, ConvertRawHash blk, MonadThrow m) =>
+  HasFS m h ->
+  Handle h ->
+  Entry blk ->
+  -- | The number of bytes written
+  m Word64
 appendEntry hasFS sHnd entry = do
-    bytesWritten <- hPut hasFS sHnd $ Put.execPut $ putEntry entry
-    return $
-      assert (bytesWritten == fromIntegral (entrySize (Proxy @blk))) bytesWritten
+  bytesWritten <- hPut hasFS sHnd $ Put.execPut $ putEntry entry
+  return $
+    assert (bytesWritten == fromIntegral (entrySize (Proxy @blk))) bytesWritten

 -- | Remove all entries after the entry at the given 'SecondaryOffset'. That
 -- entry will now be the last entry in the secondary index file.
 truncateToEntry ::
-     forall m blk h. (HasCallStack, ConvertRawHash blk, MonadThrow m)
-  => Proxy blk
-  -> HasFS m h
-  -> ChunkNo
-  -> SecondaryOffset
-  -> m ()
+  forall m blk h.
+  (HasCallStack, ConvertRawHash blk, MonadThrow m) =>
+  Proxy blk ->
+  HasFS m h ->
+  ChunkNo ->
+  SecondaryOffset ->
+  m ()
 truncateToEntry pb hasFS chunk secondaryOffset =
-    withFile hasFS secondaryIndexFile (AppendMode AllowExisting) $ \sHnd ->
-      hTruncate sHnd offset
-  where
-    secondaryIndexFile = fsPathSecondaryIndexFile chunk
-    HasFS { hTruncate } = hasFS
-    offset = fromIntegral (secondaryOffset + entrySize pb)
+  withFile hasFS secondaryIndexFile (AppendMode AllowExisting) $ \sHnd ->
+    hTruncate sHnd offset
+ where
+  secondaryIndexFile = fsPathSecondaryIndexFile chunk
+  HasFS{hTruncate} = hasFS
+  offset = fromIntegral (secondaryOffset + entrySize pb)

 writeAllEntries ::
-     forall m blk h. (HasCallStack, ConvertRawHash blk, MonadThrow m)
-  => HasFS m h
-  -> ChunkNo
-  -> [Entry blk]
-  -> m ()
+  forall m blk h.
+  (HasCallStack, ConvertRawHash blk, MonadThrow m) =>
+  HasFS m h ->
+  ChunkNo ->
+  [Entry blk] ->
+  m ()
 writeAllEntries hasFS chunk entries =
-    withFile hasFS secondaryIndexFile (AppendMode AllowExisting) $ \sHnd -> do
-      -- First truncate the file, otherwise we might leave some old contents
-      -- at the end if the new contents are smaller than the previous contents
-      hTruncate sHnd 0
-      mapM_ (appendEntry hasFS sHnd) entries
-  where
-    secondaryIndexFile = fsPathSecondaryIndexFile chunk
-    HasFS { hTruncate } = hasFS
+  withFile hasFS secondaryIndexFile (AppendMode AllowExisting) $ \sHnd -> do
+    -- First truncate the file, otherwise we might leave some old contents
+    -- at the end if the new contents are smaller than the previous contents
+    hTruncate sHnd 0
+    mapM_ (appendEntry hasFS sHnd) entries
+ where
+  secondaryIndexFile = fsPathSecondaryIndexFile chunk
+  HasFS{hTruncate} = hasFS

 {------------------------------------------------------------------------------
   Binary functions
@@ -342,8 +369,8 @@ writeAllEntries hasFS chunk entries =
 getHash :: ConvertRawHash blk => Proxy blk -> Get (HeaderHash blk)
 getHash pb = do
-    bytes <- Get.getByteString (fromIntegral (hashSize pb))
-    return $! fromRawHash pb bytes
+  bytes <- Get.getByteString (fromIntegral (hashSize pb))
+  return $! fromRawHash pb bytes

 putHash :: ConvertRawHash blk => Proxy blk -> HeaderHash blk -> Put
 putHash pb = Put.putShortByteString . toShortRawHash pb
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Iterator.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Iterator.hs
index ac897e5e6a..60d7df22a4 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Iterator.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Iterator.hs
@@ -8,46 +8,54 @@
 {-# LANGUAGE ScopedTypeVariables #-}
 {-# LANGUAGE StandaloneDeriving #-}

-module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Iterator (
-    CurrentChunkInfo (..)
+module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Iterator
+  ( CurrentChunkInfo (..)
   , extractBlockComponent
   , getSlotInfo
   , streamImpl
   ) where

-import           Cardano.Prelude (forceElemsToWHNF)
-import qualified Codec.CBOR.Read as CBOR
-import           Control.Monad (unless, void, when)
-import           Control.Monad.Except (ExceptT, runExceptT, throwError)
-import           Control.Monad.Trans.Class (lift)
-import           Control.ResourceRegistry (ResourceKey, ResourceRegistry,
                     allocate, release, unsafeRelease)
-import qualified Data.ByteString.Lazy as Lazy
-import qualified Data.ByteString.Short as Short
-import           Data.Foldable (find)
-import           Data.Functor ((<&>))
-import           Data.List.NonEmpty (NonEmpty)
-import qualified Data.List.NonEmpty as NE
-import           GHC.Generics (Generic)
-import           GHC.Stack (HasCallStack)
-import           Ouroboros.Consensus.Block hiding (headerHash)
-import           Ouroboros.Consensus.Storage.Common
-import           Ouroboros.Consensus.Storage.ImmutableDB.API hiding
                     (getBlockComponent)
-import           Ouroboros.Consensus.Storage.ImmutableDB.Chunks
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index (Index)
-import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index as Index
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary
                     (BlockOffset (..), BlockSize (..))
-import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary as Secondary
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl.State
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types
-import           Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util
-import           Ouroboros.Consensus.Storage.Serialisation
-import           Ouroboros.Consensus.Util.IOLike
-import           Ouroboros.Network.SizeInBytes
-import           System.FS.API.Lazy
-import           System.FS.CRC
+import Cardano.Prelude (forceElemsToWHNF)
+import Codec.CBOR.Read qualified as CBOR
+import Control.Monad (unless, void, when)
+import Control.Monad.Except (ExceptT, runExceptT, throwError)
+import Control.Monad.Trans.Class (lift)
+import Control.ResourceRegistry
+  ( ResourceKey
+  , ResourceRegistry
+  , allocate
+  , release
+  , unsafeRelease
+  )
+import Data.ByteString.Lazy qualified as Lazy
+import Data.ByteString.Short qualified as Short
+import Data.Foldable (find)
+import Data.Functor ((<&>))
+import Data.List.NonEmpty (NonEmpty)
+import Data.List.NonEmpty qualified as NE
+import GHC.Generics (Generic)
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Block hiding (headerHash)
+import Ouroboros.Consensus.Storage.Common
+import Ouroboros.Consensus.Storage.ImmutableDB.API hiding
+  ( getBlockComponent
+  )
+import Ouroboros.Consensus.Storage.ImmutableDB.Chunks
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index (Index)
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index qualified as Index
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary
+  ( BlockOffset (..)
+  , BlockSize (..)
+  )
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary qualified as Secondary
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.State
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types
+import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util
+import Ouroboros.Consensus.Storage.Serialisation
+import Ouroboros.Consensus.Util.IOLike
+import Ouroboros.Network.SizeInBytes
+import System.FS.API.Lazy
+import System.FS.CRC

 {------------------------------------------------------------------------------
   ImmutableDB Iterator Implementation
@@ -57,44 +65,44 @@ import System.FS.CRC
 --
 -- Note: in contrast to 'IteratorState', these fields remain static for the
 -- lifetime of the iterator.
-data IteratorHandle m blk h = IteratorHandle {
-      ithHasFS    :: !(HasFS m h)
-      -- ^ Bundled HasFS instance because of the existential @h@.
-    , ithIndex    :: !(Index m blk h)
-      -- ^ Bundled Index instance because of the existential @h@.
-    , ithVarState :: !(StrictTVar m (IteratorStateOrExhausted m blk h))
-      -- ^ The state of the iterator
-    , ithEndChunk :: !ChunkNo
-      -- ^ The chunk in which the last block to stream is located.
-    , ithEndHash  :: !(HeaderHash blk)
-      -- ^ The has of the last block the iterator should return.
-    }
-
-data IteratorStateOrExhausted m hash h =
-    IteratorStateOpen !(IteratorState m hash h)
+data IteratorHandle m blk h = IteratorHandle
+  { ithHasFS :: !(HasFS m h)
+  -- ^ Bundled HasFS instance because of the existential @h@.
+  , ithIndex :: !(Index m blk h)
+  -- ^ Bundled Index instance because of the existential @h@.
+  , ithVarState :: !(StrictTVar m (IteratorStateOrExhausted m blk h))
+  -- ^ The state of the iterator
+  , ithEndChunk :: !ChunkNo
+  -- ^ The chunk in which the last block to stream is located.
+  , ithEndHash :: !(HeaderHash blk)
+  -- ^ The has of the last block the iterator should return.
+  }
+
+data IteratorStateOrExhausted m hash h
+  = IteratorStateOpen !(IteratorState m hash h)
   | IteratorStateExhausted
   deriving (Generic, NoThunks)

-data IteratorState m blk h = IteratorState {
-      itsChunk        :: !ChunkNo
-      -- ^ The current chunk the iterator is streaming from.
-    , itsChunkHandle  :: !(Handle h)
-      -- ^ A handle to the chunk file corresponding with 'itsChunk'.
-    , itsChunkKey     :: !(ResourceKey m)
-      -- ^ The 'ResourceKey' corresponding to the 'itsChunkHandle'. We use it to
-      -- release the handle from the 'ResourceRegistry'.
-      --
-      -- NOTE: if we only close the handle but don't release the resource, the
-      -- registry will still hold on to the (closed) handle/resource.
-    , itsChunkEntries :: !(NonEmpty (WithBlockSize (Secondary.Entry blk)))
-      -- ^ The entries from the secondary index corresponding to the current
-      -- chunk. The first entry in the list is the next one to stream.
-      --
-      -- Invariant: all the entries in this list must be included in the stream.
-      -- In other words, entries corresponding to blocks after the end bound are
-      -- not included in this list.
-    }
-  deriving (Generic)
+data IteratorState m blk h = IteratorState
+  { itsChunk :: !ChunkNo
+  -- ^ The current chunk the iterator is streaming from.
+  , itsChunkHandle :: !(Handle h)
+  -- ^ A handle to the chunk file corresponding with 'itsChunk'.
+  , itsChunkKey :: !(ResourceKey m)
+  -- ^ The 'ResourceKey' corresponding to the 'itsChunkHandle'. We use it to
+  -- release the handle from the 'ResourceRegistry'.
+  --
+  -- NOTE: if we only close the handle but don't release the resource, the
+  -- registry will still hold on to the (closed) handle/resource.
+  , itsChunkEntries :: !(NonEmpty (WithBlockSize (Secondary.Entry blk)))
+  -- ^ The entries from the secondary index corresponding to the current
+  -- chunk. The first entry in the list is the next one to stream.
+  --
+  -- Invariant: all the entries in this list must be included in the stream.
+  -- In other words, entries corresponding to blocks after the end bound are
+  -- not included in this list.
+  }
+  deriving Generic

 deriving instance (StandardHash blk, IOLike m) => NoThunks (IteratorState m blk h)
@@ -105,146 +113,154 @@ deriving instance (StandardHash blk, IOLike m) => NoThunks (IteratorState m blk
 data CurrentChunkInfo = CurrentChunkInfo !ChunkNo !BlockOffset

 streamImpl ::
-     forall m blk b.
-     ( IOLike m
-     , HasHeader blk
-     , DecodeDisk blk (Lazy.ByteString -> blk)
-     , DecodeDiskDep (NestedCtxt Header) blk
-     , ReconstructNestedCtxt Header blk
-     , HasCallStack
-     )
-  => ImmutableDBEnv m blk
-  -> ResourceRegistry m
-  -> BlockComponent blk b
-  -> StreamFrom blk
-  -> StreamTo blk
-  -> m (Either (MissingBlock blk) (Iterator m blk b))
+  forall m blk b.
+  ( IOLike m
+  , HasHeader blk
+  , DecodeDisk blk (Lazy.ByteString -> blk)
+  , DecodeDiskDep (NestedCtxt Header) blk
+  , ReconstructNestedCtxt Header blk
+  , HasCallStack
+  ) =>
+  ImmutableDBEnv m blk ->
+  ResourceRegistry m ->
+  BlockComponent blk b ->
+  StreamFrom blk ->
+  StreamTo blk ->
+  m (Either (MissingBlock blk) (Iterator m blk b))
 streamImpl dbEnv registry blockComponent = \from to ->
-    withOpenState dbEnv $ \hasFS OpenState{..} -> runExceptT $ do
-      unless (validBounds from to) $
-        lift $ throwApiMisuse $ InvalidIteratorRangeError from to
-
-      endChunkSlot <- checkUpperBound currentIndex currentTip to
-
-      -- When the lower bound is exclusive, we do the same as when it is
-      -- inclusive. We set up the iterator to point at the lower bound. Only at
-      -- the very end of this function do we advance it to the block after it,
-      -- in the case of an exclusive lower bound.
-      (secondaryOffset, startChunkSlot) <-
-        checkLowerBound currentIndex currentTip from
-
-      lift $ do
-        -- 'validBounds' will catch nearly all invalid ranges, except for one:
-        -- streaming from the regular block to the EBB in the same slot. The
-        -- EBB comes before the regular block, so these bounds are invalid.
-        -- However, to distinguish the EBB from the regular block, as both
-        -- have the same slot number, we need to look at the hashes.
-        -- 'validateBounds' doesn't have enough information to do that.
-        when (startChunkSlot > endChunkSlot) $
-          throwApiMisuse $ InvalidIteratorRangeError from to
-
-        let ChunkSlot startChunk startRelSlot = startChunkSlot
-            startIsEBB = relativeSlotIsEBB startRelSlot
-            currentChunkInfo = CurrentChunkInfo currentChunk currentChunkOffset
-            endHash = case to of
-              StreamToInclusive (RealPoint _slot hash) -> hash
-
-        iteratorState <-
-          iteratorStateForChunk
-            hasFS
-            currentIndex
-            registry
-            currentChunkInfo
-            endHash
-            startChunk
-            secondaryOffset
-            startIsEBB
-
-        varIteratorState <- newTVarIO $ IteratorStateOpen iteratorState
-
-        let ith = IteratorHandle {
-              ithHasFS = hasFS
-            , ithIndex = currentIndex
+  withOpenState dbEnv $ \hasFS OpenState{..} -> runExceptT $ do
+    unless (validBounds from to) $
+      lift $
+        throwApiMisuse $
+          InvalidIteratorRangeError from to
+
+    endChunkSlot <- checkUpperBound currentIndex currentTip to
+
+    -- When the lower bound is exclusive, we do the same as when it is
+    -- inclusive. We set up the iterator to point at the lower bound. Only at
+    -- the very end of this function do we advance it to the block after it,
+    -- in the case of an exclusive lower bound.
+    (secondaryOffset, startChunkSlot) <-
+      checkLowerBound currentIndex currentTip from
+
+    lift $ do
+      -- 'validBounds' will catch nearly all invalid ranges, except for one:
+      -- streaming from the regular block to the EBB in the same slot. The
+      -- EBB comes before the regular block, so these bounds are invalid.
+      -- However, to distinguish the EBB from the regular block, as both
+      -- have the same slot number, we need to look at the hashes.
+      -- 'validateBounds' doesn't have enough information to do that.
+      when (startChunkSlot > endChunkSlot) $
+        throwApiMisuse $
+          InvalidIteratorRangeError from to
+
+      let ChunkSlot startChunk startRelSlot = startChunkSlot
+          startIsEBB = relativeSlotIsEBB startRelSlot
+          currentChunkInfo = CurrentChunkInfo currentChunk currentChunkOffset
+          endHash = case to of
+            StreamToInclusive (RealPoint _slot hash) -> hash
+
+      iteratorState <-
+        iteratorStateForChunk
+          hasFS
+          currentIndex
+          registry
+          currentChunkInfo
+          endHash
+          startChunk
+          secondaryOffset
+          startIsEBB
+
+      varIteratorState <- newTVarIO $ IteratorStateOpen iteratorState
+
+      let ith =
+            IteratorHandle
+              { ithHasFS = hasFS
+              , ithIndex = currentIndex
              , ithVarState = varIteratorState
              , ithEndChunk = chunkIndex endChunkSlot
-            , ithEndHash = endHash
+              , ithEndHash = endHash
              }

-        -- When streaming from an exclusive lower bound that is not genesis, we
-        -- have opened the iterator at the bound itself, so we have to skip it
-        -- first.
-        case from of
-          StreamFromExclusive (BlockPoint {}) ->
-            stepIterator registry currentChunkInfo ith
-          _otherwise -> return ()
-
-        return $ mkIterator ith
-  where
-    ImmutableDBEnv { chunkInfo } = dbEnv
-
-    -- | Check the upper bound: check whether it exists in the database (return
-    -- a 'MissingBlock' otherwise), and return the corresponding 'ChunkSlot'.
-    checkUpperBound ::
-         HasCallStack
-      => Index m blk h
-      -> WithOrigin (Tip blk) -- ^ Current tip
-      -> StreamTo blk
-      -> ExceptT (MissingBlock blk) m ChunkSlot
-         -- ^ We can't return 'TipInfo' here because the secondary index does
-         -- not give us block numbers
-    checkUpperBound index currentTip (StreamToInclusive endPt) = do
-      (chunkSlot, _, _) <- getSlotInfo chunkInfo index currentTip endPt
-      return chunkSlot
-
-    -- | Check the lower bound: check whether it exists in the database (return
-    -- a 'MissingBlock' otherwise), and return the corresponding 'ChunkSlot' and
-    -- 'SecondaryOffset'.
-    --
-    -- PRECONDITION: the end bound has been checked already
-    --
-    -- PRECONDITION: the bounds passed the 'validBounds' check
-    --
-    -- Both preconditions combined guarantee us that the tip is not origin and
-    -- that the lower bound is <= the tip.
-    checkLowerBound ::
-         HasCallStack
-      => Index m blk h
-      -> WithOrigin (Tip blk) -- ^ Current tip
-      -> StreamFrom blk
-      -> ExceptT (MissingBlock blk) m (SecondaryOffset, ChunkSlot)
-    checkLowerBound index currentTip = \case
-      StreamFromInclusive startPt -> do
-        (chunkSlot, _, secondaryOffset) <-
-          getSlotInfo chunkInfo index currentTip startPt
-        return (secondaryOffset, chunkSlot)
-      StreamFromExclusive startPt -> case pointToWithOriginRealPoint startPt of
-        Origin -> lift $ findFirstBlock index
-        NotOrigin startPt' -> do
-          (chunkSlot, _, secondaryOffset) <-
-            getSlotInfo chunkInfo index currentTip startPt'
-          return (secondaryOffset, chunkSlot)
-
-    mkIterator :: IteratorHandle m blk h -> Iterator m blk b
-    mkIterator ith = Iterator {
-        iteratorNext = iteratorNextImpl dbEnv ith registry blockComponent
-      , iteratorHasNext = iteratorHasNextImpl dbEnv ith
-      , iteratorClose = iteratorCloseImpl ith
-      }
-
-    -- | Find the 'SecondaryOffset' and 'ChunkSlot' corresponding to the first
-    -- block in the ImmutableDB.
-    --
-    -- PRECONDITION: the ImmutableDB is not empty.
-    findFirstBlock ::
-         HasCallStack
-      => Index m blk h
-      -> m (SecondaryOffset, ChunkSlot)
-    findFirstBlock index = go firstChunkNo
-      where
-        go :: ChunkNo -> m (SecondaryOffset, ChunkSlot)
-        go chunk = Index.readFirstFilledSlot index chunk >>= \case
-          Nothing -> go (nextChunkNo chunk)
-          Just relSlot -> return (0, chunkSlotForRelativeSlot chunk relSlot)
+      -- When streaming from an exclusive lower bound that is not genesis, we
+      -- have opened the iterator at the bound itself, so we have to skip it
+      -- first.
+      case from of
+        StreamFromExclusive (BlockPoint{}) ->
+          stepIterator registry currentChunkInfo ith
+        _otherwise -> return ()
+
+      return $ mkIterator ith
+ where
+  ImmutableDBEnv{chunkInfo} = dbEnv
+
+  -- \| Check the upper bound: check whether it exists in the database (return
+  -- a 'MissingBlock' otherwise), and return the corresponding 'ChunkSlot'.
+  checkUpperBound ::
+    HasCallStack =>
+    Index m blk h ->
+    WithOrigin (Tip blk) ->
+    -- \^ Current tip
+    StreamTo blk ->
+    ExceptT (MissingBlock blk) m ChunkSlot
+  -- \^ We can't return 'TipInfo' here because the secondary index does
+  -- not give us block numbers
+  checkUpperBound index currentTip (StreamToInclusive endPt) = do
+    (chunkSlot, _, _) <- getSlotInfo chunkInfo index currentTip endPt
+    return chunkSlot
+
+  -- \| Check the lower bound: check whether it exists in the database (return
+  -- a 'MissingBlock' otherwise), and return the corresponding 'ChunkSlot' and
+  -- 'SecondaryOffset'.
+  --
+  -- PRECONDITION: the end bound has been checked already
+  --
+  -- PRECONDITION: the bounds passed the 'validBounds' check
+  --
+  -- Both preconditions combined guarantee us that the tip is not origin and
+  -- that the lower bound is <= the tip.
+ checkLowerBound :: + HasCallStack => + Index m blk h -> + WithOrigin (Tip blk) -> + -- \^ Current tip + StreamFrom blk -> + ExceptT (MissingBlock blk) m (SecondaryOffset, ChunkSlot) + checkLowerBound index currentTip = \case + StreamFromInclusive startPt -> do + (chunkSlot, _, secondaryOffset) <- + getSlotInfo chunkInfo index currentTip startPt + return (secondaryOffset, chunkSlot) + StreamFromExclusive startPt -> case pointToWithOriginRealPoint startPt of + Origin -> lift $ findFirstBlock index + NotOrigin startPt' -> do + (chunkSlot, _, secondaryOffset) <- + getSlotInfo chunkInfo index currentTip startPt' + return (secondaryOffset, chunkSlot) + + mkIterator :: IteratorHandle m blk h -> Iterator m blk b + mkIterator ith = + Iterator + { iteratorNext = iteratorNextImpl dbEnv ith registry blockComponent + , iteratorHasNext = iteratorHasNextImpl dbEnv ith + , iteratorClose = iteratorCloseImpl ith + } + + -- \| Find the 'SecondaryOffset' and 'ChunkSlot' corresponding to the first + -- block in the ImmutableDB. + -- + -- PRECONDITION: the ImmutableDB is not empty. + findFirstBlock :: + HasCallStack => + Index m blk h -> + m (SecondaryOffset, ChunkSlot) + findFirstBlock index = go firstChunkNo + where + go :: ChunkNo -> m (SecondaryOffset, ChunkSlot) + go chunk = + Index.readFirstFilledSlot index chunk >>= \case + Nothing -> go (nextChunkNo chunk) + Just relSlot -> return (0, chunkSlotForRelativeSlot chunk relSlot) -- | Get information about the block or EBB at the given slot with the given -- hash. If no such block exists, because the slot is empty, it contains a block @@ -259,73 +275,77 @@ streamImpl dbEnv registry blockComponent = \from to -> -- the 'SecondaryOffset' is for the slot. The secondary index is read to check -- the hash and to return the 'Secondary.Entry'. getSlotInfo :: - forall m blk h. 
(HasCallStack, IOLike m, HasHeader blk) - => ChunkInfo - -> Index m blk h - -> WithOrigin (Tip blk) -- ^ Current tip - -> RealPoint blk - -> ExceptT (MissingBlock blk) m - ( ChunkSlot - , (Secondary.Entry blk, BlockSize) - , SecondaryOffset - ) + forall m blk h. + (HasCallStack, IOLike m, HasHeader blk) => + ChunkInfo -> + Index m blk h -> + -- | Current tip + WithOrigin (Tip blk) -> + RealPoint blk -> + ExceptT + (MissingBlock blk) + m + ( ChunkSlot + , (Secondary.Entry blk, BlockSize) + , SecondaryOffset + ) getSlotInfo chunkInfo index currentTip pt@(RealPoint slot hash) = do - let (chunk, mIfBoundary, ifRegular) = - chunkSlotForUnknownBlock chunkInfo slot - - case currentTip of - NotOrigin (Tip { tipSlotNo }) - | slot <= tipSlotNo - -> return () - _otherwise - -> throwError $ NewerThanTip pt (tipToPoint currentTip) - - -- Obtain the offsets in the secondary index file from the primary index - -- file. The block /could/ still correspond to an EBB, a regular block or - -- both. We will know which one it is when we can check the hashes from - -- the secondary index file with the hash we have. 
- toRead :: NonEmpty (IsEBB, SecondaryOffset) <- case mIfBoundary of - Just ifBoundary -> do - let relatives@(Two relb relr) = chunkRelative <$> Two ifBoundary ifRegular - (offsets, s) <- lift $ Index.readOffsets index chunk relatives - case offsets of - Two Nothing Nothing -> - throwError $ EmptySlot pt chunk [relb, relr] s - Two (Just ebbOffset) (Just blkOffset) -> - return ((IsEBB, ebbOffset) NE.:| [(IsNotEBB, blkOffset)]) - Two (Just ebbOffset) Nothing -> - return ((IsEBB, ebbOffset) NE.:| []) - Two Nothing (Just blkOffset) -> - return ((IsNotEBB, blkOffset) NE.:| []) - Nothing -> do - let relr = chunkRelative ifRegular - (offset, s) <- lift $ Index.readOffset index chunk relr - case offset of - Nothing -> - throwError $ EmptySlot pt chunk [relr] s - Just blkOffset -> - return ((IsNotEBB, blkOffset) NE.:| []) - - entriesWithBlockSizes :: NonEmpty (Secondary.Entry blk, BlockSize) <- - lift $ Index.readEntries index chunk toRead - - -- Return the entry from the secondary index file that matches the - -- expected hash. - (secondaryOffset, (entry, blockSize)) <- - case find ((== hash) . Secondary.headerHash . fst . snd) - (NE.zip (fmap snd toRead) entriesWithBlockSizes) of - Just found -> return found - Nothing -> throwError $ WrongHash pt hashes - where - hashes = Secondary.headerHash . fst <$> entriesWithBlockSizes - - -- Use the secondary index entry to determine whether the slot + hash - -- correspond to an EBB or a regular block. - let chunkSlot = case (mIfBoundary, Secondary.blockOrEBB entry) of - (Just ifBoundary, EBB _) -> ifBoundary - _otherwise -> ifRegular - return (chunkSlot, (entry, blockSize), secondaryOffset) - + let (chunk, mIfBoundary, ifRegular) = + chunkSlotForUnknownBlock chunkInfo slot + + case currentTip of + NotOrigin (Tip{tipSlotNo}) + | slot <= tipSlotNo -> + return () + _otherwise -> + throwError $ NewerThanTip pt (tipToPoint currentTip) + + -- Obtain the offsets in the secondary index file from the primary index + -- file. 
The block /could/ still correspond to an EBB, a regular block or + -- both. We will know which one it is when we can check the hashes from + -- the secondary index file with the hash we have. + toRead :: NonEmpty (IsEBB, SecondaryOffset) <- case mIfBoundary of + Just ifBoundary -> do + let relatives@(Two relb relr) = chunkRelative <$> Two ifBoundary ifRegular + (offsets, s) <- lift $ Index.readOffsets index chunk relatives + case offsets of + Two Nothing Nothing -> + throwError $ EmptySlot pt chunk [relb, relr] s + Two (Just ebbOffset) (Just blkOffset) -> + return ((IsEBB, ebbOffset) NE.:| [(IsNotEBB, blkOffset)]) + Two (Just ebbOffset) Nothing -> + return ((IsEBB, ebbOffset) NE.:| []) + Two Nothing (Just blkOffset) -> + return ((IsNotEBB, blkOffset) NE.:| []) + Nothing -> do + let relr = chunkRelative ifRegular + (offset, s) <- lift $ Index.readOffset index chunk relr + case offset of + Nothing -> + throwError $ EmptySlot pt chunk [relr] s + Just blkOffset -> + return ((IsNotEBB, blkOffset) NE.:| []) + + entriesWithBlockSizes :: NonEmpty (Secondary.Entry blk, BlockSize) <- + lift $ Index.readEntries index chunk toRead + + -- Return the entry from the secondary index file that matches the + -- expected hash. + (secondaryOffset, (entry, blockSize)) <- + case find + ((== hash) . Secondary.headerHash . fst . snd) + (NE.zip (fmap snd toRead) entriesWithBlockSizes) of + Just found -> return found + Nothing -> throwError $ WrongHash pt hashes + where + hashes = Secondary.headerHash . fst <$> entriesWithBlockSizes + + -- Use the secondary index entry to determine whether the slot + hash + -- correspond to an EBB or a regular block. + let chunkSlot = case (mIfBoundary, Secondary.blockOrEBB entry) of + (Just ifBoundary, EBB _) -> ifBoundary + _otherwise -> ifRegular + return (chunkSlot, (entry, blockSize), secondaryOffset) -- | Move the iterator to the next position that can be read from, -- advancing chunks if necessary. 
If no next position can be found, the @@ -336,38 +356,43 @@ getSlotInfo chunkInfo index currentTip pt@(RealPoint slot hash) = do -- -- PRECONDITION: the iterator is not exhausted. stepIterator :: - forall m blk h. (HasCallStack, IOLike m, HasHeader blk) - => ResourceRegistry m - -> CurrentChunkInfo - -> IteratorHandle m blk h - -> m () -stepIterator registry currentChunkInfo - ith@IteratorHandle { ithHasFS, ithIndex, ithVarState, ithEndChunk, ithEndHash } = + forall m blk h. + (HasCallStack, IOLike m, HasHeader blk) => + ResourceRegistry m -> + CurrentChunkInfo -> + IteratorHandle m blk h -> + m () +stepIterator + registry + currentChunkInfo + ith@IteratorHandle{ithHasFS, ithIndex, ithVarState, ithEndChunk, ithEndHash} = atomically (readTVar ithVarState) >>= \case IteratorStateExhausted -> error "precondition violated: iterator must not be exhausted" - IteratorStateOpen its@IteratorState { itsChunkEntries, itsChunkKey, itsChunk } -> + IteratorStateOpen its@IteratorState{itsChunkEntries, itsChunkKey, itsChunk} -> case NE.nonEmpty (NE.tail itsChunkEntries) of -- There are entries left in this chunk, so continue. See the -- invariant on 'itsChunkEntries' Just itsChunkEntries' -> - atomically $ writeTVar ithVarState $ - IteratorStateOpen its { itsChunkEntries = itsChunkEntries' } - + atomically $ + writeTVar ithVarState $ + IteratorStateOpen its{itsChunkEntries = itsChunkEntries'} -- No more entries in this chunk, so open the next. Nothing -> do -- Release the resource, i.e., close the handle. 
void $ release itsChunkKey -- If this was the final chunk, close the iterator - if itsChunk >= ithEndChunk then - iteratorCloseImpl ith - else - openNextChunk (nextChunkNo itsChunk) >>= \its' -> - atomically $ writeTVar ithVarState $ IteratorStateOpen its' - where + if itsChunk >= ithEndChunk + then + iteratorCloseImpl ith + else + openNextChunk (nextChunkNo itsChunk) >>= \its' -> + atomically $ writeTVar ithVarState $ IteratorStateOpen its' + where openNextChunk :: - ChunkNo -- ^ The chunk to open - -> m (IteratorState m blk h) + ChunkNo -> + -- \^ The chunk to open + m (IteratorState m blk h) openNextChunk chunk = Index.readFirstFilledSlot ithIndex chunk >>= \case -- This chunk is empty, look in the next one. @@ -376,7 +401,7 @@ stepIterator registry currentChunkInfo -- when we reach the non-empty chunk containing the end bound. This -- cannot loop forever as an error would be thrown when opening the -- index file(s) of a non-existing chunk. - Nothing -> openNextChunk (nextChunkNo chunk) + Nothing -> openNextChunk (nextChunkNo chunk) Just relSlot -> do -- Note that the only reason we actually open the primary index file -- is to see whether the first block in the chunk is an EBB or not. @@ -385,7 +410,7 @@ stepIterator registry currentChunkInfo -- 'secondaryOffset' will be 0, as the first entry in the secondary -- index file always starts at offset 0. The same is true for -- 'findFirstFilledSlot'. - let firstIsEBB = relativeSlotIsEBB relSlot + let firstIsEBB = relativeSlotIsEBB relSlot secondaryOffset = 0 iteratorStateForChunk @@ -398,116 +423,123 @@ stepIterator registry currentChunkInfo secondaryOffset firstIsEBB - iteratorNextImpl :: - forall m blk b h. 
- ( IOLike m - , HasHeader blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , DecodeDiskDep (NestedCtxt Header) blk - , ReconstructNestedCtxt Header blk - ) - => ImmutableDBEnv m blk - -> IteratorHandle m blk h - -> ResourceRegistry m - -> BlockComponent blk b - -> m (IteratorResult b) -iteratorNextImpl dbEnv ith@IteratorHandle { ithHasFS, ithVarState } registry blockComponent = do - -- The idea is that if the state is not 'IteratorStateExhausted', then the - -- head of 'itsChunkEntries' is always ready to be read. After extracting - -- the block component, 'stepIterator' will advance the iterator to the next - -- block. - atomically (readTVar ithVarState) >>= \case - -- Iterator already closed - IteratorStateExhausted -> return IteratorExhausted - IteratorStateOpen IteratorState { itsChunkEntries, itsChunk, itsChunkHandle } -> - withOpenState dbEnv $ \_ st -> do - let entry = NE.head itsChunkEntries - currentChunkInfo = CurrentChunkInfo - (currentChunk st) - (currentChunkOffset st) - b <- - extractBlockComponent - ithHasFS - chunkInfo - itsChunk - codecConfig - checkIntegrity - itsChunkHandle - entry - blockComponent - stepIterator registry currentChunkInfo ith - return $ IteratorResult b - where - ImmutableDBEnv { codecConfig, chunkInfo, checkIntegrity } = dbEnv + forall m blk b h. + ( IOLike m + , HasHeader blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , DecodeDiskDep (NestedCtxt Header) blk + , ReconstructNestedCtxt Header blk + ) => + ImmutableDBEnv m blk -> + IteratorHandle m blk h -> + ResourceRegistry m -> + BlockComponent blk b -> + m (IteratorResult b) +iteratorNextImpl dbEnv ith@IteratorHandle{ithHasFS, ithVarState} registry blockComponent = do + -- The idea is that if the state is not 'IteratorStateExhausted', then the + -- head of 'itsChunkEntries' is always ready to be read. After extracting + -- the block component, 'stepIterator' will advance the iterator to the next + -- block. 
+ atomically (readTVar ithVarState) >>= \case + -- Iterator already closed + IteratorStateExhausted -> return IteratorExhausted + IteratorStateOpen IteratorState{itsChunkEntries, itsChunk, itsChunkHandle} -> + withOpenState dbEnv $ \_ st -> do + let entry = NE.head itsChunkEntries + currentChunkInfo = + CurrentChunkInfo + (currentChunk st) + (currentChunkOffset st) + b <- + extractBlockComponent + ithHasFS + chunkInfo + itsChunk + codecConfig + checkIntegrity + itsChunkHandle + entry + blockComponent + stepIterator registry currentChunkInfo ith + return $ IteratorResult b + where + ImmutableDBEnv{codecConfig, chunkInfo, checkIntegrity} = dbEnv iteratorHasNextImpl :: - IOLike m - => ImmutableDBEnv m blk - -> IteratorHandle m blk h - -> STM m (Maybe (RealPoint blk)) -iteratorHasNextImpl ImmutableDBEnv { chunkInfo } IteratorHandle { ithVarState } = - readTVar ithVarState <&> \case - IteratorStateExhausted -> Nothing - IteratorStateOpen IteratorState { itsChunkEntries } -> - Just (RealPoint slotNo (Secondary.headerHash nextEntry)) - where - WithBlockSize _ nextEntry NE.:| _ = itsChunkEntries - - slotNo :: SlotNo - slotNo = slotNoOfBlockOrEBB chunkInfo (Secondary.blockOrEBB nextEntry) + IOLike m => + ImmutableDBEnv m blk -> + IteratorHandle m blk h -> + STM m (Maybe (RealPoint blk)) +iteratorHasNextImpl ImmutableDBEnv{chunkInfo} IteratorHandle{ithVarState} = + readTVar ithVarState <&> \case + IteratorStateExhausted -> Nothing + IteratorStateOpen IteratorState{itsChunkEntries} -> + Just (RealPoint slotNo (Secondary.headerHash nextEntry)) + where + WithBlockSize _ nextEntry NE.:| _ = itsChunkEntries + + slotNo :: SlotNo + slotNo = slotNoOfBlockOrEBB chunkInfo (Secondary.blockOrEBB nextEntry) iteratorCloseImpl :: - (HasCallStack, IOLike m) - => IteratorHandle m blk h - -> m () -iteratorCloseImpl IteratorHandle { ithVarState } = do - atomically (readTVar ithVarState) >>= \case - -- Already closed - IteratorStateExhausted -> return () - IteratorStateOpen IteratorState { 
itsChunkKey } -> do - -- First set it to Nothing to indicate it is closed, as the call to - -- 'release' might fail, which would leave the iterator open in an - -- invalid state. - atomically $ writeTVar ithVarState IteratorStateExhausted - -- TODO: we must use 'unsafeRelease' instead of 'release' because we - -- might close the iterator from an /untracked thread/, i.e., a thread - -- that was not spawned by the resource registry (or the thread that - -- opened the resource registry) in which the handle was allocated. - -- - -- This happens in the consensus tests (but not in the actual node), - -- where the protocol threads that open iterators (BlockFetchServer - -- and ChainSyncServer) are spawned using a different resource - -- registry (A) than the one the ImmutableDB (and ChainDB) use (B). - -- When the ChainDB is closed (by the thread that opened B), we're - -- closing all open iterators, i.e., the iterators opened by the - -- protocol threads. So we're releasing handles allocated in resource - -- registry A from a thread tracked by resource registry B. See #1390. - void $ unsafeRelease itsChunkKey + (HasCallStack, IOLike m) => + IteratorHandle m blk h -> + m () +iteratorCloseImpl IteratorHandle{ithVarState} = do + atomically (readTVar ithVarState) >>= \case + -- Already closed + IteratorStateExhausted -> return () + IteratorStateOpen IteratorState{itsChunkKey} -> do + -- First set it to Nothing to indicate it is closed, as the call to + -- 'release' might fail, which would leave the iterator open in an + -- invalid state. + atomically $ writeTVar ithVarState IteratorStateExhausted + -- TODO: we must use 'unsafeRelease' instead of 'release' because we + -- might close the iterator from an /untracked thread/, i.e., a thread + -- that was not spawned by the resource registry (or the thread that + -- opened the resource registry) in which the handle was allocated. 
+ -- + -- This happens in the consensus tests (but not in the actual node), + -- where the protocol threads that open iterators (BlockFetchServer + -- and ChainSyncServer) are spawned using a different resource + -- registry (A) than the one the ImmutableDB (and ChainDB) use (B). + -- When the ChainDB is closed (by the thread that opened B), we're + -- closing all open iterators, i.e., the iterators opened by the + -- protocol threads. So we're releasing handles allocated in resource + -- registry A from a thread tracked by resource registry B. See #1390. + void $ unsafeRelease itsChunkKey iteratorStateForChunk :: - (HasCallStack, HasHeader blk, IOLike m) - => HasFS m h - -> Index m blk h - -> ResourceRegistry m - -> CurrentChunkInfo - -> HeaderHash blk - -- ^ Hash of the end bound - -> ChunkNo - -> SecondaryOffset - -- ^ Where to start in the secondary index - -> IsEBB - -- ^ Whether the first expected block will be an EBB or not. - -> m (IteratorState m blk h) -iteratorStateForChunk hasFS index registry - (CurrentChunkInfo curChunk curChunkOffset) endHash - chunk secondaryOffset firstIsEBB = do + (HasCallStack, HasHeader blk, IOLike m) => + HasFS m h -> + Index m blk h -> + ResourceRegistry m -> + CurrentChunkInfo -> + -- | Hash of the end bound + HeaderHash blk -> + ChunkNo -> + -- | Where to start in the secondary index + SecondaryOffset -> + -- | Whether the first expected block will be an EBB or not. + IsEBB -> + m (IteratorState m blk h) +iteratorStateForChunk + hasFS + index + registry + (CurrentChunkInfo curChunk curChunkOffset) + endHash + chunk + secondaryOffset + firstIsEBB = do -- Open the chunk file. Allocate the handle in the registry so that it -- will be closed in case of an exception. 
- (key, eHnd) <- allocate - registry - (\_key -> hOpen (fsPathChunkFile chunk) ReadMode) - hClose + (key, eHnd) <- + allocate + registry + (\_key -> hOpen (fsPathChunkFile chunk) ReadMode) + hClose -- If the last entry in @entries@ corresponds to the last block in the -- chunk, we cannot calculate the block size based on the next block. @@ -534,79 +566,98 @@ iteratorStateForChunk hasFS index registry -- the state. In this case, the chunk file size (@curChunkOffset@) we are -- passed is consistent with the tip, as it was obtained from the same -- consistent state. - chunkFileSize <- if chunk == curChunk - then return (unBlockOffset curChunkOffset) - else hGetSize eHnd - - entries <- Index.readAllEntries index secondaryOffset chunk - ((== endHash) . Secondary.headerHash) chunkFileSize firstIsEBB + chunkFileSize <- + if chunk == curChunk + then return (unBlockOffset curChunkOffset) + else hGetSize eHnd + + entries <- + Index.readAllEntries + index + secondaryOffset + chunk + ((== endHash) . Secondary.headerHash) + chunkFileSize + firstIsEBB case NE.nonEmpty entries of -- We still haven't encountered the end bound, so it cannot be -- that this non-empty chunk contains no entries <= the end bound. - Nothing -> error - "impossible: there must be entries according to the primary index" - - Just itsChunkEntries -> return IteratorState { - itsChunk = chunk - , itsChunkHandle = eHnd - , itsChunkKey = key - -- Force so we don't store any thunks in the state - , itsChunkEntries = forceElemsToWHNF itsChunkEntries - } - where - HasFS { hOpen, hClose, hGetSize } = hasFS + Nothing -> + error + "impossible: there must be entries according to the primary index" + Just itsChunkEntries -> + return + IteratorState + { itsChunk = chunk + , itsChunkHandle = eHnd + , itsChunkKey = key + , -- Force so we don't store any thunks in the state + itsChunkEntries = forceElemsToWHNF itsChunkEntries + } + where + HasFS{hOpen, hClose, hGetSize} = hasFS extractBlockComponent :: - forall m blk b h. 
- ( HasHeader blk - , ReconstructNestedCtxt Header blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , DecodeDiskDep (NestedCtxt Header) blk - , IOLike m - ) - => HasFS m h - -> ChunkInfo - -> ChunkNo - -> CodecConfig blk - -> (blk -> Bool) - -> Handle h - -> WithBlockSize (Secondary.Entry blk) - -> BlockComponent blk b - -> m b -extractBlockComponent hasFS chunkInfo chunk ccfg checkIntegrity eHnd - (WithBlockSize blockSize entry) = go - where + forall m blk b h. + ( HasHeader blk + , ReconstructNestedCtxt Header blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , DecodeDiskDep (NestedCtxt Header) blk + , IOLike m + ) => + HasFS m h -> + ChunkInfo -> + ChunkNo -> + CodecConfig blk -> + (blk -> Bool) -> + Handle h -> + WithBlockSize (Secondary.Entry blk) -> + BlockComponent blk b -> + m b +extractBlockComponent + hasFS + chunkInfo + chunk + ccfg + checkIntegrity + eHnd + (WithBlockSize blockSize entry) = go + where go :: forall b'. BlockComponent blk b' -> m b' go = \case - GetHash -> return headerHash - GetSlot -> return slotNo - GetIsEBB -> return $ isBlockOrEBB blockOrEBB - GetBlockSize -> return $ SizeInBytes blockSize - GetHeaderSize -> return $ fromIntegral $ Secondary.unHeaderSize headerSize - GetRawBlock -> readBlock - GetRawHeader -> readHeader - GetNestedCtxt -> readNestedCtxt - GetBlock -> do rawBlk <- go GetRawBlock - parseBlock rawBlk - GetHeader -> do rawHdr <- go GetRawHeader - ctxt <- readNestedCtxt - parseHeader ctxt rawHdr - GetVerifiedBlock -> do blk <- go GetBlock - unless (checkIntegrity blk) $ - throwUnexpectedFailure $ CorruptBlockError pt - return blk - GetPure a -> return a - GetApply f bc -> go f <*> go bc - - Secondary.Entry { - blockOffset - , checksum - , headerHash - , headerSize - , headerOffset - , blockOrEBB - } = entry + GetHash -> return headerHash + GetSlot -> return slotNo + GetIsEBB -> return $ isBlockOrEBB blockOrEBB + GetBlockSize -> return $ SizeInBytes blockSize + GetHeaderSize -> return $ fromIntegral $ 
Secondary.unHeaderSize headerSize + GetRawBlock -> readBlock + GetRawHeader -> readHeader + GetNestedCtxt -> readNestedCtxt + GetBlock -> do + rawBlk <- go GetRawBlock + parseBlock rawBlk + GetHeader -> do + rawHdr <- go GetRawHeader + ctxt <- readNestedCtxt + parseHeader ctxt rawHdr + GetVerifiedBlock -> do + blk <- go GetBlock + unless (checkIntegrity blk) $ + throwUnexpectedFailure $ + CorruptBlockError pt + return blk + GetPure a -> return a + GetApply f bc -> go f <*> go bc + + Secondary.Entry + { blockOffset + , checksum + , headerHash + , headerSize + , headerOffset + , blockOrEBB + } = entry slotNo :: SlotNo slotNo = slotNoOfBlockOrEBB chunkInfo blockOrEBB @@ -614,7 +665,7 @@ extractBlockComponent hasFS chunkInfo chunk ccfg checkIntegrity eHnd pt :: RealPoint blk pt = RealPoint slotNo headerHash - -- | We don't rely on the position of the handle, we always use + -- \| We don't rely on the position of the handle, we always use -- 'hGetExactlyAtCRC', i.e. @pread@ for reading from a given offset. -- -- In case the requested chunk is the current chunk, we will be reading @@ -626,64 +677,68 @@ extractBlockComponent hasFS chunkInfo chunk ccfg checkIntegrity eHnd -- implementations of the 'HasFS' API guarantee this too). readBlock :: m Lazy.ByteString readBlock = do - (bl, checksum') <- hGetExactlyAtCRC hasFS eHnd size offset - checkChecksum chunkFile pt checksum checksum' - return bl - where - size = fromIntegral blockSize - offset = AbsOffset $ Secondary.unBlockOffset blockOffset - chunkFile = fsPathChunkFile chunk - - -- | We don't rely on the position of the handle, we always use + (bl, checksum') <- hGetExactlyAtCRC hasFS eHnd size offset + checkChecksum chunkFile pt checksum checksum' + return bl + where + size = fromIntegral blockSize + offset = AbsOffset $ Secondary.unBlockOffset blockOffset + chunkFile = fsPathChunkFile chunk + + -- \| We don't rely on the position of the handle, we always use -- 'hGetExactlyAt', i.e. 
@pread@ for reading from a given offset. readHeader :: m Lazy.ByteString readHeader = - -- We cannot check the checksum in this case, as we're not reading the - -- whole block - hGetExactlyAt hasFS eHnd size offset - where - size = fromIntegral $ Secondary.unHeaderSize headerSize - offset = AbsOffset $ - (Secondary.unBlockOffset blockOffset) + - fromIntegral (Secondary.unHeaderOffset headerOffset) + -- We cannot check the checksum in this case, as we're not reading the + -- whole block + hGetExactlyAt hasFS eHnd size offset + where + size = fromIntegral $ Secondary.unHeaderSize headerSize + offset = + AbsOffset $ + (Secondary.unBlockOffset blockOffset) + + fromIntegral (Secondary.unHeaderOffset headerOffset) readNestedCtxt :: m (SomeSecond (NestedCtxt Header) blk) readNestedCtxt = do - bytes <- Short.toShort . Lazy.toStrict <$> - hGetExactlyAt hasFS eHnd size offset - return $ reconstructNestedCtxt p bytes (SizeInBytes blockSize) - where - p :: Proxy (Header blk) - p = Proxy + bytes <- + Short.toShort . Lazy.toStrict + <$> hGetExactlyAt hasFS eHnd size offset + return $ reconstructNestedCtxt p bytes (SizeInBytes blockSize) + where + p :: Proxy (Header blk) + p = Proxy - size = fromIntegral (getPrefixLen (reconstructPrefixLen p)) - offset = AbsOffset $ Secondary.unBlockOffset blockOffset + size = fromIntegral (getPrefixLen (reconstructPrefixLen p)) + offset = AbsOffset $ Secondary.unBlockOffset blockOffset parseBlock :: Lazy.ByteString -> m blk - parseBlock bytes = throwParseErrors bytes $ + parseBlock bytes = + throwParseErrors bytes $ CBOR.deserialiseFromBytes (decodeDisk ccfg) bytes parseHeader :: - SomeSecond (NestedCtxt Header) blk - -> Lazy.ByteString - -> m (Header blk) - parseHeader (SomeSecond ctxt) bytes = throwParseErrors bytes $ + SomeSecond (NestedCtxt Header) blk -> + Lazy.ByteString -> + m (Header blk) + parseHeader (SomeSecond ctxt) bytes = + throwParseErrors bytes $ CBOR.deserialiseFromBytes ((\f -> nest . DepPair ctxt . 
f) <$> decodeDiskDep ccfg ctxt) bytes throwParseErrors :: - forall b'. - Lazy.ByteString - -> Either CBOR.DeserialiseFailure (Lazy.ByteString, Lazy.ByteString -> b') - -> m b' + forall b'. + Lazy.ByteString -> + Either CBOR.DeserialiseFailure (Lazy.ByteString, Lazy.ByteString -> b') -> + m b' throwParseErrors fullBytes = \case - Right (trailing, f) - | Lazy.null trailing - -> return $ f fullBytes - | otherwise - -> throwUnexpectedFailure $ - TrailingDataError (fsPathChunkFile chunk) pt trailing - Left err - -> throwUnexpectedFailure $ - ParseError (fsPathChunkFile chunk) pt err + Right (trailing, f) + | Lazy.null trailing -> + return $ f fullBytes + | otherwise -> + throwUnexpectedFailure $ + TrailingDataError (fsPathChunkFile chunk) pt trailing + Left err -> + throwUnexpectedFailure $ + ParseError (fsPathChunkFile chunk) pt err diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Parser.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Parser.hs index 59315baa43..29b1575086 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Parser.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Parser.hs @@ -7,42 +7,44 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Parser ( - BlockSummary (..) +module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Parser + ( BlockSummary (..) , ChunkFileError (..) 
, parseChunkFile ) where -import Codec.CBOR.Decoding (Decoder) -import Data.Bifunctor (first) -import qualified Data.ByteString.Lazy as Lazy -import Data.Functor ((<&>)) -import Data.Word (Word64) -import Ouroboros.Consensus.Block hiding (headerHash) -import Ouroboros.Consensus.Storage.Common -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary as Secondary -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types -import Ouroboros.Consensus.Storage.Serialisation (DecodeDisk (..), - HasBinaryBlockInfo (..)) -import Ouroboros.Consensus.Util.CBOR (withStreamIncrementalOffsets) -import Ouroboros.Consensus.Util.IOLike -import qualified Streaming as S -import Streaming (Of, Stream) -import qualified Streaming.Prelude as S -import System.FS.API (HasFS) -import System.FS.API.Types (FsPath) -import System.FS.CRC +import Codec.CBOR.Decoding (Decoder) +import Data.Bifunctor (first) +import Data.ByteString.Lazy qualified as Lazy +import Data.Functor ((<&>)) +import Data.Word (Word64) +import Ouroboros.Consensus.Block hiding (headerHash) +import Ouroboros.Consensus.Storage.Common +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary qualified as Secondary +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types +import Ouroboros.Consensus.Storage.Serialisation + ( DecodeDisk (..) + , HasBinaryBlockInfo (..) + ) +import Ouroboros.Consensus.Util.CBOR (withStreamIncrementalOffsets) +import Ouroboros.Consensus.Util.IOLike +import Streaming (Of, Stream) +import Streaming qualified as S +import Streaming.Prelude qualified as S +import System.FS.API (HasFS) +import System.FS.API.Types (FsPath) +import System.FS.CRC -- | Information about a block returned by the parser. -- -- The fields of this record are strict to make sure that by evaluating this -- record to WHNF, we no longer hold on to the entire block. Otherwise, we might -- accidentally keep all blocks in a single file in memory during parsing. 
-data BlockSummary blk = BlockSummary { - summaryEntry :: !(Secondary.Entry blk) - , summaryBlockNo :: !BlockNo - , summarySlotNo :: !SlotNo - } +data BlockSummary blk = BlockSummary + { summaryEntry :: !(Secondary.Entry blk) + , summaryBlockNo :: !BlockNo + , summarySlotNo :: !SlotNo + } -- | Parse the contents of a chunk file. -- @@ -70,131 +72,142 @@ data BlockSummary blk = BlockSummary { -- parsed successfully, in which case we still want these valid entries, but -- also want to know about the error so we can truncate the file to get rid of -- the unparseable data. --- parseChunkFile :: - forall m blk h r. - ( IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , DecodeDisk blk (Lazy.ByteString -> blk) - ) - => CodecConfig blk - -> HasFS m h - -> (blk -> Bool) -- ^ Check integrity of the block. 'False' = corrupt. - -> FsPath - -> [CRC] - -> ( Stream (Of (BlockSummary blk, ChainHash blk)) - m - (Maybe (ChunkFileError blk, Word64)) - -> m r - ) - -> m r + forall m blk h r. + ( IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , DecodeDisk blk (Lazy.ByteString -> blk) + ) => + CodecConfig blk -> + HasFS m h -> + -- | Check integrity of the block. 'False' = corrupt. + (blk -> Bool) -> + FsPath -> + [CRC] -> + ( Stream + (Of (BlockSummary blk, ChainHash blk)) + m + (Maybe (ChunkFileError blk, Word64)) -> + m r + ) -> + m r parseChunkFile ccfg hasFS isNotCorrupt fsPath expectedChecksums k = - withStreamIncrementalOffsets hasFS decoder fsPath - ( k + withStreamIncrementalOffsets + hasFS + decoder + fsPath + ( k . checkIfHashesLineUp . checkEntries expectedChecksums . fmap (fmap (first ChunkErrRead)) - ) - where - decoder :: forall s. Decoder s (Lazy.ByteString -> (blk, CRC)) - decoder = decodeDisk ccfg <&> \mkBlk bs -> - let !blk = mkBlk bs + ) + where + decoder :: forall s. 
Decoder s (Lazy.ByteString -> (blk, CRC)) + decoder = + decodeDisk ccfg <&> \mkBlk bs -> + let !blk = mkBlk bs !checksum = computeCRC bs - in (blk, checksum) - - -- | Go over the expected checksums and blocks in parallel. Stop with an - -- error when a block is corrupt. Yield correct entries along the way. - -- - -- If there's an expected checksum and it matches the block's checksum, - -- then the block is correct. Continue with the next. - -- - -- If they do not match or if there's no expected checksum in the stream, - -- check the integrity of the block (expensive). When corrupt, stop - -- parsing blocks and return an error that the block is corrupt. When not - -- corrupt, continue with the next. - checkEntries - :: [CRC] - -- ^ Expected checksums - -> Stream (Of (Word64, (Word64, (blk, CRC)))) - m - (Maybe (ChunkFileError blk, Word64)) - -- ^ Input stream of blocks (with additional info) - -> Stream (Of (BlockSummary blk, ChainHash blk)) - m - (Maybe (ChunkFileError blk, Word64)) - checkEntries = \expected -> mapAccumS expected updateAcc - where - updateAcc - :: [CRC] - -> (Word64, (Word64, (blk, CRC))) - -> Either (Maybe (ChunkFileError blk, Word64)) - ( (BlockSummary blk, ChainHash blk) - , [CRC] - ) - updateAcc expected blkAndInfo@(offset, (_, (blk, checksum))) = - case expected of - expectedChecksum:expected' - | expectedChecksum == checksum - -> Right (entryAndPrevHash, expected') - -- No expected entry or a mismatch - _ | isNotCorrupt blk - -- The (expensive) integrity check passed, so continue - -> Right (entryAndPrevHash, drop 1 expected) - | otherwise - -- The block is corrupt, stop - -> Left $ Just (ChunkErrCorrupt (blockPoint blk), offset) - where - entryAndPrevHash = entryForBlockAndInfo blkAndInfo + in (blk, checksum) - entryForBlockAndInfo - :: (Word64, (Word64, (blk, CRC))) - -> (BlockSummary blk, ChainHash blk) - entryForBlockAndInfo (offset, (_size, (blk, checksum))) = - (blockSummary, prevHash) - where - -- Don't accidentally hold on to the 
block! - !prevHash = blockPrevHash blk - entry = Secondary.Entry { - blockOffset = Secondary.BlockOffset offset - , headerOffset = Secondary.HeaderOffset headerOffset - , headerSize = Secondary.HeaderSize headerSize - , checksum = checksum - , headerHash = blockHash blk - , blockOrEBB = case blockIsEBB blk of - Just epoch -> EBB epoch - Nothing -> Block (blockSlot blk) - } - !blockSummary = BlockSummary { - summaryEntry = entry - , summaryBlockNo = blockNo blk - , summarySlotNo = blockSlot blk - - } - BinaryBlockInfo { headerOffset, headerSize } = getBinaryBlockInfo blk + -- \| Go over the expected checksums and blocks in parallel. Stop with an + -- error when a block is corrupt. Yield correct entries along the way. + -- + -- If there's an expected checksum and it matches the block's checksum, + -- then the block is correct. Continue with the next. + -- + -- If they do not match or if there's no expected checksum in the stream, + -- check the integrity of the block (expensive). When corrupt, stop + -- parsing blocks and return an error that the block is corrupt. When not + -- corrupt, continue with the next. 
+ checkEntries :: + [CRC] -> + -- \^ Expected checksums + Stream + (Of (Word64, (Word64, (blk, CRC)))) + m + (Maybe (ChunkFileError blk, Word64)) -> + -- \^ Input stream of blocks (with additional info) + Stream + (Of (BlockSummary blk, ChainHash blk)) + m + (Maybe (ChunkFileError blk, Word64)) + checkEntries = \expected -> mapAccumS expected updateAcc + where + updateAcc :: + [CRC] -> + (Word64, (Word64, (blk, CRC))) -> + Either + (Maybe (ChunkFileError blk, Word64)) + ( (BlockSummary blk, ChainHash blk) + , [CRC] + ) + updateAcc expected blkAndInfo@(offset, (_, (blk, checksum))) = + case expected of + expectedChecksum : expected' + | expectedChecksum == checksum -> + Right (entryAndPrevHash, expected') + -- No expected entry or a mismatch + _ + | isNotCorrupt blk -> + -- The (expensive) integrity check passed, so continue + Right (entryAndPrevHash, drop 1 expected) + | otherwise -> + -- The block is corrupt, stop + Left $ Just (ChunkErrCorrupt (blockPoint blk), offset) + where + entryAndPrevHash = entryForBlockAndInfo blkAndInfo + entryForBlockAndInfo :: + (Word64, (Word64, (blk, CRC))) -> + (BlockSummary blk, ChainHash blk) + entryForBlockAndInfo (offset, (_size, (blk, checksum))) = + (blockSummary, prevHash) + where + -- Don't accidentally hold on to the block! 
+ !prevHash = blockPrevHash blk + entry = + Secondary.Entry + { blockOffset = Secondary.BlockOffset offset + , headerOffset = Secondary.HeaderOffset headerOffset + , headerSize = Secondary.HeaderSize headerSize + , checksum = checksum + , headerHash = blockHash blk + , blockOrEBB = case blockIsEBB blk of + Just epoch -> EBB epoch + Nothing -> Block (blockSlot blk) + } + !blockSummary = + BlockSummary + { summaryEntry = entry + , summaryBlockNo = blockNo blk + , summarySlotNo = blockSlot blk + } + BinaryBlockInfo{headerOffset, headerSize} = getBinaryBlockInfo blk - checkIfHashesLineUp - :: Stream (Of (BlockSummary blk, ChainHash blk)) - m - (Maybe (ChunkFileError blk, Word64)) - -> Stream (Of (BlockSummary blk, ChainHash blk)) - m - (Maybe (ChunkFileError blk, Word64)) - checkIfHashesLineUp = mapAccumS0 checkFirst checkNext - where - -- We pass the hash of the previous block around as the state (@s@). - checkFirst x@(BlockSummary { summaryEntry }, _) = - Right (x, Secondary.headerHash summaryEntry) + checkIfHashesLineUp :: + Stream + (Of (BlockSummary blk, ChainHash blk)) + m + (Maybe (ChunkFileError blk, Word64)) -> + Stream + (Of (BlockSummary blk, ChainHash blk)) + m + (Maybe (ChunkFileError blk, Word64)) + checkIfHashesLineUp = mapAccumS0 checkFirst checkNext + where + -- We pass the hash of the previous block around as the state (@s@). 
+ checkFirst x@(BlockSummary{summaryEntry}, _) = + Right (x, Secondary.headerHash summaryEntry) - checkNext hashOfPrevBlock x@(BlockSummary { summaryEntry }, prevHash) - | prevHash == BlockHash hashOfPrevBlock - = Right (x, Secondary.headerHash summaryEntry) - | otherwise - = Left (Just (err, offset)) - where - err = ChunkErrHashMismatch hashOfPrevBlock prevHash - offset = Secondary.unBlockOffset $ Secondary.blockOffset summaryEntry + checkNext hashOfPrevBlock x@(BlockSummary{summaryEntry}, prevHash) + | prevHash == BlockHash hashOfPrevBlock = + Right (x, Secondary.headerHash summaryEntry) + | otherwise = + Left (Just (err, offset)) + where + err = ChunkErrHashMismatch hashOfPrevBlock prevHash + offset = Secondary.unBlockOffset $ Secondary.blockOffset summaryEntry {------------------------------------------------------------------------------- Streaming utilities @@ -203,17 +216,19 @@ parseChunkFile ccfg hasFS isNotCorrupt fsPath expectedChecksums k = -- | Thread some state through a 'Stream'. An early return is possible by -- returning 'Left'. mapAccumS :: - Monad m - => s -- ^ Initial state - -> (s -> a -> Either r (b, s)) - -> Stream (Of a) m r - -> Stream (Of b) m r + Monad m => + -- | Initial state + s -> + (s -> a -> Either r (b, s)) -> + Stream (Of a) m r -> + Stream (Of b) m r mapAccumS st0 updateAcc = go st0 - where - go st input = S.lift (S.next input) >>= \case - Left r -> return r + where + go st input = + S.lift (S.next input) >>= \case + Left r -> return r Right (a, input') -> case updateAcc st a of - Left r -> return r + Left r -> return r Right (b, st') -> S.yield b *> go st' input' -- | Variant of 'mapAccumS' that calls the first function argument on the @@ -221,12 +236,13 @@ mapAccumS st0 updateAcc = go st0 -- elements in the stream after the first one, the second function argument is -- used. mapAccumS0 :: - forall m a b r s. 
Monad m - => (a -> Either r (b, s)) - -> (s -> a -> Either r (b, s)) - -> Stream (Of a) m r - -> Stream (Of b) m r + forall m a b r s. + Monad m => + (a -> Either r (b, s)) -> + (s -> a -> Either r (b, s)) -> + Stream (Of a) m r -> + Stream (Of b) m r mapAccumS0 initAcc updateAcc = mapAccumS Nothing updateAcc' - where - updateAcc' :: Maybe s -> a -> Either r (b, Maybe s) - updateAcc' mbSt = fmap (fmap Just) . maybe initAcc updateAcc mbSt + where + updateAcc' :: Maybe s -> a -> Either r (b, Maybe s) + updateAcc' mbSt = fmap (fmap Just) . maybe initAcc updateAcc mbSt diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/State.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/State.hs index 0e6da84388..5b9f694ed6 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/State.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/State.hs @@ -9,12 +9,13 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Storage.ImmutableDB.Impl.State ( - -- * State types +module Ouroboros.Consensus.Storage.ImmutableDB.Impl.State + ( -- * State types ImmutableDBEnv (..) , InternalState (..) , OpenState (..) 
, dbIsOpen + -- * State helpers , ModifyOpenState , cleanUp @@ -25,71 +26,72 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Impl.State ( , withOpenState ) where -import Control.Monad (unless) -import Control.Monad.State.Strict (StateT, lift) -import Control.ResourceRegistry -import Control.Tracer (Tracer) -import Data.Typeable (Typeable) -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.ImmutableDB.API -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index (Index) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index as Index -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary - (BlockOffset (..)) -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util -import Ouroboros.Consensus.Util (SomePair (..)) -import Ouroboros.Consensus.Util.IOLike -import System.FS.API +import Control.Monad (unless) +import Control.Monad.State.Strict (StateT, lift) +import Control.ResourceRegistry +import Control.Tracer (Tracer) +import Data.Typeable (Typeable) +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.ImmutableDB.API +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index (Index) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index qualified as Index +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary + ( BlockOffset (..) 
+ ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util +import Ouroboros.Consensus.Util (SomePair (..)) +import Ouroboros.Consensus.Util.IOLike +import System.FS.API {------------------------------------------------------------------------------ Main types ------------------------------------------------------------------------------} -- | The environment used by the immutable database. -data ImmutableDBEnv m blk = forall h. Eq h => ImmutableDBEnv { - hasFS :: !(HasFS m h) - , varInternalState :: !(StrictSVar m (InternalState m blk h)) - , checkIntegrity :: !(blk -> Bool) - , chunkInfo :: !ChunkInfo - , tracer :: !(Tracer m (TraceEvent blk)) - , cacheConfig :: !Index.CacheConfig - , codecConfig :: !(CodecConfig blk) - } +data ImmutableDBEnv m blk = forall h. Eq h => ImmutableDBEnv + { hasFS :: !(HasFS m h) + , varInternalState :: !(StrictSVar m (InternalState m blk h)) + , checkIntegrity :: !(blk -> Bool) + , chunkInfo :: !ChunkInfo + , tracer :: !(Tracer m (TraceEvent blk)) + , cacheConfig :: !Index.CacheConfig + , codecConfig :: !(CodecConfig blk) + } -data InternalState m blk h = - DbClosed +data InternalState m blk h + = DbClosed | DbOpen !(OpenState m blk h) deriving (Generic, NoThunks) dbIsOpen :: InternalState m blk h -> Bool -dbIsOpen DbClosed = False +dbIsOpen DbClosed = False dbIsOpen (DbOpen _) = True -- | Internal state when the database is open. -data OpenState m blk h = OpenState { - currentChunk :: !ChunkNo - -- ^ The current 'ChunkNo' the immutable store is writing to. - , currentChunkOffset :: !BlockOffset - -- ^ The offset at which the next block will be written in the current - -- chunk file. - , currentSecondaryOffset :: !SecondaryOffset - -- ^ The offset at which the next index entry will be written in the - -- current secondary index. - , currentChunkHandle :: !(Handle h) - -- ^ The write handle for the current chunk file. 
- , currentPrimaryHandle :: !(Handle h) - -- ^ The write handle for the current primary index file. - , currentSecondaryHandle :: !(Handle h) - -- ^ The write handle for the current secondary index file. - , currentTip :: !(WithOrigin (Tip blk)) - -- ^ The current tip of the database. - , currentIndex :: !(Index m blk h) - -- ^ An abstraction layer on top of the indices to allow for caching. - } +data OpenState m blk h = OpenState + { currentChunk :: !ChunkNo + -- ^ The current 'ChunkNo' the immutable store is writing to. + , currentChunkOffset :: !BlockOffset + -- ^ The offset at which the next block will be written in the current + -- chunk file. + , currentSecondaryOffset :: !SecondaryOffset + -- ^ The offset at which the next index entry will be written in the + -- current secondary index. + , currentChunkHandle :: !(Handle h) + -- ^ The write handle for the current chunk file. + , currentPrimaryHandle :: !(Handle h) + -- ^ The write handle for the current primary index file. + , currentSecondaryHandle :: !(Handle h) + -- ^ The write handle for the current secondary index file. + , currentTip :: !(WithOrigin (Tip blk)) + -- ^ The current tip of the database. + , currentIndex :: !(Index m blk h) + -- ^ An abstraction layer on top of the indices to allow for caching. + } deriving (Generic, NoThunks) {------------------------------------------------------------------------------ @@ -98,40 +100,42 @@ data OpenState m blk h = OpenState { -- | Create the internal open state for the given chunk. mkOpenState :: - forall m blk h. (HasCallStack, IOLike m, Eq h) - => HasFS m h - -> Index m blk h - -> ChunkNo - -> WithOrigin (Tip blk) - -> AllowExisting - -> WithTempRegistry (OpenState m blk h) m (OpenState m blk h) + forall m blk h. 
+ (HasCallStack, IOLike m, Eq h) => + HasFS m h -> + Index m blk h -> + ChunkNo -> + WithOrigin (Tip blk) -> + AllowExisting -> + WithTempRegistry (OpenState m blk h) m (OpenState m blk h) mkOpenState hasFS@HasFS{..} index chunk tip existing = do - eHnd <- allocateHandle currentChunkHandle $ hOpen (fsPathChunkFile chunk) appendMode - pHnd <- allocateHandle currentPrimaryHandle $ Index.openPrimaryIndex index chunk existing - sHnd <- allocateHandle currentSecondaryHandle $ hOpen (fsPathSecondaryIndexFile chunk) appendMode - chunkOffset <- lift $ hGetSize eHnd - secondaryOffset <- lift $ hGetSize sHnd - return OpenState - { currentChunk = chunk - , currentChunkOffset = BlockOffset chunkOffset + eHnd <- allocateHandle currentChunkHandle $ hOpen (fsPathChunkFile chunk) appendMode + pHnd <- allocateHandle currentPrimaryHandle $ Index.openPrimaryIndex index chunk existing + sHnd <- allocateHandle currentSecondaryHandle $ hOpen (fsPathSecondaryIndexFile chunk) appendMode + chunkOffset <- lift $ hGetSize eHnd + secondaryOffset <- lift $ hGetSize sHnd + return + OpenState + { currentChunk = chunk + , currentChunkOffset = BlockOffset chunkOffset , currentSecondaryOffset = fromIntegral secondaryOffset - , currentChunkHandle = eHnd - , currentPrimaryHandle = pHnd + , currentChunkHandle = eHnd + , currentPrimaryHandle = pHnd , currentSecondaryHandle = sHnd - , currentTip = tip - , currentIndex = index + , currentTip = tip + , currentIndex = index } - where - appendMode = AppendMode existing + where + appendMode = AppendMode existing - allocateHandle - :: (OpenState m blk h -> Handle h) - -> m (Handle h) - -> WithTempRegistry (OpenState m blk h) m (Handle h) - allocateHandle getHandle open = - -- To check whether the handle made it in the final state, we check for - -- equality. - allocateTemp open (hClose' hasFS) ((==) . 
getHandle) + allocateHandle :: + (OpenState m blk h -> Handle h) -> + m (Handle h) -> + WithTempRegistry (OpenState m blk h) m (Handle h) + allocateHandle getHandle open = + -- To check whether the handle made it in the final state, we check for + -- equality. + allocateTemp open (hClose' hasFS) ((==) . getHandle) -- | Get the 'OpenState' of the given database, throw a 'ClosedDBError' in -- case it is closed. @@ -143,16 +147,17 @@ mkOpenState hasFS@HasFS{..} index chunk tip existing = do -- to use an existing 'HasFS' instance already in scope otherwise, since the -- @h@ parameters would not be known to match. getOpenState :: - forall m blk. (HasCallStack, IOLike m, StandardHash blk, Typeable blk) - => ImmutableDBEnv m blk - -> STM m (SomePair (HasFS m) (OpenState m blk)) -getOpenState ImmutableDBEnv {..} = do - -- We use 'readSVarSTM' to read a potentially stale internal state if - -- somebody's appending to the ImmutableDB at the same time. - internalState <- readSVarSTM varInternalState - case internalState of - DbClosed -> throwApiMisuse $ ClosedDBError @blk - DbOpen openState -> return (SomePair hasFS openState) + forall m blk. + (HasCallStack, IOLike m, StandardHash blk, Typeable blk) => + ImmutableDBEnv m blk -> + STM m (SomePair (HasFS m) (OpenState m blk)) +getOpenState ImmutableDBEnv{..} = do + -- We use 'readSVarSTM' to read a potentially stale internal state if + -- somebody's appending to the ImmutableDB at the same time. + internalState <- readSVarSTM varInternalState + case internalState of + DbClosed -> throwApiMisuse $ ClosedDBError @blk + DbOpen openState -> return (SomePair hasFS openState) -- | Shorthand type ModifyOpenState m blk h = @@ -179,45 +184,47 @@ type ModifyOpenState m blk h = -- gotchas that @modifyMVar@ does; the effects are observable and it is -- susceptible to deadlock. modifyOpenState :: - forall m blk a. (HasCallStack, IOLike m, StandardHash blk, Typeable blk) - => ImmutableDBEnv m blk - -> (forall h. 
Eq h => HasFS m h -> ModifyOpenState m blk h a) - -> m a -modifyOpenState ImmutableDBEnv { hasFS = hasFS :: HasFS m h, .. } modSt = - wrapFsError (Proxy @blk) $ modifyWithTempRegistry getSt putSt (modSt hasFS) - where - getSt :: m (OpenState m blk h) - getSt = mask_ $ takeSVar varInternalState >>= \case - DbOpen ost -> return ost - DbClosed -> do - putSVar varInternalState DbClosed - throwApiMisuse $ ClosedDBError @blk - - putSt :: OpenState m blk h -> ExitCase (OpenState m blk h) -> m () - putSt ost ec = do - -- It is crucial to replace the SVar. - putSVar varInternalState st' - unless (dbIsOpen st') $ cleanUp hasFS ost - where - st' = case ec of - ExitCaseSuccess ost' -> DbOpen ost' + forall m blk a. + (HasCallStack, IOLike m, StandardHash blk, Typeable blk) => + ImmutableDBEnv m blk -> + (forall h. Eq h => HasFS m h -> ModifyOpenState m blk h a) -> + m a +modifyOpenState ImmutableDBEnv{hasFS = hasFS :: HasFS m h, ..} modSt = + wrapFsError (Proxy @blk) $ modifyWithTempRegistry getSt putSt (modSt hasFS) + where + getSt :: m (OpenState m blk h) + getSt = + mask_ $ + takeSVar varInternalState >>= \case + DbOpen ost -> return ost + DbClosed -> do + putSVar varInternalState DbClosed + throwApiMisuse $ ClosedDBError @blk - -- When something goes wrong, close the ImmutableDB for safety. - -- Except for user errors, because they stem from incorrect use of - -- the ImmutableDB. - -- - -- NOTE: we only modify the ImmutableDB in a background thread of - -- the ChainDB, not in per-connection threads that could be killed - -- at any point. When an exception is encountered while modifying - -- the ImmutableDB in the background thread, or that background - -- thread itself is killed with an async exception, we will shut - -- down the node anway, so it is safe to close the ImmutableDB here. 
- ExitCaseAbort -> DbClosed - ExitCaseException ex - | Just (ApiMisuse {} :: ImmutableDBError blk) <- fromException ex - -> DbOpen ost - | otherwise - -> DbClosed + putSt :: OpenState m blk h -> ExitCase (OpenState m blk h) -> m () + putSt ost ec = do + -- It is crucial to replace the SVar. + putSVar varInternalState st' + unless (dbIsOpen st') $ cleanUp hasFS ost + where + st' = case ec of + ExitCaseSuccess ost' -> DbOpen ost' + -- When something goes wrong, close the ImmutableDB for safety. + -- Except for user errors, because they stem from incorrect use of + -- the ImmutableDB. + -- + -- NOTE: we only modify the ImmutableDB in a background thread of + -- the ChainDB, not in per-connection threads that could be killed + -- at any point. When an exception is encountered while modifying + -- the ImmutableDB in the background thread, or that background + -- thread itself is killed with an async exception, we will shut + -- down the node anway, so it is safe to close the ImmutableDB here. + ExitCaseAbort -> DbClosed + ExitCaseException ex + | Just (ApiMisuse{} :: ImmutableDBError blk) <- fromException ex -> + DbOpen ost + | otherwise -> + DbClosed -- | Perform an action that accesses the internal state of an open database. -- @@ -227,60 +234,64 @@ modifyOpenState ImmutableDBEnv { hasFS = hasFS :: HasFS m h, .. } modSt = -- database is closed to prevent further appending to a database in a -- potentially inconsistent state. withOpenState :: - forall m blk r. (HasCallStack, IOLike m, StandardHash blk, Typeable blk) - => ImmutableDBEnv m blk - -> (forall h. HasFS m h -> OpenState m blk h -> m r) - -> m r -withOpenState ImmutableDBEnv { hasFS = hasFS :: HasFS m h, .. } action = do - (mr, ()) <- - generalBracket open (const close) (tryImmutableDB (Proxy @blk) . access) - case mr of - Left e -> throwIO e - Right r -> return r - where - -- We use 'readSVarSTM' to read a potentially stale internal state if - -- somebody's appending to the ImmutableDB at the same time. 
Reads can - -- safely happen concurrently with appends, so this is fine and allows for - -- some extra concurrency. - open :: m (OpenState m blk h) - open = atomically (readSVarSTM varInternalState) >>= \case + forall m blk r. + (HasCallStack, IOLike m, StandardHash blk, Typeable blk) => + ImmutableDBEnv m blk -> + (forall h. HasFS m h -> OpenState m blk h -> m r) -> + m r +withOpenState ImmutableDBEnv{hasFS = hasFS :: HasFS m h, ..} action = do + (mr, ()) <- + generalBracket open (const close) (tryImmutableDB (Proxy @blk) . access) + case mr of + Left e -> throwIO e + Right r -> return r + where + -- We use 'readSVarSTM' to read a potentially stale internal state if + -- somebody's appending to the ImmutableDB at the same time. Reads can + -- safely happen concurrently with appends, so this is fine and allows for + -- some extra concurrency. + open :: m (OpenState m blk h) + open = + atomically (readSVarSTM varInternalState) >>= \case DbOpen ost -> return ost - DbClosed -> throwApiMisuse $ ClosedDBError @blk + DbClosed -> throwApiMisuse $ ClosedDBError @blk - -- close doesn't take the state that @open@ returned, because the state - -- may have been updated by someone else since we got it (remember we're - -- using 'readSVarSTM' here, not 'takeSVar'). So we need to get the most - -- recent state anyway. - close :: ExitCase (Either (ImmutableDBError blk) r) - -> m () - close ec = case ec of - ExitCaseAbort -> return () - ExitCaseException _ex -> return () - ExitCaseSuccess (Right _) -> return () - -- In case of an ImmutableDBError, close when unexpected - ExitCaseSuccess (Left (UnexpectedFailure {})) -> shutDown - ExitCaseSuccess (Left (ApiMisuse {})) -> return () + -- close doesn't take the state that @open@ returned, because the state + -- may have been updated by someone else since we got it (remember we're + -- using 'readSVarSTM' here, not 'takeSVar'). So we need to get the most + -- recent state anyway. 
+ close :: + ExitCase (Either (ImmutableDBError blk) r) -> + m () + close ec = case ec of + ExitCaseAbort -> return () + ExitCaseException _ex -> return () + ExitCaseSuccess (Right _) -> return () + -- In case of an ImmutableDBError, close when unexpected + ExitCaseSuccess (Left (UnexpectedFailure{})) -> shutDown + ExitCaseSuccess (Left (ApiMisuse{})) -> return () - shutDown :: m () - shutDown = swapSVar varInternalState DbClosed >>= \case + shutDown :: m () + shutDown = + swapSVar varInternalState DbClosed >>= \case DbOpen ost -> wrapFsError (Proxy @blk) $ cleanUp hasFS ost - DbClosed -> return () + DbClosed -> return () - access :: OpenState m blk h -> m r - access = action hasFS + access :: OpenState m blk h -> m r + access = action hasFS -- | Close the handles in the 'OpenState'. -- -- Idempotent, as closing a handle is idempotent. closeOpenHandles :: Monad m => HasFS m h -> OpenState m blk h -> m () -closeOpenHandles HasFS { hClose } OpenState {..} = do - hClose currentChunkHandle - hClose currentPrimaryHandle - hClose currentSecondaryHandle +closeOpenHandles HasFS{hClose} OpenState{..} = do + hClose currentChunkHandle + hClose currentPrimaryHandle + hClose currentSecondaryHandle -- | Clean up the 'OpenState': 'closeOpenHandles' + close the index (i.e., -- shut down its background thread) cleanUp :: Monad m => HasFS m h -> OpenState m blk h -> m () -cleanUp hasFS ost@OpenState {..} = do - Index.close currentIndex - closeOpenHandles hasFS ost +cleanUp hasFS ost@OpenState{..} = do + Index.close currentIndex + closeOpenHandles hasFS ost diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Types.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Types.hs index 17bc5cd5df..f85cdd8f0b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Types.hs +++ 
b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Types.hs @@ -2,48 +2,52 @@ {-# LANGUAGE DeriveGeneric #-} {-# LANGUAGE DeriveTraversable #-} -module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types ( - -- * Misc types +module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types + ( -- * Misc types BlockOrEBB (..) , WithBlockSize (..) , isBlockOrEBB + -- * Validation policy , ValidationPolicy (..) + -- * Chunk file error , ChunkFileError (..) + -- * Tracing , TraceCacheEvent (..) , TraceChunkValidation (..) , TraceEvent (..) ) where -import Data.Text (Text) -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.ImmutableDB.API (Tip) -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal - (ChunkNo) -import Ouroboros.Consensus.Util.CBOR (ReadIncrementalErr) +import Data.Text (Text) +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.ImmutableDB.API (Tip) +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal + ( ChunkNo + ) +import Ouroboros.Consensus.Util.CBOR (ReadIncrementalErr) {------------------------------------------------------------------------------ Misc types ------------------------------------------------------------------------------} -data BlockOrEBB = - Block !SlotNo - | EBB !EpochNo +data BlockOrEBB + = Block !SlotNo + | EBB !EpochNo deriving (Eq, Show, Generic, NoThunks) isBlockOrEBB :: BlockOrEBB -> IsEBB isBlockOrEBB (Block _) = IsNotEBB -isBlockOrEBB (EBB _) = IsEBB +isBlockOrEBB (EBB _) = IsEBB -data WithBlockSize a = WithBlockSize { - blockSize :: !Word32 - , withoutBlockSize :: !a - } +data WithBlockSize a = WithBlockSize + { blockSize :: !Word32 + , withoutBlockSize :: !a + } deriving (Eq, Show, Generic, NoThunks, Functor, Foldable, Traversable) 
{------------------------------------------------------------------------------ @@ -59,9 +63,8 @@ data WithBlockSize a = WithBlockSize { -- closed. -- -- The recovery policy dictates which on-disk files should be validated. -data ValidationPolicy = - ValidateMostRecentChunk - -- ^ The chunk and index files of the most recent chunk stored on disk will +data ValidationPolicy + = -- | The chunk and index files of the most recent chunk stored on disk will -- be validated. -- -- Prior chunk and index files are ignored, even their presence will not @@ -72,12 +75,13 @@ data ValidationPolicy = -- -- Because not all files are validated, subsequent operations on the -- database after opening may result in unexpected errors. - | ValidateAllChunks - -- ^ The chunk and index files of all chunks starting from the first one up + ValidateMostRecentChunk + | -- | The chunk and index files of all chunks starting from the first one up -- to the last chunk stored on disk will be validated. -- -- A 'MissingFileError' or an 'InvalidFileError' will be thrown in case of a -- missing or invalid chunk file, or an invalid index file. + ValidateAllChunks deriving (Show, Eq, Generic) {------------------------------------------------------------------------------ @@ -86,68 +90,67 @@ data ValidationPolicy = -- | Defined here instead of in the @Parser@ module because 'TraceEvent' -- depends on it. -data ChunkFileError blk = - -- | A block could not be decoded +data ChunkFileError blk + = -- | A block could not be decoded ChunkErrRead ReadIncrementalErr - - -- | The previous hash of a block did not match the hash of the previous + | -- | The previous hash of a block did not match the hash of the previous -- block. 
- | ChunkErrHashMismatch - (HeaderHash blk) -- ^ The hash of the previous block - (ChainHash blk) -- ^ The previous hash of the block - - -- | The integrity verification of the block with the given point returned + ChunkErrHashMismatch + -- | The hash of the previous block + (HeaderHash blk) + -- | The previous hash of the block + (ChainHash blk) + | -- | The integrity verification of the block with the given point returned -- 'False', indicating that the block got corrupted. - | ChunkErrCorrupt (Point blk) + ChunkErrCorrupt (Point blk) deriving (Eq, Show) {------------------------------------------------------------------------------ Tracing ------------------------------------------------------------------------------} -data TraceEvent blk = - NoValidLastLocation +data TraceEvent blk + = NoValidLastLocation | ValidatedLastLocation ChunkNo (Tip blk) - -- Validation of previous DB - | ChunkValidationEvent (TraceChunkValidation blk ChunkNo) - | ChunkFileDoesntFit (ChainHash blk) (ChainHash blk) - -- ^ The hash of the last block in the previous epoch doesn't match the + | -- Validation of previous DB + ChunkValidationEvent (TraceChunkValidation blk ChunkNo) + | -- | The hash of the last block in the previous epoch doesn't match the -- previous hash of the first block in the current epoch - | Migrating Text - -- ^ Performing a migration of the on-disk files - - -- Delete after - | DeletingAfter (WithOrigin (Tip blk)) - -- Closing the DB - | DBAlreadyClosed + ChunkFileDoesntFit (ChainHash blk) (ChainHash blk) + | -- | Performing a migration of the on-disk files + Migrating Text + | -- Delete after + DeletingAfter (WithOrigin (Tip blk)) + | -- Closing the DB + DBAlreadyClosed | DBClosed - -- Events traced by the index cache - | TraceCacheEvent !TraceCacheEvent + | -- Events traced by the index cache + TraceCacheEvent !TraceCacheEvent deriving (Eq, Generic, Show) -data TraceChunkValidation blk validateTo = - StartedValidatingChunk ChunkNo validateTo - | ValidatedChunk 
ChunkNo validateTo
-  | MissingChunkFile ChunkNo
-  | InvalidChunkFile ChunkNo (ChunkFileError blk)
-  | MissingPrimaryIndex ChunkNo
-  | MissingSecondaryIndex ChunkNo
-  | InvalidPrimaryIndex ChunkNo
-  | InvalidSecondaryIndex ChunkNo
-  | RewritePrimaryIndex ChunkNo
-  | RewriteSecondaryIndex ChunkNo
+data TraceChunkValidation blk validateTo
+  = StartedValidatingChunk ChunkNo validateTo
+  | ValidatedChunk ChunkNo validateTo
+  | MissingChunkFile ChunkNo
+  | InvalidChunkFile ChunkNo (ChunkFileError blk)
+  | MissingPrimaryIndex ChunkNo
+  | MissingSecondaryIndex ChunkNo
+  | InvalidPrimaryIndex ChunkNo
+  | InvalidSecondaryIndex ChunkNo
+  | RewritePrimaryIndex ChunkNo
+  | RewriteSecondaryIndex ChunkNo
   deriving (Generic, Eq, Show, Functor)
 
 -- | The argument with type 'Word32' is the number of past chunks currently in
 -- the cache.
-data TraceCacheEvent =
-    TraceCurrentChunkHit ChunkNo Word32
-  | TracePastChunkHit ChunkNo Word32
-  | TracePastChunkMiss ChunkNo Word32
-  | TracePastChunkEvict ChunkNo Word32
-    -- ^ The least recently used past chunk was evicted because the cache
+data TraceCacheEvent
+  = TraceCurrentChunkHit ChunkNo Word32
+  | TracePastChunkHit ChunkNo Word32
+  | TracePastChunkMiss ChunkNo Word32
+  | -- | The least recently used past chunk was evicted because the cache
     -- was full.
-  | TracePastChunksExpired [ChunkNo] Word32
-    -- ^ Past chunks were expired from the cache because they haven't been
+    TracePastChunkEvict ChunkNo Word32
+  | -- | Past chunks were expired from the cache because they haven't been
    -- used for a while.
+ TracePastChunksExpired [ChunkNo] Word32 deriving (Eq, Generic, Show) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Util.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Util.hs index bbd7e1bb93..54a091b6c5 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Util.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Util.hs @@ -6,8 +6,8 @@ {-# LANGUAGE TupleSections #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util ( - -- * Utilities +module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util + ( -- * Utilities Two (..) , checkChecksum , dbFilesOnDisk @@ -23,25 +23,26 @@ module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util ( , wrapFsError ) where -import Control.Monad (forM_) -import Data.Binary.Get (Get) -import qualified Data.Binary.Get as Get -import qualified Data.ByteString.Lazy as Lazy -import Data.List as List (foldl') -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Text (Text) -import qualified Data.Text as T -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block hiding (hashSize) -import Ouroboros.Consensus.Storage.ImmutableDB.API -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal - (ChunkNo (..)) -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import System.FS.API -import System.FS.CRC -import Text.Read (readMaybe) +import Control.Monad (forM_) +import Data.Binary.Get (Get) +import Data.Binary.Get qualified as Get +import Data.ByteString.Lazy qualified as Lazy +import Data.List as List (foldl') +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Text (Text) +import Data.Text qualified as T +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block hiding (hashSize) +import 
Ouroboros.Consensus.Storage.ImmutableDB.API +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal + ( ChunkNo (..) + ) +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import System.FS.API +import System.FS.CRC +import Text.Read (readMaybe) {------------------------------------------------------------------------------ Utilities @@ -64,8 +65,8 @@ fsPathSecondaryIndexFile = renderFile "secondary" -- | Opposite of 'parseDBFile'. renderFile :: Text -> ChunkNo -> FsPath renderFile fileType (ChunkNo chunk) = fsPathFromList [name] - where - name = T.justifyRight 5 '0' (T.pack (show chunk)) <> "." <> fileType + where + name = T.justifyRight 5 '0' (T.pack (show chunk)) <> "." <> fileType -- | Parse the prefix and chunk number from the filename of an index or chunk -- file. @@ -76,42 +77,44 @@ renderFile fileType (ChunkNo chunk) = fsPathFromList [name] -- Just ("primary", 12) parseDBFile :: String -> Maybe (String, ChunkNo) parseDBFile s = case T.splitOn "." $ T.pack s of - [n, ext] -> (T.unpack ext,) . ChunkNo <$> readMaybe (T.unpack n) - _ -> Nothing + [n, ext] -> (T.unpack ext,) . ChunkNo <$> readMaybe (T.unpack n) + _ -> Nothing -- | Go through all files, making three sets: the set of chunk files, primary -- index files, and secondary index files, discarding all others. 
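The `renderFile`/`parseDBFile` pair above implements the on-disk naming scheme `<zero-padded number>.<chunk|primary|secondary>`. A minimal standalone sketch of that scheme (plain `String`s instead of `Text` and `FsPath`; the names `render` and `parse` are illustrative stand-ins, not the module's actual API):

```haskell
import Data.Char (isDigit)
import Text.Read (readMaybe)

-- Sketch of the naming scheme: 'render' pads the number to five digits,
-- 'parse' recovers the extension and the number. Simplified stand-ins for
-- renderFile/parseDBFile that work on String rather than Text/FsPath.
render :: String -> Word -> String
render fileType chunk = padded ++ "." ++ fileType
 where
  digits = show chunk
  padded = replicate (5 - length digits) '0' ++ digits

parse :: String -> Maybe (String, Word)
parse s = case break (== '.') s of
  (n, '.' : ext)
    | not (null n), all isDigit n -> (,) ext <$> readMaybe n
  _ -> Nothing

-- e.g. render "primary" 12   == "00012.primary"
--      parse "00012.primary" == Just ("primary", 12)
--      parse "garbage"       == Nothing
```

Note that, like `parseDBFile`, parsing is lenient about padding: `parse "12.primary"` also yields `Just ("primary", 12)`.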
dbFilesOnDisk :: Set String -> (Set ChunkNo, Set ChunkNo, Set ChunkNo)
 dbFilesOnDisk = List.foldl' categorise mempty
-  where
-    categorise fs@(!chunk, !primary, !secondary) file =
-      case parseDBFile file of
-        Just ("chunk", n) -> (Set.insert n chunk, primary, secondary)
-        Just ("primary", n) -> (chunk, Set.insert n primary, secondary)
-        Just ("secondary", n) -> (chunk, primary, Set.insert n secondary)
-        _ -> fs
+ where
+  categorise fs@(!chunk, !primary, !secondary) file =
+    case parseDBFile file of
+      Just ("chunk", n) -> (Set.insert n chunk, primary, secondary)
+      Just ("primary", n) -> (chunk, Set.insert n primary, secondary)
+      Just ("secondary", n) -> (chunk, primary, Set.insert n secondary)
+      _ -> fs
 
 -- | Remove all chunk and index files starting from the given chunk (inclusive).
-removeFilesStartingFrom :: (HasCallStack, Monad m)
-                        => HasFS m h
-                        -> ChunkNo
-                        -> m ()
-removeFilesStartingFrom HasFS { removeFile, listDirectory } chunk = do
-    filesInDBFolder <- listDirectory (mkFsPath [])
-    let (chunkFiles, primaryFiles, secondaryFiles) = dbFilesOnDisk filesInDBFolder
-    forM_ (takeWhile (>= chunk) (Set.toDescList chunkFiles)) $ \e ->
-      removeFile (fsPathChunkFile e)
-    forM_ (takeWhile (>= chunk) (Set.toDescList primaryFiles)) $ \i ->
-      removeFile (fsPathPrimaryIndexFile i)
-    forM_ (takeWhile (>= chunk) (Set.toDescList secondaryFiles)) $ \i ->
-      removeFile (fsPathSecondaryIndexFile i)
+removeFilesStartingFrom ::
+  (HasCallStack, Monad m) =>
+  HasFS m h ->
+  ChunkNo ->
+  m ()
+removeFilesStartingFrom HasFS{removeFile, listDirectory} chunk = do
+  filesInDBFolder <- listDirectory (mkFsPath [])
+  let (chunkFiles, primaryFiles, secondaryFiles) = dbFilesOnDisk filesInDBFolder
+  forM_ (takeWhile (>= chunk) (Set.toDescList chunkFiles)) $ \e ->
+    removeFile (fsPathChunkFile e)
+  forM_ (takeWhile (>= chunk) (Set.toDescList primaryFiles)) $ \i ->
+    removeFile (fsPathPrimaryIndexFile i)
+  forM_ (takeWhile (>= chunk) (Set.toDescList secondaryFiles)) $ \i ->
+    removeFile (fsPathSecondaryIndexFile i)
 
 -- | Rewrap 'FsError' in an 'ImmutableDBError'.
 wrapFsError ::
-  forall blk m a. (MonadCatch m, StandardHash blk, Typeable blk)
-  => Proxy blk
-  -> m a
-  -> m a
+  forall blk m a.
+  (MonadCatch m, StandardHash blk, Typeable blk) =>
+  Proxy blk ->
+  m a ->
+  m a
 wrapFsError _ = handle $ throwUnexpectedFailure @blk . FileSystemError
 
 -- | Execute an action and catch the 'ImmutableDBError' and 'FsError' that can
@@ -122,61 +125,64 @@ wrapFsError _ = handle $ throwUnexpectedFailure @blk . FileSystemError
 -- and catch the 'ImmutableDBError' and the 'FsError' (wrapped in the former)
 -- it may throw.
 tryImmutableDB ::
-  forall m blk a. (MonadCatch m, StandardHash blk, Typeable blk)
-  => Proxy blk
-  -> m a
-  -> m (Either (ImmutableDBError blk) a)
+  forall m blk a.
+  (MonadCatch m, StandardHash blk, Typeable blk) =>
+  Proxy blk ->
+  m a ->
+  m (Either (ImmutableDBError blk) a)
 tryImmutableDB pb = try . wrapFsError pb
 
 -- | Wrapper around 'Get.runGetOrFail' that throws an 'InvalidFileError' when
 -- it failed or when there was unconsumed input.
 runGet ::
-  forall blk a m.
-  (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk)
-  => Proxy blk
-  -> FsPath
-  -> Get a
-  -> Lazy.ByteString
-  -> m a
+  forall blk a m.
+ (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) => + Proxy blk -> + FsPath -> + Get a -> + Lazy.ByteString -> + m a runGet _ file get bl = case Get.runGetOrFail get bl of - Right (unconsumed, _, primary) - | Lazy.null unconsumed - -> return primary - | otherwise - -> throwUnexpectedFailure $ - InvalidFileError @blk file "left-over bytes" prettyCallStack - Left (_, _, msg) - -> throwUnexpectedFailure $ - InvalidFileError @blk file msg prettyCallStack + Right (unconsumed, _, primary) + | Lazy.null unconsumed -> + return primary + | otherwise -> + throwUnexpectedFailure $ + InvalidFileError @blk file "left-over bytes" prettyCallStack + Left (_, _, msg) -> + throwUnexpectedFailure $ + InvalidFileError @blk file msg prettyCallStack -- | Same as 'runGet', but allows unconsumed input and returns it. runGetWithUnconsumed :: - forall blk a m. - (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) - => Proxy blk - -> FsPath - -> Get a - -> Lazy.ByteString - -> m (Lazy.ByteString, a) + forall blk a m. + (HasCallStack, MonadThrow m, StandardHash blk, Typeable blk) => + Proxy blk -> + FsPath -> + Get a -> + Lazy.ByteString -> + m (Lazy.ByteString, a) runGetWithUnconsumed _ file get bl = case Get.runGetOrFail get bl of - Right (unconsumed, _, primary) - -> return (unconsumed, primary) - Left (_, _, msg) - -> throwUnexpectedFailure $ - InvalidFileError @blk file msg prettyCallStack + Right (unconsumed, _, primary) -> + return (unconsumed, primary) + Left (_, _, msg) -> + throwUnexpectedFailure $ + InvalidFileError @blk file msg prettyCallStack -- | Check whether the given checksums match. If not, throw a -- 'ChecksumMismatchError'. 
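The `runGet` wrapper above enforces that a decoder consumes its entire input. Its core can be sketched without the ImmutableDB error machinery, using only `runGetOrFail` from the `binary` package (returning `Either String` here instead of throwing `InvalidFileError` is this sketch's simplification):

```haskell
import Data.Binary.Get (Get, getWord8, runGetOrFail)
import qualified Data.ByteString.Lazy as L

-- Run a decoder and insist that it consumes all input, mirroring the
-- "left-over bytes" case of runGet; failures come back as Left.
runGetAll :: Get a -> L.ByteString -> Either String a
runGetAll g bs = case runGetOrFail g bs of
  Right (unconsumed, _, a)
    | L.null unconsumed -> Right a
    | otherwise -> Left "left-over bytes"
  Left (_, _, msg) -> Left msg

-- e.g. runGetAll getWord8 (L.pack [42])   == Right 42
--      runGetAll getWord8 (L.pack [1, 2]) == Left "left-over bytes"
```

`runGetWithUnconsumed` is the same shape minus the `L.null` check: it returns the leftover bytes alongside the result instead of rejecting them.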
checkChecksum :: - (HasCallStack, HasHeader blk, MonadThrow m) - => FsPath - -> RealPoint blk - -> CRC -- ^ Expected checksum - -> CRC -- ^ Actual checksum - -> m () + (HasCallStack, HasHeader blk, MonadThrow m) => + FsPath -> + RealPoint blk -> + -- | Expected checksum + CRC -> + -- | Actual checksum + CRC -> + m () checkChecksum chunkFile pt expected actual - | expected == actual - = return () - | otherwise - = throwUnexpectedFailure $ + | expected == actual = + return () + | otherwise = + throwUnexpectedFailure $ ChecksumMismatchError pt expected actual chunkFile prettyCallStack diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Validation.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Validation.hs index 6ab3e82d46..a9bb19bb4d 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Validation.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Impl/Validation.hs @@ -8,99 +8,110 @@ {-# LANGUAGE TupleSections #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Validation ( - ValidateEnv (..) +module Ouroboros.Consensus.Storage.ImmutableDB.Impl.Validation + ( ValidateEnv (..) , validateAndReopen + -- * Exported for testing purposes , ShouldBeFinalised (..) 
, reconstructPrimaryIndex ) where -import Control.Exception (assert) -import Control.Monad (forM_, unless, when) -import Control.Monad.Except (ExceptT, runExceptT, throwError) -import Control.Monad.Trans.Class (lift) -import Control.ResourceRegistry -import Control.Tracer (Tracer, contramap, traceWith) -import qualified Data.ByteString.Lazy as Lazy -import Data.Functor (($>)) -import Data.Maybe (fromMaybe, mapMaybe) -import qualified Data.Set as Set -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block hiding (hashSize) -import Ouroboros.Consensus.Storage.ImmutableDB.API -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal - (unChunkNo, unsafeEpochNoToChunkNo) -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index - (cachedIndex) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index as Index -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary - (PrimaryIndex) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary as Primary -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary as Secondary -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Parser - (BlockSummary (..), parseChunkFile) -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.State -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types -import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util -import Ouroboros.Consensus.Storage.Serialisation (DecodeDisk (..), - HasBinaryBlockInfo (..)) -import Ouroboros.Consensus.Util (lastMaybe, whenJust) -import Ouroboros.Consensus.Util.IOLike -import Streaming (Of (..)) -import qualified Streaming.Prelude as S -import System.FS.API +import Control.Exception (assert) +import Control.Monad (forM_, unless, when) +import Control.Monad.Except (ExceptT, runExceptT, throwError) +import Control.Monad.Trans.Class (lift) +import Control.ResourceRegistry +import Control.Tracer (Tracer, contramap, traceWith) +import 
Data.ByteString.Lazy qualified as Lazy +import Data.Functor (($>)) +import Data.Maybe (fromMaybe, mapMaybe) +import Data.Set qualified as Set +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block hiding (hashSize) +import Ouroboros.Consensus.Storage.ImmutableDB.API +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal + ( unChunkNo + , unsafeEpochNoToChunkNo + ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index + ( cachedIndex + ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index qualified as Index +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary + ( PrimaryIndex + ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Primary qualified as Primary +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index.Secondary qualified as Secondary +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Parser + ( BlockSummary (..) + , parseChunkFile + ) +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.State +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Types +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Util +import Ouroboros.Consensus.Storage.Serialisation + ( DecodeDisk (..) + , HasBinaryBlockInfo (..) + ) +import Ouroboros.Consensus.Util (lastMaybe, whenJust) +import Ouroboros.Consensus.Util.IOLike +import Streaming (Of (..)) +import Streaming.Prelude qualified as S +import System.FS.API -- | Bundle of arguments used most validation functions. -- -- Note that we don't use "Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index" -- because we are reading and manipulating index files in different ways, e.g., -- truncating them. 
-data ValidateEnv m blk h = ValidateEnv { - hasFS :: !(HasFS m h) - , chunkInfo :: !ChunkInfo - , tracer :: !(Tracer m (TraceEvent blk)) - , cacheConfig :: !Index.CacheConfig - , codecConfig :: !(CodecConfig blk) - , checkIntegrity :: !(blk -> Bool) - } +data ValidateEnv m blk h = ValidateEnv + { hasFS :: !(HasFS m h) + , chunkInfo :: !ChunkInfo + , tracer :: !(Tracer m (TraceEvent blk)) + , cacheConfig :: !Index.CacheConfig + , codecConfig :: !(CodecConfig blk) + , checkIntegrity :: !(blk -> Bool) + } -- | Perform validation as per the 'ValidationPolicy' using 'validate' and -- create an 'OpenState' corresponding to its outcome using 'mkOpenState'. validateAndReopen :: - forall m blk h. - ( IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , ConvertRawHash blk - , Eq h - , HasCallStack - ) - => ValidateEnv m blk h - -> ResourceRegistry m - -> ValidationPolicy - -> WithTempRegistry (OpenState m blk h) m (OpenState m blk h) + forall m blk h. 
+ ( IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , ConvertRawHash blk + , Eq h + , HasCallStack + ) => + ValidateEnv m blk h -> + ResourceRegistry m -> + ValidationPolicy -> + WithTempRegistry (OpenState m blk h) m (OpenState m blk h) validateAndReopen validateEnv registry valPol = wrapFsError (Proxy @blk) $ do - (chunk, tip) <- lift $ validate validateEnv valPol - index <- lift $ cachedIndex - hasFS - registry - cacheTracer - cacheConfig - chunkInfo - chunk - case tip of - Origin -> assert (chunk == firstChunkNo) $ do - lift $ traceWith tracer NoValidLastLocation - mkOpenState hasFS index chunk Origin MustBeNew - NotOrigin tip' -> do - lift $ traceWith tracer $ ValidatedLastLocation chunk tip' - mkOpenState hasFS index chunk tip AllowExisting - where - ValidateEnv { hasFS, tracer, cacheConfig, chunkInfo } = validateEnv - cacheTracer = contramap TraceCacheEvent tracer + (chunk, tip) <- lift $ validate validateEnv valPol + index <- + lift $ + cachedIndex + hasFS + registry + cacheTracer + cacheConfig + chunkInfo + chunk + case tip of + Origin -> assert (chunk == firstChunkNo) $ do + lift $ traceWith tracer NoValidLastLocation + mkOpenState hasFS index chunk Origin MustBeNew + NotOrigin tip' -> do + lift $ traceWith tracer $ ValidatedLastLocation chunk tip' + mkOpenState hasFS index chunk tip AllowExisting + where + ValidateEnv{hasFS, tracer, cacheConfig, chunkInfo} = validateEnv + cacheTracer = contramap TraceCacheEvent tracer -- | Execute the 'ValidationPolicy'. -- @@ -112,179 +123,185 @@ validateAndReopen validateEnv registry valPol = wrapFsError (Proxy @blk) $ do -- we don't have to worry about leaking handles, they will be closed when the -- process terminates. validate :: - forall m blk h. 
- ( IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , ConvertRawHash blk - , HasCallStack - ) - => ValidateEnv m blk h - -> ValidationPolicy - -> m (ChunkNo, WithOrigin (Tip blk)) -validate validateEnv@ValidateEnv{ hasFS, tracer } valPol = do - - -- First migrate any old files before validating them - migrate validateEnv - - filesInDBFolder <- listDirectory (mkFsPath []) - let (chunkFiles, _, _) = dbFilesOnDisk filesInDBFolder - case Set.lookupMax chunkFiles of - Nothing -> do - -- Remove left-over index files - -- TODO calls listDirectory again - removeFilesStartingFrom hasFS firstChunkNo - return (firstChunkNo, Origin) - - Just lastChunkOnDisk -> - let validateTracer = - decorateValidateTracer - lastChunkOnDisk - tracer - in - case valPol of - ValidateAllChunks -> - validateAllChunks validateEnv validateTracer lastChunkOnDisk - ValidateMostRecentChunk -> - validateMostRecentChunk validateEnv validateTracer lastChunkOnDisk - where - HasFS { listDirectory } = hasFS - - -- | Using the Functor instance of TraceChunkValidation, by a contravariant - -- tracer annotate the event with the total number of chunks on the relevant - -- constructors of the datatype. - decorateValidateTracer - :: ChunkNo - -> Tracer m (TraceEvent blk) - -> Tracer m (TraceChunkValidation blk ()) - decorateValidateTracer c' = - contramap (ChunkValidationEvent . fmap (const c')) - + forall m blk h. 
+ ( IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , ConvertRawHash blk + , HasCallStack + ) => + ValidateEnv m blk h -> + ValidationPolicy -> + m (ChunkNo, WithOrigin (Tip blk)) +validate validateEnv@ValidateEnv{hasFS, tracer} valPol = do + -- First migrate any old files before validating them + migrate validateEnv + + filesInDBFolder <- listDirectory (mkFsPath []) + let (chunkFiles, _, _) = dbFilesOnDisk filesInDBFolder + case Set.lookupMax chunkFiles of + Nothing -> do + -- Remove left-over index files + -- TODO calls listDirectory again + removeFilesStartingFrom hasFS firstChunkNo + return (firstChunkNo, Origin) + Just lastChunkOnDisk -> + let validateTracer = + decorateValidateTracer + lastChunkOnDisk + tracer + in case valPol of + ValidateAllChunks -> + validateAllChunks validateEnv validateTracer lastChunkOnDisk + ValidateMostRecentChunk -> + validateMostRecentChunk validateEnv validateTracer lastChunkOnDisk + where + HasFS{listDirectory} = hasFS + + -- \| Using the Functor instance of TraceChunkValidation, by a contravariant + -- tracer annotate the event with the total number of chunks on the relevant + -- constructors of the datatype. + decorateValidateTracer :: + ChunkNo -> + Tracer m (TraceEvent blk) -> + Tracer m (TraceChunkValidation blk ()) + decorateValidateTracer c' = + contramap (ChunkValidationEvent . fmap (const c')) -- | Validate chunks from oldest to newest, stop after the most recent chunk -- on disk. During this validation, keep track of the last valid block we -- encountered. If at the end, that block is not in the last chunk on disk, -- remove the chunk and index files after that chunk. validateAllChunks :: - forall m blk h. 
- ( IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , ConvertRawHash blk - , HasCallStack - ) - => ValidateEnv m blk h - -> Tracer m (TraceChunkValidation blk ()) - -> ChunkNo - -- ^ Most recent chunk on disk - -> m (ChunkNo, WithOrigin (Tip blk)) -validateAllChunks validateEnv@ValidateEnv { hasFS, chunkInfo } validateTracer lastChunk = - go (firstChunkNo, Origin) firstChunkNo GenesisHash - where - go :: - (ChunkNo, WithOrigin (Tip blk)) -- ^ The last valid chunk and tip - -> ChunkNo -- ^ The chunk to validate now - -> ChainHash blk -- ^ The hash of the last block of - -- the previous chunk - -> m (ChunkNo, WithOrigin (Tip blk)) - go lastValid chunk prevHash = do - let shouldBeFinalised = - if chunk == lastChunk - then ShouldNotBeFinalised - else ShouldBeFinalised - runExceptT - (validateChunk validateEnv shouldBeFinalised chunk (Just prevHash) validateTracer) >>= \case - Left () -> cleanup lastValid chunk $> lastValid - Right Nothing -> continueOrStop lastValid chunk prevHash - Right (Just validBlk) -> continueOrStop (chunk, NotOrigin validBlk) chunk prevHash' - where - prevHash' = BlockHash (tipHash validBlk) - - -- | Validate the next chunk, unless the chunk just validated is the last - -- chunk to validate. Cleanup files corresponding to chunks after the - -- chunk in which we found the last valid block. Return that chunk and the - -- tip corresponding to that block. - continueOrStop :: - (ChunkNo, WithOrigin (Tip blk)) - -> ChunkNo -- ^ The chunk just validated - -> ChainHash blk -- ^ The hash of the last block of the previous chunk - -> m (ChunkNo, WithOrigin (Tip blk)) - continueOrStop lastValid chunk prevHash - | chunk < lastChunk - = do + forall m blk h. 
+ ( IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , ConvertRawHash blk + , HasCallStack + ) => + ValidateEnv m blk h -> + Tracer m (TraceChunkValidation blk ()) -> + -- | Most recent chunk on disk + ChunkNo -> + m (ChunkNo, WithOrigin (Tip blk)) +validateAllChunks validateEnv@ValidateEnv{hasFS, chunkInfo} validateTracer lastChunk = + go (firstChunkNo, Origin) firstChunkNo GenesisHash + where + go :: + (ChunkNo, WithOrigin (Tip blk)) -> + -- \^ The last valid chunk and tip + ChunkNo -> + -- \^ The chunk to validate now + ChainHash blk -> + -- \^ The hash of the last block of + -- the previous chunk + m (ChunkNo, WithOrigin (Tip blk)) + go lastValid chunk prevHash = do + let shouldBeFinalised = + if chunk == lastChunk + then ShouldNotBeFinalised + else ShouldBeFinalised + runExceptT + (validateChunk validateEnv shouldBeFinalised chunk (Just prevHash) validateTracer) + >>= \case + Left () -> cleanup lastValid chunk $> lastValid + Right Nothing -> continueOrStop lastValid chunk prevHash + Right (Just validBlk) -> continueOrStop (chunk, NotOrigin validBlk) chunk prevHash' + where + prevHash' = BlockHash (tipHash validBlk) + + -- \| Validate the next chunk, unless the chunk just validated is the last + -- chunk to validate. Cleanup files corresponding to chunks after the + -- chunk in which we found the last valid block. Return that chunk and the + -- tip corresponding to that block. + continueOrStop :: + (ChunkNo, WithOrigin (Tip blk)) -> + ChunkNo -> + -- \^ The chunk just validated + ChainHash blk -> + -- \^ The hash of the last block of the previous chunk + m (ChunkNo, WithOrigin (Tip blk)) + continueOrStop lastValid chunk prevHash + | chunk < lastChunk = + do traceWith validateTracer (ValidatedChunk chunk ()) go lastValid (nextChunkNo chunk) prevHash - | otherwise - = assert (chunk == lastChunk) $ do - -- Cleanup is only needed when the final chunk was empty, yet valid. 
- cleanup lastValid chunk - return lastValid - - -- | Remove left over files from chunks newer than the last chunk - -- containing a valid file. Also unfinalise it if necessary. - cleanup :: - (ChunkNo, WithOrigin (Tip blk)) -- ^ The last valid chunk and tip - -> ChunkNo -- ^ The last validated chunk, could have been invalid or - -- empty - -> m () - cleanup (lastValidChunk, tip) lastValidatedChunk = case tip of - Origin -> - removeFilesStartingFrom hasFS firstChunkNo - NotOrigin _ -> do - removeFilesStartingFrom hasFS (nextChunkNo lastValidChunk) - when (lastValidChunk < lastValidatedChunk) $ - Primary.unfinalise (Proxy @blk) hasFS chunkInfo lastValidChunk + | otherwise = + assert (chunk == lastChunk) $ do + -- Cleanup is only needed when the final chunk was empty, yet valid. + cleanup lastValid chunk + return lastValid + + -- \| Remove left over files from chunks newer than the last chunk + -- containing a valid file. Also unfinalise it if necessary. + cleanup :: + (ChunkNo, WithOrigin (Tip blk)) -> + -- \^ The last valid chunk and tip + ChunkNo -> + -- \^ The last validated chunk, could have been invalid or + -- empty + m () + cleanup (lastValidChunk, tip) lastValidatedChunk = case tip of + Origin -> + removeFilesStartingFrom hasFS firstChunkNo + NotOrigin _ -> do + removeFilesStartingFrom hasFS (nextChunkNo lastValidChunk) + when (lastValidChunk < lastValidatedChunk) $ + Primary.unfinalise (Proxy @blk) hasFS chunkInfo lastValidChunk -- | Validate the given most recent chunk. If that chunk contains no valid -- block, try the chunk before it, and so on. Stop as soon as an chunk with a -- valid block is found, returning that chunk and the tip corresponding to -- that block. If no valid blocks are found, chunk 0 and 'TipGen' is returned. validateMostRecentChunk :: - forall m blk h. 
- ( IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , ConvertRawHash blk - , HasCallStack - ) - => ValidateEnv m blk h - -> Tracer m (TraceChunkValidation blk ()) - -> ChunkNo - -- ^ Most recent chunk on disk, the chunk to validate - -> m (ChunkNo, WithOrigin (Tip blk)) -validateMostRecentChunk validateEnv@ValidateEnv { hasFS } validateTracer c = do - res <- go c - traceWith validateTracer (ValidatedChunk c ()) - return res - where - go :: ChunkNo -> m (ChunkNo, WithOrigin (Tip blk)) - go chunk = runExceptT - (validateChunk validateEnv ShouldNotBeFinalised chunk Nothing validateTracer) >>= \case + forall m blk h. + ( IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , ConvertRawHash blk + , HasCallStack + ) => + ValidateEnv m blk h -> + Tracer m (TraceChunkValidation blk ()) -> + -- | Most recent chunk on disk, the chunk to validate + ChunkNo -> + m (ChunkNo, WithOrigin (Tip blk)) +validateMostRecentChunk validateEnv@ValidateEnv{hasFS} validateTracer c = do + res <- go c + traceWith validateTracer (ValidatedChunk c ()) + return res + where + go :: ChunkNo -> m (ChunkNo, WithOrigin (Tip blk)) + go chunk = + runExceptT + (validateChunk validateEnv ShouldNotBeFinalised chunk Nothing validateTracer) + >>= \case Right (Just validBlk) -> do - -- Found a valid block, we can stop now. - removeFilesStartingFrom hasFS (nextChunkNo chunk) - return (chunk, NotOrigin validBlk) - _ -- This chunk file is unusable: either the chunk is empty or - -- everything after it should be truncated. + -- Found a valid block, we can stop now. + removeFilesStartingFrom hasFS (nextChunkNo chunk) + return (chunk, NotOrigin validBlk) + _ -- This chunk file is unusable: either the chunk is empty or + -- everything after it should be truncated. | Just chunk' <- prevChunkNo chunk -> go chunk' | otherwise -> do - -- Found no valid blocks on disk. 
- -- TODO be more precise in which cases we need which cleanup. - removeFilesStartingFrom hasFS firstChunkNo - return (firstChunkNo, Origin) + -- Found no valid blocks on disk. + -- TODO be more precise in which cases we need which cleanup. + removeFilesStartingFrom hasFS firstChunkNo + return (firstChunkNo, Origin) -- | Iff the chunk is the most recent chunk, it should not be finalised. -- -- With finalising, we mean: if there are one or more empty slots at the end -- of the chunk, the primary index should be padded with offsets to indicate -- that these slots are empty. See 'Primary.backfill'. -data ShouldBeFinalised = - ShouldBeFinalised +data ShouldBeFinalised + = ShouldBeFinalised | ShouldNotBeFinalised - deriving (Show) + deriving Show -- | Validate the given chunk -- @@ -319,248 +336,265 @@ data ShouldBeFinalised = -- -- * All but the most recent chunk in the database should be finalised, i.e. -- padded to the size of the chunk. --- validateChunk :: - forall m blk h. - ( IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , ConvertRawHash blk - , HasCallStack - ) - => ValidateEnv m blk h - -> ShouldBeFinalised - -> ChunkNo - -> Maybe (ChainHash blk) - -- ^ The hash of the last block of the previous chunk. 'Nothing' if - -- unknown. When this is the first chunk, it should be 'Just Origin'. - -> Tracer m (TraceChunkValidation blk ()) - -> ExceptT () m (Maybe (Tip blk)) - -- ^ When non-empty, the 'Tip' corresponds to the last valid block in the - -- chunk. - -- - -- When the chunk file is missing or when we should truncate starting from - -- this chunk because it doesn't fit onto the previous one, @()@ is thrown. - -- - -- Note that when an invalid block is detected, we don't throw, but we - -- truncate the chunk file. When validating the chunk file after it, we - -- would notice it doesn't fit anymore, and then throw. + forall m blk h. 
+ ( IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , ConvertRawHash blk + , HasCallStack + ) => + ValidateEnv m blk h -> + ShouldBeFinalised -> + ChunkNo -> + -- | The hash of the last block of the previous chunk. 'Nothing' if + -- unknown. When this is the first chunk, it should be 'Just Origin'. + Maybe (ChainHash blk) -> + Tracer m (TraceChunkValidation blk ()) -> + -- | When non-empty, the 'Tip' corresponds to the last valid block in the + -- chunk. + -- + -- When the chunk file is missing or when we should truncate starting from + -- this chunk because it doesn't fit onto the previous one, @()@ is thrown. + -- + -- Note that when an invalid block is detected, we don't throw, but we + -- truncate the chunk file. When validating the chunk file after it, we + -- would notice it doesn't fit anymore, and then throw. + ExceptT () m (Maybe (Tip blk)) validateChunk ValidateEnv{..} shouldBeFinalised chunk mbPrevHash validationTracer = do - lift $ traceWith validationTracer $ StartedValidatingChunk chunk () - chunkFileExists <- lift $ doesFileExist chunkFile - unless chunkFileExists $ do - lift $ traceWith validationTracer $ MissingChunkFile chunk - throwError () - - -- Read the entries from the secondary index file, if it exists. - secondaryIndexFileExists <- lift $ doesFileExist secondaryIndexFile - entriesFromSecondaryIndex <- lift $ if secondaryIndexFileExists - then tryJust isInvalidFileError - -- Note the 'maxBound': it is used to calculate the block size for - -- each entry, but we don't care about block sizes here, so we use - -- some dummy value. 
- (Secondary.readAllEntries hasFS 0 chunk (const False) maxBound IsEBB) >>= \case - Left _ -> do - traceWith validationTracer $ InvalidSecondaryIndex chunk - return [] - Right entriesFromFile -> - return $ fixupEBB (map withoutBlockSize entriesFromFile) - else do - traceWith validationTracer $ MissingSecondaryIndex chunk - return [] - - -- Parse the chunk file using the checksums from the secondary index file - -- as input. If the checksums match, the parser doesn't have to do the - -- expensive integrity check of a block. - let expectedChecksums = map Secondary.checksum entriesFromSecondaryIndex - (entriesWithPrevHashes, mbErr) <- lift $ - parseChunkFile - codecConfig - hasFS - checkIntegrity - chunkFile - expectedChecksums - (\entries -> (\(es :> mbErr) -> (es, mbErr)) <$> S.toList entries) - - -- Check whether the first block of this chunk fits onto the last block of - -- the previous chunk. - case entriesWithPrevHashes of - (_, actualPrevHash) : _ - | Just expectedPrevHash <- mbPrevHash - , expectedPrevHash /= actualPrevHash + lift $ traceWith validationTracer $ StartedValidatingChunk chunk () + chunkFileExists <- lift $ doesFileExist chunkFile + unless chunkFileExists $ do + lift $ traceWith validationTracer $ MissingChunkFile chunk + throwError () + + -- Read the entries from the secondary index file, if it exists. + secondaryIndexFileExists <- lift $ doesFileExist secondaryIndexFile + entriesFromSecondaryIndex <- + lift $ + if secondaryIndexFileExists + then + tryJust + isInvalidFileError + -- Note the 'maxBound': it is used to calculate the block size for + -- each entry, but we don't care about block sizes here, so we use + -- some dummy value. 
+ (Secondary.readAllEntries hasFS 0 chunk (const False) maxBound IsEBB) + >>= \case + Left _ -> do + traceWith validationTracer $ InvalidSecondaryIndex chunk + return [] + Right entriesFromFile -> + return $ fixupEBB (map withoutBlockSize entriesFromFile) + else do + traceWith validationTracer $ MissingSecondaryIndex chunk + return [] + + -- Parse the chunk file using the checksums from the secondary index file + -- as input. If the checksums match, the parser doesn't have to do the + -- expensive integrity check of a block. + let expectedChecksums = map Secondary.checksum entriesFromSecondaryIndex + (entriesWithPrevHashes, mbErr) <- + lift $ + parseChunkFile + codecConfig + hasFS + checkIntegrity + chunkFile + expectedChecksums + (\entries -> (\(es :> mbErr) -> (es, mbErr)) <$> S.toList entries) + + -- Check whether the first block of this chunk fits onto the last block of + -- the previous chunk. + case entriesWithPrevHashes of + (_, actualPrevHash) : _ + | Just expectedPrevHash <- mbPrevHash + , expectedPrevHash /= actualPrevHash -> -- The previous hash of the first block in the chunk does not match -- the hash of the last block of the previous chunk. There must be a -- gap. This chunk should be truncated. - -> do - lift $ traceWith tracer $ ChunkFileDoesntFit expectedPrevHash actualPrevHash - throwError () - _ -> return () - - lift $ do - - -- If the parser returneds a deserialisation error, truncate the chunk - -- file. Don't truncate the database just yet, because the - -- deserialisation error may be due to some extra random bytes that - -- shouldn't have been there in the first place. 
- whenJust mbErr $ \(parseErr, endOfLastValidBlock) -> do - traceWith validationTracer $ InvalidChunkFile chunk parseErr - withFile hasFS chunkFile (AppendMode AllowExisting) $ \eHnd -> - hTruncate eHnd endOfLastValidBlock - - -- If the secondary index file is missing, parsing it failed, or it does - -- not match the entries from the chunk file, overwrite it using those - -- (truncate first). - let summary = map fst entriesWithPrevHashes - entries = map summaryEntry summary - when (entriesFromSecondaryIndex /= entries || - not secondaryIndexFileExists) $ do + do + lift $ traceWith tracer $ ChunkFileDoesntFit expectedPrevHash actualPrevHash + throwError () + _ -> return () + + lift $ do + -- If the parser returned a deserialisation error, truncate the chunk + -- file. Don't truncate the database just yet, because the + -- deserialisation error may be due to some extra random bytes that + -- shouldn't have been there in the first place. + whenJust mbErr $ \(parseErr, endOfLastValidBlock) -> do + traceWith validationTracer $ InvalidChunkFile chunk parseErr + withFile hasFS chunkFile (AppendMode AllowExisting) $ \eHnd -> + hTruncate eHnd endOfLastValidBlock + + -- If the secondary index file is missing, parsing it failed, or it does + -- not match the entries from the chunk file, overwrite it using those + -- (truncate first). + let summary = map fst entriesWithPrevHashes + entries = map summaryEntry summary + when + ( entriesFromSecondaryIndex /= entries + || not secondaryIndexFileExists + ) + $ do traceWith validationTracer $ RewriteSecondaryIndex chunk Secondary.writeAllEntries hasFS chunk entries - -- Reconstruct the primary index from the 'Secondary.Entry's. - -- - -- Read the primary index file, if it is missing, parsing fails, or it - -- does not match the reconstructed primary index, overwrite it using - -- the reconstructed index (truncate first).
- let primaryIndex = reconstructPrimaryIndex - (Proxy @blk) - chunkInfo - shouldBeFinalised - chunk - (map Secondary.blockOrEBB entries) - primaryIndexFileExists <- doesFileExist primaryIndexFile - primaryIndexFileMatches <- if primaryIndexFileExists - then tryJust isInvalidFileError (Primary.load (Proxy @blk) hasFS chunk) >>= \case - Left () -> do - traceWith validationTracer $ InvalidPrimaryIndex chunk - return False - Right primaryIndexFromFile -> - return $ primaryIndexFromFile == primaryIndex + -- Reconstruct the primary index from the 'Secondary.Entry's. + -- + -- Read the primary index file; if it is missing, parsing fails, or it + -- does not match the reconstructed primary index, overwrite it using + -- the reconstructed index (truncate first). + let primaryIndex = + reconstructPrimaryIndex + (Proxy @blk) + chunkInfo + shouldBeFinalised + chunk + (map Secondary.blockOrEBB entries) + primaryIndexFileExists <- doesFileExist primaryIndexFile + primaryIndexFileMatches <- + if primaryIndexFileExists + then + tryJust isInvalidFileError (Primary.load (Proxy @blk) hasFS chunk) >>= \case + Left () -> do + traceWith validationTracer $ InvalidPrimaryIndex chunk + return False + Right primaryIndexFromFile -> + return $ primaryIndexFromFile == primaryIndex else do traceWith validationTracer $ MissingPrimaryIndex chunk return False - unless primaryIndexFileMatches $ do - traceWith validationTracer $ RewritePrimaryIndex chunk - Primary.write hasFS chunk primaryIndex - - return $ summaryToTipInfo <$> lastMaybe summary - where - chunkFile = fsPathChunkFile chunk - primaryIndexFile = fsPathPrimaryIndexFile chunk - secondaryIndexFile = fsPathSecondaryIndexFile chunk - - HasFS { hTruncate, doesFileExist } = hasFS - - summaryToTipInfo :: BlockSummary blk -> Tip blk - summaryToTipInfo BlockSummary {..} = Tip { - tipSlotNo = summarySlotNo - , tipIsEBB = isBlockOrEBB $ Secondary.blockOrEBB summaryEntry - , tipBlockNo = summaryBlockNo - , tipHash = Secondary.headerHash
summaryEntry - } - - -- | 'InvalidFileError' is the only error that can be thrown while loading - -- a primary or a secondary index file - isInvalidFileError :: ImmutableDBError blk -> Maybe () - isInvalidFileError = \case - UnexpectedFailure (InvalidFileError {}) -> Just () - _ -> Nothing - - -- | When reading the entries from the secondary index file, we need to - -- pass in a value of type 'IsEBB' so we know whether the first entry - -- corresponds to an EBB or a regular block. We need this information to - -- correctly interpret the deserialised 'Word64' as a 'BlockOrEBB': if - -- it's an EBB, it's the 'EpochNo' ('Word64'), if it's a regular block, - -- it's a 'SlotNo' ('Word64'). - -- - -- However, at the point we are reading the secondary index file, we don't - -- yet know whether the first block will be an EBB or a regular block. We - -- will find that out when we read the actual block from the chunk file. - -- - -- Fortunately, we can make a /very/ good guess: if the 'Word64' of the - -- 'BlockOrEBB' matches the chunk number, it is almost certainly an EBB, - -- as the slot numbers increase @10k@ times faster than chunk numbers - -- (remember that for EBBs, chunk numbers and epoch numbers must line up). - -- Property: for every chunk @e > 0@, for all slot numbers @s@ in chunk - -- @e@ we have @s > e@. The only exception is chunk 0, which contains a - -- slot number 0. From this follows that it's an EBB if and only if the - -- 'Word64' matches the chunk number. - -- - -- E.g., the first slot number in chunk 1 will be 21600 if @k = 2160@. We - -- could only make the wrong guess in the first very first chunk, i.e., - -- chunk 0, as the first slot number is also 0. However, we know that the - -- real blockchain starts with an EBB, so even in that case we're fine. - -- - -- If the chunk size were 1, then we would make the wrong guess for each - -- chunk that contains an EBB, which is a rather unrealistic scenario. 
- -- - -- Note that even making the wrong guess is not a problem. The (CRC) - -- checksums are the only thing we extract from the secondary index file. - -- These are passed to the 'ChunkFileParser'. We then reconstruct the - -- secondary index using the output of the 'ChunkFileParser'. If that - -- output doesn't match the parsed secondary index file, we will overwrite - -- the secondary index file. - -- - -- So the only thing that wouldn't go according to plan is that we will - -- needlessly overwrite the secondary index file. - fixupEBB :: forall hash. [Secondary.Entry hash] -> [Secondary.Entry hash] - fixupEBB = \case - entry@Secondary.Entry { blockOrEBB = EBB epoch' }:rest - | let chunk' = unsafeEpochNoToChunkNo epoch' - , chunk' /= chunk - -> entry { Secondary.blockOrEBB = Block (SlotNo (unChunkNo chunk')) }:rest - entries -> entries + unless primaryIndexFileMatches $ do + traceWith validationTracer $ RewritePrimaryIndex chunk + Primary.write hasFS chunk primaryIndex + + return $ summaryToTipInfo <$> lastMaybe summary + where + chunkFile = fsPathChunkFile chunk + primaryIndexFile = fsPathPrimaryIndexFile chunk + secondaryIndexFile = fsPathSecondaryIndexFile chunk + + HasFS{hTruncate, doesFileExist} = hasFS + + summaryToTipInfo :: BlockSummary blk -> Tip blk + summaryToTipInfo BlockSummary{..} = + Tip + { tipSlotNo = summarySlotNo + , tipIsEBB = isBlockOrEBB $ Secondary.blockOrEBB summaryEntry + , tipBlockNo = summaryBlockNo + , tipHash = Secondary.headerHash summaryEntry + } + + -- \| 'InvalidFileError' is the only error that can be thrown while loading + -- a primary or a secondary index file + isInvalidFileError :: ImmutableDBError blk -> Maybe () + isInvalidFileError = \case + UnexpectedFailure (InvalidFileError{}) -> Just () + _ -> Nothing + + -- \| When reading the entries from the secondary index file, we need to + -- pass in a value of type 'IsEBB' so we know whether the first entry + -- corresponds to an EBB or a regular block. 
We need this information to + -- correctly interpret the deserialised 'Word64' as a 'BlockOrEBB': if + -- it's an EBB, it's the 'EpochNo' ('Word64'), if it's a regular block, + -- it's a 'SlotNo' ('Word64'). + -- + -- However, at the point we are reading the secondary index file, we don't + -- yet know whether the first block will be an EBB or a regular block. We + -- will find that out when we read the actual block from the chunk file. + -- + -- Fortunately, we can make a /very/ good guess: if the 'Word64' of the + -- 'BlockOrEBB' matches the chunk number, it is almost certainly an EBB, + -- as the slot numbers increase @10k@ times faster than chunk numbers + -- (remember that for EBBs, chunk numbers and epoch numbers must line up). + -- Property: for every chunk @e > 0@, for all slot numbers @s@ in chunk + -- @e@ we have @s > e@. The only exception is chunk 0, which contains a + -- slot number 0. From this it follows that it's an EBB if and only if the + -- 'Word64' matches the chunk number. + -- + -- E.g., the first slot number in chunk 1 will be 21600 if @k = 2160@. We + -- could only make the wrong guess in the very first chunk, i.e., + -- chunk 0, as the first slot number is also 0. However, we know that the + -- real blockchain starts with an EBB, so even in that case we're fine. + -- + -- If the chunk size were 1, then we would make the wrong guess for each + -- chunk that contains an EBB, which is a rather unrealistic scenario. + -- + -- Note that even making the wrong guess is not a problem. The (CRC) + -- checksums are the only thing we extract from the secondary index file. + -- These are passed to the 'ChunkFileParser'. We then reconstruct the + -- secondary index using the output of the 'ChunkFileParser'. If that + -- output doesn't match the parsed secondary index file, we will overwrite + -- the secondary index file. + -- + -- So the only thing that wouldn't go according to plan is that we will + -- needlessly overwrite the secondary index file.
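As a side note on the guess documented above: the fixup it motivates can be mimicked in a small self-contained sketch (illustrative only, not part of the patch; the toy `BlockOrEBB` type and this simplified `fixupEBB` drop the real `ChunkNo`/`SlotNo` wrappers and the chunk/epoch conversion):

```haskell
-- Simplified toy of the fixup: entries are first decoded assuming the head
-- is an EBB; if the decoded epoch number does not match the chunk being
-- validated, the head entry is reinterpreted as a regular block in that
-- slot. Numerically the slot equals the misread "epoch", as in the real
-- code's @Block (SlotNo (unChunkNo chunk'))@.
data BlockOrEBB = Block Word | EBB Word deriving (Eq, Show)

fixupEBB :: Word -> [BlockOrEBB] -> [BlockOrEBB]
fixupEBB chunk (EBB epoch : rest)
  | epoch /= chunk = Block epoch : rest
fixupEBB _ entries = entries

main :: IO ()
main = do
  -- Validating chunk 1: a "Word64" of 21600 cannot be epoch 1, so the
  -- head entry was really a regular block in slot 21600.
  print (fixupEBB 1 [EBB 21600, Block 21601])
  -- A genuine EBB of epoch 1 in chunk 1 is left untouched.
  print (fixupEBB 1 [EBB 1, Block 21601])
```

The guard falls through to the second equation when the epoch does match, so genuine EBBs (and entry lists not starting with an EBB) pass through unchanged.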
+ fixupEBB :: forall hash. [Secondary.Entry hash] -> [Secondary.Entry hash] + fixupEBB = \case + entry@Secondary.Entry{blockOrEBB = EBB epoch'} : rest + | let chunk' = unsafeEpochNoToChunkNo epoch' + , chunk' /= chunk -> + entry{Secondary.blockOrEBB = Block (SlotNo (unChunkNo chunk'))} : rest + entries -> entries -- | Reconstruct a 'PrimaryIndex' based on a list of 'Secondary.Entry's. reconstructPrimaryIndex :: - forall blk. (ConvertRawHash blk, HasCallStack) - => Proxy blk - -> ChunkInfo - -> ShouldBeFinalised - -> ChunkNo - -> [BlockOrEBB] - -> PrimaryIndex + forall blk. + (ConvertRawHash blk, HasCallStack) => + Proxy blk -> + ChunkInfo -> + ShouldBeFinalised -> + ChunkNo -> + [BlockOrEBB] -> + PrimaryIndex reconstructPrimaryIndex pb chunkInfo shouldBeFinalised chunk blockOrEBBs = - fromMaybe (error nonIncreasing) $ - Primary.mk chunk . (0:) $ - go (NextRelativeSlot (firstBlockOrEBB chunkInfo chunk)) 0 $ - map (chunkRelative . chunkSlotForBlockOrEBB chunkInfo) blockOrEBBs - where - nonIncreasing :: String - nonIncreasing = "blocks have non-increasing slot numbers" - - go :: HasCallStack - => NextRelativeSlot - -> SecondaryOffset - -> [RelativeSlot] - -> [SecondaryOffset] - go expected lastSecondaryOffset relSlots = - case (expected, relSlots) of - (_, []) -> - case shouldBeFinalised of - ShouldNotBeFinalised -> [] - ShouldBeFinalised -> Primary.backfillChunk - chunkInfo - chunk - expected - lastSecondaryOffset - (NoMoreRelativeSlots, _) -> - -- Assumption: when we validate the chunk file, we check its size - error "reconstructPrimaryIndex: too many entries" - (NextRelativeSlot nextExpectedRelSlot, relSlot:relSlots') -> - if compareRelativeSlot relSlot nextExpectedRelSlot == LT then - error nonIncreasing - else - let backfilled = Primary.backfill - relSlot - nextExpectedRelSlot - lastSecondaryOffset - secondaryOffset = lastSecondaryOffset - + Secondary.entrySize pb - in backfilled ++ secondaryOffset - : go (nextRelativeSlot relSlot) secondaryOffset relSlots' - + 
fromMaybe (error nonIncreasing) $ + Primary.mk chunk . (0 :) $ + go (NextRelativeSlot (firstBlockOrEBB chunkInfo chunk)) 0 $ + map (chunkRelative . chunkSlotForBlockOrEBB chunkInfo) blockOrEBBs + where + nonIncreasing :: String + nonIncreasing = "blocks have non-increasing slot numbers" + + go :: + HasCallStack => + NextRelativeSlot -> + SecondaryOffset -> + [RelativeSlot] -> + [SecondaryOffset] + go expected lastSecondaryOffset relSlots = + case (expected, relSlots) of + (_, []) -> + case shouldBeFinalised of + ShouldNotBeFinalised -> [] + ShouldBeFinalised -> + Primary.backfillChunk + chunkInfo + chunk + expected + lastSecondaryOffset + (NoMoreRelativeSlots, _) -> + -- Assumption: when we validate the chunk file, we check its size + error "reconstructPrimaryIndex: too many entries" + (NextRelativeSlot nextExpectedRelSlot, relSlot : relSlots') -> + if compareRelativeSlot relSlot nextExpectedRelSlot == LT + then + error nonIncreasing + else + let backfilled = + Primary.backfill + relSlot + nextExpectedRelSlot + lastSecondaryOffset + secondaryOffset = + lastSecondaryOffset + + Secondary.entrySize pb + in backfilled + ++ secondaryOffset + : go (nextRelativeSlot relSlot) secondaryOffset relSlots' {------------------------------------------------------------------------------ Migration @@ -606,25 +640,25 @@ reconstructPrimaryIndex pb chunkInfo shouldBeFinalised chunk blockOrEBBs = -- Implementation note: as currently the sole migration we need to be able to -- perform only requires renaming files, we keep it simple for now. 
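The `migrate` step below only renames old `"XXXXX.epoch"` files to the new `".chunk"` naming. That translation can be sketched in a self-contained toy (illustrative only, not part of the patch: `epochToChunkName` is a hypothetical helper, and the zero-padded five-digit format is an assumption based on the `"XXXXX.epoch"` pattern; the real code goes through `parseDBFile` and `fsPathChunkFile` instead):

```haskell
import Data.Char (isDigit)
import Text.Printf (printf)

-- Hypothetical translation of an old "XXXXX.epoch" filename to the new
-- "XXXXX.chunk" naming. Returns Nothing for anything that is not an
-- all-digit name with the ".epoch" extension.
epochToChunkName :: String -> Maybe String
epochToChunkName name = case break (== '.') name of
  (digits, ".epoch")
    | not (null digits)
    , all isDigit digits ->
        Just (printf "%05d.chunk" (read digits :: Int))
  _ -> Nothing

main :: IO ()
main = do
  print (epochToChunkName "00012.epoch") -- Just "00012.chunk"
  print (epochToChunkName "00012.chunk") -- Nothing: already migrated
```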
migrate :: (IOLike m, HasCallStack) => ValidateEnv m blk h -> m () -migrate ValidateEnv { hasFS, tracer } = do - filesInDBFolder <- listDirectory (mkFsPath []) - -- Any old "XXXXX.epoch" files - let epochFileChunkNos :: [(FsPath, ChunkNo)] - epochFileChunkNos = - mapMaybe - (\file -> (mkFsPath [file],) <$> isEpochFile file) - (Set.toAscList filesInDBFolder) - - unless (null epochFileChunkNos) $ do - traceWith tracer $ Migrating ".epoch files to .chunk files" - forM_ epochFileChunkNos $ \(epochFile, chunk) -> - renameFile epochFile (fsPathChunkFile chunk) - where - HasFS { listDirectory, renameFile } = hasFS - - isEpochFile :: String -> Maybe ChunkNo - isEpochFile s = case parseDBFile s of - Just (prefix, chunk) - | prefix == "epoch" - -> Just chunk - _ -> Nothing +migrate ValidateEnv{hasFS, tracer} = do + filesInDBFolder <- listDirectory (mkFsPath []) + -- Any old "XXXXX.epoch" files + let epochFileChunkNos :: [(FsPath, ChunkNo)] + epochFileChunkNos = + mapMaybe + (\file -> (mkFsPath [file],) <$> isEpochFile file) + (Set.toAscList filesInDBFolder) + + unless (null epochFileChunkNos) $ do + traceWith tracer $ Migrating ".epoch files to .chunk files" + forM_ epochFileChunkNos $ \(epochFile, chunk) -> + renameFile epochFile (fsPathChunkFile chunk) + where + HasFS{listDirectory, renameFile} = hasFS + + isEpochFile :: String -> Maybe ChunkNo + isEpochFile s = case parseDBFile s of + Just (prefix, chunk) + | prefix == "epoch" -> + Just chunk + _ -> Nothing diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Stream.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Stream.hs index cb4fee2e40..bc30dbb95e 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Stream.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/ImmutableDB/Stream.hs @@ -3,22 +3,22 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables 
#-} -module Ouroboros.Consensus.Storage.ImmutableDB.Stream ( - NextItem (..) +module Ouroboros.Consensus.Storage.ImmutableDB.Stream + ( NextItem (..) , StreamAPI (..) , streamAPI , streamAPI' , streamAll ) where -import Control.Monad.Except -import Control.ResourceRegistry -import GHC.Stack -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.Common -import Ouroboros.Consensus.Storage.ImmutableDB hiding (streamAll) -import qualified Ouroboros.Consensus.Storage.ImmutableDB.API as ImmutableDB -import Ouroboros.Consensus.Util.IOLike +import Control.Monad.Except +import Control.ResourceRegistry +import GHC.Stack +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.Common +import Ouroboros.Consensus.Storage.ImmutableDB hiding (streamAll) +import Ouroboros.Consensus.Storage.ImmutableDB.API qualified as ImmutableDB +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- Abstraction over the streaming API provided by the Chain DB @@ -34,80 +34,89 @@ data NextItem blk = NoMoreItems | NextItem blk -- tip to bring the ledger up to date with the tip of the immutable DB. -- -- In CPS form to enable the use of 'withXYZ' style iterator init functions. -newtype StreamAPI m blk a = StreamAPI { - -- | Start streaming after the specified block - streamAfter :: forall b. HasCallStack - => Point blk - -- Reference to the block corresponding to the snapshot we found - -- (or 'GenesisPoint' if we didn't find any) +newtype StreamAPI m blk a = StreamAPI + { streamAfter :: + forall b. + HasCallStack => + Point blk -> + -- Reference to the block corresponding to the snapshot we found + -- (or 'GenesisPoint' if we didn't find any) - -> (Either (RealPoint blk) (m (NextItem a)) -> m b) - -- Get the next item - -- - -- Should be @Left pt@ if the snapshot we found is more recent than the - -- tip of the immutable DB. 
Since we only store snapshots to disk for - -- blocks in the immutable DB, this can only happen if the immutable DB - -- got truncated due to disk corruption. The returned @pt@ is a - -- 'RealPoint', not a 'Point', since it must always be possible to - -- stream after genesis. - -> m b - } + (Either (RealPoint blk) (m (NextItem a)) -> m b) -> + -- Get the next item + -- + -- Should be @Left pt@ if the snapshot we found is more recent than the + -- tip of the immutable DB. Since we only store snapshots to disk for + -- blocks in the immutable DB, this can only happen if the immutable DB + -- got truncated due to disk corruption. The returned @pt@ is a + -- 'RealPoint', not a 'Point', since it must always be possible to + -- stream after genesis. + m b + -- ^ Start streaming after the specified block + } -- | Stream all items streamAll :: - forall m blk e b a. (Monad m, HasCallStack) - => StreamAPI m blk b - -> Point blk -- ^ Starting point for streaming - -> (RealPoint blk -> e) -- ^ Error when tip not found - -> a -- ^ Starting point when tip /is/ found - -> (b -> a -> m a) -- ^ Update function for each item - -> ExceptT e m a + forall m blk e b a. 
+ (Monad m, HasCallStack) => + StreamAPI m blk b -> + -- | Starting point for streaming + Point blk -> + -- | Error when tip not found + (RealPoint blk -> e) -> + -- | Starting point when tip /is/ found + a -> + -- | Update function for each item + (b -> a -> m a) -> + ExceptT e m a streamAll StreamAPI{..} tip notFound e f = ExceptT $ - streamAfter tip $ \case - Left tip' -> return $ Left (notFound tip') - - Right getNext -> do - let go :: a -> m a - go a = do mNext <- getNext - case mNext of - NoMoreItems -> return a - NextItem b -> go =<< f b a - Right <$> go e - + streamAfter tip $ \case + Left tip' -> return $ Left (notFound tip') + Right getNext -> do + let go :: a -> m a + go a = do + mNext <- getNext + case mNext of + NoMoreItems -> return a + NextItem b -> go =<< f b a + Right <$> go e streamAPI :: - (IOLike m, HasHeader blk) - => ImmutableDB m blk -> StreamAPI m blk blk + (IOLike m, HasHeader blk) => + ImmutableDB m blk -> StreamAPI m blk blk streamAPI = streamAPI' (return . NextItem) GetBlock streamAPI' :: - forall m blk a. - (IOLike m, HasHeader blk) - => (a -> m (NextItem a)) -- ^ Stop condition - -> BlockComponent blk a - -> ImmutableDB m blk - -> StreamAPI m blk a + forall m blk a. 
+ (IOLike m, HasHeader blk) => + -- | Stop condition + (a -> m (NextItem a)) -> + BlockComponent blk a -> + ImmutableDB m blk -> + StreamAPI m blk a streamAPI' shouldStop blockComponent immutableDB = StreamAPI streamAfter - where - streamAfter :: Point blk - -> (Either (RealPoint blk) (m (NextItem a)) -> m b) - -> m b - streamAfter tip k = withRegistry $ \registry -> do - eItr <- - ImmutableDB.streamAfterPoint - immutableDB - registry - blockComponent - tip - case eItr of - -- Snapshot is too recent - Left err -> k $ Left $ ImmutableDB.missingBlockPoint err - Right itr -> k $ Right $ streamUsing itr + where + streamAfter :: + Point blk -> + (Either (RealPoint blk) (m (NextItem a)) -> m b) -> + m b + streamAfter tip k = withRegistry $ \registry -> do + eItr <- + ImmutableDB.streamAfterPoint + immutableDB + registry + blockComponent + tip + case eItr of + -- Snapshot is too recent + Left err -> k $ Left $ ImmutableDB.missingBlockPoint err + Right itr -> k $ Right $ streamUsing itr - streamUsing :: ImmutableDB.Iterator m blk a - -> m (NextItem a) - streamUsing itr = do - itrResult <- ImmutableDB.iteratorNext itr - case itrResult of - ImmutableDB.IteratorExhausted -> return NoMoreItems - ImmutableDB.IteratorResult b -> shouldStop b + streamUsing :: + ImmutableDB.Iterator m blk a -> + m (NextItem a) + streamUsing itr = do + itrResult <- ImmutableDB.iteratorNext itr + case itrResult of + ImmutableDB.IteratorExhausted -> return NoMoreItems + ImmutableDB.IteratorResult b -> shouldStop b diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB.hs index 57195ca6c3..08ef3d117a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB.hs @@ -5,34 +5,35 @@ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE ScopedTypeVariables #-} -module 
Ouroboros.Consensus.Storage.LedgerDB ( - -- * API +module Ouroboros.Consensus.Storage.LedgerDB + ( -- * API module Ouroboros.Consensus.Storage.LedgerDB.API , module Ouroboros.Consensus.Storage.LedgerDB.Args , module Ouroboros.Consensus.Storage.LedgerDB.Forker , module Ouroboros.Consensus.Storage.LedgerDB.TraceEvent + -- * Impl , openDB , openDBInternal ) where -import Data.Functor.Contravariant ((>$<)) -import Data.Word -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Storage.ImmutableDB.Stream -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Args -import Ouroboros.Consensus.Storage.LedgerDB.Forker -import Ouroboros.Consensus.Storage.LedgerDB.TraceEvent -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1 as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V2 as V2 -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import System.FS.API +import Data.Functor.Contravariant ((>$<)) +import Data.Word +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Storage.ImmutableDB.Stream +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Args +import Ouroboros.Consensus.Storage.LedgerDB.Forker +import Ouroboros.Consensus.Storage.LedgerDB.TraceEvent +import Ouroboros.Consensus.Storage.LedgerDB.V1 qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V2 qualified as V2 +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import System.FS.API openDB :: forall m blk. 
@@ -43,60 +44,61 @@ openDB :: , HasCallStack , HasHardForkHistory blk , LedgerSupportsLedgerDB blk - ) - => Complete LedgerDbArgs m blk - -- ^ Stateless initializaton arguments - -> StreamAPI m blk blk - -- ^ Stream source for blocks. + ) => + -- | Stateless initialization arguments + Complete LedgerDbArgs m blk -> + -- | Stream source for blocks. -- -- After reading a snapshot from disk, the ledger DB will be brought up to -- date with the tip of this stream of blocks. The corresponding ledger state -- can then be used as the starting point for chain selection in the ChainDB -- driver. - -> Point blk - -- ^ The Replay goal i.e. the tip of the stream of blocks. - -> ResolveBlock m blk - -- ^ How to get blocks from the ChainDB - -> m (LedgerDB' m blk, Word64) + StreamAPI m blk blk -> + -- | The Replay goal i.e. the tip of the stream of blocks. + Point blk -> + -- | How to get blocks from the ChainDB + ResolveBlock m blk -> + m (LedgerDB' m blk, Word64) openDB args stream replayGoal getBlock = case lgrFlavorArgs args of LedgerDbFlavorArgsV1 bss -> - let initDb = V1.mkInitDb - args - bss - getBlock - in - doOpenDB args initDb stream replayGoal + let initDb = + V1.mkInitDb + args + bss + getBlock + in doOpenDB args initDb stream replayGoal LedgerDbFlavorArgsV2 bss -> - let initDb = V2.mkInitDb - args - bss - getBlock - in - doOpenDB args initDb stream replayGoal - + let initDb = + V2.mkInitDb + args + bss + getBlock + in doOpenDB args initDb stream replayGoal {------------------------------------------------------------------------------- Opening a LedgerDB -------------------------------------------------------------------------------} doOpenDB :: - forall m blk db. ( IOLike m + forall m blk db.
+ ( IOLike m , LedgerSupportsProtocol blk , InspectLedger blk , HasCallStack - ) - => Complete LedgerDbArgs m blk - -> InitDB db m blk - -> StreamAPI m blk blk - -> Point blk - -> m (LedgerDB' m blk, Word64) + ) => + Complete LedgerDbArgs m blk -> + InitDB db m blk -> + StreamAPI m blk blk -> + Point blk -> + m (LedgerDB' m blk, Word64) doOpenDB args initDb stream replayGoal = - f <$> openDBInternal args initDb stream replayGoal - where f (ldb, replayCounter, _) = (ldb, replayCounter) + f <$> openDBInternal args initDb stream replayGoal + where + f (ldb, replayCounter, _) = (ldb, replayCounter) -- | Open the ledger DB and expose internals for testing purposes openDBInternal :: @@ -104,34 +106,33 @@ openDBInternal :: , LedgerSupportsProtocol blk , InspectLedger blk , HasCallStack - ) - => Complete LedgerDbArgs m blk - -> InitDB db m blk - -> StreamAPI m blk blk - -> Point blk - -> m (LedgerDB' m blk, Word64, TestInternals' m blk) -openDBInternal args@(LedgerDbArgs { lgrHasFS = SomeHasFS fs }) initDb stream replayGoal = do - createDirectoryIfMissing fs True (mkFsPath []) - (_initLog, db, replayCounter) <- - initialize - replayTracer - snapTracer - lgrHasFS - lgrConfig - stream - replayGoal - initDb - lgrStartSnapshot - (ledgerDb, internal) <- mkLedgerDb initDb db - return (ledgerDb, replayCounter, internal) - - where - LedgerDbArgs { - lgrConfig - , lgrTracer - , lgrHasFS - , lgrStartSnapshot - } = args + ) => + Complete LedgerDbArgs m blk -> + InitDB db m blk -> + StreamAPI m blk blk -> + Point blk -> + m (LedgerDB' m blk, Word64, TestInternals' m blk) +openDBInternal args@(LedgerDbArgs{lgrHasFS = SomeHasFS fs}) initDb stream replayGoal = do + createDirectoryIfMissing fs True (mkFsPath []) + (_initLog, db, replayCounter) <- + initialize + replayTracer + snapTracer + lgrHasFS + lgrConfig + stream + replayGoal + initDb + lgrStartSnapshot + (ledgerDb, internal) <- mkLedgerDb initDb db + return (ledgerDb, replayCounter, internal) + where + LedgerDbArgs + { lgrConfig + , 
lgrTracer + , lgrHasFS + , lgrStartSnapshot + } = args - replayTracer = LedgerReplayEvent >$< lgrTracer - snapTracer = LedgerDBSnapshotEvent >$< lgrTracer + replayTracer = LedgerReplayEvent >$< lgrTracer + snapTracer = LedgerDBSnapshotEvent >$< lgrTracer diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/API.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/API.hs index 40cbeeaf6e..17fbb8f45f 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/API.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/API.hs @@ -108,9 +108,8 @@ -- >>> \ \\draw (60pt, 60pt) node[fill=white] {$k$};\ -- >>> \ \\draw [dashed] (30pt, -40pt) -- (30pt, 45pt);" -- >>> :} --- -module Ouroboros.Consensus.Storage.LedgerDB.API ( - -- * Main API +module Ouroboros.Consensus.Storage.LedgerDB.API + ( -- * Main API CanUpgradeLedgerTables (..) , LedgerDB (..) , LedgerDB' @@ -121,10 +120,12 @@ module Ouroboros.Consensus.Storage.LedgerDB.API ( , LedgerSupportsOnDiskLedgerDB , ResolveBlock , currentPoint + -- * Initialization , InitDB (..) , InitLog (..) , initialize + -- ** Tracing , ReplayGoal (..) , ReplayStart (..) @@ -133,62 +134,67 @@ module Ouroboros.Consensus.Storage.LedgerDB.API ( , TraceReplayStartEvent (..) , decorateReplayTracerWithGoal , decorateReplayTracerWithStart + -- * Configuration , LedgerDbCfg , LedgerDbCfgF (..) , configLedgerDb + -- * Exceptions , LedgerDbError (..) + -- * Forker , getReadOnlyForker , getTipStatistics , readLedgerTablesAtFor , withPrivateTipForker , withTipForker + -- * Snapshots , SnapCounters (..) + -- * Testing , TestInternals (..) , TestInternals' , WhereToTakeSnapshot (..) 
) where -import Codec.Serialise -import qualified Control.Monad as Monad -import Control.Monad.Class.MonadTime.SI -import Control.Monad.Except -import Control.ResourceRegistry -import Control.Tracer -import Data.Functor.Contravariant ((>$<)) -import Data.Kind -import qualified Data.Map.Strict as Map -import Data.MemPack -import Data.Set (Set) -import Data.Void (absurd) -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HeaderStateHistory -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache -import Ouroboros.Consensus.Storage.ImmutableDB.Stream -import Ouroboros.Consensus.Storage.LedgerDB.Forker -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IndexedMemPack -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Block -import Ouroboros.Network.Protocol.LocalStateQuery.Type -import System.FS.API +import Codec.Serialise +import Control.Monad qualified as Monad +import Control.Monad.Class.MonadTime.SI +import Control.Monad.Except +import Control.ResourceRegistry +import Control.Tracer +import Data.Functor.Contravariant ((>$<)) +import Data.Kind +import Data.Map.Strict qualified as Map +import Data.MemPack +import Data.Set (Set) +import Data.Void (absurd) +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HeaderStateHistory +import Ouroboros.Consensus.HeaderValidation +import 
Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache +import Ouroboros.Consensus.Storage.ImmutableDB.Stream +import Ouroboros.Consensus.Storage.LedgerDB.Forker +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.IndexedMemPack +import Ouroboros.Network.Block +import Ouroboros.Network.Protocol.LocalStateQuery.Type +import System.FS.API {------------------------------------------------------------------------------- Main API @@ -197,93 +203,83 @@ import System.FS.API -- | Serialization constraints required by the 'LedgerDB' to be properly -- instantiated with a @blk@. 
type LedgerDbSerialiseConstraints blk = - ( Serialise (HeaderHash blk) + ( Serialise (HeaderHash blk) , EncodeDisk blk (LedgerState blk EmptyMK) , DecodeDisk blk (LedgerState blk EmptyMK) - , EncodeDisk blk (AnnTip blk) - , DecodeDisk blk (AnnTip blk) + , EncodeDisk blk (AnnTip blk) + , DecodeDisk blk (AnnTip blk) , EncodeDisk blk (ChainDepState (BlockProtocol blk)) , DecodeDisk blk (ChainDepState (BlockProtocol blk)) - -- For InMemory LedgerDBs - , MemPack (TxIn (LedgerState blk)) + , -- For InMemory LedgerDBs + MemPack (TxIn (LedgerState blk)) , SerializeTablesWithHint (LedgerState blk) - -- For OnDisk LedgerDBs - , IndexedMemPack (LedgerState blk EmptyMK) (TxOut (LedgerState blk)) + , -- For OnDisk LedgerDBs + IndexedMemPack (LedgerState blk EmptyMK) (TxOut (LedgerState blk)) ) -- | The core API of the LedgerDB component type LedgerDB :: (Type -> Type) -> LedgerStateKind -> Type -> Type -data LedgerDB m l blk = LedgerDB { - -- | Get the empty ledger state at the (volatile) tip of the LedgerDB. - getVolatileTip :: STM m (l EmptyMK) - - -- | Get the empty ledger state at the immutable tip of the LedgerDB. - , getImmutableTip :: STM m (l EmptyMK) - - -- | Get an empty ledger state at a requested point in the LedgerDB, if it - -- exists. - , getPastLedgerState :: Point blk -> STM m (Maybe (l EmptyMK)) - - -- | Get the header state history for all ledger states in the LedgerDB. - , getHeaderStateHistory :: - (l ~ ExtLedgerState blk) - => STM m (HeaderStateHistory blk) - - -- | Acquire a 'Forker' at the requested point. If a ledger state associated - -- with the requested point does not exist in the LedgerDB, it will return a - -- 'GetForkerError'. - -- - -- We pass in the producer/consumer registry. - , getForkerAtTarget :: - ResourceRegistry m - -> Target (Point blk) - -> m (Either GetForkerError (Forker m l blk)) - - -- | Try to apply a sequence of blocks on top of the LedgerDB, first rolling - -- back as many blocks as the passed @Word64@. 
+data LedgerDB m l blk = LedgerDB + { getVolatileTip :: STM m (l EmptyMK) + -- ^ Get the empty ledger state at the (volatile) tip of the LedgerDB. + , getImmutableTip :: STM m (l EmptyMK) + -- ^ Get the empty ledger state at the immutable tip of the LedgerDB. + , getPastLedgerState :: Point blk -> STM m (Maybe (l EmptyMK)) + -- ^ Get an empty ledger state at a requested point in the LedgerDB, if it + -- exists. + , getHeaderStateHistory :: + l ~ ExtLedgerState blk => + STM m (HeaderStateHistory blk) + -- ^ Get the header state history for all ledger states in the LedgerDB. + , getForkerAtTarget :: + ResourceRegistry m -> + Target (Point blk) -> + m (Either GetForkerError (Forker m l blk)) + -- ^ Acquire a 'Forker' at the requested point. If a ledger state associated + -- with the requested point does not exist in the LedgerDB, it will return a + -- 'GetForkerError'. + -- + -- We pass in the producer/consumer registry. , validateFork :: - (l ~ ExtLedgerState blk) - => ResourceRegistry m - -> (TraceValidateEvent blk -> m ()) - -> BlockCache blk - -> Word64 - -> [Header blk] - -> m (ValidateResult m l blk) - - -- | Get the references to blocks that have previously been applied. + l ~ ExtLedgerState blk => + ResourceRegistry m -> + (TraceValidateEvent blk -> m ()) -> + BlockCache blk -> + Word64 -> + [Header blk] -> + m (ValidateResult m l blk) + -- ^ Try to apply a sequence of blocks on top of the LedgerDB, first rolling + -- back as many blocks as the passed @Word64@. , getPrevApplied :: STM m (Set (RealPoint blk)) - - -- | Garbage collect references to old blocks that have been previously - -- applied and committed. + -- ^ Get the references to blocks that have previously been applied. , garbageCollect :: SlotNo -> STM m () - - -- | If the provided arguments indicate so (based on the SnapshotPolicy with - -- which this LedgerDB was opened), take a snapshot and delete stale ones. 
- -- - -- The arguments are: - -- - -- - If a snapshot has been taken already, the time at which it was taken - -- and the current time. - -- - -- - How many blocks have been processed since the last snapshot. + -- ^ Garbage collect references to old blocks that have been previously + -- applied and committed. , tryTakeSnapshot :: - (l ~ ExtLedgerState blk) - => Maybe (Time, Time) - -> Word64 - -> m SnapCounters - - -- | Flush V1 in-memory LedgerDB state to disk, if possible. This is a no-op - -- for implementations that do not need an explicit flush function. - -- - -- Note that this is rate-limited by 'ldbShouldFlush'. + l ~ ExtLedgerState blk => + Maybe (Time, Time) -> + Word64 -> + m SnapCounters + -- ^ If the provided arguments indicate so (based on the SnapshotPolicy with + -- which this LedgerDB was opened), take a snapshot and delete stale ones. + -- + -- The arguments are: + -- + -- - If a snapshot has been taken already, the time at which it was taken + -- and the current time. + -- + -- - How many blocks have been processed since the last snapshot. , tryFlush :: m () - - -- | Close the LedgerDB - -- - -- Idempotent. - -- - -- Should only be called on shutdown. + -- ^ Flush V1 in-memory LedgerDB state to disk, if possible. This is a no-op + -- for implementations that do not need an explicit flush function. + -- + -- Note that this is rate-limited by 'ldbShouldFlush'. , closeDB :: m () + -- ^ Close the LedgerDB + -- + -- Idempotent. + -- + -- Should only be called on shutdown. } deriving NoThunks via OnlyCheckWhnfNamed "LedgerDB" (LedgerDB m l blk) @@ -292,20 +288,20 @@ type instance HeaderHash (LedgerDB m l blk) = HeaderHash blk type LedgerDB' m blk = LedgerDB m (ExtLedgerState blk) blk currentPoint :: - (GetTip l, HeaderHash l ~ HeaderHash blk, Functor (STM m)) - => LedgerDB m l blk - -> STM m (Point blk) + (GetTip l, HeaderHash l ~ HeaderHash blk, Functor (STM m)) => + LedgerDB m l blk -> + STM m (Point blk) currentPoint ldb = castPoint . 
getTip <$> getVolatileTip ldb data WhereToTakeSnapshot = TakeAtImmutableTip | TakeAtVolatileTip deriving Eq -data TestInternals m l blk = TestInternals { - wipeLedgerDB :: m () - , takeSnapshotNOW :: WhereToTakeSnapshot -> Maybe String -> m () - , push :: ExtLedgerState blk DiffMK -> m () +data TestInternals m l blk = TestInternals + { wipeLedgerDB :: m () + , takeSnapshotNOW :: WhereToTakeSnapshot -> Maybe String -> m () + , push :: ExtLedgerState blk DiffMK -> m () , reapplyThenPushNOW :: blk -> m () - , truncateSnapshots :: m () - , closeLedgerDB :: m () + , truncateSnapshots :: m () + , closeLedgerDB :: m () } deriving NoThunks via OnlyCheckWhnfNamed "TestInternals" (TestInternals m l blk) @@ -315,25 +311,26 @@ type TestInternals' m blk = TestInternals m (ExtLedgerState blk) blk Config -------------------------------------------------------------------------------} -data LedgerDbCfgF f l = LedgerDbCfg { - ledgerDbCfgSecParam :: !(HKD f SecurityParam) - , ledgerDbCfg :: !(HKD f (LedgerCfg l)) - , ledgerDbCfgComputeLedgerEvents :: !ComputeLedgerEvents - } - deriving (Generic) +data LedgerDbCfgF f l = LedgerDbCfg + { ledgerDbCfgSecParam :: !(HKD f SecurityParam) + , ledgerDbCfg :: !(HKD f (LedgerCfg l)) + , ledgerDbCfgComputeLedgerEvents :: !ComputeLedgerEvents + } + deriving Generic type LedgerDbCfg l = Complete LedgerDbCfgF l deriving instance NoThunks (LedgerCfg l) => NoThunks (LedgerDbCfg l) configLedgerDb :: - ConsensusProtocol (BlockProtocol blk) - => TopLevelConfig blk - -> ComputeLedgerEvents - -> LedgerDbCfg (ExtLedgerState blk) -configLedgerDb config evs = LedgerDbCfg { - ledgerDbCfgSecParam = configSecurityParam config - , ledgerDbCfg = ExtLedgerCfg config + ConsensusProtocol (BlockProtocol blk) => + TopLevelConfig blk -> + ComputeLedgerEvents -> + LedgerDbCfg (ExtLedgerState blk) +configLedgerDb config evs = + LedgerDbCfg + { ledgerDbCfgSecParam = configSecurityParam config + , ledgerDbCfg = ExtLedgerCfg config , ledgerDbCfgComputeLedgerEvents = evs } 
@@ -344,16 +341,16 @@ configLedgerDb config evs = LedgerDbCfg { -- | Database error -- -- Thrown upon incorrect use: invalid input. -data LedgerDbError blk = - -- | The LedgerDB is closed. - -- - -- This will be thrown when performing some operations on the LedgerDB. The - -- 'CallStack' of the operation on the LedgerDB is included in the error. - ClosedDBError PrettyCallStack - -- | A Forker is closed. - | ClosedForkerError ForkerKey PrettyCallStack - deriving (Show) - deriving anyclass (Exception) +data LedgerDbError blk + = -- | The LedgerDB is closed. + -- + -- This will be thrown when performing some operations on the LedgerDB. The + -- 'CallStack' of the operation on the LedgerDB is included in the error. + ClosedDBError PrettyCallStack + | -- | A Forker is closed. + ClosedForkerError ForkerKey PrettyCallStack + deriving Show + deriving anyclass Exception {------------------------------------------------------------------------------- Forker @@ -361,17 +358,17 @@ data LedgerDbError blk = -- | 'bracket'-style usage of a forker at the LedgerDB tip. withTipForker :: - IOLike m - => LedgerDB m l blk - -> ResourceRegistry m - -> (Forker m l blk -> m a) - -> m a + IOLike m => + LedgerDB m l blk -> + ResourceRegistry m -> + (Forker m l blk -> m a) -> + m a withTipForker ldb rr = bracket - (do + ( do eFrk <- getForkerAtTarget ldb rr VolatileTip case eFrk of - Left {} -> error "Unreachable, volatile tip MUST be in the LedgerDB" + Left{} -> error "Unreachable, volatile tip MUST be in the LedgerDB" Right frk -> pure frk ) forkerClose @@ -379,57 +376,58 @@ withTipForker ldb rr = -- | Like 'withTipForker', but it uses a private registry to allocate and -- de-allocate the forker. 
withPrivateTipForker :: - IOLike m - => LedgerDB m l blk - -> (Forker m l blk -> m a) -> m a + IOLike m => + LedgerDB m l blk -> + (Forker m l blk -> m a) -> + m a withPrivateTipForker ldb = bracketWithPrivateRegistry - (\rr -> do + ( \rr -> do eFrk <- getForkerAtTarget ldb rr VolatileTip case eFrk of - Left {} -> error "Unreachable, volatile tip MUST be in the LedgerDB" + Left{} -> error "Unreachable, volatile tip MUST be in the LedgerDB" Right frk -> pure frk ) forkerClose -- | Get statistics from the tip of the LedgerDB. getTipStatistics :: - IOLike m - => LedgerDB m l blk - -> m (Maybe Statistics) + IOLike m => + LedgerDB m l blk -> + m (Maybe Statistics) getTipStatistics ldb = withPrivateTipForker ldb forkerReadStatistics getReadOnlyForker :: - MonadSTM m - => LedgerDB m l blk - -> ResourceRegistry m - -> Target (Point blk) - -> m (Either GetForkerError (ReadOnlyForker m l blk)) + MonadSTM m => + LedgerDB m l blk -> + ResourceRegistry m -> + Target (Point blk) -> + m (Either GetForkerError (ReadOnlyForker m l blk)) getReadOnlyForker ldb rr pt = fmap readOnlyForker <$> getForkerAtTarget ldb rr pt -- | Read a table of values at the requested point via a 'ReadOnlyForker' readLedgerTablesAtFor :: - IOLike m - => LedgerDB m l blk - -> Point blk - -> LedgerTables l KeysMK - -> m (Either GetForkerError (LedgerTables l ValuesMK)) + IOLike m => + LedgerDB m l blk -> + Point blk -> + LedgerTables l KeysMK -> + m (Either GetForkerError (LedgerTables l ValuesMK)) readLedgerTablesAtFor ldb p ks = - bracketWithPrivateRegistry - (\rr -> fmap readOnlyForker <$> getForkerAtTarget ldb rr (SpecificPoint p)) - (mapM_ roforkerClose) - $ \foEith -> Monad.forM foEith (`roforkerReadTables` ks) + bracketWithPrivateRegistry + (\rr -> fmap readOnlyForker <$> getForkerAtTarget ldb rr (SpecificPoint p)) + (mapM_ roforkerClose) + $ \foEith -> Monad.forM foEith (`roforkerReadTables` ks) {------------------------------------------------------------------------------- Snapshots 
-------------------------------------------------------------------------------} -- | Counters to keep track of when we made the last snapshot. -data SnapCounters = SnapCounters { - -- | When was the last time we made a snapshot - prevSnapshotTime :: !(Maybe Time) - -- | How many blocks have we processed since the last snapshot +data SnapCounters = SnapCounters + { prevSnapshotTime :: !(Maybe Time) + -- ^ When was the last time we made a snapshot , ntBlocksSinceLastSnap :: !Word64 + -- ^ How many blocks have we processed since the last snapshot } {------------------------------------------------------------------------------- @@ -441,44 +439,43 @@ data SnapCounters = SnapCounters { -- The initialization log records which snapshots from disk were considered, -- in which order, and why some snapshots were rejected. It is primarily useful -- for monitoring purposes. -data InitLog blk = - -- | Defaulted to initialization from genesis +data InitLog blk + = -- | Defaulted to initialization from genesis -- -- NOTE: Unless the blockchain is near genesis, or this is the first time we -- boot the node, we should see this /only/ if data corruption occurred. InitFromGenesis - - -- | Used a snapshot corresponding to the specified tip - | InitFromSnapshot DiskSnapshot (RealPoint blk) - - -- | Initialization skipped a snapshot + | -- | Used a snapshot corresponding to the specified tip + InitFromSnapshot DiskSnapshot (RealPoint blk) + | -- | Initialization skipped a snapshot -- -- We record the reason why it was skipped. -- -- NOTE: We should /only/ see this if data corruption occurred or codecs -- for snapshots changed. 
- | InitFailure DiskSnapshot (SnapshotFailure blk) (InitLog blk) + InitFailure DiskSnapshot (SnapshotFailure blk) (InitLog blk) deriving (Show, Eq, Generic) -- | Functions required to initialize a LedgerDB type InitDB :: Type -> (Type -> Type) -> Type -> Type -data InitDB db m blk = InitDB { - initFromGenesis :: !(m db) - -- ^ Create a DB from the genesis state +data InitDB db m blk = InitDB + { initFromGenesis :: !(m db) + -- ^ Create a DB from the genesis state , initFromSnapshot :: !(DiskSnapshot -> m (Either (SnapshotFailure blk) (db, RealPoint blk))) - -- ^ Create a DB from a Snapshot - , closeDb :: !(db -> m ()) - -- ^ Closing the database, to be reopened again with a different snapshot or - -- with the genesis state. + -- ^ Create a DB from a Snapshot + , closeDb :: !(db -> m ()) + -- ^ Closing the database, to be reopened again with a different snapshot or + -- with the genesis state. , initReapplyBlock :: !(LedgerDbCfg (ExtLedgerState blk) -> blk -> db -> m db) - -- ^ Reapply a block from the immutable DB when initializing the DB. - , currentTip :: !(db -> LedgerState blk EmptyMK) - -- ^ Getting the current tip for tracing the Ledger Events. - , pruneDb :: !(db -> m db) - -- ^ Prune the database so that no immutable states are considered volatile. - , mkLedgerDb :: !(db -> m (LedgerDB m (ExtLedgerState blk) blk, TestInternals m (ExtLedgerState blk) blk)) - -- ^ Create a LedgerDB from the initialized data structures from previous - -- steps. + -- ^ Reapply a block from the immutable DB when initializing the DB. + , currentTip :: !(db -> LedgerState blk EmptyMK) + -- ^ Getting the current tip for tracing the Ledger Events. + , pruneDb :: !(db -> m db) + -- ^ Prune the database so that no immutable states are considered volatile. + , mkLedgerDb :: + !(db -> m (LedgerDB m (ExtLedgerState blk) blk, TestInternals m (ExtLedgerState blk) blk)) + -- ^ Create a LedgerDB from the initialized data structures from previous + -- steps. 
} -- | Initialize the ledger DB from the most recent snapshot on disk @@ -502,53 +499,58 @@ data InitDB db m blk = InitDB { -- obtained in this way will (hopefully) share much of their memory footprint -- with their predecessors. initialize :: - forall m blk db. - ( IOLike m - , LedgerSupportsProtocol blk - , InspectLedger blk - , HasCallStack - ) - => Tracer m (TraceReplayEvent blk) - -> Tracer m (TraceSnapshotEvent blk) - -> SomeHasFS m - -> LedgerDbCfg (ExtLedgerState blk) - -> StreamAPI m blk blk - -> Point blk - -> InitDB db m blk - -> Maybe DiskSnapshot - -> m (InitLog blk, db, Word64) -initialize replayTracer - snapTracer - hasFS - cfg - stream - replayGoal - dbIface - fromSnapshot = + forall m blk db. + ( IOLike m + , LedgerSupportsProtocol blk + , InspectLedger blk + , HasCallStack + ) => + Tracer m (TraceReplayEvent blk) -> + Tracer m (TraceSnapshotEvent blk) -> + SomeHasFS m -> + LedgerDbCfg (ExtLedgerState blk) -> + StreamAPI m blk blk -> + Point blk -> + InitDB db m blk -> + Maybe DiskSnapshot -> + m (InitLog blk, db, Word64) +initialize + replayTracer + snapTracer + hasFS + cfg + stream + replayGoal + dbIface + fromSnapshot = case fromSnapshot of - Nothing -> listSnapshots hasFS >>= tryNewestFirst id + Nothing -> listSnapshots hasFS >>= tryNewestFirst id Just snap -> tryNewestFirst id [snap] - where - InitDB {initFromGenesis, initFromSnapshot, closeDb} = dbIface - - tryNewestFirst :: (InitLog blk -> InitLog blk) - -> [DiskSnapshot] - -> m ( InitLog blk - , db - , Word64 - ) + where + InitDB{initFromGenesis, initFromSnapshot, closeDb} = dbIface + + tryNewestFirst :: + (InitLog blk -> InitLog blk) -> + [DiskSnapshot] -> + m + ( InitLog blk + , db + , Word64 + ) tryNewestFirst acc [] = do -- We're out of snapshots. 
Start at genesis traceWith (TraceReplayStartEvent >$< replayTracer) ReplayFromGenesis let replayTracer'' = decorateReplayTracerWithStart (Point Origin) replayTracer' initDb <- initFromGenesis - eDB <- runExceptT $ replayStartingWith - replayTracer'' - cfg - stream - initDb - (Point Origin) - dbIface + eDB <- + runExceptT $ + replayStartingWith + replayTracer'' + cfg + stream + initDb + (Point Origin) + dbIface case eDB of Left err -> do @@ -556,12 +558,12 @@ initialize replayTracer error $ "Invariant violation: invalid immutable chain " <> show err Right (db, replayed) -> do db' <- pruneDb dbIface db - return ( acc InitFromGenesis - , db' - , replayed - ) - - tryNewestFirst acc (s:ss) = do + return + ( acc InitFromGenesis + , db' + , replayed + ) + tryNewestFirst acc (s : ss) = do eInitDb <- initFromSnapshot s case eInitDb of -- If the snapshot is missing a metadata file, issue a warning and try @@ -593,19 +595,19 @@ initialize replayTracer deleteSnapshot hasFS s traceWith snapTracer . InvalidSnapshot s $ err tryNewestFirst (acc . InitFailure s err) ss - Right (initDb, pt) -> do let pt' = realPointToPoint pt traceWith (TraceReplayStartEvent >$< replayTracer) (ReplayFromSnapshot s (ReplayStart pt')) let replayTracer'' = decorateReplayTracerWithStart pt' replayTracer' - eDB <- runExceptT - $ replayStartingWith - replayTracer'' - cfg - stream - initDb - pt' - dbIface + eDB <- + runExceptT $ + replayStartingWith + replayTracer'' + cfg + stream + initDb + pt' + dbIface case eDB of Left err -> do traceWith snapTracer . 
InvalidSnapshot s $ err @@ -616,71 +618,78 @@ initialize replayTracer db' <- pruneDb dbIface db return (acc (InitFromSnapshot s pt), db', replayed) - replayTracer' = decorateReplayTracerWithGoal - replayGoal - (TraceReplayProgressEvent >$< replayTracer) + replayTracer' = + decorateReplayTracerWithGoal + replayGoal + (TraceReplayProgressEvent >$< replayTracer) -- | Replay all blocks in the Immutable database using the 'StreamAPI' provided -- on top of the given @LedgerDB' blk@. -- -- It will also return the number of blocks that were replayed. replayStartingWith :: - forall m blk db. ( - IOLike m - , LedgerSupportsProtocol blk - , InspectLedger blk - , HasCallStack - ) - => Tracer m (ReplayStart blk -> ReplayGoal blk -> TraceReplayProgressEvent blk) - -> LedgerDbCfg (ExtLedgerState blk) - -> StreamAPI m blk blk - -> db - -> Point blk - -> InitDB db m blk - -> ExceptT (SnapshotFailure blk) m (db, Word64) + forall m blk db. + ( IOLike m + , LedgerSupportsProtocol blk + , InspectLedger blk + , HasCallStack + ) => + Tracer m (ReplayStart blk -> ReplayGoal blk -> TraceReplayProgressEvent blk) -> + LedgerDbCfg (ExtLedgerState blk) -> + StreamAPI m blk blk -> + db -> + Point blk -> + InitDB db m blk -> + ExceptT (SnapshotFailure blk) m (db, Word64) replayStartingWith tracer cfg stream initDb from InitDB{initReapplyBlock, currentTip} = do - streamAll stream from - InitFailureTooRecent - (initDb, 0) - push - where - push :: blk - -> (db, Word64) - -> m (db, Word64) - push blk (!db, !replayed) = do - !db' <- initReapplyBlock cfg blk db - - let !replayed' = replayed + 1 - - events = inspectLedger - (getExtLedgerCfg (ledgerDbCfg cfg)) - (currentTip db) - (currentTip db') - - traceWith tracer (ReplayedBlock (blockRealPoint blk) events) - return (db', replayed') + streamAll + stream + from + InitFailureTooRecent + (initDb, 0) + push + where + push :: + blk -> + (db, Word64) -> + m (db, Word64) + push blk (!db, !replayed) = do + !db' <- initReapplyBlock cfg blk db + + let 
!replayed' = replayed + 1 + + events = + inspectLedger + (getExtLedgerCfg (ledgerDbCfg cfg)) + (currentTip db) + (currentTip db') + + traceWith tracer (ReplayedBlock (blockRealPoint blk) events) + return (db', replayed') {------------------------------------------------------------------------------- Trace replay events -------------------------------------------------------------------------------} -data TraceReplayEvent blk = - TraceReplayStartEvent (TraceReplayStartEvent blk) - | TraceReplayProgressEvent (TraceReplayProgressEvent blk) - deriving (Show, Eq) +data TraceReplayEvent blk + = TraceReplayStartEvent (TraceReplayStartEvent blk) + | TraceReplayProgressEvent (TraceReplayProgressEvent blk) + deriving (Show, Eq) -- | Add the tip of the Immutable DB to the trace event -decorateReplayTracerWithGoal - :: Point blk -- ^ Tip of the ImmutableDB - -> Tracer m (TraceReplayProgressEvent blk) - -> Tracer m (ReplayGoal blk -> TraceReplayProgressEvent blk) +decorateReplayTracerWithGoal :: + -- | Tip of the ImmutableDB + Point blk -> + Tracer m (TraceReplayProgressEvent blk) -> + Tracer m (ReplayGoal blk -> TraceReplayProgressEvent blk) decorateReplayTracerWithGoal immTip = (($ ReplayGoal immTip) >$<) -- | Add the block at which a replay started. 
-decorateReplayTracerWithStart - :: Point blk -- ^ Starting point of the replay - -> Tracer m (ReplayGoal blk -> TraceReplayProgressEvent blk) - -> Tracer m (ReplayStart blk -> ReplayGoal blk -> TraceReplayProgressEvent blk) +decorateReplayTracerWithStart :: + -- | Starting point of the replay + Point blk -> + Tracer m (ReplayGoal blk -> TraceReplayProgressEvent blk) -> + Tracer m (ReplayStart blk -> ReplayGoal blk -> TraceReplayProgressEvent blk) decorateReplayTracerWithStart start = (($ ReplayStart start) >$<) -- | Which point the replay started from @@ -697,11 +706,12 @@ data TraceReplayStartEvent blk = -- | There were no LedgerDB snapshots on disk, so we're replaying all blocks -- starting from Genesis against the initial ledger. ReplayFromGenesis - -- | There was a LedgerDB snapshot on disk corresponding to the given tip. + | -- | There was a LedgerDB snapshot on disk corresponding to the given tip. -- We're replaying more recent blocks against it. - | ReplayFromSnapshot - DiskSnapshot - (ReplayStart blk) -- ^ the block at which this replay started + ReplayFromSnapshot + DiskSnapshot + -- | the block at which this replay started + (ReplayStart blk) deriving (Generic, Eq, Show) -- | We replayed the given block (reference) on the genesis snapshot during @@ -709,12 +719,15 @@ data TraceReplayStartEvent blk -- -- Using this trace the node could (if it so desired) easily compute a -- "percentage complete". 
-data TraceReplayProgressEvent blk = - ReplayedBlock - (RealPoint blk) -- ^ the block being replayed - [LedgerEvent blk] - (ReplayStart blk) -- ^ the block at which this replay started - (ReplayGoal blk) -- ^ the block at the tip of the ImmutableDB +data TraceReplayProgressEvent blk + = ReplayedBlock + -- | the block being replayed + (RealPoint blk) + [LedgerEvent blk] + -- | the block at which this replay started + (ReplayStart blk) + -- | the block at the tip of the ImmutableDB + (ReplayGoal blk) deriving (Generic, Eq, Show) {------------------------------------------------------------------------------- @@ -732,20 +745,27 @@ type LedgerSupportsInMemoryLedgerDB blk = (CanUpgradeLedgerTables (LedgerState b -- upgrading the TxOuts every time we consult them. class CanUpgradeLedgerTables l where upgradeTables :: - l mk1 -- ^ The original ledger state before the upgrade. This will be the - -- tip before applying the block. - -> l mk2 -- ^ The ledger state after the upgrade, which might be in a - -- different era than the one above. - -> LedgerTables l ValuesMK -- ^ The tables we want to maybe upgrade. - -> LedgerTables l ValuesMK - -instance CanUpgradeLedgerTables (LedgerState blk) - => CanUpgradeLedgerTables (ExtLedgerState blk) where + -- | The original ledger state before the upgrade. This will be the + -- tip before applying the block. + l mk1 -> + -- | The ledger state after the upgrade, which might be in a + -- different era than the one above. + l mk2 -> + -- | The tables we want to maybe upgrade. + LedgerTables l ValuesMK -> + LedgerTables l ValuesMK + +instance + CanUpgradeLedgerTables (LedgerState blk) => + CanUpgradeLedgerTables (ExtLedgerState blk) + where upgradeTables (ExtLedgerState st0 _) (ExtLedgerState st1 _) = castLedgerTables . upgradeTables st0 st1 . 
castLedgerTables -instance LedgerTablesAreTrivial l - => CanUpgradeLedgerTables (TrivialLedgerTables l) where +instance + LedgerTablesAreTrivial l => + CanUpgradeLedgerTables (TrivialLedgerTables l) + where upgradeTables _ _ (LedgerTables (ValuesMK mk)) = LedgerTables (ValuesMK (Map.map absurd mk)) @@ -754,8 +774,7 @@ instance LedgerTablesAreTrivial l -------------------------------------------------------------------------------} type LedgerSupportsOnDiskLedgerDB blk = - ( IndexedMemPack (LedgerState blk EmptyMK) (TxOut (LedgerState blk)) - ) + (IndexedMemPack (LedgerState blk EmptyMK) (TxOut (LedgerState blk))) type LedgerSupportsLedgerDB blk = ( LedgerSupportsOnDiskLedgerDB blk diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Args.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Args.hs index 5e484c42cd..4b0102ed99 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Args.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Args.hs @@ -16,29 +16,29 @@ {-# LANGUAGE UndecidableInstances #-} -- | Arguments for LedgerDB initialization. -module Ouroboros.Consensus.Storage.LedgerDB.Args ( - LedgerDbArgs (..) +module Ouroboros.Consensus.Storage.LedgerDB.Args + ( LedgerDbArgs (..) , LedgerDbFlavorArgs (..) , QueryBatchSize (..) 
, defaultArgs , defaultQueryBatchSize ) where -import Control.ResourceRegistry -import Control.Tracer -import Data.Kind -import Data.Word -import GHC.Generics (Generic) -import NoThunks.Class -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import Ouroboros.Consensus.Storage.LedgerDB.TraceEvent -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.Args as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V2.Args as V2 -import Ouroboros.Consensus.Util.Args -import System.FS.API +import Control.ResourceRegistry +import Control.Tracer +import Data.Kind +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.LedgerDB.TraceEvent +import Ouroboros.Consensus.Storage.LedgerDB.V1.Args qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V2.Args qualified as V2 +import Ouroboros.Consensus.Util.Args +import System.FS.API {------------------------------------------------------------------------------- Arguments @@ -46,48 +46,48 @@ import System.FS.API -- | Arguments required to initialize a LedgerDB. type LedgerDbArgs :: - (Type -> Type) - -> (Type -> Type) - -> Type - -> Type -data LedgerDbArgs f m blk = LedgerDbArgs { - lgrSnapshotPolicyArgs :: SnapshotPolicyArgs - , lgrGenesis :: HKD f (m (ExtLedgerState blk ValuesMK)) - , lgrHasFS :: HKD f (SomeHasFS m) - , lgrConfig :: LedgerDbCfgF f (ExtLedgerState blk) - , lgrTracer :: Tracer m (TraceEvent blk) - , lgrFlavorArgs :: LedgerDbFlavorArgs f m - , lgrRegistry :: HKD f (ResourceRegistry m) - , lgrQueryBatchSize :: QueryBatchSize - -- | If provided, the ledgerdb will start using said snapshot and fallback - -- to genesis. 
It will ignore any other existing snapshots. Useful for - -- db-analyser. - , lgrStartSnapshot :: Maybe DiskSnapshot - } + (Type -> Type) -> + (Type -> Type) -> + Type -> + Type +data LedgerDbArgs f m blk = LedgerDbArgs + { lgrSnapshotPolicyArgs :: SnapshotPolicyArgs + , lgrGenesis :: HKD f (m (ExtLedgerState blk ValuesMK)) + , lgrHasFS :: HKD f (SomeHasFS m) + , lgrConfig :: LedgerDbCfgF f (ExtLedgerState blk) + , lgrTracer :: Tracer m (TraceEvent blk) + , lgrFlavorArgs :: LedgerDbFlavorArgs f m + , lgrRegistry :: HKD f (ResourceRegistry m) + , lgrQueryBatchSize :: QueryBatchSize + , lgrStartSnapshot :: Maybe DiskSnapshot + -- ^ If provided, the ledgerdb will start using said snapshot and fallback + -- to genesis. It will ignore any other existing snapshots. Useful for + -- db-analyser. + } -- | Default arguments defaultArgs :: - Applicative m - => Incomplete LedgerDbArgs m blk -defaultArgs = LedgerDbArgs { - lgrSnapshotPolicyArgs = defaultSnapshotPolicyArgs - , lgrGenesis = NoDefault - , lgrHasFS = NoDefault - , lgrConfig = LedgerDbCfg NoDefault NoDefault OmitLedgerEvents - , lgrQueryBatchSize = DefaultQueryBatchSize - , lgrTracer = nullTracer - -- This value is the closest thing to a pre-UTxO-HD node, and as such it + Applicative m => + Incomplete LedgerDbArgs m blk +defaultArgs = + LedgerDbArgs + { lgrSnapshotPolicyArgs = defaultSnapshotPolicyArgs + , lgrGenesis = NoDefault + , lgrHasFS = NoDefault + , lgrConfig = LedgerDbCfg NoDefault NoDefault OmitLedgerEvents + , lgrQueryBatchSize = DefaultQueryBatchSize + , lgrTracer = nullTracer + , -- This value is the closest thing to a pre-UTxO-HD node, and as such it -- will be the default for end-users. 
- , lgrFlavorArgs = LedgerDbFlavorArgsV2 (V2.V2Args V2.InMemoryHandleArgs) - , lgrRegistry = NoDefault - , lgrStartSnapshot = Nothing + lgrFlavorArgs = LedgerDbFlavorArgsV2 (V2.V2Args V2.InMemoryHandleArgs) + , lgrRegistry = NoDefault + , lgrStartSnapshot = Nothing } -data LedgerDbFlavorArgs f m = - LedgerDbFlavorArgsV1 (V1.LedgerDbFlavorArgs f m) +data LedgerDbFlavorArgs f m + = LedgerDbFlavorArgsV1 (V1.LedgerDbFlavorArgs f m) | LedgerDbFlavorArgsV2 (V2.LedgerDbFlavorArgs f m) - {------------------------------------------------------------------------------- QueryBatchSize -------------------------------------------------------------------------------} @@ -104,19 +104,19 @@ data LedgerDbFlavorArgs f m = -- -- It is fine if the result of a range read contains less than this number of -- keys, but it should never return more. -data QueryBatchSize = - -- | A default value, which is determined by a specific +data QueryBatchSize + = -- | A default value, which is determined by a specific -- 'QueryBatchSize'. See 'defaultQueryBatchSize' as an example. DefaultQueryBatchSize - -- | A requested value: the number of keys to read from disk in each batch. - | RequestedQueryBatchSize Word64 + | -- | A requested value: the number of keys to read from disk in each batch. + RequestedQueryBatchSize Word64 deriving (Show, Eq, Generic) deriving anyclass NoThunks defaultQueryBatchSize :: QueryBatchSize -> Word64 defaultQueryBatchSize requestedQueryBatchSize = case requestedQueryBatchSize of - RequestedQueryBatchSize value -> value - -- Experiments showed that 100_000 is a reasonable value, which yields - -- acceptable performance. We might want to tweak this further, but for now - -- this default seems good enough. - DefaultQueryBatchSize -> 100_000 + RequestedQueryBatchSize value -> value + -- Experiments showed that 100_000 is a reasonable value, which yields + -- acceptable performance. We might want to tweak this further, but for now + -- this default seems good enough. 
+ DefaultQueryBatchSize -> 100_000 diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Forker.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Forker.hs index 5b5f978664..91cef56fe2 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Forker.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Forker.hs @@ -16,10 +16,9 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} --- | -module Ouroboros.Consensus.Storage.LedgerDB.Forker ( - -- * Forker API +module Ouroboros.Consensus.Storage.LedgerDB.Forker + ( -- * Forker API ExceededRollback (..) , Forker (..) , Forker' @@ -29,13 +28,16 @@ module Ouroboros.Consensus.Storage.LedgerDB.Forker ( , RangeQueryPrevious (..) , Statistics (..) , forkerCurrentPoint + -- ** Read only , ReadOnlyForker (..) , ReadOnlyForker' , readOnlyForker + -- ** Tracing , TraceForkerEvent (..) , TraceForkerEventWithKey (..) + -- * Validation , AnnLedgerError (..) , AnnLedgerError' @@ -43,6 +45,7 @@ module Ouroboros.Consensus.Storage.LedgerDB.Forker ( , ValidateArgs (..) , ValidateResult (..) , validate + -- ** Tracing , PushGoal (..) , PushStart (..) @@ -50,28 +53,32 @@ module Ouroboros.Consensus.Storage.LedgerDB.Forker ( , TraceValidateEvent (..) 
) where -import Control.Monad (void) -import Control.Monad.Base -import Control.Monad.Except (ExceptT (..), MonadError (..), runExcept, - runExceptT) -import Control.Monad.Reader (ReaderT (..)) -import Control.Monad.Trans (MonadTrans (..)) -import Control.ResourceRegistry -import Data.Kind -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word -import GHC.Generics -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache -import qualified Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache as BlockCache -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike +import Control.Monad (void) +import Control.Monad.Base +import Control.Monad.Except + ( ExceptT (..) + , MonadError (..) + , runExcept + , runExceptT + ) +import Control.Monad.Reader (ReaderT (..)) +import Control.Monad.Trans (MonadTrans (..)) +import Control.ResourceRegistry +import Data.Kind +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word +import GHC.Generics +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache +import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache qualified as BlockCache +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- Forker @@ -80,50 +87,44 @@ import Ouroboros.Consensus.Util.IOLike -- | An independent handle to a point in the LedgerDB, which can be advanced to -- evaluate forks in the chain. 
type Forker :: (Type -> Type) -> LedgerStateKind -> Type -> Type -data Forker m l blk = Forker { - -- | Close the current forker (idempotent). - -- - -- Other functions on forkers should throw a 'ClosedForkError' once the - -- forker is closed. - -- - -- Note: always use this functions before the forker is forgotten! - -- Otherwise, cleanup of (on-disk) state might not be prompt or guaranteed. - -- - -- This function should release any resources that are held by the forker, - -- and not by the LedgerDB. - forkerClose :: !(m ()) - - -- Queries - - -- | Read ledger tables from disk. - , forkerReadTables :: !(LedgerTables l KeysMK -> m (LedgerTables l ValuesMK)) - - -- | Range-read ledger tables from disk. - -- - -- This range read will return as many values as the 'QueryBatchSize' that - -- was passed when opening the LedgerDB. - , forkerRangeReadTables :: !(RangeQueryPrevious l -> m (LedgerTables l ValuesMK)) - - -- | Get the full ledger state without tables. - -- - -- If an empty ledger state is all you need, use 'getVolatileTip', - -- 'getImmutableTip', or 'getPastLedgerState' instead of using a 'Forker'. - , forkerGetLedgerState :: !(STM m (l EmptyMK)) +data Forker m l blk = Forker + { forkerClose :: !(m ()) + -- ^ Close the current forker (idempotent). + -- + -- Other functions on forkers should throw a 'ClosedForkError' once the + -- forker is closed. + -- + -- Note: always use this function before the forker is forgotten! + -- Otherwise, cleanup of (on-disk) state might not be prompt or guaranteed. + -- + -- This function should release any resources that are held by the forker, + -- and not by the LedgerDB. + , -- Queries - -- | Get statistics about the current state of the handle if possible. - -- - -- Returns 'Nothing' if the implementation is backed by @lsm-tree@. + forkerReadTables :: !(LedgerTables l KeysMK -> m (LedgerTables l ValuesMK)) + -- ^ Read ledger tables from disk. 
+ , forkerRangeReadTables :: !(RangeQueryPrevious l -> m (LedgerTables l ValuesMK)) + -- ^ Range-read ledger tables from disk. + -- + -- This range read will return as many values as the 'QueryBatchSize' that + -- was passed when opening the LedgerDB. + , forkerGetLedgerState :: !(STM m (l EmptyMK)) + -- ^ Get the full ledger state without tables. + -- + -- If an empty ledger state is all you need, use 'getVolatileTip', + -- 'getImmutableTip', or 'getPastLedgerState' instead of using a 'Forker'. , forkerReadStatistics :: !(m (Maybe Statistics)) + -- ^ Get statistics about the current state of the handle if possible. + -- + -- Returns 'Nothing' if the implementation is backed by @lsm-tree@. + , -- Updates - -- Updates - - -- | Advance the fork handle by pushing a new ledger state to the tip of the - -- current fork. - , forkerPush :: !(l DiffMK -> m ()) - - -- | Commit the fork, which was constructed using 'forkerPush', as the - -- current version of the LedgerDB. + forkerPush :: !(l DiffMK -> m ()) + -- ^ Advance the fork handle by pushing a new ledger state to the tip of the + -- current fork. , forkerCommit :: !(STM m ()) + -- ^ Commit the fork, which was constructed using 'forkerPush', as the + -- current version of the LedgerDB. } -- | An identifier for a 'Forker'. See 'ldbForkers'. @@ -135,14 +136,16 @@ type instance HeaderHash (Forker m l blk) = HeaderHash l type Forker' m blk = Forker m (ExtLedgerState blk) blk -instance (GetTip l, HeaderHash l ~ HeaderHash blk, MonadSTM m) - => GetTipSTM m (Forker m l blk) where +instance + (GetTip l, HeaderHash l ~ HeaderHash blk, MonadSTM m) => + GetTipSTM m (Forker m l blk) + where getTipSTM forker = castPoint . 
getTip <$> forkerGetLedgerState forker data RangeQueryPrevious l = NoPreviousQuery | PreviousQueryWasFinal | PreviousQueryWasUpTo (TxIn l) -data RangeQuery l = RangeQuery { - rqPrev :: !(RangeQueryPrevious l) +data RangeQuery l = RangeQuery + { rqPrev :: !(RangeQueryPrevious l) , rqCount :: !Int } @@ -151,19 +154,19 @@ data RangeQuery l = RangeQuery { -- -- This is for now the only metric that was requested from other components, but -- this type might be augmented in the future with more statistics. -newtype Statistics = Statistics { - ledgerTableSize :: Int +newtype Statistics = Statistics + { ledgerTableSize :: Int } -- | Errors that can be thrown while acquiring forkers. -data GetForkerError = - -- | The requested point was not found in the LedgerDB, but the point is +data GetForkerError + = -- | The requested point was not found in the LedgerDB, but the point is -- recent enough that the point is not in the immutable part of the chain, -- i.e. it belongs to an unselected fork. PointNotOnChain - -- | The requested point was not found in the LedgerDB because the point + | -- | The requested point was not found in the LedgerDB because the point is -- older than the immutable tip. - | PointTooOld !(Maybe ExceededRollback) + PointTooOld !(Maybe ExceededRollback) deriving (Show, Eq) -- | Exceeded maximum rollback supported by the current ledger DB state @@ -173,17 +176,18 @@ data GetForkerError = -- but that is disallowed by all currently known Ouroboros protocols). -- -- Records both the supported and the requested rollback. 
-data ExceededRollback = ExceededRollback { - rollbackMaximum :: Word64 - , rollbackRequested :: Word64 - } deriving (Show, Eq) +data ExceededRollback = ExceededRollback + { rollbackMaximum :: Word64 + , rollbackRequested :: Word64 + } + deriving (Show, Eq) forkerCurrentPoint :: - (GetTip l, HeaderHash l ~ HeaderHash blk, Functor (STM m)) - => Forker m l blk - -> STM m (Point blk) + (GetTip l, HeaderHash l ~ HeaderHash blk, Functor (STM m)) => + Forker m l blk -> + STM m (Point blk) forkerCurrentPoint forker = - castPoint + castPoint . getTip <$> forkerGetLedgerState forker @@ -202,17 +206,17 @@ forkerCurrentPoint forker = -- -- - Mempool. type ReadOnlyForker :: (Type -> Type) -> LedgerStateKind -> Type -> Type -data ReadOnlyForker m l blk = ReadOnlyForker { - -- | See 'forkerClose' - roforkerClose :: !(m ()) - -- | See 'forkerReadTables' +data ReadOnlyForker m l blk = ReadOnlyForker + { roforkerClose :: !(m ()) + -- ^ See 'forkerClose' , roforkerReadTables :: !(LedgerTables l KeysMK -> m (LedgerTables l ValuesMK)) - -- | See 'forkerRangeReadTables'. + -- ^ See 'forkerReadTables' , roforkerRangeReadTables :: !(RangeQueryPrevious l -> m (LedgerTables l ValuesMK)) - -- | See 'forkerGetLedgerState' - , roforkerGetLedgerState :: !(STM m (l EmptyMK)) - -- | See 'forkerReadStatistics' + -- ^ See 'forkerRangeReadTables'. 
+ , roforkerGetLedgerState :: !(STM m (l EmptyMK)) + -- ^ See 'forkerGetLedgerState' , roforkerReadStatistics :: !(m (Maybe Statistics)) + -- ^ See 'forkerReadStatistics' } type instance HeaderHash (ReadOnlyForker m l blk) = HeaderHash l @@ -220,8 +224,9 @@ type instance HeaderHash (ReadOnlyForker m l blk) = HeaderHash l type ReadOnlyForker' m blk = ReadOnlyForker m (ExtLedgerState blk) blk readOnlyForker :: Forker m l blk -> ReadOnlyForker m l blk -readOnlyForker forker = ReadOnlyForker { - roforkerClose = forkerClose forker +readOnlyForker forker = + ReadOnlyForker + { roforkerClose = forkerClose forker , roforkerReadTables = forkerReadTables forker , roforkerRangeReadTables = forkerRangeReadTables forker , roforkerGetLedgerState = forkerGetLedgerState forker @@ -232,108 +237,121 @@ readOnlyForker forker = ReadOnlyForker { Validation -------------------------------------------------------------------------------} -data ValidateArgs m blk = ValidateArgs { - -- | How to retrieve blocks from headers - resolve :: !(ResolveBlock m blk) - -- | The config +data ValidateArgs m blk = ValidateArgs + { resolve :: !(ResolveBlock m blk) + -- ^ How to retrieve blocks from headers , validateConfig :: !(TopLevelConfig blk) - -- | How to add a previously applied block to the set of known blocks + -- ^ The config , addPrevApplied :: !([RealPoint blk] -> STM m ()) - -- | Get the current set of previously applied blocks + -- ^ How to add a previously applied block to the set of known blocks , prevApplied :: !(STM m (Set (RealPoint blk))) - -- | Create a forker from the tip + -- ^ Get the current set of previously applied blocks , forkerAtFromTip :: !(ResourceRegistry m -> Word64 -> m (Either GetForkerError (Forker' m blk))) - -- | The resource registry + -- ^ Create a forker from the tip , resourceReg :: !(ResourceRegistry m) - -- | A tracer for validate events + -- ^ The resource registry , trace :: !(TraceValidateEvent blk -> m ()) - -- | The block cache + -- ^ A tracer for 
validate events , blockCache :: BlockCache blk - -- | How many blocks to roll back before applying the blocks + -- ^ The block cache , numRollbacks :: Word64 - -- | The headers we want to apply + -- ^ How many blocks to roll back before applying the blocks , hdrs :: [Header blk] + -- ^ The headers we want to apply } validate :: - forall m blk. ( - IOLike m - , LedgerSupportsProtocol blk - , HasCallStack - ) - => ComputeLedgerEvents - -> ValidateArgs m blk - -> m (ValidateResult' m blk) + forall m blk. + ( IOLike m + , LedgerSupportsProtocol blk + , HasCallStack + ) => + ComputeLedgerEvents -> + ValidateArgs m blk -> + m (ValidateResult' m blk) validate evs args = do - aps <- mkAps <$> atomically prevApplied - res <- fmap rewrap $ defaultResolveWithErrors resolve $ - switch - forkerAtFromTip - resourceReg - evs - (ExtLedgerCfg validateConfig) - numRollbacks - (lift . lift . trace) - aps - liftBase $ atomically $ addPrevApplied (validBlockPoints res (map headerRealPoint hdrs)) - return res - where - ValidateArgs { - resolve - , validateConfig - , addPrevApplied - , prevApplied - , forkerAtFromTip - , resourceReg - , trace - , blockCache - , numRollbacks - , hdrs - } = args - - rewrap :: Either (AnnLedgerError' n blk) (Either GetForkerError (Forker' n blk)) - -> ValidateResult' n blk - rewrap (Left e) = ValidateLedgerError e - rewrap (Right (Left (PointTooOld (Just e)))) = ValidateExceededRollBack e - rewrap (Right (Left _)) = error "Unreachable, validating will always rollback from the tip" - rewrap (Right (Right l)) = ValidateSuccessful l - - mkAps :: forall bn n l. 
l ~ ExtLedgerState blk - => Set (RealPoint blk) - -> [Ap bn n l blk ( ResolvesBlocks n blk - , ThrowsLedgerError bn n l blk - )] - mkAps prev = - [ case ( Set.member (headerRealPoint hdr) prev - , BlockCache.lookup (headerHash hdr) blockCache - ) of - (False, Nothing) -> ApplyRef (headerRealPoint hdr) - (True, Nothing) -> Weaken $ ReapplyRef (headerRealPoint hdr) - (False, Just blk) -> Weaken $ ApplyVal blk - (True, Just blk) -> Weaken $ ReapplyVal blk - | hdr <- hdrs - ] - - -- | Based on the 'ValidateResult', return the hashes corresponding to - -- valid blocks. - validBlockPoints :: forall n. ValidateResult' n blk -> [RealPoint blk] -> [RealPoint blk] - validBlockPoints = \case - ValidateExceededRollBack _ -> const [] - ValidateSuccessful _ -> id - ValidateLedgerError e -> takeWhile (/= annLedgerErrRef e) + aps <- mkAps <$> atomically prevApplied + res <- + fmap rewrap $ + defaultResolveWithErrors resolve $ + switch + forkerAtFromTip + resourceReg + evs + (ExtLedgerCfg validateConfig) + numRollbacks + (lift . lift . trace) + aps + liftBase $ atomically $ addPrevApplied (validBlockPoints res (map headerRealPoint hdrs)) + return res + where + ValidateArgs + { resolve + , validateConfig + , addPrevApplied + , prevApplied + , forkerAtFromTip + , resourceReg + , trace + , blockCache + , numRollbacks + , hdrs + } = args + + rewrap :: + Either (AnnLedgerError' n blk) (Either GetForkerError (Forker' n blk)) -> + ValidateResult' n blk + rewrap (Left e) = ValidateLedgerError e + rewrap (Right (Left (PointTooOld (Just e)))) = ValidateExceededRollBack e + rewrap (Right (Left _)) = error "Unreachable, validating will always rollback from the tip" + rewrap (Right (Right l)) = ValidateSuccessful l + + mkAps :: + forall bn n l. 
+ l ~ ExtLedgerState blk => + Set (RealPoint blk) -> + [ Ap + bn + n + l + blk + ( ResolvesBlocks n blk + , ThrowsLedgerError bn n l blk + ) + ] + mkAps prev = + [ case ( Set.member (headerRealPoint hdr) prev + , BlockCache.lookup (headerHash hdr) blockCache + ) of + (False, Nothing) -> ApplyRef (headerRealPoint hdr) + (True, Nothing) -> Weaken $ ReapplyRef (headerRealPoint hdr) + (False, Just blk) -> Weaken $ ApplyVal blk + (True, Just blk) -> Weaken $ ReapplyVal blk + | hdr <- hdrs + ] + + -- \| Based on the 'ValidateResult', return the hashes corresponding to + -- valid blocks. + validBlockPoints :: forall n. ValidateResult' n blk -> [RealPoint blk] -> [RealPoint blk] + validBlockPoints = \case + ValidateExceededRollBack _ -> const [] + ValidateSuccessful _ -> id + ValidateLedgerError e -> takeWhile (/= annLedgerErrRef e) -- | Switch to a fork by rolling back a number of blocks and then pushing the -- new blocks. switch :: - (ApplyBlock l blk, MonadBase bm m, c, MonadSTM bm) - => (ResourceRegistry bm -> Word64 -> bm (Either GetForkerError (Forker bm l blk))) - -> ResourceRegistry bm - -> ComputeLedgerEvents - -> LedgerCfg l - -> Word64 -- ^ How many blocks to roll back - -> (TraceValidateEvent blk -> m ()) - -> [Ap bm m l blk c] -- ^ New blocks to apply - -> m (Either GetForkerError (Forker bm l blk)) + (ApplyBlock l blk, MonadBase bm m, c, MonadSTM bm) => + (ResourceRegistry bm -> Word64 -> bm (Either GetForkerError (Forker bm l blk))) -> + ResourceRegistry bm -> + ComputeLedgerEvents -> + LedgerCfg l -> + -- | How many blocks to roll back + Word64 -> + (TraceValidateEvent blk -> m ()) -> + -- | New blocks to apply + [Ap bm m l blk c] -> + m (Either GetForkerError (Forker bm l blk)) switch forkerAtFromTip rr evs cfg numRollbacks trace newBlocks = do foEith <- liftBase $ forkerAtFromTip rr numRollbacks case foEith of @@ -342,22 +360,23 @@ switch forkerAtFromTip rr evs cfg numRollbacks trace newBlocks = do case newBlocks of [] -> pure () -- no blocks to apply to 
ledger state, return the forker - (firstBlock:_) -> do - let start = PushStart . toRealPoint $ firstBlock - goal = PushGoal . toRealPoint . last $ newBlocks - void $ applyThenPushMany - (trace . StartedPushingBlockToTheLedgerDb start goal) - evs - cfg - newBlocks - fo + (firstBlock : _) -> do + let start = PushStart . toRealPoint $ firstBlock + goal = PushGoal . toRealPoint . last $ newBlocks + void $ + applyThenPushMany + (trace . StartedPushingBlockToTheLedgerDb start goal) + evs + cfg + newBlocks + fo pure $ Right fo {------------------------------------------------------------------------------- Apply blocks -------------------------------------------------------------------------------} -newtype ValidLedgerState l = ValidLedgerState { getValidLedgerState :: l } +newtype ValidLedgerState l = ValidLedgerState {getValidLedgerState :: l} -- | 'Ap' is used to pass information about blocks to ledger DB updates -- @@ -376,12 +395,19 @@ newtype ValidLedgerState l = ValidLedgerState { getValidLedgerState :: l } -- 2. If we are applying rather than reapplying, we might have ledger errors. type Ap :: (Type -> Type) -> (Type -> Type) -> LedgerStateKind -> Type -> Constraint -> Type data Ap bm m l blk c where - ReapplyVal :: blk -> Ap bm m l blk () - ApplyVal :: blk -> Ap bm m l blk ( ThrowsLedgerError bm m l blk ) - ReapplyRef :: RealPoint blk -> Ap bm m l blk ( ResolvesBlocks m blk ) - ApplyRef :: RealPoint blk -> Ap bm m l blk ( ResolvesBlocks m blk - , ThrowsLedgerError bm m l blk ) - + ReapplyVal :: blk -> Ap bm m l blk () + ApplyVal :: blk -> Ap bm m l blk (ThrowsLedgerError bm m l blk) + ReapplyRef :: RealPoint blk -> Ap bm m l blk (ResolvesBlocks m blk) + ApplyRef :: + RealPoint blk -> + Ap + bm + m + l + blk + ( ResolvesBlocks m blk + , ThrowsLedgerError bm m l blk + ) -- | 'Weaken' increases the constraint on the monad @m@. 
-- -- This is primarily useful when combining multiple 'Ap's in a single @@ -390,73 +416,79 @@ data Ap bm m l blk c where toRealPoint :: HasHeader blk => Ap bm m l blk c -> RealPoint blk toRealPoint (ReapplyVal blk) = blockRealPoint blk -toRealPoint (ApplyVal blk) = blockRealPoint blk -toRealPoint (ReapplyRef rp) = rp -toRealPoint (ApplyRef rp) = rp -toRealPoint (Weaken ap) = toRealPoint ap +toRealPoint (ApplyVal blk) = blockRealPoint blk +toRealPoint (ReapplyRef rp) = rp +toRealPoint (ApplyRef rp) = rp +toRealPoint (Weaken ap) = toRealPoint ap -- | Apply blocks to the given forker -applyBlock :: forall m bm c l blk. (ApplyBlock l blk, MonadBase bm m, c, MonadSTM bm) - => ComputeLedgerEvents - -> LedgerCfg l - -> Ap bm m l blk c - -> Forker bm l blk - -> m (ValidLedgerState (l DiffMK)) +applyBlock :: + forall m bm c l blk. + (ApplyBlock l blk, MonadBase bm m, c, MonadSTM bm) => + ComputeLedgerEvents -> + LedgerCfg l -> + Ap bm m l blk c -> + Forker bm l blk -> + m (ValidLedgerState (l DiffMK)) applyBlock evs cfg ap fo = case ap of - ReapplyVal b -> - ValidLedgerState + ReapplyVal b -> + ValidLedgerState <$> withValues b (return . tickThenReapply evs cfg b) - ApplyVal b -> - ValidLedgerState - <$> withValues b - ( either (throwLedgerError fo (blockRealPoint b)) return + ApplyVal b -> + ValidLedgerState + <$> withValues + b + ( either (throwLedgerError fo (blockRealPoint b)) return . runExcept . 
tickThenApply evs cfg b - ) - ReapplyRef r -> do - b <- doResolveBlock r - applyBlock evs cfg (ReapplyVal b) fo - ApplyRef r -> do - b <- doResolveBlock r - applyBlock evs cfg (ApplyVal b) fo - Weaken ap' -> - applyBlock evs cfg ap' fo - where - withValues :: blk -> (l ValuesMK -> m (l DiffMK)) -> m (l DiffMK) - withValues blk f = do - l <- liftBase $ atomically $ forkerGetLedgerState fo - vs <- withLedgerTables l - <$> liftBase (forkerReadTables fo (getBlockKeySets blk)) - f vs + ) + ReapplyRef r -> do + b <- doResolveBlock r + applyBlock evs cfg (ReapplyVal b) fo + ApplyRef r -> do + b <- doResolveBlock r + applyBlock evs cfg (ApplyVal b) fo + Weaken ap' -> + applyBlock evs cfg ap' fo + where + withValues :: blk -> (l ValuesMK -> m (l DiffMK)) -> m (l DiffMK) + withValues blk f = do + l <- liftBase $ atomically $ forkerGetLedgerState fo + vs <- + withLedgerTables l + <$> liftBase (forkerReadTables fo (getBlockKeySets blk)) + f vs -- | If applying a block on top of the ledger state at the tip is succesful, -- push the resulting ledger state to the forker. -- -- Note that we require @c@ (from the particular choice of @Ap m l blk c@) so -- this sometimes can throw ledger errors. -applyThenPush :: (ApplyBlock l blk, MonadBase bm m, c, MonadSTM bm) - => ComputeLedgerEvents - -> LedgerCfg l - -> Ap bm m l blk c - -> Forker bm l blk - -> m () +applyThenPush :: + (ApplyBlock l blk, MonadBase bm m, c, MonadSTM bm) => + ComputeLedgerEvents -> + LedgerCfg l -> + Ap bm m l blk c -> + Forker bm l blk -> + m () applyThenPush evs cfg ap fo = - liftBase . forkerPush fo . getValidLedgerState =<< - applyBlock evs cfg ap fo + liftBase . forkerPush fo . getValidLedgerState + =<< applyBlock evs cfg ap fo -- | Apply and push a sequence of blocks (oldest first). 
-applyThenPushMany :: (ApplyBlock l blk, MonadBase bm m, c, MonadSTM bm) - => (Pushing blk -> m ()) - -> ComputeLedgerEvents - -> LedgerCfg l - -> [Ap bm m l blk c] - -> Forker bm l blk - -> m () +applyThenPushMany :: + (ApplyBlock l blk, MonadBase bm m, c, MonadSTM bm) => + (Pushing blk -> m ()) -> + ComputeLedgerEvents -> + LedgerCfg l -> + [Ap bm m l blk c] -> + Forker bm l blk -> + m () applyThenPushMany trace evs cfg aps fo = mapM_ pushAndTrace aps - where - pushAndTrace ap = do - trace $ Pushing . toRealPoint $ ap - applyThenPush evs cfg ap fo + where + pushAndTrace ap = do + trace $ Pushing . toRealPoint $ ap + applyThenPush evs cfg ap fo {------------------------------------------------------------------------------- Annotated ledger errors @@ -468,17 +500,20 @@ class Monad m => ThrowsLedgerError bm m l blk where instance Monad m => ThrowsLedgerError bm (ExceptT (AnnLedgerError bm l blk) m) l blk where throwLedgerError f l r = throwError $ AnnLedgerError f l r -defaultThrowLedgerErrors :: ExceptT (AnnLedgerError bm l blk) m a - -> m (Either (AnnLedgerError bm l blk) a) +defaultThrowLedgerErrors :: + ExceptT (AnnLedgerError bm l blk) m a -> + m (Either (AnnLedgerError bm l blk) a) defaultThrowLedgerErrors = runExceptT -defaultResolveWithErrors :: ResolveBlock m blk - -> ExceptT (AnnLedgerError bm l blk) - (ReaderT (ResolveBlock m blk) m) - a - -> m (Either (AnnLedgerError bm l blk) a) +defaultResolveWithErrors :: + ResolveBlock m blk -> + ExceptT + (AnnLedgerError bm l blk) + (ReaderT (ResolveBlock m blk) m) + a -> + m (Either (AnnLedgerError bm l blk) a) defaultResolveWithErrors resolve = - defaultResolveBlocks resolve + defaultResolveBlocks resolve . 
defaultThrowLedgerErrors {------------------------------------------------------------------------------- @@ -507,14 +542,17 @@ class Monad m => ResolvesBlocks m blk | m -> blk where instance Monad m => ResolvesBlocks (ReaderT (ResolveBlock m blk) m) blk where doResolveBlock r = ReaderT $ \f -> f r -defaultResolveBlocks :: ResolveBlock m blk - -> ReaderT (ResolveBlock m blk) m a - -> m a +defaultResolveBlocks :: + ResolveBlock m blk -> + ReaderT (ResolveBlock m blk) m a -> + m a defaultResolveBlocks = flip runReaderT -- Quite a specific instance so we can satisfy the fundep -instance Monad m - => ResolvesBlocks (ExceptT e (ReaderT (ResolveBlock m blk) m)) blk where +instance + Monad m => + ResolvesBlocks (ExceptT e (ReaderT (ResolveBlock m blk) m)) blk + where doResolveBlock = lift . doResolveBlock {------------------------------------------------------------------------------- @@ -522,9 +560,9 @@ instance Monad m -------------------------------------------------------------------------------} -- | When validating a sequence of blocks, these are the possible outcomes. 
-data ValidateResult m l blk = - ValidateSuccessful (Forker m l blk) - | ValidateLedgerError (AnnLedgerError m l blk) +data ValidateResult m l blk + = ValidateSuccessful (Forker m l blk) + | ValidateLedgerError (AnnLedgerError m l blk) | ValidateExceededRollBack ExceededRollback type ValidateResult' m blk = ValidateResult m (ExtLedgerState blk) blk @@ -534,16 +572,14 @@ type ValidateResult' m blk = ValidateResult m (ExtLedgerState blk) blk -------------------------------------------------------------------------------} -- | Annotated ledger errors -data AnnLedgerError m l blk = AnnLedgerError { - -- | The ledger DB just /before/ this block was applied - annLedgerState :: Forker m l blk - - -- | Reference to the block that had the error - , annLedgerErrRef :: RealPoint blk - - -- | The ledger error itself - , annLedgerErr :: LedgerErr l - } +data AnnLedgerError m l blk = AnnLedgerError + { annLedgerState :: Forker m l blk + -- ^ The ledger DB just /before/ this block was applied + , annLedgerErrRef :: RealPoint blk + -- ^ Reference to the block that had the error + , annLedgerErr :: LedgerErr l + -- ^ The ledger error itself + } type AnnLedgerError' m blk = AnnLedgerError m (ExtLedgerState blk) blk @@ -551,38 +587,38 @@ type AnnLedgerError' m blk = AnnLedgerError m (ExtLedgerState blk) blk Trace validation events -------------------------------------------------------------------------------} -newtype PushStart blk = PushStart { unPushStart :: RealPoint blk } +newtype PushStart blk = PushStart {unPushStart :: RealPoint blk} deriving (Show, Eq) -newtype PushGoal blk = PushGoal { unPushGoal :: RealPoint blk } +newtype PushGoal blk = PushGoal {unPushGoal :: RealPoint blk} deriving (Show, Eq) -newtype Pushing blk = Pushing { unPushing :: RealPoint blk } +newtype Pushing blk = Pushing {unPushing :: RealPoint blk} deriving (Show, Eq) -data TraceValidateEvent blk = - -- | Event fired when we are about to push a block to a forker - StartedPushingBlockToTheLedgerDb - 
!(PushStart blk) - -- ^ Point from which we started pushing new blocks - (PushGoal blk) - -- ^ Point to which we are updating the ledger, the last event - -- StartedPushingBlockToTheLedgerDb will have Pushing and PushGoal - -- wrapping over the same RealPoint - !(Pushing blk) - -- ^ Point which block we are about to push +data TraceValidateEvent blk + = -- | Event fired when we are about to push a block to a forker + StartedPushingBlockToTheLedgerDb + -- | Point from which we started pushing new blocks + !(PushStart blk) + -- | Point to which we are updating the ledger, the last event + -- StartedPushingBlockToTheLedgerDb will have Pushing and PushGoal + -- wrapping over the same RealPoint + (PushGoal blk) + -- | Point which block we are about to push + !(Pushing blk) deriving (Show, Eq, Generic) {------------------------------------------------------------------------------- Forker events -------------------------------------------------------------------------------} -data TraceForkerEventWithKey = - TraceForkerEventWithKey ForkerKey TraceForkerEvent +data TraceForkerEventWithKey + = TraceForkerEventWithKey ForkerKey TraceForkerEvent deriving (Show, Eq) -data TraceForkerEvent = - ForkerOpen +data TraceForkerEvent + = ForkerOpen | ForkerCloseUncommitted | ForkerCloseCommitted | ForkerReadTablesStart diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Snapshots.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Snapshots.hs index 3c49783790..166c2cedf3 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Snapshots.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/Snapshots.hs @@ -17,8 +17,8 @@ -- restart the node without having to replay the whole chain. Regardless of the -- actual LedgerDB implementation chosen, the general management of snapshots is -- common to all implementations. 
-module Ouroboros.Consensus.Storage.LedgerDB.Snapshots ( - -- * Snapshots +module Ouroboros.Consensus.Storage.LedgerDB.Snapshots + ( -- * Snapshots CRCError (..) , DiskSnapshot (..) , MetadataErr (..) @@ -29,9 +29,11 @@ module Ouroboros.Consensus.Storage.LedgerDB.Snapshots ( , SnapshotMetadata (..) , SnapshotPolicyArgs (..) , defaultSnapshotPolicyArgs + -- * Codec , readExtLedgerState , writeExtLedgerState + -- * Paths , diskSnapshotIsTemporary , snapshotFromPath @@ -39,22 +41,27 @@ module Ouroboros.Consensus.Storage.LedgerDB.Snapshots ( , snapshotToDirName , snapshotToDirPath , snapshotToMetadataPath + -- * Management , deleteSnapshot , listSnapshots , loadSnapshotMetadata , trimSnapshots , writeSnapshotMetadata + -- * Policy , SnapshotInterval (..) , SnapshotPolicy (..) , defaultSnapshotPolicy , pattern DoDiskSnapshotChecksum , pattern NoDoDiskSnapshotChecksum + -- * Tracing , TraceSnapshotEvent (..) + -- * Re-exports , Flag (..) + -- * Testing , decodeLBackwardsCompatible , destroySnapshots @@ -62,116 +69,119 @@ module Ouroboros.Consensus.Storage.LedgerDB.Snapshots ( , snapshotsMapM_ ) where -import Cardano.Ledger.BaseTypes -import Codec.CBOR.Decoding -import Codec.CBOR.Encoding -import qualified Codec.CBOR.Write as CBOR -import qualified Codec.Serialise.Decoding as Dec -import Control.Monad -import qualified Control.Monad as Monad -import Control.Monad.Class.MonadTime.SI -import Control.Monad.Except -import Control.Tracer -import Data.Aeson (FromJSON (..), ToJSON (..), (.:), (.=)) -import qualified Data.Aeson as Aeson -import Data.Functor.Identity -import qualified Data.List as List -import Data.Maybe (isJust, mapMaybe) -import Data.Ord -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Time.Clock (secondsToDiffTime) -import Data.Word -import GHC.Generics -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract (EmptyMK) -import Ouroboros.Consensus.Ledger.Extended 
-import Ouroboros.Consensus.Util (Flag (..)) -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.CBOR (ReadIncrementalErr, - decodeWithOrigin, readIncremental) -import Ouroboros.Consensus.Util.CRC -import Ouroboros.Consensus.Util.Enclose -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Versioned -import System.FS.API -import System.FS.API.Lazy -import System.FS.CRC -import Text.Read (readMaybe) +import Cardano.Ledger.BaseTypes +import Codec.CBOR.Decoding +import Codec.CBOR.Encoding +import Codec.CBOR.Write qualified as CBOR +import Codec.Serialise.Decoding qualified as Dec +import Control.Monad +import Control.Monad qualified as Monad +import Control.Monad.Class.MonadTime.SI +import Control.Monad.Except +import Control.Tracer +import Data.Aeson (FromJSON (..), ToJSON (..), (.:), (.=)) +import Data.Aeson qualified as Aeson +import Data.Functor.Identity +import Data.List qualified as List +import Data.Maybe (isJust, mapMaybe) +import Data.Ord +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Time.Clock (secondsToDiffTime) +import Data.Word +import GHC.Generics +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract (EmptyMK) +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Util (Flag (..)) +import Ouroboros.Consensus.Util.CBOR + ( ReadIncrementalErr + , decodeWithOrigin + , readIncremental + ) +import Ouroboros.Consensus.Util.CRC +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.Enclose +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Versioned +import System.FS.API +import System.FS.API.Lazy +import System.FS.CRC +import Text.Read (readMaybe) -- | Name of a disk snapshot. -- -- The snapshot itself might not yet exist on disk. -data DiskSnapshot = DiskSnapshot { - -- | Snapshots are numbered. 
We will try the snapshots with the highest
-      -- number first.
-      --
-      -- When creating a snapshot, we use the slot number of the ledger state it
-      -- corresponds to as the snapshot number. This gives an indication of how
-      -- recent the snapshot is.
-      --
-      -- Note that the snapshot names are only indicative, we don't rely on the
-      -- snapshot number matching the slot number of the corresponding ledger
-      -- state. We only use the snapshots numbers to determine the order in
-      -- which we try them.
-      dsNumber :: Word64
-
-      -- | Snapshots can optionally have a suffix, separated by the snapshot
-      -- number with an underscore, e.g., @4492799_last_Byron@. This suffix acts
-      -- as metadata for the operator of the node. Snapshots with a suffix will
-      -- /not be deleted/.
-    , dsSuffix :: Maybe String
-    }
+data DiskSnapshot = DiskSnapshot
+  { dsNumber :: Word64
+  -- ^ Snapshots are numbered. We will try the snapshots with the highest
+  -- number first.
+  --
+  -- When creating a snapshot, we use the slot number of the ledger state it
+  -- corresponds to as the snapshot number. This gives an indication of how
+  -- recent the snapshot is.
+  --
+  -- Note that the snapshot names are only indicative, we don't rely on the
+  -- snapshot number matching the slot number of the corresponding ledger
+  -- state. We only use the snapshot numbers to determine the order in
+  -- which we try them.
+  , dsSuffix :: Maybe String
+  -- ^ Snapshots can optionally have a suffix, separated by the snapshot
+  -- number with an underscore, e.g., @4492799_last_Byron@. This suffix acts
+  -- as metadata for the operator of the node. Snapshots with a suffix will
+  -- /not be deleted/.
+ } deriving (Show, Eq, Generic) instance Ord DiskSnapshot where compare = comparing dsNumber -data SnapshotFailure blk = - -- | We failed to deserialise the snapshot +data SnapshotFailure blk + = -- | We failed to deserialise the snapshot -- -- This can happen due to data corruption in the ledger DB or if the codecs -- changed. InitFailureRead ReadSnapshotErr - - -- | This snapshot is too recent (ahead of the tip of the immutable chain) - | InitFailureTooRecent (RealPoint blk) - - -- | This snapshot was of the ledger state at genesis, even though we never + | -- | This snapshot is too recent (ahead of the tip of the immutable chain) + InitFailureTooRecent (RealPoint blk) + | -- | This snapshot was of the ledger state at genesis, even though we never -- take snapshots at genesis, so this is unexpected. - | InitFailureGenesis + InitFailureGenesis deriving (Show, Eq, Generic) -data ReadSnapshotErr = - -- | Error while de-serialising data +data ReadSnapshotErr + = -- | Error while de-serialising data ReadSnapshotFailed ReadIncrementalErr - -- | Checksum of read snapshot differs from the one tracked by + | -- | Checksum of read snapshot differs from the one tracked by -- its corresponding metadata file - | ReadSnapshotDataCorruption - -- | An error occurred while reading the snapshot metadata file - | ReadMetadataError FsPath MetadataErr + ReadSnapshotDataCorruption + | -- | An error occurred while reading the snapshot metadata file + ReadMetadataError FsPath MetadataErr deriving (Eq, Show) data SnapshotMetadata = SnapshotMetadata - { snapshotBackend :: SnapshotBackend + { snapshotBackend :: SnapshotBackend , snapshotChecksum :: CRC - } deriving (Eq, Show) + } + deriving (Eq, Show) instance ToJSON SnapshotMetadata where - toJSON sm = Aeson.object - [ "backend" .= snapshotBackend sm - , "checksum" .= getCRC (snapshotChecksum sm) - ] + toJSON sm = + Aeson.object + [ "backend" .= snapshotBackend sm + , "checksum" .= getCRC (snapshotChecksum sm) + ] instance FromJSON 
SnapshotMetadata where parseJSON = Aeson.withObject "SnapshotMetadata" $ \o -> - SnapshotMetadata <$> o .: "backend" - <*> fmap CRC (o .: "checksum") + SnapshotMetadata + <$> o .: "backend" + <*> fmap CRC (o .: "checksum") -data SnapshotBackend = - UTxOHDMemSnapshot +data SnapshotBackend + = UTxOHDMemSnapshot | UTxOHDLMDBSnapshot deriving (Eq, Show) @@ -186,13 +196,13 @@ instance FromJSON SnapshotBackend where "utxohd-lmdb" -> pure UTxOHDLMDBSnapshot _ -> fail "unknown SnapshotBackend" -data MetadataErr = - -- | The metadata file does not exist +data MetadataErr + = -- | The metadata file does not exist MetadataFileDoesNotExist - -- | The metadata file is invalid and does not deserialize - | MetadataInvalid String - -- | The metadata file has the incorrect backend - | MetadataBackendMismatch + | -- | The metadata file is invalid and does not deserialize + MetadataInvalid String + | -- | The metadata file has the incorrect backend + MetadataBackendMismatch deriving (Eq, Show) -- | Named snapshot are permanent, they will never be deleted even if failing to @@ -207,23 +217,23 @@ diskSnapshotIsTemporary = not . diskSnapshotIsPermanent snapshotFromPath :: String -> Maybe DiskSnapshot snapshotFromPath fileName = do - number <- readMaybe prefix - return $ DiskSnapshot number suffix' - where - (prefix, suffix) = break (== '_') fileName + number <- readMaybe prefix + return $ DiskSnapshot number suffix' + where + (prefix, suffix) = break (== '_') fileName - suffix' :: Maybe String - suffix' = case suffix of - "" -> Nothing - _ : str -> Just str + suffix' :: Maybe String + suffix' = case suffix of + "" -> Nothing + _ : str -> Just str -- | List on-disk snapshots, highest number first. listSnapshots :: Monad m => SomeHasFS m -> m [DiskSnapshot] listSnapshots (SomeHasFS HasFS{listDirectory}) = - aux <$> listDirectory (mkFsPath []) - where - aux :: Set String -> [DiskSnapshot] - aux = List.sortOn (Down . dsNumber) . mapMaybe snapshotFromPath . 
Set.toList + aux <$> listDirectory (mkFsPath []) + where + aux :: Set String -> [DiskSnapshot] + aux = List.sortOn (Down . dsNumber) . mapMaybe snapshotFromPath . Set.toList -- | Delete snapshot from disk deleteSnapshot :: (Monad m, HasCallStack) => SomeHasFS m -> DiskSnapshot -> m () @@ -234,11 +244,11 @@ deleteSnapshot (SomeHasFS HasFS{doesDirectoryExist, removeDirectoryRecursive}) s -- | Write a snapshot metadata JSON file. writeSnapshotMetadata :: - MonadThrow m - => SomeHasFS m - -> DiskSnapshot - -> SnapshotMetadata - -> m () + MonadThrow m => + SomeHasFS m -> + DiskSnapshot -> + SnapshotMetadata -> + m () writeSnapshotMetadata (SomeHasFS hasFS) ds meta = do let metadataPath = snapshotToMetadataPath ds withFile hasFS metadataPath (WriteMode MustBeNew) $ \h -> @@ -250,10 +260,10 @@ writeSnapshotMetadata (SomeHasFS hasFS) ds meta = do -- - Fails with 'MetadataInvalid' when the contents of the file cannot be -- deserialised correctly loadSnapshotMetadata :: - IOLike m - => SomeHasFS m - -> DiskSnapshot - -> ExceptT MetadataErr m SnapshotMetadata + IOLike m => + SomeHasFS m -> + DiskSnapshot -> + ExceptT MetadataErr m SnapshotMetadata loadSnapshotMetadata (SomeHasFS hasFS) ds = ExceptT $ do let metadataPath = snapshotToMetadataPath ds exists <- doesFileExist hasFS metadataPath @@ -264,84 +274,93 @@ loadSnapshotMetadata (SomeHasFS hasFS) ds = ExceptT $ do bs <- hGetAll hasFS h case Aeson.eitherDecode bs of Left decodeErr -> pure $ Left $ MetadataInvalid decodeErr - Right meta -> pure $ Right meta + Right meta -> pure $ Right meta snapshotsMapM_ :: Monad m => SomeHasFS m -> (FilePath -> m a) -> m () snapshotsMapM_ (SomeHasFS fs) f = do - mapM_ f =<< Set.lookupMax . Set.filter (isJust . snapshotFromPath) <$> listDirectory fs (mkFsPath []) + mapM_ f + =<< Set.lookupMax . Set.filter (isJust . snapshotFromPath) <$> listDirectory fs (mkFsPath []) -- | Testing only! Destroy all snapshots in the DB. 
destroySnapshots :: Monad m => SomeHasFS m -> m () destroySnapshots sfs@(SomeHasFS fs) = do - snapshotsMapM_ sfs ((\d -> do - isDir <- doesDirectoryExist fs d - if isDir - then removeDirectoryRecursive fs d - else removeFile fs d - ) . mkFsPath . (:[])) + snapshotsMapM_ + sfs + ( ( \d -> do + isDir <- doesDirectoryExist fs d + if isDir + then removeDirectoryRecursive fs d + else removeFile fs d + ) + . mkFsPath + . (: []) + ) -- | Read an extended ledger state from disk readExtLedgerState :: - forall m blk. IOLike m - => SomeHasFS m - -> (forall s. Decoder s (ExtLedgerState blk EmptyMK)) - -> (forall s. Decoder s (HeaderHash blk)) - -> FsPath - -> ExceptT ReadIncrementalErr m (ExtLedgerState blk EmptyMK, CRC) -readExtLedgerState hasFS decLedger decHash = do - ExceptT + forall m blk. + IOLike m => + SomeHasFS m -> + (forall s. Decoder s (ExtLedgerState blk EmptyMK)) -> + (forall s. Decoder s (HeaderHash blk)) -> + FsPath -> + ExceptT ReadIncrementalErr m (ExtLedgerState blk EmptyMK, CRC) +readExtLedgerState hasFS decLedger decHash = + do + ExceptT . fmap (fmap (fmap runIdentity)) . readIncremental hasFS Identity decoder - where - decoder :: Decoder s (ExtLedgerState blk EmptyMK) - decoder = decodeLBackwardsCompatible (Proxy @blk) decLedger decHash + where + decoder :: Decoder s (ExtLedgerState blk EmptyMK) + decoder = decodeLBackwardsCompatible (Proxy @blk) decLedger decHash -- | Write an extended ledger state to disk writeExtLedgerState :: - forall m blk. MonadThrow m - => SomeHasFS m - -> (ExtLedgerState blk EmptyMK -> Encoding) - -> FsPath - -> ExtLedgerState blk EmptyMK - -> m CRC + forall m blk. 
+  MonadThrow m =>
+  SomeHasFS m ->
+  (ExtLedgerState blk EmptyMK -> Encoding) ->
+  FsPath ->
+  ExtLedgerState blk EmptyMK ->
+  m CRC
 writeExtLedgerState (SomeHasFS hasFS) encLedger path cs = do
-    withFile hasFS path (WriteMode MustBeNew) $ \h ->
-      snd <$> hPutAllCRC hasFS h (CBOR.toLazyByteString $ encoder cs)
-  where
-    encoder :: ExtLedgerState blk EmptyMK -> Encoding
-    encoder = encodeL encLedger
+  withFile hasFS path (WriteMode MustBeNew) $ \h ->
+    snd <$> hPutAllCRC hasFS h (CBOR.toLazyByteString $ encoder cs)
+ where
+  encoder :: ExtLedgerState blk EmptyMK -> Encoding
+  encoder = encodeL encLedger

 -- | Trim the number of on disk snapshots so that at most 'onDiskNumSnapshots'
 -- snapshots are stored on disk. The oldest snapshots are deleted.
 --
 -- The deleted snapshots are returned.
 trimSnapshots ::
-     Monad m
-  => Tracer m (TraceSnapshotEvent r)
-  -> SomeHasFS m
-  -> SnapshotPolicy
-  -> m [DiskSnapshot]
+  Monad m =>
+  Tracer m (TraceSnapshotEvent r) ->
+  SomeHasFS m ->
+  SnapshotPolicy ->
+  m [DiskSnapshot]
 trimSnapshots tracer fs SnapshotPolicy{onDiskNumSnapshots} = do
-    -- We only trim temporary snapshots
-    ss <- filter diskSnapshotIsTemporary <$> listSnapshots fs
-    -- The snapshot are most recent first, so we can simply drop from the
-    -- front to get the snapshots that are "too" old.
-    let ssTooOld = drop (fromIntegral onDiskNumSnapshots) ss
-    mapM
-      (\s -> do
-          deleteSnapshot fs s
-          traceWith tracer $ DeletedSnapshot s
-          pure s
-      )
-      ssTooOld
+  -- We only trim temporary snapshots
+  ss <- filter diskSnapshotIsTemporary <$> listSnapshots fs
+  -- The snapshots are most recent first, so we can simply drop from the
+  -- front to get the snapshots that are "too" old.
+ let ssTooOld = drop (fromIntegral onDiskNumSnapshots) ss + mapM + ( \s -> do + deleteSnapshot fs s + traceWith tracer $ DeletedSnapshot s + pure s + ) + ssTooOld snapshotToDirName :: DiskSnapshot -> String -snapshotToDirName DiskSnapshot { dsNumber, dsSuffix } = - show dsNumber <> suffix - where - suffix = case dsSuffix of - Nothing -> "" - Just s -> "_" <> s +snapshotToDirName DiskSnapshot{dsNumber, dsSuffix} = + show dsNumber <> suffix + where + suffix = case dsSuffix of + Nothing -> "" + Just s -> "_" <> s snapshotToChecksumPath :: DiskSnapshot -> FsPath snapshotToChecksumPath = mkFsPath . (\x -> [x, "checksum"]) . snapshotToDirName @@ -351,7 +370,7 @@ snapshotToMetadataPath = mkFsPath . (\x -> [x, "meta"]) . snapshotToDirName -- | The path within the LedgerDB's filesystem to the snapshot's directory snapshotToDirPath :: DiskSnapshot -> FsPath -snapshotToDirPath = mkFsPath . (:[]) . snapshotToDirName +snapshotToDirPath = mkFsPath . (: []) . snapshotToDirName -- | Version 1: uses versioning ('Ouroboros.Consensus.Util.Versioned') and only -- encodes the ledger state @l@. @@ -361,7 +380,7 @@ snapshotEncodingVersion1 = 1 -- | Encoder to be used in combination with 'decodeSnapshotBackwardsCompatible'. encodeL :: (l -> Encoding) -> l -> Encoding encodeL encodeLedger l = - encodeVersion snapshotEncodingVersion1 (encodeLedger l) + encodeVersion snapshotEncodingVersion1 (encodeLedger l) -- | To remain backwards compatible with existing snapshots stored on disk, we -- must accept the old format as well as the new format. @@ -379,37 +398,39 @@ encodeL encodeLedger l = -- This decoder will accept and ignore them. The encoder ('encodeSnapshot') will -- no longer encode them. decodeLBackwardsCompatible :: - forall l blk. - Proxy blk - -> (forall s. Decoder s l) - -> (forall s. Decoder s (HeaderHash blk)) - -> forall s. Decoder s l + forall l blk. + Proxy blk -> + (forall s. Decoder s l) -> + (forall s. Decoder s (HeaderHash blk)) -> + forall s. 
+ Decoder s l decodeLBackwardsCompatible _ decodeLedger decodeHash = - decodeVersionWithHook - decodeOldFormat - [(snapshotEncodingVersion1, Decode decodeVersion1)] - where - decodeVersion1 :: forall s. Decoder s l - decodeVersion1 = decodeLedger - - decodeOldFormat :: Maybe Int -> forall s. Decoder s l - decodeOldFormat (Just 3) = do - _ <- withOriginRealPointToPoint <$> - decodeWithOrigin (decodeRealPoint @blk decodeHash) - _ <- Dec.decodeWord64 - decodeLedger - decodeOldFormat mbListLen = - fail $ - "decodeSnapshotBackwardsCompatible: invalid start " <> - show mbListLen + decodeVersionWithHook + decodeOldFormat + [(snapshotEncodingVersion1, Decode decodeVersion1)] + where + decodeVersion1 :: forall s. Decoder s l + decodeVersion1 = decodeLedger + + decodeOldFormat :: Maybe Int -> forall s. Decoder s l + decodeOldFormat (Just 3) = do + _ <- + withOriginRealPointToPoint + <$> decodeWithOrigin (decodeRealPoint @blk decodeHash) + _ <- Dec.decodeWord64 + decodeLedger + decodeOldFormat mbListLen = + fail $ + "decodeSnapshotBackwardsCompatible: invalid start " + <> show mbListLen {------------------------------------------------------------------------------- Policy -------------------------------------------------------------------------------} -- | Length of time that has to pass after which a snapshot is taken. -data SnapshotInterval = - DefaultSnapshotInterval +data SnapshotInterval + = DefaultSnapshotInterval | RequestedSnapshotInterval DiffTime | DisableSnapshots deriving stock (Eq, Generic, Show) @@ -417,8 +438,8 @@ data SnapshotInterval = -- | Number of snapshots to be stored on disk. This is either the default value -- as determined by the @'SnapshotPolicy'@, or it is provided by the user. See the -- @'SnapshotPolicy'@ documentation for more information. 
-data NumOfDiskSnapshots = - DefaultNumOfDiskSnapshots +data NumOfDiskSnapshots + = DefaultNumOfDiskSnapshots | RequestedNumOfDiskSnapshots Word deriving stock (Eq, Generic, Show) @@ -434,52 +455,51 @@ pattern NoDoDiskSnapshotChecksum = Flag False -- We only write ledger states that are older than @k@ blocks to disk (that is, -- snapshots that are guaranteed valid). The on-disk policy determines how often -- we write to disk and how many checkpoints we keep. -data SnapshotPolicy = SnapshotPolicy { - -- | How many snapshots do we want to keep on disk? - -- - -- A higher number of on-disk snapshots is primarily a safe-guard against - -- disk corruption: it trades disk space for reliability. - -- - -- Examples: - -- - -- * @0@: Delete the snapshot immediately after writing. - -- Probably not a useful value :-D - -- * @1@: Delete the previous snapshot immediately after writing the next - -- Dangerous policy: if for some reason the deletion happens before - -- the new snapshot is written entirely to disk (we don't @fsync@), - -- we have no choice but to start at the genesis snapshot on the - -- next startup. - -- * @2@: Always keep 2 snapshots around. This means that when we write - -- the next snapshot, we delete the oldest one, leaving the middle - -- one available in case of truncation of the write. This is - -- probably a sane value in most circumstances. - onDiskNumSnapshots :: Word - - -- | Should we write a snapshot of the ledger state to disk? - -- - -- This function is passed two bits of information: - -- - -- * The time since the last snapshot, or 'NoSnapshotTakenYet' if none was taken yet. - -- Note that 'NoSnapshotTakenYet' merely means no snapshot had been taking yet - -- since the node was started; it does not necessarily mean that none - -- exist on disk. - -- - -- * The distance in terms of blocks applied to the /oldest/ ledger - -- snapshot in memory. 
During normal operation, this is the number of
-      -- blocks written to the ImmutableDB since the last snapshot. On
-      -- startup, it is computed by counting how many immutable blocks we had
-      -- to reapply to get to the chain tip. This is useful, as it allows the
-      -- policy to decide to take a snapshot /on node startup/ if a lot of
-      -- blocks had to be replayed.
-      --
-      -- See also 'defaultSnapshotPolicy'
-    , onDiskShouldTakeSnapshot :: Maybe DiffTime -> Word64 -> Bool
-    }
+data SnapshotPolicy = SnapshotPolicy
+  { onDiskNumSnapshots :: Word
+  -- ^ How many snapshots do we want to keep on disk?
+  --
+  -- A higher number of on-disk snapshots is primarily a safe-guard against
+  -- disk corruption: it trades disk space for reliability.
+  --
+  -- Examples:
+  --
+  -- * @0@: Delete the snapshot immediately after writing.
+  --   Probably not a useful value :-D
+  -- * @1@: Delete the previous snapshot immediately after writing the next
+  --   Dangerous policy: if for some reason the deletion happens before
+  --   the new snapshot is written entirely to disk (we don't @fsync@),
+  --   we have no choice but to start at the genesis snapshot on the
+  --   next startup.
+  -- * @2@: Always keep 2 snapshots around. This means that when we write
+  --   the next snapshot, we delete the oldest one, leaving the middle
+  --   one available in case of truncation of the write. This is
+  --   probably a sane value in most circumstances.
+  , onDiskShouldTakeSnapshot :: Maybe DiffTime -> Word64 -> Bool
+  -- ^ Should we write a snapshot of the ledger state to disk?
+  --
+  -- This function is passed two bits of information:
+  --
+  -- * The time since the last snapshot, or 'NoSnapshotTakenYet' if none was taken yet.
+  --   Note that 'NoSnapshotTakenYet' merely means no snapshot had been taken yet
+  --   since the node was started; it does not necessarily mean that none
+  --   exist on disk.
+  --
+  -- * The distance in terms of blocks applied to the /oldest/ ledger
+  --   snapshot in memory.
During normal operation, this is the number of + -- blocks written to the ImmutableDB since the last snapshot. On + -- startup, it is computed by counting how many immutable blocks we had + -- to reapply to get to the chain tip. This is useful, as it allows the + -- policy to decide to take a snapshot /on node startup/ if a lot of + -- blocks had to be replayed. + -- + -- See also 'defaultSnapshotPolicy' + } deriving NoThunks via OnlyCheckWhnf SnapshotPolicy -data SnapshotPolicyArgs = SnapshotPolicyArgs { - spaInterval :: !SnapshotInterval - , spaNum :: !NumOfDiskSnapshots +data SnapshotPolicyArgs = SnapshotPolicyArgs + { spaInterval :: !SnapshotInterval + , spaNum :: !NumOfDiskSnapshots } defaultSnapshotPolicyArgs :: SnapshotPolicyArgs @@ -489,28 +509,27 @@ defaultSnapshotPolicyArgs = DefaultNumOfDiskSnapshots -- | Default on-disk policy suitable to use with cardano-node --- defaultSnapshotPolicy :: - SecurityParam - -> SnapshotPolicyArgs - -> SnapshotPolicy + SecurityParam -> + SnapshotPolicyArgs -> + SnapshotPolicy defaultSnapshotPolicy (SecurityParam k) (SnapshotPolicyArgs requestedInterval reqNumOfSnapshots) = - SnapshotPolicy { - onDiskNumSnapshots + SnapshotPolicy + { onDiskNumSnapshots , onDiskShouldTakeSnapshot } - where + where onDiskNumSnapshots :: Word onDiskNumSnapshots = case reqNumOfSnapshots of - DefaultNumOfDiskSnapshots -> 2 + DefaultNumOfDiskSnapshots -> 2 RequestedNumOfDiskSnapshots value -> value onDiskShouldTakeSnapshot :: - Maybe DiffTime - -> Word64 - -> Bool + Maybe DiffTime -> + Word64 -> + Bool onDiskShouldTakeSnapshot Nothing blocksSinceLast = -- If users never leave their wallet running for long, this would mean -- that under some circumstances we would never take a snapshot @@ -520,42 +539,41 @@ defaultSnapshotPolicy -- take a snapshot roughly every @k@ blocks. It does mean the possibility of -- an extra unnecessary snapshot during syncing (if the node is restarted), but -- that is not a big deal. 
- blocksSinceLast >=unNonZero k - + blocksSinceLast >= unNonZero k onDiskShouldTakeSnapshot (Just timeSinceLast) blocksSinceLast = - snapshotInterval timeSinceLast - || substantialAmountOfBlocksWereProcessed blocksSinceLast timeSinceLast + snapshotInterval timeSinceLast + || substantialAmountOfBlocksWereProcessed blocksSinceLast timeSinceLast - -- | We want to create a snapshot after a substantial amount of blocks were + -- \| We want to create a snapshot after a substantial amount of blocks were -- processed (hard-coded to 50k blocks). Given the fact that during bootstrap -- a fresh node will see a lot of blocks over a short period of time, we want -- to limit this condition to happen not more often then a fixed amount of -- time (here hard-coded to 6 minutes) substantialAmountOfBlocksWereProcessed blocksSinceLast timeSinceLast = - let minBlocksBeforeSnapshot = 50_000 - minTimeBeforeSnapshot = 6 * secondsToDiffTime 60 - in blocksSinceLast >= minBlocksBeforeSnapshot - && timeSinceLast >= minTimeBeforeSnapshot + let minBlocksBeforeSnapshot = 50_000 + minTimeBeforeSnapshot = 6 * secondsToDiffTime 60 + in blocksSinceLast >= minBlocksBeforeSnapshot + && timeSinceLast >= minTimeBeforeSnapshot - -- | Requested snapshot interval can be explicitly provided by the + -- \| Requested snapshot interval can be explicitly provided by the -- caller (RequestedSnapshotInterval) or the caller might request the default -- snapshot interval (DefaultSnapshotInterval). If the latter then the -- snapshot interval is defaulted to k * 2 seconds - when @k = 2160@ the interval -- defaults to 72 minutes. 
snapshotInterval t = case requestedInterval of RequestedSnapshotInterval value -> t >= value - DefaultSnapshotInterval -> t >= secondsToDiffTime (fromIntegral $ unNonZero k * 2) - DisableSnapshots -> False + DefaultSnapshotInterval -> t >= secondsToDiffTime (fromIntegral $ unNonZero k * 2) + DisableSnapshots -> False {------------------------------------------------------------------------------- Tracing snapshot events -------------------------------------------------------------------------------} data TraceSnapshotEvent blk - = InvalidSnapshot DiskSnapshot (SnapshotFailure blk) - -- ^ An on disk snapshot was skipped because it was invalid. - | TookSnapshot DiskSnapshot (RealPoint blk) EnclosingTimed - -- ^ A snapshot was written to disk. - | DeletedSnapshot DiskSnapshot - -- ^ An old or invalid on-disk snapshot was deleted + = -- | An on disk snapshot was skipped because it was invalid. + InvalidSnapshot DiskSnapshot (SnapshotFailure blk) + | -- | A snapshot was written to disk. + TookSnapshot DiskSnapshot (RealPoint blk) EnclosingTimed + | -- | An old or invalid on-disk snapshot was deleted + DeletedSnapshot DiskSnapshot deriving (Generic, Eq, Show) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/TraceEvent.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/TraceEvent.hs index fbdb0270c9..fafaf244a1 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/TraceEvent.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/TraceEvent.hs @@ -5,39 +5,39 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Storage.LedgerDB.TraceEvent ( - FlavorImplSpecificTrace (..) +module Ouroboros.Consensus.Storage.LedgerDB.TraceEvent + ( FlavorImplSpecificTrace (..) , TraceEvent (..) 
) where -import GHC.Generics -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Forker -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore as V1 -import qualified Ouroboros.Consensus.Storage.LedgerDB.V2.Args as V2 +import GHC.Generics +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Forker +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V2.Args qualified as V2 {------------------------------------------------------------------------------- Tracing -------------------------------------------------------------------------------} -data FlavorImplSpecificTrace = - FlavorImplSpecificTraceV1 V1.FlavorImplSpecificTrace +data FlavorImplSpecificTrace + = FlavorImplSpecificTraceV1 V1.FlavorImplSpecificTrace | FlavorImplSpecificTraceV2 V2.FlavorImplSpecificTrace deriving (Show, Eq) -data TraceEvent blk = - LedgerDBSnapshotEvent !(TraceSnapshotEvent blk) - | LedgerReplayEvent !(TraceReplayEvent blk) - | LedgerDBForkerEvent !TraceForkerEventWithKey - | LedgerDBFlavorImplEvent !FlavorImplSpecificTrace - deriving (Generic) +data TraceEvent blk + = LedgerDBSnapshotEvent !(TraceSnapshotEvent blk) + | LedgerReplayEvent !(TraceReplayEvent blk) + | LedgerDBForkerEvent !TraceForkerEventWithKey + | LedgerDBFlavorImplEvent !FlavorImplSpecificTrace + deriving Generic deriving instance - (StandardHash blk, InspectLedger blk) - => Show (TraceEvent blk) + (StandardHash blk, InspectLedger blk) => + Show (TraceEvent blk) deriving instance - (StandardHash blk, InspectLedger blk) - => Eq (TraceEvent blk) + (StandardHash blk, InspectLedger blk) => + Eq 
(TraceEvent blk) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1.hs index 735ffc58b9..eab5eda9d5 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1.hs @@ -20,53 +20,57 @@ -- module will be gone. module Ouroboros.Consensus.Storage.LedgerDB.V1 (mkInitDb) where -import Control.Arrow ((>>>)) -import Control.Monad -import Control.Monad.Except -import Control.ResourceRegistry -import Control.Tracer -import Data.Bifunctor (first) -import qualified Data.Foldable as Foldable -import Data.Functor.Contravariant ((>$<)) -import Data.Kind (Type) -import Data.Map (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (isJust) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.HeaderStateHistory - (HeaderStateHistory (..), mkHeaderStateWithTimeFromSummary) -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Args -import Ouroboros.Consensus.Storage.LedgerDB.Forker -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import Ouroboros.Consensus.Storage.LedgerDB.TraceEvent -import Ouroboros.Consensus.Storage.LedgerDB.V1.Args as V1 -import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore as BS -import Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog -import qualified 
Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog as DbCh - (empty, flushableLength) -import Ouroboros.Consensus.Storage.LedgerDB.V1.Forker -import Ouroboros.Consensus.Storage.LedgerDB.V1.Lock -import Ouroboros.Consensus.Storage.LedgerDB.V1.Snapshots -import Ouroboros.Consensus.Util -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import qualified Ouroboros.Network.AnchoredSeq as AS -import Ouroboros.Network.Protocol.LocalStateQuery.Type -import System.FS.API +import Control.Arrow ((>>>)) +import Control.Monad +import Control.Monad.Except +import Control.ResourceRegistry +import Control.Tracer +import Data.Bifunctor (first) +import Data.Foldable qualified as Foldable +import Data.Functor.Contravariant ((>$<)) +import Data.Kind (Type) +import Data.Map (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (isJust) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HeaderStateHistory + ( HeaderStateHistory (..) 
+ , mkHeaderStateWithTimeFromSummary + ) +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Args +import Ouroboros.Consensus.Storage.LedgerDB.Forker +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.LedgerDB.TraceEvent +import Ouroboros.Consensus.Storage.LedgerDB.V1.Args as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore as BS +import Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog +import Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog qualified as DbCh + ( empty + , flushableLength + ) +import Ouroboros.Consensus.Storage.LedgerDB.V1.Forker +import Ouroboros.Consensus.Storage.LedgerDB.V1.Lock +import Ouroboros.Consensus.Storage.LedgerDB.V1.Snapshots +import Ouroboros.Consensus.Util +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.AnchoredSeq qualified as AS +import Ouroboros.Network.Protocol.LocalStateQuery.Type +import System.FS.API mkInitDb :: forall m blk. 
@@ -75,173 +79,185 @@ mkInitDb :: , LedgerDbSerialiseConstraints blk , HasHardForkHistory blk , LedgerSupportsLedgerDB blk - ) - => Complete LedgerDbArgs m blk - -> Complete V1.LedgerDbFlavorArgs m - -> ResolveBlock m blk - -> InitDB (DbChangelog' blk, BackingStore' m blk) m blk + ) => + Complete LedgerDbArgs m blk -> + Complete V1.LedgerDbFlavorArgs m -> + ResolveBlock m blk -> + InitDB (DbChangelog' blk, BackingStore' m blk) m blk mkInitDb args bss getBlock = - InitDB { - initFromGenesis = do - st <- lgrGenesis - let genesis = forgetLedgerTables st - chlog = DbCh.empty genesis - (_, backingStore) <- - allocate - lgrRegistry - (\_ -> newBackingStore bsTracer baArgs lgrHasFS' genesis (projectLedgerTables st)) - bsClose - pure (chlog, backingStore) - , initFromSnapshot = - runExceptT . loadSnapshot bsTracer baArgs (configCodec . getExtLedgerCfg . ledgerDbCfg $ lgrConfig) lgrHasFS' - , closeDb = bsClose . snd - , initReapplyBlock = \cfg blk (chlog, bstore) -> do - !chlog' <- reapplyThenPush cfg blk (readKeySets bstore) chlog - -- It's OK to flush without a lock here, since the `LedgerDB` has not - -- finished initializing, only this thread has access to the backing - -- store. - chlog'' <- unsafeIgnoreWriteLock - $ if shouldFlush flushFreq (flushableLength chlog') - then do - let (toFlush, toKeep) = splitForFlushing chlog' - mapM_ (flushIntoBackingStore bstore) toFlush - pure toKeep - else pure chlog' - pure (chlog'', bstore) - , currentTip = ledgerState . current . fst - , pruneDb = pure . 
first pruneToImmTipOnly - , mkLedgerDb = \(db, lgrBackingStore) -> do - (varDB, prevApplied) <- - (,) <$> newTVarIO db <*> newTVarIO Set.empty - flushLock <- mkLedgerDBLock - forkers <- newTVarIO Map.empty - nextForkerKey <- newTVarIO (ForkerKey 0) - let env = LedgerDBEnv { - ldbChangelog = varDB - , ldbBackingStore = lgrBackingStore - , ldbLock = flushLock - , ldbPrevApplied = prevApplied - , ldbForkers = forkers - , ldbNextForkerKey = nextForkerKey - , ldbSnapshotPolicy = defaultSnapshotPolicy (ledgerDbCfgSecParam lgrConfig) lgrSnapshotPolicyArgs - , ldbTracer = lgrTracer - , ldbCfg = lgrConfig - , ldbHasFS = lgrHasFS' - , ldbShouldFlush = shouldFlush flushFreq - , ldbQueryBatchSize = lgrQueryBatchSize - , ldbResolveBlock = getBlock - } - h <- LDBHandle <$> newTVarIO (LedgerDBOpen env) - pure $ implMkLedgerDb h - } - where - bsTracer = LedgerDBFlavorImplEvent . FlavorImplSpecificTraceV1 >$< lgrTracer + InitDB + { initFromGenesis = do + st <- lgrGenesis + let genesis = forgetLedgerTables st + chlog = DbCh.empty genesis + (_, backingStore) <- + allocate + lgrRegistry + (\_ -> newBackingStore bsTracer baArgs lgrHasFS' genesis (projectLedgerTables st)) + bsClose + pure (chlog, backingStore) + , initFromSnapshot = + runExceptT + . loadSnapshot bsTracer baArgs (configCodec . getExtLedgerCfg . ledgerDbCfg $ lgrConfig) lgrHasFS' + , closeDb = bsClose . snd + , initReapplyBlock = \cfg blk (chlog, bstore) -> do + !chlog' <- reapplyThenPush cfg blk (readKeySets bstore) chlog + -- It's OK to flush without a lock here, since the `LedgerDB` has not + -- finished initializing, only this thread has access to the backing + -- store. + chlog'' <- + unsafeIgnoreWriteLock $ + if shouldFlush flushFreq (flushableLength chlog') + then do + let (toFlush, toKeep) = splitForFlushing chlog' + mapM_ (flushIntoBackingStore bstore) toFlush + pure toKeep + else pure chlog' + pure (chlog'', bstore) + , currentTip = ledgerState . current . fst + , pruneDb = pure . 
first pruneToImmTipOnly + , mkLedgerDb = \(db, lgrBackingStore) -> do + (varDB, prevApplied) <- + (,) <$> newTVarIO db <*> newTVarIO Set.empty + flushLock <- mkLedgerDBLock + forkers <- newTVarIO Map.empty + nextForkerKey <- newTVarIO (ForkerKey 0) + let env = + LedgerDBEnv + { ldbChangelog = varDB + , ldbBackingStore = lgrBackingStore + , ldbLock = flushLock + , ldbPrevApplied = prevApplied + , ldbForkers = forkers + , ldbNextForkerKey = nextForkerKey + , ldbSnapshotPolicy = defaultSnapshotPolicy (ledgerDbCfgSecParam lgrConfig) lgrSnapshotPolicyArgs + , ldbTracer = lgrTracer + , ldbCfg = lgrConfig + , ldbHasFS = lgrHasFS' + , ldbShouldFlush = shouldFlush flushFreq + , ldbQueryBatchSize = lgrQueryBatchSize + , ldbResolveBlock = getBlock + } + h <- LDBHandle <$> newTVarIO (LedgerDBOpen env) + pure $ implMkLedgerDb h + } + where + bsTracer = LedgerDBFlavorImplEvent . FlavorImplSpecificTraceV1 >$< lgrTracer - LedgerDbArgs { - lgrHasFS - , lgrTracer - , lgrSnapshotPolicyArgs - , lgrConfig - , lgrGenesis - , lgrRegistry - , lgrQueryBatchSize - } = args + LedgerDbArgs + { lgrHasFS + , lgrTracer + , lgrSnapshotPolicyArgs + , lgrConfig + , lgrGenesis + , lgrRegistry + , lgrQueryBatchSize + } = args - lgrHasFS' = SnapshotsFS lgrHasFS + lgrHasFS' = SnapshotsFS lgrHasFS - V1Args flushFreq baArgs = bss + V1Args flushFreq baArgs = bss implMkLedgerDb :: - forall m l blk. 
- ( IOLike m - , HasCallStack - , StandardHash l - , LedgerDbSerialiseConstraints blk - , LedgerSupportsProtocol blk - , ApplyBlock l blk - , l ~ ExtLedgerState blk - , HasHardForkHistory blk - ) - => LedgerDBHandle m l blk - -> (LedgerDB' m blk, TestInternals' m blk) -implMkLedgerDb h = (LedgerDB { - getVolatileTip = getEnvSTM h implGetVolatileTip - , getImmutableTip = getEnvSTM h implGetImmutableTip - , getPastLedgerState = getEnvSTM1 h implGetPastLedgerState - , getHeaderStateHistory = getEnvSTM h implGetHeaderStateHistory - , getForkerAtTarget = newForkerAtTarget h - , validateFork = getEnv5 h (implValidate h) - , getPrevApplied = getEnvSTM h implGetPrevApplied - , garbageCollect = getEnvSTM1 h implGarbageCollect - , tryTakeSnapshot = getEnv2 h implTryTakeSnapshot - , tryFlush = getEnv h implTryFlush - , closeDB = implCloseDB h - }, mkInternals h) + forall m l blk. + ( IOLike m + , HasCallStack + , StandardHash l + , LedgerDbSerialiseConstraints blk + , LedgerSupportsProtocol blk + , ApplyBlock l blk + , l ~ ExtLedgerState blk + , HasHardForkHistory blk + ) => + LedgerDBHandle m l blk -> + (LedgerDB' m blk, TestInternals' m blk) +implMkLedgerDb h = + ( LedgerDB + { getVolatileTip = getEnvSTM h implGetVolatileTip + , getImmutableTip = getEnvSTM h implGetImmutableTip + , getPastLedgerState = getEnvSTM1 h implGetPastLedgerState + , getHeaderStateHistory = getEnvSTM h implGetHeaderStateHistory + , getForkerAtTarget = newForkerAtTarget h + , validateFork = getEnv5 h (implValidate h) + , getPrevApplied = getEnvSTM h implGetPrevApplied + , garbageCollect = getEnvSTM1 h implGarbageCollect + , tryTakeSnapshot = getEnv2 h implTryTakeSnapshot + , tryFlush = getEnv h implTryFlush + , closeDB = implCloseDB h + } + , mkInternals h + ) implGetVolatileTip :: - (MonadSTM m, GetTip l) - => LedgerDBEnv m l blk - -> STM m (l EmptyMK) + (MonadSTM m, GetTip l) => + LedgerDBEnv m l blk -> + STM m (l EmptyMK) implGetVolatileTip = fmap current . readTVar . 
ldbChangelog implGetImmutableTip :: - MonadSTM m - => LedgerDBEnv m l blk - -> STM m (l EmptyMK) + MonadSTM m => + LedgerDBEnv m l blk -> + STM m (l EmptyMK) implGetImmutableTip = fmap anchor . readTVar . ldbChangelog implGetPastLedgerState :: - ( MonadSTM m , HasHeader blk, IsLedger l, StandardHash l - , HasLedgerTables l, HeaderHash l ~ HeaderHash blk ) - => LedgerDBEnv m l blk -> Point blk -> STM m (Maybe (l EmptyMK)) + ( MonadSTM m + , HasHeader blk + , IsLedger l + , StandardHash l + , HasLedgerTables l + , HeaderHash l ~ HeaderHash blk + ) => + LedgerDBEnv m l blk -> Point blk -> STM m (Maybe (l EmptyMK)) implGetPastLedgerState env point = getPastLedgerAt point <$> readTVar (ldbChangelog env) implGetHeaderStateHistory :: - ( MonadSTM m - , l ~ ExtLedgerState blk - , IsLedger (LedgerState blk) - , HasHardForkHistory blk - , HasAnnTip blk - ) - => LedgerDBEnv m l blk -> STM m (HeaderStateHistory blk) + ( MonadSTM m + , l ~ ExtLedgerState blk + , IsLedger (LedgerState blk) + , HasHardForkHistory blk + , HasAnnTip blk + ) => + LedgerDBEnv m l blk -> STM m (HeaderStateHistory blk) implGetHeaderStateHistory env = do - ldb <- readTVar (ldbChangelog env) - let currentLedgerState = ledgerState $ current ldb - -- This summary can convert all tip slots of the ledger states in the - -- @ledgerDb@ as these are not newer than the tip slot of the current - -- ledger state (Property 17.1 in the Consensus report). - summary = hardForkSummary (configLedger $ getExtLedgerCfg $ ledgerDbCfg $ ldbCfg env) currentLedgerState - mkHeaderStateWithTime' = - mkHeaderStateWithTimeFromSummary summary - . headerState - pure - . HeaderStateHistory - . 
AS.bimap mkHeaderStateWithTime' mkHeaderStateWithTime' - $ changelogStates ldb + ldb <- readTVar (ldbChangelog env) + let currentLedgerState = ledgerState $ current ldb + -- This summary can convert all tip slots of the ledger states in the + -- @ledgerDb@ as these are not newer than the tip slot of the current + -- ledger state (Property 17.1 in the Consensus report). + summary = hardForkSummary (configLedger $ getExtLedgerCfg $ ledgerDbCfg $ ldbCfg env) currentLedgerState + mkHeaderStateWithTime' = + mkHeaderStateWithTimeFromSummary summary + . headerState + pure + . HeaderStateHistory + . AS.bimap mkHeaderStateWithTime' mkHeaderStateWithTime' + $ changelogStates ldb implValidate :: - forall m l blk. ( - IOLike m - , LedgerSupportsProtocol blk - , HasCallStack - , l ~ ExtLedgerState blk - ) - => LedgerDBHandle m l blk - -> LedgerDBEnv m l blk - -> ResourceRegistry m - -> (TraceValidateEvent blk -> m ()) - -> BlockCache blk - -> Word64 - -> [Header blk] - -> m (ValidateResult m (ExtLedgerState blk) blk) + forall m l blk. + ( IOLike m + , LedgerSupportsProtocol blk + , HasCallStack + , l ~ ExtLedgerState blk + ) => + LedgerDBHandle m l blk -> + LedgerDBEnv m l blk -> + ResourceRegistry m -> + (TraceValidateEvent blk -> m ()) -> + BlockCache blk -> + Word64 -> + [Header blk] -> + m (ValidateResult m (ExtLedgerState blk) blk) implValidate h ldbEnv rr tr cache rollbacks hdrs = validate (ledgerDbCfgComputeLedgerEvents $ ldbCfg ldbEnv) $ ValidateArgs (ldbResolveBlock ldbEnv) (getExtLedgerCfg . 
ledgerDbCfg $ ldbCfg ldbEnv) - (\l -> do + ( \l -> do prev <- readTVar (ldbPrevApplied ldbEnv) - writeTVar (ldbPrevApplied ldbEnv) (Foldable.foldl' (flip Set.insert) prev l)) + writeTVar (ldbPrevApplied ldbEnv) (Foldable.foldl' (flip Set.insert) prev l) + ) (readTVar (ldbPrevApplied ldbEnv)) (newForkerByRollback h) rr @@ -256,28 +272,36 @@ implGetPrevApplied env = readTVar (ldbPrevApplied env) -- | Remove all points with a slot older than the given slot from the set of -- previously applied points. implGarbageCollect :: MonadSTM m => LedgerDBEnv m l blk -> SlotNo -> STM m () -implGarbageCollect env slotNo = modifyTVar (ldbPrevApplied env) $ +implGarbageCollect env slotNo = + modifyTVar (ldbPrevApplied env) $ Set.dropWhileAntitone ((< slotNo) . realPointSlot) implTryTakeSnapshot :: - ( l ~ ExtLedgerState blk - , IOLike m, LedgerDbSerialiseConstraints blk, LedgerSupportsProtocol blk - ) - => LedgerDBEnv m l blk -> Maybe (Time, Time) -> Word64 -> m SnapCounters + ( l ~ ExtLedgerState blk + , IOLike m + , LedgerDbSerialiseConstraints blk + , LedgerSupportsProtocol blk + ) => + LedgerDBEnv m l blk -> Maybe (Time, Time) -> Word64 -> m SnapCounters implTryTakeSnapshot env mTime nrBlocks = - if onDiskShouldTakeSnapshot (ldbSnapshotPolicy env) (uncurry (flip diffTime) <$> mTime) nrBlocks then do - void $ withReadLock (ldbLock env) (takeSnapshot - (ldbChangelog env) - (configCodec . getExtLedgerCfg . ledgerDbCfg $ ldbCfg env) - (LedgerDBSnapshotEvent >$< ldbTracer env) - (ldbHasFS env) - (ldbBackingStore env) - Nothing - ) - void $ trimSnapshots - (LedgerDBSnapshotEvent >$< ldbTracer env) - (snapshotsFs $ ldbHasFS env) - (ldbSnapshotPolicy env) + if onDiskShouldTakeSnapshot (ldbSnapshotPolicy env) (uncurry (flip diffTime) <$> mTime) nrBlocks + then do + void $ + withReadLock + (ldbLock env) + ( takeSnapshot + (ldbChangelog env) + (configCodec . getExtLedgerCfg . 
ledgerDbCfg $ ldbCfg env) + (LedgerDBSnapshotEvent >$< ldbTracer env) + (ldbHasFS env) + (ldbBackingStore env) + Nothing + ) + void $ + trimSnapshots + (LedgerDBSnapshotEvent >$< ldbTracer env) + (snapshotsFs $ ldbHasFS env) + (ldbSnapshotPolicy env) (`SnapCounters` 0) . Just <$> maybe getMonotonicTime (pure . snd) mTime else pure $ SnapCounters (fst <$> mTime) nrBlocks @@ -286,40 +310,44 @@ implTryTakeSnapshot env mTime nrBlocks = -- with which this LedgerDB was opened), flush differences to the backing -- store. Note this acquires a write lock on the backing store. implTryFlush :: - (IOLike m, HasLedgerTables l, GetTip l) - => LedgerDBEnv m l blk -> m () + (IOLike m, HasLedgerTables l, GetTip l) => + LedgerDBEnv m l blk -> m () implTryFlush env = do - ldb <- readTVarIO $ ldbChangelog env - when (ldbShouldFlush env $ DbCh.flushableLength ldb) - (withWriteLock - (ldbLock env) - (flushLedgerDB (ldbChangelog env) (ldbBackingStore env)) - ) + ldb <- readTVarIO $ ldbChangelog env + when + (ldbShouldFlush env $ DbCh.flushableLength ldb) + ( withWriteLock + (ldbLock env) + (flushLedgerDB (ldbChangelog env) (ldbBackingStore env)) + ) implCloseDB :: IOLike m => LedgerDBHandle m l blk -> m () implCloseDB (LDBHandle varState) = do - mbOpenEnv <- atomically $ readTVar varState >>= \case - -- Idempotent - LedgerDBClosed -> return Nothing - LedgerDBOpen env -> do - writeTVar varState LedgerDBClosed - return $ Just env - - -- Only when the LedgerDB was open - whenJust mbOpenEnv $ \env -> do - closeAllForkers env - bsClose (ldbBackingStore env) + mbOpenEnv <- + atomically $ + readTVar varState >>= \case + -- Idempotent + LedgerDBClosed -> return Nothing + LedgerDBOpen env -> do + writeTVar varState LedgerDBClosed + return $ Just env + + -- Only when the LedgerDB was open + whenJust mbOpenEnv $ \env -> do + closeAllForkers env + bsClose (ldbBackingStore env) mkInternals :: - ( IOLike m - , LedgerDbSerialiseConstraints blk - , LedgerSupportsProtocol blk - , ApplyBlock 
(ExtLedgerState blk) blk - ) - => LedgerDBHandle m (ExtLedgerState blk) blk - -> TestInternals' m blk -mkInternals h = TestInternals { - takeSnapshotNOW = getEnv2 h implIntTakeSnapshot + ( IOLike m + , LedgerDbSerialiseConstraints blk + , LedgerSupportsProtocol blk + , ApplyBlock (ExtLedgerState blk) blk + ) => + LedgerDBHandle m (ExtLedgerState blk) blk -> + TestInternals' m blk +mkInternals h = + TestInternals + { takeSnapshotNOW = getEnv2 h implIntTakeSnapshot , push = getEnv1 h implIntPush , reapplyThenPushNOW = getEnv1 h implIntReapplyThenPush , wipeLedgerDB = getEnv h $ void . destroySnapshots . snapshotsFs . ldbHasFS @@ -331,76 +359,80 @@ mkInternals h = TestInternals { implIntTruncateSnapshots :: MonadThrow m => SnapshotsFS m -> m () implIntTruncateSnapshots (SnapshotsFS (SomeHasFS fs)) = do dirs <- Set.lookupMax . Set.filter (isJust . snapshotFromPath) <$> listDirectory fs (mkFsPath []) - mapM_ (truncateRecursively . (:[])) dirs - where - truncateRecursively pre = do - dirs <- listDirectory fs (mkFsPath pre) - mapM_ (\d -> do - let d' = pre ++ [d] - isDir <- doesDirectoryExist fs $ mkFsPath d' - if isDir - then truncateRecursively d' - else withFile fs (mkFsPath d') (AppendMode AllowExisting) $ \h -> hTruncate fs h 0 - ) dirs + mapM_ (truncateRecursively . 
(: [])) dirs + where + truncateRecursively pre = do + dirs <- listDirectory fs (mkFsPath pre) + mapM_ + ( \d -> do + let d' = pre ++ [d] + isDir <- doesDirectoryExist fs $ mkFsPath d' + if isDir + then truncateRecursively d' + else withFile fs (mkFsPath d') (AppendMode AllowExisting) $ \h -> hTruncate fs h 0 + ) + dirs implIntTakeSnapshot :: - ( IOLike m - , LedgerDbSerialiseConstraints blk - , LedgerSupportsProtocol blk - , l ~ ExtLedgerState blk - ) - => LedgerDBEnv m l blk -> WhereToTakeSnapshot -> Maybe String -> m () + ( IOLike m + , LedgerDbSerialiseConstraints blk + , LedgerSupportsProtocol blk + , l ~ ExtLedgerState blk + ) => + LedgerDBEnv m l blk -> WhereToTakeSnapshot -> Maybe String -> m () implIntTakeSnapshot env whereTo suffix = do when (whereTo == TakeAtVolatileTip) $ atomically $ modifyTVar (ldbChangelog env) pruneToImmTipOnly withWriteLock - (ldbLock env) - (flushLedgerDB (ldbChangelog env) (ldbBackingStore env)) - void $ withReadLock (ldbLock env) $ - takeSnapshot - (ldbChangelog env) - (configCodec . getExtLedgerCfg . ledgerDbCfg $ ldbCfg env) - (LedgerDBSnapshotEvent >$< ldbTracer env) - (ldbHasFS env) - (ldbBackingStore env) - suffix + (ldbLock env) + (flushLedgerDB (ldbChangelog env) (ldbBackingStore env)) + void $ + withReadLock (ldbLock env) $ + takeSnapshot + (ldbChangelog env) + (configCodec . getExtLedgerCfg . 
ledgerDbCfg $ ldbCfg env) + (LedgerDBSnapshotEvent >$< ldbTracer env) + (ldbHasFS env) + (ldbBackingStore env) + suffix implIntPush :: - ( IOLike m - , ApplyBlock l blk - , l ~ ExtLedgerState blk - ) - => LedgerDBEnv m l blk -> l DiffMK -> m () + ( IOLike m + , ApplyBlock l blk + , l ~ ExtLedgerState blk + ) => + LedgerDBEnv m l blk -> l DiffMK -> m () implIntPush env st = do chlog <- readTVarIO $ ldbChangelog env let chlog' = prune (LedgerDbPruneKeeping (ledgerDbCfgSecParam $ ldbCfg env)) $ extend st chlog atomically $ writeTVar (ldbChangelog env) chlog' implIntReapplyThenPush :: - ( IOLike m - , ApplyBlock l blk - , l ~ ExtLedgerState blk - ) - => LedgerDBEnv m l blk -> blk -> m () + ( IOLike m + , ApplyBlock l blk + , l ~ ExtLedgerState blk + ) => + LedgerDBEnv m l blk -> blk -> m () implIntReapplyThenPush env blk = do chlog <- readTVarIO $ ldbChangelog env - chlog' <- reapplyThenPush (ldbCfg env) blk (readKeySets (ldbBackingStore env)) chlog + chlog' <- reapplyThenPush (ldbCfg env) blk (readKeySets (ldbBackingStore env)) chlog atomically $ writeTVar (ldbChangelog env) chlog' {------------------------------------------------------------------------------- Flushing -------------------------------------------------------------------------------} -flushLedgerDB :: (MonadSTM m, GetTip l, HasLedgerTables l) - => StrictTVar m (DbChangelog l) - -> LedgerBackingStore m l - -> WriteLocked m () +flushLedgerDB :: + (MonadSTM m, GetTip l, HasLedgerTables l) => + StrictTVar m (DbChangelog l) -> + LedgerBackingStore m l -> + WriteLocked m () flushLedgerDB chlogVar bstore = do diffs <- writeLocked $ atomically $ do ldb' <- readTVar chlogVar let (toFlush, toKeep) = splitForFlushing ldb' case toFlush of Nothing -> pure () - Just {} -> writeTVar chlogVar toKeep + Just{} -> writeTVar chlogVar toKeep pure toFlush mapM_ (flushIntoBackingStore bstore) diffs @@ -411,12 +443,13 @@ flushLedgerDB chlogVar bstore = do -- -- The write lock must be held before calling this function. 
flushIntoBackingStore :: LedgerBackingStore m l -> DiffsToFlush l -> WriteLocked m () -flushIntoBackingStore backingStore dblog = writeLocked $ - bsWrite - backingStore - (toFlushSlot dblog) - (toFlushState dblog) - (toFlushDiffs dblog) +flushIntoBackingStore backingStore dblog = + writeLocked $ + bsWrite + backingStore + (toFlushSlot dblog) + (toFlushState dblog) + (toFlushDiffs dblog) {------------------------------------------------------------------------------- LedgerDB internal state @@ -425,210 +458,240 @@ flushIntoBackingStore backingStore dblog = writeLocked $ newtype LedgerDBHandle m l blk = LDBHandle (StrictTVar m (LedgerDBState m l blk)) deriving Generic -data LedgerDBState m l blk = - LedgerDBOpen !(LedgerDBEnv m l blk) +data LedgerDBState m l blk + = LedgerDBOpen !(LedgerDBEnv m l blk) | LedgerDBClosed deriving Generic -deriving instance ( IOLike m - , LedgerSupportsProtocol blk - , NoThunks (l EmptyMK) - , NoThunks (TxIn l) - , NoThunks (TxOut l) - , NoThunks (LedgerCfg l) - ) => NoThunks (LedgerDBState m l blk) +deriving instance + ( IOLike m + , LedgerSupportsProtocol blk + , NoThunks (l EmptyMK) + , NoThunks (TxIn l) + , NoThunks (TxOut l) + , NoThunks (LedgerCfg l) + ) => + NoThunks (LedgerDBState m l blk) type LedgerDBEnv :: (Type -> Type) -> LedgerStateKind -> Type -> Type -data LedgerDBEnv m l blk = LedgerDBEnv { - -- | INVARIANT: the tip of the 'LedgerDB' is always in sync with the tip of - -- the current chain of the ChainDB. - ldbChangelog :: !(StrictTVar m (DbChangelog l)) - -- | Handle to the ledger's backing store, containing the parts that grow too - -- big for in-memory residency - , ldbBackingStore :: !(LedgerBackingStore m l) - -- | The flush lock to the 'BackingStore'. This lock is crucial when it - -- comes to keeping the data in memory consistent with the data on-disk. - -- - -- This lock should be held whenever we want to keep a consistent view of - -- the backing store for some time. 
In particular we use this: - -- - -- - when performing a query on the ledger state, we need to hold a - -- 'LocalStateQueryView' which, while live, must maintain a consistent view - -- of the DB, and therefore we acquire a Read lock. - -- - -- - when taking a snapshot of the ledger db, we need to prevent others (eg - -- ChainSel) from altering the backing store at the same time, thus we - -- acquire a Write lock. - , ldbLock :: !(LedgerDBLock m) - -- | INVARIANT: this set contains only points that are in the - -- VolatileDB. - -- - -- INVARIANT: all points on the current chain fragment are in this set. - -- - -- The VolatileDB might contain invalid blocks, these will not be in - -- this set. - -- - -- When a garbage-collection is performed on the VolatileDB, the points - -- of the blocks eligible for garbage-collection should be removed from - -- this set. - , ldbPrevApplied :: !(StrictTVar m (Set (RealPoint blk))) - -- | Open forkers. - -- - -- INVARIANT: a forker is open iff its 'ForkerKey' is in this 'Map. - , ldbForkers :: !(StrictTVar m (Map ForkerKey (ForkerEnv m l blk))) - , ldbNextForkerKey :: !(StrictTVar m ForkerKey) - +data LedgerDBEnv m l blk = LedgerDBEnv + { ldbChangelog :: !(StrictTVar m (DbChangelog l)) + -- ^ INVARIANT: the tip of the 'LedgerDB' is always in sync with the tip of + -- the current chain of the ChainDB. + , ldbBackingStore :: !(LedgerBackingStore m l) + -- ^ Handle to the ledger's backing store, containing the parts that grow too + -- big for in-memory residency + , ldbLock :: !(LedgerDBLock m) + -- ^ The flush lock to the 'BackingStore'. This lock is crucial when it + -- comes to keeping the data in memory consistent with the data on-disk. + -- + -- This lock should be held whenever we want to keep a consistent view of + -- the backing store for some time. 
In particular we use this: + -- + -- - when performing a query on the ledger state, we need to hold a + -- 'LocalStateQueryView' which, while live, must maintain a consistent view + -- of the DB, and therefore we acquire a Read lock. + -- + -- - when taking a snapshot of the ledger db, we need to prevent others (eg + -- ChainSel) from altering the backing store at the same time, thus we + -- acquire a Write lock. + , ldbPrevApplied :: !(StrictTVar m (Set (RealPoint blk))) + -- ^ INVARIANT: this set contains only points that are in the + -- VolatileDB. + -- + -- INVARIANT: all points on the current chain fragment are in this set. + -- + -- The VolatileDB might contain invalid blocks, these will not be in + -- this set. + -- + -- When a garbage-collection is performed on the VolatileDB, the points + -- of the blocks eligible for garbage-collection should be removed from + -- this set. + , ldbForkers :: !(StrictTVar m (Map ForkerKey (ForkerEnv m l blk))) + -- ^ Open forkers. + -- + -- INVARIANT: a forker is open iff its 'ForkerKey' is in this 'Map. + , ldbNextForkerKey :: !(StrictTVar m ForkerKey) , ldbSnapshotPolicy :: !SnapshotPolicy - , ldbTracer :: !(Tracer m (TraceEvent blk)) - , ldbCfg :: !(LedgerDbCfg l) - , ldbHasFS :: !(SnapshotsFS m) - -- | Determine whether we should flush depending on the number of flushable - -- diffs that we currently have in the LedgerDB, based on the flush - -- frequency that was provided when opening the LedgerDB. - , ldbShouldFlush :: !(Word64 -> Bool) + , ldbTracer :: !(Tracer m (TraceEvent blk)) + , ldbCfg :: !(LedgerDbCfg l) + , ldbHasFS :: !(SnapshotsFS m) + , ldbShouldFlush :: !(Word64 -> Bool) + -- ^ Determine whether we should flush depending on the number of flushable + -- diffs that we currently have in the LedgerDB, based on the flush + -- frequency that was provided when opening the LedgerDB. 
, ldbQueryBatchSize :: !QueryBatchSize - , ldbResolveBlock :: !(ResolveBlock m blk) - } deriving (Generic) + , ldbResolveBlock :: !(ResolveBlock m blk) + } + deriving Generic -deriving instance ( IOLike m - , LedgerSupportsProtocol blk - , NoThunks (l EmptyMK) - , NoThunks (TxIn l) - , NoThunks (TxOut l) - , NoThunks (LedgerCfg l) - ) => NoThunks (LedgerDBEnv m l blk) +deriving instance + ( IOLike m + , LedgerSupportsProtocol blk + , NoThunks (l EmptyMK) + , NoThunks (TxIn l) + , NoThunks (TxOut l) + , NoThunks (LedgerCfg l) + ) => + NoThunks (LedgerDBEnv m l blk) -- | Check if the LedgerDB is open, if so, executing the given function on the -- 'LedgerDBEnv', otherwise, throw a 'CloseDBError'. getEnv :: - forall m l blk r. (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> m r) - -> m r -getEnv (LDBHandle varState) f = readTVarIO varState >>= \case + forall m l blk r. + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> m r) -> + m r +getEnv (LDBHandle varState) f = + readTVarIO varState >>= \case LedgerDBOpen env -> f env - LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack + LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack -- | Variant 'of 'getEnv' for functions taking one argument. getEnv1 :: - (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> a -> m r) - -> a -> m r + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> a -> m r) -> + a -> + m r getEnv1 h f a = getEnv h (`f` a) -- | Variant 'of 'getEnv' for functions taking two arguments. 
getEnv2 :: - (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> a -> b -> m r) - -> a -> b -> m r + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> a -> b -> m r) -> + a -> + b -> + m r getEnv2 h f a b = getEnv h (\env -> f env a b) -- | Variant 'of 'getEnv' for functions taking five arguments. getEnv5 :: - (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> a -> b -> c -> d -> e -> m r) - -> a -> b -> c -> d -> e -> m r + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> a -> b -> c -> d -> e -> m r) -> + a -> + b -> + c -> + d -> + e -> + m r getEnv5 h f a b c d e = getEnv h (\env -> f env a b c d e) -- | Variant of 'getEnv' that works in 'STM'. getEnvSTM :: - forall m l blk r. (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> STM m r) - -> STM m r -getEnvSTM (LDBHandle varState) f = readTVar varState >>= \case + forall m l blk r. + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> STM m r) -> + STM m r +getEnvSTM (LDBHandle varState) f = + readTVar varState >>= \case LedgerDBOpen env -> f env - LedgerDBClosed -> throwSTM $ ClosedDBError @blk prettyCallStack + LedgerDBClosed -> throwSTM $ ClosedDBError @blk prettyCallStack -- | Variant of 'getEnv1' that works in 'STM'. getEnvSTM1 :: - forall m l blk a r. (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> a -> STM m r) - -> a -> STM m r -getEnvSTM1 (LDBHandle varState) f a = readTVar varState >>= \case + forall m l blk a r. 
+ (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> a -> STM m r) -> + a -> + STM m r +getEnvSTM1 (LDBHandle varState) f a = + readTVar varState >>= \case LedgerDBOpen env -> f env a - LedgerDBClosed -> throwSTM $ ClosedDBError @blk prettyCallStack + LedgerDBClosed -> throwSTM $ ClosedDBError @blk prettyCallStack {------------------------------------------------------------------------------- Forkers -------------------------------------------------------------------------------} getForkerEnv :: - forall m l blk r. (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> ForkerKey - -> (ForkerEnv m l blk -> m r) - -> m r + forall m l blk r. + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + ForkerKey -> + (ForkerEnv m l blk -> m r) -> + m r getForkerEnv (LDBHandle varState) forkerKey f = do - forkerEnv <- atomically $ readTVar varState >>= \case - LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack - LedgerDBOpen env -> (Map.lookup forkerKey <$> readTVar (ldbForkers env)) >>= \case - Nothing -> throwSTM $ ClosedForkerError @blk forkerKey prettyCallStack - Just forkerEnv -> pure forkerEnv + forkerEnv <- + atomically $ + readTVar varState >>= \case + LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack + LedgerDBOpen env -> + (Map.lookup forkerKey <$> readTVar (ldbForkers env)) >>= \case + Nothing -> throwSTM $ ClosedForkerError @blk forkerKey prettyCallStack + Just forkerEnv -> pure forkerEnv - f forkerEnv + f forkerEnv getForkerEnv1 :: - (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> ForkerKey - -> (ForkerEnv m l blk -> a -> m r) - -> a -> m r + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + ForkerKey -> + (ForkerEnv m l blk -> a -> m r) -> + a -> + m r getForkerEnv1 h forkerKey f a = getForkerEnv h forkerKey (`f` a) getForkerEnvSTM :: - forall m l blk r. 
(IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> ForkerKey - -> (ForkerEnv m l blk -> STM m r) - -> STM m r -getForkerEnvSTM (LDBHandle varState) forkerKey f = readTVar varState >>= \case - LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack - LedgerDBOpen env -> readTVar (ldbForkers env) >>= (Map.lookup forkerKey >>> \case - Nothing -> throwSTM $ ClosedForkerError @blk forkerKey prettyCallStack - Just forkerEnv -> f forkerEnv) + forall m l blk r. + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + ForkerKey -> + (ForkerEnv m l blk -> STM m r) -> + STM m r +getForkerEnvSTM (LDBHandle varState) forkerKey f = + readTVar varState >>= \case + LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack + LedgerDBOpen env -> + readTVar (ldbForkers env) + >>= ( Map.lookup forkerKey >>> \case + Nothing -> throwSTM $ ClosedForkerError @blk forkerKey prettyCallStack + Just forkerEnv -> f forkerEnv + ) -- | Will call 'error' if the point is not on the LedgerDB newForkerAtTarget :: - ( HeaderHash l ~ HeaderHash blk - , IOLike m - , IsLedger l - , StandardHash l - , HasLedgerTables l - , LedgerSupportsProtocol blk - ) - => LedgerDBHandle m l blk - -> ResourceRegistry m - -> Target (Point blk) - -> m (Either GetForkerError (Forker m l blk)) + ( HeaderHash l ~ HeaderHash blk + , IOLike m + , IsLedger l + , StandardHash l + , HasLedgerTables l + , LedgerSupportsProtocol blk + ) => + LedgerDBHandle m l blk -> + ResourceRegistry m -> + Target (Point blk) -> + m (Either GetForkerError (Forker m l blk)) newForkerAtTarget h rr pt = getEnv h $ \ldbEnv -> - withReadLock (ldbLock ldbEnv) (acquireAtTarget ldbEnv rr (Right pt)) >>= traverse (newForker h ldbEnv) + withReadLock (ldbLock ldbEnv) (acquireAtTarget ldbEnv rr (Right pt)) + >>= traverse (newForker h ldbEnv) newForkerByRollback :: - ( HeaderHash l ~ HeaderHash blk - , IOLike m - , IsLedger l - , StandardHash l - , HasLedgerTables l - , LedgerSupportsProtocol blk - ) - => 
LedgerDBHandle m l blk
-  -> ResourceRegistry m
-     -- | How many blocks to rollback from the tip
-  -> Word64
-  -> m (Either GetForkerError (Forker m l blk))
+  ( HeaderHash l ~ HeaderHash blk
+  , IOLike m
+  , IsLedger l
+  , StandardHash l
+  , HasLedgerTables l
+  , LedgerSupportsProtocol blk
+  ) =>
+  LedgerDBHandle m l blk ->
+  ResourceRegistry m ->
+  -- | How many blocks to rollback from the tip
+  Word64 ->
+  m (Either GetForkerError (Forker m l blk))
 newForkerByRollback h rr n = getEnv h $ \ldbEnv -> do
-    withReadLock (ldbLock ldbEnv) (acquireAtTarget ldbEnv rr (Left n)) >>= traverse (newForker h ldbEnv)
+  withReadLock (ldbLock ldbEnv) (acquireAtTarget ldbEnv rr (Left n)) >>= traverse (newForker h ldbEnv)
 
 -- | Close all open block and header 'Forker's.
 closeAllForkers ::
-     IOLike m
-  => LedgerDBEnv m l blk
-  -> m ()
+  IOLike m =>
+  LedgerDBEnv m l blk ->
+  m ()
 closeAllForkers ldbEnv = do
   forkerEnvs <- atomically $ do
@@ -636,135 +699,147 @@ closeAllForkers ldbEnv =
     writeTVar forkersVar Map.empty
     return forkerEnvs
   mapM_ closeForkerEnv forkerEnvs
-  where
-    forkersVar = ldbForkers ldbEnv
+ where
+  forkersVar = ldbForkers ldbEnv
 
 type Resources m l =
-    (LedgerBackingStoreValueHandle m l, DbChangelog l)
+  (LedgerBackingStoreValueHandle m l, DbChangelog l)
 
 -- | Acquire both a value handle and a db changelog at the tip. Holds a read lock
 -- while doing so.
 acquireAtTarget ::
-  forall m l blk. (
-    HeaderHash l ~ HeaderHash blk
-  , IOLike m
-  , IsLedger l
-  , StandardHash l
-  , HasLedgerTables l
-  , LedgerSupportsProtocol blk
-  )
-  => LedgerDBEnv m l blk
-  -> ResourceRegistry m
-  -> Either Word64 (Target (Point blk))
-  -> ReadLocked m (Either GetForkerError (Resources m l))
+  forall m l blk.
+  ( HeaderHash l ~ HeaderHash blk
+  , IOLike m
+  , IsLedger l
+  , StandardHash l
+  , HasLedgerTables l
+  , LedgerSupportsProtocol blk
+  ) =>
+  LedgerDBEnv m l blk ->
+  ResourceRegistry m ->
+  Either Word64 (Target (Point blk)) ->
+  ReadLocked m (Either GetForkerError (Resources m l))
 acquireAtTarget ldbEnv rr (Right VolatileTip) =
-    readLocked $ do
-      dblog <- readTVarIO (ldbChangelog ldbEnv)
-      Right . (,dblog) <$> acquire ldbEnv rr dblog
+  readLocked $ do
+    dblog <- readTVarIO (ldbChangelog ldbEnv)
+    Right . (,dblog) <$> acquire ldbEnv rr dblog
 acquireAtTarget ldbEnv rr (Right ImmutableTip) =
-    readLocked $ do
-      dblog <- readTVarIO (ldbChangelog ldbEnv)
-      Right . (, rollbackToAnchor dblog)
-        <$> acquire ldbEnv rr dblog
+  readLocked $ do
+    dblog <- readTVarIO (ldbChangelog ldbEnv)
+    Right . (,rollbackToAnchor dblog)
+      <$> acquire ldbEnv rr dblog
 acquireAtTarget ldbEnv rr (Right (SpecificPoint pt)) =
-    readLocked $ do
-      dblog <- readTVarIO (ldbChangelog ldbEnv)
-      let immTip = getTip $ anchor dblog
-      case rollback pt dblog of
-        Nothing | pointSlot pt < pointSlot immTip -> pure $ Left $ PointTooOld Nothing
-                | otherwise -> pure $ Left PointNotOnChain
-        Just dblog' -> Right . (,dblog') <$> acquire ldbEnv rr dblog'
+  readLocked $ do
+    dblog <- readTVarIO (ldbChangelog ldbEnv)
+    let immTip = getTip $ anchor dblog
+    case rollback pt dblog of
+      Nothing
+        | pointSlot pt < pointSlot immTip -> pure $ Left $ PointTooOld Nothing
+        | otherwise -> pure $ Left PointNotOnChain
+      Just dblog' -> Right . (,dblog') <$> acquire ldbEnv rr dblog'
 acquireAtTarget ldbEnv rr (Left n) = readLocked $ do
-    dblog <- readTVarIO (ldbChangelog ldbEnv)
-    case rollbackN n dblog of
-      Nothing ->
-        return $ Left $ PointTooOld $ Just $ ExceededRollback {
-            rollbackMaximum = maxRollback dblog
-          , rollbackRequested = n
-          }
-      Just dblog' ->
-        Right . (,dblog') <$> acquire ldbEnv rr dblog'
+  dblog <- readTVarIO (ldbChangelog ldbEnv)
+  case rollbackN n dblog of
+    Nothing ->
+      return $
+        Left $
+          PointTooOld $
+            Just $
+              ExceededRollback
+                { rollbackMaximum = maxRollback dblog
+                , rollbackRequested = n
+                }
+    Just dblog' ->
+      Right . (,dblog') <$> acquire ldbEnv rr dblog'
 
 acquire ::
-     (IOLike m, GetTip l)
-  => LedgerDBEnv m l blk
-  -> ResourceRegistry m
-  -> DbChangelog l
-  -> m (LedgerBackingStoreValueHandle m l)
-acquire ldbEnv rr dblog = do
+  (IOLike m, GetTip l) =>
+  LedgerDBEnv m l blk ->
+  ResourceRegistry m ->
+  DbChangelog l ->
+  m (LedgerBackingStoreValueHandle m l)
+acquire ldbEnv rr dblog = do
   -- bsvhClose is idempotent, so we let the resource call it even if the value
   -- handle might have been closed somewhere else
   (_, vh) <- allocate rr (\_ -> bsValueHandle $ ldbBackingStore ldbEnv) bsvhClose
   let dblogSlot = getTipSlot (changelogLastFlushedState dblog)
   if bsvhAtSlot vh == dblogSlot
     then pure vh
-    else bsvhClose vh >>
-         error ( "Critical error: Value handles are created at "
-                 <> show (bsvhAtSlot vh)
-                 <> " while the db changelog is at "
-                 <> show dblogSlot
-                 <> ". There is either a race condition or a logic bug"
-               )
+    else
+      bsvhClose vh
+        >> error
+          ( "Critical error: Value handles are created at "
+              <> show (bsvhAtSlot vh)
+              <> " while the db changelog is at "
+              <> show dblogSlot
+              <> ". There is either a race condition or a logic bug"
+          )
 
 {-------------------------------------------------------------------------------
   Make forkers from consistent views
-------------------------------------------------------------------------------}
 
 newForker ::
-     ( IOLike m
-     , HasLedgerTables l
-     , LedgerSupportsProtocol blk
-     , NoThunks (l EmptyMK)
-     , GetTip l
-     )
-  => LedgerDBHandle m l blk
-  -> LedgerDBEnv m l blk
-  -> Resources m l
-  -> m (Forker m l blk)
+  ( IOLike m
+  , HasLedgerTables l
+  , LedgerSupportsProtocol blk
+  , NoThunks (l EmptyMK)
+  , GetTip l
+  ) =>
+  LedgerDBHandle m l blk ->
+  LedgerDBEnv m l blk ->
+  Resources m l ->
+  m (Forker m l blk)
 newForker h ldbEnv (vh, dblog) = do
-    dblogVar <- newTVarIO dblog
-    forkerKey <- atomically $ stateTVar (ldbNextForkerKey ldbEnv) $ \r -> (r, r + 1)
-    let forkerEnv = ForkerEnv {
-        foeBackingStoreValueHandle = vh
-      , foeChangelog = dblogVar
-      , foeSwitchVar = ldbChangelog ldbEnv
-      , foeSecurityParam = ledgerDbCfgSecParam $ ldbCfg ldbEnv
-      , foeTracer = LedgerDBForkerEvent . TraceForkerEventWithKey forkerKey >$< ldbTracer ldbEnv
-      }
+  dblogVar <- newTVarIO dblog
+  forkerKey <- atomically $ stateTVar (ldbNextForkerKey ldbEnv) $ \r -> (r, r + 1)
+  let forkerEnv =
+        ForkerEnv
+          { foeBackingStoreValueHandle = vh
+          , foeChangelog = dblogVar
+          , foeSwitchVar = ldbChangelog ldbEnv
+          , foeSecurityParam = ledgerDbCfgSecParam $ ldbCfg ldbEnv
+          , foeTracer =
+              LedgerDBForkerEvent . TraceForkerEventWithKey forkerKey >$< ldbTracer ldbEnv
+          }
   atomically $ modifyTVar (ldbForkers ldbEnv) $ Map.insert forkerKey forkerEnv
   traceWith (foeTracer forkerEnv) ForkerOpen
   pure $ mkForker h (ldbQueryBatchSize ldbEnv) forkerKey
 
 mkForker ::
-     ( IOLike m
-     , HasHeader blk
-     , HasLedgerTables l
-     , GetTip l
-     )
-  => LedgerDBHandle m l blk
-  -> QueryBatchSize
-  -> ForkerKey
-  -> Forker m l blk
-mkForker h qbs forkerKey = Forker {
-    forkerClose = implForkerClose h forkerKey
-  , forkerReadTables = getForkerEnv1 h forkerKey implForkerReadTables
-  , forkerRangeReadTables = getForkerEnv1 h forkerKey (implForkerRangeReadTables qbs)
-  , forkerGetLedgerState = getForkerEnvSTM h forkerKey implForkerGetLedgerState
-  , forkerReadStatistics = getForkerEnv h forkerKey implForkerReadStatistics
-  , forkerPush = getForkerEnv1 h forkerKey implForkerPush
-  , forkerCommit = getForkerEnvSTM h forkerKey implForkerCommit
+  ( IOLike m
+  , HasHeader blk
+  , HasLedgerTables l
+  , GetTip l
+  ) =>
+  LedgerDBHandle m l blk ->
+  QueryBatchSize ->
+  ForkerKey ->
+  Forker m l blk
+mkForker h qbs forkerKey =
+  Forker
+    { forkerClose = implForkerClose h forkerKey
+    , forkerReadTables = getForkerEnv1 h forkerKey implForkerReadTables
+    , forkerRangeReadTables = getForkerEnv1 h forkerKey (implForkerRangeReadTables qbs)
+    , forkerGetLedgerState = getForkerEnvSTM h forkerKey implForkerGetLedgerState
+    , forkerReadStatistics = getForkerEnv h forkerKey implForkerReadStatistics
+    , forkerPush = getForkerEnv1 h forkerKey implForkerPush
+    , forkerCommit = getForkerEnvSTM h forkerKey implForkerCommit
    }
 
 implForkerClose ::
-     IOLike m
-  => LedgerDBHandle m l blk
-  -> ForkerKey
-  -> m ()
+  IOLike m =>
+  LedgerDBHandle m l blk ->
+  ForkerKey ->
+  m ()
 implForkerClose (LDBHandle varState) forkerKey = do
-    envMay <- atomically $ readTVar varState >>= \case
-      LedgerDBClosed -> pure Nothing
-      LedgerDBOpen ldbEnv -> do
-        stateTVar
+  envMay <-
+    atomically $
+      readTVar varState >>= \case
+        LedgerDBClosed -> pure Nothing
+        LedgerDBOpen ldbEnv -> do
+          stateTVar
             (ldbForkers ldbEnv)
            (Map.updateLookupWithKey (\_ _ -> Nothing) forkerKey)
-    whenJust envMay closeForkerEnv
+  whenJust envMay closeForkerEnv
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Args.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Args.hs
index c000c95895..b7803d11a8 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Args.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Args.hs
@@ -9,54 +9,54 @@
 {-# LANGUAGE TypeOperators #-}
 {-# LANGUAGE UndecidableInstances #-}
 
-module Ouroboros.Consensus.Storage.LedgerDB.V1.Args (
-    BackingStoreArgs (..)
+module Ouroboros.Consensus.Storage.LedgerDB.V1.Args
+  ( BackingStoreArgs (..)
   , FlushFrequency (..)
   , LedgerDbFlavorArgs (..)
   , defaultLedgerDbFlavorArgs
   , shouldFlush
   ) where
 
-import           Control.Monad.IO.Class
-import           Control.Monad.Primitive
-import qualified Data.SOP.Dict as Dict
-import           Data.Word
-import           GHC.Generics
-import           Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB
-import           Ouroboros.Consensus.Util.Args
+import Control.Monad.IO.Class
+import Control.Monad.Primitive
+import Data.SOP.Dict qualified as Dict
+import Data.Word
+import GHC.Generics
+import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB
+import Ouroboros.Consensus.Util.Args
 
 -- | The number of blocks in the immutable part of the chain that we have to see
 -- before we flush the ledger tables to disk. See 'onDiskShouldFlush'.
-data FlushFrequency =
-  -- | A default value, which is determined by a specific 'SnapshotPolicy'. See
+data FlushFrequency
+  = -- | A default value, which is determined by a specific 'SnapshotPolicy'. See
     -- 'defaultSnapshotPolicy' as an example.
     DefaultFlushFrequency
-  -- | A requested value: the number of diffs in the immutable part of the
+  | -- | A requested value: the number of diffs in the immutable part of the
     -- chain required before flushing.
-  | RequestedFlushFrequency Word64
-  -- | To disable flushing, to be used in tests
-  | DisableFlushing
+    RequestedFlushFrequency Word64
+  | -- | To disable flushing, to be used in tests
+    DisableFlushing
   deriving (Show, Eq, Generic)
 
 shouldFlush :: FlushFrequency -> (Word64 -> Bool)
 shouldFlush requestedFlushFrequency = case requestedFlushFrequency of
-      RequestedFlushFrequency value -> (>= value)
-      DefaultFlushFrequency -> (>= 100)
-      DisableFlushing -> const False
+  RequestedFlushFrequency value -> (>= value)
+  DefaultFlushFrequency -> (>= 100)
+  DisableFlushing -> const False
 
-data LedgerDbFlavorArgs f m = V1Args {
-    v1FlushFrequency :: FlushFrequency
-  , v1BackendArgs :: BackingStoreArgs f m
+data LedgerDbFlavorArgs f m = V1Args
+  { v1FlushFrequency :: FlushFrequency
+  , v1BackendArgs :: BackingStoreArgs f m
   }
 
-data BackingStoreArgs f m =
-    LMDBBackingStoreArgs FilePath (HKD f LMDBLimits) (Dict.Dict MonadIOPrim m)
+data BackingStoreArgs f m
+  = LMDBBackingStoreArgs FilePath (HKD f LMDBLimits) (Dict.Dict MonadIOPrim m)
   | InMemoryBackingStoreArgs
 
 class (MonadIO m, PrimState m ~ PrimState IO) => MonadIOPrim m
 instance (MonadIO m, PrimState m ~ PrimState IO) => MonadIOPrim m
 
-defaultLedgerDbFlavorArgs :: Incomplete LedgerDbFlavorArgs m
+defaultLedgerDbFlavorArgs :: Incomplete LedgerDbFlavorArgs m
 defaultLedgerDbFlavorArgs = V1Args DefaultFlushFrequency defaultBackingStoreArgs
 
 defaultBackingStoreArgs :: Incomplete BackingStoreArgs m
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore.hs
index ed602931ec..c5683db73d 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore.hs
@@ -15,92 +15,97 @@
 --
 -- * "Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB": an
 -- external disk-based database.
-module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore (
-    -- * API
-    --
+module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore
+  ( -- * API
+
+  --
+
   -- | Most of the documentation on the behaviour of the 'BackingStore' lives
   -- in this module.
     module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API
+
     -- * Initialization
   , newBackingStore
   , restoreBackingStore
+
     -- * Tracing
   , FlavorImplSpecificTrace (..)
   , FlavorImplSpecificTraceInMemory (..)
   , FlavorImplSpecificTraceOnDisk (..)
+
     -- * Testing
   , newBackingStoreInitialiser
   ) where
 
-import           Cardano.Slotting.Slot
-import           Control.Tracer
-import           Data.Functor.Contravariant
-import           Data.SOP.Dict (Dict (..))
-import           GHC.Stack (HasCallStack)
-import           Ouroboros.Consensus.Ledger.Basics
-import           Ouroboros.Consensus.Storage.LedgerDB.API
-import           Ouroboros.Consensus.Storage.LedgerDB.V1.Args
-import           Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API
-import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.InMemory as InMemory
-import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB as LMDB
-import           Ouroboros.Consensus.Util.Args
-import           Ouroboros.Consensus.Util.IOLike
-import           System.FS.API
-import           System.FS.IO
+import Cardano.Slotting.Slot
+import Control.Tracer
+import Data.Functor.Contravariant
+import Data.SOP.Dict (Dict (..))
+import GHC.Stack (HasCallStack)
+import Ouroboros.Consensus.Ledger.Basics
+import Ouroboros.Consensus.Storage.LedgerDB.API
+import Ouroboros.Consensus.Storage.LedgerDB.V1.Args
+import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API
+import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.InMemory qualified as InMemory
+import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB qualified as LMDB
+import Ouroboros.Consensus.Util.Args
+import Ouroboros.Consensus.Util.IOLike
+import System.FS.API
+import System.FS.IO
 
 type BackingStoreInitialiser m l =
-     InitFrom (LedgerTables l ValuesMK)
-  -> m (LedgerBackingStore m l)
+  InitFrom (LedgerTables l ValuesMK) ->
+  m (LedgerBackingStore m l)
 
 -- | Overwrite the 'BackingStore' tables with the snapshot's tables
 restoreBackingStore ::
-     ( IOLike m
-     , HasLedgerTables l
-     , HasCallStack
-     , CanUpgradeLedgerTables l
-     , MemPackIdx l EmptyMK ~ l EmptyMK
-     , SerializeTablesWithHint l
-     )
-  => Tracer m FlavorImplSpecificTrace
-  -> Complete BackingStoreArgs m
-  -> SnapshotsFS m
-  -> l EmptyMK
-  -> FsPath
-  -> m (LedgerBackingStore m l)
+  ( IOLike m
+  , HasLedgerTables l
+  , HasCallStack
+  , CanUpgradeLedgerTables l
+  , MemPackIdx l EmptyMK ~ l EmptyMK
+  , SerializeTablesWithHint l
+  ) =>
+  Tracer m FlavorImplSpecificTrace ->
+  Complete BackingStoreArgs m ->
+  SnapshotsFS m ->
+  l EmptyMK ->
+  FsPath ->
+  m (LedgerBackingStore m l)
 restoreBackingStore trcr bss fs st loadPath =
-    newBackingStoreInitialiser trcr bss fs (InitFromCopy st loadPath)
+  newBackingStoreInitialiser trcr bss fs (InitFromCopy st loadPath)
 
 -- | Create a 'BackingStore' from the given initial tables.
 newBackingStore ::
-     ( IOLike m
-     , HasLedgerTables l
-     , HasCallStack
-     , CanUpgradeLedgerTables l
-     , MemPackIdx l EmptyMK ~ l EmptyMK
-     , SerializeTablesWithHint l
-     )
-  => Tracer m FlavorImplSpecificTrace
-  -> Complete BackingStoreArgs m
-  -> SnapshotsFS m
-  -> l EmptyMK
-  -> LedgerTables l ValuesMK
-  -> m (LedgerBackingStore m l)
+  ( IOLike m
+  , HasLedgerTables l
+  , HasCallStack
+  , CanUpgradeLedgerTables l
+  , MemPackIdx l EmptyMK ~ l EmptyMK
+  , SerializeTablesWithHint l
+  ) =>
+  Tracer m FlavorImplSpecificTrace ->
+  Complete BackingStoreArgs m ->
+  SnapshotsFS m ->
+  l EmptyMK ->
+  LedgerTables l ValuesMK ->
+  m (LedgerBackingStore m l)
 newBackingStore trcr bss fs st tables =
-    newBackingStoreInitialiser trcr bss fs (InitFromValues Origin st tables)
+  newBackingStoreInitialiser trcr bss fs (InitFromValues Origin st tables)
 
 newBackingStoreInitialiser ::
-     forall m l.
-     ( IOLike m
-     , HasLedgerTables l
-     , HasCallStack
-     , CanUpgradeLedgerTables l
-     , MemPackIdx l EmptyMK ~ l EmptyMK
-     , SerializeTablesWithHint l
-     )
-  => Tracer m FlavorImplSpecificTrace
-  -> Complete BackingStoreArgs m
-  -> SnapshotsFS m
-  -> BackingStoreInitialiser m l
+  forall m l.
+  ( IOLike m
+  , HasLedgerTables l
+  , HasCallStack
+  , CanUpgradeLedgerTables l
+  , MemPackIdx l EmptyMK ~ l EmptyMK
+  , SerializeTablesWithHint l
+  ) =>
+  Tracer m FlavorImplSpecificTrace ->
+  Complete BackingStoreArgs m ->
+  SnapshotsFS m ->
+  BackingStoreInitialiser m l
 newBackingStoreInitialiser trcr bss =
   case bss of
     LMDBBackingStoreArgs fs limits Dict ->
@@ -116,17 +121,17 @@ newBackingStoreInitialiser trcr bss =
   Tracing
-------------------------------------------------------------------------------}
 
-data FlavorImplSpecificTrace =
-    FlavorImplSpecificTraceInMemory FlavorImplSpecificTraceInMemory
+data FlavorImplSpecificTrace
+  = FlavorImplSpecificTraceInMemory FlavorImplSpecificTraceInMemory
   | FlavorImplSpecificTraceOnDisk FlavorImplSpecificTraceOnDisk
   deriving (Eq, Show)
 
-data FlavorImplSpecificTraceInMemory =
-    InMemoryBackingStoreInitialise
+data FlavorImplSpecificTraceInMemory
+  = InMemoryBackingStoreInitialise
   | InMemoryBackingStoreTrace BackingStoreTrace
   deriving (Eq, Show)
 
-data FlavorImplSpecificTraceOnDisk =
-    OnDiskBackingStoreInitialise LMDB.LMDBLimits
+data FlavorImplSpecificTraceOnDisk
+  = OnDiskBackingStoreInitialise LMDB.LMDBLimits
   | OnDiskBackingStoreTrace BackingStoreTrace
   deriving (Eq, Show)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/API.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/API.hs
index ad5ab737a1..37ee124526 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/API.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/API.hs
@@ -20,10 +20,11 @@
 --
 -- See "Ouroboros.Consensus.Storage.LedgerDB.BackingStore" for the
 -- implementations provided.
-module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API (
-    -- * FileSystem newtypes
+module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API
+  ( -- * FileSystem newtypes
     LiveLMDBFS (..)
   , SnapshotsFS (..)
+
     -- * Backing store
   , BackingStore (..)
   , BackingStore'
@@ -33,41 +34,46 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API (
   , LedgerBackingStore
   , ReadHint
   , WriteHint
+
     -- * Value handle
   , BackingStoreValueHandle (..)
   , BackingStoreValueHandle'
   , LedgerBackingStoreValueHandle
   , castBackingStoreValueHandle
   , withBsValueHandle
+
     -- * Query
   , RangeQuery (..)
+
     -- * Statistics
   , Statistics (..)
+
     -- * Tracing
   , BackingStoreTrace (..)
   , BackingStoreValueHandleTrace (..)
+
     -- * 🧪 Testing
   , bsRead
   , bsReadAll
   ) where
 
-import           Cardano.Slotting.Slot (SlotNo, WithOrigin (..))
-import           Data.Kind
-import           GHC.Generics
-import           NoThunks.Class (OnlyCheckWhnfNamed (..))
-import           Ouroboros.Consensus.Ledger.Basics
-import           Ouroboros.Consensus.Ledger.Extended
-import           Ouroboros.Consensus.Storage.LedgerDB.Snapshots
-import           Ouroboros.Consensus.Util.IOLike
-import           System.FS.API
-import qualified System.FS.API.Types as FS
+import Cardano.Slotting.Slot (SlotNo, WithOrigin (..))
+import Data.Kind
+import GHC.Generics
+import NoThunks.Class (OnlyCheckWhnfNamed (..))
+import Ouroboros.Consensus.Ledger.Basics
+import Ouroboros.Consensus.Ledger.Extended
+import Ouroboros.Consensus.Storage.LedgerDB.Snapshots
+import Ouroboros.Consensus.Util.IOLike
+import System.FS.API
+import System.FS.API.Types qualified as FS
 
 -- | The LedgerDB file system. Typically pointing to @/ledger@.
-newtype SnapshotsFS m = SnapshotsFS { snapshotsFs :: SomeHasFS m }
+newtype SnapshotsFS m = SnapshotsFS {snapshotsFs :: SomeHasFS m}
   deriving (Generic, NoThunks)
 
 -- | The LMDB file system. Typically pointing to @/lmdb@.
-newtype LiveLMDBFS m = LiveLMDBFS { liveLMDBFs :: SomeHasFS m }
+newtype LiveLMDBFS m = LiveLMDBFS {liveLMDBFs :: SomeHasFS m}
   deriving (Generic, NoThunks)
 
 {-------------------------------------------------------------------------------
@@ -76,46 +82,49 @@ newtype LiveLMDBFS m = LiveLMDBFS { liveLMDBFs :: SomeHasFS m }
 
 -- | A container for differences that are inteded to be flushed to a
 -- 'BackingStore'
-data DiffsToFlush l = DiffsToFlush {
-    -- | The set of differences that should be flushed into the 'BackingStore'
-    toFlushDiffs :: !(LedgerTables l DiffMK)
-    -- | The last flushed state and the newly flushed state. This will be the
-    -- immutable tip.
+data DiffsToFlush l = DiffsToFlush
+  { toFlushDiffs :: !(LedgerTables l DiffMK)
+  -- ^ The set of differences that should be flushed into the 'BackingStore'
   , toFlushState :: !(l EmptyMK, l EmptyMK)
-    -- | At which slot the diffs were split. This must be the slot of the state
-    -- considered as "last flushed" in the kept 'DbChangelog'
-  , toFlushSlot :: !SlotNo
+  -- ^ The last flushed state and the newly flushed state. This will be the
+  -- immutable tip.
+  , toFlushSlot :: !SlotNo
+  -- ^ At which slot the diffs were split. This must be the slot of the state
+  -- considered as "last flushed" in the kept 'DbChangelog'
   }
 
-data BackingStore m keys values diff = BackingStore {
-    -- | Close the backing store
-    --
-    -- Other methods throw exceptions if called on a closed store. 'bsClose'
-    -- itself is idempotent.
-    bsClose :: !(m ())
-    -- | Create a persistent copy
-    --
-    -- Each backing store implementation will offer a way to initialize itself
-    -- from such a path.
-    --
-    -- The destination path must not already exist. After this operation, it
-    -- will be a directory.
-  , bsCopy :: !(SerializeTablesHint values -> FS.FsPath -> m ())
-    -- | Open a 'BackingStoreValueHandle' capturing the current value of the
-    -- entire database
-  , bsValueHandle :: !(m (BackingStoreValueHandle m keys values))
-    -- | Apply a valid diff to the contents of the backing store
-  , bsWrite :: !(SlotNo -> WriteHint diff -> diff -> m ())
-    -- | The name of the BackingStore backend, for loading and writing snapshots
-    -- to disk
+data BackingStore m keys values diff = BackingStore
+  { bsClose :: !(m ())
+  -- ^ Close the backing store
+  --
+  -- Other methods throw exceptions if called on a closed store. 'bsClose'
+  -- itself is idempotent.
+  , bsCopy :: !(SerializeTablesHint values -> FS.FsPath -> m ())
+  -- ^ Create a persistent copy
+  --
+  -- Each backing store implementation will offer a way to initialize itself
+  -- from such a path.
+  --
+  -- The destination path must not already exist. After this operation, it
+  -- will be a directory.
+  , bsValueHandle :: !(m (BackingStoreValueHandle m keys values))
+  -- ^ Open a 'BackingStoreValueHandle' capturing the current value of the
+  -- entire database
+  , bsWrite :: !(SlotNo -> WriteHint diff -> diff -> m ())
+  -- ^ Apply a valid diff to the contents of the backing store
   , bsSnapshotBackend :: !SnapshotBackend
+  -- ^ The name of the BackingStore backend, for loading and writing snapshots
+  -- to disk
   }
 
-deriving via OnlyCheckWhnfNamed "BackingStore" (BackingStore m keys values diff)
-  instance NoThunks (BackingStore m keys values diff)
+deriving via
+  OnlyCheckWhnfNamed "BackingStore" (BackingStore m keys values diff)
+  instance
+    NoThunks (BackingStore m keys values diff)
 
 type LedgerBackingStore m l =
-  BackingStore m
+  BackingStore
+    m
     (LedgerTables l KeysMK)
     (LedgerTables l ValuesMK)
     (LedgerTables l DiffMK)
@@ -132,12 +141,12 @@ type family ReadHint values :: Type
 type instance ReadHint (LedgerTables l ValuesMK) = l EmptyMK
 
 -- | Choose how to initialize the backing store
-data InitFrom values =
-  -- | Initialize from a set of values, at the given slot.
+data InitFrom values
+  = -- | Initialize from a set of values, at the given slot.
     InitFromValues !(WithOrigin SlotNo) !(InitHint values) !values
-  -- | Use a snapshot at the given path to overwrite the set of values in the
+  | -- | Use a snapshot at the given path to overwrite the set of values in the
     -- opened database.
-  | InitFromCopy !(InitHint values) !FS.FsPath
+    InitFromCopy !(InitHint values) !FS.FsPath
 
 {-------------------------------------------------------------------------------
   Value handles
@@ -148,92 +157,95 @@ data InitFrom values =
 
 -- The performance cost is usually minimal unless this handle is held open too
 -- long. We expect clients of the 'BackingStore' to not retain handles for a
 -- long time.
-data BackingStoreValueHandle m keys values = BackingStoreValueHandle {
-    -- | At which slot this handle was created
-    bsvhAtSlot :: !(WithOrigin SlotNo)
-    -- | Close the handle
-    --
-    -- Other methods throw exceptions if called on a closed handle. 'bsvhClose'
-    -- itself is idempotent.
-  , bsvhClose :: !(m ())
-    -- | See 'RangeQuery'
+data BackingStoreValueHandle m keys values = BackingStoreValueHandle
+  { bsvhAtSlot :: !(WithOrigin SlotNo)
+  -- ^ At which slot this handle was created
+  , bsvhClose :: !(m ())
+  -- ^ Close the handle
+  --
+  -- Other methods throw exceptions if called on a closed handle. 'bsvhClose'
+  -- itself is idempotent.
   , bsvhRangeRead :: !(ReadHint values -> RangeQuery keys -> m values)
-    -- | Costly read all operation, not to be used in Consensus but only in
-    -- snapshot-converter executable.
-  , bsvhReadAll :: !(ReadHint values -> m values)
-    -- | Read the given keys from the handle
-    --
-    -- Absent keys will merely not be present in the result instead of causing a
-    -- failure or an exception.
-  , bsvhRead :: !(ReadHint values -> keys -> m values)
-    -- | Retrieve statistics
-  , bsvhStat :: !(m Statistics)
+  -- ^ See 'RangeQuery'
+  , bsvhReadAll :: !(ReadHint values -> m values)
+  -- ^ Costly read all operation, not to be used in Consensus but only in
+  -- snapshot-converter executable.
+  , bsvhRead :: !(ReadHint values -> keys -> m values)
+  -- ^ Read the given keys from the handle
+  --
+  -- Absent keys will merely not be present in the result instead of causing a
+  -- failure or an exception.
+  , bsvhStat :: !(m Statistics)
+  -- ^ Retrieve statistics
   }
 
-deriving via OnlyCheckWhnfNamed "BackingStoreValueHandle" (BackingStoreValueHandle m keys values)
-  instance NoThunks (BackingStoreValueHandle m keys values)
+deriving via
+  OnlyCheckWhnfNamed "BackingStoreValueHandle" (BackingStoreValueHandle m keys values)
+  instance
+    NoThunks (BackingStoreValueHandle m keys values)
 
 type LedgerBackingStoreValueHandle m l =
-  BackingStoreValueHandle m
+  BackingStoreValueHandle
+    m
     (LedgerTables l KeysMK)
     (LedgerTables l ValuesMK)
 
 type BackingStoreValueHandle' m blk =
   LedgerBackingStoreValueHandle m (ExtLedgerState blk)
 
 castBackingStoreValueHandle ::
-     (Functor m, ReadHint values ~ ReadHint values')
-  => (values -> values')
-  -> (keys' -> keys)
-  -> BackingStoreValueHandle m keys values
-  -> BackingStoreValueHandle m keys' values'
+  (Functor m, ReadHint values ~ ReadHint values') =>
+  (values -> values') ->
+  (keys' -> keys) ->
+  BackingStoreValueHandle m keys values ->
+  BackingStoreValueHandle m keys' values'
 castBackingStoreValueHandle f g bsvh =
-  BackingStoreValueHandle {
-      bsvhAtSlot
+  BackingStoreValueHandle
+    { bsvhAtSlot
     , bsvhClose
    , bsvhReadAll = \rhint -> f <$> bsvhReadAll rhint
    , bsvhRangeRead = \rhint (RangeQuery prev count) ->
-        fmap f . bsvhRangeRead rhint $ RangeQuery (fmap g prev) count
+        fmap f . bsvhRangeRead rhint $ RangeQuery (fmap g prev) count
    , bsvhRead = \rhint -> fmap f . bsvhRead rhint . g
    , bsvhStat
    }
-  where
-    BackingStoreValueHandle {
-        bsvhClose
-      , bsvhReadAll
-      , bsvhAtSlot
-      , bsvhRangeRead
-      , bsvhRead
-      , bsvhStat
-      } = bsvh
+ where
+  BackingStoreValueHandle
+    { bsvhClose
+    , bsvhReadAll
+    , bsvhAtSlot
+    , bsvhRangeRead
+    , bsvhRead
+    , bsvhStat
+    } = bsvh
 
 -- | A combination of 'bsValueHandle' and 'bsvhRead'
 bsRead ::
-     MonadThrow m
-  => BackingStore m keys values diff
-  -> ReadHint values
-  -> keys
-  -> m (WithOrigin SlotNo, values)
+  MonadThrow m =>
+  BackingStore m keys values diff ->
+  ReadHint values ->
+  keys ->
+  m (WithOrigin SlotNo, values)
 bsRead store rhint keys = withBsValueHandle store $ \vh -> do
-    values <- bsvhRead vh rhint keys
-    pure (bsvhAtSlot vh, values)
+  values <- bsvhRead vh rhint keys
+  pure (bsvhAtSlot vh, values)
 
 bsReadAll ::
-     MonadThrow m
-  => BackingStore m keys values diff
-  -> ReadHint values
-  -> m values
+  MonadThrow m =>
+  BackingStore m keys values diff ->
+  ReadHint values ->
+  m values
 bsReadAll store rhint = withBsValueHandle store $ \vh -> bsvhReadAll vh rhint
 
 -- | A 'IOLike.bracket'ed 'bsValueHandle'
 withBsValueHandle ::
-     MonadThrow m
-  => BackingStore m keys values diff
-  -> (BackingStoreValueHandle m keys values -> m a)
-  -> m a
+  MonadThrow m =>
+  BackingStore m keys values diff ->
+  (BackingStoreValueHandle m keys values -> m a) ->
+  m a
 withBsValueHandle store =
-    bracket
-      (bsValueHandle store)
-      bsvhClose
+  bracket
+    (bsValueHandle store)
+    bsvhClose
 
 {-------------------------------------------------------------------------------
   Query
@@ -242,25 +254,25 @@ withBsValueHandle store =
 
 -- | The arguments for a query to the backing store, it is up to the particular
 -- function that is performing the query to construct a value of this type, run
 -- the query and, if appropriate, repeat this process to do a subsequent query.
-data RangeQuery keys = RangeQuery {
-    -- | The result of this range query begin at first key that is strictly
-    -- greater than the greatest key in 'rqPrev'.
-    --
-    -- If the given set of keys is 'Just' but contains no keys, then the query
-    -- will return no results. (This is the steady-state once a looping range
-    -- query reaches the end of the table.)
-    rqPrev :: !(Maybe keys)
-    -- | Roughly how many values to read.
-    --
-    -- The query may return a different number of values than this even if it
-    -- has not reached the last key. The only crucial invariant is that the
-    -- query only returns an empty map if there are no more keys to read on
-    -- disk, or if 'QueryBatchSize' consecutive values have been deleted in
-    -- the changelog, which is extremely unlikely due to the random access
-    -- pattern of the UTxO set.
-  , rqCount :: !Int
-  }
-  deriving stock (Show, Eq)
+data RangeQuery keys = RangeQuery
+  { rqPrev :: !(Maybe keys)
+  -- ^ The result of this range query begin at first key that is strictly
+  -- greater than the greatest key in 'rqPrev'.
+  --
+  -- If the given set of keys is 'Just' but contains no keys, then the query
+  -- will return no results. (This is the steady-state once a looping range
+  -- query reaches the end of the table.)
+  , rqCount :: !Int
+  -- ^ Roughly how many values to read.
+  --
+  -- The query may return a different number of values than this even if it
+  -- has not reached the last key. The only crucial invariant is that the
+  -- query only returns an empty map if there are no more keys to read on
+  -- disk, or if 'QueryBatchSize' consecutive values have been deleted in
+  -- the changelog, which is extremely unlikely due to the random access
+  -- pattern of the UTxO set.
+  }
+  deriving stock (Show, Eq)
 
 {-------------------------------------------------------------------------------
   Statistics
@@ -272,14 +284,14 @@ data RangeQuery keys = RangeQuery {
 
 -- state of a key-value store. Combine this with information from a
 -- 'DbChangelog' to obtain statistics about a "logical" state of the key-value
 -- store. See 'getStatistics'.
-data Statistics = Statistics {
-    -- | The last slot number for which key-value pairs were stored.
-    --
-    -- INVARIANT: the 'sequenceNumber' returned by using 'bsvhStat' on a value
-    -- handle should match 'bsvhAtSlot' for that same value handle.
-    sequenceNumber :: !(WithOrigin SlotNo)
-    -- | The total number of key-value pair entries that are stored.
-  , numEntries :: !Int
+data Statistics = Statistics
+  { sequenceNumber :: !(WithOrigin SlotNo)
+  -- ^ The last slot number for which key-value pairs were stored.
+  --
+  -- INVARIANT: the 'sequenceNumber' returned by using 'bsvhStat' on a value
+  -- handle should match 'bsvhAtSlot' for that same value handle.
+  , numEntries :: !Int
+  -- ^ The total number of key-value pair entries that are stored.
   }
   deriving stock (Show, Eq)
 
 {-------------------------------------------------------------------------------
   Tracing
@@ -287,30 +299,30 @@ data Statistics = Statistics {
-------------------------------------------------------------------------------}
 
-data BackingStoreTrace =
-    BSOpening
-  | BSOpened !(Maybe FS.FsPath)
-  | BSInitialisingFromCopy !FS.FsPath
-  | BSInitialisedFromCopy !FS.FsPath
+data BackingStoreTrace
+  = BSOpening
+  | BSOpened !(Maybe FS.FsPath)
+  | BSInitialisingFromCopy !FS.FsPath
+  | BSInitialisedFromCopy !FS.FsPath
   | BSInitialisingFromValues !(WithOrigin SlotNo)
-  | BSInitialisedFromValues !(WithOrigin SlotNo)
+  | BSInitialisedFromValues !(WithOrigin SlotNo)
   | BSClosing
   | BSAlreadyClosed
   | BSClosed
-  | BSCopying !FS.FsPath
-  | BSCopied !FS.FsPath
+  | BSCopying !FS.FsPath
+  | BSCopied !FS.FsPath
   | BSCreatingValueHandle
   | BSValueHandleTrace
       -- | The index of the value handle
      !(Maybe Int)
      !BackingStoreValueHandleTrace
   | BSCreatedValueHandle
-  | BSWriting !SlotNo
-  | BSWritten !(WithOrigin SlotNo) !SlotNo
+  | BSWriting !SlotNo
+  | BSWritten !(WithOrigin SlotNo) !SlotNo
   deriving (Eq, Show)
 
-data BackingStoreValueHandleTrace =
-    BSVHClosing
+data BackingStoreValueHandleTrace
+  = BSVHClosing
   | BSVHAlreadyClosed
   | BSVHClosed
   | BSVHRangeReading
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/InMemory.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/InMemory.hs
index 7e3f2f3cf1..8fcae79f64 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/InMemory.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/InMemory.hs
@@ -11,273 +11,300 @@
 
 -- | An implementation of a 'BackingStore' using a TVar. This is the
 -- implementation known as \"InMemory\".
-module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.InMemory (
-    -- * Constructor
+module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.InMemory
+  ( -- * Constructor
     newInMemoryBackingStore
+
     -- * Errors
   , InMemoryBackingStoreExn (..)
   , InMemoryBackingStoreInitExn (..)
   ) where
 
-import           Cardano.Binary as CBOR
-import           Cardano.Slotting.Slot (SlotNo, WithOrigin (..))
-import qualified Codec.CBOR.Read as CBOR
-import qualified Codec.CBOR.Write as CBOR
-import           Control.Monad (join, unless, void, when)
-import           Control.Monad.Class.MonadThrow (catch)
-import           Control.Tracer (Tracer, traceWith)
-import qualified Data.ByteString.Lazy as BSL
-import qualified Data.Map.Strict as Map
-import qualified Data.Set as Set
-import           Data.String (fromString)
-import           GHC.Generics
-import           Ouroboros.Consensus.Ledger.Basics
-import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff
-import           Ouroboros.Consensus.Storage.LedgerDB.API
-import           Ouroboros.Consensus.Storage.LedgerDB.Snapshots
-                   (SnapshotBackend (..))
-import           Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API
-import           Ouroboros.Consensus.Util.IOLike (Exception, IOLike,
-                   MonadSTM (STM, atomically), MonadThrow (throwIO), NoThunks,
-                   StrictTVar, newTVarIO, readTVar, throwSTM, writeTVar)
-import           Prelude hiding (lookup)
-import           System.FS.API
-                   (HasFS (createDirectory, doesDirectoryExist, doesFileExist, mkFsErrorPath),
-                   SomeHasFS (SomeHasFS), withFile)
-import           System.FS.API.Lazy (hGetAll, hPutAll)
-import           System.FS.API.Types (AllowExisting (MustBeNew), FsErrorPath,
-                   FsPath (fsPathToList), OpenMode (ReadMode, WriteMode),
-                   fsPathFromList)
+import Cardano.Binary as CBOR
+import Cardano.Slotting.Slot (SlotNo, WithOrigin (..))
+import Codec.CBOR.Read qualified as CBOR
+import Codec.CBOR.Write qualified as CBOR
+import Control.Monad (join, unless, void, when)
+import Control.Monad.Class.MonadThrow (catch)
+import Control.Tracer (Tracer, traceWith)
+import Data.ByteString.Lazy qualified as BSL
+import Data.Map.Strict qualified as Map
+import Data.Set qualified as Set
+import Data.String (fromString)
+import GHC.Generics
+import Ouroboros.Consensus.Ledger.Basics
+import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff
+import Ouroboros.Consensus.Storage.LedgerDB.API
+import Ouroboros.Consensus.Storage.LedgerDB.Snapshots
+  ( SnapshotBackend (..)
+  )
+import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API
+import Ouroboros.Consensus.Util.IOLike
+  ( Exception
+  , IOLike
+  , MonadSTM (STM, atomically)
+  , MonadThrow (throwIO)
+  , NoThunks
+  , StrictTVar
+  , newTVarIO
+  , readTVar
+  , throwSTM
+  , writeTVar
+  )
+import System.FS.API
+  ( HasFS (createDirectory, doesDirectoryExist, doesFileExist, mkFsErrorPath)
+  , SomeHasFS (SomeHasFS)
+  , withFile
+  )
+import System.FS.API.Lazy (hGetAll, hPutAll)
+import System.FS.API.Types
+  ( AllowExisting (MustBeNew)
+  , FsErrorPath
+  , FsPath (fsPathToList)
+  , OpenMode (ReadMode, WriteMode)
+  , fsPathFromList
+  )
+import Prelude hiding (lookup)
 
 {-------------------------------------------------------------------------------
   An in-memory backing store
-------------------------------------------------------------------------------}
 
-data BackingStoreContents m l =
-    BackingStoreContentsClosed
+data BackingStoreContents m l
+  = BackingStoreContentsClosed
   | BackingStoreContents
      !(WithOrigin SlotNo)
      !(LedgerTables l ValuesMK)
-  deriving (Generic)
+  deriving Generic
 
-deriving instance ( NoThunks (TxIn l)
-                  , NoThunks (TxOut l)
-                  ) => NoThunks (BackingStoreContents m l)
+deriving instance
+  ( NoThunks (TxIn l)
+  , NoThunks (TxOut l)
+  ) =>
+  NoThunks (BackingStoreContents m l)
 
 -- | Use a 'TVar' as a trivial backing store
 newInMemoryBackingStore ::
-     forall l m.
-     ( IOLike m
-     , HasLedgerTables l
-     , CanUpgradeLedgerTables l
-     , SerializeTablesWithHint l
-     )
-  => Tracer m BackingStoreTrace
-  -> SnapshotsFS m
-  -> InitFrom (LedgerTables l ValuesMK)
-  -> m (LedgerBackingStore m l)
+  forall l m.
+  ( IOLike m
+  , HasLedgerTables l
+  , CanUpgradeLedgerTables l
+  , SerializeTablesWithHint l
+  ) =>
+  Tracer m BackingStoreTrace ->
+  SnapshotsFS m ->
+  InitFrom (LedgerTables l ValuesMK) ->
+  m (LedgerBackingStore m l)
 newInMemoryBackingStore tracer (SnapshotsFS (SomeHasFS fs)) initialization = do
-    traceWith tracer BSOpening
-    ref <- do
-      (slot, values) <- case initialization of
-        InitFromCopy hint path -> do
-          traceWith tracer $ BSInitialisingFromCopy path
-          tvarFileExists <- doesFileExist fs (extendPath path)
-          unless tvarFileExists $
-            throwIO .
StoreDirIsIncompatible $ mkFsErrorPath fs path - withFile fs (extendPath path) ReadMode $ \h -> do - bs <- hGetAll fs h - case CBOR.deserialiseFromBytes ((,) <$> CBOR.fromCBOR <*> valuesMKDecoder hint) bs of - Left err -> throwIO $ InMemoryBackingStoreDeserialiseExn err - Right (extra, x) -> do - unless (BSL.null extra) $ throwIO InMemoryIncompleteDeserialiseExn - traceWith tracer $ BSInitialisedFromCopy path - pure x - InitFromValues slot _ values -> do - traceWith tracer $ BSInitialisingFromValues slot - pure (slot, values) - newTVarIO $ BackingStoreContents slot values - traceWith tracer $ BSOpened Nothing - pure BackingStore { - bsClose = do - traceWith tracer BSClosing - catch - (atomically $ do - guardClosed ref - writeTVar ref BackingStoreContentsClosed - ) - (\case + traceWith tracer BSOpening + ref <- do + (slot, values) <- case initialization of + InitFromCopy hint path -> do + traceWith tracer $ BSInitialisingFromCopy path + tvarFileExists <- doesFileExist fs (extendPath path) + unless tvarFileExists $ + throwIO . 
StoreDirIsIncompatible $ + mkFsErrorPath fs path + withFile fs (extendPath path) ReadMode $ \h -> do + bs <- hGetAll fs h + case CBOR.deserialiseFromBytes ((,) <$> CBOR.fromCBOR <*> valuesMKDecoder hint) bs of + Left err -> throwIO $ InMemoryBackingStoreDeserialiseExn err + Right (extra, x) -> do + unless (BSL.null extra) $ throwIO InMemoryIncompleteDeserialiseExn + traceWith tracer $ BSInitialisedFromCopy path + pure x + InitFromValues slot _ values -> do + traceWith tracer $ BSInitialisingFromValues slot + pure (slot, values) + newTVarIO $ BackingStoreContents slot values + traceWith tracer $ BSOpened Nothing + pure + BackingStore + { bsClose = do + traceWith tracer BSClosing + catch + ( atomically $ do + guardClosed ref + writeTVar ref BackingStoreContentsClosed + ) + ( \case InMemoryBackingStoreClosedExn -> traceWith tracer BSAlreadyClosed e -> throwIO e - ) - traceWith tracer BSClosed + ) + traceWith tracer BSClosed , bsCopy = \hint path -> do traceWith tracer $ BSCopying path join $ atomically $ do readTVar ref >>= \case - BackingStoreContentsClosed -> + BackingStoreContentsClosed -> throwSTM InMemoryBackingStoreClosedExn BackingStoreContents slot values -> pure $ do exists <- doesDirectoryExist fs path when exists $ throwIO InMemoryBackingStoreDirectoryExists createDirectory fs path withFile fs (extendPath path) (WriteMode MustBeNew) $ \h -> - void $ hPutAll fs h - $ CBOR.toLazyByteString - $ CBOR.toCBOR slot <> valuesMKEncoder hint values + void $ + hPutAll fs h $ + CBOR.toLazyByteString $ + CBOR.toCBOR slot <> valuesMKEncoder hint values traceWith tracer $ BSCopied path , bsValueHandle = do traceWith tracer BSCreatingValueHandle vh <- join $ atomically $ do - readTVar ref >>= \case - BackingStoreContentsClosed -> - throwSTM InMemoryBackingStoreClosedExn - BackingStoreContents slot values -> pure $ do - refHandleClosed <- newTVarIO False - pure $ BackingStoreValueHandle { - bsvhAtSlot = slot - , bsvhClose = do - traceWith tracer $ BSValueHandleTrace Nothing 
BSVHClosing - catch - (atomically $ do - guardClosed ref - guardHandleClosed refHandleClosed - writeTVar refHandleClosed True - ) - (\case - InMemoryBackingStoreClosedExn -> - traceWith tracer BSAlreadyClosed - InMemoryBackingStoreValueHandleClosedExn -> - traceWith tracer (BSValueHandleTrace Nothing BSVHAlreadyClosed) - e -> - throwIO e - ) - traceWith tracer $ BSValueHandleTrace Nothing BSVHClosed - , bsvhRangeRead = \_ rq -> do - traceWith tracer $ BSValueHandleTrace Nothing BSVHRangeReading - r <- atomically $ do - guardClosed ref - guardHandleClosed refHandleClosed - pure $ rangeRead rq values - traceWith tracer $ BSValueHandleTrace Nothing BSVHRangeRead - pure r - , bsvhReadAll = \_ -> - atomically $ do - guardClosed ref - guardHandleClosed refHandleClosed - pure values - , bsvhRead = \_ keys -> do - traceWith tracer $ BSValueHandleTrace Nothing BSVHReading - r <- atomically $ do - guardClosed ref - guardHandleClosed refHandleClosed - pure $ lookup keys values - traceWith tracer $ BSValueHandleTrace Nothing BSVHRead - pure r - , bsvhStat = do - traceWith tracer $ BSValueHandleTrace Nothing BSVHStatting - r <- atomically $ do - guardClosed ref - guardHandleClosed refHandleClosed - pure $ Statistics slot (count values) - traceWith tracer $ BSValueHandleTrace Nothing BSVHStatted - pure r - } + readTVar ref >>= \case + BackingStoreContentsClosed -> + throwSTM InMemoryBackingStoreClosedExn + BackingStoreContents slot values -> pure $ do + refHandleClosed <- newTVarIO False + pure $ + BackingStoreValueHandle + { bsvhAtSlot = slot + , bsvhClose = do + traceWith tracer $ BSValueHandleTrace Nothing BSVHClosing + catch + ( atomically $ do + guardClosed ref + guardHandleClosed refHandleClosed + writeTVar refHandleClosed True + ) + ( \case + InMemoryBackingStoreClosedExn -> + traceWith tracer BSAlreadyClosed + InMemoryBackingStoreValueHandleClosedExn -> + traceWith tracer (BSValueHandleTrace Nothing BSVHAlreadyClosed) + e -> + throwIO e + ) + traceWith tracer $ 
BSValueHandleTrace Nothing BSVHClosed + , bsvhRangeRead = \_ rq -> do + traceWith tracer $ BSValueHandleTrace Nothing BSVHRangeReading + r <- atomically $ do + guardClosed ref + guardHandleClosed refHandleClosed + pure $ rangeRead rq values + traceWith tracer $ BSValueHandleTrace Nothing BSVHRangeRead + pure r + , bsvhReadAll = \_ -> + atomically $ do + guardClosed ref + guardHandleClosed refHandleClosed + pure values + , bsvhRead = \_ keys -> do + traceWith tracer $ BSValueHandleTrace Nothing BSVHReading + r <- atomically $ do + guardClosed ref + guardHandleClosed refHandleClosed + pure $ lookup keys values + traceWith tracer $ BSValueHandleTrace Nothing BSVHRead + pure r + , bsvhStat = do + traceWith tracer $ BSValueHandleTrace Nothing BSVHStatting + r <- atomically $ do + guardClosed ref + guardHandleClosed refHandleClosed + pure $ Statistics slot (count values) + traceWith tracer $ BSValueHandleTrace Nothing BSVHStatted + pure r + } traceWith tracer BSCreatedValueHandle pure vh , bsWrite = \slot2 (st, st') diff -> do - traceWith tracer $ BSWriting slot2 - slot1 <- atomically $ do - readTVar ref >>= \case - BackingStoreContentsClosed -> - throwSTM InMemoryBackingStoreClosedExn - BackingStoreContents slot1 values -> do - unless (slot1 <= At slot2) $ - throwSTM $ InMemoryBackingStoreNonMonotonicSeq (At slot2) slot1 - writeTVar ref $ - BackingStoreContents - (At slot2) - (upgradeTables st st' (appDiffs values diff)) - pure slot1 - traceWith tracer $ BSWritten slot1 slot2 + traceWith tracer $ BSWriting slot2 + slot1 <- atomically $ do + readTVar ref >>= \case + BackingStoreContentsClosed -> + throwSTM InMemoryBackingStoreClosedExn + BackingStoreContents slot1 values -> do + unless (slot1 <= At slot2) $ + throwSTM $ + InMemoryBackingStoreNonMonotonicSeq (At slot2) slot1 + writeTVar ref $ + BackingStoreContents + (At slot2) + (upgradeTables st st' (appDiffs values diff)) + pure slot1 + traceWith tracer $ BSWritten slot1 slot2 , bsSnapshotBackend = UTxOHDMemSnapshot } 
- where - extendPath path = - fsPathFromList $ fsPathToList path <> [fromString "tvar"] - - lookup :: LedgerTables l KeysMK - -> LedgerTables l ValuesMK - -> LedgerTables l ValuesMK - lookup = ltliftA2 lookup' + where + extendPath path = + fsPathFromList $ fsPathToList path <> [fromString "tvar"] - lookup' :: - Ord k - => KeysMK k v - -> ValuesMK k v - -> ValuesMK k v - lookup' (KeysMK ks) (ValuesMK vs) = - ValuesMK (Map.restrictKeys vs ks) + lookup :: + LedgerTables l KeysMK -> + LedgerTables l ValuesMK -> + LedgerTables l ValuesMK + lookup = ltliftA2 lookup' + lookup' :: + Ord k => + KeysMK k v -> + ValuesMK k v -> + ValuesMK k v + lookup' (KeysMK ks) (ValuesMK vs) = + ValuesMK (Map.restrictKeys vs ks) - rangeRead :: RangeQuery (LedgerTables l KeysMK) - -> LedgerTables l ValuesMK - -> LedgerTables l ValuesMK - rangeRead rq values = case rqPrev rq of - Nothing -> - ltmap (rangeRead0' (rqCount rq)) values - Just keys -> - ltliftA2 (rangeRead' (rqCount rq)) keys values + rangeRead :: + RangeQuery (LedgerTables l KeysMK) -> + LedgerTables l ValuesMK -> + LedgerTables l ValuesMK + rangeRead rq values = case rqPrev rq of + Nothing -> + ltmap (rangeRead0' (rqCount rq)) values + Just keys -> + ltliftA2 (rangeRead' (rqCount rq)) keys values - rangeRead0' :: - Int - -> ValuesMK k v - -> ValuesMK k v - rangeRead0' n (ValuesMK vs) = - ValuesMK $ Map.take n vs + rangeRead0' :: + Int -> + ValuesMK k v -> + ValuesMK k v + rangeRead0' n (ValuesMK vs) = + ValuesMK $ Map.take n vs - rangeRead' :: - Ord k - => Int - -> KeysMK k v - -> ValuesMK k v - -> ValuesMK k v - rangeRead' n (KeysMK ks) (ValuesMK vs) = - case Set.lookupMax ks of - Nothing -> ValuesMK Map.empty - Just k -> ValuesMK $ Map.take n $ snd $ Map.split k vs + rangeRead' :: + Ord k => + Int -> + KeysMK k v -> + ValuesMK k v -> + ValuesMK k v + rangeRead' n (KeysMK ks) (ValuesMK vs) = + case Set.lookupMax ks of + Nothing -> ValuesMK Map.empty + Just k -> ValuesMK $ Map.take n $ snd $ Map.split k vs - appDiffs :: 
LedgerTables l ValuesMK - -> LedgerTables l DiffMK - -> LedgerTables l ValuesMK - appDiffs = ltliftA2 applyDiff_ + appDiffs :: + LedgerTables l ValuesMK -> + LedgerTables l DiffMK -> + LedgerTables l ValuesMK + appDiffs = ltliftA2 applyDiff_ - applyDiff_ :: - Ord k - => ValuesMK k v - -> DiffMK k v - -> ValuesMK k v - applyDiff_ (ValuesMK values) (DiffMK diff) = - ValuesMK (Diff.applyDiff values diff) + applyDiff_ :: + Ord k => + ValuesMK k v -> + DiffMK k v -> + ValuesMK k v + applyDiff_ (ValuesMK values) (DiffMK diff) = + ValuesMK (Diff.applyDiff values diff) - count :: LedgerTables l ValuesMK -> Int - count = ltcollapse . ltmap (K2 . count') + count :: LedgerTables l ValuesMK -> Int + count = ltcollapse . ltmap (K2 . count') - count' :: ValuesMK k v -> Int - count' (ValuesMK values) = Map.size values + count' :: ValuesMK k v -> Int + count' (ValuesMK values) = Map.size values guardClosed :: - IOLike m - => StrictTVar m (BackingStoreContents ks vs) - -> STM m () -guardClosed ref = readTVar ref >>= \case - BackingStoreContentsClosed -> throwSTM InMemoryBackingStoreClosedExn - BackingStoreContents _ _ -> pure () + IOLike m => + StrictTVar m (BackingStoreContents ks vs) -> + STM m () +guardClosed ref = + readTVar ref >>= \case + BackingStoreContentsClosed -> throwSTM InMemoryBackingStoreClosedExn + BackingStoreContents _ _ -> pure () guardHandleClosed :: - IOLike m - => StrictTVar m Bool - -> STM m () + IOLike m => + StrictTVar m Bool -> + STM m () guardHandleClosed refHandleClosed = do isClosed <- readTVar refHandleClosed when isClosed $ throwSTM InMemoryBackingStoreValueHandleClosedExn @@ -290,27 +317,27 @@ guardHandleClosed refHandleClosed = do -- -- __WARNING__: these errors will be thrown in IO as having a corrupt database -- is critical for the functioning of Consensus. 
-data InMemoryBackingStoreExn =
-    InMemoryBackingStoreClosedExn
+data InMemoryBackingStoreExn
+  = InMemoryBackingStoreClosedExn
   | InMemoryBackingStoreValueHandleClosedExn
   | InMemoryBackingStoreDirectoryExists
   | InMemoryBackingStoreNonMonotonicSeq !(WithOrigin SlotNo) !(WithOrigin SlotNo)
   | InMemoryBackingStoreDeserialiseExn CBOR.DeserialiseFailure
   | InMemoryIncompleteDeserialiseExn
-  deriving anyclass (Exception)
-  deriving stock (Show)
+  deriving anyclass Exception
+  deriving stock Show

 -- | Errors that the InMemory backing store can throw on initialization.
 --
 -- __WARNING__: these errors will be thrown in IO as having a corrupt database
 -- is critical for the functioning of Consensus.
-newtype InMemoryBackingStoreInitExn =
-  StoreDirIsIncompatible FsErrorPath
-  deriving anyclass (Exception)
+newtype InMemoryBackingStoreInitExn
+  = StoreDirIsIncompatible FsErrorPath
+  deriving anyclass Exception

 instance Show InMemoryBackingStoreInitExn where
   show (StoreDirIsIncompatible p) =
-      "In-Memory database not found in the database directory: "
-    <> show p
-    <> ".\nPre-UTxO-HD and LMDB implementations are incompatible with the In-Memory \
-       \ implementation. Please delete your ledger database directory."
+    "In-Memory database not found in the database directory: "
+      <> show p
+      <> ".\nPre-UTxO-HD and LMDB implementations are incompatible with the In-Memory \
+         \implementation. Please delete your ledger database directory."
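The close-safety of the in-memory backing store above hinges on `guardClosed`/`guardHandleClosed` running inside the *same* STM transaction as the operation they protect. A minimal, self-contained sketch of that pattern (illustrative names and types, not the consensus API):

```haskell
{-# LANGUAGE LambdaCase #-}
-- Sketch of the guard pattern: every operation re-checks a status TVar
-- inside the STM transaction that does the work, so a racing close can
-- never be observed half-way through an operation.
import Control.Concurrent.STM
import Control.Exception (Exception, try)

data StoreClosed = StoreClosed deriving Show
instance Exception StoreClosed

data Contents a = ContentsClosed | Contents a

-- Throws (in STM, aborting the transaction) if the store is closed.
guardClosed :: TVar (Contents a) -> STM a
guardClosed ref =
  readTVar ref >>= \case
    ContentsClosed -> throwSTM StoreClosed
    Contents a -> pure a

readStore :: TVar (Contents a) -> IO a
readStore ref = atomically (guardClosed ref)

closeStore :: TVar (Contents a) -> IO ()
closeStore ref = atomically (writeTVar ref ContentsClosed)
```

Because `throwSTM` aborts the whole transaction, a read that races with `closeStore` either sees the store fully open or fails with `StoreClosed`; there is no intermediate state.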
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB.hs index 2dae0d3075..962ebb2489 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB.hs @@ -13,12 +13,14 @@ {-# LANGUAGE TypeOperators #-} -- | A 'BackingStore' implementation based on [LMDB](http://www.lmdb.tech/doc/). -module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB ( - -- * Opening a database +module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB + ( -- * Opening a database LMDBLimits (LMDBLimits, lmdbMapSize, lmdbMaxDatabases, lmdbMaxReaders) , newLMDBBackingStore + -- * Errors , LMDBErr (..) + -- * Internals exposed for @snapshot-converter@ , DbSeqNo (..) , LMDBMK (..) 
@@ -27,85 +29,94 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB ( , withDbSeqNoRWMaybeNull ) where -import Cardano.Slotting.Slot (SlotNo, WithOrigin (At)) -import qualified Codec.Serialise as S (Serialise (..)) -import qualified Control.Concurrent.Class.MonadSTM.TVar as IOLike -import Control.Monad (forM_, unless, void, when) -import qualified Control.Monad.Class.MonadSTM as IOLike -import Control.Monad.IO.Class (MonadIO (liftIO)) -import qualified Control.Tracer as Trace -import Data.Functor (($>), (<&>)) -import Data.Functor.Contravariant ((>$<)) -import Data.Map (Map) -import qualified Data.Map.Strict as Map -import Data.MemPack -import Data.Proxy -import qualified Data.Set as Set -import qualified Data.Text as Strict -import qualified Database.LMDB.Simple as LMDB -import qualified Database.LMDB.Simple.Cursor as LMDB.Cursor -import qualified Database.LMDB.Simple.Extra as LMDB -import qualified Database.LMDB.Simple.Internal as LMDB.Internal -import qualified Database.LMDB.Simple.TransactionHandle as TrH -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Ledger.Tables -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots - (SnapshotBackend (..)) -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API as API -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Bridge as Bridge -import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Status - (Status (..), StatusLock) -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Status as Status -import Ouroboros.Consensus.Util (foldlM') -import Ouroboros.Consensus.Util.IndexedMemPack -import Ouroboros.Consensus.Util.IOLike (Exception (..), IOLike, - MonadCatch (..), MonadThrow (..), bracket) -import qualified System.FS.API as FS +import Cardano.Slotting.Slot (SlotNo, WithOrigin (At)) +import 
Codec.Serialise qualified as S (Serialise (..)) +import Control.Concurrent.Class.MonadSTM.TVar qualified as IOLike +import Control.Monad (forM_, unless, void, when) +import Control.Monad.Class.MonadSTM qualified as IOLike +import Control.Monad.IO.Class (MonadIO (liftIO)) +import Control.Tracer qualified as Trace +import Data.Functor (($>), (<&>)) +import Data.Functor.Contravariant ((>$<)) +import Data.Map (Map) +import Data.Map.Strict qualified as Map +import Data.MemPack +import Data.Proxy +import Data.Set qualified as Set +import Data.Text qualified as Strict +import Database.LMDB.Simple qualified as LMDB +import Database.LMDB.Simple.Cursor qualified as LMDB.Cursor +import Database.LMDB.Simple.Extra qualified as LMDB +import Database.LMDB.Simple.Internal qualified as LMDB.Internal +import Database.LMDB.Simple.TransactionHandle qualified as TrH +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots + ( SnapshotBackend (..) + ) +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API qualified as API +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Bridge qualified as Bridge +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Status + ( Status (..) + , StatusLock + ) +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Status qualified as Status +import Ouroboros.Consensus.Util (foldlM') +import Ouroboros.Consensus.Util.IOLike + ( Exception (..) + , IOLike + , MonadCatch (..) + , MonadThrow (..) + , bracket + ) +import Ouroboros.Consensus.Util.IndexedMemPack +import System.FS.API qualified as FS {------------------------------------------------------------------------------- Database definition -------------------------------------------------------------------------------} -- | The LMDB database that underlies the backing store. 
-data Db m l = Db {
-    -- | The LMDB environment is a pointer to the directory that contains the
-    -- @`Db`@.
-    dbEnv :: !(LMDB.Environment LMDB.ReadWrite)
-    -- | The on-disk state of the @`Db`@.
-    --
-    -- The state is kept in an LDMB table with only one key and one value:
-    -- The current sequence number of the @`Db`@.
-  , dbState :: !(LMDB.Database () DbSeqNo)
-    -- | The LMDB tables with the key-value stores.
+data Db m l = Db
+  { dbEnv :: !(LMDB.Environment LMDB.ReadWrite)
+  -- ^ The LMDB environment is a pointer to the directory that contains the
+  -- @`Db`@.
+  , dbState :: !(LMDB.Database () DbSeqNo)
+  -- ^ The on-disk state of the @`Db`@.
+  --
+  -- The state is kept in an LMDB table with only one key and one value:
+  -- The current sequence number of the @`Db`@.
   , dbBackingTables :: !(LedgerTables l LMDBMK)
-  , dbFilePath :: !FilePath
-  , dbTracer :: !(Trace.Tracer m API.BackingStoreTrace)
-    -- | Status of the LMDB backing store. When 'Closed', all backing store
-    -- (value handle) operations will fail.
-  , dbStatusLock :: !(StatusLock m)
-    -- | Map of open value handles to cleanup actions. When closing the backing
-    -- store, these cleanup actions are used to ensure all value handles cleaned
-    -- up.
-    --
-    -- Note: why not use 'bsvhClose' here? We would get nested lock acquisition
-    -- on 'dbStatusLock', which causes a deadlock:
-    --
-    -- * 'bsClose' acquires a write lock
-    --
-    -- * 'bsvhClose' is called on a value handle
-    --
-    -- * 'bsvhClose' tries to acquire a read lock, but it has to wait for
-    --   'bsClose' to give up its write lock
-  , dbOpenHandles :: !(IOLike.TVar m (Map Int (Cleanup m)))
-  , dbNextId :: !(IOLike.TVar m Int)
+  -- ^ The LMDB tables with the key-value stores.
+  , dbFilePath :: !FilePath
+  , dbTracer :: !(Trace.Tracer m API.BackingStoreTrace)
+  , dbStatusLock :: !(StatusLock m)
+  -- ^ Status of the LMDB backing store. When 'Closed', all backing store
+  -- (value handle) operations will fail.
+  , dbOpenHandles :: !(IOLike.TVar m (Map Int (Cleanup m)))
+  -- ^ Map of open value handles to cleanup actions. When closing the backing
+  -- store, these cleanup actions are used to ensure all value handles are
+  -- cleaned up.
+  --
+  -- Note: why not use 'bsvhClose' here? We would get nested lock acquisition
+  -- on 'dbStatusLock', which causes a deadlock:
+  --
+  -- * 'bsClose' acquires a write lock
+  --
+  -- * 'bsvhClose' is called on a value handle
+  --
+  -- * 'bsvhClose' tries to acquire a read lock, but it has to wait for
+  --   'bsClose' to give up its write lock
+  , dbNextId :: !(IOLike.TVar m Int)
   }

 newtype LMDBLimits = MkLMDBLimits {unLMDBLimits :: LMDB.Limits}
   deriving (Show, Eq)

 {-# COMPLETE LMDBLimits #-}
+
 -- | Configuration to use for LMDB backing store initialisation.
 --
 -- Keep the following in mind:
@@ -117,17 +128,18 @@ newtype LMDBLimits = MkLMDBLimits {unLMDBLimits :: LMDB.Limits}
 -- 1 for the database state @'DbSeqNo'@.
 pattern LMDBLimits :: Int -> Int -> Int -> LMDBLimits
 pattern LMDBLimits{lmdbMapSize, lmdbMaxDatabases, lmdbMaxReaders} =
-  MkLMDBLimits LMDB.Limits {
-      LMDB.mapSize = lmdbMapSize
-    , LMDB.maxDatabases = lmdbMaxDatabases
-    , LMDB.maxReaders = lmdbMaxReaders
-    }
+  MkLMDBLimits
+    LMDB.Limits
+      { LMDB.mapSize = lmdbMapSize
+      , LMDB.maxDatabases = lmdbMaxDatabases
+      , LMDB.maxReaders = lmdbMaxReaders
+      }

 -- | The database state consists of only the database sequence number @dbsSeq@.
 -- @dbsSeq@ represents the slot up to which we have flushed changes to disk.
 -- Note that we only flush changes to disk if they have become immutable.
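The `dbOpenHandles` design note above (store the handles' raw cleanup actions rather than calling each handle's public `bsvhClose`, which would re-acquire `dbStatusLock` and deadlock) can be sketched in miniature. The names and the `MVar`-as-lock are illustrative only; the real code uses a `StatusLock` and `IOLike`:

```haskell
-- Closing the store takes its lock ONCE and then runs the stored raw
-- cleanup actions directly; it never re-enters the lock per handle the
-- way calling each handle's public close would.
import Control.Concurrent.MVar
import Data.IORef
import qualified Data.Map.Strict as Map

data Store = Store
  { stLock    :: MVar ()                     -- stands in for the status lock
  , stHandles :: IORef (Map.Map Int (IO ())) -- raw cleanup actions
  }

registerHandle :: Store -> Int -> IO () -> IO ()
registerHandle st i cleanup =
  modifyIORef' (stHandles st) (Map.insert i cleanup)

closeStore :: Store -> IO ()
closeStore st = withMVar (stLock st) $ \() -> do
  -- Atomically empty the handle map and run the collected cleanups.
  cleanups <- atomicModifyIORef' (stHandles st) (\m -> (Map.empty, Map.elems m))
  sequence_ cleanups
```

Had `closeStore` instead invoked a per-handle close that itself did `withMVar (stLock st) …`, the inner acquisition would block forever on the lock already held by `closeStore`: exactly the nested-acquisition deadlock the note describes.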
-newtype DbSeqNo = DbSeqNo { - dbsSeq :: WithOrigin SlotNo +newtype DbSeqNo = DbSeqNo + { dbsSeq :: WithOrigin SlotNo } deriving stock (Show, Generic) deriving anyclass S.Serialise @@ -140,21 +152,23 @@ data LMDBMK k v = LMDBMK !String !(LMDB.Database k v) -------------------------------------------------------------------------------} getDb :: - LMDB.Internal.IsMode mode - => K2 String k v - -> LMDB.Transaction mode (LMDBMK k v) + LMDB.Internal.IsMode mode => + K2 String k v -> + LMDB.Transaction mode (LMDBMK k v) getDb (K2 name) = LMDBMK name <$> LMDB.getDatabase (Just name) readAll :: - (Ord (TxIn l), MemPack (TxIn l), IndexedMemPack idx (TxOut l)) - => Proxy l - -> idx - -> LMDBMK (TxIn l) (TxOut l) - -> LMDB.Transaction mode (ValuesMK (TxIn l) (TxOut l)) + (Ord (TxIn l), MemPack (TxIn l), IndexedMemPack idx (TxOut l)) => + Proxy l -> + idx -> + LMDBMK (TxIn l) (TxOut l) -> + LMDB.Transaction mode (ValuesMK (TxIn l) (TxOut l)) readAll _ st (LMDBMK _ dbMK) = - ValuesMK <$> Bridge.runCursorAsTransaction' st - LMDB.Cursor.cgetAll - dbMK + ValuesMK + <$> Bridge.runCursorAsTransaction' + st + LMDB.Cursor.cgetAll + dbMK -- | @'rangeRead' rq dbMK@ performs a range read of @rqCount rq@ -- values from database @dbMK@, starting from some key depending on @rqPrev rq@. @@ -170,133 +184,141 @@ readAll _ st (LMDBMK _ dbMK) = -- lexicographical ordering of the serialised keys, or the result of this -- function will be unexpected. rangeRead :: - forall mode l idx. - (Ord (TxIn l), MemPack (TxIn l), IndexedMemPack idx (TxOut l)) - => API.RangeQuery (LedgerTables l KeysMK) - -> idx - -> LMDBMK (TxIn l) (TxOut l) - -> LMDB.Transaction mode (ValuesMK (TxIn l) (TxOut l)) + forall mode l idx. 
+  (Ord (TxIn l), MemPack (TxIn l), IndexedMemPack idx (TxOut l)) =>
+  API.RangeQuery (LedgerTables l KeysMK) ->
+  idx ->
+  LMDBMK (TxIn l) (TxOut l) ->
+  LMDB.Transaction mode (ValuesMK (TxIn l) (TxOut l))
 rangeRead rq st dbMK =
-    ValuesMK <$> case ksMK of
-      Nothing -> runCursorHelper Nothing
-      Just (LedgerTables (KeysMK ks)) -> case Set.lookupMax ks of
-        Nothing -> pure mempty
-        Just lastExcludedKey ->
-          runCursorHelper $ Just (lastExcludedKey, LMDB.Cursor.Exclusive)
-  where
-    LMDBMK _ db = dbMK
-
-    API.RangeQuery ksMK count = rq
-
-    runCursorHelper ::
-         Maybe (TxIn l, LMDB.Cursor.Bound) -- ^ Lower bound on read range
-      -> LMDB.Transaction mode (Map (TxIn l) (TxOut l))
-    runCursorHelper lb =
-      Bridge.runCursorAsTransaction' st
-        (LMDB.Cursor.cgetMany lb count)
-        db
+  ValuesMK <$> case ksMK of
+    Nothing -> runCursorHelper Nothing
+    Just (LedgerTables (KeysMK ks)) -> case Set.lookupMax ks of
+      Nothing -> pure mempty
+      Just lastExcludedKey ->
+        runCursorHelper $ Just (lastExcludedKey, LMDB.Cursor.Exclusive)
+ where
+  LMDBMK _ db = dbMK
+
+  API.RangeQuery ksMK count = rq
+
+  runCursorHelper ::
+    -- | Lower bound on read range
+    Maybe (TxIn l, LMDB.Cursor.Bound) ->
+    LMDB.Transaction mode (Map (TxIn l) (TxOut l))
+  runCursorHelper lb =
+    Bridge.runCursorAsTransaction'
+      st
+      (LMDB.Cursor.cgetMany lb count)
+      db

 initLMDBTable ::
-     (IndexedMemPack idx v, MemPack k)
-  => idx
-  -> LMDBMK k v
-  -> ValuesMK k v
-  -> LMDB.Transaction LMDB.ReadWrite (EmptyMK k v)
+  (IndexedMemPack idx v, MemPack k) =>
+  idx ->
+  LMDBMK k v ->
+  ValuesMK k v ->
+  LMDB.Transaction LMDB.ReadWrite (EmptyMK k v)
 initLMDBTable st (LMDBMK tblName db) (ValuesMK utxoVals) =
-    EmptyMK <$ lmdbInitTable
-  where
-    lmdbInitTable = do
-      isEmpty <- LMDB.null db
-      unless isEmpty $ liftIO . throwIO $ LMDBErrInitialisingNonEmpty tblName
-      void $ Map.traverseWithKey
-        (Bridge.indexedPut st db)
-        utxoVals
+  EmptyMK <$ lmdbInitTable
+ where
+  lmdbInitTable = do
+    isEmpty <- LMDB.null db
+    unless isEmpty $ liftIO . 
throwIO $ LMDBErrInitialisingNonEmpty tblName + void $ + Map.traverseWithKey + (Bridge.indexedPut st db) + utxoVals readLMDBTable :: - (IndexedMemPack idx v, MemPack k) - => Ord k - => idx - -> LMDBMK k v - -> KeysMK k v - -> LMDB.Transaction mode (ValuesMK k v) + (IndexedMemPack idx v, MemPack k) => + Ord k => + idx -> + LMDBMK k v -> + KeysMK k v -> + LMDB.Transaction mode (ValuesMK k v) readLMDBTable st (LMDBMK _ db) (KeysMK keys) = - ValuesMK <$> lmdbReadTable - where - lmdbReadTable = foldlM' go Map.empty (Set.toList keys) - where - go m k = Bridge.indexedGet st db k <&> \case - Nothing -> m - Just v -> Map.insert k v m + ValuesMK <$> lmdbReadTable + where + lmdbReadTable = foldlM' go Map.empty (Set.toList keys) + where + go m k = + Bridge.indexedGet st db k <&> \case + Nothing -> m + Just v -> Map.insert k v m writeLMDBTable :: - (IndexedMemPack idx v, MemPack k) - => idx - -> LMDBMK k v - -> DiffMK k v - -> LMDB.Transaction LMDB.ReadWrite (EmptyMK k v) + (IndexedMemPack idx v, MemPack k) => + idx -> + LMDBMK k v -> + DiffMK k v -> + LMDB.Transaction LMDB.ReadWrite (EmptyMK k v) writeLMDBTable st (LMDBMK _ db) (DiffMK d) = - EmptyMK <$ lmdbWriteTable - where - lmdbWriteTable = void $ Diff.traverseDeltaWithKey_ go d - where - go k de = case de of - Diff.Delete -> void $ Bridge.delete db k - Diff.Insert v -> Bridge.indexedPut st db k v + EmptyMK <$ lmdbWriteTable + where + lmdbWriteTable = void $ Diff.traverseDeltaWithKey_ go d + where + go k de = case de of + Diff.Delete -> void $ Bridge.delete db k + Diff.Insert v -> Bridge.indexedPut st db k v {------------------------------------------------------------------------------- Db state -------------------------------------------------------------------------------} readDbSeqNoMaybeNull :: - LMDB.Database () DbSeqNo - -> LMDB.Transaction mode (Maybe DbSeqNo) + LMDB.Database () DbSeqNo -> + LMDB.Transaction mode (Maybe DbSeqNo) readDbSeqNoMaybeNull db = LMDB.get db () readDbSeqNo :: - LMDB.Database () DbSeqNo - -> 
LMDB.Transaction mode DbSeqNo + LMDB.Database () DbSeqNo -> + LMDB.Transaction mode DbSeqNo readDbSeqNo db = readDbSeqNoMaybeNull db >>= maybe (liftIO . throwIO $ LMDBErrNoDbSeqNo) pure withDbSeqNoRW :: - LMDB.Database () DbSeqNo - -> (DbSeqNo -> LMDB.Transaction LMDB.ReadWrite (a, DbSeqNo)) - -> LMDB.Transaction LMDB.ReadWrite a + LMDB.Database () DbSeqNo -> + (DbSeqNo -> LMDB.Transaction LMDB.ReadWrite (a, DbSeqNo)) -> + LMDB.Transaction LMDB.ReadWrite a withDbSeqNoRW db f = withDbSeqNoRWMaybeNull db $ maybe (liftIO . throwIO $ LMDBErrNoDbSeqNo) f withDbSeqNoRWMaybeNull :: - LMDB.Database () DbSeqNo - -> (Maybe DbSeqNo -> LMDB.Transaction LMDB.ReadWrite (a, DbSeqNo)) - -> LMDB.Transaction LMDB.ReadWrite a -withDbSeqNoRWMaybeNull db f = + LMDB.Database () DbSeqNo -> + (Maybe DbSeqNo -> LMDB.Transaction LMDB.ReadWrite (a, DbSeqNo)) -> + LMDB.Transaction LMDB.ReadWrite a +withDbSeqNoRWMaybeNull db f = readDbSeqNoMaybeNull db >>= f >>= \(r, sNew) -> LMDB.put db () (Just sNew) $> r {------------------------------------------------------------------------------- Guards -------------------------------------------------------------------------------} -data GuardDbDir = DirMustExist | DirMustNotExist +data GuardDbDir = DirMustExist | DirMustNotExist -- | Guard for the existence/non-existence of a database directory, -- and create it if missing. 
checkAndOpenDbDir :: - (MonadIO m, IOLike m) - => GuardDbDir - -> FS.SomeHasFS m - -> FS.FsPath - -> m FilePath + (MonadIO m, IOLike m) => + GuardDbDir -> + FS.SomeHasFS m -> + FS.FsPath -> + m FilePath checkAndOpenDbDir mustExistDir (FS.SomeHasFS fs) path = do fileEx <- FS.doesFileExist fs path when fileEx $ - throwIO $ LMDBErrNotADir path + throwIO $ + LMDBErrNotADir path dirEx <- FS.doesDirectoryExist fs path - lmdbFileExists <- FS.doesFileExist fs path { FS.fsPathToList = FS.fsPathToList path ++ [Strict.pack "data.mdb"] } + lmdbFileExists <- + FS.doesFileExist fs path{FS.fsPathToList = FS.fsPathToList path ++ [Strict.pack "data.mdb"]} filepath <- FS.unsafeToFilePath fs path case dirEx of - True | DirMustNotExist <- mustExistDir -> throwIO $ LMDBErrDirExists filepath - | not lmdbFileExists -> throwIO $ LMDBErrDirIsNotLMDB filepath - | otherwise -> pure () - False | DirMustExist <- mustExistDir -> throwIO $ LMDBErrDirDoesntExist filepath - | otherwise -> pure () + True + | DirMustNotExist <- mustExistDir -> throwIO $ LMDBErrDirExists filepath + | not lmdbFileExists -> throwIO $ LMDBErrDirIsNotLMDB filepath + | otherwise -> pure () + False + | DirMustExist <- mustExistDir -> throwIO $ LMDBErrDirDoesntExist filepath + | otherwise -> pure () FS.createDirectoryIfMissing fs True path pure filepath @@ -309,19 +331,19 @@ checkAndOpenDbDir mustExistDir (FS.SomeHasFS fs) path = do -- (non-snapshot) tables will probably still be on-disk. These tables are not -- removed when stopping the node, so they should be "overwritten". 
checkAndOpenDbDirWithRetry :: - (MonadIO m, IOLike m) - => GuardDbDir - -> FS.SomeHasFS m - -> FS.FsPath - -> m FilePath + (MonadIO m, IOLike m) => + GuardDbDir -> + FS.SomeHasFS m -> + FS.FsPath -> + m FilePath checkAndOpenDbDirWithRetry gdd shfs@(FS.SomeHasFS fs) path = - handle retryHandler (checkAndOpenDbDir gdd shfs path) - where - retryHandler e = case (gdd, e) of - (DirMustNotExist, LMDBErrDirExists _path) -> do - FS.removeDirectoryRecursive fs path - checkAndOpenDbDir DirMustNotExist shfs path - _ -> throwIO e + handle retryHandler (checkAndOpenDbDir gdd shfs path) + where + retryHandler e = case (gdd, e) of + (DirMustNotExist, LMDBErrDirExists _path) -> do + FS.removeDirectoryRecursive fs path + checkAndOpenDbDir DirMustNotExist shfs path + _ -> throwIO e {------------------------------------------------------------------------------- Initialize an LMDB @@ -329,66 +351,74 @@ checkAndOpenDbDirWithRetry gdd shfs@(FS.SomeHasFS fs) path = -- | Initialise an LMDB database from these provided values. initFromVals :: - forall l m. - (HasLedgerTables l, MonadIO m, MemPackIdx l EmptyMK ~ l EmptyMK) - => Trace.Tracer m API.BackingStoreTrace - -> WithOrigin SlotNo - -- ^ The slot number up to which the ledger tables contain values. - -> LedgerTables l ValuesMK - -- ^ The ledger tables to initialise the LMDB database tables with. - -> LMDB.Environment LMDB.Internal.ReadWrite - -- ^ The LMDB environment. - -> LMDB.Database () DbSeqNo - -> l EmptyMK - -> LedgerTables l LMDBMK - -> m () + forall l m. + (HasLedgerTables l, MonadIO m, MemPackIdx l EmptyMK ~ l EmptyMK) => + Trace.Tracer m API.BackingStoreTrace -> + -- | The slot number up to which the ledger tables contain values. + WithOrigin SlotNo -> + -- | The ledger tables to initialise the LMDB database tables with. + LedgerTables l ValuesMK -> + -- | The LMDB environment. 
+ LMDB.Environment LMDB.Internal.ReadWrite -> + LMDB.Database () DbSeqNo -> + l EmptyMK -> + LedgerTables l LMDBMK -> + m () initFromVals tracer dbsSeq vals env st lst backingTables = do Trace.traceWith tracer $ API.BSInitialisingFromValues dbsSeq - liftIO $ LMDB.readWriteTransaction env $ - withDbSeqNoRWMaybeNull st $ \case - Nothing -> ltzipWith2A (initLMDBTable lst) backingTables vals - $> ((), DbSeqNo{dbsSeq}) - Just _ -> liftIO . throwIO $ LMDBErrInitialisingAlreadyHasState + liftIO $ + LMDB.readWriteTransaction env $ + withDbSeqNoRWMaybeNull st $ \case + Nothing -> + ltzipWith2A (initLMDBTable lst) backingTables vals + $> ((), DbSeqNo{dbsSeq}) + Just _ -> liftIO . throwIO $ LMDBErrInitialisingAlreadyHasState Trace.traceWith tracer $ API.BSInitialisedFromValues dbsSeq -- | Initialise an LMDB database from an existing LMDB database. initFromLMDBs :: - (MonadIO m, IOLike m) - => Trace.Tracer m API.BackingStoreTrace - -> LMDBLimits - -- ^ Configuration for the LMDB database that we initialise from. - -> API.SnapshotsFS m - -- ^ Abstraction over the filesystem. - -> FS.FsPath - -- ^ The path that contains the LMDB database that we want to initialise from. - -> API.LiveLMDBFS m - -- ^ Abstraction over the filesystem. - -> FS.FsPath - -- ^ The path where the new LMDB database should be initialised. - -> m () + (MonadIO m, IOLike m) => + Trace.Tracer m API.BackingStoreTrace -> + -- | Configuration for the LMDB database that we initialise from. + LMDBLimits -> + -- | Abstraction over the filesystem. + API.SnapshotsFS m -> + -- | The path that contains the LMDB database that we want to initialise from. + FS.FsPath -> + -- | Abstraction over the filesystem. + API.LiveLMDBFS m -> + -- | The path where the new LMDB database should be initialised. 
+ FS.FsPath -> + m () initFromLMDBs tracer limits (API.SnapshotsFS shfsFrom@(FS.SomeHasFS fsFrom)) from0 (API.LiveLMDBFS shfsTo) to0 = do - Trace.traceWith tracer $ API.BSInitialisingFromCopy from0 - from <- checkAndOpenDbDir DirMustExist shfsFrom from0 - -- On Windows, if we don't choose the mapsize carefully it will make the - -- snapshot grow. Therefore we are using the current filesize as mapsize - -- when opening the snapshot to avoid this. - stat <- FS.withFile fsFrom (from0 { FS.fsPathToList = FS.fsPathToList from0 ++ [Strict.pack "data.mdb"] }) FS.ReadMode (FS.hGetSize fsFrom) - to <- checkAndOpenDbDirWithRetry DirMustNotExist shfsTo to0 - bracket - (liftIO $ LMDB.openEnvironment from ((unLMDBLimits limits) { LMDB.mapSize = fromIntegral stat })) - (liftIO . LMDB.closeEnvironment) - (flip (lmdbCopy from0 tracer) to) - Trace.traceWith tracer $ API.BSInitialisedFromCopy from0 + Trace.traceWith tracer $ API.BSInitialisingFromCopy from0 + from <- checkAndOpenDbDir DirMustExist shfsFrom from0 + -- On Windows, if we don't choose the mapsize carefully it will make the + -- snapshot grow. Therefore we are using the current filesize as mapsize + -- when opening the snapshot to avoid this. + stat <- + FS.withFile + fsFrom + (from0{FS.fsPathToList = FS.fsPathToList from0 ++ [Strict.pack "data.mdb"]}) + FS.ReadMode + (FS.hGetSize fsFrom) + to <- checkAndOpenDbDirWithRetry DirMustNotExist shfsTo to0 + bracket + (liftIO $ LMDB.openEnvironment from ((unLMDBLimits limits){LMDB.mapSize = fromIntegral stat})) + (liftIO . LMDB.closeEnvironment) + (flip (lmdbCopy from0 tracer) to) + Trace.traceWith tracer $ API.BSInitialisedFromCopy from0 -- | Copy an existing LMDB database to a given directory. -lmdbCopy :: MonadIO m - => FS.FsPath - -> Trace.Tracer m API.BackingStoreTrace - -> LMDB.Environment LMDB.ReadWrite - -- ^ The environment in which the LMDB database lives. - -> FilePath - -- ^ The path where the copy should reside. 
- -> m () +lmdbCopy :: + MonadIO m => + FS.FsPath -> + Trace.Tracer m API.BackingStoreTrace -> + -- | The environment in which the LMDB database lives. + LMDB.Environment LMDB.ReadWrite -> + -- | The path where the copy should reside. + FilePath -> + m () lmdbCopy from0 tracer e to = do Trace.traceWith tracer $ API.BSCopying from0 liftIO $ LMDB.copyEnvironment e to @@ -396,152 +426,161 @@ lmdbCopy from0 tracer e to = do -- | Initialise a backing store. newLMDBBackingStore :: - forall m l. ( - HasCallStack, HasLedgerTables l, MonadIO m - , IOLike m, MemPackIdx l EmptyMK ~ l EmptyMK - ) - => Trace.Tracer m API.BackingStoreTrace - -> LMDBLimits - -- ^ Configuration parameters for the LMDB database that we - -- initialise. In case we initialise the LMDB database from - -- an existing LMDB database, we use these same configuration parameters - -- to open the existing LMDB database. - -> API.LiveLMDBFS m - -- ^ The FS for the LMDB live database - -> API.SnapshotsFS m - -> API.InitFrom (LedgerTables l ValuesMK) - -> m (API.LedgerBackingStore m l) + forall m l. + ( HasCallStack + , HasLedgerTables l + , MonadIO m + , IOLike m + , MemPackIdx l EmptyMK ~ l EmptyMK + ) => + Trace.Tracer m API.BackingStoreTrace -> + -- | Configuration parameters for the LMDB database that we + -- initialise. In case we initialise the LMDB database from + -- an existing LMDB database, we use these same configuration parameters + -- to open the existing LMDB database. 
+ LMDBLimits -> + -- | The FS for the LMDB live database + API.LiveLMDBFS m -> + API.SnapshotsFS m -> + API.InitFrom (LedgerTables l ValuesMK) -> + m (API.LedgerBackingStore m l) newLMDBBackingStore dbTracer limits liveFS@(API.LiveLMDBFS liveFS') snapFS@(API.SnapshotsFS snapFS') initFrom = do - Trace.traceWith dbTracer API.BSOpening + Trace.traceWith dbTracer API.BSOpening - db@Db { dbEnv - , dbState - , dbBackingTables - } <- createOrGetDB + db@Db + { dbEnv + , dbState + , dbBackingTables + } <- + createOrGetDB - maybePopulate dbEnv dbState dbBackingTables + maybePopulate dbEnv dbState dbBackingTables - Trace.traceWith dbTracer $ API.BSOpened $ Just path + Trace.traceWith dbTracer $ API.BSOpened $ Just path - pure $ mkBackingStore db + pure $ mkBackingStore db where - - path = FS.mkFsPath ["tables"] - - st = case initFrom of - API.InitFromCopy st' _ -> st' - API.InitFromValues _ st' _ -> st' - - createOrGetDB :: m (Db m l) - createOrGetDB = do - - dbOpenHandles <- IOLike.newTVarIO Map.empty - dbStatusLock <- Status.new Open - - -- get the filepath for this db creates the directory if appropriate - dbFilePath <- checkAndOpenDbDirWithRetry DirMustNotExist liveFS' path - - -- copy from another lmdb path if appropriate - case initFrom of - API.InitFromCopy _ fp -> initFromLMDBs dbTracer limits snapFS fp liveFS path - API.InitFromValues{} -> pure () - - -- open this database - dbEnv <- liftIO $ LMDB.openEnvironment dbFilePath (unLMDBLimits limits) - - -- The LMDB.Database that holds the @`DbSeqNo`@ (i.e. 
sequence number) - -- This transaction must be read-write because on initialisation it creates the database - dbState <- liftIO $ LMDB.readWriteTransaction dbEnv $ LMDB.getDatabase (Just "_dbstate") - - -- Here we get the LMDB.Databases for the tables of the ledger state - -- Must be read-write transaction because tables may need to be created - dbBackingTables <- liftIO $ LMDB.readWriteTransaction dbEnv $ - lttraverse getDb (ltpure $ K2 "utxo") - - dbNextId <- IOLike.newTVarIO 0 - - pure $ Db { dbEnv - , dbState - , dbBackingTables - , dbFilePath - , dbTracer - , dbStatusLock - , dbOpenHandles - , dbNextId - } - - maybePopulate :: LMDB.Internal.Environment LMDB.Internal.ReadWrite - -> LMDB.Internal.Database () DbSeqNo - -> LedgerTables l LMDBMK - -> m () - maybePopulate dbEnv dbState dbBackingTables = do - -- now initialise those tables if appropriate - case initFrom of - API.InitFromValues slot _ vals -> initFromVals dbTracer slot vals dbEnv dbState st dbBackingTables - API.InitFromCopy{} -> pure () - - mkBackingStore :: HasCallStack => Db m l -> API.LedgerBackingStore m l - mkBackingStore db = - let bsClose :: m () - bsClose = Status.withWriteAccess dbStatusLock traceAlreadyClosed $ do - Trace.traceWith dbTracer API.BSClosing - openHandles <- IOLike.readTVarIO dbOpenHandles - forM_ openHandles runCleanup - IOLike.atomically $ IOLike.writeTVar dbOpenHandles mempty - liftIO $ LMDB.closeEnvironment dbEnv - Trace.traceWith dbTracer API.BSClosed - pure ((), Closed) - where - traceAlreadyClosed = Trace.traceWith dbTracer API.BSAlreadyClosed - - bsCopy bsp = Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do - to <- checkAndOpenDbDir DirMustNotExist snapFS' bsp - lmdbCopy path dbTracer dbEnv to - - bsValueHandle = Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do - mkLMDBBackingStoreValueHandle db - - bsWrite :: SlotNo -> (l EmptyMK, l EmptyMK) -> LedgerTables l DiffMK -> m () - bsWrite slot (_st, st') diffs = do - Trace.traceWith dbTracer $ 
API.BSWriting slot - Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do - oldSlot <- liftIO $ LMDB.readWriteTransaction dbEnv $ withDbSeqNoRW dbState $ \s@DbSeqNo{dbsSeq} -> do - unless (dbsSeq <= At slot) $ - -- This inequality is non-strict because of EBBs having the - -- same slot as its predecessor. - liftIO . throwIO $ LMDBErrNonMonotonicSeq (At slot) dbsSeq - void $ ltzipWith2A (writeLMDBTable st') dbBackingTables diffs - pure (dbsSeq, s {dbsSeq = At slot}) - Trace.traceWith dbTracer $ API.BSWritten oldSlot slot - - in API.BackingStore { API.bsClose = bsClose - , API.bsCopy = const bsCopy - , API.bsValueHandle = bsValueHandle - , API.bsWrite = bsWrite - , API.bsSnapshotBackend = UTxOHDLMDBSnapshot - } - - where - Db { dbEnv - , dbState - , dbBackingTables - , dbStatusLock - , dbOpenHandles - } = db + path = FS.mkFsPath ["tables"] + + st = case initFrom of + API.InitFromCopy st' _ -> st' + API.InitFromValues _ st' _ -> st' + + createOrGetDB :: m (Db m l) + createOrGetDB = do + dbOpenHandles <- IOLike.newTVarIO Map.empty + dbStatusLock <- Status.new Open + + -- get the filepath for this db creates the directory if appropriate + dbFilePath <- checkAndOpenDbDirWithRetry DirMustNotExist liveFS' path + + -- copy from another lmdb path if appropriate + case initFrom of + API.InitFromCopy _ fp -> initFromLMDBs dbTracer limits snapFS fp liveFS path + API.InitFromValues{} -> pure () + + -- open this database + dbEnv <- liftIO $ LMDB.openEnvironment dbFilePath (unLMDBLimits limits) + + -- The LMDB.Database that holds the @`DbSeqNo`@ (i.e. 
sequence number) + -- This transaction must be read-write because on initialisation it creates the database + dbState <- liftIO $ LMDB.readWriteTransaction dbEnv $ LMDB.getDatabase (Just "_dbstate") + + -- Here we get the LMDB.Databases for the tables of the ledger state + -- Must be read-write transaction because tables may need to be created + dbBackingTables <- + liftIO $ + LMDB.readWriteTransaction dbEnv $ + lttraverse getDb (ltpure $ K2 "utxo") + + dbNextId <- IOLike.newTVarIO 0 + + pure $ + Db + { dbEnv + , dbState + , dbBackingTables + , dbFilePath + , dbTracer + , dbStatusLock + , dbOpenHandles + , dbNextId + } + + maybePopulate :: + LMDB.Internal.Environment LMDB.Internal.ReadWrite -> + LMDB.Internal.Database () DbSeqNo -> + LedgerTables l LMDBMK -> + m () + maybePopulate dbEnv dbState dbBackingTables = do + -- now initialise those tables if appropriate + case initFrom of + API.InitFromValues slot _ vals -> initFromVals dbTracer slot vals dbEnv dbState st dbBackingTables + API.InitFromCopy{} -> pure () + + mkBackingStore :: HasCallStack => Db m l -> API.LedgerBackingStore m l + mkBackingStore db = + let bsClose :: m () + bsClose = Status.withWriteAccess dbStatusLock traceAlreadyClosed $ do + Trace.traceWith dbTracer API.BSClosing + openHandles <- IOLike.readTVarIO dbOpenHandles + forM_ openHandles runCleanup + IOLike.atomically $ IOLike.writeTVar dbOpenHandles mempty + liftIO $ LMDB.closeEnvironment dbEnv + Trace.traceWith dbTracer API.BSClosed + pure ((), Closed) + where + traceAlreadyClosed = Trace.traceWith dbTracer API.BSAlreadyClosed + + bsCopy bsp = Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do + to <- checkAndOpenDbDir DirMustNotExist snapFS' bsp + lmdbCopy path dbTracer dbEnv to + + bsValueHandle = Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do + mkLMDBBackingStoreValueHandle db + + bsWrite :: SlotNo -> (l EmptyMK, l EmptyMK) -> LedgerTables l DiffMK -> m () + bsWrite slot (_st, st') diffs = do + Trace.traceWith 
dbTracer $ API.BSWriting slot + Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do + oldSlot <- liftIO $ LMDB.readWriteTransaction dbEnv $ withDbSeqNoRW dbState $ \s@DbSeqNo{dbsSeq} -> do + unless (dbsSeq <= At slot) $ + -- This inequality is non-strict because of EBBs having the + -- same slot as its predecessor. + liftIO . throwIO $ + LMDBErrNonMonotonicSeq (At slot) dbsSeq + void $ ltzipWith2A (writeLMDBTable st') dbBackingTables diffs + pure (dbsSeq, s{dbsSeq = At slot}) + Trace.traceWith dbTracer $ API.BSWritten oldSlot slot + in API.BackingStore + { API.bsClose = bsClose + , API.bsCopy = const bsCopy + , API.bsValueHandle = bsValueHandle + , API.bsWrite = bsWrite + , API.bsSnapshotBackend = UTxOHDLMDBSnapshot + } + where + Db + { dbEnv + , dbState + , dbBackingTables + , dbStatusLock + , dbOpenHandles + } = db -- | Create a backing store value handle that has a consistent view of the -- current database state. mkLMDBBackingStoreValueHandle :: - forall l m. - (HasLedgerTables l, MonadIO m, IOLike m, HasCallStack, MemPackIdx l EmptyMK ~ l EmptyMK) - => Db m l - -- ^ The LMDB database for which the backing store value handle is - -- created. - -> m (API.LedgerBackingStoreValueHandle m l) + forall l m. + (HasLedgerTables l, MonadIO m, IOLike m, HasCallStack, MemPackIdx l EmptyMK ~ l EmptyMK) => + -- | The LMDB database for which the backing store value handle is + -- created. + Db m l -> + m (API.LedgerBackingStoreValueHandle m l) mkLMDBBackingStoreValueHandle db = do vhId <- IOLike.atomically $ do vhId <- IOLike.readTVar dbNextId - IOLike.modifyTVar' dbNextId (+1) + IOLike.modifyTVar' dbNextId (+ 1) pure vhId let @@ -556,99 +595,109 @@ mkLMDBBackingStoreValueHandle db = do vhStatusLock <- Status.new Open let - -- | Clean up a backing store value handle by committing its transaction + -- \| Clean up a backing store value handle by committing its transaction -- handle. 
cleanup :: Cleanup m - cleanup = Cleanup $ - liftIO $ TrH.commit trh + cleanup = + Cleanup $ + liftIO $ + TrH.commit trh bsvhClose :: m () bsvhClose = Status.withReadAccess dbStatusLock traceAlreadyClosed $ do - Status.withWriteAccess vhStatusLock traceTVHAlreadyClosed $ do - Trace.traceWith tracer API.BSVHClosing - runCleanup cleanup - IOLike.atomically $ IOLike.modifyTVar' dbOpenHandles (Map.delete vhId) - Trace.traceWith tracer API.BSVHClosed - pure ((), Closed) - where - traceAlreadyClosed = Trace.traceWith dbTracer API.BSAlreadyClosed - traceTVHAlreadyClosed = Trace.traceWith tracer API.BSVHAlreadyClosed + Status.withWriteAccess vhStatusLock traceTVHAlreadyClosed $ do + Trace.traceWith tracer API.BSVHClosing + runCleanup cleanup + IOLike.atomically $ IOLike.modifyTVar' dbOpenHandles (Map.delete vhId) + Trace.traceWith tracer API.BSVHClosed + pure ((), Closed) + where + traceAlreadyClosed = Trace.traceWith dbTracer API.BSAlreadyClosed + traceTVHAlreadyClosed = Trace.traceWith tracer API.BSVHAlreadyClosed bsvhRead :: l EmptyMK -> LedgerTables l KeysMK -> m (LedgerTables l ValuesMK) bsvhRead st keys = Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do - Status.withReadAccess vhStatusLock (throwIO (LMDBErrNoValueHandle vhId)) $ do - Trace.traceWith tracer API.BSVHReading - res <- liftIO $ TrH.submitReadOnly trh $ - ltzipWith2A (readLMDBTable st) dbBackingTables keys - Trace.traceWith tracer API.BSVHRead - pure res + Status.withReadAccess vhStatusLock (throwIO (LMDBErrNoValueHandle vhId)) $ do + Trace.traceWith tracer API.BSVHReading + res <- + liftIO $ + TrH.submitReadOnly trh $ + ltzipWith2A (readLMDBTable st) dbBackingTables keys + Trace.traceWith tracer API.BSVHRead + pure res bsvhRangeRead :: - l EmptyMK - -> API.RangeQuery (LedgerTables l KeysMK) - -> m (LedgerTables l ValuesMK) + l EmptyMK -> + API.RangeQuery (LedgerTables l KeysMK) -> + m (LedgerTables l ValuesMK) bsvhRangeRead st rq = Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) 
$ do - Status.withReadAccess vhStatusLock (throwIO (LMDBErrNoValueHandle vhId)) $ do - Trace.traceWith tracer API.BSVHRangeReading - res <- liftIO $ TrH.submitReadOnly trh $ - let dbMK = getLedgerTables dbBackingTables - in LedgerTables <$> rangeRead rq st dbMK - Trace.traceWith tracer API.BSVHRangeRead - pure res + Status.withReadAccess vhStatusLock (throwIO (LMDBErrNoValueHandle vhId)) $ do + Trace.traceWith tracer API.BSVHRangeReading + res <- + liftIO $ + TrH.submitReadOnly trh $ + let dbMK = getLedgerTables dbBackingTables + in LedgerTables <$> rangeRead rq st dbMK + Trace.traceWith tracer API.BSVHRangeRead + pure res bsvhStat :: m API.Statistics bsvhStat = Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do - Status.withReadAccess vhStatusLock (throwIO (LMDBErrNoValueHandle vhId)) $ do - Trace.traceWith tracer API.BSVHStatting - let transaction = do - DbSeqNo{dbsSeq} <- readDbSeqNo dbState - constn <- lttraverse (\(LMDBMK _ dbx) -> K2 <$> LMDB.size dbx) dbBackingTables - let n = ltcollapse constn - pure $ API.Statistics dbsSeq n - res <- liftIO $ TrH.submitReadOnly trh transaction - Trace.traceWith tracer API.BSVHStatted - pure res + Status.withReadAccess vhStatusLock (throwIO (LMDBErrNoValueHandle vhId)) $ do + Trace.traceWith tracer API.BSVHStatting + let transaction = do + DbSeqNo{dbsSeq} <- readDbSeqNo dbState + constn <- lttraverse (\(LMDBMK _ dbx) -> K2 <$> LMDB.size dbx) dbBackingTables + let n = ltcollapse constn + pure $ API.Statistics dbsSeq n + res <- liftIO $ TrH.submitReadOnly trh transaction + Trace.traceWith tracer API.BSVHStatted + pure res bsvhReadAll :: l EmptyMK -> m (LedgerTables l ValuesMK) bsvhReadAll st = Status.withReadAccess dbStatusLock (throwIO LMDBErrClosed) $ do - Status.withReadAccess vhStatusLock (throwIO (LMDBErrNoValueHandle vhId)) $ do - Trace.traceWith tracer API.BSVHRangeReading - res <- liftIO $ TrH.submitReadOnly trh $ - let dbMK = getLedgerTables dbBackingTables - in LedgerTables <$> readAll (Proxy @l) st 
dbMK - Trace.traceWith tracer API.BSVHRangeRead - pure res - - bsvh = API.BackingStoreValueHandle { API.bsvhAtSlot = initSlot - , API.bsvhClose = bsvhClose - , API.bsvhRead = bsvhRead - , API.bsvhReadAll = bsvhReadAll - , API.bsvhRangeRead = bsvhRangeRead - , API.bsvhStat = bsvhStat - } + Status.withReadAccess vhStatusLock (throwIO (LMDBErrNoValueHandle vhId)) $ do + Trace.traceWith tracer API.BSVHRangeReading + res <- + liftIO $ + TrH.submitReadOnly trh $ + let dbMK = getLedgerTables dbBackingTables + in LedgerTables <$> readAll (Proxy @l) st dbMK + Trace.traceWith tracer API.BSVHRangeRead + pure res + + bsvh = + API.BackingStoreValueHandle + { API.bsvhAtSlot = initSlot + , API.bsvhClose = bsvhClose + , API.bsvhRead = bsvhRead + , API.bsvhReadAll = bsvhReadAll + , API.bsvhRangeRead = bsvhRangeRead + , API.bsvhStat = bsvhStat + } IOLike.atomically $ IOLike.modifyTVar' dbOpenHandles (Map.insert vhId cleanup) Trace.traceWith dbTracer API.BSCreatedValueHandle pure bsvh - where - Db { dbEnv - , dbTracer - , dbState - , dbOpenHandles - , dbBackingTables - , dbNextId - , dbStatusLock - } = db + Db + { dbEnv + , dbTracer + , dbState + , dbOpenHandles + , dbBackingTables + , dbNextId + , dbStatusLock + } = db -- | A monadic action used for cleaning up resources. -newtype Cleanup m = Cleanup { runCleanup :: m () } +newtype Cleanup m = Cleanup {runCleanup :: m ()} {------------------------------------------------------------------------------- Errors @@ -658,40 +707,40 @@ newtype Cleanup m = Cleanup { runCleanup :: m () } -- -- __WARNING__: these errors will be thrown in IO as having a corrupt database -- is critical for the functioning of Consensus. -data LMDBErr = - -- | The database state can not be found on-disk. +data LMDBErr + = -- | The database state can not be found on-disk. 
LMDBErrNoDbSeqNo - -- | The sequence number of a @`Db`@ should be monotonically increasing + | -- | The sequence number of a @`Db`@ should be monotonically increasing -- across calls to @`bsWrite`@, since we use @`bsWrite`@ to flush -- /immutable/ changes. That is, we can only flush with a newer sequence -- number because the changes should be /immutable/. Note that this does -- not mean that values can not be changed in the future, only that we -- can not change values in the past. - | LMDBErrNonMonotonicSeq !(WithOrigin SlotNo) !(WithOrigin SlotNo) - -- | The database table that is being initialised is non-empty. - | LMDBErrInitialisingNonEmpty !String - -- | The database that is being initialized already had a DbSeqNo table - | LMDBErrInitialisingAlreadyHasState - -- | Trying to use a non-existing value handle. - | LMDBErrNoValueHandle !Int - -- | Couldn't create a value handle because we couldn't read the sequence + LMDBErrNonMonotonicSeq !(WithOrigin SlotNo) !(WithOrigin SlotNo) + | -- | The database table that is being initialised is non-empty. + LMDBErrInitialisingNonEmpty !String + | -- | The database that is being initialized already had a DbSeqNo table + LMDBErrInitialisingAlreadyHasState + | -- | Trying to use a non-existing value handle. + LMDBErrNoValueHandle !Int + | -- | Couldn't create a value handle because we couldn't read the sequence -- number - | LMDBErrUnableToReadSeqNo - -- | Failed to read a value from a database table. - | LMDBErrBadRead - -- | Failed to read a range of values from a database table. - | LMDBErrBadRangeRead - -- | A database directory should not exist already. - | LMDBErrDirExists !FilePath - -- | A database directory should exist already. - | LMDBErrDirDoesntExist !FilePath - -- | The directory exists but is not an LMDB directory! 
- | LMDBErrDirIsNotLMDB !FilePath - -- | What should be a directory is in fact a file - | LMDBErrNotADir !FS.FsPath - -- | The database has been closed, so all backing store operations should + LMDBErrUnableToReadSeqNo + | -- | Failed to read a value from a database table. + LMDBErrBadRead + | -- | Failed to read a range of values from a database table. + LMDBErrBadRangeRead + | -- | A database directory should not exist already. + LMDBErrDirExists !FilePath + | -- | A database directory should exist already. + LMDBErrDirDoesntExist !FilePath + | -- | The directory exists but is not an LMDB directory! + LMDBErrDirIsNotLMDB !FilePath + | -- | What should be a directory is in fact a file + LMDBErrNotADir !FS.FsPath + | -- | The database has been closed, so all backing store operations should -- throw an error. - | LMDBErrClosed + LMDBErrClosed instance Exception LMDBErr @@ -699,7 +748,8 @@ instance Exception LMDBErr -- include: (i) an indication of the probable cause of the error, and -- (ii) a descriptive error message for the specific @`LMDBErr`@. instance Show LMDBErr where - show dbErr = mconcat + show dbErr = + mconcat [ "[LMDB-ERROR] " , "The LMDB Backing store has encountered a fatal exception. " , "Possibly, the LMDB database is corrupted.\n" @@ -714,9 +764,9 @@ prettyPrintLMDBErr = \case "Can not find the database state on-disk." 
LMDBErrNonMonotonicSeq s1 s2 -> "Trying to write to the database with a non-monotonic sequence number: " - <> showParen True (shows s1) "" - <> " is not <= " - <> showParen True (shows s2) "" + <> showParen True (shows s1) "" + <> " is not <= " + <> showParen True (shows s2) "" LMDBErrInitialisingNonEmpty s -> "The database table that is being initialised is non-empty: " <> s LMDBErrInitialisingAlreadyHasState -> @@ -735,10 +785,10 @@ prettyPrintLMDBErr = \case "Database directory should already exist: " <> show path LMDBErrDirIsNotLMDB path -> "Database directory doesn't contain an LMDB database: " - <> show path - <> "\nPre-UTxO-HD and In-Memory implementations are incompatible \ - \ with the LMDB implementation, please delete your ledger database \ - \ if you want to run with LMDB" + <> show path + <> "\nPre-UTxO-HD and In-Memory implementations are incompatible \ + \ with the LMDB implementation, please delete your ledger database \ + \ if you want to run with LMDB" LMDBErrNotADir path -> "The path " <> show path <> " should be a directory but it is a file instead." LMDBErrClosed -> "The database has been closed." 
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB/Bridge.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB/Bridge.hs index e34bf4c049..c4f901dd68 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB/Bridge.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB/Bridge.hs @@ -1,22 +1,21 @@ {-# LANGUAGE DataKinds #-} {-# LANGUAGE MagicHash #-} {-# LANGUAGE Rank2Types #-} - {-# OPTIONS_GHC -Wno-orphans #-} -{-| Alternatives to LMDB operations that do not rely on @'Serialise'@ instances - - We cannot (easily and without runtime overhead) satisfy the @'Serialise'@ - constraints that the @lmdb-simple@ operations require. We have access to the - codification and decodification functions provided in @'CodecMK'@, thus, we - redefine parts of the internal @LMDB.Simple@ operations here. The - redefinitions are largely analogous to their counterparts, though they thread - through explicit CBOR encoders and decoders. --} -module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Bridge ( - -- * Cursor +-- | Alternatives to LMDB operations that do not rely on @'Serialise'@ instances +-- +-- We cannot (easily and without runtime overhead) satisfy the @'Serialise'@ +-- constraints that the @lmdb-simple@ operations require. We have access to the +-- codification and decodification functions provided in @'CodecMK'@, thus, we +-- redefine parts of the internal @LMDB.Simple@ operations here. The +-- redefinitions are largely analogous to their counterparts, though they thread +-- through explicit CBOR encoders and decoders. 
+module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Bridge + ( -- * Cursor fromCodecMK , runCursorAsTransaction' + -- * Internal: get and put , delete , deleteBS @@ -29,21 +28,21 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Bridge ( , putBS ) where -import Control.Exception (assert) -import Control.Monad ((>=>)) -import qualified Control.Monad as Monad -import Control.Monad.IO.Class (MonadIO (liftIO)) -import qualified Data.ByteString as BS -import Data.MemPack -import Data.MemPack.Buffer -import Database.LMDB.Raw (MDB_val (..), mdb_reserve') -import Database.LMDB.Simple (Database, Mode (ReadWrite), Transaction) -import Database.LMDB.Simple.Cursor (CursorM) -import qualified Database.LMDB.Simple.Cursor as Cursor -import qualified Database.LMDB.Simple.Internal as Internal -import Foreign (Storable (peek, poke), castPtr) -import GHC.Ptr (Ptr (..)) -import Ouroboros.Consensus.Util.IndexedMemPack +import Control.Exception (assert) +import Control.Monad ((>=>)) +import Control.Monad qualified as Monad +import Control.Monad.IO.Class (MonadIO (liftIO)) +import Data.ByteString qualified as BS +import Data.MemPack +import Data.MemPack.Buffer +import Database.LMDB.Raw (MDB_val (..), mdb_reserve') +import Database.LMDB.Simple (Database, Mode (ReadWrite), Transaction) +import Database.LMDB.Simple.Cursor (CursorM) +import Database.LMDB.Simple.Cursor qualified as Cursor +import Database.LMDB.Simple.Internal qualified as Internal +import Foreign (Storable (peek, poke), castPtr) +import GHC.Ptr (Ptr (..)) +import Ouroboros.Consensus.Util.IndexedMemPack instance Buffer MDB_val where bufferByteCount = fromIntegral . 
mv_size @@ -73,21 +72,22 @@ indexedPokeMDBValMemPack idx ptr x = Internal.marshalOutBS (indexedPackByteStrin -------------------------------------------------------------------------------} fromCodecMK :: (IndexedMemPack idx v, MemPack k) => idx -> Cursor.PeekPoke k v -fromCodecMK idx = Cursor.PeekPoke { - Cursor.kPeek = peekMDBValMemPack - , Cursor.vPeek = indexedPeekMDBValMemPack idx - , Cursor.kPoke = pokeMDBValMemPack - , Cursor.vPoke = indexedPokeMDBValMemPack idx - } +fromCodecMK idx = + Cursor.PeekPoke + { Cursor.kPeek = peekMDBValMemPack + , Cursor.vPeek = indexedPeekMDBValMemPack idx + , Cursor.kPoke = pokeMDBValMemPack + , Cursor.vPoke = indexedPokeMDBValMemPack idx + } -- | Wrapper around @'Cursor.runCursorAsTransaction''@ that requires a -- @'CodecMK'@ instead of a @'PeekPoke'@. runCursorAsTransaction' :: - (MemPack k, IndexedMemPack idx v) - => idx - -> CursorM k v mode a - -> Database k v - -> Transaction mode a + (MemPack k, IndexedMemPack idx v) => + idx -> + CursorM k v mode a -> + Database k v -> + Transaction mode a runCursorAsTransaction' idx cm db = Cursor.runCursorAsTransaction' cm db (fromCodecMK idx) @@ -96,54 +96,56 @@ runCursorAsTransaction' idx cm db = -------------------------------------------------------------------------------} get :: - (MemPack k, MemPack v) - => Database k v - -> k - -> Transaction mode (Maybe v) + (MemPack k, MemPack v) => + Database k v -> + k -> + Transaction mode (Maybe v) get db = getBS db . packByteString getBS :: - MemPack v - => Database k v - -> BS.ByteString - -> Transaction mode (Maybe v) -getBS db k = getBS' db k >>= - maybe (return Nothing) (liftIO . fmap Just . pure . unpackError) + MemPack v => + Database k v -> + BS.ByteString -> + Transaction mode (Maybe v) +getBS db k = + getBS' db k + >>= maybe (return Nothing) (liftIO . fmap Just . pure . 
unpackError) indexedGet :: - (IndexedMemPack idx v, MemPack k) - => idx - -> Database k v - -> k - -> Transaction mode (Maybe v) + (IndexedMemPack idx v, MemPack k) => + idx -> + Database k v -> + k -> + Transaction mode (Maybe v) indexedGet idx db = indexedGetBS idx db . packByteString indexedGetBS :: - IndexedMemPack idx v - => idx - -> Database k v - -> BS.ByteString - -> Transaction mode (Maybe v) -indexedGetBS idx db k = getBS' db k >>= - maybe (return Nothing) (liftIO . fmap Just . pure . indexedUnpackError idx) + IndexedMemPack idx v => + idx -> + Database k v -> + BS.ByteString -> + Transaction mode (Maybe v) +indexedGetBS idx db k = + getBS' db k + >>= maybe (return Nothing) (liftIO . fmap Just . pure . indexedUnpackError idx) getBS' :: Database k v -> BS.ByteString -> Transaction mode (Maybe MDB_val) getBS' = Internal.getBS' put :: - (MemPack v, MemPack k) - => Database k v - -> k - -> v - -> Transaction ReadWrite () + (MemPack v, MemPack k) => + Database k v -> + k -> + v -> + Transaction ReadWrite () put db = putBS db . packByteString putBS :: - MemPack v - => Database k v - -> BS.ByteString - -> v - -> Transaction ReadWrite () + MemPack v => + Database k v -> + BS.ByteString -> + v -> + Transaction ReadWrite () putBS (Internal.Db _ dbi) keyBS value = Internal.Txn $ \txn -> Internal.marshalOutBS keyBS $ \kval -> do let valueBS = packByteString value @@ -153,21 +155,21 @@ putBS (Internal.Db _ dbi) keyBS value = Internal.Txn $ \txn -> Monad.void $ assert (len' == sz) $ Internal.copyBS (castPtr ptr, len') valueBS indexedPut :: - (IndexedMemPack idx v, MemPack k) - => idx - -> Database k v - -> k - -> v - -> Transaction ReadWrite () + (IndexedMemPack idx v, MemPack k) => + idx -> + Database k v -> + k -> + v -> + Transaction ReadWrite () indexedPut idx db = indexedPutBS idx db . 
packByteString indexedPutBS :: - IndexedMemPack idx v - => idx - -> Database k v - -> BS.ByteString - -> v - -> Transaction ReadWrite () + IndexedMemPack idx v => + idx -> + Database k v -> + BS.ByteString -> + v -> + Transaction ReadWrite () indexedPutBS idx (Internal.Db _ dbi) keyBS value = Internal.Txn $ \txn -> Internal.marshalOutBS keyBS $ \kval -> do let valueBS = indexedPackByteString idx value @@ -177,10 +179,10 @@ indexedPutBS idx (Internal.Db _ dbi) keyBS value = Internal.Txn $ \txn -> Monad.void $ assert (len' == sz) $ Internal.copyBS (castPtr ptr, len') valueBS delete :: - MemPack k - => Database k v - -> k - -> Transaction ReadWrite Bool + MemPack k => + Database k v -> + k -> + Transaction ReadWrite Bool delete db = deleteBS db . packByteString deleteBS :: Database k v -> BS.ByteString -> Transaction ReadWrite Bool diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB/Status.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB/Status.hs index 43b2408ce4..e7ef4c6cac 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB/Status.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/BackingStore/Impl/LMDB/Status.hs @@ -5,29 +5,30 @@ {-# LANGUAGE TupleSections #-} -- | LMDB resource status with read-append-write locking -module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Status ( - -- * Status +module Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.Impl.LMDB.Status + ( -- * Status Status (..) 
, StatusLock + -- * Locks , new , withReadAccess , withWriteAccess ) where -import Control.RAWLock (RAWLock) -import qualified Control.RAWLock as RAW -import Data.Functor ((<&>)) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Util.IOLike (IOLike) +import Control.RAWLock (RAWLock) +import Control.RAWLock qualified as RAW +import Data.Functor ((<&>)) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Util.IOLike (IOLike) {------------------------------------------------------------------------------- Status -------------------------------------------------------------------------------} -- | A 'RAWLock' for 'Status'. -newtype StatusLock m = StatusLock { getStatusLock :: RAWLock m Status } +newtype StatusLock m = StatusLock {getStatusLock :: RAWLock m Status} -- | Whether a resource is open or closed. -- @@ -35,7 +36,7 @@ newtype StatusLock m = StatusLock { getStatusLock :: RAWLock m Status } -- (ii) each of the LMDB backing store value handles. data Status = Open | Closed deriving stock (Show, Eq, Generic) - deriving anyclass (NoThunks) + deriving anyclass NoThunks {------------------------------------------------------------------------------- Locks @@ -52,14 +53,16 @@ new st = StatusLock <$> RAW.new st -- acquired lock is not of type @'Status' -> ('Status', a)@. The 'Status' is -- known to be 'Open', or an exception would have been thrown. 
withWriteAccess :: - IOLike m - => StatusLock m - -> m a -- ^ Action to perform if closed - -> m (a, Status) -- ^ Action to perform if open, possibly updating the 'Status' - -> m a + IOLike m => + StatusLock m -> + -- | Action to perform if closed + m a -> + -- | Action to perform if open, possibly updating the 'Status' + m (a, Status) -> + m a withWriteAccess lock ifClosed ifOpen = RAW.withWriteAccess (getStatusLock lock) $ \case - Open -> ifOpen + Open -> ifOpen Closed -> ifClosed <&> (,Closed) -- | A variant of 'RAW.withReadAccess' that throws an exception if @'Status' == @@ -69,12 +72,14 @@ withWriteAccess lock ifClosed ifOpen = -- acquired lock is not of type @'Status' -> a@. The 'Status' is known to be -- 'Open', or an exception would have been thrown. withReadAccess :: - IOLike m - => StatusLock m - -> m a -- ^ Action to perform when closed - -> m a -- ^ Action to perform when open - -> m a + IOLike m => + StatusLock m -> + -- | Action to perform when closed + m a -> + -- | Action to perform when open + m a -> + m a withReadAccess lock ifClosed ifOpen = RAW.withReadAccess (getStatusLock lock) $ \case - Open -> ifOpen + Open -> ifOpen Closed -> ifClosed diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/DbChangelog.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/DbChangelog.hs index 2380a201a1..9d82c78639 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/DbChangelog.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/DbChangelog.hs @@ -16,7 +16,6 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | A 'DbChangelog' is the component of the @@ -105,16 +104,21 @@ -- -- Only when flushing, the 'SeqDiffMK' is pruned, by extracting the differences -- in between the last flushed state and the current 
immutable tip. -module Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog ( - -- * The DbChangelog +module Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog + ( -- * The DbChangelog DbChangelog (..) , DbChangelog' + -- * Construction , empty , pruneToImmTipOnly + -- * Updating a @DbChangelog@ + -- ** Applying blocks - -- + + -- + -- | Applying blocks to the 'DbChangelog' will extend it if the result is -- successful. -- @@ -123,8 +127,11 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog ( -- the ledger state](#g:hydratingTheLedgerState) and then finally call the -- ledger, which might throw errors. , reapplyThenPush + -- *** Hydrating the ledger state #hydratingTheLedgerState# - -- + + -- + -- | When trying to get tables at a specific ledger state, we must follow a -- process we call /hydrating the ledger state/. This process consists of 3 steps: -- @@ -137,19 +144,23 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog ( -- 3. Forward those values by applying the differences in the 'DbChangelog' up to -- the requested point. , withKeysReadSets + -- **** Read , KeySetsReader , UnforwardedReadSets (..) , readKeySets , readKeySetsWith , trivialKeySetsReader + -- **** Forward , RewindReadFwdError (..) , forwardTableKeySets , forwardTableKeySets' + -- ** Flushing , DiffsToFlush (..) 
, splitForFlushing + -- * Queries , anchor , current @@ -159,7 +170,9 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog ( , snapshots , tip , volatileStatesBimap + -- * 🧪 Testing + -- ** Internal , extend , immutableTipSlot @@ -169,6 +182,7 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog ( , rollbackN , rollbackToAnchor , rollbackToPoint + -- * Testing , reapplyThenPush' , reapplyThenPushMany' @@ -176,29 +190,29 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog ( , switch' ) where -import Cardano.Ledger.BaseTypes -import Cardano.Slotting.Slot -import Control.Exception as Exn -import Data.Bifunctor (bimap) -import Data.Functor.Identity -import Data.Map.Diff.Strict as AntiDiff (applyDiffForKeys) -import Data.SOP (K, unK) -import Data.SOP.Functors -import Data.Word -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Forker -import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq as DS -import Ouroboros.Consensus.Util (repeatedlyM) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.AnchoredSeq (AnchoredSeq) -import qualified Ouroboros.Network.AnchoredSeq as AS +import Cardano.Ledger.BaseTypes +import Cardano.Slotting.Slot +import Control.Exception as Exn +import Data.Bifunctor (bimap) +import Data.Functor.Identity +import Data.Map.Diff.Strict as AntiDiff (applyDiffForKeys) +import Data.SOP (K, unK) +import Data.SOP.Functors +import Data.Word +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import 
Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Forker +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API +import Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq qualified as DS +import Ouroboros.Consensus.Util (repeatedlyM) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.AnchoredSeq (AnchoredSeq) +import Ouroboros.Network.AnchoredSeq qualified as AS {------------------------------------------------------------------------------- The DbChangelog @@ -252,37 +266,41 @@ import qualified Ouroboros.Network.AnchoredSeq as AS -- -- As said above, this @DbChangelog@ has to be coupled with a @BackingStore@ -- which provides the pointers to the on-disk data. -data DbChangelog l = DbChangelog { - -- | The last flushed ledger state. - -- - -- We need to keep track of this one as this will be the state written to - -- disk when we make a snapshot - changelogLastFlushedState :: !(l EmptyMK) - - -- | The sequence of differences between the last flushed state - -- ('changelogLastFlushedState') and the tip of the volatile sequence - -- ('changelogStates'). - , changelogDiffs :: !(LedgerTables l SeqDiffMK) - -- | The volatile sequence of states. - -- - -- The anchor of this sequence is the immutable tip, so whenever we flush, - -- we should do so up until that point. The length of this sequence will be - -- @k@ except in abnormal circumstances like rollbacks or data corruption. - -- - -- Note that @length 'changelogDiffs' >= length 'changelogStates'@. - , changelogStates :: !(AnchoredSeq - (WithOrigin SlotNo) - (l EmptyMK) - (l EmptyMK)) +data DbChangelog l = DbChangelog + { changelogLastFlushedState :: !(l EmptyMK) + -- ^ The last flushed ledger state. 
+ -- + -- We need to keep track of this one as this will be the state written to + -- disk when we make a snapshot + , changelogDiffs :: !(LedgerTables l SeqDiffMK) + -- ^ The sequence of differences between the last flushed state + -- ('changelogLastFlushedState') and the tip of the volatile sequence + -- ('changelogStates'). + , changelogStates :: + !( AnchoredSeq + (WithOrigin SlotNo) + (l EmptyMK) + (l EmptyMK) + ) + -- ^ The volatile sequence of states. + -- + -- The anchor of this sequence is the immutable tip, so whenever we flush, + -- we should do so up until that point. The length of this sequence will be + -- @k@ except in abnormal circumstances like rollbacks or data corruption. + -- + -- Note that @length 'changelogDiffs' >= length 'changelogStates'@. } - deriving (Generic) - -deriving instance (Eq (TxIn l), Eq (TxOut l), Eq (l EmptyMK)) - => Eq (DbChangelog l) -deriving instance (NoThunks (TxIn l), NoThunks (TxOut l), NoThunks (l EmptyMK)) - => NoThunks (DbChangelog l) -deriving instance (Show (TxIn l), Show (TxOut l), Show (l EmptyMK)) - => Show (DbChangelog l) + deriving Generic + +deriving instance + (Eq (TxIn l), Eq (TxOut l), Eq (l EmptyMK)) => + Eq (DbChangelog l) +deriving instance + (NoThunks (TxIn l), NoThunks (TxOut l), NoThunks (l EmptyMK)) => + NoThunks (DbChangelog l) +deriving instance + (Show (TxIn l), Show (TxOut l), Show (l EmptyMK)) => + Show (DbChangelog l) type DbChangelog' blk = DbChangelog (ExtLedgerState blk) @@ -291,15 +309,17 @@ instance GetTip l => AS.Anchorable (WithOrigin SlotNo) (l EmptyMK) (l EmptyMK) w getAnchorMeasure _ = getTipSlot instance IsLedger l => GetTip (K (DbChangelog l)) where - getTip = castPoint - . getTip - . either id id - . AS.head - . changelogStates - . unK - -type instance HeaderHash (K @MapKind (DbChangelog l)) = - HeaderHash l + getTip = + castPoint + . getTip + . either id id + . AS.head + . changelogStates + . 
unK + +type instance + HeaderHash (K @MapKind (DbChangelog l)) = + HeaderHash l {------------------------------------------------------------------------------- Construction @@ -307,40 +327,43 @@ type instance HeaderHash (K @MapKind (DbChangelog l)) = -- | Creates an empty @DbChangelog@. empty :: - (HasLedgerTables l, GetTip l) - => l EmptyMK -> DbChangelog l + (HasLedgerTables l, GetTip l) => + l EmptyMK -> DbChangelog l empty theAnchor = - DbChangelog { - changelogLastFlushedState = theAnchor - , changelogDiffs = ltpure (SeqDiffMK DS.empty) - , changelogStates = AS.Empty theAnchor - } + DbChangelog + { changelogLastFlushedState = theAnchor + , changelogDiffs = ltpure (SeqDiffMK DS.empty) + , changelogStates = AS.Empty theAnchor + } {------------------------------------------------------------------------------- Mapping changelogs -------------------------------------------------------------------------------} -reapplyBlock :: forall m l blk. (ApplyBlock l blk, Monad m) - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> KeySetsReader m l - -> DbChangelog l - -> m (l DiffMK) +reapplyBlock :: + forall m l blk. + (ApplyBlock l blk, Monad m) => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + KeySetsReader m l -> + DbChangelog l -> + m (l DiffMK) reapplyBlock evs cfg b ksReader db = - withKeysReadSets (current db) ksReader db (getBlockKeySets b) (return . tickThenReapply evs cfg b) + withKeysReadSets (current db) ksReader db (getBlockKeySets b) (return . tickThenReapply evs cfg b) -- | Apply a block on top of the ledger state and extend the DbChangelog with -- the result ledger state. 
-reapplyThenPush :: (Monad m, ApplyBlock l blk) - => LedgerDbCfg l - -> blk - -> KeySetsReader m l - -> DbChangelog l - -> m (DbChangelog l) +reapplyThenPush :: + (Monad m, ApplyBlock l blk) => + LedgerDbCfg l -> + blk -> + KeySetsReader m l -> + DbChangelog l -> + m (DbChangelog l) reapplyThenPush cfg ap ksReader db = - (\current' -> prune (LedgerDbPruneKeeping (ledgerDbCfgSecParam cfg)) $ extend current' db) <$> - reapplyBlock (ledgerDbCfgComputeLedgerEvents cfg) (ledgerDbCfg cfg) ap ksReader db + (\current' -> prune (LedgerDbPruneKeeping (ledgerDbCfgSecParam cfg)) $ extend current' db) + <$> reapplyBlock (ledgerDbCfgComputeLedgerEvents cfg) (ledgerDbCfg cfg) ap ksReader db -- | Prune oldest ledger states until at we have at most @k@ in the DbChangelog, -- excluding the one stored at the anchor. @@ -354,30 +377,31 @@ reapplyThenPush cfg ap ksReader db = -- +--------------+----------------------------+----------------------+ -- | @L0@ | @L2 :> [ L3, L4 ]@ | @[ D1, D2, D3, D4 ]@ | -- +--------------+----------------------------+----------------------+ -prune :: GetTip l - => LedgerDbPrune - -> DbChangelog l - -> DbChangelog l +prune :: + GetTip l => + LedgerDbPrune -> + DbChangelog l -> + DbChangelog l prune (LedgerDbPruneKeeping (SecurityParam k)) dblog = - dblog { changelogStates = vol' } - where - DbChangelog { changelogStates } = dblog + dblog{changelogStates = vol'} + where + DbChangelog{changelogStates} = dblog - nvol = AS.length changelogStates + nvol = AS.length changelogStates - vol' = - if toEnum nvol <= unNonZero k + vol' = + if toEnum nvol <= unNonZero k then changelogStates else snd $ AS.splitAt (nvol - fromEnum (unNonZero k)) changelogStates prune LedgerDbPruneAll dblog = - dblog { changelogStates = vol' } - where - DbChangelog { changelogStates } = dblog + dblog{changelogStates = vol'} + where + DbChangelog{changelogStates} = dblog - nvol = AS.length changelogStates + nvol = AS.length changelogStates - vol' = - snd $ AS.splitAt nvol changelogStates + 
vol' = + snd $ AS.splitAt nvol changelogStates -- NOTE: we must inline 'prune' otherwise we get unexplained thunks in -- 'DbChangelog' and thus a space leak. Alternatively, we could disable the @@ -397,37 +421,38 @@ prune LedgerDbPruneAll dblog = -- +------+----------------------------+----------------------+ -- | @L2@ | @L2 :> [ L3, L4, L5, L6 ]@ | @[ D3, D4, D5, D6 ]@ | -- +------+----------------------------+----------------------+ -extend :: (GetTip l, HasLedgerTables l) - => l DiffMK - -> DbChangelog l - -> DbChangelog l +extend :: + (GetTip l, HasLedgerTables l) => + l DiffMK -> + DbChangelog l -> + DbChangelog l extend newState dblog = - DbChangelog { - changelogLastFlushedState = changelogLastFlushedState - , changelogDiffs = ltliftA2 ext changelogDiffs tablesDiff - , changelogStates = changelogStates AS.:> l' + DbChangelog + { changelogLastFlushedState = changelogLastFlushedState + , changelogDiffs = ltliftA2 ext changelogDiffs tablesDiff + , changelogStates = changelogStates AS.:> l' } - where - slot = case getTipSlot l' of - Origin -> error "impossible! extending a DbChangelog with a state at Origin" - At s -> s - - ext :: - (Ord k, Eq v) - => SeqDiffMK k v - -> DiffMK k v - -> SeqDiffMK k v - ext (SeqDiffMK sq) (DiffMK d) = - SeqDiffMK $ DS.extend sq slot $ DS.toAntiDiff d - - l' = forgetLedgerTables newState - tablesDiff = projectLedgerTables newState - - DbChangelog { - changelogLastFlushedState - , changelogDiffs - , changelogStates - } = dblog + where + slot = case getTipSlot l' of + Origin -> error "impossible! 
extending a DbChangelog with a state at Origin" + At s -> s + + ext :: + (Ord k, Eq v) => + SeqDiffMK k v -> + DiffMK k v -> + SeqDiffMK k v + ext (SeqDiffMK sq) (DiffMK d) = + SeqDiffMK $ DS.extend sq slot $ DS.toAntiDiff d + + l' = forgetLedgerTables newState + tablesDiff = projectLedgerTables newState + + DbChangelog + { changelogLastFlushedState + , changelogDiffs + , changelogStates + } = dblog {------------------------------------------------------------------------------- Read @@ -436,58 +461,59 @@ extend newState dblog = type KeySetsReader m l = l EmptyMK -> LedgerTables l KeysMK -> m (UnforwardedReadSets l) readKeySets :: - IOLike m - => LedgerBackingStore m l - -> KeySetsReader m l + IOLike m => + LedgerBackingStore m l -> + KeySetsReader m l readKeySets backingStore st rew = do - withBsValueHandle backingStore (\bsvh -> readKeySetsWith bsvh st rew) + withBsValueHandle backingStore (\bsvh -> readKeySetsWith bsvh st rew) readKeySetsWith :: - Monad m - => LedgerBackingStoreValueHandle m l - -> KeySetsReader m l + Monad m => + LedgerBackingStoreValueHandle m l -> + KeySetsReader m l readKeySetsWith bsvh st rew = do - values <- bsvhRead bsvh st rew - pure UnforwardedReadSets { - ursSeqNo = bsvhAtSlot bsvh + values <- bsvhRead bsvh st rew + pure + UnforwardedReadSets + { ursSeqNo = bsvhAtSlot bsvh , ursValues = values - , ursKeys = rew - } + , ursKeys = rew + } withKeysReadSets :: - (HasLedgerTables l, Monad m, GetTip l) - => l EmptyMK - -> KeySetsReader m l - -> DbChangelog l - -> LedgerTables l KeysMK - -> (l ValuesMK -> m a) - -> m a + (HasLedgerTables l, Monad m, GetTip l) => + l EmptyMK -> + KeySetsReader m l -> + DbChangelog l -> + LedgerTables l KeysMK -> + (l ValuesMK -> m a) -> + m a withKeysReadSets st ksReader dbch ks f = do - urs <- ksReader st ks - case withHydratedLedgerState urs of - Left err -> - -- We performed the rewind;read;forward sequence in this function. So - -- the forward operation should not fail. 
If this is the case we're in - -- the presence of a problem that we cannot deal with at this level, - -- so we throw an error. - -- - -- When we introduce pipelining, if the forward operation fails it - -- could be because the DB handle was modified by a DB flush that took - -- place when __after__ we read the unforwarded keys-set from disk. - -- However, performing rewind;read;forward with the same __locked__ - -- changelog should always succeed. - error $ "Changelog rewind;read;forward sequence failed, " <> show err - Right res -> res - - where - withHydratedLedgerState urs = - f - . withLedgerTables st + urs <- ksReader st ks + case withHydratedLedgerState urs of + Left err -> + -- We performed the rewind;read;forward sequence in this function. So + -- the forward operation should not fail. If this is the case we're in + -- the presence of a problem that we cannot deal with at this level, + -- so we throw an error. + -- + -- When we introduce pipelining, if the forward operation fails it + -- could be because the DB handle was modified by a DB flush that took + -- place when __after__ we read the unforwarded keys-set from disk. + -- However, performing rewind;read;forward with the same __locked__ + -- changelog should always succeed. + error $ "Changelog rewind;read;forward sequence failed, " <> show err + Right res -> res + where + withHydratedLedgerState urs = + f + . 
withLedgerTables st <$> forwardTableKeySets dbch urs -trivialKeySetsReader :: (Monad m, LedgerTablesAreTrivial l) - => WithOrigin SlotNo - -> KeySetsReader m l +trivialKeySetsReader :: + (Monad m, LedgerTablesAreTrivial l) => + WithOrigin SlotNo -> + KeySetsReader m l trivialKeySetsReader s _st _ = pure $ UnforwardedReadSets s trivialLedgerTables trivialLedgerTables @@ -495,50 +521,53 @@ trivialKeySetsReader s _st _ = Forward -------------------------------------------------------------------------------} -data UnforwardedReadSets l = UnforwardedReadSets { - -- | The Slot number of the anchor of the 'DbChangelog' that was used when - -- rewinding and reading. - ursSeqNo :: !(WithOrigin SlotNo) - -- | The values that were found in the 'BackingStore'. +data UnforwardedReadSets l = UnforwardedReadSets + { ursSeqNo :: !(WithOrigin SlotNo) + -- ^ The Slot number of the anchor of the 'DbChangelog' that was used when + -- rewinding and reading. , ursValues :: !(LedgerTables l ValuesMK) - -- | All the requested keys, being or not present in the 'BackingStore'. - , ursKeys :: !(LedgerTables l KeysMK) + -- ^ The values that were found in the 'BackingStore'. + , ursKeys :: !(LedgerTables l KeysMK) + -- ^ All the requested keys, being or not present in the 'BackingStore'. } -- | The DbChangelog and the BackingStore got out of sync. This is a critical -- error, we cannot recover from this. 
-data RewindReadFwdError = RewindReadFwdError { - rrfBackingStoreAt :: !(WithOrigin SlotNo) - , rrfDbChangelogAt :: !(WithOrigin SlotNo) - } deriving Show +data RewindReadFwdError = RewindReadFwdError + { rrfBackingStoreAt :: !(WithOrigin SlotNo) + , rrfDbChangelogAt :: !(WithOrigin SlotNo) + } + deriving Show forwardTableKeySets' :: - HasLedgerTables l - => WithOrigin SlotNo - -> LedgerTables l SeqDiffMK - -> UnforwardedReadSets l - -> Either RewindReadFwdError - (LedgerTables l ValuesMK) + HasLedgerTables l => + WithOrigin SlotNo -> + LedgerTables l SeqDiffMK -> + UnforwardedReadSets l -> + Either + RewindReadFwdError + (LedgerTables l ValuesMK) forwardTableKeySets' seqNo chdiffs = \(UnforwardedReadSets seqNo' values keys) -> - if seqNo /= seqNo' + if seqNo /= seqNo' then Left $ RewindReadFwdError seqNo' seqNo else Right $ ltliftA3 forward values keys chdiffs - where - forward :: - (Ord k, Eq v) - => ValuesMK k v - -> KeysMK k v - -> SeqDiffMK k v - -> ValuesMK k v - forward (ValuesMK values) (KeysMK keys) (SeqDiffMK diffs) = - ValuesMK $ AntiDiff.applyDiffForKeys values keys (DS.cumulativeDiff diffs) + where + forward :: + (Ord k, Eq v) => + ValuesMK k v -> + KeysMK k v -> + SeqDiffMK k v -> + ValuesMK k v + forward (ValuesMK values) (KeysMK keys) (SeqDiffMK diffs) = + ValuesMK $ AntiDiff.applyDiffForKeys values keys (DS.cumulativeDiff diffs) forwardTableKeySets :: - (HasLedgerTables l, GetTip l) - => DbChangelog l - -> UnforwardedReadSets l - -> Either RewindReadFwdError - (LedgerTables l ValuesMK) + (HasLedgerTables l, GetTip l) => + DbChangelog l -> + UnforwardedReadSets l -> + Either + RewindReadFwdError + (LedgerTables l ValuesMK) forwardTableKeySets dblog = forwardTableKeySets' (getTipSlot $ changelogLastFlushedState dblog) @@ -567,9 +596,10 @@ forwardTableKeySets dblog = -- +--------------+----------------------------+----------------------+ -- | @L0@ | @L4 :> [ ]@ | @[ D1, D2, D3, D4 ]@ | -- 
+--------------+----------------------------+----------------------+ -pruneToImmTipOnly :: GetTip l - => DbChangelog l - -> DbChangelog l +pruneToImmTipOnly :: + GetTip l => + DbChangelog l -> + DbChangelog l pruneToImmTipOnly = prune LedgerDbPruneAll {------------------------------------------------------------------------------- @@ -591,27 +621,28 @@ pruneToImmTipOnly = prune LedgerDbPruneAll -- | @L2@ | @L3 :> [ ] @ | @[ D2, D3 ]@ | -- +--------------+------------------------+--------------------------+ rollbackN :: - (GetTip l, HasLedgerTables l) - => Word64 - -> DbChangelog l - -> Maybe (DbChangelog l) + (GetTip l, HasLedgerTables l) => + Word64 -> + DbChangelog l -> + Maybe (DbChangelog l) rollbackN n dblog - | n <= maxRollback dblog - = Just $ dblog { - changelogDiffs = ltmap truncSeqDiff changelogDiffs - , changelogStates = AS.dropNewest (fromIntegral n) changelogStates - } - | otherwise - = Nothing - where - truncSeqDiff :: (Ord k, Eq v) => SeqDiffMK k v -> SeqDiffMK k v - truncSeqDiff (SeqDiffMK sq) = - SeqDiffMK $ fst $ DS.splitAtFromEnd (fromIntegral n) sq - - DbChangelog { - changelogDiffs - , changelogStates - } = dblog + | n <= maxRollback dblog = + Just $ + dblog + { changelogDiffs = ltmap truncSeqDiff changelogDiffs + , changelogStates = AS.dropNewest (fromIntegral n) changelogStates + } + | otherwise = + Nothing + where + truncSeqDiff :: (Ord k, Eq v) => SeqDiffMK k v -> SeqDiffMK k v + truncSeqDiff (SeqDiffMK sq) = + SeqDiffMK $ fst $ DS.splitAtFromEnd (fromIntegral n) sq + + DbChangelog + { changelogDiffs + , changelogStates + } = dblog {------------------------------------------------------------------------------- Flushing @@ -632,57 +663,60 @@ rollbackN n dblog -- | @L3@ | @L3 :> [ L4, L5, L6 ]@ | @[ D4, D5, D6 ]@ | -- +--------------+------------------------+------------------------------------------+ splitForFlushing :: - forall l. - (GetTip l, HasLedgerTables l) - => DbChangelog l - -> (Maybe (DiffsToFlush l), DbChangelog l) + forall l. 
+ (GetTip l, HasLedgerTables l) => + DbChangelog l -> + (Maybe (DiffsToFlush l), DbChangelog l) splitForFlushing dblog = - if getTipSlot immTip == Origin || ltcollapse (ltmap (K2 . DS.length . getSeqDiffMK) l) == 0 + if getTipSlot immTip == Origin || ltcollapse (ltmap (K2 . DS.length . getSeqDiffMK) l) == 0 then (Nothing, dblog) else (Just ldblog, rdblog) - where - DbChangelog { - changelogLastFlushedState - , changelogDiffs - , changelogStates - } = dblog - - immTip = AS.anchor changelogStates - - splitSeqDiff :: - (Ord k, Eq v) - => SeqDiffMK k v - -> (SeqDiffMK k v, SeqDiffMK k v) - splitSeqDiff (SeqDiffMK sq) = - let numToFlush = DS.length sq - AS.length changelogStates - in bimap (maybe emptyMK SeqDiffMK) SeqDiffMK - $ if numToFlush > 0 - then let (tf, tk) = DS.splitAt numToFlush sq + where + DbChangelog + { changelogLastFlushedState + , changelogDiffs + , changelogStates + } = dblog + + immTip = AS.anchor changelogStates + + splitSeqDiff :: + (Ord k, Eq v) => + SeqDiffMK k v -> + (SeqDiffMK k v, SeqDiffMK k v) + splitSeqDiff (SeqDiffMK sq) = + let numToFlush = DS.length sq - AS.length changelogStates + in bimap (maybe emptyMK SeqDiffMK) SeqDiffMK $ + if numToFlush > 0 + then + let (tf, tk) = DS.splitAt numToFlush sq in (Just tf, tk) - else (Nothing, sq) + else (Nothing, sq) - lr = ltmap (uncurry Pair2 . splitSeqDiff) changelogDiffs - l = ltmap (\(Pair2 x _) -> x) lr - r = ltmap (\(Pair2 _ y) -> y) lr + lr = ltmap (uncurry Pair2 . 
splitSeqDiff) changelogDiffs + l = ltmap (\(Pair2 x _) -> x) lr + r = ltmap (\(Pair2 _ y) -> y) lr - prj :: - (Ord k, Eq v) - => SeqDiffMK k v - -> DiffMK k v - prj (SeqDiffMK sq) = DiffMK (DS.fromAntiDiff $ DS.cumulativeDiff sq) + prj :: + (Ord k, Eq v) => + SeqDiffMK k v -> + DiffMK k v + prj (SeqDiffMK sq) = DiffMK (DS.fromAntiDiff $ DS.cumulativeDiff sq) - ldblog = DiffsToFlush { - toFlushDiffs = ltmap prj l + ldblog = + DiffsToFlush + { toFlushDiffs = ltmap prj l , toFlushState = (changelogLastFlushedState, immTip) - , toFlushSlot = - fromWithOrigin (error "Flushing a DbChangelog at origin should never happen") - $ getTipSlot immTip + , toFlushSlot = + fromWithOrigin (error "Flushing a DbChangelog at origin should never happen") $ + getTipSlot immTip } - rdblog = DbChangelog { - changelogLastFlushedState = immTip - , changelogDiffs = r - , changelogStates = changelogStates + rdblog = + DbChangelog + { changelogLastFlushedState = immTip + , changelogDiffs = r + , changelogStates = changelogStates } {------------------------------------------------------------------------------- @@ -692,16 +726,16 @@ splitForFlushing dblog = -- | The ledger state at the tip of the chain current :: GetTip l => DbChangelog l -> l EmptyMK current = - either id id - . AS.head - . changelogStates + either id id + . AS.head + . changelogStates -- | The ledger state at the anchor of the Volatile chain (i.e. the immutable -- tip). anchor :: DbChangelog l -> l EmptyMK anchor = - AS.anchor - . changelogStates + AS.anchor + . changelogStates -- | All snapshots currently stored by the ledger DB (new to old) -- @@ -709,16 +743,16 @@ anchor = -- return the distance from the tip. snapshots :: DbChangelog l -> [(Word64, l EmptyMK)] snapshots = - zip [0..] + zip [0 ..] . AS.toNewestFirst . changelogStates -- | How many blocks can we currently roll back? maxRollback :: GetTip l => DbChangelog l -> Word64 maxRollback = - fromIntegral - . AS.length - . changelogStates + fromIntegral + . 
AS.length + . changelogStates -- | Reference to the block at the tip of the chain tip :: GetTip l => DbChangelog l -> Point l @@ -727,7 +761,7 @@ tip = castPoint . getTip . current -- | Have we seen at least @k@ blocks? isSaturated :: GetTip l => SecurityParam -> DbChangelog l -> Bool isSaturated (SecurityParam k) db = - maxRollback db >= unNonZero k + maxRollback db >= unNonZero k -- | Get a past ledger state -- @@ -736,64 +770,69 @@ isSaturated (SecurityParam k) db = -- When no ledger state (or anchor) has the given 'Point', 'Nothing' is -- returned. getPastLedgerAt :: - ( HasHeader blk, IsLedger l, HeaderHash l ~ HeaderHash blk - , StandardHash l, HasLedgerTables l - ) - => Point blk - -> DbChangelog l - -> Maybe (l EmptyMK) + ( HasHeader blk + , IsLedger l + , HeaderHash l ~ HeaderHash blk + , StandardHash l + , HasLedgerTables l + ) => + Point blk -> + DbChangelog l -> + Maybe (l EmptyMK) getPastLedgerAt pt db = current <$> rollback pt db -- | Roll back the volatile states up to the specified point. rollbackToPoint :: - ( StandardHash l - , GetTip l - , HasLedgerTables l - ) - => Point l -> DbChangelog l -> Maybe (DbChangelog l) + ( StandardHash l + , GetTip l + , HasLedgerTables l + ) => + Point l -> DbChangelog l -> Maybe (DbChangelog l) rollbackToPoint pt dblog = do - vol' <- - AS.rollback - (pointSlot pt) - ((== pt) . getTip . either id id) - changelogStates - let ndropped = AS.length changelogStates - AS.length vol' - diffs' = ltmap (trunc ndropped) changelogDiffs - Exn.assert (ndropped >= 0) $ pure DbChangelog { - changelogLastFlushedState - , changelogDiffs = diffs' + vol' <- + AS.rollback + (pointSlot pt) + ((== pt) . getTip . 
either id id) + changelogStates + let ndropped = AS.length changelogStates - AS.length vol' + diffs' = ltmap (trunc ndropped) changelogDiffs + Exn.assert (ndropped >= 0) $ + pure + DbChangelog + { changelogLastFlushedState + , changelogDiffs = diffs' , changelogStates = vol' } - where - DbChangelog { - changelogLastFlushedState - , changelogDiffs - , changelogStates - } = dblog + where + DbChangelog + { changelogLastFlushedState + , changelogDiffs + , changelogStates + } = dblog -- | Rollback the volatile states up to the volatile anchor. rollbackToAnchor :: - (GetTip l, HasLedgerTables l) - => DbChangelog l -> DbChangelog l + (GetTip l, HasLedgerTables l) => + DbChangelog l -> DbChangelog l rollbackToAnchor dblog = - DbChangelog { - changelogLastFlushedState - , changelogDiffs = diffs' - , changelogStates = AS.Empty (AS.anchor vol) - } - where - DbChangelog { - changelogLastFlushedState - , changelogDiffs - , changelogStates = vol - } = dblog + DbChangelog + { changelogLastFlushedState + , changelogDiffs = diffs' + , changelogStates = AS.Empty (AS.anchor vol) + } + where + DbChangelog + { changelogLastFlushedState + , changelogDiffs + , changelogStates = vol + } = dblog - ndropped = AS.length vol - diffs' = ltmap (trunc ndropped) changelogDiffs + ndropped = AS.length vol + diffs' = ltmap (trunc ndropped) changelogDiffs trunc :: - (Ord k, Eq v) - => Int -> SeqDiffMK k v -> SeqDiffMK k v + (Ord k, Eq v) => + Int -> SeqDiffMK k v -> SeqDiffMK k v trunc n (SeqDiffMK sq) = SeqDiffMK $ fst $ DS.splitAtFromEnd n sq @@ -804,118 +843,130 @@ trunc n (SeqDiffMK sq) = -- When no ledger state (or anchor) has the given 'Point', 'Nothing' is -- returned. 
rollback :: - ( HasHeader blk, IsLedger l, HeaderHash l ~ HeaderHash blk - , StandardHash l, HasLedgerTables l - ) - => Point blk - -> DbChangelog l - -> Maybe (DbChangelog l) + ( HasHeader blk + , IsLedger l + , HeaderHash l ~ HeaderHash blk + , StandardHash l + , HasLedgerTables l + ) => + Point blk -> + DbChangelog l -> + Maybe (DbChangelog l) rollback pt db - | pt == castPoint (getTip (anchor db)) - = Just $ rollbackToAnchor db - | otherwise - = rollbackToPoint (castPoint pt) db + | pt == castPoint (getTip (anchor db)) = + Just $ rollbackToAnchor db + | otherwise = + rollbackToPoint (castPoint pt) db immutableTipSlot :: - GetTip l - => DbChangelog l -> WithOrigin SlotNo + GetTip l => + DbChangelog l -> WithOrigin SlotNo immutableTipSlot = - getTipSlot + getTipSlot . AS.anchor . changelogStates -- | How many diffs we can flush to the backing store? -- -- NOTE: This will be wrong once we have more than one table. -flushableLength :: (HasLedgerTables l, GetTip l) - => DbChangelog l - -> Word64 +flushableLength :: + (HasLedgerTables l, GetTip l) => + DbChangelog l -> + Word64 flushableLength chlog = - (\x -> x - fromIntegral (AS.length (changelogStates chlog))) - . ltcollapse - . ltmap (K2 . f) - $ changelogDiffs chlog + (\x -> x - fromIntegral (AS.length (changelogStates chlog))) + . ltcollapse + . ltmap (K2 . f) + $ changelogDiffs chlog where - f :: (Ord k, Eq v) - => SeqDiffMK k v - -> Word64 - f (SeqDiffMK sq) = fromIntegral $ DS.length sq + f :: + (Ord k, Eq v) => + SeqDiffMK k v -> + Word64 + f (SeqDiffMK sq) = fromIntegral $ DS.length sq -- | Transform the underlying volatile 'AnchoredSeq' using the given functions. 
volatileStatesBimap :: - AS.Anchorable (WithOrigin SlotNo) a b - => (l EmptyMK -> a) - -> (l EmptyMK -> b) - -> DbChangelog l - -> AS.AnchoredSeq (WithOrigin SlotNo) a b + AS.Anchorable (WithOrigin SlotNo) a b => + (l EmptyMK -> a) -> + (l EmptyMK -> b) -> + DbChangelog l -> + AS.AnchoredSeq (WithOrigin SlotNo) a b volatileStatesBimap f g = - AS.bimap f g + AS.bimap f g . changelogStates {------------------------------------------------------------------------------- Testing -------------------------------------------------------------------------------} -reapplyThenPush' :: ApplyBlock l blk - => LedgerDbCfg l - -> blk - -> KeySetsReader Identity l - -> DbChangelog l - -> DbChangelog l +reapplyThenPush' :: + ApplyBlock l blk => + LedgerDbCfg l -> + blk -> + KeySetsReader Identity l -> + DbChangelog l -> + DbChangelog l reapplyThenPush' cfg b bk = runIdentity . reapplyThenPush cfg b bk -reapplyThenPushMany' :: (ApplyBlock l blk, LedgerTablesAreTrivial l) - => LedgerDbCfg l - -> [blk] - -> DbChangelog l - -> DbChangelog l +reapplyThenPushMany' :: + (ApplyBlock l blk, LedgerTablesAreTrivial l) => + LedgerDbCfg l -> + [blk] -> + DbChangelog l -> + DbChangelog l reapplyThenPushMany' cfg bs dblog = - runIdentity - . reapplyThenPushMany cfg bs (trivialKeySetsReader (getTipSlot (changelogLastFlushedState dblog))) - $ dblog + runIdentity + . 
reapplyThenPushMany cfg bs (trivialKeySetsReader (getTipSlot (changelogLastFlushedState dblog))) + $ dblog reapplyThenPushMany :: - (ApplyBlock l blk, Monad m) - => LedgerDbCfg l - -> [blk] - -> KeySetsReader m l - -> DbChangelog l - -> m (DbChangelog l) + (ApplyBlock l blk, Monad m) => + LedgerDbCfg l -> + [blk] -> + KeySetsReader m l -> + DbChangelog l -> + m (DbChangelog l) reapplyThenPushMany cfg aps ksReader = repeatedlyM (\ap -> reapplyThenPush cfg ap ksReader) aps switch :: - (ApplyBlock l blk, Monad m) - => LedgerDbCfg l - -> Word64 - -> [blk] - -> KeySetsReader m l - -> DbChangelog l - -> m (Either ExceededRollback (DbChangelog l)) + (ApplyBlock l blk, Monad m) => + LedgerDbCfg l -> + Word64 -> + [blk] -> + KeySetsReader m l -> + DbChangelog l -> + m (Either ExceededRollback (DbChangelog l)) switch cfg numRollbacks newBlocks ksReader db = case rollbackN numRollbacks db of - Nothing -> - return $ Left $ ExceededRollback { - rollbackMaximum = maxRollback db - , rollbackRequested = numRollbacks - } - Just db' -> - if null newBlocks + Nothing -> + return $ + Left $ + ExceededRollback + { rollbackMaximum = maxRollback db + , rollbackRequested = numRollbacks + } + Just db' -> + if null newBlocks then pure $ Right db' -- no blocks to apply to ledger state, return current DbChangelog - else Right <$> reapplyThenPushMany - cfg - newBlocks - ksReader - db' - -switch' :: (ApplyBlock l blk, LedgerTablesAreTrivial l) - => LedgerDbCfg l - -> Word64 - -> [blk] - -> DbChangelog l - -> Maybe (DbChangelog l) + else + Right + <$> reapplyThenPushMany + cfg + newBlocks + ksReader + db' + +switch' :: + (ApplyBlock l blk, LedgerTablesAreTrivial l) => + LedgerDbCfg l -> + Word64 -> + [blk] -> + DbChangelog l -> + Maybe (DbChangelog l) switch' cfg n bs db = case runIdentity $ switch cfg n bs (trivialKeySetsReader (getTipSlot (changelogLastFlushedState db))) db of - Left ExceededRollback{} -> Nothing - Right db' -> Just db' + Left ExceededRollback{} -> Nothing + Right db' -> Just 
db' diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/DiffSeq.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/DiffSeq.hs index 18a131f12a..b4755ad690 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/DiffSeq.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/DiffSeq.hs @@ -8,102 +8,101 @@ {-# LANGUAGE MonoLocalBinds #-} {-# LANGUAGE MultiParamTypeClasses #-} -{- | Sequences of diffs for ledger tables. - - These diff sequences are an instantiation of a strict finger tree with root - measures. The tree/sequence itself contains diffs and slot information, while - the root measure is the total sum of all diffs in the sequence. The internal - measure is used to keep track of sequence length and maximum slot numbers. - - The diff datatype that we use forms a cancellative monoid, which allows for - relatively efficient splitting of finger trees with respect to recomputing - measures by means of subtracting diffs using the 'stripPrefix' and - 'stripSuffix' functions that cancellative monoids provide. Namely, if either - the left or right part of the split is small in comparison with the input - sequence, then we can subtract the diffs in the smaller part from the root - measure of the input to (quickly) compute the root measure of the /other/ - part of the split. This is much faster than computing the root measures from - scratch by doing a linear-time pass over the elements of the split parts, or - a logarithmic-time pass over intermediate sums of diffs in case we store - cumulative diffs in the nodes of the finger tree. 
- - === Example of fast splits - - As an analogy, consider this example: we have a sequence of consecutive - integer numbers @xs = [1..n]@ where @n@ is large, and we define the root - measure of the sequence to be the total sum of these numbers, @rmxs = sum - [1..n]@ (we assume @rmxs@ is fully evaluated). Say we split this sequence of - integer numbers at the index @2@, then we get /left/ and /right/ parts of the - split @ys@ and @zs@ respectively. - - > splitAt 2 xs = (ys, zs) = ([1..2], [3..n]) - - How should we compute we the root measure @rmys@ of @ys@? Since @ys@ is - small, we can just compute @rmys = sum [1..2]@. How should we compute the - root measure @rmzs@ of @zs@? We should not compute @rmzs = sum [3..n]@ in - this case, since @n@ is large. Instead, we compute @rmzs = rmxs - rmys@, - which evaluates to its result in time that is linear in the length of @ys@, - in this case @O(1)@. - - === Why not store sums of diffs in the internal measure instead of the root - measure? - - We could also have used the interal measure of the strict finger tree to - store intermediate sums of diffs for all subtrees of the node. The subtree - rooted at the root of the tree would then store the total sum of diffs. - However, we would have now to recompute a possibly logarithmic number of sums - of diffs when we split or extend the sequence. Given that in @consensus@ we - use the total sum of diffs nearly as often as we split or extend the diff - sequence, this proved to be too costly. The single-instance root measure - reduces the overhead of this "caching" of intermediate sums of diffs by only - using a single total sum of diffs, though augmented with 'stripPrefix' and - 'stripSuffix' operations to facilitate computing updated root measures. - - === Root measures in practice - - In consensus, we have the following access pattern. We perform @A@ then @B@ a - total of @n@ times, and then we perform @C(n)@ once. Repeat. 
- - > A = retrieve the total sum of diffs - > B = snoc a diff to the sequence - > C(n) = split n diffs from the left of the sequence - - In Cardano, @n == 100@ by default. That means we split roughly @2^7@ diffs - from a sequence of length roughly @2^11@. At first glance, it seems - counterintuitive that using a root measured finger tree would be quicker than - using a "normal" finger tree, because the former has a split function with a - linear cost. It needs to recompute the sum of @2^7@ diffs, instead of @7@ - diffs if we were to use the normal finger tree split, which has logarithmic - complexity. - - We wrote a benchmark that exercises the root measured finger tree and the - normal finger tree according to the described access pattern. It turned out - that the root measured fingertree was faster. If we look at the complexity of - these operations, then for a normal fingertree: - - > A = O(1) amortised - > B = O(1) amortised - > C(100) = O(log 100) amortised - - For a root measured fingertree: - - > A = O(1) worst-case - > B = O(1) worst-case - > C(100) = O(100) worst-case - - Complexity wise, the root measured finger tree looks worse, but in practice it - performs a bit better than the normal finger tree. It might mean there are - higher constants at play for the computational complexity of the normal finger - tree operations. - - TODO: I wonder if is worth it to keep using the root measured finger tree. The - root measured finger tree sacrifices computational complexity for an algorithm - that works well in pratice for @n=100@; given that the flush frequency is - configurable, using a value other than @100@ might lead to worse performance - than if we were to use a normal finger tree. --} -module Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq ( - -- * Sequences of diffs +-- | Sequences of diffs for ledger tables. +-- +-- These diff sequences are an instantiation of a strict finger tree with root +-- measures. 
The tree/sequence itself contains diffs and slot information, while +-- the root measure is the total sum of all diffs in the sequence. The internal +-- measure is used to keep track of sequence length and maximum slot numbers. +-- +-- The diff datatype that we use forms a cancellative monoid, which allows for +-- relatively efficient splitting of finger trees with respect to recomputing +-- measures by means of subtracting diffs using the 'stripPrefix' and +-- 'stripSuffix' functions that cancellative monoids provide. Namely, if either +-- the left or right part of the split is small in comparison with the input +-- sequence, then we can subtract the diffs in the smaller part from the root +-- measure of the input to (quickly) compute the root measure of the /other/ +-- part of the split. This is much faster than computing the root measures from +-- scratch by doing a linear-time pass over the elements of the split parts, or +-- a logarithmic-time pass over intermediate sums of diffs in case we store +-- cumulative diffs in the nodes of the finger tree. +-- +-- === Example of fast splits +-- +-- As an analogy, consider this example: we have a sequence of consecutive +-- integer numbers @xs = [1..n]@ where @n@ is large, and we define the root +-- measure of the sequence to be the total sum of these numbers, @rmxs = sum +-- [1..n]@ (we assume @rmxs@ is fully evaluated). Say we split this sequence of +-- integer numbers at the index @2@, then we get /left/ and /right/ parts of the +-- split @ys@ and @zs@ respectively. +-- +-- > splitAt 2 xs = (ys, zs) = ([1..2], [3..n]) +-- +-- How should we compute the root measure @rmys@ of @ys@? Since @ys@ is +-- small, we can just compute @rmys = sum [1..2]@. How should we compute the +-- root measure @rmzs@ of @zs@? We should not compute @rmzs = sum [3..n]@ in +-- this case, since @n@ is large.
Instead, we compute @rmzs = rmxs - rmys@, +-- which evaluates to its result in time that is linear in the length of @ys@, +-- in this case @O(1)@. +-- +-- === Why not store sums of diffs in the internal measure instead of the root measure? +-- +-- We could also have used the internal measure of the strict finger tree to +-- store intermediate sums of diffs for all subtrees of the node. The subtree +-- rooted at the root of the tree would then store the total sum of diffs. +-- However, we would now have to recompute a possibly logarithmic number of sums +-- of diffs when we split or extend the sequence. Given that in @consensus@ we +-- use the total sum of diffs nearly as often as we split or extend the diff +-- sequence, this proved to be too costly. The single-instance root measure +-- reduces the overhead of this "caching" of intermediate sums of diffs by only +-- using a single total sum of diffs, though augmented with 'stripPrefix' and +-- 'stripSuffix' operations to facilitate computing updated root measures. +-- +-- === Root measures in practice +-- +-- In consensus, we have the following access pattern. We perform @A@ then @B@ a +-- total of @n@ times, and then we perform @C(n)@ once. Repeat. +-- +-- > A = retrieve the total sum of diffs +-- > B = snoc a diff to the sequence +-- > C(n) = split n diffs from the left of the sequence +-- +-- In Cardano, @n == 100@ by default. That means we split roughly @2^7@ diffs +-- from a sequence of length roughly @2^11@. At first glance, it seems +-- counterintuitive that using a root measured finger tree would be quicker than +-- using a "normal" finger tree, because the former has a split function with a +-- linear cost. It needs to recompute the sum of @2^7@ diffs, instead of @7@ +-- diffs if we were to use the normal finger tree split, which has logarithmic +-- complexity.
+-- +-- We wrote a benchmark that exercises the root measured finger tree and the +-- normal finger tree according to the described access pattern. It turned out +-- that the root measured finger tree was faster. If we look at the complexity of +-- these operations, then for a normal finger tree: +-- +-- > A = O(1) amortised +-- > B = O(1) amortised +-- > C(100) = O(log 100) amortised +-- +-- For a root measured finger tree: +-- +-- > A = O(1) worst-case +-- > B = O(1) worst-case +-- > C(100) = O(100) worst-case +-- +-- Complexity-wise, the root measured finger tree looks worse, but in practice it +-- performs a bit better than the normal finger tree. It might mean there are +-- higher constants at play for the computational complexity of the normal finger +-- tree operations. +-- +-- TODO: I wonder if it is worth it to keep using the root measured finger tree. The +-- root measured finger tree sacrifices computational complexity for an algorithm +-- that works well in practice for @n=100@; given that the flush frequency is +-- configurable, using a value other than @100@ might lead to worse performance +-- than if we were to use a normal finger tree. +module Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq + ( -- * Sequences of diffs
+ -- * Short-hands for type-class constraints , SM + -- * Queries , cumulativeDiff , length , numDeletes , numInserts + -- * Construction , append , empty , extend + -- * Slots , maxSlot , minSlot + -- * Splitting , split , splitAt , splitAtFromEnd , splitAtSlot + -- * Conversion , fromAntiDiff , toAntiDiff ) where -import qualified Cardano.Slotting.Slot as Slot -import qualified Control.Exception as Exn -import Data.Bifunctor (Bifunctor (bimap)) -import Data.FingerTree.RootMeasured.Strict hiding (split) -import qualified Data.FingerTree.RootMeasured.Strict as RMFT (splitSized) -import qualified Data.Map.Diff.Strict.Internal as Anti -import qualified Data.Map.Strict as Map -import Data.Maybe.Strict -import Data.Monoid (Sum (..)) -import Data.Semigroup (Max (..), Min (..)) -import Data.Semigroup.Cancellative -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import NoThunks.Class (NoThunks) -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff -import Ouroboros.Consensus.Util.Orphans () -import Prelude hiding (length, splitAt) +import Cardano.Slotting.Slot qualified as Slot +import Control.Exception qualified as Exn +import Data.Bifunctor (Bifunctor (bimap)) +import Data.FingerTree.RootMeasured.Strict hiding (split) +import Data.FingerTree.RootMeasured.Strict qualified as RMFT (splitSized) +import Data.Map.Diff.Strict.Internal qualified as Anti +import Data.Map.Strict qualified as Map +import Data.Maybe.Strict +import Data.Monoid (Sum (..)) +import Data.Semigroup (Max (..), Min (..)) +import Data.Semigroup.Cancellative +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff +import Ouroboros.Consensus.Util.Orphans () +import Prelude hiding (length, splitAt) {------------------------------------------------------------------------------- Sequences of diffs @@ -162,60 +167,60 @@ import Prelude hiding (length, splitAt) -- INVARIANT: The 
slot numbers of consecutive elements should be strictly -- increasing. Manipulating the underlying @'StrictFingerTree'@ directly may -- break this invariant. -newtype DiffSeq k v = - UnsafeDiffSeq - (StrictFingerTree - (RootMeasure k v) - (InternalMeasure k v) - (Element k v) - ) +newtype DiffSeq k v + = UnsafeDiffSeq + ( StrictFingerTree + (RootMeasure k v) + (InternalMeasure k v) + (Element k v) + ) deriving stock (Generic, Show, Eq) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -- The @'SlotNo'@ is not included in the root measure, since it is not a -- cancellative monoid. -data RootMeasure k v = RootMeasure { - -- | Cumulative length - rmLength :: {-# UNPACK #-} !Length - -- | Cumulative diff - , rmDiff :: !(Anti.Diff k v) - -- | Cumulative number of inserts +data RootMeasure k v = RootMeasure + { rmLength :: {-# UNPACK #-} !Length + -- ^ Cumulative length + , rmDiff :: !(Anti.Diff k v) + -- ^ Cumulative diff , rmNumInserts :: !(Sum Int) - -- | Cumulative number of deletes + -- ^ Cumulative number of inserts , rmNumDeletes :: !(Sum Int) + -- ^ Cumulative number of deletes } deriving stock (Generic, Show, Eq, Functor) - deriving anyclass (NoThunks) - -data InternalMeasure k v = InternalMeasure { - -- | Cumulative length - imLength :: {-# UNPACK #-} !Length - -- | Leftmost slot number (or lower bound) - -- - -- Empty diff sequences have no rightmost slot number, so in that case - -- @imSlotNo == Nothing@. - , imSlotNoL :: !(StrictMaybe SlotNoLB) - -- | Rightmost slot number (or upper bound) - -- - -- Empty diff sequences have no leftmost slot number, so in that case - -- @imSlotNo == Nothing@. 
- , imSlotNoR :: !(StrictMaybe SlotNoUB) + deriving anyclass NoThunks + +data InternalMeasure k v = InternalMeasure + { imLength :: {-# UNPACK #-} !Length + -- ^ Cumulative length + , imSlotNoL :: !(StrictMaybe SlotNoLB) + -- ^ Leftmost slot number (or lower bound) + -- + -- Empty diff sequences have no leftmost slot number, so in that case + -- @imSlotNo == Nothing@. + , imSlotNoR :: !(StrictMaybe SlotNoUB) + -- ^ Rightmost slot number (or upper bound) + -- + -- Empty diff sequences have no rightmost slot number, so in that case + -- @imSlotNo == Nothing@. } deriving stock (Generic, Show, Eq, Functor) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -data Element k v = Element { - elSlotNo :: {-# UNPACK #-} !Slot.SlotNo - , elDiff :: !(Anti.Diff k v) +data Element k v = Element + { elSlotNo :: {-# UNPACK #-} !Slot.SlotNo + , elDiff :: !(Anti.Diff k v) } deriving stock (Generic, Show, Eq, Functor) - deriving anyclass (NoThunks) + deriving anyclass NoThunks -- | Length of a sequence of differences. -newtype Length = Length { unLength :: Int } +newtype Length = Length {unLength :: Int} deriving stock (Generic, Show, Eq, Ord) - deriving newtype (Num) - deriving anyclass (NoThunks) + deriving newtype Num + deriving anyclass NoThunks deriving Semigroup via Sum Int deriving Monoid via Sum Int deriving (LeftReductive, RightReductive) via Sum Int @@ -224,16 +229,16 @@ newtype Length = Length { unLength :: Int } -- | An upper bound on slot numbers. newtype SlotNoUB = SlotNoUB {unSlotNoUB :: Slot.SlotNo} deriving stock (Generic, Show, Eq, Ord) - deriving newtype (Num) - deriving anyclass (NoThunks) + deriving newtype Num + deriving anyclass NoThunks deriving Semigroup via Max Slot.SlotNo deriving Monoid via Max Slot.SlotNo -- | A lower bound on slot numbers.
newtype SlotNoLB = SlotNoLB {unSlotNoLB :: Slot.SlotNo} deriving stock (Generic, Show, Eq, Ord) - deriving newtype (Num) - deriving anyclass (NoThunks) + deriving newtype Num + deriving anyclass NoThunks deriving Semigroup via Min Slot.SlotNo deriving Monoid via Min Slot.SlotNo @@ -247,24 +252,30 @@ noSlotBoundsIntersect (SlotNoUB sl1) (SlotNoLB sl2) = sl1 <= sl2 instance (Ord k, Eq v) => RootMeasured (RootMeasure k v) (Element k v) where measureRoot (Element _ d) = - RootMeasure 1 d (Sum $ Anti.numInserts d) (Sum $ Anti.numDeletes d) + RootMeasure 1 d (Sum $ Anti.numInserts d) (Sum $ Anti.numDeletes d) instance (Ord k, Eq v) => Semigroup (RootMeasure k v) where RootMeasure len1 d1 n1 m1 <> RootMeasure len2 d2 n2 m2 = - RootMeasure (len1 <> len2) (d1 <> d2) (n1 <> n2) (m1 <> m2) + RootMeasure (len1 <> len2) (d1 <> d2) (n1 <> n2) (m1 <> m2) instance (Ord k, Eq v) => Monoid (RootMeasure k v) where mempty = RootMeasure mempty mempty mempty mempty instance (Ord k, Eq v) => LeftReductive (RootMeasure k v) where stripPrefix (RootMeasure len1 d1 n1 m1) (RootMeasure len2 d2 n2 m2) = - RootMeasure <$> stripPrefix len1 len2 <*> stripPrefix d1 d2 - <*> stripPrefix n1 n2 <*> stripPrefix m1 m2 + RootMeasure + <$> stripPrefix len1 len2 + <*> stripPrefix d1 d2 + <*> stripPrefix n1 n2 + <*> stripPrefix m1 m2 instance (Ord k, Eq v) => RightReductive (RootMeasure k v) where stripSuffix (RootMeasure len1 d1 n1 m1) (RootMeasure len2 d2 n2 m2) = - RootMeasure <$> stripSuffix len1 len2 <*> stripSuffix d1 d2 - <*> stripSuffix n1 n2 <*> stripSuffix m1 m2 + RootMeasure + <$> stripSuffix len1 len2 + <*> stripSuffix d1 d2 + <*> stripSuffix n1 n2 + <*> stripSuffix m1 m2 instance (Ord k, Eq v) => LeftCancellative (RootMeasure k v) instance (Ord k, Eq v) => RightCancellative (RootMeasure k v) @@ -274,11 +285,12 @@ instance (Ord k, Eq v) => RightCancellative (RootMeasure k v) -------------------------------------------------------------------------------} instance Measured (InternalMeasure k v) 
(Element k v) where - measure (Element sl _d) = InternalMeasure { - imLength = 1 - , imSlotNoL = SJust $ SlotNoLB sl - , imSlotNoR = SJust $ SlotNoUB sl - } + measure (Element sl _d) = + InternalMeasure + { imLength = 1 + , imSlotNoL = SJust $ SlotNoLB sl + , imSlotNoR = SJust $ SlotNoUB sl + } instance Semigroup (InternalMeasure k v) where InternalMeasure len1 sl1L sl1R <> InternalMeasure len2 sl2L sl2R = @@ -300,24 +312,24 @@ type SM k v = -------------------------------------------------------------------------------} cumulativeDiff :: - SM k v - => DiffSeq k v - -> Anti.Diff k v + SM k v => + DiffSeq k v -> + Anti.Diff k v cumulativeDiff (UnsafeDiffSeq ft) = rmDiff $ measureRoot ft length :: - SM k v - => DiffSeq k v -> Int + SM k v => + DiffSeq k v -> Int length (UnsafeDiffSeq ft) = unLength . rmLength $ measureRoot ft numInserts :: - SM k v - => DiffSeq k v -> Sum Int + SM k v => + DiffSeq k v -> Sum Int numInserts (UnsafeDiffSeq ft) = rmNumInserts $ measureRoot ft numDeletes :: - SM k v - => DiffSeq k v -> Sum Int + SM k v => + DiffSeq k v -> Sum Int numDeletes (UnsafeDiffSeq ft) = rmNumDeletes $ measureRoot ft {------------------------------------------------------------------------------- @@ -325,35 +337,35 @@ numDeletes (UnsafeDiffSeq ft) = rmNumDeletes $ measureRoot ft -------------------------------------------------------------------------------} extend :: - SM k v - => DiffSeq k v - -> Slot.SlotNo - -> Anti.Diff k v - -> DiffSeq k v + SM k v => + DiffSeq k v -> + Slot.SlotNo -> + Anti.Diff k v -> + DiffSeq k v extend (UnsafeDiffSeq ft) sl d = - Exn.assert invariant $ UnsafeDiffSeq $ ft |> Element sl d - where - invariant = case imSlotNoR $ measure ft of - SNothing -> True - SJust slR -> noSlotBoundsIntersect slR (SlotNoLB sl) + Exn.assert invariant $ UnsafeDiffSeq $ ft |> Element sl d + where + invariant = case imSlotNoR $ measure ft of + SNothing -> True + SJust slR -> noSlotBoundsIntersect slR (SlotNoLB sl) append :: - (Ord k, Eq v) - => DiffSeq k v 
- -> DiffSeq k v - -> DiffSeq k v + (Ord k, Eq v) => + DiffSeq k v -> + DiffSeq k v -> + DiffSeq k v append (UnsafeDiffSeq ft1) (UnsafeDiffSeq ft2) = - Exn.assert invariant $ UnsafeDiffSeq (ft1 <> ft2) - where - sl1R = imSlotNoR $ measure ft1 - sl2L = imSlotNoL $ measure ft2 - invariant = case noSlotBoundsIntersect <$> sl1R <*> sl2L of - SNothing -> True - SJust v -> v + Exn.assert invariant $ UnsafeDiffSeq (ft1 <> ft2) + where + sl1R = imSlotNoR $ measure ft1 + sl2L = imSlotNoL $ measure ft2 + invariant = case noSlotBoundsIntersect <$> sl1R <*> sl2L of + SNothing -> True + SJust v -> v empty :: - (Ord k, Eq v) - => DiffSeq k v + (Ord k, Eq v) => + DiffSeq k v empty = UnsafeDiffSeq mempty {------------------------------------------------------------------------------- @@ -361,15 +373,15 @@ empty = UnsafeDiffSeq mempty -------------------------------------------------------------------------------} maxSlot :: - SM k v - => DiffSeq k v - -> StrictMaybe Slot.SlotNo + SM k v => + DiffSeq k v -> + StrictMaybe Slot.SlotNo maxSlot (UnsafeDiffSeq ft) = unSlotNoUB <$> imSlotNoR (measure ft) minSlot :: - SM k v - => DiffSeq k v - -> StrictMaybe Slot.SlotNo + SM k v => + DiffSeq k v -> + StrictMaybe Slot.SlotNo minSlot (UnsafeDiffSeq ft) = unSlotNoLB <$> imSlotNoL (measure ft) {------------------------------------------------------------------------------- @@ -380,40 +392,40 @@ instance Sized (InternalMeasure k v) where size = unLength . imLength splitAtSlot :: - SM k v - => Slot.SlotNo - -> DiffSeq k v - -> (DiffSeq k v, DiffSeq k v) + SM k v => + Slot.SlotNo -> + DiffSeq k v -> + (DiffSeq k v, DiffSeq k v) splitAtSlot slot = - split (strictMaybe False (slot <=) . fmap unSlotNoUB . imSlotNoR) + split (strictMaybe False (slot <=) . fmap unSlotNoUB . 
imSlotNoR) split :: - SM k v - => (InternalMeasure k v -> Bool) - -> DiffSeq k v - -> (DiffSeq k v, DiffSeq k v) -split p (UnsafeDiffSeq ft) = bimap UnsafeDiffSeq UnsafeDiffSeq $ + SM k v => + (InternalMeasure k v -> Bool) -> + DiffSeq k v -> + (DiffSeq k v, DiffSeq k v) +split p (UnsafeDiffSeq ft) = + bimap UnsafeDiffSeq UnsafeDiffSeq $ RMFT.splitSized p ft splitAt :: - SM k v - => Int - -> DiffSeq k v - -> (DiffSeq k v, DiffSeq k v) -splitAt n = split ((Length n<) . imLength) + SM k v => + Int -> + DiffSeq k v -> + (DiffSeq k v, DiffSeq k v) +splitAt n = split ((Length n <) . imLength) splitAtFromEnd :: - (SM k v, HasCallStack) - => Int - -> DiffSeq k v - -> (DiffSeq k v, DiffSeq k v) + (SM k v, HasCallStack) => + Int -> + DiffSeq k v -> + (DiffSeq k v, DiffSeq k v) splitAtFromEnd n dseq = - if n <= len + if n <= len then splitAt (len - n) dseq else error $ "Can't split a seq of length " ++ show len ++ " from end at " ++ show n - where - len = length dseq - + where + len = length dseq {------------------------------------------------------------------------------- From-to diffs @@ -421,12 +433,12 @@ splitAtFromEnd n dseq = fromAntiDiff :: Anti.Diff k v -> Diff.Diff k v fromAntiDiff (Anti.Diff d) = Diff.Diff (Map.map (f . 
Anti.last) d) - where - f (Anti.Insert v) = Diff.Insert v - f Anti.Delete{} = Diff.Delete + where + f (Anti.Insert v) = Diff.Insert v + f Anti.Delete{} = Diff.Delete toAntiDiff :: Diff.Diff k v -> Anti.Diff k v toAntiDiff (Diff.Diff d) = Anti.Diff (Map.map f d) - where - f (Diff.Insert v) = Anti.singletonInsert v - f Diff.Delete = Anti.singletonDelete + where + f (Diff.Insert v) = Anti.singletonInsert v + f Diff.Delete = Anti.singletonDelete diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Forker.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Forker.hs index d09fb5d6d0..660934eed6 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Forker.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Forker.hs @@ -7,8 +7,8 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Storage.LedgerDB.V1.Forker ( - ForkerEnv (..) +module Ouroboros.Consensus.Storage.LedgerDB.V1.Forker + ( ForkerEnv (..) 
, closeForkerEnv , implForkerCommit , implForkerGetLedgerState @@ -18,290 +18,306 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.Forker ( , implForkerReadTables ) where -import Control.Tracer -import qualified Data.Map.Strict as Map -import Data.Semigroup -import qualified Data.Set as Set -import GHC.Generics (Generic) -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsProtocol -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Args -import Ouroboros.Consensus.Storage.LedgerDB.Forker as Forker -import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API as BackingStore -import Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog -import Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq (numDeletes, - numInserts) -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq as DS -import Ouroboros.Consensus.Util.IOLike +import Control.Tracer +import Data.Map.Strict qualified as Map +import Data.Semigroup +import Data.Set qualified as Set +import GHC.Generics (Generic) +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Args +import Ouroboros.Consensus.Storage.LedgerDB.Forker as Forker +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore.API qualified as BackingStore +import Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog +import Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq + ( numDeletes + , 
numInserts + ) +import Ouroboros.Consensus.Storage.LedgerDB.V1.DiffSeq qualified as DS +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- Forkers -------------------------------------------------------------------------------} -data ForkerEnv m l blk = ForkerEnv { - -- | Local, consistent view of backing store - foeBackingStoreValueHandle :: !(LedgerBackingStoreValueHandle m l) - -- | In memory db changelog, 'foeBackingStoreValueHandle' must refer to - -- the anchor of this changelog. - , foeChangelog :: !(StrictTVar m (DbChangelog l)) - -- | The same 'StrictTVar' as 'ldbChangelog' - -- - -- The anchor of this and 'foeChangelog' might get out of sync if diffs are - -- flushed, but 'forkerCommit' will take care of this. - , foeSwitchVar :: !(StrictTVar m (DbChangelog l)) - -- | Config - , foeSecurityParam :: !SecurityParam - -- | Config - , foeTracer :: !(Tracer m TraceForkerEvent) +data ForkerEnv m l blk = ForkerEnv + { foeBackingStoreValueHandle :: !(LedgerBackingStoreValueHandle m l) + -- ^ Local, consistent view of backing store + , foeChangelog :: !(StrictTVar m (DbChangelog l)) + -- ^ In memory db changelog, 'foeBackingStoreValueHandle' must refer to + -- the anchor of this changelog. + , foeSwitchVar :: !(StrictTVar m (DbChangelog l)) + -- ^ The same 'StrictTVar' as 'ldbChangelog' + -- + -- The anchor of this and 'foeChangelog' might get out of sync if diffs are + -- flushed, but 'forkerCommit' will take care of this. 
+ , foeSecurityParam :: !SecurityParam + -- ^ Config + , foeTracer :: !(Tracer m TraceForkerEvent) + -- ^ Config } deriving Generic -deriving instance ( IOLike m - , LedgerSupportsProtocol blk - , NoThunks (l EmptyMK) - , NoThunks (TxIn l) - , NoThunks (TxOut l) - ) => NoThunks (ForkerEnv m l blk) +deriving instance + ( IOLike m + , LedgerSupportsProtocol blk + , NoThunks (l EmptyMK) + , NoThunks (TxIn l) + , NoThunks (TxOut l) + ) => + NoThunks (ForkerEnv m l blk) {------------------------------------------------------------------------------- Close -------------------------------------------------------------------------------} - closeForkerEnv :: ForkerEnv m l blk -> m () -closeForkerEnv ForkerEnv { foeBackingStoreValueHandle } = bsvhClose foeBackingStoreValueHandle +closeForkerEnv ForkerEnv{foeBackingStoreValueHandle} = bsvhClose foeBackingStoreValueHandle {------------------------------------------------------------------------------- Acquiring consistent views -------------------------------------------------------------------------------} implForkerReadTables :: - (MonadSTM m, HasLedgerTables l, GetTip l) - => ForkerEnv m l blk - -> LedgerTables l KeysMK - -> m (LedgerTables l ValuesMK) + (MonadSTM m, HasLedgerTables l, GetTip l) => + ForkerEnv m l blk -> + LedgerTables l KeysMK -> + m (LedgerTables l ValuesMK) implForkerReadTables env ks = do - traceWith (foeTracer env) ForkerReadTablesStart - chlog <- readTVarIO (foeChangelog env) - unfwd <- readKeySetsWith lvh (changelogLastFlushedState chlog) ks - case forwardTableKeySets chlog unfwd of - Left _err -> error "impossible!" - Right vs -> do - traceWith (foeTracer env) ForkerReadTablesEnd - pure vs - where - lvh = foeBackingStoreValueHandle env + traceWith (foeTracer env) ForkerReadTablesStart + chlog <- readTVarIO (foeChangelog env) + unfwd <- readKeySetsWith lvh (changelogLastFlushedState chlog) ks + case forwardTableKeySets chlog unfwd of + Left _err -> error "impossible!" 
+ Right vs -> do + traceWith (foeTracer env) ForkerReadTablesEnd + pure vs + where + lvh = foeBackingStoreValueHandle env implForkerRangeReadTables :: - (MonadSTM m, HasLedgerTables l) - => QueryBatchSize - -> ForkerEnv m l blk - -> RangeQueryPrevious l - -> m (LedgerTables l ValuesMK) + (MonadSTM m, HasLedgerTables l) => + QueryBatchSize -> + ForkerEnv m l blk -> + RangeQueryPrevious l -> + m (LedgerTables l ValuesMK) implForkerRangeReadTables qbs env rq0 = do - traceWith (foeTracer env) ForkerRangeReadTablesStart - ldb <- readTVarIO $ foeChangelog env - let -- Get the differences without the keys that are greater or equal - -- than the maximum previously seen key. - diffs = - maybe - id - (ltliftA2 doDropLTE) - (BackingStore.rqPrev rq) - $ ltmap prj - $ changelogDiffs ldb - -- (1) Ensure that we never delete everything read from disk (ie if - -- our result is non-empty then it contains something read from - -- disk, as we only get an empty result if we reached the end of - -- the table). - -- - -- (2) Also, read one additional key, which we will not include in - -- the result but need in order to know which in-memory - -- insertions to include. - maxDeletes = ltcollapse $ ltmap (K2 . numDeletesDiffMK) diffs - nrequested = 1 + max (BackingStore.rqCount rq) (1 + maxDeletes) + traceWith (foeTracer env) ForkerRangeReadTablesStart + ldb <- readTVarIO $ foeChangelog env + let + -- Get the differences without the keys that are greater or equal + -- than the maximum previously seen key. + diffs = + maybe + id + (ltliftA2 doDropLTE) + (BackingStore.rqPrev rq) + $ ltmap prj + $ changelogDiffs ldb + -- (1) Ensure that we never delete everything read from disk (ie if + -- our result is non-empty then it contains something read from + -- disk, as we only get an empty result if we reached the end of + -- the table). + -- + -- (2) Also, read one additional key, which we will not include in + -- the result but need in order to know which in-memory + -- insertions to include. 
+ maxDeletes = ltcollapse $ ltmap (K2 . numDeletesDiffMK) diffs + nrequested = 1 + max (BackingStore.rqCount rq) (1 + maxDeletes) - let st = changelogLastFlushedState ldb - values <- BackingStore.bsvhRangeRead lvh st (rq{BackingStore.rqCount = nrequested}) - traceWith (foeTracer env) ForkerRangeReadTablesEnd - pure $ ltliftA2 (doFixupReadResult nrequested) diffs values - where - lvh = foeBackingStoreValueHandle env + let st = changelogLastFlushedState ldb + values <- BackingStore.bsvhRangeRead lvh st (rq{BackingStore.rqCount = nrequested}) + traceWith (foeTracer env) ForkerRangeReadTablesEnd + pure $ ltliftA2 (doFixupReadResult nrequested) diffs values + where + lvh = foeBackingStoreValueHandle env - rq = BackingStore.RangeQuery rq1 (fromIntegral $ defaultQueryBatchSize qbs) + rq = BackingStore.RangeQuery rq1 (fromIntegral $ defaultQueryBatchSize qbs) - rq1 = case rq0 of - NoPreviousQuery -> Nothing - PreviousQueryWasFinal -> Just (LedgerTables $ KeysMK Set.empty) - PreviousQueryWasUpTo k -> Just (LedgerTables $ KeysMK $ Set.singleton k) + rq1 = case rq0 of + NoPreviousQuery -> Nothing + PreviousQueryWasFinal -> Just (LedgerTables $ KeysMK Set.empty) + PreviousQueryWasUpTo k -> Just (LedgerTables $ KeysMK $ Set.singleton k) - prj :: - (Ord k, Eq v) - => SeqDiffMK k v - -> DiffMK k v - prj (SeqDiffMK sq) = DiffMK (DS.fromAntiDiff $ DS.cumulativeDiff sq) + prj :: + (Ord k, Eq v) => + SeqDiffMK k v -> + DiffMK k v + prj (SeqDiffMK sq) = DiffMK (DS.fromAntiDiff $ DS.cumulativeDiff sq) - -- Remove all diff elements that are <= to the greatest given key - doDropLTE :: - Ord k - => KeysMK k v - -> DiffMK k v - -> DiffMK k v - doDropLTE (KeysMK ks) (DiffMK ds) = - DiffMK - $ case Set.lookupMax ks of - Nothing -> ds - Just k -> Diff.filterWithKeyOnly (> k) ds + -- Remove all diff elements that are <= to the greatest given key + doDropLTE :: + Ord k => + KeysMK k v -> + DiffMK k v -> + DiffMK k v + doDropLTE (KeysMK ks) (DiffMK ds) = + DiffMK $ + case Set.lookupMax ks of + 
Nothing -> ds + Just k -> Diff.filterWithKeyOnly (> k) ds - -- NOTE: this is counting the deletions wrt disk because deletions of values - -- created along the diffs will have been collapsed to the empty diff. - numDeletesDiffMK :: DiffMK k v -> Int - numDeletesDiffMK (DiffMK d) = - getSum $ Diff.foldMapDelta (Sum . oneIfDel) d - where - oneIfDel x = case x of - Diff.Delete -> 1 - Diff.Insert _ -> 0 + -- NOTE: this is counting the deletions wrt disk because deletions of values + -- created along the diffs will have been collapsed to the empty diff. + numDeletesDiffMK :: DiffMK k v -> Int + numDeletesDiffMK (DiffMK d) = + getSum $ Diff.foldMapDelta (Sum . oneIfDel) d + where + oneIfDel x = case x of + Diff.Delete -> 1 + Diff.Insert _ -> 0 - -- INVARIANT: nrequested > 0 - -- - -- (1) if we reached the end of the store, then simply yield the given diff - -- applied to the given values - -- (2) otherwise, the readset must be non-empty, since 'rqCount' is positive - -- (3) remove the greatest read key - -- (4) remove all diff elements that are >= the greatest read key - -- (5) apply the remaining diff - -- (6) (the greatest read key will be the first fetched if the yield of this - -- result is next passed as 'rqPrev') - -- - -- Note that if the in-memory changelog contains the greatest key, then - -- we'll return that in step (1) above, in which case the next passed - -- 'rqPrev' will contain it, which will cause 'doDropLTE' to result in an - -- empty diff, which will result in an entirely empty range query result, - -- which is the termination case. - doFixupReadResult :: - Ord k - => Int - -- ^ Number of requested keys from the backing store. - -> DiffMK k v - -- ^ Differences that will be applied to the values read from the backing - -- store. - -> ValuesMK k v - -- ^ Values read from the backing store. The number of values read should - -- be at most @nrequested@. 
- -> ValuesMK k v - doFixupReadResult - nrequested - (DiffMK ds) - (ValuesMK vs) = - let includingAllKeys = - Diff.applyDiff vs ds - definitelyNoMoreToFetch = Map.size vs < nrequested - in - ValuesMK - $ case Map.maxViewWithKey vs of - Nothing -> - if definitelyNoMoreToFetch - then includingAllKeys - else error $ "Size of values " <> show (Map.size vs) <> ", nrequested " <> show nrequested - Just ((k, _v), vs') -> - if definitelyNoMoreToFetch then includingAllKeys else - Diff.applyDiff - vs' - (Diff.filterWithKeyOnly (< k) ds) + -- INVARIANT: nrequested > 0 + -- + -- (1) if we reached the end of the store, then simply yield the given diff + -- applied to the given values + -- (2) otherwise, the readset must be non-empty, since 'rqCount' is positive + -- (3) remove the greatest read key + -- (4) remove all diff elements that are >= the greatest read key + -- (5) apply the remaining diff + -- (6) (the greatest read key will be the first fetched if the yield of this + -- result is next passed as 'rqPrev') + -- + -- Note that if the in-memory changelog contains the greatest key, then + -- we'll return that in step (1) above, in which case the next passed + -- 'rqPrev' will contain it, which will cause 'doDropLTE' to result in an + -- empty diff, which will result in an entirely empty range query result, + -- which is the termination case. + doFixupReadResult :: + Ord k => + Int -> + -- \^ Number of requested keys from the backing store. + DiffMK k v -> + -- \^ Differences that will be applied to the values read from the backing + -- store. + ValuesMK k v -> + -- \^ Values read from the backing store. The number of values read should + -- be at most @nrequested@. 
+ ValuesMK k v + doFixupReadResult + nrequested + (DiffMK ds) + (ValuesMK vs) = + let includingAllKeys = + Diff.applyDiff vs ds + definitelyNoMoreToFetch = Map.size vs < nrequested + in ValuesMK $ + case Map.maxViewWithKey vs of + Nothing -> + if definitelyNoMoreToFetch + then includingAllKeys + else error $ "Size of values " <> show (Map.size vs) <> ", nrequested " <> show nrequested + Just ((k, _v), vs') -> + if definitelyNoMoreToFetch + then includingAllKeys + else + Diff.applyDiff + vs' + (Diff.filterWithKeyOnly (< k) ds) implForkerGetLedgerState :: - (MonadSTM m, GetTip l) - => ForkerEnv m l blk - -> STM m (l EmptyMK) + (MonadSTM m, GetTip l) => + ForkerEnv m l blk -> + STM m (l EmptyMK) implForkerGetLedgerState env = current <$> readTVar (foeChangelog env) -- | Obtain statistics for a combination of backing store value handle and -- changelog. implForkerReadStatistics :: - (MonadSTM m, HasLedgerTables l, GetTip l) - => ForkerEnv m l blk - -> m (Maybe Forker.Statistics) + (MonadSTM m, HasLedgerTables l, GetTip l) => + ForkerEnv m l blk -> + m (Maybe Forker.Statistics) implForkerReadStatistics env = do - traceWith (foeTracer env) ForkerReadStatistics - dblog <- readTVarIO (foeChangelog env) + traceWith (foeTracer env) ForkerReadStatistics + dblog <- readTVarIO (foeChangelog env) - let seqNo = getTipSlot $ changelogLastFlushedState dblog - BackingStore.Statistics{sequenceNumber = seqNo', numEntries = n} <- bsvhStat lbsvh - if seqNo /= seqNo' then - error $ "Statistics seqNo (" - ++ show seqNo' - ++ ") is different from the seqNo in the DbChangelog last flushed field (" - ++ show seqNo - ++ ")" + let seqNo = getTipSlot $ changelogLastFlushedState dblog + BackingStore.Statistics{sequenceNumber = seqNo', numEntries = n} <- bsvhStat lbsvh + if seqNo /= seqNo' + then + error $ + "Statistics seqNo (" + ++ show seqNo' + ++ ") is different from the seqNo in the DbChangelog last flushed field (" + ++ show seqNo + ++ ")" else do let diffs = changelogDiffs dblog - nInserts 
= ltcollapse - $ ltmap (K2 . getSum . numInserts . getSeqDiffMK) - diffs - nDeletes = ltcollapse - $ ltmap (K2 . getSum . numDeletes . getSeqDiffMK) - diffs - pure . Just $ Forker.Statistics { - ledgerTableSize = n + nInserts - nDeletes - } - where - lbsvh = foeBackingStoreValueHandle env + nInserts = + ltcollapse $ + ltmap + (K2 . getSum . numInserts . getSeqDiffMK) + diffs + nDeletes = + ltcollapse $ + ltmap + (K2 . getSum . numDeletes . getSeqDiffMK) + diffs + pure . Just $ + Forker.Statistics + { ledgerTableSize = n + nInserts - nDeletes + } + where + lbsvh = foeBackingStoreValueHandle env implForkerPush :: - (MonadSTM m, GetTip l, HasLedgerTables l) - => ForkerEnv m l blk - -> l DiffMK - -> m () + (MonadSTM m, GetTip l, HasLedgerTables l) => + ForkerEnv m l blk -> + l DiffMK -> + m () implForkerPush env newState = do traceWith (foeTracer env) ForkerPushStart atomically $ do chlog <- readTVar (foeChangelog env) - let chlog' = prune (LedgerDbPruneKeeping (foeSecurityParam env)) - $ extend newState chlog + let chlog' = + prune (LedgerDbPruneKeeping (foeSecurityParam env)) $ + extend newState chlog writeTVar (foeChangelog env) chlog' traceWith (foeTracer env) ForkerPushEnd implForkerCommit :: - (MonadSTM m, GetTip l, HasLedgerTables l) - => ForkerEnv m l blk - -> STM m () + (MonadSTM m, GetTip l, HasLedgerTables l) => + ForkerEnv m l blk -> + STM m () implForkerCommit env = do dblog <- readTVar (foeChangelog env) modifyTVar (foeSwitchVar env) $ \orig -> -- We don't need to distinguish Origin from 0 because Origin has no diffs -- (SeqDiffMK is a fingertree measured by slot so there cannot be an entry -- for Origin). - let s = fromWithOrigin 0 - . pointSlot - . 
getTip - $ changelogLastFlushedState orig - in DbChangelog { - changelogLastFlushedState = changelogLastFlushedState orig - , changelogStates = changelogStates dblog - , changelogDiffs = - ltliftA2 (doPrune s) (changelogDiffs orig) (changelogDiffs dblog) - } - where - -- Prune the diffs from the forker's log that have already been flushed to - -- disk - doPrune :: (Ord k, Eq v) - => SlotNo - -> SeqDiffMK k v - -> SeqDiffMK k v - -> SeqDiffMK k v - doPrune s (SeqDiffMK prunedSeq) (SeqDiffMK extendedSeq) = SeqDiffMK $ + let s = + fromWithOrigin 0 + . pointSlot + . getTip + $ changelogLastFlushedState orig + in DbChangelog + { changelogLastFlushedState = changelogLastFlushedState orig + , changelogStates = changelogStates dblog + , changelogDiffs = + ltliftA2 (doPrune s) (changelogDiffs orig) (changelogDiffs dblog) + } + where + -- Prune the diffs from the forker's log that have already been flushed to + -- disk + doPrune :: + (Ord k, Eq v) => + SlotNo -> + SeqDiffMK k v -> + SeqDiffMK k v -> + SeqDiffMK k v + doPrune s (SeqDiffMK prunedSeq) (SeqDiffMK extendedSeq) = + SeqDiffMK $ -- This is acceptable because Byron has no tables, so combination of Byron -- block and EBB diffs will always result in the empty ledger table hence -- it doesn't matter. 
if DS.minSlot prunedSeq == DS.minSlot extendedSeq - then extendedSeq - else snd $ DS.splitAtSlot s extendedSeq + then extendedSeq + else snd $ DS.splitAtSlot s extendedSeq diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Lock.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Lock.hs index 7665d020e6..88c8e7aad8 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Lock.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Lock.hs @@ -6,8 +6,8 @@ {-# LANGUAGE TupleSections #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Storage.LedgerDB.V1.Lock ( - -- * LedgerDB lock +module Ouroboros.Consensus.Storage.LedgerDB.V1.Lock + ( -- * LedgerDB lock LedgerDBLock , ReadLocked , WriteLocked @@ -19,9 +19,9 @@ module Ouroboros.Consensus.Storage.LedgerDB.V1.Lock ( , writeLocked ) where -import qualified Control.RAWLock as Lock -import NoThunks.Class -import Ouroboros.Consensus.Util.IOLike +import Control.RAWLock qualified as Lock +import NoThunks.Class +import Ouroboros.Consensus.Util.IOLike {------------------------------------------------------------------------------- LedgerDB lock @@ -57,7 +57,7 @@ mkLedgerDBLock :: IOLike m => m (LedgerDBLock m) mkLedgerDBLock = LedgerDBLock <$> Lock.new () -- | An action in @m@ that has to hold the read lock. See @withReadLock@. -newtype ReadLocked m a = ReadLocked { runReadLocked :: m a } +newtype ReadLocked m a = ReadLocked {runReadLocked :: m a} deriving newtype (Functor, Applicative, Monad) -- | Enforce that the action has to be run while holding the read lock. 
@@ -67,10 +67,10 @@ readLocked = ReadLocked -- | Acquire the ledger DB read lock and hold it while performing an action withReadLock :: IOLike m => LedgerDBLock m -> ReadLocked m a -> m a withReadLock (LedgerDBLock lock) m = - Lock.withReadAccess lock (\() -> runReadLocked m) + Lock.withReadAccess lock (\() -> runReadLocked m) -- | An action in @m@ that has to hold the write lock. See @withWriteLock@. -newtype WriteLocked m a = WriteLocked { runWriteLocked :: m a } +newtype WriteLocked m a = WriteLocked {runWriteLocked :: m a} deriving newtype (Functor, Applicative, Monad) -- | Used safely, for example, during initialization. @@ -84,4 +84,4 @@ writeLocked = WriteLocked -- | Acquire the ledger DB write lock and hold it while performing an action withWriteLock :: IOLike m => LedgerDBLock m -> WriteLocked m a -> m a withWriteLock (LedgerDBLock lock) m = - Lock.withWriteAccess lock (\() -> (,()) <$> runWriteLocked m) + Lock.withWriteAccess lock (\() -> (,()) <$> runWriteLocked m) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Snapshots.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Snapshots.hs index 2fd9a55e5b..e56e3d2543 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Snapshots.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V1/Snapshots.hs @@ -2,165 +2,164 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE ScopedTypeVariables #-} -{- | Snapshots - - Snapshotting a ledger state means saving a copy of the in-memory part of the - ledger state serialized as a file on disk, as well as flushing differences on - the ledger tables between the last snapshotted ledger state and the one that - we are snapshotting now and making a copy of that resulting on-disk state. 
- - == Startup - - During initialisation, the goal is to construct an initial 'LedgerDB' where - the sequence of in-memory states is empty except for the ledger state at the - anchor or the 'DbChangelog', which has to correspond to the immutable tip, - i.e., the block at the tip of the Immutable DB. - - Ideally, we can construct the initial 'LedgerDB' from a snapshot of the ledger - state that we wrote to disk. Remember that updating a ledger state with a - block is not invertible: we can apply a block to a ledger state, but we cannot - /unapply/ a block to a ledger state. This means the snapshot has to be at - least as old as the anchor. A snapshot matching the anchor can be used as is. - A snapshot older than the anchor can be used after reapplying the necessary - blocks. A snapshot newer than the anchor can /not/ be used, as we cannot - unapply blocks to get the ledger state corresponding to the anchor. This is - the reason why we only take snapshots of an immutable ledger state, i.e., of - the anchor of the 'DbChangelog' (or older). - - On startup, the node will: - - 1. Find the latest snapshot which will be a directory inside @\\/\@ named as the slot number of the ledger state that - was snapshotted: - - > - > ├── volatile - > ├── immutable - > └── ledger - > ├── - > │   ├── tables - > │   └── state - > ├── ... - > └── - >    ├── tables - >    └── state - - The @state@ file is a serialization of the in-memory part of the ledger - state with empty tables (i.e. a @ExtLedgerState blk EmptyMK@), and - @tables@ will store a persistent copy of the 'LedgerTable's. Depending on - the 'BackingStore' implementation in use, this might be a file or a - directory. - - 2. Depending on the snapshots found, there are two possibilities: - - - If there is no snapshot to load, create a new @'BackingStore'@ with the - contents of the Genesis ledger tables and finish. - - - If there is a snapshot found, then deserialize (with @DecodeDisk@) the - @state@ file. 
If deserialization fails, delete this snapshot and start - again. If the snapshot is newer than the immutable tip, delete this - snapshot and start again. - - In case we found an snapshot, we will overwrite (either literally - overwriting it or using some feature from the specific backend used) the - @BackingStore@ tables with the contents from @tables@ from said snapshot - as it was left in whatever state it was when the node shut down. - - 3. The deserialized ledger state and tables will be then used as the initial - ledger state for the ledger database. - - 4. Reapply the immutable blocks after the snapshot to obtain the ledger state - at the immutable tip. The blocks to reapply are streamed from the Immutable - DB, using an iterator. - - Note that we can /reapply/ these blocks, which is quicker than applying - them, as the existence of a snapshot newer than these blocks, and them - being in the immutable DB proves (unless the on-disk database has been - tampered with, but this is not an attack we intend to protect against, as - this would mean the machine has already been compromised) that they have - been successfully applied in the past. - - Reading and applying blocks is costly. Typically, very few blocks need to be - reapplied in practice. However, there is one exception: when the serialisation - format of the ledger state changes, all snapshots (written using the old - serialisation format) will fail to deserialise, and all blocks starting from - genesis will have to be reapplied. - - At this point, the node carries a @DbChangelog@ that is initialized and ready - to be applied blocks on the volatile database. - - == Taking snapshots during normal operation - - Snapshots are taken by the @'copyAndSnapshotRunner'@ when the disk policy - dictates to do so. Whenever the chain grows past @k@ blocks, said runner will - copy the blocks which are more than @k@ blocks from the tip (i.e. 
the ones - that must be considered immutable) to the immutable database and then: - - 1. Every time we have processed a specific amount of blocks since the last - flush (set by default to 100), perform a flush of differences in the - 'DbChangelog' up to the immutable db tip. - - 2. If dictated by the disk policy, flush immediately all the differences up to - the immutable db tip and serialize (using 'EncodeDisk') the DbChangelog - in-memory ledger states anchor (@ExtLedgerState blk EmptyMK@). - - A directory is created named after the slot number of the ledger state - being snapshotted, and the serialization from above is written into the - @\/state@ file and the @BackingStore@ tables are copied into - the @\/tables@ file. - - 3. There is a maximum number of snapshots that should exist in the disk at any - time, dictated by the @SnapshotPolicy@, so if needed, we will trim out old - snapshots. - - == Flush during startup and snapshot at the end of startup - - Due to the nature of the V1 LedgerDB having to carry around all the - differences between the last snapshotted state and the current tip, there is a - need to flush when replaying the chain as otherwise, for example on a replay - from genesis to the tip, we would carry millions of differences in memory. - - Because of this, when we are replaying blocks we will flush regularly. As the - last snapshot that was taken lives in a @\/tables@ file, there is - no risk of destroying it (overwriting tables at another earlier snapshot) by - flushing. Only when we finish replaying blocks and start the background - threads (and specifically the @copyAndSnapshotRunner@), we will take a - snapshot of the current immutable database anchor as described above. 
- --------------------------------------------------------------------------------} - -module Ouroboros.Consensus.Storage.LedgerDB.V1.Snapshots ( - loadSnapshot +-- | Snapshots +-- +-- Snapshotting a ledger state means saving a copy of the in-memory part of the +-- ledger state serialized as a file on disk, as well as flushing differences on +-- the ledger tables between the last snapshotted ledger state and the one that +-- we are snapshotting now and making a copy of that resulting on-disk state. +-- +-- == Startup +-- +-- During initialisation, the goal is to construct an initial 'LedgerDB' where +-- the sequence of in-memory states is empty except for the ledger state at the +-- anchor or the 'DbChangelog', which has to correspond to the immutable tip, +-- i.e., the block at the tip of the Immutable DB. +-- +-- Ideally, we can construct the initial 'LedgerDB' from a snapshot of the ledger +-- state that we wrote to disk. Remember that updating a ledger state with a +-- block is not invertible: we can apply a block to a ledger state, but we cannot +-- /unapply/ a block to a ledger state. This means the snapshot has to be at +-- least as old as the anchor. A snapshot matching the anchor can be used as is. +-- A snapshot older than the anchor can be used after reapplying the necessary +-- blocks. A snapshot newer than the anchor can /not/ be used, as we cannot +-- unapply blocks to get the ledger state corresponding to the anchor. This is +-- the reason why we only take snapshots of an immutable ledger state, i.e., of +-- the anchor of the 'DbChangelog' (or older). +-- +-- On startup, the node will: +-- +-- 1. Find the latest snapshot which will be a directory inside @\\/\@ named as the slot number of the ledger state that +-- was snapshotted: +-- +-- > +-- > ├── volatile +-- > ├── immutable +-- > └── ledger +-- > ├── +-- > │   ├── tables +-- > │   └── state +-- > ├── ... 
+-- > └── +-- >    ├── tables +-- >    └── state +-- +-- The @state@ file is a serialization of the in-memory part of the ledger +-- state with empty tables (i.e. a @ExtLedgerState blk EmptyMK@), and +-- @tables@ will store a persistent copy of the 'LedgerTable's. Depending on +-- the 'BackingStore' implementation in use, this might be a file or a +-- directory. +-- +-- 2. Depending on the snapshots found, there are two possibilities: +-- +-- - If there is no snapshot to load, create a new @'BackingStore'@ with the +-- contents of the Genesis ledger tables and finish. +-- +-- - If there is a snapshot found, then deserialize (with @DecodeDisk@) the +-- @state@ file. If deserialization fails, delete this snapshot and start +-- again. If the snapshot is newer than the immutable tip, delete this +-- snapshot and start again. +-- +-- In case we found a snapshot, we will overwrite (either literally +-- overwriting it or using some feature from the specific backend used) the +-- @BackingStore@ tables with the contents from @tables@ from said snapshot +-- as it was left in whatever state it was when the node shut down. +-- +-- 3. The deserialized ledger state and tables will then be used as the initial +-- ledger state for the ledger database. +-- +-- 4. Reapply the immutable blocks after the snapshot to obtain the ledger state +-- at the immutable tip. The blocks to reapply are streamed from the Immutable +-- DB, using an iterator. +-- +-- Note that we can /reapply/ these blocks, which is quicker than applying +-- them, as the existence of a snapshot newer than these blocks, and them +-- being in the immutable DB proves (unless the on-disk database has been +-- tampered with, but this is not an attack we intend to protect against, as +-- this would mean the machine has already been compromised) that they have +-- been successfully applied in the past. +-- +-- Reading and applying blocks is costly. Typically, very few blocks need to be +-- reapplied in practice. 
However, there is one exception: when the serialisation +-- format of the ledger state changes, all snapshots (written using the old +-- serialisation format) will fail to deserialise, and all blocks starting from +-- genesis will have to be reapplied. +-- +-- At this point, the node carries a @DbChangelog@ that is initialized and ready +-- to have blocks from the volatile database applied to it. +-- +-- == Taking snapshots during normal operation +-- +-- Snapshots are taken by the @'copyAndSnapshotRunner'@ when the disk policy +-- dictates to do so. Whenever the chain grows past @k@ blocks, said runner will +-- copy the blocks which are more than @k@ blocks from the tip (i.e. the ones +-- that must be considered immutable) to the immutable database and then: +-- +-- 1. Every time we have processed a specific number of blocks since the last +-- flush (set by default to 100), perform a flush of differences in the +-- 'DbChangelog' up to the immutable db tip. +-- +-- 2. If dictated by the disk policy, flush immediately all the differences up to +-- the immutable db tip and serialize (using 'EncodeDisk') the DbChangelog +-- in-memory ledger states anchor (@ExtLedgerState blk EmptyMK@). +-- +-- A directory is created named after the slot number of the ledger state +-- being snapshotted, and the serialization from above is written into the +-- @\/state@ file and the @BackingStore@ tables are copied into +-- the @\/tables@ file. +-- +-- 3. There is a maximum number of snapshots that should exist on the disk at any +-- time, dictated by the @SnapshotPolicy@, so if needed, we will trim out old +-- snapshots. 
+-- +-- == Flush during startup and snapshot at the end of startup +-- +-- Due to the nature of the V1 LedgerDB having to carry around all the +-- differences between the last snapshotted state and the current tip, there is a +-- need to flush when replaying the chain as otherwise, for example on a replay +-- from genesis to the tip, we would carry millions of differences in memory. +-- +-- Because of this, when we are replaying blocks we will flush regularly. As the +-- last snapshot that was taken lives in a @\/tables@ file, there is +-- no risk of destroying it (overwriting tables at another earlier snapshot) by +-- flushing. Only when we finish replaying blocks and start the background +-- threads (and specifically the @copyAndSnapshotRunner@), we will take a +-- snapshot of the current immutable database anchor as described above. +-- +-- ------------------------------------------------------------------------------ +module Ouroboros.Consensus.Storage.LedgerDB.V1.Snapshots + ( loadSnapshot , takeSnapshot + -- * Testing , snapshotToStatePath , snapshotToTablesPath ) where -import Codec.CBOR.Encoding -import Codec.Serialise -import qualified Control.Monad as Monad -import Control.Monad.Except -import qualified Control.Monad.Trans as Trans (lift) -import Control.Tracer -import Data.Functor.Contravariant ((>$<)) -import qualified Data.List as List -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import Ouroboros.Consensus.Storage.LedgerDB.V1.Args -import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore -import qualified Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore as V1 -import Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog -import Ouroboros.Consensus.Storage.LedgerDB.V1.Lock -import Ouroboros.Consensus.Util.Args 
(Complete) -import Ouroboros.Consensus.Util.Enclose -import Ouroboros.Consensus.Util.IOLike -import System.FS.API - +import Codec.CBOR.Encoding +import Codec.Serialise +import Control.Monad qualified as Monad +import Control.Monad.Except +import Control.Monad.Trans qualified as Trans (lift) +import Control.Tracer +import Data.Functor.Contravariant ((>$<)) +import Data.List qualified as List +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.LedgerDB.V1.Args +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore +import Ouroboros.Consensus.Storage.LedgerDB.V1.BackingStore qualified as V1 +import Ouroboros.Consensus.Storage.LedgerDB.V1.DbChangelog +import Ouroboros.Consensus.Storage.LedgerDB.V1.Lock +import Ouroboros.Consensus.Util.Args (Complete) +import Ouroboros.Consensus.Util.Enclose +import Ouroboros.Consensus.Util.IOLike +import System.FS.API -- | Try to take a snapshot of the /oldest ledger state/ in the ledger DB -- @@ -185,50 +184,55 @@ takeSnapshot :: ( IOLike m , LedgerDbSerialiseConstraints blk , LedgerSupportsProtocol blk - ) - => StrictTVar m (DbChangelog' blk) - -> CodecConfig blk - -> Tracer m (TraceSnapshotEvent blk) - -> SnapshotsFS m - -> BackingStore' m blk - -> Maybe String -- ^ Override for snapshot numbering - -> ReadLocked m (Maybe (DiskSnapshot, RealPoint blk)) + ) => + StrictTVar m (DbChangelog' blk) -> + CodecConfig blk -> + Tracer m (TraceSnapshotEvent blk) -> + SnapshotsFS m -> + BackingStore' m blk -> + -- | Override for snapshot numbering + Maybe String -> + ReadLocked m (Maybe (DiskSnapshot, RealPoint blk)) takeSnapshot ldbvar ccfg tracer (SnapshotsFS hasFS') backingStore suffix = readLocked $ do - state <- changelogLastFlushedState <$> readTVarIO ldbvar - case 
pointToWithOriginRealPoint (castPoint (getTip state)) of - Origin -> - return Nothing - NotOrigin t -> do - let number = unSlotNo (realPointSlot t) - snapshot = DiskSnapshot number suffix - diskSnapshots <- listSnapshots hasFS' - if List.any (== DiskSnapshot number suffix) diskSnapshots then + state <- changelogLastFlushedState <$> readTVarIO ldbvar + case pointToWithOriginRealPoint (castPoint (getTip state)) of + Origin -> + return Nothing + NotOrigin t -> do + let number = unSlotNo (realPointSlot t) + snapshot = DiskSnapshot number suffix + diskSnapshots <- listSnapshots hasFS' + if List.any (== DiskSnapshot number suffix) diskSnapshots + then return Nothing else do - encloseTimedWith (TookSnapshot snapshot t >$< tracer) - $ writeSnapshot hasFS' backingStore (encodeDiskExtLedgerState ccfg) snapshot state + encloseTimedWith (TookSnapshot snapshot t >$< tracer) $ + writeSnapshot hasFS' backingStore (encodeDiskExtLedgerState ccfg) snapshot state return $ Just (snapshot, t) -- | Write snapshot to disk writeSnapshot :: - MonadThrow m - => SomeHasFS m - -> BackingStore' m blk - -> (ExtLedgerState blk EmptyMK -> Encoding) - -> DiskSnapshot - -> ExtLedgerState blk EmptyMK - -> m () + MonadThrow m => + SomeHasFS m -> + BackingStore' m blk -> + (ExtLedgerState blk EmptyMK -> Encoding) -> + DiskSnapshot -> + ExtLedgerState blk EmptyMK -> + m () writeSnapshot fs@(SomeHasFS hasFS) backingStore encLedger snapshot cs = do - createDirectory hasFS (snapshotToDirPath snapshot) - crc <- writeExtLedgerState fs encLedger (snapshotToStatePath snapshot) cs - writeSnapshotMetadata fs snapshot SnapshotMetadata + createDirectory hasFS (snapshotToDirPath snapshot) + crc <- writeExtLedgerState fs encLedger (snapshotToStatePath snapshot) cs + writeSnapshotMetadata + fs + snapshot + SnapshotMetadata { snapshotBackend = bsSnapshotBackend backingStore , snapshotChecksum = crc } - bsCopy - backingStore - cs - (snapshotToTablesPath snapshot) + bsCopy + backingStore + cs + (snapshotToTablesPath 
snapshot) -- | The path within the LedgerDB's filesystem to the file that contains the -- snapshot's serialized ledger state @@ -245,33 +249,40 @@ snapshotToTablesPath = mkFsPath . (\x -> [x, "tables"]) . snapshotToDirName -- Fail on data corruption, i.e. when the checksum of the read data differs -- from the one tracked by @'DiskSnapshot'@. loadSnapshot :: - forall m blk. - ( IOLike m - , LedgerDbSerialiseConstraints blk - , LedgerSupportsProtocol blk - , LedgerSupportsLedgerDB blk - ) - => Tracer m V1.FlavorImplSpecificTrace - -> Complete BackingStoreArgs m - -> CodecConfig blk - -> SnapshotsFS m - -> DiskSnapshot - -> ExceptT (SnapshotFailure blk) m ((DbChangelog' blk, LedgerBackingStore m (ExtLedgerState blk)), RealPoint blk) + forall m blk. + ( IOLike m + , LedgerDbSerialiseConstraints blk + , LedgerSupportsProtocol blk + , LedgerSupportsLedgerDB blk + ) => + Tracer m V1.FlavorImplSpecificTrace -> + Complete BackingStoreArgs m -> + CodecConfig blk -> + SnapshotsFS m -> + DiskSnapshot -> + ExceptT + (SnapshotFailure blk) + m + ((DbChangelog' blk, LedgerBackingStore m (ExtLedgerState blk)), RealPoint blk) loadSnapshot tracer bss ccfg fs@(SnapshotsFS fs') s = do - (extLedgerSt, checksumAsRead) <- withExceptT (InitFailureRead . ReadSnapshotFailed) $ - readExtLedgerState fs' (decodeDiskExtLedgerState ccfg) decode (snapshotToStatePath s) - snapshotMeta <- withExceptT (InitFailureRead . ReadMetadataError (snapshotToMetadataPath s)) $ - loadSnapshotMetadata fs' s + (extLedgerSt, checksumAsRead) <- + withExceptT (InitFailureRead . ReadSnapshotFailed) $ + readExtLedgerState fs' (decodeDiskExtLedgerState ccfg) decode (snapshotToStatePath s) + snapshotMeta <- + withExceptT (InitFailureRead . 
ReadMetadataError (snapshotToMetadataPath s)) $ + loadSnapshotMetadata fs' s case (bss, snapshotBackend snapshotMeta) of (InMemoryBackingStoreArgs, UTxOHDMemSnapshot) -> pure () (LMDBBackingStoreArgs _ _ _, UTxOHDLMDBSnapshot) -> pure () (_, _) -> throwError $ InitFailureRead $ ReadMetadataError (snapshotToMetadataPath s) MetadataBackendMismatch Monad.when (checksumAsRead /= snapshotChecksum snapshotMeta) $ - throwError $ InitFailureRead $ ReadSnapshotDataCorruption + throwError $ + InitFailureRead $ + ReadSnapshotDataCorruption case pointToWithOriginRealPoint (castPoint (getTip extLedgerSt)) of - Origin -> throwError InitFailureGenesis + Origin -> throwError InitFailureGenesis NotOrigin pt -> do - backingStore <- Trans.lift (restoreBackingStore tracer bss fs extLedgerSt (snapshotToTablesPath s)) - let chlog = empty extLedgerSt - pure ((chlog, backingStore), pt) + backingStore <- Trans.lift (restoreBackingStore tracer bss fs extLedgerSt (snapshotToTablesPath s)) + let chlog = empty extLedgerSt + pure ((chlog, backingStore), pt) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2.hs index 1ebb566c88..3e5fe2a489 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2.hs @@ -17,66 +17,69 @@ module Ouroboros.Consensus.Storage.LedgerDB.V2 (mkInitDb) where -import Control.Arrow ((>>>)) -import qualified Control.Monad as Monad (void, (>=>)) -import Control.Monad.Except -import Control.RAWLock -import qualified Control.RAWLock as RAWLock -import Control.ResourceRegistry -import Control.Tracer -import qualified Data.Foldable as Foldable -import Data.Functor.Contravariant ((>$<)) -import Data.Kind (Type) -import Data.Map (Map) -import qualified Data.Map.Strict as Map -import Data.Set (Set) -import qualified 
Data.Set as Set -import Data.Void -import Data.Word -import GHC.Generics -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.HeaderStateHistory - (HeaderStateHistory (..), mkHeaderStateWithTimeFromSummary) -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Args -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import Ouroboros.Consensus.Storage.LedgerDB.TraceEvent -import Ouroboros.Consensus.Storage.LedgerDB.V2.Args as V2 -import Ouroboros.Consensus.Storage.LedgerDB.V2.Forker -import qualified Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory as InMemory -import Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq -import Ouroboros.Consensus.Util -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.NormalForm.StrictTVar () -import qualified Ouroboros.Network.AnchoredSeq as AS -import Ouroboros.Network.Protocol.LocalStateQuery.Type -import Prelude hiding (read) -import System.FS.API - -mkInitDb :: forall m blk. 
- ( LedgerSupportsProtocol blk - , IOLike m - , LedgerDbSerialiseConstraints blk - , HasHardForkHistory blk - , LedgerSupportsInMemoryLedgerDB blk - ) - => Complete LedgerDbArgs m blk - -> Complete V2.LedgerDbFlavorArgs m - -> ResolveBlock m blk - -> InitDB (LedgerSeq' m blk) m blk +import Control.Arrow ((>>>)) +import Control.Monad qualified as Monad (void, (>=>)) +import Control.Monad.Except +import Control.RAWLock +import Control.RAWLock qualified as RAWLock +import Control.ResourceRegistry +import Control.Tracer +import Data.Foldable qualified as Foldable +import Data.Functor.Contravariant ((>$<)) +import Data.Kind (Type) +import Data.Map (Map) +import Data.Map.Strict qualified as Map +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Void +import Data.Word +import GHC.Generics +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HeaderStateHistory + ( HeaderStateHistory (..) 
+ , mkHeaderStateWithTimeFromSummary + ) +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Storage.ChainDB.Impl.BlockCache +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Args +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.LedgerDB.TraceEvent +import Ouroboros.Consensus.Storage.LedgerDB.V2.Args as V2 +import Ouroboros.Consensus.Storage.LedgerDB.V2.Forker +import Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory qualified as InMemory +import Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq +import Ouroboros.Consensus.Util +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.NormalForm.StrictTVar () +import Ouroboros.Network.AnchoredSeq qualified as AS +import Ouroboros.Network.Protocol.LocalStateQuery.Type +import System.FS.API +import Prelude hiding (read) + +mkInitDb :: + forall m blk. + ( LedgerSupportsProtocol blk + , IOLike m + , LedgerDbSerialiseConstraints blk + , HasHardForkHistory blk + , LedgerSupportsInMemoryLedgerDB blk + ) => + Complete LedgerDbArgs m blk -> + Complete V2.LedgerDbFlavorArgs m -> + ResolveBlock m blk -> + InitDB (LedgerSeq' m blk) m blk mkInitDb args flavArgs getBlock = - InitDB { - initFromGenesis = emptyF =<< lgrGenesis + InitDB + { initFromGenesis = emptyF =<< lgrGenesis , initFromSnapshot = loadSnapshot (configCodec . getExtLedgerCfg . 
ledgerDbCfg $ lgrConfig) lgrHasFS , closeDb = closeLedgerSeq @@ -95,211 +98,236 @@ mkInitDb args flavArgs getBlock = forkers <- newTVarIO Map.empty nextForkerKey <- newTVarIO (ForkerKey 0) lock <- RAWLock.new LDBLock - let env = LedgerDBEnv { - ldbSeq = varDB - , ldbPrevApplied = prevApplied - , ldbForkers = forkers - , ldbNextForkerKey = nextForkerKey - , ldbSnapshotPolicy = defaultSnapshotPolicy (ledgerDbCfgSecParam lgrConfig) lgrSnapshotPolicyArgs - , ldbTracer = lgrTracer - , ldbCfg = lgrConfig - , ldbHasFS = lgrHasFS - , ldbResolveBlock = getBlock - , ldbQueryBatchSize = lgrQueryBatchSize - , ldbOpenHandlesLock = lock - } + let env = + LedgerDBEnv + { ldbSeq = varDB + , ldbPrevApplied = prevApplied + , ldbForkers = forkers + , ldbNextForkerKey = nextForkerKey + , ldbSnapshotPolicy = defaultSnapshotPolicy (ledgerDbCfgSecParam lgrConfig) lgrSnapshotPolicyArgs + , ldbTracer = lgrTracer + , ldbCfg = lgrConfig + , ldbHasFS = lgrHasFS + , ldbResolveBlock = getBlock + , ldbQueryBatchSize = lgrQueryBatchSize + , ldbOpenHandlesLock = lock + } h <- LDBHandle <$> newTVarIO (LedgerDBOpen env) pure $ implMkLedgerDb h bss } where - LedgerDbArgs { - lgrConfig - , lgrGenesis - , lgrHasFS - , lgrSnapshotPolicyArgs - , lgrTracer - , lgrQueryBatchSize - , lgrRegistry - } = args - - bss = case flavArgs of V2Args bss0 -> bss0 - - emptyF :: ExtLedgerState blk ValuesMK - -> m (LedgerSeq' m blk) - emptyF st = - empty' st $ case bss of - InMemoryHandleArgs -> InMemory.newInMemoryLedgerTablesHandle lgrHasFS - LSMHandleArgs x -> absurd x - - loadSnapshot :: CodecConfig blk - -> SomeHasFS m - -> DiskSnapshot - -> m (Either (SnapshotFailure blk) (LedgerSeq' m blk, RealPoint blk)) - loadSnapshot ccfg fs ds = case bss of - InMemoryHandleArgs -> runExceptT $ InMemory.loadSnapshot lgrRegistry ccfg fs ds - LSMHandleArgs x -> absurd x + LedgerDbArgs + { lgrConfig + , lgrGenesis + , lgrHasFS + , lgrSnapshotPolicyArgs + , lgrTracer + , lgrQueryBatchSize + , lgrRegistry + } = args + + bss = case 
flavArgs of V2Args bss0 -> bss0 + + emptyF :: + ExtLedgerState blk ValuesMK -> + m (LedgerSeq' m blk) + emptyF st = + empty' st $ case bss of + InMemoryHandleArgs -> InMemory.newInMemoryLedgerTablesHandle lgrHasFS + LSMHandleArgs x -> absurd x + + loadSnapshot :: + CodecConfig blk -> + SomeHasFS m -> + DiskSnapshot -> + m (Either (SnapshotFailure blk) (LedgerSeq' m blk, RealPoint blk)) + loadSnapshot ccfg fs ds = case bss of + InMemoryHandleArgs -> runExceptT $ InMemory.loadSnapshot lgrRegistry ccfg fs ds + LSMHandleArgs x -> absurd x implMkLedgerDb :: - forall m l blk. - ( IOLike m - , HasCallStack - , IsLedger l - , l ~ ExtLedgerState blk - , StandardHash l, HasLedgerTables l - , LedgerSupportsProtocol blk - , LedgerDbSerialiseConstraints blk - , HasHardForkHistory blk - ) - => LedgerDBHandle m l blk - -> HandleArgs - -> (LedgerDB m l blk, TestInternals m l blk) -implMkLedgerDb h bss = (LedgerDB { - getVolatileTip = getEnvSTM h implGetVolatileTip - , getImmutableTip = getEnvSTM h implGetImmutableTip - , getPastLedgerState = \s -> getEnvSTM h (flip implGetPastLedgerState s) - , getHeaderStateHistory = getEnvSTM h implGetHeaderStateHistory - , getForkerAtTarget = newForkerAtTarget h - , validateFork = getEnv5 h (implValidate h) - , getPrevApplied = getEnvSTM h implGetPrevApplied - , garbageCollect = \s -> getEnvSTM h (flip implGarbageCollect s) - , tryTakeSnapshot = getEnv2 h (implTryTakeSnapshot bss) - , tryFlush = getEnv h implTryFlush - , closeDB = implCloseDB h - }, mkInternals bss h) + forall m l blk. 
+ ( IOLike m + , HasCallStack + , IsLedger l + , l ~ ExtLedgerState blk + , StandardHash l + , HasLedgerTables l + , LedgerSupportsProtocol blk + , LedgerDbSerialiseConstraints blk + , HasHardForkHistory blk + ) => + LedgerDBHandle m l blk -> + HandleArgs -> + (LedgerDB m l blk, TestInternals m l blk) +implMkLedgerDb h bss = + ( LedgerDB + { getVolatileTip = getEnvSTM h implGetVolatileTip + , getImmutableTip = getEnvSTM h implGetImmutableTip + , getPastLedgerState = \s -> getEnvSTM h (flip implGetPastLedgerState s) + , getHeaderStateHistory = getEnvSTM h implGetHeaderStateHistory + , getForkerAtTarget = newForkerAtTarget h + , validateFork = getEnv5 h (implValidate h) + , getPrevApplied = getEnvSTM h implGetPrevApplied + , garbageCollect = \s -> getEnvSTM h (flip implGarbageCollect s) + , tryTakeSnapshot = getEnv2 h (implTryTakeSnapshot bss) + , tryFlush = getEnv h implTryFlush + , closeDB = implCloseDB h + } + , mkInternals bss h + ) mkInternals :: - forall m blk. - ( IOLike m - , LedgerDbSerialiseConstraints blk - , LedgerSupportsProtocol blk - , ApplyBlock (ExtLedgerState blk) blk - ) - => HandleArgs - -> LedgerDBHandle m (ExtLedgerState blk) blk - -> TestInternals' m blk -mkInternals bss h = TestInternals { - takeSnapshotNOW = \whereTo suff -> getEnv h $ \env -> do - st <- (case whereTo of - TakeAtVolatileTip -> anchorHandle - TakeAtImmutableTip -> currentHandle) <$> readTVarIO (ldbSeq env) - Monad.void $ takeSnapshot - (configCodec . getExtLedgerCfg . ledgerDbCfg $ ldbCfg env) - (LedgerDBSnapshotEvent >$< ldbTracer env) - (ldbHasFS env) - suff - st + forall m blk. 
+ ( IOLike m + , LedgerDbSerialiseConstraints blk + , LedgerSupportsProtocol blk + , ApplyBlock (ExtLedgerState blk) blk + ) => + HandleArgs -> + LedgerDBHandle m (ExtLedgerState blk) blk -> + TestInternals' m blk +mkInternals bss h = + TestInternals + { takeSnapshotNOW = \whereTo suff -> getEnv h $ \env -> do + st <- + ( case whereTo of + TakeAtVolatileTip -> anchorHandle + TakeAtImmutableTip -> currentHandle + ) + <$> readTVarIO (ldbSeq env) + Monad.void $ + takeSnapshot + (configCodec . getExtLedgerCfg . ledgerDbCfg $ ldbCfg env) + (LedgerDBSnapshotEvent >$< ldbTracer env) + (ldbHasFS env) + suff + st , push = \st -> withRegistry $ \reg -> do - eFrk <- newForkerAtTarget h reg VolatileTip - case eFrk of - Left {} -> error "Unreachable, Volatile tip MUST be in LedgerDB" - Right frk -> - forkerPush frk st >> atomically (forkerCommit frk) >> forkerClose frk + eFrk <- newForkerAtTarget h reg VolatileTip + case eFrk of + Left{} -> error "Unreachable, Volatile tip MUST be in LedgerDB" + Right frk -> + forkerPush frk st >> atomically (forkerCommit frk) >> forkerClose frk , reapplyThenPushNOW = \blk -> getEnv h $ \env -> withRegistry $ \reg -> do - eFrk <- newForkerAtTarget h reg VolatileTip - case eFrk of - Left {} -> error "Unreachable, Volatile tip MUST be in LedgerDB" - Right frk -> do - st <- atomically $ forkerGetLedgerState frk - tables <- forkerReadTables frk (getBlockKeySets blk) - let st' = tickThenReapply (ledgerDbCfgComputeLedgerEvents (ldbCfg env)) (ledgerDbCfg $ ldbCfg env) blk (st `withLedgerTables` tables) - forkerPush frk st' >> atomically (forkerCommit frk) >> forkerClose frk + eFrk <- newForkerAtTarget h reg VolatileTip + case eFrk of + Left{} -> error "Unreachable, Volatile tip MUST be in LedgerDB" + Right frk -> do + st <- atomically $ forkerGetLedgerState frk + tables <- forkerReadTables frk (getBlockKeySets blk) + let st' = + tickThenReapply + (ledgerDbCfgComputeLedgerEvents (ldbCfg env)) + (ledgerDbCfg $ ldbCfg env) + blk + (st `withLedgerTables` 
tables) + forkerPush frk st' >> atomically (forkerCommit frk) >> forkerClose frk , wipeLedgerDB = getEnv h $ destroySnapshots . ldbHasFS , closeLedgerDB = - let LDBHandle tvar = h in - atomically (writeTVar tvar LedgerDBClosed) + let LDBHandle tvar = h + in atomically (writeTVar tvar LedgerDBClosed) , truncateSnapshots = getEnv h $ implIntTruncateSnapshots . ldbHasFS } - where - takeSnapshot :: CodecConfig blk - -> Tracer m (TraceSnapshotEvent blk) - -> SomeHasFS m - -> Maybe String - -> StateRef m (ExtLedgerState blk) - -> m (Maybe (DiskSnapshot, RealPoint blk)) - takeSnapshot = case bss of - InMemoryHandleArgs -> InMemory.takeSnapshot - LSMHandleArgs x -> absurd x + where + takeSnapshot :: + CodecConfig blk -> + Tracer m (TraceSnapshotEvent blk) -> + SomeHasFS m -> + Maybe String -> + StateRef m (ExtLedgerState blk) -> + m (Maybe (DiskSnapshot, RealPoint blk)) + takeSnapshot = case bss of + InMemoryHandleArgs -> InMemory.takeSnapshot + LSMHandleArgs x -> absurd x -- | Testing only! Truncate all snapshots in the DB. implIntTruncateSnapshots :: MonadThrow m => SomeHasFS m -> m () implIntTruncateSnapshots sfs@(SomeHasFS fs) = do - snapshotsMapM_ sfs (truncateRecursively . (:[])) - where - truncateRecursively pre = do - dirs <- listDirectory fs (mkFsPath pre) - mapM_ (\d -> do - let d' = pre ++ [d] - isDir <- doesDirectoryExist fs $ mkFsPath d' - if isDir - then truncateRecursively d' - else withFile fs (mkFsPath d') (AppendMode AllowExisting) $ \h -> hTruncate fs h 0 - ) dirs + snapshotsMapM_ sfs (truncateRecursively . 
(: [])) + where + truncateRecursively pre = do + dirs <- listDirectory fs (mkFsPath pre) + mapM_ + ( \d -> do + let d' = pre ++ [d] + isDir <- doesDirectoryExist fs $ mkFsPath d' + if isDir + then truncateRecursively d' + else withFile fs (mkFsPath d') (AppendMode AllowExisting) $ \h -> hTruncate fs h 0 + ) + dirs implGetVolatileTip :: - (MonadSTM m, GetTip l) - => LedgerDBEnv m l blk - -> STM m (l EmptyMK) + (MonadSTM m, GetTip l) => + LedgerDBEnv m l blk -> + STM m (l EmptyMK) implGetVolatileTip = fmap current . readTVar . ldbSeq implGetImmutableTip :: - MonadSTM m - => LedgerDBEnv m l blk - -> STM m (l EmptyMK) + MonadSTM m => + LedgerDBEnv m l blk -> + STM m (l EmptyMK) implGetImmutableTip = fmap anchor . readTVar . ldbSeq implGetPastLedgerState :: - ( MonadSTM m , HasHeader blk, IsLedger l, StandardHash l - , HeaderHash l ~ HeaderHash blk ) - => LedgerDBEnv m l blk -> Point blk -> STM m (Maybe (l EmptyMK)) + ( MonadSTM m + , HasHeader blk + , IsLedger l + , StandardHash l + , HeaderHash l ~ HeaderHash blk + ) => + LedgerDBEnv m l blk -> Point blk -> STM m (Maybe (l EmptyMK)) implGetPastLedgerState env point = getPastLedgerAt point <$> readTVar (ldbSeq env) implGetHeaderStateHistory :: - ( MonadSTM m - , l ~ ExtLedgerState blk - , IsLedger (LedgerState blk) - , HasHardForkHistory blk - , HasAnnTip blk - ) - => LedgerDBEnv m l blk -> STM m (HeaderStateHistory blk) + ( MonadSTM m + , l ~ ExtLedgerState blk + , IsLedger (LedgerState blk) + , HasHardForkHistory blk + , HasAnnTip blk + ) => + LedgerDBEnv m l blk -> STM m (HeaderStateHistory blk) implGetHeaderStateHistory env = do - ldb <- readTVar (ldbSeq env) - let currentLedgerState = ledgerState $ current ldb - -- This summary can convert all tip slots of the ledger states in the - -- @ledgerDb@ as these are not newer than the tip slot of the current - -- ledger state (Property 17.1 in the Consensus report). 
- summary = hardForkSummary (configLedger $ getExtLedgerCfg $ ledgerDbCfg $ ldbCfg env) currentLedgerState - mkHeaderStateWithTime' = - mkHeaderStateWithTimeFromSummary summary - . headerState - . state - pure - . HeaderStateHistory - . AS.bimap mkHeaderStateWithTime' mkHeaderStateWithTime' - $ getLedgerSeq ldb + ldb <- readTVar (ldbSeq env) + let currentLedgerState = ledgerState $ current ldb + -- This summary can convert all tip slots of the ledger states in the + -- @ledgerDb@ as these are not newer than the tip slot of the current + -- ledger state (Property 17.1 in the Consensus report). + summary = hardForkSummary (configLedger $ getExtLedgerCfg $ ledgerDbCfg $ ldbCfg env) currentLedgerState + mkHeaderStateWithTime' = + mkHeaderStateWithTimeFromSummary summary + . headerState + . state + pure + . HeaderStateHistory + . AS.bimap mkHeaderStateWithTime' mkHeaderStateWithTime' + $ getLedgerSeq ldb implValidate :: - forall m l blk. ( - IOLike m - , LedgerSupportsProtocol blk - , HasCallStack - , l ~ ExtLedgerState blk - ) - => LedgerDBHandle m l blk - -> LedgerDBEnv m l blk - -> ResourceRegistry m - -> (TraceValidateEvent blk -> m ()) - -> BlockCache blk - -> Word64 - -> [Header blk] - -> m (ValidateResult m (ExtLedgerState blk) blk) + forall m l blk. + ( IOLike m + , LedgerSupportsProtocol blk + , HasCallStack + , l ~ ExtLedgerState blk + ) => + LedgerDBHandle m l blk -> + LedgerDBEnv m l blk -> + ResourceRegistry m -> + (TraceValidateEvent blk -> m ()) -> + BlockCache blk -> + Word64 -> + [Header blk] -> + m (ValidateResult m (ExtLedgerState blk) blk) implValidate h ldbEnv rr tr cache rollbacks hdrs = validate (ledgerDbCfgComputeLedgerEvents $ ldbCfg ldbEnv) $ ValidateArgs (ldbResolveBlock ldbEnv) (getExtLedgerCfg . 
ledgerDbCfg $ ldbCfg ldbEnv) - (\l -> do + ( \l -> do prev <- readTVar (ldbPrevApplied ldbEnv) - writeTVar (ldbPrevApplied ldbEnv) (Foldable.foldl' (flip Set.insert) prev l)) + writeTVar (ldbPrevApplied ldbEnv) (Foldable.foldl' (flip Set.insert) prev l) + ) (readTVar (ldbPrevApplied ldbEnv)) (newForkerByRollback h) rr @@ -314,51 +342,56 @@ implGetPrevApplied env = readTVar (ldbPrevApplied env) -- | Remove all points with a slot older than the given slot from the set of -- previously applied points. implGarbageCollect :: MonadSTM m => LedgerDBEnv m l blk -> SlotNo -> STM m () -implGarbageCollect env slotNo = modifyTVar (ldbPrevApplied env) $ +implGarbageCollect env slotNo = + modifyTVar (ldbPrevApplied env) $ Set.dropWhileAntitone ((< slotNo) . realPointSlot) implTryTakeSnapshot :: - forall m l blk. - ( l ~ ExtLedgerState blk - , IOLike m - , LedgerSupportsProtocol blk - , LedgerDbSerialiseConstraints blk - ) - => HandleArgs - -> LedgerDBEnv m l blk - -> Maybe (Time, Time) - -> Word64 - -> m SnapCounters + forall m l blk. + ( l ~ ExtLedgerState blk + , IOLike m + , LedgerSupportsProtocol blk + , LedgerDbSerialiseConstraints blk + ) => + HandleArgs -> + LedgerDBEnv m l blk -> + Maybe (Time, Time) -> + Word64 -> + m SnapCounters implTryTakeSnapshot bss env mTime nrBlocks = - if onDiskShouldTakeSnapshot (ldbSnapshotPolicy env) (uncurry (flip diffTime) <$> mTime) nrBlocks then do - Monad.void . takeSnapshot - (configCodec . getExtLedgerCfg . ledgerDbCfg $ ldbCfg env) - (LedgerDBSnapshotEvent >$< ldbTracer env) - (ldbHasFS env) - . anchorHandle - =<< readTVarIO (ldbSeq env) - Monad.void $ trimSnapshots - (LedgerDBSnapshotEvent >$< ldbTracer env) - (ldbHasFS env) - (ldbSnapshotPolicy env) + if onDiskShouldTakeSnapshot (ldbSnapshotPolicy env) (uncurry (flip diffTime) <$> mTime) nrBlocks + then do + Monad.void + . takeSnapshot + (configCodec . getExtLedgerCfg . ledgerDbCfg $ ldbCfg env) + (LedgerDBSnapshotEvent >$< ldbTracer env) + (ldbHasFS env) + . 
anchorHandle + =<< readTVarIO (ldbSeq env) + Monad.void $ + trimSnapshots + (LedgerDBSnapshotEvent >$< ldbTracer env) + (ldbHasFS env) + (ldbSnapshotPolicy env) (`SnapCounters` 0) . Just <$> maybe getMonotonicTime (pure . snd) mTime else pure $ SnapCounters (fst <$> mTime) nrBlocks - where - takeSnapshot :: CodecConfig blk - -> Tracer m (TraceSnapshotEvent blk) - -> SomeHasFS m - -> StateRef m (ExtLedgerState blk) - -> m (Maybe (DiskSnapshot, RealPoint blk)) - takeSnapshot config trcr fs ref = case bss of - InMemoryHandleArgs -> - InMemory.takeSnapshot - config - trcr - fs - Nothing - ref - LSMHandleArgs x -> absurd x + where + takeSnapshot :: + CodecConfig blk -> + Tracer m (TraceSnapshotEvent blk) -> + SomeHasFS m -> + StateRef m (ExtLedgerState blk) -> + m (Maybe (DiskSnapshot, RealPoint blk)) + takeSnapshot config trcr fs ref = case bss of + InMemoryHandleArgs -> + InMemory.takeSnapshot + config + trcr + fs + Nothing + ref + LSMHandleArgs x -> absurd x -- In the first version of the LedgerDB for UTxO-HD, there is a need to -- periodically flush the accumulated differences to the disk. 
However, in the @@ -369,16 +402,18 @@ implTryFlush _ = pure () implCloseDB :: IOLike m => LedgerDBHandle m l blk -> m () implCloseDB (LDBHandle varState) = do - mbOpenEnv <- atomically $ readTVar varState >>= \case - -- Idempotent - LedgerDBClosed -> return Nothing - LedgerDBOpen env -> do - writeTVar varState LedgerDBClosed - return $ Just env - - -- Only when the LedgerDB was open - whenJust mbOpenEnv $ \env -> do - closeAllForkers env + mbOpenEnv <- + atomically $ + readTVar varState >>= \case + -- Idempotent + LedgerDBClosed -> return Nothing + LedgerDBOpen env -> do + writeTVar varState LedgerDBClosed + return $ Just env + + -- Only when the LedgerDB was open + whenJust mbOpenEnv $ \env -> do + closeAllForkers env {------------------------------------------------------------------------------- The LedgerDBEnv @@ -387,104 +422,118 @@ implCloseDB (LDBHandle varState) = do data LDBLock = LDBLock deriving (Generic, NoThunks) type LedgerDBEnv :: (Type -> Type) -> LedgerStateKind -> Type -> Type -data LedgerDBEnv m l blk = LedgerDBEnv { - -- | INVARIANT: the tip of the 'LedgerDB' is always in sync with the tip of - -- the current chain of the ChainDB. - ldbSeq :: !(StrictTVar m (LedgerSeq m l)) - -- | INVARIANT: this set contains only points that are in the - -- VolatileDB. - -- - -- INVARIANT: all points on the current chain fragment are in this set. - -- - -- The VolatileDB might contain invalid blocks, these will not be in - -- this set. - -- - -- When a garbage-collection is performed on the VolatileDB, the points - -- of the blocks eligible for garbage-collection should be removed from - -- this set. - , ldbPrevApplied :: !(StrictTVar m (Set (RealPoint blk))) - -- | Open forkers. - -- - -- INVARIANT: a forker is open iff its 'ForkerKey' is in this 'Map. 
- , ldbForkers :: !(StrictTVar m (Map ForkerKey (ForkerEnv m l blk))) - , ldbNextForkerKey :: !(StrictTVar m ForkerKey) - - , ldbSnapshotPolicy :: !SnapshotPolicy - , ldbTracer :: !(Tracer m (TraceEvent blk)) - , ldbCfg :: !(LedgerDbCfg l) - , ldbHasFS :: !(SomeHasFS m) - , ldbResolveBlock :: !(ResolveBlock m blk) - , ldbQueryBatchSize :: !QueryBatchSize +data LedgerDBEnv m l blk = LedgerDBEnv + { ldbSeq :: !(StrictTVar m (LedgerSeq m l)) + -- ^ INVARIANT: the tip of the 'LedgerDB' is always in sync with the tip of + -- the current chain of the ChainDB. + , ldbPrevApplied :: !(StrictTVar m (Set (RealPoint blk))) + -- ^ INVARIANT: this set contains only points that are in the + -- VolatileDB. + -- + -- INVARIANT: all points on the current chain fragment are in this set. + -- + -- The VolatileDB might contain invalid blocks, these will not be in + -- this set. + -- + -- When a garbage-collection is performed on the VolatileDB, the points + -- of the blocks eligible for garbage-collection should be removed from + -- this set. + , ldbForkers :: !(StrictTVar m (Map ForkerKey (ForkerEnv m l blk))) + -- ^ Open forkers. + -- + -- INVARIANT: a forker is open iff its 'ForkerKey' is in this 'Map. 
+ , ldbNextForkerKey :: !(StrictTVar m ForkerKey) + , ldbSnapshotPolicy :: !SnapshotPolicy + , ldbTracer :: !(Tracer m (TraceEvent blk)) + , ldbCfg :: !(LedgerDbCfg l) + , ldbHasFS :: !(SomeHasFS m) + , ldbResolveBlock :: !(ResolveBlock m blk) + , ldbQueryBatchSize :: !QueryBatchSize , ldbOpenHandlesLock :: !(RAWLock m LDBLock) - } deriving (Generic) + } + deriving Generic -deriving instance ( IOLike m - , LedgerSupportsProtocol blk - , NoThunks (l EmptyMK) - , NoThunks (TxIn l) - , NoThunks (TxOut l) - , NoThunks (LedgerCfg l) - ) => NoThunks (LedgerDBEnv m l blk) +deriving instance + ( IOLike m + , LedgerSupportsProtocol blk + , NoThunks (l EmptyMK) + , NoThunks (TxIn l) + , NoThunks (TxOut l) + , NoThunks (LedgerCfg l) + ) => + NoThunks (LedgerDBEnv m l blk) {------------------------------------------------------------------------------- The LedgerDBHandle -------------------------------------------------------------------------------} type LedgerDBHandle :: (Type -> Type) -> LedgerStateKind -> Type -> Type -newtype LedgerDBHandle m l blk = - LDBHandle (StrictTVar m (LedgerDBState m l blk)) +newtype LedgerDBHandle m l blk + = LDBHandle (StrictTVar m (LedgerDBState m l blk)) deriving Generic -data LedgerDBState m l blk = - LedgerDBOpen !(LedgerDBEnv m l blk) +data LedgerDBState m l blk + = LedgerDBOpen !(LedgerDBEnv m l blk) | LedgerDBClosed deriving Generic -deriving instance ( IOLike m - , LedgerSupportsProtocol blk - , NoThunks (l EmptyMK) - , NoThunks (TxIn l) - , NoThunks (TxOut l) - , NoThunks (LedgerCfg l) - ) => NoThunks (LedgerDBState m l blk) - +deriving instance + ( IOLike m + , LedgerSupportsProtocol blk + , NoThunks (l EmptyMK) + , NoThunks (TxIn l) + , NoThunks (TxOut l) + , NoThunks (LedgerCfg l) + ) => + NoThunks (LedgerDBState m l blk) -- | Check if the LedgerDB is open, if so, executing the given function on the -- 'LedgerDBEnv', otherwise, throw a 'CloseDBError'. getEnv :: - forall m l blk r. 
(IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> m r) - -> m r -getEnv (LDBHandle varState) f = readTVarIO varState >>= \case + forall m l blk r. + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> m r) -> + m r +getEnv (LDBHandle varState) f = + readTVarIO varState >>= \case LedgerDBOpen env -> f env - LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack + LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack -- | Variant 'of 'getEnv' for functions taking two arguments. getEnv2 :: - (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> a -> b -> m r) - -> a -> b -> m r + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> a -> b -> m r) -> + a -> + b -> + m r getEnv2 h f a b = getEnv h (\env -> f env a b) -- | Variant 'of 'getEnv' for functions taking five arguments. getEnv5 :: - (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> a -> b -> c -> d -> e -> m r) - -> a -> b -> c -> d -> e -> m r + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> a -> b -> c -> d -> e -> m r) -> + a -> + b -> + c -> + d -> + e -> + m r getEnv5 h f a b c d e = getEnv h (\env -> f env a b c d e) -- | Variant of 'getEnv' that works in 'STM'. getEnvSTM :: - forall m l blk r. (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> (LedgerDBEnv m l blk -> STM m r) - -> STM m r -getEnvSTM (LDBHandle varState) f = readTVar varState >>= \case + forall m l blk r. 
+ (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + (LedgerDBEnv m l blk -> STM m r) -> + STM m r +getEnvSTM (LDBHandle varState) f = + readTVar varState >>= \case LedgerDBOpen env -> f env - LedgerDBClosed -> throwSTM $ ClosedDBError @blk prettyCallStack + LedgerDBClosed -> throwSTM $ ClosedDBError @blk prettyCallStack {------------------------------------------------------------------------------- Acquiring consistent views @@ -493,16 +542,16 @@ getEnvSTM (LDBHandle varState) f = readTVar varState >>= \case -- | This function must hold the 'LDBLock' such that handles are not released -- before they are duplicated. acquireAtTarget :: - ( HeaderHash l ~ HeaderHash blk - , IOLike m - , GetTip l - , StandardHash l - , LedgerSupportsProtocol blk - ) - => LedgerDBEnv m l blk - -> Either Word64 (Target (Point blk)) - -> LDBLock - -> m (Either GetForkerError (StateRef m l)) + ( HeaderHash l ~ HeaderHash blk + , IOLike m + , GetTip l + , StandardHash l + , LedgerSupportsProtocol blk + ) => + LedgerDBEnv m l blk -> + Either Word64 (Target (Point blk)) -> + LDBLock -> + m (Either GetForkerError (StateRef m l)) acquireAtTarget ldbEnv (Right VolatileTip) _ = do l <- readTVarIO (ldbSeq ldbEnv) let StateRef st tbs = currentHandle l @@ -517,61 +566,66 @@ acquireAtTarget ldbEnv (Right (SpecificPoint pt)) _ = do dblog <- readTVarIO (ldbSeq ldbEnv) let immTip = getTip $ anchor dblog case currentHandle <$> rollback pt dblog of - Nothing | pointSlot pt < pointSlot immTip -> pure $ Left $ PointTooOld Nothing - | otherwise -> pure $ Left PointNotOnChain + Nothing + | pointSlot pt < pointSlot immTip -> pure $ Left $ PointTooOld Nothing + | otherwise -> pure $ Left PointNotOnChain Just (StateRef st tbs) -> - Right . StateRef st <$> duplicate tbs + Right . 
StateRef st <$> duplicate tbs acquireAtTarget ldbEnv (Left n) _ = do - dblog <- readTVarIO (ldbSeq ldbEnv) - case currentHandle <$> rollbackN n dblog of - Nothing -> - return $ Left $ PointTooOld $ Just $ ExceededRollback { - rollbackMaximum = maxRollback dblog - , rollbackRequested = n - } - Just (StateRef st tbs) -> - Right . StateRef st <$> duplicate tbs + dblog <- readTVarIO (ldbSeq ldbEnv) + case currentHandle <$> rollbackN n dblog of + Nothing -> + return $ + Left $ + PointTooOld $ + Just $ + ExceededRollback + { rollbackMaximum = maxRollback dblog + , rollbackRequested = n + } + Just (StateRef st tbs) -> + Right . StateRef st <$> duplicate tbs newForkerAtTarget :: - ( HeaderHash l ~ HeaderHash blk - , IOLike m - , IsLedger l - , HasLedgerTables l - , LedgerSupportsProtocol blk - , StandardHash l - ) - => LedgerDBHandle m l blk - -> ResourceRegistry m - -> Target (Point blk) - -> m (Either GetForkerError (Forker m l blk)) + ( HeaderHash l ~ HeaderHash blk + , IOLike m + , IsLedger l + , HasLedgerTables l + , LedgerSupportsProtocol blk + , StandardHash l + ) => + LedgerDBHandle m l blk -> + ResourceRegistry m -> + Target (Point blk) -> + m (Either GetForkerError (Forker m l blk)) newForkerAtTarget h rr pt = getEnv h $ \ldbEnv@LedgerDBEnv{ldbOpenHandlesLock = lock} -> - RAWLock.withReadAccess lock (acquireAtTarget ldbEnv (Right pt)) >>= traverse (newForker h ldbEnv rr) + RAWLock.withReadAccess lock (acquireAtTarget ldbEnv (Right pt)) >>= traverse (newForker h ldbEnv rr) newForkerByRollback :: - ( HeaderHash l ~ HeaderHash blk - , IOLike m - , IsLedger l - , StandardHash l - , HasLedgerTables l - , LedgerSupportsProtocol blk - ) - => LedgerDBHandle m l blk - -> ResourceRegistry m - -> Word64 - -> m (Either GetForkerError (Forker m l blk)) + ( HeaderHash l ~ HeaderHash blk + , IOLike m + , IsLedger l + , StandardHash l + , HasLedgerTables l + , LedgerSupportsProtocol blk + ) => + LedgerDBHandle m l blk -> + ResourceRegistry m -> + Word64 -> + m (Either 
GetForkerError (Forker m l blk)) newForkerByRollback h rr n = getEnv h $ \ldbEnv@LedgerDBEnv{ldbOpenHandlesLock = lock} -> do - RAWLock.withReadAccess lock (acquireAtTarget ldbEnv (Left n)) >>= traverse (newForker h ldbEnv rr) + RAWLock.withReadAccess lock (acquireAtTarget ldbEnv (Left n)) >>= traverse (newForker h ldbEnv rr) -- | Close all open 'Forker's. closeAllForkers :: - IOLike m - => LedgerDBEnv m l blk - -> m () + IOLike m => + LedgerDBEnv m l blk -> + m () closeAllForkers ldbEnv = do - toClose <- fmap (ldbEnv,) <$> (atomically $ stateTVar forkersVar (, Map.empty)) - mapM_ closeForkerEnv toClose - where - forkersVar = ldbForkers ldbEnv + toClose <- fmap (ldbEnv,) <$> (atomically $ stateTVar forkersVar (,Map.empty)) + mapM_ closeForkerEnv toClose + where + forkersVar = ldbForkers ldbEnv closeForkerEnv :: IOLike m => (LedgerDBEnv m l blk, ForkerEnv m l blk) -> m () closeForkerEnv (LedgerDBEnv{ldbOpenHandlesLock}, frkEnv) = @@ -582,87 +636,105 @@ closeForkerEnv (LedgerDBEnv{ldbOpenHandlesLock}, frkEnv) = pure ((), LDBLock) getForkerEnv :: - forall m l blk r. (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> ForkerKey - -> (ForkerEnv m l blk -> m r) - -> m r + forall m l blk r. 
+ (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + ForkerKey -> + (ForkerEnv m l blk -> m r) -> + m r getForkerEnv (LDBHandle varState) forkerKey f = do - forkerEnv <- atomically $ readTVar varState >>= \case - LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack - LedgerDBOpen env -> readTVar (ldbForkers env) >>= (Map.lookup forkerKey >>> \case - Nothing -> throwSTM $ ClosedForkerError @blk forkerKey prettyCallStack - Just forkerEnv -> pure forkerEnv) - f forkerEnv + forkerEnv <- + atomically $ + readTVar varState >>= \case + LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack + LedgerDBOpen env -> + readTVar (ldbForkers env) + >>= ( Map.lookup forkerKey >>> \case + Nothing -> throwSTM $ ClosedForkerError @blk forkerKey prettyCallStack + Just forkerEnv -> pure forkerEnv + ) + f forkerEnv getForkerEnv1 :: - (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> ForkerKey - -> (ForkerEnv m l blk -> a -> m r) - -> a -> m r + (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + ForkerKey -> + (ForkerEnv m l blk -> a -> m r) -> + a -> + m r getForkerEnv1 h forkerKey f a = getForkerEnv h forkerKey (`f` a) getForkerEnvSTM :: - forall m l blk r. (IOLike m, HasCallStack, HasHeader blk) - => LedgerDBHandle m l blk - -> ForkerKey - -> (ForkerEnv m l blk -> STM m r) - -> STM m r -getForkerEnvSTM (LDBHandle varState) forkerKey f = readTVar varState >>= \case - LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack - LedgerDBOpen env -> readTVar (ldbForkers env) >>= (Map.lookup forkerKey >>> \case - Nothing -> throwSTM $ ClosedForkerError @blk forkerKey prettyCallStack - Just forkerEnv -> f forkerEnv) + forall m l blk r. 
+ (IOLike m, HasCallStack, HasHeader blk) => + LedgerDBHandle m l blk -> + ForkerKey -> + (ForkerEnv m l blk -> STM m r) -> + STM m r +getForkerEnvSTM (LDBHandle varState) forkerKey f = + readTVar varState >>= \case + LedgerDBClosed -> throwIO $ ClosedDBError @blk prettyCallStack + LedgerDBOpen env -> + readTVar (ldbForkers env) + >>= ( Map.lookup forkerKey >>> \case + Nothing -> throwSTM $ ClosedForkerError @blk forkerKey prettyCallStack + Just forkerEnv -> f forkerEnv + ) -- | Will release all handles in the 'foeLedgerSeq'. implForkerClose :: - IOLike m - => LedgerDBHandle m l blk - -> ForkerKey - -> m () + IOLike m => + LedgerDBHandle m l blk -> + ForkerKey -> + m () implForkerClose (LDBHandle varState) forkerKey = do - menv <- atomically $ readTVar varState >>= \case - LedgerDBClosed -> pure Nothing - LedgerDBOpen ldbEnv -> fmap (ldbEnv,) <$> - stateTVar - (ldbForkers ldbEnv) - (Map.updateLookupWithKey (\_ _ -> Nothing) forkerKey) - whenJust menv closeForkerEnv + menv <- + atomically $ + readTVar varState >>= \case + LedgerDBClosed -> pure Nothing + LedgerDBOpen ldbEnv -> + fmap (ldbEnv,) + <$> stateTVar + (ldbForkers ldbEnv) + (Map.updateLookupWithKey (\_ _ -> Nothing) forkerKey) + whenJust menv closeForkerEnv newForker :: - ( IOLike m - , HasLedgerTables l - , LedgerSupportsProtocol blk - , NoThunks (l EmptyMK) - , GetTip l - , StandardHash l - ) - => LedgerDBHandle m l blk - -> LedgerDBEnv m l blk - -> ResourceRegistry m - -> StateRef m l - -> m (Forker m l blk) + ( IOLike m + , HasLedgerTables l + , LedgerSupportsProtocol blk + , NoThunks (l EmptyMK) + , GetTip l + , StandardHash l + ) => + LedgerDBHandle m l blk -> + LedgerDBEnv m l blk -> + ResourceRegistry m -> + StateRef m l -> + m (Forker m l blk) newForker h ldbEnv rr st = do - forkerKey <- atomically $ stateTVar (ldbNextForkerKey ldbEnv) $ \r -> (r, r + 1) - let tr = LedgerDBForkerEvent . TraceForkerEventWithKey forkerKey >$< ldbTracer ldbEnv - traceWith tr ForkerOpen - lseqVar <- newTVarIO . 
LedgerSeq . AS.Empty $ st - (_, toRelease) <- allocate rr (\_ -> newTVarIO (pure ())) (readTVarIO Monad.>=> id) - let forkerEnv = ForkerEnv { - foeLedgerSeq = lseqVar - , foeSwitchVar = ldbSeq ldbEnv - , foeSecurityParam = ledgerDbCfgSecParam $ ldbCfg ldbEnv - , foeTracer = tr - , foeResourcesToRelease = toRelease - } - atomically $ modifyTVar (ldbForkers ldbEnv) $ Map.insert forkerKey forkerEnv - pure $ Forker { - forkerReadTables = getForkerEnv1 h forkerKey implForkerReadTables - , forkerRangeReadTables = getForkerEnv1 h forkerKey (implForkerRangeReadTables (ldbQueryBatchSize ldbEnv)) - , forkerGetLedgerState = getForkerEnvSTM h forkerKey implForkerGetLedgerState - , forkerReadStatistics = getForkerEnv h forkerKey implForkerReadStatistics - , forkerPush = getForkerEnv1 h forkerKey implForkerPush - , forkerCommit = getForkerEnvSTM h forkerKey implForkerCommit - , forkerClose = implForkerClose h forkerKey + forkerKey <- atomically $ stateTVar (ldbNextForkerKey ldbEnv) $ \r -> (r, r + 1) + let tr = LedgerDBForkerEvent . TraceForkerEventWithKey forkerKey >$< ldbTracer ldbEnv + traceWith tr ForkerOpen + lseqVar <- newTVarIO . LedgerSeq . 
AS.Empty $ st + (_, toRelease) <- allocate rr (\_ -> newTVarIO (pure ())) (readTVarIO Monad.>=> id) + let forkerEnv = + ForkerEnv + { foeLedgerSeq = lseqVar + , foeSwitchVar = ldbSeq ldbEnv + , foeSecurityParam = ledgerDbCfgSecParam $ ldbCfg ldbEnv + , foeTracer = tr + , foeResourcesToRelease = toRelease + } + atomically $ modifyTVar (ldbForkers ldbEnv) $ Map.insert forkerKey forkerEnv + pure $ + Forker + { forkerReadTables = getForkerEnv1 h forkerKey implForkerReadTables + , forkerRangeReadTables = + getForkerEnv1 h forkerKey (implForkerRangeReadTables (ldbQueryBatchSize ldbEnv)) + , forkerGetLedgerState = getForkerEnvSTM h forkerKey implForkerGetLedgerState + , forkerReadStatistics = getForkerEnv h forkerKey implForkerReadStatistics + , forkerPush = getForkerEnv1 h forkerKey implForkerPush + , forkerCommit = getForkerEnvSTM h forkerKey implForkerCommit + , forkerClose = implForkerClose h forkerKey } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/Args.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/Args.hs index 58f07ac38b..6a6344e8f8 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/Args.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/Args.hs @@ -6,27 +6,26 @@ {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE PolyKinds #-} {-# LANGUAGE TypeFamilies #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Storage.LedgerDB.V2.Args ( - FlavorImplSpecificTrace (..) +module Ouroboros.Consensus.Storage.LedgerDB.V2.Args + ( FlavorImplSpecificTrace (..) , HandleArgs (..) , LedgerDbFlavorArgs (..) 
) where -import Data.Void (Void) -import GHC.Generics -import NoThunks.Class +import Data.Void (Void) +import GHC.Generics +import NoThunks.Class data LedgerDbFlavorArgs f m = V2Args HandleArgs -data HandleArgs = - InMemoryHandleArgs +data HandleArgs + = InMemoryHandleArgs | LSMHandleArgs Void deriving (Generic, NoThunks) -data FlavorImplSpecificTrace = - FlavorImplSpecificTraceInMemory +data FlavorImplSpecificTrace + = FlavorImplSpecificTraceInMemory | FlavorImplSpecificTraceOnDisk deriving (Show, Eq) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/Forker.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/Forker.hs index 0a63ad21ff..df3af6589d 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/Forker.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/Forker.hs @@ -5,113 +5,115 @@ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE UndecidableInstances #-} --- | -module Ouroboros.Consensus.Storage.LedgerDB.V2.Forker ( - ForkerEnv (..) +module Ouroboros.Consensus.Storage.LedgerDB.V2.Forker + ( ForkerEnv (..) 
, implForkerCommit , implForkerGetLedgerState , implForkerPush , implForkerRangeReadTables , implForkerReadStatistics , implForkerReadTables + -- * The API , module Ouroboros.Consensus.Storage.LedgerDB.Forker ) where -import Control.Tracer -import Data.Maybe (fromMaybe) -import GHC.Generics -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Args -import Ouroboros.Consensus.Storage.LedgerDB.Forker -import Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq -import Ouroboros.Consensus.Util.CallStack -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.NormalForm.StrictTVar () -import qualified Ouroboros.Network.AnchoredSeq as AS -import Prelude hiding (read) +import Control.Tracer +import Data.Maybe (fromMaybe) +import GHC.Generics +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Args +import Ouroboros.Consensus.Storage.LedgerDB.Forker +import Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq +import Ouroboros.Consensus.Util.CallStack +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.NormalForm.StrictTVar () +import Ouroboros.Network.AnchoredSeq qualified as AS +import Prelude hiding (read) {------------------------------------------------------------------------------- Forker operations -------------------------------------------------------------------------------} -data ForkerEnv m l blk = ForkerEnv { - -- | Local version of the LedgerSeq - foeLedgerSeq :: !(StrictTVar m 
(LedgerSeq m l)) - -- | This TVar is the same as the LedgerDB one - , foeSwitchVar :: !(StrictTVar m (LedgerSeq m l)) - -- | Config - , foeSecurityParam :: !SecurityParam - -- | Config - , foeTracer :: !(Tracer m TraceForkerEvent) - -- | Release the resources +data ForkerEnv m l blk = ForkerEnv + { foeLedgerSeq :: !(StrictTVar m (LedgerSeq m l)) + -- ^ Local version of the LedgerSeq + , foeSwitchVar :: !(StrictTVar m (LedgerSeq m l)) + -- ^ This TVar is the same as the LedgerDB one + , foeSecurityParam :: !SecurityParam + -- ^ Config + , foeTracer :: !(Tracer m TraceForkerEvent) + -- ^ Config , foeResourcesToRelease :: !(StrictTVar m (m ())) + -- ^ Release the resources } deriving Generic -deriving instance ( IOLike m - , LedgerSupportsProtocol blk - , NoThunks (l EmptyMK) - , NoThunks (TxIn l) - , NoThunks (TxOut l) - ) => NoThunks (ForkerEnv m l blk) +deriving instance + ( IOLike m + , LedgerSupportsProtocol blk + , NoThunks (l EmptyMK) + , NoThunks (TxIn l) + , NoThunks (TxOut l) + ) => + NoThunks (ForkerEnv m l blk) implForkerReadTables :: - (MonadSTM m, GetTip l) - => ForkerEnv m l blk - -> LedgerTables l KeysMK - -> m (LedgerTables l ValuesMK) + (MonadSTM m, GetTip l) => + ForkerEnv m l blk -> + LedgerTables l KeysMK -> + m (LedgerTables l ValuesMK) implForkerReadTables env ks = do - traceWith (foeTracer env) ForkerReadTablesStart - lseq <- readTVarIO (foeLedgerSeq env) - tbs <- read (tables $ currentHandle lseq) ks - traceWith (foeTracer env) ForkerReadTablesEnd - pure tbs + traceWith (foeTracer env) ForkerReadTablesStart + lseq <- readTVarIO (foeLedgerSeq env) + tbs <- read (tables $ currentHandle lseq) ks + traceWith (foeTracer env) ForkerReadTablesEnd + pure tbs implForkerRangeReadTables :: - (MonadSTM m, GetTip l, HasLedgerTables l) - => QueryBatchSize - -> ForkerEnv m l blk - -> RangeQueryPrevious l - -> m (LedgerTables l ValuesMK) + (MonadSTM m, GetTip l, HasLedgerTables l) => + QueryBatchSize -> + ForkerEnv m l blk -> + RangeQueryPrevious l -> + m 
(LedgerTables l ValuesMK) implForkerRangeReadTables qbs env rq0 = do - traceWith (foeTracer env) ForkerRangeReadTablesStart - ldb <- readTVarIO $ foeLedgerSeq env - let n = fromIntegral $ defaultQueryBatchSize qbs - case rq0 of - NoPreviousQuery -> readRange (tables $ currentHandle ldb) (Nothing, n) - PreviousQueryWasFinal -> pure $ LedgerTables emptyMK - PreviousQueryWasUpTo k -> do - tbs <- readRange (tables $ currentHandle ldb) (Just k, n) - traceWith (foeTracer env) ForkerRangeReadTablesEnd - pure tbs + traceWith (foeTracer env) ForkerRangeReadTablesStart + ldb <- readTVarIO $ foeLedgerSeq env + let n = fromIntegral $ defaultQueryBatchSize qbs + case rq0 of + NoPreviousQuery -> readRange (tables $ currentHandle ldb) (Nothing, n) + PreviousQueryWasFinal -> pure $ LedgerTables emptyMK + PreviousQueryWasUpTo k -> do + tbs <- readRange (tables $ currentHandle ldb) (Just k, n) + traceWith (foeTracer env) ForkerRangeReadTablesEnd + pure tbs implForkerGetLedgerState :: - (MonadSTM m, GetTip l) - => ForkerEnv m l blk - -> STM m (l EmptyMK) + (MonadSTM m, GetTip l) => + ForkerEnv m l blk -> + STM m (l EmptyMK) implForkerGetLedgerState env = current <$> readTVar (foeLedgerSeq env) implForkerReadStatistics :: - (MonadSTM m, GetTip l) - => ForkerEnv m l blk - -> m (Maybe Statistics) + (MonadSTM m, GetTip l) => + ForkerEnv m l blk -> + m (Maybe Statistics) implForkerReadStatistics env = do traceWith (foeTracer env) ForkerReadStatistics fmap (fmap Statistics) . tablesSize . tables . 
currentHandle =<< readTVarIO (foeLedgerSeq env) implForkerPush :: - (IOLike m, GetTip l, HasLedgerTables l, HasCallStack) - => ForkerEnv m l blk - -> l DiffMK - -> m () + (IOLike m, GetTip l, HasLedgerTables l, HasCallStack) => + ForkerEnv m l blk -> + l DiffMK -> + m () implForkerPush env newState = do traceWith (foeTracer env) ForkerPushStart lseq <- readTVarIO (foeLedgerSeq env) @@ -122,21 +124,21 @@ implForkerPush env newState = do bracketOnError (duplicate (tables $ currentHandle lseq)) close - (\newtbs -> do + ( \newtbs -> do pushDiffs newtbs st0 newState let lseq' = extend (StateRef st newtbs) lseq traceWith (foeTracer env) ForkerPushEnd atomically $ do - writeTVar (foeLedgerSeq env) lseq' - modifyTVar (foeResourcesToRelease env) (>> close newtbs) - ) + writeTVar (foeLedgerSeq env) lseq' + modifyTVar (foeResourcesToRelease env) (>> close newtbs) + ) implForkerCommit :: - (IOLike m, GetTip l, StandardHash l) - => ForkerEnv m l blk - -> STM m () + (IOLike m, GetTip l, StandardHash l) => + ForkerEnv m l blk -> + STM m () implForkerCommit env = do LedgerSeq lseq <- readTVar foeLedgerSeq let intersectionSlot = getTipSlot $ state $ AS.anchor lseq @@ -144,7 +146,7 @@ implForkerCommit env = do (discardedBySelection, LedgerSeq discardedByPruning) <- do stateTVar foeSwitchVar - (\(LedgerSeq olddb) -> fromMaybe theImpossible $ do + ( \(LedgerSeq olddb) -> fromMaybe theImpossible $ do -- Split the selection at the intersection point. The snd component will -- have to be closed. (olddb', toClose) <- AS.splitAfterMeasure intersectionSlot (either predicate predicate) olddb @@ -160,16 +162,18 @@ implForkerCommit env = do -- those we have to close the ones discarded in this function and forget about -- those releasing actions. writeTVar foeResourcesToRelease $ - mapM_ (close . 
tables) $ AS.toOldestFirst discardedBySelection ++ AS.toOldestFirst discardedByPruning - - where - ForkerEnv { - foeLedgerSeq - , foeSwitchVar - , foeResourcesToRelease - } = env - - theImpossible = - error $ unwords [ "Critical invariant violation:" - , "Forker chain does no longer intersect with selected chain." - ] + mapM_ (close . tables) $ + AS.toOldestFirst discardedBySelection ++ AS.toOldestFirst discardedByPruning + where + ForkerEnv + { foeLedgerSeq + , foeSwitchVar + , foeResourcesToRelease + } = env + + theImpossible = + error $ + unwords + [ "Critical invariant violation:" + , "Forker chain does no longer intersect with selected chain." + ] diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/InMemory.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/InMemory.hs index 770a85862d..bf73d10141 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/InMemory.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/InMemory.hs @@ -17,9 +17,10 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory ( - -- * LedgerTablesHandle +module Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory + ( -- * LedgerTablesHandle newInMemoryLedgerTablesHandle + -- * Snapshots , loadSnapshot , snapshotToStatePath @@ -27,44 +28,44 @@ module Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory ( , takeSnapshot ) where -import Cardano.Binary as CBOR -import qualified Codec.CBOR.Write as CBOR -import Codec.Serialise (decode) -import qualified Control.Monad as Monad -import Control.Monad.Trans (lift) -import Control.Monad.Trans.Except -import Control.ResourceRegistry -import Control.Tracer -import Data.Functor.Contravariant ((>$<)) -import Data.Functor.Identity -import qualified Data.List as List -import qualified Data.Map.Strict as Map -import Data.Maybe 
-import Data.String (fromString) -import GHC.Generics -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsProtocol -import qualified Ouroboros.Consensus.Ledger.Tables.Diff as Diff -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq -import Ouroboros.Consensus.Util.CBOR (readIncremental) -import Ouroboros.Consensus.Util.CRC -import Ouroboros.Consensus.Util.Enclose -import Ouroboros.Consensus.Util.IOLike -import Prelude hiding (read) -import System.FS.API -import System.FS.CRC +import Cardano.Binary as CBOR +import Codec.CBOR.Write qualified as CBOR +import Codec.Serialise (decode) +import Control.Monad qualified as Monad +import Control.Monad.Trans (lift) +import Control.Monad.Trans.Except +import Control.ResourceRegistry +import Control.Tracer +import Data.Functor.Contravariant ((>$<)) +import Data.Functor.Identity +import Data.List qualified as List +import Data.Map.Strict qualified as Map +import Data.Maybe +import Data.String (fromString) +import GHC.Generics +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Diff qualified as Diff +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq +import Ouroboros.Consensus.Util.CBOR (readIncremental) +import Ouroboros.Consensus.Util.CRC +import Ouroboros.Consensus.Util.Enclose +import Ouroboros.Consensus.Util.IOLike +import System.FS.API +import System.FS.CRC +import Prelude hiding (read) {------------------------------------------------------------------------------- InMemory implementation 
of LedgerTablesHandles -------------------------------------------------------------------------------} -data LedgerTablesHandleState l = - LedgerTablesHandleOpen !(LedgerTables l ValuesMK) +data LedgerTablesHandleState l + = LedgerTablesHandleOpen !(LedgerTables l ValuesMK) | LedgerTablesHandleClosed deriving Generic @@ -74,55 +75,72 @@ data InMemoryClosedExn = InMemoryClosedExn deriving (Show, Exception) guardClosed :: LedgerTablesHandleState l -> (LedgerTables l ValuesMK -> a) -> a -guardClosed LedgerTablesHandleClosed _ = error $ show InMemoryClosedExn +guardClosed LedgerTablesHandleClosed _ = error $ show InMemoryClosedExn guardClosed (LedgerTablesHandleOpen st) f = f st newInMemoryLedgerTablesHandle :: - forall m l. - ( IOLike m - , HasLedgerTables l - , CanUpgradeLedgerTables l - , SerializeTablesWithHint l - ) - => SomeHasFS m - -> LedgerTables l ValuesMK - -> m (LedgerTablesHandle m l) + forall m l. + ( IOLike m + , HasLedgerTables l + , CanUpgradeLedgerTables l + , SerializeTablesWithHint l + ) => + SomeHasFS m -> + LedgerTables l ValuesMK -> + m (LedgerTablesHandle m l) newInMemoryLedgerTablesHandle someFS@(SomeHasFS hasFS) l = do !tv <- newTVarIO (LedgerTablesHandleOpen l) - pure LedgerTablesHandle { - close = - atomically $ writeTVar tv LedgerTablesHandleClosed - , duplicate = do - hs <- readTVarIO tv - !x <- guardClosed hs $ newInMemoryLedgerTablesHandle someFS - pure x - , read = \keys -> do - hs <- readTVarIO tv - guardClosed hs (pure . flip (ltliftA2 (\(ValuesMK v) (KeysMK k) -> ValuesMK $ v `Map.restrictKeys` k)) keys) - , readRange = \(f, t) -> do - hs <- readTVarIO tv - guardClosed hs (\(LedgerTables (ValuesMK m)) -> - pure . LedgerTables . ValuesMK . Map.take t . (maybe id (\g -> snd . Map.split g) f) $ m) - , readAll = do - hs <- readTVarIO tv - guardClosed hs pure - , pushDiffs = \st0 !diffs -> - atomically - $ modifyTVar tv - (\r -> guardClosed r (LedgerTablesHandleOpen . 
flip (ltliftA2 (\(ValuesMK vals) (DiffMK d) -> ValuesMK (Diff.applyDiff vals d))) (projectLedgerTables diffs) . upgradeTables st0 diffs)) - , takeHandleSnapshot = \hint snapshotName -> do - createDirectoryIfMissing hasFS True $ mkFsPath [snapshotName, "tables"] - h <- readTVarIO tv - guardClosed h $ - \values -> - withFile hasFS (mkFsPath [snapshotName, "tables", "tvar"]) (WriteMode MustBeNew) $ \hf -> - fmap snd $ hPutAllCRC hasFS hf - $ CBOR.toLazyByteString - $ valuesMKEncoder hint values - , tablesSize = do - hs <- readTVarIO tv - guardClosed hs (pure . Just . Map.size . getValuesMK . getLedgerTables) - } + pure + LedgerTablesHandle + { close = + atomically $ writeTVar tv LedgerTablesHandleClosed + , duplicate = do + hs <- readTVarIO tv + !x <- guardClosed hs $ newInMemoryLedgerTablesHandle someFS + pure x + , read = \keys -> do + hs <- readTVarIO tv + guardClosed + hs + (pure . flip (ltliftA2 (\(ValuesMK v) (KeysMK k) -> ValuesMK $ v `Map.restrictKeys` k)) keys) + , readRange = \(f, t) -> do + hs <- readTVarIO tv + guardClosed + hs + ( \(LedgerTables (ValuesMK m)) -> + pure . LedgerTables . ValuesMK . Map.take t . (maybe id (\g -> snd . Map.split g) f) $ m + ) + , readAll = do + hs <- readTVarIO tv + guardClosed hs pure + , pushDiffs = \st0 !diffs -> + atomically $ + modifyTVar + tv + ( \r -> + guardClosed + r + ( LedgerTablesHandleOpen + . flip + (ltliftA2 (\(ValuesMK vals) (DiffMK d) -> ValuesMK (Diff.applyDiff vals d))) + (projectLedgerTables diffs) + . upgradeTables st0 diffs + ) + ) + , takeHandleSnapshot = \hint snapshotName -> do + createDirectoryIfMissing hasFS True $ mkFsPath [snapshotName, "tables"] + h <- readTVarIO tv + guardClosed h $ + \values -> + withFile hasFS (mkFsPath [snapshotName, "tables", "tvar"]) (WriteMode MustBeNew) $ \hf -> + fmap snd $ + hPutAllCRC hasFS hf $ + CBOR.toLazyByteString $ + valuesMKEncoder hint values + , tablesSize = do + hs <- readTVarIO tv + guardClosed hs (pure . Just . Map.size . getValuesMK . 
getLedgerTables) + } {------------------------------------------------------------------------------- Snapshots @@ -137,44 +155,46 @@ snapshotToTablePath :: DiskSnapshot -> FsPath snapshotToTablePath = mkFsPath . (\x -> [x, "tables", "tvar"]) . snapshotToDirName writeSnapshot :: - MonadThrow m - => SomeHasFS m - -> (ExtLedgerState blk EmptyMK -> Encoding) - -> DiskSnapshot - -> StateRef m (ExtLedgerState blk) - -> m () + MonadThrow m => + SomeHasFS m -> + (ExtLedgerState blk EmptyMK -> Encoding) -> + DiskSnapshot -> + StateRef m (ExtLedgerState blk) -> + m () writeSnapshot fs@(SomeHasFS hasFs) encLedger ds st = do - createDirectoryIfMissing hasFs True $ snapshotToDirPath ds - crc1 <- writeExtLedgerState fs encLedger (snapshotToStatePath ds) $ state st - crc2 <- takeHandleSnapshot (tables st) (state st) $ snapshotToDirName ds - writeSnapshotMetadata fs ds $ SnapshotMetadata + createDirectoryIfMissing hasFs True $ snapshotToDirPath ds + crc1 <- writeExtLedgerState fs encLedger (snapshotToStatePath ds) $ state st + crc2 <- takeHandleSnapshot (tables st) (state st) $ snapshotToDirName ds + writeSnapshotMetadata fs ds $ + SnapshotMetadata { snapshotBackend = UTxOHDMemSnapshot , snapshotChecksum = crcOfConcat crc1 crc2 } takeSnapshot :: - ( IOLike m - , LedgerDbSerialiseConstraints blk - , LedgerSupportsProtocol blk - ) - => CodecConfig blk - -> Tracer m (TraceSnapshotEvent blk) - -> SomeHasFS m - -> Maybe String - -> StateRef m (ExtLedgerState blk) - -> m (Maybe (DiskSnapshot, RealPoint blk)) + ( IOLike m + , LedgerDbSerialiseConstraints blk + , LedgerSupportsProtocol blk + ) => + CodecConfig blk -> + Tracer m (TraceSnapshotEvent blk) -> + SomeHasFS m -> + Maybe String -> + StateRef m (ExtLedgerState blk) -> + m (Maybe (DiskSnapshot, RealPoint blk)) takeSnapshot ccfg tracer hasFS suffix st = do case pointToWithOriginRealPoint (castPoint (getTip $ state st)) of Origin -> return Nothing NotOrigin t -> do - let number = unSlotNo (realPointSlot t) + let number = unSlotNo 
(realPointSlot t) snapshot = DiskSnapshot number suffix diskSnapshots <- listSnapshots hasFS - if List.any (== DiskSnapshot number suffix) diskSnapshots then - return Nothing + if List.any (== DiskSnapshot number suffix) diskSnapshots + then + return Nothing else do - encloseTimedWith (TookSnapshot snapshot t >$< tracer) - $ writeSnapshot hasFS (encodeDiskExtLedgerState ccfg) snapshot st + encloseTimedWith (TookSnapshot snapshot t >$< tracer) $ + writeSnapshot hasFS (encodeDiskExtLedgerState ccfg) snapshot st return $ Just (snapshot, t) -- | Read snapshot from disk. @@ -182,35 +202,44 @@ takeSnapshot ccfg tracer hasFS suffix st = do -- Fail on data corruption, i.e. when the checksum of the read data differs -- from the one tracked by @'DiskSnapshot'@. loadSnapshot :: - forall blk m. ( LedgerDbSerialiseConstraints blk - , LedgerSupportsProtocol blk - , IOLike m - , LedgerSupportsInMemoryLedgerDB blk - ) - => ResourceRegistry m - -> CodecConfig blk - -> SomeHasFS m - -> DiskSnapshot - -> ExceptT (SnapshotFailure blk) m (LedgerSeq' m blk, RealPoint blk) + forall blk m. + ( LedgerDbSerialiseConstraints blk + , LedgerSupportsProtocol blk + , IOLike m + , LedgerSupportsInMemoryLedgerDB blk + ) => + ResourceRegistry m -> + CodecConfig blk -> + SomeHasFS m -> + DiskSnapshot -> + ExceptT (SnapshotFailure blk) m (LedgerSeq' m blk, RealPoint blk) loadSnapshot _rr ccfg fs ds = do - snapshotMeta <- withExceptT (InitFailureRead . ReadMetadataError (snapshotToMetadataPath ds)) $ - loadSnapshotMetadata fs ds + snapshotMeta <- + withExceptT (InitFailureRead . ReadMetadataError (snapshotToMetadataPath ds)) $ + loadSnapshotMetadata fs ds Monad.when (snapshotBackend snapshotMeta /= UTxOHDMemSnapshot) $ do throwE $ InitFailureRead $ ReadMetadataError (snapshotToMetadataPath ds) MetadataBackendMismatch - (extLedgerSt, checksumAsRead) <- withExceptT - (InitFailureRead . 
ReadSnapshotFailed) $ - readExtLedgerState fs (decodeDiskExtLedgerState ccfg) decode (snapshotToStatePath ds) + (extLedgerSt, checksumAsRead) <- + withExceptT + (InitFailureRead . ReadSnapshotFailed) + $ readExtLedgerState fs (decodeDiskExtLedgerState ccfg) decode (snapshotToStatePath ds) case pointToWithOriginRealPoint (castPoint (getTip extLedgerSt)) of - Origin -> throwE InitFailureGenesis + Origin -> throwE InitFailureGenesis NotOrigin pt -> do (values, Identity crcTables) <- withExceptT (InitFailureRead . ReadSnapshotFailed) $ - ExceptT $ readIncremental fs Identity - (valuesMKDecoder extLedgerSt) - (fsPathFromList - $ fsPathToList (snapshotToDirPath ds) - <> [fromString "tables", fromString "tvar"]) + ExceptT $ + readIncremental + fs + Identity + (valuesMKDecoder extLedgerSt) + ( fsPathFromList $ + fsPathToList (snapshotToDirPath ds) + <> [fromString "tables", fromString "tvar"] + ) let computedCRC = crcOfConcat checksumAsRead crcTables Monad.when (computedCRC /= snapshotChecksum snapshotMeta) $ - throwE $ InitFailureRead $ ReadSnapshotDataCorruption + throwE $ + InitFailureRead $ + ReadSnapshotDataCorruption (,pt) <$> lift (empty extLedgerSt values (newInMemoryLedgerTablesHandle fs)) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/LedgerSeq.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/LedgerSeq.hs index 1e22076d0b..3dafee6dbd 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/LedgerSeq.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/LedgerDB/V2/LedgerSeq.hs @@ -15,9 +15,10 @@ {-# LANGUAGE UndecidableInstances #-} -- | The data structure that holds the cached ledger states. -module Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq ( - -- * LedgerHandles +module Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq + ( -- * LedgerHandles LedgerTablesHandle (..) 
+ -- * The ledger seq , LedgerSeq (..) , LedgerSeq' @@ -25,12 +26,14 @@ module Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq ( , closeLedgerSeq , empty , empty' + -- * Apply Blocks , extend , prune , pruneToImmTipOnly , reapplyBlock , reapplyThenPush + -- * Queries , anchor , anchorHandle @@ -49,49 +52,53 @@ module Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq ( , volatileStatesBimap ) where -import Cardano.Ledger.BaseTypes -import Control.ResourceRegistry -import qualified Data.Bifunctor as B -import Data.Function (on) -import Data.Word -import GHC.Generics -import NoThunks.Class -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config.SecurityParam -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.AnchoredSeq hiding (anchor, last, map, - rollback) -import qualified Ouroboros.Network.AnchoredSeq as AS hiding (map) -import Prelude hiding (read) -import System.FS.CRC (CRC) +import Cardano.Ledger.BaseTypes +import Control.ResourceRegistry +import Data.Bifunctor qualified as B +import Data.Function (on) +import Data.Word +import GHC.Generics +import NoThunks.Class +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config.SecurityParam +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.AnchoredSeq hiding + ( anchor + , last + , map + , rollback + ) +import Ouroboros.Network.AnchoredSeq qualified as AS hiding (map) +import System.FS.CRC (CRC) +import Prelude hiding (read) {------------------------------------------------------------------------------- LedgerTablesHandles 
-------------------------------------------------------------------------------} -data LedgerTablesHandle m l = LedgerTablesHandle { - close :: !(m ()) - -- | It is expected that this operation takes constant time. - , duplicate :: !(m (LedgerTablesHandle m l)) - , read :: !(LedgerTables l KeysMK -> m (LedgerTables l ValuesMK)) - , readRange :: !((Maybe (TxIn l), Int) -> m (LedgerTables l ValuesMK)) - -- | Costly read all operation, not to be used in Consensus but only in - -- snapshot-converter executable. - , readAll :: !(m (LedgerTables l ValuesMK)) - -- | Push some diffs into the ledger tables handle. - -- - -- The first argument has to be the ledger state before applying - -- the block, the second argument should be the ledger state after - -- applying a block. See 'CanUpgradeLedgerTables'. - , pushDiffs :: !(forall mk. l mk -> l DiffMK -> m ()) +data LedgerTablesHandle m l = LedgerTablesHandle + { close :: !(m ()) + , duplicate :: !(m (LedgerTablesHandle m l)) + -- ^ It is expected that this operation takes constant time. + , read :: !(LedgerTables l KeysMK -> m (LedgerTables l ValuesMK)) + , readRange :: !((Maybe (TxIn l), Int) -> m (LedgerTables l ValuesMK)) + , readAll :: !(m (LedgerTables l ValuesMK)) + -- ^ Costly read all operation, not to be used in Consensus but only in + -- snapshot-converter executable. + , pushDiffs :: !(forall mk. l mk -> l DiffMK -> m ()) + -- ^ Push some diffs into the ledger tables handle. + -- + -- The first argument has to be the ledger state before applying + -- the block, the second argument should be the ledger state after + -- applying a block. See 'CanUpgradeLedgerTables'. , takeHandleSnapshot :: !(l EmptyMK -> String -> m CRC) - -- | Consult the size of the ledger tables in the database. This will return - -- 'Nothing' in backends that do not support this operation. - , tablesSize :: !(m (Maybe Int)) + , tablesSize :: !(m (Maybe Int)) + -- ^ Consult the size of the ledger tables in the database. 
This will return + -- 'Nothing' in backends that do not support this operation. } deriving NoThunks via OnlyCheckWhnfNamed "LedgerTablesHandle" (LedgerTablesHandle m l) @@ -116,10 +123,11 @@ data LedgerTablesHandle m l = LedgerTablesHandle { -- Therefore it sounds reasonable to hold a @LedgerState blk EmptyMK@ with no -- values, and a @LedgerTables blk ValuesMK@ next to it, that will live its -- entire lifetime as @LedgerTables@ of the @HardForkBlock@. -data StateRef m l = StateRef { - state :: !(l EmptyMK) +data StateRef m l = StateRef + { state :: !(l EmptyMK) , tables :: !(LedgerTablesHandle m l) - } deriving (Generic) + } + deriving Generic deriving instance (IOLike m, NoThunks (l EmptyMK)) => NoThunks (StateRef m l) @@ -137,13 +145,14 @@ instance GetTip l => Anchorable (WithOrigin SlotNo) (StateRef m l) (StateRef m l The LedgerSeq -------------------------------------------------------------------------------} -newtype LedgerSeq m l = LedgerSeq { - getLedgerSeq :: AnchoredSeq (WithOrigin SlotNo) (StateRef m l) (StateRef m l) - } deriving (Generic) +newtype LedgerSeq m l = LedgerSeq + { getLedgerSeq :: AnchoredSeq (WithOrigin SlotNo) (StateRef m l) (StateRef m l) + } + deriving Generic deriving newtype instance (IOLike m, NoThunks (l EmptyMK)) => NoThunks (LedgerSeq m l) -deriving newtype instance Eq (l EmptyMK) => Eq (LedgerSeq m l) +deriving newtype instance Eq (l EmptyMK) => Eq (LedgerSeq m l) deriving newtype instance Show (l EmptyMK) => Show (LedgerSeq m l) type LedgerSeq' m blk = LedgerSeq m (ExtLedgerState blk) @@ -154,24 +163,24 @@ type LedgerSeq' m blk = LedgerSeq m (ExtLedgerState blk) -- | Creates an empty @LedgerSeq@ empty :: - ( GetTip l - , IOLike m - ) - => l EmptyMK - -> LedgerTables l ValuesMK - -> (LedgerTables l ValuesMK -> m ( LedgerTablesHandle m l)) - -> m (LedgerSeq m l) + ( GetTip l + , IOLike m + ) => + l EmptyMK -> + LedgerTables l ValuesMK -> + (LedgerTables l ValuesMK -> m (LedgerTablesHandle m l)) -> + m (LedgerSeq m l) empty st tbs 
new = LedgerSeq . AS.Empty . StateRef st <$> new tbs -- | Creates an empty @LedgerSeq@ empty' :: - ( GetTip l - , IOLike m - , HasLedgerTables l - ) - => l ValuesMK - -> (LedgerTables l ValuesMK -> m (LedgerTablesHandle m l)) - -> m (LedgerSeq m l) + ( GetTip l + , IOLike m + , HasLedgerTables l + ) => + l ValuesMK -> + (LedgerTables l ValuesMK -> m (LedgerTablesHandle m l)) -> + m (LedgerSeq m l) empty' st = empty (forgetLedgerTables st) (ltprj st) closeLedgerSeq :: Monad m => LedgerSeq m l -> m () @@ -186,23 +195,26 @@ closeLedgerSeq = mapM_ (close . tables) . toOldestFirst . getLedgerSeq -- -- The @fst@ component of the result should be closed as it contains the pruned -- states. -reapplyThenPush :: (IOLike m, ApplyBlock l blk) - => ResourceRegistry m - -> LedgerDbCfg l - -> blk - -> LedgerSeq m l - -> m (LedgerSeq m l, LedgerSeq m l) +reapplyThenPush :: + (IOLike m, ApplyBlock l blk) => + ResourceRegistry m -> + LedgerDbCfg l -> + blk -> + LedgerSeq m l -> + m (LedgerSeq m l, LedgerSeq m l) reapplyThenPush rr cfg ap db = - (\current' -> prune (LedgerDbPruneKeeping (ledgerDbCfgSecParam cfg)) $ extend current' db) <$> - reapplyBlock (ledgerDbCfgComputeLedgerEvents cfg) (ledgerDbCfg cfg) ap rr db - -reapplyBlock :: forall m l blk. (ApplyBlock l blk, IOLike m) - => ComputeLedgerEvents - -> LedgerCfg l - -> blk - -> ResourceRegistry m - -> LedgerSeq m l - -> m (StateRef m l) + (\current' -> prune (LedgerDbPruneKeeping (ledgerDbCfgSecParam cfg)) $ extend current' db) + <$> reapplyBlock (ledgerDbCfgComputeLedgerEvents cfg) (ledgerDbCfg cfg) ap rr db + +reapplyBlock :: + forall m l blk. 
+ (ApplyBlock l blk, IOLike m) => + ComputeLedgerEvents -> + LedgerCfg l -> + blk -> + ResourceRegistry m -> + LedgerSeq m l -> + m (StateRef m l) reapplyBlock evs cfg b _rr db = do let ks = getBlockKeySets b StateRef st tbs = currentHandle db @@ -223,23 +235,24 @@ reapplyBlock evs cfg b _rr db = do -- >>> ldb' = LedgerSeq $ AS.fromOldestFirst l1 [l2, l3] -- >>> snd (prune (LedgerDbPruneKeeping (SecurityParam (unsafeNonZero 2))) ldb) == ldb' -- True -prune :: GetTip l - => LedgerDbPrune - -> LedgerSeq m l - -> (LedgerSeq m l, LedgerSeq m l) +prune :: + GetTip l => + LedgerDbPrune -> + LedgerSeq m l -> + (LedgerSeq m l, LedgerSeq m l) prune (LedgerDbPruneKeeping (SecurityParam k)) (LedgerSeq ldb) = - if toEnum nvol <= unNonZero k + if toEnum nvol <= unNonZero k then (LedgerSeq $ Empty (AS.anchor ldb), LedgerSeq ldb) else -- We remove the new anchor from the @fst@ component so that its handle is -- not closed. B.bimap (LedgerSeq . dropNewest 1) LedgerSeq $ AS.splitAt (nvol - fromEnum (unNonZero k)) ldb - where - nvol = AS.length ldb + where + nvol = AS.length ldb prune LedgerDbPruneAll (LedgerSeq ldb) = - B.bimap (LedgerSeq . dropNewest 1) LedgerSeq $ AS.splitAt nvol ldb - where - nvol = AS.length ldb + B.bimap (LedgerSeq . dropNewest 1) LedgerSeq $ AS.splitAt nvol ldb + where + nvol = AS.length ldb -- NOTE: we must inline 'prune' otherwise we get unexplained thunks in -- 'LedgerSeq' and thus a space leak. Alternatively, we could disable the @@ -256,10 +269,11 @@ prune LedgerDbPruneAll (LedgerSeq ldb) = -- >>> LedgerSeq ldb' = extend l4 ldb -- >>> AS.toOldestFirst ldb' == [l1, l2, l3, l4] -- True -extend :: GetTip l - => StateRef m l - -> LedgerSeq m l - -> LedgerSeq m l +extend :: + GetTip l => + StateRef m l -> + LedgerSeq m l -> + LedgerSeq m l extend newState = LedgerSeq . (:> newState) . 
getLedgerSeq @@ -281,9 +295,10 @@ extend newState = -- >>> LedgerSeq ldb' = snd $ pruneToImmTipOnly ldb -- >>> AS.anchor ldb' == l3 && AS.toOldestFirst ldb' == [] -- True -pruneToImmTipOnly :: GetTip l - => LedgerSeq m l - -> (LedgerSeq m l, LedgerSeq m l) +pruneToImmTipOnly :: + GetTip l => + LedgerSeq m l -> + (LedgerSeq m l, LedgerSeq m l) pruneToImmTipOnly = prune LedgerDbPruneAll {------------------------------------------------------------------------------- @@ -299,15 +314,15 @@ pruneToImmTipOnly = prune LedgerDbPruneAll -- >>> fmap (([l1] ==) . AS.toOldestFirst . getLedgerSeq) (rollbackN 2 ldb) -- Just True rollbackN :: - GetTip l - => Word64 - -> LedgerSeq m l - -> Maybe (LedgerSeq m l) + GetTip l => + Word64 -> + LedgerSeq m l -> + Maybe (LedgerSeq m l) rollbackN n ldb - | n <= maxRollback ldb - = Just $ LedgerSeq (AS.dropNewest (fromIntegral n) $ getLedgerSeq ldb) - | otherwise - = Nothing + | n <= maxRollback ldb = + Just $ LedgerSeq (AS.dropNewest (fromIntegral n) $ getLedgerSeq ldb) + | otherwise = + Nothing {------------------------------------------------------------------------------- Queries @@ -346,7 +361,7 @@ anchorHandle = AS.anchor . getLedgerSeq -- True snapshots :: LedgerSeq m l -> [(Word64, l EmptyMK)] snapshots = - zip [0..] + zip [0 ..] . map state . AS.toNewestFirst . getLedgerSeq @@ -358,9 +373,9 @@ snapshots = -- 3 maxRollback :: GetTip l => LedgerSeq m l -> Word64 maxRollback = - fromIntegral - . AS.length - . getLedgerSeq + fromIntegral + . AS.length + . getLedgerSeq -- | Reference to the block at the tip of the chain -- @@ -379,7 +394,7 @@ tip = castPoint . getTip . 
current -- False isSaturated :: GetTip l => SecurityParam -> LedgerSeq m l -> Bool isSaturated (SecurityParam k) db = - maxRollback db >= unNonZero k + maxRollback db >= unNonZero k -- | Get a past ledger state -- @@ -394,12 +409,14 @@ isSaturated (SecurityParam k) db = -- >>> getPastLedgerAt (Point (At (Block 1 1)) :: Point B) ldb == Just l2s -- True getPastLedgerAt :: - ( HasHeader blk, GetTip l, HeaderHash l ~ HeaderHash blk - , StandardHash l - ) - => Point blk - -> LedgerSeq m l - -> Maybe (l EmptyMK) + ( HasHeader blk + , GetTip l + , HeaderHash l ~ HeaderHash blk + , StandardHash l + ) => + Point blk -> + LedgerSeq m l -> + Maybe (l EmptyMK) getPastLedgerAt pt db = current <$> rollback pt db -- | Roll back the volatile states up to the specified point. @@ -414,16 +431,16 @@ getPastLedgerAt pt db = current <$> rollback pt db -- >>> AS.anchor ldb' == l0 && AS.toOldestFirst ldb' == [l1, l2] -- True rollbackToPoint :: - ( StandardHash l - , GetTip l - ) - => Point l -> LedgerSeq m l -> Maybe (LedgerSeq m l) + ( StandardHash l + , GetTip l + ) => + Point l -> LedgerSeq m l -> Maybe (LedgerSeq m l) rollbackToPoint pt (LedgerSeq ldb) = do - LedgerSeq <$> - AS.rollback - (pointSlot pt) - ((== pt) . getTip . either state state) - ldb + LedgerSeq + <$> AS.rollback + (pointSlot pt) + ((== pt) . getTip . either state state) + ldb -- | Rollback the volatile states up to the volatile anchor. -- @@ -432,10 +449,10 @@ rollbackToPoint pt (LedgerSeq ldb) = do -- >>> AS.anchor ldb' == l0 && AS.toOldestFirst ldb' == [] -- True rollbackToAnchor :: - GetTip l - => LedgerSeq m l -> LedgerSeq m l + GetTip l => + LedgerSeq m l -> LedgerSeq m l rollbackToAnchor (LedgerSeq vol) = - LedgerSeq (AS.Empty (AS.anchor vol)) + LedgerSeq (AS.Empty (AS.anchor vol)) -- | Get a prefix of the LedgerDB that ends at the given point -- @@ -444,36 +461,38 @@ rollbackToAnchor (LedgerSeq vol) = -- When no ledger state (or anchor) has the given 'Point', 'Nothing' is -- returned. 
rollback :: - ( HasHeader blk, GetTip l, HeaderHash l ~ HeaderHash blk - , StandardHash l - ) - => Point blk - -> LedgerSeq m l - -> Maybe (LedgerSeq m l) + ( HasHeader blk + , GetTip l + , HeaderHash l ~ HeaderHash blk + , StandardHash l + ) => + Point blk -> + LedgerSeq m l -> + Maybe (LedgerSeq m l) rollback pt db - | pt == castPoint (getTip (anchor db)) - = Just $ rollbackToAnchor db - | otherwise - = rollbackToPoint (castPoint pt) db + | pt == castPoint (getTip (anchor db)) = + Just $ rollbackToAnchor db + | otherwise = + rollbackToPoint (castPoint pt) db immutableTipSlot :: - GetTip l - => LedgerSeq m l -> WithOrigin SlotNo + GetTip l => + LedgerSeq m l -> WithOrigin SlotNo immutableTipSlot = - getTipSlot + getTipSlot . state . AS.anchor . getLedgerSeq -- | Transform the underlying volatile 'AnchoredSeq' using the given functions. volatileStatesBimap :: - AS.Anchorable (WithOrigin SlotNo) a b - => (l EmptyMK -> a) - -> (l EmptyMK -> b) - -> LedgerSeq m l - -> AS.AnchoredSeq (WithOrigin SlotNo) a b + AS.Anchorable (WithOrigin SlotNo) a b => + (l EmptyMK -> a) -> + (l EmptyMK -> b) -> + LedgerSeq m l -> + AS.AnchoredSeq (WithOrigin SlotNo) a b volatileStatesBimap f g = - AS.bimap (f . state) (g . state) + AS.bimap (f . state) (g . state) . getLedgerSeq {------------------------------------------------------------------------------- diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/Serialisation.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/Serialisation.hs index a72d38df23..c7523e418b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/Serialisation.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/Serialisation.hs @@ -24,15 +24,17 @@ -- have the precise bytestring that we can pass in as the annotation). If we -- coupled the encoder to the decoder, we wouldn't be able to cleanly model -- this use case. 
Moreover, sometimes we only need a single direction. -module Ouroboros.Consensus.Storage.Serialisation ( - -- * Serialisation to/from disk storage +module Ouroboros.Consensus.Storage.Serialisation + ( -- * Serialisation to/from disk storage DecodeDisk (..) , EncodeDisk (..) + -- * Support for dependent pairs , DecodeDiskDep (..) , DecodeDiskDepIx (..) , EncodeDiskDep (..) , EncodeDiskDepIx (..) + -- * Serialised header , SerialisedHeader (..) , castSerialisedHeader @@ -40,39 +42,50 @@ module Ouroboros.Consensus.Storage.Serialisation ( , encodeTrivialSerialisedHeader , serialisedHeaderFromPair , serialisedHeaderToPair + -- * Reconstruct nested type , PrefixLen (..) , ReconstructNestedCtxt (..) , addPrefixLen , takePrefix + -- * Binary block info , BinaryBlockInfo (..) , HasBinaryBlockInfo (..) + -- * Re-exported for convenience , SizeInBytes + -- * Exported for the benefit of tests , decodeDepPair , encodeDepPair ) where -import Cardano.Binary (enforceSize) -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding) -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise -import qualified Data.ByteString.Lazy as Lazy -import Data.ByteString.Short (ShortByteString) -import Data.SOP.BasicFunctors -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..), - PrefixLen (..), addPrefixLen, takePrefix) -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (ShowProxy (..)) -import Ouroboros.Consensus.Util.RedundantConstraints -import Ouroboros.Network.Block (Serialised (..), fromSerialised, - mkSerialised) -import Ouroboros.Network.SizeInBytes (SizeInBytes) +import Cardano.Binary (enforceSize) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise +import Data.ByteString.Lazy qualified as Lazy +import Data.ByteString.Short (ShortByteString) 
+import Data.SOP.BasicFunctors +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.Common + ( BinaryBlockInfo (..) + , PrefixLen (..) + , addPrefixLen + , takePrefix + ) +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (ShowProxy (..)) +import Ouroboros.Consensus.Util.RedundantConstraints +import Ouroboros.Network.Block + ( Serialised (..) + , fromSerialised + , mkSerialised + ) +import Ouroboros.Network.SizeInBytes (SizeInBytes) {------------------------------------------------------------------------------- Serialisation to/from disk storage @@ -85,12 +98,11 @@ import Ouroboros.Network.SizeInBytes (SizeInBytes) -- compatibility. class EncodeDisk blk a where encodeDisk :: CodecConfig blk -> a -> Encoding - -- When the config is not needed, we provide a default implementation using -- 'Serialise' - default encodeDisk - :: Serialise a - => CodecConfig blk -> a -> Encoding + default encodeDisk :: + Serialise a => + CodecConfig blk -> a -> Encoding encodeDisk _ccfg = encode -- | Decode a type @a@ read from disk. @@ -100,12 +112,11 @@ class EncodeDisk blk a where -- compatibility. class DecodeDisk blk a where decodeDisk :: CodecConfig blk -> forall s. Decoder s a - -- When the config is not needed, we provide a default implementation using -- 'Serialise' - default decodeDisk - :: Serialise a - => CodecConfig blk -> forall s. Decoder s a + default decodeDisk :: + Serialise a => + CodecConfig blk -> forall s. 
Decoder s a decodeDisk _ccfg = decode {------------------------------------------------------------------------------- @@ -115,32 +126,29 @@ class DecodeDisk blk a where -- | Encode dependent index class EncodeDiskDepIx f blk where encodeDiskDepIx :: CodecConfig blk -> SomeSecond f blk -> Encoding - - default encodeDiskDepIx - :: TrivialDependency (f blk) - => CodecConfig blk -> SomeSecond f blk -> Encoding + default encodeDiskDepIx :: + TrivialDependency (f blk) => + CodecConfig blk -> SomeSecond f blk -> Encoding encodeDiskDepIx _ _ = encode () - where - _ = keepRedundantConstraint (Proxy @(TrivialDependency (f blk))) + where + _ = keepRedundantConstraint (Proxy @(TrivialDependency (f blk))) -- | Encode a dependent value class EncodeDiskDep f blk where encodeDiskDep :: CodecConfig blk -> f blk a -> a -> Encoding - - default encodeDiskDep - :: ( TrivialDependency (f blk) - , EncodeDisk blk (TrivialIndex (f blk)) - ) - => CodecConfig blk -> f blk a -> a -> Encoding + default encodeDiskDep :: + ( TrivialDependency (f blk) + , EncodeDisk blk (TrivialIndex (f blk)) + ) => + CodecConfig blk -> f blk a -> a -> Encoding encodeDiskDep cfg ctxt = encodeDisk cfg . fromTrivialDependency ctxt -- | Decode dependent index class DecodeDiskDepIx f blk where decodeDiskDepIx :: CodecConfig blk -> Decoder s (SomeSecond f blk) - - default decodeDiskDepIx - :: TrivialDependency (f blk) - => CodecConfig blk -> Decoder s (SomeSecond f blk) + default decodeDiskDepIx :: + TrivialDependency (f blk) => + CodecConfig blk -> Decoder s (SomeSecond f blk) decodeDiskDepIx _ = (\() -> SomeSecond indexIsTrivial) <$> decode -- | Decode a dependent value @@ -148,52 +156,60 @@ class DecodeDiskDepIx f blk where -- Typical usage: @f = NestedCtxt Header@. class DecodeDiskDep f blk where decodeDiskDep :: CodecConfig blk -> f blk a -> forall s. 
Decoder s (Lazy.ByteString -> a) - - default decodeDiskDep - :: ( TrivialDependency (f blk) - , DecodeDisk blk (Lazy.ByteString -> TrivialIndex (f blk)) - ) - => CodecConfig blk -> f blk a -> forall s. Decoder s (Lazy.ByteString -> a) + default decodeDiskDep :: + ( TrivialDependency (f blk) + , DecodeDisk blk (Lazy.ByteString -> TrivialIndex (f blk)) + ) => + CodecConfig blk -> f blk a -> forall s. Decoder s (Lazy.ByteString -> a) decodeDiskDep cfg ctxt = - (\f -> toTrivialDependency ctxt . f) <$> decodeDisk cfg + (\f -> toTrivialDependency ctxt . f) <$> decodeDisk cfg -instance (EncodeDiskDepIx f blk, EncodeDiskDep f blk) - => EncodeDisk blk (DepPair (f blk)) where +instance + (EncodeDiskDepIx f blk, EncodeDiskDep f blk) => + EncodeDisk blk (DepPair (f blk)) + where encodeDisk ccfg = encodeDisk ccfg . encodeDepPair ccfg -instance (DecodeDiskDepIx f blk, DecodeDiskDep f blk) - => DecodeDisk blk (DepPair (f blk)) where +instance + (DecodeDiskDepIx f blk, DecodeDiskDep f blk) => + DecodeDisk blk (DepPair (f blk)) + where decodeDisk ccfg = decodeDisk ccfg >>= decodeDepPair ccfg {------------------------------------------------------------------------------- Internal: support for serialisation of dependent pairs -------------------------------------------------------------------------------} -encodeDepPair :: EncodeDiskDep f blk - => CodecConfig blk - -> DepPair (f blk) -> GenDepPair Serialised (f blk) +encodeDepPair :: + EncodeDiskDep f blk => + CodecConfig blk -> + DepPair (f blk) -> + GenDepPair Serialised (f blk) encodeDepPair ccfg (DepPair fa a) = - GenDepPair fa (mkSerialised (encodeDiskDep ccfg fa) a) + GenDepPair fa (mkSerialised (encodeDiskDep ccfg fa) a) -decodeDepPair :: DecodeDiskDep f blk - => CodecConfig blk - -> GenDepPair Serialised (f blk) -> Decoder s (DepPair (f blk)) +decodeDepPair :: + DecodeDiskDep f blk => + CodecConfig blk -> + GenDepPair Serialised (f blk) -> + Decoder s (DepPair (f blk)) decodeDepPair ccfg (GenDepPair fa serialised) = - 
DepPair fa <$> fromSerialised (decodeDiskDep ccfg fa) serialised + DepPair fa <$> fromSerialised (decodeDiskDep ccfg fa) serialised instance EncodeDiskDepIx f blk => EncodeDisk blk (GenDepPair Serialised (f blk)) where - encodeDisk ccfg (GenDepPair fa serialised) = mconcat [ - CBOR.encodeListLen 2 + encodeDisk ccfg (GenDepPair fa serialised) = + mconcat + [ CBOR.encodeListLen 2 , encodeDiskDepIx ccfg (SomeSecond fa) , encode serialised ] instance DecodeDiskDepIx f blk => DecodeDisk blk (GenDepPair Serialised (f blk)) where decodeDisk ccfg = do - enforceSize "DecodeDisk GenDepPair" 2 - SomeSecond fa <- decodeDiskDepIx ccfg - serialised <- decode - return $ GenDepPair fa serialised + enforceSize "DecodeDisk GenDepPair" 2 + SomeSecond fa <- decodeDiskDepIx ccfg + serialised <- decode + return $ GenDepPair fa serialised {------------------------------------------------------------------------------- Serialised header @@ -207,71 +223,81 @@ instance DecodeDiskDepIx f blk => DecodeDisk blk (GenDepPair Serialised (f blk)) -- -- The 'SerialiseNodeToNodeDep' for 'Header' will decide how to actually -- encode this. 
-newtype SerialisedHeader blk = SerialisedHeaderFromDepPair { - serialisedHeaderToDepPair :: GenDepPair Serialised (NestedCtxt Header blk) - } +newtype SerialisedHeader blk = SerialisedHeaderFromDepPair + { serialisedHeaderToDepPair :: GenDepPair Serialised (NestedCtxt Header blk) + } deriving instance HasNestedContent Header blk => Show (SerialisedHeader blk) instance ShowProxy blk => ShowProxy (SerialisedHeader blk) where - showProxy _ = "SerialisedHeader " ++ showProxy (Proxy :: Proxy blk) + showProxy _ = "SerialisedHeader " ++ showProxy (Proxy :: Proxy blk) -- | Only needed for the 'ChainSyncServer' type instance HeaderHash (SerialisedHeader blk) = HeaderHash blk + instance StandardHash blk => StandardHash (SerialisedHeader blk) serialisedHeaderToPair :: - SerialisedHeader blk - -> (SomeSecond (NestedCtxt Header) blk, Lazy.ByteString) + SerialisedHeader blk -> + (SomeSecond (NestedCtxt Header) blk, Lazy.ByteString) serialisedHeaderToPair hdr = - case serialisedHeaderToDepPair hdr of - GenDepPair ctxt (Serialised bs) -> (SomeSecond ctxt, bs) + case serialisedHeaderToDepPair hdr of + GenDepPair ctxt (Serialised bs) -> (SomeSecond ctxt, bs) serialisedHeaderFromPair :: - (SomeSecond (NestedCtxt Header) blk, Lazy.ByteString) - -> SerialisedHeader blk + (SomeSecond (NestedCtxt Header) blk, Lazy.ByteString) -> + SerialisedHeader blk serialisedHeaderFromPair (SomeSecond ctxt, bs) = - SerialisedHeaderFromDepPair $ - GenDepPair ctxt (Serialised bs) + SerialisedHeaderFromDepPair $ + GenDepPair ctxt (Serialised bs) castSerialisedHeader :: - (forall a. NestedCtxt_ blk Header a -> NestedCtxt_ blk' Header a) - -> SerialisedHeader blk -> SerialisedHeader blk' + (forall a. NestedCtxt_ blk Header a -> NestedCtxt_ blk' Header a) -> + SerialisedHeader blk -> + SerialisedHeader blk' castSerialisedHeader f = - SerialisedHeaderFromDepPair + SerialisedHeaderFromDepPair . depPairFirst (castNestedCtxt f) . 
serialisedHeaderToDepPair -instance EncodeDiskDepIx (NestedCtxt Header) blk - => EncodeDisk blk (SerialisedHeader blk) where +instance + EncodeDiskDepIx (NestedCtxt Header) blk => + EncodeDisk blk (SerialisedHeader blk) + where encodeDisk ccfg = encodeDisk ccfg . serialisedHeaderToDepPair -instance DecodeDiskDepIx (NestedCtxt Header) blk - => DecodeDisk blk (SerialisedHeader blk) where +instance + DecodeDiskDepIx (NestedCtxt Header) blk => + DecodeDisk blk (SerialisedHeader blk) + where decodeDisk ccfg = SerialisedHeaderFromDepPair <$> decodeDisk ccfg -- | Encode the header without the 'NestedCtxt' -- -- Uses CBOR-in-CBOR encodeTrivialSerialisedHeader :: - forall blk. TrivialDependency (NestedCtxt_ blk Header) - => SerialisedHeader blk -> Encoding + forall blk. + TrivialDependency (NestedCtxt_ blk Header) => + SerialisedHeader blk -> Encoding encodeTrivialSerialisedHeader = - encode + encode . Serialised . snd . serialisedHeaderToPair - where - _ = keepRedundantConstraint (Proxy @(TrivialDependency (NestedCtxt_ blk Header))) + where + _ = keepRedundantConstraint (Proxy @(TrivialDependency (NestedCtxt_ blk Header))) -- | Inverse to 'encodeTrivialSerialisedHeader' decodeTrivialSerialisedHeader :: - forall blk. TrivialDependency (NestedCtxt_ blk Header) - => forall s. Decoder s (SerialisedHeader blk) + forall blk. + TrivialDependency (NestedCtxt_ blk Header) => + forall s. + Decoder s (SerialisedHeader blk) decodeTrivialSerialisedHeader = - ( serialisedHeaderFromPair - . (SomeSecond (NestedCtxt indexIsTrivial), ) - . unSerialised - ) <$> decode + ( serialisedHeaderFromPair + . (SomeSecond (NestedCtxt indexIsTrivial),) + . unSerialised + ) + <$> decode {------------------------------------------------------------------------------- Reconstruct nested type @@ -288,26 +314,30 @@ class HasNestedContent f blk => ReconstructNestedCtxt f blk where -- -- TODO: Allow to fail. 
reconstructNestedCtxt :: - proxy (f blk) - -> ShortByteString -- ^ First bytes ('reconstructPrefixLen') of the block - -> SizeInBytes -- ^ Block size - -> SomeSecond (NestedCtxt f) blk + proxy (f blk) -> + -- | First bytes ('reconstructPrefixLen') of the block + ShortByteString -> + -- | Block size + SizeInBytes -> + SomeSecond (NestedCtxt f) blk -- Defaults if there is only one type default reconstructPrefixLen :: - TrivialDependency (NestedCtxt_ blk f) - => proxy (f blk) -> PrefixLen + TrivialDependency (NestedCtxt_ blk f) => + proxy (f blk) -> PrefixLen reconstructPrefixLen _ = PrefixLen 0 - where - _ = keepRedundantConstraint (Proxy @(TrivialDependency (NestedCtxt_ blk f))) + where + _ = keepRedundantConstraint (Proxy @(TrivialDependency (NestedCtxt_ blk f))) default reconstructNestedCtxt :: - TrivialDependency (NestedCtxt_ blk f) - => proxy (f blk) - -> ShortByteString -- ^ First bytes ('reconstructPrefixLen') of the block - -> SizeInBytes -- ^ Block size - -> SomeSecond (NestedCtxt f) blk + TrivialDependency (NestedCtxt_ blk f) => + proxy (f blk) -> + -- | First bytes ('reconstructPrefixLen') of the block + ShortByteString -> + -- | Block size + SizeInBytes -> + SomeSecond (NestedCtxt f) blk reconstructNestedCtxt _ _ _ = SomeSecond indexIsTrivial {------------------------------------------------------------------------------- @@ -323,22 +353,32 @@ class HasBinaryBlockInfo blk where Forwarding instances -------------------------------------------------------------------------------} -instance EncodeDisk blk (ChainDepState (BlockProtocol blk)) - => EncodeDisk blk (WrapChainDepState blk) where +instance + EncodeDisk blk (ChainDepState (BlockProtocol blk)) => + EncodeDisk blk (WrapChainDepState blk) + where encodeDisk cfg (WrapChainDepState st) = encodeDisk cfg st -instance DecodeDisk blk (ChainDepState (BlockProtocol blk)) - => DecodeDisk blk (WrapChainDepState blk) where +instance + DecodeDisk blk (ChainDepState (BlockProtocol blk)) => + DecodeDisk blk 
(WrapChainDepState blk) + where decodeDisk cfg = WrapChainDepState <$> decodeDisk cfg -instance EncodeDisk blk blk - => EncodeDisk blk (I blk) where +instance + EncodeDisk blk blk => + EncodeDisk blk (I blk) + where encodeDisk cfg (I b) = encodeDisk cfg b -instance DecodeDisk blk blk - => DecodeDisk blk (I blk) where +instance + DecodeDisk blk blk => + DecodeDisk blk (I blk) + where decodeDisk cfg = I <$> decodeDisk cfg -instance DecodeDisk blk (a -> f blk) - => DecodeDisk blk (((->) a :.: f) blk) where +instance + DecodeDisk blk (a -> f blk) => + DecodeDisk blk (((->) a :.: f) blk) + where decodeDisk cfg = Comp <$> decodeDisk cfg diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB.hs index 466a1d789a..c9a8f3c4b1 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB.hs @@ -1,4 +1,4 @@ module Ouroboros.Consensus.Storage.VolatileDB (module X) where -import Ouroboros.Consensus.Storage.VolatileDB.API as X -import Ouroboros.Consensus.Storage.VolatileDB.Impl as X +import Ouroboros.Consensus.Storage.VolatileDB.API as X +import Ouroboros.Consensus.Storage.VolatileDB.Impl as X diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/API.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/API.hs index 20cbce8849..6806541498 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/API.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/API.hs @@ -10,15 +10,18 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Storage.VolatileDB.API ( - -- * API +module Ouroboros.Consensus.Storage.VolatileDB.API + ( -- * API VolatileDB 
(..) + -- * Types , BlockInfo (..) + -- * Errors , ApiMisuse (..) , UnexpectedFailure (..) , VolatileDBError (..) + -- * Derived functionality , getIsMember , getKnownBlockComponent @@ -26,118 +29,120 @@ module Ouroboros.Consensus.Storage.VolatileDB.API ( , withDB ) where -import qualified Codec.CBOR.Read as CBOR -import qualified Data.ByteString.Lazy as Lazy -import Data.Maybe (isJust) -import Data.Set (Set) -import Data.Typeable (Typeable) -import Data.Word (Word16) -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import NoThunks.Class (OnlyCheckWhnfNamed (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.Common (BlockComponent (..)) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Block (MaxSlotNo) -import System.FS.API.Types (FsError, FsPath) +import Codec.CBOR.Read qualified as CBOR +import Data.ByteString.Lazy qualified as Lazy +import Data.Maybe (isJust) +import Data.Set (Set) +import Data.Typeable (Typeable) +import Data.Word (Word16) +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import NoThunks.Class (OnlyCheckWhnfNamed (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.Common (BlockComponent (..)) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Block (MaxSlotNo) +import System.FS.API.Types (FsError, FsPath) {------------------------------------------------------------------------------- API -------------------------------------------------------------------------------} -data VolatileDB m blk = VolatileDB { - -- | Close the VolatileDB. - -- - -- NOTE: idempotent after a manual closure, but not after an automatic - -- closure in case of an 'UnexpectedFailure'. In that case, closing it - -- again will cause a 'ClosedDBError' wrapping the original - -- 'UnexpectedFailure' to be thrown. - closeDB :: HasCallStack => m () - -- | Return the request block component for the block with the given - -- hash. 
When not in the VolatileDB, 'Nothing' is returned. - , getBlockComponent :: forall b. HasCallStack - => BlockComponent blk b - -> HeaderHash blk - -> m (Maybe b) - -- | Store the given block in the VolatileDB. - -- - -- Returns after the block has been written to disk. - , putBlock :: HasCallStack => blk -> m () - -- | Return a function that returns the successors of the block with the - -- given hash. - -- - -- This function will return a non-empty set for any block of which a - -- successor has been added to the VolatileDB and will return an empty - -- set if no successors for the given block have been added to the - -- VolatileDB (yet). - -- - -- Note that it is not required that the given block has been added to - -- the VolatileDB. - , filterByPredecessor :: HasCallStack => STM m (ChainHash blk -> Set (HeaderHash blk)) - -- | Return a function that returns the 'BlockInfo' of the block with - -- the given hash or 'Nothing' if the block is not found in the - -- VolatileDB. - , getBlockInfo :: HasCallStack => STM m (HeaderHash blk -> Maybe (BlockInfo blk)) - -- | Try to remove all blocks with a slot number less than the given - -- one. - -- - -- = Context - -- - -- When the current chain changes, blocks older than @k@, i.e., blocks - -- that are followed by @k@ blocks or more, become /immutable/. Whenever - -- this happens, we schedule a garbage collection on the VolatileDB that - -- will try to remove blocks older than the most recent immutable block, - -- as such blocks will never be adopted. There's no point in storing - -- them anymore. - -- - -- = Block number vs slot number - -- - -- While we typically talk in terms of /block numbers/ when discussing - -- immutability, i.e., /@k@ blocks/, we use /slot number/ for garbage - -- collection. We schedule a garbage collection for blocks with a /slot - -- number/ less than the slot number of the immutable block, as opposed - -- to the block number. 
The reason for this is that the VolatileDB is - -- not aware of block numbers, only of slot numbers. - -- - -- By using slot numbers for garbage collection, we might not /yet/ have - -- garbage collected some blocks that could never be adopted again and - -- that we would have garbage collected when using block numbers. This - -- is harmless. The opposite direction is more important and - -- problematic: garbage collecting a block that we might want to adopt - -- after all. Say we have mistakenly garbage collected such a block, in - -- that case the following would be true: - -- - -- 1. The block has a slot number older than the immutable block's slot - -- number: otherwise we wouldn't have mistakenly garbage collected - -- it. - -- - -- 2. The block has a block number greater than the immutable block's - -- block number: otherwise we wouldn't want to adopt it, as it would - -- have been older than @k@. - -- - -- 3. The block is a part of a fork fitting on the immutable block. As - -- we cannot roll back this block, all forks we could ever adopt - -- would have to go through this block. - -- - -- As slot numbers grow monotonically within a chain, all forks starting - -- after the immutable block will only contain blocks with slot numbers - -- greater (or equal to in case of EBBs) than the immutable block's slot - -- number. This directly contradicts (1), so we will /never/ garbage - -- collect a block that we might still want to adopt. - -- - -- = Less than vs. less than or equal to - -- - -- Note that we remove blocks with a slot number /less than/ the given - -- slot number, but not /equal to/ it. In practice, this off-by-one - -- difference will not matter in terms of disk space usage, because as - -- soon as the chain grows again by at least one block, those blocks - -- will be removed anyway. The reason for @<@ opposed to @<=@ is to - -- avoid issues with /EBBs/, which have the same slot number as the - -- block after it. 
- , garbageCollect :: HasCallStack => SlotNo -> m () - -- | Return the highest slot number ever stored by the VolatileDB. - , getMaxSlotNo :: HasCallStack => STM m MaxSlotNo - } +data VolatileDB m blk = VolatileDB + { closeDB :: HasCallStack => m () + -- ^ Close the VolatileDB. + -- + -- NOTE: idempotent after a manual closure, but not after an automatic + -- closure in case of an 'UnexpectedFailure'. In that case, closing it + -- again will cause a 'ClosedDBError' wrapping the original + -- 'UnexpectedFailure' to be thrown. + , getBlockComponent :: + forall b. + HasCallStack => + BlockComponent blk b -> + HeaderHash blk -> + m (Maybe b) + -- ^ Return the request block component for the block with the given + -- hash. When not in the VolatileDB, 'Nothing' is returned. + , putBlock :: HasCallStack => blk -> m () + -- ^ Store the given block in the VolatileDB. + -- + -- Returns after the block has been written to disk. + , filterByPredecessor :: HasCallStack => STM m (ChainHash blk -> Set (HeaderHash blk)) + -- ^ Return a function that returns the successors of the block with the + -- given hash. + -- + -- This function will return a non-empty set for any block of which a + -- successor has been added to the VolatileDB and will return an empty + -- set if no successors for the given block have been added to the + -- VolatileDB (yet). + -- + -- Note that it is not required that the given block has been added to + -- the VolatileDB. + , getBlockInfo :: HasCallStack => STM m (HeaderHash blk -> Maybe (BlockInfo blk)) + -- ^ Return a function that returns the 'BlockInfo' of the block with + -- the given hash or 'Nothing' if the block is not found in the + -- VolatileDB. + , garbageCollect :: HasCallStack => SlotNo -> m () + -- ^ Try to remove all blocks with a slot number less than the given + -- one. + -- + -- = Context + -- + -- When the current chain changes, blocks older than @k@, i.e., blocks + -- that are followed by @k@ blocks or more, become /immutable/. 
Whenever + -- this happens, we schedule a garbage collection on the VolatileDB that + -- will try to remove blocks older than the most recent immutable block, + -- as such blocks will never be adopted. There's no point in storing + -- them anymore. + -- + -- = Block number vs slot number + -- + -- While we typically talk in terms of /block numbers/ when discussing + -- immutability, i.e., /@k@ blocks/, we use /slot number/ for garbage + -- collection. We schedule a garbage collection for blocks with a /slot + -- number/ less than the slot number of the immutable block, as opposed + -- to the block number. The reason for this is that the VolatileDB is + -- not aware of block numbers, only of slot numbers. + -- + -- By using slot numbers for garbage collection, we might not /yet/ have + -- garbage collected some blocks that could never be adopted again and + -- that we would have garbage collected when using block numbers. This + -- is harmless. The opposite direction is more important and + -- problematic: garbage collecting a block that we might want to adopt + -- after all. Say we have mistakenly garbage collected such a block, in + -- that case the following would be true: + -- + -- 1. The block has a slot number older than the immutable block's slot + -- number: otherwise we wouldn't have mistakenly garbage collected + -- it. + -- + -- 2. The block has a block number greater than the immutable block's + -- block number: otherwise we wouldn't want to adopt it, as it would + -- have been older than @k@. + -- + -- 3. The block is a part of a fork fitting on the immutable block. As + -- we cannot roll back this block, all forks we could ever adopt + -- would have to go through this block. + -- + -- As slot numbers grow monotonically within a chain, all forks starting + -- after the immutable block will only contain blocks with slot numbers + -- greater (or equal to in case of EBBs) than the immutable block's slot + -- number. 
This directly contradicts (1), so we will /never/ garbage + -- collect a block that we might still want to adopt. + -- + -- = Less than vs. less than or equal to + -- + -- Note that we remove blocks with a slot number /less than/ the given + -- slot number, but not /equal to/ it. In practice, this off-by-one + -- difference will not matter in terms of disk space usage, because as + -- soon as the chain grows again by at least one block, those blocks + -- will be removed anyway. The reason for @<@ opposed to @<=@ is to + -- avoid issues with /EBBs/, which have the same slot number as the + -- block after it. + , getMaxSlotNo :: HasCallStack => STM m MaxSlotNo + -- ^ Return the highest slot number ever stored by the VolatileDB. + } deriving NoThunks via OnlyCheckWhnfNamed "VolatileDB" (VolatileDB m blk) {------------------------------------------------------------------------------ @@ -145,15 +150,15 @@ data VolatileDB m blk = VolatileDB { ------------------------------------------------------------------------------} -- | The information that the user has to provide for each new block. -data BlockInfo blk = BlockInfo { - biHash :: !(HeaderHash blk) - , biSlotNo :: !SlotNo - , biBlockNo :: !BlockNo - , biPrevHash :: !(ChainHash blk) - , biIsEBB :: !IsEBB - , biHeaderOffset :: !Word16 - , biHeaderSize :: !Word16 - } +data BlockInfo blk = BlockInfo + { biHash :: !(HeaderHash blk) + , biSlotNo :: !SlotNo + , biBlockNo :: !BlockNo + , biPrevHash :: !(ChainHash blk) + , biIsEBB :: !IsEBB + , biHeaderOffset :: !Word16 + , biHeaderSize :: !Word16 + } deriving (Eq, Show, Generic, NoThunks) {------------------------------------------------------------------------------ @@ -161,51 +166,46 @@ data BlockInfo blk = BlockInfo { ------------------------------------------------------------------------------} -- | Errors which might arise when working with this database. 
-data VolatileDBError blk = - -- | An error thrown because of incorrect usage of the VolatileDB +data VolatileDBError blk + = -- | An error thrown because of incorrect usage of the VolatileDB -- by the user. ApiMisuse ApiMisuse - - -- | An unexpected failure thrown because something went wrong. - | UnexpectedFailure (UnexpectedFailure blk) + | -- | An unexpected failure thrown because something went wrong. + UnexpectedFailure (UnexpectedFailure blk) deriving instance (StandardHash blk, Typeable blk) => Show (VolatileDBError blk) instance (StandardHash blk, Typeable blk) => Exception (VolatileDBError blk) where displayException = \case - ApiMisuse {} -> - "VolatileDB incorrectly used, indicative of a bug" - UnexpectedFailure (FileSystemError fse) -> - displayException fse - UnexpectedFailure {} -> - "The VolatileDB got corrupted, full validation will be enabled for the next startup" + ApiMisuse{} -> + "VolatileDB incorrectly used, indicative of a bug" + UnexpectedFailure (FileSystemError fse) -> + displayException fse + UnexpectedFailure{} -> + "The VolatileDB got corrupted, full validation will be enabled for the next startup" -newtype ApiMisuse = - -- | The VolatileDB was closed. In case it was automatically closed +newtype ApiMisuse + = -- | The VolatileDB was closed. In case it was automatically closed -- because an unexpected error was thrown during a read operation or any -- exception was thrown during a write operation, that exception is -- embedded. 
ClosedDBError (Maybe SomeException) - deriving (Show) + deriving Show -data UnexpectedFailure blk = - FileSystemError FsError - - -- | A block failed to parse - | ParseError FsPath (RealPoint blk) CBOR.DeserialiseFailure - - -- | When parsing a block we got some trailing data - | TrailingDataError FsPath (RealPoint blk) Lazy.ByteString - - -- | Block missing +data UnexpectedFailure blk + = FileSystemError FsError + | -- | A block failed to parse + ParseError FsPath (RealPoint blk) CBOR.DeserialiseFailure + | -- | When parsing a block we got some trailing data + TrailingDataError FsPath (RealPoint blk) Lazy.ByteString + | -- | Block missing -- -- This exception gets thrown when a block that we /know/ it should be in -- the VolatileDB, nonetheless was not found. -- -- This exception will be thrown by @getKnownBlockComponent@. - | MissingBlockError (HeaderHash blk) - - -- | A (parsed) block did not pass the integrity check. + MissingBlockError (HeaderHash blk) + | -- | A (parsed) block did not pass the integrity check. -- -- This exception gets thrown when a block doesn't pass the integrity check -- done for 'GetVerifiedBlock'. @@ -214,7 +214,7 @@ data UnexpectedFailure blk = -- VolatileDB. While this exception typically means the block has been -- corrupted, it could also mean the block didn't pass the check at the time -- it was added. - | CorruptBlockError (HeaderHash blk) + CorruptBlockError (HeaderHash blk) deriving instance (Typeable blk, StandardHash blk) => Show (UnexpectedFailure blk) @@ -226,45 +226,46 @@ deriving instance (Typeable blk, StandardHash blk) => Show (UnexpectedFailure bl -- using the database, and closes the database using its 'closeDB' function, -- in case of success or when an exception was raised. 
withDB :: - (HasCallStack, MonadThrow m) - => m (VolatileDB m blk) - -- ^ How to open the database - -> (VolatileDB m blk -> m a) - -- ^ Action to perform using the database - -> m a + (HasCallStack, MonadThrow m) => + -- | How to open the database + m (VolatileDB m blk) -> + -- | Action to perform using the database + (VolatileDB m blk -> m a) -> + m a withDB openDB = bracket openDB closeDB getIsMember :: - Functor (STM m) - => VolatileDB m blk - -> STM m (HeaderHash blk -> Bool) + Functor (STM m) => + VolatileDB m blk -> + STM m (HeaderHash blk -> Bool) getIsMember = fmap (isJust .) . getBlockInfo getPredecessor :: - Functor (STM m) - => VolatileDB m blk - -> STM m (HeaderHash blk -> Maybe (ChainHash blk)) + Functor (STM m) => + VolatileDB m blk -> + STM m (HeaderHash blk -> Maybe (ChainHash blk)) getPredecessor = fmap (fmap biPrevHash .) . getBlockInfo getKnownBlockComponent :: - (MonadThrow m, HasHeader blk) - => VolatileDB m blk - -> BlockComponent blk b - -> HeaderHash blk - -> m b + (MonadThrow m, HasHeader blk) => + VolatileDB m blk -> + BlockComponent blk b -> + HeaderHash blk -> + m b getKnownBlockComponent db blockComponent hash = do - mBlock <- mustExist db hash <$> - getBlockComponent db blockComponent hash - case mBlock of - Right b -> return b - Left err -> throwIO err + mBlock <- + mustExist db hash + <$> getBlockComponent db blockComponent hash + case mBlock of + Right b -> return b + Left err -> throwIO err mustExist :: - forall proxy blk b. - proxy blk - -> HeaderHash blk - -> Maybe b - -> Either (VolatileDBError blk) b + forall proxy blk b. 
+ proxy blk -> + HeaderHash blk -> + Maybe b -> + Either (VolatileDBError blk) b mustExist _ hash = \case - Nothing -> Left $ UnexpectedFailure $ MissingBlockError @blk hash - Just b -> Right $ b + Nothing -> Left $ UnexpectedFailure $ MissingBlockError @blk hash + Just b -> Right $ b diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl.hs index 71f45407e1..6e6e8a7a45 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl.hs @@ -99,12 +99,13 @@ -- means deleting all of its contents. In order to achieve this, it truncates -- the files containing blocks if some blocks fail to parse, are invalid, or are -- duplicated. -module Ouroboros.Consensus.Storage.VolatileDB.Impl ( - -- * Opening the database +module Ouroboros.Consensus.Storage.VolatileDB.Impl + ( -- * Opening the database VolatileDbArgs (..) , VolatileDbSerialiseConstraints , defaultArgs , openDB + -- * Re-exported , BlockValidationPolicy (..) 
, BlocksPerFile @@ -114,69 +115,77 @@ module Ouroboros.Consensus.Storage.VolatileDB.Impl ( , mkBlocksPerFile ) where -import qualified Codec.CBOR.Read as CBOR -import qualified Codec.CBOR.Write as CBOR -import Control.Monad (unless, when) -import Control.Monad.State.Strict (get, gets, lift, modify, put, - state) -import qualified Control.RAWLock as RAWLock -import Control.ResourceRegistry -import Control.Tracer (Tracer, nullTracer, traceWith) -import qualified Data.ByteString.Lazy as Lazy -import Data.List as List (foldl') -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe, mapMaybe) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Word (Word64) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.Common (BlockComponent (..)) -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Storage.VolatileDB.API -import Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo (FileInfo) -import qualified Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo as FileInfo -import qualified Ouroboros.Consensus.Storage.VolatileDB.Impl.Index as Index -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Parser -import Ouroboros.Consensus.Storage.VolatileDB.Impl.State -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Util -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Block (MaxSlotNo (..)) -import System.FS.API.Lazy +import Codec.CBOR.Read qualified as CBOR +import Codec.CBOR.Write qualified as CBOR +import Control.Monad (unless, when) +import Control.Monad.State.Strict + ( get + , gets + , lift + , modify + , put + , state + ) +import Control.RAWLock qualified as RAWLock +import Control.ResourceRegistry +import Control.Tracer (Tracer, nullTracer, traceWith) +import Data.ByteString.Lazy qualified as Lazy +import Data.List as List (foldl') +import 
Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe, mapMaybe) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Word (Word64) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.Common (BlockComponent (..)) +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Storage.VolatileDB.API +import Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo (FileInfo) +import Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo qualified as FileInfo +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Index qualified as Index +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Parser +import Ouroboros.Consensus.Storage.VolatileDB.Impl.State +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Util +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Block (MaxSlotNo (..)) +import System.FS.API.Lazy {------------------------------------------------------------------------------ Opening the database ------------------------------------------------------------------------------} -data VolatileDbArgs f m blk = VolatileDbArgs { - -- | Predicate to check for integrity of - -- 'Ouroboros.Consensus.Storage.Common.GetVerifiedBlock' components when - -- extracting them from the VolatileDB. - volCheckIntegrity :: HKD f (blk -> Bool) - -- ^ Predicate to check for integrity of - -- 'Ouroboros.Consensus.Storage.Common.GetVerifiedBlock' components when - -- extracting them from the VolatileDB. - , volCodecConfig :: HKD f (CodecConfig blk) - , volHasFS :: HKD f (SomeHasFS m) - , volMaxBlocksPerFile :: BlocksPerFile - , volTracer :: Tracer m (TraceEvent blk) - -- | Should the parser for the VolatileDB fail when it encounters a - -- corrupt/invalid block? 
- , volValidationPolicy :: BlockValidationPolicy
-   -- ^ Should the parser for the VolatileDB fail when it encounters a
-   -- corrupt/invalid block?
-   }
+data VolatileDbArgs f m blk = VolatileDbArgs
+  { volCheckIntegrity :: HKD f (blk -> Bool)
+  -- ^ Predicate to check for integrity of
+  -- 'Ouroboros.Consensus.Storage.Common.GetVerifiedBlock' components when
+  -- extracting them from the VolatileDB.
+  , volCodecConfig :: HKD f (CodecConfig blk)
+  , volHasFS :: HKD f (SomeHasFS m)
+  , volMaxBlocksPerFile :: BlocksPerFile
+  , volTracer :: Tracer m (TraceEvent blk)
+  , volValidationPolicy :: BlockValidationPolicy
+  -- ^ Should the parser for the VolatileDB fail when it encounters a
+  -- corrupt/invalid block?
+  }

 -- | Default arguments
 defaultArgs :: Applicative m => Incomplete VolatileDbArgs m blk
-defaultArgs = VolatileDbArgs {
-    volCheckIntegrity = noDefault
-  , volCodecConfig = noDefault
-  , volHasFS = noDefault
+defaultArgs =
+  VolatileDbArgs
+    { volCheckIntegrity = noDefault
+    , volCodecConfig = noDefault
+    , volHasFS = noDefault
     , volMaxBlocksPerFile = mkBlocksPerFile 1000
-    , volTracer = nullTracer
+    , volTracer = nullTracer
     , volValidationPolicy = NoValidation
     }

@@ -190,145 +199,155 @@ type VolatileDbSerialiseConstraints blk =
   )

 openDB ::
-  forall m blk ans.
-  ( HasCallStack
-  , IOLike m
-  , GetPrevHash blk
-  , VolatileDbSerialiseConstraints blk
-  )
-  => Complete VolatileDbArgs m blk
-  -> (forall st. WithTempRegistry st m (VolatileDB m blk, st) -> ans)
-  -> ans
-openDB VolatileDbArgs { volHasFS = SomeHasFS hasFS, .. 
} cont = cont $ do - lift $ createDirectoryIfMissing hasFS True (mkFsPath []) - ost <- mkOpenState - volCodecConfig - hasFS - volCheckIntegrity - volValidationPolicy - volTracer - volMaxBlocksPerFile - stVar <- lift $ RAWLock.new (DbOpen ost) - let env = VolatileDBEnv { - hasFS = hasFS + forall m blk ans. + ( HasCallStack + , IOLike m + , GetPrevHash blk + , VolatileDbSerialiseConstraints blk + ) => + Complete VolatileDbArgs m blk -> + (forall st. WithTempRegistry st m (VolatileDB m blk, st) -> ans) -> + ans +openDB VolatileDbArgs{volHasFS = SomeHasFS hasFS, ..} cont = cont $ do + lift $ createDirectoryIfMissing hasFS True (mkFsPath []) + ost <- + mkOpenState + volCodecConfig + hasFS + volCheckIntegrity + volValidationPolicy + volTracer + volMaxBlocksPerFile + stVar <- lift $ RAWLock.new (DbOpen ost) + let env = + VolatileDBEnv + { hasFS = hasFS , varInternalState = stVar , maxBlocksPerFile = volMaxBlocksPerFile - , tracer = volTracer - , codecConfig = volCodecConfig - , checkIntegrity = volCheckIntegrity + , tracer = volTracer + , codecConfig = volCodecConfig + , checkIntegrity = volCheckIntegrity } - volatileDB = VolatileDB { - closeDB = closeDBImpl env - , getBlockComponent = getBlockComponentImpl env - , putBlock = putBlockImpl env - , garbageCollect = garbageCollectImpl env + volatileDB = + VolatileDB + { closeDB = closeDBImpl env + , getBlockComponent = getBlockComponentImpl env + , putBlock = putBlockImpl env + , garbageCollect = garbageCollectImpl env , filterByPredecessor = filterByPredecessorImpl env - , getBlockInfo = getBlockInfoImpl env - , getMaxSlotNo = getMaxSlotNoImpl env + , getBlockInfo = getBlockInfoImpl env + , getMaxSlotNo = getMaxSlotNoImpl env } - return (volatileDB, ost) + return (volatileDB, ost) {------------------------------------------------------------------------------ VolatileDB API ------------------------------------------------------------------------------} closeDBImpl :: - forall m blk. 
(IOLike m, HasHeader blk) - => VolatileDBEnv m blk - -> m () -closeDBImpl VolatileDBEnv { varInternalState, tracer, hasFS } = do - mbInternalState <- - RAWLock.withWriteAccess varInternalState $ \st -> return (st, DbClosed) - case mbInternalState of - DbClosed -> traceWith tracer DBAlreadyClosed - DbOpen ost -> do - wrapFsError (Proxy @blk) $ closeOpenHandles hasFS ost - traceWith tracer DBClosed + forall m blk. + (IOLike m, HasHeader blk) => + VolatileDBEnv m blk -> + m () +closeDBImpl VolatileDBEnv{varInternalState, tracer, hasFS} = do + mbInternalState <- + RAWLock.withWriteAccess varInternalState $ \st -> return (st, DbClosed) + case mbInternalState of + DbClosed -> traceWith tracer DBAlreadyClosed + DbOpen ost -> do + wrapFsError (Proxy @blk) $ closeOpenHandles hasFS ost + traceWith tracer DBClosed getBlockComponentImpl :: - forall m blk b. - ( IOLike m - , HasHeader blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , HasNestedContent Header blk - , DecodeDiskDep (NestedCtxt Header) blk - , HasCallStack - ) - => VolatileDBEnv m blk - -> BlockComponent blk b - -> HeaderHash blk - -> m (Maybe b) -getBlockComponentImpl env@VolatileDBEnv { codecConfig, checkIntegrity } blockComponent hash = - withOpenState env $ \hasFS OpenState { currentRevMap } -> - case Map.lookup hash currentRevMap of - Nothing -> return Nothing - Just internalBlockInfo -> Just <$> - getBlockComponent hasFS internalBlockInfo blockComponent - where - getBlockComponent :: - forall b' h. 
HasFS m h - -> InternalBlockInfo blk - -> BlockComponent blk b' - -> m b' - getBlockComponent hasFS ibi = \case - GetHash -> return hash - GetSlot -> return biSlotNo - GetIsEBB -> return biIsEBB - GetBlockSize -> return $ fromIntegral $ unBlockSize ibiBlockSize - GetHeaderSize -> return biHeaderSize - GetPure a -> return a - GetApply f bc -> - getBlockComponent hasFS ibi f <*> getBlockComponent hasFS ibi bc - GetBlock -> - getBlockComponent hasFS ibi GetRawBlock >>= parseBlock - GetRawBlock -> withFile hasFS ibiFile ReadMode $ \hndl -> do - let size = fromIntegral $ unBlockSize ibiBlockSize - offset = unBlockOffset ibiBlockOffset - hGetExactlyAt hasFS hndl size (AbsOffset offset) - GetHeader -> - getBlockComponent hasFS ibi GetRawHeader >>= parseHeader - GetRawHeader -> withFile hasFS ibiFile ReadMode $ \hndl -> do - let size = fromIntegral biHeaderSize - offset = unBlockOffset ibiBlockOffset + fromIntegral biHeaderOffset - hGetExactlyAt hasFS hndl size (AbsOffset offset) - GetNestedCtxt -> return ibiNestedCtxt - GetVerifiedBlock -> - getBlockComponent hasFS ibi GetBlock >>= \blk -> do - unless (checkIntegrity blk) $ - throwIO $ UnexpectedFailure $ CorruptBlockError @blk hash - return blk - where - InternalBlockInfo { ibiBlockInfo = BlockInfo {..}, .. } = ibi - - parseBlock :: Lazy.ByteString -> m blk - parseBlock bytes = throwParseErrors bytes $ - CBOR.deserialiseFromBytes (decodeDisk codecConfig) bytes - - parseHeader :: Lazy.ByteString -> m (Header blk) - parseHeader bytes = throwParseErrors bytes $ - case ibiNestedCtxt of - SomeSecond ctxt -> - CBOR.deserialiseFromBytes - ((\f -> nest . DepPair ctxt . f) <$> - decodeDiskDep codecConfig ctxt) - bytes - - pt :: RealPoint blk - pt = RealPoint biSlotNo hash - - throwParseErrors :: - forall b''. 
- Lazy.ByteString - -> Either CBOR.DeserialiseFailure (Lazy.ByteString, Lazy.ByteString -> b'') - -> m b'' - throwParseErrors fullBytes = \case - Right (trailing, f) - | Lazy.null trailing - -> return $ f fullBytes - | otherwise - -> throwIO $ UnexpectedFailure $ TrailingDataError ibiFile pt trailing - Left err - -> throwIO $ UnexpectedFailure $ ParseError ibiFile pt err + forall m blk b. + ( IOLike m + , HasHeader blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , HasNestedContent Header blk + , DecodeDiskDep (NestedCtxt Header) blk + , HasCallStack + ) => + VolatileDBEnv m blk -> + BlockComponent blk b -> + HeaderHash blk -> + m (Maybe b) +getBlockComponentImpl env@VolatileDBEnv{codecConfig, checkIntegrity} blockComponent hash = + withOpenState env $ \hasFS OpenState{currentRevMap} -> + case Map.lookup hash currentRevMap of + Nothing -> return Nothing + Just internalBlockInfo -> + Just + <$> getBlockComponent hasFS internalBlockInfo blockComponent + where + getBlockComponent :: + forall b' h. 
+ HasFS m h -> + InternalBlockInfo blk -> + BlockComponent blk b' -> + m b' + getBlockComponent hasFS ibi = \case + GetHash -> return hash + GetSlot -> return biSlotNo + GetIsEBB -> return biIsEBB + GetBlockSize -> return $ fromIntegral $ unBlockSize ibiBlockSize + GetHeaderSize -> return biHeaderSize + GetPure a -> return a + GetApply f bc -> + getBlockComponent hasFS ibi f <*> getBlockComponent hasFS ibi bc + GetBlock -> + getBlockComponent hasFS ibi GetRawBlock >>= parseBlock + GetRawBlock -> withFile hasFS ibiFile ReadMode $ \hndl -> do + let size = fromIntegral $ unBlockSize ibiBlockSize + offset = unBlockOffset ibiBlockOffset + hGetExactlyAt hasFS hndl size (AbsOffset offset) + GetHeader -> + getBlockComponent hasFS ibi GetRawHeader >>= parseHeader + GetRawHeader -> withFile hasFS ibiFile ReadMode $ \hndl -> do + let size = fromIntegral biHeaderSize + offset = unBlockOffset ibiBlockOffset + fromIntegral biHeaderOffset + hGetExactlyAt hasFS hndl size (AbsOffset offset) + GetNestedCtxt -> return ibiNestedCtxt + GetVerifiedBlock -> + getBlockComponent hasFS ibi GetBlock >>= \blk -> do + unless (checkIntegrity blk) $ + throwIO $ + UnexpectedFailure $ + CorruptBlockError @blk hash + return blk + where + InternalBlockInfo{ibiBlockInfo = BlockInfo{..}, ..} = ibi + + parseBlock :: Lazy.ByteString -> m blk + parseBlock bytes = + throwParseErrors bytes $ + CBOR.deserialiseFromBytes (decodeDisk codecConfig) bytes + + parseHeader :: Lazy.ByteString -> m (Header blk) + parseHeader bytes = throwParseErrors bytes $ + case ibiNestedCtxt of + SomeSecond ctxt -> + CBOR.deserialiseFromBytes + ( (\f -> nest . DepPair ctxt . f) + <$> decodeDiskDep codecConfig ctxt + ) + bytes + + pt :: RealPoint blk + pt = RealPoint biSlotNo hash + + throwParseErrors :: + forall b''. 
+ Lazy.ByteString -> + Either CBOR.DeserialiseFailure (Lazy.ByteString, Lazy.ByteString -> b'') -> + m b'' + throwParseErrors fullBytes = \case + Right (trailing, f) + | Lazy.null trailing -> + return $ f fullBytes + | otherwise -> + throwIO $ UnexpectedFailure $ TrailingDataError ibiFile pt trailing + Left err -> + throwIO $ UnexpectedFailure $ ParseError ibiFile pt err -- | This function follows the approach: -- (1) hPut bytes to the file @@ -347,59 +366,67 @@ getBlockComponentImpl env@VolatileDBEnv { codecConfig, checkIntegrity } blockCom -- We should be careful about not leaking open fds when we open a new file, -- since this can affect garbage collection of files. putBlockImpl :: - forall m blk. - ( GetPrevHash blk - , EncodeDisk blk blk - , HasBinaryBlockInfo blk - , HasNestedContent Header blk - , IOLike m - ) - => VolatileDBEnv m blk - -> blk - -> m () -putBlockImpl env@VolatileDBEnv{ maxBlocksPerFile, tracer, codecConfig } - blk = + forall m blk. + ( GetPrevHash blk + , EncodeDisk blk blk + , HasBinaryBlockInfo blk + , HasNestedContent Header blk + , IOLike m + ) => + VolatileDBEnv m blk -> + blk -> + m () +putBlockImpl + env@VolatileDBEnv{maxBlocksPerFile, tracer, codecConfig} + blk = appendOpenState env $ \hasFS -> do - OpenState { currentRevMap, currentWriteHandle } <- get - if Map.member biHash currentRevMap then - lift $ lift $ traceWith tracer $ BlockAlreadyHere biHash - else do - let bytes = CBOR.toLazyByteString $ encodeDisk codecConfig blk - bytesWritten <- lift $ lift $ hPutAll hasFS currentWriteHandle bytes - fileIsFull <- state $ updateStateAfterWrite bytesWritten - when fileIsFull $ nextFile hasFS - where - blockInfo@BlockInfo { biHash, biSlotNo, biPrevHash } = extractBlockInfo blk - - updateStateAfterWrite - :: forall h. 
- Word64 - -> OpenState blk h - -> (Bool, OpenState blk h) -- ^ True: current file is full + OpenState{currentRevMap, currentWriteHandle} <- get + if Map.member biHash currentRevMap + then + lift $ lift $ traceWith tracer $ BlockAlreadyHere biHash + else do + let bytes = CBOR.toLazyByteString $ encodeDisk codecConfig blk + bytesWritten <- lift $ lift $ hPutAll hasFS currentWriteHandle bytes + fileIsFull <- state $ updateStateAfterWrite bytesWritten + when fileIsFull $ nextFile hasFS + where + blockInfo@BlockInfo{biHash, biSlotNo, biPrevHash} = extractBlockInfo blk + + updateStateAfterWrite :: + forall h. + Word64 -> + OpenState blk h -> + (Bool, OpenState blk h) + -- \^ True: current file is full updateStateAfterWrite bytesWritten st@OpenState{..} = - (FileInfo.isFull maxBlocksPerFile fileInfo', st') - where - fileInfo = fromMaybe - (error $ "VolatileDB invariant violation:" - ++ "Current write file not found in Index.") - (Index.lookup currentWriteId currentMap) - fileInfo' = FileInfo.addBlock biSlotNo biHash fileInfo - currentMap' = Index.insert currentWriteId fileInfo' currentMap - internalBlockInfo' = InternalBlockInfo { - ibiFile = currentWritePath - , ibiBlockOffset = BlockOffset currentWriteOffset - , ibiBlockSize = BlockSize $ fromIntegral bytesWritten - , ibiBlockInfo = blockInfo - , ibiNestedCtxt = case unnest (getHeader blk) of - DepPair nestedCtxt _ -> SomeSecond nestedCtxt + (FileInfo.isFull maxBlocksPerFile fileInfo', st') + where + fileInfo = + fromMaybe + ( error $ + "VolatileDB invariant violation:" + ++ "Current write file not found in Index." 
+ ) + (Index.lookup currentWriteId currentMap) + fileInfo' = FileInfo.addBlock biSlotNo biHash fileInfo + currentMap' = Index.insert currentWriteId fileInfo' currentMap + internalBlockInfo' = + InternalBlockInfo + { ibiFile = currentWritePath + , ibiBlockOffset = BlockOffset currentWriteOffset + , ibiBlockSize = BlockSize $ fromIntegral bytesWritten + , ibiBlockInfo = blockInfo + , ibiNestedCtxt = case unnest (getHeader blk) of + DepPair nestedCtxt _ -> SomeSecond nestedCtxt } - currentRevMap' = Map.insert biHash internalBlockInfo' currentRevMap - st' = st { - currentWriteOffset = currentWriteOffset + bytesWritten - , currentMap = currentMap' - , currentRevMap = currentRevMap' - , currentSuccMap = insertMapSet biPrevHash biHash currentSuccMap - , currentMaxSlotNo = currentMaxSlotNo `max` MaxSlotNo biSlotNo + currentRevMap' = Map.insert biHash internalBlockInfo' currentRevMap + st' = + st + { currentWriteOffset = currentWriteOffset + bytesWritten + , currentMap = currentMap' + , currentRevMap = currentRevMap' + , currentSuccMap = insertMapSet biPrevHash biHash currentSuccMap + , currentMaxSlotNo = currentMaxSlotNo `max` MaxSlotNo biSlotNo } -- | Garbage collect all files of which the highest slot is less than the @@ -416,55 +443,58 @@ putBlockImpl env@VolatileDBEnv{ maxBlocksPerFile, tracer, codecConfig } -- -- NOTE: the current file is never garbage collected. garbageCollectImpl :: - forall m blk. (IOLike m, HasHeader blk) - => VolatileDBEnv m blk - -> SlotNo - -> m () + forall m blk. + (IOLike m, HasHeader blk) => + VolatileDBEnv m blk -> + SlotNo -> + m () garbageCollectImpl env slot = do - -- Check if we can actually GC something using a cheaper read (allowing - -- for more concurrency) before obtaining the more expensive exclusive - -- write lock. 
- usefulGC <- atomically $ getterSTM gcPossible env - - when usefulGC $ - writeOpenState env $ \hasFS -> do - -- This event will be picked up by ghc-events-analyze - lift $ lift $ traceEventIO "START garbage collection" - -- Note that this is /monotonic/: if 'usefulGC' is @True@, then - -- 'filesToGC' has to be non-empty. - -- - -- Only a single thread performs garbage collection, so no files could - -- have been GC'ed in the meantime. The only thing that could have - -- happened is that blocks have been appended. If they have been - -- appended to the current file, nothing changes, as we never GC the - -- current file anyway. If a new file was opened, either we can now GC - -- the previous file (increase in the number of files to GC) or not - -- (same number of files to GC). - filesToGC <- gets getFilesToGC - mapM_ (garbageCollectFile hasFS) filesToGC - -- Recompute the 'MaxSlotNo' based on the files left in the - -- VolatileDB. This value can never go down, except to 'NoMaxSlotNo' - -- (when we GC everything), because a GC can only delete blocks < a - -- slot. - modify $ \st -> st { - currentMaxSlotNo = FileInfo.maxSlotNoInFiles - (Index.elems (currentMap st)) + -- Check if we can actually GC something using a cheaper read (allowing + -- for more concurrency) before obtaining the more expensive exclusive + -- write lock. + usefulGC <- atomically $ getterSTM gcPossible env + + when usefulGC $ + writeOpenState env $ \hasFS -> do + -- This event will be picked up by ghc-events-analyze + lift $ lift $ traceEventIO "START garbage collection" + -- Note that this is /monotonic/: if 'usefulGC' is @True@, then + -- 'filesToGC' has to be non-empty. + -- + -- Only a single thread performs garbage collection, so no files could + -- have been GC'ed in the meantime. The only thing that could have + -- happened is that blocks have been appended. If they have been + -- appended to the current file, nothing changes, as we never GC the + -- current file anyway. 
If a new file was opened, either we can now GC + -- the previous file (increase in the number of files to GC) or not + -- (same number of files to GC). + filesToGC <- gets getFilesToGC + mapM_ (garbageCollectFile hasFS) filesToGC + -- Recompute the 'MaxSlotNo' based on the files left in the + -- VolatileDB. This value can never go down, except to 'NoMaxSlotNo' + -- (when we GC everything), because a GC can only delete blocks < a + -- slot. + modify $ \st -> + st + { currentMaxSlotNo = + FileInfo.maxSlotNoInFiles + (Index.elems (currentMap st)) } - lift $ lift $ traceEventIO "STOP garbage collection" - where - -- | Return 'True' if a garbage collection would actually garbage collect - -- at least one file. - gcPossible :: OpenState blk h -> Bool - gcPossible = not . null . getFilesToGC - - -- | Return the list of files that can be garbage collected. - getFilesToGC :: OpenState blk h -> [(FileId, FileInfo blk)] - getFilesToGC st = filter canGC . Index.toAscList . currentMap $ st - where - -- We don't GC the current file. This is unlikely to happen in - -- practice anyway, and it makes things simpler. - canGC (fileId, fileInfo) = - FileInfo.canGC fileInfo slot && fileId /= currentWriteId st + lift $ lift $ traceEventIO "STOP garbage collection" + where + -- \| Return 'True' if a garbage collection would actually garbage collect + -- at least one file. + gcPossible :: OpenState blk h -> Bool + gcPossible = not . null . getFilesToGC + + -- \| Return the list of files that can be garbage collected. + getFilesToGC :: OpenState blk h -> [(FileId, FileInfo blk)] + getFilesToGC st = filter canGC . Index.toAscList . currentMap $ st + where + -- We don't GC the current file. This is unlikely to happen in + -- practice anyway, and it makes things simpler. + canGC (fileId, fileInfo) = + FileInfo.canGC fileInfo slot && fileId /= currentWriteId st -- | Garbage collect the given file /unconditionally/, updating the -- 'OpenState'. 
@@ -478,49 +508,53 @@ garbageCollectImpl env slot = do -- -- This may throw an FsError. garbageCollectFile :: - forall m h blk. (MonadThrow m, HasHeader blk) - => HasFS m h - -> (FileId, FileInfo blk) - -> ModifyOpenState m blk h () + forall m h blk. + (MonadThrow m, HasHeader blk) => + HasFS m h -> + (FileId, FileInfo blk) -> + ModifyOpenState m blk h () garbageCollectFile hasFS (fileId, fileInfo) = do - - lift $ lift $ removeFile hasFS $ filePath fileId - - st@OpenState { currentMap, currentRevMap, currentSuccMap } <- get - - let hashes = FileInfo.hashes fileInfo - currentRevMap' = Map.withoutKeys currentRevMap hashes - deletedPairs = - mapMaybe - (\h -> (, h) . biPrevHash . ibiBlockInfo <$> Map.lookup h currentRevMap) - (Set.toList hashes) - currentSuccMap' = - List.foldl' (flip (uncurry deleteMapSet)) currentSuccMap deletedPairs - - put st { - currentMap = Index.delete fileId currentMap - , currentRevMap = currentRevMap' + lift $ lift $ removeFile hasFS $ filePath fileId + + st@OpenState{currentMap, currentRevMap, currentSuccMap} <- get + + let hashes = FileInfo.hashes fileInfo + currentRevMap' = Map.withoutKeys currentRevMap hashes + deletedPairs = + mapMaybe + (\h -> (,h) . biPrevHash . ibiBlockInfo <$> Map.lookup h currentRevMap) + (Set.toList hashes) + currentSuccMap' = + List.foldl' (flip (uncurry deleteMapSet)) currentSuccMap deletedPairs + + put + st + { currentMap = Index.delete fileId currentMap + , currentRevMap = currentRevMap' , currentSuccMap = currentSuccMap' } filterByPredecessorImpl :: - forall m blk. (IOLike m, HasHeader blk) - => VolatileDBEnv m blk - -> STM m (ChainHash blk -> Set (HeaderHash blk)) + forall m blk. + (IOLike m, HasHeader blk) => + VolatileDBEnv m blk -> + STM m (ChainHash blk -> Set (HeaderHash blk)) filterByPredecessorImpl = getterSTM $ \st hash -> - fromMaybe Set.empty (Map.lookup hash (currentSuccMap st)) + fromMaybe Set.empty (Map.lookup hash (currentSuccMap st)) getBlockInfoImpl :: - forall m blk. 
(IOLike m, HasHeader blk) - => VolatileDBEnv m blk - -> STM m (HeaderHash blk -> Maybe (BlockInfo blk)) + forall m blk. + (IOLike m, HasHeader blk) => + VolatileDBEnv m blk -> + STM m (HeaderHash blk -> Maybe (BlockInfo blk)) getBlockInfoImpl = getterSTM $ \st hash -> - ibiBlockInfo <$> Map.lookup hash (currentRevMap st) + ibiBlockInfo <$> Map.lookup hash (currentRevMap st) getMaxSlotNoImpl :: - forall m blk. (IOLike m, HasHeader blk) - => VolatileDBEnv m blk - -> STM m MaxSlotNo + forall m blk. + (IOLike m, HasHeader blk) => + VolatileDBEnv m blk -> + STM m MaxSlotNo getMaxSlotNoImpl = getterSTM currentMaxSlotNo {------------------------------------------------------------------------------ @@ -530,37 +564,45 @@ getMaxSlotNoImpl = getterSTM currentMaxSlotNo -- | Creates a new file and updates the 'OpenState' accordingly. -- This may throw an FsError. nextFile :: - forall h m blk. (IOLike m, Eq h) - => HasFS m h -> ModifyOpenState m blk h () + forall h m blk. + (IOLike m, Eq h) => + HasFS m h -> ModifyOpenState m blk h () nextFile hasFS = do - st@OpenState { currentWriteHandle = curHndl, currentWriteId, currentMap } <- get - - let currentWriteId' = currentWriteId + 1 - file = filePath currentWriteId' - - lift $ lift $ hClose hasFS curHndl - - hndl <- lift $ allocateTemp - (hOpen hasFS file (AppendMode MustBeNew)) - (hClose' hasFS) - ((==) . currentWriteHandle) - put st { - currentWriteHandle = hndl - , currentWritePath = file - , currentWriteId = currentWriteId' + st@OpenState{currentWriteHandle = curHndl, currentWriteId, currentMap} <- get + + let currentWriteId' = currentWriteId + 1 + file = filePath currentWriteId' + + lift $ lift $ hClose hasFS curHndl + + hndl <- + lift $ + allocateTemp + (hOpen hasFS file (AppendMode MustBeNew)) + (hClose' hasFS) + ((==) . 
currentWriteHandle) + put + st + { currentWriteHandle = hndl + , currentWritePath = file + , currentWriteId = currentWriteId' , currentWriteOffset = 0 - , currentMap = Index.insert currentWriteId' FileInfo.empty - currentMap + , currentMap = + Index.insert + currentWriteId' + FileInfo.empty + currentMap } -- | Gets part of the 'OpenState' in 'STM'. getterSTM :: - forall m blk a. (IOLike m, HasHeader blk) - => (forall h. OpenState blk h -> a) - -> VolatileDBEnv m blk - -> STM m a -getterSTM fromSt VolatileDBEnv { varInternalState } = do - mSt <- RAWLock.read varInternalState - case mSt of - DbClosed -> throwIO $ ApiMisuse @blk $ ClosedDBError Nothing - DbOpen st -> return $ fromSt st + forall m blk a. + (IOLike m, HasHeader blk) => + (forall h. OpenState blk h -> a) -> + VolatileDBEnv m blk -> + STM m a +getterSTM fromSt VolatileDBEnv{varInternalState} = do + mSt <- RAWLock.read varInternalState + case mSt of + DbClosed -> throwIO $ ApiMisuse @blk $ ClosedDBError Nothing + DbOpen st -> return $ fromSt st diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/FileInfo.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/FileInfo.hs index 9fb0a8d167..41a8b255c2 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/FileInfo.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/FileInfo.hs @@ -8,13 +8,15 @@ -- | Information about the files stored by the volatile DB -- -- Intended for qualified import. 
-module Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo ( - -- * opaque +module Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo + ( -- * opaque FileInfo + -- * Construction , addBlock , empty , fromParsedBlockInfos + -- * Queries , canGC , hashes @@ -23,26 +25,26 @@ module Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo ( , maxSlotNoInFiles ) where -import Data.Set (Set) -import qualified Data.Set as Set -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.VolatileDB.API (BlockInfo (..)) -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Parser -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types -import Ouroboros.Network.Block (MaxSlotNo (..)) +import Data.Set (Set) +import Data.Set qualified as Set +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.VolatileDB.API (BlockInfo (..)) +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Parser +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types +import Ouroboros.Network.Block (MaxSlotNo (..)) {------------------------------------------------------------------------------- Types -------------------------------------------------------------------------------} -- | The internal information the VolatileDB keeps for each file. 
-data FileInfo blk = FileInfo { - maxSlotNo :: !MaxSlotNo - , hashes :: !(Set (HeaderHash blk)) - } - deriving (Generic) +data FileInfo blk = FileInfo + { maxSlotNo :: !MaxSlotNo + , hashes :: !(Set (HeaderHash blk)) + } + deriving Generic deriving instance StandardHash blk => Show (FileInfo blk) deriving instance StandardHash blk => NoThunks (FileInfo blk) @@ -52,30 +54,33 @@ deriving instance StandardHash blk => NoThunks (FileInfo blk) -------------------------------------------------------------------------------} empty :: FileInfo blk -empty = FileInfo { - maxSlotNo = NoMaxSlotNo - , hashes = Set.empty +empty = + FileInfo + { maxSlotNo = NoMaxSlotNo + , hashes = Set.empty } -- | Adds a block to a 'FileInfo'. addBlock :: StandardHash blk => SlotNo -> HeaderHash blk -> FileInfo blk -> FileInfo blk -addBlock slotNo hash FileInfo { maxSlotNo, hashes } = - FileInfo { - maxSlotNo = maxSlotNo `max` MaxSlotNo slotNo - , hashes = Set.insert hash hashes - } +addBlock slotNo hash FileInfo{maxSlotNo, hashes} = + FileInfo + { maxSlotNo = maxSlotNo `max` MaxSlotNo slotNo + , hashes = Set.insert hash hashes + } -- | Construct a 'FileInfo' from the parser result. fromParsedBlockInfos :: - forall blk. StandardHash blk - => [ParsedBlockInfo blk] -> FileInfo blk -fromParsedBlockInfos parsedBlockInfos = FileInfo { - maxSlotNo = foldMap parsedBlockInfoToMaxSlotNo parsedBlockInfos - , hashes = Set.fromList $ map (biHash . pbiBlockInfo) parsedBlockInfos + forall blk. + StandardHash blk => + [ParsedBlockInfo blk] -> FileInfo blk +fromParsedBlockInfos parsedBlockInfos = + FileInfo + { maxSlotNo = foldMap parsedBlockInfoToMaxSlotNo parsedBlockInfos + , hashes = Set.fromList $ map (biHash . pbiBlockInfo) parsedBlockInfos } - where - parsedBlockInfoToMaxSlotNo :: ParsedBlockInfo blk -> MaxSlotNo - parsedBlockInfoToMaxSlotNo = MaxSlotNo . biSlotNo . pbiBlockInfo + where + parsedBlockInfoToMaxSlotNo :: ParsedBlockInfo blk -> MaxSlotNo + parsedBlockInfoToMaxSlotNo = MaxSlotNo . biSlotNo . 
pbiBlockInfo {------------------------------------------------------------------------------- Queries @@ -83,18 +88,19 @@ fromParsedBlockInfos parsedBlockInfos = FileInfo { -- | Checks if this file can be GCed. canGC :: - FileInfo blk - -> SlotNo -- ^ The slot which we want to GC - -> Bool -canGC FileInfo { maxSlotNo } slot = - case maxSlotNo of - NoMaxSlotNo -> True - MaxSlotNo latest -> latest < slot + FileInfo blk -> + -- | The slot which we want to GC + SlotNo -> + Bool +canGC FileInfo{maxSlotNo} slot = + case maxSlotNo of + NoMaxSlotNo -> True + MaxSlotNo latest -> latest < slot -- | Has this file reached its maximum size? isFull :: BlocksPerFile -> FileInfo blk -> Bool -isFull maxBlocksPerFile FileInfo { hashes } = - fromIntegral (Set.size hashes) >= unBlocksPerFile maxBlocksPerFile +isFull maxBlocksPerFile FileInfo{hashes} = + fromIntegral (Set.size hashes) >= unBlocksPerFile maxBlocksPerFile maxSlotNoInFiles :: [FileInfo blk] -> MaxSlotNo maxSlotNoInFiles = foldMap maxSlotNo diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Index.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Index.hs index c8d3f4b378..4d35b5d88d 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Index.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Index.hs @@ -5,34 +5,35 @@ -- -- Intended for qualified import -- > import qualified Ouroboros.Consensus.Storage.VolatileDB.Impl.Index as Index -module Ouroboros.Consensus.Storage.VolatileDB.Impl.Index ( - delete +module Ouroboros.Consensus.Storage.VolatileDB.Impl.Index + ( delete , elems , empty , insert , lastFile , lookup , toAscList + -- * opaque , Index ) where -import Data.IntMap.Strict (IntMap) -import qualified Data.IntMap.Strict as IM -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks (..)) -import 
Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo (FileInfo) -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types (FileId) -import Prelude hiding (lookup) +import Data.IntMap.Strict (IntMap) +import Data.IntMap.Strict qualified as IM +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo (FileInfo) +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types (FileId) +import Prelude hiding (lookup) -- | Mapping from 'FileId' to 'FileInfo' -newtype Index blk = Index { unIndex :: IntMap (FileInfo blk) } +newtype Index blk = Index {unIndex :: IntMap (FileInfo blk)} deriving (Generic, NoThunks) modifyIndex :: - (IntMap (FileInfo blk) -> IntMap (FileInfo blk)) - -> Index blk - -> Index blk + (IntMap (FileInfo blk) -> IntMap (FileInfo blk)) -> + Index blk -> + Index blk modifyIndex f (Index index) = Index (f index) empty :: Index blk diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Parser.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Parser.hs index c71827de63..3e06491b30 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Parser.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Parser.hs @@ -4,28 +4,31 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE ScopedTypeVariables #-} -module Ouroboros.Consensus.Storage.VolatileDB.Impl.Parser ( - ParseError (..) +module Ouroboros.Consensus.Storage.VolatileDB.Impl.Parser + ( ParseError (..) , ParsedBlockInfo (..) 
, parseBlockFile + -- * Auxiliary , extractBlockInfo ) where -import Data.Bifunctor (bimap) -import qualified Data.ByteString.Lazy as Lazy -import Data.Word (Word64) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Storage.VolatileDB.API (BlockInfo (..)) -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types -import Ouroboros.Consensus.Util.CBOR (ReadIncrementalErr (..), - withStreamIncrementalOffsets) -import Ouroboros.Consensus.Util.IOLike -import qualified Streaming.Prelude as S -import Streaming.Prelude (Of (..), Stream) -import System.FS.API (HasFS) -import System.FS.API.Types (FsPath) +import Data.Bifunctor (bimap) +import Data.ByteString.Lazy qualified as Lazy +import Data.Word (Word64) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Storage.VolatileDB.API (BlockInfo (..)) +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types +import Ouroboros.Consensus.Util.CBOR + ( ReadIncrementalErr (..) + , withStreamIncrementalOffsets + ) +import Ouroboros.Consensus.Util.IOLike +import Streaming.Prelude (Of (..), Stream) +import Streaming.Prelude qualified as S +import System.FS.API (HasFS) +import System.FS.API.Types (FsPath) -- | Information returned by the parser about a single block. -- @@ -34,84 +37,91 @@ import System.FS.API.Types (FsPath) -- The fields of this record are strict to make sure that by evaluating this -- record to WHNF, we no longer hold on to the entire block. Otherwise, we might -- accidentally keep all blocks in a single file in memory during parsing. 
-data ParsedBlockInfo blk = ParsedBlockInfo { - pbiBlockOffset :: !BlockOffset - , pbiBlockSize :: !BlockSize - , pbiBlockInfo :: !(BlockInfo blk) - , pbiNestedCtxt :: !(SomeSecond (NestedCtxt Header) blk) - } +data ParsedBlockInfo blk = ParsedBlockInfo + { pbiBlockOffset :: !BlockOffset + , pbiBlockSize :: !BlockSize + , pbiBlockInfo :: !(BlockInfo blk) + , pbiNestedCtxt :: !(SomeSecond (NestedCtxt Header) blk) + } -- | Parse the given file containing blocks. -- -- Return the 'ParsedBlockInfo' for all the valid blocks in the file. Stop -- when encountering an error and include the offset to truncate to. parseBlockFile :: - forall m blk h. - ( IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , HasNestedContent Header blk - , DecodeDisk blk (Lazy.ByteString -> blk) - ) - => CodecConfig blk - -> HasFS m h - -> (blk -> Bool) - -> BlockValidationPolicy - -> FsPath - -> m ( [ParsedBlockInfo blk] - , Maybe (ParseError blk, BlockOffset) - ) + forall m blk h. + ( IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , HasNestedContent Header blk + , DecodeDisk blk (Lazy.ByteString -> blk) + ) => + CodecConfig blk -> + HasFS m h -> + (blk -> Bool) -> + BlockValidationPolicy -> + FsPath -> + m + ( [ParsedBlockInfo blk] + , Maybe (ParseError blk, BlockOffset) + ) parseBlockFile ccfg hasFS isNotCorrupt validationPolicy fsPath = - withStreamIncrementalOffsets hasFS (decodeDisk ccfg) fsPath $ - checkEntries [] - where - noValidation :: Bool - noValidation = validationPolicy == NoValidation + withStreamIncrementalOffsets hasFS (decodeDisk ccfg) fsPath $ + checkEntries [] + where + noValidation :: Bool + noValidation = validationPolicy == NoValidation - checkEntries :: - [ParsedBlockInfo blk] - -> Stream (Of (Word64, (Word64, blk))) - m - (Maybe (ReadIncrementalErr, Word64)) - -> m ( [ParsedBlockInfo blk] - , Maybe (ParseError blk, BlockOffset) - ) - checkEntries parsed stream = S.next stream >>= \case - Left mbErr - -> return (reverse parsed, bimap BlockReadErr 
BlockOffset <$> mbErr) + checkEntries :: + [ParsedBlockInfo blk] -> + Stream + (Of (Word64, (Word64, blk))) + m + (Maybe (ReadIncrementalErr, Word64)) -> + m + ( [ParsedBlockInfo blk] + , Maybe (ParseError blk, BlockOffset) + ) + checkEntries parsed stream = + S.next stream >>= \case + Left mbErr -> + return (reverse parsed, bimap BlockReadErr BlockOffset <$> mbErr) Right ((offset, (size, blk)), stream') - | noValidation || isNotCorrupt blk - -> let !blockInfo = extractBlockInfo blk - !newParsed = ParsedBlockInfo { - pbiBlockOffset = BlockOffset offset - , pbiBlockSize = BlockSize $ fromIntegral size - , pbiBlockInfo = blockInfo - , pbiNestedCtxt = case unnest (getHeader blk) of - DepPair nestedCtxt _ -> SomeSecond nestedCtxt - } - in checkEntries (newParsed : parsed) stream' - | otherwise -- The block was invalid - -> let !hash = blockHash blk - in return ( reverse parsed - , Just (BlockCorruptedErr hash, BlockOffset offset) - ) + | noValidation || isNotCorrupt blk -> + let !blockInfo = extractBlockInfo blk + !newParsed = + ParsedBlockInfo + { pbiBlockOffset = BlockOffset offset + , pbiBlockSize = BlockSize $ fromIntegral size + , pbiBlockInfo = blockInfo + , pbiNestedCtxt = case unnest (getHeader blk) of + DepPair nestedCtxt _ -> SomeSecond nestedCtxt + } + in checkEntries (newParsed : parsed) stream' + | otherwise -> -- The block was invalid + let !hash = blockHash blk + in return + ( reverse parsed + , Just (BlockCorruptedErr hash, BlockOffset offset) + ) {------------------------------------------------------------------------------- Auxiliary -------------------------------------------------------------------------------} extractBlockInfo :: - (GetPrevHash blk, HasBinaryBlockInfo blk) - => blk - -> BlockInfo blk -extractBlockInfo blk = BlockInfo { - biHash = blockHash blk - , biSlotNo = blockSlot blk - , biBlockNo = blockNo blk - , biIsEBB = blockToIsEBB blk - , biPrevHash = blockPrevHash blk + (GetPrevHash blk, HasBinaryBlockInfo blk) => + blk -> + BlockInfo 
blk +extractBlockInfo blk = + BlockInfo + { biHash = blockHash blk + , biSlotNo = blockSlot blk + , biBlockNo = blockNo blk + , biIsEBB = blockToIsEBB blk + , biPrevHash = blockPrevHash blk , biHeaderOffset = headerOffset - , biHeaderSize = headerSize + , biHeaderSize = headerSize } - where - BinaryBlockInfo { headerOffset, headerSize } = getBinaryBlockInfo blk + where + BinaryBlockInfo{headerOffset, headerSize} = getBinaryBlockInfo blk diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/State.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/State.hs index acec6f9c1c..c3779b75c4 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/State.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/State.hs @@ -9,9 +9,10 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Storage.VolatileDB.Impl.State ( - -- * Tracing +module Ouroboros.Consensus.Storage.VolatileDB.Impl.State + ( -- * Tracing TraceEvent (..) + -- * State types , BlockOffset (..) , BlockSize (..) @@ -22,6 +23,7 @@ module Ouroboros.Consensus.Storage.VolatileDB.Impl.State ( , SuccessorsIndex , VolatileDBEnv (..) 
, dbIsOpen + -- * State helpers , ModifyOpenState , appendOpenState @@ -31,80 +33,83 @@ module Ouroboros.Consensus.Storage.VolatileDB.Impl.State ( , writeOpenState ) where -import Control.Monad -import Control.Monad.State.Strict hiding (withState) -import Control.RAWLock (RAWLock) -import qualified Control.RAWLock as RAWLock -import Control.ResourceRegistry (WithTempRegistry, allocateTemp, - modifyWithTempRegistry) -import Control.Tracer (Tracer, traceWith) -import qualified Data.ByteString.Lazy as Lazy -import Data.List as List (foldl') -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import qualified Data.Set as Set -import Data.Typeable (Typeable) -import Data.Word (Word64) -import GHC.Generics (Generic) -import GHC.Stack -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Storage.VolatileDB.API -import qualified Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo as FileInfo -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Index (Index) -import qualified Ouroboros.Consensus.Storage.VolatileDB.Impl.Index as Index -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Parser -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Util -import Ouroboros.Consensus.Util (whenJust) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Block (MaxSlotNo (..)) -import System.FS.API +import Control.Monad +import Control.Monad.State.Strict hiding (withState) +import Control.RAWLock (RAWLock) +import Control.RAWLock qualified as RAWLock +import Control.ResourceRegistry + ( WithTempRegistry + , allocateTemp + , modifyWithTempRegistry + ) +import Control.Tracer (Tracer, traceWith) +import Data.ByteString.Lazy qualified as Lazy +import Data.List as List (foldl') +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Set qualified as Set +import Data.Typeable (Typeable) +import Data.Word (Word64) 
+import GHC.Generics (Generic) +import GHC.Stack +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Storage.VolatileDB.API +import Ouroboros.Consensus.Storage.VolatileDB.Impl.FileInfo qualified as FileInfo +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Index (Index) +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Index qualified as Index +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Parser +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Util +import Ouroboros.Consensus.Util (whenJust) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Block (MaxSlotNo (..)) +import System.FS.API {------------------------------------------------------------------------------ State types ------------------------------------------------------------------------------} -data VolatileDBEnv m blk = forall h. Eq h => VolatileDBEnv { - hasFS :: !(HasFS m h) - , varInternalState :: !(RAWLock m (InternalState blk h)) - , maxBlocksPerFile :: !BlocksPerFile - , checkIntegrity :: !(blk -> Bool) - , codecConfig :: !(CodecConfig blk) - , tracer :: !(Tracer m (TraceEvent blk)) - } - -data InternalState blk h = - DbClosed +data VolatileDBEnv m blk = forall h. Eq h => VolatileDBEnv + { hasFS :: !(HasFS m h) + , varInternalState :: !(RAWLock m (InternalState blk h)) + , maxBlocksPerFile :: !BlocksPerFile + , checkIntegrity :: !(blk -> Bool) + , codecConfig :: !(CodecConfig blk) + , tracer :: !(Tracer m (TraceEvent blk)) + } + +data InternalState blk h + = DbClosed | DbOpen !(OpenState blk h) deriving (Generic, NoThunks) dbIsOpen :: InternalState blk h -> Bool dbIsOpen (DbOpen _) = True -dbIsOpen DbClosed = False +dbIsOpen DbClosed = False -- | Internal state when the database is open. -data OpenState blk h = OpenState { - currentWriteHandle :: !(Handle h) - -- ^ The only open file we append blocks to. 
- , currentWritePath :: !FsPath - -- ^ The path of the file above. - , currentWriteId :: !FileId - -- ^ The 'FileId' of the same file. - , currentWriteOffset :: !Word64 - -- ^ The offset of the same file. - , currentMap :: !(Index blk) - -- ^ The contents of each file. - , currentRevMap :: !(ReverseIndex blk) - -- ^ Where to find each block based on its slot number. - , currentSuccMap :: !(SuccessorsIndex blk) - -- ^ The successors for each block. - , currentMaxSlotNo :: !MaxSlotNo - -- ^ Highest stored SlotNo. - -- - -- INVARIANT: this is the cached value of: - -- > FileInfo.maxSlotNoInFiles (Index.elems (currentMap st)) - } +data OpenState blk h = OpenState + { currentWriteHandle :: !(Handle h) + -- ^ The only open file we append blocks to. + , currentWritePath :: !FsPath + -- ^ The path of the file above. + , currentWriteId :: !FileId + -- ^ The 'FileId' of the same file. + , currentWriteOffset :: !Word64 + -- ^ The offset of the same file. + , currentMap :: !(Index blk) + -- ^ The contents of each file. + , currentRevMap :: !(ReverseIndex blk) + -- ^ Where to find each block based on its slot number. + , currentSuccMap :: !(SuccessorsIndex blk) + -- ^ The successors for each block. + , currentMaxSlotNo :: !MaxSlotNo + -- ^ Highest stored SlotNo. + -- + -- INVARIANT: this is the cached value of: + -- > FileInfo.maxSlotNoInFiles (Index.elems (currentMap st)) + } deriving (Generic, NoThunks) {------------------------------------------------------------------------------ @@ -119,16 +124,18 @@ data AppendOrWrite = Append | Write -- | NOTE: This is safe in terms of throwing FsErrors. modifyOpenState :: - forall blk m a. (IOLike m, HasCallStack, StandardHash blk, Typeable blk) - => AppendOrWrite - -> VolatileDBEnv m blk - -> (forall h. Eq h => HasFS m h -> ModifyOpenState m blk h a) - -> m a -modifyOpenState appendOrWrite - VolatileDBEnv {hasFS = hasFS :: HasFS m h, varInternalState} - modSt = + forall blk m a. 
+ (IOLike m, HasCallStack, StandardHash blk, Typeable blk) => + AppendOrWrite -> + VolatileDBEnv m blk -> + (forall h. Eq h => HasFS m h -> ModifyOpenState m blk h a) -> + m a +modifyOpenState + appendOrWrite + VolatileDBEnv{hasFS = hasFS :: HasFS m h, varInternalState} + modSt = wrapFsError (Proxy @blk) $ modifyWithTempRegistry getSt putSt (modSt hasFS) - where + where -- NOTE: we can't use the bracketed variants, as that's incompatible with -- 'modifyWithTempRegistry', which takes a function to put back the state, -- as that must have succeeded before the resources are released from the @@ -136,67 +143,69 @@ modifyOpenState appendOrWrite (acquire, release) = case appendOrWrite of Append -> (RAWLock.unsafeAcquireAppendAccess, RAWLock.unsafeReleaseAppendAccess) - Write -> + Write -> (RAWLock.unsafeAcquireWriteAccess, RAWLock.unsafeReleaseWriteAccess) getSt :: m (OpenState blk h) - getSt = acquire varInternalState >>= \case - DbOpen ost -> return ost - DbClosed -> do - release varInternalState DbClosed - throwIO $ ApiMisuse @blk $ ClosedDBError Nothing + getSt = + acquire varInternalState >>= \case + DbOpen ost -> return ost + DbClosed -> do + release varInternalState DbClosed + throwIO $ ApiMisuse @blk $ ClosedDBError Nothing putSt :: OpenState blk h -> ExitCase (OpenState blk h) -> m () putSt ost ec = case closeOrRelease of - -- We must close the VolatileDB - Left ex -> do - -- Poison the internal state lock with the exception that caused us - -- to close the VolatileDB so the next time somebody accesses the - -- VolatileDB, a 'ClosedDBError' containing the exception that - -- caused it is thrown. - -- - -- We don't care about the current state, as we were appending or - -- writing, which means that the state couldn't have changed in the - -- background. 
- _mbCurState <- - RAWLock.poison varInternalState $ \_st -> - ApiMisuse @blk (ClosedDBError (Just ex)) - closeOpenHandles hasFS ost - Right ost' -> release varInternalState (DbOpen ost') - where - closeOrRelease :: Either SomeException (OpenState blk h) - closeOrRelease = case ec of - ExitCaseSuccess ost' - -> Right ost' - -- When something goes wrong, close the VolatileDB for safety. - -- Except for user errors, because they stem from incorrect use of - -- the VolatileDB. - -- - -- NOTE: we only modify the VolatileDB in background threads of the - -- ChainDB, not in per-connection threads that could be killed at - -- any point. When an exception is encountered while modifying the - -- VolatileDB in a background thread, or that background thread - -- itself is killed with an async exception, we will shut down the - -- node anway, so it is safe to close the VolatileDB here. - ExitCaseAbort - -- Only caused by 'throwE' or 'throwError' like functions, which - -- we don't use, but we use @IOLike m => m@ here. - -> error "impossible" - ExitCaseException ex - | Just (ApiMisuse {} :: VolatileDBError blk) <- fromException ex - -> Right ost - | otherwise - -> Left ex + -- We must close the VolatileDB + Left ex -> do + -- Poison the internal state lock with the exception that caused us + -- to close the VolatileDB so the next time somebody accesses the + -- VolatileDB, a 'ClosedDBError' containing the exception that + -- caused it is thrown. + -- + -- We don't care about the current state, as we were appending or + -- writing, which means that the state couldn't have changed in the + -- background. 
+ _mbCurState <-
+ RAWLock.poison varInternalState $ \_st ->
+ ApiMisuse @blk (ClosedDBError (Just ex))
+ closeOpenHandles hasFS ost
+ Right ost' -> release varInternalState (DbOpen ost')
+ where
+ closeOrRelease :: Either SomeException (OpenState blk h)
+ closeOrRelease = case ec of
+ ExitCaseSuccess ost' ->
+ Right ost'
+ -- When something goes wrong, close the VolatileDB for safety.
+ -- Except for user errors, because they stem from incorrect use of
+ -- the VolatileDB.
+ --
+ -- NOTE: we only modify the VolatileDB in background threads of the
+ -- ChainDB, not in per-connection threads that could be killed at
+ -- any point. When an exception is encountered while modifying the
+ -- VolatileDB in a background thread, or that background thread
+ -- itself is killed with an async exception, we will shut down the
+ -- node anyway, so it is safe to close the VolatileDB here.
+ ExitCaseAbort ->
+ -- Only caused by 'throwE' or 'throwError' like functions, which
+ -- we don't use, but we use @IOLike m => m@ here.
+ error "impossible"
+ ExitCaseException ex
+ | Just (ApiMisuse{} :: VolatileDBError blk) <- fromException ex ->
+ Right ost
+ | otherwise ->
+ Left ex

-- | Append to the open state. Reads can happen concurrently with this
-- operation.
--
-- NOTE: This is safe in terms of throwing FsErrors.
appendOpenState ::
- forall blk m a. (IOLike m, Typeable blk, StandardHash blk)
- => VolatileDBEnv m blk
- -> (forall h. Eq h => HasFS m h -> ModifyOpenState m blk h a)
- -> m a
+ forall blk m a.
+ (IOLike m, Typeable blk, StandardHash blk) =>
+ VolatileDBEnv m blk ->
+ (forall h. Eq h => HasFS m h -> ModifyOpenState m blk h a) ->
+ m a
appendOpenState = modifyOpenState Append

-- | Write to the open state. No reads or appends can happen concurrently with this
@@ -204,10 +213,11 @@ appendOpenState = modifyOpenState Append
--
-- NOTE: This is safe in terms of throwing FsErrors.
writeOpenState ::
- forall blk m a. 
(IOLike m, Typeable blk, StandardHash blk) - => VolatileDBEnv m blk - -> (forall h. Eq h => HasFS m h -> ModifyOpenState m blk h a) - -> m a + forall blk m a. + (IOLike m, Typeable blk, StandardHash blk) => + VolatileDBEnv m blk -> + (forall h. Eq h => HasFS m h -> ModifyOpenState m blk h a) -> + m a writeOpenState = modifyOpenState Write -- | Perform an action that accesses the internal state of an open database. @@ -219,66 +229,66 @@ writeOpenState = modifyOpenState Write -- potentially inconsistent state. All other exceptions will leave the database -- open. withOpenState :: - forall blk m r. (IOLike m, StandardHash blk, Typeable blk) - => VolatileDBEnv m blk - -> (forall h. HasFS m h -> OpenState blk h -> m r) - -> m r -withOpenState VolatileDBEnv {hasFS = hasFS :: HasFS m h, varInternalState} action = do - (mr, ()) <- generalBracket open close (tryVolatileDB (Proxy @blk). access) - case mr of - Left e -> throwIO e - Right r -> return r - where - open :: m (OpenState blk h) - open = - atomically (RAWLock.unsafeAcquireReadAccess varInternalState) >>= \case - DbOpen ost -> return ost - DbClosed -> do - atomically $ RAWLock.unsafeReleaseReadAccess varInternalState - throwIO $ ApiMisuse @blk $ ClosedDBError Nothing + forall blk m r. + (IOLike m, StandardHash blk, Typeable blk) => + VolatileDBEnv m blk -> + (forall h. HasFS m h -> OpenState blk h -> m r) -> + m r +withOpenState VolatileDBEnv{hasFS = hasFS :: HasFS m h, varInternalState} action = do + (mr, ()) <- generalBracket open close (tryVolatileDB (Proxy @blk) . 
access) + case mr of + Left e -> throwIO e + Right r -> return r + where + open :: m (OpenState blk h) + open = + atomically (RAWLock.unsafeAcquireReadAccess varInternalState) >>= \case + DbOpen ost -> return ost + DbClosed -> do + atomically $ RAWLock.unsafeReleaseReadAccess varInternalState + throwIO $ ApiMisuse @blk $ ClosedDBError Nothing - close :: - OpenState blk h - -> ExitCase (Either (VolatileDBError blk) r) - -> m () - close ost ec - | Just ex <- shouldClose - = do - -- Poison the internal state lock with the exception that caused - -- us to close the VolatileDB so the next time somebody accesses - -- the VolatileDB, a 'ClosedDBError' containing the exception that - -- caused it is thrown. - mbCurState <- - RAWLock.poison varInternalState $ \_st -> - ApiMisuse @blk (ClosedDBError (Just ex)) - -- Close the open handles - wrapFsError (Proxy @blk) $ case mbCurState of - -- The handles in the most recent state - Just (DbOpen ost') -> closeOpenHandles hasFS ost' - -- The state was already closed, which is always followed by - -- closing the open handles, so nothing to do. - Just DbClosed -> return () - -- No current value, e.g., we interrupted a thread in a middle - -- of a write. Close the last open handles we know about. The - -- interrupted thread will clean up its own resources that - -- haven't yet made it into the state (thanks to - -- 'modifyWithTempRegistry'). 
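The close-or-release decision in the `close` function below can be read as a small pure function over the exit case of the bracketed action. A sketch under simplifying assumptions: `DbError` and this locally declared `ExitCase` are hypothetical stand-ins for `VolatileDBError` and the `ExitCase` type from the `exceptions` package.

```haskell
import Control.Exception (Exception, SomeException, toException)

-- Hypothetical simplifications of the real types.
data ExitCase r
  = ExitCaseSuccess r
  | ExitCaseException SomeException
  | ExitCaseAbort

data DbError = UnexpectedFailure String | ApiMisuse String
  deriving Show

instance Exception DbError

-- Close (and poison) only on an unexpected failure; aborts, other
-- exceptions, and API misuse leave the database open.
shouldClose :: ExitCase (Either DbError r) -> Maybe SomeException
shouldClose ec = case ec of
  ExitCaseAbort -> Nothing
  ExitCaseException _ -> Nothing
  ExitCaseSuccess (Right _) -> Nothing
  ExitCaseSuccess (Left e@(UnexpectedFailure _)) -> Just (toException e)
  ExitCaseSuccess (Left (ApiMisuse _)) -> Nothing
```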
- Nothing -> closeOpenHandles hasFS ost
-
- | otherwise
- = atomically $ RAWLock.unsafeReleaseReadAccess varInternalState
- where
- shouldClose :: Maybe SomeException
- shouldClose = case ec of
- ExitCaseAbort -> Nothing
- ExitCaseException _ex -> Nothing
- ExitCaseSuccess (Right _) -> Nothing
- -- In case of a VolatileDBError, close when unexpected
- ExitCaseSuccess (Left ex@UnexpectedFailure {}) -> Just (toException ex)
- ExitCaseSuccess (Left ApiMisuse {}) -> Nothing
-
- access :: OpenState blk h -> m r
- access = action hasFS
+ close ::
+ OpenState blk h ->
+ ExitCase (Either (VolatileDBError blk) r) ->
+ m ()
+ close ost ec
+ | Just ex <- shouldClose =
+ do
+ -- Poison the internal state lock with the exception that caused
+ -- us to close the VolatileDB so the next time somebody accesses
+ -- the VolatileDB, a 'ClosedDBError' containing the exception that
+ -- caused it is thrown.
+ mbCurState <-
+ RAWLock.poison varInternalState $ \_st ->
+ ApiMisuse @blk (ClosedDBError (Just ex))
+ -- Close the open handles
+ wrapFsError (Proxy @blk) $ case mbCurState of
+ -- The handles in the most recent state
+ Just (DbOpen ost') -> closeOpenHandles hasFS ost'
+ -- The state was already closed, which is always followed by
+ -- closing the open handles, so nothing to do.
+ Just DbClosed -> return ()
+ -- No current value, e.g., we interrupted a thread in the middle
+ -- of a write. Close the last open handles we know about. The
+ -- interrupted thread will clean up its own resources that
+ -- haven't yet made it into the state (thanks to
+ -- 'modifyWithTempRegistry'). 
+ Nothing -> closeOpenHandles hasFS ost + | otherwise = + atomically $ RAWLock.unsafeReleaseReadAccess varInternalState + where + shouldClose :: Maybe SomeException + shouldClose = case ec of + ExitCaseAbort -> Nothing + ExitCaseException _ex -> Nothing + ExitCaseSuccess (Right _) -> Nothing + -- In case of a VolatileDBError, close when unexpected + ExitCaseSuccess (Left ex@UnexpectedFailure{}) -> Just (toException ex) + ExitCaseSuccess (Left ApiMisuse{}) -> Nothing + + access :: OpenState blk h -> m r + access = action hasFS -- | Close the handles in the 'OpenState'. -- @@ -287,55 +297,56 @@ withOpenState VolatileDBEnv {hasFS = hasFS :: HasFS m h, varInternalState} actio -- NOTE: does not wrap 'FsError's and must be called within 'wrapFsError' or -- 'tryVolatileDB'. closeOpenHandles :: HasFS m h -> OpenState blk h -> m () -closeOpenHandles HasFS { hClose } OpenState { currentWriteHandle } = - hClose currentWriteHandle +closeOpenHandles HasFS{hClose} OpenState{currentWriteHandle} = + hClose currentWriteHandle mkOpenState :: - forall m blk h. - ( HasCallStack - , IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , HasNestedContent Header blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , Eq h - ) - => CodecConfig blk - -> HasFS m h - -> (blk -> Bool) - -> BlockValidationPolicy - -> Tracer m (TraceEvent blk) - -> BlocksPerFile - -> WithTempRegistry (OpenState blk h) m (OpenState blk h) + forall m blk h. + ( HasCallStack + , IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , HasNestedContent Header blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , Eq h + ) => + CodecConfig blk -> + HasFS m h -> + (blk -> Bool) -> + BlockValidationPolicy -> + Tracer m (TraceEvent blk) -> + BlocksPerFile -> + WithTempRegistry (OpenState blk h) m (OpenState blk h) mkOpenState ccfg hasFS@HasFS{..} checkInvariants validationPolicy tracer maxBlocksPerFile = do - lift $ createDirectoryIfMissing True dbDir - allFiles <- map toFsPath . 
Set.toList <$> lift (listDirectory dbDir) - filesWithIds <- lift $ logInvalidFiles $ parseAllFds allFiles - mkOpenStateHelper - ccfg - hasFS - checkInvariants - validationPolicy - tracer - maxBlocksPerFile - filesWithIds - where - -- | Logs about any invalid 'FsPath' and returns the valid ones. - logInvalidFiles :: ([(FileId, FsPath)], [FsPath]) -> m [(FileId, FsPath)] - logInvalidFiles (valid, invalid) = do - unless (null invalid) $ - traceWith tracer $ InvalidFileNames invalid - return valid - - dbDir = mkFsPath [] - - toFsPath :: String -> FsPath - toFsPath file = mkFsPath [file] + lift $ createDirectoryIfMissing True dbDir + allFiles <- map toFsPath . Set.toList <$> lift (listDirectory dbDir) + filesWithIds <- lift $ logInvalidFiles $ parseAllFds allFiles + mkOpenStateHelper + ccfg + hasFS + checkInvariants + validationPolicy + tracer + maxBlocksPerFile + filesWithIds + where + -- \| Logs about any invalid 'FsPath' and returns the valid ones. + logInvalidFiles :: ([(FileId, FsPath)], [FsPath]) -> m [(FileId, FsPath)] + logInvalidFiles (valid, invalid) = do + unless (null invalid) $ + traceWith tracer $ + InvalidFileNames invalid + return valid + + dbDir = mkFsPath [] + + toFsPath :: String -> FsPath + toFsPath file = mkFsPath [file] -- | Short-hand for all three index types type Indices blk = - ( Index blk - , ReverseIndex blk + ( Index blk + , ReverseIndex blk , SuccessorsIndex blk ) @@ -343,97 +354,101 @@ type Indices blk = -- -- It may create a new file to append new blocks to or use an existing one. mkOpenStateHelper :: - forall blk m h. - ( HasCallStack - , IOLike m - , GetPrevHash blk - , HasBinaryBlockInfo blk - , HasNestedContent Header blk - , DecodeDisk blk (Lazy.ByteString -> blk) - , Eq h - ) - => CodecConfig blk - -> HasFS m h - -> (blk -> Bool) - -> BlockValidationPolicy - -> Tracer m (TraceEvent blk) - -> BlocksPerFile - -> [(FileId, FsPath)] - -> WithTempRegistry (OpenState blk h) m (OpenState blk h) + forall blk m h. 
+ ( HasCallStack + , IOLike m + , GetPrevHash blk + , HasBinaryBlockInfo blk + , HasNestedContent Header blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , Eq h + ) => + CodecConfig blk -> + HasFS m h -> + (blk -> Bool) -> + BlockValidationPolicy -> + Tracer m (TraceEvent blk) -> + BlocksPerFile -> + [(FileId, FsPath)] -> + WithTempRegistry (OpenState blk h) m (OpenState blk h) mkOpenStateHelper ccfg hasFS checkIntegrity validationPolicy tracer maxBlocksPerFile files = do - (currentMap', currentRevMap', currentSuccMap') <- lift $ + (currentMap', currentRevMap', currentSuccMap') <- + lift $ foldM validateFile (Index.empty, Map.empty, Map.empty) files - let (currentWriteId, currentMap'') = case Index.lastFile currentMap' of - -- The DB is empty. Create a new file with 'FileId' 0 - Nothing - -> (0, Index.insert 0 FileInfo.empty currentMap') - Just (lastWriteId, lastFileInfo) - | FileInfo.isFull maxBlocksPerFile lastFileInfo - , let nextWriteId = lastWriteId + 1 + let (currentWriteId, currentMap'') = case Index.lastFile currentMap' of + -- The DB is empty. Create a new file with 'FileId' 0 + Nothing -> + (0, Index.insert 0 FileInfo.empty currentMap') + Just (lastWriteId, lastFileInfo) + | FileInfo.isFull maxBlocksPerFile lastFileInfo + , let nextWriteId = lastWriteId + 1 -> -- If the last file is full, we need to create a new one - -> (nextWriteId, Index.insert nextWriteId FileInfo.empty currentMap') - | otherwise + (nextWriteId, Index.insert nextWriteId FileInfo.empty currentMap') + | otherwise -> -- If the last file is not full, then use that one - -> (lastWriteId, currentMap') - - let currentWritePath = filePath currentWriteId - - currentWriteHandle <- - allocateTemp - (hOpen hasFS currentWritePath (AppendMode AllowExisting)) - (hClose' hasFS) - ((==) . 
currentWriteHandle) - currentWriteOffset <- lift $ hGetSize hasFS currentWriteHandle - - return OpenState { - currentWriteHandle = currentWriteHandle - , currentWritePath = currentWritePath - , currentWriteId = currentWriteId + (lastWriteId, currentMap') + + let currentWritePath = filePath currentWriteId + + currentWriteHandle <- + allocateTemp + (hOpen hasFS currentWritePath (AppendMode AllowExisting)) + (hClose' hasFS) + ((==) . currentWriteHandle) + currentWriteOffset <- lift $ hGetSize hasFS currentWriteHandle + + return + OpenState + { currentWriteHandle = currentWriteHandle + , currentWritePath = currentWritePath + , currentWriteId = currentWriteId , currentWriteOffset = currentWriteOffset - , currentMap = currentMap'' - , currentRevMap = currentRevMap' - , currentSuccMap = currentSuccMap' - , currentMaxSlotNo = FileInfo.maxSlotNoInFiles (Index.elems currentMap') + , currentMap = currentMap'' + , currentRevMap = currentRevMap' + , currentSuccMap = currentSuccMap' + , currentMaxSlotNo = FileInfo.maxSlotNoInFiles (Index.elems currentMap') } - where - validateFile :: Indices blk -> (FileId, FsPath) -> m (Indices blk) - validateFile (currentMap, currentRevMap, currentSuccMap) (fd, file) = do - (parsedBlocks, mErr) <- - parseBlockFile ccfg hasFS checkIntegrity validationPolicy file - whenJust mErr $ \(e, offset) -> - truncateError file e offset - - let (currentRevMap', acceptedBlocks, mErr') = - addToReverseIndex file currentRevMap parsedBlocks - -- We can find duplicate blocks when merging the parsed blocks with the - -- 'ReverseIndex', so we might have to truncate at this point too. 
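The duplicate detection mentioned here comes down to a map insert that reports the previous occupant. A standalone sketch of the `insertNew` helper defined later in this patch, using `Map.insertLookupWithKey` so the lookup and the insert are a single traversal:

```haskell
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

-- Insert a value only if the key is new; if the key is already present,
-- return the existing value (e.g. the first file the block was found in)
-- so the caller can report a duplicate. One traversal instead of a
-- Map.lookup followed by a Map.insert.
insertNew :: Ord k => k -> a -> Map k a -> Either a (Map k a)
insertNew k a m =
  case Map.insertLookupWithKey (\_k new _old -> new) k a m of
    (Nothing, m') -> Right m'
    (Just a', _) -> Left a'
```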
- whenJust mErr' $ \(e, offset) -> - truncateError file e offset - - let fileInfo = FileInfo.fromParsedBlockInfos acceptedBlocks - currentMap' = Index.insert fd fileInfo currentMap - currentSuccMap' = List.foldl' - (\succMap ParsedBlockInfo { pbiBlockInfo } -> - insertMapSet (biPrevHash pbiBlockInfo) (biHash pbiBlockInfo) succMap) + where + validateFile :: Indices blk -> (FileId, FsPath) -> m (Indices blk) + validateFile (currentMap, currentRevMap, currentSuccMap) (fd, file) = do + (parsedBlocks, mErr) <- + parseBlockFile ccfg hasFS checkIntegrity validationPolicy file + whenJust mErr $ \(e, offset) -> + truncateError file e offset + + let (currentRevMap', acceptedBlocks, mErr') = + addToReverseIndex file currentRevMap parsedBlocks + -- We can find duplicate blocks when merging the parsed blocks with the + -- 'ReverseIndex', so we might have to truncate at this point too. + whenJust mErr' $ \(e, offset) -> + truncateError file e offset + + let fileInfo = FileInfo.fromParsedBlockInfos acceptedBlocks + currentMap' = Index.insert fd fileInfo currentMap + currentSuccMap' = + List.foldl' + ( \succMap ParsedBlockInfo{pbiBlockInfo} -> + insertMapSet (biPrevHash pbiBlockInfo) (biHash pbiBlockInfo) succMap + ) currentSuccMap acceptedBlocks - return (currentMap', currentRevMap', currentSuccMap') - - truncateError :: FsPath -> ParseError blk -> BlockOffset -> m () - truncateError file e offset = do - traceWith tracer $ Truncate e file offset - -- The handle of the parser is closed at this point. We need - -- to reopen the file in 'AppendMode' now (parser opens with - -- 'ReadMode'). - -- - -- Note that no file is open at this point, so we can safely - -- open with 'AppendMode' any file, without the fear of opening - -- multiple concurrent writers, which is not allowed, or concurrent - -- read with truncate. 
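The reopen-and-truncate step the comment above describes can be sketched with plain `System.IO` instead of the `HasFS` abstraction. An assumption in this sketch: truncation is modelled with `hSetFileSize`, whereas the real code calls `hTruncate` on a `HasFS` handle.

```haskell
import System.IO

-- Reopen the file for appending (no other handle is open at this point)
-- and cut it back to the offset of the last successfully parsed block.
truncateTo :: FilePath -> Integer -> IO ()
truncateTo path offset =
  withFile path AppendMode $ \hndl ->
    hSetFileSize hndl offset
```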
- withFile hasFS file (AppendMode AllowExisting) $ \hndl -> - hTruncate hasFS hndl (unBlockOffset offset) + return (currentMap', currentRevMap', currentSuccMap') + + truncateError :: FsPath -> ParseError blk -> BlockOffset -> m () + truncateError file e offset = do + traceWith tracer $ Truncate e file offset + -- The handle of the parser is closed at this point. We need + -- to reopen the file in 'AppendMode' now (parser opens with + -- 'ReadMode'). + -- + -- Note that no file is open at this point, so we can safely + -- open with 'AppendMode' any file, without the fear of opening + -- multiple concurrent writers, which is not allowed, or concurrent + -- read with truncate. + withFile hasFS file (AppendMode AllowExisting) $ \hndl -> + hTruncate hasFS hndl (unBlockOffset offset) -- | For each block found in a parsed file, we insert its 'InternalBlockInfo' -- in the 'ReverseIndex'. @@ -448,56 +463,58 @@ mkOpenStateHelper ccfg hasFS checkIntegrity validationPolicy tracer maxBlocksPer -- the given list, or most often, the original input list. -- * In case of an error, the error and the offset to truncate to. addToReverseIndex :: - forall blk. HasHeader blk - => FsPath - -> ReverseIndex blk - -> [ParsedBlockInfo blk] - -> ( ReverseIndex blk - , [ParsedBlockInfo blk] - , Maybe (ParseError blk, BlockOffset) - ) + forall blk. + HasHeader blk => + FsPath -> + ReverseIndex blk -> + [ParsedBlockInfo blk] -> + ( ReverseIndex blk + , [ParsedBlockInfo blk] + , Maybe (ParseError blk, BlockOffset) + ) addToReverseIndex file = \revMap -> go revMap [] - where - go :: - ReverseIndex blk - -> [ParsedBlockInfo blk] -- accumulator of the accepted blocks. 
- -> [ParsedBlockInfo blk] - -> ( ReverseIndex blk - , [ParsedBlockInfo blk] - , Maybe (ParseError blk, BlockOffset) - ) - go revMap acc = \case - [] -> (revMap, reverse acc, Nothing) - parsedBlock:rest -> case insertNew biHash internalBlockInfo revMap of - Right revMap' -> go revMap' (parsedBlock:acc) rest - Left InternalBlockInfo { ibiFile = alreadyExistsHere } -> - ( revMap - , reverse acc - , Just (DuplicatedBlock biHash alreadyExistsHere file, offset) - ) - where - ParsedBlockInfo { - pbiBlockOffset = offset - , pbiBlockSize = size - , pbiBlockInfo = blockInfo@BlockInfo { biHash } - , pbiNestedCtxt = nestedCtxt - } = parsedBlock - internalBlockInfo = InternalBlockInfo { - ibiFile = file - , ibiBlockOffset = offset - , ibiBlockSize = size - , ibiBlockInfo = blockInfo - , ibiNestedCtxt = nestedCtxt - } - - -- | Insert the value at the key returning the updated map, unless there - -- already is a key at the same location, in which case we return the - -- original value. - -- - -- Should be more efficient than the combination of 'Map.lookup' and - -- 'Map.insert'. - insertNew :: forall k a. Ord k => k -> a -> Map k a -> Either a (Map k a) - insertNew k a m = - case Map.insertLookupWithKey (\_k new _old -> new) k a m of - (Nothing, m') -> Right m' - (Just a', _) -> Left a' + where + go :: + ReverseIndex blk -> + [ParsedBlockInfo blk] -> -- accumulator of the accepted blocks. 
+ [ParsedBlockInfo blk] -> + ( ReverseIndex blk + , [ParsedBlockInfo blk] + , Maybe (ParseError blk, BlockOffset) + ) + go revMap acc = \case + [] -> (revMap, reverse acc, Nothing) + parsedBlock : rest -> case insertNew biHash internalBlockInfo revMap of + Right revMap' -> go revMap' (parsedBlock : acc) rest + Left InternalBlockInfo{ibiFile = alreadyExistsHere} -> + ( revMap + , reverse acc + , Just (DuplicatedBlock biHash alreadyExistsHere file, offset) + ) + where + ParsedBlockInfo + { pbiBlockOffset = offset + , pbiBlockSize = size + , pbiBlockInfo = blockInfo@BlockInfo{biHash} + , pbiNestedCtxt = nestedCtxt + } = parsedBlock + internalBlockInfo = + InternalBlockInfo + { ibiFile = file + , ibiBlockOffset = offset + , ibiBlockSize = size + , ibiBlockInfo = blockInfo + , ibiNestedCtxt = nestedCtxt + } + + -- \| Insert the value at the key returning the updated map, unless there + -- already is a key at the same location, in which case we return the + -- original value. + -- + -- Should be more efficient than the combination of 'Map.lookup' and + -- 'Map.insert'. + insertNew :: forall k a. 
Ord k => k -> a -> Map k a -> Either a (Map k a) + insertNew k a m = + case Map.insertLookupWithKey (\_k new _old -> new) k a m of + (Nothing, m') -> Right m' + (Just a', _) -> Left a' diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Types.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Types.hs index bb00586c34..84ed2f6c0d 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Types.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Types.hs @@ -2,18 +2,23 @@ {-# LANGUAGE DeriveGeneric #-} {-# LANGUAGE StandaloneDeriving #-} -module Ouroboros.Consensus.Storage.VolatileDB.Impl.Types ( - -- * Blocks per file +module Ouroboros.Consensus.Storage.VolatileDB.Impl.Types + ( -- * Blocks per file mkBlocksPerFile , unBlocksPerFile + -- ** opaque , BlocksPerFile + -- * Block validation policy , BlockValidationPolicy (..) + -- * Parse error , ParseError (..) + -- * Tracing , TraceEvent (..) + -- * Internal indices , BlockOffset (..) , BlockSize (..) 
@@ -23,24 +28,23 @@ module Ouroboros.Consensus.Storage.VolatileDB.Impl.Types ( , SuccessorsIndex ) where - -import Data.Map.Strict (Map) -import Data.Set (Set) -import Data.Word (Word32, Word64) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Storage.VolatileDB.API (BlockInfo) -import Ouroboros.Consensus.Util.CBOR (ReadIncrementalErr (..)) -import System.FS.API.Types (FsPath) +import Data.Map.Strict (Map) +import Data.Set (Set) +import Data.Word (Word32, Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Storage.VolatileDB.API (BlockInfo) +import Ouroboros.Consensus.Util.CBOR (ReadIncrementalErr (..)) +import System.FS.API.Types (FsPath) {------------------------------------------------------------------------------ Blocks per file ------------------------------------------------------------------------------} -- | The maximum number of blocks to store per file. -newtype BlocksPerFile = BlocksPerFile { unBlocksPerFile :: Word32 } - deriving (Generic, Show) +newtype BlocksPerFile = BlocksPerFile {unBlocksPerFile :: Word32} + deriving (Generic, Show) -- | Create a 'BlocksPerFile'. -- @@ -56,10 +60,10 @@ mkBlocksPerFile n = BlocksPerFile n -- | When block validation is enabled, the parser checks for each block a -- number of properties and stops parsing if it finds any invalid blocks. -data BlockValidationPolicy = - NoValidation +data BlockValidationPolicy + = NoValidation | ValidateAll - deriving (Eq) + deriving Eq {------------------------------------------------------------------------------ Parse error @@ -70,19 +74,19 @@ data BlockValidationPolicy = -- -- Defined here instead of in the @Parser@ module because 'TraceEvent' depends -- on it. -data ParseError blk = +data ParseError blk + = -- | A block could not be parsed. BlockReadErr ReadIncrementalErr - -- ^ A block could not be parsed. 
- | BlockCorruptedErr (HeaderHash blk)
- -- ^ A block was corrupted, e.g., checking its signature and/or hash
+ | -- | A block was corrupted, e.g., checking its signature and/or hash
-- failed.
- | DuplicatedBlock (HeaderHash blk) FsPath FsPath
- -- ^ A block with the same hash occurred twice in the VolatileDB files.
+ BlockCorruptedErr (HeaderHash blk)
+ | -- | A block with the same hash occurred twice in the VolatileDB files.
--
-- We include the file in which it occurred first and the file in which it
-- occurred the second time. The two files can be the same.
+ DuplicatedBlock (HeaderHash blk) FsPath FsPath

-deriving instance StandardHash blk => Eq (ParseError blk)
+deriving instance StandardHash blk => Eq (ParseError blk)
deriving instance StandardHash blk => Show (ParseError blk)

{------------------------------------------------------------------------------
@@ -90,11 +94,11 @@ deriving instance StandardHash blk => Show (ParseError blk)
------------------------------------------------------------------------------}

data TraceEvent blk
- = DBAlreadyClosed
- | BlockAlreadyHere (HeaderHash blk)
- | Truncate (ParseError blk) FsPath BlockOffset
- | InvalidFileNames [FsPath]
- | DBClosed
+ = DBAlreadyClosed
+ | BlockAlreadyHere (HeaderHash blk)
+ | Truncate (ParseError blk) FsPath BlockOffset
+ | InvalidFileNames [FsPath]
+ | DBClosed
 deriving (Eq, Generic, Show)

{------------------------------------------------------------------------------
@@ -112,19 +116,19 @@ type ReverseIndex blk = Map (HeaderHash blk) (InternalBlockInfo blk)

-- a predecessor (set of successors).
type SuccessorsIndex blk = Map (ChainHash blk) (Set (HeaderHash blk))

-newtype BlockSize = BlockSize { unBlockSize :: Word32 }
+newtype BlockSize = BlockSize {unBlockSize :: Word32}
 deriving (Eq, Show, Generic, NoThunks)

-- | The offset at which a block is stored in a file. 
-newtype BlockOffset = BlockOffset { unBlockOffset :: Word64 } +newtype BlockOffset = BlockOffset {unBlockOffset :: Word64} deriving (Eq, Show, Generic, NoThunks) -- | The internal information the db keeps for each block. -data InternalBlockInfo blk = InternalBlockInfo { - ibiFile :: !FsPath - , ibiBlockOffset :: !BlockOffset - , ibiBlockSize :: !BlockSize - , ibiBlockInfo :: !(BlockInfo blk) - , ibiNestedCtxt :: !(SomeSecond (NestedCtxt Header) blk) - } +data InternalBlockInfo blk = InternalBlockInfo + { ibiFile :: !FsPath + , ibiBlockOffset :: !BlockOffset + , ibiBlockSize :: !BlockSize + , ibiBlockInfo :: !(BlockInfo blk) + , ibiNestedCtxt :: !(SomeSecond (NestedCtxt Header) blk) + } deriving (Generic, NoThunks) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Util.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Util.hs index 52e06af223..3d3d6f171a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Util.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Storage/VolatileDB/Impl/Util.hs @@ -3,38 +3,40 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Ouroboros.Consensus.Storage.VolatileDB.Impl.Util ( - -- * FileId utilities +module Ouroboros.Consensus.Storage.VolatileDB.Impl.Util + ( -- * FileId utilities filePath , findLastFd , parseAllFds , parseFd + -- * Exception handling , tryVolatileDB , wrapFsError + -- * Map of Set utilities , deleteMapSet , insertMapSet ) where -import Control.Monad -import Data.Bifunctor (first) -import Data.List (sortOn) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Proxy (Proxy (..)) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Text (Text) -import qualified Data.Text as T -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block (StandardHash) -import 
Ouroboros.Consensus.Storage.VolatileDB.API -import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types -import Ouroboros.Consensus.Util (lastMaybe) -import Ouroboros.Consensus.Util.IOLike -import System.FS.API.Types -import Text.Read (readMaybe) +import Control.Monad +import Data.Bifunctor (first) +import Data.List (sortOn) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Proxy (Proxy (..)) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Text (Text) +import Data.Text qualified as T +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block (StandardHash) +import Ouroboros.Consensus.Storage.VolatileDB.API +import Ouroboros.Consensus.Storage.VolatileDB.Impl.Types +import Ouroboros.Consensus.Util (lastMaybe) +import Ouroboros.Consensus.Util.IOLike +import System.FS.API.Types +import Text.Read (readMaybe) {------------------------------------------------------------------------------ FileId utilities @@ -42,15 +44,16 @@ import Text.Read (readMaybe) parseFd :: FsPath -> Maybe FileId parseFd file = - parseFilename <=< lastMaybe $ fsPathToList file - where - parseFilename :: Text -> Maybe FileId - parseFilename = readMaybe - . T.unpack - . snd - . T.breakOnEnd "-" - . fst - . T.breakOn "." + parseFilename <=< lastMaybe $ fsPathToList file + where + parseFilename :: Text -> Maybe FileId + parseFilename = + readMaybe + . T.unpack + . snd + . T.breakOnEnd "-" + . fst + . T.breakOn "." -- | Parses the 'FileId' of each 'FsPath' and zips them together. Returns -- the results sorted on the 'FileId'. @@ -58,10 +61,10 @@ parseFd file = -- Return separately any 'FsPath' which failed to parse. parseAllFds :: [FsPath] -> ([(FileId, FsPath)], [FsPath]) parseAllFds = first (sortOn fst) . 
foldr judge ([], []) - where - judge fsPath (parsed, notParsed) = case parseFd fsPath of - Nothing -> (parsed, fsPath : notParsed) - Just fileId -> ((fileId, fsPath) : parsed, notParsed) + where + judge fsPath (parsed, notParsed) = case parseFd fsPath of + Nothing -> (parsed, fsPath : notParsed) + Just fileId -> ((fileId, fsPath) : parsed, notParsed) -- | This also returns any 'FsPath' which failed to parse. findLastFd :: [FsPath] -> (Maybe FileId, [FsPath]) @@ -75,10 +78,11 @@ filePath fd = mkFsPath ["blocks-" ++ show fd ++ ".dat"] ------------------------------------------------------------------------------} wrapFsError :: - forall m a blk. (MonadCatch m, StandardHash blk, Typeable blk) - => Proxy blk - -> m a - -> m a + forall m a blk. + (MonadCatch m, StandardHash blk, Typeable blk) => + Proxy blk -> + m a -> + m a wrapFsError _ = handle $ throwIO . UnexpectedFailure @blk . FileSystemError -- | Execute an action and catch the 'VolatileDBError' and 'FsError' that can @@ -89,10 +93,11 @@ wrapFsError _ = handle $ throwIO . UnexpectedFailure @blk . FileSystemError -- and catch the 'VolatileDBError' and the 'FsError' (wrapped in the former) -- it may thrown. tryVolatileDB :: - forall m a blk. (MonadCatch m, Typeable blk, StandardHash blk) - => Proxy blk - -> m a - -> m (Either (VolatileDBError blk) a) + forall m a blk. + (MonadCatch m, Typeable blk, StandardHash blk) => + Proxy blk -> + m a -> + m (Either (VolatileDBError blk) a) tryVolatileDB pb = try . wrapFsError pb {------------------------------------------------------------------------------ @@ -100,31 +105,33 @@ tryVolatileDB pb = try . wrapFsError pb ------------------------------------------------------------------------------} insertMapSet :: - forall k v. (Ord k, Ord v) - => k - -> v - -> Map k (Set v) - -> Map k (Set v) + forall k v. 
+ (Ord k, Ord v) => + k -> + v -> + Map k (Set v) -> + Map k (Set v) insertMapSet k v = Map.alter ins k - where - ins :: Maybe (Set v) -> Maybe (Set v) - ins = \case - Nothing -> Just $ Set.singleton v - Just set -> Just $ Set.insert v set + where + ins :: Maybe (Set v) -> Maybe (Set v) + ins = \case + Nothing -> Just $ Set.singleton v + Just set -> Just $ Set.insert v set deleteMapSet :: - forall k v. (Ord k, Ord v) - => k - -> v - -> Map k (Set v) - -> Map k (Set v) + forall k v. + (Ord k, Ord v) => + k -> + v -> + Map k (Set v) -> + Map k (Set v) deleteMapSet k v = Map.update del k - where - del :: Set v -> Maybe (Set v) - del set - | Set.null set' - = Nothing - | otherwise - = Just set' - where - set' = Set.delete v set + where + del :: Set v -> Maybe (Set v) + del set + | Set.null set' = + Nothing + | otherwise = + Just set' + where + set' = Set.delete v set diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ticked.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ticked.hs index e5dd25c64a..14bb38a0ab 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ticked.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Ticked.hs @@ -10,10 +10,10 @@ module Ouroboros.Consensus.Ticked (Ticked (..)) where -import Data.Kind (Type) -import Data.SOP.BasicFunctors -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block.Abstract +import Data.Kind (Type) +import Data.SOP.BasicFunctors +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block.Abstract {------------------------------------------------------------------------------- Ticked state @@ -48,7 +48,7 @@ data family Ticked st -- Standard instance for use with trivial state data instance Ticked () = TickedTrivial - deriving (Show) + deriving Show type instance HeaderHash (Ticked l) = HeaderHash l @@ -56,10 +56,11 @@ type instance HeaderHash (Ticked l) = HeaderHash l Forwarding type class instances 
-------------------------------------------------------------------------------} -deriving newtype instance {-# OVERLAPPING #-} - Show (Ticked (f a)) - => Show ((Ticked :.: f) (a :: Type)) +deriving newtype instance + {-# OVERLAPPING #-} + Show (Ticked (f a)) => + Show ((Ticked :.: f) (a :: Type)) deriving newtype instance - NoThunks (Ticked (f a)) - => NoThunks ((Ticked :.: f) a) + NoThunks (Ticked (f a)) => + NoThunks ((Ticked :.: f) a) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/TypeFamilyWrappers.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/TypeFamilyWrappers.hs index 8e8418be4f..8ee0c52500 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/TypeFamilyWrappers.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/TypeFamilyWrappers.hs @@ -5,8 +5,8 @@ {-# LANGUAGE UndecidableInstances #-} -- | Newtypes around type families so that they can be partially applied -module Ouroboros.Consensus.TypeFamilyWrappers ( - -- * Block based +module Ouroboros.Consensus.TypeFamilyWrappers + ( -- * Block based WrapApplyTxErr (..) , WrapCannotForge (..) , WrapEnvelopeErr (..) @@ -26,6 +26,7 @@ module Ouroboros.Consensus.TypeFamilyWrappers ( , WrapTxMeasure (..) , WrapTxOut (..) , WrapValidatedGenTx (..) + -- * Protocol based , WrapCanBeLeader (..) , WrapChainDepState (..) @@ -36,42 +37,44 @@ module Ouroboros.Consensus.TypeFamilyWrappers ( , WrapSelectView (..) , WrapValidateView (..) , WrapValidationErr (..) + -- * Versioning , WrapNodeToClientVersion (..) , WrapNodeToNodeVersion (..) + -- * Type family instances , Ticked (..) 
) where -import Codec.Serialise (Serialise) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Protocol.Abstract +import Codec.Serialise (Serialise) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Protocol.Abstract {------------------------------------------------------------------------------- Block based -------------------------------------------------------------------------------} -newtype WrapApplyTxErr blk = WrapApplyTxErr { unwrapApplyTxErr :: ApplyTxErr blk } -newtype WrapCannotForge blk = WrapCannotForge { unwrapCannotForge :: CannotForge blk } -newtype WrapEnvelopeErr blk = WrapEnvelopeErr { unwrapEnvelopeErr :: OtherHeaderEnvelopeError blk } -newtype WrapForgeStateInfo blk = WrapForgeStateInfo { unwrapForgeStateInfo :: ForgeStateInfo blk } -newtype WrapForgeStateUpdateError blk = WrapForgeStateUpdateError { unwrapForgeStateUpdateError :: ForgeStateUpdateError blk } -newtype WrapGenTxId blk = WrapGenTxId { unwrapGenTxId :: GenTxId blk } -newtype WrapHeaderHash blk = WrapHeaderHash { unwrapHeaderHash :: HeaderHash blk } -newtype WrapLedgerConfig blk = WrapLedgerConfig { unwrapLedgerConfig :: LedgerConfig blk } -newtype WrapLedgerEvent blk = WrapLedgerEvent { unwrapLedgerEvent :: AuxLedgerEvent (LedgerState blk) } -newtype WrapLedgerErr blk = WrapLedgerErr { unwrapLedgerErr :: LedgerError blk } -newtype WrapLedgerUpdate blk = WrapLedgerUpdate { unwrapLedgerUpdate :: LedgerUpdate blk } -newtype 
WrapLedgerWarning blk = WrapLedgerWarning { unwrapLedgerWarning :: LedgerWarning blk } -newtype WrapTentativeHeaderState blk = WrapTentativeHeaderState { unwrapTentativeHeaderState :: TentativeHeaderState blk } -newtype WrapTentativeHeaderView blk = WrapTentativeHeaderView { unwrapTentativeHeaderView :: TentativeHeaderView blk } -newtype WrapTipInfo blk = WrapTipInfo { unwrapTipInfo :: TipInfo blk } +newtype WrapApplyTxErr blk = WrapApplyTxErr {unwrapApplyTxErr :: ApplyTxErr blk} +newtype WrapCannotForge blk = WrapCannotForge {unwrapCannotForge :: CannotForge blk} +newtype WrapEnvelopeErr blk = WrapEnvelopeErr {unwrapEnvelopeErr :: OtherHeaderEnvelopeError blk} +newtype WrapForgeStateInfo blk = WrapForgeStateInfo {unwrapForgeStateInfo :: ForgeStateInfo blk} +newtype WrapForgeStateUpdateError blk = WrapForgeStateUpdateError {unwrapForgeStateUpdateError :: ForgeStateUpdateError blk} +newtype WrapGenTxId blk = WrapGenTxId {unwrapGenTxId :: GenTxId blk} +newtype WrapHeaderHash blk = WrapHeaderHash {unwrapHeaderHash :: HeaderHash blk} +newtype WrapLedgerConfig blk = WrapLedgerConfig {unwrapLedgerConfig :: LedgerConfig blk} +newtype WrapLedgerEvent blk = WrapLedgerEvent {unwrapLedgerEvent :: AuxLedgerEvent (LedgerState blk)} +newtype WrapLedgerErr blk = WrapLedgerErr {unwrapLedgerErr :: LedgerError blk} +newtype WrapLedgerUpdate blk = WrapLedgerUpdate {unwrapLedgerUpdate :: LedgerUpdate blk} +newtype WrapLedgerWarning blk = WrapLedgerWarning {unwrapLedgerWarning :: LedgerWarning blk} +newtype WrapTentativeHeaderState blk = WrapTentativeHeaderState {unwrapTentativeHeaderState :: TentativeHeaderState blk} +newtype WrapTentativeHeaderView blk = WrapTentativeHeaderView {unwrapTentativeHeaderView :: TentativeHeaderView blk} +newtype WrapTipInfo blk = WrapTipInfo {unwrapTipInfo :: TipInfo blk} -- | A data family wrapper for @'Validated' . 'GenTx'@ -- @@ -82,71 +85,78 @@ newtype WrapTipInfo blk = WrapTipInfo { unwrapTipInf -- :.: g)@ requires @'Data.Functor.Classes.Eq1' f)). 
The bespoke composition -- 'WrapValidatedGenTx' therefore serves much the same purpose as the other -- wrappers in this module. -newtype WrapValidatedGenTx blk = WrapValidatedGenTx { unwrapValidatedGenTx :: Validated (GenTx blk) } -newtype WrapTxMeasure blk = WrapTxMeasure { unwrapTxMeasure :: TxMeasure blk } +newtype WrapValidatedGenTx blk = WrapValidatedGenTx {unwrapValidatedGenTx :: Validated (GenTx blk)} + +newtype WrapTxMeasure blk = WrapTxMeasure {unwrapTxMeasure :: TxMeasure blk} -newtype WrapTxIn blk = WrapTxIn { unwrapTxIn :: TxIn (LedgerState blk) } -newtype WrapTxOut blk = WrapTxOut { unwrapTxOut :: TxOut (LedgerState blk) } +newtype WrapTxIn blk = WrapTxIn {unwrapTxIn :: TxIn (LedgerState blk)} +newtype WrapTxOut blk = WrapTxOut {unwrapTxOut :: TxOut (LedgerState blk)} {------------------------------------------------------------------------------- Consensus based -------------------------------------------------------------------------------} -newtype WrapCanBeLeader blk = WrapCanBeLeader { unwrapCanBeLeader :: CanBeLeader (BlockProtocol blk) } -newtype WrapChainDepState blk = WrapChainDepState { unwrapChainDepState :: ChainDepState (BlockProtocol blk) } -newtype WrapChainOrderConfig blk = WrapChainOrderConfig { unwrapChainOrderConfig :: ChainOrderConfig (SelectView (BlockProtocol blk)) } -newtype WrapConsensusConfig blk = WrapConsensusConfig { unwrapConsensusConfig :: ConsensusConfig (BlockProtocol blk) } -newtype WrapIsLeader blk = WrapIsLeader { unwrapIsLeader :: IsLeader (BlockProtocol blk) } -newtype WrapLedgerView blk = WrapLedgerView { unwrapLedgerView :: LedgerView (BlockProtocol blk) } -newtype WrapSelectView blk = WrapSelectView { unwrapSelectView :: SelectView (BlockProtocol blk) } -newtype WrapValidateView blk = WrapValidateView { unwrapValidateView :: ValidateView (BlockProtocol blk) } -newtype WrapValidationErr blk = WrapValidationErr { unwrapValidationErr :: ValidationErr (BlockProtocol blk) } +newtype WrapCanBeLeader blk = 
WrapCanBeLeader {unwrapCanBeLeader :: CanBeLeader (BlockProtocol blk)} +newtype WrapChainDepState blk = WrapChainDepState {unwrapChainDepState :: ChainDepState (BlockProtocol blk)} +newtype WrapChainOrderConfig blk = WrapChainOrderConfig {unwrapChainOrderConfig :: ChainOrderConfig (SelectView (BlockProtocol blk))} +newtype WrapConsensusConfig blk = WrapConsensusConfig {unwrapConsensusConfig :: ConsensusConfig (BlockProtocol blk)} +newtype WrapIsLeader blk = WrapIsLeader {unwrapIsLeader :: IsLeader (BlockProtocol blk)} +newtype WrapLedgerView blk = WrapLedgerView {unwrapLedgerView :: LedgerView (BlockProtocol blk)} +newtype WrapSelectView blk = WrapSelectView {unwrapSelectView :: SelectView (BlockProtocol blk)} +newtype WrapValidateView blk = WrapValidateView {unwrapValidateView :: ValidateView (BlockProtocol blk)} +newtype WrapValidationErr blk = WrapValidationErr {unwrapValidationErr :: ValidationErr (BlockProtocol blk)} {------------------------------------------------------------------------------- Versioning -------------------------------------------------------------------------------} -newtype WrapNodeToNodeVersion blk = WrapNodeToNodeVersion { unwrapNodeToNodeVersion :: BlockNodeToNodeVersion blk } -newtype WrapNodeToClientVersion blk = WrapNodeToClientVersion { unwrapNodeToClientVersion :: BlockNodeToClientVersion blk } +newtype WrapNodeToNodeVersion blk = WrapNodeToNodeVersion {unwrapNodeToNodeVersion :: BlockNodeToNodeVersion blk} +newtype WrapNodeToClientVersion blk = WrapNodeToClientVersion {unwrapNodeToClientVersion :: BlockNodeToClientVersion blk} {------------------------------------------------------------------------------- Instances -------------------------------------------------------------------------------} -deriving instance Eq (ApplyTxErr blk ) => Eq (WrapApplyTxErr blk) -deriving instance Eq (GenTxId blk ) => Eq (WrapGenTxId blk) -deriving instance Eq (LedgerError blk ) => Eq (WrapLedgerErr blk) -deriving instance Eq (LedgerUpdate blk ) 
=> Eq (WrapLedgerUpdate blk) -deriving instance Eq (LedgerWarning blk ) => Eq (WrapLedgerWarning blk) -deriving instance Eq (OtherHeaderEnvelopeError blk ) => Eq (WrapEnvelopeErr blk) -deriving instance Eq (TentativeHeaderState blk ) => Eq (WrapTentativeHeaderState blk) -deriving instance Eq (TentativeHeaderView blk ) => Eq (WrapTentativeHeaderView blk) -deriving instance Eq (TipInfo blk ) => Eq (WrapTipInfo blk) -deriving instance Eq (Validated (GenTx blk)) => Eq (WrapValidatedGenTx blk) - -deriving instance Ord (GenTxId blk) => Ord (WrapGenTxId blk) +deriving instance Eq (ApplyTxErr blk) => Eq (WrapApplyTxErr blk) +deriving instance Eq (GenTxId blk) => Eq (WrapGenTxId blk) +deriving instance Eq (LedgerError blk) => Eq (WrapLedgerErr blk) +deriving instance Eq (LedgerUpdate blk) => Eq (WrapLedgerUpdate blk) +deriving instance Eq (LedgerWarning blk) => Eq (WrapLedgerWarning blk) +deriving instance Eq (OtherHeaderEnvelopeError blk) => Eq (WrapEnvelopeErr blk) +deriving instance Eq (TentativeHeaderState blk) => Eq (WrapTentativeHeaderState blk) +deriving instance Eq (TentativeHeaderView blk) => Eq (WrapTentativeHeaderView blk) +deriving instance Eq (TipInfo blk) => Eq (WrapTipInfo blk) +deriving instance Eq (Validated (GenTx blk)) => Eq (WrapValidatedGenTx blk) + +deriving instance Ord (GenTxId blk) => Ord (WrapGenTxId blk) deriving instance Ord (TentativeHeaderState blk) => Ord (WrapTentativeHeaderState blk) -deriving instance Show (ApplyTxErr blk ) => Show (WrapApplyTxErr blk) -deriving instance Show (CannotForge blk ) => Show (WrapCannotForge blk) -deriving instance Show (ForgeStateInfo blk ) => Show (WrapForgeStateInfo blk) -deriving instance Show (ForgeStateUpdateError blk ) => Show (WrapForgeStateUpdateError blk) -deriving instance Show (GenTxId blk ) => Show (WrapGenTxId blk) -deriving instance Show (LedgerError blk ) => Show (WrapLedgerErr blk) -deriving instance Show (LedgerUpdate blk ) => Show (WrapLedgerUpdate blk) -deriving instance Show (LedgerWarning 
blk ) => Show (WrapLedgerWarning blk) -deriving instance Show (OtherHeaderEnvelopeError blk ) => Show (WrapEnvelopeErr blk) -deriving instance Show (TentativeHeaderState blk ) => Show (WrapTentativeHeaderState blk) -deriving instance Show (TentativeHeaderView blk ) => Show (WrapTentativeHeaderView blk) -deriving instance Show (TipInfo blk ) => Show (WrapTipInfo blk) -deriving instance Show (Validated (GenTx blk)) => Show (WrapValidatedGenTx blk) - -deriving instance NoThunks (GenTxId blk ) => NoThunks (WrapGenTxId blk) -deriving instance NoThunks (LedgerError blk ) => NoThunks (WrapLedgerErr blk) -deriving instance NoThunks (OtherHeaderEnvelopeError blk ) => NoThunks (WrapEnvelopeErr blk) -deriving instance NoThunks (TentativeHeaderState blk ) => NoThunks (WrapTentativeHeaderState blk) -deriving instance NoThunks (TipInfo blk ) => NoThunks (WrapTipInfo blk) -deriving instance NoThunks (Validated (GenTx blk)) => NoThunks (WrapValidatedGenTx blk) +deriving instance Show (ApplyTxErr blk) => Show (WrapApplyTxErr blk) +deriving instance Show (CannotForge blk) => Show (WrapCannotForge blk) +deriving instance Show (ForgeStateInfo blk) => Show (WrapForgeStateInfo blk) +deriving instance Show (ForgeStateUpdateError blk) => Show (WrapForgeStateUpdateError blk) +deriving instance Show (GenTxId blk) => Show (WrapGenTxId blk) +deriving instance Show (LedgerError blk) => Show (WrapLedgerErr blk) +deriving instance Show (LedgerUpdate blk) => Show (WrapLedgerUpdate blk) +deriving instance Show (LedgerWarning blk) => Show (WrapLedgerWarning blk) +deriving instance Show (OtherHeaderEnvelopeError blk) => Show (WrapEnvelopeErr blk) +deriving instance Show (TentativeHeaderState blk) => Show (WrapTentativeHeaderState blk) +deriving instance Show (TentativeHeaderView blk) => Show (WrapTentativeHeaderView blk) +deriving instance Show (TipInfo blk) => Show (WrapTipInfo blk) +deriving instance Show (Validated (GenTx blk)) => Show (WrapValidatedGenTx blk) + +deriving instance + NoThunks 
(GenTxId blk) => NoThunks (WrapGenTxId blk) +deriving instance + NoThunks (LedgerError blk) => NoThunks (WrapLedgerErr blk) +deriving instance + NoThunks (OtherHeaderEnvelopeError blk) => NoThunks (WrapEnvelopeErr blk) +deriving instance + NoThunks (TentativeHeaderState blk) => NoThunks (WrapTentativeHeaderState blk) +deriving instance + NoThunks (TipInfo blk) => NoThunks (WrapTipInfo blk) +deriving instance + NoThunks (Validated (GenTx blk)) => NoThunks (WrapValidatedGenTx blk) deriving instance Show (TxIn (LedgerState blk)) => Show (WrapTxIn blk) deriving instance Eq (TxIn (LedgerState blk)) => Eq (WrapTxIn blk) @@ -163,7 +173,7 @@ deriving instance NoThunks (TxOut (LedgerState blk)) => NoThunks (WrapTxOut blk) -------------------------------------------------------------------------------} deriving instance Eq (ChainDepState (BlockProtocol blk)) => Eq (WrapChainDepState blk) -deriving instance Eq (SelectView (BlockProtocol blk)) => Eq (WrapSelectView blk) +deriving instance Eq (SelectView (BlockProtocol blk)) => Eq (WrapSelectView blk) deriving instance Eq (ValidationErr (BlockProtocol blk)) => Eq (WrapValidationErr blk) deriving instance Ord (SelectView (BlockProtocol blk)) => Ord (WrapSelectView blk) @@ -171,22 +181,22 @@ deriving instance Ord (SelectView (BlockProtocol blk)) => Ord (WrapSelectView bl deriving instance ChainOrder (SelectView (BlockProtocol blk)) => ChainOrder (WrapSelectView blk) deriving instance Show (ChainDepState (BlockProtocol blk)) => Show (WrapChainDepState blk) -deriving instance Show (LedgerView (BlockProtocol blk)) => Show (WrapLedgerView blk) -deriving instance Show (SelectView (BlockProtocol blk)) => Show (WrapSelectView blk) +deriving instance Show (LedgerView (BlockProtocol blk)) => Show (WrapLedgerView blk) +deriving instance Show (SelectView (BlockProtocol blk)) => Show (WrapSelectView blk) deriving instance Show (ValidationErr (BlockProtocol blk)) => Show (WrapValidationErr blk) deriving instance NoThunks (ChainDepState 
(BlockProtocol blk)) => NoThunks (WrapChainDepState blk) -deriving instance NoThunks (SelectView (BlockProtocol blk)) => NoThunks (WrapSelectView blk) +deriving instance NoThunks (SelectView (BlockProtocol blk)) => NoThunks (WrapSelectView blk) deriving instance NoThunks (ValidationErr (BlockProtocol blk)) => NoThunks (WrapValidationErr blk) {------------------------------------------------------------------------------- Versioning -------------------------------------------------------------------------------} -deriving instance Show (BlockNodeToNodeVersion blk) => Show (WrapNodeToNodeVersion blk) +deriving instance Show (BlockNodeToNodeVersion blk) => Show (WrapNodeToNodeVersion blk) deriving instance Show (BlockNodeToClientVersion blk) => Show (WrapNodeToClientVersion blk) -deriving instance Eq (BlockNodeToNodeVersion blk) => Eq (WrapNodeToNodeVersion blk) +deriving instance Eq (BlockNodeToNodeVersion blk) => Eq (WrapNodeToNodeVersion blk) deriving instance Eq (BlockNodeToClientVersion blk) => Eq (WrapNodeToClientVersion blk) {------------------------------------------------------------------------------- @@ -195,9 +205,10 @@ deriving instance Eq (BlockNodeToClientVersion blk) => Eq (WrapNodeToClientVersi These are primarily useful in testing. 
-------------------------------------------------------------------------------} -deriving instance Serialise (GenTxId blk) => Serialise (WrapGenTxId blk) -deriving instance Serialise (ChainDepState (BlockProtocol blk)) => Serialise (WrapChainDepState blk) -deriving instance Serialise (TipInfo blk) => Serialise (WrapTipInfo blk) +deriving instance Serialise (GenTxId blk) => Serialise (WrapGenTxId blk) +deriving instance + Serialise (ChainDepState (BlockProtocol blk)) => Serialise (WrapChainDepState blk) +deriving instance Serialise (TipInfo blk) => Serialise (WrapTipInfo blk) {------------------------------------------------------------------------------- Ticking @@ -205,6 +216,6 @@ deriving instance Serialise (TipInfo blk) => Serialise (Wr These are just forwarding instances -------------------------------------------------------------------------------} -newtype instance Ticked (WrapChainDepState blk) = WrapTickedChainDepState { - unwrapTickedChainDepState :: Ticked (ChainDepState (BlockProtocol blk)) - } +newtype instance Ticked (WrapChainDepState blk) = WrapTickedChainDepState + { unwrapTickedChainDepState :: Ticked (ChainDepState (BlockProtocol blk)) + } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util.hs index 7565e310ca..967c86bf38 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util.hs @@ -16,20 +16,22 @@ {-# LANGUAGE UndecidableInstances #-} -- | Miscellaneous utilities -module Ouroboros.Consensus.Util ( - -- * Type-level utility +module Ouroboros.Consensus.Util + ( -- * Type-level utility Empty , ShowProxy (..) , Some (..) , SomePair (..) , SomeSecond (..) 
, mustBeRight + -- * Folding variations , foldlM' , nTimes , nTimesM , repeatedly , repeatedlyM + -- * Lists , allEqual , chunks @@ -42,23 +44,30 @@ module Ouroboros.Consensus.Util ( , splits , takeLast , takeUntil + -- * Safe variants of existing base functions , lastMaybe , safeMaximum , safeMaximumBy , safeMaximumOn + -- * Hashes , hashFromBytesE , hashFromBytesShortE + -- * Bytestrings , byteStringChunks , lazyByteStringChunks + -- * Monadic utilities , whenJust + -- * Test code , checkThat + -- * Sets , allDisjoint + -- * Composition , (......:) , (.....:) @@ -66,12 +75,15 @@ module Ouroboros.Consensus.Util ( , (...:) , (..:) , (.:) + -- * Product , pairFst , pairSnd + -- * Miscellaneous , eitherToMaybe , fib + -- * Electric code , Electric , Fuse @@ -79,37 +91,42 @@ module Ouroboros.Consensus.Util ( , electric , newFuse , withFuse + -- * Type-safe boolean flags , Flag (..) ) where -import Cardano.Crypto.Hash (Hash, HashAlgorithm, hashFromBytes, - hashFromBytesShort) -import Control.Monad (unless) -import Control.Monad.Class.MonadThrow -import Control.Monad.Trans.Class -import qualified Data.ByteString as Strict -import qualified Data.ByteString.Lazy as Lazy -import Data.ByteString.Short (ShortByteString) -import Data.Foldable (asum, toList) -import Data.Function (on) -import Data.Functor.Identity -import Data.Functor.Product -import Data.Kind (Type) -import Data.List as List (foldl', maximumBy) -import Data.List.NonEmpty (NonEmpty (..), (<|)) -import Data.Maybe (fromMaybe) -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Text (Text) -import Data.Void -import Data.Word (Word64) -import GHC.Generics (Generic) -import GHC.Stack -import GHC.TypeLits (Symbol) -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Network.Protocol.LocalStateQuery.Codec (Some (..)) -import Ouroboros.Network.Util.ShowProxy (ShowProxy (..)) +import Cardano.Crypto.Hash + ( Hash + , HashAlgorithm + , hashFromBytes + , hashFromBytesShort + ) +import Control.Monad 
(unless) +import Control.Monad.Class.MonadThrow +import Control.Monad.Trans.Class +import Data.ByteString qualified as Strict +import Data.ByteString.Lazy qualified as Lazy +import Data.ByteString.Short (ShortByteString) +import Data.Foldable (asum, toList) +import Data.Function (on) +import Data.Functor.Identity +import Data.Functor.Product +import Data.Kind (Type) +import Data.List as List (foldl', maximumBy) +import Data.List.NonEmpty (NonEmpty (..), (<|)) +import Data.Maybe (fromMaybe) +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Text (Text) +import Data.Void +import Data.Word (Word64) +import GHC.Generics (Generic) +import GHC.Stack +import GHC.TypeLits (Symbol) +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Network.Protocol.LocalStateQuery.Codec (Some (..)) +import Ouroboros.Network.Util.ShowProxy (ShowProxy (..)) {------------------------------------------------------------------------------- Type-level utility @@ -120,7 +137,7 @@ instance Empty a -- | Pair of functors instantiated to the /same/ existential data SomePair (f :: k -> Type) (g :: k -> Type) where - SomePair :: f a -> g a -> SomePair f g + SomePair :: f a -> g a -> SomePair f g -- | Hide the second type argument of some functor -- @@ -131,7 +148,7 @@ data SomeSecond f a where SomeSecond :: !(f a b) -> SomeSecond f a mustBeRight :: Either Void a -> a -mustBeRight (Left v) = absurd v +mustBeRight (Left v) = absurd v mustBeRight (Right a) = a {------------------------------------------------------------------------------- @@ -140,10 +157,10 @@ mustBeRight (Right a) = a foldlM' :: forall m a b. Monad m => (b -> a -> m b) -> b -> [a] -> m b foldlM' f = go - where - go :: b -> [a] -> m b - go !acc [] = return acc - go !acc (x:xs) = f acc x >>= \acc' -> go acc' xs + where + go :: b -> [a] -> m b + go !acc [] = return acc + go !acc (x : xs) = f acc x >>= \acc' -> go acc' xs repeatedly :: (a -> b -> b) -> ([a] -> b -> b) repeatedly = flip . List.foldl' . 
flip @@ -159,10 +176,10 @@ nTimes f n = runIdentity . nTimesM (Identity . f) n -- application is forced. nTimesM :: forall m a. Monad m => (a -> m a) -> Word64 -> (a -> m a) nTimesM f = go - where - go :: Word64 -> (a -> m a) - go 0 !x = return x - go n !x = go (n - 1) =<< f x + where + go :: Word64 -> (a -> m a) + go 0 !x = return x + go n !x = go (n - 1) =<< f x {------------------------------------------------------------------------------- Lists @@ -170,8 +187,9 @@ nTimesM f = go chunks :: Int -> [a] -> [[a]] chunks _ [] = [] -chunks n xs = let (chunk, xs') = splitAt n xs - in chunk : chunks n xs' +chunks n xs = + let (chunk, xs') = splitAt n xs + in chunk : chunks n xs' -- | All possible ways to pick on element from a list, preserving order -- @@ -180,17 +198,18 @@ chunks n xs = let (chunk, xs') = splitAt n xs -- > , ([1,2], 3, []) -- > ] pickOne :: [a] -> [([a], a, [a])] -pickOne [] = [] -pickOne (x:xs) = ([], x, xs) - : map (\(as, b, cs) -> (x:as, b, cs)) (pickOne xs) +pickOne [] = [] +pickOne (x : xs) = + ([], x, xs) + : map (\(as, b, cs) -> (x : as, b, cs)) (pickOne xs) -- | Mark the last element of the list as 'Right' markLast :: [a] -> [Either a a] markLast = go - where - go [] = [] - go [x] = [Right x] - go (x:xs) = Left x : go xs + where + go [] = [] + go [x] = [Right x] + go (x : xs) = Left x : go xs -- | Take the last @n@ elements takeLast :: Word64 -> [a] -> [a] @@ -204,9 +223,9 @@ firstJust :: forall a b f. Foldable f => (a -> Maybe b) -> f a -> Maybe b firstJust f = asum . fmap f . toList allEqual :: Eq a => [a] -> Bool -allEqual [] = True -allEqual [_] = True -allEqual (x:y:zs) = x == y && allEqual (y:zs) +allEqual [] = True +allEqual [_] = True +allEqual (x : y : zs) = x == y && allEqual (y : zs) -- | Take items until the condition is true. If the condition is true for an -- item, include that item as the last item in the returned list. 
If the @@ -220,13 +239,13 @@ allEqual (x:y:zs) = x == y && allEqual (y:zs) -- [2] takeUntil :: (a -> Bool) -> [a] -> [a] takeUntil p = \case + [] -> [] - -> [] - x:xs - | p x - -> [x] - | otherwise - -> x:takeUntil p xs + x : xs + | p x -> + [x] + | otherwise -> + x : takeUntil p xs findM :: (Foldable f, Monad m) => (a -> m Bool) -> f a -> m (Maybe a) findM p = @@ -242,8 +261,8 @@ findM p = -- > , ([1,2] , 3 , [] ) -- > ] splits :: [a] -> [([a], a, [a])] -splits [] = [] -splits (a:as) = ([], a, as) : map (\(xs, y, zs) -> (a:xs, y, zs)) (splits as) +splits [] = [] +splits (a : as) = ([], a, as) : map (\(xs, y, zs) -> (a : xs, y, zs)) (splits as) -- | Split a list given a delimiter predicate. -- @@ -256,25 +275,25 @@ splits (a:as) = ([], a, as) : map (\(xs, y, zs) -> (a:xs, y, zs)) (splits as) -- > length (split p as) === length (filter p as) + 1 split :: (a -> Bool) -> [a] -> NonEmpty [a] split p = \case - [] -> pure [] - a : as | p a -> [] <| split p as - a : as -> let bs :| bss = split p as in (a : bs) :| bss + [] -> pure [] + a : as | p a -> [] <| split p as + a : as -> let bs :| bss = split p as in (a : bs) :| bss {------------------------------------------------------------------------------- Safe variants of existing base functions -------------------------------------------------------------------------------} lastMaybe :: [a] -> Maybe a -lastMaybe [] = Nothing -lastMaybe [x] = Just x -lastMaybe (_:xs) = lastMaybe xs +lastMaybe [] = Nothing +lastMaybe [x] = Just x +lastMaybe (_ : xs) = lastMaybe xs safeMaximum :: Ord a => [a] -> Maybe a safeMaximum = safeMaximumBy compare safeMaximumBy :: (a -> a -> Ordering) -> [a] -> Maybe a safeMaximumBy _cmp [] = Nothing -safeMaximumBy cmp ls = Just $ maximumBy cmp ls +safeMaximumBy cmp ls = Just $ maximumBy cmp ls safeMaximumOn :: Ord b => (a -> b) -> [a] -> Maybe a safeMaximumOn f = safeMaximumBy (compare `on` f) @@ -286,26 +305,29 @@ safeMaximumOn f = safeMaximumBy (compare `on` f) -- | Calls 'hashFromBytes' and throws 
an error if the input is of the wrong -- length. hashFromBytesE :: - forall h a. (HashAlgorithm h, HasCallStack) - => Strict.ByteString - -> Hash h a + forall h a. + (HashAlgorithm h, HasCallStack) => + Strict.ByteString -> + Hash h a hashFromBytesE bs = fromMaybe (error msg) $ hashFromBytes bs - where - msg = - "hashFromBytes called with ByteString of the wrong length: " <> - show bs + where + msg = + "hashFromBytes called with ByteString of the wrong length: " + <> show bs -- | Calls 'hashFromBytesShort' and throws an error if the input is of the -- wrong length. hashFromBytesShortE :: - forall h a. (HashAlgorithm h, HasCallStack) - => ShortByteString - -> Hash h a + forall h a. + (HashAlgorithm h, HasCallStack) => + ShortByteString -> + Hash h a hashFromBytesShortE bs = fromMaybe (error msg) $ hashFromBytesShort bs - where - msg = - "hashFromBytesShort called with ShortByteString of the wrong length: " <> - show bs + where + msg = + "hashFromBytesShort called with ShortByteString of the wrong length: " + <> show bs + {------------------------------------------------------------------------------- Bytestrings -------------------------------------------------------------------------------} @@ -316,8 +338,9 @@ byteStringChunks n = map Strict.pack . chunks n . 
Strict.unpack lazyByteStringChunks :: Int -> Lazy.ByteString -> [Lazy.ByteString] lazyByteStringChunks n bs | Lazy.null bs = [] - | otherwise = let (chunk, bs') = Lazy.splitAt (fromIntegral n) bs - in chunk : lazyByteStringChunks n bs' + | otherwise = + let (chunk, bs') = Lazy.splitAt (fromIntegral n) bs + in chunk : lazyByteStringChunks n bs' {------------------------------------------------------------------------------- Monadic utilities @@ -325,7 +348,7 @@ lazyByteStringChunks n bs whenJust :: Applicative f => Maybe a -> (a -> f ()) -> f () whenJust (Just x) f = f x -whenJust Nothing _ = pure () +whenJust Nothing _ = pure () {------------------------------------------------------------------------------- Test code @@ -334,15 +357,21 @@ whenJust Nothing _ = pure () -- | Assertion -- -- Variation on 'assert' for use in testing code. -checkThat :: (Show a, Monad m) - => String - -> (a -> Bool) - -> a - -> m () +checkThat :: + (Show a, Monad m) => + String -> + (a -> Bool) -> + a -> + m () checkThat label prd a - | prd a = return () - | otherwise = error $ label ++ " failed on " ++ show a ++ "\n" - ++ prettyCallStack callStack + | prd a = return () + | otherwise = + error $ + label + ++ " failed on " + ++ show a + ++ "\n" + ++ prettyCallStack callStack {------------------------------------------------------------------------------- Sets @@ -351,10 +380,10 @@ checkThat label prd a -- | Check that a bunch of sets are all mutually disjoint allDisjoint :: forall a. 
Ord a => [Set a] -> Bool allDisjoint = go Set.empty - where - go :: Set a -> [Set a] -> Bool - go _ [] = True - go acc (xs:xss) = Set.disjoint acc xs && go (Set.union acc xs) xss + where + go :: Set a -> [Set a] -> Bool + go _ [] = True + go acc (xs : xss) = Set.disjoint acc xs && go (Set.union acc xs) xss {------------------------------------------------------------------------------- Composition @@ -372,10 +401,14 @@ allDisjoint = go Set.empty (....:) :: (y -> z) -> (x0 -> x1 -> x2 -> x3 -> x4 -> y) -> (x0 -> x1 -> x2 -> x3 -> x4 -> z) (f ....: g) x0 x1 x2 x3 x4 = f (g x0 x1 x2 x3 x4) -(.....:) :: (y -> z) -> (x0 -> x1 -> x2 -> x3 -> x4 -> x5 -> y) -> (x0 -> x1 -> x2 -> x3 -> x4 -> x5 -> z) +(.....:) :: + (y -> z) -> (x0 -> x1 -> x2 -> x3 -> x4 -> x5 -> y) -> (x0 -> x1 -> x2 -> x3 -> x4 -> x5 -> z) (f .....: g) x0 x1 x2 x3 x4 x5 = f (g x0 x1 x2 x3 x4 x5) -(......:) :: (y -> z) -> (x0 -> x1 -> x2 -> x3 -> x4 -> x5 -> x6 -> y) -> (x0 -> x1 -> x2 -> x3 -> x4 -> x5 -> x6 -> z) +(......:) :: + (y -> z) -> + (x0 -> x1 -> x2 -> x3 -> x4 -> x5 -> x6 -> y) -> + (x0 -> x1 -> x2 -> x3 -> x4 -> x5 -> x6 -> z) (f ......: g) x0 x1 x2 x3 x4 x5 x6 = f (g x0 x1 x2 x3 x4 x5 x6) {------------------------------------------------------------------------------- @@ -395,13 +428,13 @@ pairSnd (Pair _ b) = b -- | Fast Fibonacci computation, using Binet's formula fib :: Word64 -> Word64 fib n = round $ phi ** fromIntegral n / sq5 - where - sq5, phi :: Double - sq5 = sqrt 5 - phi = (1 + sq5) / 2 + where + sq5, phi :: Double + sq5 = sqrt 5 + phi = (1 + sq5) / 2 eitherToMaybe :: Either a b -> Maybe b -eitherToMaybe (Left _) = Nothing +eitherToMaybe (Left _) = Nothing eitherToMaybe (Right x) = Just x {------------------------------------------------------------------------------- @@ -424,7 +457,7 @@ electric :: m a -> Electric m a electric = Electric -- | A simple semaphore, though instead of blocking a fatal exception is thrown. 
-data Fuse m = Fuse !Text !(StrictMVar m ()) deriving (Generic) +data Fuse m = Fuse !Text !(StrictMVar m ()) deriving Generic deriving instance NoThunks (StrictMVar m ()) => NoThunks (Fuse m) @@ -443,10 +476,10 @@ newFuse name = Fuse name <$> newMVar () -- WARNING If the given action throws its own exception, then it will never stop -- putting load on the 'Fuse'. withFuse :: - (MonadThrow m, MonadMVar m) - => Fuse m - -> Electric m a - -> m a + (MonadThrow m, MonadMVar m) => + Fuse m -> + Electric m a -> + m a withFuse (Fuse name m) (Electric io) = do tryTakeMVar m >>= \case Nothing -> throwIO $ FuseBlownException name @@ -457,8 +490,8 @@ withFuse (Fuse name m) (Electric io) = do -- | Too much electrical load was put on the 'Fuse', see 'withFuse'. newtype FuseBlownException = FuseBlownException Text - deriving (Show) - deriving anyclass (Exception) + deriving Show + deriving anyclass Exception {------------------------------------------------------------------------------- Type-safe boolean flags @@ -471,4 +504,4 @@ newtype FuseBlownException = FuseBlownException Text -- See 'Ouroboros.Consensus.Storage.LedgerDB.Snapshots.DiskSnapshotChecksum' -- for an example. 
newtype Flag (name :: Symbol) = Flag {getFlag :: Bool} - deriving (Eq, Show, Generic) + deriving (Eq, Show, Generic) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/AnchoredFragment.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/AnchoredFragment.hs index 63fa746dc6..89ea81b476 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/AnchoredFragment.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/AnchoredFragment.hs @@ -6,8 +6,8 @@ -- -- Intended for qualified import -- > import qualified Ouroboros.Consensus.Util.AnchoredFragment as AF -module Ouroboros.Consensus.Util.AnchoredFragment ( - compareAnchoredFragments +module Ouroboros.Consensus.Util.AnchoredFragment + ( compareAnchoredFragments , compareHeadBlockNo , cross , forksAtMostKBlocks @@ -15,20 +15,22 @@ module Ouroboros.Consensus.Util.AnchoredFragment ( , stripCommonPrefix ) where -import Control.Monad.Except (throwError) -import Data.Foldable (toList) -import qualified Data.Foldable1 as F1 -import Data.Function (on) -import qualified Data.List.NonEmpty as NE -import Data.Maybe (isJust) -import Data.Word (Word64) -import GHC.Stack -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Util.Assert -import Ouroboros.Network.AnchoredFragment (AnchoredFragment, - AnchoredSeq (Empty, (:>))) -import qualified Ouroboros.Network.AnchoredFragment as AF +import Control.Monad.Except (throwError) +import Data.Foldable (toList) +import Data.Foldable1 qualified as F1 +import Data.Function (on) +import Data.List.NonEmpty qualified as NE +import Data.Maybe (isJust) +import Data.Word (Word64) +import GHC.Stack +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Util.Assert +import Ouroboros.Network.AnchoredFragment + ( AnchoredFragment + , AnchoredSeq (Empty, (:>)) + ) +import 
Ouroboros.Network.AnchoredFragment qualified as AF {------------------------------------------------------------------------------- Utility functions on anchored fragments @@ -51,22 +53,26 @@ import qualified Ouroboros.Network.AnchoredFragment as AF -- a new block/header is added to the fragment, the right decision will be -- made again ('GT' or 'LT'). compareHeadBlockNo :: - HasHeader b - => AnchoredFragment b - -> AnchoredFragment b - -> Ordering + HasHeader b => + AnchoredFragment b -> + AnchoredFragment b -> + Ordering compareHeadBlockNo = compare `on` AF.headBlockNo forksAtMostKBlocks :: - HasHeader b - => Word64 -- ^ How many blocks can it fork? - -> AnchoredFragment b -- ^ Our chain. - -> AnchoredFragment b -- ^ Their chain - -> Bool -- ^ Indicates whether their chain forks at most the - -- specified number of blocks. + HasHeader b => + -- | How many blocks can it fork? + Word64 -> + -- | Our chain. + AnchoredFragment b -> + -- | Their chain + AnchoredFragment b -> + -- | Indicates whether their chain forks at most the + -- specified number of blocks. + Bool forksAtMostKBlocks k ours theirs = case ours `AF.intersect` theirs of - Nothing -> False - Just (_, _, ourSuffix, _) -> fromIntegral (AF.length ourSuffix) <= k + Nothing -> False + Just (_, _, ourSuffix, _) -> fromIntegral (AF.length ourSuffix) <= k -- | Compare two (potentially empty!) 'AnchoredFragment's. -- @@ -82,18 +88,18 @@ forksAtMostKBlocks k ours theirs = case ours `AF.intersect` theirs of -- these fragments intersect with our current chain, they must by transitivity -- also intersect each other. compareAnchoredFragments :: - forall blk h. - ( BlockSupportsProtocol blk - , HasCallStack - , GetHeader1 h - , HasHeader (h blk) - ) - => BlockConfig blk - -> AnchoredFragment (h blk) - -> AnchoredFragment (h blk) - -> Ordering + forall blk h. 
+ ( BlockSupportsProtocol blk + , HasCallStack + , GetHeader1 h + , HasHeader (h blk) + ) => + BlockConfig blk -> + AnchoredFragment (h blk) -> + AnchoredFragment (h blk) -> + Ordering compareAnchoredFragments cfg frag1 frag2 = - assertWithMsg (precondition frag1 frag2) $ + assertWithMsg (precondition frag1 frag2) $ case (frag1, frag2) of (Empty _, Empty _) -> -- The fragments intersect but are equal: their anchors must be equal, @@ -117,7 +123,7 @@ compareAnchoredFragments cfg frag1 frag2 = (_ :> tip, _ :> tip') -> -- Case 4 compare - (selectView cfg (getHeader1 tip )) + (selectView cfg (getHeader1 tip)) (selectView cfg (getHeader1 tip')) -- | Lift 'preferCandidate' to 'AnchoredFragment' @@ -131,21 +137,23 @@ compareAnchoredFragments cfg frag1 frag2 = -- from our tip, although the exact distance does not matter for -- 'compareAnchoredFragments'). preferAnchoredCandidate :: - forall blk h h'. - ( BlockSupportsProtocol blk - , HasCallStack - , GetHeader1 h - , GetHeader1 h' - , HeaderHash (h blk) ~ HeaderHash (h' blk) - , HasHeader (h blk) - , HasHeader (h' blk) - ) - => BlockConfig blk - -> AnchoredFragment (h blk) -- ^ Our chain - -> AnchoredFragment (h' blk) -- ^ Candidate - -> Bool + forall blk h h'. + ( BlockSupportsProtocol blk + , HasCallStack + , GetHeader1 h + , GetHeader1 h' + , HeaderHash (h blk) ~ HeaderHash (h' blk) + , HasHeader (h blk) + , HasHeader (h' blk) + ) => + BlockConfig blk -> + -- | Our chain + AnchoredFragment (h blk) -> + -- | Candidate + AnchoredFragment (h' blk) -> + Bool preferAnchoredCandidate cfg ours cand = - assertWithMsg (precondition ours cand) $ + assertWithMsg (precondition ours cand) $ case (ours, cand) of (_, Empty _) -> False (Empty ourAnchor, _ :> theirTip) -> @@ -158,71 +166,72 @@ preferAnchoredCandidate cfg ours cand = -- For 'compareAnchoredFragment' and 'preferAnchoredCandidate'. 
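The multiway-guard `Either String ()` precondition style reformatted in this hunk ("both non-empty, or intersecting, else an error message") can be illustrated standalone. A minimal sketch with plain lists standing in for fragments; `checkOverlap` is an invented name, not part of the module:

```haskell
import Data.List (intersect)

-- Succeed when both inputs are non-empty, or when they share an element;
-- otherwise report a violation, mirroring the guard layout above.
checkOverlap :: Eq a => [a] -> [a] -> Either String ()
checkOverlap xs ys
  | not (null xs)
  , not (null ys) =
      Right ()
  | not (null (xs `intersect` ys)) =
      Right ()
  | otherwise =
      Left "precondition violated: inputs should both be non-empty or overlap"
```

Such a check composes naturally with an `assertWithMsg`-style wrapper: the `Left` message becomes the assertion failure text.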
precondition :: - ( HeaderHash (h blk) ~ HeaderHash (h' blk) - , HasHeader (h blk) - , HasHeader (h' blk) - ) - => AnchoredFragment (h blk) - -> AnchoredFragment (h' blk) - -> Either String () + ( HeaderHash (h blk) ~ HeaderHash (h' blk) + , HasHeader (h blk) + , HasHeader (h' blk) + ) => + AnchoredFragment (h blk) -> + AnchoredFragment (h' blk) -> + Either String () precondition frag1 frag2 - | not (AF.null frag1), not (AF.null frag2) - = return () - | isJust (AF.intersectionPoint frag1 frag2) - = return () - | otherwise - = throwError - "precondition violated: fragments should both be non-empty or they \ - \should intersect" + | not (AF.null frag1) + , not (AF.null frag2) = + return () + | isJust (AF.intersectionPoint frag1 frag2) = + return () + | otherwise = + throwError + "precondition violated: fragments should both be non-empty or they \ + \should intersect" -- | If the two fragments `c1` and `c2` intersect, return the intersection -- point and join the prefix of `c1` before the intersection with the suffix -- of `c2` after the intersection. The resulting fragment has the same -- anchor as `c1` and the same head as `c2`. cross :: - HasHeader block - => AnchoredFragment block - -> AnchoredFragment block - -> Maybe (Point block, AnchoredFragment block) + HasHeader block => + AnchoredFragment block -> + AnchoredFragment block -> + Maybe (Point block, AnchoredFragment block) cross c1 c2 = do - (p1, _p2, _s1, s2) <- AF.intersect c1 c2 - -- Note that the head of `p1` and `_p2` is the intersection point, and - -- `_s1` and `s2` are anchored in the intersection point. - let crossed = case AF.join p1 s2 of - Just c -> c - Nothing -> error "invariant violation of AF.intersect" - pure (AF.anchorPoint s2, crossed) + (p1, _p2, _s1, s2) <- AF.intersect c1 c2 + -- Note that the head of `p1` and `_p2` is the intersection point, and + -- `_s1` and `s2` are anchored in the intersection point. 
+ let crossed = case AF.join p1 s2 of + Just c -> c + Nothing -> error "invariant violation of AF.intersect" + pure (AF.anchorPoint s2, crossed) -- | Strip the common prefix of multiple fragments. -- -- PRECONDITION: all fragments have the given anchor as their anchor. stripCommonPrefix :: - forall f blk. - (Functor f, Foldable f, HasHeader blk) -- TODO: this uses the lazy 'map' for 'Map'... - => AF.Anchor blk - -> f (AnchoredFragment blk) - -> (AnchoredFragment blk, f (AnchoredFragment blk)) + forall f blk. + (Functor f, Foldable f, HasHeader blk) => -- TODO: this uses the lazy 'map' for 'Map'... + AF.Anchor blk -> + f (AnchoredFragment blk) -> + (AnchoredFragment blk, f (AnchoredFragment blk)) stripCommonPrefix sharedAnchor frags - | all ((sharedAnchor ==) . AF.anchor) frags - = (commonPrefix, splitAfterCommonPrefix <$> frags) - | otherwise - = error "Not all fragments are anchored in the given anchor" - where - -- Return the common prefix of two fragments with the same anchor - -- 'sharedAnchor'. - computeCommonPrefix :: - AnchoredFragment blk - -> AnchoredFragment blk - -> AnchoredFragment blk - computeCommonPrefix frag1 frag2 = case AF.intersect frag1 frag2 of - Just (cp, _, _, _) -> cp - Nothing -> error "unreachable" + | all ((sharedAnchor ==) . AF.anchor) frags = + (commonPrefix, splitAfterCommonPrefix <$> frags) + | otherwise = + error "Not all fragments are anchored in the given anchor" + where + -- Return the common prefix of two fragments with the same anchor + -- 'sharedAnchor'. 
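The `computeCommonPrefix`-folded-over-all-fragments scheme in `stripCommonPrefix` can be sketched over plain lists, assuming every input shares the same (empty) anchor; `commonPrefix2` and `commonPrefixAll` are invented helpers for illustration only:

```haskell
import Data.List (foldl')

-- Pairwise common prefix of two lists.
commonPrefix2 :: Eq a => [a] -> [a] -> [a]
commonPrefix2 (x : xs) (y : ys) | x == y = x : commonPrefix2 xs ys
commonPrefix2 _ _ = []

-- Fold the pairwise operation over all inputs, as stripCommonPrefix folds
-- computeCommonPrefix with foldl1'; an empty collection yields the anchor
-- (here: the empty list).
commonPrefixAll :: Eq a => [[a]] -> [a]
commonPrefixAll []         = []
commonPrefixAll (xs : xss) = foldl' commonPrefix2 xs xss
```

Stripping the prefix from each input (the `splitAfterCommonPrefix` step) would then be `drop (length (commonPrefixAll xss))` applied to every element.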
+ computeCommonPrefix :: + AnchoredFragment blk -> + AnchoredFragment blk -> + AnchoredFragment blk + computeCommonPrefix frag1 frag2 = case AF.intersect frag1 frag2 of + Just (cp, _, _, _) -> cp + Nothing -> error "unreachable" - commonPrefix = case NE.nonEmpty $ toList frags of - Nothing -> AF.Empty sharedAnchor - Just fragsNE -> F1.foldl1' computeCommonPrefix fragsNE + commonPrefix = case NE.nonEmpty $ toList frags of + Nothing -> AF.Empty sharedAnchor + Just fragsNE -> F1.foldl1' computeCommonPrefix fragsNE - splitAfterCommonPrefix frag = - case AF.splitAfterPoint frag (AF.headPoint commonPrefix) of - Just (_, afterCommonPrefix) -> afterCommonPrefix - Nothing -> error "unreachable" + splitAfterCommonPrefix frag = + case AF.splitAfterPoint frag (AF.headPoint commonPrefix) of + Just (_, afterCommonPrefix) -> afterCommonPrefix + Nothing -> error "unreachable" diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Args.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Args.hs index c0fc821fed..b187766ce7 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Args.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Args.hs @@ -30,31 +30,32 @@ -- -- Leaving out the 'hasNoDefault' field from 'theArgs' will result in a type -- error. -module Ouroboros.Consensus.Util.Args ( - Defaults (..) +module Ouroboros.Consensus.Util.Args + ( Defaults (..) , HKD , MapHKD (..) 
+ -- * Types , Complete , Incomplete , noDefault ) where -import Data.Functor.Identity (Identity (..)) -import Data.Kind +import Data.Functor.Identity (Identity (..)) +import Data.Kind data Defaults t = NoDefault - deriving (Functor) + deriving Functor noDefault :: Defaults t noDefault = NoDefault type family HKD f a where HKD Identity a = a - HKD f a = f a + HKD f a = f a type Incomplete (args :: (Type -> Type) -> k) = args Defaults -type Complete (args :: (Type -> Type) -> k) = args Identity +type Complete (args :: (Type -> Type) -> k) = args Identity class MapHKD f where mapHKD :: proxy (f b) -> (a -> b) -> HKD f a -> HKD f b diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Assert.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Assert.hs index 4ed49f7213..de9b899be4 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Assert.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Assert.hs @@ -3,13 +3,13 @@ module Ouroboros.Consensus.Util.Assert (assertWithMsg) where -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Util.RedundantConstraints +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Util.RedundantConstraints assertWithMsg :: HasCallStack => Either String () -> a -> a #if ENABLE_ASSERTIONS assertWithMsg (Left msg) _ = error msg #endif -assertWithMsg _ a = a - where - _ = keepRedundantConstraint (Proxy @HasCallStack) +assertWithMsg _ a = a + where + _ = keepRedundantConstraint (Proxy @HasCallStack) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CBOR.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CBOR.hs index 1e91355d85..c94c402889 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CBOR.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CBOR.hs @@ -3,20 +3,24 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE 
ScopedTypeVariables #-} -module Ouroboros.Consensus.Util.CBOR ( - -- * Incremental parsing in I/O +module Ouroboros.Consensus.Util.CBOR + ( -- * Incremental parsing in I/O IDecodeIO (..) , deserialiseIncrementalIO , fromIDecode + -- * Higher-level incremental interface , Decoder (..) , initDecoderIO + -- * Decode as FlatTerm , decodeAsFlatTerm + -- * HasFS interaction , ReadIncrementalErr (..) , readIncremental , withStreamIncrementalOffsets + -- * Encoding/decoding containers , decodeList , decodeMaybe @@ -28,149 +32,153 @@ module Ouroboros.Consensus.Util.CBOR ( , encodeWithOrigin ) where -import Cardano.Binary (decodeMaybe, encodeMaybe) -import Cardano.Slotting.Slot (WithOrigin (..), withOriginFromMaybe, - withOriginToMaybe) -import qualified Codec.CBOR.Decoding as CBOR.D -import qualified Codec.CBOR.Encoding as CBOR.E -import qualified Codec.CBOR.FlatTerm as CBOR.F -import qualified Codec.CBOR.Read as CBOR.R -import Control.Exception (assert) -import Control.Monad -import Control.Monad.Except -import Control.Monad.ST -import qualified Control.Monad.ST.Lazy as ST.Lazy -import Data.ByteString (ByteString) -import qualified Data.ByteString as BS -import Data.ByteString.Builder.Extra (defaultChunkSize) -import qualified Data.ByteString.Lazy as LBS -import Data.Foldable (toList) -import Data.IORef -import Data.Sequence.Strict (StrictSeq) -import qualified Data.Sequence.Strict as Seq -import Data.Word (Word64) -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Util.IOLike as U -import qualified Streaming as S -import qualified Streaming.Prelude as S -import Streaming.Prelude (Of (..), Stream) -import System.FS.API -import System.FS.CRC (CRC (..), initCRC, updateCRC) +import Cardano.Binary (decodeMaybe, encodeMaybe) +import Cardano.Slotting.Slot + ( WithOrigin (..) 
+ , withOriginFromMaybe + , withOriginToMaybe + ) +import Codec.CBOR.Decoding qualified as CBOR.D +import Codec.CBOR.Encoding qualified as CBOR.E +import Codec.CBOR.FlatTerm qualified as CBOR.F +import Codec.CBOR.Read qualified as CBOR.R +import Control.Exception (assert) +import Control.Monad +import Control.Monad.Except +import Control.Monad.ST +import Control.Monad.ST.Lazy qualified as ST.Lazy +import Data.ByteString (ByteString) +import Data.ByteString qualified as BS +import Data.ByteString.Builder.Extra (defaultChunkSize) +import Data.ByteString.Lazy qualified as LBS +import Data.Foldable (toList) +import Data.IORef +import Data.Sequence.Strict (StrictSeq) +import Data.Sequence.Strict qualified as Seq +import Data.Word (Word64) +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Util.IOLike as U +import Streaming qualified as S +import Streaming.Prelude (Of (..), Stream) +import Streaming.Prelude qualified as S +import System.FS.API +import System.FS.CRC (CRC (..), initCRC, updateCRC) {------------------------------------------------------------------------------- Incremental parsing in I/O -------------------------------------------------------------------------------} -data IDecodeIO a = - Partial (Maybe ByteString -> IO (IDecodeIO a)) +data IDecodeIO a + = Partial (Maybe ByteString -> IO (IDecodeIO a)) | Done !ByteString !CBOR.R.ByteOffset a | Fail !ByteString !CBOR.R.ByteOffset CBOR.R.DeserialiseFailure fromIDecode :: CBOR.R.IDecode RealWorld a -> IDecodeIO a -fromIDecode (CBOR.R.Partial k) = Partial $ fmap fromIDecode . U.stToIO . k +fromIDecode (CBOR.R.Partial k) = Partial $ fmap fromIDecode . U.stToIO . k fromIDecode (CBOR.R.Done bs off x) = Done bs off x fromIDecode (CBOR.R.Fail bs off e) = Fail bs off e deserialiseIncrementalIO :: (forall s. CBOR.D.Decoder s a) -> IO (IDecodeIO a) -deserialiseIncrementalIO = fmap fromIDecode - . U.stToIO - . CBOR.R.deserialiseIncremental +deserialiseIncrementalIO = + fmap fromIDecode + . U.stToIO + . 
CBOR.R.deserialiseIncremental {------------------------------------------------------------------------------- Higher-level incremental interface -------------------------------------------------------------------------------} -data Decoder m = Decoder { - -- | Decode next failure - -- - -- May throw 'CBOR.DeserialiseFailure' - decodeNext :: forall a. (forall s. CBOR.D.Decoder s a) -> m a - } +data Decoder m = Decoder + { decodeNext :: forall a. (forall s. CBOR.D.Decoder s a) -> m a + -- ^ Decode next failure + -- + -- May throw 'CBOR.DeserialiseFailure' + } -- | Construct incremental decoder given a way to get chunks -- -- Resulting decoder is not thread safe. initDecoderIO :: IO ByteString -> IO (Decoder IO) initDecoderIO getChunk = do - leftover <- newIORef BS.empty - let go :: forall a. (forall s. CBOR.D.Decoder s a) -> IO a - go decoder = do - i <- deserialiseIncrementalIO decoder - case i of - Done bs _ a -> assert (BS.null bs) $ return a - Fail _ _ e -> throwIO e - Partial k -> readIORef leftover >>= (k . Just >=> goWith) - - goWith :: forall a. IDecodeIO a -> IO a - goWith (Partial k) = getChunk' >>= (k >=> goWith) - goWith (Done bs _ a) = writeIORef leftover bs >> return a - goWith (Fail _ _ e) = throwIO e - - return $ Decoder go - - where - getChunk' :: IO (Maybe ByteString) - getChunk' = checkEmpty <$> getChunk - - checkEmpty :: ByteString -> Maybe ByteString - checkEmpty bs | BS.null bs = Nothing - | otherwise = Just bs + leftover <- newIORef BS.empty + let go :: forall a. (forall s. CBOR.D.Decoder s a) -> IO a + go decoder = do + i <- deserialiseIncrementalIO decoder + case i of + Done bs _ a -> assert (BS.null bs) $ return a + Fail _ _ e -> throwIO e + Partial k -> readIORef leftover >>= (k . Just >=> goWith) + + goWith :: forall a. 
IDecodeIO a -> IO a + goWith (Partial k) = getChunk' >>= (k >=> goWith) + goWith (Done bs _ a) = writeIORef leftover bs >> return a + goWith (Fail _ _ e) = throwIO e + + return $ Decoder go + where + getChunk' :: IO (Maybe ByteString) + getChunk' = checkEmpty <$> getChunk + + checkEmpty :: ByteString -> Maybe ByteString + checkEmpty bs + | BS.null bs = Nothing + | otherwise = Just bs {------------------------------------------------------------------------------- Decode as FlatTerm -------------------------------------------------------------------------------} decodeAsFlatTerm :: - ByteString - -> Either CBOR.R.DeserialiseFailure CBOR.F.FlatTerm + ByteString -> + Either CBOR.R.DeserialiseFailure CBOR.F.FlatTerm decodeAsFlatTerm bs0 = - ST.Lazy.runST (runExceptT (provideInput bs0)) - where - provideInput :: - ByteString - -> ExceptT CBOR.R.DeserialiseFailure (ST.Lazy.ST s) CBOR.F.FlatTerm - provideInput bs - | BS.null bs = return [] - | otherwise = do - next <- S.lift $ ST.Lazy.strictToLazyST $ do - -- This will always be a 'Partial' here because decodeTermToken - -- always starts by requesting initial input. Only decoders that - -- fail or return a value without looking at their input can give - -- a different initial result. 
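The `Partial`/`Done`/`Fail` state machine being driven in these hunks can be modelled without cborg. A hypothetical, base-only miniature: an incremental decoder for one decimal number, fed chunk by chunk; `MiniDecode`, `decodeNumber` and `feedAll` are invented names mimicking the shape of `CBOR.R.IDecode`, not its API:

```haskell
import Data.Char (isDigit)

-- The three states of an incremental decode: ask for more input,
-- finish with leftovers, or fail.
data MiniDecode a
  = MiniPartial (Maybe String -> MiniDecode a)
  | MiniDone String a
  | MiniFail String

-- Like decodeTermToken, this decoder always starts Partial: it must see
-- input before it can produce anything.
decodeNumber :: MiniDecode Int
decodeNumber = MiniPartial (step "")
  where
    step acc (Just s) =
      case span isDigit s of
        (ds, "")   -> MiniPartial (step (acc ++ ds)) -- more digits may follow
        (ds, rest) -> finish (acc ++ ds) rest
    step acc Nothing = finish acc ""                 -- Nothing signals end of input
    finish "" _ = MiniFail "expected a digit"
    finish ds leftover = MiniDone leftover (read ds)

-- Drive a decoder with a list of chunks, then signal end of input,
-- as the goWith loop above drives IDecodeIO with getChunk'.
feedAll :: [String] -> MiniDecode a -> Either String (a, String)
feedAll chunks (MiniPartial k) = case chunks of
  []       -> feedAll [] (k Nothing)
  (c : cs) -> feedAll cs (k (Just c))
feedAll _ (MiniDone leftover a) = Right (a, leftover)
feedAll _ (MiniFail e) = Left e
```

The leftovers returned by `MiniDone` play the role of the `leftover` `IORef` in `initDecoderIO`: unconsumed bytes must be carried over to the next decode.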
- idc <- CBOR.R.deserialiseIncremental CBOR.F.decodeTermToken - let k = fromPartial idc - k (Just bs) - collectOutput next - - where - fromPartial :: - CBOR.R.IDecode s a - -> Maybe ByteString - -> ST s (CBOR.R.IDecode s a) - fromPartial idc = case idc of - CBOR.R.Partial k -> k - CBOR.R.Done{} -> error "fromPartial: expected a Partial decoder" - CBOR.R.Fail{} -> error "fromPartial: expected a Partial decoder" - - collectOutput :: - CBOR.R.IDecode s CBOR.F.TermToken - -> ExceptT CBOR.R.DeserialiseFailure (ST.Lazy.ST s) CBOR.F.FlatTerm - collectOutput (CBOR.R.Fail _ _ err) = throwError err - collectOutput (CBOR.R.Partial k) = S.lift (ST.Lazy.strictToLazyST (k Nothing)) >>= - collectOutput - collectOutput (CBOR.R.Done bs' _ x) = do xs <- provideInput bs' - return (x : xs) + ST.Lazy.runST (runExceptT (provideInput bs0)) + where + provideInput :: + ByteString -> + ExceptT CBOR.R.DeserialiseFailure (ST.Lazy.ST s) CBOR.F.FlatTerm + provideInput bs + | BS.null bs = return [] + | otherwise = do + next <- S.lift $ ST.Lazy.strictToLazyST $ do + -- This will always be a 'Partial' here because decodeTermToken + -- always starts by requesting initial input. Only decoders that + -- fail or return a value without looking at their input can give + -- a different initial result. 
+ idc <- CBOR.R.deserialiseIncremental CBOR.F.decodeTermToken + let k = fromPartial idc + k (Just bs) + collectOutput next + where + fromPartial :: + CBOR.R.IDecode s a -> + Maybe ByteString -> + ST s (CBOR.R.IDecode s a) + fromPartial idc = case idc of + CBOR.R.Partial k -> k + CBOR.R.Done{} -> error "fromPartial: expected a Partial decoder" + CBOR.R.Fail{} -> error "fromPartial: expected a Partial decoder" + + collectOutput :: + CBOR.R.IDecode s CBOR.F.TermToken -> + ExceptT CBOR.R.DeserialiseFailure (ST.Lazy.ST s) CBOR.F.FlatTerm + collectOutput (CBOR.R.Fail _ _ err) = throwError err + collectOutput (CBOR.R.Partial k) = + S.lift (ST.Lazy.strictToLazyST (k Nothing)) + >>= collectOutput + collectOutput (CBOR.R.Done bs' _ x) = do + xs <- provideInput bs' + return (x : xs) {------------------------------------------------------------------------------- HasFS interaction -------------------------------------------------------------------------------} -data ReadIncrementalErr = - -- | Could not deserialise the data +data ReadIncrementalErr + = -- | Could not deserialise the data ReadFailed CBOR.R.DeserialiseFailure - - -- | Deserialisation was successful, but there was additional data - | TrailingBytes ByteString + | -- | Deserialisation was successful, but there was additional data + TrailingBytes ByteString deriving (Eq, Show) -- | Read a file incrementally, optionally calculating the CRC checksum. @@ -183,35 +191,40 @@ data ReadIncrementalErr = -- -- NOTE: This currently expects the file to contain precisely one value; see also -- 'withStreamIncrementalOffsets'. -readIncremental :: forall m f a. (IOLike m, Functor f) - => SomeHasFS m - -> (CRC -> f CRC) - -> CBOR.D.Decoder (U.PrimState m) a - -> FsPath - -> m (Either ReadIncrementalErr (a, f CRC)) +readIncremental :: + forall m f a. 
+ (IOLike m, Functor f) => + SomeHasFS m -> + (CRC -> f CRC) -> + CBOR.D.Decoder (U.PrimState m) a -> + FsPath -> + m (Either ReadIncrementalErr (a, f CRC)) readIncremental = \(SomeHasFS hasFS) mkInitCRC decoder fp -> do - withFile hasFS fp ReadMode $ \h -> - go hasFS h (mkInitCRC initCRC) =<< U.stToIO (CBOR.R.deserialiseIncremental decoder) - where - go :: HasFS m h - -> Handle h - -> f CRC - -> CBOR.R.IDecode (U.PrimState m) a - -> m (Either ReadIncrementalErr (a, f CRC)) - go hasFS@HasFS{..} h !checksum (CBOR.R.Partial k) = do - bs <- hGetSome h (fromIntegral defaultChunkSize) - dec' <- U.stToIO $ k (checkEmpty bs) - go hasFS h (updateCRC bs <$> checksum) dec' - go _ _ !checksum (CBOR.R.Done leftover _ a) = - return $ if BS.null leftover - then Right (a, checksum) - else Left $ TrailingBytes leftover - go _ _ _ (CBOR.R.Fail _ _ err) = - return $ Left $ ReadFailed err - - checkEmpty :: ByteString -> Maybe ByteString - checkEmpty bs | BS.null bs = Nothing - | otherwise = Just bs + withFile hasFS fp ReadMode $ \h -> + go hasFS h (mkInitCRC initCRC) =<< U.stToIO (CBOR.R.deserialiseIncremental decoder) + where + go :: + HasFS m h -> + Handle h -> + f CRC -> + CBOR.R.IDecode (U.PrimState m) a -> + m (Either ReadIncrementalErr (a, f CRC)) + go hasFS@HasFS{..} h !checksum (CBOR.R.Partial k) = do + bs <- hGetSome h (fromIntegral defaultChunkSize) + dec' <- U.stToIO $ k (checkEmpty bs) + go hasFS h (updateCRC bs <$> checksum) dec' + go _ _ !checksum (CBOR.R.Done leftover _ a) = + return $ + if BS.null leftover + then Right (a, checksum) + else Left $ TrailingBytes leftover + go _ _ _ (CBOR.R.Fail _ _ err) = + return $ Left $ ReadFailed err + + checkEmpty :: ByteString -> Maybe ByteString + checkEmpty bs + | BS.null bs = Nothing + | otherwise = Just bs -- | Read multiple @a@s incrementally from a file in a streaming way. 
-- @@ -226,93 +239,100 @@ readIncremental = \(SomeHasFS hasFS) mkInitCRC decoder fp -> do -- using @streaming@ here should not dictate that we should stick with it -- later; rather, we should revisit this code at that point. withStreamIncrementalOffsets :: - forall m h a r. (IOLike m, HasCallStack) - => HasFS m h - -> (forall s . CBOR.D.Decoder s (LBS.ByteString -> a)) - -> FsPath - -> (Stream (Of (Word64, (Word64, a))) m (Maybe (ReadIncrementalErr, Word64)) -> m r) - -> m r + forall m h a r. + (IOLike m, HasCallStack) => + HasFS m h -> + (forall s. CBOR.D.Decoder s (LBS.ByteString -> a)) -> + FsPath -> + (Stream (Of (Word64, (Word64, a))) m (Maybe (ReadIncrementalErr, Word64)) -> m r) -> + m r withStreamIncrementalOffsets hasFS@HasFS{..} decoder fp = \k -> - withFile hasFS fp ReadMode $ \h -> k $ do - fileSize <- S.lift $ hGetSize h - if fileSize == 0 then - -- If the file is empty, we will immediately get "end of input" - return Nothing - else - S.lift (U.stToIO (CBOR.R.deserialiseIncremental decoder)) >>= - go h 0 Nothing [] fileSize - where - -- TODO stream from HasFS? - go :: Handle h - -> Word64 -- ^ Offset - -> Maybe ByteString -- ^ Unconsumed bytes from last time - -> [ByteString] -- ^ Chunks pushed for this item (rev order) - -> Word64 -- ^ Total file size - -> CBOR.R.IDecode (U.PrimState m) (LBS.ByteString -> a) - -> Stream (Of (Word64, (Word64, a))) m (Maybe (ReadIncrementalErr, Word64)) - go h offset mbUnconsumed bss fileSize dec = case dec of - CBOR.R.Partial k -> do - -- First use the unconsumed bytes from a previous read before read - -- some more bytes from the file. 
- bs <- case mbUnconsumed of - Just unconsumed -> return unconsumed - Nothing -> S.lift $ hGetSome h (fromIntegral defaultChunkSize) - dec' <- S.lift $ U.stToIO $ k (checkEmpty bs) - go h offset Nothing (bs:bss) fileSize dec' - - CBOR.R.Done leftover size mkA -> do - let nextOffset = offset + fromIntegral size - -- We've been keeping track of the bytes pushed into the decoder - -- for this item so far in bss. Now there's some trailing data to - -- remove and we can get the whole bytes used for this item. We - -- supply the bytes to the final decoded value. This is to support - -- annotating values with their original input bytes. - aBytes = case bss of - [] -> LBS.empty - bs:bss' -> LBS.fromChunks (reverse (bs' : bss')) - where - bs' = BS.take (BS.length bs - BS.length leftover) bs - -- The bang on the @a'@ here allows the used 'Decoder' to force - -- its computation. For example, the decoder might decode a whole - -- block and then (maybe through a use of 'fmap') just return its - -- hash. If we don't force the value it returned here, we're just - -- putting a thunk that references the whole block in the list - -- instead of merely the hash. - !a = mkA aBytes - S.yield (offset, (fromIntegral size, a)) - case checkEmpty leftover of - Nothing - | nextOffset == fileSize + withFile hasFS fp ReadMode $ \h -> k $ do + fileSize <- S.lift $ hGetSize h + if fileSize == 0 + then + -- If the file is empty, we will immediately get "end of input" + return Nothing + else + S.lift (U.stToIO (CBOR.R.deserialiseIncremental decoder)) + >>= go h 0 Nothing [] fileSize + where + -- TODO stream from HasFS? 
+ go :: + Handle h -> + Word64 -> + -- \^ Offset + Maybe ByteString -> + -- \^ Unconsumed bytes from last time + [ByteString] -> + -- \^ Chunks pushed for this item (rev order) + Word64 -> + -- \^ Total file size + CBOR.R.IDecode (U.PrimState m) (LBS.ByteString -> a) -> + Stream (Of (Word64, (Word64, a))) m (Maybe (ReadIncrementalErr, Word64)) + go h offset mbUnconsumed bss fileSize dec = case dec of + CBOR.R.Partial k -> do + -- First use the unconsumed bytes from a previous read before read + -- some more bytes from the file. + bs <- case mbUnconsumed of + Just unconsumed -> return unconsumed + Nothing -> S.lift $ hGetSome h (fromIntegral defaultChunkSize) + dec' <- S.lift $ U.stToIO $ k (checkEmpty bs) + go h offset Nothing (bs : bss) fileSize dec' + CBOR.R.Done leftover size mkA -> do + let nextOffset = offset + fromIntegral size + -- We've been keeping track of the bytes pushed into the decoder + -- for this item so far in bss. Now there's some trailing data to + -- remove and we can get the whole bytes used for this item. We + -- supply the bytes to the final decoded value. This is to support + -- annotating values with their original input bytes. + aBytes = case bss of + [] -> LBS.empty + bs : bss' -> LBS.fromChunks (reverse (bs' : bss')) + where + bs' = BS.take (BS.length bs - BS.length leftover) bs + -- The bang on the @a'@ here allows the used 'Decoder' to force + -- its computation. For example, the decoder might decode a whole + -- block and then (maybe through a use of 'fmap') just return its + -- hash. If we don't force the value it returned here, we're just + -- putting a thunk that references the whole block in the list + -- instead of merely the hash. + !a = mkA aBytes + S.yield (offset, (fromIntegral size, a)) + case checkEmpty leftover of + Nothing + | nextOffset == fileSize -> -- We're at the end of the file, so stop - -> return Nothing - -- Some more bytes, so try to read the next @a@. 
- mbLeftover -> - S.lift (U.stToIO (CBOR.R.deserialiseIncremental decoder)) >>= - go h nextOffset mbLeftover [] fileSize - - CBOR.R.Fail _ _ err -> return $ Just (ReadFailed err, offset) - - checkEmpty :: ByteString -> Maybe ByteString - checkEmpty bs | BS.null bs = Nothing - | otherwise = Just bs + return Nothing + -- Some more bytes, so try to read the next @a@. + mbLeftover -> + S.lift (U.stToIO (CBOR.R.deserialiseIncremental decoder)) + >>= go h nextOffset mbLeftover [] fileSize + CBOR.R.Fail _ _ err -> return $ Just (ReadFailed err, offset) + + checkEmpty :: ByteString -> Maybe ByteString + checkEmpty bs + | BS.null bs = Nothing + | otherwise = Just bs {------------------------------------------------------------------------------- Encoding/decoding lists -------------------------------------------------------------------------------} encodeList :: (a -> CBOR.E.Encoding) -> [a] -> CBOR.E.Encoding -encodeList _ [] = CBOR.E.encodeListLen 0 -encodeList enc xs = mconcat [ - CBOR.E.encodeListLenIndef +encodeList _ [] = CBOR.E.encodeListLen 0 +encodeList enc xs = + mconcat + [ CBOR.E.encodeListLenIndef , foldr (\x r -> enc x <> r) CBOR.E.encodeBreak xs ] decodeList :: CBOR.D.Decoder s a -> CBOR.D.Decoder s [a] decodeList dec = do - mn <- CBOR.D.decodeListLenOrIndef - case mn of - Nothing -> CBOR.D.decodeSequenceLenIndef (flip (:)) [] reverse dec - Just n -> CBOR.D.decodeSequenceLenN (flip (:)) [] reverse n dec + mn <- CBOR.D.decodeListLenOrIndef + case mn of + Nothing -> CBOR.D.decodeSequenceLenIndef (flip (:)) [] reverse dec + Just n -> CBOR.D.decodeSequenceLenN (flip (:)) [] reverse n dec encodeSeq :: (a -> CBOR.E.Encoding) -> StrictSeq a -> CBOR.E.Encoding encodeSeq f = encodeList f . 
toList diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CRC.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CRC.hs index 9ea73bf849..3a4f0e12d3 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CRC.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CRC.hs @@ -1,69 +1,73 @@ {-# LANGUAGE BangPatterns #-} -module Ouroboros.Consensus.Util.CRC ( - CRCError (..) + +module Ouroboros.Consensus.Util.CRC + ( CRCError (..) , crcOfConcat , readCRC ) where -import Control.Monad.Class.MonadThrow -import Control.Monad.Except -import Data.Bits -import qualified Data.ByteString.Builder as BS -import qualified Data.ByteString.Char8 as BSC -import qualified Data.ByteString.Lazy as BSL -import Data.Char hiding (isHexDigit) -import System.FS.API -import System.FS.API.Lazy -import System.FS.CRC +import Control.Monad.Class.MonadThrow +import Control.Monad.Except +import Data.Bits +import Data.ByteString.Builder qualified as BS +import Data.ByteString.Char8 qualified as BSC +import Data.ByteString.Lazy qualified as BSL +import Data.Char hiding (isHexDigit) +import System.FS.API +import System.FS.API.Lazy +import System.FS.CRC crcOfConcat :: CRC -> CRC -> CRC crcOfConcat crc1 crc2 = - computeCRC - $ BSL.toStrict - $ BS.toLazyByteString - $ (BS.word32Dec $ getCRC crc1) - <> (BS.word32Dec $ getCRC crc2) + computeCRC $ + BSL.toStrict $ + BS.toLazyByteString $ + (BS.word32Dec $ getCRC crc1) + <> (BS.word32Dec $ getCRC crc2) -data CRCError = - CRCInvalid | CRCNoFile +data CRCError + = CRCInvalid + | CRCNoFile deriving (Eq, Show) readCRC :: - MonadThrow m - => HasFS m h - -> FsPath - -> ExceptT CRCError m CRC + MonadThrow m => + HasFS m h -> + FsPath -> + ExceptT CRCError m CRC readCRC hasFS crcPath = ExceptT $ do - crcExists <- doesFileExist hasFS crcPath - if not crcExists - then pure (Left CRCNoFile) - else do - withFile hasFS crcPath ReadMode $ \h -> do - str <- BSL.toStrict 
<$> hGetAll hasFS h - if not (BSC.length str == 8 && BSC.all isHexDigit str) - then pure (Left CRCInvalid) - else pure . Right . CRC $ fromIntegral (hexdigitsToInt str) - -- TODO: remove the functions in the where clause when we start depending on lsm-tree - where - isHexDigit :: Char -> Bool - isHexDigit c = (c >= '0' && c <= '9') - || (c >= 'a' && c <= 'f') --lower case only - - -- Precondition: BSC.all isHexDigit - hexdigitsToInt :: BSC.ByteString -> Word - hexdigitsToInt = - BSC.foldl' accumdigit 0 - where - accumdigit :: Word -> Char -> Word - accumdigit !a !c = - (a `shiftL` 4) .|. hexdigitToWord c + crcExists <- doesFileExist hasFS crcPath + if not crcExists + then pure (Left CRCNoFile) + else do + withFile hasFS crcPath ReadMode $ \h -> do + str <- BSL.toStrict <$> hGetAll hasFS h + if not (BSC.length str == 8 && BSC.all isHexDigit str) + then pure (Left CRCInvalid) + else pure . Right . CRC $ fromIntegral (hexdigitsToInt str) + where + -- TODO: remove the functions in the where clause when we start depending on lsm-tree + isHexDigit :: Char -> Bool + isHexDigit c = + (c >= '0' && c <= '9') + || (c >= 'a' && c <= 'f') -- lower case only - -- Precondition: isHexDigit - hexdigitToWord :: Char -> Word - hexdigitToWord c - | let !dec = fromIntegral (ord c - ord '0') - , dec <= 9 = dec + -- Precondition: BSC.all isHexDigit + hexdigitsToInt :: BSC.ByteString -> Word + hexdigitsToInt = + BSC.foldl' accumdigit 0 + where + accumdigit :: Word -> Char -> Word + accumdigit !a !c = + (a `shiftL` 4) .|. 
hexdigitToWord c - | let !hex = fromIntegral (ord c - ord 'a' + 10) - , otherwise = hex + -- Precondition: isHexDigit + hexdigitToWord :: Char -> Word + hexdigitToWord c + | let !dec = fromIntegral (ord c - ord '0') + , dec <= 9 = + dec + | let !hex = fromIntegral (ord c - ord 'a' + 10) + , otherwise = + hex diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CallStack.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CallStack.hs index 575f5d1695..fb9170d20f 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CallStack.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/CallStack.hs @@ -3,17 +3,19 @@ -- | CallStack with a nicer 'Show' instance -- -- Use of this module is intended to /replace/ import of @GHC.Stack@ -module Ouroboros.Consensus.Util.CallStack ( - prettyCallStack +module Ouroboros.Consensus.Util.CallStack + ( prettyCallStack + -- * opaque , PrettyCallStack + -- * Re-exports , HasCallStack ) where -import GHC.Stack (CallStack, HasCallStack) -import qualified GHC.Stack as GHC -import NoThunks.Class (NoThunks) +import GHC.Stack (CallStack, HasCallStack) +import GHC.Stack qualified as GHC +import NoThunks.Class (NoThunks) {------------------------------------------------------------------------------- Auxiliary: CallStack with different Show instance @@ -21,7 +23,7 @@ import NoThunks.Class (NoThunks) -- | CallStack with 'Show' instance using 'prettyCallStack' newtype PrettyCallStack = PrettyCallStack CallStack - deriving (NoThunks) + deriving NoThunks instance Show PrettyCallStack where show (PrettyCallStack cs) = GHC.prettyCallStack cs diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Condense.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Condense.hs index 89b97cfa51..a49c81ba9b 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Condense.hs +++ 
b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Condense.hs @@ -4,8 +4,8 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Util.Condense ( - Condense (..) +module Ouroboros.Consensus.Util.Condense + ( Condense (..) , Condense1 (..) , CondenseList (..) , PaddingDirection (..) @@ -14,39 +14,58 @@ module Ouroboros.Consensus.Util.Condense ( , padListWith ) where -import Cardano.Crypto.DSIGN (Ed25519DSIGN, Ed448DSIGN, MockDSIGN, - SigDSIGN, SignedDSIGN (..), VerKeyDSIGN, - pattern SigEd25519DSIGN, pattern SigEd448DSIGN, - pattern SigMockDSIGN) -import Cardano.Crypto.Hash (Hash) -import Cardano.Crypto.KES (MockKES, NeverKES, SigKES, SignedKES (..), - SimpleKES, SingleKES, SumKES, VerKeyKES, - pattern SigMockKES, pattern SigSimpleKES, - pattern SigSingleKES, pattern SigSumKES, - pattern SignKeyMockKES, pattern VerKeyMockKES, - pattern VerKeySingleKES, pattern VerKeySumKES) -import Cardano.Slotting.Slot (EpochNo (..), WithOrigin (..)) -import Control.Monad.Class.MonadTime.SI (Time (..)) -import qualified Data.ByteString as BS.Strict -import qualified Data.ByteString.Lazy as BS.Lazy -import Data.Int -import Data.List (intercalate, maximumBy) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Proxy -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Text (Text, unpack) -import Data.Void -import Data.Word -import Numeric.Natural -import Ouroboros.Consensus.Util.HList (All, HList (..)) -import qualified Ouroboros.Consensus.Util.HList as HList -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF -import Ouroboros.Network.Block -import Ouroboros.Network.Mock.Chain hiding (length) -import Text.Printf (printf) +import Cardano.Crypto.DSIGN + ( Ed25519DSIGN + , Ed448DSIGN + , MockDSIGN + , SigDSIGN + , SignedDSIGN (..) 
+ , VerKeyDSIGN + , pattern SigEd25519DSIGN + , pattern SigEd448DSIGN + , pattern SigMockDSIGN + ) +import Cardano.Crypto.Hash (Hash) +import Cardano.Crypto.KES + ( MockKES + , NeverKES + , SigKES + , SignedKES (..) + , SimpleKES + , SingleKES + , SumKES + , VerKeyKES + , pattern SigMockKES + , pattern SigSimpleKES + , pattern SigSingleKES + , pattern SigSumKES + , pattern SignKeyMockKES + , pattern VerKeyMockKES + , pattern VerKeySingleKES + , pattern VerKeySumKES + ) +import Cardano.Slotting.Slot (EpochNo (..), WithOrigin (..)) +import Control.Monad.Class.MonadTime.SI (Time (..)) +import Data.ByteString qualified as BS.Strict +import Data.ByteString.Lazy qualified as BS.Lazy +import Data.Int +import Data.List (intercalate, maximumBy) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Proxy +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Text (Text, unpack) +import Data.Void +import Data.Word +import Numeric.Natural +import Ouroboros.Consensus.Util.HList (All, HList (..)) +import Ouroboros.Consensus.Util.HList qualified as HList +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF +import Ouroboros.Network.Block +import Ouroboros.Network.Mock.Chain hiding (length) +import Text.Printf (printf) {------------------------------------------------------------------------------- Main class @@ -67,15 +86,14 @@ condenseListWithPadding padding as = padListWith padding $ condense <$> as padListWith :: PaddingDirection -> [String] -> [String] padListWith padding strings = let maxLength = maximumBy compare $ length <$> strings - in - fmap - (\c -> - let spaces = replicate (maxLength - length c) ' ' - in case padding of - PadLeft -> spaces ++ c - PadRight -> c ++ spaces - ) - strings + in fmap + ( \c -> + let spaces = replicate (maxLength - length c) ' ' + in case padding of + PadLeft -> spaces ++ c + PadRight -> c ++ spaces + ) + strings data 
PaddingDirection = PadLeft | PadRight @@ -135,7 +153,7 @@ instance Condense a => Condense [a] where instance Condense a => Condense (Maybe a) where condense (Just a) = "Just " ++ condense a - condense Nothing = "Nothing" + condense Nothing = "Nothing" instance Condense a => Condense (Set a) where condense = condense1 @@ -182,30 +200,32 @@ instance Condense EpochNo where condense (EpochNo n) = show n instance Condense (HeaderHash b) => Condense (ChainHash b) where - condense GenesisHash = "genesis" + condense GenesisHash = "genesis" condense (BlockHash h) = condense h instance Condense (HeaderHash b) => Condense (Tip b) where - condense TipGenesis = "genesis" + condense TipGenesis = "genesis" condense (Tip slot h bno) = - "b" <> condense bno <> "-s" <> condense slot <> "-h" <> condense h + "b" <> condense bno <> "-s" <> condense slot <> "-h" <> condense h instance Condense a => Condense (WithOrigin a) where condense Origin = "origin" condense (At a) = condense a instance Condense (HeaderHash block) => Condense (Point block) where - condense GenesisPoint = "Origin" - condense (BlockPoint s h) = "(Point " <> condense s <> ", " <> condense h <> ")" + condense GenesisPoint = "Origin" + condense (BlockPoint s h) = "(Point " <> condense s <> ", " <> condense h <> ")" instance Condense block => Condense (Chain block) where - condense Genesis = "Genesis" - condense (cs :> b) = condense cs <> " :> " <> condense b + condense Genesis = "Genesis" + condense (cs :> b) = condense cs <> " :> " <> condense b -instance (Condense block, HasHeader block, Condense (HeaderHash block)) - => Condense (AnchoredFragment block) where - condense (AF.Empty pt) = "EmptyAnchor " <> condense (AF.anchorToPoint pt) - condense (cs AF.:> b) = condense cs <> " :> " <> condense b +instance + (Condense block, HasHeader block, Condense (HeaderHash block)) => + Condense (AnchoredFragment block) + where + condense (AF.Empty pt) = "EmptyAnchor " <> condense (AF.anchorToPoint pt) + condense (cs AF.:> b) = 
condense cs <> " :> " <> condense b {------------------------------------------------------------------------------- Instances for cardano-crypto-classes @@ -227,37 +247,39 @@ instance Condense (SigKES v) => Condense (SignedKES v a) where condense (SignedKES sig) = condense sig instance Condense (SigKES (MockKES t)) where - condense (SigMockKES n (SignKeyMockKES (VerKeyMockKES v) j)) = - show n - <> ":" - <> show v - <> ":" - <> show j + condense (SigMockKES n (SignKeyMockKES (VerKeyMockKES v) j)) = + show n + <> ":" + <> show v + <> ":" + <> show j instance Condense (SigKES NeverKES) where condense = show instance Condense (SigDSIGN d) => Condense (SigKES (SimpleKES d t)) where - condense (SigSimpleKES sig) = condense sig + condense (SigSimpleKES sig) = condense sig instance Condense (SigDSIGN d) => Condense (SigKES (SingleKES d)) where - condense (SigSingleKES sig) = condense sig + condense (SigSingleKES sig) = condense sig instance Show (VerKeyDSIGN d) => Condense (VerKeyDSIGN d) where condense = show -instance (Condense (SigKES d), Condense (VerKeyKES d)) - => Condense (SigKES (SumKES h d)) where - condense (SigSumKES sk vk1 vk2) = condense (sk, vk1, vk2) +instance + (Condense (SigKES d), Condense (VerKeyKES d)) => + Condense (SigKES (SumKES h d)) + where + condense (SigSumKES sk vk1 vk2) = condense (sk, vk1, vk2) instance Condense (VerKeyDSIGN d) => Condense (VerKeyKES (SingleKES d)) where - condense (VerKeySingleKES h) = condense h + condense (VerKeySingleKES h) = condense h instance Condense (VerKeyKES (SumKES h d)) where - condense (VerKeySumKES h) = condense h + condense (VerKeySumKES h) = condense h instance Condense (Hash h a) where - condense = show + condense = show instance Condense Time where - condense (Time dt) = show dt + condense (Time dt) = show dt diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/DepPair.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/DepPair.hs index 
d964b4ae26..050f64783a 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/DepPair.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/DepPair.hs @@ -9,27 +9,30 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} -module Ouroboros.Consensus.Util.DepPair ( - -- * Dependent pairs +module Ouroboros.Consensus.Util.DepPair + ( -- * Dependent pairs DepPair , GenDepPair (GenDepPair, DepPair) , depPairFirst + -- * Compare indices , SameDepIndex (..) , SameDepIndex2 (..) + -- * Trivial dependency , TrivialDependency (..) , fromTrivialDependency , toTrivialDependency + -- * Convenience re-exports , Proxy (..) , (:~:) (..) ) where -import Data.Kind (Constraint, Type) -import Data.Proxy -import Data.SOP.BasicFunctors (I (..)) -import Data.Type.Equality ((:~:) (..)) +import Data.Kind (Constraint, Type) +import Data.Proxy +import Data.SOP.BasicFunctors (I (..)) +import Data.Type.Equality ((:~:) (..)) {------------------------------------------------------------------------------- Dependent pairs @@ -61,7 +64,6 @@ depPairFirst f (GenDepPair ix a) = GenDepPair (f ix) a type SameDepIndex :: (k -> Type) -> Constraint class SameDepIndex f where sameDepIndex :: f a -> f b -> Maybe (a :~: b) - default sameDepIndex :: TrivialDependency f => f a -> f b -> Maybe (a :~: b) sameDepIndex ix ix' = Just $ hasSingleIndex ix ix' @@ -82,10 +84,10 @@ class TrivialDependency f where fromTrivialDependency :: TrivialDependency f => f a -> a -> TrivialIndex f fromTrivialDependency ix = - case hasSingleIndex indexIsTrivial ix of - Refl -> id + case hasSingleIndex indexIsTrivial ix of + Refl -> id toTrivialDependency :: TrivialDependency f => f a -> TrivialIndex f -> a toTrivialDependency ix = - case hasSingleIndex indexIsTrivial ix of - Refl -> id + case hasSingleIndex indexIsTrivial ix of + Refl -> id diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/EarlyExit.hs 
b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/EarlyExit.hs index 899e95a16f..0d51550e6d 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/EarlyExit.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/EarlyExit.hs @@ -11,80 +11,97 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Util.EarlyExit ( - exitEarly +module Ouroboros.Consensus.Util.EarlyExit + ( exitEarly , withEarlyExit , withEarlyExit_ + -- * Re-exports , lift + -- * opaque , WithEarlyExit ) where -import Control.Applicative -import Control.Concurrent.Class.MonadMVar (MVar, MonadMVar (..)) -import qualified Control.Concurrent.Class.MonadMVar.Strict as Strict -import qualified Control.Concurrent.Class.MonadSTM.Strict as StrictSTM -import Control.Monad -import Control.Monad.Base -import Control.Monad.Class.MonadAsync -import Control.Monad.Class.MonadEventlog -import Control.Monad.Class.MonadFork -import Control.Monad.Class.MonadSay -import Control.Monad.Class.MonadST -import Control.Monad.Class.MonadSTM.Internal -import Control.Monad.Class.MonadThrow -import Control.Monad.Class.MonadTime -import Control.Monad.Class.MonadTime.SI -import Control.Monad.Class.MonadTimer -import qualified Control.Monad.Class.MonadTimer.SI as TimerSI -import Control.Monad.Trans.Class -import Control.Monad.Trans.Maybe -import Data.Function (on) -import Data.Proxy -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Util ((.:)) -import Ouroboros.Consensus.Util.IOLike (IOLike (..), PrimMonad (..), - StrictMVar, StrictSVar, StrictTVar, castStrictSVar) +import Control.Applicative +import Control.Concurrent.Class.MonadMVar (MVar, MonadMVar (..)) +import Control.Concurrent.Class.MonadMVar.Strict qualified as Strict +import Control.Concurrent.Class.MonadSTM.Strict qualified as StrictSTM +import Control.Monad +import Control.Monad.Base +import Control.Monad.Class.MonadAsync +import 
Control.Monad.Class.MonadEventlog +import Control.Monad.Class.MonadFork +import Control.Monad.Class.MonadST +import Control.Monad.Class.MonadSTM.Internal +import Control.Monad.Class.MonadSay +import Control.Monad.Class.MonadThrow +import Control.Monad.Class.MonadTime +import Control.Monad.Class.MonadTime.SI +import Control.Monad.Class.MonadTimer +import Control.Monad.Class.MonadTimer.SI qualified as TimerSI +import Control.Monad.Trans.Class +import Control.Monad.Trans.Maybe +import Data.Function (on) +import Data.Proxy +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Util ((.:)) +import Ouroboros.Consensus.Util.IOLike + ( IOLike (..) + , PrimMonad (..) + , StrictMVar + , StrictSVar + , StrictTVar + , castStrictSVar + ) {------------------------------------------------------------------------------- Basic definitions -------------------------------------------------------------------------------} -newtype WithEarlyExit m a = WithEarlyExit { - unWithEarlyExit :: MaybeT m a - } - deriving ( Functor - , Applicative - , Alternative - , Monad - , MonadTrans - , MonadPlus - ) - -instance NoThunks (StrictSTM.StrictTVar m a) - => NoThunks (StrictSTM.StrictTVar (WithEarlyExit m) a) where +newtype WithEarlyExit m a = WithEarlyExit + { unWithEarlyExit :: MaybeT m a + } + deriving + ( Functor + , Applicative + , Alternative + , Monad + , MonadTrans + , MonadPlus + ) + +instance + NoThunks (StrictSTM.StrictTVar m a) => + NoThunks (StrictSTM.StrictTVar (WithEarlyExit m) a) + where showTypeOf _ = "StrictTVar (WithEarlyExit m)" wNoThunks ctxt tv = do - wNoThunks ctxt (StrictSTM.castStrictTVar tv :: StrictSTM.StrictTVar m a) + wNoThunks ctxt (StrictSTM.castStrictTVar tv :: StrictSTM.StrictTVar m a) -instance NoThunks (StrictSTM.StrictTMVar m a) - => NoThunks (StrictSTM.StrictTMVar (WithEarlyExit m) a) where +instance + NoThunks (StrictSTM.StrictTMVar m a) => + NoThunks (StrictSTM.StrictTMVar (WithEarlyExit m) a) + where showTypeOf _ = "StrictTMVar (WithEarlyExit 
m)" wNoThunks ctxt tv = do - wNoThunks ctxt (StrictSTM.castStrictTMVar tv :: StrictSTM.StrictTMVar m a) + wNoThunks ctxt (StrictSTM.castStrictTMVar tv :: StrictSTM.StrictTMVar m a) -instance NoThunks (Strict.StrictMVar m a) - => NoThunks (Strict.StrictMVar (WithEarlyExit m) a) where +instance + NoThunks (Strict.StrictMVar m a) => + NoThunks (Strict.StrictMVar (WithEarlyExit m) a) + where showTypeOf _ = "StrictMVar (WithEarlyExit m)" wNoThunks ctxt mv = do - wNoThunks ctxt (Strict.castStrictMVar mv :: Strict.StrictMVar m a) + wNoThunks ctxt (Strict.castStrictMVar mv :: Strict.StrictMVar m a) -instance NoThunks (StrictSVar m a) - => NoThunks (StrictSVar (WithEarlyExit m) a) where +instance + NoThunks (StrictSVar m a) => + NoThunks (StrictSVar (WithEarlyExit m) a) + where showTypeOf _ = "StrictSVar (WithEarlyExit m)" wNoThunks ctxt tv = do - wNoThunks ctxt (castStrictSVar tv :: StrictSVar m a) + wNoThunks ctxt (castStrictSVar tv :: StrictSVar m a) -- | Internal only earlyExit :: m (Maybe a) -> WithEarlyExit m a @@ -97,16 +114,18 @@ withEarlyExit_ :: Functor m => WithEarlyExit m () -> m () withEarlyExit_ = fmap collapse . withEarlyExit collapse :: Maybe () -> () -collapse Nothing = () +collapse Nothing = () collapse (Just ()) = () exitEarly :: Applicative m => WithEarlyExit m a exitEarly = earlyExit $ pure Nothing -instance (forall a'. NoThunks (m a')) - => NoThunks (WithEarlyExit m a) where - showTypeOf _p = "WithEarlyExit " ++ showTypeOf (Proxy @(m a)) - wNoThunks ctxt = wNoThunks ctxt . withEarlyExit +instance + (forall a'. NoThunks (m a')) => + NoThunks (WithEarlyExit m a) + where + showTypeOf _p = "WithEarlyExit " ++ showTypeOf (Proxy @(m a)) + wNoThunks ctxt = wNoThunks ctxt . 
withEarlyExit instance Monad m => MonadBase (WithEarlyExit m) (WithEarlyExit m) where liftBase = id @@ -117,87 +136,89 @@ instance Monad m => MonadBase (WithEarlyExit m) (WithEarlyExit m) where instance MonadSTM m => MonadSTM (WithEarlyExit m) where type STM (WithEarlyExit m) = WithEarlyExit (STM m) - atomically = earlyExit . atomically . withEarlyExit + atomically = earlyExit . atomically . withEarlyExit - type TVar (WithEarlyExit m) = TVar m - type TMVar (WithEarlyExit m) = TMVar m - type TQueue (WithEarlyExit m) = TQueue m + type TVar (WithEarlyExit m) = TVar m + type TMVar (WithEarlyExit m) = TMVar m + type TQueue (WithEarlyExit m) = TQueue m type TBQueue (WithEarlyExit m) = TBQueue m - type TArray (WithEarlyExit m) = TArray m - type TSem (WithEarlyExit m) = TSem m - type TChan (WithEarlyExit m) = TChan m - - newTVar = lift . newTVar - readTVar = lift . readTVar - writeTVar = lift .: writeTVar - retry = lift retry - orElse = (earlyExit .: orElse) `on` withEarlyExit - newTMVar = lift . newTMVar - newEmptyTMVar = lift newEmptyTMVar - takeTMVar = lift . takeTMVar - tryTakeTMVar = lift . tryTakeTMVar - putTMVar = lift .: putTMVar - tryPutTMVar = lift .: tryPutTMVar - readTMVar = lift . readTMVar - writeTMVar = lift .: writeTMVar - tryReadTMVar = lift . tryReadTMVar - swapTMVar = lift .: swapTMVar - isEmptyTMVar = lift . isEmptyTMVar - newTQueue = lift newTQueue - readTQueue = lift . readTQueue - tryReadTQueue = lift . tryReadTQueue - peekTQueue = lift . peekTQueue - tryPeekTQueue = lift . tryPeekTQueue - flushTQueue = lift . flushTQueue - writeTQueue = lift .: writeTQueue - isEmptyTQueue = lift . isEmptyTQueue - unGetTQueue = lift .: unGetTQueue - newTBQueue = lift . newTBQueue - readTBQueue = lift . readTBQueue - tryReadTBQueue = lift . tryReadTBQueue - peekTBQueue = lift . peekTBQueue - tryPeekTBQueue = lift . tryPeekTBQueue - flushTBQueue = lift . flushTBQueue - writeTBQueue = lift .: writeTBQueue - lengthTBQueue = lift . lengthTBQueue - isEmptyTBQueue = lift . 
isEmptyTBQueue - isFullTBQueue = lift . isFullTBQueue - unGetTBQueue = lift .: unGetTBQueue - newTSem = lift . newTSem - waitTSem = lift . waitTSem - signalTSem = lift . signalTSem - signalTSemN = lift .: signalTSemN - - newTChan = lift newTChan - newBroadcastTChan = lift newBroadcastTChan - dupTChan = lift . dupTChan - cloneTChan = lift . cloneTChan - readTChan = lift . readTChan - tryReadTChan = lift . tryReadTChan - peekTChan = lift . peekTChan - tryPeekTChan = lift . tryPeekTChan - writeTChan = lift .: writeTChan - unGetTChan = lift .: unGetTChan - isEmptyTChan = lift . isEmptyTChan - - newTMVarIO = lift . newTMVarIO - newEmptyTMVarIO = lift newEmptyTMVarIO - -instance (MonadMVar m, MonadMask m, MonadEvaluate m) - => MonadMVar (WithEarlyExit m) where + type TArray (WithEarlyExit m) = TArray m + type TSem (WithEarlyExit m) = TSem m + type TChan (WithEarlyExit m) = TChan m + + newTVar = lift . newTVar + readTVar = lift . readTVar + writeTVar = lift .: writeTVar + retry = lift retry + orElse = (earlyExit .: orElse) `on` withEarlyExit + newTMVar = lift . newTMVar + newEmptyTMVar = lift newEmptyTMVar + takeTMVar = lift . takeTMVar + tryTakeTMVar = lift . tryTakeTMVar + putTMVar = lift .: putTMVar + tryPutTMVar = lift .: tryPutTMVar + readTMVar = lift . readTMVar + writeTMVar = lift .: writeTMVar + tryReadTMVar = lift . tryReadTMVar + swapTMVar = lift .: swapTMVar + isEmptyTMVar = lift . isEmptyTMVar + newTQueue = lift newTQueue + readTQueue = lift . readTQueue + tryReadTQueue = lift . tryReadTQueue + peekTQueue = lift . peekTQueue + tryPeekTQueue = lift . tryPeekTQueue + flushTQueue = lift . flushTQueue + writeTQueue = lift .: writeTQueue + isEmptyTQueue = lift . isEmptyTQueue + unGetTQueue = lift .: unGetTQueue + newTBQueue = lift . newTBQueue + readTBQueue = lift . readTBQueue + tryReadTBQueue = lift . tryReadTBQueue + peekTBQueue = lift . peekTBQueue + tryPeekTBQueue = lift . tryPeekTBQueue + flushTBQueue = lift . 
flushTBQueue + writeTBQueue = lift .: writeTBQueue + lengthTBQueue = lift . lengthTBQueue + isEmptyTBQueue = lift . isEmptyTBQueue + isFullTBQueue = lift . isFullTBQueue + unGetTBQueue = lift .: unGetTBQueue + newTSem = lift . newTSem + waitTSem = lift . waitTSem + signalTSem = lift . signalTSem + signalTSemN = lift .: signalTSemN + + newTChan = lift newTChan + newBroadcastTChan = lift newBroadcastTChan + dupTChan = lift . dupTChan + cloneTChan = lift . cloneTChan + readTChan = lift . readTChan + tryReadTChan = lift . tryReadTChan + peekTChan = lift . peekTChan + tryPeekTChan = lift . tryPeekTChan + writeTChan = lift .: writeTChan + unGetTChan = lift .: unGetTChan + isEmptyTChan = lift . isEmptyTChan + + newTMVarIO = lift . newTMVarIO + newEmptyTMVarIO = lift newEmptyTMVarIO + +instance + (MonadMVar m, MonadMask m, MonadEvaluate m) => + MonadMVar (WithEarlyExit m) + where type MVar (WithEarlyExit m) = MVar m - newEmptyMVar = lift newEmptyMVar - takeMVar = lift . takeMVar - putMVar = lift .: putMVar - tryTakeMVar = lift . tryTakeMVar - tryPutMVar = lift .: tryPutMVar - tryReadMVar = lift . tryReadMVar - isEmptyMVar = lift . isEmptyMVar + newEmptyMVar = lift newEmptyMVar + takeMVar = lift . takeMVar + putMVar = lift .: putMVar + tryTakeMVar = lift . tryTakeMVar + tryPutMVar = lift .: tryPutMVar + tryReadMVar = lift . tryReadMVar + isEmptyMVar = lift . isEmptyMVar - newMVar = lift . newMVar - readMVar = lift . readMVar - swapMVar = lift .: swapMVar + newMVar = lift . newMVar + readMVar = lift . readMVar + swapMVar = lift .: swapMVar instance MonadCatch m => MonadThrow (WithEarlyExit m) where throwIO = lift . throwIO @@ -208,28 +229,30 @@ instance MonadCatch m => MonadThrow (WithEarlyExit m) where #endif instance MonadCatch m => MonadCatch (WithEarlyExit m) where - catch act handler = earlyExit $ + catch act handler = + earlyExit $ catch (withEarlyExit act) (withEarlyExit . 
handler) generalBracket acquire release use = earlyExit $ do - -- This is modelled on the case for ErrorT, except that we don't have - -- to worry about reporting the right error, since we only have @Nothing@ - (mb, mc) <- generalBracket - (withEarlyExit acquire) - (\mResource exitCase -> - case (mResource, exitCase) of - (Nothing, _) -> - -- resource not acquired - return Nothing - (Just resource, ExitCaseSuccess (Just b)) -> - withEarlyExit $ release resource (ExitCaseSuccess b) - (Just resource, ExitCaseException e) -> - withEarlyExit $ release resource (ExitCaseException e) - (Just resource, _otherwise) -> - withEarlyExit $ release resource ExitCaseAbort - ) - (maybe (return Nothing) (withEarlyExit . use)) - return $ (,) <$> mb <*> mc + -- This is modelled on the case for ErrorT, except that we don't have + -- to worry about reporting the right error, since we only have @Nothing@ + (mb, mc) <- + generalBracket + (withEarlyExit acquire) + ( \mResource exitCase -> + case (mResource, exitCase) of + (Nothing, _) -> + -- resource not acquired + return Nothing + (Just resource, ExitCaseSuccess (Just b)) -> + withEarlyExit $ release resource (ExitCaseSuccess b) + (Just resource, ExitCaseException e) -> + withEarlyExit $ release resource (ExitCaseException e) + (Just resource, _otherwise) -> + withEarlyExit $ release resource ExitCaseAbort + ) + (maybe (return Nothing) (withEarlyExit . use)) + return $ (,) <$> mb <*> mc instance MonadMask m => MonadMask (WithEarlyExit m) where mask f = earlyExit $ @@ -240,55 +263,60 @@ instance MonadMask m => MonadMask (WithEarlyExit m) where uninterruptibleMask $ \unmask -> let unmask' :: forall a. WithEarlyExit m a -> WithEarlyExit m a unmask' = earlyExit . unmask . 
withEarlyExit - in withEarlyExit (f unmask') + in withEarlyExit (f unmask') instance MonadThread m => MonadThread (WithEarlyExit m) where type ThreadId (WithEarlyExit m) = ThreadId m - myThreadId = lift myThreadId - labelThread = lift .: labelThread + myThreadId = lift myThreadId + labelThread = lift .: labelThread -instance (MonadMask m, MonadAsync m, MonadCatch (STM m)) - => MonadAsync (WithEarlyExit m) where +instance + (MonadMask m, MonadAsync m, MonadCatch (STM m)) => + MonadAsync (WithEarlyExit m) + where type Async (WithEarlyExit m) = WithEarlyExit (Async m) - async = lift . (fmap earlyExit . async) . withEarlyExit - asyncBound = lift . (fmap earlyExit . async) . withEarlyExit - asyncOn n = lift . (fmap earlyExit . asyncOn n) . withEarlyExit - asyncThreadId = asyncThreadId - cancel a = lift $ cancel (withEarlyExit a) - cancelWith a = lift . cancelWith (withEarlyExit a) + async = lift . (fmap earlyExit . async) . withEarlyExit + asyncBound = lift . (fmap earlyExit . async) . withEarlyExit + asyncOn n = lift . (fmap earlyExit . asyncOn n) . withEarlyExit + asyncThreadId = asyncThreadId + cancel a = lift $ cancel (withEarlyExit a) + cancelWith a = lift . cancelWith (withEarlyExit a) - waitCatchSTM a = earlyExit (commute <$> waitCatchSTM (withEarlyExit a)) - pollSTM a = earlyExit (fmap commute <$> pollSTM (withEarlyExit a)) + waitCatchSTM a = earlyExit (commute <$> waitCatchSTM (withEarlyExit a)) + pollSTM a = earlyExit (fmap commute <$> pollSTM (withEarlyExit a)) - asyncWithUnmask f = earlyExit $ fmap (Just . earlyExit) $ - asyncWithUnmask $ \unmask -> - withEarlyExit (f (earlyExit . unmask . withEarlyExit)) + asyncWithUnmask f = earlyExit $ + fmap (Just . earlyExit) $ + asyncWithUnmask $ \unmask -> + withEarlyExit (f (earlyExit . unmask . withEarlyExit)) - asyncOnWithUnmask n f = earlyExit $ fmap (Just . earlyExit) $ - asyncOnWithUnmask n $ \unmask -> - withEarlyExit (f (earlyExit . unmask . withEarlyExit)) + asyncOnWithUnmask n f = earlyExit $ + fmap (Just . 
earlyExit) $ + asyncOnWithUnmask n $ \unmask -> + withEarlyExit (f (earlyExit . unmask . withEarlyExit)) commute :: Either SomeException (Maybe a) -> Maybe (Either SomeException a) -commute (Left e) = Just (Left e) -commute (Right Nothing) = Nothing +commute (Left e) = Just (Left e) +commute (Right Nothing) = Nothing commute (Right (Just a)) = Just (Right a) instance MonadFork m => MonadFork (WithEarlyExit m) where - forkIO f = lift $ forkIO (collapse <$> withEarlyExit f) - forkOn n f = lift $ forkOn n (collapse <$> withEarlyExit f) + forkIO f = lift $ forkIO (collapse <$> withEarlyExit f) + forkOn n f = lift $ forkOn n (collapse <$> withEarlyExit f) forkIOWithUnmask f = lift $ forkIOWithUnmask $ \unmask -> - let unmask' :: forall a. WithEarlyExit m a -> WithEarlyExit m a - unmask' = earlyExit . unmask . withEarlyExit - in collapse <$> withEarlyExit (f unmask') - forkFinally f fin = lift $ forkFinally - (withEarlyExit f) - (withEarlyExit_ . maybe (pure ()) fin . commute) - - throwTo = lift .: throwTo - yield = lift yield - + let unmask' :: forall a. WithEarlyExit m a -> WithEarlyExit m a + unmask' = earlyExit . unmask . withEarlyExit + in collapse <$> withEarlyExit (f unmask') + forkFinally f fin = + lift $ + forkFinally + (withEarlyExit f) + (withEarlyExit_ . maybe (pure ()) fin . commute) + + throwTo = lift .: throwTo + yield = lift yield instance PrimMonad m => PrimMonad (WithEarlyExit m) where type PrimState (WithEarlyExit m) = PrimState m @@ -296,69 +324,69 @@ instance PrimMonad m => PrimMonad (WithEarlyExit m) where {-# INLINE primitive #-} instance MonadST m => MonadST (WithEarlyExit m) where - stToIO = lift . stToIO + stToIO = lift . 
stToIO withLiftST k = k stToIO - instance MonadMonotonicTimeNSec m => MonadMonotonicTimeNSec (WithEarlyExit m) where getMonotonicTimeNSec = lift getMonotonicTimeNSec - instance MonadMonotonicTime m => MonadMonotonicTime (WithEarlyExit m) where getMonotonicTime = lift getMonotonicTime instance MonadDelay m => MonadDelay (WithEarlyExit m) where threadDelay = lift . threadDelay - instance TimerSI.MonadDelay m => TimerSI.MonadDelay (WithEarlyExit m) where threadDelay = lift . TimerSI.threadDelay instance (MonadEvaluate m, MonadCatch m) => MonadEvaluate (WithEarlyExit m) where - evaluate = lift . evaluate + evaluate = lift . evaluate instance MonadEventlog m => MonadEventlog (WithEarlyExit m) where - traceEventIO = lift . traceEventIO + traceEventIO = lift . traceEventIO traceMarkerIO = lift . traceMarkerIO instance MonadLabelledSTM m => MonadLabelledSTM (WithEarlyExit m) where - labelTVar = lift .: labelTVar - labelTMVar = lift .: labelTMVar - labelTQueue = lift .: labelTQueue - labelTBQueue = lift .: labelTBQueue - labelTArray = lift .: labelTArray - labelTSem = lift .: labelTSem - labelTChan = lift .: labelTChan - labelTVarIO = lift .: labelTVarIO - labelTMVarIO = lift .: labelTMVarIO - labelTQueueIO = lift .: labelTQueueIO + labelTVar = lift .: labelTVar + labelTMVar = lift .: labelTMVar + labelTQueue = lift .: labelTQueue + labelTBQueue = lift .: labelTBQueue + labelTArray = lift .: labelTArray + labelTSem = lift .: labelTSem + labelTChan = lift .: labelTChan + labelTVarIO = lift .: labelTVarIO + labelTMVarIO = lift .: labelTMVarIO + labelTQueueIO = lift .: labelTQueueIO labelTBQueueIO = lift .: labelTBQueueIO - labelTArrayIO = lift .: labelTArrayIO - labelTSemIO = lift .: labelTSemIO - labelTChanIO = lift .: labelTChanIO + labelTArrayIO = lift .: labelTArrayIO + labelTSemIO = lift .: labelTSemIO + labelTChanIO = lift .: labelTChanIO instance MonadSay m => MonadSay (WithEarlyExit m) where say = lift . 
say -instance (MonadInspectSTM m, Monad (InspectMonad m)) => MonadInspectSTM (WithEarlyExit m) where - type InspectMonad (WithEarlyExit m) = InspectMonad m - inspectTVar _ = inspectTVar (Proxy @m) - inspectTMVar _ = inspectTMVar (Proxy @m) +instance (MonadInspectSTM m, Monad (InspectMonad m)) => MonadInspectSTM (WithEarlyExit m) where + type InspectMonad (WithEarlyExit m) = InspectMonad m + inspectTVar _ = inspectTVar (Proxy @m) + inspectTMVar _ = inspectTMVar (Proxy @m) instance MonadTraceSTM m => MonadTraceSTM (WithEarlyExit m) where - traceTVar _ = lift .: traceTVar (Proxy @m) - traceTMVar _ = lift .: traceTMVar (Proxy @m) - traceTQueue _ = lift .: traceTQueue (Proxy @m) + traceTVar _ = lift .: traceTVar (Proxy @m) + traceTMVar _ = lift .: traceTMVar (Proxy @m) + traceTQueue _ = lift .: traceTQueue (Proxy @m) traceTBQueue _ = lift .: traceTBQueue (Proxy @m) - traceTSem _ = lift .: traceTSem (Proxy @m) + traceTSem _ = lift .: traceTSem (Proxy @m) {------------------------------------------------------------------------------- Finally, the consensus IOLike wrapper -------------------------------------------------------------------------------} -instance ( IOLike m - , forall a. NoThunks (StrictTVar (WithEarlyExit m) a) - , forall a. NoThunks (StrictSVar (WithEarlyExit m) a) - , forall a. NoThunks (StrictMVar (WithEarlyExit m) a) - ) => IOLike (WithEarlyExit m) where +instance + ( IOLike m + , forall a. NoThunks (StrictTVar (WithEarlyExit m) a) + , forall a. NoThunks (StrictSVar (WithEarlyExit m) a) + , forall a. NoThunks (StrictMVar (WithEarlyExit m) a) + ) => + IOLike (WithEarlyExit m) + where forgetSignKeyKES = lift . 
forgetSignKeyKES diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Enclose.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Enclose.hs index eb11e700a4..9c53bc6cd9 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Enclose.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Enclose.hs @@ -1,8 +1,8 @@ {-# LANGUAGE PatternSynonyms #-} -- | Utility functions for enclosing a code segment with tracing events. -module Ouroboros.Consensus.Util.Enclose ( - Enclosing +module Ouroboros.Consensus.Util.Enclose + ( Enclosing , Enclosing' (..) , EnclosingTimed , encloseTimedWith @@ -10,15 +10,18 @@ module Ouroboros.Consensus.Util.Enclose ( , pattern FallingEdge ) where -import Control.Monad.Class.MonadTime.SI (DiffTime, - MonadMonotonicTime (..), diffTime) -import Control.Tracer (Tracer, traceWith) +import Control.Monad.Class.MonadTime.SI + ( DiffTime + , MonadMonotonicTime (..) + , diffTime + ) +import Control.Tracer (Tracer, traceWith) -data Enclosing' a = - -- | Preceding a specific code segment. +data Enclosing' a + = -- | Preceding a specific code segment. RisingEdge - -- | Succeeding a specific code segment, with extra information. - | FallingEdgeWith !a + | -- | Succeeding a specific code segment, with extra information. + FallingEdgeWith !a deriving (Show, Eq, Ord) type Enclosing = Enclosing' () @@ -30,24 +33,24 @@ pattern FallingEdge = FallingEdgeWith () -- | Enclose an action using the given 'Tracer'. 
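As a reading aid for the `Enclosing` API in this hunk: `encloseWith` brackets an action with a `RisingEdge` trace before it and a `FallingEdge` trace after it. A minimal self-contained sketch of that idea, using a plain callback and an `IORef` accumulator as stand-ins for `Control.Tracer`'s `Tracer` (names shadow the real module's for illustration only):

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- Stand-ins for this hunk's 'Enclosing' and 'encloseWith'; the real module
-- traces through a 'Control.Tracer.Tracer' rather than a plain callback.
data Enclosing = RisingEdge | FallingEdge
  deriving (Show, Eq)

encloseWith :: (Enclosing -> IO ()) -> IO a -> IO a
encloseWith trace action =
  trace RisingEdge *> action <* trace FallingEdge

-- Run a trivial action and collect the traced edges in order.
demo :: IO (Int, [Enclosing])
demo = do
  events <- newIORef []
  r <- encloseWith (\e -> modifyIORef' events (++ [e])) (pure 42)
  evs <- readIORef events
  pure (r, evs)
```

`encloseTimedWith` in the hunk follows the same shape, but additionally samples `getMonotonicTime` before and after the action and traces `FallingEdgeWith (after `diffTime` before)` instead of a bare `FallingEdge`.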
encloseWith :: - Applicative m - => Tracer m Enclosing - -> m a - -> m a + Applicative m => + Tracer m Enclosing -> + m a -> + m a encloseWith tracer action = - traceWith tracer RisingEdge *> action <* traceWith tracer FallingEdge + traceWith tracer RisingEdge *> action <* traceWith tracer FallingEdge type EnclosingTimed = Enclosing' DiffTime encloseTimedWith :: - MonadMonotonicTime m - => Tracer m EnclosingTimed - -> m a - -> m a + MonadMonotonicTime m => + Tracer m EnclosingTimed -> + m a -> + m a encloseTimedWith tracer action = do - before <- getMonotonicTime - traceWith tracer RisingEdge - res <- action - after <- getMonotonicTime - traceWith tracer (FallingEdgeWith (after `diffTime` before)) - pure res + before <- getMonotonicTime + traceWith tracer RisingEdge + res <- action + after <- getMonotonicTime + traceWith tracer (FallingEdgeWith (after `diffTime` before)) + pure res diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/FileLock.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/FileLock.hs index 95f3b53144..207257a45e 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/FileLock.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/FileLock.hs @@ -1,22 +1,22 @@ -module Ouroboros.Consensus.Util.FileLock ( - FileLock (..) +module Ouroboros.Consensus.Util.FileLock + ( FileLock (..) , ioFileLock ) where -import qualified System.FileLock as IO +import System.FileLock qualified as IO -- | Abstraction for file locks -data FileLock m = FileLock { - -- | Obtain an exclusive lock on the given file. - -- - -- Returns the function to unlock the file. - -- - -- Blocks until the lock is available. - -- - -- We don't guarantee the ability to read/write to a locked file, not - -- even when holding the lock. - lockFile :: FilePath -> m (m ()) - } +data FileLock m = FileLock + { lockFile :: FilePath -> m (m ()) + -- ^ Obtain an exclusive lock on the given file. 
+ -- + -- Returns the function to unlock the file. + -- + -- Blocks until the lock is available. + -- + -- We don't guarantee the ability to read/write to a locked file, not + -- even when holding the lock. + } -- | Implementation of 'FileLock' for 'IO', using on "System.FileLock". -- @@ -25,7 +25,8 @@ data FileLock m = FileLock { -- Unlocking the file is not guaranteed to be synchronous. Near instantaneous -- on Linux, but not synchronous. On Windows, unlocking is even more lazy. ioFileLock :: FileLock IO -ioFileLock = FileLock { - lockFile = \fp -> +ioFileLock = + FileLock + { lockFile = \fp -> IO.unlockFile <$> IO.lockFile fp IO.Exclusive } diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/HList.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/HList.hs index 0642dfdb4b..291333a5a7 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/HList.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/HList.hs @@ -13,10 +13,11 @@ -- | Heterogeneous lists -- -- Intended for qualified import -module Ouroboros.Consensus.Util.HList ( - -- * Basic definitions +module Ouroboros.Consensus.Util.HList + ( -- * Basic definitions All , HList (..) + -- * Folding , collapse , foldMap @@ -25,109 +26,138 @@ module Ouroboros.Consensus.Util.HList ( , foldr , repeatedly , repeatedlyM + -- * Singletons , IsList (..) 
, SList + -- * n-ary functions , Fn , afterFn , applyFn ) where -import Data.Kind (Constraint, Type) -import Data.Proxy -import Prelude hiding (foldMap, foldl, foldr) +import Data.Kind (Constraint, Type) +import Data.Proxy +import Prelude hiding (foldMap, foldl, foldr) {------------------------------------------------------------------------------- Basic definitions -------------------------------------------------------------------------------} data HList :: [Type] -> Type where - Nil :: HList '[] + Nil :: HList '[] (:*) :: a -> HList as -> HList (a ': as) -infixr :* +infixr 9 :* type family All c as :: Constraint where - All c '[] = () + All c '[] = () All c (a ': as) = (c a, All c as) instance All Show as => Show (HList as) where show = show . collapse (Proxy @Show) show instance (IsList as, All Eq as) => Eq (HList as) where - (==) = eq isList - where - eq :: All Eq bs => SList bs -> HList bs -> HList bs -> Bool - eq SNil _ _ = True - eq (SCons s) (x :* xs) (y :* ys) = x == y && eq s xs ys + (==) = eq isList + where + eq :: All Eq bs => SList bs -> HList bs -> HList bs -> Bool + eq SNil _ _ = True + eq (SCons s) (x :* xs) (y :* ys) = x == y && eq s xs ys instance (IsList as, All Eq as, All Ord as) => Ord (HList as) where - compare = cmp isList - where - cmp :: All Ord bs => SList bs -> HList bs -> HList bs -> Ordering - cmp SNil _ _ = EQ - cmp (SCons s) (x :* xs) (y :* ys) = compare x y <> cmp s xs ys + compare = cmp isList + where + cmp :: All Ord bs => SList bs -> HList bs -> HList bs -> Ordering + cmp SNil _ _ = EQ + cmp (SCons s) (x :* xs) (y :* ys) = compare x y <> cmp s xs ys {------------------------------------------------------------------------------- Folding -------------------------------------------------------------------------------} -foldl :: forall c as b proxy. All c as - => proxy c - -> (forall a. c a => b -> a -> b) -> b -> HList as -> b +foldl :: + forall c as b proxy. + All c as => + proxy c -> + (forall a. 
c a => b -> a -> b) -> + b -> + HList as -> + b foldl _ f = go - where - go :: All c as' => b -> HList as' -> b - go !acc Nil = acc - go !acc (a :* as) = go (f acc a) as - -foldlM :: forall c as m b proxy. (All c as, Monad m) - => proxy c - -> (forall a. c a => b -> a -> m b) -> b -> HList as -> m b + where + go :: All c as' => b -> HList as' -> b + go !acc Nil = acc + go !acc (a :* as) = go (f acc a) as + +foldlM :: + forall c as m b proxy. + (All c as, Monad m) => + proxy c -> + (forall a. c a => b -> a -> m b) -> + b -> + HList as -> + m b foldlM _ f = go - where - go :: All c as' => b -> HList as' -> m b - go !acc Nil = return acc - go !acc (a :* as) = f acc a >>= \acc' -> go acc' as - -foldr :: forall c as b proxy. All c as - => proxy c - -> (forall a. c a => a -> b -> b) -> b -> HList as -> b + where + go :: All c as' => b -> HList as' -> m b + go !acc Nil = return acc + go !acc (a :* as) = f acc a >>= \acc' -> go acc' as + +foldr :: + forall c as b proxy. + All c as => + proxy c -> + (forall a. c a => a -> b -> b) -> + b -> + HList as -> + b foldr _ f e = go - where - go :: All c as' => HList as' -> b - go Nil = e - go (a :* as) = f a (go as) - -foldMap :: forall c as b proxy. (All c as, Monoid b) - => proxy c - -> (forall a. c a => a -> b) - -> HList as - -> b + where + go :: All c as' => HList as' -> b + go Nil = e + go (a :* as) = f a (go as) + +foldMap :: + forall c as b proxy. + (All c as, Monoid b) => + proxy c -> + (forall a. c a => a -> b) -> + HList as -> + b foldMap p f = foldl p (\b a -> b <> f a) mempty -- | Apply function repeatedly for all elements of the list -- -- > repeatedly p = flip . foldl p . flip -repeatedly :: forall c as b proxy. All c as - => proxy c - -> (forall a. c a => a -> b -> b) -> (HList as -> b -> b) +repeatedly :: + forall c as b proxy. + All c as => + proxy c -> + (forall a. c a => a -> b -> b) -> + (HList as -> b -> b) repeatedly p f as e = foldl p (\b a -> f a b) e as -repeatedlyM :: forall c as b proxy m. 
(Monad m, All c as) - => proxy c - -> (forall a. c a => a -> b -> m b) -> (HList as -> b -> m b) +repeatedlyM :: + forall c as b proxy m. + (Monad m, All c as) => + proxy c -> + (forall a. c a => a -> b -> m b) -> + (HList as -> b -> m b) repeatedlyM p f as e = foldlM p (\b a -> f a b) e as -collapse :: forall c as b proxy. All c as - => proxy c - -> (forall a. c a => a -> b) -> HList as -> [b] +collapse :: + forall c as b proxy. + All c as => + proxy c -> + (forall a. c a => a -> b) -> + HList as -> + [b] collapse _ f = go - where - go :: All c as' => HList as' -> [b] - go Nil = [] - go (a :* as) = f a : go as + where + go :: All c as' => HList as' -> [b] + go Nil = [] + go (a :* as) = f a : go as {------------------------------------------------------------------------------- Singleton for HList @@ -140,7 +170,7 @@ data SList :: [Type] -> Type where class IsList (xs :: [Type]) where isList :: SList xs -instance IsList '[] where isList = SNil +instance IsList '[] where isList = SNil instance IsList as => IsList (a ': as) where isList = SCons isList {------------------------------------------------------------------------------- @@ -148,16 +178,16 @@ instance IsList as => IsList (a ': as) where isList = SCons isList -------------------------------------------------------------------------------} type family Fn as b where - Fn '[] b = b + Fn '[] b = b Fn (a ': as) b = a -> Fn as b withArgs :: HList as -> Fn as b -> b -withArgs Nil b = b +withArgs Nil b = b withArgs (a :* as) f = withArgs as (f a) applyFn :: Fn as b -> HList as -> b applyFn = flip withArgs afterFn :: SList as -> (b -> c) -> Fn as b -> Fn as c -afterFn SNil g b = g b +afterFn SNil g b = g b afterFn (SCons ss) g f = afterFn ss g . 
f diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/IOLike.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/IOLike.hs index a589e31e60..02dd67c3f2 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/IOLike.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/IOLike.hs @@ -2,12 +2,13 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE QuantifiedConstraints #-} {-# LANGUAGE ScopedTypeVariables #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Util.IOLike ( - IOLike (..) +module Ouroboros.Consensus.Util.IOLike + ( IOLike (..) + -- * Re-exports + -- *** MonadThrow , Exception (..) , ExitCase (..) @@ -15,94 +16,105 @@ module Ouroboros.Consensus.Util.IOLike ( , MonadMask (..) , MonadThrow (..) , SomeException + -- *** Variables with NoThunks invariants , module Ouroboros.Consensus.Util.MonadSTM.NormalForm , module Ouroboros.Consensus.Util.NormalForm.StrictMVar , module Ouroboros.Consensus.Util.NormalForm.StrictTVar + -- *** MonadFork, TODO: Should we hide this in favour of MonadAsync? , MonadFork (..) , MonadThread (..) , labelThisThread + -- *** MonadAsync , ExceptionInLinkedThread (..) , MonadAsync (..) , link , linkTo + -- *** MonadST , MonadST (..) , PrimMonad (..) + -- *** MonadTime , DiffTime , MonadMonotonicTime (..) , Time (..) , addTime , diffTime + -- *** MonadDelay , MonadDelay (..) + -- *** MonadEventlog , MonadEventlog (..) + -- *** MonadEvaluate , MonadEvaluate (..) + -- *** NoThunks , NoThunks (..) 
) where -import Cardano.Crypto.KES (KESAlgorithm, SignKeyKES) -import qualified Cardano.Crypto.KES as KES -import Control.Applicative (Alternative) -import Control.Concurrent.Class.MonadMVar (MonadInspectMVar (..)) -import qualified Control.Concurrent.Class.MonadMVar.Strict as Strict -import qualified Control.Concurrent.Class.MonadSTM.Strict as StrictSTM -import Control.Monad.Base (MonadBase) -import Control.Monad.Class.MonadAsync -import Control.Monad.Class.MonadEventlog -import Control.Monad.Class.MonadFork -import Control.Monad.Class.MonadST -import Control.Monad.Class.MonadThrow -import Control.Monad.Class.MonadTime.SI -import Control.Monad.Class.MonadTimer.SI -import Control.Monad.Primitive -import Data.Functor (void) -import Data.Proxy (Proxy (..)) -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Util.MonadSTM.NormalForm -import Ouroboros.Consensus.Util.NormalForm.StrictMVar -import Ouroboros.Consensus.Util.NormalForm.StrictTVar -import Ouroboros.Consensus.Util.Orphans () - +import Cardano.Crypto.KES (KESAlgorithm, SignKeyKES) +import Cardano.Crypto.KES qualified as KES +import Control.Applicative (Alternative) +import Control.Concurrent.Class.MonadMVar (MonadInspectMVar (..)) +import Control.Concurrent.Class.MonadMVar.Strict qualified as Strict +import Control.Concurrent.Class.MonadSTM.Strict qualified as StrictSTM +import Control.Monad.Base (MonadBase) +import Control.Monad.Class.MonadAsync +import Control.Monad.Class.MonadEventlog +import Control.Monad.Class.MonadFork +import Control.Monad.Class.MonadST +import Control.Monad.Class.MonadThrow +import Control.Monad.Class.MonadTime.SI +import Control.Monad.Class.MonadTimer.SI +import Control.Monad.Primitive +import Data.Functor (void) +import Data.Proxy (Proxy (..)) +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Util.MonadSTM.NormalForm +import Ouroboros.Consensus.Util.NormalForm.StrictMVar +import Ouroboros.Consensus.Util.NormalForm.StrictTVar +import 
Ouroboros.Consensus.Util.Orphans () {------------------------------------------------------------------------------- IOLike -------------------------------------------------------------------------------} -class ( MonadAsync m - , MonadLabelledSTM m - , MonadTraceSTM m - , MonadMVar m - , MonadEventlog m - , MonadFork m - , MonadST m - , MonadDelay m - , MonadThread m - , MonadThrow m - , MonadCatch m - , MonadMask m - , MonadMonotonicTime m - , MonadEvaluate m - , MonadTraceSTM m - , Alternative (STM m) - , MonadCatch (STM m) - , PrimMonad m - , MonadLabelledSTM m - , MonadBase m m - , forall a. NoThunks (m a) - , forall a. NoThunks a => NoThunks (StrictSTM.StrictTVar m a) - , forall a. NoThunks a => NoThunks (StrictSVar m a) - , forall a. NoThunks a => NoThunks (Strict.StrictMVar m a) - , forall a. NoThunks a => NoThunks (StrictTVar m a) - , forall a. NoThunks a => NoThunks (StrictMVar m a) - , forall a. NoThunks a => NoThunks (StrictSTM.StrictTMVar m a) - ) => IOLike m where +class + ( MonadAsync m + , MonadLabelledSTM m + , MonadTraceSTM m + , MonadMVar m + , MonadEventlog m + , MonadFork m + , MonadST m + , MonadDelay m + , MonadThread m + , MonadThrow m + , MonadCatch m + , MonadMask m + , MonadMonotonicTime m + , MonadEvaluate m + , MonadTraceSTM m + , Alternative (STM m) + , MonadCatch (STM m) + , PrimMonad m + , MonadLabelledSTM m + , MonadBase m m + , forall a. NoThunks (m a) + , forall a. NoThunks a => NoThunks (StrictSTM.StrictTVar m a) + , forall a. NoThunks a => NoThunks (StrictSVar m a) + , forall a. NoThunks a => NoThunks (Strict.StrictMVar m a) + , forall a. NoThunks a => NoThunks (StrictTVar m a) + , forall a. NoThunks a => NoThunks (StrictMVar m a) + , forall a. NoThunks a => NoThunks (StrictSTM.StrictTMVar m a) + ) => + IOLike m + where -- | Securely forget a KES signing key. -- -- No-op for the IOSim, but 'KES.forgetSignKeyKES' for IO. 
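The point of the `IOLike` constraint bundle above is that consensus code is written once against these classes and then runs both in `IO` and under the `io-sim` simulator. Specialised to `IO` with the `stm` boot package, a helper in that style looks like the following sketch (`bump` is an invented name, not from the codebase):

```haskell
import Control.Concurrent.STM (TVar, atomically, newTVarIO, readTVar, writeTVar)

-- In the codebase this would be written polymorphically, roughly
--   bump :: IOLike m => StrictTVar m Int -> m Int
-- so the same code also runs under IOSim; here it is specialised to IO
-- (and to a plain 'TVar') to keep the sketch self-contained.
bump :: TVar Int -> IO Int
bump var = atomically $ do
  n <- readTVar var
  writeTVar var (n + 1)
  pure (n + 1)
```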
@@ -114,30 +126,31 @@ instance IOLike IO where -- | Generalization of 'link' that links an async to an arbitrary thread. -- -- Non standard (not in 'async' library) --- -linkTo :: (MonadAsync m, MonadFork m, MonadMask m) - => ThreadId m -> Async m a -> m () +linkTo :: + (MonadAsync m, MonadFork m, MonadMask m) => + ThreadId m -> Async m a -> m () linkTo tid = linkToOnly tid (not . isCancel) -- | Generalization of 'linkOnly' that links an async to an arbitrary thread. -- -- Non standard (not in 'async' library). --- -linkToOnly :: forall m a. (MonadAsync m, MonadFork m, MonadMask m) - => ThreadId m -> (SomeException -> Bool) -> Async m a -> m () +linkToOnly :: + forall m a. + (MonadAsync m, MonadFork m, MonadMask m) => + ThreadId m -> (SomeException -> Bool) -> Async m a -> m () linkToOnly tid shouldThrow a = do - void $ forkRepeat ("linkToOnly " <> show linkedThreadId) $ do - r <- waitCatch a - case r of - Left e | shouldThrow e -> throwTo tid (exceptionInLinkedThread e) - _otherwise -> return () - where - linkedThreadId :: ThreadId m - linkedThreadId = asyncThreadId a - - exceptionInLinkedThread :: SomeException -> ExceptionInLinkedThread - exceptionInLinkedThread = - ExceptionInLinkedThread (show linkedThreadId) + void $ forkRepeat ("linkToOnly " <> show linkedThreadId) $ do + r <- waitCatch a + case r of + Left e | shouldThrow e -> throwTo tid (exceptionInLinkedThread e) + _otherwise -> return () + where + linkedThreadId :: ThreadId m + linkedThreadId = asyncThreadId a + + exceptionInLinkedThread :: SomeException -> ExceptionInLinkedThread + exceptionInLinkedThread = + ExceptionInLinkedThread (show linkedThreadId) isCancel :: SomeException -> Bool isCancel e @@ -147,16 +160,16 @@ isCancel e forkRepeat :: (MonadFork m, MonadMask m) => String -> m a -> m (ThreadId m) forkRepeat label action = mask $ \restore -> - let go = do r <- tryAll (restore action) - case r of - Left _ -> go - _ -> return () - in forkIO (labelThisThread label >> go) + let go = do + r <- 
tryAll (restore action) + case r of + Left _ -> go + _ -> return () + in forkIO (labelThisThread label >> go) tryAll :: MonadCatch m => m a -> m (Either SomeException a) tryAll = try - {------------------------------------------------------------------------------- NoThunks instance -------------------------------------------------------------------------------} @@ -164,20 +177,19 @@ tryAll = try instance NoThunks a => NoThunks (StrictSTM.StrictTVar IO a) where showTypeOf _ = "StrictTVar IO" wNoThunks ctxt tv = do - -- We can't use @atomically $ readTVar ..@ here, as that will lead to a - -- "Control.Concurrent.STM.atomically was nested" exception. - a <- StrictSTM.readTVarIO tv - noThunks ctxt a + -- We can't use @atomically $ readTVar ..@ here, as that will lead to a + -- "Control.Concurrent.STM.atomically was nested" exception. + a <- StrictSTM.readTVarIO tv + noThunks ctxt a instance NoThunks a => NoThunks (Strict.StrictMVar IO a) where showTypeOf _ = "StrictMVar IO" wNoThunks ctxt mvar = do - aMay <- inspectMVar (Proxy :: Proxy IO) (Strict.toLazyMVar mvar) - noThunks ctxt aMay - + aMay <- inspectMVar (Proxy :: Proxy IO) (Strict.toLazyMVar mvar) + noThunks ctxt aMay instance NoThunks a => NoThunks (StrictSTM.StrictTMVar IO a) where showTypeOf _ = "StrictTMVar IO" - wNoThunks ctxt t = do - a <- inspectTMVar (Proxy :: Proxy IO) $ toLazyTMVar t - noThunks ctxt a + wNoThunks ctxt t = do + a <- inspectTMVar (Proxy :: Proxy IO) $ toLazyTMVar t + noThunks ctxt a diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/IndexedMemPack.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/IndexedMemPack.hs index 402b7a7667..a5c96b47e0 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/IndexedMemPack.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/IndexedMemPack.hs @@ -10,21 +10,21 @@ -- tag next to the TxOut, as the Ledger layer establishes the property that -- 
TxOuts are forwards deserializable, meaning we can read them in any later -- era. -module Ouroboros.Consensus.Util.IndexedMemPack ( - IndexedMemPack (..) +module Ouroboros.Consensus.Util.IndexedMemPack + ( IndexedMemPack (..) , MemPack (..) , indexedPackByteString , indexedUnpackError ) where -import qualified Control.Monad as Monad -import Control.Monad.Trans.Fail (Fail, errorFail, failT) -import Data.Array.Byte (ByteArray (..)) -import Data.ByteString -import Data.MemPack -import Data.MemPack.Buffer -import Data.MemPack.Error -import GHC.Stack +import Control.Monad qualified as Monad +import Control.Monad.Trans.Fail (Fail, errorFail, failT) +import Data.Array.Byte (ByteArray (..)) +import Data.ByteString +import Data.MemPack +import Data.MemPack.Buffer +import Data.MemPack.Error +import GHC.Stack -- | See 'MemPack'. class IndexedMemPack idx a where @@ -33,7 +33,8 @@ class IndexedMemPack idx a where indexedUnpackM :: Buffer b => idx -> Unpack b a indexedTypeName :: idx -> String -indexedPackByteString :: forall a idx. (IndexedMemPack idx a, HasCallStack) => idx -> a -> ByteString +indexedPackByteString :: + forall a idx. (IndexedMemPack idx a, HasCallStack) => idx -> a -> ByteString indexedPackByteString idx = pinnedByteArrayToByteString . indexedPackByteArray True idx {-# INLINE indexedPackByteString #-} @@ -45,24 +46,31 @@ indexedPackByteArray :: a -> ByteArray indexedPackByteArray isPinned idx a = - packWithByteArray isPinned (indexedTypeName @idx @a idx) (indexedPackedByteCount idx a) (indexedPackM idx a) + packWithByteArray + isPinned + (indexedTypeName @idx @a idx) + (indexedPackedByteCount idx a) + (indexedPackM idx a) {-# INLINE indexedPackByteArray #-} - -indexedUnpackError :: forall idx a b. (Buffer b, IndexedMemPack idx a, HasCallStack) => idx -> b -> a +indexedUnpackError :: + forall idx a b. (Buffer b, IndexedMemPack idx a, HasCallStack) => idx -> b -> a indexedUnpackError idx = errorFail . 
indexedUnpackFail idx {-# INLINEABLE indexedUnpackError #-} -indexedUnpackFail :: forall idx a b. (IndexedMemPack idx a, Buffer b, HasCallStack) => idx -> b -> Fail SomeError a +indexedUnpackFail :: + forall idx a b. (IndexedMemPack idx a, Buffer b, HasCallStack) => idx -> b -> Fail SomeError a indexedUnpackFail idx b = do let len = bufferByteCount b (a, consumedBytes) <- indexedUnpackLeftOver idx b - Monad.when (consumedBytes /= len) $ unpackFailNotFullyConsumed (indexedTypeName @idx @a idx) consumedBytes len + Monad.when (consumedBytes /= len) $ + unpackFailNotFullyConsumed (indexedTypeName @idx @a idx) consumedBytes len pure a {-# INLINEABLE indexedUnpackFail #-} - -indexedUnpackLeftOver :: forall idx a b. (IndexedMemPack idx a, Buffer b, HasCallStack) => idx -> b -> Fail SomeError (a, Int) +indexedUnpackLeftOver :: + forall idx a b. + (IndexedMemPack idx a, Buffer b, HasCallStack) => idx -> b -> Fail SomeError (a, Int) indexedUnpackLeftOver idx b = do let len = bufferByteCount b res@(_, consumedBytes) <- runStateT (runUnpack (indexedUnpackM idx) b) 0 diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/LeakyBucket.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/LeakyBucket.hs index 326372d835..c72aa181e0 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/LeakyBucket.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/LeakyBucket.hs @@ -25,10 +25,8 @@ -- REVIEW: Could be used as leaky bucket used for rate limiting algorithms. All -- the infrastructure is here (put 'onEmpty' to @pure ()@ and you're good to go) -- but it has not been tested with that purpose in mind. --- --- $leakyBucketDesign -module Ouroboros.Consensus.Util.LeakyBucket ( - Config (..) +module Ouroboros.Consensus.Util.LeakyBucket + ( Config (..) , Handlers (..) , State (..) 
, atomicallyWithMonotonicTime @@ -46,55 +44,55 @@ module Ouroboros.Consensus.Util.LeakyBucket ( , updateConfig' ) where -import Control.Exception (assert) -import Control.Monad (forever, void, when) -import qualified Control.Monad.Class.MonadSTM.Internal as TVar -import Control.Monad.Class.MonadTimer (MonadTimer, registerDelay) -import Control.Monad.Class.MonadTimer.SI (diffTimeToMicrosecondsAsInt) -import Data.Ord (clamp) -import Data.Ratio ((%)) -import Data.Time.Clock (diffTimeToPicoseconds) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Util.IOLike hiding (killThread) -import Ouroboros.Consensus.Util.STM (blockUntilChanged) -import Prelude hiding (init) +import Control.Exception (assert) +import Control.Monad (forever, void, when) +import Control.Monad.Class.MonadSTM.Internal qualified as TVar +import Control.Monad.Class.MonadTimer (MonadTimer, registerDelay) +import Control.Monad.Class.MonadTimer.SI (diffTimeToMicrosecondsAsInt) +import Data.Ord (clamp) +import Data.Ratio ((%)) +import Data.Time.Clock (diffTimeToPicoseconds) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Util.IOLike hiding (killThread) +import Ouroboros.Consensus.Util.STM (blockUntilChanged) +import Prelude hiding (init) -- | Configuration of a leaky bucket. data Config m = Config - { -- | Initial and maximal capacity of the bucket, in number of tokens. - capacity :: !Rational, - -- | Tokens per second leaking off the bucket. - rate :: !Rational, - -- | Whether to fill to capacity on overflow or to do nothing. - fillOnOverflow :: !Bool, - -- | A monadic action to trigger when the bucket is empty. - onEmpty :: !(m ()) + { capacity :: !Rational + -- ^ Initial and maximal capacity of the bucket, in number of tokens. + , rate :: !Rational + -- ^ Tokens per second leaking off the bucket. + , fillOnOverflow :: !Bool + -- ^ Whether to fill to capacity on overflow or to do nothing. + , onEmpty :: !(m ()) + -- ^ A monadic action to trigger when the bucket is empty. 
} - deriving (Generic) + deriving Generic deriving instance NoThunks (m ()) => NoThunks (Config m) -- | A configuration for a bucket that does nothing. -dummyConfig :: (Applicative m) => Config m +dummyConfig :: Applicative m => Config m dummyConfig = Config - { capacity = 0, - rate = 0, - fillOnOverflow = True, - onEmpty = pure () + { capacity = 0 + , rate = 0 + , fillOnOverflow = True + , onEmpty = pure () } -- | State of a leaky bucket, giving the level and the associated time. data State m = State - { level :: !Rational, - time :: !Time, - paused :: !Bool, - configGeneration :: !Int, - config :: !(Config m) + { level :: !Rational + , time :: !Time + , paused :: !Bool + , configGeneration :: !Int + , config :: !(Config m) } - deriving (Generic) + deriving Generic -deriving instance (NoThunks (m ())) => NoThunks (State m) +deriving instance NoThunks (m ()) => NoThunks (State m) -- | A bucket is simply a TVar of a state. The state carries a 'Config' and an -- integer (a “generation”) to detect changes in the configuration. @@ -107,36 +105,36 @@ data FillResult = Overflew | DidNotOverflow -- bucket. All the endpoints are STM but require the current time; the easy way -- to provide this being 'atomicallyWithMonotonicTime'. data Handlers m = Handlers - { -- | Refill the bucket by the given amount and returns whether the bucket - -- overflew. The bucket may silently get filled to full capacity or not get - -- filled depending on 'fillOnOverflow'. - fill :: + { fill :: !( Rational -> Time -> STM m FillResult - ), - -- | Pause or resume the bucket. Pausing stops the bucket from leaking until - -- it is resumed. It is still possible to fill it during that time. @setPaused - -- True@ and @setPaused False@ are idempotent. - setPaused :: + ) + -- ^ Refill the bucket by the given amount and returns whether the bucket + -- overflew. The bucket may silently get filled to full capacity or not get + -- filled depending on 'fillOnOverflow'. 
+ , setPaused :: !( Bool -> Time -> STM m () ) + -- ^ Pause or resume the bucket. Pausing stops the bucket from leaking until + -- it is resumed. It is still possible to fill it during that time. @setPaused + -- True@ and @setPaused False@ are idempotent. + , updateConfig :: !( ((Rational, Config m) -> (Rational, Config m)) -> Time -> STM m () ) + -- ^ Dynamically update the level and configuration of the bucket. Updating + -- the level matters if the capacity changes, in particular. If updating + -- leaves the bucket empty, the action is triggered immediately. } -- | Variant of 'fill' already wrapped in 'atomicallyWithMonotonicTime'. fill' :: - ( MonadMonotonicTime m, - MonadSTM m + ( MonadMonotonicTime m + , MonadSTM m ) => Handlers m -> Rational -> @@ -145,8 +143,8 @@ fill' h r = atomicallyWithMonotonicTime $ fill h r -- | Variant of 'setPaused' already wrapped in 'atomicallyWithMonotonicTime'. setPaused' :: - ( MonadMonotonicTime m, - MonadSTM m + ( MonadMonotonicTime m + , MonadSTM m ) => Handlers m -> Bool -> @@ -155,8 +153,8 @@ setPaused' h p = atomicallyWithMonotonicTime $ setPaused h p -- | Variant of 'updateConfig' already wrapped in 'atomicallyWithMonotonicTime'. updateConfig' :: - ( MonadMonotonicTime m, - MonadSTM m + ( MonadMonotonicTime m + , MonadSTM m ) => Handlers m -> ((Rational, Config m) -> (Rational, Config m)) -> @@ -168,12 +166,12 @@ updateConfig' h f = atomicallyWithMonotonicTime $ updateConfig h f -- first case, return the value returned by the action. In the second case, -- return @Nothing@. 
execAgainstBucket :: - ( MonadDelay m, - MonadAsync m, - MonadFork m, - MonadMask m, - MonadTimer m, - NoThunks (m ()) + ( MonadDelay m + , MonadAsync m + , MonadFork m + , MonadMask m + , MonadTimer m + , NoThunks (m ()) ) => Config m -> (Handlers m -> m a) -> @@ -183,12 +181,12 @@ execAgainstBucket config action = snd <$> runAgainstBucket config action -- | Variant of 'execAgainstBucket' that uses a dummy configuration. This only -- makes sense for actions that use 'updateConfig'. execAgainstBucket' :: - ( MonadDelay m, - MonadAsync m, - MonadFork m, - MonadMask m, - MonadTimer m, - NoThunks (m ()) + ( MonadDelay m + , MonadAsync m + , MonadFork m + , MonadMask m + , MonadTimer m + , NoThunks (m ()) ) => (Handlers m -> m a) -> m a @@ -198,7 +196,12 @@ execAgainstBucket' action = -- | Same as 'execAgainstBucket' but returns the 'State' of the bucket when the -- action terminates. Exposed for testing purposes. evalAgainstBucket :: - (MonadDelay m, MonadAsync m, MonadFork m, MonadMask m, MonadTimer m, NoThunks (m ()) + ( MonadDelay m + , MonadAsync m + , MonadFork m + , MonadMask m + , MonadTimer m + , NoThunks (m ()) ) => Config m -> (Handlers m -> m a) -> @@ -209,12 +212,12 @@ evalAgainstBucket config action = fst <$> runAgainstBucket config action -- the action terminates. Exposed for testing purposes. runAgainstBucket :: forall m a. - ( MonadDelay m, - MonadAsync m, - MonadFork m, - MonadMask m, - MonadTimer m, - NoThunks (m ()) + ( MonadDelay m + , MonadAsync m + , MonadFork m + , MonadMask m + , MonadTimer m + , NoThunks (m ()) ) => Config m -> (Handlers m -> m a) -> @@ -223,61 +226,64 @@ runAgainstBucket config action = do leakingPeriodVersionTMVar <- atomically newEmptyTMVar -- see note [Leaky bucket design]. 
tid <- myThreadId bucket <- init config - withAsync (do - labelThisThread "Leaky bucket (ouroboros-consensus)" - leak (readTMVar leakingPeriodVersionTMVar) tid bucket) $ \_ -> do - atomicallyWithMonotonicTime $ maybeStartThread Nothing leakingPeriodVersionTMVar bucket - result <- - action $ - Handlers - { fill = \r t -> (snd <$>) $ snapshotFill bucket r t, - setPaused = setPaused bucket, - updateConfig = updateConfig leakingPeriodVersionTMVar bucket - } - state <- atomicallyWithMonotonicTime $ snapshot bucket - pure (state, result) - where - -- Start the thread (that is, write to its 'leakingPeriodVersionTMVar') if it is useful. - -- Takes a potential old value of the 'leakingPeriodVersionTMVar' as first argument, - -- which will be increased to help differentiate between restarts. - maybeStartThread :: Maybe Int -> StrictTMVar m Int -> Bucket m -> Time -> STM m () - maybeStartThread mLeakingPeriodVersion leakingPeriodVersionTMVar bucket time = do - State {config = Config {rate}} <- snapshot bucket time - when (rate > 0) $ void $ tryPutTMVar leakingPeriodVersionTMVar $ maybe 0 (+ 1) mLeakingPeriodVersion + withAsync + ( do + labelThisThread "Leaky bucket (ouroboros-consensus)" + leak (readTMVar leakingPeriodVersionTMVar) tid bucket + ) + $ \_ -> do + atomicallyWithMonotonicTime $ maybeStartThread Nothing leakingPeriodVersionTMVar bucket + result <- + action $ + Handlers + { fill = \r t -> (snd <$>) $ snapshotFill bucket r t + , setPaused = setPaused bucket + , updateConfig = updateConfig leakingPeriodVersionTMVar bucket + } + state <- atomicallyWithMonotonicTime $ snapshot bucket + pure (state, result) + where + -- Start the thread (that is, write to its 'leakingPeriodVersionTMVar') if it is useful. + -- Takes a potential old value of the 'leakingPeriodVersionTMVar' as first argument, + -- which will be increased to help differentiate between restarts. 
+ maybeStartThread :: Maybe Int -> StrictTMVar m Int -> Bucket m -> Time -> STM m () + maybeStartThread mLeakingPeriodVersion leakingPeriodVersionTMVar bucket time = do + State{config = Config{rate}} <- snapshot bucket time + when (rate > 0) $ void $ tryPutTMVar leakingPeriodVersionTMVar $ maybe 0 (+ 1) mLeakingPeriodVersion - setPaused :: Bucket m -> Bool -> Time -> STM m () - setPaused bucket paused time = do - newState <- snapshot bucket time - writeTVar bucket newState {paused} + setPaused :: Bucket m -> Bool -> Time -> STM m () + setPaused bucket paused time = do + newState <- snapshot bucket time + writeTVar bucket newState{paused} - updateConfig :: - StrictTMVar m Int -> - Bucket m -> - ((Rational, Config m) -> (Rational, Config m)) -> - Time -> - STM m () - updateConfig leakingPeriodVersionTMVar bucket f time = do + updateConfig :: + StrictTMVar m Int -> + Bucket m -> + ((Rational, Config m) -> (Rational, Config m)) -> + Time -> + STM m () + updateConfig leakingPeriodVersionTMVar bucket f time = do + State + { level = oldLevel + , paused + , configGeneration = oldConfigGeneration + , config = oldConfig + } <- + snapshot bucket time + let (newLevel, newConfig) = f (oldLevel, oldConfig) + Config{capacity = newCapacity} = newConfig + newLevel' = clamp (0, newCapacity) newLevel + writeTVar bucket $ State - { level = oldLevel, - paused, - configGeneration = oldConfigGeneration, - config = oldConfig - } <- - snapshot bucket time - let (newLevel, newConfig) = f (oldLevel, oldConfig) - Config {capacity = newCapacity} = newConfig - newLevel' = clamp (0, newCapacity) newLevel - writeTVar bucket $ - State - { level = newLevel', - time, - paused, - configGeneration = oldConfigGeneration + 1, - config = newConfig - } - -- Ensure that 'leakingPeriodVersionTMVar' is empty, then maybe start the thread. 
- mLeakingPeriodVersion <- tryTakeTMVar leakingPeriodVersionTMVar - maybeStartThread mLeakingPeriodVersion leakingPeriodVersionTMVar bucket time + { level = newLevel' + , time + , paused + , configGeneration = oldConfigGeneration + 1 + , config = newConfig + } + -- Ensure that 'leakingPeriodVersionTMVar' is empty, then maybe start the thread. + mLeakingPeriodVersion <- tryTakeTMVar leakingPeriodVersionTMVar + maybeStartThread mLeakingPeriodVersion leakingPeriodVersionTMVar bucket time -- | Initialise a bucket given a configuration. The bucket starts full at the -- time where one calls 'init'. @@ -285,15 +291,15 @@ init :: (MonadMonotonicTime m, MonadSTM m, NoThunks (m ())) => Config m -> m (Bucket m) -init config@Config {capacity} = do +init config@Config{capacity} = do time <- getMonotonicTime newTVarIO $ State - { time, - level = capacity, - paused = False, - configGeneration = 0, - config = config + { time + , level = capacity + , paused = False + , configGeneration = 0 + , config = config } -- $leakyBucketDesign @@ -331,16 +337,15 @@ init config@Config {capacity} = do -- thread should pause running if the @leakingPeriodVersionSTM@ starts blocking -- again or that the configuration changed as that it might have to wait less -- long. --- -- | Neverending computation that runs 'onEmpty' whenever the bucket becomes -- empty. See note [Leaky bucket design]. leak :: - ( MonadDelay m, - MonadCatch m, - MonadFork m, - MonadAsync m, - MonadTimer m + ( MonadDelay m + , MonadCatch m + , MonadFork m + , MonadAsync m + , MonadTimer m ) => -- | A computation indicating the version of the configuration affecting the -- leaking period. Whenever the configuration changes, the returned integer @@ -354,40 +359,38 @@ leak :: Bucket m -> m () leak leakingPeriodVersionSTM actionThreadId bucket = forever $ do - -- Block until we are allowed to run. 
- leakingPeriodVersion <- atomically leakingPeriodVersionSTM - -- NOTE: It is tempting to group this @atomically@ and - -- @atomicallyWithMonotonicTime@ into one; however, because the former is - -- blocking, the latter could get a _very_ inaccurate time, which we - -- cannot afford. - State {level, configGeneration = oldConfigGeneration, config = Config {rate, onEmpty}} <- - atomicallyWithMonotonicTime $ snapshot bucket - let timeToWait = secondsRationalToDiffTime (level / rate) - timeToWaitMicroseconds = diffTimeToMicrosecondsAsInt timeToWait - -- NOTE: It is possible that @timeToWait <= 1µs@ while @level > 0@ when - -- @level@ is extremely small. - if level <= 0 || timeToWaitMicroseconds <= 0 - then do - handle (\(e :: SomeException) -> throwTo actionThreadId e) onEmpty - -- We have run the action on empty, there is nothing left to do, - -- unless someone changes the configuration. - void $ atomically $ blockUntilChanged configGeneration oldConfigGeneration $ readTVar bucket - else - -- Wait for the bucket to empty, or for the thread to be stopped or - -- restarted. Beware not to call 'registerDelay' with argument 0, that - -- is ensure that @timeToWaitMicroseconds > 0@. - assert (timeToWaitMicroseconds > 0) $ do - varTimeout <- registerDelay timeToWaitMicroseconds - atomically $ - (check =<< TVar.readTVar varTimeout) - `orElse` - (void $ blockUntilChanged id leakingPeriodVersion leakingPeriodVersionSTM) + -- Block until we are allowed to run. + leakingPeriodVersion <- atomically leakingPeriodVersionSTM + -- NOTE: It is tempting to group this @atomically@ and + -- @atomicallyWithMonotonicTime@ into one; however, because the former is + -- blocking, the latter could get a _very_ inaccurate time, which we + -- cannot afford. 
+ State{level, configGeneration = oldConfigGeneration, config = Config{rate, onEmpty}} <- + atomicallyWithMonotonicTime $ snapshot bucket + let timeToWait = secondsRationalToDiffTime (level / rate) + timeToWaitMicroseconds = diffTimeToMicrosecondsAsInt timeToWait + -- NOTE: It is possible that @timeToWait <= 1µs@ while @level > 0@ when + -- @level@ is extremely small. + if level <= 0 || timeToWaitMicroseconds <= 0 + then do + handle (\(e :: SomeException) -> throwTo actionThreadId e) onEmpty + -- We have run the action on empty, there is nothing left to do, + -- unless someone changes the configuration. + void $ atomically $ blockUntilChanged configGeneration oldConfigGeneration $ readTVar bucket + else + -- Wait for the bucket to empty, or for the thread to be stopped or + -- restarted. Beware not to call 'registerDelay' with argument 0, that + -- is, ensure that @timeToWaitMicroseconds > 0@. + assert (timeToWaitMicroseconds > 0) $ do + varTimeout <- registerDelay timeToWaitMicroseconds + atomically $ + (check =<< TVar.readTVar varTimeout) + `orElse` (void $ blockUntilChanged id leakingPeriodVersion leakingPeriodVersionSTM) -- | Take a snapshot of the bucket, that is compute its state at the current -- time. snapshot :: - ( MonadSTM m - ) => + MonadSTM m => Bucket m -> Time -> STM m (State m) @@ -398,22 +401,21 @@ snapshot bucket newTime = fst <$> snapshotFill bucket 0 newTime -- -- REVIEW: What to do when 'toAdd' is negative?
snapshotFill :: - ( MonadSTM m - ) => + MonadSTM m => Bucket m -> Rational -> Time -> STM m (State m, FillResult) snapshotFill bucket toAdd newTime = do - State {level, time, paused, configGeneration, config = config} <- readTVar bucket - let Config {rate, capacity, fillOnOverflow} = config + State{level, time, paused, configGeneration, config = config} <- readTVar bucket + let Config{rate, capacity, fillOnOverflow} = config elapsed = diffTime newTime time leaked = if paused then 0 else (diffTimeToSecondsRational elapsed * rate) levelLeaked = clamp (0, capacity) (level - leaked) levelFilled = clamp (0, capacity) (levelLeaked + toAdd) overflew = levelLeaked + toAdd > capacity newLevel = if not overflew || fillOnOverflow then levelFilled else levelLeaked - !newState = State {time = newTime, level = newLevel, paused, configGeneration, config} + !newState = State{time = newTime, level = newLevel, paused, configGeneration, config} writeTVar bucket newState pure (newState, if overflew then Overflew else DidNotOverflow) @@ -428,8 +430,8 @@ secondsRationalToDiffTime = realToFrac -- | Helper around 'getMonotonicTime' and 'atomically'. 
atomicallyWithMonotonicTime :: - ( MonadMonotonicTime m, - MonadSTM m + ( MonadMonotonicTime m + , MonadSTM m ) => (Time -> STM m b) -> m b diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/MonadSTM/NormalForm.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/MonadSTM/NormalForm.hs index d7593709f5..e9ab885089 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/MonadSTM/NormalForm.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/MonadSTM/NormalForm.hs @@ -1,26 +1,36 @@ -module Ouroboros.Consensus.Util.MonadSTM.NormalForm ( - module LazySTM +module Ouroboros.Consensus.Util.MonadSTM.NormalForm + ( module LazySTM , module Ouroboros.Consensus.Util.MonadSTM.StrictSVar , module StrictSTM , newEmptySVar , newSVar + -- * Temporary , uncheckedNewEmptySVar , uncheckedNewSVar ) where -import Control.Concurrent.Class.MonadSTM.Strict.TMVar as StrictSTM hiding - (newTMVar, newTMVarIO, traceTMVar, traceTMVarIO) -import Control.Concurrent.Class.MonadSTM.TBQueue as LazySTM -import Control.Concurrent.Class.MonadSTM.TQueue as LazySTM -import Control.Monad.Class.MonadSTM as StrictSTM hiding (traceTVar, - traceTVarIO) -import GHC.Stack -import NoThunks.Class (NoThunks (..), unsafeNoThunks) -import Ouroboros.Consensus.Util.MonadSTM.StrictSVar hiding - (newEmptySVar, newEmptySVarWithInvariant, newSVar, - newSVarWithInvariant) -import qualified Ouroboros.Consensus.Util.MonadSTM.StrictSVar as Strict +import Control.Concurrent.Class.MonadSTM.Strict.TMVar as StrictSTM hiding + ( newTMVar + , newTMVarIO + , traceTMVar + , traceTMVarIO + ) +import Control.Concurrent.Class.MonadSTM.TBQueue as LazySTM +import Control.Concurrent.Class.MonadSTM.TQueue as LazySTM +import Control.Monad.Class.MonadSTM as StrictSTM hiding + ( traceTVar + , traceTVarIO + ) +import GHC.Stack +import NoThunks.Class (NoThunks (..), unsafeNoThunks) +import Ouroboros.Consensus.Util.MonadSTM.StrictSVar hiding + 
( newEmptySVar + , newEmptySVarWithInvariant + , newSVar + , newSVarWithInvariant + ) +import Ouroboros.Consensus.Util.MonadSTM.StrictSVar qualified as Strict -- TODO: use strict versions of 'TQueue' and 'TBQueue'. Previously the -- 'Control.Monad.Class.MonadSTM.Strict' was imported which @@ -31,8 +41,9 @@ import qualified Ouroboros.Consensus.Util.MonadSTM.StrictSVar as Strict Wrappers that check for thunks -------------------------------------------------------------------------------} -newSVar :: (MonadSTM m, HasCallStack, NoThunks a) - => a -> m (StrictSVar m a) +newSVar :: + (MonadSTM m, HasCallStack, NoThunks a) => + a -> m (StrictSVar m a) newSVar = Strict.newSVarWithInvariant (fmap show . unsafeNoThunks) newEmptySVar :: (MonadSTM m, NoThunks a) => a -> m (StrictSVar m a) diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/MonadSTM/StrictSVar.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/MonadSTM/StrictSVar.hs index ced16d0771..fd1bc15348 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/MonadSTM/StrictSVar.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/MonadSTM/StrictSVar.hs @@ -6,8 +6,8 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} -module Ouroboros.Consensus.Util.MonadSTM.StrictSVar ( - castStrictSVar +module Ouroboros.Consensus.Util.MonadSTM.StrictSVar + ( castStrictSVar , isEmptySVar , modifySVar , modifySVar_ @@ -25,19 +25,24 @@ module Ouroboros.Consensus.Util.MonadSTM.StrictSVar ( , tryTakeSVar , updateSVar , updateSVar_ + -- * constructors exported for benefit of tests , StrictSVar (..) 
) where -import Control.Concurrent.Class.MonadSTM -import qualified Control.Concurrent.Class.MonadSTM as Lazy -import Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked - (checkInvariant) -import Control.Monad (when) -import Control.Monad.Class.MonadThrow (ExitCase (..), MonadCatch, - generalBracket) -import GHC.Stack -import NoThunks.Class (NoThunks (..)) +import Control.Concurrent.Class.MonadSTM +import Control.Concurrent.Class.MonadSTM qualified as Lazy +import Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked + ( checkInvariant + ) +import Control.Monad (when) +import Control.Monad.Class.MonadThrow + ( ExitCase (..) + , MonadCatch + , generalBracket + ) +import GHC.Stack +import NoThunks.Class (NoThunks (..)) {------------------------------------------------------------------------------- Strict SVar @@ -63,35 +68,38 @@ import NoThunks.Class (NoThunks (..)) -- weaker guarantee that we provide for 'StrictSVar'. data StrictSVar m a = StrictSVar { invariant :: !(a -> Maybe String) - -- ^ Invariant checked whenever updating the 'StrictSVar'. - , tmvar :: !(Lazy.TMVar m a) - -- ^ The main TMVar supporting this 'StrictSVar' - , tvar :: !(Lazy.TVar m a) - -- ^ TVar for supporting 'readSVarSTM' - -- - -- This TVar is always kept up to date with the 'Lazy.TMVar', but holds on - -- the old value of the 'Lazy.TMVar' when it is empty. This is very useful - -- to support single writer/many reader scenarios. - -- - -- NOTE: We should always update the 'tmvar' before the 'tvar' so that if - -- the update to the 'tmvar' fails, the 'tvar is left unchanged. + -- ^ Invariant checked whenever updating the 'StrictSVar'. + , tmvar :: !(Lazy.TMVar m a) + -- ^ The main TMVar supporting this 'StrictSVar' + , tvar :: !(Lazy.TVar m a) + -- ^ TVar for supporting 'readSVarSTM' + -- + -- This TVar is always kept up to date with the 'Lazy.TMVar', but holds on + -- the old value of the 'Lazy.TMVar' when it is empty. 
This is very useful + -- to support single writer/many reader scenarios. + -- + -- NOTE: We should always update the 'tmvar' before the 'tvar' so that if + -- the update to the 'tmvar' fails, the 'tvar' is left unchanged. } -castStrictSVar :: ( Lazy.TMVar m ~ Lazy.TMVar n - , Lazy.TVar m ~ Lazy.TVar n - ) - => StrictSVar m a -> StrictSVar n a +castStrictSVar :: + ( Lazy.TMVar m ~ Lazy.TMVar n + , Lazy.TVar m ~ Lazy.TVar n + ) => + StrictSVar m a -> StrictSVar n a castStrictSVar StrictSVar{..} = StrictSVar{..} newSVar :: MonadSTM m => a -> m (StrictSVar m a) newSVar = newSVarWithInvariant (const Nothing) -newSVarWithInvariant :: (MonadSTM m, HasCallStack) - => (a -> Maybe String) -- ^ Invariant (expect 'Nothing') - -> a - -> m (StrictSVar m a) +newSVarWithInvariant :: + (MonadSTM m, HasCallStack) => + -- | Invariant (expect 'Nothing') + (a -> Maybe String) -> + a -> + m (StrictSVar m a) newSVarWithInvariant invariant !a = - checkInvariant (invariant a) $ + checkInvariant (invariant a) $ StrictSVar invariant <$> Lazy.newTMVarIO a <*> Lazy.newTVarIO a newEmptySVar :: MonadSTM m => a -> m (StrictSVar m a) @@ -103,50 +111,53 @@ newEmptySVar = newEmptySVarWithInvariant (const Nothing) -- empty, we need an initial value of @a@ even though the 'StrictSVar' starts -- out empty. However, we are /NOT/ strict in this value, to allow it to be -- @error@.
-newEmptySVarWithInvariant :: MonadSTM m - => (a -> Maybe String) -- ^ Invariant (expect 'Nothing') - -> a -- ^ The initial stale value - -> m (StrictSVar m a) +newEmptySVarWithInvariant :: + MonadSTM m => + -- | Invariant (expect 'Nothing') + (a -> Maybe String) -> + -- | The initial stale value + a -> + m (StrictSVar m a) newEmptySVarWithInvariant invariant stale = - StrictSVar invariant <$> Lazy.newEmptyTMVarIO <*> Lazy.newTVarIO stale + StrictSVar invariant <$> Lazy.newEmptyTMVarIO <*> Lazy.newTVarIO stale takeSVar :: MonadSTM m => StrictSVar m a -> m a -takeSVar StrictSVar { tmvar } = atomically $ Lazy.takeTMVar tmvar +takeSVar StrictSVar{tmvar} = atomically $ Lazy.takeTMVar tmvar tryTakeSVar :: MonadSTM m => StrictSVar m a -> m (Maybe a) -tryTakeSVar StrictSVar { tmvar } = atomically $ Lazy.tryTakeTMVar tmvar +tryTakeSVar StrictSVar{tmvar} = atomically $ Lazy.tryTakeTMVar tmvar putSVar :: (MonadSTM m, HasCallStack) => StrictSVar m a -> a -> m () -putSVar StrictSVar { tmvar, tvar, invariant } !a = do - atomically $ do - Lazy.putTMVar tmvar a - Lazy.writeTVar tvar a - checkInvariant (invariant a) $ return () +putSVar StrictSVar{tmvar, tvar, invariant} !a = do + atomically $ do + Lazy.putTMVar tmvar a + Lazy.writeTVar tvar a + checkInvariant (invariant a) $ return () tryPutSVar :: (MonadSTM m, HasCallStack) => StrictSVar m a -> a -> m Bool -tryPutSVar StrictSVar { tmvar, tvar, invariant } !a = do - didPut <- atomically $ do - didPut <- Lazy.tryPutTMVar tmvar a - when didPut $ Lazy.writeTVar tvar a - return didPut - checkInvariant (invariant a) $ return didPut +tryPutSVar StrictSVar{tmvar, tvar, invariant} !a = do + didPut <- atomically $ do + didPut <- Lazy.tryPutTMVar tmvar a + when didPut $ Lazy.writeTVar tvar a + return didPut + checkInvariant (invariant a) $ return didPut readSVar :: MonadSTM m => StrictSVar m a -> m a -readSVar StrictSVar { tmvar } = atomically $ Lazy.readTMVar tmvar +readSVar StrictSVar{tmvar} = atomically $ Lazy.readTMVar tmvar 
tryReadSVar :: MonadSTM m => StrictSVar m a -> m (Maybe a) -tryReadSVar StrictSVar { tmvar } = atomically $ Lazy.tryReadTMVar tmvar +tryReadSVar StrictSVar{tmvar} = atomically $ Lazy.tryReadTMVar tmvar -- | Read the possibly-stale value of the @SVar@ -- -- Will return the current value of the @SVar@ if it non-empty, or the last -- known value otherwise. readSVarSTM :: MonadSTM m => StrictSVar m a -> STM m a -readSVarSTM StrictSVar { tmvar, tvar } = do - ma <- Lazy.tryReadTMVar tmvar - case ma of - Just a -> return a - Nothing -> Lazy.readTVar tvar +readSVarSTM StrictSVar{tmvar, tvar} = do + ma <- Lazy.tryReadTMVar tmvar + case ma of + Just a -> return a + Nothing -> Lazy.readTVar tvar -- | Swap value of a 'StrictSVar' -- @@ -156,57 +167,59 @@ readSVarSTM StrictSVar { tmvar, tvar } = do -- 'putSVar', otherwise it will be difficult to understand when a 'StrictSVar' -- is updated and when it is not. swapSVar :: (MonadSTM m, HasCallStack) => StrictSVar m a -> a -> m a -swapSVar StrictSVar { tmvar, tvar, invariant } !a = do - oldValue <- atomically $ do - oldValue <- Lazy.swapTMVar tmvar a - Lazy.writeTVar tvar a - return oldValue - checkInvariant (invariant a) $ return oldValue +swapSVar StrictSVar{tmvar, tvar, invariant} !a = do + oldValue <- atomically $ do + oldValue <- Lazy.swapTMVar tmvar a + Lazy.writeTVar tvar a + return oldValue + checkInvariant (invariant a) $ return oldValue isEmptySVar :: MonadSTM m => StrictSVar m a -> m Bool -isEmptySVar StrictSVar { tmvar } = atomically $ Lazy.isEmptyTMVar tmvar +isEmptySVar StrictSVar{tmvar} = atomically $ Lazy.isEmptyTMVar tmvar updateSVar :: (MonadSTM m, HasCallStack) => StrictSVar m a -> (a -> (a, b)) -> m b -updateSVar StrictSVar { tmvar, tvar, invariant } f = do - -- it's not unreasonable to assume that forcing !(!a', b) inside the - -- atomically block will force the new value before putting it into the - -- SVar, but although the value in the tuple is forced, there's actually - -- a thin closure constructed that 
just points to the forced value which - -- is what GHC returns in the constructed tuple (so it is actually a thunk, - -- albeit a trivial one!). in order to ensure that we're forcing the value - -- inside the SVar before calling checkInvariant, we need an additional - -- bang outside the atomically block, which will correctly force a' before - -- checkInvariant looks to see if it's been evaluated or not. without this - -- change, it's possible to put a lazy value inside a StrictSVar (though - -- it's unlikely to occur in production environments because this - -- intermediate unforced closure is optimized away at -O1 and above). - (!a', b) <- atomically $ do - a <- Lazy.takeTMVar tmvar - let !(!a', b) = f a - Lazy.putTMVar tmvar a' - Lazy.writeTVar tvar a' - -- To exactly see what we mean, compile this module with `-ddump-stg-final` - -- and look for the definition of the closure that is placed as the first - -- item in the tuple returned here - return (a', b) - checkInvariant (invariant a') $ return b +updateSVar StrictSVar{tmvar, tvar, invariant} f = do + -- it's not unreasonable to assume that forcing !(!a', b) inside the + -- atomically block will force the new value before putting it into the + -- SVar, but although the value in the tuple is forced, there's actually + -- a thin closure constructed that just points to the forced value which + -- is what GHC returns in the constructed tuple (so it is actually a thunk, + -- albeit a trivial one!). in order to ensure that we're forcing the value + -- inside the SVar before calling checkInvariant, we need an additional + -- bang outside the atomically block, which will correctly force a' before + -- checkInvariant looks to see if it's been evaluated or not. without this + -- change, it's possible to put a lazy value inside a StrictSVar (though + -- it's unlikely to occur in production environments because this + -- intermediate unforced closure is optimized away at -O1 and above). 
+ (!a', b) <- atomically $ do + a <- Lazy.takeTMVar tmvar + let !(!a', b) = f a + Lazy.putTMVar tmvar a' + Lazy.writeTVar tvar a' + -- To exactly see what we mean, compile this module with `-ddump-stg-final` + -- and look for the definition of the closure that is placed as the first + -- item in the tuple returned here + return (a', b) + checkInvariant (invariant a') $ return b updateSVar_ :: (MonadSTM m, HasCallStack) => StrictSVar m a -> (a -> a) -> m () -updateSVar_ var f = updateSVar var ((, ()) . f) +updateSVar_ var f = updateSVar var ((,()) . f) -modifySVar :: (MonadSTM m, MonadCatch m, HasCallStack) - => StrictSVar m a -> (a -> m (a, b)) -> m b +modifySVar :: + (MonadSTM m, MonadCatch m, HasCallStack) => + StrictSVar m a -> (a -> m (a, b)) -> m b modifySVar var action = - snd . fst <$> generalBracket (takeSVar var) putBack action - where - putBack a ec = case ec of - ExitCaseSuccess (a', _) -> putSVar var a' - ExitCaseException _ex -> putSVar var a - ExitCaseAbort -> putSVar var a - -modifySVar_ :: (MonadSTM m, MonadCatch m, HasCallStack) - => StrictSVar m a -> (a -> m a) -> m () -modifySVar_ var action = modifySVar var (fmap (, ()) . action) + snd . fst <$> generalBracket (takeSVar var) putBack action + where + putBack a ec = case ec of + ExitCaseSuccess (a', _) -> putSVar var a' + ExitCaseException _ex -> putSVar var a + ExitCaseAbort -> putSVar var a + +modifySVar_ :: + (MonadSTM m, MonadCatch m, HasCallStack) => + StrictSVar m a -> (a -> m a) -> m () +modifySVar_ var action = modifySVar var (fmap (,()) . action) {------------------------------------------------------------------------------- NoThunks @@ -214,8 +227,8 @@ modifySVar_ var action = modifySVar var (fmap (, ()) . action) instance NoThunks a => NoThunks (StrictSVar IO a) where showTypeOf _ = "StrictSVar IO" - wNoThunks ctxt StrictSVar { tvar } = do - -- We can't use @atomically $ readTVar ..@ here, as that will lead to a - -- "Control.Concurrent.STM.atomically was nested" exception. 
- a <- readTVarIO tvar - noThunks ctxt a + wNoThunks ctxt StrictSVar{tvar} = do + -- We can't use @atomically $ readTVar ..@ here, as that will lead to a + -- "Control.Concurrent.STM.atomically was nested" exception. + a <- readTVarIO tvar + noThunks ctxt a diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/NormalForm/StrictMVar.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/NormalForm/StrictMVar.hs index 20f48ecef1..2ed9973d16 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/NormalForm/StrictMVar.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/NormalForm/StrictMVar.hs @@ -1,7 +1,6 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | 'StrictMVar's with 'NoThunks' invariants. @@ -16,28 +15,34 @@ -- The exports of this module (should) mirror the exports of the -- "Control.Concurrent.Class.MonadMVar.Strict.Checked" module from the -- @strict-checked-vars@ package. 
-module Ouroboros.Consensus.Util.NormalForm.StrictMVar ( - -- * StrictMVar +module Ouroboros.Consensus.Util.NormalForm.StrictMVar + ( -- * StrictMVar newEmptyMVar , newEmptyMVarWithInvariant , newMVar , newMVarWithInvariant + -- * Invariant , noThunksInvariant + -- * Unchecked , uncheckedNewEmptyMVar , uncheckedNewMVar + -- * Re-exports , module Control.Concurrent.Class.MonadMVar.Strict.Checked ) where -import qualified Control.Concurrent.Class.MonadMVar.Strict as Strict -import Control.Concurrent.Class.MonadMVar.Strict.Checked hiding - (newEmptyMVar, newEmptyMVarWithInvariant, newMVar, - newMVarWithInvariant) -import qualified Control.Concurrent.Class.MonadMVar.Strict.Checked as Checked -import GHC.Stack (HasCallStack) -import NoThunks.Class (NoThunks (..), unsafeNoThunks) +import Control.Concurrent.Class.MonadMVar.Strict qualified as Strict +import Control.Concurrent.Class.MonadMVar.Strict.Checked hiding + ( newEmptyMVar + , newEmptyMVarWithInvariant + , newMVar + , newMVarWithInvariant + ) +import Control.Concurrent.Class.MonadMVar.Strict.Checked qualified as Checked +import GHC.Stack (HasCallStack) +import NoThunks.Class (NoThunks (..), unsafeNoThunks) {------------------------------------------------------------------------------- StrictMVar @@ -56,12 +61,12 @@ newEmptyMVar = Checked.newEmptyMVarWithInvariant noThunksInvariant -- When both the custom and 'NoThunks' invariants are broken, only the error -- related to the custom invariant is reported. newMVarWithInvariant :: - (HasCallStack, MonadMVar m, NoThunks a) - => (a -> Maybe String) - -> a - -> m (StrictMVar m a) + (HasCallStack, MonadMVar m, NoThunks a) => + (a -> Maybe String) -> + a -> + m (StrictMVar m a) newMVarWithInvariant inv = - Checked.newMVarWithInvariant (\x -> inv x <> noThunksInvariant x) + Checked.newMVarWithInvariant (\x -> inv x <> noThunksInvariant x) -- | Create an empty 'StrictMVar' with a custom invariant /and/ a 'NoThunks' -- invariant. 
@@ -69,11 +74,11 @@ newMVarWithInvariant inv = -- When both the custom and 'NoThunks' invariants are broken, only the error -- related to the custom invariant is reported. newEmptyMVarWithInvariant :: - (MonadMVar m, NoThunks a) - => (a -> Maybe String) - -> m (StrictMVar m a) + (MonadMVar m, NoThunks a) => + (a -> Maybe String) -> + m (StrictMVar m a) newEmptyMVarWithInvariant inv = - Checked.newEmptyMVarWithInvariant (\x -> inv x <> noThunksInvariant x) + Checked.newEmptyMVarWithInvariant (\x -> inv x <> noThunksInvariant x) {------------------------------------------------------------------------------- Invariant diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/NormalForm/StrictTVar.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/NormalForm/StrictTVar.hs index 717fb30089..22984e9d06 100644 --- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/NormalForm/StrictTVar.hs +++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/NormalForm/StrictTVar.hs @@ -1,7 +1,6 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | 'StrictTVar's with 'NoThunks' invariants. @@ -16,29 +15,37 @@ -- The exports of this module (should) mirror the exports of the -- "Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked" module from the -- @strict-checked-vars@ package. 
-module Ouroboros.Consensus.Util.NormalForm.StrictTVar ( - -- * StrictTVar +module Ouroboros.Consensus.Util.NormalForm.StrictTVar + ( -- * StrictTVar newTVar , newTVarIO , newTVarWithInvariant , newTVarWithInvariantIO + -- * Invariant , noThunksInvariant + -- * Unchecked , uncheckedNewTVarM + -- * Re-exports , module Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked ) where -import qualified Control.Concurrent.Class.MonadSTM.Strict as StrictSTM -import Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked hiding - (checkInvariant, newTVar, newTVarIO, newTVarWithInvariant, - newTVarWithInvariantIO) -import qualified Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked as Checked -import GHC.Stack -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Util.NormalForm.StrictMVar - (noThunksInvariant) +import Control.Concurrent.Class.MonadSTM.Strict qualified as StrictSTM +import Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked hiding + ( checkInvariant + , newTVar + , newTVarIO + , newTVarWithInvariant + , newTVarWithInvariantIO + ) +import Control.Concurrent.Class.MonadSTM.Strict.TVar.Checked qualified as Checked +import GHC.Stack +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Util.NormalForm.StrictMVar + ( noThunksInvariant + ) {------------------------------------------------------------------------------- StrictTVar @@ -57,24 +64,24 @@ newTVarIO = Checked.newTVarWithInvariantIO noThunksInvariant -- When both the custom and 'NoThunks' invariants are broken, only the error -- related to the custom invariant is reported. 
 newTVarWithInvariant ::
-     (HasCallStack, StrictSTM.MonadSTM m, NoThunks a)
-  => (a -> Maybe String)
-  -> a
-  -> StrictSTM.STM m (StrictTVar m a)
+  (HasCallStack, StrictSTM.MonadSTM m, NoThunks a) =>
+  (a -> Maybe String) ->
+  a ->
+  StrictSTM.STM m (StrictTVar m a)
 newTVarWithInvariant inv =
-    Checked.newTVarWithInvariant (\x -> inv x <> noThunksInvariant x)
+  Checked.newTVarWithInvariant (\x -> inv x <> noThunksInvariant x)
 
 -- | Create a 'StrictTVar' with a custom invariant /and/ a 'NoThunks' invariant.
 --
 -- When both the custom and 'NoThunks' invariants are broken, only the error
 -- related to the custom invariant is reported.
 newTVarWithInvariantIO ::
-     (HasCallStack, StrictSTM.MonadSTM m, NoThunks a)
-  => (a -> Maybe String)
-  -> a
-  -> m (StrictTVar m a)
+  (HasCallStack, StrictSTM.MonadSTM m, NoThunks a) =>
+  (a -> Maybe String) ->
+  a ->
+  m (StrictTVar m a)
 newTVarWithInvariantIO inv =
-    Checked.newTVarWithInvariantIO (\x -> inv x <> noThunksInvariant x)
+  Checked.newTVarWithInvariantIO (\x -> inv x <> noThunksInvariant x)
 
 {-------------------------------------------------------------------------------
   NoThunks instance
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Orphans.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Orphans.hs
index 65d65c5c75..7ddbbf5614 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Orphans.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Orphans.hs
@@ -7,36 +7,40 @@
 {-# LANGUAGE StandaloneDeriving #-}
 {-# LANGUAGE TypeApplications #-}
 {-# LANGUAGE UndecidableInstances #-}
-
 {-# OPTIONS_GHC -Wno-orphans #-}
 
 module Ouroboros.Consensus.Util.Orphans () where
 
-import           Cardano.Crypto.DSIGN.Class
-import           Cardano.Crypto.DSIGN.Mock (MockDSIGN)
-import           Cardano.Crypto.Hash (Hash, SizeHash)
-import           Cardano.Ledger.Genesis (NoGenesis (..))
-import           Codec.CBOR.Decoding (Decoder)
-import           Codec.Serialise (Serialise (..))
-import           Control.Tracer (Tracer)
-import           Data.IntPSQ (IntPSQ)
-import qualified Data.IntPSQ as PSQ
-import           Data.MultiSet (MultiSet)
-import qualified Data.MultiSet as MultiSet
-import           Data.SOP.BasicFunctors
-import           GHC.TypeLits (KnownNat)
-import           NoThunks.Class (InspectHeap (..), InspectHeapNamed (..),
-                     NoThunks (..), OnlyCheckWhnfNamed (..), allNoThunks)
-import           Ouroboros.Network.Util.ShowProxy
-import           System.FS.API (SomeHasFS)
-import           System.FS.API.Types (FsPath, Handle)
-import           System.FS.CRC (CRC (CRC))
+import Cardano.Crypto.DSIGN.Class
+import Cardano.Crypto.DSIGN.Mock (MockDSIGN)
+import Cardano.Crypto.Hash (Hash, SizeHash)
+import Cardano.Ledger.Genesis (NoGenesis (..))
+import Codec.CBOR.Decoding (Decoder)
+import Codec.Serialise (Serialise (..))
+import Control.Tracer (Tracer)
+import Data.IntPSQ (IntPSQ)
+import Data.IntPSQ qualified as PSQ
+import Data.MultiSet (MultiSet)
+import Data.MultiSet qualified as MultiSet
+import Data.SOP.BasicFunctors
+import GHC.TypeLits (KnownNat)
+import NoThunks.Class
+  ( InspectHeap (..)
+  , InspectHeapNamed (..)
+  , NoThunks (..)
+  , OnlyCheckWhnfNamed (..)
+  , allNoThunks
+  )
+import Ouroboros.Network.Util.ShowProxy
+import System.FS.API (SomeHasFS)
+import System.FS.API.Types (FsPath, Handle)
+import System.FS.CRC (CRC (CRC))
 
 {-------------------------------------------------------------------------------
   Serialise
-------------------------------------------------------------------------------}
 
-instance KnownNat (SizeHash h) => Serialise (Hash h a) where
+instance KnownNat (SizeHash h) => Serialise (Hash h a)
 
 instance Serialise (VerKeyDSIGN MockDSIGN) where
   encode = encodeVerKeyDSIGN
@@ -50,17 +54,23 @@ instance NoThunks (NoGenesis era) where
   showTypeOf _ = "NoGenesis"
   wNoThunks _ NoGenesis = return Nothing
 
-instance ( NoThunks p
-         , NoThunks v
-         , Ord p
-         ) => NoThunks (IntPSQ p v) where
+instance
+  ( NoThunks p
+  , NoThunks v
+  , Ord p
+  ) =>
+  NoThunks (IntPSQ p v)
+  where
   showTypeOf _ = "IntPSQ"
   wNoThunks ctxt =
-      allNoThunks
-    . concatMap (\(k, p, v) ->
-        [ noThunks ctxt k
-        , noThunks ctxt p
-        , noThunks ctxt v])
+    allNoThunks
+      . concatMap
+        ( \(k, p, v) ->
+            [ noThunks ctxt k
+            , noThunks ctxt p
+            , noThunks ctxt v
+            ]
+        )
       . PSQ.toList
 
 deriving via OnlyCheckWhnfNamed "Decoder" (Decoder s a) instance NoThunks (Decoder s a)
 
@@ -69,7 +79,7 @@ deriving via OnlyCheckWhnfNamed "Tracer" (Tracer m ev) instance NoThunks (Tracer
 
 instance NoThunks a => NoThunks (K a b) where
   showTypeOf _ = showTypeOf (Proxy @a)
-  wNoThunks ctxt (K a) = wNoThunks ("K":ctxt) a
+  wNoThunks ctxt (K a) = wNoThunks ("K" : ctxt) a
 
 instance NoThunks a => NoThunks (MultiSet a) where
   showTypeOf _ = "MultiSet"
@@ -81,7 +91,11 @@ instance NoThunks a => NoThunks (MultiSet a) where
 deriving via InspectHeap FsPath instance NoThunks FsPath
 deriving newtype instance NoThunks CRC
 
-deriving via InspectHeapNamed "Handle" (Handle h)
-  instance NoThunks (Handle h)
-deriving via OnlyCheckWhnfNamed "SomeHasFS" (SomeHasFS m)
-  instance NoThunks (SomeHasFS m)
+deriving via
+  InspectHeapNamed "Handle" (Handle h)
+  instance
+    NoThunks (Handle h)
+deriving via
+  OnlyCheckWhnfNamed "SomeHasFS" (SomeHasFS m)
+  instance
+    NoThunks (SomeHasFS m)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/RedundantConstraints.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/RedundantConstraints.hs
index f5436530ee..b983c99f8d 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/RedundantConstraints.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/RedundantConstraints.hs
@@ -1,14 +1,14 @@
 {-# LANGUAGE ConstraintKinds #-}
-
 {-# OPTIONS_GHC -Wno-redundant-constraints #-}
 
-module Ouroboros.Consensus.Util.RedundantConstraints (
-    keepRedundantConstraint
+module Ouroboros.Consensus.Util.RedundantConstraints
+  ( keepRedundantConstraint
+
     -- * Convenience re-export
   , Proxy (..)
   ) where
 
-import           Data.Proxy
+import Data.Proxy
 
 -- | Can be used to silence individual "redundant constraint" warnings
 --
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/STM.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/STM.hs
index 4d441861b7..9130e3bee1 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/STM.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/STM.hs
@@ -8,11 +8,12 @@
 {-# LANGUAGE RankNTypes #-}
 {-# LANGUAGE ScopedTypeVariables #-}
 
-module Ouroboros.Consensus.Util.STM (
-    -- * 'Watcher'
+module Ouroboros.Consensus.Util.STM
+  ( -- * 'Watcher'
     Watcher (..)
   , forkLinkedWatcher
   , withWatcher
+
     -- * Misc
   , Fingerprint (..)
   , WithFingerprint (..)
@@ -20,57 +21,64 @@ module Ouroboros.Consensus.Util.STM (
   , blockUntilChanged
   , blockUntilJust
   , runWhenJust
+
     -- * Simulate various monad stacks in STM
   , Sim (..)
   , simId
   , simStateT
+
     -- * withTMVar
   , withTMVar
   , withTMVarAnd
   ) where
 
-import           Control.Monad (void)
-import           Control.Monad.State (StateT (..))
-import           Control.ResourceRegistry
-import           Data.Void
-import           Data.Word (Word64)
-import           GHC.Generics (Generic)
-import           GHC.Stack
-import           Ouroboros.Consensus.Util.IOLike
+import Control.Monad (void)
+import Control.Monad.State (StateT (..))
+import Control.ResourceRegistry
+import Data.Void
+import Data.Word (Word64)
+import GHC.Generics (Generic)
+import GHC.Stack
+import Ouroboros.Consensus.Util.IOLike
 
 {-------------------------------------------------------------------------------
   Misc
-------------------------------------------------------------------------------}
 
 -- | Wait until the TVar changed
-blockUntilChanged :: forall m a b. (MonadSTM m, Eq b)
-                  => (a -> b) -> b -> STM m a -> STM m (a, b)
+blockUntilChanged ::
+  forall m a b.
+  (MonadSTM m, Eq b) =>
+  (a -> b) -> b -> STM m a -> STM m (a, b)
 blockUntilChanged f b getA = do
-    a <- getA
-    let b' = f a
-    if b' == b
-      then retry
-      else return (a, b')
+  a <- getA
+  let b' = f a
+  if b' == b
+    then retry
+    else return (a, b')
 
 -- | Spawn a new thread that waits for an STM value to become 'Just'
 --
 -- The thread will be linked to the registry.
-runWhenJust :: IOLike m
-            => ResourceRegistry m
-            -> String -- ^ Label for the thread
-            -> STM m (Maybe a)
-            -> (a -> m ())
-            -> m ()
+runWhenJust ::
+  IOLike m =>
+  ResourceRegistry m ->
+  -- | Label for the thread
+  String ->
+  STM m (Maybe a) ->
+  (a -> m ()) ->
+  m ()
 runWhenJust registry label getMaybeA action =
-    void $ forkLinkedThread registry label $
+  void $
+    forkLinkedThread registry label $
       action =<< atomically (blockUntilJust getMaybeA)
 
 blockUntilJust :: MonadSTM m => STM m (Maybe a) -> STM m a
 blockUntilJust getMaybeA = do
-    ma <- getMaybeA
-    case ma of
-      Nothing -> retry
-      Just a  -> return a
+  ma <- getMaybeA
+  case ma of
+    Nothing -> retry
+    Just a -> return a
 
 blockUntilAllJust :: MonadSTM m => [STM m (Maybe a)] -> STM m [a]
 blockUntilAllJust = mapM blockUntilJust
@@ -78,31 +86,32 @@ blockUntilAllJust = mapM blockUntilJust
 -- | Simple type that can be used to indicate something in a @TVar@ is
 -- changed.
 newtype Fingerprint = Fingerprint Word64
-  deriving stock    (Show, Eq, Generic)
-  deriving newtype  (Enum)
-  deriving anyclass (NoThunks)
+  deriving stock (Show, Eq, Generic)
+  deriving newtype Enum
+  deriving anyclass NoThunks
 
 -- | Store a value together with its fingerprint.
 data WithFingerprint a = WithFingerprint
   { forgetFingerprint :: !a
-  , getFingerprint    :: !Fingerprint
-  } deriving (Show, Eq, Functor, Generic, NoThunks)
+  , getFingerprint :: !Fingerprint
+  }
+  deriving (Show, Eq, Functor, Generic, NoThunks)
 
 {-------------------------------------------------------------------------------
   Simulate monad stacks
-------------------------------------------------------------------------------}
 
-newtype Sim n m = Sim { runSim :: forall a. n a -> STM m a }
+newtype Sim n m = Sim {runSim :: forall a. n a -> STM m a}
 
 simId :: Sim (STM m) m
 simId = Sim id
 
 simStateT :: IOLike m => StrictTVar m st -> Sim n m -> Sim (StateT st n) m
 simStateT stVar (Sim k) = Sim $ \(StateT f) -> do
-    st       <- readTVar stVar
-    (a, st') <- k (f st)
-    writeTVar stVar st'
-    return a
+  st <- readTVar stVar
+  (a, st') <- k (f st)
+  writeTVar stVar st'
+  return a
 
 {-------------------------------------------------------------------------------
   Watchers
@@ -114,58 +123,63 @@ simStateT stVar (Sim k) = Sim $ \(StateT f) -> do
 -- NOTE: STM does not guarantee that 'wNotify' will /literally/ be called on
 -- /every/ change: when the system is under heavy load, some updates may be
 -- missed.
-data Watcher m a fp = Watcher {
-      -- | Obtain a fingerprint from a value of the monitored variable.
-      wFingerprint :: a -> fp
-      -- | The initial fingerprint
-      --
-      -- If 'Nothing', the action is executed once immediately to obtain the
-      -- initial fingerprint.
-    , wInitial     :: Maybe fp
-      -- | An action executed each time the fingerprint changes.
-    , wNotify      :: a -> m ()
-      -- | The variable to monitor.
-    , wReader      :: STM m a
+data Watcher m a fp = Watcher
+  { wFingerprint :: a -> fp
+  -- ^ Obtain a fingerprint from a value of the monitored variable.
+  , wInitial :: Maybe fp
+  -- ^ The initial fingerprint
+  --
+  -- If 'Nothing', the action is executed once immediately to obtain the
+  -- initial fingerprint.
+  , wNotify :: a -> m ()
+  -- ^ An action executed each time the fingerprint changes.
+  , wReader :: STM m a
+  -- ^ The variable to monitor.
   }
 
 -- | Execute a 'Watcher'
 --
 -- NOT EXPORTED
-runWatcher :: forall m a fp. (IOLike m, Eq fp, HasCallStack)
-           => Watcher m a fp
-           -> m Void
+runWatcher ::
+  forall m a fp.
+  (IOLike m, Eq fp, HasCallStack) =>
+  Watcher m a fp ->
+  m Void
 runWatcher watcher = do
-    initB <- case mbInitFP of
-      Just fp -> return fp
-      Nothing -> do
-        a <- atomically getA
-        notify a
-        return $ f a
-    loop initB
-  where
-    Watcher {
-        wFingerprint = f
-      , wInitial     = mbInitFP
-      , wNotify      = notify
-      , wReader      = getA
-      } = watcher
-
-    loop :: fp -> m Void
-    loop fp = do
-      (a, fp') <- atomically $ blockUntilChanged f fp getA
+  initB <- case mbInitFP of
+    Just fp -> return fp
+    Nothing -> do
+      a <- atomically getA
       notify a
-      loop fp'
+      return $ f a
+  loop initB
+ where
+  Watcher
+    { wFingerprint = f
+    , wInitial = mbInitFP
+    , wNotify = notify
+    , wReader = getA
+    } = watcher
+
+  loop :: fp -> m Void
+  loop fp = do
+    (a, fp') <- atomically $ blockUntilChanged f fp getA
+    notify a
+    loop fp'
 
 -- | Spawn a new thread that runs a 'Watcher'
 --
 -- The thread will be linked to the registry.
-forkLinkedWatcher :: forall m a fp. (IOLike m, Eq fp, HasCallStack)
-                  => ResourceRegistry m
-                  -> String -- ^ Label for the thread
-                  -> Watcher m a fp
-                  -> m (Thread m Void)
+forkLinkedWatcher ::
+  forall m a fp.
+  (IOLike m, Eq fp, HasCallStack) =>
+  ResourceRegistry m ->
+  -- | Label for the thread
+  String ->
+  Watcher m a fp ->
+  m (Thread m Void)
 forkLinkedWatcher registry label watcher =
-    forkLinkedThread registry label $ runWatcher watcher
+  forkLinkedThread registry label $ runWatcher watcher
 
 -- | Spawn a new thread that runs a 'Watcher'
 --
@@ -173,15 +187,18 @@ forkLinkedWatcher registry label watcher =
 --
 -- We do not provide the 'Async' handle only because our anticipated use cases
 -- don't need it.
-withWatcher :: forall m a fp r. (IOLike m, Eq fp, HasCallStack)
-            => String -- ^ Label for the thread
-            -> Watcher m a fp
-            -> m r
-            -> m r
+withWatcher ::
+  forall m a fp r.
+  (IOLike m, Eq fp, HasCallStack) =>
+  -- | Label for the thread
+  String ->
+  Watcher m a fp ->
+  m r ->
+  m r
 withWatcher label watcher k =
-    withAsync
-      (do labelThisThread label; runWatcher watcher)
-      (\h -> do link h; k)
+  withAsync
+    (do labelThisThread label; runWatcher watcher)
+    (\h -> do link h; k)
 
 {-------------------------------------------------------------------------------
   withTMVar
@@ -190,35 +207,38 @@ withWatcher label watcher k =
 -- | Apply @f@ with the content of @tv@ as state, restoring the original value when an
 -- exception occurs
 withTMVar ::
-     IOLike m
-  => StrictTMVar m a
-  -> (a -> m (c, a))
-  -> m c
+  IOLike m =>
+  StrictTMVar m a ->
+  (a -> m (c, a)) ->
+  m c
 withTMVar tv f = withTMVarAnd tv (const $ pure ()) (\a -> const $ f a)
 
 -- | Apply @f@ with the content of @tv@ as state, restoring the original value
 -- when an exception occurs. Additionally run a @STM@ action when acquiring the
 -- value.
 withTMVarAnd ::
-     IOLike m
-  => StrictTMVar m a
-  -> (a -> STM m b) -- ^ Additional STM action to run in the same atomically
-                    -- block as the TMVar is acquired
-  -> (a -> b -> m (c, a)) -- ^ Action
-  -> m c
+  IOLike m =>
+  StrictTMVar m a ->
+  -- | Additional STM action to run in the same atomically
+  -- block as the TMVar is acquired
+  (a -> STM m b) ->
+  -- | Action
+  (a -> b -> m (c, a)) ->
+  m c
 withTMVarAnd tv guard f =
-    fst . fst <$> generalBracket
-      (atomically $ do
-          istate <- takeTMVar tv
-          guarded <- guard istate
-          pure (istate, guarded)
-      )
-      (\(origState, _) -> \case
-          ExitCaseSuccess (_, newState)
-            -> atomically $ putTMVar tv newState
-          ExitCaseException _
-            -> atomically $ putTMVar tv origState
-          ExitCaseAbort
-            -> atomically $ putTMVar tv origState
-      )
-      (uncurry f)
+  fst . fst
+    <$> generalBracket
+      ( atomically $ do
+          istate <- takeTMVar tv
+          guarded <- guard istate
+          pure (istate, guarded)
+      )
+      ( \(origState, _) -> \case
+          ExitCaseSuccess (_, newState) ->
+            atomically $ putTMVar tv newState
+          ExitCaseException _ ->
+            atomically $ putTMVar tv origState
+          ExitCaseAbort ->
+            atomically $ putTMVar tv origState
+      )
+      (uncurry f)
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Time.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Time.hs
index b43a7f99e7..62032e9ac3 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Time.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Time.hs
@@ -1,10 +1,10 @@
-module Ouroboros.Consensus.Util.Time (
-    multipleNominalDelay
+module Ouroboros.Consensus.Util.Time
+  ( multipleNominalDelay
   , nominalDelay
   , secondsToNominalDiffTime
   ) where
 
-import           Data.Time (DiffTime, NominalDiffTime)
+import Data.Time (DiffTime, NominalDiffTime)
 
 {-------------------------------------------------------------------------------
   Operations
diff --git a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Versioned.hs b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Versioned.hs
index 55a84ae748..4d3c09a3dd 100644
--- a/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Versioned.hs
+++ b/ouroboros-consensus/src/ouroboros-consensus/Ouroboros/Consensus/Util/Versioned.hs
@@ -7,8 +7,8 @@
 {-# LANGUAGE RankNTypes #-}
 {-# LANGUAGE ScopedTypeVariables #-}
 
-module Ouroboros.Consensus.Util.Versioned (
-    VersionDecoder (..)
+module Ouroboros.Consensus.Util.Versioned
+  ( VersionDecoder (..)
   , VersionError (..)
   , Versioned (..)
   , decodeVersion
@@ -16,18 +16,18 @@ module Ouroboros.Consensus.Util.Versioned (
   , decodeVersioned
   , encodeVersion
   , encodeVersioned
+
     -- * opaque
   , VersionNumber
   ) where
 
-import           Cardano.Binary (enforceSize)
-import qualified Codec.CBOR.Decoding as Dec
-import           Codec.Serialise (Serialise (..))
-import           Codec.Serialise.Decoding (Decoder, decodeWord8)
-import           Codec.Serialise.Encoding (Encoding, encodeListLen, encodeWord8)
-import           Control.Exception (Exception)
-import           Data.Word (Word8)
-
+import Cardano.Binary (enforceSize)
+import Codec.CBOR.Decoding qualified as Dec
+import Codec.Serialise (Serialise (..))
+import Codec.Serialise.Decoding (Decoder, decodeWord8)
+import Codec.Serialise.Encoding (Encoding, encodeListLen, encodeWord8)
+import Control.Exception (Exception)
+import Data.Word (Word8)
 
 newtype VersionNumber = VersionNumber Word8
   deriving newtype (Eq, Ord, Num, Show)
@@ -38,61 +38,65 @@
 data Versioned a = Versioned
   { versionNumber :: !VersionNumber
-  , versioned     :: !a
-  } deriving (Eq, Show)
+  , versioned :: !a
+  }
+  deriving (Eq, Show)
 
 data VersionError
-  = IncompatibleVersion VersionNumber String
-    -- ^ We cannot deserialise the version of the data with the given
+  = -- | We cannot deserialise the version of the data with the given
     -- 'VersionNumber' because its data format is incompatible.
     --
     -- For example, the given format lacks data that was added in later
     -- version that cannot be reconstructed from scratch.
-  | UnknownVersion VersionNumber
-    -- ^ The given 'VersionNumber' is unknown and thus not supported.
-  | MigrationFailed VersionNumber String
-    -- ^ A migration from the given 'VersionNumber' failed. See 'Migrate'.
-  deriving stock (Show)
-  deriving anyclass (Exception)
+    IncompatibleVersion VersionNumber String
+  | -- | The given 'VersionNumber' is unknown and thus not supported.
+    UnknownVersion VersionNumber
+  | -- | A migration from the given 'VersionNumber' failed. See 'Migrate'.
+    MigrationFailed VersionNumber String
+  deriving stock Show
+  deriving anyclass Exception
 
 -- | How to decode a version of a format.
 data VersionDecoder a where
   -- | This version is incompatible, fail with 'IncompatibleVersion' and the
   -- given message.
-  Incompatible :: String
-               -> VersionDecoder a
-
+  Incompatible ::
+    String ->
+    VersionDecoder a
   -- | Decode the version using the given 'Decoder'.
-  Decode :: (forall s. Decoder s a)
-         -> VersionDecoder a
-
+  Decode ::
+    (forall s. Decoder s a) ->
+    VersionDecoder a
   -- | Decode an other format (@from@) and migrate from that. When migration
   -- fails, the version decoder will fail with @MigrationFailed@.
-  Migrate :: VersionDecoder from
-          -> (from -> Either String to)
-          -> VersionDecoder to
+  Migrate ::
+    VersionDecoder from ->
+    (from -> Either String to) ->
+    VersionDecoder to
 
 -- | Return a 'Decoder' for the given 'VersionDecoder'.
 getVersionDecoder ::
-     VersionNumber
-  -> VersionDecoder a
-  -> forall s. Decoder s a
+  VersionNumber ->
+  VersionDecoder a ->
+  forall s.
+  Decoder s a
 getVersionDecoder vn = \case
-    Incompatible msg     -> fail $ show $ IncompatibleVersion vn msg
-    Decode dec           -> dec
-    Migrate vDec migrate -> do
-      from <- getVersionDecoder vn vDec
-      case migrate from of
-        Left msg -> fail $ show $ MigrationFailed vn msg
-        Right to -> return to
+  Incompatible msg -> fail $ show $ IncompatibleVersion vn msg
+  Decode dec -> dec
+  Migrate vDec migrate -> do
+    from <- getVersionDecoder vn vDec
+    case migrate from of
+      Left msg -> fail $ show $ MigrationFailed vn msg
+      Right to -> return to
 
 -- | Given a 'VersionNumber' and the encoding of an @a@, encode the
 -- corresponding @'Versioned' a@. Use 'decodeVersion' to decode it.
 encodeVersion ::
-     VersionNumber
-  -> Encoding
-  -> Encoding
-encodeVersion vn encodedA = mconcat
+  VersionNumber ->
+  Encoding ->
+  Encoding
+encodeVersion vn encodedA =
+  mconcat
     [ encodeListLen 2
     , encode vn
     , encodedA
@@ -106,10 +110,11 @@ encodeVersion vn encodedA = mconcat
 -- of 'lookup'). When no match is found, a decoder that fails with
 -- 'UnknownVersion' is returned.
 decodeVersion ::
-     [(VersionNumber, VersionDecoder a)]
-  -> forall s. Decoder s a
+  [(VersionNumber, VersionDecoder a)] ->
+  forall s.
+  Decoder s a
 decodeVersion versionDecoders =
-    versioned <$> decodeVersioned versionDecoders
+  versioned <$> decodeVersioned versionDecoders
 
 -- | Same as 'decodeVersion', but with a hook that gets called in case the
 -- encoding was not produced by a versioned encoder. This allows a transition
@@ -127,48 +132,49 @@ decodeVersion versionDecoders =
 -- length 2, as the new versioned decoder will be called in those cases, not the
 -- hook.
 decodeVersionWithHook ::
-     forall a.
-     (forall s. Maybe Int -> Decoder s a)
-  -> [(VersionNumber, VersionDecoder a)]
-  -> forall s. Decoder s a
+  forall a.
+  (forall s. Maybe Int -> Decoder s a) ->
+  [(VersionNumber, VersionDecoder a)] ->
+  forall s.
+  Decoder s a
 decodeVersionWithHook hook versionDecoders = do
-    tokenType <- Dec.peekTokenType
+  tokenType <- Dec.peekTokenType
 
-    if isListLen tokenType then do
+  if isListLen tokenType
+    then do
       len <- Dec.decodeListLen
       case len of
         2 -> goVersioned
        _ -> hook (Just len)
-      else hook Nothing
-
-  where
-    isListLen :: Dec.TokenType -> Bool
-    isListLen = \case
-      Dec.TypeListLen   -> True
-      Dec.TypeListLen64 -> True
-      _                 -> False
-
-    goVersioned :: forall s. Decoder s a
-    goVersioned = do
-      vn <- decode
-      case lookup vn versionDecoders of
-        Nothing   -> fail $ show $ UnknownVersion vn
-        Just vDec -> getVersionDecoder vn vDec
+    else hook Nothing
+ where
+  isListLen :: Dec.TokenType -> Bool
+  isListLen = \case
+    Dec.TypeListLen -> True
+    Dec.TypeListLen64 -> True
+    _ -> False
+
+  goVersioned :: forall s. Decoder s a
+  goVersioned = do
+    vn <- decode
+    case lookup vn versionDecoders of
+      Nothing -> fail $ show $ UnknownVersion vn
+      Just vDec -> getVersionDecoder vn vDec
 
 encodeVersioned ::
-     (          a -> Encoding)
-  -> (Versioned a -> Encoding)
+  (a -> Encoding) ->
+  (Versioned a -> Encoding)
 encodeVersioned enc (Versioned vn a) =
-    encodeVersion vn (enc a)
+  encodeVersion vn (enc a)
 
 decodeVersioned ::
-     [(VersionNumber, VersionDecoder a)]
-  -> forall s. Decoder s (Versioned a)
+  [(VersionNumber, VersionDecoder a)] ->
+  forall s.
+  Decoder s (Versioned a)
 decodeVersioned versionDecoders = do
-    enforceSize "Versioned" 2
-    vn <- decode
-    case lookup vn versionDecoders of
-      Nothing   -> fail $ show $ UnknownVersion vn
-      Just vDec -> Versioned vn <$> getVersionDecoder vn vDec
+  enforceSize "Versioned" 2
+  vn <- decode
+  case lookup vn versionDecoders of
+    Nothing -> fail $ show $ UnknownVersion vn
+    Just vDec -> Versioned vn <$> getVersionDecoder vn vDec
diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/LedgerTables.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/LedgerTables.hs
index 10fa48e326..76f1b645d0 100644
--- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/LedgerTables.hs
+++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/LedgerTables.hs
@@ -1,13 +1,13 @@
 {-# LANGUAGE FlexibleContexts #-}
 
-module Test.LedgerTables (
-    prop_hasledgertables_laws
+module Test.LedgerTables
+  ( prop_hasledgertables_laws
   , prop_stowable_laws
   ) where
 
-import           Data.Function (on)
-import           Ouroboros.Consensus.Ledger.Basics
-import           Test.QuickCheck
+import Data.Function (on)
+import Ouroboros.Consensus.Ledger.Basics
+import Test.QuickCheck
 
 -- | We compare the Ledger Tables of the result because the comparison with the
 -- rest of the LedgerState takes considerably more time to run.
@@ -18,10 +18,10 @@ import Test.QuickCheck
   , EqMK mk
   , ShowMK mk
   , HasLedgerTables (LedgerState blk)
-  )
-  => LedgerState blk mk
-  -> LedgerState blk mk
-  -> Property
+  ) =>
+  LedgerState blk mk ->
+  LedgerState blk mk ->
+  Property
 (==?) = (===) `on` projectLedgerTables
 
 infix 4 ==?
@@ -32,15 +32,15 @@ infix 4 ==?
 --
 -- > unstow . stow == id
 prop_stowable_laws ::
-     ( HasLedgerTables (LedgerState blk)
-     , CanStowLedgerTables (LedgerState blk)
-     )
-  => LedgerState blk EmptyMK
-  -> LedgerState blk ValuesMK
-  -> Property
+  ( HasLedgerTables (LedgerState blk)
+  , CanStowLedgerTables (LedgerState blk)
+  ) =>
+  LedgerState blk EmptyMK ->
+  LedgerState blk ValuesMK ->
+  Property
 prop_stowable_laws = \ls ls' ->
-    stowLedgerTables (unstowLedgerTables ls) ==? ls .&&.
-    unstowLedgerTables (stowLedgerTables ls') ==? ls'
+  stowLedgerTables (unstowLedgerTables ls) ==? ls
+    .&&. unstowLedgerTables (stowLedgerTables ls') ==? ls'
 
 -- | The HasLedgerTables instances should follow these two laws:
 --
@@ -48,10 +48,10 @@ prop_stowable_laws = \ls ls' ->
 --
 -- > project . with == id
 prop_hasledgertables_laws ::
-     HasLedgerTables (LedgerState blk)
-  => LedgerState blk EmptyMK
-  -> LedgerTables (LedgerState blk) ValuesMK
-  -> Property
+  HasLedgerTables (LedgerState blk) =>
+  LedgerState blk EmptyMK ->
+  LedgerTables (LedgerState blk) ValuesMK ->
+  Property
 prop_hasledgertables_laws = \ls tbs ->
-    (ls `withLedgerTables` (projectLedgerTables ls)) ==? ls .&&.
-    projectLedgerTables (ls `withLedgerTables` tbs) === tbs
+  (ls `withLedgerTables` (projectLedgerTables ls)) ==? ls
+    .&&. projectLedgerTables (ls `withLedgerTables` tbs) === tbs
diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Adversarial.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Adversarial.hs
index 0a429dd219..dad4cccd33 100644
--- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Adversarial.hs
+++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Adversarial.hs
@@ -10,14 +10,15 @@
 {-# LANGUAGE TypeApplications #-}
 
 -- TODO rename to .Alternative?
-module Test.Ouroboros.Consensus.ChainGenerator.Adversarial (
-    -- * Generating
+module Test.Ouroboros.Consensus.ChainGenerator.Adversarial
+  ( -- * Generating
     AdversarialRecipe (AdversarialRecipe, arHonest, arParams, arPrefix)
   , CheckedAdversarialRecipe (UnsafeCheckedAdversarialRecipe, carHonest, carParams, carWin)
   , NoSuchAdversarialChainSchema (NoSuchAdversarialBlock, NoSuchCompetitor, NoSuchIntersection)
   , SomeCheckedAdversarialRecipe (SomeCheckedAdversarialRecipe)
   , checkAdversarialRecipe
   , uniformAdversarialChain
+
     -- * Testing
   , AdversarialViolation (..)
   , AnchorViolation (HonestActiveMustAnchorAdversarial)
@@ -27,33 +28,39 @@ module Test.Ouroboros.Consensus.ChainGenerator.Adversarial (
   , genPrefixBlockCount
   ) where
 
-import           Control.Applicative ((<|>))
-import           Control.Monad (foldM, forM_, void, when)
-import qualified Control.Monad.Except as Exn
-import           Control.Monad.ST (ST)
-import           Data.Maybe (fromJust, fromMaybe)
-import           Data.Proxy (Proxy (Proxy))
-import qualified Data.Vector.Unboxed as Vector
-import qualified System.Random.Stateful as R
-import qualified Test.Ouroboros.Consensus.ChainGenerator.BitVector as BV
-import qualified Test.Ouroboros.Consensus.ChainGenerator.Counting as C
-import           Test.Ouroboros.Consensus.ChainGenerator.Honest
-                     (ChainSchema (ChainSchema), HonestRecipe (HonestRecipe))
-import           Test.Ouroboros.Consensus.ChainGenerator.Params (Asc,
-                     Delta (Delta), Kcp (Kcp), Scg (Scg))
-import qualified Test.Ouroboros.Consensus.ChainGenerator.RaceIterator as RI
-import qualified Test.Ouroboros.Consensus.ChainGenerator.Slot as S
-import           Test.Ouroboros.Consensus.ChainGenerator.Slot
-                     (E (ActiveSlotE, EmptySlotE, SlotE))
-import qualified Test.Ouroboros.Consensus.ChainGenerator.Some as Some
+import Control.Applicative ((<|>))
+import Control.Monad (foldM, forM_, void, when)
+import Control.Monad.Except qualified as Exn
+import Control.Monad.ST (ST)
+import Data.Maybe (fromJust, fromMaybe)
+import Data.Proxy (Proxy (Proxy))
+import Data.Vector.Unboxed qualified as Vector
+import System.Random.Stateful qualified as R
+import Test.Ouroboros.Consensus.ChainGenerator.BitVector qualified as BV
+import Test.Ouroboros.Consensus.ChainGenerator.Counting qualified as C
+import Test.Ouroboros.Consensus.ChainGenerator.Honest
+  ( ChainSchema (ChainSchema)
+  , HonestRecipe (HonestRecipe)
+  )
+import Test.Ouroboros.Consensus.ChainGenerator.Params
+  ( Asc
+  , Delta (Delta)
+  , Kcp (Kcp)
+  , Scg (Scg)
+  )
+import Test.Ouroboros.Consensus.ChainGenerator.RaceIterator qualified as RI
+import Test.Ouroboros.Consensus.ChainGenerator.Slot
+  ( E (ActiveSlotE, EmptySlotE, SlotE)
+  )
+import Test.Ouroboros.Consensus.ChainGenerator.Slot qualified as S
+import Test.Ouroboros.Consensus.ChainGenerator.Some qualified as Some
 
 -----
 
-data AnchorViolation =
-    -- | An honest active slot must immediately precede the adversarial interval
+data AnchorViolation
+  = -- | An honest active slot must immediately precede the adversarial interval
     HonestActiveMustAnchorAdversarial
-  |
-    -- | The were not exactly 'arPrefix' many active slots preceding @adv@
+  | -- | The were not exactly 'arPrefix' many active slots preceding @adv@
     WrongNumberOfHonestPredecessors
   deriving (Eq, Read, Show)
 
@@ -62,24 +69,20 @@
 -- INVARIANT: @'C.windowLast' 'rvAdv' < 'C.windowLast' 'rvHon' + 'Delta'@
 --
 -- INVARIANT: @'C.windowStart' 'rvHon' <= 'C.windowStart' 'rvAdv'@
-data RaceViolation hon adv = AdversaryWonRace {
-    -- | The adversarial race window
-    rvAdv :: !(RI.Race adv)
-  ,
-    -- | The honest race window
-    rvHon :: !(RI.Race hon)
+data RaceViolation hon adv = AdversaryWonRace
+  { rvAdv :: !(RI.Race adv)
+  -- ^ The adversarial race window
+  , rvHon :: !(RI.Race hon)
+  -- ^ The honest race window
   }
   deriving (Eq, Read, Show)
 
-data AdversarialViolation hon adv =
-    BadAnchor !AnchorViolation
-  |
-    -- | The schema does not contain a positive number of active slots
+data AdversarialViolation hon adv
+  = BadAnchor !AnchorViolation
+  | -- | The schema does not contain a positive number of active slots
     BadCount
-  |
-    BadRace !(RaceViolation hon adv)
-  |
-    -- | The density of the adversarial schema is higher than the density of
+  | BadRace !(RaceViolation hon adv)
+  | -- | The density of the adversarial schema is higher than the density of
     -- the honest schema in the first stability window after the intersection.
     --
     -- In @BadDensity w h a@, @w@ is a prefix of the first stability window
@@ -122,248 +125,236 @@
 -- dense chain would have to lose at least one Praos race within the Genesis window.
 checkAdversarialChain ::
   forall base hon adv.
-     AdversarialRecipe base hon
-  -> ChainSchema base adv
-  -> Exn.Except (AdversarialViolation hon adv) ()
+  AdversarialRecipe base hon ->
+  ChainSchema base adv ->
+  Exn.Except (AdversarialViolation hon adv) ()
 checkAdversarialChain recipe adv = do
-    checkStart
-    checkCount
-    checkRaces
-    checkDensity
-  where
-    AdversarialRecipe {
-        arHonest = ChainSchema winH vH
-      ,
-        arParams = (Kcp k, Scg s, Delta d)
-      ,
-        arPrefix
-      } = recipe
-
-    ChainSchema winA vA = adv
-
-    checkStart = do
-      let startA = C.windowStart winA :: C.Index base SlotE
-          intersection = startA C.- 1 :: C.Index base SlotE
-
-      -- The intersection must be at an active slot in the honest chain.
-      case C.toWindow winH intersection of
-        Nothing -> do
-          -- the genesis block is the only permissible anchor outside of @hon@
-          when (startA /= C.Count 0) $ Exn.throwError $ BadAnchor HonestActiveMustAnchorAdversarial
-          when (arPrefix /= C.Count 0) $ Exn.throwError $ BadAnchor WrongNumberOfHonestPredecessors
-
-        Just i -> do
-          let _ = i :: C.Index hon SlotE
-
-          when (BV.testV S.inverted vH i) $ do
-            Exn.throwError $ BadAnchor HonestActiveMustAnchorAdversarial
-
-          C.SomeWindow Proxy precedingSlots <-
-            pure $ C.withWindowBetween (C.windowSize winH) (C.Lbl @"foo") (C.Count 0) i
-          let pc = BV.countActivesInV S.notInverted (C.sliceV precedingSlots vH)
-
-          -- arPrefix must correctly count the active slots in the part of
-          -- the chain upto the intersection
-          when (C.fromWindowVar precedingSlots (C.toVar pc) /= arPrefix) $ do
-            Exn.throwError $ BadAnchor WrongNumberOfHonestPredecessors
-
-    checkCount = do
-      let pc = BV.countActivesInV S.notInverted vA
-      when (C.toVar pc <= 0) $ Exn.throwError BadCount
-
-    -- the youngest slot in which the adversarial schedule cannot have accelerated
-    --
-    -- (IE @s@ past the first active adversarial slot, or @d@ past the @k+1@st
-    -- slot after the intersection)
-    youngestStableA :: C.Index base SlotE
-    youngestStableA =
-      let sYoungest = case BV.findIthEmptyInV S.inverted vA (C.Count 0) of
-            BV.JustFound firstActiveA ->
-              -- if s=0, then the slot of their block is the youngest stable slot
-              C.fromWindow winA $ firstActiveA C.+ s
-            BV.NothingFound ->
-              -- the rest of the function won't force this since there are no
-              -- adversarial active slots
-              error "dead code"
-          kPlus1stYoungest = case BV.findIthEmptyInV S.inverted vH (C.toIndex arPrefix C.+ k) of
-            BV.JustFound kPlus1st -> C.fromWindow winH kPlus1st
-            BV.NothingFound ->
-              -- If the honest fork didn't reach @k+1@ before it ended, then
-              -- the conservative assumption that all slots after 'Len' are
-              -- active on the honest chain implies the @k+1@st block is in
-              -- slot @C.windowLast winH + (k+1-x)@, where the honest chain
-              -- has @x@ remaining blocks such that @x>= RI.next vA of
-        Nothing -> pure () -- there are <= k total adversarial active slots
-        Just iterA ->
-          -- TODO optimization: how to skip all the honest Race Windows that
-          -- don't reach beyond the intersection? Perhaps skip to i - s + d?
-          go iterH iterA
-
-    -- INVARIANT iterH spans k+1 active slots (possibly conservatively)
-    --
-    -- INVARIANT iterA spans k active slots (actually, not conservatively)
-    go !iterH !iterA = do
-      C.SomeWindow Proxy raceWinH <- pure $ let RI.Race x = iterH in x
-      C.SomeWindow Proxy raceWinA <- pure $ let RI.Race x = iterA in x
-
-      -- lift both windows to @base@ so that they're comparable
-      let raceWinH' = C.joinWin winH raceWinH
-          raceWinA' = C.joinWin winA raceWinA
-
-      if
-
-        -- any Race Window that ends /after/ the adversary can accelerate is unconstrained
-        | youngestStableA < C.windowLast raceWinH' C.+ d -> pure ()
-
-        -- advance the adversarial Race Window if its start is <= the honest Race Window's start
-        | C.windowStart raceWinA' <= C.windowStart raceWinH' ->
-            case RI.next vA iterA of
-              Just iterA' -> go iterH iterA'
-              Nothing -> pure () -- there are < k remaining adversarial active slots
-
-        -- fail if the adversary won or tied the race
-        | C.windowLast raceWinA' <= C.windowLast raceWinH' C.+ d ->
-            -- iterA contains exactly k active slots, but A) it's anchored
-            -- in an active slot and B) iterH contains that anchor. Thus
-            -- adv has k+1 in iterH.
-            Exn.throwError $ BadRace AdversaryWonRace {
-                rvAdv = iterA
-              ,
-                rvHon = iterH
-              }
-
-        -- advance the honest Race Window
-        | otherwise -> case RI.next vH iterH <|> RI.nextConservative vH iterH of
-            Just iterH' -> go iterH' iterA
-            Nothing -> pure () -- there are no remaining honest active slots
-                               --
-                               -- TODO hpc shows this never executes
-
-    -- | Check that the density of the adversarial schema is less than the
-    -- density of the honest schema in the first stability window after the
-    -- intersection and in any prefix that contains the first race to the
-    -- k+1st block.
-    --
-    -- See description of @ensureLowerDensityInWindows@
-    checkDensity = do
+  checkStart
+  checkCount
+  checkRaces
+  checkDensity
+ where
+  AdversarialRecipe
+    { arHonest = ChainSchema winH vH
+    , arParams = (Kcp k, Scg s, Delta d)
+    , arPrefix
+    } = recipe
+
+  ChainSchema winA vA = adv
+
+  checkStart = do
+    let startA = C.windowStart winA :: C.Index base SlotE
+        intersection = startA C.- 1 :: C.Index base SlotE
+
+    -- The intersection must be at an active slot in the honest chain.
+    case C.toWindow winH intersection of
+      Nothing -> do
+        -- the genesis block is the only permissible anchor outside of @hon@
+        when (startA /= C.Count 0) $ Exn.throwError $ BadAnchor HonestActiveMustAnchorAdversarial
+        when (arPrefix /= C.Count 0) $ Exn.throwError $ BadAnchor WrongNumberOfHonestPredecessors
+      Just i -> do
+        let _ = i :: C.Index hon SlotE
+
+        when (BV.testV S.inverted vH i) $ do
+          Exn.throwError $ BadAnchor HonestActiveMustAnchorAdversarial
+
+        C.SomeWindow Proxy precedingSlots <-
+          pure $ C.withWindowBetween (C.windowSize winH) (C.Lbl @"foo") (C.Count 0) i
+        let pc = BV.countActivesInV S.notInverted (C.sliceV precedingSlots vH)
+
+        -- arPrefix must correctly count the active slots in the part of
+        -- the chain upto the intersection
+        when (C.fromWindowVar precedingSlots (C.toVar pc) /= arPrefix) $ do
+          Exn.throwError $ BadAnchor WrongNumberOfHonestPredecessors
+
+  checkCount = do
+    let pc = BV.countActivesInV S.notInverted vA
+    when (C.toVar pc <= 0) $ Exn.throwError BadCount
+
+  -- the youngest slot in which the adversarial schedule cannot have accelerated
+  --
+  -- (IE @s@ past the first active adversarial slot, or @d@ past the @k+1@st
+  -- slot after the intersection)
+  youngestStableA :: C.Index base SlotE
+  youngestStableA =
+    let sYoungest = case BV.findIthEmptyInV S.inverted vA (C.Count 0) of
+          BV.JustFound firstActiveA ->
+            -- if s=0, then the slot of their block is the youngest stable slot
+            C.fromWindow winA $ firstActiveA C.+ s
+          BV.NothingFound ->
+            --
the rest of the function won't force this since there are no + -- adversarial active slots + error "dead code" + kPlus1stYoungest = case BV.findIthEmptyInV S.inverted vH (C.toIndex arPrefix C.+ k) of + BV.JustFound kPlus1st -> C.fromWindow winH kPlus1st + BV.NothingFound -> + -- If the honest fork didn't reach @k+1@ before it ended, then + -- the conservative assumption that all slots after 'Len' are + -- active on the honest chain implies the @k+1@st block is in + -- slot @C.windowLast winH + (k+1-x)@, where the honest chain + -- has @x@ remaining blocks such that @x>= RI.next vA of + Nothing -> pure () -- there are <= k total adversarial active slots + Just iterA -> + -- TODO optimization: how to skip all the honest Race Windows that + -- don't reach beyond the intersection? Perhaps skip to i - s + d? + go iterH iterA + + -- INVARIANT iterH spans k+1 active slots (possibly conservatively) + -- + -- INVARIANT iterA spans k active slots (actually, not conservatively) + go !iterH !iterA = do + C.SomeWindow Proxy raceWinH <- pure $ let RI.Race x = iterH in x + C.SomeWindow Proxy raceWinA <- pure $ let RI.Race x = iterA in x + + -- lift both windows to @base@ so that they're comparable + let raceWinH' = C.joinWin winH raceWinH + raceWinA' = C.joinWin winA raceWinA + + if + -- any Race Window that ends /after/ the adversary can accelerate is unconstrained + | youngestStableA < C.windowLast raceWinH' C.+ d -> pure () + -- advance the adversarial Race Window if its start is <= the honest Race Window's start + | C.windowStart raceWinA' <= C.windowStart raceWinH' -> + case RI.next vA iterA of + Just iterA' -> go iterH iterA' + Nothing -> pure () -- there are < k remaining adversarial active slots + + -- fail if the adversary won or tied the race + | C.windowLast raceWinA' <= C.windowLast raceWinH' C.+ d -> + -- iterA contains exactly k active slots, but A) it's anchored + -- in an active slot and B) iterH contains that anchor. Thus + -- adv has k+1 in iterH. 
+ Exn.throwError $ + BadRace + AdversaryWonRace + { rvAdv = iterA + , rvHon = iterH + } + -- advance the honest Race Window + | otherwise -> case RI.next vH iterH <|> RI.nextConservative vH iterH of + Just iterH' -> go iterH' iterA + Nothing -> pure () -- there are no remaining honest active slots + -- + -- TODO hpc shows this never executes + + -- \| Check that the density of the adversarial schema is less than the + -- density of the honest schema in the first stability window after the + -- intersection and in any prefix that contains the first race to the + -- k+1st block. + -- + -- See description of @ensureLowerDensityInWindows@ + checkDensity = do + let + -- window of the honest schema after the intersection + carWin = + C.UnsafeContains + (fromJust $ C.toWindow winH $ C.windowStart winA) + (C.windowSize winA) + -- honest schema after the intersection + vHAfterIntersection = C.sliceV carWin vH + iterH :: RI.Race adv + iterH = + fromMaybe (error $ "there should be k+1 active slots after the intersection k=" ++ show k) $ + RI.init (Kcp k) vHAfterIntersection + + -- first race window after the intersection + C.SomeWindow pw0 w0 <- let RI.Race x = iterH in pure x + + let + w0' = C.truncateWin w0 (C.Count s) + vvH = C.getVector vHAfterIntersection + vvA = C.getVector vA + -- cumulative sums of active slots per slot after the intersection + hSum = Vector.scanl (+) 0 (Vector.map (\x -> if S.test S.notInverted x then 1 else 0) vvH) + aSum = Vector.scanl (+) 0 (Vector.map (\x -> if S.test S.notInverted x then 1 else 0) vvA) + -- cumulative sums of active slots per slot until the first stability + -- window after the intersection + hwSum = Vector.toList $ Vector.drop (C.getCount $ C.windowSize w0') $ Vector.take (s + 1) hSum + awSum = Vector.toList $ Vector.drop (C.getCount $ C.windowSize w0') $ Vector.take (s + 1) aSum + case [cmp | cmp@(_, (x, y)) <- zip [0 ..] 
(zip hwSum awSum), x <= y] of + [] -> pure () + ((i, (x, y)) : _) -> let - -- window of the honest schema after the intersection - carWin = - C.UnsafeContains - (fromJust $ C.toWindow winH $ C.windowStart winA) - (C.windowSize winA) - -- honest schema after the intersection - vHAfterIntersection = C.sliceV carWin vH - iterH :: RI.Race adv - iterH = - fromMaybe (error $ "there should be k+1 active slots after the intersection k=" ++ show k) - $ RI.init (Kcp k) vHAfterIntersection - - -- first race window after the intersection - C.SomeWindow pw0 w0 <- let RI.Race x = iterH in pure x - - let - w0' = C.truncateWin w0 (C.Count s) - vvH = C.getVector vHAfterIntersection - vvA = C.getVector vA - -- cumulative sums of active slots per slot after the intersection - hSum = Vector.scanl (+) 0 (Vector.map (\x -> if S.test S.notInverted x then 1 else 0) vvH) - aSum = Vector.scanl (+) 0 (Vector.map (\x -> if S.test S.notInverted x then 1 else 0) vvA) - -- cumulative sums of active slots per slot until the first stability - -- window after the intersection - hwSum = Vector.toList $ Vector.drop (C.getCount $ C.windowSize w0') $ Vector.take (s + 1) hSum - awSum = Vector.toList $ Vector.drop (C.getCount $ C.windowSize w0') $ Vector.take (s + 1) aSum - case [ cmp | cmp@(_, (x, y)) <- zip [0..] 
(zip hwSum awSum), x <= y ] of - [] -> pure () - ((i, (x, y)):_) -> - let - w0'' = C.UnsafeContains (C.windowStart w0') (C.windowSize w0' C.+ i) - in - Exn.throwError $ BadDensity (C.SomeWindow pw0 w0'') x y + w0'' = C.UnsafeContains (C.windowStart w0') (C.windowSize w0' C.+ i) + in + Exn.throwError $ BadDensity (C.SomeWindow pw0 w0'') x y ----- -- | Named arguments for 'checkAdversarialRecipe' -data AdversarialRecipe base hon = - AdversarialRecipe { - -- | The honest chain to branch off of - arHonest :: !(ChainSchema base hon) - , - -- | protocol parameters - arParams :: (Kcp, Scg, Delta) - , - -- | Where to branch off of 'arHonest' - -- - -- It is the amount of blocks shared by the honest and the adversarial - -- chain. In other words, the 0-based index of their intersection - -- in blocks, such that - -- - -- * @0@ identifies the genesis block - -- * @1@ identifies the first block in arHonest - -- * @2@ identifies the second block in arHonest - -- * etc - arPrefix :: !(C.Var hon ActiveSlotE) - } +data AdversarialRecipe base hon + = AdversarialRecipe + { arHonest :: !(ChainSchema base hon) + -- ^ The honest chain to branch off of + , arParams :: (Kcp, Scg, Delta) + -- ^ protocol parameters + , arPrefix :: !(C.Var hon ActiveSlotE) + -- ^ Where to branch off of 'arHonest' + -- + -- It is the amount of blocks shared by the honest and the adversarial + -- chain. In other words, the 0-based index of their intersection + -- in blocks, such that + -- + -- * @0@ identifies the genesis block + -- * @1@ identifies the first block in arHonest + -- * @2@ identifies the second block in arHonest + -- * etc + } deriving (Eq, Read, Show) -- | See 'CheckedAdversarialRecipe' -data SomeCheckedAdversarialRecipe base hon = - forall adv. +data SomeCheckedAdversarialRecipe base hon + = forall adv. 
SomeCheckedAdversarialRecipe - !(Proxy adv) - !(CheckedAdversarialRecipe base hon adv) + !(Proxy adv) + !(CheckedAdversarialRecipe base hon adv) instance Show (SomeCheckedAdversarialRecipe base hon) where - showsPrec p (SomeCheckedAdversarialRecipe adv car) = - Some.runShowsPrec p - $ Some.showCtor SomeCheckedAdversarialRecipe "SomeCheckedAdversarialRecipe" - `Some.showArg` adv - `Some.showArg` car + showsPrec p (SomeCheckedAdversarialRecipe adv car) = + Some.runShowsPrec p $ + Some.showCtor SomeCheckedAdversarialRecipe "SomeCheckedAdversarialRecipe" + `Some.showArg` adv + `Some.showArg` car instance Read (SomeCheckedAdversarialRecipe base hon) where - readPrec = - Some.runReadPrec - $ Some.readCtor SomeCheckedAdversarialRecipe "SomeCheckedAdversarialRecipe" - <*> Some.readArg - <*> Some.readArg + readPrec = + Some.runReadPrec $ + Some.readCtor SomeCheckedAdversarialRecipe "SomeCheckedAdversarialRecipe" + <*> Some.readArg + <*> Some.readArg -- | Image of 'checkAdversarialRecipe' when it accepts the recipe -data CheckedAdversarialRecipe base hon adv = - UnsafeCheckedAdversarialRecipe { - -- | The honest chain to branch off of - carHonest :: !(ChainSchema base hon) - , - -- | protocol parameters - carParams :: (Kcp, Scg, Delta) - , - -- | The window starting at the first slot after the intersection - -- and ending at the last slot of 'carHonest'. - -- - -- INVARIANT: there is at least one active honest slot in @adv@ - -- - -- In other words, the adversarial leader schedule does not /extend/ the - -- chain represented by 'carHonest', it competes with it. 
- carWin :: !(C.Contains SlotE hon adv) - } +data CheckedAdversarialRecipe base hon adv + = UnsafeCheckedAdversarialRecipe + { carHonest :: !(ChainSchema base hon) + -- ^ The honest chain to branch off of + , carParams :: (Kcp, Scg, Delta) + -- ^ protocol parameters + , carWin :: !(C.Contains SlotE hon adv) + -- ^ The window starting at the first slot after the intersection + -- and ending at the last slot of 'carHonest'. + -- + -- INVARIANT: there is at least one active honest slot in @adv@ + -- + -- In other words, the adversarial leader schedule does not /extend/ the + -- chain represented by 'carHonest', it competes with it. + } deriving (Eq, Read, Show) -- | Image of 'checkAdversarialRecipe' when it rejects the recipe -data NoSuchAdversarialChainSchema = - -- | There is no slot the adversary can lead +data NoSuchAdversarialChainSchema + = -- | There is no slot the adversary can lead -- -- Two possible reasons, where @X@ is the slot of 'arPrefix' and Y is the youngest slot of 'arHonest'. -- @@ -380,11 +371,9 @@ data NoSuchAdversarialChainSchema = -- and a slot z are two honest active slots with only honest empty slots -- between them... NoSuchAdversarialBlock - | - -- | @not ('arPrefix' < C)@ where @C@ is the number of active slots in 'arHonest' + | -- | @not ('arPrefix' < C)@ where @C@ is the number of active slots in 'arHonest' NoSuchCompetitor - | - -- | @not (0 <= 'arPrefix' <= C)@ where @C@ is the number of active slots in 'arHonest' + | -- | @not (0 <= 'arPrefix' <= C)@ where @C@ is the number of active slots in 'arHonest' NoSuchIntersection deriving (Eq, Show) @@ -399,49 +388,48 @@ data SettledLbl -- | Reject a bad 'AdversarialRecipe' checkAdversarialRecipe :: forall base hon. 
- AdversarialRecipe base hon - -> Exn.Except - NoSuchAdversarialChainSchema - (SomeCheckedAdversarialRecipe base hon) + AdversarialRecipe base hon -> + Exn.Except + NoSuchAdversarialChainSchema + (SomeCheckedAdversarialRecipe base hon) checkAdversarialRecipe recipe = do - when (0 == k) $ Exn.throwError NoSuchAdversarialBlock - - -- validate 'arPrefix' - firstAdvSlot <- case compare arPrefix 0 of - LT -> Exn.throwError NoSuchIntersection - EQ -> pure $ C.Count 0 - GT -> case BV.findIthEmptyInV S.inverted vH $ C.toIndex $ arPrefix - 1 of - BV.NothingFound -> Exn.throwError NoSuchIntersection - BV.JustFound x -> do - when (x == C.lastIndex (C.windowSize winH)) $ Exn.throwError NoSuchAdversarialBlock - pure (x C.+ 1) - - C.SomeWindow Proxy winA <- pure $ C.withSuffixWindow (C.windowSize winH) (C.Lbl @AdvLbl) firstAdvSlot - - -- there must be at least one honest active slot in @adv@ - case BV.findIthEmptyInV S.inverted (C.sliceV winA vH) (C.Count 0) of - BV.NothingFound -> Exn.throwError NoSuchCompetitor - BV.JustFound{} -> pure () - - pure $ SomeCheckedAdversarialRecipe Proxy $ UnsafeCheckedAdversarialRecipe { - carHonest = arHonest - , - carParams = arParams - , - carWin = winA - } - where - AdversarialRecipe { - arHonest - , - arParams - , - arPrefix - } = recipe - - (Kcp k, _scg, _delta) = arParams - - ChainSchema winH vH = arHonest + when (0 == k) $ Exn.throwError NoSuchAdversarialBlock + + -- validate 'arPrefix' + firstAdvSlot <- case compare arPrefix 0 of + LT -> Exn.throwError NoSuchIntersection + EQ -> pure $ C.Count 0 + GT -> case BV.findIthEmptyInV S.inverted vH $ C.toIndex $ arPrefix - 1 of + BV.NothingFound -> Exn.throwError NoSuchIntersection + BV.JustFound x -> do + when (x == C.lastIndex (C.windowSize winH)) $ Exn.throwError NoSuchAdversarialBlock + pure (x C.+ 1) + + C.SomeWindow Proxy winA <- + pure $ C.withSuffixWindow (C.windowSize winH) (C.Lbl @AdvLbl) firstAdvSlot + + -- there must be at least one honest active slot in @adv@ + case 
BV.findIthEmptyInV S.inverted (C.sliceV winA vH) (C.Count 0) of + BV.NothingFound -> Exn.throwError NoSuchCompetitor + BV.JustFound{} -> pure () + + pure $ + SomeCheckedAdversarialRecipe Proxy $ + UnsafeCheckedAdversarialRecipe + { carHonest = arHonest + , carParams = arParams + , carWin = winA + } + where + AdversarialRecipe + { arHonest + , arParams + , arPrefix + } = recipe + + (Kcp k, _scg, _delta) = arParams + + ChainSchema winH vH = arHonest ----- @@ -465,296 +453,301 @@ data TouchableLbl -- after the intersection. uniformAdversarialChain :: forall g base hon adv. - R.RandomGen g - => Maybe Asc -- ^ 'Nothing' means @1@ - -> CheckedAdversarialRecipe base hon adv - -> g - -> ChainSchema base adv -{-# INLINABLE uniformAdversarialChain #-} + R.RandomGen g => + -- | 'Nothing' means @1@ + Maybe Asc -> + CheckedAdversarialRecipe base hon adv -> + g -> + ChainSchema base adv +{-# INLINEABLE uniformAdversarialChain #-} uniformAdversarialChain mbAsc recipe g0 = wrap $ C.createV $ do - g <- R.newSTGenM g0 - - let sz = C.windowSize carWin :: C.Size adv SlotE - - -- randomly initialize the bitstring - mv <- C.replicateMV sz $ case mbAsc of - Nothing -> pure $ S.mkActive S.notInverted - Just asc -> S.genS asc `R.applySTGen` g - - -- ensure the adversarial leader schedule is not empty - do -- Since the first active slot in the adversarial chain might determine - -- the position of the acceleration bound, we ensure it is early enough - -- so we can always fit k+1 blocks in the alternative schema. - -- - -- There will be at least k unstable slots at the end, plus one more - -- active slot a stability window earlier. - -- See Note [Minimum schema length] in "Test.Ouroboros.Consensus.ChainGenerator.Honest" - -- for the rationale. 
- let trailingSlots = s + k - szFirstActive = sz C.- trailingSlots - when (szFirstActive <= C.Count 0) $ - error "the adversarial schema length should be greater than s+k" - void $ BV.fillInWindow - S.notInverted - (BV.SomeDensityWindow (C.Count 1) szFirstActive) - g - (C.sliceMV (C.UnsafeContains (C.Count 0) szFirstActive) mv) - - -- find the slot of the k+1 honest block - let kPlus1st :: C.Index adv SlotE - kPlus1st = case BV.findIthEmptyInV S.inverted (C.sliceV carWin vH) (C.Count k) of - BV.NothingFound -> maybe (error "dead code") id $ C.toWindow carWin $ C.windowLast carWin - BV.JustFound x -> x - - - -- ensure the adversarial leader schedule does not win any of the races for - -- which the honest Race Window fits within the Stability Window anchored at - -- the first adversarial active slot + g <- R.newSTGenM g0 + + let sz = C.windowSize carWin :: C.Size adv SlotE + + -- randomly initialize the bitstring + mv <- C.replicateMV sz $ case mbAsc of + Nothing -> pure $ S.mkActive S.notInverted + Just asc -> S.genS asc `R.applySTGen` g + + -- ensure the adversarial leader schedule is not empty + do + -- Since the first active slot in the adversarial chain might determine + -- the position of the acceleration bound, we ensure it is early enough + -- so we can always fit k+1 blocks in the alternative schema. -- - -- TODO Why is it ok to skip early honest races, some of which overlap with - -- @adv@? Is it because not having >k in some prefix [0, n] of adv ensures - -- you can't have >k in the interval [0, C.frWindow adv n] either? - let iterH :: RI.Race adv - iterH = - fromMaybe (error "there should be k+1 active slots after the intersection") - $ RI.init kcp vHAfterIntersection - - -- We don't want to change the first active slot in the adversarial chain - -- Otherwise, we can't predict the position of the acceleration bound. 
- firstActive <- BV.findIthEmptyInMV S.inverted mv (C.Count 0) >>= \case + -- There will be at least k unstable slots at the end, plus one more + -- active slot a stability window earlier. + -- See Note [Minimum schema length] in "Test.Ouroboros.Consensus.ChainGenerator.Honest" + -- for the rationale. + let trailingSlots = s + k + szFirstActive = sz C.- trailingSlots + when (szFirstActive <= C.Count 0) $ + error "the adversarial schema length should be greater than s+k" + void $ + BV.fillInWindow + S.notInverted + (BV.SomeDensityWindow (C.Count 1) szFirstActive) + g + (C.sliceMV (C.UnsafeContains (C.Count 0) szFirstActive) mv) + + -- find the slot of the k+1 honest block + let kPlus1st :: C.Index adv SlotE + kPlus1st = case BV.findIthEmptyInV S.inverted (C.sliceV carWin vH) (C.Count k) of + BV.NothingFound -> maybe (error "dead code") id $ C.toWindow carWin $ C.windowLast carWin + BV.JustFound x -> x + + -- ensure the adversarial leader schedule does not win any of the races for + -- which the honest Race Window fits within the Stability Window anchored at + -- the first adversarial active slot + -- + -- TODO Why is it ok to skip early honest races, some of which overlap with + -- @adv@? Is it because not having >k in some prefix [0, n] of adv ensures + -- you can't have >k in the interval [0, C.frWindow adv n] either? + let iterH :: RI.Race adv + iterH = + fromMaybe (error "there should be k+1 active slots after the intersection") $ + RI.init kcp vHAfterIntersection + + -- We don't want to change the first active slot in the adversarial chain + -- Otherwise, we can't predict the position of the acceleration bound. 
+ firstActive <- + BV.findIthEmptyInMV S.inverted mv (C.Count 0) >>= \case BV.NothingFound -> error "the adversarial schema is empty" - BV.JustFound x -> pure x - - ensureLowerDensityInWindows firstActive iterH g mv - - -- While densities are lower in the adversarial schema, the adversarial - -- schema could still win races to the k+1st block by less than 1+delta - -- slots. Therefore, we call @unfillRaces@ to deactivate further slots. - unfillRaces kPlus1st (firstActive C.+ 1) UnknownYS iterH g mv - - -- Fill active slots after the stability window to ensure the alternative - -- schema has more than k active slots - let trailingSlots = C.getCount sz - k - forM_ [trailingSlots .. C.getCount sz - 1] $ \i -> - BV.setMV S.notInverted mv (C.Count i) - - pure mv - where - UnsafeCheckedAdversarialRecipe { - carHonest - , - carParams = (kcp, scg, delta) - , - carWin - } = recipe - - wrap v = ChainSchema (C.joinWin winH carWin) v - - Kcp k = kcp - Scg s = scg - Delta d = delta - - ChainSchema winH vH = carHonest - - vHAfterIntersection = C.sliceV carWin vH - - -- ensure the adversary loses this 'RI.Race' and each subsequent race that ends before it can accelerate - unfillRaces kPlus1st !scope !mbYS !iter !g !mv = when (withinYS delta mbYS iter) $ do - C.SomeWindow Proxy rwin <- pure $ let RI.Race x = iter in x - - C.SomeWindow (Proxy :: Proxy skolem) win <- - pure - $ C.withWindowBetween - (C.windowSize carWin) - (C.Lbl @RaceAssumptionLbl) - (C.windowStart rwin) - (C.windowLast rwin C.+ d) -- rwin ends in a block, so if d=0 - -- then the slot after that block - -- is unconstrained; hence no +1 - - -- INVARIANT: @win@ contains @scope@ - let _ = scope :: C.Index adv SlotE - - -- remove adversarial active slots as needed - -- - -- But only remove them from /after/ @scope@ (ie do not remove them - -- from slots in the previous Race Window). 
- do untouchZeroCount :: C.Var adv EmptySlotE <- do - C.SomeWindow Proxy untouch <- -- untouchable slots - pure - $ C.withWindowBetween - (C.windowSize carWin) - (C.Lbl @UntouchableLbl) - (C.windowStart rwin) - (scope C.- 1) -- window will be empty if scope is 0 - C.fromWindowVar untouch <$> BV.countActivesInMV S.inverted (C.sliceMV untouch mv) - - C.SomeWindow (Proxy :: Proxy skolem2) touch <- -- touchable slots - pure - $ C.withWindowBetween - (C.windowSize carWin) - (C.Lbl @TouchableLbl) - scope - (C.windowLast rwin C.+ d) - - - let - maxActive :: C.Var (C.Win RaceAssumptionLbl skolem) ActiveSlotE - maxActive = C.Count k - - -- at most k can be active in this race window, so at least size - k must be empty - minEmpty :: C.Var (C.Win RaceAssumptionLbl skolem) EmptySlotE - minEmpty = S.complementActive S.notInverted (C.windowSize win) maxActive - - -- Discount that basic requirement by the number of zeros already - -- in the untouchable portion of this race window. - let touchableEmpty = - max 0 - $ C.fromWindowVar win minEmpty - untouchZeroCount - :: C.Var adv EmptySlotE - - void $ BV.fillInWindow - S.inverted - (BV.SomeDensityWindow (C.toWindowVar touch touchableEmpty) (C.windowSize touch)) - g - (C.sliceMV touch mv) - - case RI.next vHAfterIntersection iter - <|> RI.nextConservative vHAfterIntersection iter of - Nothing -> pure () -- there are no remaining honest active slots - - Just iter' -> do - C.SomeWindow Proxy rwin' <- pure $ let RI.Race x = iter' in x - mbYS' <- case mbYS of - KnownYS{} -> pure mbYS - UnknownYS -> do - -- check whether the slots that are settled as of just - -- now contain the first adversarial active slot - C.SomeWindow Proxy settledSlots <- - pure - $ C.withWindowBetween - (C.windowSize carWin) - (C.Lbl @SettledLbl) - (C.windowStart rwin) - (C.windowStart rwin' C.- 1) - mbFound <- BV.findIthEmptyInMV S.inverted (C.sliceMV settledSlots mv) (C.Count 0) - case mbFound of - BV.NothingFound -> pure UnknownYS - BV.JustFound x -> - -- x is 
the first settled adversarial slot, so - -- the adversary can accelerate its growth as - -- of x+s+1 (If s were 0, it could accelerate - -- in the very next slot, thus the plus 1.) - pure $! KnownYS $! - max - (kPlus1st C.+ d C.+ 1) - (C.fromWindow settledSlots x C.+ s C.+ 1) - unfillRaces kPlus1st (max scope (C.windowLast win C.+ 1)) mbYS' iter' g mv - - -- | Ensure the density of the adversarial schema is less than the density - -- of the honest schema in the first stability window after the intersection - -- and in any prefix window that contains the first race to the k+1st block. - -- - -- Ensuring lower density of the prefix windows is necessary to avoid chains like - -- - -- > k: 3 - -- > s: 9 - -- > H: 0111100 - -- > A: 1110011 - -- - -- where the honest chain wins the race to the k+1st block, might win the - -- density comparison if the chains are extended to include a full stability - -- window after the intersection, but loses the density comparison if the - -- chains aren't extended. + BV.JustFound x -> pure x + + ensureLowerDensityInWindows firstActive iterH g mv + + -- While densities are lower in the adversarial schema, the adversarial + -- schema could still win races to the k+1st block by less than 1+delta + -- slots. Therefore, we call @unfillRaces@ to deactivate further slots. + unfillRaces kPlus1st (firstActive C.+ 1) UnknownYS iterH g mv + + -- Fill active slots after the stability window to ensure the alternative + -- schema has more than k active slots + let trailingSlots = C.getCount sz - k + forM_ [trailingSlots .. 
C.getCount sz - 1] $ \i -> + BV.setMV S.notInverted mv (C.Count i) + + pure mv + where + UnsafeCheckedAdversarialRecipe + { carHonest + , carParams = (kcp, scg, delta) + , carWin + } = recipe + + wrap v = ChainSchema (C.joinWin winH carWin) v + + Kcp k = kcp + Scg s = scg + Delta d = delta + + ChainSchema winH vH = carHonest + + vHAfterIntersection = C.sliceV carWin vH + + -- ensure the adversary loses this 'RI.Race' and each subsequent race that ends before it can accelerate + unfillRaces kPlus1st !scope !mbYS !iter !g !mv = when (withinYS delta mbYS iter) $ do + C.SomeWindow Proxy rwin <- pure $ let RI.Race x = iter in x + + C.SomeWindow (Proxy :: Proxy skolem) win <- + pure $ + C.withWindowBetween + (C.windowSize carWin) + (C.Lbl @RaceAssumptionLbl) + (C.windowStart rwin) + (C.windowLast rwin C.+ d) -- rwin ends in a block, so if d=0 + -- then the slot after that block + -- is unconstrained; hence no +1 + + -- INVARIANT: @win@ contains @scope@ + let _ = scope :: C.Index adv SlotE + + -- remove adversarial active slots as needed -- - -- For the sake of shrinking test inputs, we also prevent the above scenario - -- to occur in any intersection, not just the intersections near the end of - -- the chains. + -- But only remove them from /after/ @scope@ (ie do not remove them + -- from slots in the previous Race Window). 
+ do + untouchZeroCount :: C.Var adv EmptySlotE <- do + C.SomeWindow Proxy untouch <- -- untouchable slots + pure $ + C.withWindowBetween + (C.windowSize carWin) + (C.Lbl @UntouchableLbl) + (C.windowStart rwin) + (scope C.- 1) -- window will be empty if scope is 0 + C.fromWindowVar untouch <$> BV.countActivesInMV S.inverted (C.sliceMV untouch mv) + + C.SomeWindow (Proxy :: Proxy skolem2) touch <- -- touchable slots + pure $ + C.withWindowBetween + (C.windowSize carWin) + (C.Lbl @TouchableLbl) + scope + (C.windowLast rwin C.+ d) + + let + maxActive :: C.Var (C.Win RaceAssumptionLbl skolem) ActiveSlotE + maxActive = C.Count k + + -- at most k can be active in this race window, so at least size - k must be empty + minEmpty :: C.Var (C.Win RaceAssumptionLbl skolem) EmptySlotE + minEmpty = S.complementActive S.notInverted (C.windowSize win) maxActive + + -- Discount that basic requirement by the number of zeros already + -- in the untouchable portion of this race window. + let touchableEmpty = + max 0 $ + C.fromWindowVar win minEmpty - untouchZeroCount :: + C.Var adv EmptySlotE + + void $ + BV.fillInWindow + S.inverted + (BV.SomeDensityWindow (C.toWindowVar touch touchableEmpty) (C.windowSize touch)) + g + (C.sliceMV touch mv) + + case RI.next vHAfterIntersection iter + <|> RI.nextConservative vHAfterIntersection iter of + Nothing -> pure () -- there are no remaining honest active slots + Just iter' -> do + C.SomeWindow Proxy rwin' <- pure $ let RI.Race x = iter' in x + mbYS' <- case mbYS of + KnownYS{} -> pure mbYS + UnknownYS -> do + -- check whether the slots that are settled as of just + -- now contain the first adversarial active slot + C.SomeWindow Proxy settledSlots <- + pure $ + C.withWindowBetween + (C.windowSize carWin) + (C.Lbl @SettledLbl) + (C.windowStart rwin) + (C.windowStart rwin' C.- 1) + mbFound <- BV.findIthEmptyInMV S.inverted (C.sliceMV settledSlots mv) (C.Count 0) + case mbFound of + BV.NothingFound -> pure UnknownYS + BV.JustFound x -> + -- x is 
the first settled adversarial slot, so + -- the adversary can accelerate its growth as + -- of x+s+1 (If s were 0, it could accelerate + -- in the very next slot, thus the plus 1.) + pure $! + KnownYS $! + max + (kPlus1st C.+ d C.+ 1) + (C.fromWindow settledSlots x C.+ s C.+ 1) + unfillRaces kPlus1st (max scope (C.windowLast win C.+ 1)) mbYS' iter' g mv + + -- \| Ensure the density of the adversarial schema is less than the density + -- of the honest schema in the first stability window after the intersection + -- and in any prefix window that contains the first race to the k+1st block. + -- + -- Ensuring lower density of the prefix windows is necessary to avoid chains like + -- + -- > k: 3 + -- > s: 9 + -- > H: 0111100 + -- > A: 1110011 + -- + -- where the honest chain wins the race to the k+1st block, might win the + -- density comparison if the chains are extended to include a full stability + -- window after the intersection, but loses the density comparison if the + -- chains aren't extended. + -- + -- For the sake of shrinking test inputs, we also prevent the above scenario + -- to occur in any intersection, not just the intersections near the end of + -- the chains. + ensureLowerDensityInWindows :: + R.StatefulGen sg (ST s) => + C.Index adv SlotE -> + RI.Race adv -> + sg -> + C.MVector adv SlotE s S.S -> + ST s () + ensureLowerDensityInWindows firstActiveSlot (RI.Race (C.SomeWindow _ w0)) g mv = do + let + -- A window after the intersection as short as the shortest of the + -- stability window or the first race to the k+1st block. 
+ w0' = C.truncateWin w0 (C.Count s) + hCount = + C.toVar $ + BV.countActivesInV S.notInverted (C.sliceV w0' vHAfterIntersection) + + aCount <- ensureLowerDensityInWindow firstActiveSlot w0' g mv hCount + + void $ + foldM + updateDensityOfMv + (hCount, aCount) + (stablePrefixWindowsContaining w0') + where + -- Yields all windows of the adversarial schema with proper prefix @w@ + -- that are prefixes of the first stability window after the + -- intersection. + stablePrefixWindowsContaining w = + let + start = C.windowStart w + size = C.getCount (C.windowSize w) + end = s `min` C.getCount (C.lengthMV mv) + in + [C.UnsafeContains start (C.Count size') | size' <- [size + 1 .. end]] + + -- Updates mv to ensure the density of the adversarial schema is lower + -- than the density of the honest schema in @increaseSizeW w@. -- - ensureLowerDensityInWindows - :: R.StatefulGen sg (ST s) - => C.Index adv SlotE - -> RI.Race adv - -> sg - -> C.MVector adv SlotE s S.S - -> ST s () - ensureLowerDensityInWindows firstActiveSlot (RI.Race (C.SomeWindow _ w0)) g mv = do - let - -- A window after the intersection as short as the shortest of the - -- stability window or the first race to the k+1st block. - w0' = C.truncateWin w0 (C.Count s) - hCount = C.toVar $ - BV.countActivesInV S.notInverted (C.sliceV w0' vHAfterIntersection) - - aCount <- ensureLowerDensityInWindow firstActiveSlot w0' g mv hCount - - void $ foldM - updateDensityOfMv - (hCount, aCount) - (stablePrefixWindowsContaining w0') - - where - -- Yields all windows of the adversarial schema with proper prefix @w@ - -- that are prefixes of the first stability window after the - -- intersection. - stablePrefixWindowsContaining w = - let - start = C.windowStart w - size = C.getCount (C.windowSize w) - end = s `min` C.getCount (C.lengthMV mv) - in - [ C.UnsafeContains start (C.Count size') | size' <- [ size+1 .. 
end ] ] - - -- Updates mv to ensure the density of the adversarial schema is lower - -- than the density of the honest schema in @increaseSizeW w@. - -- - -- @hc@ is the number of active slots in the honest schema in @w@ minus - -- the last slot - -- - -- @ac@ is the number of active slots in the adversarial schema in @w@ - -- minus the last slot - updateDensityOfMv (hc, ac) w = do - sA <- BV.testMV S.notInverted mv (C.windowLast w) - - let - ac' = if sA then ac C.+ 1 else ac - sH = BV.testV S.notInverted vHAfterIntersection (C.windowLast w) - hc' = if sH then hc C.+ 1 else hc - - ac'' <- - if ac' >= hc' then - ensureLowerDensityInWindow firstActiveSlot w g mv hc' - else - pure ac' - - return (hc', ac'') - - -- | Ensure the density of the adversarial schema is less than the density - -- of the honest schema in the given window. + -- @hc@ is the number of active slots in the honest schema in @w@ minus + -- the last slot -- - -- @hCount@ is the number of active slots in the honest schema in the - -- given window. 
- ensureLowerDensityInWindow firstActiveSlot w g mv hCount = do - let emptyCountTarget = C.toVar $ S.complementActive S.notInverted (C.windowSize w) hCount C.+ 1 - - emptyCount <- fillInWindowSkippingFirstActiveSlot - firstActiveSlot - emptyCountTarget - w - g - mv - - pure $ C.toVar $ S.complementActive S.inverted (C.windowSize w) emptyCount - - fillInWindowSkippingFirstActiveSlot firstActiveSlot emptyCountTarget w g mv - | C.getCount (C.windowSize w) <= C.getCount firstActiveSlot = pure (C.Count 0) - | otherwise = do + -- @ac@ is the number of active slots in the adversarial schema in @w@ + -- minus the last slot + updateDensityOfMv (hc, ac) w = do + sA <- BV.testMV S.notInverted mv (C.windowLast w) + + let + ac' = if sA then ac C.+ 1 else ac + sH = BV.testV S.notInverted vHAfterIntersection (C.windowLast w) + hc' = if sH then hc C.+ 1 else hc + + ac'' <- + if ac' >= hc' + then + ensureLowerDensityInWindow firstActiveSlot w g mv hc' + else + pure ac' + + return (hc', ac'') + + -- \| Ensure the density of the adversarial schema is less than the density + -- of the honest schema in the given window. + -- + -- @hCount@ is the number of active slots in the honest schema in the + -- given window. 
+ ensureLowerDensityInWindow firstActiveSlot w g mv hCount = do + let emptyCountTarget = C.toVar $ S.complementActive S.notInverted (C.windowSize w) hCount C.+ 1 + + emptyCount <- + fillInWindowSkippingFirstActiveSlot + firstActiveSlot + emptyCountTarget + w + g + mv + + pure $ C.toVar $ S.complementActive S.inverted (C.windowSize w) emptyCount + + fillInWindowSkippingFirstActiveSlot firstActiveSlot emptyCountTarget w g mv + | C.getCount (C.windowSize w) <= C.getCount firstActiveSlot = pure (C.Count 0) + | otherwise = do let slot = C.getCount firstActiveSlot emptyCountTarget' = emptyCountTarget C.- slot - w' = C.UnsafeContains - (firstActiveSlot C.+ 1) - (C.windowSize w C.- (slot + 1)) + w' = + C.UnsafeContains + (firstActiveSlot C.+ 1) + (C.windowSize w C.- (slot + 1)) BV.fillInWindow S.inverted @@ -771,8 +764,8 @@ data MaybeYS base = UnknownYS | KnownYS !(C.Index base SlotE) -- | Does the Race Window end in a stable slot? withinYS :: Delta -> MaybeYS base -> RI.Race base -> Bool withinYS (Delta d) !mbYS !(RI.Race (C.SomeWindow Proxy win)) = case mbYS of - KnownYS ys -> C.windowLast win C.+ d < ys - UnknownYS -> True -- Honest Chain Growth ensures every Race Window is at most @'Scg' - 'Delta'@ slots wide + KnownYS ys -> C.windowLast win C.+ d < ys + UnknownYS -> True -- Honest Chain Growth ensures every Race Window is at most @'Scg' - 'Delta'@ slots wide -- | Draw a random active slot count for the prefix of a fork. -- @@ -782,30 +775,31 @@ withinYS (Delta d) !mbYS !(RI.Race (C.SomeWindow Proxy win)) = case mbYS of -- for the rationale of the precondition. 
-- -- PRECONDITION: @schemaSize schedH >= s + d + k + 1@ -genPrefixBlockCount :: R.RandomGen g => HonestRecipe -> g -> ChainSchema base hon -> C.Var hon 'ActiveSlotE +genPrefixBlockCount :: + R.RandomGen g => HonestRecipe -> g -> ChainSchema base hon -> C.Var hon 'ActiveSlotE genPrefixBlockCount (HonestRecipe (Kcp k) (Scg s) (Delta d) _len) g schedH - | C.lengthV vH < C.Count (s + d + k + 1) = - error "size of schema is smaller than s + d + k + 1" - | otherwise = - -- @uniformIndex n@ yields a value in @[0..n-1]@, we add 1 to the - -- argument to account for the possibility of intersecting at the - -- genesis block - C.toVar $ R.runSTGen_ g $ C.uniformIndex (activesInPrefix C.+ 1) - where - ChainSchema _winH vH = schedH - - -- activesInPrefix is the amount of active slots in the honest schema with a - -- suffix of s+d+k+1 slots. - -- - -- In the honest chain, the suffix is sufficiently long to ensure there are - -- k+1 active slots by the Extended Praos Chain Growth Assumption. - -- - -- In the alternative chain, there is enough room to fit k+1 active slots - -- as explained the Note [Minimum schema length] in - -- "Test.Ouroboros.Consensus.ChainGenerator.Honest". - - activesInPrefix = - BV.countActivesInV S.notInverted - $ C.sliceV - (C.UnsafeContains (C.Count 0) $ C.lengthV vH C.- (s + d + k + 1)) - vH + | C.lengthV vH < C.Count (s + d + k + 1) = + error "size of schema is smaller than s + d + k + 1" + | otherwise = + -- @uniformIndex n@ yields a value in @[0..n-1]@, we add 1 to the + -- argument to account for the possibility of intersecting at the + -- genesis block + C.toVar $ R.runSTGen_ g $ C.uniformIndex (activesInPrefix C.+ 1) + where + ChainSchema _winH vH = schedH + + -- activesInPrefix is the amount of active slots in the honest schema with a + -- suffix of s+d+k+1 slots. + -- + -- In the honest chain, the suffix is sufficiently long to ensure there are + -- k+1 active slots by the Extended Praos Chain Growth Assumption. 
+ -- + -- In the alternative chain, there is enough room to fit k+1 active slots + -- as explained in the Note [Minimum schema length] in + -- "Test.Ouroboros.Consensus.ChainGenerator.Honest". + + activesInPrefix = + BV.countActivesInV S.notInverted $ + C.sliceV + (C.UnsafeContains (C.Count 0) $ C.lengthV vH C.- (s + d + k + 1)) + vH diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/BitVector.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/BitVector.hs index 1c14bedec9..feca8ac366 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/BitVector.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/BitVector.hs @@ -9,52 +9,58 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeFamilies #-} -module Test.Ouroboros.Consensus.ChainGenerator.BitVector ( -    -- * Finding +module Test.Ouroboros.Consensus.ChainGenerator.BitVector + ( -- * Finding MaybeFound (JustFound, NothingFound) , findIthActiveInV , findIthEmptyInMV , findIthEmptyInV + -- * Counting , countActivesInMV , countActivesInV + -- * Slots , setMV , testMV , testV + -- * Generating , SomeDensityWindow (SomeDensityWindow) , fillInWindow ) where -import Control.Monad.ST (ST, runST) -import Data.Functor ((<&>)) -import qualified Data.Vector.Unboxed.Mutable as MV -import qualified System.Random.Stateful as R -import qualified Test.Ouroboros.Consensus.ChainGenerator.Counting as C -import qualified Test.Ouroboros.Consensus.ChainGenerator.Slot as S -import Test.Ouroboros.Consensus.ChainGenerator.Slot -    (E (ActiveSlotE, EmptySlotE, SlotE), POL, PreImage, S) -import qualified Test.Ouroboros.Consensus.ChainGenerator.Some as Some +import Control.Monad.ST (ST, runST) +import Data.Functor ((<&>)) +import Data.Vector.Unboxed.Mutable qualified as MV +import System.Random.Stateful qualified as R +import
Test.Ouroboros.Consensus.ChainGenerator.Counting qualified as C +import Test.Ouroboros.Consensus.ChainGenerator.Slot + ( E (ActiveSlotE, EmptySlotE, SlotE) + , POL + , PreImage + , S + ) +import Test.Ouroboros.Consensus.ChainGenerator.Slot qualified as S +import Test.Ouroboros.Consensus.ChainGenerator.Some qualified as Some ----- -data MaybeFound base = - NothingFound - | - JustFound +data MaybeFound base + = NothingFound + | JustFound {-# UNPACK #-} !(C.Index base SlotE) deriving (Eq, Read, Show) -- | Trivial wrapper around 'findIthEmptyInMV' findIthEmptyInV :: - POL pol - => proxy pol - -> C.Vector base SlotE S - -> C.Index base (PreImage pol EmptySlotE) - -> MaybeFound base + POL pol => + proxy pol -> + C.Vector base SlotE S -> + C.Index base (PreImage pol EmptySlotE) -> + MaybeFound base findIthEmptyInV pol v i = - runST $ C.unsafeThawV v >>= \mv -> findIthEmptyInMV pol mv i + runST $ C.unsafeThawV v >>= \mv -> findIthEmptyInMV pol mv i -- | Find the (i+1)st empty slot in a window -- @@ -65,55 +71,57 @@ findIthEmptyInV pol v i = -- > findIthEmptyInMV notInverted 01101 0 == JustFound 0 -- > findIthEmptyInMV notInverted 01101 1 == JustFound 3 -- > findIthEmptyInMV notInverted 01101 2 == NothingFound --- findIthEmptyInMV :: forall proxy pol base s. 
- POL pol - => proxy pol - -> C.MVector base SlotE s S - -> C.Index base (PreImage pol EmptySlotE) - -> ST s (MaybeFound base) + POL pol => + proxy pol -> + C.MVector base SlotE s S -> + C.Index base (PreImage pol EmptySlotE) -> + ST s (MaybeFound base) findIthEmptyInMV pol mv i = - go 0 (C.toVar i) - where - go !j !toSkip = if C.getCount (C.lengthMV mv) <= j then pure NothingFound else do + go 0 (C.toVar i) + where + go !j !toSkip = + if C.getCount (C.lengthMV mv) <= j + then pure NothingFound + else do w <- C.readMV mv (C.Count j) if | S.test pol w -> go (j + 1) toSkip - | 0 == toSkip -> pure $ JustFound (C.Count j) - | otherwise -> go (j + 1) (toSkip - 1) + | 0 == toSkip -> pure $ JustFound (C.Count j) + | otherwise -> go (j + 1) (toSkip - 1) findIthActiveInV :: - C.Vector base SlotE S - -> C.Index base ActiveSlotE - -> MaybeFound base + C.Vector base SlotE S -> + C.Index base ActiveSlotE -> + MaybeFound base findIthActiveInV = - findIthEmptyInV S.inverted + findIthEmptyInV S.inverted ----- -- | Trivial wrapper around 'countActivesInMV' countActivesInV :: - POL pol - => proxy pol - -> C.Vector base SlotE S - -> C.Size base (PreImage pol ActiveSlotE) + POL pol => + proxy pol -> + C.Vector base SlotE S -> + C.Size base (PreImage pol ActiveSlotE) countActivesInV pol v = - C.toSize $ runST $ C.unsafeThawV v >>= \mv -> countActivesInMV pol mv + C.toSize $ runST $ C.unsafeThawV v >>= \mv -> countActivesInMV pol mv -- | The number of active slots in the vector countActivesInMV :: - POL pol - => proxy pol - -> C.MVector base SlotE s S - -> ST s (C.Var base (PreImage pol ActiveSlotE)) + POL pol => + proxy pol -> + C.MVector base SlotE s S -> + ST s (C.Var base (PreImage pol ActiveSlotE)) countActivesInMV pol mv = - MV.foldl' - (\acc w -> if S.test pol w then acc + 1 else acc) - 0 - mv' - where - C.MVector mv' = mv + MV.foldl' + (\acc w -> if S.test pol w then acc + 1 else acc) + 0 + mv' + where + C.MVector mv' = mv ----- @@ -122,29 +130,31 @@ countActivesInMV pol mv = -- 
@pol@ is the polarity to use for the active slots -- -- TODO: rename to SomeDensity -data SomeDensityWindow pol = - forall slidingWindow. +data SomeDensityWindow pol + = forall slidingWindow. SomeDensityWindow - !(C.Var slidingWindow (PreImage pol ActiveSlotE)) -- ^ Numerator: The active slots - !(C.Size slidingWindow SlotE) -- ^ Denominator: The total amount of slots + -- | Numerator: The active slots + !(C.Var slidingWindow (PreImage pol ActiveSlotE)) + -- | Denominator: The total amount of slots + !(C.Size slidingWindow SlotE) instance Eq (SomeDensityWindow pol) where - SomeDensityWindow l1 l2 == SomeDensityWindow r1 r2 = - C.forgetBase l1 == C.forgetBase r1 && C.forgetBase l2 == C.forgetBase r2 + SomeDensityWindow l1 l2 == SomeDensityWindow r1 r2 = + C.forgetBase l1 == C.forgetBase r1 && C.forgetBase l2 == C.forgetBase r2 instance Show (SomeDensityWindow pol) where - showsPrec p (SomeDensityWindow numer denom) = - Some.runShowsPrec p - $ Some.showCtor (SomeDensityWindow @pol) "SomeDensityWindow" - `Some.showArg` numer - `Some.showArg` denom + showsPrec p (SomeDensityWindow numer denom) = + Some.runShowsPrec p $ + Some.showCtor (SomeDensityWindow @pol) "SomeDensityWindow" + `Some.showArg` numer + `Some.showArg` denom instance Read (SomeDensityWindow pol) where - readPrec = - Some.runReadPrec - $ Some.readCtor SomeDensityWindow "SomeDensityWindow" - <*> Some.readArg - <*> Some.readArg + readPrec = + Some.runReadPrec $ + Some.readCtor SomeDensityWindow "SomeDensityWindow" + <*> Some.readArg + <*> Some.readArg -- | @fillInWindow pol (SomeDensityWindow k s) g mv@ mutates @mv@ to ensure -- that the vector @take s $ mv ++ repeat (mkActive pol)@ has at least @k@ @@ -154,55 +164,61 @@ instance Read (SomeDensityWindow pol) where -- -- > lengthMV mv <= s -- > k <= s --- fillInWindow :: forall proxy pol base g s. 
- (POL pol, R.StatefulGen g (ST s)) - => proxy pol - -> SomeDensityWindow pol - -> g - -> C.MVector base SlotE s S - -> ST s (C.Var base (PreImage pol ActiveSlotE)) -- ^ the count after filling + (POL pol, R.StatefulGen g (ST s)) => + proxy pol -> + SomeDensityWindow pol -> + g -> + C.MVector base SlotE s S -> + -- | the count after filling + ST s (C.Var base (PreImage pol ActiveSlotE)) fillInWindow pol (SomeDensityWindow k s) g mv - | not (C.getCount k <= C.getCount s) = - error $ "fillInWindow: assertion failure: k <= s: " - ++ show k ++ " <= " ++ show s - | not (C.getCount sz <= C.getCount s) = - error $ "fillInWindow: assertion failure: sz <= s: " - ++ show sz ++ " <= " ++ show s - | otherwise = do - -- how many active polarized slots @actual@ currently has - initialActives <- countActivesInMV pol mv - - - -- discount the numerator accordingly if @mv@ is smaller than @s@ - -- - -- EG when a full-size @mv@ would reach past the 'Len'. - -- - -- This discount reflects that we (very conservatively!) assume every - -- truncated slot would be an active polarized slot. 
- let discountedK :: C.Var base (PreImage pol ActiveSlotE) - discountedK = C.Count $ C.getCount k - (C.getCount s - C.getCount sz) - - -- how many active polarized slots need to be added to @mv@ - let adding = max 0 $ C.toVar discountedK - initialActives :: C.Var base (PreImage pol ActiveSlotE) - - -- draw from the empty polarized slots uniformly without replacement, a la Fisher-Yates shuffle - C.forRange_ (C.toSize adding) $ \alreadyAdded -> do + | not (C.getCount k <= C.getCount s) = + error $ + "fillInWindow: assertion failure: k <= s: " + ++ show k + ++ " <= " + ++ show s + | not (C.getCount sz <= C.getCount s) = + error $ + "fillInWindow: assertion failure: sz <= s: " + ++ show sz + ++ " <= " + ++ show s + | otherwise = do + -- how many active polarized slots @actual@ currently has + initialActives <- countActivesInMV pol mv + + -- discount the numerator accordingly if @mv@ is smaller than @s@ + -- + -- EG when a full-size @mv@ would reach past the 'Len'. + -- + -- This discount reflects that we (very conservatively!) assume every + -- truncated slot would be an active polarized slot. + let discountedK :: C.Var base (PreImage pol ActiveSlotE) + discountedK = C.Count $ C.getCount k - (C.getCount s - C.getCount sz) + + -- how many active polarized slots need to be added to @mv@ + let adding = max 0 $ C.toVar discountedK - initialActives :: C.Var base (PreImage pol ActiveSlotE) + + -- draw from the empty polarized slots uniformly without replacement, a la Fisher-Yates shuffle + C.forRange_ (C.toSize adding) $ \alreadyAdded -> do let currentActives = C.toSize $ initialActives + C.toVar alreadyAdded currentEmpties = S.complementActive pol sz currentActives whichEmptyToFlip <- C.uniformIndex currentEmpties g - slot <- findIthEmptyInMV pol mv whichEmptyToFlip <&> \case - JustFound i -> i + slot <- + findIthEmptyInMV pol mv whichEmptyToFlip <&> \case + JustFound i -> i NothingFound -> error "impossible! 
fillInWindow" setMV pol mv slot - pure $ initialActives + adding - where - sz = C.lengthMV mv :: C.Size base SlotE + pure $ initialActives + adding + where + sz = C.lengthMV mv :: C.Size base SlotE ----- @@ -211,8 +227,8 @@ testV pol mv i = S.test pol (C.readV mv i) testMV :: POL pol => proxy pol -> C.MVector base SlotE s S -> C.Index base SlotE -> ST s Bool testMV pol mv i = do - w <- C.readMV mv i - pure $ S.test pol w + w <- C.readMV mv i + pure $ S.test pol w setMV :: POL pol => proxy pol -> C.MVector base SlotE s S -> C.Index base SlotE -> ST s () setMV pol mv i = C.writeMV mv i $ S.mkActive pol diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Counting.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Counting.hs index 9b5bbe0eec..e8bbf895b1 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Counting.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Counting.hs @@ -12,14 +12,15 @@ {-# LANGUAGE TypeOperators #-} -- | Very strong types for working with indices, counts, etc within sequences. 
-module Test.Ouroboros.Consensus.ChainGenerator.Counting ( - -- * general counts +module Test.Ouroboros.Consensus.ChainGenerator.Counting + ( -- * general counts Count (Count) , forgetBase , forgetElem , getCount , (+) , (-) + -- * indices and sizes , Index , Preds @@ -29,6 +30,7 @@ module Test.Ouroboros.Consensus.ChainGenerator.Counting ( , lastIndex , range , uniformIndex + -- * windows , Contains (Contains, UnsafeContains) , Lbl (Lbl) @@ -48,6 +50,7 @@ module Test.Ouroboros.Consensus.ChainGenerator.Counting ( , withTopWindow , withWindow , withWindowBetween + -- * vectors , MVector (MVector) , Vector (Vector) @@ -64,6 +67,7 @@ module Test.Ouroboros.Consensus.ChainGenerator.Counting ( , sliceV , unsafeThawV , writeMV + -- * variables , Other , Var @@ -73,20 +77,20 @@ module Test.Ouroboros.Consensus.ChainGenerator.Counting ( , toVar ) where -import Control.Monad.ST (ST) -import Data.Coerce (coerce) -import Data.Foldable (for_) -import Data.Kind (Type) -import Data.Proxy (Proxy (Proxy)) -import qualified Data.Type.Equality as TypeEq -import qualified Data.Vector.Unboxed as V -import qualified Data.Vector.Unboxed.Mutable as MV -import GHC.OverloadedLabels (IsLabel (fromLabel)) -import Prelude hiding ((+), (-)) -import qualified Prelude -import qualified System.Random.Stateful as R -import qualified Test.Ouroboros.Consensus.ChainGenerator.Some as Some -import qualified Test.QuickCheck as QC +import Control.Monad.ST (ST) +import Data.Coerce (coerce) +import Data.Foldable (for_) +import Data.Kind (Type) +import Data.Proxy (Proxy (Proxy)) +import Data.Type.Equality qualified as TypeEq +import Data.Vector.Unboxed qualified as V +import Data.Vector.Unboxed.Mutable qualified as MV +import GHC.OverloadedLabels (IsLabel (fromLabel)) +import System.Random.Stateful qualified as R +import Test.Ouroboros.Consensus.ChainGenerator.Some qualified as Some +import Test.QuickCheck qualified as QC +import Prelude hiding ((+), (-)) +import Prelude qualified ----- @@ -138,7 +142,7 @@ 
data Preds data Total type Index base elem = Count base elem Preds -type Size base elem = Count base elem Total +type Size base elem = Count base elem Total -- | The 'Index' of the rightmost element in the sequence of the given 'Size' lastIndex :: Size base elem -> Index base elem @@ -158,7 +162,8 @@ uniformIndex n g = Count <$> R.uniformRM (0, getCount $ lastIndex n) g -- | A human-readable label for a 'Win' type Lbl :: forall {k}. k -> Type data Lbl lbl = Lbl -instance (lbl TypeEq.~~ s) => IsLabel s (Lbl lbl) where fromLabel = Lbl + +instance lbl TypeEq.~~ s => IsLabel s (Lbl lbl) where fromLabel = Lbl -- | A type-level name for a window within some containing sequence -- @@ -182,13 +187,15 @@ data Win (lbl :: klbl) (skolem :: Type) -- than its type name. -- -- TODO: rename Contains to Window -data Contains (elem :: kelem) (outer :: Type) (inner :: Type) = - UnsafeContains - !(Index outer elem) -- ^ index of the start of the window as - -- an offset in the containing sequence. - !(Size inner elem) -- ^ size of the window - -- INVARIANT: does not reach past the end of the containing - -- sequence (whatever that end is) +data Contains (elem :: kelem) (outer :: Type) (inner :: Type) + = UnsafeContains + -- | index of the start of the window as + -- an offset in the containing sequence. + !(Index outer elem) + -- | size of the window + -- INVARIANT: does not reach past the end of the containing + -- sequence (whatever that end is) + !(Size inner elem) deriving (Eq, Read, Show) pattern Contains :: Index outer elem -> Size inner elem -> Contains elem outer inner @@ -232,29 +239,29 @@ joinWin :: Contains elem outer mid -> Contains elem mid inner -> Contains elem o {-# INLINE joinWin #-} joinWin win win2 = UnsafeContains (fromWindow win $ windowStart win2) (windowSize win2) -data SomeWindow (lbl :: klbl) (outer :: Type) (elem :: kelem) = - forall (skolem :: Type). +data SomeWindow (lbl :: klbl) (outer :: Type) (elem :: kelem) + = forall (skolem :: Type). 
SomeWindow - !(Proxy skolem) - !(Contains elem outer (Win lbl skolem)) + !(Proxy skolem) + !(Contains elem outer (Win lbl skolem)) instance Eq (SomeWindow lbl outer elem) where - SomeWindow _l1 l2 == SomeWindow _r1 r2 = - forgetWindow l2 == forgetWindow r2 + SomeWindow _l1 l2 == SomeWindow _r1 r2 = + forgetWindow l2 == forgetWindow r2 instance Show (SomeWindow lbl outer elem) where - showsPrec p (SomeWindow prx win) = - Some.runShowsPrec p - $ Some.showCtor SomeWindow "SomeWindow" - `Some.showArg` prx - `Some.showArg` win + showsPrec p (SomeWindow prx win) = + Some.runShowsPrec p $ + Some.showCtor SomeWindow "SomeWindow" + `Some.showArg` prx + `Some.showArg` win instance Read (SomeWindow lbl outer elem) where - readPrec = - Some.runReadPrec - $ Some.readCtor SomeWindow "SomeWindow" - <*> Some.readArg - <*> Some.readArg + readPrec = + Some.runReadPrec $ + Some.readCtor SomeWindow "SomeWindow" + <*> Some.readArg + <*> Some.readArg -- | @withWindow outerSz lbl offset innerSz@ is a window of length @innerSz@ -- with name @lbl@ starting at @offset@ in a sequence with length @outerSz@. @@ -265,22 +272,23 @@ instance Read (SomeWindow lbl outer elem) where -- Note that the window can spill either on the right if @i + innerSz > outerSz@, -- or it can spill on the left if @i < 0@, or it can spill on both sides -- simultaneously. 
--- -withWindow :: Size outer elem -> Lbl lbl -> Index outer elem -> Size x elem -> SomeWindow lbl outer elem +withWindow :: + Size outer elem -> Lbl lbl -> Index outer elem -> Size x elem -> SomeWindow lbl outer elem withWindow (Count n) _lbl (Count i) (Count m) = - SomeWindow Proxy $ UnsafeContains (Count i') (Count m') - where - i' = min n (max 0 i) + SomeWindow Proxy $ UnsafeContains (Count i') (Count m') + where + i' = min n (max 0 i) - -- we compute the elements that fall outside the containing sequence - precedingElements = i' .- i - trailingElements = max 0 $ i .+ m .- n + -- we compute the elements that fall outside the containing sequence + precedingElements = i' .- i + trailingElements = max 0 $ i .+ m .- n - m' = max 0 $ m .- precedingElements .- trailingElements + m' = max 0 $ m .- precedingElements .- trailingElements -- | @withWindowBetween outerSz lbl i j@ is the window between indices @i@ -- and @j@ with name @lbl@ in a containing sequence of length @outerSz@. -withWindowBetween :: Size outer elem -> Lbl lbl -> Index outer elem -> Index outer elem -> SomeWindow lbl outer elem +withWindowBetween :: + Size outer elem -> Lbl lbl -> Index outer elem -> Index outer elem -> SomeWindow lbl outer elem withWindowBetween n lbl (Count i) (Count j) = withWindow n lbl (Count i) (Count $ j .- i .+ 1) -- | @withSuffixWindow outerSz lbl i@ is the window between indices @i@ and the @@ -291,12 +299,12 @@ withSuffixWindow n lbl i = withWindow n lbl i (Count $ getCount n .- getCount i) -- | @withTopWindow lbl sz k@ passes to @k@ a window of size @sz@ with name -- @lbl@ at offset @0@ of some containing sequence with a unique name @base@. withTopWindow :: - Lbl lbl - -> Int - -> (forall base. Proxy base -> SomeWindow lbl base elem -> ans) - -> ans + Lbl lbl -> + Int -> + (forall base. 
Proxy base -> SomeWindow lbl base elem -> ans) -> + ans withTopWindow _lbl n k = - k Proxy $ SomeWindow Proxy $ UnsafeContains (Count 0) (Count n) + k Proxy $ SomeWindow Proxy $ UnsafeContains (Count 0) (Count n) ----- @@ -305,8 +313,8 @@ newtype Vector base elem a = Vector (V.Vector a) deriving (Eq, Read, Show) instance (QC.Arbitrary a, V.Unbox a) => QC.Arbitrary (Vector base elem a) where - arbitrary = (Vector . V.fromList) <$> QC.arbitrary - shrink = map (Vector . V.fromList) . QC.shrink . V.toList . getVector + arbitrary = (Vector . V.fromList) <$> QC.arbitrary + shrink = map (Vector . V.fromList) . QC.shrink . V.toList . getVector getVector :: Vector base elem a -> V.Vector a getVector (Vector v) = v @@ -317,10 +325,10 @@ lengthV = Count . V.length . getVector sliceV :: MV.Unbox a => Contains elem outer inner -> Vector outer elem a -> Vector inner elem a {-# INLINE sliceV #-} sliceV win (Vector v) = - Vector $ V.slice i n v - where - Count i = fromWindow win (Count 0) - Count n = windowSize win + Vector $ V.slice i n v + where + Count i = fromWindow win (Count 0) + Count n = windowSize win unsafeThawV :: MV.Unbox a => Vector base elem a -> ST s (MVector base elem s a) unsafeThawV (Vector v) = MVector <$> V.unsafeThaw v @@ -332,7 +340,6 @@ createV m = Vector $ V.create (getMVector <$> m) -- -- * @base@ is a type-level name identifying the container (e.g. @Win (Lbl HonestLbl) skolem1@) -- * @elem@ is a type-level name of the elements in the container (e.g. 'Test.Ouroboros.Consensus.ChainGenerator.Slot.SlotE') --- newtype MVector base elem s a = MVector (MV.MVector s a) getMVector :: MVector base elem s a -> MV.MVector s a @@ -341,24 +348,24 @@ getMVector (MVector mv) = mv lengthMV :: MV.Unbox a => MVector base elem s a -> Size base elem lengthMV = Count . MV.length . 
getMVector -sliceMV :: MV.Unbox a => Contains elem outer inner -> MVector outer elem s a -> MVector inner elem s a +sliceMV :: + MV.Unbox a => Contains elem outer inner -> MVector outer elem s a -> MVector inner elem s a {-# INLINE sliceMV #-} sliceMV win (MVector mv) = - MVector $ MV.slice i n mv - where - Count i = fromWindow win (Count 0) - Count n = windowSize win + MVector $ MV.slice i n mv + where + Count i = fromWindow win (Count 0) + Count n = windowSize win replicateMV :: MV.Unbox a => Size base elem -> ST s a -> ST s (MVector base elem s a) replicateMV (Count n) m = fmap MVector $ MV.replicateM n m -readMV :: MV.Unbox a => MVector base elem s a -> Index base elem -> ST s a -writeMV :: MV.Unbox a => MVector base elem s a -> Index base elem -> a -> ST s () -modifyMV :: MV.Unbox a => MVector base elem s a -> (a -> a) -> Index base elem -> ST s () - -readMV (MVector mv) (Count i) = MV.read mv i -writeMV (MVector mv) (Count i) x = MV.write mv i x -modifyMV (MVector mv) f (Count i) = MV.modify mv f i +readMV :: MV.Unbox a => MVector base elem s a -> Index base elem -> ST s a +writeMV :: MV.Unbox a => MVector base elem s a -> Index base elem -> a -> ST s () +modifyMV :: MV.Unbox a => MVector base elem s a -> (a -> a) -> Index base elem -> ST s () +readMV (MVector mv) (Count i) = MV.read mv i +writeMV (MVector mv) (Count i) x = MV.write mv i x +modifyMV (MVector mv) f (Count i) = MV.modify mv f i readV :: MV.Unbox a => Vector base elem a -> Index base elem -> a readV (Vector v) (Count i) = v V.! i @@ -368,8 +375,8 @@ readV (Vector v) (Count i) = v V.! 
i -- | A type-level name for counting elements without a specific property data Other -deriving instance (which TypeEq.~~ Other) => Enum (Count base elem which) -deriving instance (which TypeEq.~~ Other) => Num (Count base elem which) +deriving instance which TypeEq.~~ Other => Enum (Count base elem which) +deriving instance which TypeEq.~~ Other => Num (Count base elem which) type Var base elem = Count base elem Other diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Honest.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Honest.hs index 1966f3bcba..fc628a47da 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Honest.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Honest.hs @@ -7,8 +7,8 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE ViewPatterns #-} -module Test.Ouroboros.Consensus.ChainGenerator.Honest ( - -- * Generating +module Test.Ouroboros.Consensus.ChainGenerator.Honest + ( -- * Generating ChainSchema (ChainSchema) , CheckedHonestRecipe (UnsafeCheckedHonestRecipe, chrScgDensity, chrWin) , HonestLbl @@ -20,6 +20,7 @@ module Test.Ouroboros.Consensus.ChainGenerator.Honest ( , countChainSchema , genHonestRecipe , uniformTheHonestChain + -- * Testing , HonestChainViolation (BadCount, BadScgWindow, BadLength) , ScgLbl @@ -29,24 +30,32 @@ module Test.Ouroboros.Consensus.ChainGenerator.Honest ( , prettyWindow ) where -import Control.Monad (void, when) -import qualified Control.Monad.Except as Exn -import Data.Monoid (Endo (Endo, appEndo)) -import Data.Proxy (Proxy (Proxy)) -import Data.STRef (newSTRef, readSTRef, writeSTRef) -import qualified Data.Vector.Unboxed as V -import Prelude hiding (words) -import qualified System.Random.Stateful as R -import qualified Test.Ouroboros.Consensus.ChainGenerator.BitVector as BV -import qualified 
Test.Ouroboros.Consensus.ChainGenerator.Counting as C -import Test.Ouroboros.Consensus.ChainGenerator.Params (Asc, - Delta (Delta), Kcp (Kcp), Len (Len), Scg (Scg), genKSD) -import qualified Test.Ouroboros.Consensus.ChainGenerator.Slot as S -import Test.Ouroboros.Consensus.ChainGenerator.Slot - (E (ActiveSlotE, SlotE), S) -import qualified Test.Ouroboros.Consensus.ChainGenerator.Some as Some -import qualified Test.QuickCheck as QC -import Test.QuickCheck.Extras (sized1) +import Control.Monad (void, when) +import Control.Monad.Except qualified as Exn +import Data.Monoid (Endo (Endo, appEndo)) +import Data.Proxy (Proxy (Proxy)) +import Data.STRef (newSTRef, readSTRef, writeSTRef) +import Data.Vector.Unboxed qualified as V +import System.Random.Stateful qualified as R +import Test.Ouroboros.Consensus.ChainGenerator.BitVector qualified as BV +import Test.Ouroboros.Consensus.ChainGenerator.Counting qualified as C +import Test.Ouroboros.Consensus.ChainGenerator.Params + ( Asc + , Delta (Delta) + , Kcp (Kcp) + , Len (Len) + , Scg (Scg) + , genKSD + ) +import Test.Ouroboros.Consensus.ChainGenerator.Slot + ( E (ActiveSlotE, SlotE) + , S + ) +import Test.Ouroboros.Consensus.ChainGenerator.Slot qualified as S +import Test.Ouroboros.Consensus.ChainGenerator.Some qualified as Some +import Test.QuickCheck qualified as QC +import Test.QuickCheck.Extras (sized1) +import Prelude hiding (words) ----- @@ -60,47 +69,46 @@ data HonestRecipe = HonestRecipe !Kcp !Scg !Delta !Len -- * @hon@ the type-level name of the honest chain's slot interval -- -- TODO: Rename to CheckedHonestSchemaSpec -data CheckedHonestRecipe base hon = UnsafeCheckedHonestRecipe { - -- | Desired density - chrScgDensity :: !(BV.SomeDensityWindow S.NotInverted) - , -- | Window in the @base@ containing sequence where the density should be - -- ensured - chrWin :: !(C.Contains SlotE base hon) +data CheckedHonestRecipe base hon = UnsafeCheckedHonestRecipe + { chrScgDensity :: !(BV.SomeDensityWindow S.NotInverted) + -- ^ 
Desired density + , chrWin :: !(C.Contains SlotE base hon) + -- ^ Window in the @base@ containing sequence where the density should be + -- ensured } deriving (Eq, Read, Show) -- TODO: Rename to SomeCheckedHonestSpec -data SomeCheckedHonestRecipe = - forall base hon. +data SomeCheckedHonestRecipe + = forall base hon. SomeCheckedHonestRecipe - !(Proxy base) - !(Proxy hon) - !(CheckedHonestRecipe base hon) + !(Proxy base) + !(Proxy hon) + !(CheckedHonestRecipe base hon) instance Show SomeCheckedHonestRecipe where - showsPrec p (SomeCheckedHonestRecipe base hon recipe) = - Some.runShowsPrec p - $ Some.showCtor SomeCheckedHonestRecipe "SomeCheckedHonestRecipe" - `Some.showArg` base - `Some.showArg` hon - `Some.showArg` recipe + showsPrec p (SomeCheckedHonestRecipe base hon recipe) = + Some.runShowsPrec p $ + Some.showCtor SomeCheckedHonestRecipe "SomeCheckedHonestRecipe" + `Some.showArg` base + `Some.showArg` hon + `Some.showArg` recipe instance Read SomeCheckedHonestRecipe where - readPrec = - Some.runReadPrec - $ Some.readCtor SomeCheckedHonestRecipe "SomeCheckedHonestRecipe" - <*> Some.readArg - <*> Some.readArg - <*> Some.readArg - -data NoSuchHonestChainSchema = - -- | must have @1 <= 'Kcp' < 'Scg'@ + readPrec = + Some.runReadPrec $ + Some.readCtor SomeCheckedHonestRecipe "SomeCheckedHonestRecipe" + <*> Some.readArg + <*> Some.readArg + <*> Some.readArg + +data NoSuchHonestChainSchema + = -- | must have @1 <= 'Kcp' < 'Scg'@ -- -- Choosing @Kcp > 0@ allows adversarial schemas to have at least 1 active -- slot and still lose density comparisons and races. BadKcp - | - -- | 'Len' must be positive + | -- | 'Len' must be positive BadLen deriving (Eq, Read, Show) @@ -144,29 +152,32 @@ data NoSuchHonestChainSchema = genHonestRecipe :: QC.Gen HonestRecipe genHonestRecipe = sized1 $ \sz -> do - (Kcp k, Scg s, Delta d) <- genKSD - -- See Note [Minimum schema length].
- l <- (+ (s + d + k + 1)) <$> QC.choose (0, 5 * sz) - pure $ HonestRecipe (Kcp k) (Scg s) (Delta d) (Len l) + (Kcp k, Scg s, Delta d) <- genKSD + -- See Note [Minimum schema length]. + l <- (+ (s + d + k + 1)) <$> QC.choose (0, 5 * sz) + pure $ HonestRecipe (Kcp k) (Scg s) (Delta d) (Len l) -- | Checks whether the given 'HonestRecipe' determines a valid input to -- 'uniformTheHonestChain' checkHonestRecipe :: HonestRecipe -> Exn.Except NoSuchHonestChainSchema SomeCheckedHonestRecipe checkHonestRecipe recipe = do - when (l <= 0) $ Exn.throwError BadLen + when (l <= 0) $ Exn.throwError BadLen - when (k < 1 || s < k) $ Exn.throwError BadKcp + when (k < 1 || s < k) $ Exn.throwError BadKcp - C.withTopWindow (C.Lbl @HonestLbl) l $ \base topWindow -> do - C.SomeWindow Proxy slots <- pure topWindow + C.withTopWindow (C.Lbl @HonestLbl) l $ \base topWindow -> do + C.SomeWindow Proxy slots <- pure topWindow - pure $ SomeCheckedHonestRecipe base Proxy UnsafeCheckedHonestRecipe { - chrScgDensity = BV.SomeDensityWindow (C.Count (k + 1)) (C.Count s) - , - chrWin = slots + pure $ + SomeCheckedHonestRecipe + base + Proxy + UnsafeCheckedHonestRecipe + { chrScgDensity = BV.SomeDensityWindow (C.Count (k + 1)) (C.Count s) + , chrWin = slots } - where - HonestRecipe (Kcp k) (Scg s) (Delta _d) (Len l) = recipe + where + HonestRecipe (Kcp k) (Scg s) (Delta _d) (Len l) = recipe ----- @@ -177,65 +188,65 @@ checkHonestRecipe recipe = do -- extend the block from the previous active slot. 
-- -- INVARIANT: at least one active slot -data ChainSchema base inner = - ChainSchema - !(C.Contains SlotE base inner) - !(C.Vector inner SlotE S) +data ChainSchema base inner + = ChainSchema + !(C.Contains SlotE base inner) + !(C.Vector inner SlotE S) deriving (Eq, Read, Show) countChainSchema :: ChainSchema base inner -> C.Size inner ActiveSlotE countChainSchema sched = - BV.countActivesInV S.notInverted v - where - ChainSchema _slots v = sched + BV.countActivesInV S.notInverted v + where + ChainSchema _slots v = sched prettyWindow :: C.Contains SlotE base inner -> String -> String prettyWindow win s = - -- for example, i=0 n=1 should be "[)" - replicate i ' ' <> "[" <> replicate (n - theOpenBracket) ' ' <> ")" <> s - where - C.Count i = C.windowStart win - C.Count n = C.windowSize win + -- for example, i=0 n=1 should be "[)" + replicate i ' ' <> "[" <> replicate (n - theOpenBracket) ' ' <> ")" <> s + where + C.Count i = C.windowStart win + C.Count n = C.windowSize win - theOpenBracket = 1 + theOpenBracket = 1 prettyChainSchema :: forall base inner. - ChainSchema base inner - -> String - -> [String] + ChainSchema base inner -> + String -> + [String] prettyChainSchema sched s = - map (replicate (C.getCount shift) ' ' <>) - $ [ prettyWindow slots s + map (replicate (C.getCount shift) ' ' <>) $ + [ prettyWindow slots s , V.foldMap (Endo . S.showS) (C.getVector v) `appEndo` "" ] - where - ChainSchema slots v = sched + where + ChainSchema slots v = sched - shift = C.windowStart slots + shift = C.windowStart slots -data SomeHonestChainSchema = - forall base hon. - SomeHonestChainSchema - !(Proxy base) - !(Proxy hon) - !(ChainSchema base hon) +data SomeHonestChainSchema + = forall base hon. 
+ SomeHonestChainSchema + !(Proxy base) + !(Proxy hon) + !(ChainSchema base hon) instance Show SomeHonestChainSchema where - showsPrec p (SomeHonestChainSchema base hon sched) = - Some.runShowsPrec p - $ Some.showCtor SomeHonestChainSchema "SomeHonestChainSchema" - `Some.showArg` base - `Some.showArg` hon - `Some.showArg` sched + showsPrec p (SomeHonestChainSchema base hon sched) = + Some.runShowsPrec p $ + Some.showCtor SomeHonestChainSchema "SomeHonestChainSchema" + `Some.showArg` base + `Some.showArg` hon + `Some.showArg` sched instance Read SomeHonestChainSchema where - readPrec = - Some.runReadPrec - $ Some.readCtor SomeHonestChainSchema "SomeHonestChainSchema" - <*> Some.readArg - <*> Some.readArg - <*> Some.readArg + readPrec = + Some.runReadPrec $ + Some.readCtor SomeHonestChainSchema "SomeHonestChainSchema" + <*> Some.readArg + <*> Some.readArg + <*> Some.readArg data HonestLbl @@ -376,148 +387,146 @@ would solve the problem with just two toggles. -- to start with empty slots and be dense at the end. uniformTheHonestChain :: forall base hon g. - R.RandomGen g - => Maybe Asc -- ^ When @Nothing@, the generated schema has a minimal amount - -- of active slots. Deactivating any of them would violate - -- safety properties. Such a minimal schema is necessarily - -- completely periodic. - -> CheckedHonestRecipe base hon - -> g - -> ChainSchema base hon -{-# INLINABLE uniformTheHonestChain #-} + R.RandomGen g => + -- | When @Nothing@, the generated schema has a minimal amount + -- of active slots. Deactivating any of them would violate + -- safety properties. Such a minimal schema is necessarily + -- completely periodic. 
+ Maybe Asc -> + CheckedHonestRecipe base hon -> + g -> + ChainSchema base hon +{-# INLINEABLE uniformTheHonestChain #-} uniformTheHonestChain mbAsc recipe g0 = wrap $ C.createV $ do - BV.SomeDensityWindow (C.Count (toEnum -> numerator)) (C.Count (toEnum -> denominator)) <- pure chrScgDensity - let _ = numerator :: C.Var hon ActiveSlotE - _ = denominator :: C.Var hon SlotE - - g <- R.newSTGenM g0 - - -- randomly initialize the bitstring - mv <- C.replicateMV sz $ case mbAsc of - Nothing -> pure $ S.mkActive S.inverted - Just asc -> S.genS asc `R.applySTGen` g - - -- /always/ ensure at least one slot is filled - void $ BV.fillInWindow S.notInverted (C.Count 1 `BV.SomeDensityWindow` sz) g mv - - -- fill the first window up to @k+1@ - rtot <- do - -- NB @withWindow@ truncates if it would reach past @slots@ - C.SomeWindow Proxy scg <- pure $ C.withWindow sz (C.Lbl @ScgLbl) (C.Count 0) (C.toSize denominator) - tot <- C.fromWindowVar scg <$> BV.fillInWindow S.notInverted chrScgDensity g (C.sliceMV scg mv) - - firstSlot <- BV.testMV S.notInverted mv (C.Count 0) - newSTRef $ (if firstSlot then subtract 1 else id) $ (tot :: C.Var hon ActiveSlotE) - - C.SomeWindow Proxy remainingFullWindows <- do - -- "number of windows that fit" is usually "total - windowWidth + 1", - -- but we do not add the one here because the previous init step above - -- already handled the first window - let numRemainingFullWindows = sz C.- C.getCount denominator - pure $ C.withWindow sz (C.Lbl @RemainingHcgWindowsLbl) (C.Count 1) numRemainingFullWindows - - -- visit all subsequent windows that do not reach beyond @slots@ - -- - -- Visiting a window ensures it has at least k+1 active slots; thus the - -- first window beyond @slots@ will have at least k actives in its actual - -- slots. We assume slots beyond @slots@ are active; thus the first window - -- beyond has at least k+1 active slots. 
And subsequent windows can only have - -- more active slots than that; thus we don't need to visit windows that - -- reach beyond @slots@. - -- - -- LOOP INVARIANT: @rtot@ contains the count active slots in the current window excluding its youngest slot - -- - -- LOOP INVARIANT: @numerator - 1 <= rtot@ - -- - -- This loop only alters the final slot in each window. That is key to this - -- whole function being a /uniform/ sampler. In particular: - -- - -- * Every excessive empty slot in the first window has an equal chance - -- to be filled in (by the init step above). - -- - -- * If some subsequent window is sparse, then its final slot is filled - -- in (by this loop). It must never fill in any older slot in the - -- window because those slots have already been sampled (either by - -- the init step above or by previous iterations of this loop). - -- - -- * Every slot that was not filled in was drawn from @mbAsc@. - -- - -- * In total: the init step uniformly fills the first window up to - -- @numerator@, and then each slot not in the first window is either - -- forced to @1@ by its preceding @denominator - 1@ samples or is - -- sampled from @mbAsc@. - C.forRange_ (C.windowSize remainingFullWindows) $ \(C.fromWindow remainingFullWindows -> islot) -> do - -- NB will not be truncated - C.SomeWindow Proxy scgSlots <- pure $ C.withWindow sz (C.Lbl @ScgLbl) islot (C.toSize denominator) - - tot <- do - tot <- readSTRef rtot - end <- BV.testMV S.notInverted mv (C.windowLast scgSlots) - pure $ (if end then (+1) else id) $ tot - - let sparse = tot == numerator - 1 -- see LOOP INVARIANT - - tot' <- if not sparse then pure tot else do - BV.setMV S.notInverted mv (C.windowLast scgSlots) - pure numerator - - start <- BV.testMV S.notInverted mv (C.windowStart scgSlots) - writeSTRef rtot $! 
(if start then subtract 1 else id) $ tot' - - pure mv - where - UnsafeCheckedHonestRecipe { - chrScgDensity - , - chrWin = slots - } = recipe - - sz = C.windowSize slots :: C.Size hon SlotE -- ie 'Len' - - wrap v = ChainSchema slots v + BV.SomeDensityWindow (C.Count (toEnum -> numerator)) (C.Count (toEnum -> denominator)) <- + pure chrScgDensity + let _ = numerator :: C.Var hon ActiveSlotE + _ = denominator :: C.Var hon SlotE + + g <- R.newSTGenM g0 + + -- randomly initialize the bitstring + mv <- C.replicateMV sz $ case mbAsc of + Nothing -> pure $ S.mkActive S.inverted + Just asc -> S.genS asc `R.applySTGen` g + + -- /always/ ensure at least one slot is filled + void $ BV.fillInWindow S.notInverted (C.Count 1 `BV.SomeDensityWindow` sz) g mv + + -- fill the first window up to @k+1@ + rtot <- do + -- NB @withWindow@ truncates if it would reach past @slots@ + C.SomeWindow Proxy scg <- pure $ C.withWindow sz (C.Lbl @ScgLbl) (C.Count 0) (C.toSize denominator) + tot <- C.fromWindowVar scg <$> BV.fillInWindow S.notInverted chrScgDensity g (C.sliceMV scg mv) + + firstSlot <- BV.testMV S.notInverted mv (C.Count 0) + newSTRef $ (if firstSlot then subtract 1 else id) $ (tot :: C.Var hon ActiveSlotE) + + C.SomeWindow Proxy remainingFullWindows <- do + -- "number of windows that fit" is usually "total - windowWidth + 1", + -- but we do not add the one here because the previous init step above + -- already handled the first window + let numRemainingFullWindows = sz C.- C.getCount denominator + pure $ C.withWindow sz (C.Lbl @RemainingHcgWindowsLbl) (C.Count 1) numRemainingFullWindows + + -- visit all subsequent windows that do not reach beyond @slots@ + -- + -- Visiting a window ensures it has at least k+1 active slots; thus the + -- first window beyond @slots@ will have at least k actives in its actual + -- slots. We assume slots beyond @slots@ are active; thus the first window + -- beyond has at least k+1 active slots. 
And subsequent windows can only have
+  -- more active slots than that; thus we don't need to visit windows that
+  -- reach beyond @slots@.
+  --
+  -- LOOP INVARIANT: @rtot@ contains the count of active slots in the current window excluding its youngest slot
+  --
+  -- LOOP INVARIANT: @numerator - 1 <= rtot@
+  --
+  -- This loop only alters the final slot in each window. That is key to this
+  -- whole function being a /uniform/ sampler. In particular:
+  --
+  --  * Every excessive empty slot in the first window has an equal chance
+  --    to be filled in (by the init step above).
+  --
+  --  * If some subsequent window is sparse, then its final slot is filled
+  --    in (by this loop). It must never fill in any older slot in the
+  --    window because those slots have already been sampled (either by
+  --    the init step above or by previous iterations of this loop).
+  --
+  --  * Every slot that was not filled in was drawn from @mbAsc@.
+  --
+  --  * In total: the init step uniformly fills the first window up to
+  --    @numerator@, and then each slot not in the first window is either
+  --    forced to @1@ by its preceding @denominator - 1@ samples or is
+  --    sampled from @mbAsc@.
+  C.forRange_ (C.windowSize remainingFullWindows) $ \(C.fromWindow remainingFullWindows -> islot) -> do
+    -- NB will not be truncated
+    C.SomeWindow Proxy scgSlots <- pure $ C.withWindow sz (C.Lbl @ScgLbl) islot (C.toSize denominator)
+
+    tot <- do
+      tot <- readSTRef rtot
+      end <- BV.testMV S.notInverted mv (C.windowLast scgSlots)
+      pure $ (if end then (+ 1) else id) $ tot
+
+    let sparse = tot == numerator - 1 -- see LOOP INVARIANT
+    tot' <-
+      if not sparse
+        then pure tot
+        else do
+          BV.setMV S.notInverted mv (C.windowLast scgSlots)
+          pure numerator
+
+    start <- BV.testMV S.notInverted mv (C.windowStart scgSlots)
+    writeSTRef rtot $! 
(if start then subtract 1 else id) $ tot' + + pure mv + where + UnsafeCheckedHonestRecipe + { chrScgDensity + , chrWin = slots + } = recipe + + sz = C.windowSize slots :: C.Size hon SlotE -- ie 'Len' + wrap v = ChainSchema slots v ----- -data ScgViolation hon = - forall skolem. - ScgViolation { - -- | How many active slots 'scgvWindow' has - scgvPopCount :: !(C.Size (C.Win ScgLbl skolem) ActiveSlotE) - , - -- | The ChainGrowth window that doesn't have enough active slots - scgvWindow :: !(C.Contains SlotE hon (C.Win ScgLbl skolem)) - } +data ScgViolation hon + = forall skolem. + ScgViolation + { scgvPopCount :: !(C.Size (C.Win ScgLbl skolem) ActiveSlotE) + -- ^ How many active slots 'scgvWindow' has + , scgvWindow :: !(C.Contains SlotE hon (C.Win ScgLbl skolem)) + -- ^ The ChainGrowth window that doesn't have enough active slots + } instance Eq (ScgViolation hon) where - ScgViolation l1 l2 == ScgViolation r1 r2 = - C.forgetBase l1 == C.forgetBase r1 - && - C.forgetWindow l2 == C.forgetWindow r2 + ScgViolation l1 l2 == ScgViolation r1 r2 = + C.forgetBase l1 == C.forgetBase r1 + && C.forgetWindow l2 == C.forgetWindow r2 instance Show (ScgViolation hon) where - showsPrec p (ScgViolation x y) = - Some.runShowsPrec p - $ Some.showCtor ScgViolation "ScgViolation" - `Some.showArg` x - `Some.showArg` y + showsPrec p (ScgViolation x y) = + Some.runShowsPrec p $ + Some.showCtor ScgViolation "ScgViolation" + `Some.showArg` x + `Some.showArg` y instance Read (ScgViolation hon) where - readPrec = - Some.runReadPrec - $ Some.readCtor ScgViolation "ScgViolation" - <*> Some.readArg - <*> Some.readArg - -data HonestChainViolation hon = - -- | The schema does not contain a positive number of active slots + readPrec = + Some.runReadPrec $ + Some.readCtor ScgViolation "ScgViolation" + <*> Some.readArg + <*> Some.readArg + +data HonestChainViolation hon + = -- | The schema does not contain a positive number of active slots BadCount - | - -- | The schema has some window of 'Scg' slots 
that contains less than + | -- | The schema has some window of 'Scg' slots that contains less than -- 'Kcp+1' active slots, even despite optimistically assuming that all slots -- beyond 'Len' are active BadScgWindow !(ScgViolation hon) - | - -- | The schema does not span exactly 'Len' slots + | -- | The schema does not span exactly 'Len' slots BadLength !(C.Size hon SlotE) deriving (Eq, Read, Show) @@ -538,36 +547,37 @@ data ScgLbl -- @3k/f@ after Byron on Cardano @mainnet@.) checkHonestChain :: forall base hon. - HonestRecipe - -> ChainSchema base hon - -> Exn.Except (HonestChainViolation hon) () + HonestRecipe -> + ChainSchema base hon -> + Exn.Except (HonestChainViolation hon) () checkHonestChain recipe sched = do - when (C.getCount sz /= l) $ Exn.throwError $ BadLength sz - - do let pc = countChainSchema sched - when (C.toVar pc <= 0) $ Exn.throwError BadCount + when (C.getCount sz /= l) $ Exn.throwError $ BadLength sz - -- every slot is the first slot of a unique stability window - C.forRange_ sz $ \i -> do - -- note that withWindow truncates if the requested slots reach past 'Len' - C.SomeWindow Proxy scg <- pure $ C.withWindow sz (C.Lbl @ScgLbl) i (C.Count s) + do + let pc = countChainSchema sched + when (C.toVar pc <= 0) $ Exn.throwError BadCount - let pc = BV.countActivesInV S.notInverted (C.sliceV scg v) + -- every slot is the first slot of a unique stability window + C.forRange_ sz $ \i -> do + -- note that withWindow truncates if the requested slots reach past 'Len' + C.SomeWindow Proxy scg <- pure $ C.withWindow sz (C.Lbl @ScgLbl) i (C.Count s) - -- generously assume that the slots of this stability window that extend past 'Len' are active - let benefitOfTheDoubt = s - C.getCount (C.windowSize scg) + let pc = BV.countActivesInV S.notInverted (C.sliceV scg v) - -- check the density in the stability window - when (C.getCount pc + benefitOfTheDoubt < k + 1) $ do - Exn.throwError $ BadScgWindow $ ScgViolation { - scgvPopCount = pc - , - scgvWindow = scg - 
} + -- generously assume that the slots of this stability window that extend past 'Len' are active + let benefitOfTheDoubt = s - C.getCount (C.windowSize scg) - where - HonestRecipe (Kcp k) (Scg s) (Delta _d) (Len l) = recipe + -- check the density in the stability window + when (C.getCount pc + benefitOfTheDoubt < k + 1) $ do + Exn.throwError $ + BadScgWindow $ + ScgViolation + { scgvPopCount = pc + , scgvWindow = scg + } + where + HonestRecipe (Kcp k) (Scg s) (Delta _d) (Len l) = recipe - ChainSchema hon v = sched + ChainSchema hon v = sched - sz = C.windowSize hon + sz = C.windowSize hon diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Params.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Params.hs index 2495a76d90..e3092f87fa 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Params.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Params.hs @@ -1,8 +1,8 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE PatternSynonyms #-} -module Test.Ouroboros.Consensus.ChainGenerator.Params ( - Asc (Asc, UnsafeAsc) +module Test.Ouroboros.Consensus.ChainGenerator.Params + ( Asc (Asc, UnsafeAsc) , Delta (Delta) , Kcp (Kcp) , Len (Len) @@ -14,10 +14,10 @@ module Test.Ouroboros.Consensus.ChainGenerator.Params ( , genKSD ) where -import qualified Data.Bits as B -import Data.Word (Word8) -import qualified Test.QuickCheck as QC -import Test.QuickCheck.Extras (sized1) +import Data.Bits qualified as B +import Data.Word (Word8) +import Test.QuickCheck qualified as QC +import Test.QuickCheck.Extras (sized1) ----- @@ -80,8 +80,8 @@ pattern Asc d <- UnsafeAsc d ascFromDouble :: Double -> Asc ascFromDouble d - | d <= 0 = error "Asc must be > 0" - | 1 <= d = error "Asc must be < 1" + | d <= 0 = error "Asc must be > 0" + | 1 <= d = error "Asc must be < 1" | otherwise = UnsafeAsc d -- | 
PRECONDITION: the bits aren't all the same @@ -99,9 +99,9 @@ genAsc = ascFromBits <$> QC.choose (1 :: Word8, maxBound - 1) genKSD :: QC.Gen (Kcp, Scg, Delta) genKSD = sized1 $ \sz -> do - -- k > 0 so we can ensure an alternative schema loses the density comparison - -- without having to deactivate the first active slot - k <- QC.choose (1, sz + 1) - s <- (+ (k + 1)) <$> QC.choose (0, 2 * sz) -- ensures @(k+1) / s <= 1@ - d <- QC.choose (0, max 0 $ min (div sz 4) (s-1)) -- ensures @d < s@ - pure (Kcp k, Scg s, Delta d) + -- k > 0 so we can ensure an alternative schema loses the density comparison + -- without having to deactivate the first active slot + k <- QC.choose (1, sz + 1) + s <- (+ (k + 1)) <$> QC.choose (0, 2 * sz) -- ensures @(k+1) / s <= 1@ + d <- QC.choose (0, max 0 $ min (div sz 4) (s - 1)) -- ensures @d < s@ + pure (Kcp k, Scg s, Delta d) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/RaceIterator.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/RaceIterator.hs index 9618d1c64e..26d5ae91f7 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/RaceIterator.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/RaceIterator.hs @@ -3,24 +3,24 @@ {-# LANGUAGE PolyKinds #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -{- | These functions iteratively produce all race windows in a slot vector. - -The first window is produced by 'init', which unconditionally starts the window at the first slot. -This window can then be passed to 'next', which starts the new window after the first active slot. - -@ ----X--X--X--X-- ... -^ start of window 1 from 'init' - ^ start of window 2 from 'next' - ^ start of window 3 from 'next' -@ - -Valid windows must have @k+1@ active slots. 
-If the vector doesn't have sufficient slots to meet this condition, 'init' and 'next' return 'Nothing' and we fall back -to 'initConservative' and 'nextConservative', which return windows truncated at the end of time. --} -module Test.Ouroboros.Consensus.ChainGenerator.RaceIterator ( - Race (Race, UnsafeRace) + +-- | These functions iteratively produce all race windows in a slot vector. +-- +-- The first window is produced by 'init', which unconditionally starts the window at the first slot. +-- This window can then be passed to 'next', which starts the new window after the first active slot. +-- +-- @ +-- ---X--X--X--X-- ... +-- ^ start of window 1 from 'init' +-- ^ start of window 2 from 'next' +-- ^ start of window 3 from 'next' +-- @ +-- +-- Valid windows must have @k+1@ active slots. +-- If the vector doesn't have sufficient slots to meet this condition, 'init' and 'next' return 'Nothing' and we fall back +-- to 'initConservative' and 'nextConservative', which return windows truncated at the end of time. 
+module Test.Ouroboros.Consensus.ChainGenerator.RaceIterator + ( Race (Race, UnsafeRace) , RaceLbl , init , initConservative @@ -28,14 +28,16 @@ module Test.Ouroboros.Consensus.ChainGenerator.RaceIterator ( , nextConservative ) where -import Control.Monad (when) -import Data.Proxy (Proxy (Proxy)) -import Prelude hiding (init) -import qualified Test.Ouroboros.Consensus.ChainGenerator.BitVector as BV -import qualified Test.Ouroboros.Consensus.ChainGenerator.Counting as C -import Test.Ouroboros.Consensus.ChainGenerator.Params (Kcp (Kcp)) -import Test.Ouroboros.Consensus.ChainGenerator.Slot - (E (ActiveSlotE, SlotE), S) +import Control.Monad (when) +import Data.Proxy (Proxy (Proxy)) +import Test.Ouroboros.Consensus.ChainGenerator.BitVector qualified as BV +import Test.Ouroboros.Consensus.ChainGenerator.Counting qualified as C +import Test.Ouroboros.Consensus.ChainGenerator.Params (Kcp (Kcp)) +import Test.Ouroboros.Consensus.ChainGenerator.Slot + ( E (ActiveSlotE, SlotE) + , S + ) +import Prelude hiding (init) ----- @@ -58,43 +60,43 @@ pattern Race x <- UnsafeRace x -- Race windows are anchored in an active slot, and so could start with an empty or active slot. nthActiveSlotIndex :: forall base adv. - C.Index adv ActiveSlotE - -> C.Vector base SlotE S - -> C.Contains SlotE base adv - -> Maybe (C.Index base SlotE) + C.Index adv ActiveSlotE -> + C.Vector base SlotE S -> + C.Contains SlotE base adv -> + Maybe (C.Index base SlotE) nthActiveSlotIndex n v raceWin = -- the given race window has at least k+1 blocks in it and 0<=k, so this pattern can't fail -- TODO by invariant during construction of the honest chain? case BV.findIthActiveInV (C.sliceV raceWin v) n of - BV.NothingFound -> Nothing -- would be impossible if we never called next after *Conservative - BV.JustFound slot -> pure $! C.fromWindow raceWin slot + BV.NothingFound -> Nothing -- would be impossible if we never called next after *Conservative + BV.JustFound slot -> pure $! 
C.fromWindow raceWin slot -- | Yields the race window starting at position 0 of the given -- vector if the @k+1@ active slot exists. init :: Kcp -> C.Vector base SlotE S -> Maybe (Race base) init (Kcp k) v = - -- find the @k+1@st active slot in the given race window - case BV.findIthActiveInV v (C.Count k) of - BV.NothingFound -> Nothing - BV.JustFound kPlus1st -> - Just - $! UnsafeRace - $ C.withWindowBetween - (C.lengthV v) - (C.Lbl @RaceLbl) - (C.Count 0) - kPlus1st + -- find the @k+1@st active slot in the given race window + case BV.findIthActiveInV v (C.Count k) of + BV.NothingFound -> Nothing + BV.JustFound kPlus1st -> + Just $! + UnsafeRace $ + C.withWindowBetween + (C.lengthV v) + (C.Lbl @RaceLbl) + (C.Count 0) + kPlus1st -- | @initConservative@ creates a window for the whole vector. initConservative :: C.Vector base SlotE S -> Race base initConservative v = - let sz = C.lengthV v - in UnsafeRace - $ C.withWindow - sz - (C.Lbl @RaceLbl) - (C.Count 0) - (C.Count $ C.getCount sz) + let sz = C.lengthV v + in UnsafeRace $ + C.withWindow + sz + (C.Lbl @RaceLbl) + (C.Count 0) + (C.Count $ C.getCount sz) data RaceStepLbl @@ -102,50 +104,54 @@ data RaceStepLbl -- active slot of @r@ if there is an active slot after @r@. next :: forall base. - C.Vector base SlotE S - -> Race base - -> Maybe (Race base) + C.Vector base SlotE S -> + Race base -> + Maybe (Race base) next v (UnsafeRace (C.SomeWindow Proxy raceWin)) = do - next0 <- nthActiveSlotIndex (C.Count 0) v raceWin - - -- find the first active slot /after/ the given race window - -- - -- Race windows end in an active slot. - nextK <- do - C.SomeWindow Proxy searchWin <- - pure - $ C.withWindowBetween - sz - (C.Lbl @RaceStepLbl) - (C.windowLast raceWin) - (C.lastIndex sz) - nthActiveSlotIndex (C.Count 1) v searchWin - - pure $! 
UnsafeRace $ C.withWindowBetween + next0 <- nthActiveSlotIndex (C.Count 0) v raceWin + + -- find the first active slot /after/ the given race window + -- + -- Race windows end in an active slot. + nextK <- do + C.SomeWindow Proxy searchWin <- + pure $ + C.withWindowBetween + sz + (C.Lbl @RaceStepLbl) + (C.windowLast raceWin) + (C.lastIndex sz) + nthActiveSlotIndex (C.Count 1) v searchWin + + pure $! + UnsafeRace $ + C.withWindowBetween sz (C.Lbl @RaceLbl) (next0 C.+ 1) nextK - where - sz = C.lengthV v + where + sz = C.lengthV v -- | @nextConservative v r@ yields a window anchored at the first -- active slot of @r@ if it exists, and extending until the end of @v@. nextConservative :: forall base. - C.Vector base SlotE S - -> Race base - -> Maybe (Race base) + C.Vector base SlotE S -> + Race base -> + Maybe (Race base) nextConservative v (UnsafeRace (C.SomeWindow Proxy raceWin)) = do - next0 <- nthActiveSlotIndex (C.Count 0) v raceWin + next0 <- nthActiveSlotIndex (C.Count 0) v raceWin - -- do not return a Race Window that starts after 'Len' - when (next0 == C.lastIndex sz) Nothing + -- do not return a Race Window that starts after 'Len' + when (next0 == C.lastIndex sz) Nothing - pure $! UnsafeRace $ C.withWindowBetween + pure $! 
+ UnsafeRace $ + C.withWindowBetween sz (C.Lbl @RaceLbl) (next0 C.+ 1) (C.lastIndex sz) - where - sz = C.lengthV v + where + sz = C.lengthV v diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Slot.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Slot.hs index b549863a38..e92d2b0762 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Slot.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Slot.hs @@ -6,15 +6,17 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} -module Test.Ouroboros.Consensus.ChainGenerator.Slot ( - -- * Counting +module Test.Ouroboros.Consensus.ChainGenerator.Slot + ( -- * Counting E (ActiveSlotE, EmptySlotE, SlotE) , complementActive , complementEmpty + -- * Slot , S , Test.Ouroboros.Consensus.ChainGenerator.Slot.showS , genS + -- * Reuse , POL (mkActive, test) , Pol (Inverted, NotInverted) @@ -23,36 +25,35 @@ module Test.Ouroboros.Consensus.ChainGenerator.Slot ( , notInverted ) where -import Data.Coerce (coerce) -import Data.Proxy (Proxy (Proxy)) -import qualified Data.Vector.Generic as VG -import qualified Data.Vector.Generic.Mutable as MVG -import qualified Data.Vector.Unboxed as V -import qualified Data.Vector.Unboxed.Mutable as MV -import qualified System.Random.Stateful as R -import qualified Test.Ouroboros.Consensus.ChainGenerator.Counting as C -import Test.Ouroboros.Consensus.ChainGenerator.Params (Asc, ascVal) -import qualified Test.QuickCheck as QC +import Data.Coerce (coerce) +import Data.Proxy (Proxy (Proxy)) +import Data.Vector.Generic qualified as VG +import Data.Vector.Generic.Mutable qualified as MVG +import Data.Vector.Unboxed qualified as V +import Data.Vector.Unboxed.Mutable qualified as MV +import System.Random.Stateful qualified as R +import Test.Ouroboros.Consensus.ChainGenerator.Counting qualified as C 
+import Test.Ouroboros.Consensus.ChainGenerator.Params (Asc, ascVal) +import Test.QuickCheck qualified as QC -- | The activeness of some slot newtype S = S Bool deriving (QC.Arbitrary, Eq, Ord, Read, Show) newtype instance MV.MVector s S = MV_S (MV.MVector s Bool) -newtype instance V.Vector S = V_S (V.Vector Bool) +newtype instance V.Vector S = V_S (V.Vector Bool) deriving newtype instance MVG.MVector MV.MVector S -deriving newtype instance VG.Vector V.Vector S +deriving newtype instance VG.Vector V.Vector S instance V.Unbox S ----- genS :: R.RandomGen g => Asc -> g -> (S, g) genS asc g = - bool `seq` (S bool, g') - where - (q, g') = R.random g -- note 0 <= q <= 1 - - bool = q < ascVal asc + bool `seq` (S bool, g') + where + (q, g') = R.random g -- note 0 <= q <= 1 + bool = q < ascVal asc showS :: S -> ShowS showS (S bool) = showChar $ if bool then '1' else '0' @@ -63,14 +64,13 @@ showS (S bool) = showChar $ if bool then '1' else '0' -- -- The data constructors of this type are used in promoted form with -- @-XDataKinds@. --- -data E = - -- | Active slots must be filled on the honest chain and may be filled on an alternative chain. - ActiveSlotE - -- | Empty slots may be filled on the honest chain and must not be filled on an alternative chain. - | EmptySlotE - -- | @SlotE@ is the union of 'ActiveSlotE' and 'EmptySlotE' - | SlotE +data E + = -- | Active slots must be filled on the honest chain and may be filled on an alternative chain. + ActiveSlotE + | -- | Empty slots may be filled on the honest chain and must not be filled on an alternative chain. 
+ EmptySlotE + | -- | @SlotE@ is the union of 'ActiveSlotE' and 'EmptySlotE' + SlotE inverted :: Proxy Inverted inverted = Proxy @@ -95,10 +95,11 @@ data Pol = Inverted | NotInverted -- | Overloaded slot operations for the two polarities class POL (pol :: Pol) where - -- | Make an active slot - mkActive :: proxy pol -> S - -- | Test whether @pol@ maps the given bit to one - test :: proxy pol -> S -> Bool + -- | Make an active slot + mkActive :: proxy pol -> S + + -- | Test whether @pol@ maps the given bit to one + test :: proxy pol -> S -> Bool -- Both 'complementActive' and 'complementEmpty' are offered for simplicity -- instead of a generalized function that works in both cases (it would need @@ -106,30 +107,30 @@ class POL (pol :: Pol) where -- | Every slot is either active or empty complementActive :: - proxy pol - -> C.Size base SlotE - -> C.Count base (PreImage pol ActiveSlotE) which - -> C.Count base (PreImage pol EmptySlotE ) which + proxy pol -> + C.Size base SlotE -> + C.Count base (PreImage pol ActiveSlotE) which -> + C.Count base (PreImage pol EmptySlotE) which complementActive _pol (C.Count n) (C.Count i) = C.Count (n - i) -- | Every slot is either active or empty complementEmpty :: - proxy pol - -> C.Size base SlotE - -> C.Count base (PreImage pol EmptySlotE ) which - -> C.Count base (PreImage pol ActiveSlotE) which + proxy pol -> + C.Size base SlotE -> + C.Count base (PreImage pol EmptySlotE) which -> + C.Count base (PreImage pol ActiveSlotE) which complementEmpty _pol (C.Count n) (C.Count i) = C.Count (n - i) instance POL Inverted where - mkActive _pol = coerce False - test _pol = coerce not + mkActive _pol = coerce False + test _pol = coerce not instance POL NotInverted where - mkActive _pol = coerce True - test _pol = coerce + mkActive _pol = coerce True + test _pol = coerce -- | @PreImage pol e@ is the complement of @e@ if @pol@ is 'Inverted' and simply @e@ if it's 'NotInverted' type family PreImage (pol :: Pol) (e :: E) where - PreImage Inverted 
EmptySlotE = ActiveSlotE - PreImage Inverted ActiveSlotE = EmptySlotE - PreImage NotInverted e = e + PreImage Inverted EmptySlotE = ActiveSlotE + PreImage Inverted ActiveSlotE = EmptySlotE + PreImage NotInverted e = e diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Some.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Some.hs index 4227a8633d..2f331dbcd8 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Some.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/ChainGenerator/Some.hs @@ -16,49 +16,53 @@ -- That can be shoehorned into @some@ with some encoding, but I believe this -- module's weight is preferable to the overhead of using that encoding in our -- existential data types' declarations. -module Test.Ouroboros.Consensus.ChainGenerator.Some ( - -- * 'Show' +module Test.Ouroboros.Consensus.ChainGenerator.Some + ( -- * 'Show' runShowsPrec , showArg , showCtor , showCtorProxy + -- * 'Read' , Read.readPrec , readArg , readCtor , runReadPrec + -- * 'Eq' , Forgotten , forgotten ) where -import Data.Kind (Constraint, Type) -import Data.Void (Void) -import qualified GHC.Read as Read -import GHC.TypeLits (Symbol) -import qualified GHC.TypeLits as TE -import Ouroboros.Consensus.Util.RedundantConstraints -import qualified Text.ParserCombinators.ReadPrec as Read -import qualified Text.Read.Lex as Read +import Data.Kind (Constraint, Type) +import Data.Void (Void) +import GHC.Read qualified as Read +import GHC.TypeLits (Symbol) +import GHC.TypeLits qualified as TE +import Ouroboros.Consensus.Util.RedundantConstraints +import Text.ParserCombinators.ReadPrec qualified as Read +import Text.Read.Lex qualified as Read ----- type family AbsError (s :: Symbol) (a :: Type) :: Void where - AbsError s a = TE.TypeError ( - TE.Text "You have accidentaly applied `" - TE.:<>: TE.Text s - TE.:<>: 
TE.Text "' to a non-concrete type: " - TE.:<>: TE.ShowType a + AbsError s a = + TE.TypeError + ( TE.Text "You have accidentaly applied `" + TE.:<>: TE.Text s + TE.:<>: TE.Text "' to a non-concrete type: " + TE.:<>: TE.ShowType a ) type family NoFun (s :: Symbol) (a :: Type) (absError :: Void) :: Constraint where - NoFun s (a -> b) abs = TE.TypeError ( - TE.Text "You have accidentaly applied `" - TE.:<>: TE.Text s - TE.:<>: TE.Text "' to a function type: " - TE.:<>: TE.ShowType (a -> b) + NoFun s (a -> b) abs = + TE.TypeError + ( TE.Text "You have accidentaly applied `" + TE.:<>: TE.Text s + TE.:<>: TE.Text "' to a function type: " + TE.:<>: TE.ShowType (a -> b) ) - NoFun s t abs = () + NoFun s t abs = () ----- @@ -68,24 +72,25 @@ infixl 1 `showArg` -- | The context is satisfied by any type @a@ that is manifestly apart from @->@ runShowsPrec :: - forall a. NoFun "runShowsPrec" a (AbsError "runShowsPrec" a) - => Int -> ShowBuilder a -> ShowS + forall a. + NoFun "runShowsPrec" a (AbsError "runShowsPrec" a) => + Int -> ShowBuilder a -> ShowS runShowsPrec p (ShowBuilder x) = showParen (p >= 11) x - where - _ = keepRedundantConstraint (Proxy @(NoFun "runShowsPrec" a (AbsError "runShowsPrec" a))) + where + _ = keepRedundantConstraint (Proxy @(NoFun "runShowsPrec" a (AbsError "runShowsPrec" a))) showCtor :: a -> String -> ShowBuilder a showCtor a s = - showCtorProxy (toProxy a) s - where - toProxy :: a -> Proxy a - toProxy = const Proxy + showCtorProxy (toProxy a) s + where + toProxy :: a -> Proxy a + toProxy = const Proxy showCtorProxy :: proxy a -> String -> ShowBuilder a showCtorProxy _a s = ShowBuilder $ showString s showArg :: Show a => ShowBuilder (a -> b) -> a -> ShowBuilder b -ShowBuilder l `showArg` r = ShowBuilder $ l . showString " " . showsPrec 11 r +ShowBuilder l `showArg` r = ShowBuilder $ l . showString " " . 
showsPrec 11 r ----- @@ -94,11 +99,12 @@ newtype ReadBuilder a = ReadBuilder (Read.ReadPrec a) -- | The context is satisfied by any type @a@ that is manifestly apart from @->@ runReadPrec :: - forall a. NoFun "runReadPrec" a (AbsError "runReadPrec" a) - => ReadBuilder a -> Read.ReadPrec a + forall a. + NoFun "runReadPrec" a (AbsError "runReadPrec" a) => + ReadBuilder a -> Read.ReadPrec a runReadPrec (ReadBuilder x) = Read.parens $ Read.prec 10 x - where - _ = keepRedundantConstraint (Proxy @(NoFun "runReadPrec" a (AbsError "runReadPrec" a))) + where + _ = keepRedundantConstraint (Proxy @(NoFun "runReadPrec" a (AbsError "runReadPrec" a))) readCtor :: a -> String -> ReadBuilder a readCtor a s = ReadBuilder $ a <$ Read.expectP (Read.Ident s) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/DiffusionPipelining.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/DiffusionPipelining.hs index f792d46fb1..7999dd9a26 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/DiffusionPipelining.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/DiffusionPipelining.hs @@ -5,40 +5,41 @@ module Test.Ouroboros.Consensus.DiffusionPipelining (prop_diffusionPipeliningSubsequenceConsistency) where -import Control.Exception (assert) -import Data.Either (isRight) -import Data.Proxy -import Ouroboros.Consensus.Block -import Test.QuickCheck +import Control.Exception (assert) +import Data.Either (isRight) +import Data.Proxy +import Ouroboros.Consensus.Block +import Test.QuickCheck -- | See /Consistent validity under subsequences/ in -- 'BlockSupportsDiffusionPipelining'. prop_diffusionPipeliningSubsequenceConsistency :: - forall blk. - BlockSupportsDiffusionPipelining blk - => Proxy blk - -> [TentativeHeaderView blk] - -- ^ Have to satisfy the pipelining criterion. - -> Property + forall blk. 
+ BlockSupportsDiffusionPipelining blk => + Proxy blk -> + -- | Have to satisfy the pipelining criterion. + [TentativeHeaderView blk] -> + Property prop_diffusionPipeliningSubsequenceConsistency _ thvs = - assert (isRight $ satisfyPipeliningCriterion thvs) $ + assert (isRight $ satisfyPipeliningCriterion thvs) $ forAllShrink (sublistOf thvs) (shrinkList (const [])) $ \thvs' -> case satisfyPipeliningCriterion thvs' of Right () -> property () Left (hdrs'', st) -> - counterexample ("tentative header view subsequence: " <> show hdrs'') - $ counterexample ("last state: " <> show st) - $ counterexample "unexpected violation of pipelining criterion!" - () - where - satisfyPipeliningCriterion :: - [TentativeHeaderView blk] - -> Either ([TentativeHeaderView blk], TentativeHeaderState blk) () - satisfyPipeliningCriterion allThvs = - go 1 (initialTentativeHeaderState (Proxy @blk)) allThvs - where - go ix st = \case - [] -> Right () - thv : thvs' -> case applyTentativeHeaderView (Proxy @blk) thv st of - Just st' -> go (ix + 1) st' thvs' - Nothing -> Left (take ix allThvs, st) + counterexample ("tentative header view subsequence: " <> show hdrs'') $ + counterexample ("last state: " <> show st) $ + counterexample + "unexpected violation of pipelining criterion!" 
+ () + where + satisfyPipeliningCriterion :: + [TentativeHeaderView blk] -> + Either ([TentativeHeaderView blk], TentativeHeaderState blk) () + satisfyPipeliningCriterion allThvs = + go 1 (initialTentativeHeaderState (Proxy @blk)) allThvs + where + go ix st = \case + [] -> Right () + thv : thvs' -> case applyTentativeHeaderView (Proxy @blk) thv st of + Just st' -> go (ix + 1) st' thvs' + Nothing -> Left (take ix allThvs, st) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/Protocol.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/Protocol.hs index a25bd3e948..7032fc7156 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/Protocol.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Ouroboros/Consensus/Protocol.hs @@ -4,27 +4,29 @@ module Test.Ouroboros.Consensus.Protocol (tests_chainOrder) where -import Data.Proxy -import Data.Typeable (Typeable, typeRep) -import Ouroboros.Consensus.Protocol.Abstract -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.QuickCheck +import Data.Proxy +import Data.Typeable (Typeable, typeRep) +import Ouroboros.Consensus.Protocol.Abstract +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.QuickCheck -- | Test the laws of the 'ChainOrder' class (in particular, that 'Ord' is -- lawful) /except/ for the high-level "Chain extension precedence" property. tests_chainOrder :: - forall a. - ( ChainOrder a - , Typeable a - , Arbitrary a - , Show a - , Arbitrary (ChainOrderConfig a) - , Show (ChainOrderConfig a) - ) - => Proxy a - -> TestTree -tests_chainOrder aPrx = testGroup ("ChainOrder " <> show (typeRep aPrx)) + forall a. 
+ ( ChainOrder a + , Typeable a + , Arbitrary a + , Show a + , Arbitrary (ChainOrderConfig a) + , Show (ChainOrderConfig a) + ) => + Proxy a -> + TestTree +tests_chainOrder aPrx = + testGroup + ("ChainOrder " <> show (typeRep aPrx)) [ testProperty "Eq & Ord" (prop_lawfulEqAndTotalOrd @a) , testProperty "Consistency with Ord" $ \cfg (a :: a) b -> preferCandidate cfg a b ==> a `lt` b diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/QuickCheck/Extras.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/QuickCheck/Extras.hs index e85e17ce3e..14514e7443 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/QuickCheck/Extras.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/QuickCheck/Extras.hs @@ -1,9 +1,9 @@ -module Test.QuickCheck.Extras ( - sized1 +module Test.QuickCheck.Extras + ( sized1 , unsafeMapSuchThatJust ) where -import qualified Test.QuickCheck as QC +import Test.QuickCheck qualified as QC sized1 :: (Int -> QC.Gen a) -> QC.Gen a sized1 f = QC.sized (f . succ) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/BoolProps.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/BoolProps.hs index 8c2d7d99be..b2f1f960b3 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/BoolProps.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/BoolProps.hs @@ -4,8 +4,8 @@ {-# LANGUAGE LambdaCase #-} {-# LANGUAGE TypeOperators #-} -module Test.Util.BoolProps ( - CollectReqs (..) +module Test.Util.BoolProps + ( CollectReqs (..) , Prereq (..) , Requirement (..) 
, checkReqs @@ -14,8 +14,8 @@ module Test.Util.BoolProps ( , requiredIf ) where -import Data.Kind (Type) -import GHC.Generics +import Data.Kind (Type) +import GHC.Generics {------------------------------------------------------------------------------- Generic boolean properties @@ -52,9 +52,9 @@ requiredIf b = if b then Required else Optional -- 'Bool's as either disjunctive\/conjunctive\/etc observations. class CollectReqs a where collectReqs :: a -> ([Prereq], [Requirement]) - - default collectReqs :: (Generic a, GCollectReqs (Rep a)) - => a -> ([Prereq], [Requirement]) + default collectReqs :: + (Generic a, GCollectReqs (Rep a)) => + a -> ([Prereq], [Requirement]) collectReqs = gCollectReqs . from instance CollectReqs Bool where @@ -66,14 +66,18 @@ instance CollectReqs a => CollectReqs [a] where instance (CollectReqs a, CollectReqs b) => CollectReqs (a, b) where collectReqs (a, b) = collectReqs a <> collectReqs b -instance (CollectReqs a, CollectReqs b, CollectReqs c) - => CollectReqs (a, b, c) where +instance + (CollectReqs a, CollectReqs b, CollectReqs c) => + CollectReqs (a, b, c) + where collectReqs (a, b, c) = collectReqs a <> collectReqs b <> collectReqs c -instance (CollectReqs a, CollectReqs b, CollectReqs c, CollectReqs d) - => CollectReqs (a, b, c, d) where +instance + (CollectReqs a, CollectReqs b, CollectReqs c, CollectReqs d) => + CollectReqs (a, b, c, d) + where collectReqs (a, b, c, d) = - collectReqs a <> collectReqs b <> collectReqs c <> collectReqs d + collectReqs a <> collectReqs b <> collectReqs c <> collectReqs d instance CollectReqs Requirement where collectReqs req = ([], [req]) @@ -84,11 +88,11 @@ instance CollectReqs Prereq where -- | Via 'CollectReqs', check if the ultimate observation has a required value checkReqs :: CollectReqs a => a -> Maybe Bool checkReqs x - | Blocked `elem` prereqs = Just False - | Optional `elem` reqs = Nothing - | otherwise = Just True - where - (prereqs, reqs) = collectReqs x + | Blocked `elem` prereqs = 
Just False + | Optional `elem` reqs = Nothing + | otherwise = Just True + where + (prereqs, reqs) = collectReqs x {------------------------------------------------------------------------------- Generic boolean properties, generically @@ -103,15 +107,19 @@ instance GCollectReqs U1 where instance GCollectReqs rep => GCollectReqs (M1 c meta rep) where gCollectReqs (M1 rep) = gCollectReqs rep -instance (GCollectReqs rep1, GCollectReqs rep2) - => GCollectReqs (rep1 :*: rep2) where +instance + (GCollectReqs rep1, GCollectReqs rep2) => + GCollectReqs (rep1 :*: rep2) + where gCollectReqs (rep1 :*: rep2) = gCollectReqs rep1 <> gCollectReqs rep2 -instance (GCollectReqs rep1, GCollectReqs rep2) - => GCollectReqs (rep1 :+: rep2) where +instance + (GCollectReqs rep1, GCollectReqs rep2) => + GCollectReqs (rep1 :+: rep2) + where gCollectReqs = \case - L1 rep -> gCollectReqs rep - R1 rep -> gCollectReqs rep + L1 rep -> gCollectReqs rep + R1 rep -> gCollectReqs rep instance CollectReqs c => GCollectReqs (K1 meta c) where gCollectReqs (K1 c) = collectReqs c diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChainDB.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChainDB.hs index 41b8a6b951..02d3434e19 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChainDB.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChainDB.hs @@ -3,50 +3,52 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE TypeFamilies #-} -module Test.Util.ChainDB ( - MinimalChainDbArgs (..) +module Test.Util.ChainDB + ( MinimalChainDbArgs (..) , NodeDBs (..) 
, emptyNodeDBs , fromMinimalChainDbArgs , mkTestChunkInfo ) where - -import Control.Concurrent.Class.MonadSTM.Strict -import Control.ResourceRegistry (ResourceRegistry) -import Control.Tracer (nullTracer) -import Ouroboros.Consensus.Block.Abstract -import Ouroboros.Consensus.Config - (TopLevelConfig (topLevelConfigLedger), configCodec) -import Ouroboros.Consensus.HardFork.History.EraParams (eraEpochSize) -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState) -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB hiding - (TraceFollowerEvent (..)) -import Ouroboros.Consensus.Storage.ChainDB.Impl.Args -import Ouroboros.Consensus.Storage.ImmutableDB -import qualified Ouroboros.Consensus.Storage.ImmutableDB as ImmutableDB -import Ouroboros.Consensus.Storage.LedgerDB -import qualified Ouroboros.Consensus.Storage.LedgerDB.Snapshots as LedgerDB -import Ouroboros.Consensus.Storage.LedgerDB.V2.Args -import Ouroboros.Consensus.Storage.VolatileDB -import qualified Ouroboros.Consensus.Storage.VolatileDB as VolatileDB -import Ouroboros.Consensus.Util.Args -import Ouroboros.Consensus.Util.IOLike hiding (invariant) -import System.FS.API (SomeHasFS (..)) -import qualified System.FS.Sim.MockFS as Mock -import System.FS.Sim.MockFS -import System.FS.Sim.STM (simHasFS) -import Test.Util.Orphans.NoThunks () -import Test.Util.TestBlock (TestBlock, TestBlockLedgerConfig (..)) +import Control.Concurrent.Class.MonadSTM.Strict +import Control.ResourceRegistry (ResourceRegistry) +import Control.Tracer (nullTracer) +import Ouroboros.Consensus.Block.Abstract +import Ouroboros.Consensus.Config + ( TopLevelConfig (topLevelConfigLedger) + , configCodec + ) +import Ouroboros.Consensus.HardFork.History.EraParams (eraEpochSize) +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB 
hiding + ( TraceFollowerEvent (..) + ) +import Ouroboros.Consensus.Storage.ChainDB.Impl.Args +import Ouroboros.Consensus.Storage.ImmutableDB +import Ouroboros.Consensus.Storage.ImmutableDB qualified as ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots qualified as LedgerDB +import Ouroboros.Consensus.Storage.LedgerDB.V2.Args +import Ouroboros.Consensus.Storage.VolatileDB +import Ouroboros.Consensus.Storage.VolatileDB qualified as VolatileDB +import Ouroboros.Consensus.Util.Args +import Ouroboros.Consensus.Util.IOLike hiding (invariant) +import System.FS.API (SomeHasFS (..)) +import System.FS.Sim.MockFS +import System.FS.Sim.MockFS qualified as Mock +import System.FS.Sim.STM (simHasFS) +import Test.Util.Orphans.NoThunks () +import Test.Util.TestBlock (TestBlock, TestBlockLedgerConfig (..)) -- | A vector with an element for each database of a node -- -- The @db@ type parameter is instantiated by this module at types for mock -- filesystems; either the 'MockFS' type or reference cells thereof. -data NodeDBs db = NodeDBs { - nodeDBsImm :: db +data NodeDBs db = NodeDBs + { nodeDBsImm :: db , nodeDBsVol :: db , nodeDBsLgr :: db , nodeDBsGsm :: db @@ -54,22 +56,24 @@ data NodeDBs db = NodeDBs { deriving (Functor, Foldable, Traversable) emptyNodeDBs :: MonadSTM m => m (NodeDBs (StrictTMVar m MockFS)) -emptyNodeDBs = atomically $ NodeDBs - <$> newTMVar Mock.empty - <*> newTMVar Mock.empty - <*> newTMVar Mock.empty - <*> newTMVar Mock.empty +emptyNodeDBs = + atomically $ + NodeDBs + <$> newTMVar Mock.empty + <*> newTMVar Mock.empty + <*> newTMVar Mock.empty + <*> newTMVar Mock.empty -- | Minimal set of arguments for creating a ChainDB instance for testing purposes. 
-data MinimalChainDbArgs m blk = MinimalChainDbArgs { - mcdbTopLevelConfig :: TopLevelConfig blk - , mcdbChunkInfo :: ImmutableDB.ChunkInfo +data MinimalChainDbArgs m blk = MinimalChainDbArgs + { mcdbTopLevelConfig :: TopLevelConfig blk + , mcdbChunkInfo :: ImmutableDB.ChunkInfo -- ^ Specifies the layout of the ImmutableDB on disk. - , mcdbInitLedger :: ExtLedgerState blk ValuesMK + , mcdbInitLedger :: ExtLedgerState blk ValuesMK -- ^ The initial ledger state. - , mcdbRegistry :: ResourceRegistry m + , mcdbRegistry :: ResourceRegistry m -- ^ Keeps track of non-lexically scoped resources. - , mcdbNodeDBs :: NodeDBs (StrictTMVar m MockFS) + , mcdbNodeDBs :: NodeDBs (StrictTMVar m MockFS) -- ^ File systems underlying the immutable, volatile and ledger databases. -- Would be useful to default this to StrictTMVar's containing empty MockFS's. } @@ -80,61 +84,66 @@ mkTestChunkInfo = simpleChunkInfo . eraEpochSize . tblcHardForkParams . topLevel -- | Creates a default set of of arguments for ChainDB tests. fromMinimalChainDbArgs :: - ( MonadThrow m - , MonadSTM m - , ConsensusProtocol (BlockProtocol blk) - , PrimMonad m - ) - => MinimalChainDbArgs m blk -> Complete ChainDbArgs m blk -fromMinimalChainDbArgs MinimalChainDbArgs {..} = ChainDbArgs { - cdbImmDbArgs = ImmutableDbArgs { - immCacheConfig = ImmutableDB.CacheConfig 2 60 - -- Cache at most 2 chunks and expire each chunk after 60 seconds of + ( MonadThrow m + , MonadSTM m + , ConsensusProtocol (BlockProtocol blk) + , PrimMonad m + ) => + MinimalChainDbArgs m blk -> Complete ChainDbArgs m blk +fromMinimalChainDbArgs MinimalChainDbArgs{..} = + ChainDbArgs + { cdbImmDbArgs = + ImmutableDbArgs + { immCacheConfig = ImmutableDB.CacheConfig 2 60 + , -- Cache at most 2 chunks and expire each chunk after 60 seconds of -- being unused. 
- , immCheckIntegrity = const True - -- Getting a verified block component does not do any integrity + immCheckIntegrity = const True + , -- Getting a verified block component does not do any integrity -- checking, both for the ImmutableDB, as the VolatileDB. This is -- done in @extractBlockComponent@ in the iterator for the -- ImmutableDB, and in @getBlockComponent@ for the VolatileDB. - , immChunkInfo = mcdbChunkInfo - , immHasFS = SomeHasFS $ simHasFS (nodeDBsImm mcdbNodeDBs) - , immRegistry = mcdbRegistry - , immTracer = nullTracer - , immCodecConfig = configCodec mcdbTopLevelConfig + immChunkInfo = mcdbChunkInfo + , immHasFS = SomeHasFS $ simHasFS (nodeDBsImm mcdbNodeDBs) + , immRegistry = mcdbRegistry + , immTracer = nullTracer + , immCodecConfig = configCodec mcdbTopLevelConfig , immValidationPolicy = ImmutableDB.ValidateAllChunks } - , cdbVolDbArgs = VolatileDbArgs { - volCheckIntegrity = const True - , volCodecConfig = configCodec mcdbTopLevelConfig - , volHasFS = SomeHasFS $ simHasFS (nodeDBsVol mcdbNodeDBs) - , volMaxBlocksPerFile = VolatileDB.mkBlocksPerFile 4 - , volTracer = nullTracer - , volValidationPolicy = VolatileDB.ValidateAll - } - , cdbLgrDbArgs = LedgerDbArgs { - lgrSnapshotPolicyArgs = - LedgerDB.SnapshotPolicyArgs - LedgerDB.DefaultSnapshotInterval - LedgerDB.DefaultNumOfDiskSnapshots - -- Keep 2 ledger snapshots, and take a new snapshot at least every 2 * - -- k seconds, where k is the security parameter. 
- , lgrGenesis = return mcdbInitLedger - , lgrHasFS = SomeHasFS $ simHasFS (nodeDBsLgr mcdbNodeDBs) - , lgrTracer = nullTracer - , lgrRegistry = mcdbRegistry - , lgrConfig = configLedgerDb mcdbTopLevelConfig OmitLedgerEvents - , lgrFlavorArgs = LedgerDbFlavorArgsV2 (V2Args InMemoryHandleArgs) - , lgrQueryBatchSize = DefaultQueryBatchSize - , lgrStartSnapshot = Nothing - } - , cdbsArgs = ChainDbSpecificArgs { - cdbsBlocksToAddSize = 1 - , cdbsGcDelay = 1 - , cdbsHasFSGsmDB = SomeHasFS $ simHasFS (nodeDBsGsm mcdbNodeDBs) - , cdbsGcInterval = 1 - , cdbsRegistry = mcdbRegistry - , cdbsTracer = nullTracer - , cdbsTopLevelConfig = mcdbTopLevelConfig - , cdbsLoE = pure LoEDisabled - } + , cdbVolDbArgs = + VolatileDbArgs + { volCheckIntegrity = const True + , volCodecConfig = configCodec mcdbTopLevelConfig + , volHasFS = SomeHasFS $ simHasFS (nodeDBsVol mcdbNodeDBs) + , volMaxBlocksPerFile = VolatileDB.mkBlocksPerFile 4 + , volTracer = nullTracer + , volValidationPolicy = VolatileDB.ValidateAll + } + , cdbLgrDbArgs = + LedgerDbArgs + { lgrSnapshotPolicyArgs = + LedgerDB.SnapshotPolicyArgs + LedgerDB.DefaultSnapshotInterval + LedgerDB.DefaultNumOfDiskSnapshots + , -- Keep 2 ledger snapshots, and take a new snapshot at least every 2 * + -- k seconds, where k is the security parameter. 
+ lgrGenesis = return mcdbInitLedger + , lgrHasFS = SomeHasFS $ simHasFS (nodeDBsLgr mcdbNodeDBs) + , lgrTracer = nullTracer + , lgrRegistry = mcdbRegistry + , lgrConfig = configLedgerDb mcdbTopLevelConfig OmitLedgerEvents + , lgrFlavorArgs = LedgerDbFlavorArgsV2 (V2Args InMemoryHandleArgs) + , lgrQueryBatchSize = DefaultQueryBatchSize + , lgrStartSnapshot = Nothing + } + , cdbsArgs = + ChainDbSpecificArgs + { cdbsBlocksToAddSize = 1 + , cdbsGcDelay = 1 + , cdbsHasFSGsmDB = SomeHasFS $ simHasFS (nodeDBsGsm mcdbNodeDBs) + , cdbsGcInterval = 1 + , cdbsRegistry = mcdbRegistry + , cdbsTracer = nullTracer + , cdbsTopLevelConfig = mcdbTopLevelConfig + , cdbsLoE = pure LoEDisabled + } } diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChainUpdates.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChainUpdates.hs index 3915879db0..d960f0bd16 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChainUpdates.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChainUpdates.hs @@ -3,44 +3,48 @@ {-# LANGUAGE LambdaCase #-} -- | Generate sequences of updates to model an evolving chain -module Test.Util.ChainUpdates ( - ChainUpdate (..) +module Test.Util.ChainUpdates + ( ChainUpdate (..) , UpdateBehavior (..) 
, genChainUpdates , toChainUpdates + -- * Tests , prop_genChainUpdates ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Control.Monad (replicateM, replicateM_) -import Control.Monad.State.Strict (execStateT, get, lift, modify) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Util.Condense (Condense (..)) -import Ouroboros.Network.Mock.Chain (Chain (Genesis)) -import qualified Ouroboros.Network.Mock.Chain as Chain -import Test.QuickCheck -import Test.Util.QuickCheck (frequency') -import Test.Util.TestBlock - -data ChainUpdate = - AddBlock TestBlock - -- | Roll back to the given 'Point', and then /immediately/ roll +import Cardano.Ledger.BaseTypes (unNonZero) +import Control.Monad (replicateM, replicateM_) +import Control.Monad.State.Strict (execStateT, get, lift, modify) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Util.Condense (Condense (..)) +import Ouroboros.Network.Mock.Chain (Chain (Genesis)) +import Ouroboros.Network.Mock.Chain qualified as Chain +import Test.QuickCheck +import Test.Util.QuickCheck (frequency') +import Test.Util.TestBlock + +data ChainUpdate + = AddBlock TestBlock + | -- | Roll back to the given 'Point', and then /immediately/ roll -- forward by the given 'TestBlock's. 
- | SwitchFork (Point TestBlock) [TestBlock] + SwitchFork (Point TestBlock) [TestBlock] deriving stock (Eq, Show) instance Condense ChainUpdate where condense = \case AddBlock b -> "AddBlock " <> condense b - SwitchFork p bs -> "SwitchFork <- " <> condense p <> " -> " <> - unwords (map condense bs) + SwitchFork p bs -> + "SwitchFork <- " + <> condense p + <> " -> " + <> unwords (map condense bs) toChainUpdates :: [ChainUpdate] -> [Chain.ChainUpdate TestBlock TestBlock] toChainUpdates = concatMap $ \case - SwitchFork pt bs -> Chain.RollBack pt : map Chain.AddBlock bs - AddBlock b -> Chain.AddBlock b : [] + SwitchFork pt bs -> Chain.RollBack pt : map Chain.AddBlock bs + AddBlock b -> Chain.AddBlock b : [] {------------------------------------------------------------------------------- Generating ChainUpdates @@ -49,25 +53,27 @@ toChainUpdates = concatMap $ \case -- | We need some state to generate @ChainUpdate@s data ChainUpdateState = ChainUpdateState { cusCurrentChain :: !(Chain TestBlock) - -- ^ The current chain, obtained by applying all the 'cusUpdates' in reverse - -- order. - , cusUpdates :: ![ChainUpdate] - -- ^ The updates that have been generated so far, in reverse order: the - -- first update in the list is the last update to apply. - } deriving stock (Show) + -- ^ The current chain, obtained by applying all the 'cusUpdates' in reverse + -- order. + , cusUpdates :: ![ChainUpdate] + -- ^ The updates that have been generated so far, in reverse order: the + -- first update in the list is the last update to apply. + } + deriving stock Show emptyUpdateState :: ChainUpdateState -emptyUpdateState = ChainUpdateState - { cusCurrentChain = Genesis - , cusUpdates = [] - } +emptyUpdateState = + ChainUpdateState + { cusCurrentChain = Genesis + , cusUpdates = [] + } getChainUpdates :: ChainUpdateState -> [ChainUpdate] getChainUpdates = reverse . cusUpdates -- | Different strategies how to generate a sequence of 'ChainUpdate's. 
-data UpdateBehavior = - -- | Chain updates tracking the selected chain of an honest node. In +data UpdateBehavior + = -- | Chain updates tracking the selected chain of an honest node. In -- particular, this includes: -- -- * All blocks involved are valid. @@ -85,92 +91,94 @@ data UpdateBehavior = deriving stock (Show, Eq, Enum, Bounded) genChainUpdates :: - UpdateBehavior - -> SecurityParam - -> Int -- ^ The number of updates to generate - -> Gen [ChainUpdate] + UpdateBehavior -> + SecurityParam -> + -- | The number of updates to generate + Int -> + Gen [ChainUpdate] genChainUpdates updateBehavior securityParam n = - getChainUpdates + getChainUpdates <$> genChainUpdateState updateBehavior securityParam n emptyUpdateState genChainUpdateState :: - UpdateBehavior - -> SecurityParam - -> Int - -> ChainUpdateState - -> Gen ChainUpdateState + UpdateBehavior -> + SecurityParam -> + Int -> + ChainUpdateState -> + Gen ChainUpdateState genChainUpdateState updateBehavior securityParam n = - execStateT (replicateM_ n genChainUpdate) - where - -- Modify the state - addUpdate u cus = cus { cusUpdates = u : cusUpdates cus } - setChain c cus = cus { cusCurrentChain = c } - - k = fromIntegral $ unNonZero $ maxRollbacks securityParam - - genChainUpdate = do - ChainUpdateState { cusCurrentChain = chain } <- get - let genValid = - frequency' - [ (3, genAddBlock Valid) - , ( if Chain.null chain then 0 else 1 - , genSwitchFork (choose (1, k)) - ) - ] - frequency' $ - (5, replicateM_ 2 genValid) : - [ (1, genInvalidBlock) | updateBehavior == TentativeChainBehavior ] - - genBlockToAdd validity = do - ChainUpdateState { cusCurrentChain = chain } <- get - block <- lift $ case Chain.head chain of - Nothing -> setValidity . firstBlock <$> genForkNo - Just curHead -> do - forkNo <- case validity of - Valid -> genForkNo - Invalid -> pure 3 - return - . modifyFork (const forkNo) - . 
setValidity - $ successorBlock curHead - modify $ setChain (Chain.addBlock block chain) - return block - where - setValidity b = b { tbValid = validity } - genForkNo = case validity of - Valid -> frequency - [ (1, return 0) - , (1, choose (1, 2)) + execStateT (replicateM_ n genChainUpdate) + where + -- Modify the state + addUpdate u cus = cus{cusUpdates = u : cusUpdates cus} + setChain c cus = cus{cusCurrentChain = c} + + k = fromIntegral $ unNonZero $ maxRollbacks securityParam + + genChainUpdate = do + ChainUpdateState{cusCurrentChain = chain} <- get + let genValid = + frequency' + [ (3, genAddBlock Valid) + , + ( if Chain.null chain then 0 else 1 + , genSwitchFork (choose (1, k)) + ) ] - -- Blocks with equal hashes have to have equal validity, so we reserve - -- a specific ForkNo for invalid blocks to ensure this. + frequency' $ + (5, replicateM_ 2 genValid) + : [(1, genInvalidBlock) | updateBehavior == TentativeChainBehavior] + + genBlockToAdd validity = do + ChainUpdateState{cusCurrentChain = chain} <- get + block <- lift $ case Chain.head chain of + Nothing -> setValidity . firstBlock <$> genForkNo + Just curHead -> do + forkNo <- case validity of + Valid -> genForkNo Invalid -> pure 3 - - genAddBlock validity = do - block <- genBlockToAdd validity - modify $ addUpdate (AddBlock block) - - genSwitchFork genRollBackBlocks = do - ChainUpdateState { cusCurrentChain = chain } <- get - rollBackBlocks <- lift genRollBackBlocks - let chain' = Chain.drop rollBackBlocks chain - modify $ setChain chain' - blocks <- replicateM rollBackBlocks (genBlockToAdd Valid) - modify $ addUpdate (SwitchFork (Chain.headPoint chain') blocks) - - genInvalidBlock = do - genAddBlock Invalid - genSwitchFork (pure 1) - + return + . modifyFork (const forkNo) + . 
setValidity + $ successorBlock curHead + modify $ setChain (Chain.addBlock block chain) + return block + where + setValidity b = b{tbValid = validity} + genForkNo = case validity of + Valid -> + frequency + [ (1, return 0) + , (1, choose (1, 2)) + ] + -- Blocks with equal hashes have to have equal validity, so we reserve + -- a specific ForkNo for invalid blocks to ensure this. + Invalid -> pure 3 + + genAddBlock validity = do + block <- genBlockToAdd validity + modify $ addUpdate (AddBlock block) + + genSwitchFork genRollBackBlocks = do + ChainUpdateState{cusCurrentChain = chain} <- get + rollBackBlocks <- lift genRollBackBlocks + let chain' = Chain.drop rollBackBlocks chain + modify $ setChain chain' + blocks <- replicateM rollBackBlocks (genBlockToAdd Valid) + modify $ addUpdate (SwitchFork (Chain.headPoint chain') blocks) + + genInvalidBlock = do + genAddBlock Invalid + genSwitchFork (pure 1) -- | Test that applying the generated updates gives us the same chain -- as @cusCurrentChain@. 
prop_genChainUpdates :: SecurityParam -> Int -> Property prop_genChainUpdates securityParam n = - forAll genCUS $ \cus -> - Chain.applyChainUpdates (toChainUpdates (getChainUpdates cus)) Genesis === - Just (cusCurrentChain cus) - where - genCUS = do - behavior <- chooseEnum (minBound, maxBound) - genChainUpdateState behavior securityParam n emptyUpdateState + forAll genCUS $ \cus -> + Chain.applyChainUpdates (toChainUpdates (getChainUpdates cus)) Genesis + === Just (cusCurrentChain cus) + where + genCUS = do + behavior <- chooseEnum (minBound, maxBound) + genChainUpdateState behavior securityParam n emptyUpdateState diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChunkInfo.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChunkInfo.hs index 8f9edae31b..9bbc26f860 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChunkInfo.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ChunkInfo.hs @@ -2,21 +2,21 @@ module Test.Util.ChunkInfo (SmallChunkInfo (..)) where -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks -import Test.QuickCheck +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks +import Test.QuickCheck {------------------------------------------------------------------------------- ChunkInfo -------------------------------------------------------------------------------} data SmallChunkInfo = SmallChunkInfo ChunkInfo - deriving (Show) + deriving Show instance Arbitrary SmallChunkInfo where arbitrary = do - numRegularBlocks <- choose (5, 15) - chunkCanContainEBB <- arbitrary - return $ SmallChunkInfo $ singleChunkInfo $ ChunkSize{..} + numRegularBlocks <- choose (5, 15) + chunkCanContainEBB <- arbitrary + return $ SmallChunkInfo $ singleChunkInfo $ ChunkSize{..} - -- Intentionally no shrinker, as shrinking the epoch size independent from - -- the rest of the commands may lead to a non-sensical test +-- Intentionally no shrinker, as shrinking the epoch size independent from 
+-- the rest of the commands may lead to a non-sensical test diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Corruption.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Corruption.hs index d332af0a14..eb99ebe9e0 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Corruption.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Corruption.hs @@ -3,25 +3,24 @@ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.Util.Corruption ( - Corruption (..) +module Test.Util.Corruption + ( Corruption (..) , applyCorruption , detectCorruption ) where -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding) -import Codec.CBOR.Read (deserialiseFromBytes) -import Codec.CBOR.Term (Term) -import Codec.CBOR.Write (toLazyByteString) -import Codec.Serialise (deserialise) -import qualified Data.ByteString.Lazy as Lazy -import Test.QuickCheck - +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.Read (deserialiseFromBytes) +import Codec.CBOR.Term (Term) +import Codec.CBOR.Write (toLazyByteString) +import Codec.Serialise (deserialise) +import Data.ByteString.Lazy qualified as Lazy +import Test.QuickCheck newtype Corruption = Corruption Word - deriving stock (Show) - deriving newtype (Arbitrary) + deriving stock Show + deriving newtype Arbitrary -- | Increment (overflow if necessary) the byte at position @i@ in the -- bytestring, where @i = n `mod` length bs@. @@ -29,15 +28,15 @@ newtype Corruption = Corruption Word -- If the bytestring is empty, return it unmodified. 
applyCorruption :: Corruption -> Lazy.ByteString -> Lazy.ByteString applyCorruption (Corruption n) bs - | Lazy.null bs - = bs - | otherwise - = before <> case Lazy.uncons atAfter of - Nothing -> error "split bytestring after last byte" - Just (hd, tl) -> Lazy.cons (hd + 1) tl - where - offset = fromIntegral n `mod` Lazy.length bs - (before, atAfter) = Lazy.splitAt offset bs + | Lazy.null bs = + bs + | otherwise = + before <> case Lazy.uncons atAfter of + Nothing -> error "split bytestring after last byte" + Just (hd, tl) -> Lazy.cons (hd + 1) tl + where + offset = fromIntegral n `mod` Lazy.length bs + (before, atAfter) = Lazy.splitAt offset bs -- | Serialise @a@, apply the given corruption, deserialise it, when that -- fails, the corruption was detected. When deserialising the corrupted @@ -45,37 +44,37 @@ applyCorruption (Corruption n) bs -- function. If that function returns 'False', the corruption was detected, if -- it returns 'True', the corruption was not detected and the test fails. detectCorruption :: - Show a - => (a -> Encoding) - -> (forall s. Decoder s (Lazy.ByteString -> a)) - -> (a -> Bool) - -- ^ Integrity check that should detect the corruption. Return 'False' - -- when corrupt. - -> a - -> Corruption - -> Property + Show a => + (a -> Encoding) -> + (forall s. Decoder s (Lazy.ByteString -> a)) -> + -- | Integrity check that should detect the corruption. Return 'False' + -- when corrupt. 
+ (a -> Bool) -> + a -> + Corruption -> + Property detectCorruption enc dec isValid a cor = - case deserialiseFromBytes dec corruptBytes of - Right (leftover, mkA') - | not (Lazy.null leftover) - -> label "corruption detected by decoder" $ property True - | not (isValid a') - -> label "corruption detected" $ property True - | otherwise - -> counterexample - ("Corruption not detected: " <> show a') - $ counterexample - ("Original bytes: " <> show origBytes) - $ counterexample - ("Corrupt bytes: " <> show corruptBytes) - $ counterexample - ("Original CBOR: " <> show (deserialise origBytes :: Term)) - $ counterexample - ("Corrupt CBOR: " <> show (deserialise corruptBytes :: Term)) - False - where - a' = mkA' corruptBytes - Left _ -> label "corruption detected by decoder" $ property True - where - origBytes = toLazyByteString (enc a) - corruptBytes = applyCorruption cor origBytes + case deserialiseFromBytes dec corruptBytes of + Right (leftover, mkA') + | not (Lazy.null leftover) -> + label "corruption detected by decoder" $ property True + | not (isValid a') -> + label "corruption detected" $ property True + | otherwise -> + counterexample + ("Corruption not detected: " <> show a') + $ counterexample + ("Original bytes: " <> show origBytes) + $ counterexample + ("Corrupt bytes: " <> show corruptBytes) + $ counterexample + ("Original CBOR: " <> show (deserialise origBytes :: Term)) + $ counterexample + ("Corrupt CBOR: " <> show (deserialise corruptBytes :: Term)) + False + where + a' = mkA' corruptBytes + Left _ -> label "corruption detected by decoder" $ property True + where + origBytes = toLazyByteString (enc a) + corruptBytes = applyCorruption cor origBytes diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/FileLock.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/FileLock.hs index 9287698d50..b110c7102e 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/FileLock.hs +++ 
b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/FileLock.hs @@ -2,14 +2,14 @@ module Test.Util.FileLock (mockFileLock) where -import Control.Monad (join, void) -import Control.Monad.IOSim -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Util.FileLock -import Ouroboros.Consensus.Util.IOLike -import Test.Util.Orphans.IOLike () +import Control.Monad (join, void) +import Control.Monad.IOSim +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Util.FileLock +import Ouroboros.Consensus.Util.IOLike +import Test.Util.Orphans.IOLike () {------------------------------------------------------------------------------- mockFileLock @@ -21,22 +21,24 @@ import Test.Util.Orphans.IOLike () -- unlocking as done by Linux (near instantaneous but not instant) and -- Windows. mockFileLock :: - Maybe DiffTime -- ^ Optional release delay - -> IOSim s (FileLock (IOSim s)) + -- | Optional release delay + Maybe DiffTime -> + IOSim s (FileLock (IOSim s)) mockFileLock releaseDelay = do - locks <- newMockFileLocks releaseDelay - return FileLock { - lockFile = \fp -> mockUnlockFile locks <$> mockLockFile locks fp + locks <- newMockFileLocks releaseDelay + return + FileLock + { lockFile = \fp -> mockUnlockFile locks <$> mockLockFile locks fp } {------------------------------------------------------------------------------- MockFileLocks -------------------------------------------------------------------------------} -data MockFileLocks m = MockFileLocks { - varLocks :: StrictTVar m (Map FilePath LockStatus) - , releaseDelay :: Maybe DiffTime - } +data MockFileLocks m = MockFileLocks + { varLocks :: StrictTVar m (Map FilePath LockStatus) + , releaseDelay :: Maybe DiffTime + } -- | The status of a file lock, required to account for lazy releases. 
Note -- that we don't have to model \"unlocked\", as the absence in 'varLocks' @@ -45,36 +47,37 @@ data LockStatus = Held | LazyRelease newMockFileLocks :: IOLike m => Maybe DiffTime -> m (MockFileLocks m) newMockFileLocks releaseDelay = do - varLocks <- uncheckedNewTVarM Map.empty - return MockFileLocks { varLocks, releaseDelay } + varLocks <- uncheckedNewTVarM Map.empty + return MockFileLocks{varLocks, releaseDelay} mockLockFile :: IOLike m => MockFileLocks m -> FilePath -> m FilePath -mockLockFile MockFileLocks { varLocks } path = atomically $ do - locks <- readTVar varLocks - if Map.member path locks - then retry - else writeTVar varLocks $ Map.insert path Held locks - return path +mockLockFile MockFileLocks{varLocks} path = atomically $ do + locks <- readTVar varLocks + if Map.member path locks + then retry + else writeTVar varLocks $ Map.insert path Held locks + return path -- | We simulate lazy lock releases by changing the status of the lock to -- 'LazyRelease' and spawning a thread that waits for 'releaseDelay' before -- removing the lock from 'varLocks'. 
-mockUnlockFile :: (IOLike m, HasCallStack) - => MockFileLocks m -> FilePath -> m () -mockUnlockFile MockFileLocks { varLocks, releaseDelay } path = - join $ atomically $ do - locks <- readTVar varLocks - case Map.lookup path locks of - Nothing -> - error $ "unlocking an unlocked file: " <> show path - Just LazyRelease -> - error $ "unlocking a file that is still being unlocked: " <> show path - Just Held -> case releaseDelay of - Nothing -> do - writeTVar varLocks $ Map.delete path locks - return $ return () - Just delay -> do - writeTVar varLocks $ Map.insert path LazyRelease locks - return $ void $ forkIO $ do - threadDelay delay - atomically $ writeTVar varLocks $ Map.delete path locks +mockUnlockFile :: + (IOLike m, HasCallStack) => + MockFileLocks m -> FilePath -> m () +mockUnlockFile MockFileLocks{varLocks, releaseDelay} path = + join $ atomically $ do + locks <- readTVar varLocks + case Map.lookup path locks of + Nothing -> + error $ "unlocking an unlocked file: " <> show path + Just LazyRelease -> + error $ "unlocking a file that is still being unlocked: " <> show path + Just Held -> case releaseDelay of + Nothing -> do + writeTVar varLocks $ Map.delete path locks + return $ return () + Just delay -> do + writeTVar varLocks $ Map.insert path LazyRelease locks + return $ void $ forkIO $ do + threadDelay delay + atomically $ writeTVar varLocks $ Map.delete path locks diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/HardFork/Future.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/HardFork/Future.hs index d3f16e856f..42f7334f8f 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/HardFork/Future.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/HardFork/Future.hs @@ -2,8 +2,8 @@ {-# LANGUAGE DerivingVia #-} {-# LANGUAGE LambdaCase #-} -module Test.Util.HardFork.Future ( - EraSize (..) +module Test.Util.HardFork.Future + ( EraSize (..) , Future (..) 
, futureEpochInFirstEra , futureFirstEpochSize @@ -15,16 +15,16 @@ module Test.Util.HardFork.Future ( , singleEraFuture ) where -import qualified Data.Fixed -import Data.Time (NominalDiffTime) -import Data.Word (Word64) -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime (SlotLength (..)) -import Ouroboros.Consensus.Util (nTimes) -import Quiet (Quiet (..)) -import Test.Util.Slots (NumSlots (..)) -import Test.Util.Stream (Stream (..)) +import Data.Fixed qualified +import Data.Time (NominalDiffTime) +import Data.Word (Word64) +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime (SlotLength (..)) +import Ouroboros.Consensus.Util (nTimes) +import Quiet (Quiet (..)) +import Test.Util.Slots (NumSlots (..)) +import Test.Util.Stream (Stream (..)) {------------------------------------------------------------------------------- Careful counts @@ -33,7 +33,7 @@ import Test.Util.Stream (Stream (..)) -- | Number of epochs newtype EraSize = EraSize {unEraSize :: Word64} deriving (Eq, Generic) - deriving (Show) via (Quiet EraSize) + deriving Show via (Quiet EraSize) {------------------------------------------------------------------------------- A test's whole timeline @@ -42,9 +42,9 @@ newtype EraSize = EraSize {unEraSize :: Word64} -- | Every era in the test -- -- INVARIANT: every number is @> 0@ -data Future = - EraFinal SlotLength EpochSize - | EraCons SlotLength EpochSize EraSize Future +data Future + = EraFinal SlotLength EpochSize + | EraCons SlotLength EpochSize EraSize Future deriving (Eq, Show) -- | 'Future' with only one era @@ -54,91 +54,94 @@ singleEraFuture = EraFinal -- | 'SlotLength' of the first era futureFirstSlotLength :: Future -> SlotLength futureFirstSlotLength future = case future of - EraCons slotLength _epochSize _eraSize _future -> slotLength - EraFinal slotLength _epochSize -> slotLength + EraCons slotLength _epochSize _eraSize _future -> 
slotLength + EraFinal slotLength _epochSize -> slotLength -- | 'EpochSize' of the first era futureFirstEpochSize :: Future -> EpochSize futureFirstEpochSize future = case future of - EraCons _slotLength epochSize _eraSize _future -> epochSize - EraFinal _slotLength epochSize -> epochSize + EraCons _slotLength epochSize _eraSize _future -> epochSize + EraFinal _slotLength epochSize -> epochSize -- | Length of each slot in the whole 'Future' futureSlotLengths :: Future -> Stream SlotLength futureSlotLengths = \case - EraFinal slotLength _epochSize -> - let x = slotLength :< x in x - EraCons slotLength epochSize eraSize future -> - nTimes (slotLength :<) eraSlots $ - futureSlotLengths future - where - NumSlots eraSlots = calcEraSlots epochSize eraSize + EraFinal slotLength _epochSize -> + let x = slotLength :< x in x + EraCons slotLength epochSize eraSize future -> + nTimes (slotLength :<) eraSlots $ + futureSlotLengths future + where + NumSlots eraSlots = calcEraSlots epochSize eraSize -- | @(slot, time left in slot, length of slot)@ -futureTimeToSlot :: Future - -> NominalDiffTime - -> (SlotNo, NominalDiffTime, SlotLength) +futureTimeToSlot :: + Future -> + NominalDiffTime -> + (SlotNo, NominalDiffTime, SlotLength) futureTimeToSlot = \future d -> go 0 d future - where - done acc d slotLength = - (SlotNo $ acc + n, getSlotLength slotLength - timeInSlot, slotLength) - where - n = divide d slotLength - timeInSlot = d - multiply n slotLength - - go acc d (EraFinal slotLength _epochSize) = - done acc d slotLength - go acc d (EraCons slotLength epochSize eraSize future) = - case d `safeSub` eraLength of - Nothing -> done acc d slotLength - Just d' -> go (acc + eraSlots) d' future - where - NumSlots eraSlots = calcEraSlots epochSize eraSize - eraLength = multiply eraSlots slotLength + where + done acc d slotLength = + (SlotNo $ acc + n, getSlotLength slotLength - timeInSlot, slotLength) + where + n = divide d slotLength + timeInSlot = d - multiply n slotLength + + go acc d 
(EraFinal slotLength _epochSize) = + done acc d slotLength + go acc d (EraCons slotLength epochSize eraSize future) = + case d `safeSub` eraLength of + Nothing -> done acc d slotLength + Just d' -> go (acc + eraSlots) d' future + where + NumSlots eraSlots = calcEraSlots epochSize eraSize + eraLength = multiply eraSlots slotLength -- | Which epoch the slot is in -futureSlotToEpoch :: Future - -> SlotNo - -> EpochNo +futureSlotToEpoch :: + Future -> + SlotNo -> + EpochNo futureSlotToEpoch = \future (SlotNo s) -> EpochNo $ go 0 s future - where - go acc s = \case - EraFinal _slotLength (EpochSize epSz) -> - acc + s `div` epSz - EraCons slotLength epochSize eraSize future -> - case s `safeSub` eraSlots of - Nothing -> go acc s (EraFinal slotLength epochSize) - Just s' -> go (acc + n) s' future - where - EraSize n = eraSize - NumSlots eraSlots = calcEraSlots epochSize eraSize + where + go acc s = \case + EraFinal _slotLength (EpochSize epSz) -> + acc + s `div` epSz + EraCons slotLength epochSize eraSize future -> + case s `safeSub` eraSlots of + Nothing -> go acc s (EraFinal slotLength epochSize) + Just s' -> go (acc + n) s' future + where + EraSize n = eraSize + NumSlots eraSlots = calcEraSlots epochSize eraSize -- | When the slot begins -futureSlotToTime :: Future - -> SlotNo - -> NominalDiffTime +futureSlotToTime :: + Future -> + SlotNo -> + NominalDiffTime futureSlotToTime = \future (SlotNo s) -> go 0 s future - where - done acc s slotLength = - acc + multiply s slotLength - - go acc s = \case - EraFinal slotLength _epochSize -> - done acc s slotLength - EraCons slotLength epochSize eraSize future -> - case s `safeSub` eraSlots of - Nothing -> done acc s slotLength - Just s' -> go (acc + eraLength) s' future - where - NumSlots eraSlots = calcEraSlots epochSize eraSize - eraLength = multiply eraSlots slotLength + where + done acc s slotLength = + acc + multiply s slotLength + + go acc s = \case + EraFinal slotLength _epochSize -> + done acc s slotLength + EraCons 
slotLength epochSize eraSize future -> + case s `safeSub` eraSlots of + Nothing -> done acc s slotLength + Just s' -> go (acc + eraLength) s' future + where + NumSlots eraSlots = calcEraSlots epochSize eraSize + eraLength = multiply eraSlots slotLength -- | Whether the epoch is in the first era futureEpochInFirstEra :: Future -> EpochNo -> Bool futureEpochInFirstEra = \case - EraCons _slotLength _epochSize (EraSize n) _future -> - \(EpochNo e) -> e < n - EraFinal{} -> const True + EraCons _slotLength _epochSize (EraSize n) _future -> + \(EpochNo e) -> e < n + EraFinal{} -> const True {------------------------------------------------------------------------------- Miscellany @@ -156,4 +159,4 @@ safeSub x y = if x < y then Nothing else Just (x - y) calcEraSlots :: EpochSize -> EraSize -> NumSlots calcEraSlots (EpochSize slotPerEpoch) (EraSize epochPerEra) = - NumSlots (slotPerEpoch * epochPerEra) + NumSlots (slotPerEpoch * epochPerEra) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/HardFork/OracularClock.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/HardFork/OracularClock.hs index e71b27e1d4..c7a70251d8 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/HardFork/OracularClock.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/HardFork/OracularClock.hs @@ -6,26 +6,30 @@ -- -- > import Test.Util.OracularClock (OracularClock(..)) -- > import qualified Test.Util.OracularClock as OracularClock -module Test.Util.HardFork.OracularClock ( - EndOfDaysException (..) +module Test.Util.HardFork.OracularClock + ( EndOfDaysException (..) , OracularClock (..) 
, forkEachSlot , mkOracularClock ) where -import Control.Monad (void, when) -import Control.ResourceRegistry -import Data.Foldable (toList) -import Data.Function (fix) -import Data.Time -import GHC.Stack -import Ouroboros.Consensus.Block -import qualified Ouroboros.Consensus.BlockchainTime as BTime -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.Time (nominalDelay) -import Test.Util.HardFork.Future (Future, futureSlotLengths, - futureSlotToTime, futureTimeToSlot) -import Test.Util.Slots (NumSlots (..)) +import Control.Monad (void, when) +import Control.ResourceRegistry +import Data.Foldable (toList) +import Data.Function (fix) +import Data.Time +import GHC.Stack +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime qualified as BTime +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.Time (nominalDelay) +import Test.Util.HardFork.Future + ( Future + , futureSlotLengths + , futureSlotToTime + , futureTimeToSlot + ) +import Test.Util.Slots (NumSlots (..)) -- | A clock that knows the future -- @@ -43,31 +47,27 @@ import Test.Util.Slots (NumSlots (..)) -- nodes adopt the same timeline, which must be /the/ 'Future' that this clock -- anticipates. data OracularClock m = OracularClock - { -- | Returns 'True' if the requested slot is already over - blockUntilSlot :: SlotNo -> m Bool - - -- | The current delay duration until the onset of the next slot - , delayUntilNextSlot :: m NominalDiffTime - - -- | A mock system time - -- - -- Note that 'BTime.systemTimeCurrent' eventually raises - -- 'EndOfDaysException'. 
- , finiteSystemTime :: BTime.SystemTime m - - -- | The current slot - , getCurrentSlot :: m SlotNo - - -- | See 'forkEachSlot' - , forkEachSlot_ :: HasCallStack - => ResourceRegistry m - -> String - -> (SlotNo -> m ()) - -> m (m ()) - - -- | Block until the clock is exhausted - , waitUntilDone :: m () - } + { blockUntilSlot :: SlotNo -> m Bool + -- ^ Returns 'True' if the requested slot is already over + , delayUntilNextSlot :: m NominalDiffTime + -- ^ The current delay duration until the onset of the next slot + , finiteSystemTime :: BTime.SystemTime m + -- ^ A mock system time + -- + -- Note that 'BTime.systemTimeCurrent' eventually raises + -- 'EndOfDaysException'. + , getCurrentSlot :: m SlotNo + -- ^ The current slot + , forkEachSlot_ :: + HasCallStack => + ResourceRegistry m -> + String -> + (SlotNo -> m ()) -> + m (m ()) + -- ^ See 'forkEachSlot' + , waitUntilDone :: m () + -- ^ Block until the clock is exhausted + } -- | Forks a thread that executes an action at the onset of each slot -- @@ -78,14 +78,16 @@ data OracularClock m = OracularClock -- from within the given action will always return the correct slot. -- -- See the discussion of ticker threads in 'getCurrentSlot'. -forkEachSlot :: HasCallStack - => ResourceRegistry m - -> OracularClock m - -> String - -> (SlotNo -> m ()) - -> m (m ()) +forkEachSlot :: + HasCallStack => + ResourceRegistry m -> + OracularClock m -> + String -> + (SlotNo -> m ()) -> + m (m ()) forkEachSlot reg clk = forkEachSlot_ clk reg - -- jumping the hoop so HasCallStack is useful + +-- jumping the hoop so HasCallStack is useful -- | See 'OracularClock' -- @@ -100,12 +102,15 @@ forkEachSlot reg clk = forkEachSlot_ clk reg -- its 'BTime.systemCurrentTime' ticks before any 'threadDelay'-ed thread -- scheduled to wake-up then does so. The 'BTime.defaultSystemTime' in the mock -- 'IO' monad provided by @io-sim@ satisfies this assumption. -mkOracularClock :: forall m. 
(IOLike m) - => BTime.SystemTime m - -> NumSlots - -> Future - -> OracularClock m -mkOracularClock BTime.SystemTime{..} numSlots future = OracularClock +mkOracularClock :: + forall m. + IOLike m => + BTime.SystemTime m -> + NumSlots -> + Future -> + OracularClock m +mkOracularClock BTime.SystemTime{..} numSlots future = + OracularClock { blockUntilSlot = \slot -> do BTime.RelativeTime now <- finiteSystemTimeCurrent let later = futureSlotToTime future slot @@ -116,83 +121,78 @@ mkOracularClock BTime.SystemTime{..} numSlots future = OracularClock exhaustedM blockUntilTime now later - , delayUntilNextSlot = do (_slot, leftInSlot, _slotLength) <- getPresent pure leftInSlot - - , finiteSystemTime = BTime.SystemTime - { BTime.systemTimeCurrent = finiteSystemTimeCurrent - , BTime.systemTimeWait = systemTimeWait - } - + , finiteSystemTime = + BTime.SystemTime + { BTime.systemTimeCurrent = finiteSystemTimeCurrent + , BTime.systemTimeWait = systemTimeWait + } , getCurrentSlot = do (slot, _leftInSlot, _slotLength) <- getPresent pure slot - , forkEachSlot_ = \rr threadLabel action -> fmap cancelThread $ - forkLinkedThread rr threadLabel $ - fix $ \loop -> do - -- INVARIANT the slot returned here ascends monotonically unless - -- the underlying 'BTime.SystemTime' jumps backwards - (slot, leftInSlot, _slotLength) <- getPresent - - let lbl = threadLabel <> " [" <> show slot <> "]" - -- fork the action, so it can't threadDelay us - void $ forkLinkedThread rr lbl $ action slot - - threadDelay $ nominalDelay leftInSlot - loop - + forkLinkedThread rr threadLabel $ + fix $ \loop -> do + -- INVARIANT the slot returned here ascends monotonically unless + -- the underlying 'BTime.SystemTime' jumps backwards + (slot, leftInSlot, _slotLength) <- getPresent + + let lbl = threadLabel <> " [" <> show slot <> "]" + -- fork the action, so it can't threadDelay us + void $ forkLinkedThread rr lbl $ action slot + + threadDelay $ nominalDelay leftInSlot + loop , waitUntilDone = do 
BTime.RelativeTime now <- finiteSystemTimeCurrent void $ blockUntilTime now endOfDays - } - where - -- when the clock becomes exhausted - endOfDays :: NominalDiffTime - endOfDays = - (sum . map BTime.getSlotLength) $ - (take (fromIntegral n) . toList) $ + where + -- when the clock becomes exhausted + endOfDays :: NominalDiffTime + endOfDays = + (sum . map BTime.getSlotLength) $ + (take (fromIntegral n) . toList) $ futureSlotLengths future - where - NumSlots n = numSlots - - -- what any method called at exactly @endOfDays@ or blocked as of - -- @endOfDays@ ends up doing at the exact @endOfDays@ moment - exhaustedM :: forall a. m a - exhaustedM = do - -- throw if this thread isn't terminated in time - threadDelay $ picosecondsToDiffTime 1 -- the smallest possible delay - throwIO EndOfDaysException - - -- a 'BTime.systemTimeCurrent' that respects @endOfDays@ - finiteSystemTimeCurrent :: m BTime.RelativeTime - finiteSystemTimeCurrent = do - t <- systemTimeCurrent - - -- check if clock is exhausted - let tFinal = BTime.RelativeTime endOfDays - when (t > tFinal) $ throwIO EndOfDaysException - when (t == tFinal) $ exhaustedM - - pure t - - getPresent :: m (SlotNo, NominalDiffTime, BTime.SlotLength) - getPresent = do - BTime.RelativeTime now <- finiteSystemTimeCurrent - pure $ futureTimeToSlot future now - - blockUntilTime :: NominalDiffTime -> NominalDiffTime -> m Bool - blockUntilTime now later = - case compare now later of - LT -> do - threadDelay $ nominalDelay $ later - now - pure False - EQ -> pure False - GT -> pure True -- ie " too late " + where + NumSlots n = numSlots + + -- what any method called at exactly @endOfDays@ or blocked as of + -- @endOfDays@ ends up doing at the exact @endOfDays@ moment + exhaustedM :: forall a. 
m a + exhaustedM = do + -- throw if this thread isn't terminated in time + threadDelay $ picosecondsToDiffTime 1 -- the smallest possible delay + throwIO EndOfDaysException + + -- a 'BTime.systemTimeCurrent' that respects @endOfDays@ + finiteSystemTimeCurrent :: m BTime.RelativeTime + finiteSystemTimeCurrent = do + t <- systemTimeCurrent + + -- check if clock is exhausted + let tFinal = BTime.RelativeTime endOfDays + when (t > tFinal) $ throwIO EndOfDaysException + when (t == tFinal) $ exhaustedM + + pure t + + getPresent :: m (SlotNo, NominalDiffTime, BTime.SlotLength) + getPresent = do + BTime.RelativeTime now <- finiteSystemTimeCurrent + pure $ futureTimeToSlot future now + + blockUntilTime :: NominalDiffTime -> NominalDiffTime -> m Bool + blockUntilTime now later = + case compare now later of + LT -> do + threadDelay $ nominalDelay $ later - now + pure False + EQ -> pure False + GT -> pure True -- ie " too late " ----- @@ -203,6 +203,6 @@ mkOracularClock BTime.SystemTime{..} numSlots future = OracularClock -- enough, the thread then throws this exception, which we don't catch -- anywhere. 
data EndOfDaysException = EndOfDaysException - deriving (Show) + deriving Show instance Exception EndOfDaysException diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Header.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Header.hs index 8297c396d5..3a4859c05c 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Header.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Header.hs @@ -1,50 +1,55 @@ {-# LANGUAGE FlexibleContexts #-} -module Test.Util.Header ( - -- * Enriching headers with a relative slot time +module Test.Util.Header + ( -- * Enriching headers with a relative slot time attachSlotTime , attachSlotTimeToFragment , dropTimeFromFragment ) where -import Cardano.Slotting.EpochInfo.API (epochInfoSlotToRelativeTime) -import Data.Functor.Identity (runIdentity) -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block (Header, blockSlot) -import Ouroboros.Consensus.Config (TopLevelConfig) -import Ouroboros.Consensus.HardFork.Combinator.Abstract - (ImmutableEraParams, immutableEpochInfo) -import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as AF +import Cardano.Slotting.EpochInfo.API (epochInfoSlotToRelativeTime) +import Data.Functor.Identity (runIdentity) +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block (Header, blockSlot) +import Ouroboros.Consensus.Config (TopLevelConfig) +import Ouroboros.Consensus.HardFork.Combinator.Abstract + ( ImmutableEraParams + , immutableEpochInfo + ) +import Ouroboros.Consensus.HeaderValidation (HeaderWithTime (..)) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as AF {------------------------------------------------------------------------------- Enriching headers with a relative slot time 
-------------------------------------------------------------------------------} -dropTimeFromFragment :: (AF.HasHeader (Header blk)) - => AnchoredFragment (HeaderWithTime blk) - -> AnchoredFragment (Header blk) -dropTimeFromFragment = AF.mapAnchoredFragment hwtHeader +dropTimeFromFragment :: + AF.HasHeader (Header blk) => + AnchoredFragment (HeaderWithTime blk) -> + AnchoredFragment (Header blk) +dropTimeFromFragment = AF.mapAnchoredFragment hwtHeader attachSlotTimeToFragment :: - ( AF.HasHeader (Header blk) - , Typeable blk - , ImmutableEraParams blk) - => TopLevelConfig blk - -> AnchoredFragment (Header blk) - -> AnchoredFragment (HeaderWithTime blk) + ( AF.HasHeader (Header blk) + , Typeable blk + , ImmutableEraParams blk + ) => + TopLevelConfig blk -> + AnchoredFragment (Header blk) -> + AnchoredFragment (HeaderWithTime blk) attachSlotTimeToFragment cfg = AF.mapAnchoredFragment (attachSlotTime cfg) attachSlotTime :: - (AF.HasHeader (Header blk), ImmutableEraParams blk) - => TopLevelConfig blk - -> Header blk - -> HeaderWithTime blk -attachSlotTime cfg hdr = HeaderWithTime { - hwtHeader = hdr + (AF.HasHeader (Header blk), ImmutableEraParams blk) => + TopLevelConfig blk -> + Header blk -> + HeaderWithTime blk +attachSlotTime cfg hdr = + HeaderWithTime + { hwtHeader = hdr , hwtSlotRelativeTime = runIdentity $ epochInfoSlotToRelativeTime ei (blockSlot hdr) } - where - ei = immutableEpochInfo cfg + where + ei = immutableEpochInfo cfg diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/InvertedMap.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/InvertedMap.hs index fc6faa8e4f..bd668a8e37 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/InvertedMap.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/InvertedMap.hs @@ -1,34 +1,38 @@ -module Test.Util.InvertedMap ( - -- * InvertedMap type +module Test.Util.InvertedMap + ( -- * InvertedMap type InvertedMap + -- * Query , 
Test.Util.InvertedMap.null + -- * Construction , toMap , unsafeInvertedMap + -- * Conversion , fromMap , unsafeCoercion + -- * Filter , spanAntitone + -- * Min/Max , minViewWithKey ) where -import Data.List.NonEmpty (NonEmpty) -import qualified Data.List.NonEmpty as NE -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Type.Coercion +import Data.List.NonEmpty (NonEmpty) +import Data.List.NonEmpty qualified as NE +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Type.Coercion -- | An inverted 'Map' -- -- INVARIANT the @k@s are all unique -- -- INVARIANT the 'NonEmpty's are all ascending --- newtype InvertedMap v k = UnsafeInvertedMap {getInvertedMap :: Map v (NonEmpty k)} - deriving (Show) + deriving Show unsafeCoercion :: Coercion (InvertedMap v k) (Map v (NonEmpty k)) unsafeCoercion = Coercion @@ -37,30 +41,29 @@ unsafeInvertedMap :: Map v (NonEmpty k) -> InvertedMap v k unsafeInvertedMap = UnsafeInvertedMap -- | This inverts the given 'Map' --- fromMap :: Ord v => Map k v -> InvertedMap v k fromMap m = - UnsafeInvertedMap $ Map.fromListWith (<>) $ - [ (v, k NE.:| []) | (k, v) <- Map.toList m ] + UnsafeInvertedMap $ + Map.fromListWith (<>) $ + [(v, k NE.:| []) | (k, v) <- Map.toList m] minViewWithKey :: InvertedMap v k -> Maybe ((v, NonEmpty k), InvertedMap v k) minViewWithKey = - fmap (fmap UnsafeInvertedMap) . Map.minViewWithKey . getInvertedMap + fmap (fmap UnsafeInvertedMap) . Map.minViewWithKey . getInvertedMap null :: InvertedMap v k -> Bool null = Map.null . getInvertedMap spanAntitone :: (v -> Bool) -> InvertedMap v k -> (InvertedMap v k, InvertedMap v k) spanAntitone f (UnsafeInvertedMap m) = (UnsafeInvertedMap l, UnsafeInvertedMap r) - where - (l, r) = Map.spanAntitone f m + where + (l, r) = Map.spanAntitone f m -- | This inverts the given 'InvertedMap' -- -- Inversion is an involution, so -- this returns to 'Map'.
--- toMap :: Ord k => InvertedMap v k -> Map k v toMap (UnsafeInvertedMap m) = - Map.fromList $ - [ (k, v) | (v, ks) <- Map.toList m, k <- NE.toList ks ] + Map.fromList $ + [(k, v) | (v, ks) <- Map.toList m, k <- NE.toList ks] diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/LedgerStateOnlyTables.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/LedgerStateOnlyTables.hs index 0b513c7181..4afe9f20e2 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/LedgerStateOnlyTables.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/LedgerStateOnlyTables.hs @@ -15,55 +15,61 @@ -- -- This is useful when we only need a ledger state and ledger tables, but not -- necessarily blocks with payloads (such as defined in @Test.Util.TestBlock@). -module Test.Util.LedgerStateOnlyTables ( - OTLedgerState +module Test.Util.LedgerStateOnlyTables + ( OTLedgerState , OTLedgerTables , emptyOTLedgerState , pattern OTLedgerState ) where -import Data.MemPack -import GHC.Generics -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Ledger.Basics (LedgerState) -import Ouroboros.Consensus.Ledger.Tables -import Ouroboros.Consensus.Ledger.Tables.Utils (emptyLedgerTables) -import Ouroboros.Consensus.Storage.LedgerDB.API -import Ouroboros.Consensus.Util.IndexedMemPack +import Data.MemPack +import GHC.Generics +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Ledger.Basics (LedgerState) +import Ouroboros.Consensus.Ledger.Tables +import Ouroboros.Consensus.Ledger.Tables.Utils (emptyLedgerTables) +import Ouroboros.Consensus.Storage.LedgerDB.API +import Ouroboros.Consensus.Util.IndexedMemPack {------------------------------------------------------------------------------- Simple ledger state -------------------------------------------------------------------------------} -type OTLedgerState k v = LedgerState (OTBlock k v) +type OTLedgerState k v = LedgerState (OTBlock k v) type OTLedgerTables k v = 
LedgerTables (OTLedgerState k v) -- | An empty type for blocks, which is only used to record the types @k@ and -- @v@. data OTBlock k v -data instance LedgerState (OTBlock k v) (mk :: MapKind) = OTLedgerState { - otlsLedgerState :: ValuesMK k v +data instance LedgerState (OTBlock k v) (mk :: MapKind) = OTLedgerState + { otlsLedgerState :: ValuesMK k v , otlsLedgerTables :: OTLedgerTables k v mk - } deriving Generic - -deriving instance (Ord k, Eq v, Eq (mk k v)) - => Eq (OTLedgerState k v mk) -deriving stock instance (Show k, Show v, Show (mk k v)) - => Show (OTLedgerState k v mk) -deriving instance (NoThunks k, NoThunks v, NoThunks (mk k v)) - => NoThunks (OTLedgerState k v mk) + } + deriving Generic + +deriving instance + (Ord k, Eq v, Eq (mk k v)) => + Eq (OTLedgerState k v mk) +deriving stock instance + (Show k, Show v, Show (mk k v)) => + Show (OTLedgerState k v mk) +deriving instance + (NoThunks k, NoThunks v, NoThunks (mk k v)) => + NoThunks (OTLedgerState k v mk) emptyOTLedgerState :: - (Ord k, Eq v, MemPack k, MemPack v, ZeroableMK mk) - => LedgerState (OTBlock k v) mk + (Ord k, Eq v, MemPack k, MemPack v, ZeroableMK mk) => + LedgerState (OTBlock k v) mk emptyOTLedgerState = OTLedgerState emptyMK emptyLedgerTables instance CanUpgradeLedgerTables (LedgerState (OTBlock k v)) where upgradeTables _ _ = id -instance MemPack v - => IndexedMemPack (LedgerState (OTBlock k v) EmptyMK) v where +instance + MemPack v => + IndexedMemPack (LedgerState (OTBlock k v) EmptyMK) v + where indexedTypeName _ = typeName @v indexedPackedByteCount _ = packedByteCount indexedPackM _ = packM @@ -77,8 +83,10 @@ instance (Ord k, MemPack k, MemPack v) => SerializeTablesWithHint (LedgerState ( Stowable -------------------------------------------------------------------------------} -instance (Ord k, Eq v, MemPack k, MemPack v) - => CanStowLedgerTables (OTLedgerState k v) where +instance + (Ord k, Eq v, MemPack k, MemPack v) => + CanStowLedgerTables (OTLedgerState k v) + where 
stowLedgerTables OTLedgerState{otlsLedgerTables} = OTLedgerState (getLedgerTables otlsLedgerTables) emptyLedgerTables @@ -91,13 +99,15 @@ instance (Ord k, Eq v, MemPack k, MemPack v) Simple ledger tables -------------------------------------------------------------------------------} -type instance TxIn (OTLedgerState k v) = k +type instance TxIn (OTLedgerState k v) = k type instance TxOut (OTLedgerState k v) = v -instance (Ord k, Eq v, Show k, Show v, MemPack k, MemPack v, NoThunks k, NoThunks v) - => HasLedgerTables (OTLedgerState k v) where +instance + (Ord k, Eq v, Show k, Show v, MemPack k, MemPack v, NoThunks k, NoThunks v) => + HasLedgerTables (OTLedgerState k v) + where projectLedgerTables OTLedgerState{otlsLedgerTables} = otlsLedgerTables withLedgerTables st lt = - st { otlsLedgerTables = lt } + st{otlsLedgerTables = lt} diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/LogicalClock.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/LogicalClock.hs index 42e9ec12f2..3be678e210 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/LogicalClock.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/LogicalClock.hs @@ -8,57 +8,58 @@ -- -- > import Test.Util.LogicalClock (LogicalClock) -- > import qualified Test.Util.LogicalClock as LogicalClock -module Test.Util.LogicalClock ( - -- * API +module Test.Util.LogicalClock + ( -- * API LogicalClock (..) , NumTicks (..) , Tick (..) 
+ -- * Construction , new , sufficientTimeFor + -- * Scheduling actions , blockUntilTick , onTick , tickWatcher + -- * Utilities , tickTracer ) where -import Control.Monad -import Control.ResourceRegistry -import Control.Tracer (Tracer, contramapM) -import Data.Time (NominalDiffTime) -import Data.Word -import GHC.Stack -import qualified Ouroboros.Consensus.BlockchainTime as BTime -import Ouroboros.Consensus.Util.IOLike -import Ouroboros.Consensus.Util.STM -import Ouroboros.Consensus.Util.Time -import System.Random (Random) +import Control.Monad +import Control.ResourceRegistry +import Control.Tracer (Tracer, contramapM) +import Data.Time (NominalDiffTime) +import Data.Word +import GHC.Stack +import Ouroboros.Consensus.BlockchainTime qualified as BTime +import Ouroboros.Consensus.Util.IOLike +import Ouroboros.Consensus.Util.STM +import Ouroboros.Consensus.Util.Time +import System.Random (Random) {------------------------------------------------------------------------------- API -------------------------------------------------------------------------------} -- | Logical time unit -newtype Tick = Tick { tickToWord64 :: Word64 } - deriving stock (Show, Eq, Ord) +newtype Tick = Tick {tickToWord64 :: Word64} + deriving stock (Show, Eq, Ord) deriving newtype (Num, Enum, Random) -- | Number of ticks the test will run for newtype NumTicks = NumTicks Word64 -- | Logical clock (in terms of ticks rather than actual 'UTCTime') -data LogicalClock m = LogicalClock { - -- | Get the current " time " - getCurrentTick :: STM m Tick - - -- | Wait for the end of time (each clock has a maximum number of ticks) - , waitUntilDone :: m () - - -- | Translate the logical clock to mock 'SystemTime' - , mockSystemTime :: BTime.SystemTime m - } +data LogicalClock m = LogicalClock + { getCurrentTick :: STM m Tick + -- ^ Get the current " time " + , waitUntilDone :: m () + -- ^ Wait for the end of time (each clock has a maximum number of ticks) + , mockSystemTime :: BTime.SystemTime m + -- ^ 
Translate the logical clock to mock 'SystemTime' + } {------------------------------------------------------------------------------- Construction @@ -86,31 +87,33 @@ tickDelay = 0.5 -------------------------------------------------------------------------------} -- | Execute action on every clock tick -tickWatcher :: LogicalClock m - -> (Tick -> m ()) - -> Watcher m Tick Tick +tickWatcher :: + LogicalClock m -> + (Tick -> m ()) -> + Watcher m Tick Tick tickWatcher clock action = - Watcher { - wFingerprint = id - , wInitial = Nothing - , wNotify = action - , wReader = getCurrentTick clock - } + Watcher + { wFingerprint = id + , wInitial = Nothing + , wNotify = action + , wReader = getCurrentTick clock + } -- | Execute action once at the specified tick -onTick :: (IOLike m, HasCallStack) - => ResourceRegistry m - -> LogicalClock m - -> String - -> Tick - -> m () - -> m () +onTick :: + (IOLike m, HasCallStack) => + ResourceRegistry m -> + LogicalClock m -> + String -> + Tick -> + m () -> + m () onTick registry clock threadLabel tick action = do - void $ - forkLinkedThread - registry - threadLabel - (waitForTick clock tick >> action) + void $ + forkLinkedThread + registry + threadLabel + (waitForTick clock tick >> action) -- | Block until the specified tick -- @@ -118,8 +121,9 @@ onTick registry clock threadLabel tick action = do -- 'True' if they were equal. 
blockUntilTick :: MonadSTM m => LogicalClock m -> Tick -> m Bool blockUntilTick clock tick = atomically $ do - now <- getCurrentTick clock - if now > tick then + now <- getCurrentTick clock + if now > tick + then return True else do when (now < tick) retry @@ -130,12 +134,12 @@ blockUntilTick clock tick = atomically $ do -------------------------------------------------------------------------------} tickTracer :: - MonadSTM m - => LogicalClock m - -> Tracer m (Tick, ev) - -> Tracer m ev + MonadSTM m => + LogicalClock m -> + Tracer m (Tick, ev) -> + Tracer m ev tickTracer clock = contramapM $ \ev -> - (,ev) <$> atomically (getCurrentTick clock) + (,ev) <$> atomically (getCurrentTick clock) {------------------------------------------------------------------------------- Internal @@ -145,61 +149,64 @@ tickTracer clock = contramapM $ \ev -> -- -- NOTE: Tests using the logical clock really should not need to know what the -- tick delay is; that's kind of the point of a /logical/ clock after all. -newWithDelay :: (IOLike m, HasCallStack) - => ResourceRegistry m - -> NumTicks - -> NominalDiffTime - -> m (LogicalClock m) +newWithDelay :: + (IOLike m, HasCallStack) => + ResourceRegistry m -> + NumTicks -> + NominalDiffTime -> + m (LogicalClock m) newWithDelay registry (NumTicks numTicks) tickLen = do - current <- newTVarIO 0 - done <- newEmptyMVar - _thread <- forkThread registry "ticker" $ do - -- Tick 0 is the first tick, so increment @numTicks - 1@ times - replicateM_ (fromIntegral numTicks - 1) $ do - -- Give simulator chance to execute other threads - threadDelay (nominalDelay tickLen) - atomically $ modifyTVar current (+ 1) - - -- Give tests that need to do some final processing on the last - -- tick a chance to do that before we indicate completion. 
- threadDelay (nominalDelay tickLen) - putMVar done () - - return LogicalClock { - getCurrentTick = Tick <$> readTVar current - , waitUntilDone = readMVar done - , mockSystemTime = BTime.SystemTime { - BTime.systemTimeCurrent = do - tick <- atomically $ readTVar current - return $ BTime.RelativeTime $ fromIntegral tick * tickLen - , BTime.systemTimeWait = - return () - } + current <- newTVarIO 0 + done <- newEmptyMVar + _thread <- forkThread registry "ticker" $ do + -- Tick 0 is the first tick, so increment @numTicks - 1@ times + replicateM_ (fromIntegral numTicks - 1) $ do + -- Give simulator chance to execute other threads + threadDelay (nominalDelay tickLen) + atomically $ modifyTVar current (+ 1) + + -- Give tests that need to do some final processing on the last + -- tick a chance to do that before we indicate completion. + threadDelay (nominalDelay tickLen) + putMVar done () + + return + LogicalClock + { getCurrentTick = Tick <$> readTVar current + , waitUntilDone = readMVar done + , mockSystemTime = + BTime.SystemTime + { BTime.systemTimeCurrent = do + tick <- atomically $ readTVar current + return $ BTime.RelativeTime $ fromIntegral tick * tickLen + , BTime.systemTimeWait = + return () + } } -- | Wait for the specified tick (blocking the current thread) waitForTick :: IOLike m => LogicalClock m -> Tick -> m () waitForTick clock tick = do - start <- atomically $ getCurrentTick clock - when (start >= tick) $ - throwIO $ WaitForTickTooLate { - tickRequest = tick + start <- atomically $ getCurrentTick clock + when (start >= tick) $ + throwIO $ + WaitForTickTooLate + { tickRequest = tick , tickCurrent = start } - atomically $ do - now <- getCurrentTick clock - check (now >= tick) + atomically $ do + now <- getCurrentTick clock + check (now >= tick) -- | Thrown by 'waitForTick' (and hence 'onTick') -data WaitForTickException = - WaitForTickTooLate { - -- | The time the action should have run at - tickRequest :: Tick - - -- | The time when 'onTick' was called - , 
tickCurrent :: Tick - } +data WaitForTickException + = WaitForTickTooLate + { tickRequest :: Tick + -- ^ The time the action should have run at + , tickCurrent :: Tick + -- ^ The time when 'onTick' was called + } deriving (Eq, Show) instance Exception WaitForTickException diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/MockChain.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/MockChain.hs index 5a53582a83..092ab742c6 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/MockChain.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/MockChain.hs @@ -4,40 +4,40 @@ -- -- Intended for qualified import -- > import qualified Test.Util.MockChain as Chain -module Test.Util.MockChain ( - commonPrefix +module Test.Util.MockChain + ( commonPrefix , dropLastBlocks , lastSlot ) where -import Data.Foldable as Foldable (foldl') -import Data.Sequence.Strict (StrictSeq (..)) -import Ouroboros.Consensus.Block -import Ouroboros.Network.Mock.Chain +import Data.Foldable as Foldable (foldl') +import Data.Sequence.Strict (StrictSeq (..)) +import Ouroboros.Consensus.Block +import Ouroboros.Network.Mock.Chain {------------------------------------------------------------------------------- Utility functions on chains -------------------------------------------------------------------------------} lastSlot :: HasHeader b => Chain b -> Maybe SlotNo -lastSlot Genesis = Nothing +lastSlot Genesis = Nothing lastSlot (_ :> b) = Just $ blockSlot b commonPrefix :: Eq b => Chain b -> Chain b -> Chain b commonPrefix c d = chainFromSeq $ go (chainToSeq c) (chainToSeq d) - where - go :: Eq b => StrictSeq b -> StrictSeq b -> StrictSeq b - go Empty _ = Empty - go _ Empty = Empty - go (x :<| xs) (y :<| ys) - | x == y = x :<| go xs ys - | otherwise = Empty + where + go :: Eq b => StrictSeq b -> StrictSeq b -> StrictSeq b + go Empty _ = Empty + go _ Empty = Empty + go (x :<| xs) (y :<| ys) + | x == y = x :<| go xs ys + | 
otherwise = Empty dropLastBlocks :: Int -> Chain b -> Chain b dropLastBlocks _ Genesis = Genesis dropLastBlocks i bs@(cs :> _) - | i <= 0 = bs - | otherwise = dropLastBlocks (i - 1) cs + | i <= 0 = bs + | otherwise = dropLastBlocks (i - 1) cs {------------------------------------------------------------------------------- Internal auxiliary diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/Arbitrary.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/Arbitrary.hs index 6c01ebb69d..69c8febdc1 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/Arbitrary.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/Arbitrary.hs @@ -12,66 +12,86 @@ {-# LANGUAGE TypeApplications #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Test.Util.Orphans.Arbitrary ( - SmallDiffTime (..) +module Test.Util.Orphans.Arbitrary + ( SmallDiffTime (..) 
, genLimitedEpochSize , genLimitedSlotNo , genSmallEpochNo , genSmallSlotNo + -- * Time , genNominalDiffTime50Years , genUTCTime50Years ) where -import Cardano.Ledger.BaseTypes (NonZero (..), unsafeNonZero) -import Data.Coerce (coerce) -import Data.SOP.BasicFunctors -import Data.SOP.Constraint -import Data.SOP.Dict (Dict (..), all_NP, mapAll) -import Data.SOP.Functors (Flip (..)) -import Data.SOP.NonEmpty (IsNonEmpty, ProofNonEmpty (..), - checkIsNonEmpty, isNonEmpty) -import Data.SOP.Sing -import Data.SOP.Strict -import Data.Time -import Data.Word (Word64) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.HardFork.Combinator (HardForkBlock, - HardForkChainDepState, HardForkState (..), - LedgerEraInfo (..), LedgerState (..), Mismatch (..), - MismatchEraInfo (..), SingleEraBlock (..), SingleEraInfo, - Telescope (..), proxySingle) -import Ouroboros.Consensus.HardFork.Combinator.State (Current (..), - Past (..)) -import Ouroboros.Consensus.HardFork.History (Bound (..)) -import Ouroboros.Consensus.HardFork.History.EraParams -import Ouroboros.Consensus.HeaderValidation (TipInfo) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck - (ClockSkew) -import qualified Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck as InFutureCheck -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.Protocol.Abstract (ChainDepState, - SecurityParam (..)) -import Ouroboros.Consensus.Storage.ChainDB.API (LoE (..)) -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal - (ChunkNo (..), ChunkSize (..), RelativeSlot (..)) -import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout -import qualified Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index as Index -import Ouroboros.Consensus.TypeFamilyWrappers -import Ouroboros.Consensus.Util (Flag 
(..)) -import Ouroboros.Network.SizeInBytes -import Test.Cardano.Ledger.Binary.Arbitrary () -import Test.Cardano.Slotting.Arbitrary () -import Test.QuickCheck hiding (Fixed (..)) -import Test.QuickCheck.Instances () -import Test.Util.Time (dawnOfTime) +import Cardano.Ledger.BaseTypes (NonZero (..), unsafeNonZero) +import Data.Coerce (coerce) +import Data.SOP.BasicFunctors +import Data.SOP.Constraint +import Data.SOP.Dict (Dict (..), all_NP, mapAll) +import Data.SOP.Functors (Flip (..)) +import Data.SOP.NonEmpty + ( IsNonEmpty + , ProofNonEmpty (..) + , checkIsNonEmpty + , isNonEmpty + ) +import Data.SOP.Sing +import Data.SOP.Strict +import Data.Time +import Data.Word (Word64) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.HardFork.Combinator + ( HardForkBlock + , HardForkChainDepState + , HardForkState (..) + , LedgerEraInfo (..) + , LedgerState (..) + , Mismatch (..) + , MismatchEraInfo (..) + , SingleEraBlock (..) + , SingleEraInfo + , Telescope (..) + , proxySingle + ) +import Ouroboros.Consensus.HardFork.Combinator.State + ( Current (..) + , Past (..) + ) +import Ouroboros.Consensus.HardFork.History (Bound (..)) +import Ouroboros.Consensus.HardFork.History.EraParams +import Ouroboros.Consensus.HeaderValidation (TipInfo) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck + ( ClockSkew + ) +import Ouroboros.Consensus.MiniProtocol.ChainSync.Client.InFutureCheck qualified as InFutureCheck +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.Protocol.Abstract + ( ChainDepState + , SecurityParam (..) + ) +import Ouroboros.Consensus.Storage.ChainDB.API (LoE (..)) +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Internal + ( ChunkNo (..) + , ChunkSize (..) + , RelativeSlot (..) 
+ ) +import Ouroboros.Consensus.Storage.ImmutableDB.Chunks.Layout +import Ouroboros.Consensus.Storage.ImmutableDB.Impl.Index qualified as Index +import Ouroboros.Consensus.TypeFamilyWrappers +import Ouroboros.Consensus.Util (Flag (..)) +import Ouroboros.Network.SizeInBytes +import Test.Cardano.Ledger.Binary.Arbitrary () +import Test.Cardano.Slotting.Arbitrary () +import Test.QuickCheck hiding (Fixed (..)) +import Test.QuickCheck.Instances () +import Test.Util.Time (dawnOfTime) minNumCoreNodes :: Word64 minNumCoreNodes = 2 @@ -86,9 +106,9 @@ instance Arbitrary NumCoreNodes where -- it uses a much wider timespan. genNominalDiffTime50Years :: Gen NominalDiffTime genNominalDiffTime50Years = conv <$> choose (0, 50 * daysPerYear * secondsPerDay) - where - conv :: Double -> NominalDiffTime - conv = realToFrac + where + conv :: Double -> NominalDiffTime + conv = realToFrac -- | Picks moment between 'dawnOfTime' and (roughly) 50 years later -- @@ -104,8 +124,8 @@ instance Arbitrary SlotLength where -- Try to shrink the slot length to just "1", for tests where the slot length -- really doesn't matter very much shrink slotLen = if slotLen /= oneSec then [oneSec] else [] - where - oneSec = slotLengthFromSec 1 + where + oneSec = slotLengthFromSec 1 instance Arbitrary RelativeSlot where arbitrary = RelativeSlot <$> arbitrary <*> arbitrary <*> arbitrary @@ -117,13 +137,13 @@ instance Arbitrary RelativeSlot where -- seconds. genLimitedSlotNo :: Gen SlotNo genLimitedSlotNo = - SlotNo <$> arbitrary `suchThat` (< 0x8000000000000000) + SlotNo <$> arbitrary `suchThat` (< 0x8000000000000000) -- | Generate a small SlotNo for the state machine tests. The runtime of the -- StateMachine prop_sequential tests is proportional to the upper bound. genSmallSlotNo :: Gen SlotNo genSmallSlotNo = - SlotNo <$> choose (0, 1000) + SlotNo <$> choose (0, 1000) -- | The tests for 'CumulEpochSizes' require that the sum of a list of these -- values does not overflow.
@@ -131,11 +151,11 @@ genSmallSlotNo = -- An epoch size must be > 0. genLimitedEpochSize :: Gen EpochSize genLimitedEpochSize = - EpochSize <$> choose (1, 100_000) + EpochSize <$> choose (1, 100_000) genSmallEpochNo :: Gen EpochNo genSmallEpochNo = - EpochNo <$> choose (0, 10000) + EpochNo <$> choose (0, 10000) -- | This picks an 'EpochNo' between 0 and 10000 -- @@ -143,30 +163,31 @@ genSmallEpochNo = -- due to huge epoch numbers and even huger slot numbers. instance Arbitrary ChunkNo where arbitrary = ChunkNo <$> choose (0, 10000) - shrink = genericShrink + shrink = genericShrink -- | Picks a 'ChunkSize' between 1 and 100, and randomly choose to enable EBBs instance Arbitrary ChunkSize where arbitrary = ChunkSize <$> arbitrary <*> choose (1, 100) - shrink = genericShrink + shrink = genericShrink instance Arbitrary ChunkSlot where arbitrary = UnsafeChunkSlot <$> arbitrary <*> arbitrary - shrink = genericShrink + shrink = genericShrink instance Arbitrary ClockSkew where arbitrary = InFutureCheck.clockSkewInSeconds <$> choose (0, 5) - shrink skew = concat [ - -- Shrink to some simple values, including 0 - -- (it would be useful to know if a test fails only when having non-zero - -- clock skew) - [ skew0 | skew0 < skew ] - , [ skew1 | skew1 < skew ] - ] - where - skew0, skew1 :: ClockSkew - skew0 = InFutureCheck.clockSkewInSeconds 0 - skew1 = InFutureCheck.clockSkewInSeconds 1 + shrink skew = + concat + [ -- Shrink to some simple values, including 0 + -- (it would be useful to know if a test fails only when having non-zero + -- clock skew) + [skew0 | skew0 < skew] + , [skew1 | skew1 < skew] + ] + where + skew0, skew1 :: ClockSkew + skew0 = InFutureCheck.clockSkewInSeconds 0 + skew1 = InFutureCheck.clockSkewInSeconds 1 deriving newtype instance Arbitrary SizeInBytes @@ -191,21 +212,22 @@ deriving newtype instance Arbitrary SizeInBytes -- * With a 0.1 second precision -- * Shrinks newtype SmallDiffTime = SmallDiffTime NominalDiffTime - deriving (Show) + deriving Show 
instance Arbitrary SmallDiffTime where arbitrary = conv <$> choose (0, 1000 * 20 * 10 * 10) - where - -- NominalDiffTime conversion functions treat it as seconds - conv :: Integer -> SmallDiffTime - conv n = SmallDiffTime $ realToFrac seconds - where - seconds :: Double - seconds = fromInteger n / 10 + where + -- NominalDiffTime conversion functions treat it as seconds + conv :: Integer -> SmallDiffTime + conv n = SmallDiffTime $ realToFrac seconds + where + seconds :: Double + seconds = fromInteger n / 10 -- try to shrink to some small, simple values -- (include 1.5 so that we can shrink to a simple, but yet not whole, value) - shrink (SmallDiffTime d) = map SmallDiffTime $ + shrink (SmallDiffTime d) = + map SmallDiffTime $ filter (< d) [1, 1.5, 2, 3, 100] {------------------------------------------------------------------------------- @@ -227,48 +249,59 @@ secondsPerDay = 24 * 60 * 60 -------------------------------------------------------------------------------} -- | Forwarding -instance Arbitrary (ChainDepState (BlockProtocol blk)) - => Arbitrary (WrapChainDepState blk) where +instance + Arbitrary (ChainDepState (BlockProtocol blk)) => + Arbitrary (WrapChainDepState blk) + where arbitrary = WrapChainDepState <$> arbitrary - shrink x = WrapChainDepState <$> shrink (unwrapChainDepState x) + shrink x = WrapChainDepState <$> shrink (unwrapChainDepState x) -- | Forwarding -instance Arbitrary (HeaderHash blk) - => Arbitrary (WrapHeaderHash blk) where +instance + Arbitrary (HeaderHash blk) => + Arbitrary (WrapHeaderHash blk) + where arbitrary = WrapHeaderHash <$> arbitrary - shrink x = WrapHeaderHash <$> shrink (unwrapHeaderHash x) + shrink x = WrapHeaderHash <$> shrink (unwrapHeaderHash x) -- | Forwarding -instance Arbitrary (TipInfo blk) - => Arbitrary (WrapTipInfo blk) where +instance + Arbitrary (TipInfo blk) => + Arbitrary (WrapTipInfo blk) + where arbitrary = WrapTipInfo <$> arbitrary - shrink x = WrapTipInfo <$> shrink (unwrapTipInfo x) + shrink x = 
WrapTipInfo <$> shrink (unwrapTipInfo x) -- | Forwarding instance Arbitrary a => Arbitrary (I a) where arbitrary = I <$> arbitrary - shrink x = I <$> shrink (unI x) + shrink x = I <$> shrink (unI x) -- | Forwarding -instance Arbitrary (ApplyTxErr blk) - => Arbitrary (WrapApplyTxErr blk) where +instance + Arbitrary (ApplyTxErr blk) => + Arbitrary (WrapApplyTxErr blk) + where arbitrary = WrapApplyTxErr <$> arbitrary - shrink x = WrapApplyTxErr <$> shrink (unwrapApplyTxErr x) + shrink x = WrapApplyTxErr <$> shrink (unwrapApplyTxErr x) {------------------------------------------------------------------------------- NS -------------------------------------------------------------------------------} -instance (All (Arbitrary `Compose` f) xs, IsNonEmpty xs) - => Arbitrary (NS f xs) where +instance + (All (Arbitrary `Compose` f) xs, IsNonEmpty xs) => + Arbitrary (NS f xs) + where arbitrary = case isNonEmpty (Proxy @xs) of - ProofNonEmpty _ pf -> case checkIsNonEmpty pf of - Nothing -> Z <$> arbitrary - Just (ProofNonEmpty _ pf') -> frequency + ProofNonEmpty _ pf -> case checkIsNonEmpty pf of + Nothing -> Z <$> arbitrary + Just (ProofNonEmpty _ pf') -> + frequency [ (1, Z <$> arbitrary) - -- Use the number of remaining cases (one less than @xs@) as the + , -- Use the number of remaining cases (one less than @xs@) as the -- weight so that the distribution is uniform - , (lengthSList pf', S <$> arbitrary) + (lengthSList pf', S <$> arbitrary) ] shrink = hctraverse' (Proxy @(Arbitrary `Compose` f)) shrink @@ -280,7 +313,8 @@ instance Arbitrary EraParams where arbitrary = EraParams <$> arbitrary <*> arbitrary <*> arbitrary <*> (GenesisWindow <$> arbitrary) instance Arbitrary SafeZone where - arbitrary = oneof + arbitrary = + oneof [ StandardSafeZone <$> arbitrary , return UnsafeIndefiniteSafeZone ] @@ -294,10 +328,10 @@ instance Arbitrary (f y x) => Arbitrary (Flip f (x :: kx) (y :: ky)) where instance Arbitrary Bound where arbitrary = - Bound - <$> (RelativeTime <$> arbitrary) 
- <*> (SlotNo <$> arbitrary) - <*> (EpochNo <$> arbitrary) + Bound + <$> (RelativeTime <$> arbitrary) + <*> (SlotNo <$> arbitrary) + <*> (EpochNo <$> arbitrary) instance Arbitrary (K Past blk) where arbitrary = K <$> (Past <$> arbitrary <*> arbitrary) @@ -305,86 +339,100 @@ instance Arbitrary (K Past blk) where instance Arbitrary (f blk) => Arbitrary (Current f blk) where arbitrary = Current <$> arbitrary <*> arbitrary -instance ( IsNonEmpty xs - , All (Arbitrary `Compose` f) xs - , All (Arbitrary `Compose` g) xs - ) => Arbitrary (Telescope g f xs) where +instance + ( IsNonEmpty xs + , All (Arbitrary `Compose` f) xs + , All (Arbitrary `Compose` g) xs + ) => + Arbitrary (Telescope g f xs) + where arbitrary = case isNonEmpty (Proxy @xs) of - ProofNonEmpty _ pf -> case checkIsNonEmpty pf of - Nothing -> TZ <$> arbitrary - Just (ProofNonEmpty _ pf') -> frequency + ProofNonEmpty _ pf -> case checkIsNonEmpty pf of + Nothing -> TZ <$> arbitrary + Just (ProofNonEmpty _ pf') -> + frequency [ (1, TZ <$> arbitrary) , (lengthSList pf', TS <$> arbitrary <*> arbitrary) ] shrink = hctraverse' (Proxy @(Arbitrary `Compose` f)) shrink -instance (IsNonEmpty xs, SListI xs, All (Arbitrary `Compose` Flip LedgerState mk) xs) - => Arbitrary (LedgerState (HardForkBlock xs) mk) where +instance + (IsNonEmpty xs, SListI xs, All (Arbitrary `Compose` Flip LedgerState mk) xs) => + Arbitrary (LedgerState (HardForkBlock xs) mk) + where arbitrary = case (dictKPast, dictCurrentLedgerState) of - (Dict, Dict) -> inj <$> arbitrary - where - inj :: - Telescope (K Past) (Current (Flip LedgerState mk)) xs - -> LedgerState (HardForkBlock xs) mk - inj = coerce - - dictKPast :: Dict (All (Arbitrary `Compose` (K Past))) xs - dictKPast = all_NP $ hpure Dict - - dictCurrentLedgerState :: - Dict (All (Arbitrary `Compose` (Current (Flip LedgerState mk)))) xs - dictCurrentLedgerState = - mapAll - @(Arbitrary `Compose` Flip LedgerState mk) - @(Arbitrary `Compose` Current (Flip LedgerState mk)) - (\Dict -> Dict) - 
Dict - -instance (IsNonEmpty xs, SListI xs, All (Arbitrary `Compose` WrapChainDepState) xs) - => Arbitrary (HardForkChainDepState xs) where + (Dict, Dict) -> inj <$> arbitrary + where + inj :: + Telescope (K Past) (Current (Flip LedgerState mk)) xs -> + LedgerState (HardForkBlock xs) mk + inj = coerce + + dictKPast :: Dict (All (Arbitrary `Compose` (K Past))) xs + dictKPast = all_NP $ hpure Dict + + dictCurrentLedgerState :: + Dict (All (Arbitrary `Compose` (Current (Flip LedgerState mk)))) xs + dictCurrentLedgerState = + mapAll + @(Arbitrary `Compose` Flip LedgerState mk) + @(Arbitrary `Compose` Current (Flip LedgerState mk)) + (\Dict -> Dict) + Dict + +instance + (IsNonEmpty xs, SListI xs, All (Arbitrary `Compose` WrapChainDepState) xs) => + Arbitrary (HardForkChainDepState xs) + where arbitrary = case (dictKPast, dictCurrentWrapChainDepState) of - (Dict, Dict) -> inj <$> arbitrary - where - inj :: - Telescope (K Past) (Current WrapChainDepState) xs - -> HardForkChainDepState xs - inj = coerce - - dictKPast :: Dict (All (Arbitrary `Compose` (K Past))) xs - dictKPast = all_NP $ hpure Dict - - dictCurrentWrapChainDepState :: - Dict (All (Arbitrary `Compose` (Current WrapChainDepState))) xs - dictCurrentWrapChainDepState = - mapAll - @(Arbitrary `Compose` WrapChainDepState) - @(Arbitrary `Compose` Current WrapChainDepState) - (\Dict -> Dict) - Dict + (Dict, Dict) -> inj <$> arbitrary + where + inj :: + Telescope (K Past) (Current WrapChainDepState) xs -> + HardForkChainDepState xs + inj = coerce + + dictKPast :: Dict (All (Arbitrary `Compose` (K Past))) xs + dictKPast = all_NP $ hpure Dict + + dictCurrentWrapChainDepState :: + Dict (All (Arbitrary `Compose` (Current WrapChainDepState))) xs + dictCurrentWrapChainDepState = + mapAll + @(Arbitrary `Compose` WrapChainDepState) + @(Arbitrary `Compose` Current WrapChainDepState) + (\Dict -> Dict) + Dict {------------------------------------------------------------------------------- Mismatch & MismatchEraInfo 
-------------------------------------------------------------------------------} -instance ( IsNonEmpty xs - , All (Arbitrary `Compose` f) (x ': xs) - , All (Arbitrary `Compose` g) (x ': xs) - ) => Arbitrary (Mismatch f g (x ': xs)) where - arbitrary = case isNonEmpty (Proxy @xs) of - ProofNonEmpty _ pf -> frequency $ mconcat [ - -- length (x ': xs) = n + 1 - -- This line: n cases, the line below: also n cases. - [ (1, ML <$> arbitrary <*> arbitrary) - , (1, MR <$> arbitrary <*> arbitrary) - ] - , case checkIsNonEmpty pf of - Nothing -> [] - -- The line below: n * (n - 1) cases. We want the weights to be - -- proportional so that the distribution is uniform. We divide each - -- weight by n to get 1 and 1 for the ML and MR cases above and n - 1 (= - -- lengthSList pxs') for the MS case below. - Just (ProofNonEmpty _ pxs') -> [(lengthSList pxs', MS <$> arbitrary)] - ] +instance + ( IsNonEmpty xs + , All (Arbitrary `Compose` f) (x ': xs) + , All (Arbitrary `Compose` g) (x ': xs) + ) => + Arbitrary (Mismatch f g (x ': xs)) + where + arbitrary = case isNonEmpty (Proxy @xs) of + ProofNonEmpty _ pf -> + frequency $ + mconcat + [ -- length (x ': xs) = n + 1 + -- This line: n cases, the line below: also n cases. + + [ (1, ML <$> arbitrary <*> arbitrary) + , (1, MR <$> arbitrary <*> arbitrary) + ] + , case checkIsNonEmpty pf of + Nothing -> [] + -- The line below: n * (n - 1) cases. We want the weights to be + -- proportional so that the distribution is uniform. We divide each + -- weight by n to get 1 and 1 for the ML and MR cases above and n - 1 (= + -- lengthSList pxs') for the MS case below. 
+ Just (ProofNonEmpty _ pxs') -> [(lengthSList pxs', MS <$> arbitrary)] + ] instance SingleEraBlock blk => Arbitrary (SingleEraInfo blk) where arbitrary = return $ singleEraInfo (Proxy @blk) @@ -392,19 +440,21 @@ instance SingleEraBlock blk => Arbitrary (SingleEraInfo blk) where instance SingleEraBlock blk => Arbitrary (LedgerEraInfo blk) where arbitrary = return $ LedgerEraInfo $ singleEraInfo (Proxy @blk) -instance (All SingleEraBlock (x ': xs), IsNonEmpty xs) - => Arbitrary (MismatchEraInfo (x ': xs)) where +instance + (All SingleEraBlock (x ': xs), IsNonEmpty xs) => + Arbitrary (MismatchEraInfo (x ': xs)) + where arbitrary = - case (dictSingleEraInfo, dictLedgerEraInfo) of - (Dict, Dict) -> MismatchEraInfo <$> arbitrary - where - dictSingleEraInfo :: - Dict (All (Arbitrary `Compose` SingleEraInfo)) (x ': xs) - dictSingleEraInfo = all_NP $ hcpure proxySingle Dict + case (dictSingleEraInfo, dictLedgerEraInfo) of + (Dict, Dict) -> MismatchEraInfo <$> arbitrary + where + dictSingleEraInfo :: + Dict (All (Arbitrary `Compose` SingleEraInfo)) (x ': xs) + dictSingleEraInfo = all_NP $ hcpure proxySingle Dict - dictLedgerEraInfo :: - Dict (All (Arbitrary `Compose` LedgerEraInfo)) (x ': xs) - dictLedgerEraInfo = all_NP $ hcpure proxySingle Dict + dictLedgerEraInfo :: + Dict (All (Arbitrary `Compose` LedgerEraInfo)) (x ': xs) + dictLedgerEraInfo = all_NP $ hcpure proxySingle Dict {------------------------------------------------------------------------------- Query @@ -414,23 +464,26 @@ instance Arbitrary QueryVersion where arbitrary = arbitraryBoundedEnum shrink v = if v == minBound then [] else [pred v] -instance Arbitrary (SomeBlockQuery (BlockQuery blk)) - => Arbitrary (SomeSecond Query blk) where +instance + Arbitrary (SomeBlockQuery (BlockQuery blk)) => + Arbitrary (SomeSecond Query blk) + where arbitrary = do SomeBlockQuery someBlockQuery <- arbitrary return (SomeSecond (BlockQuery someBlockQuery)) instance Arbitrary Index.CacheConfig where arbitrary = do - 
pastChunksToCache <- frequency - -- Pick small values so that we exercise cache eviction - [ (1, return 1) - , (1, return 2) - , (1, choose (3, 10)) - ] + pastChunksToCache <- + frequency + -- Pick small values so that we exercise cache eviction + [ (1, return 1) + , (1, return 2) + , (1, choose (3, 10)) + ] -- TODO create a Cmd that advances time, so this is being exercised too. expireUnusedAfter <- (fromIntegral :: Int -> DiffTime) <$> choose (1, 100) - return Index.CacheConfig {Index.pastChunksToCache, Index.expireUnusedAfter} + return Index.CacheConfig{Index.pastChunksToCache, Index.expireUnusedAfter} {------------------------------------------------------------------------------- LoE @@ -438,7 +491,7 @@ instance Arbitrary Index.CacheConfig where instance Arbitrary a => Arbitrary (LoE a) where arbitrary = oneof [pure LoEDisabled, LoEEnabled <$> arbitrary] - shrink LoEDisabled = [] + shrink LoEDisabled = [] shrink (LoEEnabled x) = LoEDisabled : map LoEEnabled (shrink x) {------------------------------------------------------------------------------- @@ -447,6 +500,6 @@ instance Arbitrary a => Arbitrary (LoE a) where instance Arbitrary SecurityParam where arbitrary = SecurityParam . 
unsafeNonZero <$> choose (1, 6) - shrink (SecurityParam k) = [ SecurityParam (unsafeNonZero x) | x <- shrink (unNonZero k), x > 0 ] + shrink (SecurityParam k) = [SecurityParam (unsafeNonZero x) | x <- shrink (unNonZero k), x > 0] deriving newtype instance Arbitrary (Flag symbol) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/IOLike.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/IOLike.hs index 3cadd35a04..246b35d83b 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/IOLike.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/IOLike.hs @@ -1,14 +1,13 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE MultiParamTypeClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Util.Orphans.IOLike () where -import Control.Monad.Base -import Control.Monad.IOSim -import Ouroboros.Consensus.Util.IOLike -import Test.Util.Orphans.NoThunks () +import Control.Monad.Base +import Control.Monad.IOSim +import Ouroboros.Consensus.Util.IOLike +import Test.Util.Orphans.NoThunks () instance IOLike (IOSim s) where forgetSignKeyKES = const $ return () diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/NoThunks.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/NoThunks.hs index 089875fc3d..8ea8364b46 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/NoThunks.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/NoThunks.hs @@ -5,43 +5,42 @@ {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Util.Orphans.NoThunks () where -import Control.Concurrent.Class.MonadMVar -import Control.Concurrent.Class.MonadMVar.Strict -import Control.Concurrent.Class.MonadSTM.Strict -import Control.Monad.IOSim -import Control.Monad.ST.Lazy -import Control.Monad.ST.Unsafe (unsafeSTToIO) 
-import Data.Proxy -import NoThunks.Class (NoThunks (..)) -import Ouroboros.Consensus.Util.MonadSTM.StrictSVar -import qualified Ouroboros.Consensus.Util.NormalForm.StrictMVar as NormalForm -import qualified Ouroboros.Consensus.Util.NormalForm.StrictTVar as NormalForm -import System.FS.API.Types -import System.FS.Sim.FsTree -import System.FS.Sim.MockFS +import Control.Concurrent.Class.MonadMVar +import Control.Concurrent.Class.MonadMVar.Strict +import Control.Concurrent.Class.MonadSTM.Strict +import Control.Monad.IOSim +import Control.Monad.ST.Lazy +import Control.Monad.ST.Unsafe (unsafeSTToIO) +import Data.Proxy +import NoThunks.Class (NoThunks (..)) +import Ouroboros.Consensus.Util.MonadSTM.StrictSVar +import Ouroboros.Consensus.Util.NormalForm.StrictMVar qualified as NormalForm +import Ouroboros.Consensus.Util.NormalForm.StrictTVar qualified as NormalForm +import System.FS.API.Types +import System.FS.Sim.FsTree +import System.FS.Sim.MockFS instance NoThunks a => NoThunks (StrictSVar (IOSim s) a) where showTypeOf _ = "StrictSVar IOSim" - wNoThunks ctxt StrictSVar { tvar } = do - a <- unsafeSTToIO $ lazyToStrictST $ inspectTVar (Proxy :: Proxy (IOSim s)) tvar - noThunks ctxt a + wNoThunks ctxt StrictSVar{tvar} = do + a <- unsafeSTToIO $ lazyToStrictST $ inspectTVar (Proxy :: Proxy (IOSim s)) tvar + noThunks ctxt a instance NoThunks a => NoThunks (StrictMVar (IOSim s) a) where showTypeOf _ = "StrictMVar IOSim" wNoThunks ctxt mvar = do - aMay <- unsafeSTToIO $ lazyToStrictST $ inspectMVar (Proxy :: Proxy (IOSim s)) (toLazyMVar mvar) - noThunks ctxt aMay + aMay <- unsafeSTToIO $ lazyToStrictST $ inspectMVar (Proxy :: Proxy (IOSim s)) (toLazyMVar mvar) + noThunks ctxt aMay instance NoThunks a => NoThunks (StrictTMVar (IOSim s) a) where showTypeOf _ = "StrictTMVar IOSim" wNoThunks ctxt mvar = do - aMay <- unsafeSTToIO $ lazyToStrictST $ inspectTMVar (Proxy :: Proxy (IOSim s)) (toLazyTMVar mvar) - noThunks ctxt aMay + aMay <- unsafeSTToIO $ lazyToStrictST $ inspectTMVar 
(Proxy :: Proxy (IOSim s)) (toLazyTMVar mvar) + noThunks ctxt aMay instance NoThunks (StrictMVar (IOSim s) a) => NoThunks (NormalForm.StrictMVar (IOSim s) a) where showTypeOf _ = "StrictMVar IOSim" diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/Serialise.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/Serialise.hs index e4e7a3d958..b452dc4a61 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/Serialise.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/Serialise.hs @@ -1,13 +1,11 @@ {-# LANGUAGE DerivingVia #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE StandaloneDeriving #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Util.Orphans.Serialise () where -import Codec.Serialise (Serialise) -import Ouroboros.Network.SizeInBytes +import Codec.Serialise (Serialise) +import Ouroboros.Network.SizeInBytes deriving newtype instance Serialise SizeInBytes - diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/SignableRepresentation.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/SignableRepresentation.hs index 5bdfbdae4e..c79e34eb74 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/SignableRepresentation.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/SignableRepresentation.hs @@ -1,10 +1,9 @@ {-# LANGUAGE OverloadedStrings #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Util.Orphans.SignableRepresentation () where -import Cardano.Crypto.Util +import Cardano.Crypto.Util instance SignableRepresentation () where getSignableRepresentation () = "" diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/ToExpr.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/ToExpr.hs index 0fdeca1416..85285695b7 100644 --- 
a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/ToExpr.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Orphans/ToExpr.hs @@ -9,36 +9,35 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} module Test.Util.Orphans.ToExpr () where -import qualified Control.Monad.Class.MonadTime.SI as SI -import Data.TreeDiff -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Mempool.API -import Ouroboros.Consensus.Mempool.TxSeq -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ChainDB.API (LoE (..)) -import Ouroboros.Consensus.Storage.ImmutableDB -import Ouroboros.Consensus.Storage.LedgerDB.Snapshots -import Ouroboros.Consensus.Util.STM (Fingerprint, WithFingerprint) -import Ouroboros.Network.AnchoredFragment (AnchoredFragment) -import qualified Ouroboros.Network.AnchoredFragment as Fragment -import Ouroboros.Network.Block (MaxSlotNo) -import Ouroboros.Network.Mock.Chain -import Ouroboros.Network.Mock.ProducerState -import Ouroboros.Network.Point -import System.FS.API -import System.FS.CRC (CRC (..)) -import Test.Cardano.Slotting.TreeDiff () -import Test.Util.ToExpr () +import Control.Monad.Class.MonadTime.SI qualified as SI +import Data.TreeDiff +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Mempool.API +import Ouroboros.Consensus.Mempool.TxSeq +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ChainDB.API (LoE (..)) +import 
Ouroboros.Consensus.Storage.ImmutableDB +import Ouroboros.Consensus.Storage.LedgerDB.Snapshots +import Ouroboros.Consensus.Util.STM (Fingerprint, WithFingerprint) +import Ouroboros.Network.AnchoredFragment (AnchoredFragment) +import Ouroboros.Network.AnchoredFragment qualified as Fragment +import Ouroboros.Network.Block (MaxSlotNo) +import Ouroboros.Network.Mock.Chain +import Ouroboros.Network.Mock.ProducerState +import Ouroboros.Network.Point +import System.FS.API +import System.FS.CRC (CRC (..)) +import Test.Cardano.Slotting.TreeDiff () +import Test.Util.ToExpr () {------------------------------------------------------------------------------- ouroboros-network @@ -48,10 +47,11 @@ instance ToExpr (HeaderHash blk) => ToExpr (Point blk) instance ToExpr (HeaderHash blk) => ToExpr (RealPoint blk) instance (ToExpr slot, ToExpr hash) => ToExpr (Block slot hash) -deriving instance ( ToExpr blk - , ToExpr (HeaderHash blk) - ) - => ToExpr (Fragment.Anchor blk) +deriving instance + ( ToExpr blk + , ToExpr (HeaderHash blk) + ) => + ToExpr (Fragment.Anchor blk) instance (ToExpr blk, ToExpr (HeaderHash blk)) => ToExpr (AnchoredFragment blk) where toExpr f = toExpr (Fragment.anchor f, Fragment.toOldestFirst f) @@ -60,14 +60,18 @@ instance (ToExpr blk, ToExpr (HeaderHash blk)) => ToExpr (AnchoredFragment blk) ouroboros-consensus -------------------------------------------------------------------------------} -instance ( ToExpr (LedgerState blk EmptyMK) - , ToExpr (ChainDepState (BlockProtocol blk)) - , ToExpr (TipInfo blk) - ) => ToExpr (ExtLedgerState blk EmptyMK) +instance + ( ToExpr (LedgerState blk EmptyMK) + , ToExpr (ChainDepState (BlockProtocol blk)) + , ToExpr (TipInfo blk) + ) => + ToExpr (ExtLedgerState blk EmptyMK) -instance ( ToExpr (ChainDepState (BlockProtocol blk)) - , ToExpr (TipInfo blk) - ) => ToExpr (HeaderState blk) +instance + ( ToExpr (ChainDepState (BlockProtocol blk)) + , ToExpr (TipInfo blk) + ) => + ToExpr (HeaderState blk) instance ToExpr 
SecurityParam where toExpr = defaultExprViaShow @@ -79,12 +83,33 @@ instance ToExpr ChunkSize instance ToExpr ChunkNo instance ToExpr ChunkSlot instance ToExpr RelativeSlot -instance (ToExpr a, ToExpr b, ToExpr c, ToExpr d, ToExpr e, ToExpr f, ToExpr g, - ToExpr h, ToExpr i, ToExpr j) - => ToExpr (a, b, c, d, e, f, g, h, i, j) where - toExpr (a, b, c, d, e, f, g, h, i, j) = App "_×_×_×_×_×_×_×_×_x_" - [ toExpr a, toExpr b, toExpr c, toExpr d, toExpr e, toExpr f, toExpr g - , toExpr h, toExpr i, toExpr j +instance + ( ToExpr a + , ToExpr b + , ToExpr c + , ToExpr d + , ToExpr e + , ToExpr f + , ToExpr g + , ToExpr h + , ToExpr i + , ToExpr j + ) => + ToExpr (a, b, c, d, e, f, g, h, i, j) + where + toExpr (a, b, c, d, e, f, g, h, i, j) = + App + "_×_×_×_×_×_×_×_×_x_" + [ toExpr a + , toExpr b + , toExpr c + , toExpr d + , toExpr e + , toExpr f + , toExpr g + , toExpr h + , toExpr i + , toExpr j ] instance ToExpr ChunkInfo where @@ -94,14 +119,12 @@ instance ToExpr FsError where deriving instance ToExpr a => ToExpr (LoE a) - {------------------------------------------------------------------------------- si-timers --------------------------------------------------------------------------------} instance ToExpr SI.Time where toExpr = defaultExprViaShow - deriving anyclass instance ToExpr Fingerprint deriving anyclass instance ToExpr FollowerNext deriving anyclass instance ToExpr MaxSlotNo @@ -115,10 +138,11 @@ deriving instance Generic (ChainProducerState blk) deriving instance Generic (FollowerState blk) deriving instance ToExpr blk => ToExpr (Chain blk) -deriving instance ( ToExpr blk - , ToExpr (HeaderHash blk) - ) - => ToExpr (ChainProducerState blk) +deriving instance + ( ToExpr blk + , ToExpr (HeaderHash blk) + ) => + ToExpr (ChainProducerState blk) deriving instance ToExpr a => ToExpr (WithFingerprint a) instance ToExpr (TipInfo blk) => ToExpr (AnnTip blk) @@ -132,16 +156,21 @@ deriving newtype instance ToExpr TicketNo instance Show (TxId (GenTx blk)) => ToExpr 
(TxId (GenTx blk)) where toExpr x = App (show x) [] -deriving instance ( ToExpr (GenTx blk) - , LedgerSupportsMempool blk - , measure ~ TxMeasure blk - , ToExpr measure - , ToExpr (Validated (GenTx blk)) - ) => ToExpr (TxTicket measure (Validated (GenTx blk))) - -instance ( ToExpr (GenTx blk) - , LedgerSupportsMempool blk - , ToExpr (Validated (GenTx blk)) - ) => ToExpr (MempoolAddTxResult blk) where - toExpr (MempoolTxAdded vtx) = App "Added" [toExpr vtx] - toExpr (MempoolTxRejected tx e) = App "Rejected" [toExpr tx, App (show e) [] ] +deriving instance + ( ToExpr (GenTx blk) + , LedgerSupportsMempool blk + , measure ~ TxMeasure blk + , ToExpr measure + , ToExpr (Validated (GenTx blk)) + ) => + ToExpr (TxTicket measure (Validated (GenTx blk))) + +instance + ( ToExpr (GenTx blk) + , LedgerSupportsMempool blk + , ToExpr (Validated (GenTx blk)) + ) => + ToExpr (MempoolAddTxResult blk) + where + toExpr (MempoolTxAdded vtx) = App "Added" [toExpr vtx] + toExpr (MempoolTxRejected tx e) = App "Rejected" [toExpr tx, App (show e) []] diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Paths.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Paths.hs index fec00840d4..cf505ac58f 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Paths.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Paths.hs @@ -3,17 +3,16 @@ -- -- Copied from -- -module Test.Util.Paths ( - getGoldenDir +module Test.Util.Paths + ( getGoldenDir , getRelPath , inNixBuild ) where - -import Control.Monad.IO.Class (liftIO) -import Data.FileEmbed (makeRelativeToProject) -import Language.Haskell.TH.Syntax (Exp, Q, liftData) -import System.Environment (lookupEnv) +import Control.Monad.IO.Class (liftIO) +import Data.FileEmbed (makeRelativeToProject) +import Language.Haskell.TH.Syntax (Exp, Q, liftData) +import System.Environment (lookupEnv) -- | A TH function to get the path corresponding to the golden output -- directory relative to the 
package root directory. @@ -31,15 +30,15 @@ getGoldenDir = getRelPath "golden" -- makes the test data path relative to the current directory. getRelPath :: FilePath -> Q Exp getRelPath relPath = do - absPath <- makeRelativeToProject relPath - useRel <- liftIO inNixBuild - liftData (if useRel then relPath else absPath) + absPath <- makeRelativeToProject relPath + useRel <- liftIO inNixBuild + liftData (if useRel then relPath else absPath) -- | Infer from environment variables whether we are running within a Nix build -- (and not just a nix-shell). inNixBuild :: IO Bool inNixBuild = do - let testEnv = fmap (maybe False (not . null)) . lookupEnv - haveNixBuildDir <- testEnv "NIX_BUILD_TOP" - inNixShell <- testEnv "IN_NIX_SHELL" - pure (haveNixBuildDir && not inNixShell) + let testEnv = fmap (maybe False (not . null)) . lookupEnv + haveNixBuildDir <- testEnv "NIX_BUILD_TOP" + inNixShell <- testEnv "IN_NIX_SHELL" + pure (haveNixBuildDir && not inNixShell) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/QSM.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/QSM.hs index ccf984be1c..a9b97289b2 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/QSM.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/QSM.hs @@ -1,23 +1,23 @@ {-# LANGUAGE ScopedTypeVariables #-} -module Test.Util.QSM ( - Example - -- opaque +module Test.Util.QSM + ( Example + -- opaque , example , run , run' ) where -import Control.Monad -import qualified Control.Monad.Fail as Fail -import Data.Typeable -import qualified Test.StateMachine.Logic as Logic -import Test.StateMachine.Sequential -import Test.StateMachine.Types -import qualified Test.StateMachine.Types.Rank2 as Rank2 +import Control.Monad +import Control.Monad.Fail qualified as Fail +import Data.Typeable +import Test.StateMachine.Logic qualified as Logic +import Test.StateMachine.Sequential +import Test.StateMachine.Types +import Test.StateMachine.Types.Rank2 
qualified as Rank2 -data Example cmd a = - Done a +data Example cmd a + = Done a | Run (cmd Symbolic) ([Var] -> Example cmd a) | Fail String @@ -25,13 +25,13 @@ instance Functor (Example cmd) where fmap = liftM instance Applicative (Example cmd) where - pure = Done + pure = Done (<*>) = ap instance Monad (Example cmd) where - return = pure - Done a >>= f = f a - Run c k >>= f = Run c (k >=> f) + return = pure + Done a >>= f = f a + Run c k >>= f = Run c (k >=> f) Fail err >>= _ = Fail err instance Fail.MonadFail (Example cmd) where @@ -45,30 +45,32 @@ run cmd = Run cmd (Done . map (Reference . Symbolic)) run' :: cmd Symbolic -> Example cmd () run' cmd = Run cmd (\_vars -> Done ()) -example :: forall model cmd m resp. (Rank2.Foldable resp, Show (cmd Symbolic)) - => StateMachine model cmd m resp - -> Example cmd () - -> Commands cmd resp +example :: + forall model cmd m resp. + (Rank2.Foldable resp, Show (cmd Symbolic)) => + StateMachine model cmd m resp -> + Example cmd () -> + Commands cmd resp example sm = - Commands . fst . flip runGenSym newCounter . go (initModel sm) - where - go :: model Symbolic -> Example cmd () -> GenSym [Command cmd resp] - go _ (Done ()) = return [] - go _ (Fail err) = error $ "example: " ++ err - go m (Run cmd k) = do - case Logic.logic (precondition sm m cmd) of - Logic.VFalse counterexample -> - error $ "Invalid command " ++ show cmd ++ ": " ++ show counterexample - Logic.VTrue -> do - resp <- mock sm m cmd + Commands . fst . flip runGenSym newCounter . 
go (initModel sm) + where + go :: model Symbolic -> Example cmd () -> GenSym [Command cmd resp] + go _ (Done ()) = return [] + go _ (Fail err) = error $ "example: " ++ err + go m (Run cmd k) = do + case Logic.logic (precondition sm m cmd) of + Logic.VFalse counterexample -> + error $ "Invalid command " ++ show cmd ++ ": " ++ show counterexample + Logic.VTrue -> do + resp <- mock sm m cmd - let m' :: model Symbolic - m' = transition sm m cmd resp + let m' :: model Symbolic + m' = transition sm m cmd resp - vars :: [Var] - vars = getUsedVars resp + vars :: [Var] + vars = getUsedVars resp - cmd' :: Command cmd resp - cmd' = Command cmd resp vars + cmd' :: Command cmd resp + cmd' = Command cmd resp vars - (cmd' :) <$> go m' (k vars) + (cmd' :) <$> go m' (k vars) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/QuickCheck.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/QuickCheck.hs index ede742437d..2a167bbc35 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/QuickCheck.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/QuickCheck.hs @@ -5,11 +5,12 @@ {-# LANGUAGE TypeApplications #-} -- | QuickCheck utilities -module Test.Util.QuickCheck ( - -- * Generic QuickCheck utilities +module Test.Util.QuickCheck + ( -- * Generic QuickCheck utilities checkGenerator , checkInvariant , checkShrinker + -- * Comparison functions , expectRight , ge @@ -17,34 +18,40 @@ module Test.Util.QuickCheck ( , le , lt , strictlyIncreasing + -- * Gen variants that allow transformers , frequency' , oneof' + -- * Comparing maps , isSubmapOfBy + -- * Improved variants , (=:=) + -- * SOP , cshrinkNP , shrinkNP + -- * Convenience , collects , forAllGenRunShrinkCheck , implies + -- * Typeclass laws , prop_lawfulEqAndTotalOrd ) where -import Control.Monad.Except (Except, runExcept) -import Control.Monad.Trans (MonadTrans (..)) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Proxy 
-import Data.SOP.Constraint -import Data.SOP.Strict -import Ouroboros.Consensus.Util (repeatedly) -import Ouroboros.Consensus.Util.Condense (Condense, condense) -import Test.QuickCheck +import Control.Monad.Except (Except, runExcept) +import Control.Monad.Trans (MonadTrans (..)) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Proxy +import Data.SOP.Constraint +import Data.SOP.Strict +import Ouroboros.Consensus.Util (repeatedly) +import Ouroboros.Consensus.Util.Condense (Condense, condense) +import Test.QuickCheck {------------------------------------------------------------------------------- Generic QuickCheck utilities @@ -59,19 +66,20 @@ checkGenerator p = forAll arbitrary $ p -- | Test the shrinker checkShrinker :: forall a. (Arbitrary a, Show a) => (a -> Property) -> Property checkShrinker p = - -- Starting point, some arbitrary value - -- Explicit 'forAll': don't shrink when testing the shrinker - forAll arbitrary go - where - go :: a -> Property - go a = - if null (shrink a) then - property True - else - -- Nested 'forAll': testing that /all/ shrunk values satisfy the - -- property is too expensive. Since we're not shrinking, nesting - -- 'forAll' is ok. - forAll (elements (shrink a)) $ \a' -> p a' .&&. go a' + -- Starting point, some arbitrary value + -- Explicit 'forAll': don't shrink when testing the shrinker + forAll arbitrary go + where + go :: a -> Property + go a = + if null (shrink a) + then + property True + else + -- Nested 'forAll': testing that /all/ shrunk values satisfy the + -- property is too expensive. Since we're not shrinking, nesting + -- 'forAll' is ok. + forAll (elements (shrink a)) $ \a' -> p a' .&&. go a' -- | Check invariant checkInvariant :: (a -> Except String ()) -> (a -> Property) @@ -99,8 +107,8 @@ forAllGenRunShrinkCheck gen run shrink_ check = (map run' . 
uncurry shrink_) (run' input) (uncurry check) - where - run' inp = (inp, run inp) + where + run' inp = (inp, run inp) {------------------------------------------------------------------------------- Comparison functions @@ -129,12 +137,12 @@ x `ge` y = counterexample (show x ++ " < " ++ show y) $ x >= y strictlyIncreasing :: forall a. (Show a, Ord a) => [a] -> Property strictlyIncreasing xs = - counterexample (show xs) $ go xs - where - go :: [a] -> Property - go [] = property True - go [_] = property True - go (x:y:zs) = x `lt` y .&&. go (y:zs) + counterexample (show xs) $ go xs + where + go :: [a] -> Property + go [] = property True + go [_] = property True + go (x : y : zs) = x `lt` y .&&. go (y : zs) -- | Check that we have the expected 'Right' value -- @@ -142,26 +150,40 @@ strictlyIncreasing xs = -- equality constraint on @a@. expectRight :: (Show a, Show b, Eq b) => b -> Either a b -> Property expectRight b (Right b') = b === b' -expectRight _ (Left a) = counterexample ("Unexpected left " ++ show a) $ - False +expectRight _ (Left a) = + counterexample ("Unexpected left " ++ show a) $ + False {------------------------------------------------------------------------------- Comparing maps -------------------------------------------------------------------------------} -isSubmapOfBy :: (Ord k, Show k, Show a, Show b) - => (a -> b -> Property) -> Map k a -> Map k b -> Property -isSubmapOfBy p l r = conjoin [ - case Map.lookup k r of - Nothing -> counterexample ("key " ++ show k - ++ " with value " ++ show a - ++ " not present in other map") $ - property False - Just b -> counterexample ("key " ++ show k - ++ " with values " ++ show a - ++ " and " ++ show b - ++ " doesn't satisfy the property") $ - p a b +isSubmapOfBy :: + (Ord k, Show k, Show a, Show b) => + (a -> b -> Property) -> Map k a -> Map k b -> Property +isSubmapOfBy p l r = + conjoin + [ case Map.lookup k r of + Nothing -> + counterexample + ( "key " + ++ show k + ++ " with value " + ++ show a + ++ " 
not present in other map" + ) + $ property False + Just b -> + counterexample + ( "key " + ++ show k + ++ " with values " + ++ show a + ++ " and " + ++ show b + ++ " doesn't satisfy the property" + ) + $ p a b | (k, a) <- Map.toList l ] @@ -171,41 +193,44 @@ isSubmapOfBy p l r = conjoin [ -- | Like '===', but uses 'Condense' instead of 'Show' when it fails. infix 4 =:= + (=:=) :: (Eq a, Condense a) => a -> a -> Property x =:= y = - counterexample (condense x ++ interpret res ++ condense y) res - where - res = x == y - interpret True = " == " - interpret False = " /= " + counterexample (condense x ++ interpret res ++ condense y) res + where + res = x == y + interpret True = " == " + interpret False = " /= " {------------------------------------------------------------------------------- SOP -------------------------------------------------------------------------------} -cshrinkNP :: forall proxy c f g xs. - All c xs - => proxy c - -> (forall a. c a => f a -> g a) -- For elements we don't shrink - -> (forall a. c a => f a -> [g a]) - -> NP f xs - -> [NP g xs] +cshrinkNP :: + forall proxy c f g xs. + All c xs => + proxy c -> + (forall a. c a => f a -> g a) -> -- For elements we don't shrink + (forall a. c a => f a -> [g a]) -> + NP f xs -> + [NP g xs] cshrinkNP p g f = go - where - go :: All c xs' => NP f xs' -> [NP g xs'] - go Nil = [] -- Can't shrink the empty list - go (x :* xs) = concat [ - -- Shrink the head of the list - [ x' :* hcmap p g xs | x' <- f x ] - - -- Or shrink the tail of the list - , [ g x :* xs' | xs' <- go xs ] - ] - -shrinkNP :: (forall a. f a -> g a) -- For elements we don't shrink - -> (forall a. f a -> [g a]) - -> NP f xs - -> [NP g xs] + where + go :: All c xs' => NP f xs' -> [NP g xs'] + go Nil = [] -- Can't shrink the empty list + go (x :* xs) = + concat + [ -- Shrink the head of the list + [x' :* hcmap p g xs | x' <- f x] + , -- Or shrink the tail of the list + [g x :* xs' | xs' <- go xs] + ] + +shrinkNP :: + (forall a. 
f a -> g a) -> -- For elements we don't shrink + (forall a. f a -> [g a]) -> + NP f xs -> + [NP g xs] shrinkNP g f np = npToSListI np $ cshrinkNP (Proxy @Top) g f np {------------------------------------------------------------------------------- @@ -220,6 +245,7 @@ collects = repeatedly collect -- @p1@ being true. implies :: Testable prop => Bool -> prop -> Property implies p1 p2 = not p1 .||. p2 + infixr 0 `implies` {------------------------------------------------------------------------------- @@ -227,21 +253,23 @@ infixr 0 `implies` -------------------------------------------------------------------------------} prop_lawfulEqAndTotalOrd :: - forall a. (Show a, Ord a) - => a -> a -> a -> Property -prop_lawfulEqAndTotalOrd a b c = conjoin + forall a. + (Show a, Ord a) => + a -> a -> a -> Property +prop_lawfulEqAndTotalOrd a b c = + conjoin [ counterexample "Not total: a <= b || b <= a VIOLATED" $ a <= b || b <= a , counterexample "Not transitive: a <= b && b <= c => a <= c VIOLATED" $ - let antecedent = a <= b && b <= c in - classify antecedent "Antecedent for transitivity" $ - antecedent `implies` a <= c + let antecedent = a <= b && b <= c + in classify antecedent "Antecedent for transitivity" $ + antecedent `implies` a <= c , counterexample "Not reflexive: a <= a VIOLATED" $ a `le` a , counterexample "Not antisymmetric: a <= b && b <= a => a == b VIOLATED" $ - let antecedent = a <= b && b <= a in - classify antecedent "Antecedent for antisymmetry" $ - antecedent `implies` a == b + let antecedent = a <= b && b <= a + in classify antecedent "Antecedent for antisymmetry" $ + antecedent `implies` a == b , -- compatibility laws counterexample "(a <= b) == (b >= a) VIOLATED" $ (a <= b) === (b >= a) @@ -269,14 +297,14 @@ prop_lawfulEqAndTotalOrd a b c = conjoin frequency' :: (MonadTrans t, Monad (t Gen)) => [(Int, t Gen a)] -> t Gen a frequency' [] = error "frequency' used with empty list" frequency' xs0 = lift (choose (1, tot)) >>= (`pick` xs0) - where - tot = sum 
(map fst xs0) + where + tot = sum (map fst xs0) - pick n ((k,x):xs) - | n <= k = x - | otherwise = pick (n-k) xs - pick _ _ = error "pick used with empty list" + pick n ((k, x) : xs) + | n <= k = x + | otherwise = pick (n - k) xs + pick _ _ = error "pick used with empty list" oneof' :: (MonadTrans t, Monad (t Gen)) => [t Gen a] -> t Gen a oneof' [] = error "QuickCheck.oneof used with empty list" -oneof' gs = lift (chooseInt (0,length gs - 1)) >>= (gs !!) +oneof' gs = lift (chooseInt (0, length gs - 1)) >>= (gs !!) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Range.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Range.hs index 0741858fea..7da96e905f 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Range.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Range.hs @@ -4,17 +4,17 @@ {-# OPTIONS_GHC -Wno-x-partial #-} #endif -module Test.Util.Range ( - Range (..) +module Test.Util.Range + ( Range (..) , RangeK (..) , range , rangeK ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import qualified Data.List as L -import Data.Word -import Ouroboros.Consensus.Config.SecurityParam +import Cardano.Ledger.BaseTypes (unNonZero) +import Data.List qualified as L +import Data.Word +import Ouroboros.Consensus.Config.SecurityParam {------------------------------------------------------------------------------- Range of values related to K @@ -23,68 +23,65 @@ import Ouroboros.Consensus.Config.SecurityParam -- | Rough indication of the size of a value in relation to @k@ -- -- Useful for labelling tests. 
-data RangeK = -   -- | The value was equal to K +data RangeK +  = -- | The value was equal to K     Range_Eq_K SecurityParam - -   -- | The value was just below K +  | -- | The value was just below K     -- -- We indicate how far off from K it was - | Range_Just_Below_K SecurityParam Word64 - -   -- | The value was just above K +  Range_Just_Below_K SecurityParam Word64 +  | -- | The value was just above K     -- -- We indicate how far off from K it was - | Range_Just_Above_K SecurityParam Word64 - -   -- | The value was almost zero +  Range_Just_Above_K SecurityParam Word64 +  | -- | The value was almost zero     -- -- If there is a choice between 'Range_Just_Below_K' and 'Range_Near_Zero', -- the constructor with the smaller argument should be used. - | Range_Near_Zero SecurityParam Word64 - -   -- | Less than k (but not near k and not near 0) +  Range_Near_Zero SecurityParam Word64 +  | -- | Less than k (but not near k and not near 0)     -- -- We round to the nearest multiple of (k / 10) - | Range_Less_Than_K SecurityParam Word64 - -   -- | More than k (but not near k) +  Range_Less_Than_K SecurityParam Word64 +  | -- | More than k (but not near k)     -- -- We round to the first power of two above k that is equal to or above the value. 
- | Range_More_Than_K SecurityParam Word64 - deriving (Eq) + Range_More_Than_K SecurityParam Word64 + deriving Eq instance Show RangeK where show r = - case r of - Range_Eq_K (SecurityParam k) -> "= (k = " ++ show k ++ ")" - Range_Just_Below_K (SecurityParam k) n -> "= (k = " ++ show k ++ ")" ++ " > " ++ show (unNonZero k - n) - Range_Just_Above_K (SecurityParam k) n -> "= (k = " ++ show k ++ ")" ++ " < " ++ show (unNonZero k + n) - Range_Near_Zero (SecurityParam k) n -> "= (k = " ++ show k ++ ")" ++ " >> " ++ show n - Range_Less_Than_K (SecurityParam k) n -> "≈ (k = " ++ show k ++ ")" ++ " >> " ++ show n - Range_More_Than_K (SecurityParam k) n -> "≈ (k = " ++ show k ++ ")" ++ " << " ++ show n + case r of + Range_Eq_K (SecurityParam k) -> "= (k = " ++ show k ++ ")" + Range_Just_Below_K (SecurityParam k) n -> "= (k = " ++ show k ++ ")" ++ " > " ++ show (unNonZero k - n) + Range_Just_Above_K (SecurityParam k) n -> "= (k = " ++ show k ++ ")" ++ " < " ++ show (unNonZero k + n) + Range_Near_Zero (SecurityParam k) n -> "= (k = " ++ show k ++ ")" ++ " >> " ++ show n + Range_Less_Than_K (SecurityParam k) n -> "≈ (k = " ++ show k ++ ")" ++ " >> " ++ show n + Range_More_Than_K (SecurityParam k) n -> "≈ (k = " ++ show k ++ ")" ++ " << " ++ show n rangeK :: Integral a => SecurityParam -> a -> RangeK rangeK (SecurityParam k) a - | n == unNonZero k = Range_Eq_K (SecurityParam k) + | n == unNonZero k = Range_Eq_K (SecurityParam k) | n < nearK = Range_Near_Zero (SecurityParam k) n - | n < unNonZero k = if belowK <= nearK - then Range_Just_Below_K (SecurityParam k) n - else Range_Less_Than_K (SecurityParam k) (n `div` bandSize) - | otherwise = if aboveK <= nearK - then Range_Just_Above_K (SecurityParam k) n - else Range_More_Than_K (SecurityParam k) (head (dropWhile (< n) powers)) - where - n = fromIntegral a - belowK = unNonZero k - n - aboveK = n - unNonZero k - powers = [unNonZero k + 2 ^ i | i <- [0..] 
:: [Int]] - - -- threshold for determining if a value is near k - nearK = 5 - - -- bands for summarizing values less than k - bandSize = max 1 (unNonZero k `div` 10) + | n < unNonZero k = + if belowK <= nearK + then Range_Just_Below_K (SecurityParam k) n + else Range_Less_Than_K (SecurityParam k) (n `div` bandSize) + | otherwise = + if aboveK <= nearK + then Range_Just_Above_K (SecurityParam k) n + else Range_More_Than_K (SecurityParam k) (head (dropWhile (< n) powers)) + where + n = fromIntegral a + belowK = unNonZero k - n + aboveK = n - unNonZero k + powers = [unNonZero k + 2 ^ i | i <- [0 ..] :: [Int]] + + -- threshold for determining if a value is near k + nearK = 5 + + -- bands for summarizing values less than k + bandSize = max 1 (unNonZero k `div` 10) {------------------------------------------------------------------------------- Summarize values not related to K @@ -95,13 +92,13 @@ data Range n = R_Eq n | R_Btwn (n, n) | R_Gt n range :: (Ord n, Show n, Num n) => n -> Range n range n - | n > limit = R_Gt limit + | n > limit = R_Gt limit | n `L.elem` vals = R_Eq n - | otherwise = + | otherwise = case L.find (\(lo, hi) -> lo <= n && n <= hi) rngs of - Nothing -> error $ "range: unable to classify " ++ show n + Nothing -> error $ "range: unable to classify " ++ show n Just rng -> R_Btwn rng - where - vals = [0, 1, 2, 3, 4] - rngs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 10), (10, 20)] - limit = 20 + where + vals = [0, 1, 2, 3, 4] + rngs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 10), (10, 20)] + limit = 20 diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/RefEnv.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/RefEnv.hs index e9fe0bd9f4..7b52cf8433 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/RefEnv.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/RefEnv.hs @@ -4,9 +4,9 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TupleSections #-} -module 
Test.Util.RefEnv ( - RefEnv - -- opaque +module Test.Util.RefEnv + ( RefEnv + -- opaque , elems , empty , filter @@ -21,63 +21,71 @@ module Test.Util.RefEnv ( , (!) ) where -import Data.Bifunctor -import Data.Functor.Classes -import Data.List (intercalate) -import Data.TreeDiff (ToExpr) -import GHC.Generics (Generic) -import GHC.Stack -import Prelude hiding (filter, lookup, null) -import qualified Prelude -import Test.StateMachine (Reference) -import qualified Test.StateMachine.Types.Rank2 as Rank2 -import Test.Util.ToExpr () - -data RefEnv k a r = RefEnv { toList :: [(Reference k r, a)] } +import Data.Bifunctor +import Data.Functor.Classes +import Data.List (intercalate) +import Data.TreeDiff (ToExpr) +import GHC.Generics (Generic) +import GHC.Stack +import Test.StateMachine (Reference) +import Test.StateMachine.Types.Rank2 qualified as Rank2 +import Test.Util.ToExpr () +import Prelude hiding (filter, lookup, null) +import Prelude qualified + +data RefEnv k a r = RefEnv {toList :: [(Reference k r, a)]} deriving (Generic, ToExpr, Show) -- | Extend mapping -- -- We don't insist that the keys are disjoint, but if the same key appears -- twice, the value must agree. -extendMapping :: forall k v. (Eq k, Eq v, Show k, Show v, HasCallStack) - => [(k, v)] -- Mapping known to have duplicate keys - -> [(k, v)] -- With potential duplicates - -> [(k, v)] -extendMapping acc [] = acc +extendMapping :: + forall k v. 
+ (Eq k, Eq v, Show k, Show v, HasCallStack) => + [(k, v)] -> -- Mapping known to have duplicate keys + [(k, v)] -> -- With potential duplicates + [(k, v)] +extendMapping acc [] = acc extendMapping acc ((k, v) : kvs) = - case Prelude.lookup k acc of - Just v' | v /= v' -> error $ renderError v' - _otherwise -> extendMapping ((k, v) : acc) kvs - where - renderError :: v -> String - renderError v' = intercalate " " [ - "Key" - , show k - , "with two different values" - , show v - , "and" - , show v' - ] - -fromList :: (Eq k, Show k, Eq a, Show a, Eq1 r, Show1 r, HasCallStack) - => [(Reference k r, a)] -> RefEnv k a r + case Prelude.lookup k acc of + Just v' | v /= v' -> error $ renderError v' + _otherwise -> extendMapping ((k, v) : acc) kvs + where + renderError :: v -> String + renderError v' = + intercalate + " " + [ "Key" + , show k + , "with two different values" + , show v + , "and" + , show v' + ] + +fromList :: + (Eq k, Show k, Eq a, Show a, Eq1 r, Show1 r, HasCallStack) => + [(Reference k r, a)] -> RefEnv k a r fromList = RefEnv . extendMapping [] instance Rank2.Functor (RefEnv k a) where - fmap f (RefEnv ras) = RefEnv $ + fmap f (RefEnv ras) = + RefEnv $ fmap (first (Rank2.fmap f)) ras instance Rank2.Foldable (RefEnv k a) where foldMap f (RefEnv ras) = - foldMap (Rank2.foldMap f . fst) ras + foldMap (Rank2.foldMap f . fst) ras instance Rank2.Traversable (RefEnv k a) where - traverse f (RefEnv ras) = RefEnv <$> - traverse (\(r, a) -> (,a) <$> Rank2.traverse f r) ras + traverse f (RefEnv ras) = + RefEnv + <$> traverse (\(r, a) -> (,a) <$> Rank2.traverse f r) ras -union :: (Eq k, Show k, Eq a, Show a, Eq1 r, Show1 r, HasCallStack) - => RefEnv k a r -> RefEnv k a r -> RefEnv k a r +union :: + (Eq k, Show k, Eq a, Show a, Eq1 r, Show1 r, HasCallStack) => + RefEnv k a r -> RefEnv k a r -> RefEnv k a r union (RefEnv ras1) (RefEnv ras2) = RefEnv (extendMapping ras1 ras2) -- | Empty environment @@ -92,8 +100,8 @@ lookup r (RefEnv ras) = Prelude.lookup r ras (!) 
:: (Eq k, Eq1 r) => RefEnv k a r -> Reference k r -> a env ! r = case lookup r env of - Just a -> a - Nothing -> error "(RefEnv.!): key not found" + Just a -> a + Nothing -> error "(RefEnv.!): key not found" keys :: RefEnv k a r -> [Reference k r] keys (RefEnv ras) = map fst ras diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SOP.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SOP.hs index ebdabb87d7..c5a0d20878 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SOP.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SOP.hs @@ -1,31 +1,32 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} -module Test.Util.SOP ( - constrName +module Test.Util.SOP + ( constrName , constrNames ) where -import Data.Proxy -import qualified Generics.SOP as SOP +import Data.Proxy +import Generics.SOP qualified as SOP -constrInfo :: SOP.HasDatatypeInfo a - => proxy a - -> SOP.NP SOP.ConstructorInfo (SOP.Code a) +constrInfo :: + SOP.HasDatatypeInfo a => + proxy a -> + SOP.NP SOP.ConstructorInfo (SOP.Code a) constrInfo = SOP.constructorInfo . SOP.datatypeInfo constrName :: forall a. 
SOP.HasDatatypeInfo a => a -> String constrName a = - SOP.hcollapse $ SOP.hliftA2 go (constrInfo p) (SOP.unSOP (SOP.from a)) - where - go :: SOP.ConstructorInfo b -> SOP.NP SOP.I b -> SOP.K String b - go nfo _ = SOP.K $ SOP.constructorName nfo + SOP.hcollapse $ SOP.hliftA2 go (constrInfo p) (SOP.unSOP (SOP.from a)) + where + go :: SOP.ConstructorInfo b -> SOP.NP SOP.I b -> SOP.K String b + go nfo _ = SOP.K $ SOP.constructorName nfo - p = Proxy @a + p = Proxy @a constrNames :: SOP.HasDatatypeInfo a => proxy a -> [String] constrNames p = - SOP.hcollapse $ SOP.hmap go (constrInfo p) - where - go :: SOP.ConstructorInfo a -> SOP.K String a - go nfo = SOP.K $ SOP.constructorName nfo + SOP.hcollapse $ SOP.hmap go (constrInfo p) + where + go :: SOP.ConstructorInfo a -> SOP.K String a + go nfo = SOP.K $ SOP.constructorName nfo diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SanityCheck.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SanityCheck.hs index eb6a9564e1..d2736915c3 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SanityCheck.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SanityCheck.hs @@ -1,21 +1,21 @@ -module Test.Util.SanityCheck ( - prop_sanityChecks +module Test.Util.SanityCheck + ( prop_sanityChecks , prop_securityParamConsistent ) where -import Ouroboros.Consensus.Block.SupportsSanityCheck -import Ouroboros.Consensus.Config -import Test.Tasty.QuickCheck -import Test.Util.Orphans.Arbitrary () +import Ouroboros.Consensus.Block.SupportsSanityCheck +import Ouroboros.Consensus.Config +import Test.Tasty.QuickCheck +import Test.Util.Orphans.Arbitrary () -prop_sanityChecks - :: BlockSupportsSanityCheck blk - => TopLevelConfig blk -> Property +prop_sanityChecks :: + BlockSupportsSanityCheck blk => + TopLevelConfig blk -> Property prop_sanityChecks cfg = sanityCheckConfig cfg === [] -prop_securityParamConsistent - :: BlockSupportsSanityCheck blk - => TopLevelConfig blk -> 
Property +prop_securityParamConsistent :: + BlockSupportsSanityCheck blk => + TopLevelConfig blk -> Property prop_securityParamConsistent cfg = checkSecurityParamConsistency cfg === Nothing diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Schedule.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Schedule.hs index d242e002d7..3ed2508b0c 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Schedule.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Schedule.hs @@ -3,70 +3,73 @@ {-# LANGUAGE RankNTypes #-} -- | Utilities to schedule actions per 'Tick'. -module Test.Util.Schedule ( - Schedule (..) +module Test.Util.Schedule + ( Schedule (..) , genSchedule , joinSchedule , lastTick , shrinkSchedule ) where -import Data.List (intercalate, unfoldr) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe) -import Ouroboros.Consensus.Util.Condense (Condense (..)) -import Test.QuickCheck -import Test.Util.LogicalClock (Tick (..)) +import Data.List (intercalate, unfoldr) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe) +import Ouroboros.Consensus.Util.Condense (Condense (..)) +import Test.QuickCheck +import Test.Util.LogicalClock (Tick (..)) -- | A schedule plans actions on certain times. -- -- TODO Note that a schedule can't express delays between the actions -- within a single tick. Generating such delays may expose more (most -- likely concurrency-related) bugs. -newtype Schedule a = Schedule { getSchedule :: Map Tick [a] } +newtype Schedule a = Schedule {getSchedule :: Map Tick [a]} deriving stock (Show, Eq) instance Condense a => Condense (Schedule a) where condense = - unlines + unlines . map (uncurry showEntry) . filter (not . null . snd) . Map.toAscList . 
getSchedule - where - showEntry (Tick tick) as = show tick <> ": " <> - intercalate ", " (map condense as) + where + showEntry (Tick tick) as = + show tick + <> ": " + <> intercalate ", " (map condense as) -- | Return the last tick at which an update is planned, if no updates -- are planned, return 0. lastTick :: Schedule a -> Tick lastTick = fromMaybe (Tick 0) . maxKey . getSchedule - where - maxKey :: forall k v. Map k v -> Maybe k - maxKey = fmap (fst . fst) . Map.maxViewWithKey + where + maxKey :: forall k v. Map k v -> Maybe k + maxKey = fmap (fst . fst) . Map.maxViewWithKey -- | Spread out elements over a schedule, i.e. schedule a number of -- elements to be processed on each tick. Most ticks will have no -- associated elements. genSchedule :: [a] -> Gen (Schedule a) genSchedule = fmap Schedule . go Map.empty 1 - where - go :: Map Tick [a] - -> Tick - -> [a] - -> Gen (Map Tick [a]) - go !schedule tick as - | null as = return schedule - | otherwise = do - nbAs <- frequency [ (2, return 0), (1, choose (1, 5)) ] + where + go :: + Map Tick [a] -> + Tick -> + [a] -> + Gen (Map Tick [a]) + go !schedule tick as + | null as = return schedule + | otherwise = do + nbAs <- frequency [(2, return 0), (1, choose (1, 5))] let (this, rest) = splitAt nbAs as go (Map.insert tick this schedule) (succ tick) rest -- | Repeatedly remove the last entry (highest 'Tick') shrinkSchedule :: Schedule a -> [Schedule a] shrinkSchedule = - unfoldr (fmap (\(_, m) -> (Schedule m, m)) . Map.maxView) + unfoldr (fmap (\(_, m) -> (Schedule m, m)) . Map.maxView) . 
getSchedule -- | Inverse of 'genSchedule' diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Examples.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Examples.hs index 07fce3e39b..1e253117c3 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Examples.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Examples.hs @@ -1,120 +1,136 @@ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.Util.Serialisation.Examples ( - -- * Examples +module Test.Util.Serialisation.Examples + ( -- * Examples Examples (..) + -- ** Operations on examples , combineExamples , mapExamples , prefixExamples + -- * Labelling , Labelled , labelled , unlabelled ) where -import Data.Bifunctor (first) -import Ouroboros.Consensus.Block (BlockProtocol, Header, HeaderHash, - SlotNo) -import Ouroboros.Consensus.HeaderValidation (AnnTip) -import Ouroboros.Consensus.Ledger.Abstract (EmptyMK, LedgerConfig, - LedgerState, LedgerTables, ValuesMK) -import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState) -import Ouroboros.Consensus.Ledger.Query (BlockQuery, SomeBlockQuery) -import Ouroboros.Consensus.Ledger.SupportsMempool (ApplyTxErr, GenTx, - GenTxId) -import Ouroboros.Consensus.Protocol.Abstract (ChainDepState) -import Ouroboros.Consensus.Storage.Serialisation (SerialisedHeader) -import Ouroboros.Network.Block (Serialised) -import Test.Util.Serialisation.SomeResult (SomeResult (..)) +import Data.Bifunctor (first) +import Ouroboros.Consensus.Block + ( BlockProtocol + , Header + , HeaderHash + , SlotNo + ) +import Ouroboros.Consensus.HeaderValidation (AnnTip) +import Ouroboros.Consensus.Ledger.Abstract + ( EmptyMK + , LedgerConfig + , LedgerState + , LedgerTables + , ValuesMK + ) +import Ouroboros.Consensus.Ledger.Extended (ExtLedgerState) +import Ouroboros.Consensus.Ledger.Query (BlockQuery, SomeBlockQuery) +import 
Ouroboros.Consensus.Ledger.SupportsMempool + ( ApplyTxErr + , GenTx + , GenTxId + ) +import Ouroboros.Consensus.Protocol.Abstract (ChainDepState) +import Ouroboros.Consensus.Storage.Serialisation (SerialisedHeader) +import Ouroboros.Network.Block (Serialised) +import Test.Util.Serialisation.SomeResult (SomeResult (..)) {------------------------------------------------------------------------------- Examples -------------------------------------------------------------------------------} -data Examples blk = Examples { - exampleBlock :: Labelled blk - , exampleSerialisedBlock :: Labelled (Serialised blk) - , exampleHeader :: Labelled (Header blk) - , exampleSerialisedHeader :: Labelled (SerialisedHeader blk) - , exampleHeaderHash :: Labelled (HeaderHash blk) - , exampleGenTx :: Labelled (GenTx blk) - , exampleGenTxId :: Labelled (GenTxId blk) - , exampleApplyTxErr :: Labelled (ApplyTxErr blk) - , exampleQuery :: Labelled (SomeBlockQuery (BlockQuery blk)) - , exampleResult :: Labelled (SomeResult blk) - , exampleAnnTip :: Labelled (AnnTip blk) - , exampleLedgerState :: Labelled (LedgerState blk EmptyMK) - , exampleChainDepState :: Labelled (ChainDepState (BlockProtocol blk)) - , exampleExtLedgerState :: Labelled (ExtLedgerState blk EmptyMK) - , exampleSlotNo :: Labelled SlotNo - , exampleLedgerConfig :: Labelled (LedgerConfig blk) - , exampleLedgerTables :: Labelled (LedgerTables (LedgerState blk) ValuesMK) - } +data Examples blk = Examples + { exampleBlock :: Labelled blk + , exampleSerialisedBlock :: Labelled (Serialised blk) + , exampleHeader :: Labelled (Header blk) + , exampleSerialisedHeader :: Labelled (SerialisedHeader blk) + , exampleHeaderHash :: Labelled (HeaderHash blk) + , exampleGenTx :: Labelled (GenTx blk) + , exampleGenTxId :: Labelled (GenTxId blk) + , exampleApplyTxErr :: Labelled (ApplyTxErr blk) + , exampleQuery :: Labelled (SomeBlockQuery (BlockQuery blk)) + , exampleResult :: Labelled (SomeResult blk) + , exampleAnnTip :: Labelled (AnnTip blk) 
+ , exampleLedgerState :: Labelled (LedgerState blk EmptyMK) + , exampleChainDepState :: Labelled (ChainDepState (BlockProtocol blk)) + , exampleExtLedgerState :: Labelled (ExtLedgerState blk EmptyMK) + , exampleSlotNo :: Labelled SlotNo + , exampleLedgerConfig :: Labelled (LedgerConfig blk) + , exampleLedgerTables :: Labelled (LedgerTables (LedgerState blk) ValuesMK) + } emptyExamples :: Examples blk -emptyExamples = Examples { - exampleBlock = mempty - , exampleSerialisedBlock = mempty - , exampleHeader = mempty +emptyExamples = + Examples + { exampleBlock = mempty + , exampleSerialisedBlock = mempty + , exampleHeader = mempty , exampleSerialisedHeader = mempty - , exampleHeaderHash = mempty - , exampleGenTx = mempty - , exampleGenTxId = mempty - , exampleApplyTxErr = mempty - , exampleQuery = mempty - , exampleResult = mempty - , exampleAnnTip = mempty - , exampleLedgerState = mempty - , exampleChainDepState = mempty - , exampleExtLedgerState = mempty - , exampleSlotNo = mempty - , exampleLedgerConfig = mempty - , exampleLedgerTables = mempty + , exampleHeaderHash = mempty + , exampleGenTx = mempty + , exampleGenTxId = mempty + , exampleApplyTxErr = mempty + , exampleQuery = mempty + , exampleResult = mempty + , exampleAnnTip = mempty + , exampleLedgerState = mempty + , exampleChainDepState = mempty + , exampleExtLedgerState = mempty + , exampleSlotNo = mempty + , exampleLedgerConfig = mempty + , exampleLedgerTables = mempty } combineExamples :: - forall blk. - (forall a. Labelled a -> Labelled a -> Labelled a) - -> Examples blk - -> Examples blk - -> Examples blk -combineExamples f e1 e2 = Examples { - exampleBlock = combine exampleBlock - , exampleSerialisedBlock = combine exampleSerialisedBlock - , exampleHeader = combine exampleHeader + forall blk. + (forall a. 
Labelled a -> Labelled a -> Labelled a) -> + Examples blk -> + Examples blk -> + Examples blk +combineExamples f e1 e2 = + Examples + { exampleBlock = combine exampleBlock + , exampleSerialisedBlock = combine exampleSerialisedBlock + , exampleHeader = combine exampleHeader , exampleSerialisedHeader = combine exampleSerialisedHeader - , exampleHeaderHash = combine exampleHeaderHash - , exampleGenTx = combine exampleGenTx - , exampleGenTxId = combine exampleGenTxId - , exampleApplyTxErr = combine exampleApplyTxErr - , exampleQuery = combine exampleQuery - , exampleResult = combine exampleResult - , exampleAnnTip = combine exampleAnnTip - , exampleLedgerState = combine exampleLedgerState - , exampleChainDepState = combine exampleChainDepState - , exampleExtLedgerState = combine exampleExtLedgerState - , exampleSlotNo = combine exampleSlotNo - , exampleLedgerConfig = combine exampleLedgerConfig - , exampleLedgerTables = combine exampleLedgerTables + , exampleHeaderHash = combine exampleHeaderHash + , exampleGenTx = combine exampleGenTx + , exampleGenTxId = combine exampleGenTxId + , exampleApplyTxErr = combine exampleApplyTxErr + , exampleQuery = combine exampleQuery + , exampleResult = combine exampleResult + , exampleAnnTip = combine exampleAnnTip + , exampleLedgerState = combine exampleLedgerState + , exampleChainDepState = combine exampleChainDepState + , exampleExtLedgerState = combine exampleExtLedgerState + , exampleSlotNo = combine exampleSlotNo + , exampleLedgerConfig = combine exampleLedgerConfig + , exampleLedgerTables = combine exampleLedgerTables } - where - combine :: (Examples blk -> Labelled a) -> Labelled a - combine getField = f (getField e1) (getField e2) + where + combine :: (Examples blk -> Labelled a) -> Labelled a + combine getField = f (getField e1) (getField e2) instance Semigroup (Examples blk) where (<>) = combineExamples (<>) instance Monoid (Examples blk) where - mempty = emptyExamples + mempty = emptyExamples mappend = (<>) mapExamples :: 
- forall blk. - (forall a. Labelled a -> Labelled a) - -> Examples blk - -> Examples blk + forall blk. + (forall a. Labelled a -> Labelled a) -> + Examples blk -> + Examples blk mapExamples f = combineExamples (const f) mempty -- | Add the given prefix to each labelled example. @@ -123,15 +139,15 @@ mapExamples f = combineExamples (const f) mempty -- empty, the prefix and @_@ are prepended. prefixExamples :: String -> Examples blk -> Examples blk prefixExamples prefix = mapExamples addPrefix - where - addPrefix :: Labelled a -> Labelled a - addPrefix l = [ - (Just label, x) - | (mbLabel, x) <- l - , let label = case mbLabel of - Nothing -> prefix - Just lbl -> prefix <> "_" <> lbl - ] + where + addPrefix :: Labelled a -> Labelled a + addPrefix l = + [ (Just label, x) + | (mbLabel, x) <- l + , let label = case mbLabel of + Nothing -> prefix + Just lbl -> prefix <> "_" <> lbl + ] {------------------------------------------------------------------------------- Labelling diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Golden.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Golden.hs index 0ec6ab6976..1c7b405d6c 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Golden.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Golden.hs @@ -7,7 +7,6 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeApplications #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Golden tests infrastructure. @@ -27,53 +26,60 @@ -- -- In particular, if we introduce golden tests in new suites, we need to add -- a line in the nix configuration above similar to the previous ones. -module Test.Util.Serialisation.Golden ( - ToGoldenDirectory (..) +module Test.Util.Serialisation.Golden + ( ToGoldenDirectory (..) 
, goldenTest_SerialiseDisk , goldenTest_SerialiseNodeToClient , goldenTest_SerialiseNodeToNode , goldenTest_all ) where -import Cardano.Prelude (forceElemsToWHNF) -import Codec.CBOR.Encoding (Encoding) -import Codec.CBOR.FlatTerm (TermToken (..)) -import qualified Codec.CBOR.FlatTerm as CBOR -import qualified Codec.CBOR.Write as CBOR -import Codec.Serialise (encode) -import Control.Exception (SomeException, evaluate, try) -import Data.Bifunctor (first) -import qualified Data.ByteString as Strict -import qualified Data.ByteString.UTF8 as BS.UTF8 -import Data.List (nub) -import qualified Data.Map.Strict as Map -import Data.Proxy (Proxy (..)) -import Data.TreeDiff -import GHC.Stack (HasCallStack) -import Ouroboros.Consensus.Block (CodecConfig) -import Ouroboros.Consensus.Ledger.Extended (encodeDiskExtLedgerState) -import Ouroboros.Consensus.Ledger.Query (QueryVersion, - nodeToClientVersionToQueryVersion) -import Ouroboros.Consensus.Ledger.Tables (valuesMKEncoder) -import Ouroboros.Consensus.Node.NetworkProtocolVersion - (HasNetworkProtocolVersion (..), - SupportedNetworkProtocolVersion (..)) -import Ouroboros.Consensus.Node.Run (SerialiseDiskConstraints, - SerialiseNodeToClientConstraints, - SerialiseNodeToNodeConstraints) -import Ouroboros.Consensus.Node.Serialisation - (SerialiseBlockQueryResult (..), - SerialiseNodeToClient (..), SerialiseNodeToNode (..)) -import Ouroboros.Consensus.Storage.Serialisation (EncodeDisk (..)) -import Ouroboros.Consensus.Util.CBOR (decodeAsFlatTerm) -import Ouroboros.Consensus.Util.Condense (Condense (..)) -import System.Directory (createDirectoryIfMissing) -import System.FilePath (takeDirectory, ()) -import Test.Cardano.Binary.TreeDiff (CBORBytes (..)) -import Test.Tasty -import Test.Tasty.Golden.Advanced (goldenTest) -import Test.Util.Serialisation.Examples (Examples (..), Labelled) -import Test.Util.Serialisation.SomeResult (SomeResult (..)) +import Cardano.Prelude (forceElemsToWHNF) +import Codec.CBOR.Encoding (Encoding) +import 
Codec.CBOR.FlatTerm (TermToken (..)) +import Codec.CBOR.FlatTerm qualified as CBOR +import Codec.CBOR.Write qualified as CBOR +import Codec.Serialise (encode) +import Control.Exception (SomeException, evaluate, try) +import Data.Bifunctor (first) +import Data.ByteString qualified as Strict +import Data.ByteString.UTF8 qualified as BS.UTF8 +import Data.List (nub) +import Data.Map.Strict qualified as Map +import Data.Proxy (Proxy (..)) +import Data.TreeDiff +import GHC.Stack (HasCallStack) +import Ouroboros.Consensus.Block (CodecConfig) +import Ouroboros.Consensus.Ledger.Extended (encodeDiskExtLedgerState) +import Ouroboros.Consensus.Ledger.Query + ( QueryVersion + , nodeToClientVersionToQueryVersion + ) +import Ouroboros.Consensus.Ledger.Tables (valuesMKEncoder) +import Ouroboros.Consensus.Node.NetworkProtocolVersion + ( HasNetworkProtocolVersion (..) + , SupportedNetworkProtocolVersion (..) + ) +import Ouroboros.Consensus.Node.Run + ( SerialiseDiskConstraints + , SerialiseNodeToClientConstraints + , SerialiseNodeToNodeConstraints + ) +import Ouroboros.Consensus.Node.Serialisation + ( SerialiseBlockQueryResult (..) + , SerialiseNodeToClient (..) + , SerialiseNodeToNode (..) 
+ ) +import Ouroboros.Consensus.Storage.Serialisation (EncodeDisk (..)) +import Ouroboros.Consensus.Util.CBOR (decodeAsFlatTerm) +import Ouroboros.Consensus.Util.Condense (Condense (..)) +import System.Directory (createDirectoryIfMissing) +import System.FilePath (takeDirectory, ()) +import Test.Cardano.Binary.TreeDiff (CBORBytes (..)) +import Test.Tasty +import Test.Tasty.Golden.Advanced (goldenTest) +import Test.Util.Serialisation.Examples (Examples (..), Labelled) +import Test.Util.Serialisation.SomeResult (SomeResult (..)) {------------------------------------------------------------------------------- Golden test @@ -85,136 +91,147 @@ import Test.Util.Serialisation.SomeResult (SomeResult (..)) -- Exceptions: when forcing an encoding throws an exception, we 'show' the -- exception and use that as the output. goldenTestCBOR :: - TestName - -> a - -> (a -> Encoding) - -> FilePath -- ^ Path to the file containing the golden output - -> TestTree + TestName -> + a -> + (a -> Encoding) -> + -- | Path to the file containing the golden output + FilePath -> + TestTree goldenTestCBOR testName example enc goldenFile = - goldenTest - testName - (Strict.readFile goldenFile) - (either exceptionToByteString id <$> try (evaluate actualValue)) - diff - updateGoldenFile - where - -- Copied from tasty-golden because it isn't exported - updateGoldenFile :: Strict.ByteString -> IO () - updateGoldenFile bytes = do - let dir = takeDirectory goldenFile - createDirectoryIfMissing True dir - Strict.writeFile goldenFile bytes - - actualValue :: Strict.ByteString - actualValue = CBOR.toStrictByteString (enc example) + goldenTest + testName + (Strict.readFile goldenFile) + (either exceptionToByteString id <$> try (evaluate actualValue)) + diff + updateGoldenFile + where + -- Copied from tasty-golden because it isn't exported + updateGoldenFile :: Strict.ByteString -> IO () + updateGoldenFile bytes = do + let dir = takeDirectory goldenFile + createDirectoryIfMissing True dir + 
Strict.writeFile goldenFile bytes - exceptionToByteString :: SomeException -> Strict.ByteString - exceptionToByteString = BS.UTF8.fromString . show + actualValue :: Strict.ByteString + actualValue = CBOR.toStrictByteString (enc example) - -- | Use 'ediff' ('ToExpr') to diff the 'FlatTerm' representation. - diff :: Strict.ByteString -> Strict.ByteString -> IO (Maybe String) - diff golden actual = do - actualRes <- fmap (first (\(e :: SomeException) -> e)) - . try - . evaluate - . forceElemsToWHNF - . CBOR.toFlatTerm - . enc - $ example - return $ case (actualRes, decodeAsFlatTerm golden) of - (Left e, Right goldenFlatTerm) - -- Encoder threw an exception and the golden output was valid - -- CBOR. However, sometimes the 'show'n exception is also valid - -- CBOR. So if the exception and the golden output match, the test - -- passes. - | exceptionToByteString e == golden -> Nothing - | otherwise -> Just $ unlines [ - "Exception thrown by encoder doesn't match the golden CBOR output" - , "Exception:" - , show e - , "Golden term:" - , condense goldenFlatTerm - ] + exceptionToByteString :: SomeException -> Strict.ByteString + exceptionToByteString = BS.UTF8.fromString . show - (Left e, Left _) - -- Encoder threw an exception. 
The golden output was not valid - -- CBOR and the bytestrings match: we expected the exception - | exceptionToByteString e == golden -> Nothing - | otherwise -> Just $ unlines [ - "Exception thrown by encoder doesn't match the golden output" - , "Exception:" - , show e - , "Golden output:" - , BS.UTF8.toString golden - ] - - (Right _actualFlatTerm, Right _goldenFlatTerm) - | actual == golden -> Nothing - | otherwise -> Just $ unlines [ - "Golden term /= actual term, diff golden actual:" - , show (ansiWlEditExpr (ediff (CBORBytes golden) (CBORBytes actual))) - ] - - (Right actualFlatTerm, Left e) -> Just $ unlines [ - "Golden output /= actual term:" - , "Golden output is not valid CBOR:" - , BS.UTF8.toString golden - , "Exception: " - , show e - , "Actual term:" - , condense actualFlatTerm - ] + -- \| Use 'ediff' ('ToExpr') to diff the 'FlatTerm' representation. + diff :: Strict.ByteString -> Strict.ByteString -> IO (Maybe String) + diff golden actual = do + actualRes <- + fmap (first (\(e :: SomeException) -> e)) + . try + . evaluate + . forceElemsToWHNF + . CBOR.toFlatTerm + . enc + $ example + return $ case (actualRes, decodeAsFlatTerm golden) of + (Left e, Right goldenFlatTerm) + -- Encoder threw an exception and the golden output was valid + -- CBOR. However, sometimes the 'show'n exception is also valid + -- CBOR. So if the exception and the golden output match, the test + -- passes. + | exceptionToByteString e == golden -> Nothing + | otherwise -> + Just $ + unlines + [ "Exception thrown by encoder doesn't match the golden CBOR output" + , "Exception:" + , show e + , "Golden term:" + , condense goldenFlatTerm + ] + (Left e, Left _) + -- Encoder threw an exception. 
The golden output was not valid + -- CBOR and the bytestrings match: we expected the exception + | exceptionToByteString e == golden -> Nothing + | otherwise -> + Just $ + unlines + [ "Exception thrown by encoder doesn't match the golden output" + , "Exception:" + , show e + , "Golden output:" + , BS.UTF8.toString golden + ] + (Right _actualFlatTerm, Right _goldenFlatTerm) + | actual == golden -> Nothing + | otherwise -> + Just $ + unlines + [ "Golden term /= actual term, diff golden actual:" + , show (ansiWlEditExpr (ediff (CBORBytes golden) (CBORBytes actual))) + ] + (Right actualFlatTerm, Left e) -> + Just $ + unlines + [ "Golden output /= actual term:" + , "Golden output is not valid CBOR:" + , BS.UTF8.toString golden + , "Exception: " + , show e + , "Actual term:" + , condense actualFlatTerm + ] goldenTests :: - HasCallStack - => TestName - -> Labelled a - -> (a -> Encoding) - -> FilePath -- ^ Folder containing the golden files - -> TestTree + HasCallStack => + TestName -> + Labelled a -> + (a -> Encoding) -> + -- | Folder containing the golden files + FilePath -> + TestTree goldenTests testName examples enc goldenFolder - | nub labels /= labels - = error $ "Examples with the same label for " <> testName - | [(Nothing, example)] <- examples - -- If there's just a single unlabelled example, no need for grouping, - -- which makes the output more verbose. - = goldenTestCBOR testName example enc (goldenFolder testName) - | otherwise - = testGroup testName [ - goldenTestCBOR testName' example enc (goldenFolder testName') - | (mbLabel, example) <- examples - , let testName' = case mbLabel of - Nothing -> testName - Just label -> testName <> "_" <> label - ] - where - labels :: [Maybe String] - labels = map fst examples + | nub labels /= labels = + error $ "Examples with the same label for " <> testName + | [(Nothing, example)] <- examples = + -- If there's just a single unlabelled example, no need for grouping, + -- which makes the output more verbose. 
+ goldenTestCBOR testName example enc (goldenFolder testName) + | otherwise = + testGroup + testName + [ goldenTestCBOR testName' example enc (goldenFolder testName') + | (mbLabel, example) <- examples + , let testName' = case mbLabel of + Nothing -> testName + Just label -> testName <> "_" <> label + ] + where + labels :: [Maybe String] + labels = map fst examples goldenTests' :: - HasCallStack - => TestName - -> Labelled (a, a -> Encoding) - -> FilePath -- ^ Folder containing the golden files - -> TestTree + HasCallStack => + TestName -> + Labelled (a, a -> Encoding) -> + -- | Folder containing the golden files + FilePath -> + TestTree goldenTests' testName examples goldenFolder - | nub labels /= labels - = error $ "Examples with the same label for " <> testName - | [(Nothing, (example, exampleEncoder))] <- examples - -- If there's just a single unlabelled example, no need for grouping, - -- which makes the output more verbose. - = goldenTestCBOR testName example exampleEncoder (goldenFolder testName) - | otherwise - = testGroup testName [ - goldenTestCBOR testName' example exampleEncoder (goldenFolder testName') - | (mbLabel, (example, exampleEncoder)) <- examples - , let testName' = case mbLabel of - Nothing -> testName - Just label -> testName <> "_" <> label - ] - where - labels :: [Maybe String] - labels = map fst examples + | nub labels /= labels = + error $ "Examples with the same label for " <> testName + | [(Nothing, (example, exampleEncoder))] <- examples = + -- If there's just a single unlabelled example, no need for grouping, + -- which makes the output more verbose. 
+ goldenTestCBOR testName example exampleEncoder (goldenFolder testName) + | otherwise = + testGroup + testName + [ goldenTestCBOR testName' example exampleEncoder (goldenFolder testName') + | (mbLabel, (example, exampleEncoder)) <- examples + , let testName' = case mbLabel of + Nothing -> testName + Just label -> testName <> "_" <> label + ] + where + labels :: [Maybe String] + labels = map fst examples {------------------------------------------------------------------------------- Skeletons @@ -232,7 +249,6 @@ goldenTests' testName examples goldenFolder -- colons cannot be used in Windows file/folder names. class ToGoldenDirectory a where toGoldenDirectory :: a -> FilePath - default toGoldenDirectory :: Show a => a -> FilePath toGoldenDirectory = show @@ -244,153 +260,162 @@ class ToGoldenDirectory a where -- 'CardanoNodeToNodeVersion1', we 'show' the exception and use that as the -- output. goldenTest_all :: - ( SerialiseDiskConstraints blk - , SerialiseNodeToNodeConstraints blk - , SerialiseNodeToClientConstraints blk - , SupportedNetworkProtocolVersion blk - - , ToGoldenDirectory (BlockNodeToNodeVersion blk) - , ToGoldenDirectory (QueryVersion, BlockNodeToClientVersion blk) - - , HasCallStack - ) - => CodecConfig blk - -> FilePath - -- ^ Path relative to the root of the repository that contains the golden - -- files - -> Examples blk - -> TestTree + ( SerialiseDiskConstraints blk + , SerialiseNodeToNodeConstraints blk + , SerialiseNodeToClientConstraints blk + , SupportedNetworkProtocolVersion blk + , ToGoldenDirectory (BlockNodeToNodeVersion blk) + , ToGoldenDirectory (QueryVersion, BlockNodeToClientVersion blk) + , HasCallStack + ) => + CodecConfig blk -> + -- | Path relative to the root of the repository that contains the golden + -- files + FilePath -> + Examples blk -> + TestTree goldenTest_all codecConfig goldenDir examples = - testGroup "Golden tests" [ - goldenTest_SerialiseDisk codecConfig goldenDir examples - , goldenTest_SerialiseNodeToNode 
codecConfig goldenDir examples - , goldenTest_SerialiseNodeToClient codecConfig goldenDir examples - ] + testGroup + "Golden tests" + [ goldenTest_SerialiseDisk codecConfig goldenDir examples + , goldenTest_SerialiseNodeToNode codecConfig goldenDir examples + , goldenTest_SerialiseNodeToClient codecConfig goldenDir examples + ] -- TODO how can we ensure that we have a test for each constraint listed in -- 'SerialiseDiskConstraints'? goldenTest_SerialiseDisk :: - forall blk. - ( SerialiseDiskConstraints blk - , HasCallStack - ) - => CodecConfig blk - -> FilePath - -> Examples blk - -> TestTree -goldenTest_SerialiseDisk codecConfig goldenDir Examples {..} = - testGroup "SerialiseDisk" [ - test "Block" exampleBlock (encodeDisk codecConfig) - , test "HeaderHash" exampleHeaderHash encode - , test "LedgerState" exampleLedgerState (encodeDisk codecConfig) - , test "AnnTip" exampleAnnTip (encodeDisk codecConfig) - , test "ChainDepState" exampleChainDepState (encodeDisk codecConfig) - , test "ExtLedgerState" exampleExtLedgerState encodeExt - , testLedgerTables - ] - where - test :: TestName -> Labelled a -> (a -> Encoding) -> TestTree - test testName exampleValues enc = - goldenTests - testName - exampleValues - enc - (goldenDir "disk") + forall blk. 
+ ( SerialiseDiskConstraints blk + , HasCallStack + ) => + CodecConfig blk -> + FilePath -> + Examples blk -> + TestTree +goldenTest_SerialiseDisk codecConfig goldenDir Examples{..} = + testGroup + "SerialiseDisk" + [ test "Block" exampleBlock (encodeDisk codecConfig) + , test "HeaderHash" exampleHeaderHash encode + , test "LedgerState" exampleLedgerState (encodeDisk codecConfig) + , test "AnnTip" exampleAnnTip (encodeDisk codecConfig) + , test "ChainDepState" exampleChainDepState (encodeDisk codecConfig) + , test "ExtLedgerState" exampleExtLedgerState encodeExt + , testLedgerTables + ] + where + test :: TestName -> Labelled a -> (a -> Encoding) -> TestTree + test testName exampleValues enc = + goldenTests + testName + exampleValues + enc + (goldenDir "disk") - testLedgerTables :: TestTree - testLedgerTables = - goldenTests' - "LedgerTables" - (zipWith (\(lbl, tbs) (_, st) -> (lbl, (tbs, valuesMKEncoder st))) exampleLedgerTables exampleLedgerState) - (goldenDir "disk") + testLedgerTables :: TestTree + testLedgerTables = + goldenTests' + "LedgerTables" + ( zipWith + (\(lbl, tbs) (_, st) -> (lbl, (tbs, valuesMKEncoder st))) + exampleLedgerTables + exampleLedgerState + ) + (goldenDir "disk") - encodeExt = encodeDiskExtLedgerState codecConfig + encodeExt = encodeDiskExtLedgerState codecConfig -- TODO how can we ensure that we have a test for each constraint listed in -- 'SerialiseNodeToNodeConstraints'? goldenTest_SerialiseNodeToNode :: - forall blk. - ( SerialiseNodeToNodeConstraints blk - , SupportedNetworkProtocolVersion blk - , ToGoldenDirectory (BlockNodeToNodeVersion blk) - , HasCallStack - ) - => CodecConfig blk - -> FilePath - -> Examples blk - -> TestTree -goldenTest_SerialiseNodeToNode codecConfig goldenDir Examples {..} = - testGroup "SerialiseNodeToNode" [ - testVersion version - | version <- nub $ Map.elems $ supportedNodeToNodeVersions $ Proxy @blk + forall blk. 
+ ( SerialiseNodeToNodeConstraints blk + , SupportedNetworkProtocolVersion blk + , ToGoldenDirectory (BlockNodeToNodeVersion blk) + , HasCallStack + ) => + CodecConfig blk -> + FilePath -> + Examples blk -> + TestTree +goldenTest_SerialiseNodeToNode codecConfig goldenDir Examples{..} = + testGroup + "SerialiseNodeToNode" + [ testVersion version + | version <- nub $ Map.elems $ supportedNodeToNodeVersions $ Proxy @blk + ] + where + testVersion :: BlockNodeToNodeVersion blk -> TestTree + testVersion version = + testGroup + (toGoldenDirectory version) + [ test "Block" exampleBlock + , test "Header" exampleHeader + , test "SerialisedBlock" exampleSerialisedBlock + , test "SerialisedHeader" exampleSerialisedHeader + , test "GenTx" exampleGenTx + , test "GenTxId" exampleGenTxId ] - where - testVersion :: BlockNodeToNodeVersion blk -> TestTree - testVersion version = testGroup (toGoldenDirectory version) [ - test "Block" exampleBlock - , test "Header" exampleHeader - , test "SerialisedBlock" exampleSerialisedBlock - , test "SerialisedHeader" exampleSerialisedHeader - , test "GenTx" exampleGenTx - , test "GenTxId" exampleGenTxId - ] - where - test :: SerialiseNodeToNode blk a => TestName -> Labelled a -> TestTree - test testName exampleValues = - goldenTests - testName - exampleValues - (encodeNodeToNode codecConfig version) - (goldenDir toGoldenDirectory version) + where + test :: SerialiseNodeToNode blk a => TestName -> Labelled a -> TestTree + test testName exampleValues = + goldenTests + testName + exampleValues + (encodeNodeToNode codecConfig version) + (goldenDir toGoldenDirectory version) -- TODO how can we ensure that we have a test for each constraint listed in -- 'SerialiseNodeToClientConstraints'? goldenTest_SerialiseNodeToClient :: - forall blk. 
- ( SerialiseNodeToClientConstraints blk - , SupportedNetworkProtocolVersion blk - , ToGoldenDirectory (QueryVersion, BlockNodeToClientVersion blk) - , HasCallStack - ) - => CodecConfig blk - -> FilePath - -> Examples blk - -> TestTree -goldenTest_SerialiseNodeToClient codecConfig goldenDir Examples {..} = - testGroup "SerialiseNodeToClient" [ - testVersion (queryVersion, blockVersion) - | (queryVersion, blockVersion) <- - nub . fmap (first nodeToClientVersionToQueryVersion) . Map.toList $ - supportedNodeToClientVersions (Proxy @blk) + forall blk. + ( SerialiseNodeToClientConstraints blk + , SupportedNetworkProtocolVersion blk + , ToGoldenDirectory (QueryVersion, BlockNodeToClientVersion blk) + , HasCallStack + ) => + CodecConfig blk -> + FilePath -> + Examples blk -> + TestTree +goldenTest_SerialiseNodeToClient codecConfig goldenDir Examples{..} = + testGroup + "SerialiseNodeToClient" + [ testVersion (queryVersion, blockVersion) + | (queryVersion, blockVersion) <- + nub . fmap (first nodeToClientVersionToQueryVersion) . 
Map.toList $ + supportedNodeToClientVersions (Proxy @blk) + ] + where + testVersion :: (QueryVersion, BlockNodeToClientVersion blk) -> TestTree + testVersion versions@(_, blockVersion) = + testGroup + (toGoldenDirectory versions) + [ test "Block" exampleBlock enc' + , test "SerialisedBlock" exampleSerialisedBlock enc' + , test "GenTx" exampleGenTx enc' + , test "GenTxId" exampleGenTxId enc' + , test "ApplyTxErr" exampleApplyTxErr enc' + , test "Query" exampleQuery enc' + , test "SlotNo" exampleSlotNo enc' + , test "LedgerConfig" exampleLedgerConfig enc' + , test "Result" exampleResult encRes ] - where - testVersion :: (QueryVersion, BlockNodeToClientVersion blk) -> TestTree - testVersion versions@(_, blockVersion) = testGroup (toGoldenDirectory versions) [ - test "Block" exampleBlock enc' - , test "SerialisedBlock" exampleSerialisedBlock enc' - , test "GenTx" exampleGenTx enc' - , test "GenTxId" exampleGenTxId enc' - , test "ApplyTxErr" exampleApplyTxErr enc' - , test "Query" exampleQuery enc' - , test "SlotNo" exampleSlotNo enc' - , test "LedgerConfig" exampleLedgerConfig enc' - , test "Result" exampleResult encRes - ] - where - - enc' :: SerialiseNodeToClient blk a => a -> Encoding - enc' = encodeNodeToClient codecConfig blockVersion + where + enc' :: SerialiseNodeToClient blk a => a -> Encoding + enc' = encodeNodeToClient codecConfig blockVersion - encRes :: SomeResult blk -> Encoding - encRes (SomeResult q r) = encodeBlockQueryResult codecConfig blockVersion q r + encRes :: SomeResult blk -> Encoding + encRes (SomeResult q r) = encodeBlockQueryResult codecConfig blockVersion q r - test :: TestName -> Labelled a -> (a -> Encoding) -> TestTree - test testName exampleValues enc = - goldenTests - testName - exampleValues - enc - (goldenDir toGoldenDirectory versions) + test :: TestName -> Labelled a -> (a -> Encoding) -> TestTree + test testName exampleValues enc = + goldenTests + testName + exampleValues + enc + (goldenDir toGoldenDirectory versions) 
{------------------------------------------------------------------------------- FlatTerm diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Roundtrip.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Roundtrip.hs index 109c4c3753..f4d2826f0c 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Roundtrip.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/Roundtrip.hs @@ -14,11 +14,12 @@ {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} -module Test.Util.Serialisation.Roundtrip ( - -- * Basic test helpers +module Test.Util.Serialisation.Roundtrip + ( -- * Basic test helpers roundtrip , roundtrip' , roundtripAnd + -- * Test skeleton , Arbitrary' , Coherent (..) @@ -31,80 +32,95 @@ module Test.Util.Serialisation.Roundtrip ( , roundtrip_all , roundtrip_all_skipping , roundtrip_envelopes + -- ** Exclusion of CBOR validity tests , ShouldCheckCBORValidity (CheckCBORValidity, DoNotCheckCBORValidity) + -- * Roundtrip tests for 'Example's , examplesRoundtrip ) where -import Codec.CBOR.Decoding (Decoder) -import Codec.CBOR.Encoding (Encoding) -import Codec.CBOR.FlatTerm (toFlatTerm, validFlatTerm) -import Codec.CBOR.Read (DeserialiseFailure, deserialiseFromBytes) -import Codec.CBOR.Write (toLazyByteString) -import Codec.Serialise (decode, encode) -import Control.Arrow (left) -import Control.Monad (unless, when) -import qualified Data.ByteString.Base16.Lazy as Base16 -import qualified Data.ByteString.Lazy as Lazy -import qualified Data.ByteString.Lazy.Char8 as Char8 -import qualified Data.ByteString.Short as Short -import Data.Constraint -import Data.Function (on) -import Data.Maybe (fromMaybe) -import qualified Data.Text.Lazy as T -import Data.Typeable -import GHC.Generics (Generic) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation (AnnTip) -import Ouroboros.Consensus.Ledger.Basics -import 
Ouroboros.Consensus.Ledger.Extended (decodeDiskExtLedgerState, - encodeDiskExtLedgerState) -import Ouroboros.Consensus.Ledger.Query -import qualified Ouroboros.Consensus.Ledger.Query as Query -import Ouroboros.Consensus.Ledger.SupportsMempool (ApplyTxErr, GenTx, - GenTxId) -import Ouroboros.Consensus.Ledger.SupportsProtocol - (LedgerSupportsProtocol) -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run (SerialiseNodeToClientConstraints, - SerialiseNodeToNodeConstraints (..)) -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Protocol.Abstract (ChainDepState) -import Ouroboros.Consensus.Storage.ChainDB (SerialiseDiskConstraints) -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Network.Block (Serialised (..), fromSerialised, - mkSerialised) -import Quiet (Quiet (..)) -import Test.Tasty -import Test.Tasty.QuickCheck -import Test.Util.Orphans.Arbitrary () -import Test.Util.Serialisation.Examples (Examples (..), Labelled) -import Test.Util.Serialisation.SomeResult (SomeResult (..)) -import Test.Util.TestEnv (adjustQuickCheckTests) -import Text.Pretty.Simple (pShow) +import Codec.CBOR.Decoding (Decoder) +import Codec.CBOR.Encoding (Encoding) +import Codec.CBOR.FlatTerm (toFlatTerm, validFlatTerm) +import Codec.CBOR.Read (DeserialiseFailure, deserialiseFromBytes) +import Codec.CBOR.Write (toLazyByteString) +import Codec.Serialise (decode, encode) +import Control.Arrow (left) +import Control.Monad (unless, when) +import Data.ByteString.Base16.Lazy qualified as Base16 +import Data.ByteString.Lazy qualified as Lazy +import Data.ByteString.Lazy.Char8 qualified as Char8 +import Data.ByteString.Short qualified as Short +import Data.Constraint +import Data.Function (on) +import Data.Maybe (fromMaybe) +import Data.Text.Lazy qualified as T +import Data.Typeable +import GHC.Generics (Generic) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation (AnnTip) +import 
Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Extended + ( decodeDiskExtLedgerState + , encodeDiskExtLedgerState + ) +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.Query qualified as Query +import Ouroboros.Consensus.Ledger.SupportsMempool + ( ApplyTxErr + , GenTx + , GenTxId + ) +import Ouroboros.Consensus.Ledger.SupportsProtocol + ( LedgerSupportsProtocol + ) +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run + ( SerialiseNodeToClientConstraints + , SerialiseNodeToNodeConstraints (..) + ) +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Protocol.Abstract (ChainDepState) +import Ouroboros.Consensus.Storage.ChainDB (SerialiseDiskConstraints) +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Network.Block + ( Serialised (..) + , fromSerialised + , mkSerialised + ) +import Quiet (Quiet (..)) +import Test.Tasty +import Test.Tasty.QuickCheck +import Test.Util.Orphans.Arbitrary () +import Test.Util.Serialisation.Examples (Examples (..), Labelled) +import Test.Util.Serialisation.SomeResult (SomeResult (..)) +import Test.Util.TestEnv (adjustQuickCheckTests) +import Text.Pretty.Simple (pShow) {------------------------------------------------------------------------------ Basic test helpers ------------------------------------------------------------------------------} -roundtrip :: (Eq a, Show a) - => (a -> Encoding) - -> (forall s. Decoder s a) - -> a - -> Property +roundtrip :: + (Eq a, Show a) => + (a -> Encoding) -> + (forall s. Decoder s a) -> + a -> + Property roundtrip enc dec = roundtrip' enc (const <$> dec) -- | Perform roundtrip tests, checking the validity of the encoded CBOR. -- -- See 'roundtripAnd' --- -roundtrip' :: forall a. - (Eq a, Show a) - => (a -> Encoding) -- ^ @enc@ - -> (forall s. Decoder s (Lazy.ByteString -> a)) - -> a - -> Property +roundtrip' :: + forall a. 
+ (Eq a, Show a) => + -- | @enc@ + (a -> Encoding) -> + (forall s. Decoder s (Lazy.ByteString -> a)) -> + a -> + Property roundtrip' = roundtripAnd CheckCBORValidity data ShouldCheckCBORValidity = CheckCBORValidity | DoNotCheckCBORValidity @@ -124,80 +140,83 @@ data ShouldCheckCBORValidity = CheckCBORValidity | DoNotCheckCBORValidity -- 'Lazy.ByteString' encoding is not equal to @enc a'@. One way in which this -- might happen is if the annotation is not canonical CBOR, but @enc@ does -- produce canonical CBOR. -roundtripAnd :: forall a. - (Eq a, Show a) - => ShouldCheckCBORValidity - -> (a -> Encoding) -- ^ @enc@ - -> (forall s. Decoder s (Lazy.ByteString -> a)) - -> a - -> Property +roundtripAnd :: + forall a. + (Eq a, Show a) => + ShouldCheckCBORValidity -> + -- | @enc@ + (a -> Encoding) -> + (forall s. Decoder s (Lazy.ByteString -> a)) -> + a -> + Property roundtripAnd check enc dec a = checkRoundtripResult $ do - let enc_a = enc a - bs = toLazyByteString enc_a - - when (check == CheckCBORValidity) $ - (validFlatTerm (toFlatTerm enc_a) ?! "Encoded flat term is not valid: " <> show enc_a) - (bsRem, a' ) <- deserialiseFromBytes dec bs `onError` showByteString bs - Lazy.null bsRem ?! "Left-over bytes: " <> toBase16 bsRem - a == a' bs ?! pShowNeq a (a' bs) - where - (?!) :: Bool -> String -> Either String () - cond ?! msg = unless cond $ Left msg - infix 1 ?! - - pShowNeq x y = T.unpack (pShow x) <> "\n \t/= \n" <> T.unpack (pShow y) - - onError :: - Either DeserialiseFailure (Char8.ByteString, Char8.ByteString -> a) - -> (DeserialiseFailure -> String) - -> Either String (Char8.ByteString, Char8.ByteString -> a) - onError result showDeserialiseFailure = - left showDeserialiseFailure result - - showByteString :: - Char8.ByteString - -> DeserialiseFailure - -> String - showByteString bs deserialiseFailure = - show deserialiseFailure <> "\n" <> "When deserialising " <> toBase16 bs - - toBase16 :: Lazy.ByteString -> String - toBase16 = Char8.unpack . 
Base16.encode - - checkRoundtripResult :: Either String () -> Property - checkRoundtripResult (Left str) = counterexample str False - checkRoundtripResult (Right ()) = property () + let enc_a = enc a + bs = toLazyByteString enc_a + + when (check == CheckCBORValidity) $ + (validFlatTerm (toFlatTerm enc_a) ?! "Encoded flat term is not valid: " <> show enc_a) + (bsRem, a') <- deserialiseFromBytes dec bs `onError` showByteString bs + Lazy.null bsRem ?! "Left-over bytes: " <> toBase16 bsRem + a == a' bs ?! pShowNeq a (a' bs) + where + (?!) :: Bool -> String -> Either String () + cond ?! msg = unless cond $ Left msg + infix 1 ?! + + pShowNeq x y = T.unpack (pShow x) <> "\n \t/= \n" <> T.unpack (pShow y) + + onError :: + Either DeserialiseFailure (Char8.ByteString, Char8.ByteString -> a) -> + (DeserialiseFailure -> String) -> + Either String (Char8.ByteString, Char8.ByteString -> a) + onError result showDeserialiseFailure = + left showDeserialiseFailure result + + showByteString :: + Char8.ByteString -> + DeserialiseFailure -> + String + showByteString bs deserialiseFailure = + show deserialiseFailure <> "\n" <> "When deserialising " <> toBase16 bs + + toBase16 :: Lazy.ByteString -> String + toBase16 = Char8.unpack . Base16.encode + + checkRoundtripResult :: Either String () -> Property + checkRoundtripResult (Left str) = counterexample str False + checkRoundtripResult (Right ()) = property () roundtripComparingEncoding :: - (a -> Encoding) - -> (forall s. Decoder s a) - -> a - -> Property + (a -> Encoding) -> + (forall s. Decoder s a) -> + a -> + Property roundtripComparingEncoding enc dec = roundtripComparingEncoding' enc (const <$> dec) -- | Like 'roundtrip'', but checks for equality of the encoding (i.e. the byte -- string) instead of the @a@ values using @Eq a@. This is useful When we don't -- have an @Eq a@ instance. roundtripComparingEncoding' :: - (a -> Encoding) -- ^ @enc@ - -> (forall s. 
Decoder s (Lazy.ByteString -> a)) - -> a - -> Property + -- | @enc@ + (a -> Encoding) -> + (forall s. Decoder s (Lazy.ByteString -> a)) -> + a -> + Property roundtripComparingEncoding' enc dec a = case deserialiseFromBytes dec bs of - Right (remainingBytes, a') - | let bs' = toLazyByteString (enc (a' bs)) - , Lazy.null remainingBytes - -> bs === bs' - | otherwise - -> counterexample ("left-over bytes: " <> toBase16 remainingBytes) False - Left e - -> counterexample (show e) $ - counterexample (toBase16 bs) False - where - bs = toLazyByteString (enc a) - - toBase16 :: Lazy.ByteString -> String - toBase16 = Char8.unpack . Base16.encode + Right (remainingBytes, a') + | let bs' = toLazyByteString (enc (a' bs)) + , Lazy.null remainingBytes -> + bs === bs' + | otherwise -> + counterexample ("left-over bytes: " <> toBase16 remainingBytes) False + Left e -> + counterexample (show e) $ + counterexample (toBase16 bs) False + where + bs = toLazyByteString (enc a) + + toBase16 :: Lazy.ByteString -> String + toBase16 = Char8.unpack . Base16.encode {------------------------------------------------------------------------------ Test skeleton @@ -208,45 +227,39 @@ roundtripComparingEncoding' enc dec a = case deserialiseFromBytes dec bs of type Arbitrary' a = (Arbitrary a, Eq a, Show a) -- | All roundtrip tests -roundtrip_all - :: forall blk. 
- ( SerialiseDiskConstraints blk - , SerialiseNodeToNodeConstraints blk - , SerialiseNodeToClientConstraints blk - - , Show (BlockNodeToNodeVersion blk) - - , StandardHash blk - , GetHeader blk - - , Arbitrary' blk - , Arbitrary' (Header blk) - , Arbitrary' (HeaderHash blk) - , Arbitrary' (LedgerState blk EmptyMK) - , Arbitrary' (AnnTip blk) - , Arbitrary' (ChainDepState (BlockProtocol blk)) - - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) blk - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Coherent blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Header blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTx blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTxId blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (SomeSecond (NestedCtxt Header) blk) - - , ArbitraryWithVersion (BlockNodeToClientVersion blk) blk - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (GenTx blk) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (ApplyTxErr blk) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeBlockQuery (BlockQuery blk)) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeResult blk) - , Arbitrary (WithVersion (BlockNodeToClientVersion blk) (LedgerConfig blk)) - , ArbitraryWithVersion (QueryVersion, BlockNodeToClientVersion blk) (SomeSecond Query blk) - - , Show (BlockNodeToClientVersion blk) - , BlockSupportsLedgerQuery blk - ) - => CodecConfig blk - -> (forall a. NestedCtxt_ blk Header a -> Dict (Eq a, Show a)) - -> TestTree +roundtrip_all :: + forall blk. 
+ ( SerialiseDiskConstraints blk + , SerialiseNodeToNodeConstraints blk + , SerialiseNodeToClientConstraints blk + , Show (BlockNodeToNodeVersion blk) + , StandardHash blk + , GetHeader blk + , Arbitrary' blk + , Arbitrary' (Header blk) + , Arbitrary' (HeaderHash blk) + , Arbitrary' (LedgerState blk EmptyMK) + , Arbitrary' (AnnTip blk) + , Arbitrary' (ChainDepState (BlockProtocol blk)) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) blk + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Coherent blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Header blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTx blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTxId blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (SomeSecond (NestedCtxt Header) blk) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) blk + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (GenTx blk) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (ApplyTxErr blk) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeBlockQuery (BlockQuery blk)) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeResult blk) + , Arbitrary (WithVersion (BlockNodeToClientVersion blk) (LedgerConfig blk)) + , ArbitraryWithVersion (QueryVersion, BlockNodeToClientVersion blk) (SomeSecond Query blk) + , Show (BlockNodeToClientVersion blk) + , BlockSupportsLedgerQuery blk + ) => + CodecConfig blk -> + (forall a. NestedCtxt_ blk Header a -> Dict (Eq a, Show a)) -> + TestTree roundtrip_all = roundtrip_all_skipping (const CheckCBORValidity) -- | All roundtrip tests, skipping the specified CBOR validity tests. @@ -258,99 +271,97 @@ roundtrip_all = roundtrip_all_skipping (const CheckCBORValidity) -- -- - Node to client tests due to -- [this issue](https://github.com/IntersectMBO/cardano-ledger/issues/3800). --- -roundtrip_all_skipping - :: forall blk. 
- ( SerialiseDiskConstraints blk - , SerialiseNodeToNodeConstraints blk - , SerialiseNodeToClientConstraints blk - - , Show (BlockNodeToNodeVersion blk) - - , StandardHash blk - , GetHeader blk - - , Arbitrary' blk - , Arbitrary' (Header blk) - , Arbitrary' (HeaderHash blk) - , Arbitrary' (LedgerState blk EmptyMK) - , Arbitrary' (AnnTip blk) - , Arbitrary' (ChainDepState (BlockProtocol blk)) - - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) blk - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Coherent blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Header blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTx blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTxId blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (SomeSecond (NestedCtxt Header) blk) - - , ArbitraryWithVersion (BlockNodeToClientVersion blk) blk - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (GenTx blk) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (ApplyTxErr blk) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeBlockQuery (BlockQuery blk)) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeResult blk) - , Arbitrary (WithVersion (BlockNodeToClientVersion blk) (LedgerConfig blk)) - , ArbitraryWithVersion (QueryVersion, BlockNodeToClientVersion blk) (SomeSecond Query blk) - - , Show (BlockNodeToClientVersion blk) - , BlockSupportsLedgerQuery blk - ) - => (TestName -> ShouldCheckCBORValidity) - -> CodecConfig blk - -> (forall a. NestedCtxt_ blk Header a -> Dict (Eq a, Show a)) - -> TestTree +roundtrip_all_skipping :: + forall blk. 
+ ( SerialiseDiskConstraints blk + , SerialiseNodeToNodeConstraints blk + , SerialiseNodeToClientConstraints blk + , Show (BlockNodeToNodeVersion blk) + , StandardHash blk + , GetHeader blk + , Arbitrary' blk + , Arbitrary' (Header blk) + , Arbitrary' (HeaderHash blk) + , Arbitrary' (LedgerState blk EmptyMK) + , Arbitrary' (AnnTip blk) + , Arbitrary' (ChainDepState (BlockProtocol blk)) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) blk + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Coherent blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Header blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTx blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTxId blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (SomeSecond (NestedCtxt Header) blk) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) blk + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (GenTx blk) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (ApplyTxErr blk) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeBlockQuery (BlockQuery blk)) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeResult blk) + , Arbitrary (WithVersion (BlockNodeToClientVersion blk) (LedgerConfig blk)) + , ArbitraryWithVersion (QueryVersion, BlockNodeToClientVersion blk) (SomeSecond Query blk) + , Show (BlockNodeToClientVersion blk) + , BlockSupportsLedgerQuery blk + ) => + (TestName -> ShouldCheckCBORValidity) -> + CodecConfig blk -> + (forall a. 
NestedCtxt_ blk Header a -> Dict (Eq a, Show a)) -> + TestTree roundtrip_all_skipping shouldCheckCBORvalidity ccfg dictNestedHdr = - testGroup "Roundtrip" [ - testGroup "SerialiseDisk" $ roundtrip_SerialiseDisk ccfg dictNestedHdr - , testGroup "SerialiseNodeToNode" $ roundtrip_SerialiseNodeToNode ccfg - , testGroup "SerialiseNodeToClient" $ roundtrip_SerialiseNodeToClient - shouldCheckCBORvalidity ccfg - , testProperty "envelopes" $ roundtrip_envelopes ccfg - , testProperty "ConvertRawHash" $ roundtrip_ConvertRawHash (Proxy @blk) - , testProperty "hashSize" $ prop_hashSize (Proxy @blk) - , testProperty "estimateBlockSize" $ prop_estimateBlockSize ccfg - ] + testGroup + "Roundtrip" + [ testGroup "SerialiseDisk" $ roundtrip_SerialiseDisk ccfg dictNestedHdr + , testGroup "SerialiseNodeToNode" $ roundtrip_SerialiseNodeToNode ccfg + , testGroup "SerialiseNodeToClient" $ + roundtrip_SerialiseNodeToClient + shouldCheckCBORvalidity + ccfg + , testProperty "envelopes" $ roundtrip_envelopes ccfg + , testProperty "ConvertRawHash" $ roundtrip_ConvertRawHash (Proxy @blk) + , testProperty "hashSize" $ prop_hashSize (Proxy @blk) + , testProperty "estimateBlockSize" $ prop_estimateBlockSize ccfg + ] -- TODO how can we ensure that we have a test for each constraint listed in -- 'SerialiseDiskConstraints'? -roundtrip_SerialiseDisk - :: forall blk. - ( SerialiseDiskConstraints blk - , Arbitrary' blk - , Arbitrary' (Header blk) - , Arbitrary' (LedgerState blk EmptyMK) - , Arbitrary' (AnnTip blk) - , Arbitrary' (ChainDepState (BlockProtocol blk)) - ) - => CodecConfig blk - -> (forall a. NestedCtxt_ blk Header a -> Dict (Eq a, Show a)) - -> [TestTree] +roundtrip_SerialiseDisk :: + forall blk. + ( SerialiseDiskConstraints blk + , Arbitrary' blk + , Arbitrary' (Header blk) + , Arbitrary' (LedgerState blk EmptyMK) + , Arbitrary' (AnnTip blk) + , Arbitrary' (ChainDepState (BlockProtocol blk)) + ) => + CodecConfig blk -> + (forall a. 
NestedCtxt_ blk Header a -> Dict (Eq a, Show a)) -> + [TestTree] roundtrip_SerialiseDisk ccfg dictNestedHdr = - [ testProperty "roundtrip block" $ - roundtrip' @blk (encodeDisk ccfg) (decodeDisk ccfg) - , testProperty "roundtrip Header" $ \hdr -> - case unnest hdr of - DepPair ctxt nestedHdr -> case dictNestedHdr (flipNestedCtxt ctxt) of - Dict -> - roundtrip' - (encodeDiskDep ccfg ctxt) - (decodeDiskDep ccfg ctxt) - nestedHdr - -- Since the 'LedgerState' is a large data structure, we lower the - -- number of tests to avoid slowing down the testsuite too much - , adjustQuickCheckTests (`div` 10) $ + [ testProperty "roundtrip block" $ + roundtrip' @blk (encodeDisk ccfg) (decodeDisk ccfg) + , testProperty "roundtrip Header" $ \hdr -> + case unnest hdr of + DepPair ctxt nestedHdr -> case dictNestedHdr (flipNestedCtxt ctxt) of + Dict -> + roundtrip' + (encodeDiskDep ccfg ctxt) + (decodeDiskDep ccfg ctxt) + nestedHdr + , -- Since the 'LedgerState' is a large data structure, we lower the + -- number of tests to avoid slowing down the testsuite too much + adjustQuickCheckTests (`div` 10) $ rt (Proxy @(LedgerState blk EmptyMK)) "LedgerState" - , rt (Proxy @(AnnTip blk)) "AnnTip" - , rt (Proxy @(ChainDepState (BlockProtocol blk))) "ChainDepState" - ] - where - rt :: forall a. (Arbitrary' a, EncodeDisk blk a, DecodeDisk blk a) - => Proxy a -> String -> TestTree - rt _ name = - testProperty ("roundtrip " <> name) $ - roundtrip @a - (encodeDisk ccfg) - (decodeDisk ccfg) + , rt (Proxy @(AnnTip blk)) "AnnTip" + , rt (Proxy @(ChainDepState (BlockProtocol blk))) "ChainDepState" + ] + where + rt :: + forall a. + (Arbitrary' a, EncodeDisk blk a, DecodeDisk blk a) => + Proxy a -> String -> TestTree + rt _ name = + testProperty ("roundtrip " <> name) $ + roundtrip @a + (encodeDisk ccfg) + (decodeDisk ccfg) -- | Used to generate arbitrary values for the serialisation roundtrip tests. 
-- As the serialisation format can change with the version, not all arbitrary @@ -365,11 +376,13 @@ data WithVersion v a = WithVersion v a -- @('WithVersion' v a)@. type ArbitraryWithVersion v a = (Arbitrary (WithVersion v a), Eq a, Show a) -instance ( blockVersion ~ BlockNodeToClientVersion blk - , Arbitrary blockVersion - , Arbitrary (WithVersion (BlockNodeToClientVersion blk) (SomeBlockQuery (BlockQuery blk))) - ) - => Arbitrary (WithVersion (QueryVersion, blockVersion) (SomeSecond Query blk)) where +instance + ( blockVersion ~ BlockNodeToClientVersion blk + , Arbitrary blockVersion + , Arbitrary (WithVersion (BlockNodeToClientVersion blk) (SomeBlockQuery (BlockQuery blk))) + ) => + Arbitrary (WithVersion (QueryVersion, blockVersion) (SomeSecond Query blk)) + where arbitrary = do queryVersion <- arbitrary case queryVersion of @@ -379,54 +392,64 @@ instance ( blockVersion ~ BlockNodeToClientVersion blk Query.QueryVersion1 -> genTopLevelQuery1 Query.QueryVersion2 -> genTopLevelQuery2 Query.QueryVersion3 -> genTopLevelQuery3 - where - mkEntry :: - QueryVersion - -> Query blk query - -> Gen - (WithVersion (QueryVersion, blockVersion) (SomeSecond Query blk)) - mkEntry qv q = do - blockV <- arbitrary - return (WithVersion (qv, blockV) (SomeSecond q)) - - genTopLevelQuery1 = - let version = Query.QueryVersion1 - in frequency - [ (15, arbitraryBlockQuery version ) - , (1 , mkEntry version GetSystemStart ) - ] - - genTopLevelQuery2 = - let version = Query.QueryVersion2 - in frequency - [ (15, arbitraryBlockQuery version ) - , (1 , mkEntry version GetSystemStart ) - , (1 , mkEntry version GetChainBlockNo) - , (1 , mkEntry version GetChainPoint ) - ] - - genTopLevelQuery3 = - let version = Query.QueryVersion3 - in frequency - [ (15, arbitraryBlockQuery version ) - , (1 , mkEntry version GetSystemStart ) - , (1 , mkEntry version GetChainBlockNo ) - , (1 , mkEntry version GetChainPoint ) - , (1 , mkEntry version DebugLedgerConfig) - ] - - arbitraryBlockQuery :: QueryVersion 
- -> Gen (WithVersion (QueryVersion, blockVersion) - (SomeSecond Query blk)) - arbitraryBlockQuery queryVersion = do - WithVersion blockV (SomeBlockQuery someBlockQuery) <- arbitrary - return (WithVersion (queryVersion, blockV) - (SomeSecond (BlockQuery someBlockQuery))) + where + mkEntry :: + QueryVersion -> + Query blk query -> + Gen + (WithVersion (QueryVersion, blockVersion) (SomeSecond Query blk)) + mkEntry qv q = do + blockV <- arbitrary + return (WithVersion (qv, blockV) (SomeSecond q)) + + genTopLevelQuery1 = + let version = Query.QueryVersion1 + in frequency + [ (15, arbitraryBlockQuery version) + , (1, mkEntry version GetSystemStart) + ] + + genTopLevelQuery2 = + let version = Query.QueryVersion2 + in frequency + [ (15, arbitraryBlockQuery version) + , (1, mkEntry version GetSystemStart) + , (1, mkEntry version GetChainBlockNo) + , (1, mkEntry version GetChainPoint) + ] + + genTopLevelQuery3 = + let version = Query.QueryVersion3 + in frequency + [ (15, arbitraryBlockQuery version) + , (1, mkEntry version GetSystemStart) + , (1, mkEntry version GetChainBlockNo) + , (1, mkEntry version GetChainPoint) + , (1, mkEntry version DebugLedgerConfig) + ] + + arbitraryBlockQuery :: + QueryVersion -> + Gen + ( WithVersion + (QueryVersion, blockVersion) + (SomeSecond Query blk) + ) + arbitraryBlockQuery queryVersion = do + WithVersion blockV (SomeBlockQuery someBlockQuery) <- arbitrary + return + ( WithVersion + (queryVersion, blockV) + (SomeSecond (BlockQuery someBlockQuery)) + ) -- | This is @OVERLAPPABLE@ because we have to override the default behaviour -- for e.g. 'Query's. 
-instance {-# OVERLAPPABLE #-} (Arbitrary version, Arbitrary a) - => Arbitrary (WithVersion version a) where +instance + {-# OVERLAPPABLE #-} + (Arbitrary version, Arbitrary a) => + Arbitrary (WithVersion version a) + where arbitrary = WithVersion <$> arbitrary <*> arbitrary -- | Used to generate slightly less arbitrary values @@ -435,220 +458,225 @@ instance {-# OVERLAPPABLE #-} (Arbitrary version, Arbitrary a) -- context-dependent. The original motivating example is that some of our -- serialization-adjacent properties require that the generated block contains -- a header and a body that match, ie are /coherent/. -newtype Coherent a = Coherent { getCoherent :: a } +newtype Coherent a = Coherent {getCoherent :: a} deriving (Eq, Generic) - deriving (Show) via (Quiet (Coherent a)) + deriving Show via (Quiet (Coherent a)) -- TODO how can we ensure that we have a test for each constraint listed in -- 'SerialiseNodeToNodeConstraints'? -roundtrip_SerialiseNodeToNode - :: forall blk. - ( SerialiseNodeToNodeConstraints blk - , Show (BlockNodeToNodeVersion blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) blk - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Header blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTx blk) - , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTxId blk) - - -- Needed for testing the @Serialised blk@ - , EncodeDisk blk blk - , DecodeDisk blk (Lazy.ByteString -> blk) - -- Needed for testing the @Serialised (Header blk)@ - , HasNestedContent Header blk - , EncodeDiskDep (NestedCtxt Header) blk - , DecodeDiskDep (NestedCtxt Header) blk - ) - => CodecConfig blk - -> [TestTree] +roundtrip_SerialiseNodeToNode :: + forall blk. 
+ ( SerialiseNodeToNodeConstraints blk + , Show (BlockNodeToNodeVersion blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) blk + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (Header blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTx blk) + , ArbitraryWithVersion (BlockNodeToNodeVersion blk) (GenTxId blk) + , -- Needed for testing the @Serialised blk@ + EncodeDisk blk blk + , DecodeDisk blk (Lazy.ByteString -> blk) + , -- Needed for testing the @Serialised (Header blk)@ + HasNestedContent Header blk + , EncodeDiskDep (NestedCtxt Header) blk + , DecodeDiskDep (NestedCtxt Header) blk + ) => + CodecConfig blk -> + [TestTree] roundtrip_SerialiseNodeToNode ccfg = - [ rt (Proxy @blk) "blk" - , rt (Proxy @(Header blk)) "Header" - , rt (Proxy @(GenTx blk)) "GenTx" - , rt (Proxy @(GenTxId blk)) "GenTxId" - -- Roundtrip a @'Serialised' blk@ - -- - -- We generate a random @blk@, convert it to 'Serialised' (using - -- 'encodeDisk', which doesn't add CBOR-in-CBOR), encode it (adding - -- CBOR-in-CBOR), decode that 'Serialised' and convert (using - -- 'decodeNodeToNode') it to a @blk@ again. - , testProperty "roundtrip Serialised blk" $ - \(WithVersion version blk) -> - roundtrip @blk - (encodeThroughSerialised (encodeDisk ccfg) (enc version)) - (decodeThroughSerialised (decodeDisk ccfg) (dec version)) - blk - -- Same as above but for 'Header' - , testProperty "roundtrip Serialised Header" $ - \(WithVersion version hdr) -> - roundtrip @(Header blk) - (enc version . SerialisedHeaderFromDepPair . encodeDepPair ccfg . unnest) - (nest <$> (decodeDepPair ccfg . serialisedHeaderToDepPair =<< dec version)) - hdr - -- Check the compatibility between 'encodeNodeToNode' for @'Serialised' - -- blk@ and 'decodeNodeToNode' for @blk@. 
- , testProperty "roundtrip Serialised blk compat 1" $ - \(WithVersion version blk) -> - roundtrip @blk - (encodeThroughSerialised (encodeDisk ccfg) (enc version)) - (dec version) - blk - -- Check the compatibility between 'encodeNodeToNode' for @blk@ and - -- 'decodeNodeToNode' for @'Serialised' blk@. - , testProperty "roundtrip Serialised blk compat 2" $ - \(WithVersion version blk) -> - roundtrip @blk - (enc version) - (decodeThroughSerialised (decodeDisk ccfg) (dec version)) - blk - -- Same as above but for 'Header' - , testProperty "roundtrip Serialised Header compat 1" $ - \(WithVersion version hdr) -> - roundtrip @(Header blk) - (enc version . SerialisedHeaderFromDepPair . encodeDepPair ccfg . unnest) - (dec version) - hdr - , testProperty "roundtrip Serialised Header compat 2" $ - \(WithVersion version hdr) -> - roundtrip @(Header blk) - (enc version) - (nest <$> (decodeDepPair ccfg . serialisedHeaderToDepPair =<< dec version)) - hdr - ] - where - enc :: SerialiseNodeToNode blk a - => BlockNodeToNodeVersion blk -> a -> Encoding - enc = encodeNodeToNode ccfg - - dec :: SerialiseNodeToNode blk a - => BlockNodeToNodeVersion blk -> forall s. Decoder s a - dec = decodeNodeToNode ccfg - - rt - :: forall a. - ( Arbitrary (WithVersion (BlockNodeToNodeVersion blk) a) - , Eq a - , Show a - , SerialiseNodeToNode blk a - ) - => Proxy a -> String -> TestTree - rt _ name = - testProperty ("roundtrip " <> name) $ \(WithVersion version x) -> - roundtrip @a (enc version) (dec version) x + [ rt (Proxy @blk) "blk" + , rt (Proxy @(Header blk)) "Header" + , rt (Proxy @(GenTx blk)) "GenTx" + , rt (Proxy @(GenTxId blk)) "GenTxId" + , -- Roundtrip a @'Serialised' blk@ + -- + -- We generate a random @blk@, convert it to 'Serialised' (using + -- 'encodeDisk', which doesn't add CBOR-in-CBOR), encode it (adding + -- CBOR-in-CBOR), decode that 'Serialised' and convert (using + -- 'decodeNodeToNode') it to a @blk@ again. 
+ testProperty "roundtrip Serialised blk" $ + \(WithVersion version blk) -> + roundtrip @blk + (encodeThroughSerialised (encodeDisk ccfg) (enc version)) + (decodeThroughSerialised (decodeDisk ccfg) (dec version)) + blk + , -- Same as above but for 'Header' + testProperty "roundtrip Serialised Header" $ + \(WithVersion version hdr) -> + roundtrip @(Header blk) + (enc version . SerialisedHeaderFromDepPair . encodeDepPair ccfg . unnest) + (nest <$> (decodeDepPair ccfg . serialisedHeaderToDepPair =<< dec version)) + hdr + , -- Check the compatibility between 'encodeNodeToNode' for @'Serialised' + -- blk@ and 'decodeNodeToNode' for @blk@. + testProperty "roundtrip Serialised blk compat 1" $ + \(WithVersion version blk) -> + roundtrip @blk + (encodeThroughSerialised (encodeDisk ccfg) (enc version)) + (dec version) + blk + , -- Check the compatibility between 'encodeNodeToNode' for @blk@ and + -- 'decodeNodeToNode' for @'Serialised' blk@. + testProperty "roundtrip Serialised blk compat 2" $ + \(WithVersion version blk) -> + roundtrip @blk + (enc version) + (decodeThroughSerialised (decodeDisk ccfg) (dec version)) + blk + , -- Same as above but for 'Header' + testProperty "roundtrip Serialised Header compat 1" $ + \(WithVersion version hdr) -> + roundtrip @(Header blk) + (enc version . SerialisedHeaderFromDepPair . encodeDepPair ccfg . unnest) + (dec version) + hdr + , testProperty "roundtrip Serialised Header compat 2" $ + \(WithVersion version hdr) -> + roundtrip @(Header blk) + (enc version) + (nest <$> (decodeDepPair ccfg . serialisedHeaderToDepPair =<< dec version)) + hdr + ] + where + enc :: + SerialiseNodeToNode blk a => + BlockNodeToNodeVersion blk -> a -> Encoding + enc = encodeNodeToNode ccfg + + dec :: + SerialiseNodeToNode blk a => + BlockNodeToNodeVersion blk -> forall s. Decoder s a + dec = decodeNodeToNode ccfg + + rt :: + forall a. 
+ ( Arbitrary (WithVersion (BlockNodeToNodeVersion blk) a) + , Eq a + , Show a + , SerialiseNodeToNode blk a + ) => + Proxy a -> String -> TestTree + rt _ name = + testProperty ("roundtrip " <> name) $ \(WithVersion version x) -> + roundtrip @a (enc version) (dec version) x -- TODO how can we ensure that we have a test for each constraint listed in -- 'SerialiseNodeToClientConstraints'? -roundtrip_SerialiseNodeToClient - :: forall blk. - ( SerialiseNodeToClientConstraints blk - , ArbitraryWithVersion (BlockNodeToClientVersion blk) blk - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (GenTx blk) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (ApplyTxErr blk) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeBlockQuery (BlockQuery blk)) - , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeResult blk) - , Arbitrary (WithVersion (BlockNodeToClientVersion blk) (LedgerConfig blk)) - , ArbitraryWithVersion (QueryVersion, BlockNodeToClientVersion blk) (SomeSecond Query blk) - - , Show (BlockNodeToClientVersion blk) - , BlockSupportsLedgerQuery blk - -- Needed for testing the @Serialised blk@ - , EncodeDisk blk blk - , DecodeDisk blk (Lazy.ByteString -> blk) - ) - => (TestName -> ShouldCheckCBORValidity) - -> CodecConfig blk - -> [TestTree] +roundtrip_SerialiseNodeToClient :: + forall blk. 
+ ( SerialiseNodeToClientConstraints blk + , ArbitraryWithVersion (BlockNodeToClientVersion blk) blk + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (GenTx blk) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (ApplyTxErr blk) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeBlockQuery (BlockQuery blk)) + , ArbitraryWithVersion (BlockNodeToClientVersion blk) (SomeResult blk) + , Arbitrary (WithVersion (BlockNodeToClientVersion blk) (LedgerConfig blk)) + , ArbitraryWithVersion (QueryVersion, BlockNodeToClientVersion blk) (SomeSecond Query blk) + , Show (BlockNodeToClientVersion blk) + , BlockSupportsLedgerQuery blk + , -- Needed for testing the @Serialised blk@ + EncodeDisk blk blk + , DecodeDisk blk (Lazy.ByteString -> blk) + ) => + (TestName -> ShouldCheckCBORValidity) -> + CodecConfig blk -> + [TestTree] roundtrip_SerialiseNodeToClient shouldCheckCBORvalidity ccfg = - [ rt (Proxy @blk) "blk" - , rt (Proxy @(GenTx blk)) "GenTx" - , rt (Proxy @(ApplyTxErr blk)) "ApplyTxErr" - , rt (Proxy @(SomeBlockQuery (BlockQuery blk))) "BlockQuery" - -- Note: Ideally we'd just use 'rt' to test Ledger config, but that would + [ rt (Proxy @blk) "blk" + , rt (Proxy @(GenTx blk)) "GenTx" + , rt (Proxy @(ApplyTxErr blk)) "ApplyTxErr" + , rt (Proxy @(SomeBlockQuery (BlockQuery blk))) "BlockQuery" + , -- Note: Ideally we'd just use 'rt' to test Ledger config, but that would -- require an 'Eq' and 'Show' instance for all ledger config types which -- we'd like to avoid (as the EpochInfo is a record of functions). 
- , testProperty "roundtrip (comparing encoding) LedgerConfig" $ - withMaxSuccess 20 $ \(Blind (WithVersion version a)) -> - roundtripComparingEncoding @(LedgerConfig blk) (enc version) (dec version) a - , rtWith - @(SomeSecond Query blk) - @(QueryVersion, BlockNodeToClientVersion blk) - (\(queryVersion, blockVersion) query -> Query.queryEncodeNodeToClient - ccfg - queryVersion - blockVersion - query - ) - (\(queryVersion, blockVersion) -> Query.queryDecodeNodeToClient - ccfg - queryVersion - blockVersion - ) - "Query" - -- See roundtrip_SerialiseNodeToNode for more info - , let testLabel = "roundtrip Serialised blk" in - testProperty testLabel $ + testProperty "roundtrip (comparing encoding) LedgerConfig" $ + withMaxSuccess 20 $ \(Blind (WithVersion version a)) -> + roundtripComparingEncoding @(LedgerConfig blk) (enc version) (dec version) a + , rtWith + @(SomeSecond Query blk) + @(QueryVersion, BlockNodeToClientVersion blk) + ( \(queryVersion, blockVersion) query -> + Query.queryEncodeNodeToClient + ccfg + queryVersion + blockVersion + query + ) + ( \(queryVersion, blockVersion) -> + Query.queryDecodeNodeToClient + ccfg + queryVersion + blockVersion + ) + "Query" + , -- See roundtrip_SerialiseNodeToNode for more info + let testLabel = "roundtrip Serialised blk" + in testProperty testLabel $ \(WithVersion version blk) -> roundtripAnd @blk (shouldCheckCBORvalidity testLabel) (encodeThroughSerialised (encodeDisk ccfg) (enc version)) (const <$> decodeThroughSerialised (decodeDisk ccfg) (dec version)) blk - -- See roundtrip_SerialiseNodeToNode for more info - , let testLabel = "roundtrip Serialised blk compat" in - testProperty testLabel $ + , -- See roundtrip_SerialiseNodeToNode for more info + let testLabel = "roundtrip Serialised blk compat" + in testProperty testLabel $ \(WithVersion version blk) -> roundtripAnd @blk (shouldCheckCBORvalidity testLabel) (encodeThroughSerialised (encodeDisk ccfg) (enc version)) (const <$> dec version) blk - , let testLabel = 
"roundtrip Result" in - testProperty testLabel $ + , let testLabel = "roundtrip Result" + in testProperty testLabel $ \(WithVersion version (SomeResult query result :: SomeResult blk)) -> roundtripAnd (shouldCheckCBORvalidity testLabel) (encodeBlockQueryResult ccfg version query) (const <$> decodeBlockQueryResult ccfg version query) result - ] - where - enc :: SerialiseNodeToClient blk a - => BlockNodeToClientVersion blk -> a -> Encoding - enc = encodeNodeToClient ccfg - - dec :: SerialiseNodeToClient blk a - => BlockNodeToClientVersion blk -> forall s. Decoder s a - dec = decodeNodeToClient ccfg - - rt - :: forall a. - ( Arbitrary (WithVersion (BlockNodeToClientVersion blk) a) - , Eq a - , Show a - , SerialiseNodeToClient blk a - ) - => Proxy a -> String -> TestTree - rt _ name = rtWith (enc @a) (dec @a) name - - rtWith - :: forall a version. - ( Arbitrary (WithVersion version a) - , Eq a - , Show a - , Show version - ) - => (version -> a -> Encoding) - -> (version -> forall s. Decoder s a) - -> String - -> TestTree - rtWith enc' dec' name = - testProperty ("roundtrip " <> name) $ - \(WithVersion version a) -> - roundtripAnd @a (shouldCheckCBORvalidity testLabel) - (enc' version) - (const <$> dec' version) - a - where - testLabel = "roundtrip " <> name + ] + where + enc :: + SerialiseNodeToClient blk a => + BlockNodeToClientVersion blk -> a -> Encoding + enc = encodeNodeToClient ccfg + + dec :: + SerialiseNodeToClient blk a => + BlockNodeToClientVersion blk -> forall s. Decoder s a + dec = decodeNodeToClient ccfg + + rt :: + forall a. + ( Arbitrary (WithVersion (BlockNodeToClientVersion blk) a) + , Eq a + , Show a + , SerialiseNodeToClient blk a + ) => + Proxy a -> String -> TestTree + rt _ name = rtWith (enc @a) (dec @a) name + + rtWith :: + forall a version. + ( Arbitrary (WithVersion version a) + , Eq a + , Show a + , Show version + ) => + (version -> a -> Encoding) -> + (version -> forall s. 
Decoder s a) -> + String -> + TestTree + rtWith enc' dec' name = + testProperty ("roundtrip " <> name) $ + \(WithVersion version a) -> + roundtripAnd @a + (shouldCheckCBORvalidity testLabel) + (enc' version) + (const <$> dec' version) + a + where + testLabel = "roundtrip " <> name {------------------------------------------------------------------------------- Checking envelopes @@ -658,157 +686,191 @@ roundtrip_SerialiseNodeToClient shouldCheckCBORvalidity ccfg = -- start with a header but some fixed bytestring in the payload. This makes -- debugging a bit easier as we can focus on just the envelope. roundtrip_envelopes :: - forall blk. ( - SerialiseNodeToNode blk (SerialisedHeader blk) - , HasNestedContent Header blk - ) - => CodecConfig blk - -> WithVersion (BlockNodeToNodeVersion blk) (SomeSecond (NestedCtxt Header) blk) - -> Property + forall blk. + ( SerialiseNodeToNode blk (SerialisedHeader blk) + , HasNestedContent Header blk + ) => + CodecConfig blk -> + WithVersion (BlockNodeToNodeVersion blk) (SomeSecond (NestedCtxt Header) blk) -> + Property roundtrip_envelopes ccfg (WithVersion v (SomeSecond ctxt)) = - roundtrip - (encodeNodeToNode ccfg v . unBase16) - (Base16 <$> decodeNodeToNode ccfg v) - (Base16 serialisedHeader) - where - serialisedHeader :: SerialisedHeader blk - serialisedHeader = SerialisedHeaderFromDepPair $ - GenDepPair ctxt (Serialised bs) + roundtrip + (encodeNodeToNode ccfg v . 
unBase16) + (Base16 <$> decodeNodeToNode ccfg v) + (Base16 serialisedHeader) + where + serialisedHeader :: SerialisedHeader blk + serialisedHeader = + SerialisedHeaderFromDepPair $ + GenDepPair ctxt (Serialised bs) - bs :: Lazy.ByteString - bs = "" -- Something we can easily recognize in test failures + bs :: Lazy.ByteString + bs = "" -- Something we can easily recognize in test failures -newtype Base16 a = Base16 { unBase16 :: a } +newtype Base16 a = Base16 {unBase16 :: a} instance HasNestedContent Header blk => Show (Base16 (SerialisedHeader blk)) where show = aux . serialisedHeaderToDepPair . unBase16 - where - aux :: GenDepPair Serialised (NestedCtxt Header blk) -> String - aux (GenDepPair ctxt (Serialised bs)) = - "(" <> show ctxt <> "," <> Char8.unpack (Base16.encode bs) <> ")" + where + aux :: GenDepPair Serialised (NestedCtxt Header blk) -> String + aux (GenDepPair ctxt (Serialised bs)) = + "(" <> show ctxt <> "," <> Char8.unpack (Base16.encode bs) <> ")" instance HasNestedContent Header blk => Eq (Base16 (SerialisedHeader blk)) where (==) = aux `on` (serialisedHeaderToDepPair . 
unBase16) - where - aux :: GenDepPair Serialised (NestedCtxt Header blk) - -> GenDepPair Serialised (NestedCtxt Header blk) - -> Bool - aux (GenDepPair ctxt bs) (GenDepPair ctxt' bs') = - case sameDepIndex ctxt ctxt' of - Just Refl -> bs == bs' - Nothing -> False + where + aux :: + GenDepPair Serialised (NestedCtxt Header blk) -> + GenDepPair Serialised (NestedCtxt Header blk) -> + Bool + aux (GenDepPair ctxt bs) (GenDepPair ctxt' bs') = + case sameDepIndex ctxt ctxt' of + Just Refl -> bs == bs' + Nothing -> False {------------------------------------------------------------------------------- ConvertRawHash -------------------------------------------------------------------------------} -roundtrip_ConvertRawHash - :: (StandardHash blk, ConvertRawHash blk) - => Proxy blk -> HeaderHash blk -> Property +roundtrip_ConvertRawHash :: + (StandardHash blk, ConvertRawHash blk) => + Proxy blk -> HeaderHash blk -> Property roundtrip_ConvertRawHash p h = - h === fromShortRawHash p (toShortRawHash p h) + h === fromShortRawHash p (toShortRawHash p h) -prop_hashSize - :: ConvertRawHash blk - => Proxy blk -> HeaderHash blk -> Property +prop_hashSize :: + ConvertRawHash blk => + Proxy blk -> HeaderHash blk -> Property prop_hashSize p h = - hashSize p === fromIntegral (Short.length (toShortRawHash p h)) + hashSize p === fromIntegral (Short.length (toShortRawHash p h)) {------------------------------------------------------------------------------- estimateBlockSize -------------------------------------------------------------------------------} prop_estimateBlockSize :: - (SerialiseNodeToNodeConstraints blk, GetHeader blk) - => CodecConfig blk - -> WithVersion (BlockNodeToNodeVersion blk) (Coherent blk) - -> Property + (SerialiseNodeToNodeConstraints blk, GetHeader blk) => + CodecConfig blk -> + WithVersion (BlockNodeToNodeVersion blk) (Coherent blk) -> + Property prop_estimateBlockSize ccfg (WithVersion version (Coherent blk)) - | actualBlockSize > expectedBlockSize - = 
counterexample - ("actualBlockSize > expectedBlockSize: " - <> show actualBlockSize <> " > " - <> show expectedBlockSize) - (property False) - | actualBlockSize < expectedBlockSize - allowedOverestimate - = counterexample - ("actualBlockSize < expectedBlockSize - allowedOverestimate: " - <> show actualBlockSize <> " > " - <> show expectedBlockSize <> " - " - <> show allowedOverestimate) - (property False) - | otherwise - = classify (actualBlockSize == expectedBlockSize) "exact" - $ classify (actualBlockSize < expectedBlockSize) "overestimate" - $ property True - where - allowedOverestimate :: SizeInBytes - allowedOverestimate = 10 - - actualBlockSize :: SizeInBytes - actualBlockSize = - fromIntegral - . Lazy.length - . toLazyByteString - . encodeNodeToNode ccfg version - $ blk - - expectedBlockSize :: SizeInBytes - expectedBlockSize = - estimateBlockSize - . getHeader - $ blk + | actualBlockSize > expectedBlockSize = + counterexample + ( "actualBlockSize > expectedBlockSize: " + <> show actualBlockSize + <> " > " + <> show expectedBlockSize + ) + (property False) + | actualBlockSize < expectedBlockSize - allowedOverestimate = + counterexample + ( "actualBlockSize < expectedBlockSize - allowedOverestimate: " + <> show actualBlockSize + <> " > " + <> show expectedBlockSize + <> " - " + <> show allowedOverestimate + ) + (property False) + | otherwise = + classify (actualBlockSize == expectedBlockSize) "exact" $ + classify (actualBlockSize < expectedBlockSize) "overestimate" $ + property True + where + allowedOverestimate :: SizeInBytes + allowedOverestimate = 10 + + actualBlockSize :: SizeInBytes + actualBlockSize = + fromIntegral + . Lazy.length + . toLazyByteString + . encodeNodeToNode ccfg version + $ blk + + expectedBlockSize :: SizeInBytes + expectedBlockSize = + estimateBlockSize + . 
getHeader + $ blk {------------------------------------------------------------------------------- Serialised helpers -------------------------------------------------------------------------------} encodeThroughSerialised :: - (a -> Encoding) - -> (Serialised a -> Encoding) - -> (a -> Encoding) + (a -> Encoding) -> + (Serialised a -> Encoding) -> + (a -> Encoding) encodeThroughSerialised enc encSerialised = encSerialised . mkSerialised enc decodeThroughSerialised :: - (forall s. Decoder s (Lazy.ByteString -> a)) - -> (forall s. Decoder s (Serialised a)) - -> (forall s. Decoder s a) + (forall s. Decoder s (Lazy.ByteString -> a)) -> + (forall s. Decoder s (Serialised a)) -> + (forall s. Decoder s a) decodeThroughSerialised dec decSerialised = do - serialised <- decSerialised - fromSerialised dec serialised + serialised <- decSerialised + fromSerialised dec serialised {------------------------------------------------------------------------------ Roundtrip tests for examples ------------------------------------------------------------------------------} examplesRoundtrip :: - forall blk . (SerialiseDiskConstraints blk, Eq blk, Show blk, LedgerSupportsProtocol blk) - => CodecConfig blk - -> Examples blk - -> [TestTree] + forall blk. 
+ (SerialiseDiskConstraints blk, Eq blk, Show blk, LedgerSupportsProtocol blk) => + CodecConfig blk -> + Examples blk -> + [TestTree] examplesRoundtrip codecConfig examples = - [ testRoundtripFor "Block" (encodeDisk codecConfig) (decodeDisk codecConfig) exampleBlock - , testRoundtripFor "Header hash" encode (const <$> decode) exampleHeaderHash - , testRoundtripFor "Ledger state" (encodeDisk codecConfig) (const <$> decodeDisk codecConfig) exampleLedgerState - , testRoundtripFor "Annotated tip" (encodeDisk codecConfig) (const <$> decodeDisk codecConfig) exampleAnnTip - , testRoundtripFor "Chain dependent state" (encodeDisk codecConfig) (const <$> decodeDisk codecConfig) exampleChainDepState - , testRoundtripFor "Extended ledger state" (encodeDiskExtLedgerState codecConfig) (const <$> decodeDiskExtLedgerState codecConfig) exampleExtLedgerState - ] - where - testRoundtripFor :: - forall a . (Eq a, Show a) - => String - -> (a -> Encoding) - -> (forall s . Decoder s (Char8.ByteString -> a)) - -> (Examples blk -> Labelled a) - -> TestTree - testRoundtripFor testLabel enc dec field = - testGroup testLabel - [ mkTest exampleName example - | (exampleName, example) <- field examples - ] - where - mkTest exampleName example = - testProperty (fromMaybe "" exampleName) - $ once - $ roundtrip' enc dec example + [ testRoundtripFor + "Block" + (encodeDisk codecConfig) + (decodeDisk codecConfig) + exampleBlock + , testRoundtripFor + "Header hash" + encode + (const <$> decode) + exampleHeaderHash + , testRoundtripFor + "Ledger state" + (encodeDisk codecConfig) + (const <$> decodeDisk codecConfig) + exampleLedgerState + , testRoundtripFor + "Annotated tip" + (encodeDisk codecConfig) + (const <$> decodeDisk codecConfig) + exampleAnnTip + , testRoundtripFor + "Chain dependent state" + (encodeDisk codecConfig) + (const <$> decodeDisk codecConfig) + exampleChainDepState + , testRoundtripFor + "Extended ledger state" + (encodeDiskExtLedgerState codecConfig) + (const <$> 
decodeDiskExtLedgerState codecConfig) + exampleExtLedgerState + ] + where + testRoundtripFor :: + forall a. + (Eq a, Show a) => + String -> + (a -> Encoding) -> + (forall s. Decoder s (Char8.ByteString -> a)) -> + (Examples blk -> Labelled a) -> + TestTree + testRoundtripFor testLabel enc dec field = + testGroup + testLabel + [ mkTest exampleName example + | (exampleName, example) <- field examples + ] + where + mkTest exampleName example = + testProperty (fromMaybe "" exampleName) $ + once $ + roundtrip' enc dec example diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/SomeResult.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/SomeResult.hs index dc096ba4ee..234032d8a9 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/SomeResult.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Serialisation/SomeResult.hs @@ -4,16 +4,17 @@ module Test.Util.Serialisation.SomeResult (SomeResult (..)) where -import Data.Typeable -import Ouroboros.Consensus.Ledger.Query (BlockQuery) +import Data.Typeable +import Ouroboros.Consensus.Ledger.Query (BlockQuery) -- | To easily generate all the possible @result@s of the 'Query' GADT, we -- introduce an existential that also bundles the corresponding 'Query' as -- evidence. We also capture 'Eq', 'Show', and 'Typeable' constraints, as we -- need them in the tests. 
data SomeResult blk where - SomeResult :: (Eq result, Show result, Typeable result) - => BlockQuery blk fp result -> result -> SomeResult blk + SomeResult :: + (Eq result, Show result, Typeable result) => + BlockQuery blk fp result -> result -> SomeResult blk instance Show (SomeResult blk) where show (SomeResult _ result) = show result @@ -21,5 +22,5 @@ instance Show (SomeResult blk) where instance Eq (SomeResult blk) where SomeResult _ (res1 :: result1) == SomeResult _ (res2 :: result2) = case eqT @result1 @result2 of - Nothing -> False + Nothing -> False Just Refl -> res1 == res2 diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Shrink.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Shrink.hs index 4abaf2c0bf..27b98e05fb 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Shrink.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Shrink.hs @@ -1,7 +1,6 @@ -- | Utility functions for defining @QuickCheck@'s 'Test.QuickCheck.shrink' --- -module Test.Util.Shrink ( - andId +module Test.Util.Shrink + ( andId , dropId ) where @@ -13,11 +12,9 @@ module Test.Util.Shrink ( -- -- If every source list in a comprehension uses 'andId', then the last element -- will be a copy of the initial input. 
--- dropId :: [a] -> [a] dropId = init -- | Add the argument as the last element of the output --- andId :: (a -> [a]) -> a -> [a] andId f x = f x ++ [x] diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Slots.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Slots.hs index b0f29699cb..db3ff20a77 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Slots.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Slots.hs @@ -4,17 +4,17 @@ module Test.Util.Slots (NumSlots (..)) where -import Data.Word (Word64) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Quiet (Quiet (..)) -import qualified Test.QuickCheck as QC -import Test.QuickCheck (Arbitrary (..)) +import Data.Word (Word64) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Quiet (Quiet (..)) +import Test.QuickCheck (Arbitrary (..)) +import Test.QuickCheck qualified as QC -- | Number of slots newtype NumSlots = NumSlots {unNumSlots :: Word64} deriving (Eq, Generic, NoThunks) - deriving (Show) via (Quiet NumSlots) + deriving Show via (Quiet NumSlots) {------------------------------------------------------------------------------- Arbitrary instances diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Split.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Split.hs index 3484232924..3a78396c2a 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Split.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Split.hs @@ -1,14 +1,14 @@ {-# LANGUAGE LambdaCase #-} {-# LANGUAGE ScopedTypeVariables #-} -module Test.Util.Split ( - spanLeft +module Test.Util.Split + ( spanLeft , spanLeft' , splitAtJust ) where -import Data.Bifunctor (first) -import Data.Word (Word64) +import Data.Bifunctor (first) +import Data.Word (Word64) {------------------------------------------------------------------------------- spanLeft @@ -18,23 +18,23 
@@ import Data.Word (Word64) -- -- INVARIANT The output data is a segmentation of the given list. spanLeft :: - forall x a b. - (x -> Either a b) -> [x] -> ([a], Maybe (b, [x])) + forall x a b. + (x -> Either a b) -> [x] -> ([a], Maybe (b, [x])) spanLeft prj xs = (reverse acc, mbBxs) - where - (acc, mbBxs) = spanLeft' prj xs + where + (acc, mbBxs) = spanLeft' prj xs -- | As 'spanLeft', but the @[a]@ is reversed. -spanLeft' - :: forall x a b. - (x -> Either a b) -> [x] -> ([a], Maybe (b, [x])) +spanLeft' :: + forall x a b. + (x -> Either a b) -> [x] -> ([a], Maybe (b, [x])) spanLeft' prj = go [] - where - go acc = \case - [] -> (acc, Nothing) - x : xs -> case prj x of - Left a -> go (a : acc) xs - Right b -> (acc, Just (b, xs)) + where + go acc = \case + [] -> (acc, Nothing) + x : xs -> case prj x of + Left a -> go (a : acc) xs + Right b -> (acc, Just (b, xs)) {------------------------------------------------------------------------------- splitAtJust @@ -48,23 +48,24 @@ data Prj a b = Prj !a !b -- -- INVARIANT The output data is a segmentation of the given list. splitAtJust :: - forall x b. - (x -> Maybe b) -> Word64 -> [x] -> (Maybe ([x], b), [x]) + forall x b. 
+ (x -> Maybe b) -> Word64 -> [x] -> (Maybe ([x], b), [x]) splitAtJust prj = \n xs -> - if 0 == n then (Nothing, xs) - else case peel xs of - (pre, Just (xb, xs')) -> Just `first` go pre xb (n - 1) xs' - (_, Nothing) -> (Nothing, xs) - where - peel :: [x] -> ([x], Maybe (Prj x b, [x])) - peel = spanLeft' prj' - where - prj' x = case prj x of - Nothing -> Left x - Just b -> Right (Prj x b) + if 0 == n + then (Nothing, xs) + else case peel xs of + (pre, Just (xb, xs')) -> Just `first` go pre xb (n - 1) xs' + (_, Nothing) -> (Nothing, xs) + where + peel :: [x] -> ([x], Maybe (Prj x b, [x])) + peel = spanLeft' prj' + where + prj' x = case prj x of + Nothing -> Left x + Just b -> Right (Prj x b) - go pre (Prj x b) n xs - | 0 == n = ((reverse pre, b), xs) - | otherwise = case peel xs of - (pre', Nothing ) -> ((reverse pre, b), reverse pre') - (pre', Just (xb, xs')) -> go (pre' ++ x : pre) xb (n - 1) xs' + go pre (Prj x b) n xs + | 0 == n = ((reverse pre, b), xs) + | otherwise = case peel xs of + (pre', Nothing) -> ((reverse pre, b), reverse pre') + (pre', Just (xb, xs')) -> go (pre' ++ x : pre) xb (n - 1) xs' diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Stream.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Stream.hs index d3d6a9c9a3..6c25078266 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Stream.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Stream.hs @@ -1,18 +1,18 @@ {-# LANGUAGE DeriveTraversable #-} -module Test.Util.Stream ( - Stream (..) +module Test.Util.Stream + ( Stream (..) 
, nubOrdBy ) where -import qualified Data.Set as Set +import Data.Set qualified as Set data Stream a = a :< Stream a deriving (Foldable, Functor, Show, Traversable) nubOrdBy :: Ord b => (a -> b) -> Set.Set b -> Stream a -> Stream a nubOrdBy f = go - where - go acc (x :< xs) - | Set.member (f x) acc = go acc xs - | otherwise = x :< go (Set.insert (f x) acc) xs + where + go acc (x :< xs) + | Set.member (f x) acc = go acc xs + | otherwise = x :< go (Set.insert (f x) acc) xs diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SupportedNetworkProtocolVersion.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SupportedNetworkProtocolVersion.hs index 8c7078c535..a2fed26933 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SupportedNetworkProtocolVersion.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/SupportedNetworkProtocolVersion.hs @@ -2,36 +2,39 @@ module Test.Util.SupportedNetworkProtocolVersion (exhaustiveSupportedNetworkProtocolVersions) where -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Proxy -import qualified Data.Set as Set -import Data.Typeable -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Test.Tasty.HUnit +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Proxy +import Data.Set qualified as Set +import Data.Typeable +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Test.Tasty.HUnit -- | Make sure that 'supportedNodeToNodeVersions' and -- 'supportedNodeToClientVersions' contain entries for all 'NodeToNodeVersion's -- and 'NodeToClientVersion', respectively. exhaustiveSupportedNetworkProtocolVersions :: - forall blk. (Typeable blk, SupportedNetworkProtocolVersion blk) - => Proxy blk - -> Assertion + forall blk. 
+ (Typeable blk, SupportedNetworkProtocolVersion blk) => + Proxy blk -> + Assertion exhaustiveSupportedNetworkProtocolVersions p = do - testVersions supportedNodeToNodeVersions - testVersions supportedNodeToClientVersions - where - testVersions :: - (Show v, Ord v, Enum v, Bounded v) - => (Proxy blk -> Map v a) - -> Assertion - testVersions f = - assertBool - ( "unmapped versions for " <> show (typeRep p) <> ": " - <> show (Set.toList unmappedVersions) - ) - (Set.null unmappedVersions) - where - unmappedVersions = allVersions Set.\\ mappedVersions - allVersions = Set.fromList [minBound .. maxBound] - mappedVersions = Map.keysSet $ f p + testVersions supportedNodeToNodeVersions + testVersions supportedNodeToClientVersions + where + testVersions :: + (Show v, Ord v, Enum v, Bounded v) => + (Proxy blk -> Map v a) -> + Assertion + testVersions f = + assertBool + ( "unmapped versions for " + <> show (typeRep p) + <> ": " + <> show (Set.toList unmappedVersions) + ) + (Set.null unmappedVersions) + where + unmappedVersions = allVersions Set.\\ mappedVersions + allVersions = Set.fromList [minBound .. maxBound] + mappedVersions = Map.keysSet $ f p diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/TestBlock.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/TestBlock.hs index 1079843e1e..a1975d4ba6 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/TestBlock.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/TestBlock.hs @@ -20,12 +20,11 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE ViewPatterns #-} - {-# OPTIONS_GHC -Wno-incomplete-uni-patterns #-} -- | Minimal instantiation of the consensus layer to be able to run the ChainDB -module Test.Util.TestBlock ( - -- * Blocks +module Test.Util.TestBlock + ( -- * Blocks BlockConfig (..) , BlockQuery (..) , CodecConfig (..) 
@@ -42,28 +41,34 @@ module Test.Util.TestBlock ( , successorBlockWithPayload , testHashFromList , unTestHash + -- ** Test block without payload , TestBlock , firstBlock , successorBlock + -- ** Payload semantics , PayloadDependentState (..) , PayloadSemantics (..) , applyDirectlyToPayloadDependentState + -- * LedgerState , LedgerState (TestLedger, payloadDependentState, lastAppliedPoint) , Ticked (TickedTestLedger) , getTickedTestLedger + -- * Chain , BlockChain (..) , blockChain , chainToBlocks + -- * Tree , BlockTree (..) , blockTree , treePreferredChain , treeToBlocks , treeToChains + -- * Ledger infrastructure , singleNodeTestConfig , singleNodeTestConfigWith @@ -72,6 +77,7 @@ module Test.Util.TestBlock ( , testInitExtLedgerWithState , testInitLedger , testInitLedgerWithState + -- * Support for tests , Permutation (..) , TestBlockLedgerConfig (..) @@ -85,71 +91,72 @@ module Test.Util.TestBlock ( , updateToNextNumeral ) where -import Cardano.Crypto.DSIGN -import Cardano.Ledger.BaseTypes (knownNonZeroBounded, unNonZero) -import Codec.Serialise (Serialise (..), serialise) -import Control.DeepSeq (force) -import Control.Monad (guard, replicateM, replicateM_) -import Control.Monad.Except (throwError) -import qualified Data.Binary.Get as Get -import qualified Data.Binary.Put as Put -import qualified Data.ByteString.Lazy as BL -import Data.Foldable (for_) -import Data.Int -import Data.Kind (Type) -import Data.List (isSuffixOf, transpose) -import Data.List.NonEmpty (NonEmpty (..)) -import qualified Data.List.NonEmpty as NE -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe) -import Data.Maybe.Strict (StrictMaybe (..), strictMaybeToMaybe) -import Data.Proxy -import Data.Time.Calendar (fromGregorian) -import Data.Time.Clock (UTCTime (..)) -import Data.Tree (Tree (..)) -import qualified Data.Tree as Tree -import Data.TreeDiff (ToExpr) -import Data.Typeable (Typeable) -import Data.Void (Void) -import Data.Word -import GHC.Generics (Generic) -import 
NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.HardFork.Abstract -import Ouroboros.Consensus.HardFork.Combinator.Abstract - (ImmutableEraParams (immutableEraParams)) -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.Inspect -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Ledger.Tables.Utils -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.BFT -import Ouroboros.Consensus.Protocol.MockChainSel -import Ouroboros.Consensus.Protocol.Signed -import Ouroboros.Consensus.Storage.ChainDB (SerialiseDiskConstraints) -import Ouroboros.Consensus.Storage.LedgerDB -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util (ShowProxy (..)) -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.IndexedMemPack -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Network.Magic (NetworkMagic (..)) -import Ouroboros.Network.Mock.Chain (Chain (..)) -import qualified Ouroboros.Network.Mock.Chain as Chain -import qualified System.Random as R -import Test.QuickCheck hiding (Result) -import Test.Util.Orphans.SignableRepresentation () -import Test.Util.Orphans.ToExpr () +import Cardano.Crypto.DSIGN +import Cardano.Ledger.BaseTypes (knownNonZeroBounded, unNonZero) +import Codec.Serialise (Serialise (..), serialise) +import Control.DeepSeq (force) +import Control.Monad (guard, replicateM, replicateM_) +import Control.Monad.Except (throwError) +import Data.Binary.Get 
qualified as Get +import Data.Binary.Put qualified as Put +import Data.ByteString.Lazy qualified as BL +import Data.Foldable (for_) +import Data.Int +import Data.Kind (Type) +import Data.List (isSuffixOf, transpose) +import Data.List.NonEmpty (NonEmpty (..)) +import Data.List.NonEmpty qualified as NE +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe) +import Data.Maybe.Strict (StrictMaybe (..), strictMaybeToMaybe) +import Data.Proxy +import Data.Time.Calendar (fromGregorian) +import Data.Time.Clock (UTCTime (..)) +import Data.Tree (Tree (..)) +import Data.Tree qualified as Tree +import Data.TreeDiff (ToExpr) +import Data.Typeable (Typeable) +import Data.Void (Void) +import Data.Word +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.HardFork.Abstract +import Ouroboros.Consensus.HardFork.Combinator.Abstract + ( ImmutableEraParams (immutableEraParams) + ) +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.Inspect +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Ledger.Tables.Utils +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.BFT +import Ouroboros.Consensus.Protocol.MockChainSel +import Ouroboros.Consensus.Protocol.Signed +import Ouroboros.Consensus.Storage.ChainDB (SerialiseDiskConstraints) +import Ouroboros.Consensus.Storage.LedgerDB +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util (ShowProxy (..)) 
+import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.IndexedMemPack +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Network.Magic (NetworkMagic (..)) +import Ouroboros.Network.Mock.Chain (Chain (..)) +import Ouroboros.Network.Mock.Chain qualified as Chain +import System.Random qualified as R +import Test.QuickCheck hiding (Result) +import Test.Util.Orphans.SignableRepresentation () +import Test.Util.Orphans.ToExpr () {------------------------------------------------------------------------------- Test infrastructure: test block @@ -190,16 +197,17 @@ import Test.Util.Orphans.ToExpr () -- in-memory representation). -- -- The 'BlockNo' of the corresponding block is just the length of the list. -newtype TestHash = UnsafeTestHash { - unTestHash :: NonEmpty Word64 - } - deriving stock (Generic) - deriving newtype (Eq, Ord, Serialise, ToExpr) - deriving anyclass (NoThunks) +newtype TestHash = UnsafeTestHash + { unTestHash :: NonEmpty Word64 + } + deriving stock Generic + deriving newtype (Eq, Ord, Serialise, ToExpr) + deriving anyclass NoThunks pattern TestHash :: NonEmpty Word64 -> TestHash -pattern TestHash path <- UnsafeTestHash path where - TestHash path = UnsafeTestHash (force path) +pattern TestHash path <- UnsafeTestHash path + where + TestHash path = UnsafeTestHash (force path) {-# COMPLETE TestHash #-} @@ -213,7 +221,7 @@ instance Condense TestHash where condense = condense . reverse . NE.toList . unTestHash data Validity = Valid | Invalid - deriving stock (Show, Eq, Ord, Enum, Bounded, Generic) + deriving stock (Show, Eq, Ord, Enum, Bounded, Generic) deriving anyclass (Serialise, NoThunks, ToExpr) -- | Test block parametrized on the payload type @@ -222,21 +230,20 @@ data Validity = Valid | Invalid -- -- By defining a 'PayloadSemantics' it is possible to obtain an 'ApplyBlock' -- instance. See the former class for more details. 
--- -data TestBlockWith ptype = TestBlockWith { - tbHash :: !TestHash - , tbSlot :: !SlotNo - -- ^ We store a separate 'Block.SlotNo', as slots can have gaps between - -- them, unlike block numbers. - -- - -- Note that when generating a 'TestBlock', you must make sure that - -- blocks with the same 'TestHash' have the same slot number. - , tbValid :: !Validity - -- ^ Note that when generating a 'TestBlock', you must make sure that - -- blocks with the same 'TestHash' have the same value for 'tbValid'. - , tbPayload :: !ptype - } - deriving stock (Show, Eq, Ord, Generic) +data TestBlockWith ptype = TestBlockWith + { tbHash :: !TestHash + , tbSlot :: !SlotNo + -- ^ We store a separate 'Block.SlotNo', as slots can have gaps between + -- them, unlike block numbers. + -- + -- Note that when generating a 'TestBlock', you must make sure that + -- blocks with the same 'TestHash' have the same slot number. + , tbValid :: !Validity + -- ^ Note that when generating a 'TestBlock', you must make sure that + -- blocks with the same 'TestHash' have the same value for 'tbValid'. + , tbPayload :: !ptype + } + deriving stock (Show, Eq, Ord, Generic) deriving anyclass (Serialise, NoThunks, ToExpr) -- | Create a block directly with the given parameters. This allows creating @@ -248,10 +255,11 @@ unsafeTestBlockWithPayload tbHash tbSlot tbValid tbPayload = -- | Create the first block in the given fork, @[fork]@, with the given payload. -- The 'SlotNo' will be 1. firstBlockWithPayload :: Word64 -> ptype -> TestBlockWith ptype -firstBlockWithPayload forkNo payload = TestBlockWith - { tbHash = TestHash (forkNo NE.:| []) - , tbSlot = 1 - , tbValid = Valid +firstBlockWithPayload forkNo payload = + TestBlockWith + { tbHash = TestHash (forkNo NE.:| []) + , tbSlot = 1 + , tbValid = Valid , tbPayload = payload } @@ -261,10 +269,11 @@ firstBlockWithPayload forkNo payload = TestBlockWith -- In Zipper parlance, this corresponds to going down in a tree. 
successorBlockWithPayload :: TestHash -> SlotNo -> ptype -> TestBlockWith ptype -successorBlockWithPayload hash slot payload = TestBlockWith - { tbHash = TestHash (NE.cons 0 (unTestHash hash)) - , tbSlot = succ slot - , tbValid = Valid +successorBlockWithPayload hash slot payload = + TestBlockWith + { tbHash = TestHash (NE.cons 0 (unTestHash hash)) + , tbSlot = succ slot + , tbValid = Valid , tbPayload = payload } @@ -281,8 +290,7 @@ isAncestorOf b1 b2 = -- NOTE: 'unTestHash' returns the list of hash components _in reverse -- order_ so we need to test that one hash is the _suffix_ of the other. NE.toList (unTestHash (blockHash b1)) - `isSuffixOf` - NE.toList (unTestHash (blockHash b2)) + `isSuffixOf` NE.toList (unTestHash (blockHash b2)) -- | Variant of 'isAncestorOf' that returns @False@ when the two blocks are -- equal. @@ -305,19 +313,20 @@ isDescendentOf = flip isAncestorOf isStrictDescendentOf :: TestBlock -> TestBlock -> Bool isStrictDescendentOf b1 b2 = b1 `isDescendentOf` b2 && b1 /= b2 -instance ShowProxy TestBlock where +instance ShowProxy TestBlock -newtype instance Header (TestBlockWith ptype) = - TestHeader { testHeader :: TestBlockWith ptype } +newtype instance Header (TestBlockWith ptype) + = TestHeader {testHeader :: TestBlockWith ptype} deriving stock (Eq, Show) deriving newtype (NoThunks, Serialise) -instance Typeable ptype => ShowProxy (Header (TestBlockWith ptype)) where +instance Typeable ptype => ShowProxy (Header (TestBlockWith ptype)) instance Typeable ptype => HasHeader (Header (TestBlockWith ptype)) where - getHeaderFields (TestHeader TestBlockWith{..}) = HeaderFields { - headerFieldHash = tbHash - , headerFieldSlot = tbSlot + getHeaderFields (TestHeader TestBlockWith{..}) = + HeaderFields + { headerFieldHash = tbHash + , headerFieldSlot = tbSlot , headerFieldBlockNo = fromIntegral . NE.length . 
unTestHash $ tbHash } @@ -333,15 +342,16 @@ instance (Typeable ptype, Eq ptype) => HasHeader (TestBlockWith ptype) where instance (Typeable ptype, Eq ptype) => GetPrevHash (TestBlockWith ptype) where headerPrevHash (TestHeader b) = - case NE.nonEmpty . NE.tail . unTestHash . tbHash $ b of - Nothing -> GenesisHash - Just prevHash -> BlockHash (TestHash prevHash) + case NE.nonEmpty . NE.tail . unTestHash . tbHash $ b of + Nothing -> GenesisHash + Just prevHash -> BlockHash (TestHash prevHash) instance StandardHash (TestBlockWith ptype) instance (Typeable ptype, Eq ptype) => Condense (TestBlockWith ptype) where - condense b = mconcat [ - "(H:" + condense b = + mconcat + [ "(H:" , condense (blockHash b) , ",S:" , condense (blockSlot b) @@ -354,26 +364,27 @@ instance (Typeable ptype, Eq ptype) => Condense (Header (TestBlockWith ptype)) w condense = condense . testHeader instance Condense (ChainHash (TestBlockWith ptype)) where - condense GenesisHash = "genesis" + condense GenesisHash = "genesis" condense (BlockHash h) = show h -data instance BlockConfig (TestBlockWith ptype) = TestBlockConfig { - -- | Number of core nodes - -- - -- We need this in order to compute the 'ValidateView', which must - -- conjure up a validation key out of thin air - testBlockNumCoreNodes :: !NumCoreNodes - } +data instance BlockConfig (TestBlockWith ptype) = TestBlockConfig + { testBlockNumCoreNodes :: !NumCoreNodes + -- ^ Number of core nodes + -- + -- We need this in order to compute the 'ValidateView', which must + -- conjure up a validation key out of thin air + } deriving (Show, Generic, NoThunks) -instance HasNetworkProtocolVersion (TestBlockWith ptype) where - -- Use defaults +instance HasNetworkProtocolVersion (TestBlockWith ptype) + +-- Use defaults instance ConfigSupportsNode (TestBlockWith ptype) where getSystemStart = const (SystemStart dummyDate) - where - -- This doesn't matter much - dummyDate = UTCTime (fromGregorian 2019 8 13) 0 + where + -- This doesn't matter much + 
dummyDate = UTCTime (fromGregorian 2019 8 13) 0 getNetworkMagic = const (NetworkMagic 42) @@ -381,42 +392,37 @@ instance ConfigSupportsNode (TestBlockWith ptype) where Payload semantics -------------------------------------------------------------------------------} -class ( Typeable ptype - , Eq ptype - , NoThunks ptype - - , forall mk. EqMK mk => Eq (PayloadDependentState ptype mk) - , forall mk. NoThunksMK mk => NoThunks (PayloadDependentState ptype mk) - , forall mk. ShowMK mk => Show (PayloadDependentState ptype mk) - - , forall mk. Generic (PayloadDependentState ptype mk) - , Serialise (PayloadDependentState ptype EmptyMK) - - - , HasLedgerTables (LedgerState (TestBlockWith ptype)) - , HasLedgerTables (Ticked (LedgerState (TestBlockWith ptype))) - , CanStowLedgerTables (LedgerState (TestBlockWith ptype)) - - , Eq (PayloadDependentError ptype) - , Show (PayloadDependentError ptype) - , Generic (PayloadDependentError ptype) - , ToExpr (PayloadDependentError ptype) - , Serialise (PayloadDependentError ptype) - , NoThunks (PayloadDependentError ptype) - - , NoThunks (CodecConfig (TestBlockWith ptype)) - - , NoThunks (StorageConfig (TestBlockWith ptype)) - ) => PayloadSemantics ptype where - +class + ( Typeable ptype + , Eq ptype + , NoThunks ptype + , forall mk. EqMK mk => Eq (PayloadDependentState ptype mk) + , forall mk. NoThunksMK mk => NoThunks (PayloadDependentState ptype mk) + , forall mk. ShowMK mk => Show (PayloadDependentState ptype mk) + , forall mk. 
Generic (PayloadDependentState ptype mk) + , Serialise (PayloadDependentState ptype EmptyMK) + , HasLedgerTables (LedgerState (TestBlockWith ptype)) + , HasLedgerTables (Ticked (LedgerState (TestBlockWith ptype))) + , CanStowLedgerTables (LedgerState (TestBlockWith ptype)) + , Eq (PayloadDependentError ptype) + , Show (PayloadDependentError ptype) + , Generic (PayloadDependentError ptype) + , ToExpr (PayloadDependentError ptype) + , Serialise (PayloadDependentError ptype) + , NoThunks (PayloadDependentError ptype) + , NoThunks (CodecConfig (TestBlockWith ptype)) + , NoThunks (StorageConfig (TestBlockWith ptype)) + ) => + PayloadSemantics ptype + where data PayloadDependentState ptype (mk :: MapKind) :: Type type PayloadDependentError ptype :: Type applyPayload :: - PayloadDependentState ptype ValuesMK - -> ptype - -> Either (PayloadDependentError ptype) (PayloadDependentState ptype TrackingMK) + PayloadDependentState ptype ValuesMK -> + ptype -> + Either (PayloadDependentError ptype) (PayloadDependentState ptype TrackingMK) -- | This function is used to implement the 'getBlockKeySets' function of the -- 'ApplyBlock' class. Thus we assume that the payload contains all the @@ -438,14 +444,15 @@ instance PayloadSemantics () where -- | Apply the payload directly to the payload dependent state portion of a -- ticked state, leaving the rest of the input ticked state unaltered. 
applyDirectlyToPayloadDependentState :: - PayloadSemantics ptype - => Ticked (LedgerState (TestBlockWith ptype)) ValuesMK - -> ptype - -> Either (PayloadDependentError ptype) - (Ticked (LedgerState (TestBlockWith ptype)) TrackingMK) + PayloadSemantics ptype => + Ticked (LedgerState (TestBlockWith ptype)) ValuesMK -> + ptype -> + Either + (PayloadDependentError ptype) + (Ticked (LedgerState (TestBlockWith ptype)) TrackingMK) applyDirectlyToPayloadDependentState (TickedTestLedger st) tx = do - payloadDepSt' <- applyPayload (payloadDependentState st) tx - pure $ TickedTestLedger $ st { payloadDependentState = payloadDepSt' } + payloadDepSt' <- applyPayload (payloadDependentState st) tx + pure $ TickedTestLedger $ st{payloadDependentState = payloadDepSt'} {------------------------------------------------------------------------------- NestedCtxt @@ -474,14 +481,15 @@ type instance Signed (Header (TestBlockWith ptype)) = () instance SignedHeader (Header (TestBlockWith ptype)) where headerSigned _ = () -data TestBlockError ptype = - -- | The hashes don't line up +data TestBlockError ptype + = -- | The hashes don't line up InvalidHash - (ChainHash (TestBlockWith ptype)) -- ^ Expected hash - (ChainHash (TestBlockWith ptype)) -- ^ Invalid hash - - -- | The block itself is invalid - | InvalidBlock + -- | Expected hash + (ChainHash (TestBlockWith ptype)) + -- | Invalid hash + (ChainHash (TestBlockWith ptype)) + | -- | The block itself is invalid + InvalidBlock | InvalidPayload (PayloadDependentError ptype) deriving stock instance Eq (PayloadDependentError ptype) => Eq (TestBlockError ptype) @@ -491,30 +499,36 @@ deriving stock instance Generic (TestBlockError ptype) deriving anyclass instance ( Typeable ptype , Generic (PayloadDependentError ptype) - , NoThunks (PayloadDependentError ptype)) => NoThunks (TestBlockError ptype) - -instance ( Typeable ptype - , Eq ptype - , NoThunks ptype - , NoThunks (CodecConfig (TestBlockWith ptype)) - , NoThunks (StorageConfig (TestBlockWith 
ptype)) - ) => BlockSupportsProtocol (TestBlockWith ptype) where - validateView TestBlockConfig{..} = - bftValidateView bftFields - where - NumCoreNodes numCore = testBlockNumCoreNodes + , NoThunks (PayloadDependentError ptype) + ) => + NoThunks (TestBlockError ptype) - bftFields :: Header (TestBlockWith ptype) -> BftFields BftMockCrypto () - bftFields (TestHeader tb) = BftFields { - bftSignature = SignedDSIGN $ mockSign () (signKey (tbSlot tb)) - } +instance + ( Typeable ptype + , Eq ptype + , NoThunks ptype + , NoThunks (CodecConfig (TestBlockWith ptype)) + , NoThunks (StorageConfig (TestBlockWith ptype)) + ) => + BlockSupportsProtocol (TestBlockWith ptype) + where + validateView TestBlockConfig{..} = + bftValidateView bftFields + where + NumCoreNodes numCore = testBlockNumCoreNodes + + bftFields :: Header (TestBlockWith ptype) -> BftFields BftMockCrypto () + bftFields (TestHeader tb) = + BftFields + { bftSignature = SignedDSIGN $ mockSign () (signKey (tbSlot tb)) + } - -- We don't want /our/ signing key, but rather the signing key of the - -- node that produced the block - signKey :: SlotNo -> SignKeyDSIGN MockDSIGN - signKey (SlotNo n) = SignKeyMockDSIGN $ n `mod` numCore + -- We don't want /our/ signing key, but rather the signing key of the + -- node that produced the block + signKey :: SlotNo -> SignKeyDSIGN MockDSIGN + signKey (SlotNo n) = SignKeyMockDSIGN $ n `mod` numCore -type instance TxIn (LedgerState TestBlock) = Void +type instance TxIn (LedgerState TestBlock) = Void type instance TxOut (LedgerState TestBlock) = Void instance LedgerTablesAreTrivial (LedgerState TestBlock) where @@ -522,35 +536,51 @@ instance LedgerTablesAreTrivial (LedgerState TestBlock) where instance LedgerTablesAreTrivial (Ticked (LedgerState TestBlock)) where convertMapKind (TickedTestLedger x) = TickedTestLedger $ convertMapKind x -deriving via TrivialLedgerTables (LedgerState TestBlock) - instance HasLedgerTables (LedgerState TestBlock) -deriving via TrivialLedgerTables 
(LedgerState TestBlock) - instance HasLedgerTables (Ticked (LedgerState TestBlock)) -deriving via TrivialLedgerTables (LedgerState TestBlock) - instance CanStowLedgerTables (LedgerState TestBlock) -deriving via TrivialLedgerTables (LedgerState TestBlock) - instance CanUpgradeLedgerTables (LedgerState TestBlock) -deriving via TrivialLedgerTables (LedgerState TestBlock) - instance SerializeTablesWithHint (LedgerState TestBlock) -deriving via Void - instance IndexedMemPack (LedgerState TestBlock EmptyMK) Void - -instance PayloadSemantics ptype - => ApplyBlock (LedgerState (TestBlockWith ptype)) (TestBlockWith ptype) where +deriving via + TrivialLedgerTables (LedgerState TestBlock) + instance + HasLedgerTables (LedgerState TestBlock) +deriving via + TrivialLedgerTables (LedgerState TestBlock) + instance + HasLedgerTables (Ticked (LedgerState TestBlock)) +deriving via + TrivialLedgerTables (LedgerState TestBlock) + instance + CanStowLedgerTables (LedgerState TestBlock) +deriving via + TrivialLedgerTables (LedgerState TestBlock) + instance + CanUpgradeLedgerTables (LedgerState TestBlock) +deriving via + TrivialLedgerTables (LedgerState TestBlock) + instance + SerializeTablesWithHint (LedgerState TestBlock) +deriving via + Void + instance + IndexedMemPack (LedgerState TestBlock EmptyMK) Void + +instance + PayloadSemantics ptype => + ApplyBlock (LedgerState (TestBlockWith ptype)) (TestBlockWith ptype) + where applyBlockLedgerResultWithValidation _validation _events _ tb@TestBlockWith{..} (TickedTestLedger TestLedger{..}) - | blockPrevHash tb /= pointHash lastAppliedPoint - = throwError $ InvalidHash (pointHash lastAppliedPoint) (blockPrevHash tb) - | tbValid == Invalid - = throwError $ InvalidBlock - | otherwise - = case applyPayload payloadDependentState tbPayload of - Left err -> throwError $ InvalidPayload err - Right st' -> return $ pureLedgerResult - $ trackingToDiffs - $ TestLedger { - lastAppliedPoint = Chain.blockPoint tb - , payloadDependentState = st' - } + | 
blockPrevHash tb /= pointHash lastAppliedPoint = + throwError $ InvalidHash (pointHash lastAppliedPoint) (blockPrevHash tb) + | tbValid == Invalid = + throwError $ InvalidBlock + | otherwise = + case applyPayload payloadDependentState tbPayload of + Left err -> throwError $ InvalidPayload err + Right st' -> + return $ + pureLedgerResult $ + trackingToDiffs $ + TestLedger + { lastAppliedPoint = Chain.blockPoint tb + , payloadDependentState = st' + } applyBlockLedgerResult = defaultApplyBlockLedgerResult reapplyBlockLedgerResult = @@ -558,56 +588,62 @@ instance PayloadSemantics ptype getBlockKeySets = getPayloadKeySets . tbPayload -data instance LedgerState (TestBlockWith ptype) mk = - TestLedger { - -- | The ledger state simply consists of the last applied block - lastAppliedPoint :: Point (TestBlockWith ptype) - -- | State that depends on the application of the block payload to the - -- state. - , payloadDependentState :: PayloadDependentState ptype mk - } +data instance LedgerState (TestBlockWith ptype) mk + = TestLedger + { lastAppliedPoint :: Point (TestBlockWith ptype) + -- ^ The ledger state simply consists of the last applied block + , payloadDependentState :: PayloadDependentState ptype mk + -- ^ State that depends on the application of the block payload to the + -- state. 
+ } -deriving stock instance (ShowMK mk, PayloadSemantics ptype) - => Show (LedgerState (TestBlockWith ptype) mk) +deriving stock instance + (ShowMK mk, PayloadSemantics ptype) => + Show (LedgerState (TestBlockWith ptype) mk) -deriving stock instance Eq (PayloadDependentState ptype mk) - => Eq (LedgerState (TestBlockWith ptype) mk) +deriving stock instance + Eq (PayloadDependentState ptype mk) => + Eq (LedgerState (TestBlockWith ptype) mk) deriving stock instance Generic (LedgerState (TestBlockWith ptype) mk) -deriving anyclass instance PayloadSemantics ptype => +deriving anyclass instance + PayloadSemantics ptype => Serialise (LedgerState (TestBlockWith ptype) EmptyMK) -deriving anyclass instance NoThunks (PayloadDependentState ptype mk) => - NoThunks (LedgerState (TestBlockWith ptype) mk) +deriving anyclass instance + NoThunks (PayloadDependentState ptype mk) => + NoThunks (LedgerState (TestBlockWith ptype) mk) testInitLedgerWithState :: PayloadDependentState ptype mk -> LedgerState (TestBlockWith ptype) mk testInitLedgerWithState = TestLedger GenesisPoint -- Ticking has no effect -newtype instance Ticked (LedgerState (TestBlockWith ptype)) mk = TickedTestLedger { - getTickedTestLedger :: LedgerState (TestBlockWith ptype) mk - } +newtype instance Ticked (LedgerState (TestBlockWith ptype)) mk = TickedTestLedger + { getTickedTestLedger :: LedgerState (TestBlockWith ptype) mk + } deriving stock instance Generic (Ticked (LedgerState (TestBlockWith ptype)) mk) -deriving anyclass instance (NoThunksMK mk, NoThunks (PayloadDependentState ptype mk)) - => NoThunks (Ticked (LedgerState (TestBlockWith ptype)) mk) +deriving anyclass instance + (NoThunksMK mk, NoThunks (PayloadDependentState ptype mk)) => + NoThunks (Ticked (LedgerState (TestBlockWith ptype)) mk) testInitExtLedgerWithState :: PayloadDependentState ptype mk -> ExtLedgerState (TestBlockWith ptype) mk -testInitExtLedgerWithState st = ExtLedgerState { - ledgerState = testInitLedgerWithState st 
+testInitExtLedgerWithState st = + ExtLedgerState + { ledgerState = testInitLedgerWithState st , headerState = genesisHeaderState () } -data TestBlockLedgerConfig = TestBlockLedgerConfig { - tblcHardForkParams :: !HardFork.EraParams, - -- | `Nothing` means an infinite forecast range. +data TestBlockLedgerConfig = TestBlockLedgerConfig + { tblcHardForkParams :: !HardFork.EraParams + , tblcForecastRange :: !(StrictMaybe SlotNo) + -- ^ `Nothing` means an infinite forecast range. -- Instead of SlotNo, it should be something like "SlotRange" - tblcForecastRange :: !(StrictMaybe SlotNo) -} + } deriving (Show, Eq, Generic) - deriving anyclass (NoThunks) + deriving anyclass NoThunks testBlockLedgerConfigFrom :: HardFork.EraParams -> TestBlockLedgerConfig testBlockLedgerConfigFrom eraParams = TestBlockLedgerConfig eraParams SNothing @@ -623,70 +659,81 @@ instance GetTip (Ticked (LedgerState (TestBlockWith ptype))) where instance PayloadSemantics ptype => IsLedger (LedgerState (TestBlockWith ptype)) where type LedgerErr (LedgerState (TestBlockWith ptype)) = TestBlockError ptype - type AuxLedgerEvent (LedgerState (TestBlockWith ptype)) = - VoidLedgerEvent (LedgerState (TestBlockWith ptype)) + type + AuxLedgerEvent (LedgerState (TestBlockWith ptype)) = + VoidLedgerEvent (LedgerState (TestBlockWith ptype)) - applyChainTickLedgerResult _ _ _ = pureLedgerResult - . TickedTestLedger - . noNewTickingDiffs + applyChainTickLedgerResult _ _ _ = + pureLedgerResult + . TickedTestLedger + . 
noNewTickingDiffs instance PayloadSemantics ptype => UpdateLedger (TestBlockWith ptype) -instance InspectLedger (TestBlockWith ptype) where - -- Defaults are fine +instance InspectLedger (TestBlockWith ptype) + +-- Defaults are fine -instance (PayloadSemantics ptype) => HasAnnTip (TestBlockWith ptype) where - -- Use defaults +instance PayloadSemantics ptype => HasAnnTip (TestBlockWith ptype) -instance (PayloadSemantics ptype) => BasicEnvelopeValidation (TestBlockWith ptype) where +-- Use defaults + +instance PayloadSemantics ptype => BasicEnvelopeValidation (TestBlockWith ptype) where -- The block number of a test block is derived from the length of the hash expectedFirstBlockNo _ = BlockNo 1 -instance (PayloadSemantics ptype) => ValidateEnvelope (TestBlockWith ptype) where - -- Use defaults +instance PayloadSemantics ptype => ValidateEnvelope (TestBlockWith ptype) + +-- Use defaults -instance (PayloadSemantics ptype) => LedgerSupportsProtocol (TestBlockWith ptype) where - protocolLedgerView _ _ = () +instance PayloadSemantics ptype => LedgerSupportsProtocol (TestBlockWith ptype) where + protocolLedgerView _ _ = () ledgerViewForecastAt cfg state = constantForecastInRange (strictMaybeToMaybe (tblcForecastRange cfg)) () (getTipSlot state) singleNodeTestConfigWith :: - CodecConfig (TestBlockWith ptype) - -> StorageConfig (TestBlockWith ptype) - -> SecurityParam - -> GenesisWindow - -> TopLevelConfig (TestBlockWith ptype) -singleNodeTestConfigWith codecConfig storageConfig k genesisWindow = TopLevelConfig { - topLevelConfigProtocol = BftConfig { - bftParams = BftParams { bftSecurityParam = k - , bftNumNodes = numCoreNodes - } - , bftSignKey = SignKeyMockDSIGN 0 - , bftVerKeys = Map.singleton (CoreId (CoreNodeId 0)) (VerKeyMockDSIGN 0) - } - , topLevelConfigLedger = ledgerCfgParams - , topLevelConfigBlock = TestBlockConfig numCoreNodes - , topLevelConfigCodec = codecConfig - , topLevelConfigStorage = storageConfig + CodecConfig (TestBlockWith ptype) -> + StorageConfig 
(TestBlockWith ptype) -> + SecurityParam -> + GenesisWindow -> + TopLevelConfig (TestBlockWith ptype) +singleNodeTestConfigWith codecConfig storageConfig k genesisWindow = + TopLevelConfig + { topLevelConfigProtocol = + BftConfig + { bftParams = + BftParams + { bftSecurityParam = k + , bftNumNodes = numCoreNodes + } + , bftSignKey = SignKeyMockDSIGN 0 + , bftVerKeys = Map.singleton (CoreId (CoreNodeId 0)) (VerKeyMockDSIGN 0) + } + , topLevelConfigLedger = ledgerCfgParams + , topLevelConfigBlock = TestBlockConfig numCoreNodes + , topLevelConfigCodec = codecConfig + , topLevelConfigStorage = storageConfig , topLevelConfigCheckpoints = emptyCheckpointsMap } - where - slotLength :: SlotLength - slotLength = slotLengthFromSec 20 - - numCoreNodes :: NumCoreNodes - numCoreNodes = NumCoreNodes 1 - - ledgerCfgParams :: TestBlockLedgerConfig - ledgerCfgParams = TestBlockLedgerConfig { - tblcHardForkParams = (HardFork.defaultEraParams k slotLength) { - HardFork.eraGenesisWin = genesisWindow - }, - tblcForecastRange = SNothing - } + where + slotLength :: SlotLength + slotLength = slotLengthFromSec 20 + + numCoreNodes :: NumCoreNodes + numCoreNodes = NumCoreNodes 1 + + ledgerCfgParams :: TestBlockLedgerConfig + ledgerCfgParams = + TestBlockLedgerConfig + { tblcHardForkParams = + (HardFork.defaultEraParams k slotLength) + { HardFork.eraGenesisWin = genesisWindow + } + , tblcForecastRange = SNothing + } instance ImmutableEraParams (TestBlockWith ptype) where - immutableEraParams = tblcHardForkParams . topLevelConfigLedger + immutableEraParams = tblcHardForkParams . 
topLevelConfigLedger {------------------------------------------------------------------------------- Test blocks without payload @@ -713,8 +760,8 @@ data instance BlockQuery TestBlock fp result where instance BlockSupportsLedgerQuery TestBlock where answerPureBlockQuery _cfg QueryLedgerTip dlv = let - TestLedger{ lastAppliedPoint } = ledgerState dlv - in + TestLedger{lastAppliedPoint} = ledgerState dlv + in lastAppliedPoint answerBlockQueryLookup _cfg q = case q of {} answerBlockQueryTraverse _cfg q = case q of {} @@ -741,21 +788,25 @@ singleNodeTestConfig = singleNodeTestConfigWithK (SecurityParam $ knownNonZeroBo singleNodeTestConfigWithK :: SecurityParam -> TopLevelConfig TestBlock singleNodeTestConfigWithK k = - singleNodeTestConfigWith TestBlockCodecConfig TestBlockStorageConfig k (GenesisWindow (2 * unNonZero (maxRollbacks k))) + singleNodeTestConfigWith + TestBlockCodecConfig + TestBlockStorageConfig + k + (GenesisWindow (2 * unNonZero (maxRollbacks k))) {------------------------------------------------------------------------------- Chain of blocks (without payload) -------------------------------------------------------------------------------} newtype BlockChain = BlockChain Word64 - deriving (Show) + deriving Show blockChain :: BlockChain -> Chain TestBlock blockChain = Chain.fromOldestFirst . chainToBlocks chainToBlocks :: BlockChain -> [TestBlock] chainToBlocks (BlockChain c) = - take (fromIntegral c) $ iterate successorBlock (firstBlock 0) + take (fromIntegral c) $ iterate successorBlock (firstBlock 0) instance Arbitrary BlockChain where arbitrary = BlockChain <$> choose (0, 30) @@ -773,7 +824,8 @@ successorBlock TestBlockWith{tbHash, tbSlot} = successorBlockWithPayload tbHash -- @g@ -> @[.., f]@ -> @[.., g f]@ -- The 'SlotNo' is left unchanged. 
modifyFork :: (Word64 -> Word64) -> TestBlock -> TestBlock -modifyFork g tb@TestBlockWith{ tbHash = UnsafeTestHash (f NE.:| h) } = tb +modifyFork g tb@TestBlockWith{tbHash = UnsafeTestHash (f NE.:| h)} = + tb { tbHash = let !gf = g f in UnsafeTestHash (gf NE.:| h) } @@ -793,13 +845,13 @@ newtype BlockTree = BlockTree (Tree ()) blockTree :: BlockTree -> Tree TestBlock blockTree (BlockTree t) = go (firstBlock 0) t - where - go :: TestBlock -> Tree () -> Tree TestBlock - go b (Node () ts) = Node b (zipWith go bs ts) - where - -- The first child of a node is the sucessor of b ("go down"), each - -- subsequent child is a "fork" ("go right") - bs = iterate forkBlock (successorBlock b) + where + go :: TestBlock -> Tree () -> Tree TestBlock + go b (Node () ts) = Node b (zipWith go bs ts) + where + -- The first child of a node is the sucessor of b ("go down"), each + -- subsequent child is a "fork" ("go right") + bs = iterate forkBlock (successorBlock b) treeToBlocks :: BlockTree -> [TestBlock] treeToBlocks = Tree.flatten . blockTree @@ -809,12 +861,12 @@ treeToChains = map Chain.fromOldestFirst . allPaths . blockTree treePreferredChain :: BlockTree -> Chain TestBlock treePreferredChain = - fromMaybe Genesis + fromMaybe Genesis . selectUnvalidatedChain - (Proxy @(BlockProtocol TestBlock)) - (() :: ChainOrderConfig (SelectView (BlockProtocol TestBlock))) - blockNo - Genesis + (Proxy @(BlockProtocol TestBlock)) + (() :: ChainOrderConfig (SelectView (BlockProtocol TestBlock))) + blockNo + Genesis . 
treeToChains instance Show BlockTree where @@ -822,36 +874,40 @@ instance Show BlockTree where instance Arbitrary BlockTree where arbitrary = sized $ \n -> - BlockTree <$> mkTree 0.2 (replicate (max 1 n) ()) + BlockTree <$> mkTree 0.2 (replicate (max 1 n) ()) shrink (BlockTree t) = - BlockTree <$> shrinkTree t + BlockTree <$> shrinkTree t {------------------------------------------------------------------------------- Generic auxiliary -------------------------------------------------------------------------------} -- | Construct random binary tree from given set of elements -mkTree :: forall a. - Double -- ^ Likelyhood of branching at any point - -> [a] -> Gen (Tree a) +mkTree :: + forall a. + -- | Likelyhood of branching at any point + Double -> + [a] -> + Gen (Tree a) mkTree threshold = go - where - go :: [a] -> Gen (Tree a) - go [] = error "go: no elements" - go [a] = return $ Node a [] - go (a:as) = do n <- choose (0, 1) - if n >= threshold || null right - then (\t -> Node a [t]) <$> go as - else (\l r -> Node a [l, r]) <$> go left <*> go right - where - (left, right) = split as + where + go :: [a] -> Gen (Tree a) + go [] = error "go: no elements" + go [a] = return $ Node a [] + go (a : as) = do + n <- choose (0, 1) + if n >= threshold || null right + then (\t -> Node a [t]) <$> go as + else (\l r -> Node a [l, r]) <$> go left <*> go right + where + (left, right) = split as -- | Shrink tree (without shrinking any elements) shrinkTree :: Tree a -> [Tree a] -shrinkTree (Node a ts) = map (Node a) (shrinkList shrinkTree ts) - -- Also try shrinking all subtrees at once - ++ map (Node a) (transpose (map shrinkTree ts)) - +shrinkTree (Node a ts) = + map (Node a) (shrinkList shrinkTree ts) + -- Also try shrinking all subtrees at once + ++ map (Node a) (transpose (map shrinkTree ts)) -- | Split list into two -- @@ -859,42 +915,43 @@ shrinkTree (Node a ts) = map (Node a) (shrinkList shrinkTree ts) -- > take 5 (fst (split [1..])) == [1,3,5,7,9] -- > take 5 (snd (split 
[1..])) == [2,4,6,8,10] split :: [a] -> ([a], [a]) -split [] = ([], []) -split (a:as) = let (xs, ys) = split as in (a:ys, xs) +split [] = ([], []) +split (a : as) = let (xs, ys) = split as in (a : ys, xs) -- | All paths through a tree allPaths :: Tree a -> [[a]] allPaths t = [] : nonEmptyPaths t nonEmptyPaths :: Tree a -> [[a]] -nonEmptyPaths (Node a ts) = [a] : map (a:) (concatMap nonEmptyPaths ts) +nonEmptyPaths (Node a ts) = [a] : map (a :) (concatMap nonEmptyPaths ts) {------------------------------------------------------------------------------- Test auxiliary -------------------------------------------------------------------------------} newtype Permutation = Permutation Int - deriving (Show) + deriving Show instance Arbitrary Permutation where arbitrary = Permutation . cast <$> arbitrary - where - -- Use the generator for 'Int64' (rather than 'Int') as it is not biased - -- towards small values - cast :: Int64 -> Int - cast = fromIntegral + where + -- Use the generator for 'Int64' (rather than 'Int') as it is not biased + -- towards small values + cast :: Int64 -> Int + cast = fromIntegral -- Doesn't make sense to shrink PRNG seed shrink _ = [] permute :: Permutation -> [a] -> [a] permute (Permutation n) = go (R.mkStdGen n) - where - go :: R.StdGen -> [a] -> [a] - go _ [] = [] - go g as = let (i, g') = R.randomR (0, length as - 1) g - (before, a:after) = splitAt i as - in a : go g' (before ++ after) + where + go :: R.StdGen -> [a] -> [a] + go _ [] = [] + go g as = + let (i, g') = R.randomR (0, length as - 1) g + (before, a : after) = splitAt i as + in a : go g' (before ++ after) {------------------------------------------------------------------------------- Additional serialisation instances @@ -918,26 +975,29 @@ instance ConvertRawHash (TestBlockWith ptype) where -- 8 + 100 * 8: size of the list, and its elements, one Word64 each hashSize _ = 808 toRawHash _ (TestHash h) - | len > 100 = error "hash too long" - | otherwise = BL.toStrict . 
Put.runPut $ do - Put.putWord64le (fromIntegral len) - for_ h Put.putWord64le - replicateM_ (100 - len) $ Put.putWord64le 0 - where - len = length h + | len > 100 = error "hash too long" + | otherwise = BL.toStrict . Put.runPut $ do + Put.putWord64le (fromIntegral len) + for_ h Put.putWord64le + replicateM_ (100 - len) $ Put.putWord64le 0 + where + len = length h fromRawHash _ bs = flip Get.runGet (BL.fromStrict bs) $ do - len <- fromIntegral <$> Get.getWord64le - (NE.nonEmpty -> Just h, rs) <- - splitAt len <$> replicateM 100 Get.getWord64le - guard $ all (0 ==) rs - pure $ TestHash h + len <- fromIntegral <$> Get.getWord64le + (NE.nonEmpty -> Just h, rs) <- + splitAt len <$> replicateM 100 Get.getWord64le + guard $ all (0 ==) rs + pure $ TestHash h instance Serialise ptype => EncodeDisk (TestBlockWith ptype) (TestBlockWith ptype) instance Serialise ptype => DecodeDisk (TestBlockWith ptype) (BL.ByteString -> TestBlockWith ptype) where decodeDisk _ = const <$> decode instance Serialise ptype => EncodeDisk (TestBlockWith ptype) (Header (TestBlockWith ptype)) -instance Serialise ptype => DecodeDisk (TestBlockWith ptype) (BL.ByteString -> Header (TestBlockWith ptype)) where +instance + Serialise ptype => + DecodeDisk (TestBlockWith ptype) (BL.ByteString -> Header (TestBlockWith ptype)) + where decodeDisk _ = const <$> decode instance EncodeDisk (TestBlockWith ptype) (AnnTip (TestBlockWith ptype)) @@ -945,8 +1005,12 @@ instance DecodeDisk (TestBlockWith ptype) (AnnTip (TestBlockWith ptype)) instance ReconstructNestedCtxt Header (TestBlockWith ptype) -instance PayloadSemantics ptype => EncodeDisk (TestBlockWith ptype) (LedgerState (TestBlockWith ptype) EmptyMK) -instance PayloadSemantics ptype => DecodeDisk (TestBlockWith ptype) (LedgerState (TestBlockWith ptype) EmptyMK) +instance + PayloadSemantics ptype => + EncodeDisk (TestBlockWith ptype) (LedgerState (TestBlockWith ptype) EmptyMK) +instance + PayloadSemantics ptype => + DecodeDisk (TestBlockWith ptype) 
(LedgerState (TestBlockWith ptype) EmptyMK) instance Serialise ptype => EncodeDiskDep (NestedCtxt Header) (TestBlockWith ptype) instance Serialise ptype => DecodeDiskDep (NestedCtxt Header) (TestBlockWith ptype) @@ -957,24 +1021,29 @@ instance DecodeDisk (TestBlockWith ptype) () -- Header (TestBlockWith ptype) is a newtype around TestBlockWith ptype instance Serialise ptype => HasBinaryBlockInfo (TestBlockWith ptype) where - getBinaryBlockInfo blk = BinaryBlockInfo { - headerOffset = 0 - , headerSize = fromIntegral . BL.length . serialise $ blk + getBinaryBlockInfo blk = + BinaryBlockInfo + { headerOffset = 0 + , headerSize = fromIntegral . BL.length . serialise $ blk } -instance ( Serialise ptype - , PayloadSemantics ptype - , IndexedMemPack - (LedgerState (TestBlockWith ptype) EmptyMK) - (TxOut (LedgerState (TestBlockWith ptype))) - , SerializeTablesWithHint (LedgerState (TestBlockWith ptype)) - ) => SerialiseDiskConstraints (TestBlockWith ptype) +instance + ( Serialise ptype + , PayloadSemantics ptype + , IndexedMemPack + (LedgerState (TestBlockWith ptype) EmptyMK) + (TxOut (LedgerState (TestBlockWith ptype))) + , SerializeTablesWithHint (LedgerState (TestBlockWith ptype)) + ) => + SerialiseDiskConstraints (TestBlockWith ptype) ----- -deriving via SelectViewDiffusionPipelining (TestBlockWith ptype) - instance BlockSupportsProtocol (TestBlockWith ptype) - => BlockSupportsDiffusionPipelining (TestBlockWith ptype) +deriving via + SelectViewDiffusionPipelining (TestBlockWith ptype) + instance + BlockSupportsProtocol (TestBlockWith ptype) => + BlockSupportsDiffusionPipelining (TestBlockWith ptype) ----- @@ -1007,34 +1076,36 @@ deriving via SelectViewDiffusionPipelining (TestBlockWith ptype) -- @ updateToNextNumeral :: RealPoint TestBlock -> (Point TestBlock, NonEmpty TestBlock) updateToNextNumeral rp0 = - let TestHash (fork :| forks) = hash0 in go (0 :: Int) fork forks - where - RealPoint slot0 hash0 = rp0 - - go !depth fork = \case - [] -> rebuild depth (fork + 1) 
[] - fork2 : forks -> - if 0 == fork then rebuild depth 1 (fork2 : forks) else - go (depth + 1) fork2 forks - - rebuild depth fork' forks = - let islot = slot0 - fromIntegral depth - 1 - ipoint = case NE.nonEmpty forks of - Nothing -> GenesisPoint - Just ne -> BlockPoint islot (TestHash ne) - - next = TestBlockWith { - tbHash = TestHash (fork' :| forks) - , tbSlot = islot + 1 - , tbValid = Valid - , tbPayload = () - } - in - (ipoint, go' (next :| []) depth) - - go' ne = \case - 0 -> NE.reverse ne - depth -> - go' - (successorBlock (NE.head ne) `NE.cons` ne) - (depth - 1) + let TestHash (fork :| forks) = hash0 in go (0 :: Int) fork forks + where + RealPoint slot0 hash0 = rp0 + + go !depth fork = \case + [] -> rebuild depth (fork + 1) [] + fork2 : forks -> + if 0 == fork + then rebuild depth 1 (fork2 : forks) + else + go (depth + 1) fork2 forks + + rebuild depth fork' forks = + let islot = slot0 - fromIntegral depth - 1 + ipoint = case NE.nonEmpty forks of + Nothing -> GenesisPoint + Just ne -> BlockPoint islot (TestHash ne) + + next = + TestBlockWith + { tbHash = TestHash (fork' :| forks) + , tbSlot = islot + 1 + , tbValid = Valid + , tbPayload = () + } + in (ipoint, go' (next :| []) depth) + + go' ne = \case + 0 -> NE.reverse ne + depth -> + go' + (successorBlock (NE.head ne) `NE.cons` ne) + (depth - 1) diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/TestEnv.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/TestEnv.hs index 2e960ff719..1c64f801f2 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/TestEnv.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/TestEnv.hs @@ -2,8 +2,8 @@ {-# LANGUAGE RecordWildCards #-} -- | A @tasty@ command-line option for enabling nightly tests -module Test.Util.TestEnv ( - TestEnv (..) +module Test.Util.TestEnv + ( TestEnv (..) 
, adjustQuickCheckMaxSize , adjustQuickCheckTests , askTestEnv @@ -11,61 +11,63 @@ module Test.Util.TestEnv ( , defaultTestEnvConfig ) where -import Cardano.Crypto.Init (cryptoInit) -import Data.Proxy (Proxy (..)) -import Main.Utf8 (withStdTerminalHandles) -import Options.Applicative (metavar) -import Test.Tasty -import Test.Tasty.Ingredients -import Test.Tasty.Ingredients.Rerun -import Test.Tasty.Options -import Test.Tasty.QuickCheck +import Cardano.Crypto.Init (cryptoInit) +import Data.Proxy (Proxy (..)) +import Main.Utf8 (withStdTerminalHandles) +import Options.Applicative (metavar) +import Test.Tasty +import Test.Tasty.Ingredients +import Test.Tasty.Ingredients.Rerun +import Test.Tasty.Options +import Test.Tasty.QuickCheck -- | 'defaultMain' extended with 'iohkTestEnvIngredient' and setting the -- terminal handles to UTF-8. defaultMainWithTestEnv :: TestEnvConfig -> TestTree -> IO () defaultMainWithTestEnv testConfig testTree = do - cryptoInit - withStdTerminalHandles $ - defaultMainWithIngredients - [rerunningTests (testEnvIngredient : defaultIngredients)] - ( withTestEnv testConfig testTree ) - where - testEnvIngredient :: Ingredient - testEnvIngredient = includingOptions [Option (Proxy :: Proxy TestEnv)] + cryptoInit + withStdTerminalHandles $ + defaultMainWithIngredients + [rerunningTests (testEnvIngredient : defaultIngredients)] + (withTestEnv testConfig testTree) + where + testEnvIngredient :: Ingredient + testEnvIngredient = includingOptions [Option (Proxy :: Proxy TestEnv)] -- | Set the appropriate options for the test environment withTestEnv :: TestEnvConfig -> TestTree -> TestTree withTestEnv TestEnvConfig{..} testTree = askOption $ \case - Nightly -> localOption (QuickCheckTests nightly) testTree - CI -> localOption (QuickCheckTests ci) testTree - Dev -> testTree + Nightly -> localOption (QuickCheckTests nightly) testTree + CI -> localOption (QuickCheckTests ci) testTree + Dev -> testTree -- | Query and adjust options for `TestEnv` askTestEnv :: 
(TestEnv -> TestTree) -> TestTree askTestEnv = askOption -- | Test configurations for test environment -data TestEnvConfig = TestEnvConfig { nightly :: Int, ci :: Int } +data TestEnvConfig = TestEnvConfig {nightly :: Int, ci :: Int} -- | Default set of tests for each environment defaultTestEnvConfig :: TestEnvConfig -defaultTestEnvConfig = TestEnvConfig { nightly = 100000, ci = 10000 } +defaultTestEnvConfig = TestEnvConfig{nightly = 100000, ci = 10000} -- | An 'Option' that indicates the environment in which to run tests. data TestEnv = Nightly | CI | Dev safeReadTestEnv :: String -> Maybe TestEnv safeReadTestEnv "nightly" = Just Nightly -safeReadTestEnv "ci" = Just CI -safeReadTestEnv "dev" = Just Dev -safeReadTestEnv _ = Nothing +safeReadTestEnv "ci" = Just CI +safeReadTestEnv "dev" = Just Dev +safeReadTestEnv _ = Nothing instance IsOption TestEnv where defaultValue = Dev parseValue = safeReadTestEnv optionName = pure "test-env" - optionHelp = pure "Enable a test mode. \ + optionHelp = + pure + "Enable a test mode. \ \ The 'dev' env sets the default number of quickcheck tests to 100, \ \ 'nightly' env sets it to 100_000 quickcheck tests, and \ \ 'ci' env sets it to 10_000 quickcheck tests. \ diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Time.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Time.hs index b7f9c7e3eb..be562a9f79 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Time.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Time.hs @@ -2,15 +2,15 @@ module Test.Util.Time (dawnOfTime) where -import Data.Time (UTCTime (..), fromGregorian) +import Data.Time (UTCTime (..), fromGregorian) -- | Dawn of time -- -- Everybody knows nothing happened before 2000-01-01 00:00:00 dawnOfTime :: UTCTime dawnOfTime = UTCTime day 0 - where - -- Force it to avoid a thunk in 'UTCTime', which doesn't have bangs on its - -- arguments. 
The thunk itself would be harmless, as it would be forced the - -- first time it's accessed, but it causes the 'NoThunks' check to fail. - !day = fromGregorian 2000 01 01 + where + -- Force it to avoid a thunk in 'UTCTime', which doesn't have bangs on its + -- arguments. The thunk itself would be harmless, as it would be forced the + -- first time it's accessed, but it causes the 'NoThunks' check to fail. + !day = fromGregorian 2000 01 01 diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ToExpr.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ToExpr.hs index c711a07ff8..e4fd99dc6e 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ToExpr.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/ToExpr.hs @@ -1,27 +1,26 @@ {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | This module implements QSM's @CanDiff@ typeclass using @tree-diff@'s -- @ToExpr@. 
module Test.Util.ToExpr () where -import Data.TreeDiff as T -import qualified Test.StateMachine as QSM -import Test.StateMachine.Diffing (CanDiff (..)) -import qualified Test.StateMachine.Types.References as QSM +import Data.TreeDiff as T +import Test.StateMachine qualified as QSM +import Test.StateMachine.Diffing (CanDiff (..)) +import Test.StateMachine.Types.References qualified as QSM instance ToExpr x => CanDiff x where - type ADiff x = Edit EditExpr + type ADiff x = Edit EditExpr type AnExpr x = Expr - toDiff = toExpr - exprDiff _ = T.exprDiff + toDiff = toExpr + exprDiff _ = T.exprDiff diffToDocCompact _ = ansiWlBgEditExprCompact - diffToDoc _ = ansiWlBgEditExpr - exprToDoc _ = ansiWlBgExpr + diffToDoc _ = ansiWlBgEditExpr + exprToDoc _ = ansiWlBgExpr {------------------------------------------------------------------------------- QSM's References instances diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Tracer.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Tracer.hs index e2218e6e93..3dbbf33c2d 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Tracer.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/Tracer.hs @@ -1,32 +1,37 @@ {-# LANGUAGE ScopedTypeVariables #-} -module Test.Util.Tracer ( - recordingTracerIORef + +module Test.Util.Tracer + ( recordingTracerIORef , recordingTracerM , recordingTracerTVar ) where -import Control.Tracer -import Data.IORef -import Ouroboros.Consensus.Util.IOLike -import System.IO.Unsafe (unsafePerformIO) +import Control.Tracer +import Data.IORef +import Ouroboros.Consensus.Util.IOLike +import System.IO.Unsafe (unsafePerformIO) -- | Create a 'Tracer' that stores all events in an 'IORef' that is atomically -- updated. The second return value lets you obtain the events recorded so far -- (from oldest to newest). Obtaining the events does not erase them. 
recordingTracerIORef :: IO (Tracer IO ev, IO [ev]) -recordingTracerIORef = newIORef [] >>= \ref -> return - ( Tracer $ \ev -> atomicModifyIORef' ref $ \evs -> (ev:evs, ()) - , reverse <$> readIORef ref - ) +recordingTracerIORef = + newIORef [] >>= \ref -> + return + ( Tracer $ \ev -> atomicModifyIORef' ref $ \evs -> (ev : evs, ()) + , reverse <$> readIORef ref + ) -- | Create a 'Tracer' that stores all events in a 'TVar' that is atomically -- updated. The second return value lets you obtain the events recorded so far -- (from oldest to newest). Obtaining the events does not erase them. recordingTracerTVar :: MonadSTM m => m (Tracer m ev, m [ev]) -recordingTracerTVar = uncheckedNewTVarM [] >>= \ref -> return - ( Tracer $ \ev -> atomically $ modifyTVar ref (ev:) - , atomically $ reverse <$> readTVar ref - ) +recordingTracerTVar = + uncheckedNewTVarM [] >>= \ref -> + return + ( Tracer $ \ev -> atomically $ modifyTVar ref (ev :) + , atomically $ reverse <$> readTVar ref + ) -- | Like 'recordingTracerIORef', but lifts IO to an arbitrary applicative. -- This is useful to record events without changing the scheduling during a @@ -35,18 +40,18 @@ recordingTracerM :: forall m ev. Monad m => m (Tracer m ev, m [ev]) recordingTracerM = do (tr, get) <- liftIOtoM recordingTracerIORef pure (natTracer liftIOtoM tr, liftIOtoM get) - where - liftIOtoM :: IO a -> m a - liftIOtoM m = do - -- The ficticious state is only used to force unsafePerformIO to run @m@ - -- every time @liftIOtoM m@ is evaluated. - s <- getStateM - pure $! snd $ unsafePerformIO $ do - r <- m - pure (s, r) + where + liftIOtoM :: IO a -> m a + liftIOtoM m = do + -- The ficticious state is only used to force unsafePerformIO to run @m@ + -- every time @liftIOtoM m@ is evaluated. + s <- getStateM + pure $! snd $ unsafePerformIO $ do + r <- m + pure (s, r) - -- We mark this function as NOINLINE to ensure the compiler cannot reason - -- that two calls of @getStateM@ might yield the same value. 
- {-# NOINLINE getStateM #-} - getStateM :: m Int - getStateM = pure 0 + -- We mark this function as NOINLINE to ensure the compiler cannot reason + -- that two calls of @getStateM@ might yield the same value. + {-# NOINLINE getStateM #-} + getStateM :: m Int + getStateM = pure 0 diff --git a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/WithEq.hs b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/WithEq.hs index dd931c0c6e..1adaf5d95a 100644 --- a/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/WithEq.hs +++ b/ouroboros-consensus/src/unstable-consensus-testlib/Test/Util/WithEq.hs @@ -2,16 +2,16 @@ {-# LANGUAGE DerivingStrategies #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} -module Test.Util.WithEq ( - Id (..) +module Test.Util.WithEq + ( Id (..) , WithEq (..) ) where -import Data.Function (on) -import GHC.Generics (Generic) +import Data.Function (on) +import GHC.Generics (Generic) newtype Id = Id Word - deriving stock (Show, Generic) + deriving stock (Show, Generic) deriving newtype (Eq, Ord, Enum, Bounded, Num) -- | Use this type to add an `Eq` instance for types that don't have one or @@ -20,9 +20,9 @@ newtype Id = Id Word -- -- E.g., `ImmutableDB.Iterator` needs an `Eq` instance in the q-s-m tests data WithEq a = WithEq - { getId :: Id - , unWithEq :: a - } + { getId :: Id + , unWithEq :: a + } deriving (Show, Generic) instance Eq (WithEq a) where diff --git a/ouroboros-consensus/src/unstable-mempool-test-utils/Test/Consensus/Mempool/Mocked.hs b/ouroboros-consensus/src/unstable-mempool-test-utils/Test/Consensus/Mempool/Mocked.hs index 811b80c894..8880cca248 100644 --- a/ouroboros-consensus/src/unstable-mempool-test-utils/Test/Consensus/Mempool/Mocked.hs +++ b/ouroboros-consensus/src/unstable-mempool-test-utils/Test/Consensus/Mempool/Mocked.hs @@ -3,40 +3,52 @@ {-# LANGUAGE ScopedTypeVariables #-} -- | Mempool with a mocked ledger interface -module Test.Consensus.Mempool.Mocked ( - InitialMempoolAndModelParams (..) 
+module Test.Consensus.Mempool.Mocked + ( InitialMempoolAndModelParams (..) + -- * Mempool with a mocked LedgerDB interface , MockedMempool (getMempool) , openMockedMempool , setLedgerState + -- * Mempool API functions , addTx , getTxs , removeTxsEvenIfValid ) where -import Control.Concurrent.Class.MonadSTM.Strict (StrictTVar, - atomically, newTVarIO, readTVar, readTVarIO, writeTVar) -import Control.DeepSeq (NFData (rnf)) -import Control.Tracer (Tracer) -import qualified Data.List.NonEmpty as NE -import Ouroboros.Consensus.Block (castPoint) -import Ouroboros.Consensus.HeaderValidation as Header -import Ouroboros.Consensus.Ledger.Basics -import qualified Ouroboros.Consensus.Ledger.Basics as Ledger -import qualified Ouroboros.Consensus.Ledger.SupportsMempool as Ledger -import Ouroboros.Consensus.Ledger.Tables.Utils (forgetLedgerTables, - restrictValues') -import Ouroboros.Consensus.Mempool (Mempool) -import qualified Ouroboros.Consensus.Mempool as Mempool -import Ouroboros.Consensus.Mempool.API (AddTxOnBehalfOf, - MempoolAddTxResult) +import Control.Concurrent.Class.MonadSTM.Strict + ( StrictTVar + , atomically + , newTVarIO + , readTVar + , readTVarIO + , writeTVar + ) +import Control.DeepSeq (NFData (rnf)) +import Control.Tracer (Tracer) +import Data.List.NonEmpty qualified as NE +import Ouroboros.Consensus.Block (castPoint) +import Ouroboros.Consensus.HeaderValidation as Header +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.Basics qualified as Ledger +import Ouroboros.Consensus.Ledger.SupportsMempool qualified as Ledger +import Ouroboros.Consensus.Ledger.Tables.Utils + ( forgetLedgerTables + , restrictValues' + ) +import Ouroboros.Consensus.Mempool (Mempool) +import Ouroboros.Consensus.Mempool qualified as Mempool +import Ouroboros.Consensus.Mempool.API + ( AddTxOnBehalfOf + , MempoolAddTxResult + ) -data MockedMempool m blk = MockedMempool { - getLedgerInterface :: !(Mempool.LedgerInterface m blk) - , getLedgerStateTVar :: 
!(StrictTVar m (LedgerState blk ValuesMK)) - , getMempool :: !(Mempool m blk) - } +data MockedMempool m blk = MockedMempool + { getLedgerInterface :: !(Mempool.LedgerInterface m blk) + , getLedgerStateTVar :: !(StrictTVar m (LedgerState blk ValuesMK)) + , getMempool :: !(Mempool m blk) + } instance NFData (MockedMempool m blk) where -- TODO: check we're OK with skipping the evaluation of the @@ -47,72 +59,78 @@ instance NFData (MockedMempool m blk) where -- [env]( Mempool.MempoolCapacityBytesOverride - -> Tracer IO (Mempool.TraceEventMempool blk) - -> InitialMempoolAndModelParams blk - -> IO (MockedMempool IO blk) + ( Ledger.LedgerSupportsMempool blk + , Ledger.HasTxId (Ledger.GenTx blk) + , Header.ValidateEnvelope blk + ) => + Mempool.MempoolCapacityBytesOverride -> + Tracer IO (Mempool.TraceEventMempool blk) -> + InitialMempoolAndModelParams blk -> + IO (MockedMempool IO blk) openMockedMempool capacityOverride tracer initialParams = do - currentLedgerStateTVar <- newTVarIO (immpInitialState initialParams) - let ledgerItf = Mempool.LedgerInterface { - Mempool.getCurrentLedgerState = forgetLedgerTables <$> readTVar currentLedgerStateTVar - , Mempool.getLedgerTablesAtFor = \pt keys -> do - st <- readTVarIO currentLedgerStateTVar - if castPoint (getTip st) == pt - then pure $ Just $ restrictValues' st keys - else pure Nothing - } - mempool <- Mempool.openMempoolWithoutSyncThread - ledgerItf - (immpLedgerConfig initialParams) - capacityOverride - tracer - pure MockedMempool { - getLedgerInterface = ledgerItf + currentLedgerStateTVar <- newTVarIO (immpInitialState initialParams) + let ledgerItf = + Mempool.LedgerInterface + { Mempool.getCurrentLedgerState = forgetLedgerTables <$> readTVar currentLedgerStateTVar + , Mempool.getLedgerTablesAtFor = \pt keys -> do + st <- readTVarIO currentLedgerStateTVar + if castPoint (getTip st) == pt + then pure $ Just $ restrictValues' st keys + else pure Nothing + } + mempool <- + Mempool.openMempoolWithoutSyncThread + ledgerItf + 
(immpLedgerConfig initialParams) + capacityOverride + tracer + pure + MockedMempool + { getLedgerInterface = ledgerItf , getLedgerStateTVar = currentLedgerStateTVar - , getMempool = mempool - } + , getMempool = mempool + } setLedgerState :: - MockedMempool IO blk - -> LedgerState blk ValuesMK - -> IO () -setLedgerState MockedMempool {getLedgerStateTVar} newSt = + MockedMempool IO blk -> + LedgerState blk ValuesMK -> + IO () +setLedgerState MockedMempool{getLedgerStateTVar} newSt = atomically $ writeTVar getLedgerStateTVar newSt addTx :: - MockedMempool m blk - -> AddTxOnBehalfOf - -> Ledger.GenTx blk - -> m (MempoolAddTxResult blk) + MockedMempool m blk -> + AddTxOnBehalfOf -> + Ledger.GenTx blk -> + m (MempoolAddTxResult blk) addTx = Mempool.addTx . getMempool removeTxsEvenIfValid :: - MockedMempool m blk - -> NE.NonEmpty (Ledger.GenTxId blk) - -> m () + MockedMempool m blk -> + NE.NonEmpty (Ledger.GenTxId blk) -> + m () removeTxsEvenIfValid = Mempool.removeTxsEvenIfValid . getMempool -getTxs :: forall blk. - (Ledger.LedgerSupportsMempool blk) - => MockedMempool IO blk -> IO [Ledger.GenTx blk] +getTxs :: + forall blk. 
+  Ledger.LedgerSupportsMempool blk =>
+  MockedMempool IO blk -> IO [Ledger.GenTx blk]
 getTxs mockedMempool = do
-    snapshotTxs <- fmap Mempool.snapshotTxs $ atomically
-                 $ Mempool.getSnapshot
-                 $ getMempool mockedMempool
-    pure $ fmap prjTx snapshotTxs
-  where
-    prjTx (a, _b, _c) = Ledger.txForgetValidated a :: Ledger.GenTx blk
+  snapshotTxs <-
+    fmap Mempool.snapshotTxs $
+      atomically $
+        Mempool.getSnapshot $
+          getMempool mockedMempool
+  pure $ fmap prjTx snapshotTxs
+ where
+  prjTx (a, _b, _c) = Ledger.txForgetValidated a :: Ledger.GenTx blk
diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger.hs
index 9440ce2a65..01746bca0b 100644
--- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger.hs
+++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger.hs
@@ -1,12 +1,12 @@
 module Ouroboros.Consensus.Mock.Ledger (module X) where
 
-import           Ouroboros.Consensus.Mock.Ledger.Address as X
-import           Ouroboros.Consensus.Mock.Ledger.Block as X
-import           Ouroboros.Consensus.Mock.Ledger.Block.BFT as X
-import           Ouroboros.Consensus.Mock.Ledger.Block.PBFT as X
-import           Ouroboros.Consensus.Mock.Ledger.Block.Praos as X
-import           Ouroboros.Consensus.Mock.Ledger.Block.PraosRule as X
-import           Ouroboros.Consensus.Mock.Ledger.Forge as X
-import           Ouroboros.Consensus.Mock.Ledger.Stake as X
-import           Ouroboros.Consensus.Mock.Ledger.State as X
-import           Ouroboros.Consensus.Mock.Ledger.UTxO as X hiding (TxId)
+import Ouroboros.Consensus.Mock.Ledger.Address as X
+import Ouroboros.Consensus.Mock.Ledger.Block as X
+import Ouroboros.Consensus.Mock.Ledger.Block.BFT as X
+import Ouroboros.Consensus.Mock.Ledger.Block.PBFT as X
+import Ouroboros.Consensus.Mock.Ledger.Block.Praos as X
+import Ouroboros.Consensus.Mock.Ledger.Block.PraosRule as X
+import Ouroboros.Consensus.Mock.Ledger.Forge as X
+import Ouroboros.Consensus.Mock.Ledger.Stake as X
+import Ouroboros.Consensus.Mock.Ledger.State as X
+import Ouroboros.Consensus.Mock.Ledger.UTxO as X hiding (TxId)
diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Address.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Address.hs
index 92a6e74d30..d0c0e54979 100644
--- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Address.hs
+++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Address.hs
@@ -1,29 +1,29 @@
 {-# LANGUAGE GeneralizedNewtypeDeriving #-}
 
-module Ouroboros.Consensus.Mock.Ledger.Address (
-    Addr
+module Ouroboros.Consensus.Mock.Ledger.Address
+  ( Addr
   , AddrDist
   , mkAddrDist
   ) where
 
-import           Cardano.Binary (FromCBOR (..), ToCBOR (..))
-import           Codec.Serialise (Serialise)
-import           Control.DeepSeq (NFData)
-import qualified Data.ByteString.Char8 as BS8
-import           Data.Map.Strict (Map)
-import qualified Data.Map.Strict as Map
-import           Data.MemPack (MemPack (..))
-import           Data.String
-import           Data.Text (pack, unpack)
-import           NoThunks.Class (NoThunks)
-import           Ouroboros.Consensus.Node.ProtocolInfo
-import           Ouroboros.Consensus.NodeId (NodeId (..))
-import           Ouroboros.Consensus.Util.Condense
+import Cardano.Binary (FromCBOR (..), ToCBOR (..))
+import Codec.Serialise (Serialise)
+import Control.DeepSeq (NFData)
+import Data.ByteString.Char8 qualified as BS8
+import Data.Map.Strict (Map)
+import Data.Map.Strict qualified as Map
+import Data.MemPack (MemPack (..))
+import Data.String
+import Data.Text (pack, unpack)
+import NoThunks.Class (NoThunks)
+import Ouroboros.Consensus.Node.ProtocolInfo
+import Ouroboros.Consensus.NodeId (NodeId (..))
+import Ouroboros.Consensus.Util.Condense
 
 -- | Mock address
 newtype Addr = Addr String
-  deriving (
-      Show
+  deriving
+    ( Show
     , Eq
     , Ord
     , IsString
@@ -54,7 +54,9 @@ type AddrDist = Map Addr NodeId
 -- | Construct address to node ID mapping
 mkAddrDist :: NumCoreNodes -> AddrDist
 mkAddrDist numCoreNodes =
-    Map.fromList $ zip [ fromString [addr] | addr <- ['a'..] ]
-                       [ CoreId nid
-                       | nid <- enumCoreNodes numCoreNodes
-                       ]
+  Map.fromList $
+    zip
+      [fromString [addr] | addr <- ['a' ..]]
+      [ CoreId nid
+      | nid <- enumCoreNodes numCoreNodes
+      ]
diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block.hs
index 143df4ea50..51c166a6aa 100644
--- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block.hs
+++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block.hs
@@ -22,8 +22,8 @@
 --
 -- None of the definitions in this module depend on, or even refer to, any
 -- specific consensus protocols.
-module Ouroboros.Consensus.Mock.Ledger.Block (
-    BlockQuery (..)
+module Ouroboros.Consensus.Mock.Ledger.Block
+  ( BlockQuery (..)
   , Header (..)
@@ -31,81 +31,91 @@ module Ouroboros.Consensus.Mock.Ledger.Block (
   , SimpleBlock
   , SimpleBlock' (..)
   , SimpleBody (..)
   , SimpleHash
   , SimpleHeader
   , SimpleStdHeader (..)
+    -- * Working with 'SimpleBlock'
   , countSimpleGenTxs
   , matchesSimpleHeader
   , mkSimpleHeader
+    -- * Configuration
   , BlockConfig (..)
   , CodecConfig (..)
   , SimpleLedgerConfig (..)
   , StorageConfig (..)
+    -- * Protocol-specific part
   , MockProtocolSpecific (..)
+    -- * 'UpdateLedger'
   , LedgerState (..)
   , LedgerTables (..)
   , Ticked (..)
   , genesisSimpleLedgerState
   , updateSimpleLedgerState
+    -- * 'ApplyTx' (mempool support)
   , GenTx (..)
   , TxId (..)
   , Validated (..)
   , genTxSize
   , mkSimpleGenTx
+    -- * Crypto
   , SimpleCrypto
   , SimpleMockCrypto
   , SimpleStandardCrypto
+    -- * Serialisation
   , decodeSimpleHeader
   , encodeSimpleHeader
   , simpleBlockBinaryBlockInfo
+    -- * For tests
   , simpleBlockCapacity
   ) where
 
-import           Cardano.Binary (ToCBOR (..))
-import           Cardano.Crypto.Hash (Hash, HashAlgorithm, SHA256, ShortHash)
-import qualified Cardano.Crypto.Hash as Hash
-import qualified Codec.CBOR.Decoding as CBOR
-import qualified Codec.CBOR.Encoding as CBOR
-import           Codec.Serialise (Serialise (..), serialise)
-import           Control.Monad.Except
-import qualified Data.ByteString.Lazy as Lazy
-import           Data.Kind (Type)
-import           Data.Proxy
-import           Data.Typeable
-import           Data.Word
-import           GHC.Generics (Generic)
-import           GHC.TypeNats (KnownNat)
-import           NoThunks.Class (NoThunks (..))
-import           Ouroboros.Consensus.Block
-import           Ouroboros.Consensus.Config
-import           Ouroboros.Consensus.HardFork.Abstract
-import           Ouroboros.Consensus.HardFork.Combinator.PartialConfig
-import qualified Ouroboros.Consensus.HardFork.History as HardFork
-import           Ouroboros.Consensus.HeaderValidation
-import           Ouroboros.Consensus.Ledger.Abstract
-import           Ouroboros.Consensus.Ledger.CommonProtocolParams
-import           Ouroboros.Consensus.Ledger.Extended
-import           Ouroboros.Consensus.Ledger.Inspect
-import           Ouroboros.Consensus.Ledger.Query
-import           Ouroboros.Consensus.Ledger.SupportsMempool
-import           Ouroboros.Consensus.Ledger.SupportsPeerSelection
-import           Ouroboros.Consensus.Ledger.Tables.Utils
-import           Ouroboros.Consensus.Mock.Ledger.Address
-import           Ouroboros.Consensus.Mock.Ledger.State
-import qualified Ouroboros.Consensus.Mock.Ledger.UTxO as Mock
-import           Ouroboros.Consensus.Node.Serialisation
-import           Ouroboros.Consensus.Storage.Common (BinaryBlockInfo (..),
-                     SizeInBytes)
-import           Ouroboros.Consensus.Storage.LedgerDB
-import           Ouroboros.Consensus.Util (ShowProxy (..), hashFromBytesShortE)
-import           Ouroboros.Consensus.Util.Condense
-import           Ouroboros.Consensus.Util.IndexedMemPack
-import           Test.Util.Orphans.Serialise ()
+import Cardano.Binary (ToCBOR (..))
+import Cardano.Crypto.Hash (Hash, HashAlgorithm, SHA256, ShortHash)
+import Cardano.Crypto.Hash qualified as Hash
+import Codec.CBOR.Decoding qualified as CBOR
+import Codec.CBOR.Encoding qualified as CBOR
+import Codec.Serialise (Serialise (..), serialise)
+import Control.Monad.Except
+import Data.ByteString.Lazy qualified as Lazy
+import Data.Kind (Type)
+import Data.Proxy
+import Data.Typeable
+import Data.Word
+import GHC.Generics (Generic)
+import GHC.TypeNats (KnownNat)
+import NoThunks.Class (NoThunks (..))
+import Ouroboros.Consensus.Block
+import Ouroboros.Consensus.Config
+import Ouroboros.Consensus.HardFork.Abstract
+import Ouroboros.Consensus.HardFork.Combinator.PartialConfig
+import Ouroboros.Consensus.HardFork.History qualified as HardFork
+import Ouroboros.Consensus.HeaderValidation
+import Ouroboros.Consensus.Ledger.Abstract
+import Ouroboros.Consensus.Ledger.CommonProtocolParams
+import Ouroboros.Consensus.Ledger.Extended
+import Ouroboros.Consensus.Ledger.Inspect
+import Ouroboros.Consensus.Ledger.Query
+import Ouroboros.Consensus.Ledger.SupportsMempool
+import Ouroboros.Consensus.Ledger.SupportsPeerSelection
+import Ouroboros.Consensus.Ledger.Tables.Utils
+import Ouroboros.Consensus.Mock.Ledger.Address
+import Ouroboros.Consensus.Mock.Ledger.State
+import Ouroboros.Consensus.Mock.Ledger.UTxO qualified as Mock
+import Ouroboros.Consensus.Node.Serialisation
+import Ouroboros.Consensus.Storage.Common
+  ( BinaryBlockInfo (..)
+  , SizeInBytes
+  )
+import Ouroboros.Consensus.Storage.LedgerDB
+import Ouroboros.Consensus.Util (ShowProxy (..), hashFromBytesShortE)
+import Ouroboros.Consensus.Util.Condense
+import Ouroboros.Consensus.Util.IndexedMemPack
+import Test.Util.Orphans.Serialise ()
 
 {-------------------------------------------------------------------------------
   Definition of a block
@@ -114,78 +124,82 @@ import Test.Util.Orphans.Serialise ()
   previous block hash.
 -------------------------------------------------------------------------------}
 
-type SimpleBlock  c ext = SimpleBlock' c ext ext
+type SimpleBlock c ext = SimpleBlock' c ext ext
 type SimpleHeader c ext = Header (SimpleBlock c ext)
 
-data SimpleBlock' c ext ext' = SimpleBlock {
-      simpleHeader :: Header (SimpleBlock' c ext ext')
-    , simpleBody   :: SimpleBody
-    }
+data SimpleBlock' c ext ext' = SimpleBlock
+  { simpleHeader :: Header (SimpleBlock' c ext ext')
+  , simpleBody :: SimpleBody
+  }
   deriving (Generic, Show, Eq)
 
 instance (SimpleCrypto c, Serialise ext') => Serialise (SimpleBlock' c ext ext') where
-  encode (SimpleBlock hdr body) = mconcat [
-      CBOR.encodeListLen 2
+  encode (SimpleBlock hdr body) =
+    mconcat
+      [ CBOR.encodeListLen 2
       , encode hdr
       , encode body
       ]
   decode = do
-      CBOR.decodeListLenOf 2
-      hdr  <- decode
-      body <- decode
-      return (SimpleBlock hdr body)
-
-instance (Typeable c, Typeable ext, Typeable ext')
-  => ShowProxy (SimpleBlock' c ext ext') where
-
-data instance Header (SimpleBlock' c ext ext') = SimpleHeader {
-      -- | The header hash
-      --
-      -- This is the hash of the header itself. This is a bit unpleasant,
-      -- because it makes the hash look self-referential (when computing the
-      -- hash we must ignore the 'simpleHeaderHash' field). However, the benefit
-      -- is that we can give a 'HasHeader' instance that does not require
-      -- a (static) 'Serialise' instance.
-      simpleHeaderHash :: HeaderHash (SimpleBlock' c ext ext')
-
-      -- | Fields required for the 'HasHeader' instance
-    , simpleHeaderStd :: SimpleStdHeader c ext
-
-      -- | Header extension
-      --
-      -- This extension will be required when using 'SimpleBlock' for specific
-      -- consensus protocols.
-    , simpleHeaderExt :: ext'
-    }
+    CBOR.decodeListLenOf 2
+    hdr <- decode
+    body <- decode
+    return (SimpleBlock hdr body)
+
+instance
+  (Typeable c, Typeable ext, Typeable ext') =>
+  ShowProxy (SimpleBlock' c ext ext')
+
+data instance Header (SimpleBlock' c ext ext') = SimpleHeader
+  { simpleHeaderHash :: HeaderHash (SimpleBlock' c ext ext')
+  -- ^ The header hash
+  --
+  -- This is the hash of the header itself. This is a bit unpleasant,
+  -- because it makes the hash look self-referential (when computing the
+  -- hash we must ignore the 'simpleHeaderHash' field). However, the benefit
+  -- is that we can give a 'HasHeader' instance that does not require
+  -- a (static) 'Serialise' instance.
+  , simpleHeaderStd :: SimpleStdHeader c ext
+  -- ^ Fields required for the 'HasHeader' instance
+  , simpleHeaderExt :: ext'
+  -- ^ Header extension
+  --
+  -- This extension will be required when using 'SimpleBlock' for specific
+  -- consensus protocols.
+  }
   deriving (Generic, Show, Eq, NoThunks)
 
-instance (Typeable c, Typeable ext, Typeable ext')
-  => ShowProxy (Header (SimpleBlock' c ext ext')) where
+instance
+  (Typeable c, Typeable ext, Typeable ext') =>
+  ShowProxy (Header (SimpleBlock' c ext ext'))
 
-instance (SimpleCrypto c, Typeable ext, Typeable ext')
-  => GetHeader (SimpleBlock' c ext ext') where
+instance
+  (SimpleCrypto c, Typeable ext, Typeable ext') =>
+  GetHeader (SimpleBlock' c ext ext')
+  where
   getHeader = simpleHeader
   blockMatchesHeader = matchesSimpleHeader
   headerIsEBB = const Nothing
 
-data SimpleStdHeader c ext = SimpleStdHeader {
-      simplePrev     :: ChainHash (SimpleBlock c ext)
-    , simpleSlotNo   :: SlotNo
-    , simpleBlockNo  :: BlockNo
-    , simpleBodyHash :: Hash (SimpleHash c) SimpleBody
-    , simpleBodySize :: SizeInBytes
-    }
-  deriving stock    (Generic, Show, Eq)
-  deriving anyclass (NoThunks)
-
-deriving anyclass instance KnownNat (Hash.SizeHash (SimpleHash c)) =>
+data SimpleStdHeader c ext = SimpleStdHeader
+  { simplePrev :: ChainHash (SimpleBlock c ext)
+  , simpleSlotNo :: SlotNo
+  , simpleBlockNo :: BlockNo
+  , simpleBodyHash :: Hash (SimpleHash c) SimpleBody
+  , simpleBodySize :: SizeInBytes
+  }
+  deriving stock (Generic, Show, Eq)
+  deriving anyclass NoThunks
+
+deriving anyclass instance
+  KnownNat (Hash.SizeHash (SimpleHash c)) =>
   Serialise (SimpleStdHeader c ext)
 
-data SimpleBody = SimpleBody {
-      simpleTxs :: [Mock.Tx]
-    }
+data SimpleBody = SimpleBody
+  { simpleTxs :: [Mock.Tx]
+  }
   deriving (Generic, Show, Eq)
 
 instance Serialise SimpleBody where
@@ -198,33 +212,37 @@ instance Serialise SimpleBody where
 -- | Create a header by hashing the header without hash and adding to the
 -- resulting value.
-mkSimpleHeader :: SimpleCrypto c
-  => (ext' -> CBOR.Encoding)
-  -> SimpleStdHeader c ext
-  -> ext'
-  -> Header (SimpleBlock' c ext ext')
+mkSimpleHeader ::
+  SimpleCrypto c =>
+  (ext' -> CBOR.Encoding) ->
+  SimpleStdHeader c ext ->
+  ext' ->
+  Header (SimpleBlock' c ext ext')
 mkSimpleHeader encodeExt std ext =
-    headerWithoutHash {
-        simpleHeaderHash = Hash.hashWithSerialiser
-                             (encodeSimpleHeader encodeExt)
-                             headerWithoutHash
-      }
-  where
-    headerWithoutHash = SimpleHeader {
-        simpleHeaderHash = error "Serialise instances should ignore hash"
-      , simpleHeaderStd  = std
-      , simpleHeaderExt  = ext
+  headerWithoutHash
+    { simpleHeaderHash =
+        Hash.hashWithSerialiser
+          (encodeSimpleHeader encodeExt)
+          headerWithoutHash
+    }
+ where
+  headerWithoutHash =
+    SimpleHeader
+      { simpleHeaderHash = error "Serialise instances should ignore hash"
+      , simpleHeaderStd = std
+      , simpleHeaderExt = ext
       }
 
 -- | Check whether the block matches the header
-matchesSimpleHeader :: SimpleCrypto c
-  => Header (SimpleBlock' c ext ext')
-  -> SimpleBlock' c ext ext''
-  -> Bool
-matchesSimpleHeader SimpleHeader{..} SimpleBlock {..} =
-    simpleBodyHash == Hash.hashWithSerialiser toCBOR simpleBody
-  where
-    SimpleStdHeader{..} = simpleHeaderStd
+matchesSimpleHeader ::
+  SimpleCrypto c =>
+  Header (SimpleBlock' c ext ext') ->
+  SimpleBlock' c ext ext'' ->
+  Bool
+matchesSimpleHeader SimpleHeader{..} SimpleBlock{..} =
+  simpleBodyHash == Hash.hashWithSerialiser toCBOR simpleBody
+ where
+  SimpleStdHeader{..} = simpleHeaderStd
 
 countSimpleGenTxs :: SimpleBlock c ext -> Word64
 countSimpleGenTxs = fromIntegral . length . extractTxs
@@ -233,11 +251,14 @@ countSimpleGenTxs = fromIntegral . length . extractTxs
   HasHeader instance for SimpleHeader
 -------------------------------------------------------------------------------}
 
-instance (SimpleCrypto c, Typeable ext, Typeable ext')
-  => HasHeader (Header (SimpleBlock' c ext ext')) where
-  getHeaderFields hdr = HeaderFields {
-      headerFieldHash    = simpleHeaderHash hdr
-    , headerFieldSlot    = simpleSlotNo . simpleHeaderStd $ hdr
+instance
+  (SimpleCrypto c, Typeable ext, Typeable ext') =>
+  HasHeader (Header (SimpleBlock' c ext ext'))
+  where
+  getHeaderFields hdr =
+    HeaderFields
+      { headerFieldHash = simpleHeaderHash hdr
+      , headerFieldSlot = simpleSlotNo . simpleHeaderStd $ hdr
       , headerFieldBlockNo = simpleBlockNo . simpleHeaderStd $ hdr
       }
@@ -245,23 +266,27 @@ instance (SimpleCrypto c, Typeable ext, Typeable ext')
   HasHeader instance for SimpleBlock
 -------------------------------------------------------------------------------}
 
-type instance HeaderHash (SimpleBlock' c ext ext') =
-  Hash (SimpleHash c) (Header (SimpleBlock' c ext ext'))
+type instance
+  HeaderHash (SimpleBlock' c ext ext') =
+    Hash (SimpleHash c) (Header (SimpleBlock' c ext ext'))
 
-instance (SimpleCrypto c, Typeable ext, Typeable ext')
-  => HasHeader (SimpleBlock' c ext ext') where
+instance
+  (SimpleCrypto c, Typeable ext, Typeable ext') =>
+  HasHeader (SimpleBlock' c ext ext')
+  where
   getHeaderFields = getBlockHeaderFields
 
 instance (SimpleCrypto c, Typeable ext) => GetPrevHash (SimpleBlock c ext) where
   headerPrevHash = simplePrev . simpleHeaderStd
 
-instance (SimpleCrypto c, Typeable ext, Typeable ext')
-  => StandardHash (SimpleBlock' c ext ext')
+instance
+  (SimpleCrypto c, Typeable ext, Typeable ext') =>
+  StandardHash (SimpleBlock' c ext ext')
 
 instance SimpleCrypto c => ConvertRawHash (SimpleBlock' c ext ext') where
-  toShortRawHash   _ = Hash.hashToBytesShort
+  toShortRawHash _ = Hash.hashToBytesShort
   fromShortRawHash _ = hashFromBytesShortE
-  hashSize         _ = fromIntegral $ Hash.sizeHash (Proxy @(SimpleHash c))
+  hashSize _ = fromIntegral $ Hash.sizeHash (Proxy @(SimpleHash c))
 
 {-------------------------------------------------------------------------------
   HasMockTxs instance
@@ -278,37 +303,40 @@ instance Mock.HasMockTxs SimpleBody where
 -------------------------------------------------------------------------------}
 
 instance (SimpleCrypto c, Typeable ext) => HasAnnTip (SimpleBlock c ext)
-  -- Use defaults
+
+-- Use defaults
 
 instance (SimpleCrypto c, Typeable ext) => BasicEnvelopeValidation (SimpleBlock c ext)
-  -- Use defaults
+
+-- Use defaults
 
 instance (SimpleCrypto c, Typeable ext) => ValidateEnvelope (SimpleBlock c ext)
-  -- Use defaults
+
+-- Use defaults
 
 {-------------------------------------------------------------------------------
   Block config
 -------------------------------------------------------------------------------}
 
 data instance BlockConfig (SimpleBlock c ext) = SimpleBlockConfig
-  deriving stock    (Generic)
-  deriving anyclass (NoThunks)
+  deriving stock Generic
+  deriving anyclass NoThunks
 
 {-------------------------------------------------------------------------------
   Codec config
 -------------------------------------------------------------------------------}
 
 data instance CodecConfig (SimpleBlock c ext) = SimpleCodecConfig
-  deriving stock    (Generic)
-  deriving anyclass (NoThunks)
+  deriving stock Generic
+  deriving anyclass NoThunks
 
 {-------------------------------------------------------------------------------
   Storage config
 -------------------------------------------------------------------------------}
 
 data instance StorageConfig (SimpleBlock c ext) = SimpleStorageConfig SecurityParam
-  deriving stock    (Generic)
-  deriving anyclass (NoThunks)
+  deriving stock Generic
+  deriving anyclass NoThunks
 
 {-------------------------------------------------------------------------------
   Hard fork history
@@ -322,42 +350,46 @@ instance HasHardForkHistory (SimpleBlock c ext) where
   Protocol specific constraints
 -------------------------------------------------------------------------------}
 
-class ( SimpleCrypto c
-      , Typeable ext
-      , Show (MockLedgerConfig c ext)
-      , NoThunks (MockLedgerConfig c ext)
-      , Serialise (MockLedgerConfig c ext)
-      ) => MockProtocolSpecific c ext where
-  type family MockLedgerConfig c ext :: Type
+class
+  ( SimpleCrypto c
+  , Typeable ext
+  , Show (MockLedgerConfig c ext)
+  , NoThunks (MockLedgerConfig c ext)
+  , Serialise (MockLedgerConfig c ext)
+  ) =>
+  MockProtocolSpecific c ext
+  where
+  type MockLedgerConfig c ext :: Type
 
 {-------------------------------------------------------------------------------
   Update the ledger
 -------------------------------------------------------------------------------}
 
-data SimpleLedgerConfig c ext = SimpleLedgerConfig {
-      -- | Config required by the various kinds of mock block (PFT, Praos, ..)
-      simpleMockLedgerConfig :: !(MockLedgerConfig c ext)
-
-      -- | Era parameters
-    , simpleLedgerEraParams  :: !HardFork.EraParams
-
-    , simpleLedgerMockConfig :: !MockConfig
-    }
-  deriving (Generic)
+data SimpleLedgerConfig c ext = SimpleLedgerConfig
+  { simpleMockLedgerConfig :: !(MockLedgerConfig c ext)
+  -- ^ Config required by the various kinds of mock block (PFT, Praos, ..)
+  , simpleLedgerEraParams :: !HardFork.EraParams
+  -- ^ Era parameters
+  , simpleLedgerMockConfig :: !MockConfig
+  }
+  deriving Generic
 
 deriving instance Show (MockLedgerConfig c ext) => Show (SimpleLedgerConfig c ext)
-deriving instance Eq (MockLedgerConfig c ext) => Eq (SimpleLedgerConfig c ext)
-deriving instance NoThunks (MockLedgerConfig c ext)
-  => NoThunks (SimpleLedgerConfig c ext)
-deriving instance Serialise (MockLedgerConfig c ext)
-  => Serialise (SimpleLedgerConfig c ext)
+deriving instance Eq (MockLedgerConfig c ext) => Eq (SimpleLedgerConfig c ext)
+deriving instance
+  NoThunks (MockLedgerConfig c ext) =>
+  NoThunks (SimpleLedgerConfig c ext)
+deriving instance
+  Serialise (MockLedgerConfig c ext) =>
+  Serialise (SimpleLedgerConfig c ext)
 
 type instance LedgerCfg (LedgerState (SimpleBlock c ext)) = SimpleLedgerConfig c ext
 
 instance MockProtocolSpecific c ext => HasPartialLedgerConfig (SimpleBlock c ext)
 
-instance (Serialise (MockLedgerConfig c ext))
-  => SerialiseNodeToClient (SimpleBlock c ext) (SimpleLedgerConfig c ext)
+instance
+  Serialise (MockLedgerConfig c ext) =>
+  SerialiseNodeToClient (SimpleBlock c ext) (SimpleLedgerConfig c ext)
 
 instance GetTip (LedgerState (SimpleBlock c ext)) where
   getTip (SimpleLedgerState st _) = castPoint $ mockTip st
@@ -365,25 +397,33 @@ instance GetTip (LedgerState (SimpleBlock c ext)) where
 instance GetTip (Ticked (LedgerState (SimpleBlock c ext))) where
   getTip = castPoint . getTip . getTickedSimpleLedgerState
 
-instance MockProtocolSpecific c ext
-  => IsLedger (LedgerState (SimpleBlock c ext)) where
+instance
+  MockProtocolSpecific c ext =>
+  IsLedger (LedgerState (SimpleBlock c ext))
+  where
   type LedgerErr (LedgerState (SimpleBlock c ext)) = MockError (SimpleBlock c ext)
-  type AuxLedgerEvent (LedgerState (SimpleBlock c ext)) = VoidLedgerEvent (LedgerState (SimpleBlock c ext))
+  type
+    AuxLedgerEvent (LedgerState (SimpleBlock c ext)) =
+      VoidLedgerEvent (LedgerState (SimpleBlock c ext))
 
-  applyChainTickLedgerResult _ _ _ = pureLedgerResult
-                                   . TickedSimpleLedgerState
-                                   . flip SimpleLedgerState emptyLedgerTables
-                                   . simpleLedgerState
+  applyChainTickLedgerResult _ _ _ =
+    pureLedgerResult
+      . TickedSimpleLedgerState
+      . flip SimpleLedgerState emptyLedgerTables
+      . simpleLedgerState
 
-instance MockProtocolSpecific c ext
-  => ApplyBlock (LedgerState (SimpleBlock c ext)) (SimpleBlock c ext) where
+instance
+  MockProtocolSpecific c ext =>
+  ApplyBlock (LedgerState (SimpleBlock c ext)) (SimpleBlock c ext)
+  where
   applyBlockLedgerResultWithValidation _validation _events a blk st =
-    fmap ( pureLedgerResult
-         . trackingToDiffs
-         . calculateDifference st
-         . unstowLedgerTables
-         )
+    fmap
+      ( pureLedgerResult
+          . trackingToDiffs
+          . calculateDifference st
+          . unstowLedgerTables
+      )
       . updateSimpleLedgerState a blk
       . TickedSimpleLedgerState
      . stowLedgerTables
@@ -396,76 +436,85 @@ instance MockProtocolSpecific c ext
   getBlockKeySets SimpleBlock{simpleBody = SimpleBody txs} =
     LedgerTables $ KeysMK $ Mock.txIns txs
 
-data instance LedgerState (SimpleBlock c ext) mk = SimpleLedgerState {
-      simpleLedgerState  :: MockState (SimpleBlock c ext)
-    , simpleLedgerTables :: LedgerTables (LedgerState (SimpleBlock c ext)) mk
-    }
-  deriving stock (Generic)
-
-deriving instance ( SimpleCrypto c
-                  , Typeable ext
-                  , Eq (mk Mock.TxIn Mock.TxOut)
-                  )
-  => Eq (LedgerState (SimpleBlock c ext) mk)
-deriving instance ( SimpleCrypto c
-                  , Typeable ext
-                  , NoThunks (mk Mock.TxIn Mock.TxOut)
-                  )
-  => NoThunks (LedgerState (SimpleBlock c ext) mk)
-deriving instance ( SimpleCrypto c
-                  , Typeable ext
-                  , Show (mk Mock.TxIn Mock.TxOut)
-                  )
-  => Show (LedgerState (SimpleBlock c ext) mk)
+data instance LedgerState (SimpleBlock c ext) mk = SimpleLedgerState
+  { simpleLedgerState :: MockState (SimpleBlock c ext)
+  , simpleLedgerTables :: LedgerTables (LedgerState (SimpleBlock c ext)) mk
+  }
+  deriving stock Generic
+
+deriving instance
+  ( SimpleCrypto c
+  , Typeable ext
+  , Eq (mk Mock.TxIn Mock.TxOut)
+  ) =>
+  Eq (LedgerState (SimpleBlock c ext) mk)
+deriving instance
+  ( SimpleCrypto c
+  , Typeable ext
+  , NoThunks (mk Mock.TxIn Mock.TxOut)
+  ) =>
+  NoThunks (LedgerState (SimpleBlock c ext) mk)
+deriving instance
+  ( SimpleCrypto c
+  , Typeable ext
+  , Show (mk Mock.TxIn Mock.TxOut)
  ) =>
+  Show (LedgerState (SimpleBlock c ext) mk)
 
 -- Ticking has no effect on the simple ledger state
-newtype instance Ticked (LedgerState (SimpleBlock c ext)) mk = TickedSimpleLedgerState {
-      getTickedSimpleLedgerState :: LedgerState (SimpleBlock c ext) mk
-    }
-  deriving (Generic)
-
-deriving anyclass instance ( SimpleCrypto c
-                           , Typeable ext
-                           )
-  => NoThunks (Ticked (LedgerState (SimpleBlock c ext)) TrackingMK)
-deriving instance ( SimpleCrypto c
-                  , Typeable ext
-                  , Show (LedgerState (SimpleBlock c ext) mk)
-                  )
-  => Show (Ticked (LedgerState (SimpleBlock c ext)) mk)
+newtype instance Ticked (LedgerState (SimpleBlock c ext)) mk = TickedSimpleLedgerState
+  { getTickedSimpleLedgerState :: LedgerState (SimpleBlock c ext) mk
+  }
+  deriving Generic
+
+deriving anyclass instance
+  ( SimpleCrypto c
+  , Typeable ext
+  ) =>
+  NoThunks (Ticked (LedgerState (SimpleBlock c ext)) TrackingMK)
+deriving instance
+  ( SimpleCrypto c
+  , Typeable ext
+  , Show (LedgerState (SimpleBlock c ext) mk)
+  ) =>
+  Show (Ticked (LedgerState (SimpleBlock c ext)) mk)
 
 instance MockProtocolSpecific c ext => UpdateLedger (SimpleBlock c ext)
 
-updateSimpleLedgerState :: (SimpleCrypto c, Typeable ext)
-  => LedgerConfig (SimpleBlock c ext)
-  -> SimpleBlock c ext
-  -> TickedLedgerState (SimpleBlock c ext) mk1
-  -> Except (MockError (SimpleBlock c ext))
-            (LedgerState (SimpleBlock c ext) mk1)
+updateSimpleLedgerState ::
+  (SimpleCrypto c, Typeable ext) =>
+  LedgerConfig (SimpleBlock c ext) ->
+  SimpleBlock c ext ->
+  TickedLedgerState (SimpleBlock c ext) mk1 ->
+  Except
+    (MockError (SimpleBlock c ext))
+    (LedgerState (SimpleBlock c ext) mk1)
 updateSimpleLedgerState cfg b (TickedSimpleLedgerState (SimpleLedgerState st tbs)) =
-    flip SimpleLedgerState tbs <$> updateMockState (simpleLedgerMockConfig cfg) b st
-
-updateSimpleUTxO :: Mock.HasMockTxs a
-  => LedgerConfig (SimpleBlock c ext)
-  -> SlotNo
-  -> a
-  -> TickedLedgerState (SimpleBlock c ext) EmptyMK
-  -> Except (MockError (SimpleBlock c ext))
-            (TickedLedgerState (SimpleBlock c ext) EmptyMK)
+  flip SimpleLedgerState tbs <$> updateMockState (simpleLedgerMockConfig cfg) b st
+
+updateSimpleUTxO ::
+  Mock.HasMockTxs a =>
+  LedgerConfig (SimpleBlock c ext) ->
+  SlotNo ->
+  a ->
+  TickedLedgerState (SimpleBlock c ext) EmptyMK ->
+  Except
+    (MockError (SimpleBlock c ext))
+    (TickedLedgerState (SimpleBlock c ext) EmptyMK)
 updateSimpleUTxO cfg slot x (TickedSimpleLedgerState (SimpleLedgerState st tbs)) =
-    TickedSimpleLedgerState . flip SimpleLedgerState tbs
-      <$> updateMockUTxO (simpleLedgerMockConfig cfg) slot x st
+  TickedSimpleLedgerState . flip SimpleLedgerState tbs
+    <$> updateMockUTxO (simpleLedgerMockConfig cfg) slot x st
 
 genesisSimpleLedgerState :: AddrDist -> LedgerState (SimpleBlock c ext) ValuesMK
 genesisSimpleLedgerState =
-    unstowLedgerTables
-  . flip SimpleLedgerState emptyLedgerTables
-  . genesisMockState
+  unstowLedgerTables
+    . flip SimpleLedgerState emptyLedgerTables
    . genesisMockState
 
 -- | Dummy values
 instance MockProtocolSpecific c ext => CommonProtocolParams (SimpleBlock c ext) where
   maxHeaderSize = const 2000000
-  maxTxSize     = const 2000000
+  maxTxSize = const 2000000
 
 instance LedgerSupportsPeerSelection (SimpleBlock c ext) where
   getPeers = const []
@@ -474,7 +523,7 @@ instance LedgerSupportsPeerSelection (SimpleBlock c ext) where
   LedgerTables
 -------------------------------------------------------------------------------}
 
-type instance TxIn  (LedgerState (SimpleBlock c ext)) = Mock.TxIn
+type instance TxIn (LedgerState (SimpleBlock c ext)) = Mock.TxIn
 type instance TxOut (LedgerState (SimpleBlock c ext)) = Mock.TxOut
 
 instance CanUpgradeLedgerTables (LedgerState (SimpleBlock c ext)) where
@@ -495,34 +544,35 @@ instance HasLedgerTables (LedgerState (SimpleBlock c ext)) where
   withLedgerTables (SimpleLedgerState s _) = SimpleLedgerState s
 
 instance HasLedgerTables (Ticked (LedgerState (SimpleBlock c ext))) where
-  projectLedgerTables = castLedgerTables
-                      . simpleLedgerTables
-                      . getTickedSimpleLedgerState
-  withLedgerTables (TickedSimpleLedgerState st) tables =
-    TickedSimpleLedgerState $ withLedgerTables st $ castLedgerTables tables
+  projectLedgerTables =
+    castLedgerTables
+      . simpleLedgerTables
+      . getTickedSimpleLedgerState
+  withLedgerTables (TickedSimpleLedgerState st) tables =
+    TickedSimpleLedgerState $ withLedgerTables st $ castLedgerTables tables
 
 instance CanStowLedgerTables (LedgerState (SimpleBlock c ext)) where
   stowLedgerTables st =
-    SimpleLedgerState {
-        simpleLedgerState = simpleLedgerState { mockUtxo = m }
+    SimpleLedgerState
+      { simpleLedgerState = simpleLedgerState{mockUtxo = m}
       , simpleLedgerTables = emptyLedgerTables
      }
-    where
-      SimpleLedgerState {
-          simpleLedgerState
-        , simpleLedgerTables = LedgerTables (ValuesMK m)
-        } = st
+   where
+    SimpleLedgerState
+      { simpleLedgerState
+      , simpleLedgerTables = LedgerTables (ValuesMK m)
+      } = st
 
   unstowLedgerTables st =
-    SimpleLedgerState {
-        simpleLedgerState = simpleLedgerState { mockUtxo = mempty }
+    SimpleLedgerState
+      { simpleLedgerState = simpleLedgerState{mockUtxo = mempty}
      , simpleLedgerTables = LedgerTables (ValuesMK (mockUtxo simpleLedgerState))
      }
-    where
-      SimpleLedgerState {
-          simpleLedgerState
-        } = st
+   where
+    SimpleLedgerState
+      { simpleLedgerState
+      } = st
 
 deriving newtype instance CanStowLedgerTables (Ticked (LedgerState (SimpleBlock c ext)))
@@ -530,34 +580,41 @@ deriving newtype instance CanStowLedgerTables (Ticked (LedgerState (SimpleBlock
   Support for the mempool
 -------------------------------------------------------------------------------}
 
-data instance GenTx (SimpleBlock c ext) = SimpleGenTx {
-      simpleGenTx   :: !Mock.Tx
-    , simpleGenTxId :: !Mock.TxId
-    }
-  deriving stock    (Generic, Eq, Ord)
-  deriving anyclass (Serialise)
+data instance GenTx (SimpleBlock c ext) = SimpleGenTx
+  { simpleGenTx :: !Mock.Tx
+  , simpleGenTxId :: !Mock.TxId
+  }
+  deriving stock (Generic, Eq, Ord)
+  deriving anyclass Serialise
 
-newtype instance Validated (GenTx (SimpleBlock c ext)) = ValidatedSimpleGenTx {
-      forgetValidatedSimpleGenTx :: GenTx (SimpleBlock c ext)
-    }
+newtype instance Validated (GenTx (SimpleBlock c ext)) = ValidatedSimpleGenTx
+  { forgetValidatedSimpleGenTx :: GenTx (SimpleBlock c ext)
+  }
   deriving newtype (Generic, Eq, Ord)
 
-instance (Typeable c, Typeable ext)
-  => ShowProxy (GenTx (SimpleBlock c ext)) where
+instance
+  (Typeable c, Typeable ext) =>
+  ShowProxy (GenTx (SimpleBlock c ext))
 
 type instance ApplyTxErr (SimpleBlock c ext) = MockError (SimpleBlock c ext)
 
-instance MockProtocolSpecific c ext
-  => LedgerSupportsMempool (SimpleBlock c ext) where
+instance
+  MockProtocolSpecific c ext =>
+  LedgerSupportsMempool (SimpleBlock c ext)
+  where
   applyTx cfg _wti slot tx st = do
-      let st' = stowLedgerTables st
-      st'' <- unstowLedgerTables
-          <$> updateSimpleUTxO cfg slot tx st'
-      return ( trackingToDiffs $ calculateDifference st st''
-             , ValidatedSimpleGenTx tx )
-
-  reapplyTx _ cfg slot vtx st = attachAndApplyDiffs st . fst
-      <$> applyTx cfg DoNotIntervene slot (forgetValidatedSimpleGenTx vtx) st
+    let st' = stowLedgerTables st
+    st'' <-
+      unstowLedgerTables
+        <$> updateSimpleUTxO cfg slot tx st'
+    return
+      ( trackingToDiffs $ calculateDifference st st''
+      , ValidatedSimpleGenTx tx
+      )
+
+  reapplyTx _ cfg slot vtx st =
+    attachAndApplyDiffs st . fst
+      <$> applyTx cfg DoNotIntervene slot (forgetValidatedSimpleGenTx vtx) st
 
   txForgetValidated = forgetValidatedSimpleGenTx
@@ -574,21 +631,22 @@ instance TxLimits (SimpleBlock c ext) where
   blockCapacityTxMeasure _cfg _st = IgnoringOverflow simpleBlockCapacity
 
   txMeasure cfg _st =
-      fmap IgnoringOverflow
+    fmap IgnoringOverflow
      . checkTxSize (simpleLedgerMockConfig cfg)
      . simpleGenTx
 
 simpleBlockCapacity :: ByteSize32
 simpleBlockCapacity = ByteSize32 512
 
-newtype instance TxId (GenTx (SimpleBlock c ext)) = SimpleGenTxId {
-      unSimpleGenTxId :: Mock.TxId
-    }
-  deriving stock (Generic)
+newtype instance TxId (GenTx (SimpleBlock c ext)) = SimpleGenTxId
+  { unSimpleGenTxId :: Mock.TxId
+  }
+  deriving stock Generic
   deriving newtype (Show, Eq, Ord, Serialise, NoThunks)
 
-instance (Typeable c, Typeable ext)
-  => ShowProxy (TxId (GenTx (SimpleBlock c ext))) where
+instance
+  (Typeable c, Typeable ext) =>
+  ShowProxy (TxId (GenTx (SimpleBlock c ext)))
 
 instance HasTxId (GenTx (SimpleBlock c ext)) where
   txId = SimpleGenTxId . simpleGenTxId
@@ -606,20 +664,21 @@ instance Mock.HasMockTxs (GenTx (SimpleBlock p c)) where
   getMockTxs = Mock.getMockTxs . simpleGenTx
 
 instance Condense (GenTx (SimpleBlock p c)) where
-    condense = condense . simpleGenTx
+  condense = condense . simpleGenTx
 
 instance Show (GenTx (SimpleBlock p c)) where
-    show = show . simpleGenTx
+  show = show . simpleGenTx
 
 instance Show (Validated (GenTx (SimpleBlock p c))) where
-    show = show . forgetValidatedSimpleGenTx
+  show = show . forgetValidatedSimpleGenTx
 
 instance Condense (GenTxId (SimpleBlock p c)) where
-    condense = condense . unSimpleGenTxId
+  condense = condense . unSimpleGenTxId
 
 mkSimpleGenTx :: Mock.Tx -> GenTx (SimpleBlock c ext)
-mkSimpleGenTx tx = SimpleGenTx
-    { simpleGenTx   = tx
+mkSimpleGenTx tx =
+  SimpleGenTx
+    { simpleGenTx = tx
    , simpleGenTxId = Hash.hashWithSerialiser toCBOR tx
    }
 
@@ -631,11 +690,11 @@ genTxSize = txSize . simpleGenTx
 -------------------------------------------------------------------------------}
 
 data instance BlockQuery (SimpleBlock c ext) fp result where
-    QueryLedgerTip :: BlockQuery (SimpleBlock c ext) QFNoTables (Point (SimpleBlock c ext))
+  QueryLedgerTip :: BlockQuery (SimpleBlock c ext) QFNoTables (Point (SimpleBlock c ext))
 
 instance MockProtocolSpecific c ext => BlockSupportsLedgerQuery (SimpleBlock c ext) where
   answerPureBlockQuery _cfg QueryLedgerTip =
-      castPoint
+    castPoint
      . ledgerTipPoint
      . ledgerState
   answerBlockQueryLookup _cfg q = case q of {}
@@ -647,26 +706,33 @@ instance SameDepIndex2 (BlockQuery (SimpleBlock c ext)) where
 
 deriving instance Show (BlockQuery (SimpleBlock c ext) fp result)
 
-instance (Typeable c, Typeable ext)
-  => ShowProxy (BlockQuery (SimpleBlock c ext)) where
+instance
+  (Typeable c, Typeable ext) =>
+  ShowProxy (BlockQuery (SimpleBlock c ext))
 
-instance (SimpleCrypto c, Typeable ext)
-  => ShowQuery (BlockQuery (SimpleBlock c ext) fp) where
+instance
+  (SimpleCrypto c, Typeable ext) =>
+  ShowQuery (BlockQuery (SimpleBlock c ext) fp)
+  where
   showResult QueryLedgerTip = show
 
 {-------------------------------------------------------------------------------
   Inspection
 -------------------------------------------------------------------------------}
 
-instance InspectLedger (SimpleBlock c ext) where
-  -- Use defaults
+instance InspectLedger (SimpleBlock c ext)
+
+-- Use defaults
 
 {-------------------------------------------------------------------------------
  Crypto needed for simple blocks
 -------------------------------------------------------------------------------}
 
-class (KnownNat (Hash.SizeHash (SimpleHash c)), HashAlgorithm (SimpleHash c), Typeable c) => SimpleCrypto c where
-  type family SimpleHash c :: Type
+class
+  (KnownNat (Hash.SizeHash (SimpleHash c)), HashAlgorithm (SimpleHash c), Typeable c) =>
+  SimpleCrypto c
+  where
+  type SimpleHash c :: Type
 
 data SimpleStandardCrypto
 data SimpleMockCrypto
@@ -682,8 +748,9 @@ instance SimpleCrypto SimpleMockCrypto where
 -------------------------------------------------------------------------------}
 
 instance Condense ext' => Condense (Header (SimpleBlock' c ext ext')) where
-  condense SimpleHeader{..} = mconcat [
-      "("
+  condense SimpleHeader{..} =
+    mconcat
+      [ "("
      , condense simplePrev
      , "->"
     , condense simpleHeaderHash
@@ -693,12 +760,13 @@ instance Condense ext' => Condense (Header (SimpleBlock' c ext ext')) where
     , condense simpleHeaderExt
     , ")"
     ]
-    where
-      SimpleStdHeader{..} = simpleHeaderStd
+   where
+    SimpleStdHeader{..} = simpleHeaderStd
 
 instance Condense ext' => Condense (SimpleBlock' c ext ext') where
-  condense SimpleBlock{..} = mconcat [
-      "("
+  condense SimpleBlock{..} =
+    mconcat
+      [ "("
     , condense simplePrev
    , "->"
    , condense simpleHeaderHash
@@ -710,10 +778,10 @@ instance Condense ext' => Condense (SimpleBlock' c ext ext') where
    , condense simpleTxs
    , ")"
    ]
-    where
-      SimpleHeader{..}    = simpleHeader
-      SimpleStdHeader{..} = simpleHeaderStd
-      SimpleBody{..}      = simpleBody
+   where
+    SimpleHeader{..} = simpleHeader
+    SimpleStdHeader{..} = simpleHeaderStd
+    SimpleBody{..} = simpleBody
 
 {-------------------------------------------------------------------------------
  Serialisation
@@ -722,34 +790,41 @@ instance Condense ext' => Condense (SimpleBlock' c ext ext') where
 instance ToCBOR SimpleBody where
   toCBOR = encode
 
-encodeSimpleHeader :: KnownNat (Hash.SizeHash (SimpleHash c))
-  => (ext' -> CBOR.Encoding)
-  -> Header (SimpleBlock' c ext ext')
-  -> CBOR.Encoding
+encodeSimpleHeader ::
+  KnownNat (Hash.SizeHash (SimpleHash c)) =>
+  (ext' -> CBOR.Encoding) ->
+  Header (SimpleBlock' c ext ext') ->
+  CBOR.Encoding
-encodeSimpleHeader encodeExt SimpleHeader{..} = mconcat [
-      CBOR.encodeListLen 2
+encodeSimpleHeader encodeExt SimpleHeader{..} =
+  mconcat
+    [ CBOR.encodeListLen 2
    , encode simpleHeaderStd
    , encodeExt simpleHeaderExt
    ]
 
-decodeSimpleHeader :: SimpleCrypto c
-  => (ext' -> CBOR.Encoding)
-  -> (forall s.
CBOR.Decoder s ext') - -> forall s. CBOR.Decoder s (Header (SimpleBlock' c ext ext')) +decodeSimpleHeader :: + SimpleCrypto c => + (ext' -> CBOR.Encoding) -> + (forall s. CBOR.Decoder s ext') -> + forall s. + CBOR.Decoder s (Header (SimpleBlock' c ext ext')) decodeSimpleHeader encodeExt decodeExt = do - CBOR.decodeListLenOf 2 - mkSimpleHeader encodeExt <$> decode <*> decodeExt + CBOR.decodeListLenOf 2 + mkSimpleHeader encodeExt <$> decode <*> decodeExt -- | Custom 'Serialise' instance that doesn't serialise the hash -instance (SimpleCrypto c, Serialise ext') - => Serialise (Header (SimpleBlock' c ext ext')) where +instance + (SimpleCrypto c, Serialise ext') => + Serialise (Header (SimpleBlock' c ext ext')) + where encode = encodeSimpleHeader encode decode = decodeSimpleHeader encode decode simpleBlockBinaryBlockInfo :: - (SimpleCrypto c, Serialise ext', Typeable ext, Typeable ext') - => SimpleBlock' c ext ext' -> BinaryBlockInfo -simpleBlockBinaryBlockInfo b = BinaryBlockInfo + (SimpleCrypto c, Serialise ext', Typeable ext, Typeable ext') => + SimpleBlock' c ext ext' -> BinaryBlockInfo +simpleBlockBinaryBlockInfo b = + BinaryBlockInfo { headerOffset = 1 -- For the 'encodeListLen' - , headerSize = fromIntegral $ Lazy.length $ serialise (getHeader b) + , headerSize = fromIntegral $ Lazy.length $ serialise (getHeader b) } diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/BFT.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/BFT.hs index 1aa7443ede..084b5eccac 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/BFT.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/BFT.hs @@ -7,37 +7,36 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Mock.Ledger.Block.BFT ( - SignedSimpleBft (..) 
+module Ouroboros.Consensus.Mock.Ledger.Block.BFT + ( SignedSimpleBft (..) , SimpleBftBlock , SimpleBftExt (..) , SimpleBftHeader , forgeBftExt ) where -import Cardano.Binary (ToCBOR (..)) -import Cardano.Crypto.DSIGN -import Cardano.Crypto.Util -import Codec.Serialise (Serialise (..), serialise) -import qualified Data.ByteString.Lazy as BSL -import Data.Typeable (Typeable) -import Data.Void (Void) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Mock.Ledger.Block -import Ouroboros.Consensus.Mock.Ledger.Forge -import Ouroboros.Consensus.Mock.Node.Abstract -import Ouroboros.Consensus.Protocol.BFT -import Ouroboros.Consensus.Protocol.Signed -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.Condense +import Cardano.Binary (ToCBOR (..)) +import Cardano.Crypto.DSIGN +import Cardano.Crypto.Util +import Codec.Serialise (Serialise (..), serialise) +import Data.ByteString.Lazy qualified as BSL +import Data.Typeable (Typeable) +import Data.Void (Void) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Mock.Ledger.Block +import Ouroboros.Consensus.Mock.Ledger.Forge +import Ouroboros.Consensus.Mock.Node.Abstract +import Ouroboros.Consensus.Protocol.BFT +import Ouroboros.Consensus.Protocol.Signed +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- Instantiate the @ext@ to suit BFT @@ -53,17 +52,17 @@ type SimpleBftBlock c c' = SimpleBlock c (SimpleBftExt c c') type SimpleBftHeader c c' = SimpleHeader c (SimpleBftExt c c') -- | Block 
extension required for BFT -newtype SimpleBftExt c c' = SimpleBftExt { - simpleBftExt :: BftFields c' (SignedSimpleBft c c') - } - deriving stock (Show, Eq) +newtype SimpleBftExt c c' = SimpleBftExt + { simpleBftExt :: BftFields c' (SignedSimpleBft c c') + } + deriving stock (Show, Eq) deriving newtype (Condense, NoThunks) -- | Part of the block that gets signed -data SignedSimpleBft c c' = SignedSimpleBft { - signedSimpleBft :: SimpleStdHeader c (SimpleBftExt c c') - } - deriving (Generic) +data SignedSimpleBft c c' = SignedSimpleBft + { signedSimpleBft :: SimpleStdHeader c (SimpleBftExt c c') + } + deriving Generic type instance BlockProtocol (SimpleBftBlock c c') = Bft c' @@ -75,8 +74,10 @@ _simpleBFtHeader = simpleHeader Customization of the generic infrastructure -------------------------------------------------------------------------------} -instance (SimpleCrypto c, Typeable c') - => MockProtocolSpecific c (SimpleBftExt c c') where +instance + (SimpleCrypto c, Typeable c') => + MockProtocolSpecific c (SimpleBftExt c c') + where type MockLedgerConfig c (SimpleBftExt c c') = () {------------------------------------------------------------------------------- @@ -88,23 +89,32 @@ type instance Signed (SimpleBftHeader c c') = SignedSimpleBft c c' instance SignedHeader (SimpleBftHeader c c') where headerSigned = SignedSimpleBft . simpleHeaderStd -instance ( SimpleCrypto c - , BftCrypto c' - ) => RunMockBlock c (SimpleBftExt c c') where +instance + ( SimpleCrypto c + , BftCrypto c' + ) => + RunMockBlock c (SimpleBftExt c c') + where mockNetworkMagic = const constructMockNetworkMagic -instance ( SimpleCrypto c - , BftCrypto c' - , Signable (BftDSIGN c') (SignedSimpleBft c c') - ) => BlockSupportsProtocol (SimpleBftBlock c c') where +instance + ( SimpleCrypto c + , BftCrypto c' + , Signable (BftDSIGN c') (SignedSimpleBft c c') + ) => + BlockSupportsProtocol (SimpleBftBlock c c') + where validateView _ = bftValidateView (simpleBftExt . 
simpleHeaderExt) -instance ( SimpleCrypto c - , BftCrypto c' - , Signable (BftDSIGN c') (SignedSimpleBft c c') - ) => LedgerSupportsProtocol (SimpleBftBlock c c') where - protocolLedgerView _ _ = () - ledgerViewForecastAt _ = trivialForecast +instance + ( SimpleCrypto c + , BftCrypto c' + , Signable (BftDSIGN c') (SignedSimpleBft c c') + ) => + LedgerSupportsProtocol (SimpleBftBlock c c') + where + protocolLedgerView _ _ = () + ledgerViewForecastAt _ = trivialForecast {------------------------------------------------------------------------------- Forging @@ -116,36 +126,39 @@ type instance ForgeStateInfo (SimpleBftBlock c c') = () type instance ForgeStateUpdateError (SimpleBftBlock c c') = Void -forgeBftExt :: forall c c'. - ( SimpleCrypto c - , BftCrypto c' - , Signable (BftDSIGN c') (SignedSimpleBft c c') - ) - => ForgeExt c (SimpleBftExt c c') +forgeBftExt :: + forall c c'. + ( SimpleCrypto c + , BftCrypto c' + , Signable (BftDSIGN c') (SignedSimpleBft c c') + ) => + ForgeExt c (SimpleBftExt c c') forgeBftExt = ForgeExt $ \cfg _ SimpleBlock{..} -> - let SimpleHeader{..} = simpleHeader - ext :: SimpleBftExt c c' - ext = SimpleBftExt $ + let SimpleHeader{..} = simpleHeader + ext :: SimpleBftExt c c' + ext = + SimpleBftExt $ forgeBftFields (configConsensus cfg) $ - SignedSimpleBft { - signedSimpleBft = simpleHeaderStd + SignedSimpleBft + { signedSimpleBft = simpleHeaderStd } - in SimpleBlock { - simpleHeader = mkSimpleHeader encode simpleHeaderStd ext - , simpleBody = simpleBody - } + in SimpleBlock + { simpleHeader = mkSimpleHeader encode simpleHeaderStd ext + , simpleBody = simpleBody + } {------------------------------------------------------------------------------- Serialisation -------------------------------------------------------------------------------} instance BftCrypto c' => Serialise (SimpleBftExt c c') where - encode (SimpleBftExt BftFields{..}) = mconcat [ - encodeSignedDSIGN bftSignature + encode (SimpleBftExt BftFields{..}) = + mconcat + [ 
encodeSignedDSIGN bftSignature ] decode = do - bftSignature <- decodeSignedDSIGN - return $ SimpleBftExt BftFields{..} + bftSignature <- decodeSignedDSIGN + return $ SimpleBftExt BftFields{..} instance SimpleCrypto c => Serialise (SignedSimpleBft c c') instance SimpleCrypto c => SignableRepresentation (SignedSimpleBft c c') where @@ -155,7 +168,9 @@ instance (Typeable c', SimpleCrypto c) => ToCBOR (SignedSimpleBft c c') where toCBOR = encode instance EncodeDisk (SimpleBftBlock c c') () - -- Default instance + +-- Default instance instance DecodeDisk (SimpleBftBlock c c') () - -- Default instance + +-- Default instance diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/PBFT.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/PBFT.hs index ac6cc26cf1..3f890941de 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/PBFT.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/PBFT.hs @@ -10,38 +10,37 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Mock.Ledger.Block.PBFT ( - SignedSimplePBft (..) +module Ouroboros.Consensus.Mock.Ledger.Block.PBFT + ( SignedSimplePBft (..) , SimplePBftBlock , SimplePBftExt (..) 
, SimplePBftHeader , forgePBftExt ) where -import Cardano.Binary (ToCBOR (..)) -import Cardano.Crypto.DSIGN -import Cardano.Crypto.Util -import Codec.Serialise (Serialise (..), serialise) -import qualified Data.ByteString.Lazy as BSL -import Data.Typeable (Typeable) -import Data.Void (Void) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.Ledger.Basics -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Mock.Ledger.Block -import Ouroboros.Consensus.Mock.Ledger.Forge -import Ouroboros.Consensus.Mock.Node.Abstract -import Ouroboros.Consensus.Protocol.PBFT -import qualified Ouroboros.Consensus.Protocol.PBFT.State as S -import Ouroboros.Consensus.Protocol.Signed -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.Condense +import Cardano.Binary (ToCBOR (..)) +import Cardano.Crypto.DSIGN +import Cardano.Crypto.Util +import Codec.Serialise (Serialise (..), serialise) +import Data.ByteString.Lazy qualified as BSL +import Data.Typeable (Typeable) +import Data.Void (Void) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.Ledger.Basics +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Mock.Ledger.Block +import Ouroboros.Consensus.Mock.Ledger.Forge +import Ouroboros.Consensus.Mock.Node.Abstract +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Protocol.PBFT.State qualified as S +import Ouroboros.Consensus.Protocol.Signed +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- Instantiate the @ext@ to suit PBFT @@ -57,12 +56,12 @@ type SimplePBftBlock c c' = SimpleBlock c (SimplePBftExt c c') type SimplePBftHeader c c' = 
SimpleHeader c (SimplePBftExt c c') -- | Block extension required for PBFT -newtype SimplePBftExt c c' = SimplePBftExt { - simplePBftExt :: PBftFields c' (SignedSimplePBft c c') - } - deriving stock (Generic, Show, Eq) - deriving newtype (Condense) - deriving anyclass (NoThunks) +newtype SimplePBftExt c c' = SimplePBftExt + { simplePBftExt :: PBftFields c' (SignedSimplePBft c c') + } + deriving stock (Generic, Show, Eq) + deriving newtype Condense + deriving anyclass NoThunks -- | Part of the block that gets signed -- @@ -71,10 +70,10 @@ newtype SimplePBftExt c c' = SimplePBftExt { -- -- The signature does not cover the body explicitly, but since the standard -- header includes a hash of the body, the signature covers the body implicitly. -data SignedSimplePBft c c' = SignedSimplePBft { - signedSimplePBft :: SimpleStdHeader c (SimplePBftExt c c') - } - deriving (Generic) +data SignedSimplePBft c c' = SignedSimplePBft + { signedSimplePBft :: SimpleStdHeader c (SimplePBftExt c c') + } + deriving Generic type instance BlockProtocol (SimplePBftBlock c c') = PBft c' @@ -86,9 +85,11 @@ _simplePBftHeader = simpleHeader Customization of the generic infrastructure -------------------------------------------------------------------------------} -instance (SimpleCrypto c, PBftCrypto c', Serialise (PBftVerKeyHash c')) - => MockProtocolSpecific c (SimplePBftExt c c') where - -- | PBFT requires the ledger view; for the mock ledger, this is constant +instance + (SimpleCrypto c, PBftCrypto c', Serialise (PBftVerKeyHash c')) => + MockProtocolSpecific c (SimplePBftExt c c') + where + -- \| PBFT requires the ledger view; for the mock ledger, this is constant type MockLedgerConfig c (SimplePBftExt c c') = PBftLedgerView c' {------------------------------------------------------------------------------- @@ -100,27 +101,37 @@ type instance Signed (SimplePBftHeader c c') = SignedSimplePBft c c' instance SignedHeader (SimplePBftHeader c c') where headerSigned = SignedSimplePBft . 
simpleHeaderStd -instance ( SimpleCrypto c - , PBftCrypto c' - , Serialise (PBftVerKeyHash c') - ) => RunMockBlock c (SimplePBftExt c c') where +instance + ( SimpleCrypto c + , PBftCrypto c' + , Serialise (PBftVerKeyHash c') + ) => + RunMockBlock c (SimplePBftExt c c') + where mockNetworkMagic = const constructMockNetworkMagic -instance ( SimpleCrypto c - , Signable MockDSIGN (SignedSimplePBft c PBftMockCrypto) - ) => BlockSupportsProtocol (SimplePBftBlock c PBftMockCrypto) where - validateView _ = pbftValidateRegular () (simplePBftExt . simpleHeaderExt) - selectView _ = mkPBftSelectView +instance + ( SimpleCrypto c + , Signable MockDSIGN (SignedSimplePBft c PBftMockCrypto) + ) => + BlockSupportsProtocol (SimplePBftBlock c PBftMockCrypto) + where + validateView _ = pbftValidateRegular () (simplePBftExt . simpleHeaderExt) + selectView _ = mkPBftSelectView -- | The ledger view is constant for the mock instantiation of PBFT -- (mock blocks cannot change delegation) -instance ( SimpleCrypto c - , Signable MockDSIGN (SignedSimplePBft c PBftMockCrypto) - ) => LedgerSupportsProtocol (SimplePBftBlock c PBftMockCrypto) where - protocolLedgerView cfg _ = simpleMockLedgerConfig cfg - ledgerViewForecastAt cfg st = constantForecastOf - (simpleMockLedgerConfig cfg) - (getTipSlot st) +instance + ( SimpleCrypto c + , Signable MockDSIGN (SignedSimplePBft c PBftMockCrypto) + ) => + LedgerSupportsProtocol (SimplePBftBlock c PBftMockCrypto) + where + protocolLedgerView cfg _ = simpleMockLedgerConfig cfg + ledgerViewForecastAt cfg st = + constantForecastOf + (simpleMockLedgerConfig cfg) + (getTipSlot st) {------------------------------------------------------------------------------- Forging @@ -132,41 +143,44 @@ type instance ForgeStateInfo (SimplePBftBlock c c') = () type instance ForgeStateUpdateError (SimplePBftBlock c c') = Void -forgePBftExt :: forall c c'. 
- ( SimpleCrypto c - , PBftCrypto c' - , Signable (PBftDSIGN c') (SignedSimplePBft c c') - , ContextDSIGN (PBftDSIGN c') ~ () - ) - => ForgeExt c (SimplePBftExt c c') +forgePBftExt :: + forall c c'. + ( SimpleCrypto c + , PBftCrypto c' + , Signable (PBftDSIGN c') (SignedSimplePBft c c') + , ContextDSIGN (PBftDSIGN c') ~ () + ) => + ForgeExt c (SimplePBftExt c c') forgePBftExt = ForgeExt $ \_cfg isLeader SimpleBlock{..} -> - let SimpleHeader{..} = simpleHeader - ext :: SimplePBftExt c c' - ext = SimplePBftExt $ + let SimpleHeader{..} = simpleHeader + ext :: SimplePBftExt c c' + ext = + SimplePBftExt $ forgePBftFields (const ()) isLeader - SignedSimplePBft { signedSimplePBft = simpleHeaderStd } - in SimpleBlock { - simpleHeader = mkSimpleHeader encode simpleHeaderStd ext - , simpleBody = simpleBody - } + SignedSimplePBft{signedSimplePBft = simpleHeaderStd} + in SimpleBlock + { simpleHeader = mkSimpleHeader encode simpleHeaderStd ext + , simpleBody = simpleBody + } {------------------------------------------------------------------------------- Serialisation -------------------------------------------------------------------------------} instance PBftCrypto c' => Serialise (SimplePBftExt c c') where - encode (SimplePBftExt PBftFields{..}) = mconcat [ - encodeVerKeyDSIGN pbftIssuer + encode (SimplePBftExt PBftFields{..}) = + mconcat + [ encodeVerKeyDSIGN pbftIssuer , encodeVerKeyDSIGN pbftGenKey , encodeSignedDSIGN pbftSignature ] decode = do - pbftIssuer <- decodeVerKeyDSIGN - pbftGenKey <- decodeVerKeyDSIGN - pbftSignature <- decodeSignedDSIGN - return $ SimplePBftExt PBftFields{..} + pbftIssuer <- decodeVerKeyDSIGN + pbftGenKey <- decodeVerKeyDSIGN + pbftSignature <- decodeSignedDSIGN + return $ SimplePBftExt PBftFields{..} instance SimpleCrypto c => Serialise (SignedSimplePBft c c') instance SimpleCrypto c => SignableRepresentation (SignedSimplePBft c c') where @@ -175,10 +189,14 @@ instance SimpleCrypto c => SignableRepresentation (SignedSimplePBft c c') where 
instance (Typeable c', SimpleCrypto c) => ToCBOR (SignedSimplePBft c c') where toCBOR = encode -instance PBftCrypto c' - => EncodeDisk (SimplePBftBlock c c') (S.PBftState c') where +instance + PBftCrypto c' => + EncodeDisk (SimplePBftBlock c c') (S.PBftState c') + where encodeDisk = const S.encodePBftState -instance PBftCrypto c' - => DecodeDisk (SimplePBftBlock c c') (S.PBftState c') where +instance + PBftCrypto c' => + DecodeDisk (SimplePBftBlock c c') (S.PBftState c') + where decodeDisk = const S.decodePBftState diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/Praos.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/Praos.hs index 1560627b71..f70f375c75 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/Praos.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/Praos.hs @@ -8,39 +8,38 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Mock.Ledger.Block.Praos ( - SignedSimplePraos (..) +module Ouroboros.Consensus.Mock.Ledger.Block.Praos + ( SignedSimplePraos (..) , SimplePraosBlock , SimplePraosExt (..) 
, SimplePraosHeader , forgePraosExt ) where -import Cardano.Binary (FromCBOR (..), ToCBOR (..), serialize') -import Cardano.Crypto.KES -import Cardano.Crypto.Util -import qualified Codec.CBOR.Decoding as CBOR -import qualified Codec.CBOR.Encoding as CBOR -import Codec.Serialise (Serialise (..)) -import Data.Typeable (Typeable) -import Data.Void (Void) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Mock.Ledger.Address -import Ouroboros.Consensus.Mock.Ledger.Block -import Ouroboros.Consensus.Mock.Ledger.Forge -import Ouroboros.Consensus.Mock.Node.Abstract -import Ouroboros.Consensus.Mock.Protocol.Praos -import Ouroboros.Consensus.Protocol.Signed -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.Condense +import Cardano.Binary (FromCBOR (..), ToCBOR (..), serialize') +import Cardano.Crypto.KES +import Cardano.Crypto.Util +import Codec.CBOR.Decoding qualified as CBOR +import Codec.CBOR.Encoding qualified as CBOR +import Codec.Serialise (Serialise (..)) +import Data.Typeable (Typeable) +import Data.Void (Void) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Mock.Ledger.Address +import Ouroboros.Consensus.Mock.Ledger.Block +import Ouroboros.Consensus.Mock.Ledger.Forge +import Ouroboros.Consensus.Mock.Node.Abstract +import Ouroboros.Consensus.Mock.Protocol.Praos +import Ouroboros.Consensus.Protocol.Signed +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.Condense {------------------------------------------------------------------------------- Instantiate the @ext@ to suit Praos @@ -56,12 
+55,12 @@ type SimplePraosBlock c c' = SimpleBlock c (SimplePraosExt c c') type SimplePraosHeader c c' = SimpleHeader c (SimplePraosExt c c') -- | Block extension required for Praos -newtype SimplePraosExt c c' = SimplePraosExt { - simplePraosExt :: PraosFields c' (SignedSimplePraos c c') +newtype SimplePraosExt c c' = SimplePraosExt + { simplePraosExt :: PraosFields c' (SignedSimplePraos c c') } - deriving stock (Generic, Show, Eq) - deriving newtype (Condense) - deriving anyclass (NoThunks) + deriving stock (Generic, Show, Eq) + deriving newtype Condense + deriving anyclass NoThunks -- | Part of the block that gets signed -- @@ -69,10 +68,10 @@ newtype SimplePraosExt c c' = SimplePraosExt { -- be needed. -- Of course, this Praos is merely a proof of concept so it doesn't really -- matter either way; we include them here primarily to show that we can. -data SignedSimplePraos c c' = SignedSimplePraos { - signedSimplePraos :: SimpleStdHeader c (SimplePraosExt c c') - , signedPraosFields :: PraosExtraFields c' - } +data SignedSimplePraos c c' = SignedSimplePraos + { signedSimplePraos :: SimpleStdHeader c (SimplePraosExt c c') + , signedPraosFields :: PraosExtraFields c' + } type instance BlockProtocol (SimplePraosBlock c c') = Praos c' @@ -84,9 +83,11 @@ _simplePraosHeader = simpleHeader Customization of the generic infrastructure -------------------------------------------------------------------------------} -instance (SimpleCrypto c, Typeable c') - => MockProtocolSpecific c (SimplePraosExt c c') where - -- | See 'LedgerSupportsProtocol' instance for why we need the 'AddrDist' +instance + (SimpleCrypto c, Typeable c') => + MockProtocolSpecific c (SimplePraosExt c c') + where + -- \| See 'LedgerSupportsProtocol' instance for why we need the 'AddrDist' type MockLedgerConfig c (SimplePraosExt c c') = AddrDist {------------------------------------------------------------------------------- @@ -96,27 +97,37 @@ instance (SimpleCrypto c, Typeable c') type instance Signed 
(SimplePraosHeader c c') = SignedSimplePraos c c' instance PraosCrypto c' => SignedHeader (SimplePraosHeader c c') where - headerSigned SimpleHeader{..} = SignedSimplePraos { - signedSimplePraos = simpleHeaderStd + headerSigned SimpleHeader{..} = + SignedSimplePraos + { signedSimplePraos = simpleHeaderStd , signedPraosFields = praosExtraFields (simplePraosExt simpleHeaderExt) } -instance ( SimpleCrypto c - , PraosCrypto c' - ) => RunMockBlock c (SimplePraosExt c c') where +instance + ( SimpleCrypto c + , PraosCrypto c' + ) => + RunMockBlock c (SimplePraosExt c c') + where mockNetworkMagic = const constructMockNetworkMagic -instance ( SimpleCrypto c - , PraosCrypto c' - , Signable (PraosKES c') (SignedSimplePraos c c') - ) => BlockSupportsProtocol (SimpleBlock c (SimplePraosExt c c')) where +instance + ( SimpleCrypto c + , PraosCrypto c' + , Signable (PraosKES c') (SignedSimplePraos c c') + ) => + BlockSupportsProtocol (SimpleBlock c (SimplePraosExt c c')) + where validateView _ = praosValidateView (simplePraosExt . simpleHeaderExt) -instance ( SimpleCrypto c - , PraosCrypto c' - , Signable (PraosKES c') (SignedSimplePraos c c') - ) => LedgerSupportsProtocol (SimplePraosBlock c c') where - protocolLedgerView _ _ = () +instance + ( SimpleCrypto c + , PraosCrypto c' + , Signable (PraosKES c') (SignedSimplePraos c c') + ) => + LedgerSupportsProtocol (SimplePraosBlock c c') + where + protocolLedgerView _ _ = () ledgerViewForecastAt _ st = constantForecastOf () (getTipSlot st) {------------------------------------------------------------------------------- @@ -127,73 +138,86 @@ type instance CannotForge (SimplePraosBlock c c') = Void type instance ForgeStateInfo (SimplePraosBlock c c') = HotKey c' -type instance ForgeStateUpdateError (SimplePraosBlock c c') = - HotKeyEvolutionError - -forgePraosExt :: forall c c'. 
- ( SimpleCrypto c - , PraosCrypto c' - , Signable (PraosKES c') (SignedSimplePraos c c') - ) - => HotKey c' - -> ForgeExt c (SimplePraosExt c c') +type instance + ForgeStateUpdateError (SimplePraosBlock c c') = + HotKeyEvolutionError + +forgePraosExt :: + forall c c'. + ( SimpleCrypto c + , PraosCrypto c' + , Signable (PraosKES c') (SignedSimplePraos c c') + ) => + HotKey c' -> + ForgeExt c (SimplePraosExt c c') forgePraosExt hotKey = ForgeExt $ \_cfg isLeader SimpleBlock{..} -> - let SimpleHeader{..} = simpleHeader - - ext :: SimplePraosExt c c' - ext = SimplePraosExt $ - forgePraosFields isLeader hotKey $ \praosExtraFields -> - SignedSimplePraos { - signedSimplePraos = simpleHeaderStd - , signedPraosFields = praosExtraFields - } - in SimpleBlock { - simpleHeader = mkSimpleHeader encode simpleHeaderStd ext - , simpleBody = simpleBody - } + let SimpleHeader{..} = simpleHeader + + ext :: SimplePraosExt c c' + ext = SimplePraosExt $ + forgePraosFields isLeader hotKey $ \praosExtraFields -> + SignedSimplePraos + { signedSimplePraos = simpleHeaderStd + , signedPraosFields = praosExtraFields + } + in SimpleBlock + { simpleHeader = mkSimpleHeader encode simpleHeaderStd ext + , simpleBody = simpleBody + } {------------------------------------------------------------------------------- Serialisation -------------------------------------------------------------------------------} instance PraosCrypto c' => Serialise (SimplePraosExt c c') where - encode (SimplePraosExt PraosFields{..}) = mconcat [ - encodeSignedKES praosSignature + encode (SimplePraosExt PraosFields{..}) = + mconcat + [ encodeSignedKES praosSignature , encodePraosExtraFields praosExtraFields ] decode = do - praosSignature <- decodeSignedKES - praosExtraFields <- decodePraosExtraFields - return $ SimplePraosExt PraosFields{..} - -instance (SimpleCrypto c, PraosCrypto c') - => ToCBOR (SignedSimplePraos c c') where - toCBOR SignedSimplePraos{..} = mconcat [ - encode signedSimplePraos + praosSignature <- 
decodeSignedKES + praosExtraFields <- decodePraosExtraFields + return $ SimplePraosExt PraosFields{..} + +instance + (SimpleCrypto c, PraosCrypto c') => + ToCBOR (SignedSimplePraos c c') + where + toCBOR SignedSimplePraos{..} = + mconcat + [ encode signedSimplePraos , encodePraosExtraFields signedPraosFields ] -instance (SimpleCrypto c, PraosCrypto c') - => SignableRepresentation (SignedSimplePraos c c') where +instance + (SimpleCrypto c, PraosCrypto c') => + SignableRepresentation (SignedSimplePraos c c') + where getSignableRepresentation = serialize' encodePraosExtraFields :: PraosCrypto c' => PraosExtraFields c' -> CBOR.Encoding -encodePraosExtraFields PraosExtraFields{..} = mconcat [ - encode praosCreator +encodePraosExtraFields PraosExtraFields{..} = + mconcat + [ encode praosCreator , toCBOR praosRho , toCBOR praosY ] -decodePraosExtraFields :: forall s c'. PraosCrypto c' - => CBOR.Decoder s (PraosExtraFields c') +decodePraosExtraFields :: + forall s c'. + PraosCrypto c' => + CBOR.Decoder s (PraosExtraFields c') decodePraosExtraFields = do - praosCreator <- decode - praosRho <- fromCBOR - praosY <- fromCBOR - return PraosExtraFields{..} + praosCreator <- decode + praosRho <- fromCBOR + praosY <- fromCBOR + return PraosExtraFields{..} instance PraosCrypto c' => EncodeDisk (SimplePraosBlock c c') (PraosChainDepState c') - -- Default instance + +-- Default instance instance PraosCrypto c' => DecodeDisk (SimplePraosBlock c c') (PraosChainDepState c') - -- Default instance + +-- Default instance diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/PraosRule.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/PraosRule.hs index 12d2d70404..9713d1498c 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/PraosRule.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Block/PraosRule.hs @@ -7,37 +7,36 @@ {-# LANGUAGE 
RecordWildCards #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -- | Test the Praos chain selection rule (with explicit leader schedule) -module Ouroboros.Consensus.Mock.Ledger.Block.PraosRule ( - PraosCryptoUnused +module Ouroboros.Consensus.Mock.Ledger.Block.PraosRule + ( PraosCryptoUnused , SimplePraosRuleBlock , SimplePraosRuleExt (..) , SimplePraosRuleHeader , forgePraosRuleExt ) where -import Cardano.Crypto.Hash -import Cardano.Crypto.KES -import Cardano.Crypto.VRF -import Codec.Serialise (Serialise (..)) -import Data.Void (Void) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Forecast -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Mock.Ledger.Block -import Ouroboros.Consensus.Mock.Ledger.Forge -import Ouroboros.Consensus.Mock.Node.Abstract -import Ouroboros.Consensus.Mock.Protocol.LeaderSchedule -import Ouroboros.Consensus.Mock.Protocol.Praos -import Ouroboros.Consensus.NodeId (CoreNodeId) -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Consensus.Util.Condense +import Cardano.Crypto.Hash +import Cardano.Crypto.KES +import Cardano.Crypto.VRF +import Codec.Serialise (Serialise (..)) +import Data.Void (Void) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Forecast +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Mock.Ledger.Block +import Ouroboros.Consensus.Mock.Ledger.Forge +import Ouroboros.Consensus.Mock.Node.Abstract +import Ouroboros.Consensus.Mock.Protocol.LeaderSchedule +import Ouroboros.Consensus.Mock.Protocol.Praos +import Ouroboros.Consensus.NodeId (CoreNodeId) +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Consensus.Util.Condense 
{------------------------------------------------------------------------------- Instantiate @ext@ @@ -59,14 +58,15 @@ type SimplePraosRuleHeader c = SimpleHeader c SimplePraosRuleExt -- The 'WithLeaderSchedule' doesn't require /anything/ in the block header. -- We add the 'CoreNodeId' just so that we can check that the schedule matches -- the chain. -newtype SimplePraosRuleExt = SimplePraosRuleExt { - simplePraosRuleExt :: CoreNodeId - } - deriving stock (Generic, Show, Eq) - deriving newtype (Condense) - deriving anyclass (NoThunks) - -type instance BlockProtocol (SimplePraosRuleBlock c) = +newtype SimplePraosRuleExt = SimplePraosRuleExt + { simplePraosRuleExt :: CoreNodeId + } + deriving stock (Generic, Show, Eq) + deriving newtype Condense + deriving anyclass NoThunks + +type instance + BlockProtocol (SimplePraosRuleBlock c) = WithLeaderSchedule (Praos PraosCryptoUnused) -- | Sanity check that block and header type synonyms agree @@ -88,15 +88,17 @@ instance SimpleCrypto c => RunMockBlock c SimplePraosRuleExt where mockNetworkMagic = const constructMockNetworkMagic instance - ( SimpleCrypto c - ) => BlockSupportsProtocol (SimpleBlock c SimplePraosRuleExt) where + SimpleCrypto c => + BlockSupportsProtocol (SimpleBlock c SimplePraosRuleExt) + where validateView _ _ = () instance - ( SimpleCrypto c - ) => LedgerSupportsProtocol (SimplePraosRuleBlock c) where - protocolLedgerView _ _ = () - ledgerViewForecastAt _ = trivialForecast + SimpleCrypto c => + LedgerSupportsProtocol (SimplePraosRuleBlock c) + where + protocolLedgerView _ _ = () + ledgerViewForecastAt _ = trivialForecast {------------------------------------------------------------------------------- We don't need crypto for this protocol @@ -105,15 +107,14 @@ instance data PraosCryptoUnused instance PraosCrypto PraosCryptoUnused where - type PraosKES PraosCryptoUnused = NeverKES - type PraosVRF PraosCryptoUnused = NeverVRF + type PraosKES PraosCryptoUnused = NeverKES + type PraosVRF PraosCryptoUnused = 
NeverVRF type PraosHash PraosCryptoUnused = NeverHash {------------------------------------------------------------------------------- Forging -------------------------------------------------------------------------------} - type instance CannotForge (SimplePraosRuleBlock c) = Void type instance ForgeStateInfo (SimplePraosRuleBlock c) = () @@ -122,12 +123,12 @@ type instance ForgeStateUpdateError (SimplePraosRuleBlock c) = Void forgePraosRuleExt :: SimpleCrypto c => ForgeExt c SimplePraosRuleExt forgePraosRuleExt = ForgeExt $ \cfg _ SimpleBlock{..} -> - let ext = SimplePraosRuleExt $ wlsConfigNodeId (configConsensus cfg) - SimpleHeader{..} = simpleHeader - in SimpleBlock { - simpleHeader = mkSimpleHeader encode simpleHeaderStd ext - , simpleBody = simpleBody - } + let ext = SimplePraosRuleExt $ wlsConfigNodeId (configConsensus cfg) + SimpleHeader{..} = simpleHeader + in SimpleBlock + { simpleHeader = mkSimpleHeader encode simpleHeaderStd ext + , simpleBody = simpleBody + } {------------------------------------------------------------------------------- Serialisation @@ -136,7 +137,9 @@ forgePraosRuleExt = ForgeExt $ \cfg _ SimpleBlock{..} -> instance Serialise SimplePraosRuleExt instance EncodeDisk (SimplePraosRuleBlock c) () - -- Default instance + +-- Default instance instance DecodeDisk (SimplePraosRuleBlock c) () - -- Default instance + +-- Default instance diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Forge.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Forge.hs index a1609f5329..4a8a662f78 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Forge.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Forge.hs @@ -5,21 +5,21 @@ {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Mock.Ledger.Forge ( - ForgeExt (..) 
+module Ouroboros.Consensus.Mock.Ledger.Forge + ( ForgeExt (..) , forgeSimple ) where -import Cardano.Binary (toCBOR) -import Cardano.Crypto.Hash (hashWithSerialiser) -import Codec.Serialise (Serialise (..), serialise) -import qualified Data.ByteString.Lazy as Lazy -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Mock.Ledger.Block -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Network.SizeInBytes +import Cardano.Binary (toCBOR) +import Cardano.Crypto.Hash (hashWithSerialiser) +import Codec.Serialise (Serialise (..), serialise) +import Data.ByteString.Lazy qualified as Lazy +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Mock.Ledger.Block +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Network.SizeInBytes -- | Construct the protocol specific part of the block -- @@ -29,41 +29,48 @@ import Ouroboros.Network.SizeInBytes -- Note: this is a newtype and not a type class to allow for things in the -- closure. For example, if Praos had to use a stateful KES key, it could -- refer to it in its closure. -newtype ForgeExt c ext = ForgeExt { - forgeExt :: TopLevelConfig (SimpleBlock c ext) - -> IsLeader (BlockProtocol (SimpleBlock c ext)) - -> SimpleBlock' c ext () - -> SimpleBlock c ext - } +newtype ForgeExt c ext = ForgeExt + { forgeExt :: + TopLevelConfig (SimpleBlock c ext) -> + IsLeader (BlockProtocol (SimpleBlock c ext)) -> + SimpleBlock' c ext () -> + SimpleBlock c ext + } -forgeSimple :: forall c ext mk. 
- ( SimpleCrypto c - ) - => ForgeExt c ext - -> TopLevelConfig (SimpleBlock c ext) - -> BlockNo -- ^ Current block number - -> SlotNo -- ^ Current slot number - -> TickedLedgerState (SimpleBlock c ext) mk -- ^ Current ledger - -> [GenTx (SimpleBlock c ext)] -- ^ Txs to include - -> IsLeader (BlockProtocol (SimpleBlock c ext)) - -> SimpleBlock c ext -forgeSimple ForgeExt { forgeExt } cfg curBlock curSlot tickedLedger txs proof = - forgeExt cfg proof $ SimpleBlock { - simpleHeader = mkSimpleHeader encode stdHeader () - , simpleBody = body +forgeSimple :: + forall c ext mk. + SimpleCrypto c => + ForgeExt c ext -> + TopLevelConfig (SimpleBlock c ext) -> + -- | Current block number + BlockNo -> + -- | Current slot number + SlotNo -> + -- | Current ledger + TickedLedgerState (SimpleBlock c ext) mk -> + -- | Txs to include + [GenTx (SimpleBlock c ext)] -> + IsLeader (BlockProtocol (SimpleBlock c ext)) -> + SimpleBlock c ext +forgeSimple ForgeExt{forgeExt} cfg curBlock curSlot tickedLedger txs proof = + forgeExt cfg proof $ + SimpleBlock + { simpleHeader = mkSimpleHeader encode stdHeader () + , simpleBody = body } - where - body :: SimpleBody - body = SimpleBody { simpleTxs = map simpleGenTx txs } + where + body :: SimpleBody + body = SimpleBody{simpleTxs = map simpleGenTx txs} - stdHeader :: SimpleStdHeader c ext - stdHeader = SimpleStdHeader { - simplePrev = castHash $ getTipHash tickedLedger - , simpleSlotNo = curSlot - , simpleBlockNo = curBlock - , simpleBodyHash = hashWithSerialiser toCBOR body - , simpleBodySize = bodySize - } + stdHeader :: SimpleStdHeader c ext + stdHeader = + SimpleStdHeader + { simplePrev = castHash $ getTipHash tickedLedger + , simpleSlotNo = curSlot + , simpleBlockNo = curBlock + , simpleBodyHash = hashWithSerialiser toCBOR body + , simpleBodySize = bodySize + } - bodySize :: SizeInBytes - bodySize = SizeInBytes $ fromIntegral $ Lazy.length $ serialise body + bodySize :: SizeInBytes + bodySize = SizeInBytes $ fromIntegral $ Lazy.length $ 
serialise body diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Stake.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Stake.hs index 89b947d3d1..dac936c9a4 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Stake.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/Stake.hs @@ -1,11 +1,13 @@ {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Mock.Ledger.Stake ( - -- * Stakeholders +module Ouroboros.Consensus.Mock.Ledger.Stake + ( -- * Stakeholders StakeHolder (..) + -- * Address distribution , AddrDist + -- * Stake distribution , StakeDist (..) , equalStakeDist @@ -13,30 +15,30 @@ module Ouroboros.Consensus.Mock.Ledger.Stake ( , relativeStakes , stakeWithDefault , totalStakes + -- * Type family instances , Ticked (..) ) where -import Codec.Serialise (Serialise) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (mapMaybe) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Mock.Ledger.Address -import Ouroboros.Consensus.Mock.Ledger.UTxO -import Ouroboros.Consensus.NodeId (CoreNodeId (..), NodeId (..)) -import Ouroboros.Consensus.Ticked +import Codec.Serialise (Serialise) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (mapMaybe) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Mock.Ledger.Address +import Ouroboros.Consensus.Mock.Ledger.UTxO +import Ouroboros.Consensus.NodeId (CoreNodeId (..), NodeId (..)) +import Ouroboros.Consensus.Ticked {------------------------------------------------------------------------------- Stakeholders -------------------------------------------------------------------------------} -data StakeHolder = - -- | Stake of a core node +data StakeHolder + = -- | Stake of a core node StakeCore CoreNodeId - - -- | Stake for everybody else (we don't 
need to distinguish) - | StakeEverybodyElse + | -- | Stake for everybody else (we don't need to distinguish) + StakeEverybodyElse deriving (Show, Eq, Ord) {------------------------------------------------------------------------------- @@ -46,18 +48,20 @@ data StakeHolder = -- | In the mock setup, only core nodes have stake -- -- INVARIANT: The rationals should sum to 1. -newtype StakeDist = StakeDist { stakeDistToMap :: Map CoreNodeId Rational } +newtype StakeDist = StakeDist {stakeDistToMap :: Map CoreNodeId Rational} deriving (Show, Eq, Serialise, NoThunks) stakeWithDefault :: Rational -> CoreNodeId -> StakeDist -> Rational stakeWithDefault d n = Map.findWithDefault d n . stakeDistToMap relativeStakes :: Map StakeHolder Amount -> StakeDist -relativeStakes m = StakeDist $ - let totalStake = fromIntegral $ sum $ Map.elems m - in Map.fromList [ (nid, fromIntegral stake / totalStake) - | (StakeCore nid, stake) <- Map.toList m - ] +relativeStakes m = + StakeDist $ + let totalStake = fromIntegral $ sum $ Map.elems m + in Map.fromList + [ (nid, fromIntegral stake / totalStake) + | (StakeCore nid, stake) <- Map.toList m + ] -- | Compute stakes of all nodes -- @@ -66,30 +70,30 @@ relativeStakes m = StakeDist $ totalStakes :: Map Addr NodeId -> Utxo -> Map StakeHolder Amount totalStakes addrDist = foldl f Map.empty where - f :: Map StakeHolder Amount -> TxOut -> Map StakeHolder Amount - f m (a, stake) = case Map.lookup a addrDist of - Just (CoreId nid) -> Map.insertWith (+) (StakeCore nid) stake m - _ -> Map.insertWith (+) StakeEverybodyElse stake m + f :: Map StakeHolder Amount -> TxOut -> Map StakeHolder Amount + f m (a, stake) = case Map.lookup a addrDist of + Just (CoreId nid) -> Map.insertWith (+) (StakeCore nid) stake m + _ -> Map.insertWith (+) StakeEverybodyElse stake m -- | Stake distribution where every address has equal state equalStakeDist :: AddrDist -> StakeDist equalStakeDist ad = - StakeDist $ + StakeDist $ Map.fromList $ - mapMaybe (nodeStake . 
snd) $ - Map.toList ad - where - nodeStake :: NodeId -> Maybe (CoreNodeId, Rational) - nodeStake (RelayId _) = Nothing - nodeStake (CoreId i) = Just (i, recip (fromIntegral n)) + mapMaybe (nodeStake . snd) $ + Map.toList ad + where + nodeStake :: NodeId -> Maybe (CoreNodeId, Rational) + nodeStake (RelayId _) = Nothing + nodeStake (CoreId i) = Just (i, recip (fromIntegral n)) - n = length $ filter isCore $ Map.elems ad + n = length $ filter isCore $ Map.elems ad - isCore :: NodeId -> Bool - isCore CoreId{} = True - isCore RelayId{} = False + isCore :: NodeId -> Bool + isCore CoreId{} = True + isCore RelayId{} = False -- | Genesis stake distribution genesisStakeDist :: AddrDist -> StakeDist genesisStakeDist addrDist = - relativeStakes (totalStakes addrDist (genesisUtxo addrDist)) + relativeStakes (totalStakes addrDist (genesisUtxo addrDist)) diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/State.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/State.hs index d7170e56fe..d54ea6d6b9 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/State.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/State.hs @@ -6,134 +6,142 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Mock.Ledger.State ( - -- * Config for the mock ledger +module Ouroboros.Consensus.Mock.Ledger.State + ( -- * Config for the mock ledger MockConfig (..) , defaultMockConfig + -- * State of the mock ledger , MockError (..) , MockState (..) 
, updateMockState , updateMockTip , updateMockUTxO + -- * Supporting definitions , checkTxSize , txSize + -- * Genesis state , genesisMockState ) where -import Cardano.Binary (toCBOR) -import Cardano.Crypto.Hash -import Codec.Serialise (Serialise, serialise) -import Control.Monad (guard) -import Control.Monad.Except (Except, throwError, withExcept) -import qualified Data.ByteString.Lazy as BL -import Data.Set (Set) -import qualified Data.Set as Set -import Data.Typeable (Typeable) -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Ledger.SupportsMempool (ByteSize32 (..)) -import Ouroboros.Consensus.Mock.Ledger.Address -import Ouroboros.Consensus.Mock.Ledger.UTxO -import Ouroboros.Consensus.Util (ShowProxy (..), repeatedlyM) -import Test.Util.Orphans.Serialise () +import Cardano.Binary (toCBOR) +import Cardano.Crypto.Hash +import Codec.Serialise (Serialise, serialise) +import Control.Monad (guard) +import Control.Monad.Except (Except, throwError, withExcept) +import Data.ByteString.Lazy qualified as BL +import Data.Set (Set) +import Data.Set qualified as Set +import Data.Typeable (Typeable) +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Ledger.SupportsMempool (ByteSize32 (..)) +import Ouroboros.Consensus.Mock.Ledger.Address +import Ouroboros.Consensus.Mock.Ledger.UTxO +import Ouroboros.Consensus.Util (ShowProxy (..), repeatedlyM) +import Test.Util.Orphans.Serialise () {------------------------------------------------------------------------------- Config of the mock block -------------------------------------------------------------------------------} -- | Parameters needed to validate blocks/txs -data MockConfig = MockConfig { - mockCfgMaxTxSize :: !(Maybe ByteSize32) +data MockConfig = MockConfig + { mockCfgMaxTxSize :: !(Maybe ByteSize32) } deriving stock (Show, Eq, Generic) deriving anyclass (NoThunks, 
Serialise) defaultMockConfig :: MockConfig -defaultMockConfig = MockConfig { - mockCfgMaxTxSize = Nothing +defaultMockConfig = + MockConfig + { mockCfgMaxTxSize = Nothing } {------------------------------------------------------------------------------- State of the mock ledger -------------------------------------------------------------------------------} -data MockState blk = MockState { - mockUtxo :: !Utxo - , mockConfirmed :: !(Set TxId) - , mockTip :: !(Point blk) - } +data MockState blk = MockState + { mockUtxo :: !Utxo + , mockConfirmed :: !(Set TxId) + , mockTip :: !(Point blk) + } deriving (Show, Eq, Generic, NoThunks) deriving instance Serialise (HeaderHash blk) => Serialise (MockState blk) -data MockError blk = - MockExpired !SlotNo !SlotNo - -- ^ The transaction expired in the first 'SlotNo', and it failed to +data MockError blk + = -- | The transaction expired in the first 'SlotNo', and it failed to -- validate in the second 'SlotNo'. + MockExpired !SlotNo !SlotNo | MockUtxoError UtxoError | MockInvalidHash (ChainHash blk) (ChainHash blk) | MockTxSizeTooBig ByteSize32 ByteSize32 deriving (Generic, NoThunks) deriving instance StandardHash blk => Show (MockError blk) -deriving instance StandardHash blk => Eq (MockError blk) +deriving instance StandardHash blk => Eq (MockError blk) deriving instance Serialise (HeaderHash blk) => Serialise (MockError blk) -instance Typeable blk => ShowProxy (MockError blk) where +instance Typeable blk => ShowProxy (MockError blk) -updateMockState :: (GetPrevHash blk, HasMockTxs blk) - => MockConfig - -> blk - -> MockState blk - -> Except (MockError blk) (MockState blk) +updateMockState :: + (GetPrevHash blk, HasMockTxs blk) => + MockConfig -> + blk -> + MockState blk -> + Except (MockError blk) (MockState blk) updateMockState cfg blk st = do - let hdr = getHeader blk - st' <- updateMockTip hdr st - updateMockUTxO cfg (blockSlot hdr) blk st' - -updateMockTip :: GetPrevHash blk - => Header blk - -> MockState blk - -> Except 
(MockError blk) (MockState blk) + let hdr = getHeader blk + st' <- updateMockTip hdr st + updateMockUTxO cfg (blockSlot hdr) blk st' + +updateMockTip :: + GetPrevHash blk => + Header blk -> + MockState blk -> + Except (MockError blk) (MockState blk) updateMockTip hdr (MockState u c t) - | headerPrevHash hdr == pointHash t - = return $ MockState u c (headerPoint hdr) - | otherwise - = throwError $ MockInvalidHash (headerPrevHash hdr) (pointHash t) - -updateMockUTxO :: HasMockTxs a - => MockConfig - -> SlotNo - -> a - -> MockState blk - -> Except (MockError blk) (MockState blk) + | headerPrevHash hdr == pointHash t = + return $ MockState u c (headerPoint hdr) + | otherwise = + throwError $ MockInvalidHash (headerPrevHash hdr) (pointHash t) + +updateMockUTxO :: + HasMockTxs a => + MockConfig -> + SlotNo -> + a -> + MockState blk -> + Except (MockError blk) (MockState blk) updateMockUTxO cfg now = repeatedlyM (updateMockUTxO1 cfg now) . getMockTxs -updateMockUTxO1 :: forall blk. - MockConfig - -> SlotNo - -> Tx - -> MockState blk - -> Except (MockError blk) (MockState blk) +updateMockUTxO1 :: + forall blk. 
+ MockConfig -> + SlotNo -> + Tx -> + MockState blk -> + Except (MockError blk) (MockState blk) updateMockUTxO1 cfg now tx (MockState u c t) = case hasExpired of - Just e -> throwError e - Nothing -> do - _ <- checkTxSize cfg tx - u' <- withExcept MockUtxoError $ updateUtxo tx u - return $ MockState u' (c `Set.union` confirmed tx) t - where - Tx expiry _ins _outs = tx - - hasExpired :: Maybe (MockError blk) - hasExpired = case expiry of - DoNotExpire -> Nothing - ExpireAtOnsetOf s -> do - guard $ s <= now - Just $ MockExpired s now + Just e -> throwError e + Nothing -> do + _ <- checkTxSize cfg tx + u' <- withExcept MockUtxoError $ updateUtxo tx u + return $ MockState u' (c `Set.union` confirmed tx) t + where + Tx expiry _ins _outs = tx + + hasExpired :: Maybe (MockError blk) + hasExpired = case expiry of + DoNotExpire -> Nothing + ExpireAtOnsetOf s -> do + guard $ s <= now + Just $ MockExpired s now checkTxSize :: MockConfig -> Tx -> Except (MockError blk) ByteSize32 checkTxSize cfg tx @@ -141,8 +149,8 @@ checkTxSize cfg tx , actualTxSize > maxTxSize = throwError $ MockTxSizeTooBig actualTxSize maxTxSize | otherwise = pure actualTxSize - where - actualTxSize = txSize tx + where + actualTxSize = txSize tx {------------------------------------------------------------------------------- Supporting definitions @@ -156,8 +164,9 @@ txSize = ByteSize32 . fromIntegral . BL.length . 
serialise -------------------------------------------------------------------------------} genesisMockState :: AddrDist -> MockState blk -genesisMockState addrDist = MockState { - mockUtxo = genesisUtxo addrDist +genesisMockState addrDist = + MockState + { mockUtxo = genesisUtxo addrDist , mockConfirmed = Set.singleton (hashWithSerialiser toCBOR (genesisTx addrDist)) - , mockTip = GenesisPoint + , mockTip = GenesisPoint } diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/UTxO.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/UTxO.hs index dd22c6e3a6..4d120c5a9f 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/UTxO.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Ledger/UTxO.hs @@ -5,8 +5,8 @@ {-# LANGUAGE PatternSynonyms #-} {-# LANGUAGE UndecidableInstances #-} -module Ouroboros.Consensus.Mock.Ledger.UTxO ( - -- * Basic definitions +module Ouroboros.Consensus.Mock.Ledger.UTxO + ( -- * Basic definitions Addr , Amount , Expiry (..) @@ -16,6 +16,7 @@ module Ouroboros.Consensus.Mock.Ledger.UTxO ( , TxIn , TxOut , Utxo + -- * Computing UTxO , HasMockTxs (..) , UtxoError (..) 
@@ -23,31 +24,32 @@ module Ouroboros.Consensus.Mock.Ledger.UTxO ( , txIns , txOuts , updateUtxo + -- * Genesis , genesisTx , genesisUtxo ) where -import Cardano.Binary (ToCBOR (..)) -import Cardano.Crypto.Hash -import Codec.Serialise (Serialise (..)) -import Control.DeepSeq (NFData (..), force, rwhnf) -import Control.Monad (forM, when) -import Control.Monad.Except (Except, throwError) -import Control.Monad.State (execStateT, get, modify, put) -import Data.Functor (($>)) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Set (Set) -import qualified Data.Set as Set -import GHC.Generics (Generic) -import NoThunks.Class (InspectHeap (..), NoThunks) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Mock.Ledger.Address -import Ouroboros.Consensus.Util (repeatedlyM) -import Ouroboros.Consensus.Util.Condense -import Ouroboros.Consensus.Util.Orphans () -import Ouroboros.Network.Mock.Chain (Chain, toOldestFirst) +import Cardano.Binary (ToCBOR (..)) +import Cardano.Crypto.Hash +import Codec.Serialise (Serialise (..)) +import Control.DeepSeq (NFData (..), force, rwhnf) +import Control.Monad (forM, when) +import Control.Monad.Except (Except, throwError) +import Control.Monad.State (execStateT, get, modify, put) +import Data.Functor (($>)) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Set (Set) +import Data.Set qualified as Set +import GHC.Generics (Generic) +import NoThunks.Class (InspectHeap (..), NoThunks) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Mock.Ledger.Address +import Ouroboros.Consensus.Util (repeatedlyM) +import Ouroboros.Consensus.Util.Condense +import Ouroboros.Consensus.Util.Orphans () +import Ouroboros.Network.Mock.Chain (Chain, toOldestFirst) {------------------------------------------------------------------------------- Basic definitions @@ -56,7 +58,7 @@ import Ouroboros.Network.Mock.Chain (Chain, toOldestFirst) data Expiry = DoNotExpire | ExpireAtOnsetOf 
!SlotNo - deriving stock (Show, Eq, Ord, Generic) + deriving stock (Show, Eq, Ord, Generic) deriving anyclass (Serialise, NoThunks) instance NFData Expiry where rnf = rwhnf @@ -65,13 +67,14 @@ instance Condense Expiry where condense = show data Tx = UnsafeTx Expiry (Set TxIn) [TxOut] - deriving stock (Show, Eq, Ord, Generic) + deriving stock (Show, Eq, Ord, Generic) deriving anyclass (Serialise, NFData) deriving NoThunks via InspectHeap Tx pattern Tx :: Expiry -> Set TxIn -> [TxOut] -> Tx -pattern Tx expiry ins outs <- UnsafeTx expiry ins outs where - Tx expiry ins outs = force $ UnsafeTx expiry ins outs +pattern Tx expiry ins outs <- UnsafeTx expiry ins outs + where + Tx expiry ins outs = force $ UnsafeTx expiry ins outs {-# COMPLETE Tx #-} @@ -81,12 +84,12 @@ instance ToCBOR Tx where instance Condense Tx where condense (Tx expiry ins outs) = condense (expiry, ins, outs) -type Ix = Word +type Ix = Word type Amount = Word -type TxId = Hash SHA256 Tx -type TxIn = (TxId, Ix) -type TxOut = (Addr, Amount) -type Utxo = Map TxIn TxOut +type TxId = Hash SHA256 Tx +type TxIn = (TxId, Ix) +type TxOut = (Addr, Amount) +type Utxo = Map TxIn TxOut {------------------------------------------------------------------------------- Computing UTxO @@ -95,9 +98,11 @@ type Utxo = Map TxIn TxOut data UtxoError = MissingInput TxIn | InputOutputMismatch - Amount -- ^ Input - Amount -- ^ Output - deriving stock (Eq, Show, Generic) + -- | Input + Amount + -- | Output + Amount + deriving stock (Eq, Show, Generic) deriving anyclass (Serialise, NoThunks) instance Condense UtxoError where @@ -105,11 +110,10 @@ instance Condense UtxoError where class HasMockTxs a where -- | The transactions in the order they are to be applied - -- getMockTxs :: a -> [Tx] instance HasMockTxs Tx where - getMockTxs = (:[]) + getMockTxs = (: []) instance HasMockTxs a => HasMockTxs [a] where getMockTxs = concatMap getMockTxs @@ -119,44 +123,45 @@ instance HasMockTxs a => HasMockTxs (Chain a) where txIns :: 
HasMockTxs a => a -> Set TxIn txIns = Set.unions . map each . getMockTxs - where - each (Tx _expiry ins _outs) = ins + where + each (Tx _expiry ins _outs) = ins txOuts :: HasMockTxs a => a -> Utxo txOuts = Map.unions . map each . getMockTxs - where - each tx@(Tx _expiry _ins outs) = - Map.fromList $ zipWith aux [0..] outs - where - aux :: Ix -> TxOut -> (TxIn, TxOut) - aux ix out = ((hashWithSerialiser toCBOR tx, ix), out) + where + each tx@(Tx _expiry _ins outs) = + Map.fromList $ zipWith aux [0 ..] outs + where + aux :: Ix -> TxOut -> (TxIn, TxOut) + aux ix out = ((hashWithSerialiser toCBOR tx, ix), out) -- | @confirmed@ stands for all the transaction hashes present in the given -- collection. confirmed :: HasMockTxs a => a -> Set TxId confirmed = Set.fromList . map (hashWithSerialiser toCBOR) . getMockTxs --- |Update the Utxo with the transactions from the given @a@, by removing the --- inputs and adding the outputs. +-- | Update the Utxo with the transactions from the given @a@, by removing the +-- inputs and adding the outputs. updateUtxo :: HasMockTxs a => a -> Utxo -> Except UtxoError Utxo updateUtxo = repeatedlyM each . 
getMockTxs - where - each tx = execStateT $ do - -- Remove all inputs from the Utxo and calculate the sum of all the - -- input amounts - inputAmount <- fmap sum $ forM (Set.toList (txIns tx)) $ \txIn -> do - u <- get - case Map.updateLookupWithKey (\_ _ -> Nothing) txIn u of - (Nothing, _) -> throwError $ MissingInput txIn - (Just (_addr, amount), u') -> put u' $> amount - - -- Check that the sum of the inputs is equal to the sum of the outputs - let outputAmount = sum $ map snd $ Map.elems $ txOuts tx - when (inputAmount /= outputAmount) $ - throwError $ InputOutputMismatch inputAmount outputAmount - - -- Add the outputs to the Utxo - modify (`Map.union` txOuts tx) + where + each tx = execStateT $ do + -- Remove all inputs from the Utxo and calculate the sum of all the + -- input amounts + inputAmount <- fmap sum $ forM (Set.toList (txIns tx)) $ \txIn -> do + u <- get + case Map.updateLookupWithKey (\_ _ -> Nothing) txIn u of + (Nothing, _) -> throwError $ MissingInput txIn + (Just (_addr, amount), u') -> put u' $> amount + + -- Check that the sum of the inputs is equal to the sum of the outputs + let outputAmount = sum $ map snd $ Map.elems $ txOuts tx + when (inputAmount /= outputAmount) $ + throwError $ + InputOutputMismatch inputAmount outputAmount + + -- Add the outputs to the Utxo + modify (`Map.union` txOuts tx) {------------------------------------------------------------------------------- Genesis @@ -165,7 +170,7 @@ updateUtxo = repeatedlyM each . 
getMockTxs -- | Transaction giving initial stake to the nodes genesisTx :: AddrDist -> Tx genesisTx addrDist = - Tx DoNotExpire mempty [(addr, 1000) | addr <- Map.keys addrDist] + Tx DoNotExpire mempty [(addr, 1000) | addr <- Map.keys addrDist] genesisUtxo :: AddrDist -> Utxo genesisUtxo addrDist = txOuts (genesisTx addrDist) diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node.hs index 71935d6475..0e32357a99 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node.hs @@ -8,66 +8,75 @@ {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Mock.Node ( - CodecConfig (..) +module Ouroboros.Consensus.Mock.Node + ( CodecConfig (..) , simpleBlockForging ) where -import Cardano.Ledger.BaseTypes (unNonZero) -import Codec.Serialise (Serialise) -import qualified Data.Map.Strict as Map -import Data.Void (Void) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import Ouroboros.Consensus.Ledger.SupportsMempool (txForgetValidated) -import Ouroboros.Consensus.Ledger.SupportsProtocol -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Mock.Node.Abstract -import Ouroboros.Consensus.Mock.Node.Serialisation () -import Ouroboros.Consensus.Node.InitStorage -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.ImmutableDB (simpleChunkInfo) -import Ouroboros.Consensus.Util.RedundantConstraints +import Cardano.Ledger.BaseTypes (unNonZero) +import Codec.Serialise (Serialise) +import Data.Map.Strict qualified as Map +import Data.Void (Void) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config 
+import Ouroboros.Consensus.Ledger.SupportsMempool (txForgetValidated) +import Ouroboros.Consensus.Ledger.SupportsProtocol +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Mock.Node.Abstract +import Ouroboros.Consensus.Mock.Node.Serialisation () +import Ouroboros.Consensus.Node.InitStorage +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.ImmutableDB (simpleChunkInfo) +import Ouroboros.Consensus.Util.RedundantConstraints {------------------------------------------------------------------------------- RunNode instance for the mock ledger -------------------------------------------------------------------------------} instance SupportedNetworkProtocolVersion (SimpleBlock SimpleMockCrypto ext) where - supportedNodeToNodeVersions _ = Map.singleton maxBound () + supportedNodeToNodeVersions _ = Map.singleton maxBound () supportedNodeToClientVersions _ = Map.singleton maxBound () latestReleasedNodeVersion = latestReleasedNodeVersionDefault instance NodeInitStorage (SimpleBlock SimpleMockCrypto ext) where - nodeImmutableDbChunkInfo (SimpleStorageConfig secParam) = simpleChunkInfo $ - EpochSize $ 10 * unNonZero (maxRollbacks secParam) + nodeImmutableDbChunkInfo (SimpleStorageConfig secParam) = + simpleChunkInfo $ + EpochSize $ + 10 * unNonZero (maxRollbacks secParam) nodeCheckIntegrity _ _ = True instance BlockSupportsMetrics (SimpleBlock c ext) where isSelfIssued = isSelfIssuedConstUnknown -deriving via SelectViewDiffusionPipelining (SimpleBlock c ext) instance - ( BlockSupportsProtocol (SimpleBlock c ext) - , Show (SelectView (BlockProtocol (SimpleBlock c ext))) - ) => BlockSupportsDiffusionPipelining (SimpleBlock c ext) +deriving via + SelectViewDiffusionPipelining (SimpleBlock c ext) + instance + ( BlockSupportsProtocol (SimpleBlock c ext) + , Show (SelectView (BlockProtocol (SimpleBlock c ext))) + ) => + 
BlockSupportsDiffusionPipelining (SimpleBlock c ext) -instance ConsensusProtocol (BlockProtocol (SimpleBlock c ext)) => BlockSupportsSanityCheck (SimpleBlock c ext) where +instance + ConsensusProtocol (BlockProtocol (SimpleBlock c ext)) => + BlockSupportsSanityCheck (SimpleBlock c ext) + where configAllSecurityParams = pure . configSecurityParam -instance ( LedgerSupportsProtocol (SimpleBlock SimpleMockCrypto ext) - , Show (CannotForge (SimpleBlock SimpleMockCrypto ext)) - , Show (ForgeStateInfo (SimpleBlock SimpleMockCrypto ext)) - , Show (ForgeStateUpdateError (SimpleBlock SimpleMockCrypto ext)) - , Serialise ext - , RunMockBlock SimpleMockCrypto ext - ) => RunNode (SimpleBlock SimpleMockCrypto ext) +instance + ( LedgerSupportsProtocol (SimpleBlock SimpleMockCrypto ext) + , Show (CannotForge (SimpleBlock SimpleMockCrypto ext)) + , Show (ForgeStateInfo (SimpleBlock SimpleMockCrypto ext)) + , Show (ForgeStateUpdateError (SimpleBlock SimpleMockCrypto ext)) + , Serialise ext + , RunMockBlock SimpleMockCrypto ext + ) => + RunNode (SimpleBlock SimpleMockCrypto ext) {------------------------------------------------------------------------------- BlockForging @@ -75,31 +84,32 @@ instance ( LedgerSupportsProtocol (SimpleBlock SimpleMockCrypto ext) -- | Can be used when 'CanBeLeader' is static simpleBlockForging :: - forall c ext m. - ( RunMockBlock c ext - , CannotForge (SimpleBlock c ext) ~ Void - , ForgeStateInfo (SimpleBlock c ext) ~ () - , ForgeStateUpdateError (SimpleBlock c ext) ~ Void - , Monad m - ) - => CanBeLeader (BlockProtocol (SimpleBlock c ext)) - -> ForgeExt c ext - -> BlockForging m (SimpleBlock c ext) -simpleBlockForging aCanBeLeader aForgeExt = BlockForging { - forgeLabel = "simpleBlockForging" - , canBeLeader = aCanBeLeader + forall c ext m. 
+ ( RunMockBlock c ext + , CannotForge (SimpleBlock c ext) ~ Void + , ForgeStateInfo (SimpleBlock c ext) ~ () + , ForgeStateUpdateError (SimpleBlock c ext) ~ Void + , Monad m + ) => + CanBeLeader (BlockProtocol (SimpleBlock c ext)) -> + ForgeExt c ext -> + BlockForging m (SimpleBlock c ext) +simpleBlockForging aCanBeLeader aForgeExt = + BlockForging + { forgeLabel = "simpleBlockForging" + , canBeLeader = aCanBeLeader , updateForgeState = \_ _ _ -> return $ ForgeStateUpdated () - , checkCanForge = \_ _ _ _ _ -> return () - , forgeBlock = \cfg bno slot lst txs proof -> - return - $ forgeSimple - aForgeExt - cfg - bno - slot - lst - (map txForgetValidated txs) - proof + , checkCanForge = \_ _ _ _ _ -> return () + , forgeBlock = \cfg bno slot lst txs proof -> + return $ + forgeSimple + aForgeExt + cfg + bno + slot + lst + (map txForgetValidated txs) + proof } - where - _ = keepRedundantConstraint (Proxy @(ForgeStateUpdateError (SimpleBlock c ext) ~ Void)) + where + _ = keepRedundantConstraint (Proxy @(ForgeStateUpdateError (SimpleBlock c ext) ~ Void)) diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Abstract.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Abstract.hs index 189cd4ec38..7a5287ecb3 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Abstract.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Abstract.hs @@ -1,35 +1,37 @@ {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE MultiParamTypeClasses #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Mock.Node.Abstract ( - CodecConfig (..) +module Ouroboros.Consensus.Mock.Node.Abstract + ( CodecConfig (..) , RunMockBlock (..) 
, constructMockNetworkMagic ) where -import Data.Hashable (hash) -import Data.Time.Calendar (fromGregorian) -import Data.Time.Clock (UTCTime (..)) -import GHC.Stack -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.BlockchainTime (SystemStart (..)) -import Ouroboros.Consensus.Config.SupportsNode -import Ouroboros.Consensus.Mock.Ledger.Block -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Network.Magic (NetworkMagic (..)) +import Data.Hashable (hash) +import Data.Time.Calendar (fromGregorian) +import Data.Time.Clock (UTCTime (..)) +import GHC.Stack +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.BlockchainTime (SystemStart (..)) +import Ouroboros.Consensus.Config.SupportsNode +import Ouroboros.Consensus.Mock.Ledger.Block +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Network.Magic (NetworkMagic (..)) -- | Protocol specific functionality required to run consensus with mock blocks -class ( MockProtocolSpecific c ext - , EncodeDisk (SimpleBlock c ext) (ChainDepState (BlockProtocol (SimpleBlock c ext))) - , DecodeDisk (SimpleBlock c ext) (ChainDepState (BlockProtocol (SimpleBlock c ext))) - ) => RunMockBlock c ext where - mockNetworkMagic - :: BlockConfig (SimpleBlock c ext) - -> NetworkMagic +class + ( MockProtocolSpecific c ext + , EncodeDisk (SimpleBlock c ext) (ChainDepState (BlockProtocol (SimpleBlock c ext))) + , DecodeDisk (SimpleBlock c ext) (ChainDepState (BlockProtocol (SimpleBlock c ext))) + ) => + RunMockBlock c ext + where + mockNetworkMagic :: + BlockConfig (SimpleBlock c ext) -> + NetworkMagic -- | Construct protocol magic ID depending on where in the code this is called -- @@ -37,13 +39,15 @@ class ( MockProtocolSpecific c ext -- different IDs from each other and from regular protocols. 
constructMockNetworkMagic :: HasCallStack => NetworkMagic constructMockNetworkMagic = - NetworkMagic $ fromIntegral $ hash (prettyCallStack callStack) + NetworkMagic $ fromIntegral $ hash (prettyCallStack callStack) -instance RunMockBlock c ext - => ConfigSupportsNode (SimpleBlock c ext) where +instance + RunMockBlock c ext => + ConfigSupportsNode (SimpleBlock c ext) + where getSystemStart = const $ SystemStart dummyDate - where - -- This doesn't matter much - dummyDate = UTCTime (fromGregorian 2019 8 13) 0 + where + -- This doesn't matter much + dummyDate = UTCTime (fromGregorian 2019 8 13) 0 getNetworkMagic = mockNetworkMagic diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/BFT.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/BFT.hs index 544a68998b..07992a9936 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/BFT.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/BFT.hs @@ -1,61 +1,68 @@ -module Ouroboros.Consensus.Mock.Node.BFT ( - MockBftBlock +module Ouroboros.Consensus.Mock.Node.BFT + ( MockBftBlock , blockForgingBft , protocolInfoBft ) where -import Cardano.Crypto.DSIGN -import qualified Data.Map.Strict as Map -import Ouroboros.Consensus.Block.Forging (BlockForging) -import Ouroboros.Consensus.Config -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Mock.Node -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId (CoreNodeId (..), NodeId (..)) -import Ouroboros.Consensus.Protocol.BFT +import Cardano.Crypto.DSIGN +import Data.Map.Strict qualified as Map +import Ouroboros.Consensus.Block.Forging (BlockForging) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import 
Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Mock.Node +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId (CoreNodeId (..), NodeId (..)) +import Ouroboros.Consensus.Protocol.BFT type MockBftBlock = SimpleBftBlock SimpleMockCrypto BftMockCrypto -protocolInfoBft :: NumCoreNodes - -> CoreNodeId - -> SecurityParam - -> HardFork.EraParams - -> ProtocolInfo MockBftBlock +protocolInfoBft :: + NumCoreNodes -> + CoreNodeId -> + SecurityParam -> + HardFork.EraParams -> + ProtocolInfo MockBftBlock protocolInfoBft numCoreNodes nid securityParam eraParams = - ProtocolInfo { - pInfoConfig = TopLevelConfig { - topLevelConfigProtocol = BftConfig { - bftParams = BftParams { - bftNumNodes = numCoreNodes - , bftSecurityParam = securityParam - } - , bftSignKey = signKey nid - , bftVerKeys = Map.fromList [ - (CoreId n, verKey n) - | n <- enumCoreNodes numCoreNodes - ] - } - , topLevelConfigLedger = SimpleLedgerConfig () eraParams defaultMockConfig - , topLevelConfigBlock = SimpleBlockConfig - , topLevelConfigCodec = SimpleCodecConfig - , topLevelConfigStorage = SimpleStorageConfig securityParam + ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = + BftConfig + { bftParams = + BftParams + { bftNumNodes = numCoreNodes + , bftSecurityParam = securityParam + } + , bftSignKey = signKey nid + , bftVerKeys = + Map.fromList + [ (CoreId n, verKey n) + | n <- enumCoreNodes numCoreNodes + ] + } + , topLevelConfigLedger = SimpleLedgerConfig () eraParams defaultMockConfig + , topLevelConfigBlock = SimpleBlockConfig + , topLevelConfigCodec = SimpleCodecConfig + , topLevelConfigStorage = SimpleStorageConfig securityParam , topLevelConfigCheckpoints = emptyCheckpointsMap } - , pInfoInitLedger = ExtLedgerState (genesisSimpleLedgerState addrDist) - (genesisHeaderState ()) - } - where - signKey :: CoreNodeId -> SignKeyDSIGN MockDSIGN - signKey 
(CoreNodeId n) = SignKeyMockDSIGN n + , pInfoInitLedger = + ExtLedgerState + (genesisSimpleLedgerState addrDist) + (genesisHeaderState ()) + } + where + signKey :: CoreNodeId -> SignKeyDSIGN MockDSIGN + signKey (CoreNodeId n) = SignKeyMockDSIGN n - verKey :: CoreNodeId -> VerKeyDSIGN MockDSIGN - verKey (CoreNodeId n) = VerKeyMockDSIGN n + verKey :: CoreNodeId -> VerKeyDSIGN MockDSIGN + verKey (CoreNodeId n) = VerKeyMockDSIGN n - addrDist :: AddrDist - addrDist = mkAddrDist numCoreNodes + addrDist :: AddrDist + addrDist = mkAddrDist numCoreNodes blockForgingBft :: Monad m => CoreNodeId -> [BlockForging m MockBftBlock] blockForgingBft nid = [simpleBlockForging nid forgeBftExt] diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/PBFT.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/PBFT.hs index c2b22c8c40..7b6c8e1282 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/PBFT.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/PBFT.hs @@ -3,61 +3,67 @@ {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Mock.Node.PBFT ( - MockPBftBlock +module Ouroboros.Consensus.Mock.Node.PBFT + ( MockPBftBlock , blockForgingMockPBFT , protocolInfoMockPBFT ) where -import Cardano.Crypto.DSIGN -import qualified Data.Bimap as Bimap -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool (txForgetValidated) -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.PBFT -import qualified 
Ouroboros.Consensus.Protocol.PBFT.State as S +import Cardano.Crypto.DSIGN +import Data.Bimap qualified as Bimap +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool (txForgetValidated) +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.PBFT +import Ouroboros.Consensus.Protocol.PBFT.State qualified as S type MockPBftBlock = SimplePBftBlock SimpleMockCrypto PBftMockCrypto -protocolInfoMockPBFT :: PBftParams - -> HardFork.EraParams - -> ProtocolInfo MockPBftBlock +protocolInfoMockPBFT :: + PBftParams -> + HardFork.EraParams -> + ProtocolInfo MockPBftBlock protocolInfoMockPBFT params eraParams = - ProtocolInfo { - pInfoConfig = TopLevelConfig { - topLevelConfigProtocol = PBftConfig { - pbftParams = params - } - , topLevelConfigLedger = SimpleLedgerConfig ledgerView eraParams defaultMockConfig - , topLevelConfigBlock = SimpleBlockConfig - , topLevelConfigCodec = SimpleCodecConfig - , topLevelConfigStorage = SimpleStorageConfig (pbftSecurityParam params) + ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = + PBftConfig + { pbftParams = params + } + , topLevelConfigLedger = SimpleLedgerConfig ledgerView eraParams defaultMockConfig + , topLevelConfigBlock = SimpleBlockConfig + , topLevelConfigCodec = SimpleCodecConfig + , topLevelConfigStorage = SimpleStorageConfig (pbftSecurityParam params) , topLevelConfigCheckpoints = emptyCheckpointsMap } - , pInfoInitLedger = ExtLedgerState (genesisSimpleLedgerState addrDist) - (genesisHeaderState S.empty) - } - where - ledgerView :: PBftLedgerView PBftMockCrypto - ledgerView = PBftLedgerView $ Bimap.fromList [ - (PBftMockVerKeyHash (verKey n), PBftMockVerKeyHash 
(verKey n)) + , pInfoInitLedger = + ExtLedgerState + (genesisSimpleLedgerState addrDist) + (genesisHeaderState S.empty) + } + where + ledgerView :: PBftLedgerView PBftMockCrypto + ledgerView = + PBftLedgerView $ + Bimap.fromList + [ (PBftMockVerKeyHash (verKey n), PBftMockVerKeyHash (verKey n)) | n <- enumCoreNodes (pbftNumNodes params) ] - verKey :: CoreNodeId -> VerKeyDSIGN MockDSIGN - verKey (CoreNodeId n) = VerKeyMockDSIGN n + verKey :: CoreNodeId -> VerKeyDSIGN MockDSIGN + verKey (CoreNodeId n) = VerKeyMockDSIGN n - addrDist :: AddrDist - addrDist = mkAddrDist (pbftNumNodes params) + addrDist :: AddrDist + addrDist = mkAddrDist (pbftNumNodes params) {------------------------------------------------------------------------------- BlockForging @@ -65,49 +71,51 @@ protocolInfoMockPBFT params eraParams = -- blockForgingMockPBFT :: Monad m => CoreNodeId -> [BlockForging m MockPBftBlock] blockForgingMockPBFT nid = [pbftBlockForging canBeLeader] - where - canBeLeader :: PBftCanBeLeader PBftMockCrypto - canBeLeader = PBftCanBeLeader { - pbftCanBeLeaderCoreNodeId = nid - , pbftCanBeLeaderSignKey = signKey nid - -- For Mock PBFT, we use our key as the genesis key. - , pbftCanBeLeaderDlgCert = (verKey nid, verKey nid) - } + where + canBeLeader :: PBftCanBeLeader PBftMockCrypto + canBeLeader = + PBftCanBeLeader + { pbftCanBeLeaderCoreNodeId = nid + , pbftCanBeLeaderSignKey = signKey nid + , -- For Mock PBFT, we use our key as the genesis key. 
+ pbftCanBeLeaderDlgCert = (verKey nid, verKey nid) + } - signKey :: CoreNodeId -> SignKeyDSIGN MockDSIGN - signKey (CoreNodeId n) = SignKeyMockDSIGN n + signKey :: CoreNodeId -> SignKeyDSIGN MockDSIGN + signKey (CoreNodeId n) = SignKeyMockDSIGN n - verKey :: CoreNodeId -> VerKeyDSIGN MockDSIGN - verKey (CoreNodeId n) = VerKeyMockDSIGN n + verKey :: CoreNodeId -> VerKeyDSIGN MockDSIGN + verKey (CoreNodeId n) = VerKeyMockDSIGN n pbftBlockForging :: - ( SimpleCrypto c - , PBftCrypto c' - , Signable (PBftDSIGN c') (SignedSimplePBft c c') - , ContextDSIGN (PBftDSIGN c') ~ () - , Monad m - ) - => PBftCanBeLeader c' - -> BlockForging m (SimplePBftBlock c c') -pbftBlockForging canBeLeader = BlockForging { - forgeLabel = "pbftBlockForging" + ( SimpleCrypto c + , PBftCrypto c' + , Signable (PBftDSIGN c') (SignedSimplePBft c c') + , ContextDSIGN (PBftDSIGN c') ~ () + , Monad m + ) => + PBftCanBeLeader c' -> + BlockForging m (SimplePBftBlock c c') +pbftBlockForging canBeLeader = + BlockForging + { forgeLabel = "pbftBlockForging" , canBeLeader , updateForgeState = \_ _ _ -> return $ ForgeStateUpdated () - , checkCanForge = \cfg slot tickedPBftState _isLeader -> - return $ - pbftCheckCanForge - (configConsensus cfg) - canBeLeader - slot - tickedPBftState - , forgeBlock = \cfg slot bno lst txs proof -> - return - $ forgeSimple - forgePBftExt - cfg - slot - bno - lst - (map txForgetValidated txs) - proof + , checkCanForge = \cfg slot tickedPBftState _isLeader -> + return $ + pbftCheckCanForge + (configConsensus cfg) + canBeLeader + slot + tickedPBftState + , forgeBlock = \cfg slot bno lst txs proof -> + return $ + forgeSimple + forgePBftExt + cfg + slot + bno + lst + (map txForgetValidated txs) + proof } diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Praos.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Praos.hs index 0b3ce9a5d9..5109258080 100644 --- 
a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Praos.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Praos.hs @@ -1,139 +1,148 @@ {-# LANGUAGE BangPatterns #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE OverloadedStrings #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Mock.Node.Praos ( - MockPraosBlock +module Ouroboros.Consensus.Mock.Node.Praos + ( MockPraosBlock , blockForgingPraos , protocolInfoPraos ) where -import Cardano.Crypto.KES -import Cardano.Crypto.VRF -import Data.Bifunctor (second) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Numeric.Natural (Natural) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Config -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Ledger.SupportsMempool (txForgetValidated) -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Mock.Protocol.Praos -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Util.IOLike +import Cardano.Crypto.KES +import Cardano.Crypto.VRF +import Data.Bifunctor (second) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Numeric.Natural (Natural) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.HeaderValidation +import Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Ledger.SupportsMempool (txForgetValidated) +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Mock.Protocol.Praos +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Util.IOLike type MockPraosBlock = SimplePraosBlock SimpleMockCrypto 
PraosMockCrypto -protocolInfoPraos :: NumCoreNodes - -> CoreNodeId - -> PraosParams - -> HardFork.EraParams - -> Natural - -> PraosEvolvingStake - -> ProtocolInfo MockPraosBlock +protocolInfoPraos :: + NumCoreNodes -> + CoreNodeId -> + PraosParams -> + HardFork.EraParams -> + Natural -> + PraosEvolvingStake -> + ProtocolInfo MockPraosBlock protocolInfoPraos numCoreNodes nid params eraParams eta0 evolvingStakeDist = - ProtocolInfo { - pInfoConfig = TopLevelConfig { - topLevelConfigProtocol = PraosConfig { - praosParams = params - , praosSignKeyVRF = signKeyVRF nid - , praosInitialEta = eta0 - , praosInitialStake = genesisStakeDist addrDist - , praosEvolvingStake = evolvingStakeDist - , praosVerKeys = verKeys - } - , topLevelConfigLedger = SimpleLedgerConfig addrDist eraParams defaultMockConfig - , topLevelConfigBlock = SimpleBlockConfig - , topLevelConfigCodec = SimpleCodecConfig - , topLevelConfigStorage = SimpleStorageConfig (praosSecurityParam params) + ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = + PraosConfig + { praosParams = params + , praosSignKeyVRF = signKeyVRF nid + , praosInitialEta = eta0 + , praosInitialStake = genesisStakeDist addrDist + , praosEvolvingStake = evolvingStakeDist + , praosVerKeys = verKeys + } + , topLevelConfigLedger = SimpleLedgerConfig addrDist eraParams defaultMockConfig + , topLevelConfigBlock = SimpleBlockConfig + , topLevelConfigCodec = SimpleCodecConfig + , topLevelConfigStorage = SimpleStorageConfig (praosSecurityParam params) , topLevelConfigCheckpoints = emptyCheckpointsMap } - , pInfoInitLedger = ExtLedgerState { - ledgerState = genesisSimpleLedgerState addrDist + , pInfoInitLedger = + ExtLedgerState + { ledgerState = genesisSimpleLedgerState addrDist , headerState = genesisHeaderState (PraosChainDepState []) } - } - where - signKeyVRF :: CoreNodeId -> SignKeyVRF MockVRF - signKeyVRF (CoreNodeId n) = SignKeyMockVRF n + } + where + signKeyVRF :: CoreNodeId -> SignKeyVRF MockVRF + signKeyVRF 
(CoreNodeId n) = SignKeyMockVRF n - verKeyVRF :: CoreNodeId -> VerKeyVRF MockVRF - verKeyVRF (CoreNodeId n) = VerKeyMockVRF n + verKeyVRF :: CoreNodeId -> VerKeyVRF MockVRF + verKeyVRF (CoreNodeId n) = VerKeyMockVRF n - verKeyKES :: CoreNodeId -> VerKeyKES (MockKES t) - verKeyKES (CoreNodeId n) = VerKeyMockKES n + verKeyKES :: CoreNodeId -> VerKeyKES (MockKES t) + verKeyKES (CoreNodeId n) = VerKeyMockKES n - addrDist :: AddrDist - addrDist = mkAddrDist numCoreNodes + addrDist :: AddrDist + addrDist = mkAddrDist numCoreNodes - verKeys :: Map CoreNodeId (VerKeyKES (MockKES t), VerKeyVRF MockVRF) - verKeys = Map.fromList + verKeys :: Map CoreNodeId (VerKeyKES (MockKES t), VerKeyVRF MockVRF) + verKeys = + Map.fromList [ (nid', (kesKey, vrfKey)) | nid' <- enumCoreNodes numCoreNodes , let !kesKey = verKeyKES nid' !vrfKey = verKeyVRF nid' ] -blockForgingPraos :: IOLike m - => NumCoreNodes - -> CoreNodeId - -> m [BlockForging m MockPraosBlock] +blockForgingPraos :: + IOLike m => + NumCoreNodes -> + CoreNodeId -> + m [BlockForging m MockPraosBlock] blockForgingPraos numCoreNodes nid = sequence [praosBlockForging nid initHotKey] - where - verKeyVRF :: CoreNodeId -> VerKeyVRF MockVRF - verKeyVRF (CoreNodeId n) = VerKeyMockVRF n + where + verKeyVRF :: CoreNodeId -> VerKeyVRF MockVRF + verKeyVRF (CoreNodeId n) = VerKeyMockVRF n - verKeyKES :: CoreNodeId -> VerKeyKES (MockKES t) - verKeyKES (CoreNodeId n) = VerKeyMockKES n + verKeyKES :: CoreNodeId -> VerKeyKES (MockKES t) + verKeyKES (CoreNodeId n) = VerKeyMockKES n - verKeys :: Map CoreNodeId (VerKeyKES (MockKES t), VerKeyVRF MockVRF) - verKeys = Map.fromList + verKeys :: Map CoreNodeId (VerKeyKES (MockKES t), VerKeyVRF MockVRF) + verKeys = + Map.fromList [ (nid', (kesKey, vrfKey)) | nid' <- enumCoreNodes numCoreNodes , let !kesKey = verKeyKES nid' !vrfKey = verKeyVRF nid' ] - initHotKey :: HotKey PraosMockCrypto - initHotKey = - HotKey + initHotKey :: HotKey PraosMockCrypto + initHotKey = + HotKey + 0 + ( 
UnsoundPureSignKeyMockKES + -- key ID + (fst $ verKeys Map.! nid) + -- KES initial slot 0 - (UnsoundPureSignKeyMockKES - -- key ID - (fst $ verKeys Map.! nid) - -- KES initial slot - 0) - + ) praosBlockForging :: - IOLike m - => CoreNodeId - -> HotKey PraosMockCrypto - -> m (BlockForging m MockPraosBlock) + IOLike m => + CoreNodeId -> + HotKey PraosMockCrypto -> + m (BlockForging m MockPraosBlock) praosBlockForging cid initHotKey = do - varHotKey <- newMVar initHotKey - return $ BlockForging { - forgeLabel = "praosBlockForging" - , canBeLeader = cid - , updateForgeState = \_ sno _ -> modifyMVar varHotKey $ - pure - . second forgeStateUpdateInfoFromUpdateInfo - . evolveKey sno - , checkCanForge = \_ _ _ _ _ -> return () - , forgeBlock = \cfg bno sno tickedLedgerSt txs isLeader -> do - hotKey <- readMVar varHotKey - return $ - forgeSimple - (forgePraosExt hotKey) - cfg - bno sno - tickedLedgerSt - (map txForgetValidated txs) - isLeader + varHotKey <- newMVar initHotKey + return $ + BlockForging + { forgeLabel = "praosBlockForging" + , canBeLeader = cid + , updateForgeState = \_ sno _ -> + modifyMVar varHotKey $ + pure + . second forgeStateUpdateInfoFromUpdateInfo + . 
evolveKey sno + , checkCanForge = \_ _ _ _ _ -> return () + , forgeBlock = \cfg bno sno tickedLedgerSt txs isLeader -> do + hotKey <- readMVar varHotKey + return $ + forgeSimple + (forgePraosExt hotKey) + cfg + bno + sno + tickedLedgerSt + (map txForgetValidated txs) + isLeader } diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/PraosRule.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/PraosRule.hs index 3c715b5680..00b273995a 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/PraosRule.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/PraosRule.hs @@ -1,74 +1,82 @@ -- | Test the Praos chain selection rule but with explicit leader schedule -module Ouroboros.Consensus.Mock.Node.PraosRule ( - MockPraosRuleBlock +module Ouroboros.Consensus.Mock.Node.PraosRule + ( MockPraosRuleBlock , blockForgingPraosRule , protocolInfoPraosRule ) where -import Cardano.Crypto.KES -import Cardano.Crypto.VRF -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Ouroboros.Consensus.Block.Forging (BlockForging) -import Ouroboros.Consensus.Config -import qualified Ouroboros.Consensus.HardFork.History as HardFork -import Ouroboros.Consensus.HeaderValidation -import Ouroboros.Consensus.Ledger.Extended -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Mock.Node -import Ouroboros.Consensus.Mock.Protocol.LeaderSchedule -import Ouroboros.Consensus.Mock.Protocol.Praos -import Ouroboros.Consensus.Node.ProtocolInfo -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Cardano.Crypto.KES +import Cardano.Crypto.VRF +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Ouroboros.Consensus.Block.Forging (BlockForging) +import Ouroboros.Consensus.Config +import Ouroboros.Consensus.HardFork.History qualified as HardFork +import Ouroboros.Consensus.HeaderValidation +import 
Ouroboros.Consensus.Ledger.Extended +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Mock.Node +import Ouroboros.Consensus.Mock.Protocol.LeaderSchedule +import Ouroboros.Consensus.Mock.Protocol.Praos +import Ouroboros.Consensus.Node.ProtocolInfo +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) type MockPraosRuleBlock = SimplePraosRuleBlock SimpleMockCrypto -protocolInfoPraosRule :: NumCoreNodes - -> CoreNodeId - -> PraosParams - -> HardFork.EraParams - -> LeaderSchedule - -> PraosEvolvingStake - -> ProtocolInfo MockPraosRuleBlock -protocolInfoPraosRule numCoreNodes - nid - params - eraParams - schedule - evolvingStake = - ProtocolInfo { - pInfoConfig = TopLevelConfig { - topLevelConfigProtocol = WLSConfig { - wlsConfigSchedule = schedule - , wlsConfigP = PraosConfig - { praosParams = params - , praosSignKeyVRF = NeverUsedSignKeyVRF - , praosInitialEta = 0 - , praosInitialStake = genesisStakeDist addrDist - , praosEvolvingStake = evolvingStake - , praosVerKeys = verKeys - } - , wlsConfigNodeId = nid +protocolInfoPraosRule :: + NumCoreNodes -> + CoreNodeId -> + PraosParams -> + HardFork.EraParams -> + LeaderSchedule -> + PraosEvolvingStake -> + ProtocolInfo MockPraosRuleBlock +protocolInfoPraosRule + numCoreNodes + nid + params + eraParams + schedule + evolvingStake = + ProtocolInfo + { pInfoConfig = + TopLevelConfig + { topLevelConfigProtocol = + WLSConfig + { wlsConfigSchedule = schedule + , wlsConfigP = + PraosConfig + { praosParams = params + , praosSignKeyVRF = NeverUsedSignKeyVRF + , praosInitialEta = 0 + , praosInitialStake = genesisStakeDist addrDist + , praosEvolvingStake = evolvingStake + , praosVerKeys = verKeys + } + , wlsConfigNodeId = nid + } + , topLevelConfigLedger = SimpleLedgerConfig () eraParams defaultMockConfig + , topLevelConfigBlock = SimpleBlockConfig + , topLevelConfigCodec = SimpleCodecConfig + , topLevelConfigStorage = SimpleStorageConfig (praosSecurityParam params) + , topLevelConfigCheckpoints = emptyCheckpointsMap 
} - , topLevelConfigLedger = SimpleLedgerConfig () eraParams defaultMockConfig - , topLevelConfigBlock = SimpleBlockConfig - , topLevelConfigCodec = SimpleCodecConfig - , topLevelConfigStorage = SimpleStorageConfig (praosSecurityParam params) - , topLevelConfigCheckpoints = emptyCheckpointsMap - } - , pInfoInitLedger = ExtLedgerState - { ledgerState = genesisSimpleLedgerState addrDist - , headerState = genesisHeaderState () - } - } - where + , pInfoInitLedger = + ExtLedgerState + { ledgerState = genesisSimpleLedgerState addrDist + , headerState = genesisHeaderState () + } + } + where addrDist :: AddrDist addrDist = mkAddrDist numCoreNodes verKeys :: Map CoreNodeId (VerKeyKES NeverKES, VerKeyVRF NeverVRF) - verKeys = Map.fromList [ (nid', (NeverUsedVerKeyKES, NeverUsedVerKeyVRF)) - | nid' <- enumCoreNodes numCoreNodes - ] + verKeys = + Map.fromList + [ (nid', (NeverUsedVerKeyKES, NeverUsedVerKeyVRF)) + | nid' <- enumCoreNodes numCoreNodes + ] blockForgingPraosRule :: Monad m => [BlockForging m MockPraosRuleBlock] blockForgingPraosRule = [simpleBlockForging () forgePraosRuleExt] diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Serialisation.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Serialisation.hs index 8748f20059..f31907bba6 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Serialisation.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Node/Serialisation.hs @@ -5,30 +5,32 @@ {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE UndecidableInstances #-} - {-# OPTIONS_GHC -Wno-orphans #-} -module Ouroboros.Consensus.Mock.Node.Serialisation ( - MockBlock +module Ouroboros.Consensus.Mock.Node.Serialisation + ( MockBlock , NestedCtxt_ (..) 
) where -import Codec.Serialise (Serialise, decode, encode, serialise) -import qualified Data.ByteString.Lazy as Lazy -import Data.Typeable (Typeable) -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.HeaderValidation (AnnTip, - defaultDecodeAnnTip, defaultEncodeAnnTip) -import Ouroboros.Consensus.Ledger.Abstract -import Ouroboros.Consensus.Ledger.Query -import Ouroboros.Consensus.Ledger.SupportsMempool -import Ouroboros.Consensus.Mock.Ledger -import Ouroboros.Consensus.Mock.Node.Abstract -import Ouroboros.Consensus.Node.NetworkProtocolVersion -import Ouroboros.Consensus.Node.Run -import Ouroboros.Consensus.Node.Serialisation -import Ouroboros.Consensus.Storage.Serialisation -import Ouroboros.Network.Block (Serialised) +import Codec.Serialise (Serialise, decode, encode, serialise) +import Data.ByteString.Lazy qualified as Lazy +import Data.Typeable (Typeable) +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.HeaderValidation + ( AnnTip + , defaultDecodeAnnTip + , defaultEncodeAnnTip + ) +import Ouroboros.Consensus.Ledger.Abstract +import Ouroboros.Consensus.Ledger.Query +import Ouroboros.Consensus.Ledger.SupportsMempool +import Ouroboros.Consensus.Mock.Ledger +import Ouroboros.Consensus.Mock.Node.Abstract +import Ouroboros.Consensus.Node.NetworkProtocolVersion +import Ouroboros.Consensus.Node.Run +import Ouroboros.Consensus.Node.Serialisation +import Ouroboros.Consensus.Storage.Serialisation +import Ouroboros.Network.Block (Serialised) -- | Local shorthand to make the instances more readable type MockBlock ext = SimpleBlock SimpleMockCrypto ext @@ -42,8 +44,9 @@ type MockBlock ext = SimpleBlock SimpleMockCrypto ext instance (Serialise ext, Typeable ext) => HasBinaryBlockInfo (MockBlock ext) where getBinaryBlockInfo = simpleBlockBinaryBlockInfo -instance (Serialise ext, RunMockBlock SimpleMockCrypto ext) - => SerialiseDiskConstraints (MockBlock ext) +instance + (Serialise ext, RunMockBlock SimpleMockCrypto ext) => + SerialiseDiskConstraints 
(MockBlock ext) instance Serialise ext => EncodeDisk (MockBlock ext) (MockBlock ext) instance Serialise ext => DecodeDisk (MockBlock ext) (Lazy.ByteString -> MockBlock ext) where @@ -70,15 +73,16 @@ instance DecodeDisk (MockBlock ext) (AnnTip (MockBlock ext)) where possible. -------------------------------------------------------------------------------} -instance HasNetworkProtocolVersion (MockBlock ext) where - -- Use defaults +instance HasNetworkProtocolVersion (MockBlock ext) + +-- Use defaults instance Serialise ext => SerialiseNodeToNodeConstraints (MockBlock ext) where estimateBlockSize hdr = - 7 {- CBOR-in-CBOR -} + 1 {- encodeListLen 2 -} + hdrSize + bodySize - where - hdrSize = fromIntegral (Lazy.length (serialise hdr)) - bodySize = simpleBodySize (simpleHeaderStd hdr) + 7 {- CBOR-in-CBOR -} + 1 {- encodeListLen 2 -} + hdrSize + bodySize + where + hdrSize = fromIntegral (Lazy.length (serialise hdr)) + bodySize = simpleBodySize (simpleHeaderStd hdr) instance Serialise ext => SerialiseNodeToNode (MockBlock ext) (MockBlock ext) where encodeNodeToNode _ _ = defaultEncodeCBORinCBOR @@ -102,8 +106,13 @@ instance SerialiseNodeToNode (MockBlock ext) (GenTxId (MockBlock ext)) possible. 
-------------------------------------------------------------------------------} -instance (Serialise ext, Typeable ext, Serialise (MockLedgerConfig SimpleMockCrypto ext), MockProtocolSpecific SimpleMockCrypto ext) - => SerialiseNodeToClientConstraints (MockBlock ext) +instance + ( Serialise ext + , Typeable ext + , Serialise (MockLedgerConfig SimpleMockCrypto ext) + , MockProtocolSpecific SimpleMockCrypto ext + ) => + SerialiseNodeToClientConstraints (MockBlock ext) instance Serialise ext => SerialiseNodeToClient (MockBlock ext) (MockBlock ext) where encodeNodeToClient _ _ = defaultEncodeCBORinCBOR @@ -141,8 +150,8 @@ instance TrivialDependency (NestedCtxt_ (SimpleBlock c ext) f) where instance SameDepIndex (NestedCtxt_ (SimpleBlock c ext) f) instance HasNestedContent f (SimpleBlock c ext) -instance Serialise ext => ReconstructNestedCtxt Header (MockBlock ext) +instance Serialise ext => ReconstructNestedCtxt Header (MockBlock ext) instance Serialise ext => EncodeDiskDepIx (NestedCtxt Header) (MockBlock ext) -instance Serialise ext => EncodeDiskDep (NestedCtxt Header) (MockBlock ext) +instance Serialise ext => EncodeDiskDep (NestedCtxt Header) (MockBlock ext) instance Serialise ext => DecodeDiskDepIx (NestedCtxt Header) (MockBlock ext) -instance Serialise ext => DecodeDiskDep (NestedCtxt Header) (MockBlock ext) +instance Serialise ext => DecodeDiskDep (NestedCtxt Header) (MockBlock ext) diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Protocol/LeaderSchedule.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Protocol/LeaderSchedule.hs index 8b69ac01da..b5aaaefc85 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Protocol/LeaderSchedule.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Protocol/LeaderSchedule.hs @@ -3,20 +3,20 @@ {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE TypeFamilies #-} -module Ouroboros.Consensus.Mock.Protocol.LeaderSchedule ( - 
ConsensusConfig (..) +module Ouroboros.Consensus.Mock.Protocol.LeaderSchedule + ( ConsensusConfig (..) , LeaderSchedule (..) , WithLeaderSchedule , leaderScheduleFor ) where -import qualified Data.Map.Strict as Map -import GHC.Generics (Generic) -import NoThunks.Class (NoThunks) -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.LeaderSchedule -import Ouroboros.Consensus.Ticked +import Data.Map.Strict qualified as Map +import GHC.Generics (Generic) +import NoThunks.Class (NoThunks) +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.LeaderSchedule +import Ouroboros.Consensus.Ticked {------------------------------------------------------------------------------- ConsensusProtocol instance that overrides leader selection @@ -28,35 +28,36 @@ import Ouroboros.Consensus.Ticked -- | Extension of protocol @p@ by a static leader schedule. 
data WithLeaderSchedule p -data instance ConsensusConfig (WithLeaderSchedule p) = WLSConfig { - wlsConfigSchedule :: !LeaderSchedule - , wlsConfigP :: !(ConsensusConfig p) - , wlsConfigNodeId :: !CoreNodeId - } - deriving (Generic) +data instance ConsensusConfig (WithLeaderSchedule p) = WLSConfig + { wlsConfigSchedule :: !LeaderSchedule + , wlsConfigP :: !(ConsensusConfig p) + , wlsConfigNodeId :: !CoreNodeId + } + deriving Generic instance ConsensusProtocol p => ConsensusProtocol (WithLeaderSchedule p) where - type SelectView (WithLeaderSchedule p) = SelectView p + type SelectView (WithLeaderSchedule p) = SelectView p type ChainDepState (WithLeaderSchedule p) = () - type LedgerView (WithLeaderSchedule p) = () + type LedgerView (WithLeaderSchedule p) = () type ValidationErr (WithLeaderSchedule p) = () - type IsLeader (WithLeaderSchedule p) = () - type ValidateView (WithLeaderSchedule p) = () - type CanBeLeader (WithLeaderSchedule p) = () + type IsLeader (WithLeaderSchedule p) = () + type ValidateView (WithLeaderSchedule p) = () + type CanBeLeader (WithLeaderSchedule p) = () protocolSecurityParam = protocolSecurityParam . 
wlsConfigP checkIsLeader WLSConfig{..} () slot _ = case Map.lookup slot $ getLeaderSchedule wlsConfigSchedule of - Nothing -> error $ "WithLeaderSchedule: missing slot " ++ show slot - Just nids - | wlsConfigNodeId `elem` nids -> Just () - | otherwise -> Nothing + Nothing -> error $ "WithLeaderSchedule: missing slot " ++ show slot + Just nids + | wlsConfigNodeId `elem` nids -> Just () + | otherwise -> Nothing - tickChainDepState _ _ _ _ = TickedTrivial - updateChainDepState _ _ _ _ = return () + tickChainDepState _ _ _ _ = TickedTrivial + updateChainDepState _ _ _ _ = return () reupdateChainDepState _ _ _ _ = () -instance ConsensusProtocol p - => NoThunks (ConsensusConfig (WithLeaderSchedule p)) +instance + ConsensusProtocol p => + NoThunks (ConsensusConfig (WithLeaderSchedule p)) diff --git a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Protocol/Praos.hs b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Protocol/Praos.hs index 232eab4df5..793db8cac3 100644 --- a/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Protocol/Praos.hs +++ b/ouroboros-consensus/src/unstable-mock-block/Ouroboros/Consensus/Mock/Protocol/Praos.hs @@ -17,8 +17,8 @@ {-# LANGUAGE UndecidableSuperClasses #-} -- | Proof of concept implementation of Praos -module Ouroboros.Consensus.Mock.Protocol.Praos ( - HotKey (..) +module Ouroboros.Consensus.Mock.Protocol.Praos + ( HotKey (..) , HotKeyEvolutionError (..) , Praos , PraosChainDepState (..) @@ -29,6 +29,7 @@ module Ouroboros.Consensus.Mock.Protocol.Praos ( , emptyPraosEvolvingStake , evolveKey , forgePraosFields + -- * Tags , PraosCrypto (..) , PraosMockCrypto @@ -36,47 +37,52 @@ module Ouroboros.Consensus.Mock.Protocol.Praos ( , PraosValidateView (..) , PraosValidationError (..) , praosValidateView + -- * Type instances , BlockInfo (..) , ConsensusConfig (..) , Ticked (..) 
) where -import Cardano.Binary (FromCBOR (..), ToCBOR (..), serialize') -import Cardano.Crypto.DSIGN.Ed25519 (Ed25519DSIGN) -import Cardano.Crypto.Hash.Class (HashAlgorithm (..), hashToBytes, - hashWithSerialiser, sizeHash) -import Cardano.Crypto.Hash.SHA256 (SHA256) -import Cardano.Crypto.KES.Class -import Cardano.Crypto.KES.Mock -import Cardano.Crypto.KES.Simple -import Cardano.Crypto.Util -import Cardano.Crypto.VRF.Class -import Cardano.Crypto.VRF.Mock (MockVRF) -import Cardano.Crypto.VRF.Simple (SimpleVRF) -import Cardano.Slotting.EpochInfo -import Codec.CBOR.Decoding (decodeListLenOf) -import Codec.CBOR.Encoding (encodeListLen) -import Codec.Serialise (Serialise (..)) -import Control.Monad (unless) -import Control.Monad.Except (throwError) -import Data.Kind (Type) -import Data.Map.Strict (Map) -import qualified Data.Map.Strict as Map -import Data.Maybe (fromMaybe) -import Data.Typeable -import Data.Word (Word64) -import GHC.Generics (Generic) -import GHC.Stack (HasCallStack) -import NoThunks.Class (NoThunks (..)) -import Numeric.Natural -import Ouroboros.Consensus.Block -import Ouroboros.Consensus.Mock.Ledger.Stake -import Ouroboros.Consensus.NodeId (CoreNodeId (..)) -import Ouroboros.Consensus.Protocol.Abstract -import Ouroboros.Consensus.Protocol.Signed -import Ouroboros.Consensus.Util.Condense -import Test.Cardano.Slotting.Numeric () +import Cardano.Binary (FromCBOR (..), ToCBOR (..), serialize') +import Cardano.Crypto.DSIGN.Ed25519 (Ed25519DSIGN) +import Cardano.Crypto.Hash.Class + ( HashAlgorithm (..) 
+ , hashToBytes + , hashWithSerialiser + , sizeHash + ) +import Cardano.Crypto.Hash.SHA256 (SHA256) +import Cardano.Crypto.KES.Class +import Cardano.Crypto.KES.Mock +import Cardano.Crypto.KES.Simple +import Cardano.Crypto.Util +import Cardano.Crypto.VRF.Class +import Cardano.Crypto.VRF.Mock (MockVRF) +import Cardano.Crypto.VRF.Simple (SimpleVRF) +import Cardano.Slotting.EpochInfo +import Codec.CBOR.Decoding (decodeListLenOf) +import Codec.CBOR.Encoding (encodeListLen) +import Codec.Serialise (Serialise (..)) +import Control.Monad (unless) +import Control.Monad.Except (throwError) +import Data.Kind (Type) +import Data.Map.Strict (Map) +import Data.Map.Strict qualified as Map +import Data.Maybe (fromMaybe) +import Data.Typeable +import Data.Word (Word64) +import GHC.Generics (Generic) +import GHC.Stack (HasCallStack) +import NoThunks.Class (NoThunks (..)) +import Numeric.Natural +import Ouroboros.Consensus.Block +import Ouroboros.Consensus.Mock.Ledger.Stake +import Ouroboros.Consensus.NodeId (CoreNodeId (..)) +import Ouroboros.Consensus.Protocol.Abstract +import Ouroboros.Consensus.Protocol.Signed +import Ouroboros.Consensus.Util.Condense +import Test.Cardano.Slotting.Numeric () -- The Praos paper can be located at https://ia.cr/2017/573 -- @@ -149,29 +155,29 @@ import Test.Cardano.Slotting.Numeric () -------------------------------------------------------------------------------} -- | The fields that Praos required in the header -data PraosFields crypto typeBeingSigned = PraosFields { - praosSignature :: SignedKES (PraosKES crypto) typeBeingSigned - , praosExtraFields :: PraosExtraFields crypto - } - deriving (Generic) +data PraosFields crypto typeBeingSigned = PraosFields + { praosSignature :: SignedKES (PraosKES crypto) typeBeingSigned + , praosExtraFields :: PraosExtraFields crypto + } + deriving Generic instance (PraosCrypto c, Typeable toSign) => NoThunks (PraosFields c toSign) deriving instance PraosCrypto c => Show (PraosFields c toSign) -deriving instance 
PraosCrypto c => Eq (PraosFields c toSign) +deriving instance PraosCrypto c => Eq (PraosFields c toSign) -- | Fields that should be included in the signature -data PraosExtraFields c = PraosExtraFields { - praosCreator :: CoreNodeId - , praosRho :: CertifiedVRF (PraosVRF c) (Natural, SlotNo, VRFType) - , praosY :: CertifiedVRF (PraosVRF c) (Natural, SlotNo, VRFType) - } - deriving (Generic) +data PraosExtraFields c = PraosExtraFields + { praosCreator :: CoreNodeId + , praosRho :: CertifiedVRF (PraosVRF c) (Natural, SlotNo, VRFType) + , praosY :: CertifiedVRF (PraosVRF c) (Natural, SlotNo, VRFType) + } + deriving Generic instance PraosCrypto c => NoThunks (PraosExtraFields c) deriving instance PraosCrypto c => Show (PraosExtraFields c) -deriving instance PraosCrypto c => Eq (PraosExtraFields c) +deriving instance PraosCrypto c => Eq (PraosExtraFields c) -- | A validate view is an association from the (@signed@) value to the -- @PraosFields@ that contains the signature that sign it. @@ -179,18 +185,20 @@ deriving instance PraosCrypto c => Eq (PraosExtraFields c) -- In this mock implementation, this could have been simplified to use -- @SignedSimplePraos@ but from the consensus point of view, it is not relevant -- which actual value is being signed, that's why we use the existential. -data PraosValidateView c = - forall signed. Cardano.Crypto.KES.Class.Signable (PraosKES c) signed - => PraosValidateView (PraosFields c signed) signed +data PraosValidateView c + = forall signed. 
+ Cardano.Crypto.KES.Class.Signable (PraosKES c) signed => + PraosValidateView (PraosFields c signed) signed -- | Convenience constructor for 'PraosValidateView' -praosValidateView :: ( SignedHeader hdr - , Cardano.Crypto.KES.Class.Signable (PraosKES c) (Signed hdr) - ) - => (hdr -> PraosFields c (Signed hdr)) - -> (hdr -> PraosValidateView c) +praosValidateView :: + ( SignedHeader hdr + , Cardano.Crypto.KES.Class.Signable (PraosKES c) (Signed hdr) + ) => + (hdr -> PraosFields c (Signed hdr)) -> + (hdr -> PraosValidateView c) praosValidateView getFields hdr = - PraosValidateView (getFields hdr) (headerSigned hdr) + PraosValidateView (getFields hdr) (headerSigned hdr) {------------------------------------------------------------------------------- Forging @@ -200,21 +208,22 @@ praosValidateView getFields hdr = -- -- A key will be poisoned if it failed to evolve by @updateKES@, and will remain -- poisoned forever after that. -data HotKey c = - HotKey - !Period -- ^ Absolute period of the KES key +data HotKey c + = HotKey + -- | Absolute period of the KES key + !Period !(UnsoundPureSignKeyKES (PraosKES c)) | HotKeyPoisoned - deriving (Generic) + deriving Generic instance PraosCrypto c => NoThunks (HotKey c) instance PraosCrypto c => Show (HotKey c) where - show (HotKey p _) = "HotKey " ++ show p ++ "