Doc: typo fixes

Note: this is only fixing typos highlighted by a spellchecker. More
typos and errors probably still lurk.
This commit is contained in:
Raphaël Proust 2018-05-10 12:46:16 +08:00
parent e16cf6d28b
commit 771f937792
13 changed files with 62 additions and 62 deletions


@ -2,7 +2,7 @@
Building documentation locally
******************************
The documenation is available online at `doc.tzalpha.net <http://doc.tzalpha.net/>`_,
The documentation is available online at `doc.tzalpha.net <http://doc.tzalpha.net/>`_,
always up to date with master on `Gitlab <https://gitlab.com/tezos/tezos>`_.
Building instructions


@ -84,7 +84,7 @@ The ``alphanet`` branch in the tezos git repository will always contain
the up-to-date sources of the tezos-node required for running the
alphanet. See ``docs/README.master`` on how to compile it.
Once built, you might launch the a node by running:
Once built, you might launch a node by running:
::


@ -74,11 +74,11 @@ Reset 2017-11-20
now compiled to functors, taking the type signature of their
runtime environment as parameter. This simplifies the
dependencies, and will allow third party developers to
instanciate economic protocols in other contexts than the node.
instantiate economic protocols in other contexts than the node.
- Switch from Makefiles to jbuilder, yay!
- Rename (hopefully) all occurences of "mining" into "baking".
- Rename (hopefully) all occurrences of "mining" into "baking".
[Michelson]
@ -87,13 +87,13 @@ Reset 2017-11-20
of the client or node.
- Implement a basic semantics of annotations.
The typechecker now propagates annotations on types througout the
The typechecker now propagates annotations on types throughout the
code, and tagging instructions with an annotation allows the
programmer to reannotate the element produced by the instruction.
The emacs mode displays propagated annotations.
- Add a version of `ITER` that takes a static code block and expects
a colletion on the initial stack, and works like a `LOOP`, pushing
a collection on the initial stack, and works like a `LOOP`, pushing
the elements of the collection one at a time on the stack. This is
like `REDUCE` but using a static code block instead of a dynamic
lambda. In the same vein, `MAP` can take a code block.
@ -173,7 +173,7 @@ Main changes includes:
https://raw.githubusercontent.com/tezos/tezos/alphanet/README.md
- The `alphanet` branch of the github repository is now automaticaly
- The `alphanet` branch of the github repository is now automatically
synchronized with `alphanet` docker image. And the latest version of
the `alphanet.sh` is available at:
@ -225,6 +225,6 @@ Main changes includes:
[CI]
- This is not directly visible in the alphanet, but our CI
infrastrucre is now ready for open development.
infrastructure is now ready for open development.
More about that soon (or later).


@ -356,17 +356,17 @@ writing your own configuration file if needed.
Debugging
---------
It is possible to set independant log levels for different logging
It is possible to set independent log levels for different logging
sections in Tezos, as well as specifying an output file for logging. See
the description of log parameters above as well as documentation under
the DEBUG section diplayed by \`tezos-node run help.
the DEBUG section displayed by \`tezos-node run help.
JSON/RPC interface
------------------
The Tezos node provides a JSON/RPC interface. Note that it is an RPC,
and it is JSON based, but it does not follow the “JSON-RPC” protocol. It
is not active by default and it must be explicitely activated with the
is not active by default and it must be explicitly activated with the
``--rpc-addr`` option. Typically, if you are not trying to run a local
network and just want to explore the RPC, you would run:


@ -37,7 +37,7 @@ For example, an encoding that represents a 31 bit integer has type
Encoding an object
~~~~~~~~~~~~~~~~~~
Encoding a single integer is fairly uninteresting. The Dataencoding
Encoding a single integer is fairly uninteresting. The `Dataencoding`
library provides a number of combinators that can be used to build more
complicated objects. Consider the type that represents an interval from
the first number to the second:
@ -54,9 +54,9 @@ We can define an encoding for this type as:
Data_encoding.(obj2 (req "min" int64) (req "max" int64))
In the example above we construct a new value ``interval_encoding`` by
combining two int64 integers using the ``obj2`` constructor.
combining two `int64` integers using the ``obj2`` constructor.
The library provides different constructors, e.g. for objects that have
no data (``Data_encoding.empty``), constructors for objects with up to 10
fields, constructors for tuples, lists, etc.
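As a sketch of how such an encoding is used once defined (assuming the ``Data_encoding`` library is available; ``Json.construct`` and ``Json.destruct`` are the names used by recent versions of the library and may differ in older releases):

```ocaml
(* Hedged sketch: serialize an interval to JSON and back.
   Requires the data-encoding library; not runnable standalone. *)
let interval_encoding =
  Data_encoding.(obj2 (req "min" int64) (req "max" int64))

(* Construct a JSON value from an OCaml value... *)
let json = Data_encoding.Json.construct interval_encoding (0L, 10L)

(* ...and read it back into a pair of int64s. *)
let (_min, _max) = Data_encoding.Json.destruct interval_encoding json
```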
@ -125,7 +125,7 @@ of the type:
- We specify a function from the encoded type to the actual datatype.
Since the library does not provide an exhaustive check on these
constructors, the user must be careful when constructing unin types to
constructors, the user must be careful when constructing union types to
avoid unfortunate runtime failures.
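A sketch of what a union encoding looks like (combinator names and argument shapes have changed between versions of the library, so treat this as illustrative rather than exact):

```ocaml
type direction = Up of int64 | Down of int64

(* Illustrative union encoding: one [case] per constructor, each with a
   projection to [option] and an injection back.  If every projection
   returns [None] for some value, encoding fails at runtime -- the
   exhaustiveness caveat mentioned above. *)
let direction_encoding =
  let open Data_encoding in
  union
    [ case (Tag 0) int64
        (function Up n -> Some n | _ -> None)
        (fun n -> Up n) ;
      case (Tag 1) int64
        (function Down n -> Some n | _ -> None)
        (fun n -> Down n) ]
```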
How the Dataencoding module works


@ -26,7 +26,7 @@ linking order.
Protocol Alpha is structured as a tower of abstraction layers, a coding
discipline that we designed to have OCaml check as many invariants as
possible at typing time. You will also see empty lines in
``TEZOS_PROTOCOL`` that denotate these layers of abstraction.
``TEZOS_PROTOCOL`` that denote these layers of abstraction.
These layers follow the linking order: the first modules are the tower's
foundation that talk to the raw key-value store, and going forward in
@ -35,7 +35,7 @@ the module list means climbing up the abstraction tower.
The big abstraction barrier: ``Alpha_context``
----------------------------------------------
the proof-of-stake algorithm, as described in the white paper, relies on
The proof-of-stake algorithm, as described in the white paper, relies on
an abstract state of the ledger, that is read and transformed during
validation of a block.
@ -101,7 +101,7 @@ value with a wrong key, or a key bound to another value. The next
abstraction barrier is a remedy to that.
The storage module is the single place in the protocol where key
litterals are defined. Hence, it is the only module necessary to audit,
literals are defined. Hence, it is the only module necessary to audit,
to know that the keys are not colliding.
It also abstracts the keys, so that each kind of key gets its own
@ -111,7 +111,7 @@ accessors specific to contracts balances.
Moreover, the keys bear the type of the values they point to. For
instance, only values of type ``Tez_repr.t`` can be stored at keys
``Storage.Contract.Balance``. And in case a key is not a global key, but
a parametric one, this key is parametered by an OCaml value, and not the
a parametric one, this key is parameterized by an OCaml value, and not the
raw key part.
So in the end, the only way to be used when accessing a contract balance
@ -132,12 +132,12 @@ deleted, all of the keys that store its state in the context are indeed
deleted.
This last series of modules named ``*_storage`` is there to enforce just
that kind of invariants: ensuring the insternal consistency of the
that kind of invariants: ensuring the internal consistency of the
context structure.
These transactions do not go as far as checking that, for instance, when
the destination of a transaction is credited, the source is also
debitted, as in some cases, it might not be the case.
debited, as in some cases, it might not be the case.
Above the ``Alpha_context``
---------------------------
@ -170,7 +170,7 @@ Smart contracts
From ``Apply``, you will also end up in modules ``Script_ir_translator``
and ``Script_interpreter``. The former is the typechecker of Michelson
that is called when creating a new smart contract, and the latter is the
interpreter that is called when transfering tokens to a new smart
interpreter that is called when transferring tokens to a new smart
contract.
Protocol RPC API
@ -182,7 +182,7 @@ Finally, the RPCs specific to Alpha are also defined above the
Services are defined in a few modules, divided by theme. Each module
defines the RPC API: URL schemes with the types of parameters, and
input and output JSON schemas. This interface serves three
purposes. As it is thourouhgly typed, it makes sure that the handlers
purposes. As it is thoroughly typed, it makes sure that the handlers
(that are registered in the same file) have the right input and output
types. It is also used by the client to perform RPC calls, to make
sure that the URL schemes and JSON formats are consistent between the


@ -169,7 +169,7 @@ this:
| _ -> None)
I'm also renaming the ``error`` function to ``fail``. This is the
convention used by the actual Errormonad module. I'm also exposing the
convention used by the actual `Error_monad` module. I'm also exposing the
``'a t`` type so that you can dispatch on it if you need to. This is
used several times in the Tezos codebase.
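A minimal sketch of the kind of monad being described (hypothetical reduced names, not the real ``Error_monad`` interface):

```ocaml
(* Hypothetical reduced version for illustration only. *)
type 'a t =
  | Ok of 'a
  | Error of string list

(* [fail], following the Error_monad naming convention. *)
let fail msg = Error [msg]

(* Bind: short-circuits on the first error. *)
let ( >>? ) v f =
  match v with
  | Ok x -> f x
  | Error _ as e -> e

(* Exposing ['a t] lets callers dispatch on the result directly: *)
let describe = function
  | Ok _ -> "success"
  | Error msgs -> String.concat ", " msgs
```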


@ -4,13 +4,13 @@ Profiling the Tezos node
Memory profiling the OCaml heap
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Install an OCaml switch with the statmemprof patch:
- Install an OCaml switch with the `statmemprof` patch:
``4.04.2+statistical-memprof`` or ``4.06.0+statistical-memprof``
- Install ``statmemprof-emacs``.
- Enable loading statmemprof into the node.
- Enable loading `statmemprof` into the node.
Add the ``statmemprof-emacs`` package as a dependency to the main package, and add
``let () = Statmemprof_emacs.start 1E-4 30 5`` to the ``node_main.ml`` file.
@ -61,15 +61,15 @@ Memory profiling the C heap
Performance profiling
~~~~~~~~~~~~~~~~~~~~~
- Install perf (The ``linux-perf`` package for debian.
- Install `perf` (the ``linux-perf`` package for debian).
If the package does not exist for your current kernel, a previous
version can be used. substitute the ``perf`` command to ``perf_4.9``
version can be used. Substitute the ``perf`` command to ``perf_4.9``
if your kernel is 4.9.
- Run the node, find the pid.
- Attach perf with ``perf record -p pid --call-stack dwarf``.
- Attach `perf` with ``perf record -p pid --call-stack dwarf``.
Then stop capturing with ``Ctrl-C``. This can represent a lot of
data. Don't do that for too long. If this is too much you can remove


@ -329,10 +329,10 @@ Annotations allow you to better track data, on the stack and within
pairs and unions.
If added on the components of a type, the annotation will be propagated
by the typechecker througout access instructions.
by the typechecker throughout access instructions.
Annotating an instruction that produces a value on the stack will
rewrite the annotation an the toplevel of its type.
rewrite the annotation on the toplevel of its type.
Trying to annotate an instruction that does not produce a value will
result in a typechecking error.
@ -1058,7 +1058,7 @@ Operations on maps
Operations on ``big_maps``
~~~~~~~~~~~~~~~~~~~~~~~~~~
The behaviour of these operations is the same as if they were normal
The behavior of these operations is the same as if they were normal
maps, except that under the hood, the elements are loaded and
deserialized on demand.
@ -1390,7 +1390,7 @@ argument the transferred amount plus an ad-hoc argument and returns an
ad-hoc value. The code also takes the global data and returns it to be
stored and retrieved on the next transaction. These data are initialized
by another parameter. The calling convention for the code is as follows:
``(Pair arg globals) -> (Pair ret globals)``, as extrapolatable from
``(Pair arg globals) -> (Pair ret globals)``, as extrapolated from
the instruction type. The first parameters are the manager, optional
delegate, then spendable and delegatable flags and finally the initial
amount taken from the currently executed contract. The contract is
@ -1538,10 +1538,10 @@ VIII - Macros
In addition to the operations above, several extensions have been added
to the language's concrete syntax. If you are interacting with the node
via RPC, bypassing the client, which expands away these macros, you will
need to de-surgar them yourself.
need to desugar them yourself.
These macros are designed to be unambiguous and reversible, meaning that
errors are reported in terms of de-sugared syntax. Below you'll see
errors are reported in terms of desugared syntax. Below you'll see
these macros defined in terms of other syntactic forms. That is how
these macros are seen by the node.
@ -1890,12 +1890,12 @@ X - JSON syntax
Micheline expressions are encoded in JSON like this:
- An integer ``N`` is an object with a single field ``"int"`` whose
valus is the decimal representation as a string.
value is the decimal representation as a string.
``{ "int": "N" }``
- A string ``"contents"`` is an object with a single field ``"string"``
whose valus is the decimal representation as a string.
whose value is the string itself.
``{ "string": "contents" }``
@ -1905,7 +1905,7 @@ Micheline expressions are encoded in JSON like this:
- A primitive application is an object with two fields ``"prim"`` for
the primitive name and ``"args"`` for the arguments (that must
contain an array). A third optionnal field ``"annot"`` may contain
contain an array). A third optional field ``"annot"`` may contain
an annotation, including the ``@`` sign.
{ "prim": "pair", "args": [ { "prim": "nat", "args": [] }, { "prim":
@ -2492,7 +2492,7 @@ The language is implemented in OCaml as follows:
- The lower internal representation is written as a GADT whose type
parameters encode exactly the typing rules given in this
specification. In other words, if a program written in this
representation is accepted by OCaml's typechecker, it is mandatorily
representation is accepted by OCaml's typechecker, it is guaranteed
type-safe. This is of course also valid for programs not handwritten but
generated by OCaml code, so we are sure that any manipulated code is
type-safe.
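The technique can be illustrated with a tiny self-contained GADT (a sketch of the general idea, not the actual internal representation): each instruction's type parameters describe the stack before and after execution, so OCaml rejects ill-typed instruction sequences at compile time.

```ocaml
(* A toy typed stack machine in the same spirit: ('bef, 'aft) instr
   transforms a stack of type 'bef into a stack of type 'aft. *)
type ('bef, 'aft) instr =
  | Push : 'a -> ('s, 'a * 's) instr
  | Add : (int * (int * 's), int * 's) instr
  | Seq : ('b, 'c) instr * ('c, 'd) instr -> ('b, 'd) instr

(* The interpreter needs no runtime type checks: well-typedness of the
   program is established once, by the OCaml typechecker. *)
let rec interp : type bef aft. (bef, aft) instr -> bef -> aft =
  fun i stack ->
    match i with
    | Push x -> (x, stack)
    | Add -> let (a, (b, rest)) = stack in (a + b, rest)
    | Seq (l, r) -> interp r (interp l stack)

(* interp (Seq (Push 1, Seq (Push 2, Add))) () yields (3, ()),
   while Seq (Push "a", Add) is rejected at compile time. *)
```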
@ -2521,7 +2521,7 @@ The language is implemented in OCaml as follows:
- The typechecker is a simple function that recognizes the abstract
grammar described in section X by pattern matching, producing the
well-typed, corresponding GADT expressions. It is mostly a checker,
not a full inferer, and thus takes some annotations (basically the
not a full inferrer, and thus takes some annotations (basically the
input and output of the program, of lambdas and of uninitialized maps
and sets). It works by performing a symbolic evaluation of the
program, transforming a symbolic stack. It only needs one pass over


@ -7,7 +7,7 @@ This document explains the inner workings of the peer-to-peer layer of
the Tezos shell. This part is in charge of establishing and
maintaining network connections with other nodes (gossip).
The P2P layer is instanciated by the node. It is parametrized by the
The P2P layer is instantiated by the node. It is parametrized by the
type of messages that are exchanged over the network (to allow
different P2P protocol versions/extensions), and the type of metadata
associated to each peer. The latter is useful to compute a score for


@ -41,7 +41,7 @@ Protocol header (for tezos.alpha):
ordered list of bakers. The first baker in that list is the first one
who can bake a block at that height, one minute after the previous
block. The second baker in the list can do so, but only two minutes
after the previous block, etc, the third baker three minutes after.
after the previous block, etc., the third baker three minutes after.
This integer is the priority of the block.
- ``seed_nonce_hash``: a commitment to a random number, used to
generate entropy on the chain. Present in only one out of
@ -58,7 +58,7 @@ size in bytes is applied to the list of transactions
``MAX_TRANSACTION_LIST_SIZE`` = 500kB (that's 5MB every 10 minutes at
most).
Other lists of operations (endorsements, denounciations, reveals) are
Other lists of operations (endorsements, denunciations, reveals) are
limited in terms of number of operations (though the defensive
programming style also puts limits on the size of operations it
expects).
@ -125,7 +125,7 @@ Rolls
In theory, it would be possible to give each token a serial number, and
track the specific tokens assigned to specific delegates. However, it
would be too demanding of nodes to track assignement at such a granular
would be too demanding of nodes to track assignment at such a granular
level. Instead we introduce the concept of rolls. A roll represents a
set of coins delegated to a given key. When tokens are moved, or a
delegate for a contract is changed, the rolls change delegate according
@ -164,7 +164,7 @@ Roll snapshots represent the state of rolls for a given block. Roll
snapshots are taken every ``BLOCKS_PER_ROLL_SNAPSHOT`` = 256 blocks,
that is 16 times per cycle. There is a tradeoff between memory
consumption and economic efficiency. If roll snapshots are too frequent,
they will consumme a lot of memory. If they are too rare, strategic
they will consume a lot of memory. If they are too rare, strategic
participants could purchase many tokens in anticipation of a snapshot
and resell them right after.
@ -284,10 +284,10 @@ Denounciations
--------------
If two endorsements are made for the same slot or two blocks at the same
height by a delegate, this can be denounced. The denounciation would be
height by a delegate, this can be denounced. The denunciation would be
typically made by the baker, who includes it as a special operation.
At first, denounciation will only forfeit the security deposit
At first, denunciation will only forfeit the security deposit
for the doubly signed operation. However, over time, as the risk of
accidental double signing becomes small enough, denounciation will
accidental double signing becomes small enough, denunciation will
forfeit the entirety of the safety deposits. Half is burned, and half is
added to the block reward.


@ -44,7 +44,7 @@ is called :ref:`the validator<validation>`.
The rest of the shell includes the peer-to-peer layer, the disk storage
of blocks, the operations to allow the node to transmit the chain data
to new nodes and the versioned state of the ledger. Inbetween the
to new nodes and the versioned state of the ledger. In-between the
validator, the peer-to-peer layer and the storage sits a component
called the distributed database, that abstracts the fetching and
replication of new chain data to the validator.
@ -80,9 +80,9 @@ dropped for clarity.
|Tezos source packages diagram|
In green at the bottom are binaries. Hilighted in yellow are the OPAM
In green at the bottom are binaries. Highlighted in yellow are the OPAM
packages (sometimes with shortened names). Black arrows show direct
dependencies. Orange arrows show other indirect relashionships (code
dependencies. Orange arrows show other indirect relationships (code
generation, interface sharing), explained below. The part circled in
blue, contains modules that bear no dependency to Unix, and can thus
be compiled to JavaScript. External dependencies are not shown in this
@ -100,10 +100,10 @@ that are used everywhere for basic operations.
module, etc.), a few ``Lwt`` utilities, and a ``Compare`` module
that implements monomorphic comparison operators.
- :package:`tezos-data-encoding` is the in-house
comibnator-based serialization library. From a single type
combinator-based serialization library. From a single type
description ``t encoding``, the code can serialize and deserialize
values of type ``t`` to and from both binary and JSON representations. For
both, the library provides machine and human-redable documentation
both, the library provides machine and human-readable documentation
by the use of documentation combinators. The JSON part depends on
:opam:`ocplib-json-typed`.
A :ref:`tutorial<data_encoding>` is available for this library.
@ -124,7 +124,7 @@ that are used everywhere for basic operations.
- :package:`tezos-crypto` wraps the external cryptography
libraries that we use. We try to use minimal reference
implementations, with bindings as thin as possible. A possible plan
is to use libraries from the HACL projet, so that all of our crypto
is to use libraries from the HACL project, so that all of our crypto
is extracted from Fstar, either with thin C bindings or directly in
OCaml.
- :package:`tezos-micheline` is the concrete syntax used by
@ -209,9 +209,9 @@ protocol in alternative environment possible.
that let you build an environment from a few context accessors.
- ``tezos-embedded-protocol-xxx`` contains a version of protocol
``xxx`` whose standard library is pre-instanciated to the shell's
``xxx`` whose standard library is pre-instantiated to the shell's
implementation, these are the ones that are linked into the
node. It alse contains a module that registers the protocol in the
node. It also contains a module that registers the protocol in the
node's protocol table.
The Embedded Economic Protocols
@ -246,7 +246,7 @@ compatible, and library vs command line interface.
:package:`tezos-shell-services` and
:package:`tezos-protocol-alpha`, are abstracted over this object
type. That way, it is possible to use the same code for different
platforms ot toolkits.
platforms or toolkits.
- :package:`tezos-client-alpha` provides some functions to perform
the operations of protocol alpha using the wallet and signers from
the client context.
@ -265,7 +265,7 @@ compatible, and library vs command line interface.
Tests Packages
~~~~~~~~~~~~~~
The tests are splitted into various packages, testing more and more
The tests are split into various packages, testing more and more
elements while following the dependency chain. Use ``make test`` to
run them.
@ -321,6 +321,6 @@ The Final Executables
- :package:`tezos-protocol-compiler` provides the
``tezos-protocol-compiler`` binary that is used by the node to
compile new protocols on the fly, and that can be used for
developping new protocols.
developing new protocols.
.. |Tezos source packages diagram| image:: packages.svg


@ -6,7 +6,7 @@ The validation subsystem
This document explains the inner workings of the validation subsystem
of the Tezos shell, that sits between the peer-to-peer layer and the
economic protocol. This part is in charge of validating chains, blocks
and operations that come from the network, and deciding wether they
and operations that come from the network, and deciding whether they
are worthy to propagate. It is composed of three main parts: the
:ref:`validator<validator_component>`, the
:ref:`prevalidator<prevalidator_component>`, and
@ -45,7 +45,7 @@ peer, one at a time, in a loop. In the simple case, when a peer
receives a new head proposal that is a direct successor of the current
local head, it launches a simple *head increment* task: it retrieves
all the operations and triggers a validation of the block. When the
difference between the current head and the examinated proposal is
difference between the current head and the examined proposal is
more than one block, mostly during the initial bootstrap phase, the
peer worker launches a *bootstrap pipeline* task.
@ -95,7 +95,7 @@ that it considers valid, and the ones that it chooses to broadcast.
This is done by constantly baking a dummy block, floating over the
current head, and growing as new operations are received.
Operations that get included can be broadcasted unconditionally.
Operations that get included can be broadcast unconditionally.
Operations that are included are classified into categories. Some
(such as bad signatures or garbage byte sequences) are dismissed. They