* Removed the automatic document table heading based on the document id.
* Fixed the way 'absolute' links were resolved in `Processor`. A leading `/` for
a link passed into `Processor.resolveLink` really means "resolve as if I am
starting from the documentation root."
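A minimal sketch of that resolution rule (the method name comes from the note above; the signature, parameters, and path handling here are my assumptions, not the actual `Processor` implementation):

```java
import java.nio.file.Paths;

public class ResolveLinkSketch {
    // A leading '/' means "resolve relative to the documentation root";
    // any other link is resolved relative to the current document's directory.
    static String resolveLink(String link, String currentDocDir, String docRoot) {
        if (link.startsWith("/")) {
            return Paths.get(docRoot, link.substring(1)).normalize().toString();
        }
        return Paths.get(currentDocDir, link).normalize().toString();
    }
}
```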
* Moved the default logging configuration out of the final jar path. This is to
help with environments that have multiple projects in the same classpath and
therefore may have multiple `logback.groovy` configurations loading.
* Tweaked CSS.
* Further documentation.
* Decided to add an `include` directive. This will have repercussions for the
JLPPegParser, Directive, and LinkAnchor classes. Include needs more semantic
meaning than we currently have in the process, because the author needs some
way to control what is being included. If you include an @org link defined
earlier, does it include the whole file or just that doc block? Both may be
the desired behavior in different situations, but I do not want to add a
complex selection syntax; the author should just name the link. Therefore
there must be something about the link itself that determines how much is
included. This means we need more information in LinkAnchor, some link `type`
at a minimum. My current thought is:
* @org defined links (currently the only type of LinkAnchor) always bring the
whole DocBlock when they are included.
* A new type of link anchor, call them source file links, would bring the
whole file when they are included. These anchors would be created
automatically by the parser/generator (I have not decided where that belongs
yet). The whole SourceFile would need to be included, but we do not want to
emit the normal header for the file, so we may have to do something about the
initial emit method.
* Additional types of link anchors arise when we add language awareness to
the process. In my current vision we will automatically add link anchors
of different types when we parse code sections (function definition, class
definition, etc.) and the include directive would bring in all the
DocBlocks related to that code node.
* Additional types of link anchors will arise when we implement the @api
directive; we will think about that in the future.
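To make the typed-anchor idea concrete, here is a minimal Java sketch of a LinkAnchor with a `type` field and the scope decision the include directive would make (JLP itself is Groovy; every name here is hypothetical, not the actual classes):

```java
public class LinkAnchorSketch {
    // Hypothetical link types; names are assumptions, not the actual JLP API.
    enum LinkType { ORG_LINK, SOURCE_FILE_LINK }

    static class LinkAnchor {
        final String name;
        final LinkType type;
        LinkAnchor(String name, LinkType type) { this.name = name; this.type = type; }
    }

    // An include directive names a link; the anchor's type decides the scope.
    static String scopeOf(LinkAnchor anchor) {
        switch (anchor.type) {
            case ORG_LINK:         return "doc-block";   // bring in the whole DocBlock
            case SOURCE_FILE_LINK: return "source-file"; // bring in the whole SourceFile
            default:               return "unknown";
        }
    }
}
```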
* Updated the JLPPegParser to recognise include directives.
* Added the include directive to Directive.DirectiveTypes.
* Added support for multi-line comments to the JLPPegParser grammar
implementation.
* Added a Java sample file.
* Updated test script to add convenience functions for the java test file and
for using a TracingParseRunner for parse runs.
* Added an option, `--css-file`, to allow the caller to specify their own css
file.
* Added basic logic to the Processor class to detect source file types and build
a parser and a generator for that source type. Support currently exists for
the following languages: C (.c, .h), C++ (.cpp, .c++, .hpp, .h++), Erlang
(.erl), Groovy (.groovy), Java (.java), JavaScript (.js).
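The detection itself can be sketched as a simple extension-to-language map (the map mirrors the list above; the class and method names are illustrative, not the actual `Processor` code):

```java
import java.util.HashMap;
import java.util.Map;

public class SourceTypeSketch {
    // Extension -> language map matching the supported-language list.
    static final Map<String, String> LANGS = new HashMap<>();
    static {
        for (String ext : new String[] {"c", "h"})                   LANGS.put(ext, "C");
        for (String ext : new String[] {"cpp", "c++", "hpp", "h++"}) LANGS.put(ext, "C++");
        LANGS.put("erl", "Erlang");
        LANGS.put("groovy", "Groovy");
        LANGS.put("java", "Java");
        LANGS.put("js", "JavaScript");
    }

    // Detect the source type from the file name's extension.
    static String sourceTypeOf(String filename) {
        int dot = filename.lastIndexOf('.');
        if (dot < 0) return "unknown";
        return LANGS.getOrDefault(filename.substring(dot + 1).toLowerCase(), "unknown");
    }
}
```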
* Refactored the overall process flow. Instead of ``JLPMain`` handling the
process, it now reads the command line options and defers to ``Processor`` to
handle the actual process. The ``Processor`` instance is responsible for
processing one batch of input files and holds all the state that is common to
this process.
* ``JLPBaseGenerator`` and generators based on it are now only responsible for
handling one file, generating output from a source AST. As a consequence
state that is common to the overall process is no longer stored in the
generator but is stored on the ``Processor`` instance, which is exposed to the
generators.
* Generators can now be instantiated directly (instead of having just a public
static method) and are no longer one-time use. Now the life of a generator is
expected to be the same as the life of the ``Processor``.
* Fixed inter-doc link behaviour.
* Created some data classes to replace the ad-hoc maps used to store state in
the generator (now in the ``Processor``).
* Added CSS based on Docco (blatantly copied).
* Updated sample text to better fit the emerging usage patterns. Some of the
things I did to make it render nicely for the Literate output may cause
problems when we go to render API output. I will cross that bridge when I come
to it.
* Added parsing infrastructure to the Generator behaviour to allow a
pre-processing pass over the document. Currently the LiterateMarkdownGenerator
is using this to compile a map of `@org` references.
* Tweaked the HTML output slightly.
* Added a layer over the PegDownProcessor in LiterateMarkdownGenerator to
capture and transform `jlp://` links into relative links.
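That layer can be sketched roughly as a regex scan over the rendered HTML, rewriting each `jlp://` URL from the @org anchor map built in the pre-processing pass (the map shape and method name are assumptions, not the real generator API):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JlpLinkSketch {
    static final Pattern JLP_LINK = Pattern.compile("jlp://([\\w.-]+)");

    // Replace each jlp://<id> URL with "<output-file>#<id>", using the map of
    // anchor ids to the output file that contains each anchor.
    static String transformLinks(String html, Map<String, String> anchorFiles) {
        Matcher m = JLP_LINK.matcher(html);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String id = m.group(1);
            String file = anchorFiles.getOrDefault(id, "");
            m.appendReplacement(sb, Matcher.quoteReplacement(file + "#" + id));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```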
Ideas:
* For literate output, format like Docco, tables like
| Doc      | Code      |
|----------|-----------|
| docblock | codeblock |
| docblock | codeblock |
| docblock | codeblock |
* For javadoc output, maybe create a running 'pure source' object containing
just the code lines. Then run a sub-parser for the language the code is
written in and build a separate AST for the code. This code AST then gets
tagged onto the overall AST and is used in the generation phase for code
comprehension. We would still need a way to map doc blocks to code blocks; I
could probably use line numbers. In that case I would need to map each
original source line from the commented input to the 'pure source' while
processing the 'pure source' and store the original line number in the code
AST. That would give me a reliable way to look up the closest code structure
to a doc block.
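That mapping is cheap to build while stripping comments. A quick sketch of the idea (comment detection is naively line-based here; a real pass would use the parsed doc-block spans):

```java
import java.util.ArrayList;
import java.util.List;

public class PureSourceSketch {
    // One line of 'pure source', remembering where it came from.
    static class PureLine {
        final int originalLine;  // 1-based line number in the commented input
        final String text;
        PureLine(int originalLine, String text) {
            this.originalLine = originalLine;
            this.text = text;
        }
    }

    // Strip documentation lines, keeping each surviving line's original number
    // so code-AST nodes can later be matched back to nearby doc blocks.
    static List<PureLine> pureSource(List<String> input) {
        List<PureLine> out = new ArrayList<>();
        for (int i = 0; i < input.size(); i++) {
            String line = input.get(i);
            if (!line.trim().startsWith("//")) out.add(new PureLine(i + 1, line));
        }
        return out;
    }
}
```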
* The code AST would need to be made of generic pieces if I want to have
language-agnostic generator code. What may be better is to allow the language
parser to create its code AST however it wants and just have some pluggable
bit of the generator for each language. That would increase generator code
complexity, though.
* Added planning documentation regarding the process.
* Updated grammar.
* Refactored the test code a bit.
* Added sample input file from vbs-suite.
* Refactored the AST node structure created by the parser.