* Auto-link for documents. If there is an `org` directive in the first doc
block of the file then it is used as the file link definition. If there is no
such `org` directive then one is created automatically. This resolves issue
#0008. There is a new LinkAnchor type for these links: `LinkType.FileLink`.
* Multiple `org` directives per DocBlock are now allowed. There is a new
LinkAnchor link type for these links: `LinkType.InlineLink`.
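
A minimal sketch of the auto-link rule above; the type and method names here
are illustrative stand-ins, not the actual JLP API:

```java
enum LinkType { FileLink, BlockLink, InlineLink }

class AutoLinkSketch {
    /** Choose the id used for the file-level link anchor (LinkType.FileLink). */
    static String fileLinkId(String firstDocBlockOrgValue, String sourcePath) {
        if (firstDocBlockOrgValue != null && !firstDocBlockOrgValue.isEmpty()) {
            // an `org` directive in the first doc block is the file link definition
            return firstDocBlockOrgValue;
        }
        // no such `org` directive: create one automatically (deriving it from
        // the source path here is only a placeholder for the real behaviour)
        return sourcePath.replaceAll("[^A-Za-z0-9]+", "-");
    }
}
```
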
* Refactored `LinkType.OrgLink` to be `LinkType.BlockLink`.
* Tweaked CSS.
* Refactored `LiterateMarkdownGenerator.emit(DocBlock)` for simplicity.
* Removed the automatic document table heading based on the document id.
* Fixed the way 'absolute' links were resolved in `Processor`. A leading `/` for
a link passed into `Processor.resolveLink` really means "resolve as if I am
starting from the documentation root."
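
A small sketch of that rule with a simplified signature; the real
`Processor.resolveLink` also takes the current `TargetDoc` and does more:

```java
import java.io.File;

class ResolveLinkSketch {
    static File resolve(String link, File docRoot, File currentOutputDir) {
        if (link.startsWith("/")) {
            // leading '/' => resolve as if starting from the documentation root
            return new File(docRoot, link.substring(1));
        }
        // otherwise resolve relative to the current output context
        return new File(currentOutputDir, link);
    }
}
```
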
* Moved the default logging configuration out of the final jar path. This is to
help with environments that have multiple projects in the same classpath and
therefore may have multiple `logback.groovy` configurations loading.
* Tweaked CSS.
* Finished the initial documentation for version 1.3. I do not believe the
documentation for a project is ever 'finished'. As long as the code is still
evolving, the documentation is evolving. Having said that, the documentation
is now caught up with the code for version 1.3.
* Removed an old dependency on Groovy 1.7.10. This dependency is now satisfied by
the build script based on the version of Groovy used on the system performing
the build.
* Added syntax highlighting using SyntaxHighlighter v3.0.83
(see https://github.com/alexgorbatchev/SyntaxHighlighter)
* Modified Directive to have a link to the DocBlock that contains it. Modified
JLPPegParser to account for this change.
* Modified LinkAnchor to include the ASTNode defining it. Also added
LinkAnchorType enum to facilitate different types of links.
* LiterateMarkdownGenerator is now escaping HTML characters in the code sections
and in other places where it is appropriate.
* Refactored the link resolution process. Added a new method
`Processor.resolveLink(String, TargetDoc)` that now resolves a URL or URL
fragment relative to the current output context. This function also takes over
the job of resolving JLP links to link anchors and generating the correct URL.
LiterateMarkdownGenerator now calls this instead of doing all the work itself.
* To support the syntax highlighter, the JarUtil class from com.jdbernard.util
is included and the syntax highlighter resources are extracted into the output
directory from a resource jar stored in the JLP main jar.
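
Roughly what that extraction amounts to, shown here with only the standard
library; JarUtil's actual API is not reproduced:

```java
import java.io.*;
import java.nio.file.*;
import java.util.Enumeration;
import java.util.jar.*;

class ExtractSketch {
    /** Unpack every entry of a resource jar into the output directory. */
    static void extractAll(File resourceJar, File outputDir) throws IOException {
        try (JarFile jar = new JarFile(resourceJar)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                Path target = outputDir.toPath().resolve(entry.getName());
                if (entry.isDirectory()) { Files.createDirectories(target); continue; }
                Files.createDirectories(target.getParent());
                try (InputStream in = jar.getInputStream(entry)) {
                    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```
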
* The CSS and other assets for the output are no longer copied into every
output directory, but are now stored at the output root and linked correctly into
the output files.
* Added references to the source file and file type for TargetDocs instead of
computing them on the fly during processing.
* Further documentation.
* Decided to add an `include` directive. This will have repercussions directly
for the JLPPegParser, Directive, and LinkAnchor classes. Include needs more
semantic meaning than we currently have in the process because the author
needs some way to understand what is being included. If you include an org
link defined earlier, does it include the whole file? Just that doc block? Both
may be the desired behavior in different situations, but I do not want to add
a complex syntax for selecting, just name the link. Therefore there must be
something about the link that determines how much is included. This means we
need more information in LinkAnchor, some link `type` at a minimum. My current
thought (see also the sketch after this list) is:
* @org defined links--this is the only type of LinkAnchor defined right
now--always bring the whole DocBlock when they are included.
* A new type of link anchor, call them source file links, bring the whole
file when they are included. These types of anchors would be automatically
created by the parser/generator (have not decided where it belongs yet).
The whole SourceFile would need to be included, but we do not want to emit
the normal header for the file so we may have to do something about the
initial emit method.
* Additional types of link anchors arise when we add language awareness to
the process. In my current vision we will automatically add link anchors
of different types when we parse code sections (function definition, class
definition, etc.) and the include directive would bring in all the
DocBlocks related to that code node.
* Additional types of link anchors will arise when we implement the @api
directive; we will think about that in the future.
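
The sketch referenced above, assuming a hypothetical two-value
`LinkAnchorType`; the real enum and include handling may well differ:

```java
enum LinkAnchorType { OrgAnchor, FileAnchor } // code-aware and @api anchors would come later

class IncludeSketch {
    /** How much text an `include` of the named anchor pulls in. */
    static String expand(LinkAnchorType type, String docBlockText, String wholeFileText) {
        switch (type) {
            case FileAnchor: return wholeFileText; // source file links bring in the whole file
            case OrgAnchor:
            default:         return docBlockText;  // @org links bring in their DocBlock
        }
    }
}
```
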
* Updated the JLPPegParser to recognise include directives.
* Added the include directive to Directive.DirectiveTypes
* Added logging with SLF4J and Logback
* Added `--version` option.
* Modified the input file rules. When an input object is a directory, JLPMain is
adding all the files in that directory and its subdirectories. Now JLPMain is
ignoring hidden files in the directory and subdirs. A file named explicitly on
the command line is still included regardless of whether it is hidden.
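
A sketch of that rule; the names are placeholders rather than JLPMain's actual
scanning code:

```java
import java.io.File;
import java.util.List;

class InputScanSketch {
    /** Recurse into directories, skipping hidden files and hidden subdirs;
     *  a file named explicitly on the command line bypasses the check. */
    static void addInputs(File dirOrFile, List<File> inputs, boolean namedExplicitly) {
        if (!namedExplicitly && dirOrFile.isHidden()) return;
        if (dirOrFile.isDirectory()) {
            File[] children = dirOrFile.listFiles();
            if (children != null) {
                for (File child : children) addInputs(child, inputs, false);
            }
        } else if (dirOrFile.isFile()) {
            inputs.add(dirOrFile);
        }
    }
}
```
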
* Documentation continues.
* Upgraded build common to version 1.9.
* Updated the release target in build.xml to take advantage of the new features
of common build 1.9. The release target now copies over the libs and release
resources.
* JLPMain now recognises directories in its input list. It will add all the
files in a given directory to the input list (including files in arbitrarily
nested subdirectories).
* Abstracted the parser behavior further. Processor no longer needs to know
about Parboiled ParseRunners and can use non-Parboiled parsers.
* Created the JLPParser interface to support the new parser abstraction.
* JLPPegParser implements the new interface trivially by creating its own parse
runner and calling it with the input given.
* Added MarkdownParser, which does not actually parse the file, just creates the
bare-bones SourceFile object needed for the generator to emit the Markdown
contents.
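
A sketch of the abstraction described in the last few entries; the stand-in
types below are much thinner than the real ones:

```java
interface JLPParser {
    SourceFile parse(String input);
}

/** Stand-in for the AST root that the generators consume. */
class SourceFile { /* doc blocks, code blocks, ... */ }

/** A non-Parboiled parser: wrap the whole input in a bare-bones SourceFile. */
class MarkdownParserSketch implements JLPParser {
    public SourceFile parse(String input) {
        return new SourceFile(); // no real parsing needed for pure Markdown
    }
}
```
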
* Added support for multi-line comments to the JLPPegParser grammar
implementation.
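
For reference, a multi-line comment rule in Parboiled's Java API generally
looks something like the following; the real JLPPegParser grammar is more
involved:

```java
import org.parboiled.BaseParser;
import org.parboiled.Rule;

class CommentRuleSketch extends BaseParser<Object> {
    Rule MultiLineComment() {
        // "/*", then any characters that do not start "*/", then "*/"
        return Sequence("/*", ZeroOrMore(Sequence(TestNot("*/"), ANY)), "*/");
    }
}
```
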
* Added a Java sample file.
* Updated test script to add convenience functions for the java test file and
for using a TracingParseRunner for parse runs.
* Added an option, `--css-file`, to allow the caller to specify their own css
file.
* Added basic logic to the Processor class to detect source file types and build
a parser and a generator for that source type. Support currently exists for
the following languages: C (.c, .h), C++ (.cpp, .c++, .hpp, .h++), Erlang
(.erl), Groovy (.groovy), Java (.java), JavaScript (.js).
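
A sketch of extension-based detection over the same list of languages; the
actual mapping lives in Processor:

```java
import java.util.HashMap;
import java.util.Map;

class SourceTypeSketch {
    static final Map<String, String> TYPES = new HashMap<>();
    static {
        TYPES.put("c", "C");       TYPES.put("h", "C");
        TYPES.put("cpp", "C++");   TYPES.put("c++", "C++");
        TYPES.put("hpp", "C++");   TYPES.put("h++", "C++");
        TYPES.put("erl", "Erlang");
        TYPES.put("groovy", "Groovy");
        TYPES.put("java", "Java");
        TYPES.put("js", "JavaScript");
    }

    static String typeOf(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String ext = dot < 0 ? "" : fileName.substring(dot + 1).toLowerCase();
        return TYPES.getOrDefault(ext, "unknown");
    }
}
```
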
* The generators originally had two phases, *parse* and *emit*. The *parse*
phase allowed the generator to walk the AST for every document noting things
it would need when emitting output. So the *parse* phase looked over every
input document before the *emit* phase ran. During the refactor this changed:
for each file, the *emit* phase was running immediately after the *parse*
phase, when it should have been run only after all inputs had been through the
*parse* phase.
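
The intended ordering, sketched with placeholder types:

```java
import java.util.List;

class PhaseOrderSketch {
    interface Generator {
        void parse(Object sourceAst); // gather cross-document state (links, anchors, ...)
        void emit(Object sourceAst);  // produce output using that state
    }

    static void process(Generator generator, List<Object> asts) {
        for (Object ast : asts) generator.parse(ast); // pass 1: every input document
        for (Object ast : asts) generator.emit(ast);  // pass 2: only after all parsing is done
    }
}
```
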
* Fixed a typo in the ``LiterateMarkdownGenerator``: an extra '`/`' was being
inserted into the URL for link targets.
* Refactored the overall process flow. Instead of ``JLPMain`` handling the
process, it now reads the command line options and defers to ``Processor`` to
handle the actual process. The ``Processor`` instance is responsible for
processing one batch of input files and holds all the state that is common to
this process.
* ``JLPBaseGenerator`` and generators based on it are now only responsible for
handling one file, generating output from a source AST. As a consequence,
state that is common to the overall process is no longer stored in the
generator but is stored on the ``Processor`` instance, which is exposed to the
generators.
* Generators can now be instantiated directly (instead of having just a public
static method) and are no longer one-time use. Now the life of a generator is
expected to be the same as the life of the ``Processor``.
* Fixed inter-doc link behaviour.
* Created some data classes to replace the ad-hoc maps used to store state in
the generator (now in the ``Processor``).
* `relative-path-root` option added. This facilitates situations where the
current directory of the invocation context is different from the working
directory of the program. This is required to use `jlp` with tools like
*Nailgun*, which keeps a persistent `java` process running and proxies new
invocations to the existing process.
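
In effect the option boils down to something like this when resolving input
paths (a sketch, not the actual implementation):

```java
import java.io.File;

class PathRootSketch {
    static File resolveInput(String inputPath, File relativePathRoot) {
        File f = new File(inputPath);
        // relative paths are taken against the supplied root, not the JVM's
        // current directory (which under Nailgun belongs to the server process)
        return f.isAbsolute() ? f : new File(relativePathRoot, inputPath);
    }
}
```
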
* Added CSS based on Docco (blatantly copied).
* Updated sample text to better fit the emerging usage patterns. Some of the
things I did to make it render nicely for the Literate output may cause
problems when we go to render API output. I will cross that bridge when I come
to it.
* Added parsing infrastructure to the Generator behaviour to allow a
pre-processing pass over the document. Currently the LiterateMarkdownGenerator
is using this to compile a map of `@org` references.
* Tweaked the HTML output slightly.
* Added a layer over the PegDownProcessor in LiterateMarkdownGenerator to
capture and transform `jlp://` links into relative links.
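
A sketch of that layer as a post-processing step over the rendered HTML; the
resolver that maps a `jlp://` target to a relative URL is assumed here, and
PegDown's own API is not shown:

```java
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class JlpLinkRewriteSketch {
    static String rewrite(String html, Function<String, String> resolveJlpTarget) {
        Matcher m = Pattern.compile("href=\"jlp://([^\"]+)\"").matcher(html);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String relative = resolveJlpTarget.apply(m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement("href=\"" + relative + "\""));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```
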
Ideas:
* For literate output, format like Docco, tables like

  | Doc      | Code      |
  |----------|-----------|
  | docblock | codeblock |
  | docblock | codeblock |
  | docblock | codeblock |
* For javadoc output, maybe create a running 'pure source' object containing
just the code lines. Then run a sub-parser for the language the code is
written in and build a separate AST for the code. This code AST then gets
tagged onto the overall AST and is used in the generation phase for code
comprehension. Would still need a way to map doc blocks to code blocks. I
could probably use line numbers. In that case I would need to map the original
source line from the commented input to the 'pure source' while processing the
'pure source' and store the original line number in the code AST. That would
give me a reliable way to look up the closest code structure to a doc block
(sketched below).
* The code AST would need to be made of generic pieces if I want to have
language-agnostic generator code. What may be better is to allow the language
parser to create its code AST however it wants and just have some pluggable
bit of the generator for each language. Would increase generator code
complexity though.
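
A sketch of the line-number bookkeeping the 'pure source' idea above would
need; all names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

class PureSourceSketch {
    final StringBuilder pureSource = new StringBuilder();
    final List<Integer> originalLineOf = new ArrayList<>(); // index = line number in the 'pure source'

    /** Append a code line and remember where it came from in the commented input. */
    void addCodeLine(String line, int originalLineNumber) {
        pureSource.append(line).append('\n');
        originalLineOf.add(originalLineNumber);
    }

    /** Map a code-AST node's 'pure source' line (0-based) back to the original source. */
    int originalLine(int pureLineNumber) {
        return originalLineOf.get(pureLineNumber);
    }
}
```
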
* Updated test data to include additional parsing edge cases.
* Updated `vbs_db_records.hrl` to use `@org` directives.
* Refactored Generator/Emitter dual-object phase concept into one object, the
Generator. The emitter ended up needing basically full visibility into the
generator anyway.
* Implemented `JLPBaseGenerator`, `MarkdownGenerator`, and
`TransparentGenerator`
* Modified the way the parser handles remaining lines to allow it to safely
handle empty lines.
* Added planning documentation regarding the process.
* Updated grammar.
* Refactored the test code a bit.
* Added sample input file from vbs-suite.
* Refactored the AST node structure created by the parser.
* For whatever reason, writing the parser in Groovy was causing weird errors
to occur when the parser or parse runner was created. Using a plain Java
source file fixed this.