= doctest samples =

The interactive examples are a bit of a pain in the freckle to copy and paste. It would be better if there were a module or class that made interactive experimentation easier. The albatross.Application class traps exceptions and tries to build a tag traceback; it might be nice if something like that were available for playing around outside of applications.

The examples are formatted that way so that a testing program can run the samples and check that the correct output is produced. This has the benefit of ensuring that the documentation is always correct, and it also provides more unit tests for the toolkit.

This is the current {{{doctest/tags-if1}}}:

{{{
>>> import albatross
>>> ctx = albatross.SimpleContext('.')
>>> ctx.locals.a = 10
>>> albatross.Template(ctx, '<magic>', '''
... <al-if expr="a < 15">
... a (<al-value expr="a">) is less than 15.
... <al-else>
... a (<al-value expr="a">) is greater than or equal to 15.
... </al-if>
... ''').to_html(ctx)
>>> ctx.flush_content()
a (10) is less than 15.
}}}

The testing program should be changed so that it uses two files: a .py file and a .out file. This would allow us to change the above {{{doctest/tags-if1}}} to {{{doctest/tags-if1.py}}} (for easier cut & paste):

{{{
import albatross

# Create an execution context for evaluating templates and place a
# value into the local namespace.
ctx = albatross.SimpleContext('.')
ctx.locals.a = 10

# Evaluate an inline template.
albatross.Template(ctx, '<magic>', '''
<al-if expr="a < 15">
a (<al-value expr="a">) is less than 15.
<al-else>
a (<al-value expr="a">) is greater than or equal to 15.
</al-if>
''').to_html(ctx)

# Display the template output.
ctx.flush_content()
}}}

and this {{{doctest/tags-if1.out}}}:

{{{
a (10) is less than 15.
}}}

||<#ccccFF> AndrewMcNamara: I don't agree - the primary role of the examples is to demonstrate simple concepts to a new user. I think it's important that the expected output appears where it would if the user were entering the commands in an interactive session, and I can't see how this proposal achieves that goal. ||
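For illustration, the proposed .py/.out pairing could be driven by a small runner along these lines. This is a minimal sketch, not part of Albatross: the {{{run_sample}}} name is hypothetical, and it assumes each sample writes its output to stdout (as {{{ctx.flush_content()}}} does for a {{{SimpleContext}}}).

```python
import io
from contextlib import redirect_stdout
from pathlib import Path

def run_sample(py_path, out_path):
    """Execute a sample .py file and compare its stdout with the
    expected output recorded in the matching .out file.

    Hypothetical helper: assumes the sample prints its result to
    stdout, which is what ctx.flush_content() is expected to do.
    """
    source = Path(py_path).read_text()
    expected = Path(out_path).read_text()
    buf = io.StringIO()
    # Run the sample with stdout redirected so the output can be captured.
    # Compiling with the real filename keeps tracebacks pointing at the
    # sample file if it raises.
    with redirect_stdout(buf):
        exec(compile(source, str(py_path), 'exec'), {'__name__': '__main__'})
    actual = buf.getvalue()
    if actual != expected:
        raise AssertionError(
            '%s: output mismatch\n--- expected ---\n%s--- actual ---\n%s'
            % (py_path, expected, actual))
    return True
```

A runner like this keeps the documentation-testing benefit of the current doctest format while letting the .py files be copied and pasted directly.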