How to set up a format-test workflow?

I want to set up a workflow that watches for file changes and then 1) formats the changed files using cfformat, and after that completes, 2) runs the project’s tests from the CLI. Ideally I’d still be able to use the testbox command interactively to select which tests to run.

This is a pretty normal workflow, but I haven’t been able to find examples of how to do that.

I already have the tests running from the CLI using box testbox watch ..., but I haven’t been able to get box cfformat watch ... working. I can run it manually as box cfformat run ... in a separate terminal window, but then my tests run twice, once on save, and then once on manual format.

Does anyone have an example of a workflow like this successfully set up? I tried creating a box.json script with the watch command, but I couldn’t get it working.

Seth, while I have no specific experience with that, can you first clarify what happens when you try it? You say you can’t get it working as a box command. Does it fail with an error? Does it just fail to do what you expect?

Second, I notice that the docs for that cfformat ForgeBox module indicate:

If your watch command seems slow, unresponsive, or is failing to notice some file change events, it is likely that you have it watching too many files.

Could that be the issue? If so, it also offers suggestions. If it’s NOT the issue, I look forward to your reply on my first questions above.

Or perhaps others here will have a more direct and clear answer for you.


Charlie,

Thank you; I wasn’t especially clear about that. The docs show this example for the CLI:

Since escaping meta characters can get tricky with nested strings, you can declare the command as an environment variable and then just reference it like so:

set command = "echo 'You added \${item}!'"
watch command="foreach '\${watcher_added}' \${command}" --verbose

It doesn’t show how to do that in a box.json script, but I tried adapting it like so…

scripts={
  "watch": "watch command='foreach ${watcher_added} \"echo ${item}\"''"
}

That was as far as I got and it just prints “echo” out.

I might be able to create a workflow with the pre and post hooks for scripts (Package Scripts - CommandBox : CLI, Package Manager, REPL & More), but I haven’t been able to figure out how to execute other scripts from a watch script either.

For your second question, our repo is quite large, but I haven’t had any problems with the testbox watch command, which I believe uses the watch command internally.

OK on all that. I’ll leave the rest to others with more experience.

But as for the watch, I would just clarify that the quote I offered was specifically about the performance impact of large folders on cfformat watch, rather than (or at least rather than seeming to be) a generic discussion of any use of watch, like the testbox watch that works fine for you. Again, others with more experience can confirm either way.

Hi @sethfeldkamp -- glad to see you’re still doing CF! Sorry for the delay in response, but I just got back from vacation. Just to clarify a bit: at first I thought you were talking about setting up a CI/CD server’s “workflow”, but when you said you wanted it to be “interactive”, it sounded like you wanted something running locally. I’m still unclear whether you were expecting to interact with the terminal by supplying input via your keyboard while it was running the tests. I can’t really imagine what you may have in mind there, so you’d need to unpack that thought.

There also seems to be a chance you simply mean that if you have a FooService.cfc with a corresponding test named FooServiceTest.cfc, then modifying FooService.cfc should automatically run FooServiceTest.cfc immediately. And if two or more files were modified, you’d want each of their respective tests to run, etc…

I’m still not sure which of those options you are getting at, but I’ll address that last one since it’s an interesting use case regardless. The biggest issue with doing that is you would need some sort of convention that allows you to determine which exact test specs “cover” a given CFC in your code base. While you may or may not have a folder/naming convention, that’s certainly not anything enforced by TestBox itself. So, perhaps, for any modification to /models/XXX.cfc it can be assumed that the matching unit test is in tests/specs/XXXTest.cfc, but that’s something you’d need to decide and enforce.

As far as how you’d stick something like that together -- I am a little curious why you want to tie the formatting and the test running together, unless you’re just trying to reduce the number of watchers. FWIW, I don’t tend to use the formatting watchers while I code, in favor of a one-time formatting commit in my CI/CD workflow or a pre-commit hook. The watch command is very powerful and generic, but I think it would be SUPER difficult to make it work for your purposes just due to readability. The default example of the watch command runs a command for every file that was changed, but it would really be more efficient to do a single pass of the formatting and the test runners on all the files at once, instead of kicking off 50 testbox runs after a find/replace modifies 50 files at once. Not saying it wouldn’t work the other way; it just seems a little heavy-handed.

What would probably be much, much easier to write and manage would be to whip up a Task Runner that does this, where you can enforce your test naming conventions, aggregate the changed-file lists, and manage the watcher. You can still even wrap up the task runner in a package script if you want the easy run-script xyz shortcut to starting it.

So, if you’re wondering if this has been done before, the answer is probably not. While I’ve heard of this, I’ve never actually seen anyone who designed their test names in a way where they could effectively figure out which tests to run for a given CFC model. In our Ortus projects, we tend to rely heavily on integration testing over unit testing, which is even harder, since there’s an ambiguous many-to-many relationship between which integration tests may hit a given model or models.

I just threw some code in a test Task Runner and this is what I came up with. To test this, I ran these commands:

coldbox create app
coldbox create model myService
task create --open

and then placed the following code in my task.cfc:

component {

	function run(){
		watch()
			.paths( "**.cfc" )
			// Add excludes here as necessary
			.excludePaths( "/coldbox/", "/testbox/", "/tests/", "/task.cfc" )
			.onChange( ( files ) => {
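				// The watcher hands us a struct of arrays; we use files.added and files.changed below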
				print
					.line()
					.line( '-------------------------------------------------------------------------' )
					.line()
					.toConsole();
				
				// We care about new or changed files
				var fileChangedList = files.added.append( files.changed, true ).toList();

				// Format the files.  The watcher is smart enough to not run recursively here.
				command( "cfformat run" )
					.params( fileChangedList & "," ) // The trailing comma is just a trick to make the "cfformat run" command output a list of formatted files
					.flags( "overwrite" )
					.run( echo = true ) // Set echo=false to reduce debug output

				print
					.line()
					.line()
					.toConsole();

				// This is where your logic goes to map modified files in your app to related test bundles
				var testBundleList = fileChangedList
					.listMap( ( file ) => {
						// handlers/Main.cfc maps to tests.specs.integration.MainSpec.  Modify as desired.
						if ( file contains "handlers" ) {
							return "tests.specs.integration.#file.reReplace( "handlers[\\/](.*)\.cfc", "\1" )#Spec";
						// models/MyService.cfc maps to tests.specs.unit.MyServiceTest.  Modify as desired.
						} else if ( file contains "models" ) {
							return "tests.specs.unit.#file.reReplace( "models[\\/](.*)\.cfc", "\1" )#Test";
						} else {
							// Unmapped files are ignored
							return "";
						}
					} )
					// Ignore empty strings from the listMap() above
					.listFilter( ( file ) => file.len() );

				command( "testbox run" )
					.params( bundles = testBundleList )
					.flags( "noVerbose" )
					.run( echo = true ) // Set echo=false to reduce debug output
			} )
			.start();
	}

}

You can run it with:

task run

or wrap it up in a package script like so:

package set scripts.watchFormatTest='task run'

then you can run it like so:

run-script watchFormatTest

That should get you started, and you can modify the task runner as you see fit, which gives you way more power and readability than trying to cram all that logic into a one-liner for the watch command.

Brad,

Thanks so much for this and for the time you spent working through some ideas based on my too-sparse question. It will get me a lot closer.

Yes, I’m back to doing ColdFusion for now and I hope to continue for a while. I miss some of the safety and tooling from TypeScript, but ColdFusion still feels pretty nice to me.

To answer a few of your questions…

it sounded like you wanted something running locally

Yes, locally. I’d like to get CI/CD set up someday, but we just aren’t there yet. I’m writing unit tests that mock data loading, mostly as a safety net for refactoring some really old, gnarly code.

I’m unclear if you were expecting to specifically interact with the terminal by supplying input via your keyboard while it was running the tests?

box testbox watch has a really nice interactive prompt where you can filter and navigate the tests that you want to run there on the command line. I have configured it (in box.json) to watch the entire project for changes, but then I can target only certain bundles (or tests within bundles) to run when any changes are detected. I think I prefer this over tightly coupling the test with the system-under-test by some naming convention. It allows for composition and delegation patterns that can be easily tested.

I’ve configured a test runner that’s optimized for outputting in the terminal (only showing failed tests for example).
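For reference, the relevant testbox block in my box.json looks roughly like this (the runner URL and bundle list here are placeholders, and the exact keys supported may vary by CommandBox version):

"testbox" : {
  "runner" : "http://127.0.0.1:18180/tests/runner.cfm",
  "bundles" : "tests.specs",
  "verbose" : false,
  "watchPaths" : "**.cfc"
}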

I am a little curious why you want to tie the formatting and the test running together unless you’re just trying to reduce the number of watchers.

Benefits…

  • I can be a little sloppy about formatting, which lets me move faster and focus on code intention
  • tests won’t run twice (once on my save, once again on save after being automatically formatted)
  • comfort. I miss this workflow specifically from TypeScript, where files are formatted (with Prettier) and then tested automatically. That’s a benefit of a compiled language with module loading. Maybe it’s not possible in CF.

I’m fine with manually formatting if I can’t get the task solution working; I was mostly just asking whether or not someone else had already implemented something like this that I could look at.

Can you show me what you’re talking about? Because I built that command and I have no idea what you’re talking about, lol! Perhaps we have a different definition of “interactive”. When I say a command is interactive, I don’t just mean it outputs to the console; I mean it pauses execution to ask the user questions, then waits until they reply to continue processing based on their answer. The testbox watch command simply fires testbox run for you when files have changed on disk, but it doesn’t stop to ask you any questions. And while it can be configured via CLI args or the box.json, that’s really a one-time config that applies to all runs, not something you’re punching in on every run of the tests.

Right, you can provide a list of bundles when starting the testbox watcher as a CLI arg, or you can set the default bundles in your box.json, but it’s set until you stop the watcher and start it again. You aren’t telling it what to run every time you’ve changed a file.
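For example, something like this (the bundle name here is just an illustration):

testbox watch bundles=tests.specs.unit.MyServiceTest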

I guess I’m back to not understanding quite what you’re trying to get to happen then. Are you trying to get a watcher that is “smart” enough to know what tests to run when a given source file is modified, or are you just wanting to pick a bundle that you’re working on that day, and any modifications will just run that one bundle every time? Because if it’s the latter, testbox watch should give you what you need. If it’s the former, then you have to have some coupling between which source files map to which tests!

Configuring a custom runner just to hide passing tests was unnecessary, by the way. The testbox run command already has a verbose flag which, when set to false, will only show failing tests. Just pass --noVerbose to the command like I did in my example above.

A normal (verbose) test output is:

Executing tests http://127.0.0.1:18180/tests/runner.cfm?&recurse=true&reporter=json&bundles=tests.specs.integration.MainSpec&verbose=false please wait...

√ tests.specs.integration.MainSpec (386 ms)
[Passed: 9] [Failed: 0] [Errors: 0] [Skipped: 0] [Suites/Specs: 3/9]

    √ Main Handler
        √ can render the homepage (66 ms)
        √ can render some restful data (25 ms)
        √ can do a relocation (33 ms)
        √ can startup executable code (18 ms)
        √ can handle exceptions (24 ms)
        √ Request Events
            √ fires on start (21 ms)
            √ fires on end (18 ms)
        √ Session Events
            √ fires on start (19 ms)
            √ fires on end (17 ms)

√ tests.specs.integration.MainSpec (301 ms)
[Passed: 9] [Failed: 0] [Errors: 0] [Skipped: 0] [Suites/Specs: 3/9]

    √ Main Handler
        √ can render the homepage (37 ms)
        √ can render some restful data (26 ms)
        √ can do a relocation (45 ms)
        √ can startup executable code (29 ms)
        √ can handle exceptions (25 ms)
        √ Request Events
            √ fires on start (17 ms)
            √ fires on end (16 ms)
        √ Session Events
            √ fires on start (16 ms)
            √ fires on end (16 ms)

- tests.specs.unit.myServiceTest (7 ms)
[Passed: 0] [Failed: 0] [Errors: 0] [Skipped: 0] [Suites/Specs: 1/0]

    √ myService Suite

╔═════════════════════════════════════════════════════════════════════╗
║ Passed  ║ Failed  ║ Errored ║ Skipped ║ Bundles ║ Suites  ║ Specs   ║
╠═════════════════════════════════════════════════════════════════════╣
║ 18      ║ 0       ║ 0       ║ 0       ║ 3       ║ 7       ║ 18      ║
╚═════════════════════════════════════════════════════════════════════╝

TestBox         v4.5.0
CFML Engine     Lucee v5.3.9.141
Duration        777ms
Labels          ---

while a non-verbose output will just be this:

Executing tests http://127.0.0.1:18180/tests/runner.cfm?&recurse=true&reporter=json&bundles=tests.specs.integration.MainSpec&verbose=false please wait...

╔═════════════════════════════════════════════════════════════════════╗
║ Passed  ║ Failed  ║ Errored ║ Skipped ║ Bundles ║ Suites  ║ Specs   ║
╠═════════════════════════════════════════════════════════════════════╣
║ 18      ║ 0       ║ 0       ║ 0       ║ 3       ║ 7       ║ 18      ║
╚═════════════════════════════════════════════════════════════════════╝

TestBox         v4.5.0
CFML Engine     Lucee v5.3.9.141
Duration        777ms
Labels          ---

Setting testbox.verbose in your box.json should also work, but it seems there was a bug there which I just fixed (COMMANDBOX-1487).
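That is, something like:

package set testbox.verbose=false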

Your first point is a great reason to use automatic code formatting in general, but it really doesn’t support why you’d want to use the same watcher as the tests per se.

The second point (tests not running twice) is a really good reason 👍 🙂

Again, the comfort point is a great reason to use automatic code formatting in general, but it really doesn’t support why you’d want to use the same watcher per se.

Not sure I follow -- what exactly is the benefit of a compiled language here? Surely you don’t mean that automatic code formatting and/or automatic test running are a benefit of a compiled language, since we clearly have those in CF.

What exactly isn’t possible? Again, not following there.

I am very confused. I apologize. I think I’m remembering jest cli. Sorry for the wasted time.

No worries at all and not a waste of time! I’ve been wanting to play around with this idea for a while so I enjoyed looking into it.

I’m not familiar with the Jest CLI other than having heard of it. Taking a quick look at it, the jest --watch command has some cool options for changing the tests it will run on the fly that we could easily build into CommandBox.

  • You can press p to enter a mode where you can supply a regex to match test file names to run (that would be the equivalent of bundles in TestBox)
  • You can press t to enter a mode where you can supply a regex to match actual test names (that would be the equivalent of spec names in TestBox)
  • The default behavior of the yarn jest --watch command is to only run tests that have been modified or tests that appear to be “related” to source files that changed (more on that below)
  • You can press f to re-run only tests that failed on the last run.

Jest in general also has these options, outside of its watcher:

  • The --onlyChanged flag will look at the local repo and see what uncommitted files are modified and find related tests
  • The --changedSince flag will look at the local repo and find all files that have changed since a given branch or commit hash and find their related tests

And finally, the real crux of what we were trying to decide above -- the thing that makes all the Jest magic work -- appears to be this option:

jest --findRelatedTests <spaceSeparatedListOfSourceFiles>

And here is how that works:

So it turns out it has nothing to do with TypeScript, or even with being a compiled language; it’s just a clever recursive scan of the static source code to trace the hierarchy of require() calls. It would be possible to build some equivalent in CFML, but there are a lot of caveats. For starters, the require() mechanism is basically the lowest common denominator in JS and has no real conventions -- the full path to the JS file is basically passed right in, so it’s very easy to track. In CFML, a given .cfm or .cfc file could be referenced as:

  • <cfinclude>
  • <cfinvoke>
  • createObject()
  • new foo.bar()
  • <cfmodule> call
  • <cf_foo> custom tag call
  • Class inheritance (extends='com.foo.bar')

And then you get into a framework like ColdBox, and you add the following to that list:

  • A view convention
  • A handler convention
  • pre/post/around AOP handler conventions
  • WireBox AOP advices
  • interceptor conventions
  • WireBox getInstance( 'arbitrary-mapping-id' )
  • WireBox property inject='arbitrary-mapping-id';

So yeah -- it may be pretty difficult to analyze the source code of a ColdBox app and guess which tests may run models/myService.cfc unless we start making a few assumptions.
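Just to make the idea concrete, here’s a very naive sketch of what a CFML “findRelatedTests” could look like as a Task Runner. It only catches literal dot-path references in the test sources (new models.myService(), extends, getInstance( 'models.myService' ), etc.), and all the convention-based wiring listed above is invisible to it. The folder layout and naming here are assumptions:

component {

	function run( changedFile = "models/myService.cfc" ){
		// models/myService.cfc -> models.myService
		var dotPath = changedFile
			.reReplaceNoCase( "\.cfc$", "" )
			.replace( "/", ".", "all" );

		// Grep every test spec for a literal mention of the dot path
		directoryList( resolvePath( "tests/specs" ), true, "path", "*.cfc" )
			.filter( ( testFile ) => fileRead( testFile ).findNoCase( dotPath ) > 0 )
			.each( ( testFile ) => print.line( "Possibly related test: #testFile#" ).toConsole() );
	}

}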

Anyway, that’s no reason not to try. I think the biggest issues are

  • What would the performance of this source code scan be?
  • Would it have a high enough percentage of being correct for people to use it?

So really, there are two fronts of very interesting new stuff we could add to the CommandBox CLI’s TestBox commands:

  • The interactive mode where the user can press keys to filter tests. This really wouldn’t be too hard at all, and doing basic regex-based filters of bundles and specs is fairly straightforward (a rough sketch follows this list). One of the big missing pieces is a --listtests option to allow the CLI to discover what tests and bundles are available without running them.
  • The idea of automatically mapping a list of CF source files back to a list of tests that run them.
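Here’s a minimal sketch of that interactive piece as a Task Runner, assuming we already had a way to discover the available bundles (which is the missing --listtests piece, so the hard-coded list below is a stand-in):

component {

	function run(){
		// Stand-in for real bundle discovery (the missing --listtests piece)
		var allBundles = [
			"tests.specs.integration.MainSpec",
			"tests.specs.unit.MyServiceTest"
		];

		// ask() pauses and waits for keyboard input -- the truly "interactive" part
		var pattern = ask( "Filter bundles (regex): " );

		var bundlesToRun = allBundles.filter( ( b ) => reFindNoCase( pattern, b ) > 0 );

		command( "testbox run" )
			.params( bundles = bundlesToRun.toList() )
			.flags( "noVerbose" )
			.run();
	}

}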

@lmajano, what are your thoughts on all this fun stuff? I think Seth has brought up some compelling features we don’t have in CFML land right now.


Thanks Brad. This is a really well-researched answer. I’d love to see the Jest interactive mode make it into the CommandBox TestBox integration. Even if you weren’t able to implement --findRelatedTests, that would still let me target the tests that I know are related to the code I changed. That’s good enough (for me anyway).

As is, I work around this just by restarting the watcher when I want to change which test bundles run. It’s not really feasible to run every test every time a file is saved, which I guess is why Jest has --findRelatedTests.

Thanks for the notes about the ‘verbose’ option. Looking forward to the 5.6.0 release.