Some Lucee admin JS URLs failing

I have a new install of CommandBox on Ubuntu, using AJP to serve files through Apache. When I load the Lucee admin in the web browser, 4 of the scripts and one image served via .cfm extensions fail to load with an HTTP status code of 503. This causes the overview page to not load its metrics at all. I can browse around the admin, but some functions also return similar errors (e.g. trying to add a datasource). If I use a text-based browser on the server to reach the Lucee admin directly on 127.0.0.1:8080, I can browse and successfully add datasources, etc. This also affects other .cfm scripts in similar ways.
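
For reference, the Apache side is a plain mod_proxy_ajp setup; a minimal sketch of the vhost involved looks roughly like the following (the AJP port 8009 and the ServerName are placeholders rather than my exact config, and the SSL directives are omitted):

    # Rough sketch of the Apache vhost: everything is proxied to CommandBox over AJP.
    # Adjust ServerName and the AJP host/port to match your server.json.
    <VirtualHost *:443>
        ServerName mydomain.com
        ProxyPreserveHost On
        ProxyPass        "/" "ajp://127.0.0.1:8009/"
        ProxyPassReverse "/" "ajp://127.0.0.1:8009/"
    </VirtualHost>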

Can someone help point me in the right direction for a resolution?

Snippet from the CommandBox log (with DEBUG and TRACE enabled):

[TRACE] io.undertow.request: Created GZIP response conduit for HttpServerExchange{ GET /lucee/res/js/base.min.js.cfm}
[TRACE] io.undertow.server.HttpServerExchange: Starting to write response for HttpServerExchange{ GET /lucee/res/js/base.min.js.cfm}
[DEBUG] runwar.context: The response channel was closed prematurely.  Request path: /lucee/res/js/base.min.js.cfm status-code: 200
[DEBUG] io.undertow.request.io: UT005013: An IOException occurred
java.nio.channels.ClosedChannelException: null
        at io.undertow.conduits.DeflatingStreamSinkConduit.write(DeflatingStreamSinkConduit.java:116) ~[runwar-4.7.16.jar:4.7.16]
        at org.xnio.conduits.Conduits.writeFinalBasic(Conduits.java:132) ~[runwar-4.7.16.jar:4.7.16]
        at io.undertow.conduits.DeflatingStreamSinkConduit.writeFinal(DeflatingStreamSinkConduit.java:174) ~[runwar-4.7.16.jar:4.7.16]
        at org.xnio.conduits.ConduitStreamSinkChannel.writeFinal(ConduitStreamSinkChannel.java:104) ~[runwar-4.7.16.jar:4.7.16]
        at io.undertow.channels.DetachableStreamSinkChannel.writeFinal(DetachableStreamSinkChannel.java:195) ~[runwar-4.7.16.jar:4.7.16]

Apache error log:

[Mon Jan 23 08:32:54.401881 2023] [proxy_ajp:error] [pid 1440666:tid 139845754992384] [client X.X.X.X:58246] AH00992: ajp_read_header: ajp_ilink_receive failed, referer: https://mydomain.com/lucee/admin/server.cfm?action=overview
[Mon Jan 23 08:32:54.401890 2023] [proxy_ajp:error] [pid 1440666:tid 139845754992384] (120007)APR does not understand this error code: [client X.X.X.X:58246] AH00893: dialog to (null) (*) failed, referer: https://mydomain.com/lucee/admin/server.cfm?action=overview

Not sure. CommandBox says the client (Apache) closed the response channel (disconnected) early. But Apache says… well, I'm not really sure what that Apache error message even means, lol. I would do some Googling on the Apache error and see what causes it. I've used Apache to proxy over AJP to CommandBox and I've never run into this before.

Thanks @bdw429s . Any chance this could be related to Undertow? Or do I need to go hunting down the Apache path? Googling hasn't turned up much so far, and what it does turn up usually mentions Tomcat.

Sure, that's possible. Apache is the one that seems to be having the issue, so I'd start by seeing if you can get any more logging out of Apache to get more details on it.
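
One easy way to do that, assuming Apache 2.4 (which supports per-module log levels), is to crank up logging for just the proxy modules in your vhost or main config, reload Apache, and hit the failing request again:

    # Raise logging for the proxy modules only; the rest of the server stays at "warn".
    LogLevel warn proxy:trace2 proxy_ajp:trace5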

I have exactly the same problem with Apache AJP + CommandBox and a ContentBox website, and so far I can't solve the issue despite several days of Googling and searching.

In my case, the issue only seems to happen when ContentBox serves images (which are redirected to /index.cfm/__media/path/to/image/jpg). The other files seem to work fine.

If I use the HTTP connector, there is no issue, so the problem seems to come from AJP, and more specifically from the connection between Apache and the backend.
Also, when I install Lucee using the Lucee installer (which comes bundled with Tomcat), I do not see this issue, which seems to indicate that it is related to CommandBox / Undertow.
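
For anyone comparing the two setups, this is roughly how I understand the AJP listener is enabled in CommandBox's server.json (the port is an assumption; using the plain HTTP connector just means leaving this block out):

    {
        "web":{
            "ajp":{
                "enable":true,
                "port":8009
            }
        }
    }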

@technomaz Did you solve your problem? Did you find any additional clues?

Any help appreciated… Thanks…

OK. I think I found something. Not exactly sure what's going on, but at least my ContentBox site now works as expected.
What got me on track is the following bug report in the Undertow bug tracker: https://issues.redhat.com/browse/UNDERTOW-998

This bug has supposedly been fixed since the Undertow 2 beta, but the behaviour described in that report resembles what I was experiencing so closely that I decided to give it a try anyway.
Now, the solution:
I examined the Undertow documentation referenced from the CommandBox “Undertow Options” documentation page and found the MAX_HEADER_SIZE option.
Setting an explicit MAX_HEADER_SIZE in my server.json did the trick for me:

    "runwar":{
        "undertowOptions":{
            "MAX_HEADER_SIZE":"102400"
        }
    }

The header size (102400 bytes, i.e. 100 KB) was chosen arbitrarily and I'm absolutely not sure it is the best choice… but it works.

Now, some food for thought for @bdw429s:

  • What would have helped me a lot in debugging this issue is an easy way to get an overview of the currently applied Undertow options in CommandBox (something similar to cfconfig show, but showing the options of the underlying Undertow server rather than the options of the CFML engine).
  • Also, it would be nice if CommandBox explicitly set the MAX_HEADER_SIZE option somewhere, so that one does not need to do it in the server.json file as I did here.
  • Other than that, thank you so much for your awesome job! CommandBox is really impressive!

Hope this helps someone else…
Good day to everyone!

@damienbkx Thanks for the research and info. The Undertow ticket you linked to seems to be more about a fix to how logging is performed, but I agree it is a pointer in the right direction. The Javadocs are rather lacking in description, which is something I've pointed out to them before:
https://issues.redhat.com/browse/UNDERTOW-2193
This page in their main docs has a bit more info on it:

https://undertow.io/undertow-docs/undertow-docs-2.0.0/index.html#common-listener-options

The maximum size of a HTTP header block, in bytes. If a client sends more data that this as part of the request header then the connection will be closed. Defaults to 50k.

My guess is that maybe Lucee has some crazy number of cookies set that are pushing the size of a header over 50k. I'm unclear whether this setting affects a single HTTP header or all of them together. And to be honest, after looking at the Undertow source code just now, I'm not even sure the setting applies just to headers: Undertow puts the value of this setting into a variable called this.maxRequestSize and checks it after reading every byte of the incoming request! Another slightly confusing thing for me is that the Undertow source code implies there SHOULD be a corresponding console log message:

UndertowLogger.REQUEST_LOGGER.requestHeaderWasTooLarge(connection.getPeerAddress(), maxRequestSize);

The text of the message would read like so:

Connection from %s terminated as request header was larger than %s

You may need to be running your server with --trace enabled, however, as Undertow is very reluctant to log per-request messages, to avoid DoS attacks with malformed requests.

I would inspect the HTTP request to see what headers are being sent and how large they are.
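
For example, something like curl -v will show you the exact request headers being sent. This is just a sketch: the URL is the one from this thread, and the Cookie value is a placeholder you would copy out of your browser's dev tools, since the cookies are the most likely thing inflating the header block:

    # Replay the failing request from the command line with verbose output (-v
    # prints the outgoing request headers). Paste the real Cookie header from
    # your browser's dev tools to reproduce the size the browser is sending.
    curl -v -o /dev/null \
        -H 'Cookie: <paste the Cookie header from dev tools here>' \
        'https://mydomain.com/lucee/admin/server.cfm?action=overview'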

There is. Just start your server with the --debug --console flags and you’ll see a section in the console output like so, which includes all Undertow options set.

[DEBUG] Runwar: UndertowOption MAX_HEADER_SIZE:102400

Note, normally none of these will appear because CommandBox lets Undertow use its default behaviors for most things.

FWIW, CFConfig reads XML files off disk, but Undertow has no config files. All config is passed in at runtime from Runwar, so to see the config, we put it in the console output as debug-level messages.

Why? Like I said, we basically run Undertow with its defaults since we figure the people at Red Hat/JBoss are pretty smart. If you need something custom or special for your app, we give you a mechanism to set it. I'm always leery of hard-coding settings for all servers just because a couple of people needed them, unless it's a very safe thing to do.

Now, that said, if we can determine that the header sizes are specifically too large on account of Lucee and that this is likely to affect other Lucee servers, we can see about changing the default. This is the first I've ever heard of the issue, though, so I don't have a good feel for whether it's widespread or perhaps related to the Apache version.

Oh, and something I just thought of-- Apache may be setting additional HTTP headers which you do not see in your browser. I would recommend increasing the max size as you have done and then enabling the request dumper which will dump out the full HTTP request information as it’s seen by Undertow.

You can put a predicate in front of that server rule to control which requests it applies to.
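
A rough sketch of what that could look like in server.json, assuming you're using CommandBox's server rules to enable the dumper (the /lucee path prefix is just an example predicate; scope it to whatever requests you want dumped):

    {
        "web":{
            "rules":[
                "path-prefix('/lucee') -> dump-request"
            ]
        }
    }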

Sorry, I had to set aside the project I was working on when I ran into this issue. That said, I put your fix in my server.json and it worked PERFECTLY! This fixed the issue. Thanks @damienbkx for your extensive research that made this happen!
