Question about Message Storage Settings #4729
-
We are using Mirth 3.9.1 and are troubleshooting an ever-growing PostgreSQL database for our Mirth setup. We have pruning running and can see that messages are being removed as a result of pruning activity. We have also confirmed that autovacuum is running on the DB itself, yet it continues to grow in size.

My question is about the message storage channel settings. These are the settings currently in use:

It had been suggested that unchecking "Filtered only" might improve matters, but some initial quick checks didn't show it having much effect. Does anyone have suggestions for optimizing these settings to make pruning more aggressive? We're just trying to attack this database size problem from multiple angles.

FYI, we have updated our pruning window to 3 days. This did result in fewer rows being present in the corresponding tables, but didn't seem to reclaim disk space as we had hoped.

Thanks,
-
Those are reasonable pruning settings, but such settings are nearly always implementation dependent. I'd start by verifying that Postgres actually is cleaning up as expected; this seems like a Postgres tuning question. Note that autovacuum only marks dead space as reusable within the table files; it won't shrink the files on disk, so reclaiming disk space typically takes a VACUUM FULL or a tool like pg_repack.

Other things that reduce DB size include using the attachment handler and reducing the number of destinations per channel. The latter matters because a copy of the message is written to mirthdb for each destination the message traverses.

A "large" DB size is also relative. What is the transaction volume? What is the data? What is the table usage per channel?
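For the attachment handler piece, a minimal sketch of a JavaScript attachment handler is below. It assumes the inbound JSON carries the base64 PDF in a field called `document` (that field name is an assumption); the idea is to store the payload once and leave an `${ATTACH:...}` token in the message body. Check the exact `addAttachment` signature against the 3.9.1 User API before using it.

```javascript
// Channel Summary -> Attachment: JavaScript.
// The script gets the raw inbound message in 'message' and should return the
// (possibly modified) message string.
var json = JSON.parse(message);

// 'document' is a hypothetical field name for the base64-encoded PDF.
if (json.document) {
    // Store the payload once as an attachment instead of letting it ride
    // along in every copy of the message content.
    var attachment = addAttachment(json.document, 'application/pdf');

    // Replace the payload with the attachment token; Mirth substitutes the
    // real content back when a destination sends, as long as "Reattach
    // Attachments" is enabled on that destination.
    json.document = '${ATTACH:' + attachment.getId() + '}';
}

return JSON.stringify(json);
```

If the downstream customer-specific channel also needs the document, test how the token behaves across that hop before relying on it; whether the content or just the token is passed along depends on the destination's attachment settings.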
-
There is one channel of particular interest that we use for exporting documents (typically in PDF format) that is growing out of control. We ran a query to show us the sizes of each of the tables. Here are the results from 9/15. The top table (d_mc4) correlates to the channel of interest.

The channel in question is set up with connector type HTTP Listener. We have a web application that posts requests here; the body of the request is JSON that includes bits of information about the patient as well as a base64-encoded field that has the document data. We refer to this as our "base" channel as it receives all requests from all the sites we support. Its job is to do some transformations on the data and then forward it to the site-specific channel for final processing.

The transformer for this channel builds an HL7 message, but also puts several pieces of information in the channel map (including the original message). Here is a small excerpt:
This channel has 3 destinations: The first and second destinations have filters. In our production environment, the first destination will always be filtered out. The second destination just forwards the message to a customer-specific channel that is customized to suit their situation. (Ignore the 'Dev Destination 1' entry below; it is disabled in production.) The 3rd destination isn't filtered and is just this:

Are there any patterns here that would be contributing to uncontrolled size growth? We are also taking a look at our PostgreSQL DB to see if there are issues there that need to be addressed. (We are using Azure PostgreSQL, by the way.)

Thanks,
-
I already mentioned that you need to use attachments. With that code you are copying your PDF at least 8 times for each inbound JSON message; the exact count depends on where that $c (channelMap) set is.
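To make the channel map point concrete, here is a sketch of a source transformer step that keeps only small values in the map. The field names are made up; the point is that the channel map is serialized into the same message content table (your d_mc4) along with each connector message, so any large value you put there gets copied again per destination:

```javascript
// Source transformer step (sketch), assuming the inbound data type is JSON so
// that 'msg' is the parsed request body. Field names are hypothetical.

// Keep only the small scalar values that later steps actually need:
channelMap.put('patientId', msg.patient.id);
channelMap.put('siteCode', msg.site);

// Avoid entries like these; the channel map rides along with each connector
// message, on top of the raw/encoded content Mirth already stores, so this is
// where extra copies of the base64 PDF come from:
// channelMap.put('originalMessage', connectorMessage.getRawData());
// channelMap.put('documentBase64', msg.document);
```

If a downstream step genuinely needs the document, referencing the attachment token rather than the base64 string keeps the payload stored only once.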