There's a design pattern often used in healthcare, and in other enterprise integrations with legacy systems, that involves a nightly update. What typically happens is that a day's transactions get aggregated into a single big file (by taking a dump of the day's work -- which is why it's called the nightly dump). Then this single big file is sent off somewhere else to be processed.
It's a convenient design. It doesn't have to deal with transactions or locks ... the day's work is already done and nobody is using the transactional system. It won't impact your users, because the day's work is already done and nobody is ...
Batches of stuff can be processed very quickly because the day's work is ...
You get the picture.
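To make the pattern concrete, here's a minimal sketch of what such a nightly export might look like -- a hypothetical transactions table dumped to one flat file per day. The table, columns, and file layout are made up for illustration, not taken from any real system.

```python
import csv
import sqlite3
from datetime import date, datetime, time, timedelta

def export_nightly_dump(db_path: str, out_dir: str) -> str:
    """Aggregate yesterday's transactions into one big file for transfer."""
    day = date.today() - timedelta(days=1)
    start = datetime.combine(day, time.min).isoformat()
    end = datetime.combine(day, time.max).isoformat()

    # Read the whole day's work in one pass -- nobody is using the
    # transactional system at night, so no locks to worry about.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, patient_id, amount, posted_at FROM transactions "
        "WHERE posted_at BETWEEN ? AND ?",
        (start, end),
    ).fetchall()
    conn.close()

    # One big file per day: the "dump" that gets shipped off for processing.
    out_path = f"{out_dir}/transactions_{day.isoformat()}.csv"
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "patient_id", "amount", "posted_at"])
        writer.writerows(rows)
    return out_path
```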
This design pattern is used so often, and for some reason it just feels wrong that it's so widely used. There's a heuristic being applied ... that at the end of the day, the day's work is done, which we know isn't really the case. There's always that one record or two we never got back to that still needs to be updated. Yet we often act as if the heuristic were a law of physics, only to discover to our chagrin that it is not, and that we need to account for it in our processing of the nightly dump.
Oftentimes when I see this pattern in use, I question it. The answer I typically get is "this is more efficient". And yet I wonder, is it really? Is it more efficient to wait to send something? Or is that a premature optimization? Is the delay between sending a record and subsequently finding a problem with it that needs to be corrected really the best way to handle the situation?
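For contrast, here's an equally rough sketch of the alternative: hand each transaction to the other system as soon as it's committed, instead of waiting for tonight's file. The endpoint and payload shape are hypothetical placeholders; the point is only that a problem gets noticed while the record is still fresh, not a day or two later.

```python
import json
import urllib.request

def send_downstream(transaction: dict, endpoint: str) -> None:
    """Push a single transaction to the downstream system as it happens."""
    body = json.dumps(transaction).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The downstream system can accept, reject, or flag the record right away,
    # instead of discovering problems long after the day's work is "done".
    with urllib.request.urlopen(req) as resp:
        resp.read()
```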
One thing I can tell you: the nightly dump is interfacing, not integration. When you have to wait 24 or 48 hours for the nightly dump to take place before another system can act on what was done elsewhere, someone somewhere is bound to notice. And when they do, they are sure to start swearing. Because at that point, the nightly dump will basically feel as if someone just took a dump on them.
Keith
P.S. In case you were wondering, yeah, I just got dumped on.