At the INFORMS 2011 annual meeting, which just concluded (I had a great time, by the way), I attended a talk on improving the accuracy of optimization solvers. It was a well-attended talk, and the method was interesting. The method was presented in the context of LPs and MIPs. The speaker suggested that, once an initial optimal solution has been found, the problem be reformulated as follows (if I have got this right):
- Scale the problem so that the discrepancy moves close to the decimal point. E.g., if the discrepancy shows up at the sixth decimal place, rescale so that it appears at the first or second decimal place.
- Keep only the active or near-active constraints (the constraint set can be filtered using the shadow price of each constraint).
- Solve the reduced problem, warm-starting from the previous solution. Iterate this scheme a few times.
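The scheme above can be sketched in a few lines. This is purely my own illustration, not the presenters' code: it uses SciPy's `linprog`, judges "near-active" by constraint slack rather than shadow prices, the scaling step is a no-op because the toy data are already well scaled, and SciPy does not expose warm starts, so the final step is just a plain re-solve of the reduced problem.

```python
import numpy as np
from scipy.optimize import linprog

# A small LP: minimize c @ x  subject to  A_ub @ x <= b_ub,  x >= 0.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0],
                 [4.0, 1.0]])
b_ub = np.array([4.0, 6.0, 12.0])

# Initial solve.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)

# Step 1 (scaling) is trivial here since the data are already O(1);
# for badly scaled data one would rescale rows/columns of A_ub first.

# Step 2: keep only constraints that are active or nearly active.
# Here I use the slack values; one could equally use the dual values.
tol = 1e-6
active = res.slack < tol

# Step 3: re-solve the reduced problem. A true warm start would reuse
# the previous basis, which scipy's interface does not expose.
res2 = linprog(c, A_ub=A_ub[active], b_ub=b_ub[active],
               bounds=[(0, None)] * 2)

print(res.x, res2.x)  # the reduced problem reproduces the same optimum
```

In this toy example the first constraint has positive slack at the optimum, so the reduced problem drops it and still recovers the same solution; with a solver that supports basis warm starts (CPLEX, Gurobi, etc.) the re-solve would be nearly free.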
Now, I am aware of industries where the data often lie between 0 and 1 (for example, chemical processes), but shouldn't we be rescaling these problems before solving, rather than applying the above procedure after solving? Maybe I am missing something here!
Good point about the accuracy of the input data -- not to mention that, in many models, enough assumptions have been made that the model is only a very approximate representation of reality. That said, there are some models that are numerically unstable (in some cases due to poor scaling), so that grossly incorrect "solutions" can be reported by the software. Perhaps that's what the presenters had in mind?
@Paul: Agreed. I hope the presenter had something similar to what you are saying in mind; otherwise, chasing that level of accuracy definitely seems futile.