[LLM + Tree Speculative Decoding] [Inference Setup] Unable to Locate modelOutputQuantScale

Environment Information

Issue Description:
Users are unsure how to locate or set modelOutputQuantScale (as displayed in Netron) when running PTQ and Tree Speculative Decoding tasks.

Solution:
The toolkit automatically writes the correct modelOutputQuantScale into the generated config_<model_name>.yaml file; manually extracting or adjusting the value is unnecessary.
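For users who still want to confirm the value that the toolkit emitted, the sketch below shows one way to look the key up after the YAML has been parsed. The nesting shown is purely illustrative — the toolkit's actual config layout is not documented here, so the helper searches the parsed structure recursively rather than assuming a fixed path. (A plain dict stands in for the result of parsing config_<model_name>.yaml, e.g. with PyYAML's `yaml.safe_load`.)

```python
def find_key(node, key):
    """Depth-first search for `key` anywhere in a nested dict/list
    structure, returning the first matching value or None."""
    if isinstance(node, dict):
        if key in node:
            return node[key]
        for value in node.values():
            found = find_key(value, key)
            if found is not None:
                return found
    elif isinstance(node, list):
        for item in node:
            found = find_key(item, key)
            if found is not None:
                return found
    return None

# Illustrative stand-in for the parsed config_<model_name>.yaml.
# The real schema and nesting may differ; only the key name
# modelOutputQuantScale comes from the toolkit.
parsed_config = {
    "model": {
        "quantization": {
            "modelOutputQuantScale": 0.0123,
        }
    }
}

print(find_key(parsed_config, "modelOutputQuantScale"))
```

Since the toolkit already supplies the correct value, this is only a convenience for inspection, not a required step.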

This solution was tested and verified on July 12th, 2025.