The evil C++ -- vsprintf, UNICODE, and garbled output strings

Until now I had always believed that building a program with or without the UNICODE flag only affected the programmatic side and would not change the system's behavior. The problem arose from mixing multi-byte and Unicode string facilities in a UNICODE build: when UNICODE is the base setting, any call to a multi-byte function may exhibit unpredictable behavior. Consider the following example:

// assume UNICODE is defined, but we still need to accept multi-byte strings

void Log(const char * InFmt, ...)
{
    char buffer[256];

    va_list valist;
    va_start(valist, InFmt);
    vsprintf(buffer, InFmt, valist);   // multi-byte formatting under a UNICODE build
    va_end(valist);

    // MULTIBYTE_TO_UNICODE is the narrow-to-wide conversion helper
    wprintf(L"%ls", MULTIBYTE_TO_UNICODE(buffer));
}

The resulting 'buffer' may not contain the string you expect. The solution is to first convert every string to its wide (Unicode) form, and only then use the wide-character functions to process it, for example:

// assume UNICODE is defined, but the caller still passes a multi-byte format string

void Log(const char * InFmt, ...)
{
    // we are working under UNICODE, so first turn every string value into its wide form
    wchar_t buffer[256];

    va_list valist;
    va_start(valist, InFmt);

    // then use the wide-character (UNICODE) functions to process the string
    vswprintf(buffer, 256, MULTIBYTE_TO_UNICODE(InFmt), valist);
    va_end(valist);

    wprintf(L"%ls", buffer);
}

