trunk/plugins/include/sqlx.inc
Code:
stock Handle:SQL_MakeStdTuple()
{
	static host[64], user[32], pass[32], db[128]
	static get_type[12], set_type[12]

	get_cvar_string("amx_sql_host", host, 63)
	get_cvar_string("amx_sql_user", user, 31)
	get_cvar_string("amx_sql_pass", pass, 31)
	get_cvar_string("amx_sql_type", set_type, 11)
	get_cvar_string("amx_sql_db", db, 127)

	SQL_GetAffinity(get_type, 12)

	if (!equali(get_type, set_type))
	{
		if (!SQL_SetAffinity(set_type))
		{
			log_amx("Failed to set affinity from %s to %s.", get_type, set_type)
		}
	}

	return SQL_MakeDbTuple(host, user, pass, db)
}
Shouldn't the following call pass sizeof(get_type) - 1 = 11 instead of 12, the way the get_cvar_string calls above do?
Code:
SQL_GetAffinity(get_type, 12)
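For comparison, here is a sketch of those buffer-size arguments written with charsmax, which expands at compile time to sizeof(arr) - 1. This assumes SQL_GetAffinity follows the same convention as get_cvar_string (maxlen = writable characters, excluding the null terminator); if it instead expects the total cell count, 12 would be correct.

Code:
// Hypothetical rewrite of the size arguments; charsmax(get_type) == 11 here.
static get_type[12]

get_cvar_string("amx_sql_type", get_type, charsmax(get_type))
SQL_GetAffinity(get_type, charsmax(get_type))  // only if the native excludes the terminator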
Is there any performance disadvantage to using sizeof expressions in stocks like this? It seems like it would be better, or at least safer, practice to have declaration-relative maximum lengths filled in automatically by the compiler.
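As far as I know, sizeof in Pawn is resolved at compile time, so there is no runtime cost, and the compiler can already do something close to the automatic sizing described above: sizeof may appear as a default argument value that is evaluated per call site against the array the caller actually passes. A minimal sketch, using a hypothetical copy_string stock:

Code:
// maxlen defaults to the size of whatever array the caller passes in;
// the compiler substitutes the constant at each call site, so there is
// no runtime overhead compared with writing the number by hand.
stock copy_string(dest[], const src[], maxlen = sizeof dest)
{
	// copy() takes the character count excluding the null terminator
	return copy(dest, maxlen - 1, src)
}

stock example()
{
	new name[32]
	copy_string(name, "player")  // compiler fills in maxlen = 32
}

The one thing sizeof cannot do is size an array that was itself passed into the current function as an unsized parameter, which is presumably why SQL_MakeStdTuple hard-codes the lengths of its own static buffers.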