Why do you want the flag position to be a compile-time value?
Classically you would do something like this:
enum {
    A_INDEX = 0,
    B_INDEX = 1,
    C_INDEX = 2,
};

enum {
    A = 1 << A_INDEX,
    B = 1 << B_INDEX,
    C = 1 << C_INDEX,
};
Then you can do flags |= value << A_INDEX;
You can also define #define FLAG(x) (1 << (x)) and write FLAG(A_INDEX) instead of using the preshifted enum values.
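To make those pieces concrete, here is a minimal, self-contained sketch of that classic pattern; the flags and value variables and the printf check are just illustration, not taken from your code:

#include <stdio.h>

#define FLAG(x) (1 << (x))

enum { A_INDEX = 0, B_INDEX = 1, C_INDEX = 2 };
enum { A = FLAG(A_INDEX), B = FLAG(B_INDEX), C = FLAG(C_INDEX) };

int main(void)
{
    unsigned flags = 0;
    unsigned value = 1;            /* some 0/1 value known only at runtime */

    flags |= value << B_INDEX;     /* set flag B from a 0/1 value */
    flags |= A;                    /* set flag A unconditionally */

    printf("A: %d  B: %d  C: %d\n",
           (flags & A) != 0, (flags & B) != 0, (flags & C) != 0);
    return 0;
}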
But on modern compilers you can simply do flags |= value * A;
The compiler will optimize the multiplication by a constant power of two into the same shift, so no actual multiply instruction is involved.
Also, what exactly is value? If it is a boolean meaning "set flag A or not", then you can simply write flags |= (value ? A : 0) and not worry about bit positions or shifts at all.
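As a small side-by-side sketch of those last two forms (set_flag_mul and set_flag_ternary are hypothetical names, purely for illustration):

#include <stdio.h>

enum { A = 1 << 0, B = 1 << 1, C = 1 << 2 };

static unsigned set_flag_mul(unsigned flags, int value)    /* value must be 0 or 1 */
{
    /* A is a constant power of two, so the compiler turns the multiply into a shift. */
    return flags | value * A;
}

static unsigned set_flag_ternary(unsigned flags, int value)
{
    /* No bit positions or shifts spelled out at all. */
    return flags | (value ? A : 0);
}

int main(void)
{
    printf("%u %u\n", set_flag_mul(0, 1), set_flag_ternary(0, 1));   /* prints "1 1" */
    return 0;
}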
And if you're fine with the bit position not being a compile-time value, you can calculate it from the flag value with a very simple operation:
#include <assert.h>
#include <intrin.h>   /* _BitScanForward is an MSVC intrinsic */

int GetFlagPos(MyFlags flag)
{
    assert(flag != 0);              /* result of _BitScanForward is undefined for 0 */
    unsigned long index;
    _BitScanForward(&index, flag);  /* index of the lowest set bit */
    return (int)index;
}
Then you can do value *= myTable[GetFlagPos(C)];
That'll cost just one extra simple operation.
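Putting it together (MSVC-only because of the intrinsic; MyFlags, myTable, and the concrete values are placeholder assumptions just so the snippet compiles):

#include <assert.h>
#include <intrin.h>
#include <stdio.h>

typedef enum { A = 1 << 0, B = 1 << 1, C = 1 << 2 } MyFlags;   /* assumed definition */

static int GetFlagPos(MyFlags flag)
{
    assert(flag != 0);
    unsigned long index;
    _BitScanForward(&index, flag);
    return (int)index;
}

int main(void)
{
    int myTable[] = { 10, 20, 30 };     /* placeholder data, one entry per flag */
    int value = 2;

    value *= myTable[GetFlagPos(C)];    /* GetFlagPos(C) == 2, so value becomes 60 */
    printf("%d\n", value);
    return 0;
}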