Why do we usually store floating-point numbers in normalized form? What is the advantage of using a bias as opposed to adding a sign bit to the exponent? Can you think of any situation, from a programming perspective, where you would use a floating-point data type to represent integers?
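
As a point of reference for the last part, here is a minimal sketch (an illustration only, not part of the original question; all values are made up) of one such situation: an IEEE 754 double represents every integer exactly up to 2^53, so a double is sometimes used to hold integer counts that would overflow a 32-bit int or that feed directly into floating-point arithmetic.

```c
/* Illustrative sketch: integers stored exactly in a double. */
#include <stdio.h>

int main(void) {
    /* 2^40 overflows a 32-bit int but is held exactly in a double,
       since it needs far fewer than the 53 significand bits available. */
    double big_count = 1099511627776.0;      /* 2^40, exact */

    /* Integer counts mixed directly into floating-point arithmetic:
       no separate conversion step, no intermediate integer overflow. */
    double samples = 0.0;
    for (int i = 0; i < 1000; i++) {
        samples += 1.0;                      /* small integers are exact in a double */
    }
    double average = big_count / samples;

    printf("count   = %.0f\n", big_count);
    printf("samples = %.0f\n", samples);
    printf("average = %.3f\n", average);

    /* The trick has a hard limit: beyond 2^53, consecutive integers
       are no longer distinguishable as doubles. */
    double limit = 9007199254740992.0;       /* 2^53 */
    printf("2^53 == 2^53 + 1 as doubles? %s\n",
           (limit == limit + 1.0) ? "yes" : "no");
    return 0;
}
```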