In Java an int is stored in 32 bits, a float is 32 bits, and a double is 64 bits.
To see whether an int will lose precision when stored in a float or a double, you need to know how those values are represented internally.
A Java float is a 32-bit single-precision IEEE 754 floating point number.
A Java double is a 64-bit double-precision IEEE 754 floating point number.
A floating point number is represented as a sign, a mantissa, and an exponent; in IEEE 754 the base is 2:
sign * mantissa * 2 ^ exponent
In a Java float, the 32 bits are used as follows:
sign exponent mantissa
1 8 23
In a Java double, the 64 bits are used as follows:
sign exponent mantissa
1 11 52
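You can inspect these fields directly: `Float.floatToIntBits` returns the raw 32-bit layout of a float, and shifting and masking pulls out the three fields. A minimal sketch (the value -6.25f is just an arbitrary example):

```java
public class FloatBits {
    public static void main(String[] args) {
        // Raw IEEE 754 bit pattern of an example float
        int bits = Float.floatToIntBits(-6.25f);

        int sign     = bits >>> 31;          // 1 bit
        int exponent = (bits >>> 23) & 0xFF; // 8 bits, biased by 127
        int mantissa = bits & 0x7FFFFF;      // 23 bits, implicit leading 1 not stored

        System.out.printf("sign=%d exponent=%d mantissa=0x%06X%n",
                sign, exponent, mantissa);
    }
}
```

For -6.25f this prints sign=1, exponent=129 (i.e. 2^(129-127) = 4), and mantissa=0x480000 (the fraction 0.5625, so 1.5625 * 4 = 6.25). `Double.doubleToLongBits` works the same way for the 64-bit layout.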
Therefore a Java int can fit all of its 32 bits into the 52-bit mantissa of a double, so every int value can be represented in a double with no loss of precision.
A Java int cannot fit all 32 bits into the 23-bit mantissa of a float (24 significant bits counting the implicit leading 1), so not every int can be represented exactly: any int whose magnitude exceeds 2^24 may lose precision when stored in a float.
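This is easy to demonstrate with 16777217 (2^24 + 1), the smallest positive int that does not survive a round trip through a float:

```java
public class IntPrecision {
    public static void main(String[] args) {
        int i = 16777217; // 2^24 + 1: needs 25 significant bits

        // double: 52-bit mantissa holds every int exactly
        double d = i;
        System.out.println((int) d == i); // true

        // float: only 24 significant bits, so the value is rounded
        float f = i;
        System.out.println((int) f); // 16777216, the nearest representable float
    }
}
```

The compiler will not warn about the int-to-float conversion because it is a widening conversion in the language spec, even though it can lose precision.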