First, there is no official standard for byte order when transmitting IEEE 754 floating point data over the wire. Java's DataOutputStream defaults to big-endian format (consistent with everything else in Java being big-endian), while C#'s BinaryWriter defaults to host byte order (always little-endian in practice, as the Mono folks have learned). First point of fun.
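To make the difference concrete, here is a minimal sketch on the Java side (the class and variable names are mine, not from any library) showing that DataOutputStream writes a double most-significant byte first:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class EndianDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeDouble(1.0); // 1.0 is 0x3FF0000000000000 in IEEE 754
        byte[] bytes = buf.toByteArray();
        // Big-endian: the most significant byte comes out first.
        System.out.printf("%02X %02X%n", bytes[0] & 0xFF, bytes[1] & 0xFF);
        // prints "3F F0"
    }
}
```

C#'s BinaryWriter.Write(double) on a typical little-endian host emits the same eight bytes in the reverse order, ending with F0 3F.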
Secondly, the IEEE 754 floating point representation defines an entire range of bit patterns that represent NaN, not a single value. Java keeps the wire format byte-compatible by always emitting a single canonical value for every NaN (one where all the meaningless payload bits are set to 0), while C# allows whatever cruft happens to be in the value on the CPU to flow through to your binary representation. And don't assume in C# that double.NaN has all those bits set to 0. It doesn't. In practice, double.NaN in C# is full of cruft (its sign bit is set, for a start).
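The Java side of this is easy to see by comparing doubleToLongBits with doubleToRawLongBits; a small sketch (the class name and payload value are made up for illustration):

```java
public class NanBitsDemo {
    public static void main(String[] args) {
        // A quiet NaN with deliberate payload cruft in the low mantissa bits.
        double crufty = Double.longBitsToDouble(0x7FF800000000BEEFL);

        // doubleToLongBits collapses every NaN to the one canonical pattern,
        // and DataOutputStream.writeDouble routes through it.
        System.out.printf("%016X%n", Double.doubleToLongBits(crufty));
        // prints "7FF8000000000000"

        // doubleToRawLongBits preserves whatever bits are actually there
        // (the payload survives for quiet NaNs on mainstream JVMs).
        System.out.printf("%016X%n", Double.doubleToRawLongBits(crufty));
    }
}
```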
This is fine if you read in the value and call IsNaN on it, but not so great if you want to check that your serialized/deserialized byte arrays match. For that, you need to mask the bits out to ensure that you're always writing a canonical representation of your NaN values.
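The NaN test on raw bits is just "exponent all ones, mantissa nonzero"; in Java terms the masking looks roughly like this (the helper name is mine, assumed for illustration):

```java
public class CanonicalNaN {
    // IEEE 754 double layout: 1 sign bit, 11 exponent bits, 52 mantissa bits.
    private static final long EXPONENT_MASK = 0x7FF0000000000000L;
    private static final long MANTISSA_MASK = 0x000FFFFFFFFFFFFFL;
    private static final long CANONICAL_NAN = 0x7FF8000000000000L; // what Java emits

    // Replace any NaN bit pattern with the single canonical one so that
    // serialized byte arrays compare equal regardless of payload cruft.
    static long canonicalBits(long rawBits) {
        boolean isNaN = (rawBits & EXPONENT_MASK) == EXPONENT_MASK
                     && (rawBits & MANTISSA_MASK) != 0;
        return isNaN ? CANONICAL_NAN : rawBits;
    }

    public static void main(String[] args) {
        long cruftyNaN = 0xFFF8000000000000L; // a crufty NaN with its sign bit set
        System.out.printf("%016X%n", canonicalBits(cruftyNaN));
        // prints "7FF8000000000000"
        System.out.printf("%016X%n", canonicalBits(Double.doubleToRawLongBits(1.5)));
        // prints "3FF8000000000000" -- non-NaN values pass through untouched
    }
}
```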
A useful C# block, if you find yourself having to deal with this stuff, is the following (using this will ensure that your binary representations are always bit-equivalent with the Java format):