Specs for audio components are usually given in Vrms (and Arms for current). Power is normally average power in watts; there's no such thing as "rms power".
The noise floor is often the lower limit on DAC performance, so the only practical way to provide higher SNR and SINAD (~THD+N) is to increase the output voltage. However, it is harder to achieve high linearity (and thus low distortion) at higher voltage, so every practical design trades noise against distortion. There are numerous other related technical trades, such as bandwidth, gain, and feedback factor, but noise vs. distortion is a biggie.
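To put a number on the voltage-for-SNR trade: with a fixed noise floor, raising the output voltage buys SNR at 20·log10 of the voltage ratio. A minimal Python sketch (the function name and example levels are mine, purely illustrative):

```python
import math

def snr_gain_db(v_new_rms, v_old_rms):
    """SNR improvement (dB) from raising output voltage over a fixed noise floor."""
    return 20 * math.log10(v_new_rms / v_old_rms)

# Doubling output from 2 Vrms to 4 Vrms over the same noise floor
# buys about 6 dB of SNR:
print(round(snr_gain_db(4.0, 2.0), 2))  # 6.02
```

Of course, the same 20·log10 scaling works against you on the distortion side if the extra swing pushes the output stage out of its linear region.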
At the system level, matching signal levels for the best end-to-end noise and distortion is desirable but somewhat complex. I have a long post on ASR about that, but I'm sure it would be unwelcome here, and it is not an easy read. Basically you want to optimize the interface at each component pair, that is, choose components and gain settings that optimize noise and distortion through the signal chain. Too low a signal leads to higher noise; too large a signal leads to higher distortion.
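As a rough sketch of that gain-staging idea, here's how each stage's input-referred noise combines at the chain output (the stage gains and noise figures are made-up numbers, just for illustration):

```python
import math

# Hypothetical chain: each stage is (voltage gain, input-referred noise in Vrms).
stages = [(1.0, 10e-6),   # e.g. a DAC output stage
          (10.0, 50e-6)]  # e.g. a power amp

def output_noise_vrms(stages):
    """RSS each stage's input-referred noise, amplified by all the gain
    from that stage to the chain output."""
    total_sq = 0.0
    for i, (_, noise) in enumerate(stages):
        downstream_gain = math.prod(g for g, _ in stages[i:])
        total_sq += (noise * downstream_gain) ** 2
    return math.sqrt(total_sq)

def chain_snr_db(signal_in_vrms, stages):
    """SNR at the chain output for a given source signal level."""
    signal_out = signal_in_vrms * math.prod(g for g, _ in stages)
    return 20 * math.log10(signal_out / output_noise_vrms(stages))

# Dropping the source level from 1 Vrms to 0.1 Vrms costs 20 dB of SNR:
print(round(chain_snr_db(1.0, stages) - chain_snr_db(0.1, stages), 1))  # 20.0
```

The noise floor is set by the components, so running the signal low through any interface directly eats into the SNR budget, which is exactly the "too low a signal leads to higher noise" half of the trade.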
Example: A DAC with 1 Vrms max output feeding an amplifier with a 10 Vrms max input level will likely not be able to drive the amp to full output. Running the DAC at its maximum output raises its distortion, and sitting at the low end of the amp's input range means higher noise at the amp's output. Amplifier SNR is typically specified at maximum output, so an amp with 100 dB SNR for a 10 V input may exhibit only 80 dB for a 1 V input. Conversely, a DAC capable of 10 Vrms output driving an amp with 1 Vrms input sensitivity means the DAC operates at the low end of its output range, increasing noise and the chances of overdriving the amp and creating high levels of distortion in the amplifier. Always trades...
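To make the 100 dB vs. 80 dB figure concrete: assuming the amp's output noise floor stays fixed regardless of input level (typical), the SNR you actually get drops by 20·log10 of the level shortfall. A quick sketch (function name is mine):

```python
import math

def snr_at_level_db(snr_full_db, v_full_rms, v_actual_rms):
    """SNR at a reduced input level, assuming the amp's noise floor is fixed."""
    return snr_full_db - 20 * math.log10(v_full_rms / v_actual_rms)

# Amp spec'd at 100 dB SNR for 10 Vrms input, driven by a 1 Vrms DAC:
print(round(snr_at_level_db(100.0, 10.0, 1.0)))  # 80
```

The 10:1 voltage shortfall costs exactly 20 dB, which is where the 80 dB in the example comes from.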
HTH - Don